Error message: This is not a chat model and thus not supported in the v1/chat/completions...

2026-01-04 16:01 Status: processing

🚨 Error Details

Error name: AI_APICallError
Error message: This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?

Stack trace:
AI_APICallError: This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?
    at file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/dist-Cz6miDbB.js:345:11
    at async postToApi (file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/dist-Cz6miDbB.js:278:24)
    at async OpenAICompatibleChatLanguageModel.doStream (file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/dist-Cz6miDbB.js:766:48)
    at async wrapStream (file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/store-C63Bp9Zw.js:38683:32)
    at async fn (file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/store-C63Bp9Zw.js:36874:17)
    at async file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/store-C63Bp9Zw.js:34261:19
    at async _retryWithExponentialBackoff (file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/store-C63Bp9Zw.js:34363:10)
    at async streamStep (file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/store-C63Bp9Zw.js:36842:108)
    at async fn (file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/store-C63Bp9Zw.js:37121:5)
    at async file:///Applications/Cherry%20Studio.app/Contents/Resources/app.asar/out/renderer/assets/store-C63Bp9Zw.js:34261:19

Error cause: "[undefined]"
Status code: 400
Request path: https://www.dmxapi.cn/v1/chat/completions

Request body:
{
  "model": "gpt-5.2-pro",
  "user": "[undefined]",
  "max_tokens": "[undefined]",
  "temperature": "[undefined]",
  "top_p": "[undefined]",
  "frequency_penalty": "[undefined]",
  "presence_penalty": "[undefined]",
  "response_format": "[undefined]",
  "stop": "[undefined]",
  "seed": "[undefined]",
  "serviceTier": "[undefined]",
  "reasoning_effort": "[undefined]",
  "verbosity": "[undefined]",
  "messages": [
    {
      "role": "user",
      "content": "我是一个网络安全从业人员,我要每个月给我的客户做一次安全巡检(设备运行巡检、安全设备告警巡检),然后根据巡检写一份巡检报告,请帮我写一个使用chatgpt5生成报告模版的提示词"
    }
  ],
  "tools": "[undefined]",
  "tool_choice": "[undefined]",
  "stream": true,
  "stream_options": { "include_usage": true }
}

Response headers:
{
  "cache-control": "no-cache",
  "content-type": "text/event-stream",
  "date": "Sun, 04 Jan 2026 08:01:33 GMT",
  "server": "nginx",
  "x-rixapi-request-id": "20260104160024698485938Ye5VCZ0P"
}

Response body:
{
  "error": {
    "message": "This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?",
    "type": "rix_api_error",
    "param": "model",
    "code": null
  }
}

🤖 AI Solution

Error Analysis Report

1. Error Message


Original: This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?
In plain terms: the model is not registered as a chat model, so the /v1/chat/completions endpoint rejects it, and the API suggests using the legacy /v1/completions endpoint instead.

2. Root-Cause Analysis


Root cause: the request went to the OpenAI-compatible chat endpoint (/v1/chat/completions), but the model "gpt-5.2-pro" is classified by this platform as a non-chat model (i.e., a legacy "completion/continuation" model).

Details:

  • Request path: https://www.dmxapi.cn/v1/chat/completions (chat endpoint)

  • Model used: "gpt-5.2-pro" (judged by the server to be a non-chat model)

  • Error location: the API returned a 400 during the call to OpenAICompatibleChatLanguageModel.doStream

  • Status code: 400 (client error, meaning the request parameters are invalid)
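
This 400 is deterministic for a given model/endpoint pairing, so a client can recognize it and suggest the fallback automatically. A minimal sketch, assuming the OpenAI-compatible error shape shown in the log above (`suggestEndpoint` is a hypothetical helper name, not part of any SDK):

```javascript
// Detect the "not a chat model" 400 and suggest the legacy endpoint.
// Assumes the { error: { message, param } } shape seen in the response body.
function suggestEndpoint(status, body) {
  if (
    status === 400 &&
    body?.error?.param === "model" &&
    /not a chat model/i.test(body.error.message ?? "")
  ) {
    return "/v1/completions";
  }
  return null; // unrelated error: let normal handling take over
}

// The exact response body from the log:
const resp = {
  error: {
    message:
      "This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?",
    type: "rix_api_error",
    param: "model",
    code: null,
  },
};
console.log(suggestEndpoint(400, resp)); // prints /v1/completions
```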
3. Solutions


According to the API's error hint, there are two possible fixes:

Option 1: Switch to a model the chat endpoint supports


If the platform offers chat models, change the model name:

javascript
// Fixed request body (assuming a switch to an available chat model)
{
  "model": "gpt-4", // or another chat model the platform supports, e.g. gpt-3.5-turbo
  "messages": [
    {
      "role": "user",
      "content": "I am a cybersecurity practitioner..."
    }
  ],
  "stream": true
  // other parameters...
}
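
Before hard-coding a replacement model, it is worth checking what the provider actually exposes: OpenAI-compatible APIs serve a model list at GET /v1/models, returning `{ "data": [{ "id": ... }, ...] }`. The sketch below works on that shape offline; the sample payload is illustrative, not real DMXAPI output:

```javascript
// Extract model ids from an OpenAI-compatible GET /v1/models response.
function listModelIds(modelsResponse) {
  return modelsResponse.data.map((m) => m.id);
}

// Hypothetical sample payload in the /v1/models shape:
const sample = { data: [{ id: "gpt-4" }, { id: "gpt-3.5-turbo" }] };
console.log(listModelIds(sample)); // prints [ 'gpt-4', 'gpt-3.5-turbo' ]
```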

Option 2: Switch to the correct endpoint


If "gpt-5.2-pro" really is the model you need, call it through the legacy completions endpoint:

javascript
// Change the API endpoint in your code
const apiUrl = 'https://www.dmxapi.cn/v1/completions'; // note: chat/completions → completions

// The request body must be adjusted to the completions-endpoint format
const requestBody = {
  "model": "gpt-5.2-pro",
  "prompt": "I am a cybersecurity practitioner; every month I run a security inspection for my clients...", // note: messages → prompt
  "max_tokens": 1000,
  "stream": true
  // other completions-endpoint parameters...
};

Key changes:

  • Endpoint: /v1/chat/completions → /v1/completions

  • Request parameter: messages array → prompt string

  • Model type: make sure the model matches the endpoint
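
The messages → prompt change can be mechanized. A common informal convention is to flatten each message as `role: content` and end with an `assistant:` cue; this layout is an assumption, not a documented DMXAPI format, so check the provider's docs for what it actually expects:

```javascript
// Flatten a chat `messages` array into a single `prompt` string for the
// legacy /v1/completions endpoint. The role-prefixed layout below is an
// illustrative convention, not a provider-specified format.
function messagesToPrompt(messages) {
  return (
    messages.map((m) => `${m.role}: ${m.content}`).join("\n") + "\nassistant:"
  );
}

const prompt = messagesToPrompt([
  { role: "user", content: "Draft a monthly security-inspection report template." },
]);
console.log(prompt);
// user: Draft a monthly security-inspection report template.
// assistant:
```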
4. Prevention

Best-practice recommendations


  • Know each model's type

    - Before using the API, check the platform docs to confirm each model's type (chat model vs. completion model)
    - Maintain a model-to-endpoint mapping table to avoid mix-ups

  • Add a type check

javascript
// Add model-type validation in your code.
// NOTE: the model lists below are assumptions for illustration;
// populate them from your provider's documentation or /v1/models output.
const chatModels = ['gpt-4', 'gpt-3.5-turbo', 'claude-3'];
const completionModels = ['gpt-5.2-pro', 'text-davinci-003'];

function getEndpointForModel(model) {
  if (chatModels.includes(model)) {
    return '/v1/chat/completions';
  } else if (completionModels.includes(model)) {
    return '/v1/completions';
  } else {
    throw new Error(`Unknown model type: ${model}`);
  }
}

// usage:
getEndpointForModel('gpt-5.2-pro'); // '/v1/completions'

Recommended tooling


  • API testing: use Postman or Insomnia to test the endpoint first and confirm which models are available

  • Error monitoring: log errors in detail and capture the full API response

  • Documentation: review the platform's API docs regularly to catch changes in model types
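
The error-monitoring advice can be as simple as a wrapper that logs the status and raw body of any non-2xx response before rethrowing. A sketch with an injectable fetch-like function (`doFetch`), so it runs without network access; all names here are illustrative:

```javascript
// Wrap a fetch-like function: log status and raw body on failure,
// parse JSON on success. `doFetch` is injected to keep this testable.
async function callApi(doFetch, url, init) {
  const res = await doFetch(url, init);
  const text = await res.text();
  if (!res.ok) {
    console.error(`API error ${res.status} at ${url}: ${text}`);
    throw new Error(`HTTP ${res.status}`);
  }
  return JSON.parse(text);
}

// Example with a stub standing in for fetch:
const stub = async () => ({
  ok: true,
  status: 200,
  text: async () => '{"model":"gpt-4"}',
});
callApi(stub, "https://example.test/v1/models").then((body) =>
  console.log(body.model) // prints gpt-4
);
```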
---

Recommendation: contact the DMXAPI platform to confirm the exact type of "gpt-5.2-pro" and which endpoints it supports, then apply the matching option above.