OpenAI API error that I'm unable to overcome

Bambarbiakirgudu

Regular Member
Nov 9, 2016
308
51
Guys, previously I solved this, but I can't find out how it was done before. The error is:
An error occurred during paraphrasing: Error code: 404 - {'error': {'message': 'This is a chat
model and not supported in the v1/completions endpoint. Did you mean to use
v1/chat/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}
I've tried almost everything. I can provide the full code if necessary.

Thanks.
 
Show the request.
From what I've seen you have to change the model you're using in your request.
 
Once I’m home and have time I can check how I do it, but as @jpeg said it looks like you’re using the wrong model for what you’re doing?

Would definitely help if you posted some code, specifically the request
 
Of course I can share it, but the code is a bit long and has too many modules. I'll try making a separate snippet.
 
You're using the wrong (legacy) endpoint. I guess you switched the model to a newer one (3.5 or 4) and then the error appeared?
You need to rewrite the whole request if you want to use a newer model. The /v1/completions endpoint is not compatible with modern OpenAI models. Switch to /v1/chat/completions, or keep the legacy endpoint but use a compatible model (gpt-3.5-turbo-instruct or davinci).
https://platform.openai.com/docs/models/model-endpoint-compatibility
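To make the two options concrete, here's a small sketch of the request shapes each endpoint expects (model names are the documented compatible ones; the actual network calls are only shown in comments, assuming the openai>=1.0 client):

```python
def completions_kwargs(prompt):
    # Legacy /v1/completions: takes a single "prompt" string and
    # needs a completion model such as gpt-3.5-turbo-instruct.
    return {"model": "gpt-3.5-turbo-instruct", "prompt": prompt}

def chat_kwargs(prompt):
    # /v1/chat/completions: chat models (gpt-3.5-turbo, gpt-4, ...)
    # take a "messages" list of role/content dicts instead.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }

# With the openai>=1.0 client you would then call one of:
#   client.completions.create(**completions_kwargs("..."))
#   client.chat.completions.create(**chat_kwargs("..."))
```

Mixing them (a chat model on /v1/completions) is exactly what produces the 404 above.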
 
I already use gpt-3.5-turbo. Can you share an example, please?
 
I don't know what language you're using. In Python it should be something like this:

old /v1/completions:
Python:
from openai import OpenAI

gptclient = OpenAI()  # reads OPENAI_API_KEY from the environment

# This combination raises the 404 above: gpt-3.5-turbo is a chat
# model, not a completions model.
completion = gptclient.completions.create(
    model="gpt-3.5-turbo",
    prompt="abcd."
)

new /v1/chat/completions:
Python:
completion = gptclient.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "abcd"}
    ]
)
 
Hmmm, I've been thinking about this for a while now and can't figure out what the issue might be either. Below I'll post my JS implementation as a comparison.
There are 2 small things I caught, but they might be nothing:

1. I don't pass 2 individual properties "messages" and "model"; I pass one object containing both. But since I don't really code Python anymore, maybe it's just language-specific?
2. I use another model. Could you try it with gpt-4o? Just out of curiosity.

JavaScript:
const chat = async (newMessageText, messageHistory = []) => {
    messageHistory.push({
        role: "user",
        content: newMessageText
    });

    const chatCompletion = await openai.chat.completions.create({
        messages: messageHistory,
        model: "gpt-4o",
    });

    messageHistory.push(chatCompletion.choices[0].message);

    return ({
        text: chatCompletion.choices[0].message.content,
        history: messageHistory
    });
}
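For comparison, a rough Python port of the same history-keeping pattern (a sketch: `client` is assumed to be an openai>=1.0 client instance, and gpt-4o is just the model from the JS example):

```python
def chat(client, new_message_text, message_history=None):
    """Append the user turn, call the chat endpoint, append the
    assistant turn, and return (reply_text, updated_history)."""
    if message_history is None:
        message_history = []  # avoid Python's mutable-default pitfall
    message_history.append({"role": "user", "content": new_message_text})

    completion = client.chat.completions.create(
        model="gpt-4o",          # placeholder model name
        messages=message_history,
    )

    reply = completion.choices[0].message
    message_history.append({"role": "assistant", "content": reply.content})
    return reply.content, message_history
```

Unlike the JS version, the history defaults to a fresh list per call rather than a shared default argument.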
 
Well buddy, I tried GPT-4 too.

A temporary solution here.

First, uninstall the newer version:
pip uninstall openai

Then downgrade to the older one:
pip install openai==0.27.0


But it's frustrating that we can't find a working solution for OpenAI v1. The official OpenAI docs looked outdated to me as well, if I'm not wrong.

Just check this out: https://platform.openai.com/docs/guides/text-generation/chat-completions-api
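Worth noting for anyone following the downgrade route: the pre-1.0 package uses module-level calls (`openai.ChatCompletion.create`) rather than a client object, so the v1-style code above won't run on openai==0.27.x either. A minimal sketch of the old request shape (the helper below just builds the kwargs; the actual call, which needs the old library and a key, is shown in comments):

```python
def legacy_chat_kwargs(user_text):
    # Request shape for openai==0.27.x's module-level chat API.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_text}],
    }

# With openai==0.27.x installed:
#   import openai
#   openai.api_key = "sk-..."  # your key
#   resp = openai.ChatCompletion.create(**legacy_chat_kwargs("Hello"))
#   print(resp["choices"][0]["message"]["content"])
```

The messages format is the same in both library versions; only the call site changed (`openai.ChatCompletion.create(...)` vs `client.chat.completions.create(...)`).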
 
Yeah, I've been staring at the documentation as well for the last 20 minutes; you're doing literally what they ask. The documentation might really be outdated here.

But checking OpenAI's GitHub, it also looks just like your implementation. They pass messages first and then the model, but since those are keyword arguments, the order shouldn't matter in Python.
You could always open an issue with them on GitHub and see if it's an actual bug in the API.
 
I've found this same error asked in the OpenAI forums, and someone answered it. Here it is: https://community.openai.com/t/open...sing-version-1-0-0-of-openai-library/663067/6

Since I've removed the newer version, I don't want to update it again for a while, but you may want to try whether that solution works or not. Please let us know the result of your experiment if you're interested.
 