This repository has been archived by the owner on Sep 16, 2024. It is now read-only.

ollama support #304

Open
ToeiRei opened this issue Jul 7, 2024 · 4 comments

Comments


ToeiRei commented Jul 7, 2024

As far as I know, ollama is working on OpenAI compatibility.

To my understanding, changing the host to point at your local ollama instance should work... (I have not tested it yet.)
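
For reference, ollama serves an OpenAI-compatible API under /v1 on its default port 11434. A minimal sketch of what "changing the host" could look like, assuming Python with the official openai client and a locally pulled model (the model name here is a placeholder):

```python
# Sketch: point an OpenAI-style client at a local ollama instance.
# Assumes ollama is running on the default port 11434 and that a
# model (e.g. "llama3") has already been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # ollama's OpenAI-compatible API
    api_key="ollama",  # required by the client, ignored by ollama
)

response = client.chat.completions.create(
    model="llama3",  # placeholder: any locally pulled model
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```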

@philidinator

Hi, I'm also interested. Did you try it out yet?


ToeiRei commented Jul 13, 2024

No, I have not tried spoofing the endpoint via /etc/hosts yet, as I run a more complex setup here.
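
For anyone considering the /etc/hosts route: the idea would be a line like `127.0.0.1 api.openai.com`, redirecting the API hostname to the local machine. Note, though, that a stock client would still connect over HTTPS on port 443, so reaching a plain-HTTP ollama instance on port 11434 would additionally require a local TLS-terminating reverse proxy in front of it.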

@philidinator

I tried setting CHATGPT_REVERSE_PROXY=http://ip:11434/v1/chat/completions, but unfortunately it didn't work. I always get this error:

```
[ERROR] [OpenAI-API Error: Error: Failed to send message. HTTP 400 - {"error":{"message":"[] is too short - 'messages'","type":"invalid_request_error","param":null,"code":null}}]
```
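
That 400 suggests the request did reach ollama, but the `messages` array in the payload was empty; ollama's OpenAI-compatible endpoint requires at least one message, so the bot appears to serialize its request in a shape the endpoint rejects. A quick way to confirm the endpoint itself works, as a sketch outside the bot (the ip and model name are placeholders for your own setup):

```python
# Sketch: send a well-formed request directly to ollama's
# OpenAI-compatible chat completions endpoint, bypassing the bot.
import json
import urllib.request

payload = {
    "model": "llama3",  # placeholder: a model you have pulled locally
    "messages": [{"role": "user", "content": "ping"}],  # must be non-empty
}
req = urllib.request.Request(
    "http://ip:11434/v1/chat/completions",  # placeholder host
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If this returns a completion while the bot still fails, the problem is in how the bot builds its request body rather than in the ollama endpoint.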


Expro commented Sep 4, 2024

Same here.
