OpenAI Proxy Server

Use this to spin up a proxy API that translates OpenAI API calls to any non-OpenAI model (e.g. Huggingface, TogetherAI, Ollama, etc.)

This works for async and streaming calls as well (see the sketch under Test it below).

Works with ALL MODELS supported by LiteLLM. To see supported providers, check out the Provider List.

Requirements
Make sure the relevant keys are set in the local .env.
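
For example, a minimal .env for the Huggingface quick start below would hold just the key used later on this page (the value is a placeholder):

# .env
HUGGINGFACE_API_KEY=my-huggingface-api-key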

Jump to tutorial

Quick Start

Call Huggingface models through your OpenAI proxy.

Start Proxy
Run this in your CLI.

$ pip install litellm
$ litellm --model huggingface/bigcode/starcoder

#INFO: Uvicorn running on http://0.0.0.0:8000

This will host a local proxy API at http://0.0.0.0:8000.

Test it

import openai

# point the client at the local proxy instead of api.openai.com
openai.api_base = "http://0.0.0.0:8000"

# the model name here is arbitrary -- the proxy routes every request
# to the model it was started with (huggingface/bigcode/starcoder above)
print(openai.ChatCompletion.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
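
Since the proxy also handles streaming and async calls, here is a minimal sketch against the same local proxy. It uses the same pre-1.0 openai-python client as above; the placeholder API key assumes the proxy isn't enforcing auth.

import asyncio
import openai

openai.api_base = "http://0.0.0.0:8000"
openai.api_key = "anything"  # placeholder -- assumes the proxy isn't checking keys

# streaming: stream=True yields chunks as the proxy relays them
for chunk in openai.ChatCompletion.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
    stream=True,
):
    print(chunk)

# async: acreate is the awaitable variant of create
async def main():
    response = await openai.ChatCompletion.acreate(
        model="test",
        messages=[{"role": "user", "content": "Hey!"}],
    )
    print(response)

asyncio.run(main())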

Other supported models:

$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1

Jump to Code

Setting API base, temperature, max tokens

$ litellm --model huggingface/bigcode/starcoder \
--api_base https://my-endpoint.huggingface.cloud \
--max_tokens 250 \
--temperature 0.5
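
These flags set defaults for every request routed through the proxy. As a sketch, you can also pass these values per request from the client; this assumes the proxy forwards request-level max_tokens and temperature to the underlying model:

import openai

openai.api_base = "http://0.0.0.0:8000"
openai.api_key = "anything"  # placeholder -- assumes no auth on the proxy

# request-level parameters; assumption: these override the CLI defaults above
response = openai.ChatCompletion.create(
    model="test",
    messages=[{"role": "user", "content": "Write a haiku about proxies."}],
    max_tokens=100,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])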

Ollama example

$ litellm --model ollama/llama2 --api_base http://localhost:11434

Tutorial - using with Aider

Aider is an AI pair programming tool that runs in your terminal.

But it only accepts OpenAI API calls.

In this tutorial we'll use Aider with WizardCoder (hosted on HF Inference Endpoints).

[NOTE]: To learn how to deploy a model on Huggingface, see the Huggingface Inference Endpoints docs.

Step 1: Install aider and litellm

$ pip install aider-chat litellm

Step 2: Spin up local proxy

Save your Huggingface API key in your local environment (you can also do this via .env).

$ export HUGGINGFACE_API_KEY=my-huggingface-api-key

Point your local proxy to your model endpoint

$ litellm \
--model huggingface/WizardLM/WizardCoder-Python-34B-V1.0 \
--api_base https://my-endpoint.huggingface.com

This will host a local proxy API at http://0.0.0.0:8000.
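
Optional: before wiring up Aider, you can sanity-check the proxy with the same client snippet from the quick start. The model name is arbitrary here too, since the proxy routes to the model it was started with:

import openai

openai.api_base = "http://0.0.0.0:8000"
openai.api_key = "anything"  # placeholder -- assumes no auth on the local proxy

print(openai.ChatCompletion.create(
    model="wizardcoder",  # arbitrary name; the proxy uses its --model setting
    messages=[{"role": "user", "content": "Write hello world in Python."}],
))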

Step 3: Replace the OpenAI API base in Aider

Aider lets you set the OpenAI API base, so let's point it at our proxy instead.

$ aider --openai-api-base http://0.0.0.0:8000

And that's it!