This is a BentoML example project, showing you how to serve and deploy Llama 3.2 11B Vision using vLLM, a high-throughput and memory-efficient inference engine.
See here for a full list of BentoML example projects.
💡 This example serves as a basis for advanced code customization, such as custom models, inference logic, or vLLM options. For simple LLM hosting with an OpenAI-compatible endpoint and no code required, see OpenLLM.
If you want to test the Service locally, we recommend using an Nvidia GPU with at least 48 GB of VRAM.
git clone https://github.com/bentoml/BentoVLLM.git
cd BentoVLLM/llama3.2-11b-instruct
# Recommend Python 3.11
pip install -r requirements.txt
export HF_TOKEN=<your-api-key>
We have defined a BentoML Service in service.py. Run bentoml serve in your project directory to start the Service.
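For orientation, here is a heavily simplified, text-only sketch of the kind of Service defined in service.py. The actual file in this repository also accepts image inputs, applies the system prompt, exposes OpenAI-compatible routes, and configures additional vLLM engine options, so treat this as an illustration rather than the real code.

import uuid
from typing import AsyncGenerator

import bentoml

MODEL_ID = "meta-llama/Llama-3.2-11B-Vision-Instruct"

@bentoml.service(resources={"gpu": 1}, traffic={"timeout": 300})
class VLLM:
    def __init__(self) -> None:
        from vllm import AsyncEngineArgs, AsyncLLMEngine
        # Build an async vLLM engine; the real service.py passes many more engine options.
        self.engine = AsyncLLMEngine.from_engine_args(AsyncEngineArgs(model=MODEL_ID))

    @bentoml.api
    async def generate(
        self, prompt: str = "Describe this image", max_tokens: int = 128
    ) -> AsyncGenerator[str, None]:
        from vllm import SamplingParams
        stream = self.engine.generate(
            prompt, SamplingParams(max_tokens=max_tokens), uuid.uuid4().hex
        )
        cursor = 0
        async for request_output in stream:
            text = request_output.outputs[0].text
            # Stream back only the newly generated text since the last chunk.
            yield text[cursor:]
            cursor = len(text)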
$ bentoml serve .
2024-01-18T07:51:30+0800 [INFO] [cli] Starting production HTTP BentoServer from "service:VLLM" listening on http://localhost:3000 (Press CTRL+C to quit)
INFO 01-18 07:51:40 model_runner.py:501] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 01-18 07:51:40 model_runner.py:505] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode.
INFO 01-18 07:51:46 model_runner.py:547] Graph capturing finished in 6 secs.
The server is now active at http://localhost:3000. You can interact with it using the Swagger UI or in other ways.
CURL
curl -X 'POST' \
'http://localhost:3000/generate' \
-H 'accept: text/event-stream' \
-H 'Content-Type: multipart/form-data' \
-F 'image=@demo.jpg;type=image/jpeg' \
-F 'prompt=Describe this image' \
-F 'system_prompt=You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don'\''t know the answer to a question, please don'\''t share false information.' \
-F 'max_tokens=128'
Python client
import bentoml
import PIL.Image

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    response_generator = client.generate(
        image=PIL.Image.open("demo.jpg"),
        prompt="Describe this image",
    )
    for response in response_generator:
        print(response)
OpenAI-compatible endpoints
This Service uses the @openai_endpoints decorator to set up OpenAI-compatible endpoints (chat/completions and completions). This means your client can interact with the backend Service (in this case, the VLLM class) as if it were communicating directly with OpenAI's API. This utility does not affect your BentoML Service code, and you can use it for other LLMs as well.
from openai import OpenAI

client = OpenAI(base_url='http://localhost:3000/v1', api_key='na')

# Use the following func to get the available models
client.models.list()

chat_completion = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {"type": "image", "image_url": "./demo.jpg"}
            ]
        }
    ],
    stream=True,
    stop=["<|eot_id|>", "<|end_of_text|>"],
)
for chunk in chat_completion:
    # Extract and print the content of the model's reply
    print(chunk.choices[0].delta.content or "", end="")
These OpenAI-compatible endpoints also support vLLM extra parameters. For example, you can force the chat completion to output a JSON object by using the guided_json parameter:
from openai import OpenAI

client = OpenAI(base_url='http://localhost:3000/v1', api_key='na')

# Use the following func to get the available models
client.models.list()

json_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"}
    }
}

chat_completion = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {"type": "image", "image_url": "./demo.jpg"}
            ]
        }
    ],
    extra_body=dict(guided_json=json_schema),
)
print(chat_completion.choices[0].message.content)  # will return something like: {"city": "Paris"}
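As another illustration (a sketch; the exact set of guided-decoding parameters depends on your vLLM version), guided_choice constrains the reply to one of a fixed set of strings. It reuses the client created above.

chat_completion = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=[
        {"role": "user", "content": "Answer with a single word: is the sky blue?"}
    ],
    # Restrict the output to one of the listed choices
    extra_body=dict(guided_choice=["yes", "no"]),
)
print(chat_completion.choices[0].message.content)  # "yes" or "no"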
All supported extra parameters are listed in vLLM documentation.
Note: If your Service is deployed with protected endpoints on BentoCloud, you need to set the environment variable OPENAI_API_KEY to your BentoCloud API key first.
export OPENAI_API_KEY={YOUR_BENTOCLOUD_API_TOKEN}
You can then use the following line to replace the client in the above code snippet. Refer to Obtain the endpoint URL to retrieve the endpoint URL.
client = OpenAI(base_url='your_bentocloud_deployment_endpoint_url/v1')
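The OpenAI client reads OPENAI_API_KEY from the environment by default; if you prefer to pass the key explicitly, something like the following also works (the base URL here is a placeholder for your actual Deployment endpoint).

import os
from openai import OpenAI

client = OpenAI(
    base_url='your_bentocloud_deployment_endpoint_url/v1',
    api_key=os.environ['OPENAI_API_KEY'],  # your BentoCloud API token
)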
For detailed explanations of the Service code, see vLLM inference.
After the Service is ready, you can deploy the application to BentoCloud for better management and scalability. Sign up if you don't have a BentoCloud account.
Make sure you have logged in to BentoCloud.
bentoml cloud login
Create a BentoCloud secret to store the required environment variable and reference it for deployment.
bentoml secret create huggingface HF_TOKEN=$HF_TOKEN
bentoml deploy . --secret huggingface
Once the application is up and running on BentoCloud, you can access it via the exposed URL.
Note: For custom deployment in your own infrastructure, use BentoML to generate an OCI-compliant image.
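A typical flow looks like this (a sketch assuming Docker with the NVIDIA Container Toolkit is installed; the vllm:latest tag is illustrative and should be replaced with the Bento tag printed by bentoml build):

bentoml build
bentoml containerize vllm:latest
docker run --gpus all -e HF_TOKEN=$HF_TOKEN -p 3000:3000 vllm:latest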