From 659e0f1a9d500479e6e7954b4190fccdce2e2868 Mon Sep 17 00:00:00 2001
From: Yevhenii Semendiak
Date: Mon, 29 Apr 2024 16:09:17 +0300
Subject: [PATCH] update readme

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 8726babfdc..f78caada2c 100644
--- a/README.md
+++ b/README.md
@@ -11,8 +11,9 @@ Note: this setup is mostly for POC purposes. For production-ready setup, you'll
 5. `neuro-flow run vllm` -- start LLM inference server. Note: if you want to change LLM hosted there, change it in bash command and in `env.VLLM_MODEL` of `pgpt` job.
 6. `neuro-flow run pgpt` -- start PrivateGPT web server.
 
-Running PrivateGPT as stand-alone job
+### Running PrivateGPT as stand-alone job
+
 Instruction
 Currently, we support only deployment case with vLLM as LLM inference server, PGVector as a vector store and Ollama as embeddings server.