From 6cf482bf011d15c935db9664d29b870b94cf977d Mon Sep 17 00:00:00 2001
From: Wei-Chen Wang
Date: Fri, 15 Sep 2023 15:15:06 -0400
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1be994a6..1c7599ea 100644
--- a/README.md
+++ b/README.md
@@ -128,7 +128,7 @@ Here, we provide step-by-step instructions to deploy LLaMA2-7B-chat with TinyCha
 | W4A8 | ✅ | ✅ | |
 | W8A8 | ✅ | ✅ | |
 
-- For Raspberry Pi, we only tested on Raspberry Pi 4 Model B with 8GB RAM. For other versions, please feel free to try it out and let us know if you encounter any issues.
+- For Raspberry Pi, we recommend using the board with 8GB RAM. Our testing was primarily conducted on Raspberry Pi 4 Model B Rev 1.4. For other versions, please feel free to try it out and let us know if you encounter any issues.
 - For Nvidia GPU, our CUDA backend may not support Nvidia GPUs with compute capability <= 7.5. We will release a new version to support Nvidia GPUs with lower compute capability soon, please stay tuned!
 
 ## Quantization and Model Support