Issues: mlc-ai/web-llm
Service worker engine hangs forever if client is lost while streaming results
#620 opened Oct 26, 2024 by Bainainai
WebLLM always processes on Intel UHD Graphics, not on NVIDIA T1200
#609 opened Oct 10, 2024 by b521f771d8991e6f1d8e65ae05a8d783
[Tracking][WebLLM] Function calling (beta) and Embeddings
#526 opened Aug 4, 2024 by CharlieFRuan (5 of 7 tasks)
Phi 3 Mini output near random (Phi-3-mini-4k-instruct-q4f16_1-MLC)
#519 opened Jul 28, 2024 by cdrini
Custom model outputs garbage in Firefox Nightly, works fine in Chrome
#518 opened Jul 26, 2024 by gulan28
Running an LLM in a web worker fails due to loglevel dependency [status: tracking]
#511 opened Jul 21, 2024 by jauniusmentimeter
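For context on the setup #511 concerns, a minimal sketch of the web worker pattern from the WebLLM docs (the model ID and file names below are assumptions for illustration, not taken from the issue):

```ts
// worker.ts - runs inside the web worker and serves engine requests.
import { WebWorkerMLCEngineHandler } from "@mlc-ai/web-llm";

const handler = new WebWorkerMLCEngineHandler();
self.onmessage = (msg: MessageEvent) => handler.onmessage(msg);
```

```ts
// main.ts - creates an engine backed by the worker above.
import { CreateWebWorkerMLCEngine } from "@mlc-ai/web-llm";

const engine = await CreateWebWorkerMLCEngine(
  new Worker(new URL("./worker.ts", import.meta.url), { type: "module" }),
  // Assumed model ID; any model from the prebuilt list would do.
  "Llama-3.1-8B-Instruct-q4f32_1-MLC",
);

const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Hello from a web worker!" }],
});
console.log(reply.choices[0].message.content);
```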
How to let the user cancel loading the model and stop it from fetching params
#499 opened Jul 11, 2024 by JohnReginaldShutler