1 parent e83d4b2 commit 8d3bf05
README.md
@@ -14,11 +14,12 @@
 
 </div>
 
-**WebLLM is a high-performance in-browser LLM inference engine** that directly
+## Overview
+
+WebLLM is a high-performance in-browser LLM inference engine that directly
 brings language model inference directly onto web browsers with hardware acceleration.
 Everything runs inside the browser with no server support and is accelerated with WebGPU.
-**WebLLM is fully compatible with [OpenAI API](https://platform.openai.com/docs/api-reference/chat).**
+WebLLM is **fully compatible with [OpenAI API](https://platform.openai.com/docs/api-reference/chat).**
 That is, you can use the same OpenAI API on **any open source models** locally, with functionalities
 including json-mode, function-calling, streaming, etc.
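The OpenAI-API compatibility described in the diff can be sketched as follows. This is a minimal sketch, not the project's canonical example: it assumes WebLLM's published entry points (`CreateMLCEngine` from `@mlc-ai/web-llm` and `engine.chat.completions.create`), and the model id is a placeholder — substitute any id from WebLLM's prebuilt model list. It must run in a WebGPU-capable browser.

```typescript
// Hedged sketch: WebLLM's OpenAI-compatible streaming chat API.
// Assumes a WebGPU-capable browser; the model id is an example id,
// not a guarantee of availability.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Downloads, caches, and compiles the model in the browser on first use.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");

  // Same request shape as OpenAI's chat completions endpoint.
  const stream = await engine.chat.completions.create({
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Explain WebGPU in one sentence." },
    ],
    stream: true,
  });

  // Streaming chunks follow OpenAI's delta format.
  let reply = "";
  for await (const chunk of stream) {
    reply += chunk.choices[0]?.delta?.content ?? "";
  }
  console.log(reply);
}

main();
```

Because the request and response shapes match OpenAI's, code written against the OpenAI SDK can typically be pointed at a local WebLLM engine with minimal changes.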