
Commit c04225f: Update README.md
1 parent 1c2994b

File tree: 1 file changed, +2 −2 lines


README.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -11,7 +11,7 @@
 </div>

 **WebLLM is a high-performance in-browser LLM inference engine** that directly
-brings language model chats directly onto web browsers with hardware acceleration.
+brings language model inference directly onto web browsers with hardware acceleration.
 Everything runs inside the browser with no server support and is accelerated with WebGPU.

 **WebLLM is fully compatible with [OpenAI API](https://platform.openai.com/docs/api-reference/chat).**
@@ -21,7 +21,7 @@ including json-mode, function-calling, streaming, etc.
 We can bring a lot of fun opportunities to build AI assistants for everyone and enable privacy while enjoying GPU acceleration.

 You can use WebLLM as a base [npm package](https://www.npmjs.com/package/@mlc-ai/web-llm) and build your own web application on top of it by following the [documentation](https://mlc.ai/mlc-llm/docs/deploy/javascript.html) and checking out [Get Started](#get-started).
-This project is a companion project of [MLC LLM](https://github.com/mlc-ai/mlc-llm), which runs LLMs natively on iPhone and other native local environments.
+This project is a companion project of [MLC LLM](https://github.com/mlc-ai/mlc-llm), which enables universal deployment of LLM across hardware environments.

 <div align="center">
```
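The README text in the diff says WebLLM can be used as a base npm package with an OpenAI-compatible API. A minimal sketch of that usage, assuming the `CreateMLCEngine` entry point from `@mlc-ai/web-llm` and a prebuilt model id taken as an example (check the project's model list for current ids); this runs only in a WebGPU-capable browser, not in Node:

```typescript
// Sketch: using WebLLM's npm package (@mlc-ai/web-llm) in a browser app.
// The model id below is an assumption for illustration; pick any id from
// WebLLM's prebuilt model list.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Downloads model weights and compiles WebGPU kernels inside the browser;
  // no server-side inference is involved.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");

  // OpenAI-style chat completion call, per the compatibility the README claims.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Explain WebGPU in one sentence." }],
  });
  console.log(reply.choices[0]?.message.content);
}

main();
```

Because the engine needs WebGPU, this snippet is meant to be bundled into a web page rather than executed headlessly.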