# ExecuTorch Llama iOS Demo App

We’re excited to share that the newly revamped iOS demo app is live and includes many new updates to provide a more intuitive and smoother user experience with a chat use case! The primary goal of this app is to showcase how easily ExecuTorch can be integrated into an iOS demo app and how to exercise the many features ExecuTorch and Llama models have to offer.

This app serves as a valuable resource to inspire your creativity and provide foundational code that you can customize and adapt for your particular use case.

Please dive in and start exploring our demo app today! We look forward to any feedback and are excited to see your innovative ideas.

## Key Concepts
From this demo app, you will learn many key concepts, such as:
* How to prepare Llama models, build the ExecuTorch library, and perform model inference across delegates
* How to expose the ExecuTorch library via Swift Package Manager (see the manifest sketch at the end of this section)
* What app-facing capabilities ExecuTorch currently provides

The goal is for you to see the type of support ExecuTorch provides and feel comfortable leveraging it for your own use cases.
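
The ExecuTorch runtime and its Apple backends are distributed as a Swift package (the demo app already declares this dependency in its Xcode project). As a point of reference, a `Package.swift` fragment for pulling the package into your own app might look roughly like the sketch below; the package URL, branch, and product names are assumptions, so check the ExecuTorch iOS documentation for the exact values.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
  name: "MyLlamaApp",  // hypothetical app target
  platforms: [.iOS(.v17)],
  dependencies: [
    // Assumed URL and branch; verify against the ExecuTorch release you use.
    .package(url: "https://github.com/pytorch/executorch.git", branch: "latest")
  ],
  targets: [
    .target(
      name: "MyLlamaApp",
      dependencies: [
        // Core runtime plus the backend(s) matching how the model was exported.
        // Product names are illustrative; the package defines the exact list.
        .product(name: "executorch", package: "executorch"),
        .product(name: "backend_xnnpack", package: "executorch")
      ]
    )
  ]
)
```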

## Supported Models

The app supports the following models (availability varies by delegate):
* Llama 3.1 8B
* Llama 3 8B
* Llama 2 7B
* Llava 1.5 (XNNPACK only)

## Building the application
First, note that ExecuTorch currently provides support across several delegates. Once you have identified the delegate of your choice, follow its README below for complete end-to-end instructions, from environment setup and model export to building the ExecuTorch libraries and running the app on device:

| Delegate | Resource |
| ------------------------------ | --------------------------------- |
| XNNPACK (CPU-based library) | [link](docs/delegates/xnnpack_README.md) |
| MPS (Metal Performance Shaders) | [link](docs/delegates/mps_README.md) |

## How to Use the App
This section provides the main steps to use the app, along with a code snippet of the ExecuTorch API.

```{note}
The ExecuTorch runtime is distributed as a Swift package providing .xcframework prebuilt binary targets.
Xcode will download and cache the package on the first run, which will take some time.
```

* Open Xcode and select "Open an existing project" to open `examples/demo-apps/apple_ios/LLaMA`.
* Ensure that the ExecuTorch package dependencies are installed correctly.
* Run the app. This builds and launches the app on the phone.
* In the app UI, pick a model and tokenizer to use, type a prompt, and tap the arrow button.
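
Under the hood, the app drives the model through the `LLaMARunner` wrapper framework bundled with the demo. The snippet below is a rough sketch of what that API looks like from Swift; the exact class name, initializer, and method signature are assumptions that may differ between ExecuTorch versions, so treat it as illustrative rather than definitive.

```swift
import LLaMARunner  // wrapper framework bundled with the demo app (assumed name)

// Paths would normally come from the files the user picked in the app UI.
let runner = Runner(
  modelPath: "/path/to/llama3_1.pte",
  tokenizerPath: "/path/to/tokenizer.model"
)

do {
  // Generated tokens stream back through the callback, which is how the UI
  // appends text to the chat bubble as it is produced.
  try runner.generate("What is ExecuTorch?", sequenceLength: 128) { token in
    print(token, terminator: "")
  }
} catch {
  print("Generation failed: \(error)")
}
```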

## Copy the model to Simulator

* Drag and drop the model and tokenizer files onto the Simulator window and save them somewhere inside the iLLaMA folder.
* Pick the files in the app dialog, type a prompt, and click the arrow-up button.

## Copy the model to Device

* Wire-connect the device and open its contents in Finder.
* Navigate to the Files tab and drag and drop the model and tokenizer files onto the iLLaMA folder.
* Wait until the files are copied.
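
The iLLaMA folder maps to the app's Documents directory, so once the copy finishes the files are visible to the app. As a hypothetical illustration (not necessarily how the demo app implements its file picker), an app could enumerate the copied models and tokenizers like this:

```swift
import Foundation

// List model (.pte) and tokenizer (.bin / .model) files that were copied into
// the app's Documents directory. File extensions here are assumptions based on
// the artifacts the export steps produce.
func availableModelFiles() -> [URL] {
  let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
  let contents = (try? FileManager.default.contentsOfDirectory(
    at: documents, includingPropertiesForKeys: nil)) ?? []
  return contents.filter { ["pte", "bin", "model"].contains($0.pathExtension) }
}
```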

If the app runs successfully on your device, you should see something like this:

<p align="center">
<img src="./docs/screenshots/ios_demo_app.jpg" alt="iOS LLaMA App" width="300">
</p>

For Llava 1.5 models, you can select an image (via the image/camera selector button) before typing the prompt and tapping the send button.

<p align="center">
<img src="./docs/screenshots/ios_demo_app_llava.jpg" alt="iOS LLaMA App" width="300">
</p>

## Reporting Issues
If you encounter any bugs or issues while following this tutorial, please file a bug/issue on [Github](https://github.com/pytorch/executorch/issues/new).