
Commit 0866c52

Update GH link in docs (#5496)

* Update GH link in docs (#5493)

  Summary: Should use the raw link instead of the GH web link.

  Pull Request resolved: #5493
  Reviewed By: shoumikhin
  Differential Revision: D63040432
  Pulled By: kirklandsign
  fbshipit-source-id: f6b8f1ec4fe2d7ac1c5f25cc1c727279a9d20065
  (cherry picked from commit 16673f9)

* Fix link

Co-authored-by: Hansong Zhang <[email protected]>
Co-authored-by: Hansong Zhang <[email protected]>

1 parent ffbe90b, commit 0866c52
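The commit rewrites GitHub web (blob) links into raw.githubusercontent.com links, since the blob URL serves an HTML page rather than the image file itself. As a minimal sketch (a hypothetical helper, not part of this commit), the rewrite rule applied throughout the diff can be expressed as:

```python
def to_raw_url(blob_url: str) -> str:
    """Rewrite a GitHub blob URL to its raw.githubusercontent.com form.

    github.com/<org>/<repo>/blob/<branch>/<path>
      -> raw.githubusercontent.com/<org>/<repo>/refs/heads/<branch>/<path>
    """
    prefix = "https://github.com/"
    if not blob_url.startswith(prefix):
        return blob_url  # leave non-GitHub URLs untouched
    org, repo, marker, branch, *path = blob_url[len(prefix):].split("/")
    if marker != "blob":
        return blob_url  # not a blob link; nothing to rewrite
    return (f"https://raw.githubusercontent.com/{org}/{repo}"
            f"/refs/heads/{branch}/" + "/".join(path))

print(to_raw_url(
    "https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/chat.png"))
# -> https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/chat.png
```

This mirrors the per-line substitutions in both README diffs below; the `refs/heads/` segment pins the branch form of the raw URL, matching the replacement links in the commit.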

2 files changed: 10 insertions(+), 10 deletions(-)


examples/demo-apps/android/LlamaDemo/README.md
Lines changed: 6 additions & 6 deletions

@@ -46,7 +46,7 @@ Below are the UI features for the app.
 
 Select the settings widget to get started with picking a model, its parameters and any prompts.
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/opening_the_app_details.png" width=800>
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/opening_the_app_details.png" width=800>
 </p>
 
@@ -55,7 +55,7 @@ Select the settings widget to get started with picking a model, its parameters a
 
 Once you've selected the model, tokenizer, and model type you are ready to click on "Load Model" to have the app load the model and go back to the main Chat activity.
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/settings_menu.png" width=300>
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/settings_menu.png" width=300>
 </p>
 
@@ -87,12 +87,12 @@ int loadResult = mModule.load();
 ### User Prompt
 Once model is successfully loaded then enter any prompt and click the send (i.e. generate) button to send it to the model.
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/load_complete_and_start_prompt.png" width=300>
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/load_complete_and_start_prompt.png" width=300>
 </p>
 
 You can provide it more follow-up questions as well.
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/chat.png" width=300>
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/chat.png" width=300>
 </p>
 
 > [!TIP]
@@ -109,14 +109,14 @@ mModule.generate(prompt,sequence_length, MainActivity.this);
 For LLaVA-1.5 implementation, select the exported LLaVA .pte and tokenizer file in the Settings menu and load the model. After this you can send an image from your gallery or take a live picture along with a text prompt to the model.
 
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/llava_example.png" width=300>
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/llava_example.png" width=300>
 </p>
 
 
 ### Output Generated
 To show completion of the follow-up question, here is the complete detailed response from the model.
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/chat_response.png" width=300>
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/chat_response.png" width=300>
 </p>
 
 > [!TIP]

examples/demo-apps/apple_ios/LLaMA/README.md
Lines changed: 4 additions & 4 deletions

@@ -51,11 +51,11 @@ rm -rf \
 * Ensure that the ExecuTorch package dependencies are installed correctly, then select which ExecuTorch framework should link against which target.
 
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" width="600">
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" width="600">
 </p>
 
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/ios_demo_app_choosing_package.png" alt="iOS LLaMA App Choosing package" width="600">
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_choosing_package.png" alt="iOS LLaMA App Choosing package" width="600">
 </p>
 
 * Run the app. This builds and launches the app on the phone.
@@ -76,13 +76,13 @@ rm -rf \
 If the app successfully run on your device, you should see something like below:
 
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/ios_demo_app.jpg" alt="iOS LLaMA App" width="300">
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app.jpg" alt="iOS LLaMA App" width="300">
 </p>
 
 For Llava 1.5 models, you can select and image (via image/camera selector button) before typing prompt and send button.
 
 <p align="center">
-<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/ios_demo_app_llava.jpg" alt="iOS LLaMA App" width="300">
+<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_llava.jpg" alt="iOS LLaMA App" width="300">
 </p>
 
 ## Reporting Issues
