Commit a644e94

Miniconda/Anaconda -> Miniforge update in examples (#11194)

* Change installation address: replace the former address "https://docs.conda.io/en/latest/miniconda.html#" with the new address "https://conda-forge.org/download/" for 63 occurrences under python/llm/example
* Change prompt: replace "Anaconda Prompt" with "Miniforge Prompt" for 1 occurrence

1 parent: 5f13700
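The bulk edit this commit performs (one URL swapped across dozens of example READMEs) can be sketched as a small script. The helper below is a hypothetical illustration, not the tool the author actually used; the old and new URLs are the ones quoted in the commit message.

```python
from pathlib import Path

# Old and new addresses, as quoted in the commit message.
OLD_URL = "https://docs.conda.io/en/latest/miniconda.html#"
NEW_URL = "https://conda-forge.org/download/"

def update_readmes(root: Path) -> int:
    """Replace OLD_URL with NEW_URL in every README.md under root.

    Returns the number of files that were modified.
    """
    changed = 0
    for readme in root.rglob("README.md"):
        text = readme.read_text(encoding="utf-8")
        if OLD_URL in text:
            readme.write_text(text.replace(OLD_URL, NEW_URL), encoding="utf-8")
            changed += 1
    return changed
```

Running this against the examples directory (e.g. `update_readmes(Path("python/llm/example"))`) would reproduce the 63 URL replacements; the "Anaconda Prompt" -> "Miniforge Prompt" wording change would be a second, analogous substitution.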

File tree: 64 files changed (+64 −64 lines)

Some content is hidden: large commits hide part of their diff by default, so only a subset of the 64 changed files appears below.

python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md (+1 −1)

@@ -21,7 +21,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 In the example [generate.py](./generate.py), we show a basic use case to load a GGUF LLaMA2 model into `ipex-llm` using `from_gguf()` API, with IPEX-LLM optimizations.

 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Aquila model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/aquila2/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Aquila2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/chatglm/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/codegemma/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a CodeGemma model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/codeshell/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a CodeShell model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/deepseek-moe/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a DeepSeek-MoE model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/distil-whisper/README.md (+1 −1)

@@ -8,7 +8,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Recognize Tokens using `generate()` API
 In the example [recognize.py](./recognize.py), we show a basic use case for a Distil-Whisper model to conduct transcription using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/flan-t5/README.md (+1 −1)

@@ -8,7 +8,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Flan-t5 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/fuyu/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Fuyu model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm-xcomposer/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Multi-turn chat centered around an image using `chat()` API
 In the example [chat.py](./chat.py), we show a basic use case for an InternLM_XComposer model to start a multi-turn chat centered around an image using `chat()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/mistral/README.md (+1 −1)

@@ -9,7 +9,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Mistral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/mixtral/README.md (+1 −1)

@@ -9,7 +9,7 @@ To run these examples with IPEX-LLM on Intel CPUs, we have some recommended requ
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Mixtral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations on Intel CPUs.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-1_5/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phi-1_5 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-2/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phi-2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phi-3 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/phixtral/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a phixtral model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/qwen-vl/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Multimodal chat using `chat()` API
 In the example [chat.py](./chat.py), we show a basic use case for a Qwen-VL model to start a multimodal chat using `chat()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/replit/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Replit model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/stablelm/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a StableLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/yi/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Yi model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/yuan2/README.md (+1 −1)

@@ -9,7 +9,7 @@ In addition, you need to modify some files in Yuan2-2B-hf folder, since Flash at
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for an Yuan2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya/README.md (+1 −1)

@@ -12,7 +12,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Ziya model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/PyTorch-Models/Model/aquila2/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a Aquila2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/PyTorch-Models/Model/bark/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Synthesize speech with the given input text
 In the example [synthesize_speech.py](./synthesize_speech.py), we show a basic use case for Bark model to synthesize speech based on the given text, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/PyTorch-Models/Model/bert/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Extract the feature of given text
 In the example [extract_feature.py](./extract_feature.py), we show a basic use case for a BERT model to extract the feature of given text, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

python/llm/example/CPU/PyTorch-Models/Model/bluelm/README.md (+1 −1)

@@ -7,7 +7,7 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
 ## Example: Predict Tokens using `generate()` API
 In the example [generate.py](./generate.py), we show a basic use case for a BlueLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
 ### 1. Install
-We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
+We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://conda-forge.org/download/).

 After installing conda, create a Python environment for IPEX-LLM:

0 commit comments