Commit ac576f9

Release 22 (#6157)
* adding wrench
* correct build path
* release branch and 6.0 target
* XmlDoc update
* addressing xml docs
* more docs
* updating the release
* test xmldoc fixes
* more xml doc fixes
* Uncompress the 3DBall sample
* Fix API documentation
* more xml doc fixes
* Revert "Uncompress the 3DBall sample" (reverts commit d67dc94)
* reformat MaxStep xml
* more xml doc fixes
* fix more xml doc issues
* fix summary tag
* Updated changelog for missing PRs.
* Removed tabs from .tests.json.
* Updated changelog.
* Removed tabs from CHANGELOG.
* Fix failing ci post upgrade (#6141) (#6145)
* Update PerformancProject and DevProject.
* Removed mac perf tests.
* Removing standalone tests dep from wrench packaging.
* Fixed package works issues. Updated com.unity.ml-agents.md.
* Updated com.unity.ml-agents.md.
* Updated package version in Academy.cs
* Adding back in package pack deps.
* Updated package pack testing deps.
* Regenerated wrench ymls.
* License update.
* Extensions License update.
* Another license tweak.
* Another license tweak.
* Upgraded to sentis 2.1.0.
* Updated standalone yamato build test to use the new ml-agents ubuntu ci bokken image.
* Bumped python and extensions package versions.
* Changed ci image for pytest gpu yamato test.
* Changed default cuda dtype to torch.float32.
* Updated version validation and extensions version.
* Fixed failing GPU test.
* Fixed failing GPU test.
* Updated readme table and make_readme_table.py
* Updated publish to pypi gha.

---------

Co-authored-by: alexandre-ribard <[email protected]>
Co-authored-by: Aurimas Petrovas <>
1 parent 8760552 · commit ac576f9

25 files changed (+64, -63 lines)

.github/workflows/publish_pypi.yaml (+1, -1)

@@ -35,7 +35,7 @@ jobs:
         python setup.py bdist_wheel
     - name: Publish distribution 📦 to Test PyPI
       if: startsWith(github.ref, 'refs/tags') && contains(github.ref, 'test')
-      uses: pypa/gh-action-pypi-publish@master
+      uses: pypa/gh-action-pypi-publish@release/v1
       with:
         password: ${{ secrets.TEST_PYPI_PASSWORD }}
         repository_url: https://test.pypi.org/legacy/

.yamato/pytest-gpu.yml (+1, -1)

@@ -2,7 +2,7 @@ pytest_gpu:
   name: Pytest GPU
   agent:
     type: Unity::VM::GPU
-    image: ml-agents/ml-agents-ubuntu-18.04:latest
+    image: ml-agents/ubuntu-ci:v1.0.0
     flavor: b1.large
   commands:
     - |

colab/Colab_UnityEnvironment_1_Run.ipynb (+2, -2)

@@ -32,7 +32,7 @@
 },
 "source": [
 "# ML-Agents Open a UnityEnvironment\n",
-"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
+"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
 ]
 },
 {
@@ -149,7 +149,7 @@
 " import mlagents\n",
 " print(\"ml-agents already installed\")\n",
 "except ImportError:\n",
-" !python -m pip install -q mlagents==1.0.0\n",
+" !python -m pip install -q mlagents==1.1.0\n",
 " print(\"Installed ml-agents\")"
 ],
 "execution_count": 1,

colab/Colab_UnityEnvironment_2_Train.ipynb (+3, -3)

@@ -22,7 +22,7 @@
 },
 "source": [
 "# ML-Agents Q-Learning with GridWorld\n",
-"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
+"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
 ]
 },
 {
@@ -152,7 +152,7 @@
 " import mlagents\n",
 " print(\"ml-agents already installed\")\n",
 "except ImportError:\n",
-" !python -m pip install -q mlagents==1.0.0\n",
+" !python -m pip install -q mlagents==1.1.0\n",
 " print(\"Installed ml-agents\")"
 ],
 "execution_count": 2,
@@ -190,7 +190,7 @@
 "id": "pZhVRfdoyPmv"
 },
 "source": [
-"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
+"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
 "\n",
 "The observation is an image obtained by a camera on top of the grid.\n",
 "\n",

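The cell above describes GridWorld's camera observation. As a minimal sketch (not part of this commit) of what that looks like through the public `mlagents_envs` API that ships with the `mlagents==1.1.0` pin introduced here:

```python
# Hedged sketch: load GridWorld from the mlagents_envs default registry and
# inspect the camera observation the notebook text describes. Assumes
# `pip install mlagents==1.1.0` has run, as in the cell above.
from mlagents_envs.registry import default_registry

env = default_registry["GridWorld"].make()  # downloads a prebuilt binary
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

# The first observation spec should be the top-down camera image (H, W, C).
print(behavior_name, spec.observation_specs[0].shape)

env.close()
```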
colab/Colab_UnityEnvironment_3_SideChannel.ipynb (+6, -6)

@@ -23,7 +23,7 @@
 },
 "source": [
 "# ML-Agents Use SideChannels\n",
-"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_21_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
+"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_22_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
 ]
 },
 {
@@ -153,7 +153,7 @@
 " import mlagents\n",
 " print(\"ml-agents already installed\")\n",
 "except ImportError:\n",
-" !python -m pip install -q mlagents==1.0.0\n",
+" !python -m pip install -q mlagents==1.1.0\n",
 " print(\"Installed ml-agents\")"
 ],
 "execution_count": 2,
@@ -176,7 +176,7 @@
 "## Side Channel\n",
 "\n",
 "SideChannels are objects that can be passed to the constructor of a UnityEnvironment or the `make()` method of a registry entry to send non Reinforcement Learning related data.\n",
-"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
+"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
 "\n",
 "\n",
 "\n"
@@ -189,7 +189,7 @@
 },
 "source": [
 "### Engine Configuration SideChannel\n",
-"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
+"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
 "We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
 ]
 },
@@ -282,7 +282,7 @@
 },
 "source": [
 "### Environment Parameters Channel\n",
-"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
+"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
 "We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
 ]
 },
@@ -419,7 +419,7 @@
 },
 "source": [
 "### Creating your own Side Channels\n",
-"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
+"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
 ]
 },
 {
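The markdown cells edited above explain the two built-in side channels. A short sketch of the usage they document; class and method names come from the public `mlagents_envs` API rather than from this commit, and the `"gridSize"` key is purely illustrative:

```python
# Sketch of the side-channel pattern described in the cells above; assumes
# the standard mlagents_envs API (unchanged by this commit).
from mlagents_envs.registry import default_registry
from mlagents_envs.side_channel.engine_configuration_channel import (
    EngineConfigurationChannel,
)
from mlagents_envs.side_channel.environment_parameters_channel import (
    EnvironmentParametersChannel,
)

engine_channel = EngineConfigurationChannel()
params_channel = EnvironmentParametersChannel()

# Side channels are passed at construction time, here via a registry
# entry's make(), just as the notebook text says.
env = default_registry["GridWorld"].make(
    side_channels=[engine_channel, params_channel]
)

engine_channel.set_configuration_parameters(time_scale=20.0)  # speed up sim
params_channel.set_float_parameter("gridSize", 5.0)  # hypothetical key

env.reset()
env.close()
```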

colab/Colab_UnityEnvironment_4_SB3VectorEnv.ipynb (+2, -2)

@@ -7,7 +7,7 @@
 },
 "source": [
 "# ML-Agents run with Stable Baselines 3\n",
-"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
+"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
 ]
 },
 {
@@ -127,7 +127,7 @@
 " import mlagents\n",
 " print(\"ml-agents already installed\")\n",
 "except ImportError:\n",
-" !python -m pip install -q mlagents==1.0.0\n",
+" !python -m pip install -q mlagents==1.1.0\n",
 " print(\"Installed ml-agents\")"
 ]
 },
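This notebook drives ML-Agents from Stable Baselines 3 through a vectorized environment. A heavily simplified, single-environment sketch of the same bridge, using the gym wrapper bundled in `mlagents_envs` (which this commit does not touch; gym/gymnasium compatibility shims may be needed with newer SB3 releases):

```python
# Simplified sketch only: the notebook itself builds a proper VecEnv.
from mlagents_envs.registry import default_registry
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper
from stable_baselines3 import PPO

unity_env = default_registry["Basic"].make()  # single-agent example scene
env = UnityToGymWrapper(unity_env)            # expose it as a gym env

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

env.close()
```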

com.unity.ml-agents.extensions/Documentation~/com.unity.ml-agents.extensions.md (+6, -6)

@@ -28,24 +28,24 @@ The ML-Agents Extensions package is not currently available in the Package Manag
 recommended ways to install the package:
 
 ### Local Installation
-[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
-[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/Installation.md#advanced-local-installation-for-development-1)
+[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
+[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/Installation.md#advanced-local-installation-for-development-1)
 directions (substituting `com.unity.ml-agents.extensions` for the package name).
 
 ### Github via Package Manager
 In Unity 2019.4 or later, open the Package Manager, hit the "+" button, and select "Add package from git URL".
 
-![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/unity_package_manager_git_url.png)
+![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/unity_package_manager_git_url.png)
 
 In the dialog that appears, enter
 ```
-git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_21
+git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_22
 ```
 
 You can also edit your project's `manifest.json` directly and add the following line to the `dependencies`
 section:
 ```
-"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_21",
+"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_22",
 ```
 See [Git dependencies](https://docs.unity3d.com/Manual/upm-git.html#subfolder) for more information. Note that this
 may take several minutes to resolve the packages the first time that you add it.
@@ -67,4 +67,4 @@ If using the `InputActuatorComponent`
 - No way to customize the action space of the `InputActuatorComponent`
 
 ## Need Help?
-The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/README.md) contains links for contacting the team or getting support.
+The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/README.md) contains links for contacting the team or getting support.

com.unity.ml-agents/Runtime/Academy.cs (+2, -2)

@@ -20,7 +20,7 @@
 * API. For more information on each of these entities, in addition to how to
 * set-up a learning environment and train the behavior of characters in a
 * Unity scene, please browse our documentation pages on GitHub:
-* https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/
+* https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/
 */
 
 namespace Unity.MLAgents
@@ -61,7 +61,7 @@ void FixedUpdate()
 /// fall back to inference or heuristic decisions. (You can also set agents to always use
 /// inference or heuristics.)
 /// </remarks>
-[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/" +
+[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/" +
 "docs/Learning-Environment-Design.md")]
 public class Academy : IDisposable
 {

com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs (+1, -1)

@@ -184,7 +184,7 @@ public interface IActionReceiver
 ///
 /// See [Agents - Actions] for more information on masking actions.
 ///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
 /// </remarks>
 /// <seealso cref="IActionReceiver.OnActionReceived"/>
 void WriteDiscreteActionMask(IDiscreteActionMask actionMask);

com.unity.ml-agents/Runtime/Actuators/IDiscreteActionMask.cs (+1, -1)

@@ -16,7 +16,7 @@ public interface IDiscreteActionMask
 ///
 /// See [Agents - Actions] for more information on masking actions.
 ///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
 /// </remarks>
 /// <param name="branch">The branch for which the actions will be masked.</param>
 /// <param name="actionIndex">Index of the action.</param>

com.unity.ml-agents/Runtime/Agent.cs (+13, -13)

@@ -192,13 +192,13 @@ public override BuiltInActuatorType GetBuiltInActuatorType()
 /// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html]
 /// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
 /// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
-/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md
-/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design.md
+/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md
+/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design.md
 /// [Unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
-/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Readme.md
+/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Readme.md
 ///
 /// </remarks>
-[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/" +
+[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/" +
 "docs/Learning-Environment-Design-Agents.md")]
 [Serializable]
 [RequireComponent(typeof(BehaviorParameters))]
@@ -728,8 +728,8 @@ public int CompletedEpisodes
 /// for information about mixing reward signals from curiosity and Generative Adversarial
 /// Imitation Learning (GAIL) with rewards supplied through this method.
 ///
-/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#rewards
-/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
+/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
 /// </remarks>
 /// <param name="reward">The new value of the reward.</param>
 public void SetReward(float reward)
@@ -756,8 +756,8 @@ public void SetReward(float reward)
 /// for information about mixing reward signals from curiosity and Generative Adversarial
 /// Imitation Learning (GAIL) with rewards supplied through this method.
 ///
-/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#rewards
-/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
+/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
 ///</remarks>
 /// <param name="increment">Incremental reward value.</param>
 public void AddReward(float increment)
@@ -945,8 +945,8 @@ public virtual void Initialize() { }
 /// implementing a simple heuristic function can aid in debugging agent actions and interactions
 /// with its environment.
 ///
-/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
-/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
+/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
 /// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
 /// </remarks>
 /// <example>
@@ -1203,7 +1203,7 @@ void ResetSensors()
 /// For more information about observations, see [Observations and Sensors].
 ///
 /// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
-/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
+/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
 /// </remarks>
 public virtual void CollectObservations(VectorSensor sensor)
 {
@@ -1245,7 +1245,7 @@ public ReadOnlyCollection<float> GetStackedObservations()
 ///
 /// See [Agents - Actions] for more information on masking actions.
 ///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
 /// </remarks>
 /// <seealso cref="IActionReceiver.OnActionReceived"/>
 public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
@@ -1312,7 +1312,7 @@ public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
 ///
 /// For more information about implementing agent actions see [Agents - Actions].
 ///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
 /// </para>
 /// </remarks>
 /// <param name="actions">

com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs (+1, -1)

@@ -19,7 +19,7 @@ namespace Unity.MLAgents.Demonstrations
 /// See [Imitation Learning - Recording Demonstrations] for more information.
 ///
 /// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
-/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
+/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
 /// </remarks>
 [RequireComponent(typeof(Agent))]
 [AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]

docs/Installation-Anaconda-Windows.md (+4, -4)

@@ -123,10 +123,10 @@ commands in an Anaconda Prompt _(if you open a new prompt, be sure to activate
 the ml-agents Conda environment by typing `activate ml-agents`)_:
 
 ```sh
-git clone --branch release_21 https://github.com/Unity-Technologies/ml-agents.git
+git clone --branch release_22 https://github.com/Unity-Technologies/ml-agents.git
 ```
 
-The `--branch release_21` option will switch to the tag of the latest stable
+The `--branch release_22` option will switch to the tag of the latest stable
 release. Omitting that will get the `main` branch which is potentially
 unstable.
 
@@ -151,7 +151,7 @@ config files in this directory when running `mlagents-learn`. Make sure you are
 connected to the Internet and then type in the Anaconda Prompt:
 
 ```console
-python -m pip install mlagents==1.0.0
+python -m pip install mlagents==1.1.0
 ```
 
 This will complete the installation of all the required Python packages to run
@@ -162,7 +162,7 @@ pip will get stuck when trying to read the cache of the package. If you see
 this, you can try:
 
 ```console
-python -m pip install mlagents==1.0.0 --no-cache-dir
+python -m pip install mlagents==1.1.0 --no-cache-dir
 ```
 
 This `--no-cache-dir` tells the pip to disable the cache.
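After either pip command above, a quick standard-library check confirms which versions actually resolved (nothing here is specific to this commit):

```python
# Sanity check after `pip install mlagents==1.1.0`; standard library only.
from importlib.metadata import version

print(version("mlagents"))       # expected: 1.1.0 for this release
print(version("mlagents-envs"))  # companion package, bumped in lockstep
```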
