
Commit eac07df

Update versions for release 14 hotfix. (#5040)

Parent: 2c1acd4

21 files changed: +72 −53 lines

README.md (+3 −3)

````diff
@@ -2,7 +2,7 @@

 # Unity ML-Agents Toolkit

-[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/)
+[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/)

 [![license badge](https://img.shields.io/badge/license-Apache--2.0-green.svg)](LICENSE)

@@ -49,8 +49,8 @@ descriptions of all these features.
 ## Releases & Documentation


-**Our latest, stable release is `Release 13`. Click
-[here](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/Readme.md)
+**Our latest, stable release is `Release 14`. Click
+[here](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Readme.md)
 to get started with the latest release of ML-Agents.**

 The table below lists all our releases, including our `master` branch which is
````
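The bulk of this commit is a mechanical `release_13` → `release_14` substitution across documentation links in 21 files. A bump like this is usually scripted rather than done by hand; the sketch below is a hypothetical helper (not part of the repository) showing the kind of substitution involved, handling both the bare tag (`release_13`) and the docs variant (`release_13_docs`):

```python
import re

def bump_release_links(text: str, old: int, new: int) -> str:
    """Rewrite release_<old> tags (and release_<old>_docs) to release_<new>.

    The lookahead keeps the match anchored so e.g. a hypothetical
    'release_130' would not be touched when bumping release 13.
    """
    pattern = re.compile(rf"release_{old}(?=_docs\b|\b)")
    return pattern.sub(f"release_{new}", text)

line = "https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/"
print(bump_release_links(line, 13, 14))
# -> https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/
```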

com.unity.ml-agents.extensions/Documentation~/Grid-Sensor.md (+1 −1)

````diff
@@ -36,7 +36,7 @@ These limitations provided the motivation towards the development of the Grid Se

 An image can be thought of as a matrix of a predefined width (W) and a height (H) and each pixel can be thought of as simply an array of length 3 (in the case of RGB), `[Red, Green, Blue]` holding the different channel information of the color (channel) intensities at that pixel location. Thus an image is just a 3 dimensional matrix of size WxHx3. A Grid Observation can be thought of as a generalization of this setup where in place of a pixel there is a "cell" which is an array of length N representing different channel intensities at that cell position. From a Convolutional Neural Network point of view, the introduction of multiple channels in an "image" isn't a new concept. One such example is using an RGB-Depth image which is used in several robotics applications. The distinction of Grid Observations is what the data within the channels represents. Instead of limiting the channels to color intensities, the channels within a cell of a Grid Observation generalize to any data that can be represented by a single number (float or int).

-Before jumping into the details of the Grid Sensor, an important thing to note is the agent performance and qualitatively different behavior over raycasts. Unity MLAgent's comes with a suite of example environments. One in particular, the [Food Collector](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/Learning-Environment-Examples.md#food-collector), has been the focus of the Grid Sensor development.
+Before jumping into the details of the Grid Sensor, an important thing to note is the agent performance and qualitatively different behavior over raycasts. Unity MLAgent's comes with a suite of example environments. One in particular, the [Food Collector](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Learning-Environment-Examples.md#food-collector), has been the focus of the Grid Sensor development.

 The Food Collector environment can be described as:
 * Set-up: A multi-agent environment where agents compete to collect food.
````
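The context lines above describe a Grid Observation as a W×H×N volume: an image whose "pixels" are cells holding N arbitrary channel values instead of 3 color intensities. A minimal sketch of that shape, using plain Python lists for illustration only (not the package's actual API):

```python
# A Grid Observation as described above: a W x H grid of cells, where each
# cell is an array of N channel values (anything representable as a single
# number), generalizing an RGB image's W x H x 3 layout.
W, H, N = 4, 3, 5  # example sizes; real sensor dimensions are configurable

grid_observation = [[[0.0] * N for _ in range(H)] for _ in range(W)]

# Write one channel of one cell, e.g. channel 2 at grid position (x=1, y=0):
grid_observation[1][0][2] = 1.0

print(len(grid_observation), len(grid_observation[0]), len(grid_observation[0][0]))
# -> 4 3 5
```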

com.unity.ml-agents.extensions/Documentation~/Match3.md (+1 −1)

````diff
@@ -10,7 +10,7 @@ Our aim is to enable Match-3 teams to leverage ML-Agents to create player agents
 This implementation includes:

 * C# implementation catered toward a Match-3 setup including concepts around encoding for moves based on [Human Like Playtesting with Deep Learning](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)
-* An example Match-3 scene with ML-Agents implemented (located under /Project/Assets/ML-Agents/Examples/Match3). More information, on Match-3 example [here](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/docs/Learning-Environment-Examples.md#match-3).
+* An example Match-3 scene with ML-Agents implemented (located under /Project/Assets/ML-Agents/Examples/Match3). More information, on Match-3 example [here](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/docs/Learning-Environment-Examples.md#match-3).

 ### Feedback
 If you are a Match-3 developer and are trying to leverage ML-Agents for this scenario, [we want to hear from you](https://forms.gle/TBsB9jc8WshgzViU9). Additionally, we are also looking for interested Match-3 teams to speak with us for 45 minutes. If you are interested, please indicate that in the [form](https://forms.gle/TBsB9jc8WshgzViU9). If selected, we will provide gift cards as a token of appreciation.
````

com.unity.ml-agents.extensions/Documentation~/com.unity.ml-agents.extensions.md (+4 −4)

````diff
@@ -29,14 +29,14 @@ The ML-Agents Extensions package is not currently available in the Package Manag
 recommended ways to install the package:

 ### Local Installation
-[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
-[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/Installation.md#advanced-local-installation-for-development-1)
+[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
+[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/Installation.md#advanced-local-installation-for-development-1)
 directions (substituting `com.unity.ml-agents.extensions` for the package name).

 ### Github via Package Manager
 In Unity 2019.4 or later, open the Package Manager, hit the "+" button, and select "Add package from git URL".

-![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/images/unity_package_manager_git_url.png)
+![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/images/unity_package_manager_git_url.png)

 In the dialog that appears, enter
 ```
@@ -67,4 +67,4 @@ following versions of the Unity Editor:
 - No way to customize the action space of the `InputActuatorComponent`

 ## Need Help?
-The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/README.md) contains links for contacting the team or getting support.
+The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/README.md) contains links for contacting the team or getting support.
````
com.unity.ml-agents.extensions/package.json (+2 −2)

````diff
@@ -1,10 +1,10 @@
 {
   "name": "com.unity.ml-agents.extensions",
   "displayName": "ML Agents Extensions",
-  "version": "0.1.0-preview",
+  "version": "0.2.0-preview",
   "unity": "2018.4",
   "description": "A source-only package for new features based on ML-Agents",
   "dependencies": {
-    "com.unity.ml-agents": "1.8.0-preview"
+    "com.unity.ml-agents": "1.8.1-preview"
   }
 }
````

com.unity.ml-agents/Documentation~/com.unity.ml-agents.md (+2 −2)

````diff
@@ -123,10 +123,10 @@ Please refer to "Information that is passively collected by Unity" in the
 [unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
 [unity inference engine]: https://docs.unity3d.com/Packages/com.unity.barracuda@latest/index.html
 [package manager documentation]: https://docs.unity3d.com/Manual/upm-ui-install.html
-[installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Installation.md
+[installation instructions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Installation.md
 [github repository]: https://github.com/Unity-Technologies/ml-agents
 [python package]: https://github.com/Unity-Technologies/ml-agents
 [execution order of event functions]: https://docs.unity3d.com/Manual/ExecutionOrder.html
 [connect with us]: https://github.com/Unity-Technologies/ml-agents#community-and-feedback
 [ml-agents forum]: https://forum.unity.com/forums/ml-agents.453/
-[ML-Agents GitHub repo]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/com.unity.ml-agents.extensions
+[ML-Agents GitHub repo]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/com.unity.ml-agents.extensions
````

com.unity.ml-agents/Runtime/Academy.cs (+3 −3)

````diff
@@ -20,7 +20,7 @@
  * API. For more information on each of these entities, in addition to how to
  * set-up a learning environment and train the behavior of characters in a
  * Unity scene, please browse our documentation pages on GitHub:
- * https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/docs/
+ * https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/docs/
  */

 namespace Unity.MLAgents
@@ -61,7 +61,7 @@ void FixedUpdate()
     /// fall back to inference or heuristic decisions. (You can also set agents to always use
     /// inference or heuristics.)
     /// </remarks>
-    [HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_13_docs/" +
+    [HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_14_docs/" +
         "docs/Learning-Environment-Design.md")]
     public class Academy : IDisposable
     {
@@ -103,7 +103,7 @@ public class Academy : IDisposable
         /// Unity package version of com.unity.ml-agents.
         /// This must match the version string in package.json and is checked in a unit test.
         /// </summary>
-        internal const string k_PackageVersion = "1.8.0-preview";
+        internal const string k_PackageVersion = "1.8.1-preview";

         const int k_EditorTrainingPort = 5004;
````
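The comment on `k_PackageVersion` notes that the constant must match the version string in `package.json` and is enforced by a unit test; this commit bumps both in lockstep (see the `package.json` hunk below). The repository's actual check is a C# unit test; the sketch below is a hypothetical re-implementation of the same invariant, shown in Python purely for illustration:

```python
import json
import re

def versions_match(academy_cs: str, package_json: str) -> bool:
    """Check the invariant described above: the k_PackageVersion constant in
    Academy.cs must equal the "version" field of package.json."""
    const = re.search(r'k_PackageVersion\s*=\s*"([^"]+)"', academy_cs)
    assert const is not None, "k_PackageVersion constant not found"
    return const.group(1) == json.loads(package_json)["version"]

# Snippets mirroring the post-commit state of both files:
academy_snippet = 'internal const string k_PackageVersion = "1.8.1-preview";'
package_snippet = '{"name": "com.unity.ml-agents", "version": "1.8.1-preview"}'
print(versions_match(academy_snippet, package_snippet))
# -> True
```

A check like this is what makes a "forgot to bump one of the two files" mistake fail in CI instead of shipping.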

com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs (+1 −1)

````diff
@@ -218,7 +218,7 @@ public interface IActionReceiver
     ///
     /// See [Agents - Actions] for more information on masking actions.
     ///
-    /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
+    /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
     /// </remarks>
     /// <seealso cref="IActionReceiver.OnActionReceived"/>
     void WriteDiscreteActionMask(IDiscreteActionMask actionMask);
````

com.unity.ml-agents/Runtime/Actuators/IDiscreteActionMask.cs (+1 −1)

````diff
@@ -17,7 +17,7 @@ public interface IDiscreteActionMask
     ///
     /// See [Agents - Actions] for more information on masking actions.
     ///
-    /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
+    /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
     /// </remarks>
     /// <param name="branch">The branch for which the actions will be masked.</param>
     /// <param name="actionIndices">The indices of the masked actions.</param>
````

com.unity.ml-agents/Runtime/Agent.cs (+13 −13)

````diff
@@ -175,13 +175,13 @@ public override BuiltInActuatorType GetBuiltInActuatorType()
    /// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html]
    /// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
    /// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
-    /// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md
-    /// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design.md
+    /// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md
+    /// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design.md
    /// [Unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
-    /// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Readme.md
+    /// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Readme.md
    ///
    /// </remarks>
-    [HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/" +
+    [HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/" +
        "docs/Learning-Environment-Design-Agents.md")]
    [Serializable]
    [RequireComponent(typeof(BehaviorParameters))]
@@ -674,8 +674,8 @@ public int CompletedEpisodes
        /// for information about mixing reward signals from curiosity and Generative Adversarial
        /// Imitation Learning (GAIL) with rewards supplied through this method.
        ///
-        /// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#rewards
-        /// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+        /// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#rewards
+        /// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
        /// </remarks>
        /// <param name="reward">The new value of the reward.</param>
        public void SetReward(float reward)
@@ -704,8 +704,8 @@ public void SetReward(float reward)
        /// for information about mixing reward signals from curiosity and Generative Adversarial
        /// Imitation Learning (GAIL) with rewards supplied through this method.
        ///
-        /// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#rewards
-        /// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+        /// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#rewards
+        /// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
        ///</remarks>
        /// <param name="increment">Incremental reward value.</param>
        public void AddReward(float increment)
@@ -883,8 +883,8 @@ public virtual void Initialize() { }
        /// implementing a simple heuristic function can aid in debugging agent actions and interactions
        /// with its environment.
        ///
-        /// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
-        /// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
+        /// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
+        /// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
        /// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
        /// </remarks>
        /// <example>
@@ -1141,7 +1141,7 @@ void ResetSensors()
        /// For more information about observations, see [Observations and Sensors].
        ///
        /// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
-        /// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
+        /// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
        /// </remarks>
        public virtual void CollectObservations(VectorSensor sensor)
        {
@@ -1172,7 +1172,7 @@ public ReadOnlyCollection<float> GetObservations()
        ///
        /// See [Agents - Actions] for more information on masking actions.
        ///
-        /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
+        /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
        /// </remarks>
        /// <seealso cref="IActionReceiver.OnActionReceived"/>
        public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask)
@@ -1248,7 +1248,7 @@ public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask)
        ///
        /// For more information about implementing agent actions see [Agents - Actions].
        ///
-        /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
+        /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
        /// </remarks>
        /// <param name="actions">
        /// Struct containing the buffers of actions to be executed at this step.
````

com.unity.ml-agents/Runtime/Demonstrations/DemonstrationRecorder.cs (+1 −1)

````diff
@@ -19,7 +19,7 @@ namespace Unity.MLAgents.Demonstrations
    /// See [Imitation Learning - Recording Demonstrations] for more information.
    ///
    /// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
-    /// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
+    /// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
    /// </remarks>
    [RequireComponent(typeof(Agent))]
    [AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]
````

com.unity.ml-agents/Runtime/DiscreteActionMasker.cs (+1 −1)

````diff
@@ -32,7 +32,7 @@ internal DiscreteActionMasker(IDiscreteActionMask actionMask)
        ///
        /// See [Agents - Actions] for more information on masking actions.
        ///
-        /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_13_docs/docs/Learning-Environment-Design-Agents.md#actions
+        /// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_14_docs/docs/Learning-Environment-Design-Agents.md#actions
        /// </remarks>
        /// <param name="branch">The branch for which the actions will be masked.</param>
        /// <param name="actionIndices">The indices of the masked actions.</param>
````

com.unity.ml-agents/package.json (+1 −1)

````diff
@@ -1,7 +1,7 @@
 {
   "name": "com.unity.ml-agents",
   "displayName": "ML Agents",
-  "version": "1.8.0-preview",
+  "version": "1.8.1-preview",
   "unity": "2018.4",
   "description": "Use state-of-the-art machine learning to create intelligent character behaviors in any Unity environment (games, robotics, film, etc.).",
   "dependencies": {
````

docs/Installation-Anaconda-Windows.md (+2 −2)

````diff
@@ -123,10 +123,10 @@ commands in an Anaconda Prompt _(if you open a new prompt, be sure to activate
 the ml-agents Conda environment by typing `activate ml-agents`)_:

 ```sh
-git clone --branch release_13 https://github.com/Unity-Technologies/ml-agents.git
+git clone --branch release_14 https://github.com/Unity-Technologies/ml-agents.git
 ```

-The `--branch release_13` option will switch to the tag of the latest stable
+The `--branch release_14` option will switch to the tag of the latest stable
 release. Omitting that will get the `master` branch which is potentially
 unstable.
````
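As the changed lines note, `git clone --branch` accepts a tag name (such as `release_14`), not just a branch. A self-contained demo of that behavior, using a hypothetical throwaway local repository so no network access is needed:

```shell
# Demonstrates that `git clone --branch <tag>` checks out the tag rather than
# the default branch. The repository and tag here are throwaway stand-ins.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/origin-repo"
git -C "$tmp/origin-repo" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git -C "$tmp/origin-repo" tag release_14
git clone -q --branch release_14 "$tmp/origin-repo" "$tmp/clone-repo"
git -C "$tmp/clone-repo" describe --tags
```

The final `git describe --tags` reports the `release_14` tag, confirming the clone landed on the tagged commit.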
