Commit 28303ad

Authored by miguelalonsojr, Ruo-Ping Dong, jmercado1985, maryamhonari, and Henry Peteet; committed via GitHub Enterprise.
Develop python api ga (#6)
* Dropped support for Python 3.6
* Pinned Python 3.9.9 for tests due to typing issues with 3.9.10
* Testing new bokken image
* Updated yamato standalone build test
* Updated standalone build test
* Updated yamato configs to use mla bokken vm
* Bug fixes for yamato yml files
* Fixed com.unity.ml-agents-test.yml
* Bumped minimum Python version to 3.7.2
* PettingZoo API prototype
* Added example
* Updated file names
* Support multiple behavior names
* Fixed multi-behavior action index
* Added install in colab
* Added setup
* Updated colab
* Fixed `__init__`
* Clone single branch
* Import tags only
* Import in init
* Catch import error
* Moved colab and added readme
* Handle agent dying
* Added tests
* Updated docs
* Added info
* Added action mask
* Fixed action mask
* Updated action masks in colab
* Changed default env
* Set version
* Fixed hybrid action
* Fixed colab for hybrid actions
* Added note on auto reset
* Updated colab name
* Updated README.md
* Following petting_zoo registry API (#5557)
* Init petting_zoo registry
* Cherry-picked Custom trainer editor analytics (#5511)
* Cherry-picked "Update dotnet-format to address breaking changes introduced by upstream changes (#5528)"
* Updated colab to match PettingZoo import API
* ToRevert: pull exp-petting-registry branch
* Added init file to tests
* Install pettingzoo-unity requirements for pytest
* Updated pytest command
* Added docstrings and comments
* Updated coverage to pettingzoo folder
* Unset log level
* Updated env string
* Two small bugfixes (#5589): added the missing `_cumulative_rewards` property; updated `agent_selection` to not error out when an agent finishes an episode
* Updated gym to 0.21.0 and PettingZoo to 1.13.1; fixed bugs with the AEC wrapper for gym and PZ updates. API tests are passing
* Some refactoring
* Finished initial implementation of the parallel API. Tests not passing
* Finished parallel API implementation and refactor. All PZ tests passing
* Cleanup
* Refactoring
* Pinned numpy version
* Added metadata and behavior_specs initialization
* Addressed behaviour_spec issues
* Bumped PZ version to 1.14.0; fixed failing tests
* Refactored gym-unity and petting-zoo into ml-agents-envs
* Added TODO to pydoc-config.yaml
* Refactored gym and PZ to be under a subpackage in the mlagents_envs package
* Refactored ml-agents-envs docs
* Minor update to PZ API doc
* Updated mlagents_envs docs and colab
* Updated pytest GitHub workflow to remove references to gym and PZ
* Refactored to remove some test coupling between trainers and envs
* Updated installation doc
* Updated ml-agents-envs/README.md
* Updated failing yamato jobs
* Updated CHANGELOG
* Updated Migration guide
* Doc updates based on CR
* Updated GitHub workflow for colab tests
* Fixed yamato import error

Co-authored-by: Ruo-Ping Dong <[email protected]>
Co-authored-by: Miguel Alonso Jr <miguelalonsojr>
Co-authored-by: jmercado1985 <[email protected]>
Co-authored-by: Maryam Honari <[email protected]>
Co-authored-by: Henry Peteet <[email protected]>
Co-authored-by: mahon94 <[email protected]>
Co-authored-by: Andrew Cohen <[email protected]>
1 parent b4cbaa6 commit 28303ad


62 files changed: +2707 −186 lines

.github/workflows/publish_pypi.yaml (+1 −1)

```diff
@@ -16,7 +16,7 @@ jobs:
     runs-on: [self-hosted, Linux, X64]
     strategy:
       matrix:
-        package-path: [ml-agents, ml-agents-envs, gym-unity]
+        package-path: [ml-agents, ml-agents-envs]

     steps:
       - uses: actions/checkout@main
```

.github/workflows/pytest.yml (+2 −4)

```diff
@@ -5,7 +5,6 @@ on:
     paths: # This action will only run if the PR modifies a file in one of these directories
       - 'ml-agents/**'
       - 'ml-agents-envs/**'
-      - 'gym-unity/**'
       - 'test_constraints*.txt'
       - 'test_requirements.txt'
       - '.github/workflows/pytest.yml'
@@ -47,7 +46,7 @@ jobs:
     #     # This path is specific to Ubuntu
     #     path: ~/.cache/pip
     #     # Look to see if there is a cache hit for the corresponding requirements file
-    #     key: ${{ runner.os }}-pip-${{ hashFiles('ml-agents/setup.py', 'ml-agents-envs/setup.py', 'gym-unity/setup.py', 'test_requirements.txt', matrix.pip_constraints) }}
+    #     key: ${{ runner.os }}-pip-${{ hashFiles('ml-agents/setup.py', 'ml-agents-envs/setup.py', 'test_requirements.txt', matrix.pip_constraints) }}
     #     restore-keys: |
     #       ${{ runner.os }}-pip-
     #       ${{ runner.os }}-
@@ -60,14 +59,13 @@ jobs:
         python -m pip install --progress-bar=off -e ./ml-agents-envs -c ${{ matrix.pip_constraints }}
         python -m pip install --progress-bar=off -e ./ml-agents -c ${{ matrix.pip_constraints }}
         python -m pip install --progress-bar=off -r test_requirements.txt -c ${{ matrix.pip_constraints }}
-        python -m pip install --progress-bar=off -e ./gym-unity -c ${{ matrix.pip_constraints }}
         python -m pip install --progress-bar=off -e ./ml-agents-plugin-examples -c ${{ matrix.pip_constraints }}
     - name: Save python dependencies
       run: |
         pip freeze > pip_versions-${{ matrix.python-version }}.txt
         cat pip_versions-${{ matrix.python-version }}.txt
     - name: Run pytest
-      run: pytest --cov=ml-agents --cov=ml-agents-envs --cov=gym-unity --cov-report html --junitxml=junit/test-results-${{ matrix.python-version }}.xml -p no:warnings -v
+      run: pytest --cov=ml-agents --cov=ml-agents-envs --cov-report=html --junitxml=junit/test-results-${{ matrix.python-version }}.xml -p no:warnings -v
     - name: Upload pytest test results
       uses: actions/upload-artifact@v2
       with:
```

.pre-commit-config.yaml (−4)

```diff
@@ -22,10 +22,6 @@ repos:
         # Exclude protobuf files and don't follow them when imported
         exclude: ".*_pb2.py"
         args: [--ignore-missing-imports, --disallow-incomplete-defs]
-      - id: mypy
-        name: mypy-gym-unity
-        files: "gym-unity/.*"
-        args: [--ignore-missing-imports, --disallow-incomplete-defs]

   - repo: https://gitlab.com/pycqa/flake8
     rev: 3.8.1
```

.yamato/gym-interface-test.yml (−1)

```diff
@@ -30,7 +30,6 @@ test_gym_interface_{{ editor.version }}:
       pull_request.changes.any match "Project/**" OR
       pull_request.changes.any match "ml-agents/tests/yamato/**" OR
       pull_request.changes.any match "ml-agents-envs/**" OR
-      pull_request.changes.any match "gym-unity/**" OR
       pull_request.changes.any match ".yamato/gym-interface-test.yml") AND
       NOT pull_request.changes.all match "**/*.md"
   {% endif %}
```

README.md (+2 −2)

```diff
@@ -38,8 +38,8 @@ developer communities.
 - Train using multiple concurrent Unity environment instances
 - Utilizes the [Unity Inference Engine](docs/Unity-Inference-Engine.md) to
   provide native cross-platform support
-- Unity environment [control from Python](docs/Python-API.md)
-- Wrap Unity learning environments as a [gym](gym-unity/README.md)
+- Unity environment [control from Python](docs/Python-LLAPI.md)
+- Wrap Unity learning environments as a [gym](docs/Python-Gym-API.md)

 See our [ML-Agents Overview](docs/ML-Agents-Overview.md) page for detailed
 descriptions of all these features.
```

com.unity.ml-agents/CHANGELOG.md (+6 −4)

```diff
@@ -9,15 +9,17 @@ and this project adheres to
 ## [Unreleased]
 ### Major Changes
 #### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
-#### ml-agents / ml-agents-envs / gym-unity (Python)
-- The minimum supported Python version for ml-agents-envs was changed to 3.7.2 (#4)
+#### ml-agents / ml-agents-envs
+- The minimum supported Python version for ml-agents-envs was changed to 3.7.2 (#5)
+- Added support for the PettingZoo multi-agent API (#6)
+- Refactored `gym-unity` into the `ml-agents-envs` package (#6)

 ### Minor Changes
 #### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
-#### ml-agents / ml-agents-envs / gym-unity (Python)
+#### ml-agents / ml-agents-envs
 ### Bug Fixes
 #### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
-#### ml-agents / ml-agents-envs / gym-unity (Python)
+#### ml-agents / ml-agents-envs

 ## [2.2.1-exp.1] - 2022-01-14
 ### Major Changes
```

docs/Installation.md (−2)

```diff
@@ -18,8 +18,6 @@ The ML-Agents Toolkit contains several components:
   a Unity scene. It is a foundational layer that facilitates data messaging
   between Unity scene and the Python machine learning algorithms.
   Consequently, `mlagents` depends on `mlagents_envs`.
-- [`gym_unity`](../gym-unity/) provides a Python-wrapper for your Unity scene
-  that supports the OpenAI Gym interface.
 - Unity [Project](../Project/) that contains several
   [example environments](Learning-Environment-Examples.md) that highlight the
   various features of the toolkit to help you get started.
```

docs/Learning-Environment-Executable.md (+1 −1)

```diff
@@ -62,7 +62,7 @@ can interact with it.

 ## Interacting with the Environment

-If you want to use the [Python API](Python-API.md) to interact with your
+If you want to use the [Python API](Python-LLAPI.md) to interact with your
 executable, you can pass the name of the executable with the argument
 'file_name' of the `UnityEnvironment`. For instance:
```

docs/Limitations.md (−1)

```diff
@@ -5,4 +5,3 @@ See the package-specific Limitations pages:
 - [`com.unity.mlagents` Unity package](../com.unity.ml-agents/Documentation~/com.unity.ml-agents.md#known-limitations)
 - [`mlagents` Python package](../ml-agents/README.md#limitations)
 - [`mlagents_envs` Python package](../ml-agents-envs/README.md#limitations)
-- [`gym_unity` Python package](../gym-unity/README.md#limitations)
```

docs/ML-Agents-Overview.md (+11 −5)

```diff
@@ -167,7 +167,7 @@ The ML-Agents Toolkit contains five high-level components:
   process to communicate with and control the Academy during training. However,
   it can be used for other purposes as well. For example, you could use the API
   to use Unity as the simulation engine for your own machine learning
-  algorithms. See [Python API](Python-API.md) for more information.
+  algorithms. See [Python API](Python-LLAPI.md) for more information.
 - **External Communicator** - which connects the Learning Environment with the
   Python Low-Level API. It lives within the Learning Environment.
 - **Python Trainers** which contains all the machine learning algorithms that
@@ -179,9 +179,15 @@ The ML-Agents Toolkit contains five high-level components:
 - **Gym Wrapper** (not pictured). A common way in which machine learning
   researchers interact with simulation environments is via a wrapper provided by
   OpenAI called [gym](https://github.com/openai/gym). We provide a gym wrapper
-  in a dedicated `gym-unity` Python package and
-  [instructions](../gym-unity/README.md) for using it with existing machine
+  in the `ml-agents-envs` package and
+  [instructions](Python-Gym-API.md) for using it with existing machine
   learning algorithms which utilize gym.
+- **PettingZoo Wrapper** (not pictured). PettingZoo is a Python API for
+  interacting with multi-agent simulation environments that provides a
+  gym-like interface. We provide a PettingZoo wrapper for Unity ML-Agents
+  environments in the `ml-agents-envs` package and
+  [instructions](Python-PettingZoo-API.md) for using it with machine learning
+  algorithms.

 <p align="center">
   <img src="images/learning_environment_basic.png"
@@ -286,10 +292,10 @@ In the previous mode, the Agents were used for training to generate a PyTorch
 model that the Agents can later use. However, any user of the ML-Agents Toolkit
 can leverage their own algorithms for training. In this case, the behaviors of
 all the Agents in the scene will be controlled within Python. You can even turn
-your environment into a [gym.](../gym-unity/README.md)
+your environment into a [gym.](Python-Gym-API.md)

 We do not currently have a tutorial highlighting this mode, but you can learn
-more about the Python API [here](Python-API.md).
+more about the Python API [here](Python-LLAPI.md).

 ## Flexible Training Scenarios
```
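The wrappers above matter because any training code written against gym's `reset`/`step` contract can drive a Unity environment unchanged. A minimal sketch of that contract, using a hypothetical `CountdownEnv` invented here to stand in for a wrapped Unity environment (its names, observations, and rewards are illustrative only):

```python
# Illustrative sketch, not ML-Agents source code: the gym-style contract
# that wrappers such as UnityToGymWrapper expose. CountdownEnv is a toy
# stand-in so the loop shape can be shown without Unity running.
from typing import Any, Dict, List, Tuple


class CountdownEnv:
    """Toy env: the observation counts down from 5; the episode ends at 0."""

    def reset(self) -> List[float]:
        self._t = 5
        return [float(self._t)]

    def step(self, action: int) -> Tuple[List[float], float, bool, Dict[str, Any]]:
        self._t -= 1
        obs = [float(self._t)]
        reward = 1.0
        done = self._t == 0
        return obs, reward, done, {}


def run_episode(env) -> float:
    # The same loop works for any object exposing the gym step contract,
    # including a wrapped Unity environment.
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, info = env.step(0)
        total += reward
    return total
```

Swapping `CountdownEnv` for a wrapped Unity environment leaves `run_episode` untouched, which is the point of the wrapper.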

docs/Migrating.md (+23 −4)

```diff
@@ -1,6 +1,25 @@
 # Upgrading

 # Migrating
+<!---
+TODO: update ml-agents-env package version before release
+--->
+## Migrating to the ml-agents-envs 0.29.0.dev0 package
+- Python 3.7 is now the minimum version of python supported due to [python3.6 EOL](https://endoflife.date/python).
+  Please update your python installation to 3.7.2 or higher. Note: Due to an issue with the typing system, the maximum
+  version of python supported is python 3.9.9.
+- The `gym-unity` package has been refactored into the `ml-agents-envs` package. Please update your imports accordingly.
+  - Example:
+    - Before:
+      ```python
+      from gym_unity.unity_gym_env import UnityToGymWrapper
+      ```
+    - After:
+      ```python
+      from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper
+      ```
+
+
 ## Migrating the package to version 2.0
 - The official version of Unity ML-Agents supports is now 2020.3 LTS. If you run
   into issues, please consider deleting your project's Library folder and reponening your
@@ -260,9 +279,9 @@ vector observations to be used simultaneously.
 - The `play_against_current_self_ratio` self-play trainer hyperparameter has
   been renamed to `play_against_latest_model_ratio`
 - Removed the multi-agent gym option from the gym wrapper. For multi-agent
-  scenarios, use the [Low Level Python API](Python-API.md).
+  scenarios, use the [Low Level Python API](Python-LLAPI.md).
 - The low level Python API has changed. You can look at the document
-  [Low Level Python API documentation](Python-API.md) for more information. If
+  [Low Level Python API documentation](Python-LLAPI.md) for more information. If
   you use `mlagents-learn` for training, this should be a transparent change.
 - The obsolete `Agent` methods `GiveModel`, `Done`, `InitializeAgent`,
   `AgentAction` and `AgentReset` have been removed.
@@ -487,7 +506,7 @@ vector observations to be used simultaneously.
 ### Important changes

 - The low level Python API has changed. You can look at the document
-  [Low Level Python API documentation](Python-API.md) for more information. This
+  [Low Level Python API documentation](Python-LLAPI.md) for more information. This
   should only affect you if you're writing a custom trainer; if you use
   `mlagents-learn` for training, this should be a transparent change.
 - `reset()` on the Low-Level Python API no longer takes a `train_mode`
@@ -497,7 +516,7 @@ vector observations to be used simultaneously.
   `UnityEnvironment` no longer has a `reset_parameters` field. To modify float
   properties in the environment, you must use a `FloatPropertiesChannel`. For
   more information, refer to the
-  [Low Level Python API documentation](Python-API.md)
+  [Low Level Python API documentation](Python-LLAPI.md)
 - `CustomResetParameters` are now removed.
 - The Academy no longer has a `Training Configuration` nor
   `Inference Configuration` field in the inspector. To modify the configuration
```
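The `gym_unity` import rename described in the migration guide is mechanical, so a small codemod can apply it across a codebase. This is a sketch only, handling just the dotted-module form shown in the migration example; `migrate_imports` is a hypothetical helper, not part of ml-agents, and a real codemod should also cover plain `import gym_unity` forms:

```python
# Hypothetical one-off migration helper: rewrites the old gym_unity
# module path to its new home under mlagents_envs.envs.
import re

OLD = r"\bgym_unity\.unity_gym_env\b"
NEW = "mlagents_envs.envs.unity_gym_env"


def migrate_imports(source: str) -> str:
    """Return the source text with the old module path replaced."""
    return re.sub(OLD, NEW, source)
```

Running it over a file's contents before committing, and reviewing the diff, is the intended use.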

docs/Python-Gym-API-Documentation.md (new file, +161)

# Table of Contents

* [mlagents\_envs.envs.unity\_gym\_env](#mlagents_envs.envs.unity_gym_env)
  * [UnityGymException](#mlagents_envs.envs.unity_gym_env.UnityGymException)
  * [UnityToGymWrapper](#mlagents_envs.envs.unity_gym_env.UnityToGymWrapper)
    * [\_\_init\_\_](#mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.__init__)
    * [reset](#mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.reset)
    * [step](#mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.step)
    * [render](#mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.render)
    * [close](#mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.close)
    * [seed](#mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.seed)
  * [ActionFlattener](#mlagents_envs.envs.unity_gym_env.ActionFlattener)
    * [\_\_init\_\_](#mlagents_envs.envs.unity_gym_env.ActionFlattener.__init__)
    * [lookup\_action](#mlagents_envs.envs.unity_gym_env.ActionFlattener.lookup_action)

<a name="mlagents_envs.envs.unity_gym_env"></a>
# mlagents\_envs.envs.unity\_gym\_env

<a name="mlagents_envs.envs.unity_gym_env.UnityGymException"></a>
## UnityGymException Objects

```python
class UnityGymException(error.Error)
```

Any error related to the gym wrapper of ml-agents.

<a name="mlagents_envs.envs.unity_gym_env.UnityToGymWrapper"></a>
## UnityToGymWrapper Objects

```python
class UnityToGymWrapper(gym.Env)
```

Provides a Gym wrapper for Unity Learning Environments.

<a name="mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.__init__"></a>
#### \_\_init\_\_

```python
 | __init__(unity_env: BaseEnv, uint8_visual: bool = False, flatten_branched: bool = False, allow_multiple_obs: bool = False, action_space_seed: Optional[int] = None)
```

Environment initialization.

**Arguments**:

- `unity_env`: The Unity BaseEnv to be wrapped in the gym. Will be closed when the UnityToGymWrapper closes.
- `uint8_visual`: Return visual observations as uint8 (0-255) matrices instead of float (0.0-1.0).
- `flatten_branched`: If True, turn branched discrete action spaces into a Discrete space rather than MultiDiscrete.
- `allow_multiple_obs`: If True, return a list of np.ndarrays as observations, with the first elements containing the visual observations and the last element containing the array of vector observations. If False, return a single np.ndarray containing either only a single visual observation or the array of vector observations.
- `action_space_seed`: If non-None, will be used to set the random seed on created gym.Space instances.

<a name="mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.reset"></a>
#### reset

```python
 | reset() -> Union[List[np.ndarray], np.ndarray]
```

Resets the state of the environment and returns an initial observation.

Returns: observation (object/list): the initial observation of the space.

<a name="mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.step"></a>
#### step

```python
 | step(action: List[Any]) -> GymStepResult
```

Run one timestep of the environment's dynamics. When the end of an episode is reached, you are responsible for calling `reset()` to reset this environment's state. Accepts an action and returns a tuple (observation, reward, done, info).

**Arguments**:

- `action` _object/list_ - an action provided by the environment

**Returns**:

- `observation` _object/list_ - agent's observation of the current environment
- `reward` _float/list_ - amount of reward returned after the previous action
- `done` _boolean/list_ - whether the episode has ended
- `info` _dict_ - contains auxiliary diagnostic information

<a name="mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.render"></a>
#### render

```python
 | render(mode="rgb_array")
```

Return the latest visual observations. Note that it will not render a new frame of the environment.

<a name="mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.close"></a>
#### close

```python
 | close() -> None
```

Override _close in your subclass to perform any necessary cleanup. Environments will automatically close() themselves when garbage collected or when the program exits.

<a name="mlagents_envs.envs.unity_gym_env.UnityToGymWrapper.seed"></a>
#### seed

```python
 | seed(seed: Any = None) -> None
```

Sets the seed for this env's random number generator(s). Currently not implemented.

<a name="mlagents_envs.envs.unity_gym_env.ActionFlattener"></a>
## ActionFlattener Objects

```python
class ActionFlattener()
```

Flattens branched discrete action spaces into single-branch discrete action spaces.

<a name="mlagents_envs.envs.unity_gym_env.ActionFlattener.__init__"></a>
#### \_\_init\_\_

```python
 | __init__(branched_action_space)
```

Initialize the flattener.

**Arguments**:

- `branched_action_space`: A List containing the sizes of each branch of the action space, e.g. [2,3,3] for three branches with sizes 2, 3, and 3 respectively.

<a name="mlagents_envs.envs.unity_gym_env.ActionFlattener.lookup_action"></a>
#### lookup\_action

```python
 | lookup_action(action)
```

Convert a scalar discrete action into a unique set of branched actions.

**Arguments**:

- `action`: A scalar value representing one of the discrete actions.

**Returns**:

The List containing the branched actions.