Fixed epsilon decay in dqn example #117
Merged
What does this PR do?
This fixes issue Lightning-AI/pytorch-lightning#10883, where the epsilon decay for the DQN's epsilon-greedy policy was incorrect. The original code takes a single `train_step` with `self.hparams.eps_start` and then immediately switches to `self.hparams.eps_end`. The intended behavior is to decrease epsilon linearly from `self.hparams.eps_start` to `self.hparams.eps_end` over the first `self.hparams.eps_last_frame` steps. I wrote a small function `get_epsilon` which fixes this logic and returns the correct epsilon.
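For reference, a minimal sketch of the linear decay, assuming the function takes the hyperparameters and the current global step explicitly (the name `get_epsilon` comes from this PR; the exact signature and call site are assumptions):

```python
def get_epsilon(start: float, end: float, last_frame: int, step: int) -> float:
    """Linearly anneal epsilon from `start` to `end` over the first `last_frame` steps."""
    if step >= last_frame:
        return end
    return start + (end - start) * (step / last_frame)

# Presumed call site in the training step:
# epsilon = get_epsilon(self.hparams.eps_start, self.hparams.eps_end,
#                       self.hparams.eps_last_frame, self.global_step)
```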
I have also made a few minor changes on other lines, because the code would not run on my local machine without them. Specifically, the type hint on the `__iter__` method of the `RLDataset` class was `Tuple`, and should be `Iterator[Tuple]`, because the method returns a generator of tuples representing (state, action, reward, done, new_state). Additionally, on line 264 (formerly 276), I got an error that the index for the `gather()` function must be of type `int64`, so I cast the `actions` tensor to type `long`. Finally, I added logging of `self.episode_reward` and `epsilon`, so that I could see that (a) the model still learned successfully, and (b) my changes to epsilon worked as intended.
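A rough sketch of those minor changes combined; the stand-in network, the tensor shapes, and the empty `__iter__` body are illustrative assumptions, not the example's actual code:

```python
from typing import Iterator, Tuple

import torch
from torch import nn
from torch.utils.data import IterableDataset


class RLDataset(IterableDataset):
    def __iter__(self) -> Iterator[Tuple]:  # was `-> Tuple`, but this is a generator of tuples
        # yields (state, action, reward, done, new_state) tuples from the replay buffer
        yield from ()


# torch.gather requires int64 indices, hence the cast of `actions` to long:
net = nn.Linear(4, 2)                                    # stand-in Q-network
states = torch.randn(8, 4)                               # batch of 8 states
actions = torch.randint(0, 2, (8,), dtype=torch.int32)   # indices that arrive as int32
q_values = net(states).gather(1, actions.long().unsqueeze(-1)).squeeze(-1)

# Logging added in training_step (inside the LightningModule):
#   self.log("episode_reward", self.episode_reward)
#   self.log("epsilon", epsilon)
```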
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Sure did!