fix: use exact numbers
This commit is contained in: parent 1b401f80e6, commit ab6801a7bd
@@ -3,10 +3,10 @@ Playing Flappy Bird using Evolution Strategies
 
 After reading [Evolution Strategies as a Scalable Alternative to Reinforcement Learning](https://blog.openai.com/evolution-strategies/), I wanted to experiment with something using Evolution Strategies, and Flappy Bird has always been one of my favorites when it comes to game experiments. A simple yet challenging game.
 
-The model learns to play very well after ~3000 epochs, but not completely flawlessly, and it usually loses in difficult cases (a large difference between two wall entrances).
+The model learns to play very well after 3000 epochs, but not completely flawlessly, and it usually loses in difficult cases (a large difference between two wall entrances).
 
 The training process is pretty fast, as there is no backpropagation, and it is not very costly in terms of memory, as there is no need to record actions as in policy gradients.
 
-Here is a demonstration of the model after ~3000 epochs (~5 minutes on an Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz):
+Here is a demonstration of the model after 3000 epochs (~5 minutes on an Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz):
 
 ![after training](/demo/flappy-success.gif)
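The repository's actual training code is not shown in this diff. As a rough sketch of the backpropagation-free update the README refers to (following the OpenAI ES formulation; all function and parameter names here are mine, and the toy fitness function stands in for the real game score):

```python
import numpy as np

rng = np.random.default_rng(0)

def es_step(theta, fitness_fn, npop=50, sigma=0.1, alpha=0.01):
    """One Evolution Strategies update: sample Gaussian perturbations of the
    parameters, score each perturbed candidate, and move theta toward the
    noise directions that scored above average. No gradients of the model
    are computed, and no per-step actions need to be stored."""
    noise = rng.standard_normal((npop, theta.size))   # one perturbation per candidate
    rewards = np.array([fitness_fn(theta + sigma * eps) for eps in noise])
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize scores
    return theta + alpha / (npop * sigma) * noise.T @ adv

# Toy stand-in for the game's fitness: maximize -||theta - 3||^2 (optimum at 3).
theta = np.zeros(5)
for _ in range(300):
    theta = es_step(theta, lambda w: -np.sum((w - 3.0) ** 2))
```

Because each candidate only needs a scalar fitness (e.g. how long the bird survived), memory stays constant per epoch, which matches the README's contrast with policy gradients.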