
Playing Flappy Bird using Evolution Strategies

After reading Evolution Strategies as a Scalable Alternative to Reinforcement Learning, I wanted to experiment with Evolution Strategies myself, and Flappy Bird has always been one of my favorites when it comes to game experiments: a simple yet challenging game.

The model learns to play very well after 3000 epochs, though not completely flawlessly: it still occasionally loses in difficult cases (a large height difference between two consecutive wall openings). The training process is fast since there is no backpropagation, and it is cheap in terms of memory since there is no need to record actions as in policy gradient methods.
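Roughly, each training epoch perturbs the current parameters with Gaussian noise, evaluates every perturbed copy on the game, and moves the parameters toward the perturbations that scored well. Here is a minimal sketch of such an update step (not the exact code in es.py; get_reward, n_pop, sigma, and alpha are placeholder names):

import numpy as np

def es_step(params, get_reward, n_pop=50, sigma=0.1, alpha=0.01):
    # Sample one Gaussian noise vector per individual in the population
    noise = np.random.randn(n_pop, params.size)
    # Evaluate each perturbed parameter vector on the game
    rewards = np.array([get_reward(params + sigma * n) for n in noise])
    # Normalize rewards so the size of the update does not depend on their scale
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Estimate the gradient of expected reward and take a step
    grad = noise.T @ rewards / (n_pop * sigma)
    return params + alpha * grad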

Here is a demonstration of the model after 3000 epochs (~5 minutes on an Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz):

after training

Before training:

Before training

For each frame the bird stays alive, it receives +0.1 reward. For each wall it passes, it receives +10 reward.
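As a small illustration, the reward for one episode could be accumulated like this (game and policy are hypothetical stand-ins for the repo's game loop and the evolved network):

def episode_reward(game, policy):
    total = 0.0
    while not game.is_over():
        game.step(policy.act(game.observe()))  # advance one frame
        total += 0.1                           # +0.1 for every frame survived
        if game.just_passed_wall():
            total += 10.0                      # +10 for each wall passed
    return total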

The chart below shows the rewards of individuals and the mean reward over time (the y-axis is logarithmic): reward chart

Try it yourself

You need Python 3.5 and pip to install and run the code.

First, install dependencies (you might want to create a virtualenv):

pip install -r requirements

The pretrained parameters are in a file named load.npy and will be loaded when you run train.py or demo.py.

train.py will train the model, saving the parameters to saves/<TIMESTAMP>/save-<ITERATION>.

demo.py shows the game in a GTK window so you can see how the AI actually plays (like the GIF above).

play.py lets you play the game yourself: press space to jump; once you lose, press enter to play again. 😁

Pro tip: reach a score of 100 and you will become THUG FOR LIFE 🚬

Notes

It seems that training past a certain point reduces performance; learning rate decay might help with that. My interpretation is that after finding a local maximum of the accumulated reward and receiving consistently high rewards, the updates become quite large and pull the parameters too far in either direction, so the model enters a state of oscillation.
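For example, a simple exponential decay of the step size (continuing the hypothetical es_step sketch above) could look like this:

alpha0, decay = 0.01, 0.999
for epoch in range(3000):
    alpha = alpha0 * decay ** epoch        # shrink the update size over time
    params = es_step(params, get_reward, alpha=alpha)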

To see this yourself, there is a long.npy file: rename it to load.npy (back up load.npy before doing so) and run demo.py; you will see the bird failing more often than not. long.npy was trained for only 100 more epochs than load.npy.
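For example, on a Unix-like shell:

cp load.npy load.npy.bak
cp long.npy load.npy
python demo.py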

Web Server

To set up the web server, after installing the dependencies with pip install -r requirements, follow the steps below:

cd web
npm install
node server.js

You may want to use pm2 to run the server as a daemon in the background.

Now you can redirect traffic to the port the server is listening on (8088 by default).