diff --git a/README.md b/README.md
index a2d5e91..6e7b560 100644
--- a/README.md
+++ b/README.md
@@ -3,16 +3,16 @@ Playing Flappy Bird using Evolution Strategies
 
 After reading [Evolution Strategies as a Scalable Alternative to Reinforcement Learning](https://blog.openai.com/evolution-strategies/), I wanted to experiment with something using Evolution Strategies, and Flappy Bird has always been one of my favorites when it comes to game experiments. A simple yet challenging game.
 
-The model learns to play very well after ~1500 iterations, but not completely flawless and it usually loses in difficult cases (high difference between two wall entrances).
+The model learns to play very well after ~3000 iterations, though not completely flawlessly: it usually loses in difficult cases (a large height difference between two consecutive wall openings).
 The training process is pretty fast as there is no backpropagation, and it is not very costly in terms of memory as there is no need to record actions as in policy gradients.
 
-Here is a demonstration of the model after ~1500 iterations (less than an hour of training):
+Here is a demonstration of the model after ~3000 iterations (less than an hour of training):
 
 ![after training](/demo/flappy-success.gif)
 
 also see: [Before training](/demo/flappy-lose.gif)
 
-For each frame the bird stays alive, +1 score is given to him. For each wall he passes, +10 score is given.
+For each frame the bird stays alive, it receives +0.1 score. For each wall it passes, it receives +10 score.
 
 Try it yourself
 ---------------
@@ -37,6 +37,6 @@ _pro tip: reach 100 score and you will become THUG FOR LIFE :smoking:_
 
 Notes
 -----
-It seems training for too long reduces the performance after a while, learning rate decay might help with that.
+It seems training past a certain point reduces performance; learning rate decay might help with that.
 To try it yourself, there is a `long.npy` file, rename it to `load.npy` (backup `load.npy` before doing so) and run `demo.py`,
-you will see the bird failing more often than not. `long.py` was trained for ~2000 more iterations than `load.npy`.
+you will see the bird failing more often than not. `long.npy` was trained for 100 more iterations than `load.npy`.
diff --git a/game.py b/game.py
index 1cf565b..6686f47 100644
--- a/game.py
+++ b/game.py
@@ -94,7 +94,7 @@ class dotdict(dict):
 # too long to finish after a while of training
 MAX_FRAMES = 10000
 
 def play(fn, step=None):
-    game = Game(200, 200)
+    game = Game(250, 200)
     frame = 0
     # while showing to user, we want to update the GTK frontend
diff --git a/load.npy b/load.npy
index c9ac916..1aa4acb 100644
Binary files a/load.npy and b/load.npy differ
diff --git a/long.npy b/long.npy
index 3343557..375bd8c 100644
Binary files a/long.npy and b/long.npy differ
diff --git a/train.py b/train.py
index a686231..ce1d60b 100644
--- a/train.py
+++ b/train.py
@@ -47,7 +47,7 @@ for i in range(10000):
     print("{}: ".format(i), end='')
     es.train()
 
-    if i % SHOW_EVERY == 0:
+    if SHOW_EVERY and i % SHOW_EVERY == 0:
         play(es.forward, step=step)
         Gtk.main_quit()
         print(' shown')
diff --git a/win.py b/win.py
index 8e22e93..2778260 100644
--- a/win.py
+++ b/win.py
@@ -68,7 +68,7 @@ class Window(Gtk.Window):
         else:
             self.gameover.set_text('')
 
-        self.score.set_text(str(self.game.score))
+        self.score.set_text(str(round(self.game.score, 2)))
 
         self.game.update()
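
For reference, since this diff touches both the reward scale (`+0.1` per surviving frame) and the `es.train()` loop in `train.py`: below is a minimal sketch of the kind of vanilla Evolution Strategies update the README describes — evaluate a population of weight perturbations, then step along the fitness-weighted average of those perturbations, with no backpropagation and no recorded actions. The constants, the `es_step` name, and the toy `fitness_fn` here are illustrative assumptions, not the repo's actual `es` module.

```python
import numpy as np

# Hypothetical constants; the repo's es module may use different values.
POPULATION = 50   # number of perturbed candidates per iteration
SIGMA = 0.1       # stddev of parameter perturbations
ALPHA = 0.03      # learning rate (decaying this is the Notes section's idea)

def es_step(weights, fitness_fn):
    """One vanilla ES update: perturb, evaluate, and move the weights
    along the fitness-weighted average of the perturbations."""
    noise = np.random.randn(POPULATION, weights.size)
    rewards = np.array([fitness_fn(weights + SIGMA * n) for n in noise])
    # Standardize rewards so the step size is invariant to reward scale.
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return weights + ALPHA / (POPULATION * SIGMA) * noise.T @ rewards

# Toy usage: maximize -||w - 3||^2; w converges toward 3.
w = np.zeros(5)
for _ in range(200):
    w = es_step(w, lambda w_: -np.sum((w_ - 3.0) ** 2))
print(w)
```

In the actual project, `fitness_fn` would correspond to playing one game and returning the accumulated score (+0.1 per frame alive, +10 per wall passed); decaying `ALPHA` over iterations is the learning-rate-decay remedy the Notes section suggests for the regression seen in `long.npy`.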