From c6b91ee4f87b5041d45235c2749b99c304be0d7d Mon Sep 17 00:00:00 2001
From: Mahdi Dibaiee
Date: Tue, 4 Apr 2017 11:22:12 +0430
Subject: [PATCH] fix: not usually, rarely

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index b0edbe3..7d2c9ad 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@ Playing Flappy Bird using Evolution Strategies
 After reading [Evolution Strategies as a Scalable Alternative to Reinforcement Learning](https://blog.openai.com/evolution-strategies/), I wanted to experiment something using Evolution Strategies, and Flappy Bird has always been one of my favorites when it comes to Game experiments. A simple yet challenging game.
 
-The model learns to play very well after 3000 epochs, but not completely flawless and it usually loses in difficult cases (high difference between two wall entrances).
+The model learns to play very well after 3000 epochs, but not completely flawless and it rarely loses in difficult cases (high difference between two wall entrances).
 
 Training process is pretty fast as there is no backpropagation, and is not very costy in terms of memory as there is no need to record actions as in policy gradients.
 
 Here is a demonstration of the model after 3000 epochs (~5 minutes on an Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz):
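
The README text in this patch claims ES training needs no backpropagation and no recorded action history; for context, below is a minimal sketch of the vanilla ES update from the linked OpenAI post. This is not taken from this repository: the function names (evolution_strategies_step, play_episode) and the hyperparameter values are illustrative assumptions.

    import numpy as np

    def evolution_strategies_step(theta, fitness, npop=50, sigma=0.1, alpha=0.001):
        """One vanilla ES update: perturb the parameter vector with Gaussian
        noise, score each perturbed candidate, and move theta toward the
        fitness-weighted average of the noise directions."""
        noise = np.random.randn(npop, theta.size)  # one noise vector per candidate
        rewards = np.array([fitness(theta + sigma * eps) for eps in noise])
        # Normalize returns so the update scale is independent of raw scores
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Gradient estimate: fitness-weighted sum of perturbations
        return theta + alpha / (npop * sigma) * (noise.T @ rewards)

    # Hypothetical usage: play_episode would run one Flappy Bird game with the
    # given weights and return a scalar score, e.g. distance survived.
    # theta = np.zeros(num_weights)
    # for epoch in range(3000):
    #     theta = evolution_strategies_step(theta, play_episode)

This illustrates the memory claim in the patched paragraph: the environment only has to return a scalar score per episode, so no gradients flow and no per-step action trace is stored, unlike policy gradients.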