http://gcaaa31928.github.io/FlappyBirdLearning/#
Flappy Bird Learning#
This project uses machine learning, specifically Q-learning, to automatically learn how to play Flappy Bird.
Partially based on http://sarvagyavaish.github.io/FlappyBirdRL
Game Framework#
The Flappy Bird game itself is built with Phaser.js.
(Referenced from http://www.lessmilk.com/tutorial/flappy-bird-phaser-1)
Q Learning#
The key lies in the Q-learning update formula.
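The formula appears as an image in the original post; presumably it is the standard Q-learning update, which the referenced write-up also uses (α is the learning rate, γ the discount factor, r the reward, s' the next state):

```latex
Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]
```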
I ran into some difficulties when first training with it.
At first only two state dimensions were used, so QState was a two-dimensional space. With only these two, the bird near a low obstacle had no way of knowing its distance from the ground or the sky, and it often flew out of bounds.
So I added a third state dimension: the distance to the sky (the top of the screen).
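A minimal sketch of that enlarged state (not the project's actual code; the field names and the bucket size are assumptions used for illustration):

```javascript
// Discretize the bird's situation into a Q-table key.
// dx, dy: distance to the next pipe gap; dTop: distance to the "sky".
function makeState(bird, pipe, bucket = 10) {
  const dx = Math.floor((pipe.x - bird.x) / bucket);    // horizontal distance to pipe
  const dy = Math.floor((pipe.gapY - bird.y) / bucket); // vertical offset from the gap
  const dTop = Math.floor(bird.y / bucket);             // distance to the top boundary
  return `${dx},${dy},${dTop}`;                         // string key into the Q table
}
```

Bucketing the raw pixel distances keeps the Q table small enough to converge in a reasonable number of games.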
But this led to another problem. When passing through the pipes at normal speed, the behavior should, in theory, look like this: at the position of the red dot, training gradually makes the Q value for not pressing higher than the Q value for pressing.
But in this scenario, because the bird is descending rapidly, the Q values instead train toward a point where pressing is mandatory to avoid hitting the pipes.
Because the same state demands different actions depending on how fast the bird is falling, the Q values cannot converge to the correct position and instead converge elsewhere. So another state dimension must be added: the vertical velocity.
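With velocity in the state, one Q-learning backup per frame can be sketched like this (a hypothetical implementation, not the project's exact code; the constants and key format are assumptions):

```javascript
// Q table keyed on (dx, dy, velocity); two actions: 0 = do nothing, 1 = flap.
const Q = {};
const alpha = 0.7; // learning rate (assumed value)
const gamma = 0.9; // discount factor (assumed value)

function key(dx, dy, velocity) {
  return `${dx},${dy},${velocity}`;
}

function getQ(s) {
  return Q[s] || (Q[s] = [0, 0]); // unseen states start at [0, 0]
}

// One Q-learning backup for a (state, action, reward, nextState) transition.
function update(s, a, reward, sNext) {
  const q = getQ(s);
  const maxNext = Math.max(...getQ(sNext));
  q[a] += alpha * (reward + gamma * maxNext - q[a]);
}

// Greedy action choice once the table has been trained.
function act(s) {
  const q = getQ(s);
  return q[1] > q[0] ? 1 : 0;
}
```

Because velocity is part of the key, a fast-falling bird and a slow-falling bird at the same position are distinct states, so the two conflicting behaviors described above no longer fight over the same Q values.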
With that, the implementation is basically complete.