My first try at making a genetic algorithm; I will use it as the base for other projects.
I recommend using the black-and-white filter in the accessibility settings, since the values are shown by the darkness of the colors: they go from darker to lighter as the values get larger.
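(For the curious, one way a value in 0..1 could be mapped onto the palette so that larger values come out lighter is sketched below. The grayscale ramp is an assumption for illustration, not necessarily the cart's exact colors.)

 -- hypothetical grayscale ramp: small values dark, large values light
 grays={0,5,6,7} -- black, dark gray, light gray, white
 function val_to_col(v)
  return grays[mid(1,flr(v*#grays)+1,#grays)]
 end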

Cart #hahizufini-1 | 2023-03-14 | License: CC4-BY-NC-SA


I'm ... not sure what's going on here, @drawnator.

I recognize the little dinosaur game you get when you cannot connect to the internet. Yet - is it learning, or what?


@dw817 this is an implementation of a genetic algorithm: a type of artificial intelligence that learns by simulating how things evolve in the real world. It starts with random connections in its "brain", and the ones that perform better have a better chance of being part of the next generation.

Each generation is made of copies of the previous generation with some small mutations. The idea is that by automatically cloning the best ones, it will eventually learn how to play.
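A generation step along those lines could look roughly like the sketch below. The selection scheme, mutation rate and field names are assumptions for illustration, not the cart's actual code.

 -- hypothetical generation step: clone the better-scoring brains
 -- with small random mutations (the rates here are made up)
 function next_gen(pop) -- pop: list of {weights={...},score=n}
  -- sort best score first (simple insertion sort)
  for i=2,#pop do
   local j=i
   while j>1 and pop[j].score>pop[j-1].score do
    pop[j],pop[j-1]=pop[j-1],pop[j]
    j-=1
   end
  end
  -- parents come from the top quarter of the population
  local elite=max(1,flr(#pop/4))
  local new_pop={}
  for i=1,#pop do
   local parent=pop[flr(rnd(elite))+1]
   local child={weights={},score=0}
   for k=1,#parent.weights do
    local w=parent.weights[k]
    if rnd(1)<0.1 then w+=rnd(0.4)-0.2 end -- occasional nudge
    child.weights[k]=w
   end
   add(new_pop,child)
  end
  return new_pop
 end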

Each of those clusters of points connected by colorful lines is a brain, and the number under it is its score. Each dinosaur is the same color as its score and its brain.

The left balls in each brain represent game variables: the top one gets darker as the closest cactus approaches the dinosaur, the second one does the same for the second-closest cactus, and the last one is white when the dino is in the air and black when it is on the ground.

The right ball is the output: if it is higher than 0.5, the dinosaur jumps.

The middle ones are the hidden layers of each brain; the neural network changes them in order to try to get higher scores.
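Put together, one decision of such a brain could look roughly like the feed-forward pass below. The field names (wh, bh, wo, bo) and the squashing function are assumptions; in particular PICO-8 has no exp(), so a softsign stands in for a true sigmoid here.

 -- hypothetical brain with 3 inputs, one hidden layer, 1 output
 -- inputs: closeness of nearest cactus, of second cactus, on-ground flag
 function think(brain,c1,c2,on_ground)
  local inputs={c1,c2,on_ground and 1 or 0}
  local hidden={}
  for h=1,#brain.wh do    -- brain.wh[h]: weights into hidden node h
   local sum=brain.bh[h]  -- brain.bh[h]: its bias
   for i=1,#inputs do
    sum+=brain.wh[h][i]*inputs[i]
   end
   hidden[h]=squash(sum)
  end
  local out=brain.bo      -- single output node
  for h=1,#hidden do
   out+=brain.wo[h]*hidden[h]
  end
  return squash(out)      -- jump when this is above 0.5
 end

 function squash(x)
  -- softsign mapped into 0..1
  return 0.5+0.5*x/(1+abs(x))
 end

Each frame the dino would then do something like if think(brain,c1,c2,on_ground)>0.5 then jump() end, where jump() is whatever the cart uses to start a jump.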


This is super cool, but I did encounter something that makes me think there might be something wrong with the inheritance - one of the networks got a really good run, up to 24, and was jumping as the cacti got close enough. But it didn't seem to carry on that line; the next generation went back to jumping continuously.

Other than that run, they were all either doing nothing or jumping continuously, which isn't unexpected. But it didn't seem to actually learn from the best run.

Edit: Just ran it again and it almost immediately figured out to tie jumping to how close the cacti are, and it definitely kept thinking along those lines - soon they were all jumping at the right time, and got a run into the hundreds. Maybe a mutation stunted it last time? Either way, it's working for me now. Great job :)



Aww this is cute for some reason

Best score: 148 at Gen ~65


wow that's so cool


