Overview
This project uses a genetic algorithm to evolve neural-network agents that play the classic game Snake. Instead of traditional supervised or reinforcement learning, the agents improve through simulated natural selection: over successive generations they develop strategies that increase their survival time and maximize food collection, ultimately mastering the game.
Features
- Evolutionary Learning: Agents evolve through natural selection, with genetic operations like crossover and mutation helping create new generations of players.
- Dynamic Scoring: Agents are evaluated based on their performance, with better performers being selected for breeding the next generation.
- Visualization: Real-time visual representation of agents as they play, showcasing their increasing competency over time.
- Autonomous Training: The system automatically trains the agents over multiple generations, improving their gameplay without human intervention.
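The scoring idea above can be sketched as a small fitness function. The specific weights (1 point per step, 100 per food item) and the function name are illustrative assumptions, not values taken from the project:

```python
def fitness(steps_survived: int, food_eaten: int) -> float:
    """Score one agent's run: reward staying alive, but weight food
    collection much more heavily so agents don't just stall.

    The relative weights here are illustrative assumptions; in practice
    they would be tuned against observed agent behaviour.
    """
    return steps_survived + 100 * food_eaten


# An agent that eats more scores higher even if it survived fewer steps:
assert fitness(200, 5) > fitness(250, 3)
```

A common refinement is to penalize looping behaviour (e.g., capping the steps allowed between meals) so survival time alone cannot dominate the score.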
Implementation
- Algorithm: The core of the project is a genetic algorithm that guides the evolution of agents. Each agent is represented by a neural network, and their performance is based on how well they play Snake, measured by survival time and food collection.
- Neural Networks: The agents' brains are simple feed-forward neural networks that take the game state (e.g., Snake position, food location) as input and output the agent's next move.
- Genetic Operations: The top-performing agents are selected based on fitness (performance) and used to create offspring through crossover and mutation, introducing new genetic material into the population.
- Visualization: Built using Python libraries such as Pygame for real-time visualization, showing agents in action and providing visual feedback on their improvement.
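A minimal sketch of the pieces described above: a tiny feed-forward network plus the crossover and mutation operators. The layer sizes, flat weight layout, and function names are illustrative assumptions rather than the project's actual architecture:

```python
import math
import random


def forward(weights, inputs):
    """One hidden layer with tanh activations; returns a score for each
    of the 4 moves (up/down/left/right). The flat weight layout and the
    hidden size of 8 are illustrative assumptions."""
    n_in, n_hid, n_out = len(inputs), 8, 4
    w1 = weights[: n_in * n_hid]          # input -> hidden weights
    w2 = weights[n_in * n_hid:]           # hidden -> output weights
    hidden = [
        math.tanh(sum(inputs[i] * w1[i * n_hid + h] for i in range(n_in)))
        for h in range(n_hid)
    ]
    return [
        sum(hidden[h] * w2[h * n_out + o] for h in range(n_hid))
        for o in range(n_out)
    ]


def crossover(a, b):
    """Uniform crossover: each weight is copied from either parent at random."""
    return [random.choice(pair) for pair in zip(a, b)]


def mutate(genome, rate=0.05, scale=0.5):
    """With probability `rate`, perturb each weight with Gaussian noise,
    introducing the new genetic material the text describes."""
    return [
        w + random.gauss(0.0, scale) if random.random() < rate else w
        for w in genome
    ]
```

At play time, the agent's move would be the index of the highest output score (an argmax over the four directions), with the game state encoded as the input vector.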
Challenges & Solutions
- Exploration vs. Exploitation: A key challenge in evolutionary algorithms is balancing exploration (introducing genetic diversity) with exploitation (focusing on the best-performing agents). This was addressed by carefully tuning the selection and mutation rates.
- Fitness Evaluation: Designing a fair way to score agents was tricky, since performance varies widely from one agent to another. The scoring system was designed to reward agents both for surviving longer and for eating more food.
- Performance Scaling: As the population grew and generations increased, the system had to be optimized to ensure training remained efficient and scalable.
What I Learned
This project deepened my understanding of genetic algorithms, neural network design, and evolutionary computation. It was a great hands-on experience in applying machine learning techniques to game AI, and it helped me appreciate the power of gradient-free, evolution-based training in solving complex problems. It also provided valuable insight into algorithm design and how different learning approaches can be applied to real-world challenges.
GitHub
View the source code here.