Tags : Browse Projects

Select a tag to browse associated projects and drill deeper into the tag cloud.

reinforcementlearning

  Analyzed about 1 year ago

Reinforcement Learning with Genetic Programming in Java: a controller for a Snake Wars agent is evolved. In the auto-run demo, the snake in the top left plays randomly while the one in the bottom right is evolved; the evolved snake quickly learns to outperform the random player.
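The project's Java sources aren't reproduced on this page, but the technique it names, evolving a game controller by genetic programming, can be sketched in miniature. Everything below (the sensor names, the fitness task, the operator set) is illustrative and not taken from the actual project:

```python
import random

random.seed(0)  # deterministic toy run

# Function and terminal sets for the expression trees.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}
TERMINALS = ['dist_to_food', 'dist_to_wall', 1.0]

def random_tree(depth=3):
    """Grow a random expression tree over the sensors."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, sensors):
    """Interpret a tree against a dict of sensor readings."""
    if isinstance(tree, str):
        return sensors[tree]
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, sensors), evaluate(right, sensors))

def mutate(tree):
    """Replace a random subtree with a freshly grown one."""
    if random.random() < 0.2:
        return random_tree(depth=2)
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    return (op, mutate(left), mutate(right))

# Fixed evaluation cases so fitness is deterministic.
SAMPLES = [{'dist_to_food': random.random(),
            'dist_to_wall': random.random()} for _ in range(20)]

def fitness(tree):
    # Toy stand-in for a game score: reward controllers whose output
    # tracks (dist_to_wall - dist_to_food), i.e. "stay away from
    # walls, steer toward food". Closer to zero is better.
    return -sum(abs(evaluate(tree, s) - (s['dist_to_wall'] - s['dist_to_food']))
                for s in SAMPLES)

# Evolve: keep the 10 fittest, refill with mutated copies of survivors.
pop = [random_tree() for _ in range(50)]
for generation in range(30):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
best = max(pop, key=fitness)
```

The generate-evaluate-select loop is the same shape whatever the genome; a real game controller would replace `fitness` with actual match playouts.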

7.15K lines of code

0 current contributors

almost 9 years since last commit

0 users on Open Hub

Activity Not Available

opennero

  Analyzed about 1 year ago

NOTE: as with any active project, OpenNERO is a work in progress and many updates are frequently being made. If you have trouble with OpenNERO, check the discussion group and then consider submitting an issue. We also welcome any helpful comments or suggestions on this site or the platform at opennero-questions. And of course, if you would like to contribute, let us know!

About OpenNERO

OpenNERO is an open source software platform designed for research and education in Artificial Intelligence. The project is based on the Neuro-Evolving Robotic Operatives (NERO) game developed by graduate and undergraduate students at the Neural Networks Research Group and Department of Computer Science at the University of Texas at Austin. In particular, OpenNERO has been used to implement several demos and exercises for Russell and Norvig's textbook Artificial Intelligence: A Modern Approach. These demos and exercises illustrate AI methods such as brute-force search, heuristic search, scripting, reinforcement learning, and evolutionary computation, and AI problems such as maze running, vacuuming, and robotic battle. The methods and problems are implemented in three different environments (or "mods"), as described below. More environments, problems, and methods, as well as demos and exercises illustrating them, will be added in the future. The current ones are intended to serve as a starting point on which new ones can be built, by us but also by the community at large. If you'd like to contribute something you have built, you can reach the project managers at opennero-questions.

Getting started

You can download a binary build of OpenNERO for your platform, or check out the current source and build it yourself. You can then try out the various demos, work on the exercises, or explore the various mods on your own.

Building from source

Demos: Depth First Search, `A*` Search, Q-Learning, Sarsa(λ), Neuroevolution, Co-evolution, Human-generated solutions

Exercises: RunningOpenNeroExercise, AddingStuffExercise, CreateRoombaAgentExercise, MazeGeneratorExercise, MazeSolverExercise, MazeLearnerExercise

Running: The Maze mod, The Roomba mod, The NERO mod

Components

OpenNERO is built using open source components, including: the Irrlicht 3D Engine (released under the Irrlicht Engine License), the Boost C++ libraries (governed by the Boost Software License), the Python scripting language (governed by the Python License), and the rtNEAT algorithm, created by Ken Stanley and Risto Miikkulainen at UT Austin.

Contributors

There are many people who have contributed and continue to contribute to OpenNERO. Here is a list in progress: Igor V. Karpov, John B. Sheblak, Adam Dziuk, Minh Pham, Dan Lessin, members of the Neural Networks Research Group at UT Austin, and students and alumni of the Freshman Research Initiative Computational Intelligence and Game Design stream.
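Among the demos listed above, Q-Learning is the classic tabular method. A minimal generic sketch on a toy corridor world (illustrative only; OpenNERO agents are written against the platform's own Python agent interface) looks like this:

```python
import random
from collections import defaultdict

random.seed(0)

# Tabular Q-learning on a toy 5-state corridor: start at state 0,
# reward 1.0 for reaching state 4, episode ends there.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
ACTIONS = (-1, +1)  # left, right

def step(state, action):
    nxt = min(max(state + action, 0), 4)
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

Q = defaultdict(float)

def greedy(state):
    # Break ties randomly so the untrained agent still explores.
    return max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))

for episode in range(200):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state action.
        best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy from each interior state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(4)}
```

Sarsa(λ), also listed among the demos, differs in that it bootstraps from the action actually taken and spreads each update backward over eligibility traces.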

327K lines of code

0 current contributors

over 2 years since last commit

0 users on Open Hub

Activity Not Available

ia2013-tpi-rl

  Analyzed about 2 months ago

Implementation of reinforcement learning techniques using the Q-Learning algorithm, for the Artificial Intelligence course (2013 term) at UTN Facultad Regional Resistencia.

41.8K lines of code

1 current contributor

11 months since last commit

0 users on Open Hub

Activity Not Available