Simple Q is a free, modular framework that uses the Q-Learning algorithm to let you build and train AI agents inside Unity, with no additional assets required.

Need a simple AI/ML framework that uses pure Unity and C#, without downloading other assets or learning Python? If you answered yes, then Simple Q is the framework for you. Start a brain, make a choice, update the reward: all in three easy functions, called from one script. Fully documented, with all code accompanied by comments explaining its functionality, and three examples included.

With Simple Q you not only build, train, and deploy working AI agents into your games, but also learn how the basics of AI and machine learning work, allowing you to build on the fundamentals and create amazing things.

Included in the framework:
- Q-Learning algorithm + QTable
- Experience replay buffer
- Exploration vs. exploitation (two methods available)
- Dynamic decay (two common methods, and a new example method)
- Shared Data component (or individual experience without it)
- Persistent data
- Prioritized learning (with three working examples to choose from)
- Replay buffer removal policy (with four working examples to choose from)
- Attribute pairs (for better experience sampling)
- Little Q (a very basic setup for Q-Learning - great for learning the basics)
- Three agents/environments: Virtual Battle Bot, 2D Navigator, and Dodge Bot (3D)

Simple Q's reinforcement learning framework integrates techniques such as prioritized learning and experience replay to speed up training and improve learning efficiency (generic sketches of these ideas appear after the notes below).

Trained data is saved under a simple string set on the component (in the Inspector), which allows multiple agents (or NPCs) to use the same learned data by accessing the same saved path. This enables continuous experimentation and iteration, letting developers freely create trained AI agents for games and apps.

Notes:
- A single robot model is used in the "Dodge Bot" example. It is credited to Quaternius and released under a CC0 license; see the Third-Party Notices.txt file in the package for details.
- To get the most out of Simple Q, it is advised that you read the documentation included with the asset.
- Use the namespace "QLearning" to access the classes when building!
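
For readers new to Q-Learning, below is a minimal, self-contained C# sketch of the general technique the framework is built around: a Q-table updated with the standard Q-Learning rule, epsilon-greedy exploration vs. exploitation, and a simple exponential epsilon decay. The class and method names here are hypothetical illustrations, not Simple Q's API; see the included documentation for the framework's actual three-function workflow.

```csharp
// Generic tabular Q-Learning sketch (hypothetical names, not Simple Q's API):
// epsilon-greedy action selection with exponential epsilon decay, and the
// standard update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
using System;
using System.Collections.Generic;

public class TinyQTable
{
    private readonly Dictionary<(int state, int action), float> q = new();
    private readonly int actionCount;
    private readonly Random rng = new();

    public float Alpha = 0.1f;        // learning rate
    public float Gamma = 0.95f;       // discount factor
    public float Epsilon = 1.0f;      // exploration rate
    public float EpsilonDecay = 0.995f;
    public float EpsilonMin = 0.05f;

    public TinyQTable(int actionCount) => this.actionCount = actionCount;

    private float Q(int s, int a) => q.TryGetValue((s, a), out var v) ? v : 0f;

    // Epsilon-greedy: explore with probability Epsilon, otherwise exploit
    // the action with the highest current Q-value.
    public int ChooseAction(int state)
    {
        if (rng.NextDouble() < Epsilon)
            return rng.Next(actionCount);

        int best = 0;
        for (int a = 1; a < actionCount; a++)
            if (Q(state, a) > Q(state, best)) best = a;
        return best;
    }

    // One Q-Learning update for the observed transition, then decay epsilon.
    public void UpdateReward(int state, int action, float reward, int nextState)
    {
        float maxNext = 0f;
        for (int a = 0; a < actionCount; a++)
            maxNext = Math.Max(maxNext, Q(nextState, a));

        q[(state, action)] = Q(state, action)
            + Alpha * (reward + Gamma * maxNext - Q(state, action));

        Epsilon = Math.Max(EpsilonMin, Epsilon * EpsilonDecay);
    }
}
```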
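
Similarly, the sketch below illustrates the general idea behind an experience replay buffer with prioritized sampling and a removal policy. Again, this is a generic, hypothetical illustration of the concepts, not Simple Q's implementation; the framework's own prioritized learning and removal-policy variants are covered in the included documentation.

```csharp
// Generic experience-replay sketch (hypothetical names, not Simple Q's API):
// experiences are stored with a priority, sampled in proportion to that
// priority, and the removal policy here evicts the lowest-priority entry
// when the buffer is full.
using System;
using System.Collections.Generic;
using System.Linq;

public record Experience(int State, int Action, float Reward, int NextState, float Priority);

public class TinyReplayBuffer
{
    private readonly List<Experience> buffer = new();
    private readonly int capacity;
    private readonly Random rng = new();

    public TinyReplayBuffer(int capacity) => this.capacity = capacity;

    // Removal policy: when full, drop the lowest-priority experience first.
    public void Add(Experience e)
    {
        if (buffer.Count >= capacity)
        {
            int lowest = 0;
            for (int i = 1; i < buffer.Count; i++)
                if (buffer[i].Priority < buffer[lowest].Priority) lowest = i;
            buffer.RemoveAt(lowest);
        }
        buffer.Add(e);
    }

    // Prioritized sampling: pick an experience with probability
    // proportional to its priority.
    public Experience Sample()
    {
        if (buffer.Count == 0)
            throw new InvalidOperationException("Buffer is empty.");

        float total = buffer.Sum(e => e.Priority);
        float pick = (float)(rng.NextDouble() * total);
        foreach (var e in buffer)
        {
            pick -= e.Priority;
            if (pick <= 0f) return e;
        }
        return buffer[^1]; // floating-point fallback
    }
}
```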