【Publisher Full-Catalog Sale, Round 91】June 28, 0:00 to Thursday, July 4, 23:59
50% off Opsive assets
↓↓↓ This week's free asset giveaway 🎁
『Omni Animation - Core Locomotion Pack』 $16.50 => FREE (free until July 4, 23:59)
Coupon code: OPSIVE2024
【Humble Bundle】 Other software bundles here
『THE UNREAL ENGINE AND UNITY MEGA BUNDLE』 ⏰️ Until Tuesday, July 9, 15:00 NEW!!
Unreal Engine and Unity: high-quality 3D model bundle, 38 items for $30
『LOW POLY GAME DEV BUNDLE』 ⏰️ Until Tuesday, July 2, 13:00
『AUDIO ARCADE: THE DEFINITIVE COLLECTION OF MUSIC AND SOUND FX FROM OVANI SOUND』 ⏰️ Until Tuesday, July 16, 3:00
Simple Q is a free modular framework that uses the Q-learning algorithm to let you build and train AI agents entirely inside Unity, with no additional assets required.

Need a simple AI/ML framework that uses pure Unity and C#, without downloading other assets or learning Python? If you answered yes, then Simple Q is the framework for you. Start a brain, make a choice, update the reward: three easy functions, all called from one script.

Fully documented, with all code accompanied by comments explaining its functionality, and three examples included. With Simple Q you not only build, train, and deploy working AI agents into your games, but also learn the basics of AI and machine learning, giving you fundamentals to build upon and create amazing things.

Included in the framework:
- Q-learning algorithm + Q-table
- Experience replay buffer
- Exploration vs. exploitation (two methods available)
- Dynamic decay (two common methods, plus a new example method)
- Shared Data component (or individual experience without it)
- Persistent data
- Prioritized learning (with three working examples to choose from)
- Replay-buffer removal policy (with four working examples to choose from)
- Attribute pairs (for better experience sampling)
- Little Q (a very basic Q-learning setup, great for learning the basics)
- Three agents/environments: Virtual Battle Bot, 2D Navigator, and Dodge Bot (3D)

Simple Q's reinforcement learning framework integrates techniques such as prioritized learning and experience replay to accelerate training and improve learning efficiency.

Trained data is saved under a simple string set on the component (in the Inspector), which lets multiple agents (or NPCs) use the same learned data by reading from the same saved path. This enables continuous experimentation and iteration, so developers can freely create trained AI agents for games and apps.

Notes:
- A single robot model is used in the "Dodge Bot" example. It is credited to Quaternius and is under a CC0 license; see the Third-Party Notices.txt file in the package for details.
- To get the most out of Simple Q, it is advised that you read the documentation included with the asset.
- Use the namespace "QLearning" to access the classes when building.
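The description boils the workflow down to "start a brain, make a choice, update the reward." Simple Q's actual C# API is not shown on the store page, so as a language-neutral illustration only, here is a minimal sketch of the tabular Q-learning loop with epsilon-greedy exploration and dynamic epsilon decay that the feature list refers to. The corridor environment and every name below are invented for this sketch and are not part of the asset:

```python
import random

# Tiny 1-D corridor: states 0..4, goal at state 4, actions 0=left, 1=right.
# Reward +1 on reaching the goal, 0 otherwise; an episode ends at the goal.
N_STATES, GOAL = 5, 4
ACTIONS = (0, 1)

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.1, gamma=0.9,
          epsilon=1.0, decay=0.99, min_epsilon=0.05, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # the Q-table: q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Exploration vs. exploitation: epsilon-greedy action choice.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = q[state].index(max(q[state]))
            nxt, reward, done = step(state, action)
            # Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
        # Dynamic decay: shrink epsilon each episode toward pure exploitation.
        epsilon = max(min_epsilon, epsilon * decay)
    return q

q = train()
# The greedy policy read off the trained table: best action per non-goal state.
policy = [row.index(max(row)) for row in q[:GOAL]]
print(policy)
```

After enough episodes the greedy policy should choose "right" (action 1) in every non-goal state; Simple Q layers experience replay, prioritized learning, and persistence on top of this same core update.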