Watson Translator Speech Sandbox
IBM
$0.0
(no ratings)
| Date | Price ($) |
| --- | --- |
| 11/23(2018) | 0.0 |
| 11/20(2024) | 0.0 |
This is a version of the Speech Sandbox that uses Watson Language Translator to translate from any language (Japanese in this example) to English. Spoken input is converted to text by Watson Speech to Text, translated to English, and then passed to Watson Assistant to extract intents and entities, enabling AI/voice interaction in your AR/VR/XR environment.
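At a high level, speech is recognized first, translated second, and interpreted last. Below is a minimal, self-contained sketch of that order only; every class and method name is a hypothetical placeholder, not the asset's actual API (the real logic lives in Scripts/SpeechSandboxStreaming.cs).
```csharp
using System;

// Sketch of the processing order only; all names are placeholders.
public class SpeechPipelineSketch
{
    // 1. Watson Speech to Text returns a transcript in the spoken language
    //    (Japanese in the default configuration).
    public void OnRecognized(string transcript)
    {
        Translate(transcript, OnTranslated);
    }

    // 2. Watson Language Translator converts the transcript to English
    //    (using a translation model such as "ja-en").
    private void Translate(string text, Action<string> onResult)
    {
        // Placeholder for the actual Language Translator call.
        onResult(text);
    }

    // 3. Watson Assistant receives the English text and extracts the intents
    //    and entities that drive the sandbox's voice commands.
    private void OnTranslated(string englishText)
    {
        Console.WriteLine($"Send to Watson Assistant: {englishText}");
    }
}
```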
To use a different language, change Scripts/SpeechSandboxStreaming.cs line #51 to a different translation model:
private string _translationModel = "ja-en";
and change line #142:
_speechToText.RecognizeModel = "es-ES_BroadbandModel";
see: Language Translation models
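The two settings must agree on the source language: the Speech to Text model recognizes the spoken language, and the translation model converts that text to English. For example, to switch the sandbox to Spanish input, the fragment below mirrors the two edits above. It is a hedged illustration; "es-en" and "es-ES_BroadbandModel" are standard IBM Cloud model IDs, but verify them against the current Watson documentation.
```csharp
// Line #51 - translation models are named "<source>-<target>",
// so "es-en" translates Spanish to English.
private string _translationModel = "es-en";

// Line #142 - the Speech to Text model must match the spoken (source) language.
_speechToText.RecognizeModel = "es-ES_BroadbandModel";
```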
A sample application that demonstrates how to integrate voice commands and speech recognition into a virtual reality experience.
To use:
1. Unzip the Models.zip archive
2. Install Blender
3. Load each of the 3 scenes in sequence by selecting:
_Scenes->MainGame->MainMenu
_Scenes->MainGame->Tutorial
_Scenes->MainGame->Sandbox
and add them to the build via File->BuildSettings->Add Open Scenes
(If you are building for the Oculus Rift, add Oculus under File->BuildSettings->PlayerSettings->Virtual Reality SDKs)
4. Add your IBM Watson Language Translator, Speech to Text, and Watson Assistant (Conversation) credentials, as well as the Assistant Workspace ID, to the MainMenu scene: click the Save Credentials object and fill in the Speech to Text and Watson Assistant service URLs, the Assistant Workspace ID and Version Date, and either a username and password or an IAM API key (the sketch after this list shows roughly which fields these map to).
5. Drag and drop the SaveCredentials object from _Scenes->MainGame->MainMenu onto the Prefabs->SaveCredentials object.
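The credentials collected in step 4 map roughly to a structure like the following. This is illustrative only, assuming field names that are not the asset's actual serialized names; the real fields are defined by the SaveCredentials script.
```csharp
// Illustrative sketch of the values the Save Credentials object collects.
// Field names are assumptions; the asset's SaveCredentials script defines the real ones.
[System.Serializable]
public class WatsonCredentialsSketch
{
    public string speechToTextServiceUrl;       // Speech to Text service URL
    public string assistantServiceUrl;          // Watson Assistant service URL
    public string languageTranslatorServiceUrl; // Language Translator service URL (assumed field)
    public string assistantWorkspaceId;         // Watson Assistant Workspace ID
    public string assistantVersionDate;         // version date, e.g. "2018-09-20" (YYYY-MM-DD)

    // Provide either a username/password pair or an IAM API key, not both.
    public string username;
    public string password;
    public string iamApikey;
}
```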
For detailed instructions see:
VR Speech Sandbox Vive
or
VR Speech Sandbox Rift
or view in a Markdown editor:
SupportingFiles/Readme.md