Voice SDK’s dictation feature enables your app to efficiently transcribe speech to text in real time. It functions similarly to voice commands except that it does not process the resultant text with natural-language understanding (NLU).
The dictation feature is not designed for voice commands; it is intended as a text input modality. While you could parse the transcribed text with regular expressions to handle commands, the text is formatted for human readability. As a result, you'll get much better recognition results by using voice commands with Voice SDK's AppVoiceExperience actor.
Getting Started
To use dictation in your app, add an AppVoiceExperience actor to your map that connects to Wit.ai or platform services. This is similar to adding a voice command, except that you also need a WitDictationExperience actor.
Adding Dictation to your Map
In Wit.ai, create an app to use for dictation. This can be the same app you used for voice commands or a dedicated app for dictation. You do not need to train any utterances for this app.
Go to Blueprint > New Empty Blueprint Class... > All Classes > Search for AppVoiceExperience > Select AppVoiceExperience > use BP_AppVoiceExperience for the name.
Drag BP_AppVoiceExperience onto the map.
In World Outliner, select BP_AppVoiceExperience and go to Details > Voice > Configuration. Set it to the Wit configuration file you created earlier.
Go to Blueprint > New Empty Blueprint Class... > All Classes > Search for WitDictationExperience > Select WitDictationExperience > use BP_DictationExperience for the name.
Drag BP_DictationExperience onto the map.
Add event handling to BP_DictationExperience as needed.
Starting Dictation
To start dictation, call the ActivateDictation method from BP_DictationExperience.
Stopping Dictation
To stop dictation, call the DeactivateDictation method from BP_DictationExperience.