
Use Oculus Lipsync for Unreal

Updated: Jul 1, 2020
End-of-Life Notice for Oculus Spatializer Plugin
The Oculus Spatializer Plugin has been replaced by the Meta XR Audio SDK and is now in end-of-life stage. It will not receive any further support beyond v47. We strongly discourage its use. Please navigate to the Meta XR Audio SDK documentation for your specific engine:
- Meta XR Audio SDK for Unity Native
- Meta XR Audio SDK for FMOD and Unity
- Meta XR Audio SDK for Wwise and Unity
- Meta XR Audio SDK for Unreal Native
- Meta XR Audio SDK for FMOD and Unreal
- Meta XR Audio SDK for Wwise and Unreal
This documentation is no longer being updated and is subject to removal.
This guide describes how to use the Oculus Lipsync plugin in your own Unreal projects. You may find it helpful to use the demo project as a reference. Before following this guide, complete the download and setup steps to add Lipsync to your Unreal project.

Using the OVRLipSync Actor Component

To use Lipsync in live mode:
  • The OVRLipSync Actor component must be added to each Actor that has morph targets you want to control. Select the Actor you want to use to drive lip animation, choose Add Component, and add the OVRLipSync Actor component. The following image shows an example.
  • The OVRLipSync Actor component provides the following options:
    • Provider Kind specifies which type of lipsync provider to use. Available options are:
      • Original
      • Enhanced
      • Enhanced with Laughter
    • Sample Rate specifies the sample rate of the input audio stream.
    • Enable Hardware Acceleration specifies whether DSP acceleration should be used on supported platforms. The following image shows an example.
  • In the actor or level Blueprint, read the visemes and update the appropriate morph targets in the On Visemes Ready event.
  • Start live capture by calling the Start function of the component.
When a prediction is ready, the OVRLipSync Actor component will trigger the On Visemes Ready event.
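What the On Visemes Ready handler does is up to you; a common pattern is to map each viseme probability onto a morph target weight. As a minimal, engine-independent sketch (the function name here is hypothetical, and in a real project the probabilities would come from the component's GetVisemes function), this picks the strongest viseme in the current frame:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch only: given the viseme probabilities for the current audio frame
// (as returned by a call such as GetVisemes), find the index of the
// strongest viseme. A handler could give that viseme's morph target the
// largest weight, or simply assign every probability as a morph target
// weight directly.
std::size_t StrongestVisemeIndex(const std::vector<float>& Probabilities)
{
    const auto Strongest =
        std::max_element(Probabilities.begin(), Probabilities.end());
    return static_cast<std::size_t>(
        std::distance(Probabilities.begin(), Strongest));
}
```

In a Blueprint-only project, the Assign Visemes To Morph Targets function performs the equivalent probability-to-weight mapping for you.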

Driving Your Actor Lip Animations with Lipsync

The OVRLipSync Actor component also defines the following Blueprint functions to drive your Actor's lip animations:
| Function/Method | Result |
| --- | --- |
| GetVisemes | Returns the current array of viseme probabilities. |
| GetVisemeNames | Returns the default list of viseme names. |
| GetLaughterScore | Returns the laughter probability of the current audio frame (non-zero only when the component is configured to use the Enhanced with Laughter provider). |
| FeedAudio | Feeds audio data (as a packed mono 16-bit signed integer audio stream at the specified sample rate) into the Oculus Lipsync engine. |
| Assign Visemes To Morph Targets | Takes an array of Morph Target names and a Skeletal Mesh component, and assigns the current viseme weights to those targets. |
| Start | Starts live processing of an audio stream. |
| Stop | Stops live processing of the audio stream. |
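FeedAudio expects packed mono 16-bit signed integers, but audio captured or generated in Unreal is often 32-bit float PCM in the range [-1.0, 1.0]. The helper below (a sketch, not part of the plugin API) shows one way such a buffer could be converted before being handed to FeedAudio:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Sketch only: convert float PCM samples in [-1.0, 1.0] to the packed mono
// 16-bit signed integer format that FeedAudio expects. Out-of-range samples
// are clamped before scaling to the int16 range.
std::vector<int16_t> FloatToInt16Mono(const std::vector<float>& FloatSamples)
{
    std::vector<int16_t> Packed;
    Packed.reserve(FloatSamples.size());
    for (float Sample : FloatSamples)
    {
        const float Clamped = std::clamp(Sample, -1.0f, 1.0f);
        Packed.push_back(static_cast<int16_t>(std::lround(Clamped * 32767.0f)));
    }
    return Packed;
}
```

Make sure the sample rate of the converted stream matches the Sample Rate configured on the OVRLipSync Actor component.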