
Precompute Visemes to Save CPU Processing with Unreal

Updated: Sep 14, 2023
End-of-Life Notice for Oculus Spatializer Plugin
The Oculus Spatializer Plugin has been replaced by the Meta XR Audio SDK and is now in its end-of-life stage. It will not receive any further support beyond v47, and we strongly discourage its use. Please see the Meta XR Audio SDK documentation for your specific engine:
- Meta XR Audio SDK for Unity Native
- Meta XR Audio SDK for FMOD and Unity
- Meta XR Audio SDK for Wwise and Unity
- Meta XR Audio SDK for Unreal Native
- Meta XR Audio SDK for FMOD and Unreal
- Meta XR Audio SDK for Wwise and Unreal
This documentation is no longer being updated and is subject to removal.
You can save significant processing power by precomputing the visemes for recorded audio instead of generating them in real time. This is particularly useful for lip-synced animations on non-playable characters, or in mobile apps, where less processing power is available.
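The trade-off can be sketched as follows (Python for illustration only; the actual analysis and playback are handled by the OVRLipSync plugin). Real-time lip sync must analyze every audio buffer during playback, while a precomputed sequence reduces each tick to a cheap table lookup. The frame rate and viseme names below are illustrative assumptions, not the plugin's API.

```python
# A precomputed sequence is conceptually a list of viseme weight
# vectors sampled at a fixed rate (100 frames/sec is an assumption
# for illustration; the real rate is defined by the plugin).
FRAME_RATE = 100.0

def sample_sequence(frames, playback_time):
    """Cheap per-tick lookup: map the playback time to a stored frame.

    This replaces the expensive real-time audio analysis with simple
    constant-time indexing into precomputed data.
    """
    index = int(playback_time * FRAME_RATE)
    index = max(0, min(index, len(frames) - 1))  # clamp to sequence bounds
    return frames[index]

# Toy sequence: each frame maps viseme name -> blend weight.
frames = [{"sil": 1.0}, {"aa": 0.8, "sil": 0.2}, {"aa": 1.0}]
print(sample_sequence(frames, 0.015))  # -> {'aa': 0.8, 'sil': 0.2}
```

Because the per-frame analysis cost is paid once at import time, playback cost no longer scales with the complexity of the audio analysis.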

Generate the Visemes

To generate a LipSync sequence:
  • Import an audio file into your Unreal project
  • Right-click the audio file and choose Generate LipSyncSequence
The following image shows an example:
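Conceptually, Generate LipSyncSequence walks the imported audio offline and stores one viseme weight vector per analysis window. The following is a hedged Python sketch of that offline pass; the `analyze_window` stub merely stands in for the plugin's internal viseme classifier, which is not public, and the window rate is an assumption.

```python
def analyze_window(samples):
    """Placeholder for the plugin's viseme classifier: maps one window
    of audio samples to viseme weights. Purely illustrative -- it
    returns 'sil' for quiet windows and 'aa' otherwise, based on peak
    amplitude."""
    peak = max((abs(s) for s in samples), default=0.0)
    return {"sil": 1.0} if peak < 0.1 else {"aa": 1.0}

def generate_sequence(audio, sample_rate, frames_per_sec=100):
    """Offline pass: produce one viseme frame per fixed-size audio
    window. This cost is paid once at import time, not per playback."""
    window = max(1, sample_rate // frames_per_sec)
    return [analyze_window(audio[i:i + window])
            for i in range(0, len(audio), window)]

# Toy 16 kHz clip: 10 ms of silence followed by 10 ms of loud signal.
audio = [0.0] * 160 + [0.5] * 160
sequence = generate_sequence(audio, sample_rate=16000)
print(len(sequence), sequence[0], sequence[-1])
# -> 2 {'sil': 1.0} {'aa': 1.0}
```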

Apply the Visemes

  • Add an OVRLipSyncPlaybackActor component to your scene. The OVRLipSyncPlaybackActor works the same as an OVRLipSync Actor component, but reads the visemes from a precomputed sequence asset instead of generating them in real time.
  • Set the sequence of the OVRLipSyncPlaybackActor to the previously generated LipSync sequence asset. The following image shows an example:
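At playback time, the effect of the steps above is that each tick reads the current frame from the precomputed sequence and drives the character's viseme morph targets with its weights. A minimal sketch of that loop, with the morph-target callback and frame rate as hypothetical stand-ins for the plugin's internals:

```python
FRAME_RATE = 100.0  # illustrative frames/sec for the stored sequence

def apply_frame(frames, playback_time, set_morph_target):
    """Per-tick playback: index the precomputed frame for the current
    time and push each viseme weight to the matching morph target."""
    index = max(0, min(int(playback_time * FRAME_RATE), len(frames) - 1))
    for viseme, weight in frames[index].items():
        set_morph_target(viseme, weight)  # e.g. a skeletal-mesh curve

# Usage with a recording stub standing in for a real mesh:
applied = {}
frames = [{"sil": 1.0}, {"aa": 0.7, "ih": 0.3}]
apply_frame(frames, 0.012, lambda name, w: applied.update({name: w}))
print(applied)  # -> {'aa': 0.7, 'ih': 0.3}
```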