FaceSync - Audio Lip Sync & Facial Animation
Frostember Studios
$30.79
$43.99
30% OFF
(no ratings)
Audio-driven Lip Sync & Facial Animation! Features real-time & zero-CPU offline baking, native Timeline support, procedural eye/head motion, & 1-click ARKit blendshapes. VR Ready & Mobile Optimized.

Welcome to Frostember FaceSync, the ultimate audio-driven facial animation ecosystem for Unity. Whether you are building AAA cinematic cutscenes, highly optimized VR games, or live VTubing avatars, FaceSync delivers accurate lip sync and procedural facial movement with zero hassle.

Stop wasting hours manually animating blendshapes or fighting heavy real-time CPU loads. FaceSync bridges the gap between high-end realism and tight optimization.

DOCUMENTATION + ROADMAP | FORUM | YOUTUBE

🔥 Key Features

- Zero-CPU Offline Baking: Real-time audio analysis kills frame rates on mobile and VR. The built-in Baker simulates the audio at 60 FPS in the Editor, baking visemes, procedural head noise, and blinks into a highly optimized FaceSyncAnimationAsset. Play it back at runtime with zero CPU cost!
- Native Timeline Integration: Drag and drop your baked data onto the custom FaceSync Track. Smoothly blend animations, scrub through time, and dynamically override eye and head behaviors directly from the Timeline Inspector.
- Procedural Eyes & Head Motion: Bring dead stares to life. FaceSync automatically calculates realistic blinking, saccadic eye darts, and micro-jitters. The head and neck procedurally react to the character's voice intensity using Perlin noise.
- 1-Click ARKit & CC3/CC4 Setup: Stop manually assigning dozens of blendshapes. The VisemeMappingProfile offers 1-click auto-generation for standard ARKit (52 blendshapes) and Character Creator rigs, creating accurate mouth shapes instantly.
- Live Microphone (VTuber Ready): Switch to Live Mode and use the included Frostember_MicrophoneInput component to drive your character's face in real time from any connected microphone. Perfect for multiplayer voice chat and VTubers.
- Setup Wizard: Get your character fully rigged and talking in under 30 seconds with the automated Setup Wizard.

💻 API & Scripting

Triggering dialogue dynamically from code is easy with the FaceSyncPlayer component: assign the baked asset and call .Play(). Perfect for custom RPG dialogue and quest systems.

Questions? contact@frostemberstudios.com

Core Architecture & Features:

- Audio Processing: Uses the Oculus LipSync engine for accurate real-time FFT phoneme detection (15 core visemes).
- Offline Baking System: A custom Editor window (FaceSyncBaker) bakes the audio analysis and procedural animation at 60 FPS into a lightweight ScriptableObject (FaceSyncAnimationAsset).
- Zero-CPU Playback: The FaceSyncPlayer component reads baked assets, eliminating the runtime CPU cost of audio analysis — ideal for mobile and VR.
- Native Timeline Integration: A custom Playables API implementation (FaceSyncTrack, FaceSyncClip, FaceSyncTrackMixer) supports smooth blending, Editor scrubbing, and dynamic state overrides.
- Procedural Animation: Mathematical real-time generation of saccadic eye darts, micro-jitters, and voice-energy-driven Perlin noise for head and neck rotation.
- Auto-Mapping Profiles: VisemeMappingProfile includes algorithms to auto-generate mappings for the ARKit (52 blendshapes) and Character Creator 3/4 standards.
- Live Input: The included Frostember_MicrophoneInput script captures microphone audio for real-time processing (VTubing/multiplayer).
- Performance & LOD: Built-in distance-based Level of Detail (LOD) and frustum culling automatically disable processing when the mesh is off-screen or far away.
- GC Optimized: Runtime playback loops and Timeline mixers use array-based iteration to avoid garbage-collection allocation spikes.
- C# API: Clean, accessible API for triggering dialogue dynamically from code (.Play(), .Stop()).

Dependencies & Requirements:

- Requires the free Oculus LipSync Unity Integration package (Meta SDK) => included in the package.
- Compatible with the Built-in, URP, and HDRP render pipelines.
- Works with any Humanoid or Generic rig that uses SkinnedMeshRenderer blendshapes.
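The API described above ("assign the baked asset and call .Play()") can be sketched as a minimal MonoBehaviour. FaceSyncPlayer, FaceSyncAnimationAsset, .Play(), and .Stop() are named in the listing; the field name used to assign the asset is an assumption for illustration, not the package's documented API.

```csharp
using UnityEngine;

// Sketch only: wiring a baked FaceSync asset to the player from code.
// FaceSyncPlayer / FaceSyncAnimationAsset come from the FaceSync package;
// the `animationAsset` field name below is hypothetical.
public class DialogueTrigger : MonoBehaviour
{
    [SerializeField] private FaceSyncPlayer player;          // playback component on the character
    [SerializeField] private FaceSyncAnimationAsset line01;  // a baked dialogue line

    public void SayLine()
    {
        player.animationAsset = line01; // hypothetical field name
        player.Play();                  // zero-CPU playback of the baked data
    }

    public void Interrupt()
    {
        player.Stop();
    }
}
```

Hooking SayLine() up to a dialogue or quest system event is then a one-liner per spoken line.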

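The procedural head motion the listing describes (voice-energy-driven Perlin noise) can be approximated with stock Unity calls. This is a generic sketch of the technique, not FaceSync's internal implementation; the energy input and tuning values are assumptions.

```csharp
using UnityEngine;

// Generic sketch of voice-energy-driven head sway using Perlin noise,
// in the spirit of the technique the listing describes.
public class HeadNoiseSketch : MonoBehaviour
{
    public Transform head;
    [Range(0f, 1f)] public float voiceEnergy; // feed this from your audio analysis
    public float maxAngle = 4f;               // degrees of sway at full energy
    public float frequency = 0.6f;            // how fast the noise drifts

    void LateUpdate()
    {
        float t = Time.time * frequency;
        // Two decorrelated Perlin samples, remapped from [0,1] to [-1,1].
        float pitch = Mathf.PerlinNoise(t, 0f) * 2f - 1f;
        float yaw   = Mathf.PerlinNoise(0f, t) * 2f - 1f;
        // Scale the sway by how loudly the character is speaking.
        float amp = maxAngle * voiceEnergy;
        head.localRotation = Quaternion.Euler(pitch * amp, yaw * amp, 0f);
    }
}
```

Perlin noise is the usual choice here because it drifts smoothly instead of jittering like white noise, which is why silent characters stay still and loud lines produce visible head movement.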

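Live Mode depends on microphone capture, and Unity's built-in Microphone API is enough to sketch the kind of input a component like Frostember_MicrophoneInput feeds into real-time viseme analysis. Everything below is generic Unity code under that assumption, not the package's actual script.

```csharp
using UnityEngine;

// Generic sketch of live microphone capture with Unity's built-in
// Microphone API. A lip-sync engine would run FFT phoneme analysis on
// this buffer; here we only compute a rough loudness level.
public class MicCaptureSketch : MonoBehaviour
{
    private AudioClip micClip;
    private string device;

    void Start()
    {
        if (Microphone.devices.Length == 0) { enabled = false; return; }
        device = Microphone.devices[0];
        // Looping 1-second buffer at 16 kHz is plenty for speech analysis.
        micClip = Microphone.Start(device, true, 1, 16000);
    }

    // RMS loudness over the most recent samples.
    public float CurrentLevel()
    {
        const int window = 256;
        int pos = Microphone.GetPosition(device) - window;
        if (pos < 0) return 0f;
        float[] samples = new float[window];
        micClip.GetData(samples, pos);
        float sum = 0f;
        for (int i = 0; i < window; i++) sum += samples[i] * samples[i];
        return Mathf.Sqrt(sum / window);
    }

    void OnDestroy() { Microphone.End(device); }
}
```

A level value like this is also what the sketch above uses as its voiceEnergy-style input when no baked data is available.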


