Launching Dialogue and Music/Effects Separation on Live

AudioShake
March 21, 2024

A new separation type is now available on AudioShake Live: dialogue and music/effects (DME). For the first time, Live users can apply AudioShake’s DME model to isolate dialogue while retaining the original music and effects of a track or video.

The rollout of these new separations also marks the launch of brand-new DME models across our API and web platforms. The new models bring heightened clarity to each stem and a cleaner overall separation between the sounds of foreground dialogue and background music and sound effects.

“These new models create even cleaner, more robust dialogue tracks with a near-complete absence of music, crowd noise, and other sound effects.” – Cheng-i Wang, Research Engineer at AudioShake

Winner of NAB’s PILOT Innovation Challenge, AudioShake’s dialogue, music, and effects separation has already been used on projects such as dubbing Doctor Who into German. Through our API, it has also been integrated into localization, captioning, and content-creation workflows with partners including cielo24, Dubverse, OOONA, Papercup, and Yella Umbrella.
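
For teams wiring this kind of DME separation into a localization or captioning pipeline, the flow is typically: upload the mixed audio, request dialogue and music/effects stems, then poll until the stems are ready. The sketch below is purely illustrative; the base URL, endpoint paths, parameter names, and stem labels are hypothetical placeholders, not AudioShake’s actual API, which is documented for partners separately.

```python
import time
import requests

# Hypothetical values for illustration only; the real service defines its own
# base URL, authentication scheme, and stem names.
API_BASE = "https://api.example-separation-service.com/v1"
API_TOKEN = "YOUR_API_TOKEN"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}


def request_dme_separation(audio_path: str) -> dict:
    """Upload a mixed track and request dialogue + music/effects (M&E) stems."""
    # 1. Upload the source audio.
    with open(audio_path, "rb") as f:
        upload = requests.post(f"{API_BASE}/assets", headers=HEADERS,
                               files={"file": f})
    upload.raise_for_status()
    asset_id = upload.json()["id"]

    # 2. Create a separation job asking for the two DME stems.
    job = requests.post(f"{API_BASE}/jobs", headers=HEADERS,
                        json={"assetId": asset_id,
                              "stems": ["dialogue", "music_and_effects"]})
    job.raise_for_status()
    job_id = job.json()["id"]

    # 3. Poll until the job finishes, then return the job record
    #    (which would include download links for the stems).
    while True:
        status = requests.get(f"{API_BASE}/jobs/{job_id}",
                              headers=HEADERS).json()
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(5)


if __name__ == "__main__":
    result = request_dme_separation("episode_mix.wav")
    print(result)
```

In a dubbing or captioning workflow, the dialogue stem would feed transcription or replacement, while the music/effects stem is kept intact and remixed under the new dialogue.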

“I am certain the AudioShake cleanup tool will help our users who frequently have to deal with noisy audio. We aim to provide our customers with the option to use any tool that facilitates their production work. AudioShake is the latest in the series of API integrations we are investing in to ensure the OOONA ecosystem truly has it all.” – Wayne Garb, OOONA co-founder and CEO

AudioShake will be exhibiting at NAB 2024 and demoing its newest DME technology and its use across the film, TV, and content industries. Visit us in the South and West Halls.