Meta Tested the Leading Audio Separation Models. AudioShake Came Out on Top.

Long recognized as the industry leader in stem separation (also known as 'source separation'), AudioShake has again topped the leaderboard, this time in Meta's analysis of targeted source separation models.
As part of its SAM Audio evaluation, Meta evaluated top models from AudioShake alongside offerings from companies like Moises/Music.AI, FADR, Lalal.ai, ElevenLabs, and Auphonic, as well as models such as Demucs, Spleeter, Tiger, Mossformer3, and Fast GeGo.
Across the listening tests reported by Meta, AudioShake was the highest-performing discriminative model on perceptual quality. This is the most recent third-party evaluation of AudioShake models, following our win in the Sony Demixing Challenge and our state-of-the-art benchmarks in both source separation and lyric transcription and alignment.
The Meta evaluation focused on perceptual tests, meaning how an output sounds to human listeners. AudioShake already holds state-of-the-art numeric benchmarks for many of its models, so this perceptual evaluation underscores that AudioShake's models don't just score higher than others in the field; they also sound better.
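For readers curious what a "numeric benchmark" looks like in this field: objective separation quality is commonly scored with signal metrics such as SI-SDR, which measures how closely a separated stem matches the ground-truth stem, independent of overall gain. Below is a minimal sketch of that metric, using NumPy and synthetic signals for illustration; this is a generic example of the metric family, not Meta's evaluation code or AudioShake's internal tooling.

```python
import numpy as np

def si_sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Scale-Invariant Signal-to-Distortion Ratio in dB.

    Projects the estimate onto the reference so the score ignores
    overall gain, then compares target energy to residual energy.
    Higher is better.
    """
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Optimal scaling of the reference to best explain the estimate
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    distortion = estimate - target
    return 10 * np.log10(np.sum(target**2) / np.sum(distortion**2))

# Illustrative usage: score a hypothetical separated stem against ground truth
rng = np.random.default_rng(0)
truth = rng.standard_normal(44100)                     # 1 second of "ground-truth" stem
separated = truth + 0.1 * rng.standard_normal(44100)   # imperfect separation output
print(f"SI-SDR: {si_sdr(truth, separated):.1f} dB")
```

Metrics like this are useful for leaderboards, but they don't always track what listeners hear, which is why Meta's perceptual listening tests are a meaningful complement to the numbers.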
For the world's largest film studios, music labels, sports leagues, and technology companies that rely on AudioShake for mission-critical workflows, these independent findings confirm the technical superiority that drives their choice of our platform.
For a technical deep dive, you can read more about the methodology, Meta's SAM Audio, and different approaches to stem separation here.
We thank the Meta team for including us in this analysis and look forward to continuing to contribute to research on the best ways to separate the world's sound.