From Startup Competition Winners to Keynote Feature, AudioShake Returns to AWS re:Invent

AudioShake
January 7, 2026

AWS CEO Matt Garman opened re:Invent 2025 with a spotlight on AudioShake. In the first five minutes of his keynote, in front of 50,000+ developers, architects, and technical leaders, he highlighted how audio source separation solves complex problems in media production, AI data prep and training, enterprise communications, and accessibility.

For the AudioShake team, the keynote feature was an exciting validation of a journey that started last year when we won AWS’s Startup Competition. We returned this year to demonstrate what we've built since: more music and film stems, music detection and removal, multi-speaker separation, and a cross-platform SDK that delivers real-time audio separation on any device. From our beginnings a few years ago as a small research team to our most recent appearance on the AWS stage, it was gratifying to hear the CEO of AWS confirm what our customers already know: audio separation has evolved from a post-production tool into core infrastructure.

After the keynote, our CEO, Jessica Powell, joined AWS on Air to demonstrate these capabilities live, including:

  • Multi-Speaker Separation: AudioShake detects, diarizes, and separates overlapping voices into clean individual tracks, even when speakers share similar frequencies and recording conditions are unknown. This enables call center quality analysis, automated meeting transcription, multi-host podcast editing, and broadcast production workflows where overlapping speech was previously unusable.
  • Music Removal: AudioShake removes copyrighted background music from mixed audio while preserving natural speech and effects. It's ideal for sports teams posting game highlights without copyright strikes, broadcasters extracting clean dialogue for remixing, and film studios localizing content for international markets. Isolating voices before speech recognition also improves transcription accuracy by 25%, particularly in noisy environments where traditional ASR fails.
  • Real-Time Audio Separation: AudioShake's SDK provides real-time audio separation with state-of-the-art model quality, running locally on iOS, macOS, Android, Windows, and Linux. Users get the same production-grade separation results that power workflows for tech companies, major music labels, and film studios, with consistent performance.
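AudioShake's production models are far more sophisticated than anything shown here, but the core idea behind spectral source separation can be illustrated with a toy sketch: transform a mixture to the frequency domain, apply a mask that keeps only the bins belonging to the target source, and transform back. The two sine "sources" and the fixed cutoff below are illustrative stand-ins, not anything from AudioShake's pipeline.

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t)   # stand-in for a low-frequency source
music = np.sin(2 * np.pi * 2000 * t)  # stand-in for a high-frequency source
mix = voice + music

# Frequency-domain mask: keep bins below a cutoff as the "voice" estimate.
# Real systems predict a learned, time-varying mask instead of a fixed cutoff.
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)
mask = freqs < 1000
voice_est = np.fft.irfft(spectrum * mask, n=len(mix))

# Because the toy sources occupy disjoint frequency bins, the estimate
# matches the original source almost exactly.
corr = np.corrcoef(voice, voice_est)[0, 1]
print(round(corr, 3))  # → 1.0
```

In real recordings, sources overlap in frequency and change over time, which is why a fixed cutoff fails and learned, time-varying masks (or waveform-domain models) are needed; that overlap is exactly the hard case the bullets above describe.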

For years, AWS's infrastructure has supported AudioShake’s production pipeline, from our earliest days as a three-person team to processing millions of audio files at scale through a sophisticated deployment pipeline for major media companies, music labels, film studios, and AI developers. On AWS on Air, we shared more technical detail about how we architected our audio separation pipeline to deliver consistent quality across music stem separation, vocal isolation, and speech separation at production scale.

Ready to work with AudioShake? Our API offers pay-as-you-go pricing with free trial access. Developers building applications that need audio source separation, vocal removal, or multi-speaker isolation can start testing today.
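For a rough sense of what an integration can look like, the sketch below assembles a separation request. Everything in it is an illustrative placeholder: the endpoint URL, header, and field names are assumptions, not AudioShake's actual API; consult the official API documentation for the real interface.

```python
import json

# NOTE: placeholder values only -- not AudioShake's real API surface.
API_URL = "https://api.example.com/v1/separate"  # hypothetical endpoint

def build_separation_request(audio_url, targets):
    """Assemble a hypothetical JSON request for an audio-separation job."""
    return {
        "url": API_URL,
        "headers": {"Authorization": "Bearer <YOUR_API_KEY>"},  # placeholder
        "body": json.dumps({
            "audio_url": audio_url,  # source file to process
            "targets": targets,      # stems to extract (assumed names)
        }),
    }

req = build_separation_request("https://example.com/clip.wav",
                               ["dialogue", "music"])
print(req["body"])
```

A real client would POST this body to the documented endpoint and poll or receive a callback when the separated stems are ready; the request-then-retrieve shape is typical of asynchronous media-processing APIs.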

For enterprise implementations—whether you're processing audio for AI training, building media production tools, or analyzing communications at scale—contact our team to discuss your specific requirements.