HANCE 26: The Audio Algorithm Designed for Real-World Complexity
Live audio is situational and imperfect. It comes with background noise, room reverb, overlapping sounds, and unpredictable conditions that controlled environments don't capture.
Our most capable models yet, HANCE 26, are built to handle that complexity. Here's what that means for your audio:
Ready to test it on your audio? Get started here.
To build models that work across real-world conditions, you need scale. HANCE 26 uses significantly more training data than its predecessor, allowing the models to generalize across more audio situations than ever before.
HANCE 26 includes:
We redesigned our neural network specifically for audio processing. The previous architecture struggled with sharp transitions and sustained sounds: long vowels, drawn-out musical notes, continuous background tones. These were often processed inconsistently or could pick up artifacts. The new architecture handles both transitions and extended audio events smoothly and predictably.
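To make those scenarios concrete, here is a minimal, generic Python sketch, not HANCE code, of the kind of test material involved: a long steady tone standing in for sustained sounds, a single click standing in for a sharp transition, and a block-wise RMS measure for judging how consistently any enhancement handles them. The sample rate and the my_enhance function named in the final comment are placeholders.

import numpy as np

SR = 48_000  # assumed sample rate in Hz

def sustained_tone(freq=440.0, seconds=5.0, sr=SR):
    # A long, steady sine: a stand-in for drawn-out vowels or held notes.
    t = np.arange(int(seconds * sr)) / sr
    return 0.5 * np.sin(2 * np.pi * freq * t)

def sharp_transient(seconds=1.0, sr=SR):
    # Silence with a single click in the middle: a stand-in for an abrupt transition.
    x = np.zeros(int(seconds * sr))
    x[len(x) // 2] = 1.0
    return x

def block_rms(signal, block=4096):
    # RMS per block; on a steady input, a flat curve suggests consistent processing.
    n = len(signal) // block
    return np.sqrt(np.mean(signal[: n * block].reshape(n, block) ** 2, axis=1))

# Example with a placeholder enhancement function:
# levels = block_rms(my_enhance(sustained_tone()))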
Models only generalize as well as their training data allows. HANCE 26 is trained on significantly more high-quality data than HANCE 3.0, with greater voice variation and augmentation applied across thousands of additional hours of diverse audio: different microphones, acoustic environments, recording conditions, and sound sources.
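To give a rough idea of what that kind of augmentation can look like in practice, here is a small, generic NumPy/SciPy sketch, not our actual training pipeline: white noise mixed in at a target SNR, a synthetic room impulse response for reverb, and a simple high-pass filter as a crude stand-in for microphone coloration. All parameter values are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve, butter, lfilter

rng = np.random.default_rng(0)
SR = 48_000  # assumed sample rate in Hz

def add_noise(clean, snr_db):
    # Mix in white noise at a target signal-to-noise ratio (dB).
    noise = rng.standard_normal(len(clean))
    gain = np.sqrt(np.mean(clean ** 2) / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
    return clean + gain * noise

def add_reverb(clean, rt60=0.4, sr=SR):
    # Convolve with a synthetic, exponentially decaying impulse response (~-60 dB decay).
    n = int(rt60 * sr)
    ir = rng.standard_normal(n) * np.exp(-6.9 * np.arange(n) / n)
    wet = fftconvolve(clean, ir)[: len(clean)]
    return wet / (np.max(np.abs(wet)) + 1e-9)

def mic_coloration(clean, cutoff_hz=150.0, sr=SR):
    # Crude microphone/channel coloration: a second-order high-pass roll-off.
    b, a = butter(2, cutoff_hz / (sr / 2), btype="highpass")
    return lfilter(b, a, clean)

# One randomized chain per training example, e.g.:
# augmented = mic_coloration(add_reverb(add_noise(clean, snr_db=rng.uniform(0, 30))))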
We went back to fundamentals and rebuilt how we handle high-frequency content. The previous approach occasionally introduced artifacts in the upper frequency range. The new processing significantly reduces those artifacts without compromising the rest of the signal.
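One simple way to check for this kind of artifact on your own material is to compare energy in the upper band before and after processing. The sketch below is a generic measurement idea, not part of the HANCE tooling; the band limits and FFT settings are assumptions.

import numpy as np
from scipy.signal import welch

SR = 48_000  # assumed sample rate in Hz

def high_band_energy_db(x, sr=SR, band_hz=(10_000, 20_000)):
    # Average power (dB) in an upper frequency band, estimated with Welch's method.
    freqs, psd = welch(x, fs=sr, nperseg=4096)
    mask = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    return 10 * np.log10(np.mean(psd[mask]) + 1e-20)

# A large rise in high-band energy on the processed output relative to the original
# input can point to the kind of upper-range artifacts described above:
# delta_db = high_band_energy_db(processed) - high_band_energy_db(original)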
The best way to know if it fits your needs is to test it on your actual audio. Get instructions on how to download our plugin, try our API, or embed our SDK directly into your codebase here.