
Introducing HANCE 26: Our Latest Speech Processing Models

Written by Joote Michal Hika | Nov 12, 2025 3:03:39 PM

HANCE 26: The Audio Algorithm Designed for Real-World Complexity

Live audio is situational and imperfect. It comes with background noise, room reverb, overlapping sounds, and unpredictable conditions that controlled environments don't capture.

Our latest and most capable models yet, HANCE 26, are built to handle that complexity. Here's what that means for your audio:

  • Removes more noise and reverb, with even fewer artifacts.
  • Improves speech intelligibility, even in challenging conditions.
  • Handles tonal languages and non-English phonetics with improved accuracy.

Ready to test it on your audio? Get started here.

What changed from HANCE 3.0

To build models that work across real-world conditions, you need scale. HANCE 26 uses significantly more training data than its predecessor, allowing the models to generalize across more audio situations than ever before.

HANCE 26 includes:

  • Rebuilt neural network architecture
  • Expanded training data
  • Refined signal processing

Rebuilt neural network architecture

We redesigned our neural network specifically for audio processing. The previous architecture struggled with sharp transitions and sustained sounds: long vowels, drawn-out musical notes, continuous background tones. It would often process these inconsistently or introduce artifacts. The new architecture handles transitions and extended audio events smoothly and predictably.

Expanded training data

Models only generalize as well as their training data allows. HANCE 26 is trained on significantly more high-quality data than HANCE 3.0, with increased voice variation and augmentation applied across thousands of additional hours of diverse audio. This includes different microphones, acoustic environments, recording conditions, and sound sources.
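Augmentation of this kind is commonly implemented by mixing clean speech with noise at controlled signal-to-noise ratios, so one recording can yield many training conditions. A minimal sketch of that general idea, not HANCE's actual pipeline; `mix_at_snr` is a hypothetical helper:

```python
import math
import random

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the mixture hits a target SNR in dB,
    a common step when synthesizing enhancement training pairs."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Noise power needed for the requested SNR, then the matching gain.
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(target_p_noise / p_noise)
    return [s + gain * n for s, n in zip(speech, noise)]

# One second of a 220 Hz tone standing in for speech, plus white noise.
random.seed(0)
sr = 16000
speech = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
noise = [random.uniform(-1.0, 1.0) for _ in range(sr)]
noisy = mix_at_snr(speech, noise, snr_db=5.0)
```

Sweeping `snr_db` (and swapping in different noise recordings and room responses) is what turns a fixed corpus into the kind of condition diversity described above.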

Refined signal processing

We went back to fundamentals and rebuilt how we handle high-frequency content. The previous approach occasionally introduced artifacts in the upper frequency range. The new processing significantly reduces those artifacts without compromising the rest of the signal.

Try it on your audio

The best way to know if it fits your needs is to test it on your actual audio. Get instructions on how to download our plugin, try our API, or embed our SDK directly into your codebase here.
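Whichever route you pick, integrations like these typically push audio through the engine in fixed-size blocks. A hypothetical sketch of that pattern follows; `enhance_block` is a trivial noise gate standing in for a real model call, not HANCE's algorithm:

```python
import math

BLOCK_SIZE = 512  # frames handed to the engine per call

def enhance_block(block, threshold=0.01):
    """Placeholder enhancer: mute samples below an amplitude threshold.
    A real SDK call would replace this entire function."""
    return [0.0 if abs(s) < threshold else s for s in block]

def process_stream(samples, block_size=BLOCK_SIZE):
    """Feed audio through the enhancer one block at a time,
    mirroring how a real-time plugin or SDK integration is driven."""
    out = []
    for start in range(0, len(samples), block_size):
        out.extend(enhance_block(samples[start:start + block_size]))
    return out

# One second of a 440 Hz tone at 16 kHz as stand-in input audio.
sr = 16000
audio = [0.5 * math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
enhanced = process_stream(audio)
```

The block-based loop is what keeps latency bounded in live use: each call processes a fixed, small slice rather than the whole file.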