Artificial intelligence (AI) has transformed synthesised speech from the monotone of robocalls and decades-old GPS navigation systems to the polished tone of virtual assistants in smartphones and smart speakers.
But there is still a gap between AI-synthesised speech and the human speech people hear in daily conversation and in the media. That is because people speak with complex rhythm, intonation and timbre that are challenging for AI to emulate.
The gap is closing fast: NVIDIA researchers are building models and tools for high-quality, controllable speech synthesis that capture the richness of human speech, without audio artifacts. Their latest projects are now on display in sessions at the Interspeech 2021 conference, which runs through Sept. 3.
These models can help voice automated customer service lines for banks and retailers, bring video-game or book characters to life, and provide real-time speech synthesis for digital avatars.
NVIDIA’s in-house creative team even uses the technology to produce expressive narration for a video series on the power of AI.
Expressive speech synthesis is just one element of NVIDIA Research’s work in conversational AI—a field that also encompasses natural language processing, automated speech recognition, keyword detection, audio enhancement and more.
Optimised to run efficiently on NVIDIA GPUs, some of this cutting-edge work has been made open source through the NVIDIA NeMo toolkit, available on NVIDIA’s NGC hub of containers and other software.
Behind the Scenes of I AM AI
NVIDIA researchers and creative professionals do not just talk the conversational AI talk. They walk the walk, putting groundbreaking speech synthesis models to work in their I AM AI video series, which features global AI innovators reshaping just about every industry imaginable.
But until recently, these videos were narrated by a human. Previous speech synthesis models offered limited control over a synthesised voice’s pacing and pitch, so attempts at AI narration did not evoke the emotional response in viewers that a talented human speaker could.
That changed over the past year when NVIDIA’s text-to-speech research team developed more powerful, controllable speech synthesis models like RAD-TTS, used in their winning demo at the SIGGRAPH Real-Time Live competition. By training the text-to-speech model with audio of an individual’s speech, RAD-TTS can convert any text prompt into the speaker’s voice.
Another of its features is voice conversion, where one speaker’s words (or even singing) are delivered in another speaker’s voice. Inspired by the idea of the human voice as a musical instrument, the RAD-TTS interface gives users fine-grained, frame-level control over the synthesised voice’s pitch, duration and energy.
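To make "frame-level control over pitch, duration and energy" concrete, here is a minimal toy sketch in NumPy. It is not the RAD-TTS interface; the function names (`emphasise`, `stretch`) and the contour representation are illustrative assumptions, showing only the kind of per-frame edits such an interface exposes.

```python
import numpy as np

def emphasise(pitch, energy, start, end, pitch_scale=1.2, energy_scale=1.5):
    """Scale pitch and energy over frames [start, end) to stress a word."""
    pitch, energy = pitch.copy(), energy.copy()
    pitch[start:end] *= pitch_scale
    energy[start:end] *= energy_scale
    return pitch, energy

def stretch(durations, start, end, factor=1.5):
    """Lengthen per-frame durations to slow the pacing of a phrase."""
    durations = durations.copy()
    durations[start:end] = np.round(durations[start:end] * factor)
    return durations

# A toy 10-frame utterance: pitch in Hz, plus per-frame energy and duration.
pitch = np.full(10, 200.0)    # flat 200 Hz contour
energy = np.ones(10)
durations = np.full(10, 4.0)  # vocoder hops per frame

# "Direct" the synthetic narrator: stress frames 3-5 and slow them down.
pitch, energy = emphasise(pitch, energy, 3, 6)
durations = stretch(durations, 3, 6)
```

A real system would feed the edited contours back into the synthesis model, but the editing step itself is just this: targeted, per-frame adjustment of prosodic features.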
With this interface, a male video producer could, for example, record himself reading the video script, and then use the AI model to convert his speech into the female narrator’s voice. Using this baseline narration, the producer could then direct the AI like a voice actor—tweaking the synthesised speech to emphasise specific words and modifying the pacing of the narration to better express the video’s tone.
The AI model’s capabilities go beyond voiceover work: text-to-speech can be used in gaming, to aid individuals with vocal disabilities or to help users translate between languages in their own voice. It can even recreate the performances of iconic singers, matching not only the melody of a song, but also the emotional expression behind the vocals.
Giving Voice to AI Developers, Researchers
With NVIDIA NeMo—an open-source Python toolkit for GPU-accelerated conversational AI—researchers, developers and creators gain a head start in experimenting with, and fine-tuning, speech models for their own applications.
Easy-to-use APIs and models pretrained in NeMo help researchers develop and customise models for text-to-speech, natural language processing and real-time automated speech recognition. Several of the models are trained with tens of thousands of hours of audio data on NVIDIA DGX systems. Developers can fine-tune any model for their use case, speeding up training with mixed-precision computing on NVIDIA Tensor Core GPUs.
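The numeric trade-off behind mixed-precision training can be seen without any GPU code. This is a minimal NumPy sketch, not NeMo code: it shows why mixed-precision recipes keep a full-precision "master" copy of the weights, since a small gradient update can vanish entirely when applied in half precision.

```python
import numpy as np

# A gradient update too small for FP16 to register against a weight of 1.0
# (FP16 machine epsilon is ~9.8e-4, so anything below ~4.9e-4 rounds away).
update = 1e-4

# Applying the update directly in FP16 loses it entirely...
fp16_result = np.float16(np.float16(1.0) + np.float16(update))

# ...while an FP32 master copy of the weight retains it.
fp32_result = np.float32(np.float32(1.0) + np.float32(update))

print(fp16_result == np.float16(1.0))  # the FP16 weight never moved
print(fp32_result > np.float32(1.0))   # the FP32 weight did
```

Mixed-precision training therefore runs the fast matrix math in FP16 on Tensor Cores while accumulating weight updates in FP32, getting the speed of half precision without losing small updates.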
Through NGC, NVIDIA NeMo also offers models trained on Mozilla Common Voice, a dataset with nearly 14,000 hours of crowd-sourced speech data in 76 languages. Supported by NVIDIA, the project aims to democratise voice technology with the world’s largest open data voice dataset.
Voice Box: NVIDIA Researchers Unpack AI Speech
Interspeech brings together more than 1,000 researchers to showcase groundbreaking work in speech technology. At this week’s conference, NVIDIA Research is presenting conversational AI model architectures as well as fully formatted speech datasets for developers.