Exclusive: Adobe’s AI Tool Can Transform the Emotional Tone of Voice-Overs

Adobe’s Oriol Nieto shared a brief video featuring several scenes accompanied by a voice-over but lacking sound effects. The AI model examined the video, breaking it into scenes and assigning emotional tags along with descriptions for each. Next came the sound effects. For example, when a scene with an alarm clock appeared, the AI generated an appropriate sound effect. It also recognized a scene with the main character—a driving octopus—and added a door-closing sound effect.
While impressive, the results were not flawless. The alarm sound felt unrealistic, and in a scene of two characters hugging, the AI added a distracting rustle of clothing that didn't work. Rather than making manual adjustments, Adobe's demo turned to a conversational interface, similar to ChatGPT, to request changes. When the car scene was found to lack ambient engine noise, a simple request through the interface prompted the AI to identify the right moment and integrate a fitting car sound seamlessly.
Although these experimental features aren't available yet, Sneaks demos often make their way into Adobe's software lineup. For example, Harmonize, a Photoshop tool designed to place assets with accurate color and lighting, was demonstrated at Sneaks last year and is now part of Photoshop. Expect similar innovations to emerge by 2026.
Adobe’s announcement arrives just months after the conclusion of a nearly year-long strike by video game voice actors, who won AI protections requiring companies to obtain consent and provide disclosure when they seek to replicate a voice actor’s voice or likeness with AI. Voice actors have been bracing for AI’s impact on their industry, and Adobe’s new capabilities, even if they don’t generate voice-overs from scratch, mark a notable AI-driven shift in the creative sector.
