Webinar Recap: “Hey Car, What’s Next?”

Exploring the Future of Voice AI in Automotive

Sensory recently hosted an exciting webinar, “Hey Car, What’s Next?”, where we explored the evolving role of voice AI in the automotive space. Moderated by Peter Cooney, CEO & Founder of SAR Insight & Consulting, the panel brought together thought leaders from BMW, MediaTek, and Sensory.

If you missed the live event, don’t worry! We’ve captured the key takeaways below, or you can watch the highlights reel. Like what you see? Watch the full recording!

Setting the Stage: The Rise of Voice in Cars

Peter Cooney opened the webinar with a look back at the history of voice technology in vehicles — from basic voice recognition systems to today’s rapidly advancing AI-powered assistants. He outlined how consumer expectations, privacy concerns, and edge computing trends are reshaping the future of automotive voice experiences.

The Advantages and Architecture of Hybrid Voice AI

Todd Mozer, President and CEO of Sensory, introduced Sensory’s distinctive position in the crowded voice AI market: a full-stack solution that runs entirely on-device. Unlike cloud-reliant platforms, Sensory’s technology emphasizes speed, privacy, and low-power performance, making it ideal for automotive applications where reliability and data security are paramount and hallucinations are unacceptable.

Mozer highlighted the architecture companies should put in place today to evolve with the rapidly changing LLM landscape, including the flexibility of “sometimes required” wake words, personalization through biometrics, and hybrid agents that can run on-device or access cloud services for a fully intelligent voice assistant.

The Automotive Perspective: Scaling AI for Cars

Dr. Ephrem Anthony Chemaly, representing MediaTek’s Automotive Business Unit, shared how MediaTek is applying its chip design expertise to enable next-gen, in-car experiences. Chemaly emphasized the growing need for scalable, power-efficient platforms that support both traditional voice recognition and more complex AI workloads.

As vehicles become increasingly software-defined, integrating fast, private, and reliable voice AI solutions will be key to enhancing driver experiences while maintaining safety and data privacy.

The OEM Perspective: Privacy and Intelligence at the Edge

Roman Wild, Lead Engineer, Intelligent Personal Assistant and Context Awareness at BMW of North America, emphasized the crucial role of privacy and trust in the future of in-car voice AI. When asked about the future of privacy and security in automotive voice, Wild stated, “I think the most important thing is building up trust so that people are willing to give up information because it’s just the fact that systems can only be as intelligent as they know about context and know about the user…You have to convey why you are giving up certain data, what’s happening to that data.” This approach not only improves vehicle responsiveness but also safeguards drivers’ information.

Key Discussion Topics

The panel discussion touched on several critical areas shaping the future of automotive voice AI:

  • The Role of LLMs (Large Language Models): While LLMs are making waves in many industries, panelists agreed that automotive applications require a balance between LLM-driven intelligence and strict performance, safety, and privacy constraints.
  • Edge vs. Cloud-Based AI: There was a strong consensus around the advantages of edge AI for cars. By keeping voice processing local to the vehicle, systems can offer faster response times, increased reliability in areas without connectivity, and stronger protection of personal data.
  • Privacy Matters: Privacy was a recurring theme throughout the discussion. As vehicles collect more user data, designing systems that prioritize on-device processing helps mitigate concerns and aligns with emerging data privacy regulations globally.
  • The Evolution of the Voice User Experience: Today’s drivers expect natural, conversational interactions with their vehicles. Panelists discussed how advancements in wake word detection, intent recognition, and multimodal interaction (combining voice, touch, and visual cues) are reshaping the in-car experience.

Audience Q&A Highlights

Throughout the webinar, attendees asked questions about:

  • Real-world implementation challenges for on-device AI
  • The need for passengers, alongside the driver, to have voice AI access
  • The evolving role of wake words and whether they will still be needed in the future
  • Several questions on the Sensory solution shown at the start of the webinar, including:
    • Confirming it all ran 100% on the edge
    • Asking what hardware the demo ran on and confirming that no GPU was required
    • Asking about the different components of the Sensory automotive solution

The panelists offered real-world insights into how companies are overcoming technical and UX hurdles to deliver seamless, human-like voice experiences.

The Road Ahead

As vehicles become smarter and more connected, the role of voice AI will only grow. Sensory and other industry leaders are paving the way for safer, more intuitive, and more secure in-car interactions. Ready to explore how Sensory’s embedded AI solutions can transform your next automotive project? Contact us today to learn more or schedule a demo!