The Mobile Cognitive Assistant: Bridging the Gap between In-Car and Outside-the-Car Experiences

Abstract:

By 2020, there will be 26 billion intelligent, capable, connected devices armed with conversational virtual assistants that manage nearly every conceivable consumer experience. Yet already today, one of the biggest consumer challenges in our increasingly connected world is the need to learn and remember the specific capabilities and vocabularies of multiple assistants spread across different services and devices. The next generation of mobile cognitive assistants for connected (and eventually autonomous, electric, and shared) cars is defined by a number of indispensable prerequisites. These so-called automotive assistants must be always available and not limited by connectivity constraints, which calls for a robust hybrid embedded-cloud architecture that also takes data privacy needs into account. At the same time, they must be intelligent and knowledgeable, leveraging machine learning and contextual reasoning to deliver more personalized, context-aware results and to collaborate with other assistants in order to complete more complex, realistic tasks with higher accuracy. They must interact with drivers as well as passengers, using the latest advances in speech signal enhancement paired with reliable voice biometrics that differentiates between multiple users. And last but not least, the automotive assistant must intelligently give drivers seamless access to, and control of, third-party assistants and chatbots inside and outside the car, using the latest cognitive arbitration capabilities. In the following talk, we elaborate on the underlying technologies and capabilities needed to bring the mobile cognitive assistant to life.


Year: 2018
In session: Keynote (Hauptvortrag)
Pages: 1–6