Automotive technology takes center stage at CES

(From left to right, clockwise) The ZF ProAI system for highway automated driving, introduced at CES; Gill Pratt, CEO of Toyota Research Institute; Toyota's Yui driving assistant; and the VocalZoom optical-sensor module.

On the show floor and on keynote stages at CES 2017, automotive technology was a major area of emphasis. Exhibiting automakers included BMW, Chrysler, Ford, Honda, Hyundai, Mercedes-Benz, Nissan, Toyota, and Volkswagen, and nearly a dozen Tier 1 auto suppliers were present on the show floor. SAE International hosted the Connect2Car conference track Jan. 5, with sessions examining connected cars, cybersecurity, and standards for automotive software development.

Here are some highlights of Automotive Engineering magazine’s coverage of the event:

Nader: Are glitch-free autonomous vehicles possible?

The CES 2017 conference highlighted broad industry-wide interest in artificial intelligence (AI). ZF and Audi both announced partnerships with Nvidia, a graphics processing unit (GPU) supplier that has long focused on deep learning. Bosch, Continental, and Visteon all discussed their AI development efforts. Toyota and Honda demonstrated AI concept vehicles.

But while AI is becoming a hot topic in automotive development circles, skepticism remains. Ralph Nader, the veteran automotive-safety advocate and industry critic, said during CES that it will be quite difficult for automakers to determine whether AI-based systems can operate without dangerous glitches. He noted that the defects in Takata airbags and General Motors ignition switches, and the causes of Toyota’s sudden-acceleration problems, were all relatively simple technical issues to debug compared with the many nuances of AI and related autonomous-driving software.

“The maximum possible simplicity is the genius of engineering,” Nader said. “If companies can’t produce comparatively simple systems without shipping defective products, how can we expect someone to find problems with complex autonomous vehicles?”

Read the full article.

SAE Level 3 ‘hand off’ is challenging AI researchers

When Gill Pratt, the CEO of Toyota Research Institute, the carmaker’s AI lab in Menlo Park, CA, mounted the CES 2017 stage, he delivered a reality check about automated driving.

“We’re not even close to Level 5 autonomy, which the SAE defined as full robotic control everywhere, at any time, in any conditions,” Pratt told the audience. “We have many years of machine learning research to do to achieve Level 5.”
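For context, SAE J3016 defines six levels of driving automation, with the “hand off” issue in this section’s heading arising at Level 3, where the system drives but may still return control to the human. The enum below is only an illustrative summary of those definitions, not something drawn from the article or from Toyota.

```python
from enum import IntEnum

class SAEDrivingAutomation(IntEnum):
    """Driving-automation levels per SAE J3016 (summary for illustration)."""
    NO_AUTOMATION = 0           # human driver does everything
    DRIVER_ASSISTANCE = 1       # system supports steering OR speed, not both
    PARTIAL_AUTOMATION = 2      # steering AND speed support; driver must monitor
    CONDITIONAL_AUTOMATION = 3  # system drives, but may "hand off" to the human
    HIGH_AUTOMATION = 4         # no hand-off needed within a limited design domain
    FULL_AUTOMATION = 5         # Pratt's "everywhere, at any time, in any conditions"
```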

Later, in an interview with Automotive Engineering, Pratt credited recent steady progress to the fact that most driving is relatively easy (“we do most of it without half thinking,” he said). But true self-driving vehicles will need “trillion-mile reliability” and the elusive ability to handle “corner cases” in their automated search for the best driving decisions. These are the difficult, rare situations that occur outside normal operating parameters.
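To get a feel for the scale behind that “trillion-mile” phrase, a rough back-of-the-envelope calculation helps; the fleet size and average speed below are assumptions chosen purely for illustration, not figures from Pratt or Toyota.

```python
# Rough scale of accumulating a trillion miles with physical test vehicles
# (all figures below are illustrative assumptions, not from the article).
fleet_size = 1_000                # hypothetical test fleet
avg_speed_mph = 50                # hypothetical average speed, driven nonstop
hours_per_year = 24 * 365

miles_per_year = fleet_size * avg_speed_mph * hours_per_year   # ~438 million
years_needed = 1e12 / miles_per_year                           # roughly 2,300 years

print(f"{miles_per_year:,.0f} miles/year -> about {years_needed:,.0f} years to log 1e12 miles")
```

Numbers like these are one reason validation work leans heavily on simulation and machine learning rather than on road testing alone.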

He likened the required robo-driving skills of the future to those of trained professional airline pilots. Current driving capabilities are more like the skills of general-aviation pilots.

Read the full article.

Car as close companion

Yui, Toyota’s new personal assistant in the automaker’s Concept-i vehicle unveiled at CES 2017, is a dashboard-dwelling, AI-based driver’s aide whose aim is to create a closer relationship between you and your car. Yui, your devoted virtual-twin buddy and clever little helper, watches your every move, like a loyal dog, to get to know you better and predict your preferences. It may even extend you emotionally into the vehicle you control, even if you usually let Yui and the Concept-i do the driving.

Yui is Toyota’s first smart ambassador to a new kind of personalized, “relationship-based” driving environment that Toyota hopes can augment the user experience in its future cars. The Concept-i designers at Calty Design Research in California exploited everything from floor lighting cues to haptic feedback to exterior text displays and a giant windshield head-up display to help cultivate this link.

“We’ve designed a lot of concept cars,” observed chief designer Ian Cartabiano, “but this is our first ‘philosophical’ design in a while.” The Concept-i is designed, he continued, “from inside out to foster a warm and friendly user experience while presenting a futuristic vision of 2030. The idea is to explore how we might most harmoniously connect the driver and car to society, and create a bond strong enough to help reignite a love for cars in the future.”

Read the full article.

Honda partners with VocalZoom to advance speech-recognition technology

At CES 2017, Honda announced a collaboration, through its startup-focused Xcelerator program, to develop the “optical microphone” from Israel-based VocalZoom for automotive use and markedly enhance the accuracy of speech recognition. Honda said VocalZoom’s optical sensor can deliver a “near-perfect reference signal that automotive voice-control systems can understand and quickly respond to, regardless of noise levels. The result is clean, isolated driver commands that are significantly easier for automotive voice-recognition systems to understand and obey than was previously possible with traditional voice-control solutions.”

The VocalZoom module incorporates a lens, a laser, and an application-specific integrated circuit (ASIC). The laser measures tiny vibrations in the speaker’s throat and face, and that measurement greatly augments the signal from the system’s accompanying acoustic microphone, providing the isolated, noise-immune reference described above. Honda said testing has shown at least a 50% improvement over standard acoustic voice recognition in a quiet vehicle environment and an even greater improvement in noisy ones.
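The article does not detail VocalZoom’s actual signal processing, but the basic idea of using a noise-immune vibration measurement as a reference for cleaning an acoustic channel can be sketched roughly as follows; the frame length, threshold, and simple gating approach are all illustrative assumptions rather than a description of the real product.

```python
import numpy as np

def gate_acoustic_with_optical(acoustic, optical, frame_len=256, energy_thresh=1e-4):
    """Attenuate acoustic frames in which the optical (vibration) reference
    shows no speech activity. Both signals are assumed time-aligned and
    sampled at the same rate; the threshold is a placeholder value."""
    out = np.asarray(acoustic, dtype=float).copy()
    ref = np.asarray(optical, dtype=float)
    for start in range(0, len(out) - frame_len + 1, frame_len):
        frame = slice(start, start + frame_len)
        # Vibration energy measured at the throat/face is largely immune to
        # cabin noise, so it indicates whether the driver is actually speaking.
        if np.mean(ref[frame] ** 2) < energy_thresh:
            out[frame] *= 0.1  # suppress frames without driver speech
    return out
```

In a real system the reference would more likely drive an adaptive filter or a spectral mask, but even this crude gate shows why an optical channel that ignores cabin noise is valuable to a voice-recognition front end.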

Eitan David, VocalZoom vice-president of products, told Automotive Engineering at CES 2017 that although the optical-microphone componentry could be incorporated into a vehicle’s existing camera system, the VocalZoom technology still requires its own sensor. Ideally, the VocalZoom sensor would be placed in the rearview mirror, dashboard, or headliner to give it a clear line of sight to the driver’s face.

Read the full article.