At the 2017 AUVSI Xponential “All Things Unmanned” Conference this past May, a panel of industry experts explored and addressed the following topics: OEM technology incubation and acceptance process, globalization and its impact on technology adoption, evolving business models, challenges and potential solutions, and more.
You can view the full presentation entitled AUVSI XPONENTIAL SAE Industry Expert Panel Session: Break Through the Barriers to Work with OEMS – a How To here.
Plus, take advantage of these additional SAE resources:
*Courtesy of AUVSI (Association for Unmanned Vehicle Systems International)
As the automotive industry progresses toward autonomy, the need for simulation-based development and validation increases, as does the need for greater detail and volume in simulations. Full autonomy requires an unprecedented amount of trust placed in the vehicle’s systems to safely handle a broad range of scenarios, and such trust requires extensive testing. Estimates are on the order of 100 million km of driving and several hundred million euros for validation of autonomous systems using road tests alone. These estimates, along with the dangers associated with testing specific scenarios, further motivate the use of simulation.
The systems to be simulated also go beyond vehicle dynamics alone, requiring sensor models in the loop with perception and control algorithms, to test all aspects of an autonomous vehicle or driver-assist system. This includes the generation of synthetic camera data at the RGB level, synthetic LiDAR point clouds, and synthetic radar data.
To facilitate the development of perception and control algorithms for Level 4 autonomy, engineers from MathWorks and Ford Motor Co. developed a shared memory interface between MATLAB, Simulink, and Unreal Engine 4 (a freely available video game engine) to send information such as vehicle control signals back to the virtual environment.
The shared memory interface conveys arbitrary numerical data, RGB image data, and point cloud data for the simulation of LiDAR sensors. The interface consists of a plugin for Unreal Engine, which contains the necessary read/write functions, and a beta toolbox for MATLAB, capable of reading and writing from the same shared memory locations specified in Unreal Engine, MATLAB, and Simulink.
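The paper excerpt does not include the plugin’s actual API, but the idea of exchanging frames through a shared memory region can be sketched as follows. This is a minimal illustration in Python, not the authors’ implementation (which uses an Unreal Engine plugin and a MATLAB toolbox); all names, the header layout, and the region size are hypothetical:

```python
import mmap
import os
import struct
import tempfile

# Hypothetical layout: a fixed-size region holding a small header
# (frame counter, payload length) followed by raw payload bytes.
REGION_SIZE = 4096
HEADER = struct.Struct("<II")  # frame_id, payload_len (little-endian)

def write_frame(mm, frame_id, payload: bytes):
    """Writer side (e.g., the controller) places one frame in the region."""
    mm.seek(0)
    mm.write(HEADER.pack(frame_id, len(payload)))
    mm.write(payload)

def read_frame(mm):
    """Reader side (e.g., the virtual environment) pulls the frame back out."""
    mm.seek(0)
    frame_id, n = HEADER.unpack(mm.read(HEADER.size))
    return frame_id, mm.read(n)

# Back the shared region with a temporary file for this self-contained demo.
path = os.path.join(tempfile.mkdtemp(), "shm.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * REGION_SIZE)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), REGION_SIZE)
    write_frame(mm, 1, b"steering=0.1;throttle=0.3")
    fid, data = read_frame(mm)
    print(fid, data.decode())
    mm.close()
```

In a real setup, both processes map the same named region and synchronize access; the point of the sketch is simply that arbitrary numerical or image data reduces to bytes at agreed-upon offsets.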
The LiDAR sensor model was tested by generating point clouds with beam patterns that mimic Velodyne HDL-32E (32 beam) sensors and is demonstrated to run at sufficient frame rates for real-time computations by leveraging the Graphics Processing Unit (GPU).
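A beam pattern like the HDL-32E’s can be sketched by generating 32 uniformly spaced elevation angles and converting each into a ray direction to trace against scene geometry. The +10.67° to −30.67° vertical field of view used below is a published Velodyne figure, not a detail from the paper; this is an illustrative sketch, not the authors’ sensor model:

```python
import math

# 32 beams spanning roughly +10.67 deg to -30.67 deg, uniformly spaced.
NUM_BEAMS = 32
TOP_DEG, BOTTOM_DEG = 10.67, -30.67
step = (TOP_DEG - BOTTOM_DEG) / (NUM_BEAMS - 1)
elevations = [TOP_DEG - i * step for i in range(NUM_BEAMS)]

def ray_direction(azimuth_deg, elevation_deg):
    """Unit vector for one ray (x forward, y left, z up)."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

# A few azimuth steps of one sweep: one ray per beam per azimuth position.
# In the engine, each ray would be traced against the scene (on the GPU)
# to return a range, yielding one point-cloud point.
rays = [ray_direction(az * 0.2, el) for az in range(5) for el in elevations]
print(len(elevations), round(step, 2), len(rays))
```

Tracing every ray per rotation is what makes GPU acceleration essential for real-time frame rates.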
The engineers successfully established and tested a workflow that provides an interface between a 3D virtual driving environment and vehicle perception systems related to autonomy or active safety. This virtual environment was shown to be capable of generating synthetic camera and LiDAR data that resemble data from real sensors, and of communicating bidirectionally via shared memory with algorithms in development.
Ford Model-Based Design Engineer Ashley Micks (SAE Member, 2016) is co-author of an SAE International technical paper, presented at WCX17: World Congress Experience, that presents an in-depth overview of the workflow for the simulation of vehicle perception systems in a 3D driving environment. Micks, who earned a bachelor’s degree in aeronautical and astronautical engineering from Massachusetts Institute of Technology and master’s and doctorate degrees in aeronautical and astronautical engineering from Stanford University, also describes her research in a Ford corporate video.
This article is based on SAE International technical paper 2017-01-0107 by Arvind Jayaraman of MathWorks and Ashley Micks and Ethan Gross of Ford.
Cadillac’s CTS sedan, one of the first production vehicles in the world to feature Vehicle-to-Vehicle (V2V) communication, was recently the platform for successful demonstrations of Vehicle-to-Infrastructure (V2I) capability in Michigan. V2I connects vehicles to the surrounding infrastructure, allowing the vehicle to alert the driver of safety, mobility, or environment-related conditions ahead.
CTS development vehicles received real-time data from traffic controllers on signal phasing and timing during successful demonstrations recently conducted in collaboration with Michigan road agencies. The traffic signals, located adjacent to the GM Warren Technical Center campus, sent real-time data using Dedicated Short-Range Communications (DSRC) protocol to the development vehicles, which alerted the drivers of a potential red-light violation at current speed. This alert could help avoid the potentially dangerous decision to brake abruptly or accelerate through a busy intersection.
Vehicles do not transmit any identifying information, such as VIN, registration, or MAC address, in their messages. For example, if a connected car runs a red light, the traffic signal may be able to report that someone ran a red light, but not who or what vehicle. Firewalls and other measures ensure the DSRC signals cannot be interfered with and are exchanged only between the vehicle and the infrastructure.
“When cars can talk to the infrastructure, the benefits will rise exponentially. For example, V2I-enabled red lights won’t hold up traffic when they’re not needed, and highly accurate, real-time traffic updates will help further reduce congestion—which we all know creates driver frustration and waste,” said General Motors CEO Mary Barra at the 2014 Intelligent Transport Society Congress. “The sooner the industry puts a critical mass of V2V-equipped vehicles on the road, the more accidents we’ll prevent… and the more society—and individual drivers—will benefit. The same holds true for V2I.”
The Michigan Department of Transportation, Macomb County Department of Roads, and General Motors’ Research & Development are collaborating to showcase leadership in the connected and automated vehicle environment.
For more on this subject, view this episode of “SAE Eye on Engineering” by Automotive Engineering Editor-In-Chief Lindsay Brooke.
Connected and automated vehicles (CAVs) are receiving significant attention as a technology solution to realize safer, more cost-effective, and efficient operation of several transportation systems. CAVs can also potentially help curb energy consumption and greenhouse gas (GHG) emissions from the transportation sector. One of the most promising CAV technologies that could experience widespread adoption in the next 5 to 10 years in the U.S. is platooning for combination trucks.
Platooning is a demonstrated method in which groups of vehicles travel close together, actively coordinated in formation at high speed, with the potential to reduce the energy consumption caused by aerodynamic drag. Trucks are ideal candidates for platooning because of their technical characteristics and mode of operation (several vehicles driving for long distances along the same route, often concentrated in a few corridors).
Combination trucks account for the majority of the energy use in the U.S. freight sector (64.9% of freight, and 4.8% of total U.S. energy use in 2013) and an even larger share of GHG emissions (77.1% of freight, and 7.5% of total U.S. GHG emissions in 2013). Looking to the future, the impact of trucking on U.S. energy use and GHG emissions is likely to increase, due mainly to three factors:
Several studies have focused on assessing the potential savings achievable through platooning operations for a group of two or more trucks, as well as extrapolating these savings to a national scale based on overall miles traveled by trucks. However, a key element has been neglected in the existing literature: what is the “platoonable” fraction of miles traveled during real-world operations? Namely, in a fleet of trucks, what fraction of miles driven is amenable to platooning? Clearly, not every mile driven can be driven in a platoon formation, and platooning at low speeds does not lead to significant fuel savings.
However, for large trucks operating extensively on highways over long distances, the fraction of platoonable miles at high speed can be significant.
Researchers from the National Renewable Energy Laboratory (NREL) conducted an estimation of the platoonable fraction of miles driven by combination trucks in the United States based on more than 3 million miles of driving data collected across a variety of fleet operators, truck manufacturers, times of operation, and regions. The data considered have been collected directly by NREL and other partners who have contributed data to NREL’s Fleet DNA database using on-board data logging devices or telematics systems.
In 2014, 169.8 billion miles were driven by combination trucks in the United States, consuming a total of 29.1 billion gallons of fuel and emitting approximately 6.9 billion metric tons of carbon dioxide equivalent. Based on the NREL researchers’ analysis, approximately 65.6% of those miles could potentially be driven in platoon formation. Assuming an energy (and emissions) savings of approximately 6.4% for each team of platooned vehicles (based on efficiency improvements previously published in a platooning benefits study), widespread adoption of platooning operations can potentially reduce trucks’ energy use by approximately 4.2%.
With these bounding assumptions, the widespread adoption of platooning operations for combination trucks in the United States could lead to a total savings of 1.5 billion gallons of petroleum-derived fuels (equal to 1.1% of the current U.S. import of oil: 2.7 billion barrels in 2015) and 15.3 million metric tons of CO2 (a 0.22% emissions reduction) per year.
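The fleet-wide figure follows directly from the two percentages above: only the platoonable fraction of miles earns the per-platoon savings. A quick arithmetic check, with the values taken from the article:

```python
# Only the platoonable fraction of miles earns the per-platoon savings.
platoonable_fraction = 0.656   # share of combination-truck miles that could be platooned
per_platoon_savings = 0.064    # energy saved while actually platooning

fleet_savings = platoonable_fraction * per_platoon_savings
print(f"{fleet_savings:.1%}")  # prints "4.2%", matching the reduction cited above
```

The national savings figures then scale this percentage by total fleet fuel use and emissions.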
This technical-potential study presents a best-case scenario because, in the real world, truck and fleet operators may not be willing to participate in platooning operations under all the conditions considered here (e.g., an operator might not be willing to wait to form a platoon). Therefore, NREL researchers plan to perform an expert elicitation study involving truck owners and fleet operators to assess the overall willingness to participate in platooning and the main barriers to the widespread adoption of this technology.
This article is based on SAE International technical paper 2017-01-0086 by Matteo Muratori, Jacob Holden, Michael Lammert, Adam Duran, Stanley Young, and Jeffrey Gonder of the National Renewable Energy Laboratory. The paper was presented as part of the “Intelligent Transportation Systems” technical session at WCX17 World Congress Experience.
Because small unmanned aircraft systems (UAS) vary greatly in both design and powerplant, their propulsion systems differ significantly from one another, and from those powering manned vehicles. With the production of unmanned aircraft proliferating, safety is critical, and after extensive review, no consensus standards on unmanned aircraft propulsion have been identified.
Until now. SAE International’s recently-formed E-39 Unmanned Air Vehicle Propulsion System Committee will hold its first official meeting in late May.
This committee, with responsibility for developing and maintaining standards for unmanned vehicle propulsion systems, was formed in response to the need for industry propulsion standards written specifically for unmanned aircraft.
The E-39 committee will categorize propulsion system types as they relate to airframes (rotary or fixed wing, for example) and develop appropriate classifications and distinctions. They will also identify experts in each propulsion system type and class, and in the vehicles in which they are to be installed. After a review of existing applicable SAE standards, the committee will identify the need for standards for specification and testing of propulsion system properties (weight, reliability, durability) and coordinate the development of new standards with industry, regulatory agencies, and other stakeholders.
Stakeholders include engine manufacturers, motor manufacturers, suppliers, airframers, researchers, academia, and regulators.
The committee’s scope includes both chemical and electrical propulsion and the supporting systems, including engines, servo actuators, fuel, motors, electronic speed controllers, batteries, propellers, wiring, connectors, plumbing, filler valves, filters, pumps, propeller balancing rigs, test stands, thrust measurement rigs, and flight management controllers for energy efficient flight.
Today’s customers expect all the applications they are used to on their smartphones to be available in a modern car, along with the ability to connect personal mobile devices to it. New features and functionalities, such as automated driving, unlocking via smartphone, and navigation with detailed satellite graphics, increase the need to connect the car to the outside world, either to enable these features or at least to improve performance and user experience.
But when the car is connected to other devices, IT security becomes a priority, because the car simultaneously becomes an attractive target for attackers. Therefore, confidentiality, integrity, and authenticity must be maintained, and privacy must be protected.
Although cybersecurity is a common part of daily routines in the traditional IT domain, necessary security mechanisms are not yet widely applied in vehicles. At first glance, this may not appear to be a major problem as there are lots of solutions from other domains that could potentially be reused. But substantial differences compared to an automotive environment have to be taken into account, drastically reducing the possibilities for simple reuse.
Reusing established and widely used algorithms and procedures for security mechanisms and implementations has the dual benefit of lowering effort and increasing security. After experiencing the downside of vendor-specific, proprietary algorithms, the car industry is in agreement to use standardized crypto algorithms such as AES (Advanced Encryption Standard), RSA (Rivest-Shamir-Adleman cryptosystem), and ECC (Elliptic Curve Cryptography) that are published and thoroughly reviewed by scientists in a public discourse.
Although reuse of existing and proven security technology is desirable, the car has other specific requirements that must be taken into consideration.
In general, cars must meet a high quality level and endure harsh environmental conditions. Both requirements must also be reflected in the security-critical parts. Smart card technology deployed in the car needs to support an extended temperature range and requires an automotive-specific security qualification and manufacturing process. The first security controller strongly affected was the SIM (subscriber identity module): the car industry required a solderable SIM to cope with vibrations, an extended temperature range, and an automotive quality level (AEC-Q100).
Legacy busses and ECUs
Current car platforms are renewed every five to seven years. A new car platform does not necessarily mean that every ECU in this platform is developed from scratch or all car communication busses are updated to higher performing variants. A critical example is the CAN bus that is still widely used and in many cases already reaching bandwidth limitations. This fact makes it difficult to support the additional overhead of cryptographic signatures in some cases.
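To see why the overhead bites: a classical CAN frame carries at most 8 data bytes, so a full-length authentication tag cannot ride along with the signal data and is truncated in practice. The sketch below uses stdlib HMAC-SHA256 as a stand-in for the AES-CMAC commonly used on ECUs; the key, field lengths, and freshness scheme are illustrative assumptions, not details from the paper:

```python
import hashlib
import hmac

# Demo key shared between sender and receiver ECUs (illustrative only).
KEY = b"demo-ecu-shared-key"

def authenticate(payload: bytes, freshness: int, tag_len: int = 4) -> bytes:
    """Compute a truncated MAC over the payload plus a freshness counter.

    The freshness counter thwarts replay attacks; truncating the 32-byte
    digest to a few bytes is the price of fitting into a legacy CAN frame.
    """
    msg = payload + freshness.to_bytes(4, "big")
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:tag_len]

signal = b"\x01\x02\x03\x04"            # 4 bytes of actual signal data
tag = authenticate(signal, freshness=42)
frame = signal + tag                     # 4B data + 4B truncated MAC = 8B total
print(len(frame), tag.hex())
```

Half the frame is now authentication overhead, which is exactly the bandwidth pressure described above for buses already near their limits.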
The industry is following the tradeoff listed below:
Isolation can be achieved by domain specific gateways hosting firewalls principally similar to the aforementioned ECU intrinsic firewall.
Cars have a substantially longer lifetime than typical IT equipment. The time from when a car’s architecture and security foundation are defined until the car is taken out of operation can exceed 20 years.
This raises the question of whether the crypto algorithms used can be regarded as secure over such a long lifetime. Several institutions publish estimates of the lifetimes of known crypto algorithms and related key lengths. There is an evident risk that the car will require an update of its cryptographic algorithms during its lifetime.
Crypto agility has many aspects:
With respect to the first and second points mentioned above, the TPM 2.0 standard already supports crypto agility in all the required specifications. The TPM specification provides a framework that allows further crypto algorithms to be added. It is expected that future versions of the TPM will therefore support new crypto algorithms while maintaining support for older, still-relevant algorithms. An ideal solution would be to update specific hardware components step by step so that they support new cryptographic algorithms while preserving backward compatibility with older ones. This would enable smooth transitions and provide more flexibility in the development process of ECUs, vehicles, and the supporting security infrastructure.
This article is excerpted from the SAE International Technical Paper 2017-01-1652 “Cyber Security in the Automotive Domain – An Overview” by Rolf Schneider and Andre Kohn of AUDI AG and Martin Klimke and Udo Dannebaum of Infineon Technologies AG. It was presented during the Cybersecurity for Cyber-Physical Vehicle Systems technical session at WCX17.
When considering environmental factors in commercial aerospace, the most familiar themes focus on the reduction of emissions, noise, and fuel consumption. These design factors are interlinked and present a substantial technical challenge for designers and manufacturers, who must also ensure that aircraft are attractive to operators and their customers.
Some firms focus on modifying factors less obvious than weight and emissions; factors that can nevertheless make a difference as a byproduct of introducing an enhanced cabin environment for passengers. Countering internal condensation with a humidification system is one such example.
Sweden-based CTT Systems works with major commercial aircraft manufacturers and many specialist completion and maintenance companies around the world in providing humidity-control products and anti-condensation systems. These systems are designed to prevent moisture issues in aircraft and to enhance the in-flight comfort of crew and passengers.
In for the long haul
The human body can tolerate wide variations in humidity, but on a long-haul journey, flying at a high altitude, the air quality in a passenger jet cabin can deteriorate very quickly. Cabin air can reach desert-like, arid conditions within an hour after takeoff and the effects can often be noticed soon after the aircraft levels off into its cruise phase, typically at 35,000 to 40,000 ft. Symptoms often include dry skin and eyes, difficulty sleeping, and cold or allergy symptoms as the linings of the mouth and nose dry out.
Humidifiers designed by CTT are based on evaporative cooling technology that improves air quality and effectively precludes the transfer of bacteria by reducing particles in the cabin air.
Dramatic condensation can occur as each passenger exhales an average of 100 g of water per hour. If this condensed water is not drained from the aircraft, the effect can be noticed by passengers during takeoff and landing, when water on top of ceiling panels seeps down into the cabin, causing “rain in the plane.”
When water works its way into the cabin insulation panels, the insulation effect is reduced and the weight of a large aircraft can increase by over half a ton, depending on the aircraft type, passenger load, and other operational factors, such as climate. This results in higher fuel consumption and associated environmental impacts.
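The scale of the moisture load is easy to estimate from the 100 g-per-passenger-per-hour figure above. The passenger count and flight duration in this sketch are hypothetical, chosen only to illustrate the order of magnitude:

```python
# Figures: ~100 g of exhaled water per passenger per hour (from the article);
# passenger count and flight duration below are assumed for illustration.
passengers = 300
grams_per_pax_hour = 100
flight_hours = 12

exhaled_kg = passengers * grams_per_pax_hour * flight_hours / 1000
print(f"{exhaled_kg:.0f} kg exhaled over the flight")  # prints "360 kg ..."
```

Only a fraction of this condenses into the insulation on any one flight, but accumulated over repeated flights it is consistent with the half-ton weight penalty described above.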
Clearing the air
According to CTT, providing moisture removal systems that use established industry technology can address the root causes of in-flight condensation. The CTT Zonal Drying System removes cabin air moisture by directing air from the crown area or cargo spaces to zonal drying units located at strategic points in the aircraft. Special ducting then circulates this dry air between the outer skin and the cabin. This lowers the dew point in the crown area, preventing condensation from taking place and keeping the insulating blankets dry.
A four-pole inlet fan feeds two separate airstreams into a rotor impregnated with silica gel, which absorbs humidity from one of the air streams and processes it before releasing it as dry air. Electric heaters warm up the second air stream before it enters the rotor. As it passes through the slow-moving rotor the heated air absorbs humidity collected from the processed air. The regenerated air is then fed into the aircraft recirculation system, or dumped through an outflow valve. The system is activated when the aircraft is powered up.
CTT’s CAIR system offers higher levels of passenger comfort through providing fresher air, which can deliver an edge in a highly competitive global airline market. The introduction of such features as flat-bed seats, on-demand TV, and internet access has provided additional benefits to attract premium customer loyalty, but enhancing the supply of fresh air is a very tangible improvement that more discerning frequent fliers are now appreciating.
This post is based on an article by Richard Gardner for Aerospace Engineering magazine. For the complete article, visit articles.sae.org/15271.
At last month’s SAE 2017 Government/Industry Meeting, two NASCAR safety experts were presented the Ralph H. Isbrandt Automotive Safety Engineering Award for their SAE International technical paper, “Development and Implementation of a Quasi-Static Test for Seat Integrated Seat Belt Restraint System Anchorages” (2015-01-0739).
The paper was written by John Patalak, Senior Director of Safety Engineering, NASCAR Research and Development; and Tom Gideon, recently retired Senior Director of Safety Engineering, NASCAR Research Development and Safety.
Patalak’s work at NASCAR includes researching, developing, and approving driver and vehicle safety systems and investigating vehicle crashworthiness and occupant protection issues. Gideon retired as Senior Director of Safety from NASCAR in 2016. He joined NASCAR in 2009 as Director–Safety R&D; before that, he served as Safety Manager for GM Racing.
Their paper describes the development of the quasi-static test for the seat integrated seatbelt restraint system portion of the NASCAR Seat Submission and Test Protocol Criteria. It reviews the methodology used to develop the testing, including the developmental dynamic sled tests. In conjunction with the start of the 2017 Monster Energy NASCAR Cup Series, following is an excerpt of their award-winning paper.
Over the past decade, large safety improvements have been made in crash protection for motorsports drivers. It has been well established that in side and rear impacts the driver seat provides the primary source for occupant retention and restraint. Beginning in the 2015 season, NASCAR required the use of driver seats with all seatbelt restraint system anchorage locations integrated internally to the seat with a minimum of seven anchorage locations. These seats are referred to as All Belts To Seat (ABTS) seats.
Incorporating seatbelt anchorages into the driver’s seat provides several distinct restraint system advantages over chassis-mounted seatbelts. Specifically, ABTS seats allow for shorter seatbelt lengths, improved seatbelt mounting geometry, the elimination of seatbelt pass-through holes, and other seatbelt interference issues. Shorter seatbelt lengths (the length from where the belt leaves the occupant body to the belt anchorage location) reduce the permissible amplitude of occupant motion. Seatbelt mounting geometry can be optimized when using ABTS seats due to eliminating obstructions in the seatbelt routing paths. Eliminating pass-through holes in seats for the seatbelts greatly reduces the possibility of interference issues between the seat and the seatbelt during the crash. This interference may include seatbelt adjusters or hardware becoming stuck or misaligned in seat openings or seatbelt webbing edges being deformed around seat opening edges. These issues can initiate webbing failure, adjuster slippage, hardware deformation, or a combination of these malfunctions.
Additionally, using ABTS seats allows for the future use of deformable seat mounting brackets. The purpose of the deformable seat mounting brackets would not be to lower occupant accelerations, but rather could be used to permit the driver’s seat to be moved away from intrusion during severe impacts, thus limiting the driver’s exposure to intruding structure. If seatbelts are mounted to the vehicle chassis, moving the driver’s seat is not possible.
To realize the advantages of an ABTS seat, a quasi-static test to prove the structural reliability of the seat belt anchorages was designed, developed, and implemented. As a basis for the load magnitudes of the quasi-static test, sled testing was conducted.
For the shoulder belts, the minimum quasi-static load was 9000 lb (4080 kg), with the lap and anti-submarine belts each at a minimum quasi-static load of 6000 lb (2700 kg).
The shoulder belts’ minimum quasi-static load of 9000 lb resulted in a significantly larger safety factor than that of the lap and anti-submarine belts. This high safety factor for the shoulder belts was selected due to the lack of seatbelt-system redundancy for shoulder belts (assuming the minimum single shoulder belt configuration [two belts] rather than the over/under or double shoulder belt system [four belts]), as well as the readily available vehicle structure at shoulder level.
When comparing this quasi-static ABTS test to the FMVSS 210 anchorage test, this ABTS test has a 1.2 times (6000 vs. 5000 lb) greater load for the lap belt alone and a 3.5 times (21,000 vs. 6000 lb) greater load on the total restraint harness combination.
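The quoted ratios check out against the minimum loads given earlier (taking the 21,000-lb total as the shoulder, lap, and anti-submarine minimums combined):

```python
# Minimum quasi-static loads from the article, in pounds.
abts_lap_lb = 6000
fmvss_lap_lb = 5000
abts_total_lb = 9000 + 6000 + 6000   # shoulder + lap + anti-submarine = 21,000
fmvss_total_lb = 6000

print(abts_lap_lb / fmvss_lap_lb)      # prints 1.2  (lap belt alone)
print(abts_total_lb / fmvss_total_lb)  # prints 3.5  (total harness)
```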
While not all of the minimum quasi-static test loads exhibited a 1.5 safety factor, such as the negative G belt load, this test methodology is being implemented as part of an ongoing process to continue with incremental improvements to occupant safety and is intended to be used as a minimum guideline for integrated seat belt anchorage strength. As such it sets a minimum performance requirement for seat manufacturers to meet and exceed with future designs.
This article is based on SAE International tech paper 2015-01-0739 by John Patalak and Thomas Gideon of NASCAR.
On the show floor and on keynote stages at CES 2017, automotive technology was a major area of emphasis. Exhibiting automakers included BMW, Chrysler, Ford, Honda, Hyundai, Mercedes-Benz, Nissan, Toyota, and Volkswagen; and there were nearly a dozen Tier 1 auto suppliers present on the show floor. SAE International hosted the Connect2Car conference track Jan. 5, with sessions examining connected cars, cybersecurity, and standards for automotive software development.
Here are some highlights of Automotive Engineering magazine’s coverage of the event:
Nader: Are glitch-free autonomous vehicles possible?
The CES 2017 conference highlighted broad industry-wide interest in artificial intelligence (AI). ZF and Audi both announced partnerships with Nvidia, a graphic processing unit provider that’s long focused on deep learning. Bosch, Continental, and Visteon all discussed their development solutions. Toyota and Honda demonstrated AI concept vehicles.
But while AI is becoming a hot topic in automotive development circles, skepticism remains. Ralph Nader, the veteran automotive safety advocate and industry critic, said during CES that it will be quite difficult for automakers to determine whether AI-based systems can operate without dangerous glitches. He noted that defects in Takata airbags, General Motors ignition switches, and the causes of Toyota’s sudden-acceleration problems were all comparatively simple technical problems to debug relative to the many nuances of AI and related autonomous software.
“The maximum possible simplicity is the genius of engineering,” Nader said. “If companies can’t produce comparatively simple systems without shipping defective products, how can we expect someone to find problems with complex autonomous vehicles?”
SAE Level 3 ‘hand off’ is challenging AI researchers
When Gill Pratt, the CEO of Toyota Research Institute, the carmaker’s AI lab in Menlo Park, CA, mounted the CES 2017 stage, he delivered a reality check about automated driving.
“We’re not even close to Level 5 autonomy, which the SAE defined as full robotic control everywhere, at any time, in any conditions,” Pratt told the audience. “We have many years of machine learning research to do to achieve Level 5.”
Later, in an interview with Automotive Engineering, Pratt credited recent steady progress to most driving being relatively easy—”we do most of it without half thinking,” he said. But true self-driving vehicles will need “trillion-mile reliability” and the elusive ability to handle “corner cases” in their automated search for the best solutions. These are the difficult and rare problems or situations that can occur outside of normal operating parameters.
He likened the required robo-driving skills of the future to those of trained professional airline pilots. Current driving capabilities are more like the skills of general-aviation pilots.
Car as close companion
Yui, Toyota’s new personal assistant in the automaker’s Concept-i vehicle unveiled at CES 2017, is a dashboard-dwelling, AI-based drivers’ aide whose aim is to create a closer relationship between you and your car. Yui, your devoted virtual-twin buddy and clever little helper, watches your every move like your dog to better know you and predict your preferences. And maybe even extend you emotionally into the vehicle you control, even if you usually let Yui and Concept-i do the driving.
Yui is Toyota’s first smart ambassador to a new kind of personalized, “relationship-based” driving environment that Toyota hopes can augment the user experience in its future cars. The Concept-i designers at Calty Design Research in California exploited everything from floor lighting cues to haptic feedback to exterior text displays and a giant windshield head-up display to help cultivate this link.
“We’ve designed a lot of concept cars,” observed chief designer Ian Cartabiano, “but this is our first ‘philosophical’ design in a while.” The Concept-i is designed, he continued, “from inside out to foster a warm and friendly user experience while presenting a futuristic vision of 2030. The idea is to explore how we might most harmoniously connect the driver and car to society, and create a bond strong enough to help reignite a love for cars in the future.”
Honda partners with VocalZoom to advance speech-recognition technology
At the 2017 CES, Honda announced a collaboration, through its Xcelerator program for advancing startup companies, to develop for automotive use the “optical microphone” of Israel-based VocalZoom to markedly enhance the accuracy of speech recognition. Honda said VocalZoom’s optical sensor can deliver a “near-perfect reference signal that automotive voice-control systems can understand and quickly respond to, regardless of noise levels. The result is clean, isolated driver commands that are significantly easier for automotive voice-recognition systems to understand and obey than was previously possible with traditional voice-control solutions.”
The VocalZoom module incorporates a lens, a laser, and an application-specific integrated circuit (ASIC) chip. The laser measures tiny vibrations in the throat and face during speech, greatly augmenting the signal from the system’s accompanying acoustic microphone, Honda said in a release. The company said testing has shown at least a 50% improvement over standard acoustic voice recognition in a quiet vehicular environment, and better results in noisy environments.
Eitan David, VocalZoom vice-president of products, told Automotive Engineering at CES 2017 that although the optical microphone componentry could be incorporated into a vehicle’s existing camera system, the VocalZoom technology does require its own sensor. Ideally, the VocalZoom sensor would be placed in the rearview mirror, dashboard or headliner to enable a clear line of sight to the driver’s face.
Caterpillar has been developing the foundation for autonomy over the past four decades. Its portfolio of building-block strategies includes operator-assist features, remote-controlled machines, semi-autonomous machines, fully autonomous machines, and a completely integrated worksite. Autonomy on a large scale lets companies remove process variability, resulting in unprecedented improvements in safety, availability, and productivity.
At the recent SAE 2016 Commercial Vehicle Engineering Congress, Matt Glover, senior project team leader for Command for Dozing in Caterpillar’s Advanced Technologies and Solutions Division, discussed “Caterpillar’s Autonomous Journey.” Here, we briefly summarize Caterpillar’s building-block strategy for autonomy.
All Caterpillar remote control, semi-autonomous, and fully autonomous machines are designed to be operated in manual mode with an operator in the seat. Cat autonomous machines have the same performance and functionality as the standard machine models when operated in manual mode. All Caterpillar autonomous base machines are fully electrohydraulic (EH) machines. This enables the electronic controls to be seamlessly integrated into the machine without the need for additional hardware or third-party systems.
Caterpillar uses a building-block approach to autonomy built on well-established subsystems. It starts with a fully EH machine and adds operator-assist or automated features to the base machine. In addition to being a building block toward autonomy, the operator-assist features allow novice operators to dramatically increase their productivity and efficiency as manual operators of the machine. These operator-assist features are also critical to efficient remote operation as the operator moves off the machine to either a remote console or an operator station. Semi-autonomous machines are the next step on the journey and involve an operator periodically interacting with a machine or a group of machines. A fully autonomous machine requires no direct operator control, and a completely autonomous job site would have no operators in the pit running equipment.
With a scalable semi-autonomous tractor system, a customer can order a machine, upgrade to line-of-sight remote control, upgrade to an operator station (which requires additional external networking), add the vision system to upgrade to non-line-of-sight remote control, and finally add semi-autonomous support to operate up to four machines from a single operator station. Customers can choose the solution that meets their needs and upgrade at any time as their requirements or conditions change.
Each piece of equipment intended for autonomous operation is equipped with the specific components and software that allow it to operate autonomously in its distinct application. While uniquely configured, all of Caterpillar’s Cat Command machine products include the following subsystems: positioning, planning, perception, and wireless communications.
Knowing the precise location of autonomous machines, loading tools, auxiliary machines, light vehicles, and stationary equipment is critical for proper system operation. For surface mine sites, the systems depend on a GPS base station to send correction signals to the onboard positioning systems, allowing the global position to be translated into a local coordinate system with sub-centimeter accuracy. Once the autonomous work area is surveyed and a virtual map is created, the location of each vehicle or piece of mobile equipment must be tracked.
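The translation from corrected global coordinates into a site-local frame can be sketched with a simple flat-Earth approximation, which is adequate over the few-kilometer extent of a surveyed work area. The function below is a hypothetical illustration, not Caterpillar's implementation; a production system would apply a rigorous RTK-corrected geodetic transform.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def geodetic_to_local(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Convert a corrected GNSS fix to (east, north) meters relative to a
    surveyed site origin, using an equirectangular approximation."""
    dlat = math.radians(lat_deg - origin_lat_deg)
    dlon = math.radians(lon_deg - origin_lon_deg)
    north = EARTH_RADIUS_M * dlat
    east = EARTH_RADIUS_M * math.cos(math.radians(origin_lat_deg)) * dlon
    return east, north
```

At the equator, 0.001 degree of longitude maps to roughly 111.3 m east of the origin; every machine on site can then be tracked in this shared local frame.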
Each autonomous machine is equipped with multiple Global Navigation Satellite System (GNSS) antennas, receivers, and an inertial measurement unit (IMU). A single GNSS receiver reveals the machine’s position and speed; however, multiple receivers in combination with the IMU allow the machine to understand its orientation even at rest. In the event that an autonomous machine loses the ability to determine its position, it will come to a controlled stop and await further instructions.
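The value of multiple GNSS antennas, and the fail-safe rule for position loss, can both be sketched briefly. The heading calculation below uses the local east/north positions of two antennas mounted fore and aft; the function names and the simple state rule are illustrative assumptions, not Caterpillar's actual software.

```python
import math

def heading_from_antennas(rear_en, front_en):
    """Heading in degrees clockwise from north, from the local (east, north)
    positions of two GNSS antennas. Unlike a single receiver, which needs
    motion to infer a track, this works even when the machine is at rest."""
    d_east = front_en[0] - rear_en[0]
    d_north = front_en[1] - rear_en[1]
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

def safety_state(has_valid_fix):
    """Fail-safe rule from the text: a machine that cannot determine its
    position comes to a controlled stop and awaits further instructions."""
    return "OPERATE" if has_valid_fix else "CONTROLLED_STOP_AWAIT_INSTRUCTIONS"
```

A front antenna due north of the rear antenna yields a heading of 0 degrees; due east yields 90 degrees.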
The planning system can be broken down into three subtasks: site planning, job planning, and task planning. Site planning is the broadest view of autonomy, including all machine types and the coordination between the machines and specific job sites. Job planning is planning at a specific job site or within an automated machine type. Task planning is the planning of specific tasks for an autonomous machine and typically includes two major task planning functions—navigation and application-specific tasks. Navigational planning is responsible for taking a machine from point A to point B while avoiding obstacles. For loading or grading machines, the application-specific task would be excavation planning or moving dirt from one location to the other in an efficient manner.
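The navigational portion of task planning, taking a machine from point A to point B while avoiding obstacles, can be illustrated with a minimal breadth-first search over an occupancy grid. This is a toy sketch, not Caterpillar's planner, which would work against the surveyed site map and the machine's actual kinematics.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 2-D occupancy grid.
    grid[r][c] == 1 marks an obstacle cell; returns a shortest list of
    (row, col) cells from start to goal, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk parent links back to start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None
```

On a small grid with a wall down the middle, the planner routes around the obstacle rather than through it, which is the essence of the navigation task-planning function.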
Perception is used to prevent autonomous machines from contacting people, vehicles, or other objects. Sensors are used to monitor the immediate work area, identify obstacles and hazards, and determine an appropriate response without human intervention.
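Determining an "appropriate response" can be sketched with a simple kinematic rule: compare the range to a detected object against the machine's stopping distance. The deceleration, reaction time, and margin values below are hypothetical placeholders for illustration only.

```python
def stopping_distance_m(speed_mps, decel_mps2=2.0, reaction_s=0.5):
    """Distance needed to stop: reaction travel plus braking, v^2 / (2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def respond(obstacle_range_m, speed_mps, margin=1.5):
    """Choose a response without human intervention: stop if the object is
    inside the stopping envelope, slow if within a safety margin of it."""
    d_stop = stopping_distance_m(speed_mps)
    if obstacle_range_m <= d_stop:
        return "STOP"
    if obstacle_range_m <= margin * d_stop:
        return "SLOW"
    return "CONTINUE"
```

At 10 m/s the sketch gives a 30-m stopping envelope, so an object at 20 m triggers a stop, one at 40 m triggers a slowdown, and one at 60 m allows normal operation.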
A critical, and often overlooked, piece of the autonomy system is the wireless network that ties all of the systems and communication together. Without constant communication between the machines and the office, autonomy is not possible. Robust 24/7 communication is required in the demanding, nonstop environment of a mine, with strong coverage over the entire area where autonomous machines are expected to operate.
Key to success
Application of advanced technologies requires not only innovative systems but also integration with people and processes. To get the most from automation, any industry or application must alter its current processes to take advantage of the consistent, optimized operation autonomy provides. Operating costs are reduced and the benefits of autonomy are magnified through improved utilization, greater process consistency, and broader execution of best practices. Having autonomous technology is an advantage, but the maximum benefit is achieved when people, processes, and products are integrated and changed concurrently.
This article is based on SAE International technical paper 2016-01-8005 by Matthew Glover of Caterpillar’s Advanced Technologies and Solutions Division.