Arm Newsroom Podcast

ADAS and the Vision of Automotive Safety

Arm's Graeme Voller and OMNIVISION's Andy Hanvey discuss the road ahead for ADAS technology
The Arm Podcast · Viewpoints: ADAS and the Vision of Automotive Safety

Listen now on:

Apple Podcasts · Google Podcasts · Spotify

Summary

Previously seen as a premium feature in expensive cars, advanced driver assistance systems (ADAS) are being rapidly expanded into more affordable automotive tiers to support vehicle driving and parking functions such as surround view systems, adaptive cruise control, collision avoidance and lane departure warnings.

ADAS features require safety-capable image processing technology that supports both human and machine vision applications. To address this market, the Arm Mali-C78AE image signal processor can process data from up to four real-time or 16 virtual cameras, supporting Arm’s vision to optimize performance, minimize power consumption and provide a clear focus on functional safety for ADAS.

In this episode, Geof Wheelwright is joined by Graeme Voller, Senior Product Manager for Arm’s Automotive and IoT team, and Andy Hanvey, Director of Automotive Marketing, OMNIVISION. The group discusses ADAS, image signal processing, and the importance of functional safety. They also dig deeper into the recently announced Arm Mali-C78AE image signal processor and what it offers the market.

Additional Resources:

Arm Newsroom: Arm introduces new automotive image signal processor to advance adoption of driver assistance and automation technologies

Arm Solutions: Automotive

Speakers

Geof Wheelwright, Arm Viewpoints Host

Geof has worked as a journalist, author, broadcaster and consultant for more than three decades, and in a variety of technical content management, corporate communications and senior management roles at several technology companies. He has contributed to a broad range of media outlets, including The Guardian, the Financial Times, The Daily Telegraph, The Daily Mail, The Independent, Canada’s National Post, Time Magazine, Newsweek and a number of specialist technology industry sites (such as Geekwire) and travel titles (including Travel + Leisure).

Graeme Voller, Senior Product Manager, Arm

Graeme Voller is a Senior Product Manager for Arm’s Automotive and IoT Business, where he is the ISP Product Line Lead, and has more than 30 years’ experience across automotive and aerospace electronics systems. His background covers working in vehicle OEM and Tier 1 and 2 system design organisations. Over this period he has become a specialist in many applications, including vision, electric/hybrid and internal combustion (IC) powertrain, chassis/flight controls and testing systems.

Andy Hanvey, Director of Automotive Marketing, OMNIVISION

With over 25 years’ experience in imaging semiconductors, Andy Hanvey joined OMNIVISION in October 2016 and leads the Automotive Marketing organization. Andy has worked at Andor Technology and Aptina, and most recently at Imagination, holding a variety of positions in design, applications engineering, business development and marketing.

Transcript

Geof Wheelwright: We’re back with a new Arm Viewpoints podcast. And today we’re going to be talking about a topic that’s very close to my heart, driving. As an enthusiastic driver, I’m feeling really fortunate to be talking about how technology in the form of innovations such as advanced driver assistance systems or ADAS for short, and image signal processing are being used to make driving safer.

I have two guests today who will drive this conversation in interesting directions without too many detours. With us are Graeme Voller, Senior Product Manager for Arm’s Automotive and IoT team.

Graeme Voller: Thank you, Geof. Nice to be here.

Geof Wheelwright: And Andy Hanvey, Director of Automotive Marketing, OMNIVISION.

Andy Hanvey: Thank you, Geof.

Geof Wheelwright: So, before we get to our discussion, I should probably tell our listeners a little bit more about the technologies we’re going to discuss.

To start with, there’s ADAS: a group of electronic technologies that assist drivers in both driving and parking. ADAS uses multiple cameras, for example, to provide surround view systems that use data from cameras around the vehicle to display information that helps drivers make decisions while parking.

Rather than just providing what you might have previously seen in your rear-view mirror, ADAS shows drivers the full context of their surroundings.

Then there’s adaptive cruise control, which directly uses camera data to interpret the world around the vehicle and can make independent decisions for the driver about vehicle control, such as applying the throttle or brake.

Drivers are increasingly depending on ADAS applications such as collision avoidance, lane departure warnings and automated emergency braking, and vehicles increasingly rely on cameras positioned around the car to enable many of these features. So what I’m going to start with is looking at the importance of ADAS for the automotive industry.

Maybe you can tell us a bit about how consumers are influencing ADAS technology development and how the automotive industry is responding?

Graeme Voller: Alright, thank you, Geof. So the market for ADAS systems is growing and expected to continue to grow over the next five years, and camera systems are one of the most important parts of that.

And it’s moving away from being a luxury feature on the more expensive vehicles to a standard fit on the majority of the world’s new cars, and some of this is being driven by regulation. Certainly we’ve seen that with rear cameras becoming a regulatory requirement on vehicles for reversing.

And so these functions that were typically an option are going to become standard features. What this is going to do is make the cameras a really key part of those systems, and the vehicle compute needs the right information to make these decisions.

And as the cameras increase in number, there are technological advancements putting pressure on the systems: the complexity and quantity of data to be processed, and the different types of outputs needed for display to the driver, what we call human vision, and for machine vision, which covers the ADAS functions of lane keeping assist, collision avoidance and so on. They all have really different requirements.

And this leads to a lot of duplication of signals and cameras and additional compute on the vehicles.

Andy Hanvey: At a high level, I would say consumers do report high satisfaction and perceived benefit with these functionalities when the systems meet their expectations. If we look at the traditional ADAS camera, which is what we’d call front view, it’s basically a camera that looks ahead with different FOVs (fields of view). These typically capture images of the road, street signs or pedestrians, depending on what use case you need to cover, and this is then analyzed by algorithms and supporting software and hardware that trigger a response. A recent trend is that ADAS cameras need to perform more than just the traditional machine vision task, and this is where you’re adding in the human functionality. A good example could be the traditional front view ADAS camera also needing to perform as a car DVR (digital video recorder).

Geof Wheelwright: So Graeme, that brings us to your new technology related to automotive signal processing. Maybe you can tell us a bit about that?

Graeme Voller: Yes. So Arm has a new image signal processor, named Arm Mali-C78AE. It’s part of our AE line of safety-capable intellectual property and it’s intended specifically for combined ADAS and human vision applications.

As Andy just referenced, the cameras that are primarily used for machine vision, those forward-facing collision avoidance-type cameras, are also being used to provide a display to the driver in the surround view application during parking. So this is a dual use and it brings some technological challenges around the image processing, in terms of making sure that you can use these cameras effectively and efficiently in both use cases.

Arm Mali-C78AE is an important element of the specialized processing required for ADAS systems. In combination with Arm’s Cortex-A78AE CPU and Mali-G78AE GPU, known as the 78s to those of us on the inside, it provides a full ADAS and display vision pipeline, which optimizes performance, minimizes power consumption and provides a consistent approach to functional safety.

And as we mentioned before about the ADAS functions, the C78AE is designed specifically to address these human and machine vision applications and processes data from up to four real-time or 16 virtual cameras. Those are cameras where the data is streamed to memory and is then processed as required, either for display or for the machine.

So it’s not a direct link between the camera and the display. These are all used for automated driving functions, such as lane keeping assist, as well as presenting that full surround view to the driver when parking. Clearly you don’t need to present those images to the driver while the vehicle is traveling at high speed and performing automated functions, but they all need to be high-quality, accurate and timely images for the driver when using them to manoeuvre the vehicle.

Geof Wheelwright: Andy, it’s clear that image signal processing is important, but how are you using it in your solutions today?

Andy Hanvey: A good question, Geof. We are leveraging it in a number of ways. For example, in some of our products we have added the ability to output more than one ROI (region of interest) from the image sensor at the same time. This has a number of motivations. One is to be able to process machine vision with one ROI and use another ROI for human viewing. Another is that you can use both ROIs for different machine vision processing. In addition, we are able to support multiple color filter arrays, or CFAs.

There is quite a wide range of them, including RGGB, RCCB, RGBIR and RYYCY. Each CFA has trade-offs, and the need to support a number of CFAs relates to several factors. One of those factors is that different algorithms are optimized using different CFAs. Another is that RGBIR, specifically, has a very specific use case.
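To picture what outputting more than one ROI from the same sensor means, here is a minimal conceptual sketch in Python. This is not OMNIVISION code, and real sensors crop ROIs on-chip before output; the frame size, ROI coordinates and helper function below are all hypothetical, chosen purely for illustration.

```python
import numpy as np

# Hypothetical raw frame from a single exposure (values are placeholders).
# Real sensors output ROIs on-chip; this only mimics the idea of taking two
# independent regions from the same capture.
frame = np.zeros((1280, 1920), dtype=np.uint16)

def crop_roi(img: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Return a view of the requested region of interest."""
    return img[y:y + h, x:x + w]

# ROI 1: central region at full resolution, fed to machine-vision algorithms.
mv_roi = crop_roi(frame, x=480, y=320, w=960, h=640)

# ROI 2: full field of view, later downscaled and tone-mapped for the driver display.
hv_roi = crop_roi(frame, x=0, y=0, w=1920, h=1280)

print(mv_roi.shape, hv_roi.shape)  # (640, 960) (1280, 1920)
```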

Geof Wheelwright: Underpinning all of this, of course, is safety. According to the World Health Organization, something like 1.3 million people lose their lives in road traffic accidents every year. While the circumstances of those accidents can vary, there’s one factor that unites far too many of them, and that’s human error. With all of this machine vision technology and the potential risks involved, safety is clearly going to be critical. Graeme, can you explain how it’s being used to make us safer?

Graeme Voller: Absolutely, safety is paramount, as any fault or failure in the operation of a vision system could be dangerous or threaten the well-being of the driver and the passengers, or of other road users as well. As with all of Arm’s AE IP, Mali-C78AE was developed from the ground up with hardware safety mechanisms and includes diagnostic software features to enable those safety mechanisms. This allows designers to meet ISO 26262 ASIL B functional safety requirements, which are driven by an industry standard for how functional safety is applied to vehicle systems to protect both the occupants and other road users.

Andy Hanvey: As Graeme says, safety is absolutely important and paramount: as well as in the Arm Mali-C78AE, it’s also very critical in the image sensor. The functions and features included in the image sensor are wide-ranging, from things like BIST (built-in self-test) when you power the sensor up to test patterns, voltage monitoring and temperature monitoring during run time. The outputs from these mechanisms can also be sent to the wider system, where the Mali-C78AE can use some of that information, and this helps the whole system become safer and meet its safety goals.

Graeme Voller: So as Andy said, at the system level it’s very important to have a safety goal, and the ISP’s part in that is to prevent or detect faults in a single camera frame that may result in incorrectly processed frame data.

So to do this, the ISP has over 380 fault detection circuits, including the continuous built-in self test, and comes with a diagnostic software package to manage these safety mechanisms. All of which combined prevents those faults from propagating into subsequent frames, where either the driver would be presented with incorrect information, or where the machine vision system would be making decisions based on faulted data.

Geof Wheelwright: So all of that sounds like a lot of technology and safety requirements. If the cameras are having to detect movements incredibly quickly to safeguard people, just how fast do they have to be? I’m keen to hear both of your views, but let’s start with Graeme.

Graeme Voller: So processing speeds are a key element of ADAS. It should take less than 150 milliseconds to acquire an image at the sensor, process it through the ISP and the GPU, and then display it on the screen to the driver; that’s a process we call glass-to-glass. Any longer than 150 milliseconds is noticeable to the driver when using the parking assist, and you’ll perceive it as a delay. In a machine vision application, vehicles shouldn’t really travel more than around 250 millimeters, at any speed, between an image being acquired and it being presented to the decision-making engine. Anything longer than that means the system is too slow to react in driving situations where accurate and timely decisions are critical.
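To put those figures in context, here is a small, purely illustrative calculation of what a 250 mm travel budget implies for capture-to-decision latency at a few example speeds. Only the 150 ms glass-to-glass and 250 mm figures come from the conversation; the speeds and the code itself are illustrative assumptions.

```python
# Illustrative latency budgets for a machine-vision pipeline.

TRAVEL_BUDGET_M = 0.250  # max distance travelled between image capture and decision input

def max_latency_for_speed(speed_kmh: float) -> float:
    """Maximum acceptable capture-to-decision latency (seconds) so that the
    vehicle travels no more than TRAVEL_BUDGET_M in that time."""
    speed_ms = speed_kmh / 3.6
    return TRAVEL_BUDGET_M / speed_ms

for speed in (10, 50, 130):  # km/h: parking, urban, motorway
    print(f"{speed:>3} km/h -> max latency ~{max_latency_for_speed(speed) * 1000:.1f} ms")

# Output:
#  10 km/h -> max latency ~90.0 ms
#  50 km/h -> max latency ~18.0 ms
# 130 km/h -> max latency ~6.9 ms
```

In other words, the faster the vehicle travels, the tighter the capture-to-decision budget becomes, which is why the machine-vision path is even more latency-sensitive than the 150 millisecond glass-to-glass budget for the parking display.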

Andy Hanvey: At the image sensor level, think of it as the light traveling through the optics, which are made up of the imaging lens plus the image sensor’s own lens. The pixel then converts the light into an analog electrical signal, and this signal is converted to a digital number with additional processing before it is formatted and output from the image sensor over digital interfaces like MIPI.

Graeme Voller: To enable drivers and machines to make the best possible decisions, vision systems must collect the most relevant information possible from each frame. Mali-C78AE employs advanced noise reduction technology and dynamic range management to ensure that each frame is clear and properly exposed, adjusting overly dark or bright areas in the frame. This essentially replicates the process that the human eye performs.

It can process real-time data from up to four high-resolution, high-frame-rate cameras, which significantly reduces memory, communications and processing requirements, making for a much more efficient system. To reduce the cost of implementing multiple ADAS functions, Mali-C78AE enables camera sensors to be dual-purposed, downscaling and color-translating the outputs of sensors optimized for machine vision to create images adapted to the human eye.

Andy Hanvey: By avoiding duplication in cameras and their associated electronics and wiring, OEMs save on cost and complexity, and can therefore enable wider deployment and adoption of camera-based ADAS functions.
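As a rough illustration of what downscaling and tone-adjusting a machine-vision stream for human viewing can involve, here is a short Python sketch using OpenCV. This is not the Mali-C78AE’s actual processing; the target resolution, the simple gamma curve and the use of OpenCV are assumptions made purely for illustration.

```python
import cv2
import numpy as np

def to_display_image(raw_frame: np.ndarray) -> np.ndarray:
    """Downscale and tone-map a machine-vision frame for a driver display."""
    # Downscale from sensor resolution to the display's resolution.
    small = cv2.resize(raw_frame, (1280, 720), interpolation=cv2.INTER_AREA)
    # Simple gamma curve as a stand-in for dynamic-range management,
    # lifting dark regions so they remain visible to the human eye.
    normalized = small.astype(np.float32) / 255.0
    tone_mapped = np.power(normalized, 1 / 2.2)
    return (tone_mapped * 255).astype(np.uint8)

# Usage with a synthetic 1920x1280 BGR frame standing in for camera output.
frame = np.random.randint(0, 256, (1280, 1920, 3), dtype=np.uint8)
display = to_display_image(frame)
print(display.shape)  # (720, 1280, 3)
```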

Geof Wheelwright: So Andy, what should drivers expect? What does the future hold for ADAS technology, and what gets you excited about the future?

Andy Hanvey: Today’s automotive vision systems are very capable, but the complexity and range of functions they’re being asked to perform is ever-expanding as we move to more automation and demand higher levels of safety. A couple of trends we’re seeing in the automotive camera space are the need for more cameras and the move away from the single RVC (rear-view camera) towards SVS (surround view systems), with the surround view system adding more functionality for both viewing and machine vision.

Then there are ADAS and AD systems adding more cameras, potentially more than eight around a car, and that doesn’t even include surround view or interior cameras.

Graeme Voller: To support this expansion in capability, image sensing and processing technology needs to be accurate, fast and reliable. Along with displaying to the vehicle operator, the hardware needs to support and complement the sophisticated ADAS and AD perception software in order to minimize system workloads and maximize the effectiveness of machine vision algorithms.

And you asked what the future holds and what gets us excited about it. I think the whole of the future is what gets us excited: the opportunity to shape the automotive market. The automotive companies, the semiconductor industry, Arm ourselves and people like OMNIVISION have got to do all we can to deliver the global adoption of ever more capable ADAS systems that save more lives, whether those are the vehicle occupants themselves or other road users. And these systems need to be cost-effective to drive adoption across the whole of the vehicle market.

They can’t be a luxury item. Safety should be something that comes as standard in all vehicles, and Arm is playing its part through technology innovation and close ecosystem partnerships within the automotive industry, for example with vision technology leaders such as OMNIVISION.

Geof Wheelwright: Thanks Graeme and Andy, you’ve definitely given us a whole new perspective on where vehicle safety systems are going. And you’ve given me an excuse to visit an automotive showroom sometime soon; suddenly my 12-year-old car just doesn’t cut it anymore. We hope you’ve enjoyed today’s conversation and we look forward to exploring where else technology will take us in the next episode of Arm Viewpoints. Thanks again for listening.
