Low-latency, high-performance video processing circuit block built by Renesas for autonomous driving
Renesas Electronics, a supplier of advanced semiconductor solutions, today announced the development of a new video processing circuit block for use in automotive computing system-on-chips (SoCs) in future autonomous vehicles.
Automotive computing SoCs for autonomous vehicles must integrate the functionality of both in-vehicle infotainment systems and driving safety support systems, and operate both in parallel. In particular, driving safety support systems must process video data from vehicle cameras with low latency so that the driver can be notified of relevant information in a timely manner. Developers of in-vehicle infotainment systems and driving safety support systems therefore face the challenge of processing large amounts of video data while also performing autonomous vehicle control functions, without delays or instability.
The newly developed video processing circuit block handles vehicle camera video with low latency. It performs real-time processing of large volumes of video data with low power consumption, without imposing any additional load on the CPU or graphics processing unit (GPU), which are responsible for autonomous vehicle control. Renesas has manufactured prototypes of the new video processing circuit block using a 16-nanometer (nm) FinFET process. In addition to processing vehicle camera video with a latency of 70 ms, it delivers industry-leading 12-channel Full-HD video processing with only 197 mW of power consumption.
Recently, in-vehicle infotainment systems that foreshadow future autonomous vehicles, such as car navigation systems and advanced driver assistance systems (ADAS), have advanced significantly, bringing them closer to automotive computing systems that integrate the functionality of both in-vehicle infotainment systems and driving safety support systems.
Driving safety support systems are expected to perform cognitive processing based on video transferred from vehicle cameras, such as identifying obstacles, monitoring the status of the driver, and anticipating and avoiding hazards. With the appearance of devices such as the R-Car T2 vehicle camera network SoC from Renesas, video data transferred from vehicle cameras can be expected to arrive as encoded video streams, which driving safety support systems must decode. To perform cognitive processing correctly on images from wide-angle cameras, the video data must also be corrected for lens distortion. This video processing must be accomplished with low latency so that the system can notify the driver of relevant information in a timely manner.
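To illustrate the distortion-correction step described above, the sketch below implements a simple radial (barrel) undistortion by inverse mapping in pure NumPy. This is a generic textbook technique, not Renesas' implementation; the polynomial model, coefficients `k1`/`k2`, and nearest-neighbour sampling are illustrative assumptions, and a production pipeline would run this in dedicated hardware per frame.

```python
import numpy as np

def undistort_radial(img, k1, k2):
    """Correct simple radial (barrel) distortion by inverse mapping.

    For each pixel of the corrected output image, compute where it
    originated in the distorted input using the polynomial model
        r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4)
    and sample that source pixel (nearest-neighbour lookup).
    Illustrative only -- real camera pipelines use calibrated models.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w), dtype=np.float64)
    # Normalised coordinates of the undistorted output grid
    xn = (xs - cx) / cx
    yn = (ys - cy) / cy
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    # Map back to source coordinates in the distorted input
    src_x = np.clip(xn * scale * cx + cx, 0, w - 1).round().astype(int)
    src_y = np.clip(yn * scale * cy + cy, 0, h - 1).round().astype(int)
    return img[src_y, src_x]
```

With `k1 = k2 = 0` the mapping is the identity, which gives a quick sanity check; positive coefficients pull in the stretched periphery typical of wide-angle footage.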
On the other hand, in-vehicle infotainment systems are capable of interoperating with a variety of devices and services, including smartphones and cloud-based services, and therefore data from a large number of external video sources are input to the system. At the same time, it is becoming more common for vehicles to be equipped with multiple interior displays including rear-seat monitors. This means the system must be able to handle simultaneous display of multiple video signals. In-vehicle infotainment systems must have sufficient performance to process and display large volumes of video data in real time.
The newly developed video processing circuit block can decode video streams transferred from vehicle cameras and apply distortion correction, with low latency. It performs the complex video processing required by automotive computing systems, delivering real-time performance and low power consumption, while imposing no additional load on the CPU and GPU responsible for cognitive processing tasks.