Driving smarter: Solving the challenges of autonomous vehicles with AI

John Redford, VP Architecture, FiveAI

There is no longer any doubt that autonomous vehicles (AVs) will soon be a reality on our roads. However, multiple challenges must first be overcome. Initial AVs will be restricted to operating on certain “known” roads, and these early vehicles won’t be truly autonomous: they will still require human control and oversight at times.

Early deployments will probably be restricted to highway environments, and AVs will need the ability to hand the driving task back to a human when scenarios on the road get too complex.

Despite acres of newsprint and opinion, autonomous driving in the urban environment remains a largely unsolved challenge. Solving this puzzle will deliver massive benefits in terms of increased safety and driver leisure time, and reductions in pollution, congestion and cost, says John Redford, VP Architecture, FiveAI.

When AVs can safely deliver end-to-end journeys anywhere (defined by the SAE as Level 5 autonomy), with zero occupancy where necessary, there will be a dramatic shift away from the car ownership levels we see today towards mobility as a service (think about an Uber-like service, but the car will drive itself).

To get to this point we must overcome many implementation challenges. This is highly complex technology that must be built using commercially feasible hardware, and it will need to be tested and validated to ensure it is safe.

Major scientific challenges also remain, and these can be split roughly under two headings:

    • Perception, i.e. what is happening right now in a given environment
    • Intention modelling, i.e. what will happen next in that environment

There is a natural distinction between these two problem spaces, but solving both will require novel applications of artificial intelligence (AI). For perception, there is a single, determinable truth that describes what is happening in a given scene at this instant, and what has happened up to it.

Some factors are knowable: the instantaneous position of objects and actors relative to the ego-vehicle, their velocity, state, type and pose, and so on. Capturing this information depends on sensor and software performance, but also on the vehicle’s position of observation: occlusions, lighting and other environmental hazards all affect an AV’s ability to observe the scene.
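As a concrete illustration, the knowable per-actor state described above might be represented roughly as follows. This is a minimal Python sketch: the class, fields and occlusion threshold are hypothetical, not FiveAI’s actual data model.

```python
from dataclasses import dataclass
from enum import Enum

class ActorType(Enum):
    VEHICLE = "vehicle"
    PEDESTRIAN = "pedestrian"
    CYCLIST = "cyclist"

@dataclass
class PerceivedActor:
    """One actor's instantaneous state, expressed in the ego-vehicle frame."""
    actor_type: ActorType
    position: tuple      # metres (x forward, y left)
    velocity: tuple      # metres per second
    heading: float       # pose yaw, radians
    occluded_fraction: float  # 0.0 fully visible .. 1.0 fully hidden

    def is_reliably_observed(self, max_occlusion: float = 0.5) -> bool:
        # Heavily occluded actors should be treated with extra caution.
        return self.occluded_fraction <= max_occlusion
```

A perception stack would emit a list of such records many times per second, with the occlusion field capturing exactly the observation-perspective limits discussed above.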

So superior design of a perception system will enable more accurate knowledge of precisely what is happening in a scene, but this will always be limited by the observation perspective (what we would call “field of vision” in a human).

Computer vision is used to interpret this data, a field that has advanced in leaps and bounds in recent years. A significant breakthrough came in 2012, when convolutional neural networks (CNNs) were shown to vastly improve image classification over the previous state of the art. Those techniques have advanced further since, and software can now outperform humans in most visual tasks required for driving.
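The core operation behind a CNN is a small filter slid across the image, whose response map highlights local patterns. A minimal NumPy sketch of that building block follows; the edge-detecting kernel here is hand-written for illustration, whereas in a real CNN the filter weights are learned from data.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation: the basic operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly where intensity changes
# left-to-right, e.g. at the boundary of a lane marking.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
response = conv2d(image, edge_kernel)  # peaks at the 0->1 boundary column
```

A trained CNN stacks many such layers, with nonlinearities between them, so that later layers respond to increasingly abstract patterns such as wheels, pedestrians or road signs.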

This better-than-human perception ability will help deliver autonomous vehicles that are far less likely to suffer collisions than human drivers. But for autonomous vehicles to gain public acceptance they will have to drive in a way that other (human) road users expect, and they will have to make progress in busy traffic – driving assertively where necessary. This means AVs need the ability to anticipate what is likely to happen next in a scene. This remains an unsolved – and critical – challenge.

The future is intrinsically uncertain. Previously hidden objects or actors can enter the scene, and the dynamic objects already in it are unpredictable. Autonomous vehicles must therefore continually evaluate how the scene might evolve, predicting hazards that could arise even at very low probability levels.

In real time, AVs need to infer beliefs about the intentions of each actor in a scene, then use learnt motion behaviours (relative to the road topology) in probabilistic real-time models to determine the possible actor paths consistent with those behaviour policies.
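A crude sketch of what such probabilistic intention modelling might look like, assuming a small discrete set of candidate intentions and a toy motion model. All names and numbers here are illustrative, not a production system: the belief update is standard Bayes' rule, and the rollout is a Monte Carlo stand-in for the learnt behaviour policies described above.

```python
import math
import random

# Hypothetical discrete intentions for a vehicle approaching a junction.
INTENTIONS = ["straight", "turn_left", "turn_right"]

def update_beliefs(beliefs: dict, likelihoods: dict) -> dict:
    """Bayes' rule: P(intent | obs) is proportional to P(obs | intent) * P(intent)."""
    posterior = {i: beliefs[i] * likelihoods[i] for i in INTENTIONS}
    total = sum(posterior.values())
    return {i: p / total for i, p in posterior.items()}

def sample_paths(intent: str, n: int = 100, horizon: int = 5, seed: int = 0) -> list:
    """Monte Carlo rollouts of one actor's future positions under one
    intention hypothesis, using a toy constant-turn-rate motion model."""
    rng = random.Random(seed)
    turn_rate = {"straight": 0.0, "turn_left": 0.3, "turn_right": -0.3}[intent]
    paths = []
    for _ in range(n):
        x, y, heading = 0.0, 0.0, 0.0
        path = []
        for _ in range(horizon):
            heading += turn_rate + rng.gauss(0.0, 0.05)  # heading noise
            x += math.cos(heading)
            y += math.sin(heading)
            path.append((x, y))
        paths.append(path)
    return paths

# Start from a uniform prior, then fold in evidence favouring "straight".
prior = {i: 1.0 / 3.0 for i in INTENTIONS}
likelihoods = {"straight": 0.8, "turn_left": 0.1, "turn_right": 0.1}
posterior = update_beliefs(prior, likelihoods)
```

The sampled paths for each surviving hypothesis give the planner a probability-weighted picture of where the actor could plausibly be over the next few seconds.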

Intention modelling will rely on AI techniques such as counterfactual reasoning to interrogate “perceived” against “predicted” motion, ensuring AVs behave like other human road users, avoid collisions, and find the cooperative-competitive behaviours needed to avoid motion freezes.
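One simple way to interrogate “perceived” against “predicted” motion is to score each intention hypothesis by how far its prediction missed what was actually observed, and discard the inconsistent ones. The sketch below is a hedged illustration of that idea only; the threshold, names and values are invented, and a real system would use a proper likelihood model rather than a hard cut-off.

```python
def surprise(predicted: tuple, observed: tuple) -> float:
    """Squared distance between the predicted and perceived positions of
    an actor: a crude score for one intention hypothesis."""
    dx = predicted[0] - observed[0]
    dy = predicted[1] - observed[1]
    return dx * dx + dy * dy

def prune_hypotheses(hypotheses: dict, observed: tuple, threshold: float = 1.0) -> dict:
    """Keep only intention hypotheses whose prediction matched what actually
    happened; the rest are counterfactuals the actor has ruled out."""
    return {name: pred for name, pred in hypotheses.items()
            if surprise(pred, observed) <= threshold}

# Hypothetical one-step-ahead predictions for each candidate intention,
# versus where the actor was actually perceived a moment later.
hypotheses = {"straight": (10.0, 0.0), "turn_left": (8.0, 4.0)}
observed = (9.8, 0.3)
consistent = prune_hypotheses(hypotheses, observed)  # only "straight" survives
```

Repeating this check every perception cycle lets the vehicle's beliefs converge on the intentions that keep explaining the observed motion.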

These areas of emerging science have been gathering momentum in academic circles over the last 10 years. At FiveAI, we’re taking the best-in-class of this research, implementing it as automotive-grade software, and developing the test and validation strategies to prove its safety. In doing so, we will move autonomous vehicles from science fiction into reality, with all the associated societal benefits.
