It will never happen to me: Some rules are required to protect society from unintended consequences of AI systems

Since Asimov wrote his Three Laws of Robotics in 1942, philosophers have debated how to ensure that autonomous systems are safe from unintended consequences. As the capabilities of AI have grown, academics and industry leaders have stepped up their collaboration in this area, notably at the Asilomar conference on Beneficial AI in 2017 and through the work of the Future of Life Institute and OpenAI, says Michal Gabrielczyk, senior consultant, Technology Strategy at Cambridge Consultants.

With autonomous systems becoming more powerful, the impact of errors also scales: structural discrimination in training data can be amplified into life-changing impacts, entirely unintentionally. As these risks have become better understood, politicians around the world have started debating how to deal with the impact of the rapid growth in AI capabilities.

  • The Japanese government was an early proponent of harmonised rules for AI systems, proposing a set of 8 principles to the G7 in April 2016
  • In 2016 the White House published two reports “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence” highlighting opportunities and areas where regulatory thinking needed to develop in the USA
  • In 2017 the European Parliament Legal Affairs Committee made recommendations about EU-wide liability rules for AI and robotics. MEPs also asked the European Commission to consider establishing a European agency for robotics and AI to provide technical, ethical and regulatory expertise to public bodies
  • The UK’s House of Commons Select Committee investigation into robotics and AI concluded that it was too soon to be setting a legal or regulatory framework but did highlight priorities that would require public dialogue and eventually standards or regulation
  • The domain of autonomous vehicles, being somewhat more tangible than many other applications for AI, seems to have seen the most progress on developing rules. For example, the Singaporean, US and German governments have set out draft regulatory frameworks for autonomous vehicles. These are much more concrete than the general principles being talked about for other applications of AI

In response to a perceived legislative gap many businesses are putting in place their own standards to deal with legal and ethical concerns:

  • At an individual business level, Google DeepMind has its own ethics board and Independent Reviewers
  • At an industry level, the Partnership on AI between Amazon, Apple, Google DeepMind, Facebook, IBM, and Microsoft was formed to study and share best practice

As long as these bottom-up, industry-led efforts prevent serious accidents and problems, policymakers are unlikely to prioritise setting laws and regulations. That could benefit AI developers by preventing innovation from being stifled by potentially heavy-handed rules. On the other hand, it might simply store up a knee-jerk reaction for later: accidents are perhaps inevitable, and the goals of businesses and governments are not necessarily completely aligned.

Regardless of how rules are set and who imposes them, consensus is emerging around the following principles as the important ones to capture in law and working practices:

  • Responsibility: There needs to be a specific person responsible for effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes
  • Explainability: It needs to be possible to explain to people impacted (often laypeople) why the behaviour is what it is
  • Accuracy: Sources of error need to be identified, monitored, evaluated and if appropriate mitigated against or removed
  • Transparency: It needs to be possible to test, review, criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluations should be made publicly available and explained
  • Fairness: The way in which data is used should be reasonable and respect privacy. This will help remove biases and prevent other problematic behaviour becoming embedded
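The accuracy and fairness principles above imply concrete, monitorable checks. As a minimal illustrative sketch (the metric, function names and data here are assumptions for illustration, not a prescribed standard), the gap in favourable-outcome rates between two demographic groups can be measured and tracked over time:

```python
# Illustrative fairness check: demographic parity gap between two groups.
# Outcomes are encoded as 1 (favourable decision) or 0 (unfavourable).

def selection_rate(outcomes):
    """Fraction of favourable outcomes within a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_a, outcomes_b):
    """Absolute difference in selection rates between two groups.
    A gap near 0 suggests similar treatment; a large gap flags a
    potential embedded bias that should be investigated."""
    return abs(selection_rate(outcomes_a) - selection_rate(outcomes_b))

# Hypothetical loan-approval decisions for two demographic groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3 of 8 approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
```

In practice such a metric would be one of several audited regularly (supporting the responsibility and transparency principles as well), with thresholds and remediation steps agreed in advance.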

Together, these principles, however they might be enshrined in standards, rules and regulations, would give a framework for the field of AI to flourish whilst minimising risks to society from unintended consequences.

The author of this blog is Michal Gabrielczyk, senior consultant, Technology Strategy at Cambridge Consultants

