It will never happen to me: Some rules are required to protect society from unintended consequences of AI systems

Since Isaac Asimov wrote his Three Laws of Robotics in 1942, philosophers have debated how to ensure that autonomous systems are safe from unintended consequences. As the capabilities of AI have grown, academics and industry leaders have stepped up their collaboration in this area, notably at the Asilomar conference on Beneficial AI in 2017 and through the work of the Future of Life Institute and OpenAI, says Michal Gabrielczyk, senior consultant, Technology Strategy at Cambridge Consultants.

As autonomous systems become more powerful, the impact of errors scales with them: structural discrimination in training data can be amplified into life-changing impacts entirely unintentionally. As these risks have become better understood, politicians around the world have started debating how to deal with the impact of the rapid growth in AI capabilities.
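The amplification effect is easy to illustrate with a toy sketch. The data below is entirely hypothetical: a naive "model" that learns only the majority outcome for each group turns a modest 60/40 skew in historical decisions into an absolute 100/0 split in its predictions.

```python
# Illustrative sketch (hypothetical data): a model trained on skewed
# historical decisions can amplify the skew in its outputs.
from collections import Counter

# Historical decisions: group A approved 60% of the time, group B 40%
history = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 40 + [("B", 0)] * 60

# A naive "model" that simply predicts each group's majority outcome
majority = {}
for group in ("A", "B"):
    outcomes = [y for g, y in history if g == group]
    majority[group] = Counter(outcomes).most_common(1)[0][0]

# The 60/40 disparity in the data becomes 100/0 in the predictions
print(majority)  # {'A': 1, 'B': 0}
```

Real models are rarely this crude, but the mechanism — optimising for the majority pattern in biased data — is the same one regulators are worried about.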

  • The Japanese government was an early proponent of harmonised rules for AI systems, proposing a set of 8 principles to the G7 in April 2016
  • In 2016 the White House published two reports “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence” highlighting opportunities and areas where regulatory thinking needed to develop in the USA
  • In 2017 the European Parliament Legal Affairs Committee made recommendations about EU-wide liability rules for AI and robotics. MEPs also asked the European Commission to consider establishing a European agency for robotics and AI to provide technical, ethical and regulatory expertise to public bodies
  • The UK’s House of Commons Select Committee investigation into robotics and AI concluded that it was too soon to be setting a legal or regulatory framework but did highlight priorities that would require public dialogue and eventually standards or regulation
  • The domain of autonomous vehicles, being somewhat more tangible than many other applications for AI, seems to have seen the most progress on developing rules. For example, the Singaporean, US and German governments have set out draft regulatory frameworks for autonomous vehicles. These are much more concrete than the general principles being talked about for other applications of AI

In response to a perceived legislative gap, many businesses are putting in place their own standards to deal with legal and ethical concerns:

  • At an individual business level, Google DeepMind has its own ethics board and Independent Reviewers
  • At an industry level, the Partnership on AI between Amazon, Apple, Google DeepMind, Facebook, IBM and Microsoft was formed in early 2017 to study and share best practice

As long as these bottom-up, industry-led efforts prevent serious accidents and problems, policymakers are unlikely to prioritise setting laws and regulations. That could benefit AI developers by preventing innovation from being stifled by potentially heavy-handed rules. On the other hand, it might simply store up a knee-jerk reaction for later: accidents are perhaps inevitable, and the goals of businesses and governments are not necessarily aligned.

Regardless of how rules are set and who imposes them, a consensus is emerging around the following principles as the most important to capture in law and working practices:

  • Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes
  • Explainability: It needs to be possible to explain to people impacted (often laypeople) why the behaviour is what it is
  • Accuracy: Sources of error need to be identified, monitored, evaluated and if appropriate mitigated against or removed
  • Transparency: It needs to be possible to test, review, criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluations should be made public and explained
  • Fairness: The way in which data is used should be reasonable and respect privacy. This will help remove biases and prevent other problematic behaviour from becoming embedded
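Several of these principles — accuracy, transparency and fairness in particular — lend themselves to simple automated audits. As a minimal sketch (the metric choice and data are illustrative assumptions, not part of the article), one widely used fairness check compares the positive-prediction rates a system produces for different groups:

```python
# Minimal fairness-audit sketch: the "demographic parity gap" is the
# difference in positive-outcome rates between the best- and
# worst-treated groups. A gap near 0 means similar treatment.
def demographic_parity_gap(predictions, groups):
    """Largest absolute difference in positive-prediction rates between groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical audit data: group A gets positive outcomes 4/5 of the
# time, group B only 1/5 of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # ≈ 0.6
```

Publishing the results of checks like this, as the transparency principle suggests, is one concrete way the framework could be put into working practice.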

Together, these principles, however they might be enshrined in standards, rules and regulations, would give a framework for the field of AI to flourish whilst minimising risks to society from unintended consequences.

The author of this blog is Michal Gabrielczyk, senior consultant, Technology Strategy at Cambridge Consultants


