It will never happen to me: Some rules are required to protect society from unintended consequences of AI systems

Posted by Zenobia Hegde, February 26, 2018

Since Asimov wrote his Three Laws of Robotics in 1942, philosophers have debated how to ensure that autonomous systems are safe from unintended consequences. As the capabilities of AI have grown, academics and industry leaders have stepped up their collaboration in this area – notably at the Asilomar conference on Beneficial AI in 2017 and through the work of the Future of Life Institute and the OpenAI organisation, says Michal Gabrielczyk, senior consultant, Technology Strategy at Cambridge Consultants.

With autonomous systems becoming more powerful, the impact of errors also scales – structural discrimination in training data can be amplified into life-changing impacts entirely unintentionally. As these risks have become better understood, politicians around the world have started debating how to deal with the impact of rapid growth in the capabilities of AI.

  • The Japanese government was an early proponent of harmonised rules for AI systems, proposing a set of eight principles to the G7 in April 2016
  • In 2016 the White House published two reports “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence” highlighting opportunities and areas where regulatory thinking needed to develop in the USA
  • In 2017 the European Parliament’s Legal Affairs Committee made recommendations about EU-wide liability rules for AI and robotics. MEPs also asked the European Commission to consider establishing a European agency for robotics and AI to provide technical, ethical and regulatory expertise to public bodies
  • The UK’s House of Commons Select Committee investigation into robotics and AI concluded that it was too soon to be setting a legal or regulatory framework but did highlight priorities that would require public dialogue and eventually standards or regulation
  • The domain of autonomous vehicles, being somewhat more tangible than many other applications for AI, seems to have seen the most progress on developing rules. For example, the Singaporean, US and German governments have set out draft regulatory frameworks for autonomous vehicles. These are much more concrete than the general principles being talked about for other applications of AI

In response to a perceived legislative gap, many businesses are putting in place their own standards to deal with legal and ethical concerns:

  • At an individual business level, Google DeepMind has its own ethics board and Independent Reviewers
  • At an industry level, the Partnership on AI between Amazon, Apple, Google DeepMind, Facebook, IBM, and Microsoft was formed in early 2017 to study and share best practice

As long as these bottom-up, industry-led efforts prevent serious accidents and problems, policymakers are unlikely to give much priority to setting laws and regulations. That could benefit AI developers by preventing innovation being stifled by potentially heavy-handed rules. On the other hand, this might just store up a knee-jerk reaction for later – accidents are perhaps inevitable, and the goals of businesses and governments are not necessarily completely aligned.

Regardless of the way in which rules are set and who imposes them, consensus is emerging around the following principles as the important ones to capture in law and working practices:

  • Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes
  • Explainability: It needs to be possible to explain to the people impacted (often laypeople) why the system behaves as it does
  • Accuracy: Sources of error need to be identified, monitored, evaluated and, where appropriate, mitigated against or removed
  • Transparency: It needs to be possible to test, review, criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluations should be made publicly available and explained
  • Fairness: The way in which data is used should be reasonable and should respect privacy. This will help remove biases and prevent other problematic behaviour becoming embedded

Together, these principles – however they might be enshrined in standards, rules and regulations – would provide a framework for the field of AI to flourish whilst minimising the risks to society from unintended consequences.

The author of this blog is Michal Gabrielczyk, senior consultant, Technology Strategy at Cambridge Consultants.

Comment on this article below or via Twitter: @IoTNow_OR @jcIoTnow
