UK cannot afford to dither on AI

The Committee on Standards in Public Life has published its report on AI. Jonathan Evans, the Committee's Chair, commented that as government organisations adopt and utilise AI, it is important that they apply the seven principles of public life first outlined by Lord Nolan: honesty, integrity, objectivity, openness, leadership, selflessness and accountability.
The report found that these standards were currently applied inconsistently, or not at all, when it came to the adoption of AI, but were nevertheless essential to building public trust. In particular, openness, accountability and objectivity were most at risk.
In terms of openness, Evans said: “Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”
The report argued that the use of AI risked making accountability harder to pinpoint and introduced grey areas, while data bias, it said, remained a serious concern and put the principle of objectivity at risk.
While the report stopped short of calling for an AI regulator – instead proposing that existing regulators incorporate the regulation of AI into their own remits – it noted the need for “practical guidance and enforceable regulation”.
The use of AI in government agencies has quietly taken off in the UK, with minimal public debate. And while the government has made no secret of its adoption, it has not clearly informed citizens of the extent and scope of that adoption – particularly in the health service and police forces, which have been early adopters of the tech. As AI spreads to even more public sector areas – such as education, welfare and social care – the report finds it essential that government adopts and maintains clear ethical standards.
“While standards and regulation are often seen as barriers to innovation, the Committee believes that implementing clear ethical standards around AI may accelerate rather than delay adoption, by building trust in new technologies among public officials and service users,” the report suggests.
Uses of AI in the UK public sector

  • In the health service, AI is being used for healthcare triage (e.g. Babylon Health's chatbot) and to identify eye disease.
  • Police forces and security services (notably the Met) have been trialling facial recognition and tools that predict reoffending rates.
  • Hampshire County Council is trialling the use of Amazon Echo in the homes of vulnerable adults receiving social care, while one-third of local authorities now use algorithmic systems to make welfare decisions.
  • Blackpool Council saved £1 million by using AI for road maintenance – detecting potholes and other damage.

Omnisperience’s view
Adopting ethical standards for AI is a good move by the UK government, but only if it can come up with standards that seem reasonable to the public and strike a workable balance within an acceptable timeframe. Sitting on our hands while policy makers do nothing or spend years debating the issues would be disastrous. (see EU considers ban on facial recognition tech)
The UK is already behind rival economies such as the US in its adoption of AI in both the public and private sectors. The government has invested relatively little in helping the economy make the transition, and protection of human-based jobs persists (though not as extensively as in economies such as Germany and France). With trillions of dollars at stake, as well as future competitiveness, major nations are investing heavily. Even Vladimir Putin has stated that the country that wins the AI race will rule the world.
It’s no good bemoaning the impact of AI on human jobs. It’s not possible or advisable to turn back the clock. We cannot be Digital Luddites, even if we empathise with the viewpoint. The digital economy will transform jobs, not necessarily destroy them – creating whole new classes of human work that focus on the things humans do better than machines. A wise government will be one that delivers a workable set of rules for AI quickly and effectively, supports home-grown start-ups, successfully retrains its workforce, and restructures its economy for the AI Age. Those that dither don’t just risk huge economic losses, but societies that tear themselves apart because of the widening gulf between the digital haves and have-nots.