Artificial intelligence, or AI, is an exciting technology with the potential to bring significant benefits to humanity. According to B.J. Copeland, artificial intelligence is ‘the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings’. An artificial intelligence exhibits characteristics typically associated with humans, such as the ‘ability to reason, discover meaning, generalize, or learn from past experience’. It already contributes to healthcare, finance, transport, and education.[1]

Opportunities

There are innumerable opportunities presented by AI, such as the creation of self-driving cars and virtual assistants. Stanford University splits AI opportunities into two categories.

On the one hand, AI can be used to ‘augment’ human capabilities. For example, a human driving a car might be better equipped for ‘making major route decisions and watching for certain hazards’, while an AI driver might be better at ‘keeping the vehicle in lane and watching for sudden changes in traffic flow’.

On the other hand, AI can operate largely autonomously, completing tasks without working alongside a human being. For example, AI can analyse different protein structures and ‘help monitor and adjust operations’ in fields such as energy, logistics, and communications.

As with nuclear fusion, discussed here, whoever maximises the opportunities associated with this developing technology will be in a very strong political and economic position over the course of the 21st century.

Risks

AI does, however, have its risks.[2] Criminals may use the technology to cause physical harm, commit dating and social media fraud, and undermine national security. There are also concerns, recently expressed by Mayor of London Sadiq Khan, that there are insufficient protections against AI being used to interfere with elections and referendums.

This last concern arises in the wake of a fake audio recording impersonating Sadiq Khan that surfaced online in November 2023. In the audio file, the voice impersonating Khan supports pro-Palestinian demonstrations that were intended to coincide with and disrupt Armistice Day commemorations. The audio was generated using AI.

This resulted in a serious backlash against the Mayor. However, this ‘deepfake’ impersonation was not considered a criminal offence by the Metropolitan Police, which raises questions about the regulation of AI in the UK.

UK AI Regulation

Currently, AI is regulated through a ‘complex patchwork’ of laws and requirements.[3] Existing regulatory bodies respond to issues relating to artificial intelligence using the powers they already have. Though this is sufficient for many cases of improper use of AI, it is not comprehensive. There remains significant scope for abuse of AI technology to harm public and private interests.

A new regulator dedicated to AI technology was ruled out under the government plans set out last year. The government created the AI Safety Institute in November 2023, but it is not a regulator: it is designed to test and evaluate the safety of AI systems and to improve understanding of the technology.

Regulation through legislation is not part of the initial phase of the UK government’s AI plan either. At first, the government plans to work with existing regulators in their respective sectors, with legislation to follow at a later date.[4]

The government policy paper setting out its AI plans stresses the need for ‘interoperability’ with international regulation.[5] Interoperability between existing UK regulators is also a clear aim. The approach set out in the paper is to improve regulation of AI by increasing coordination between regulatory bodies through central oversight based on five core values:

  1. Safety, security, and robustness.
  2. Appropriate transparency and explainability.
  3. Fairness.
  4. Accountability and governance.
  5. Contestability and redress.

Part of the challenge the UK government must overcome is striking a balance between the freedom required for innovation and the oversight necessary to ensure safety. Though introducing a list of values alone is not sufficient, the intention to coordinate regulators so that they work from the same understanding of AI is a step towards entrenchment of AI regulation through statute at a later stage.

Regulation Abroad

Efforts to regulate AI have developed across the world over the last few years. China led the way on AI regulation, with its ‘central goal’ being the control of information, until the EU agreed AI legislation towards the end of 2023.

Part of the EU legislation establishes different categories of risk associated with AI. An AI system considered an ‘unacceptable risk’, the legislation’s highest risk category, is one capable of cognitive behavioural manipulation or of classifying people based on behaviour, socio-economic status, and personal characteristics. These systems will be banned outright under the new law.

The EU approach is similar to that of the UK in that it begins by introducing a set of underlying values to be used as criteria when assessing AI technologies. The difference is that the EU goes two steps further: first by codifying these values in legislation, and second by establishing means to ban AI that violates them. This legislation, however, has not yet been fully passed into law, and its efficacy cannot be properly assessed until it has come up against real examples of abuses of AI technology.

While arguably lagging behind in AI regulation, the US has not been totally inactive. In February 2024, the US responded to AI impersonations of President Joe Biden, an incident similar to the one involving Sadiq Khan last year, by banning ‘robocalls’ that use AI-generated voices. This builds on AI legislation passed in 2022, which provides a ‘voluntary’ framework for companies to assess the impact of AI. US regulation of AI remains flawed, however, and currently falls behind that of the EU.

Regulation of AI is only at its beginning. The overall impression is that there is a long way to go, and that international cooperation is required.

Next Steps for AI Regulation

An emerging field of AI ethics has begun to offer ways to improve regulation of artificial intelligence. For example, John-Stewart Gordon proposes four ways to respond to artificial intelligence:[6]

  1. An international AI convention.
  2. Investment in research and development.
  3. Collaboration between governments, industries, researchers, and society.
  4. Education and increased public awareness.

None of these suggestions is particularly revolutionary. Rather, they are the usual steps of a natural process that takes place whenever humanity encounters something unfamiliar. It is a process that is already underway, as shown by the efforts to regulate AI discussed above.

Gordon’s most useful suggestion is that of an international convention discussing artificial intelligence. Although it is unlikely such an event could establish enforceable regulation of the AI market, it could nevertheless have a serious impact.

The Universal Declaration of Human Rights, signed in 1948, did not create a strong system of enforceable international law relating to human rights. However, the political and cultural impact of the declaration was profound. Respect for human rights has become an unopposable force in the protection of civil liberties nationally as well as internationally, leading to the creation of enforceable legislation like the Human Rights Act.

Something similar could be achieved with an international declaration relating to artificial intelligence. It would be difficult to bind the US, the EU, China, Russia, and other nations to a single AI charter, especially given the likelihood of interference by multinational tech giants. However, the impact of cooperation modelled on the human rights project of the 1940s could be pivotal in the history of the regulation of artificial intelligence. The cultural and political effect could lead to stronger national and international law in the future.

Conclusion

John-Stewart Gordon says that AI ‘could turn Earth into a paradise or a living hell.’[7] The reality is that it is virtually guaranteed to do neither of these things. One of humanity’s greatest abilities is to overestimate the capabilities of things beyond our control. Artificial intelligence is an advancement in the realm of technology, one that has the potential to be of immense value to humanity, but we are a long way off the emergence of a sentient computer like HAL 9000 or Skynet.

The real concern is how human beings will abuse artificial intelligence. How to deal with humanity’s natural tendency to harm itself is a very familiar challenge. However, with reference to other attempts to regulate human behaviour in the past, it is a surmountable one.

Above all, it is important to recognise the exciting possibilities offered by AI, some of which have been mentioned in this article. How to make the most of AI is in the hands of innovators in the digital technology market and the researchers behind it. The UK government’s pledge to become an ‘AI superpower’, and the apparent willingness it shows to allow innovation in this industry, are promising signs that this country can make the most of AI technology.[8]


[1] Seah, ‘Modern AI ethics’, p.108.

[2] Department for Science, Innovation & Technology, A pro-innovation approach to AI regulation, p.2.

[3] Department for Science, Innovation & Technology, A pro-innovation approach to AI regulation, p.5.

[4] Department for Science, Innovation & Technology, A pro-innovation approach to AI regulation, pp.5-6.

[5] Department for Science, Innovation & Technology, A pro-innovation approach to AI regulation, p.7.

[6] Gordon, The Impact of Artificial Intelligence, p.85.

[7] Gordon, The Impact of Artificial Intelligence, p.84.

[8] Department for Science, Innovation & Technology, A pro-innovation approach to AI regulation, p.2.

Legislation

  • H.R. 6580 – Algorithmic Accountability Act of 2022 (US).
  • Human Rights Act 1998, c.42 (UK).

Publications

  • ‘AI: EU agrees landmark deal on regulation of artificial intelligence’, BBC News, (9 December 2023), <https://www.bbc.co.uk/news/world-europe-67668469> [accessed: 18/02/2024].
  • AI Safety Institute and Department for Science, Innovation & Technology, ‘Policy Paper: Introducing the AI Safety Institute’, GOV.UK, (November 2023), <https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute#establishment> [accessed: 18/02/2024].
  • Copeland, B.J., ‘artificial intelligence’, Britannica, (20 July 1998), <https://www.britannica.com/technology/artificial-intelligence> [accessed: 18/02/2024].
  • Department for Science, Innovation & Technology, A pro-innovation approach to AI regulation, (UK: HH Associates Ltd on behalf of the Controller of His Majesty’s Stationery Office, 2023).
  • European Parliament, EU AI Act: first regulation on artificial intelligence, (8 June 2023), <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> [accessed: 18/02/2024].
  • Gordon, John-Stewart, The Impact of Artificial Intelligence on Human Rights Legislation: A Plea for an AI Convention, (Cham, Switzerland: Palgrave Macmillan, 2023).
  • McCallum, Shiona, ‘UK rules out new AI regulator’, BBC News, (29 March 2023), <https://www.bbc.co.uk/news/technology-65102210> [accessed: 18/02/2024].
  • Milmo, Dan, ‘UK’s AI Safety Institute ‘needs to set standards rather than do testing’’, The Guardian, (11 February 2024), <https://www.theguardian.com/technology/2024/feb/11/ai-safety-institute-needs-to-set-standards-rather-than-do-testing> [accessed: 18/02/2024].
  • O’Carroll, Lisa, ‘AI fuelling dating and social media fraud, EU police agency says’, The Guardian, (9 January 2024), <https://www.theguardian.com/technology/2024/jan/09/ai-wars-dating-social-media-fraud-eu-crime-artificial-intelligence-europol> [accessed: 18/02/2024].
  • Seah, Josephine, ‘Modern AI ethics is a field in the making’, in Mark Findlay; Josephine Seah and Willow Wong (eds), AI and Big Data: Disruptive Regulation, (Northampton: Edward Elgar Publishing, 2023).
  • Sheehan, Matt, ‘China’s AI Regulations and How They Get Made’, Carnegie Endowment for International Peace, (10 July 2023), <https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117> [accessed: 18/02/2024].
  • Spring, Marianna, ‘Sadiq Khan says fake AI audio of him nearly led to serious disorder’, BBC News, (13 February 2024), <https://www.bbc.co.uk/news/uk-68146053> [accessed: 18/02/2024].
  • Stanford University, ‘What are the most promising opportunities for AI?’, One Hundred Year Study on Artificial Intelligence (AI100), (16 September 2021), <https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-1/sq9> [accessed: 18/02/2024].
  • ‘US outlaws robocalls that use AI-generated voices’, The Guardian, (8 February 2024), <https://www.theguardian.com/technology/2024/feb/08/us-outlaws-robocalls-ai-generated-voices> [accessed: 18/02/2024].
  • Warren, Jess, ‘Fake audio of Sadiq Khan is not a crime, says Met’, BBC News, (11 November 2023), <https://www.bbc.co.uk/news/uk-england-london-67389609> [accessed: 18/02/2024].
  • Wheeler, Brian, and Gordon Corera, ‘Fears UK not ready for deepfake general election’, BBC News, (21 December 2023), <https://www.bbc.co.uk/news/uk-politics-67518511> [accessed: 18/02/2024].
  • Wright, David; Yahya Abou-Ghazala and Brian Fung, ‘Fake Biden robocall linked to Texas-based companies, New Hampshire attorney general announces’, CNN, (6 February 2024), <https://edition.cnn.com/2024/02/06/tech/nh-ag-robocall-update/index.html> [accessed: 18/02/2024].

Treaties

  • United Nations, Universal Declaration of Human Rights, <https://www.un.org/en/about-us/universal-declaration-of-human-rights> [accessed: 18/02/2024].