The struggle for AI regulatory supremacy
This briefing is the first in a three-part series by Codified Legal on the key legal issues posed by artificial intelligence (AI). Part Two looks at the intellectual property issues in AI and Part Three looks at data protection implications.
Much of the current concern surrounding AI centres on large language models and generative AI such as ChatGPT (sometimes referred to as ‘weak AI’ or ‘narrow AI’), which can perform tasks to a high level under direction from humans. We go into more detail on how this type of AI works in Part Two. Other forms, such as sentient ‘True AI’ (sometimes referred to as ‘strong AI’ or ‘general AI’) capable of operating independently much like human intelligence, remain theoretical, although research into strong AI is ongoing and it could become possible in the not-too-distant future. This briefing focuses on the legal issues surrounding AI as it is currently available.
The current UK position on regulating AI
There is currently no specific, standalone AI law or general AI regulation in the UK. There are some sector-specific laws, notably the GDPR rules on automated decision making, which we will discuss in Part Three. Moreover, most UK law and regulation is “technology neutral”: if an AI product or service creates an outcome contrary to existing laws (for example the Equality Act or product safety laws), those existing laws will apply.
The UK government’s proposals
On 29th March 2023 the UK government published a white paper on its plans for AI regulation. With the UK aiming to maintain its position as one of the world’s leaders in AI technology, the white paper takes a pro-innovation approach: encouraging investment and building trust in AI rather than placing a regulatory burden on AI developers.
A white paper does not have any legally binding status; instead it seeks to gauge views on a subject and frame discussion on key topics. Growth in AI is happening at an exponential rate, which begs the question of whether a white paper moves quickly enough to address AI risks. Elon Musk and other tech leaders have called for a six-month halt on AI development to allow regulators to get up to speed, and there are major fears that unchecked AI development could become an uncontrollable arms race. The UK approach does have some potential advantages in flexibility, but there are concerns that potential economic gain is being prioritised over other issues.
The white paper argues that the government doesn’t want to rush into creating legislation that isn’t fit for purpose or would stifle growth of the industry, particularly with regard to small businesses and start-ups. Therefore, it has suggested five principles that existing regulators and a future AI regulatory framework should adhere to:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
These principles do not have any statutory effect for the time being; instead they will be applied by existing regulators, who will be left to define the applicability and scope of the principles themselves. In time, the plan is to impose a statutory duty on those regulators to have due regard to the principles when regulating AI, though the paper is definitive that there will not be a new AI regulator. The main intention is to allow maximum flexibility while minimising disruption for business and increasing public confidence in AI applications.
The white paper does not lay down definitive plans as to what the AI regulatory regime will look like, however, there is the intention for a road map to be produced within the next six months that will add further clarity.
The paper provides some direction on how this process will operate, listing under each principle what it anticipates will need to be done. Under safety, security and robustness, for example, the anticipated tasks are to “provide guidance about this principle”, “refer to a risk management framework which AI life cycle creators should apply”, and “consider the role of available technical standards”. By the nature of a white paper, these are somewhat vague.
It is important to note that the white paper makes no mention of implementing a blanket ban on certain forms of AI, opting for what could be a quite reactive position. This approach has potential downsides: harmful AI outcomes may be more likely to slip through the regulatory net, or action may come too late, once harm has already occurred, which may worry the consumers on the receiving end.
The need for responsible development is already apparent, as we have witnessed the harmful elements of various AI systems in economic and social contexts. One instance involved racial bias in healthcare provision in the USA, where an AI trained to predict who would require additional healthcare used expenditure as its measure; because some racial groups spent less on the same health issues, the AI failed to identify patients potentially needing additional healthcare. Other examples include AI being used to cheat in exams, or AI that has ‘hallucinated’, producing output which it deems plausible despite it being false.
There seems to be an intention in the UK to promote AI innovation and investment, with a clear bias towards encouraging the UK tech industry and assisting the UK economy. The encouragement of AI technologies also has huge potential benefits to society such as the ability to predict which people are likely to suffer certain diseases or picking up on data which doctors may have missed and then alerting them, increasing the chances of a correct medical diagnosis.
The UK’s approach of light-touch, devolved regulation has some advantages. Devolving power to individual regulators is faster than getting a large piece of AI legislation onto the statute books (directly at odds with the EU approach outlined below). However, it runs the risk of creating a patchwork of regulation that lacks consistency and increases uncertainty about how the regulations apply. What is required is clear and consistent regulation, especially in areas that may overlap (such as the use of data). The white paper does propose a small co-ordination layer of centralised monitoring and co-ordination.
Although there is growing excitement for many over AI, there is certainly fear too, so whether this less prescriptive and more reactive approach balances the benefits and opportunities against the risks of AI remains to be seen.
What was the AI Safety Summit?
The AI Safety Summit was held at Bletchley Park in the UK on the 1st and 2nd of November 2023, bringing together 28 countries, including world leaders, technology companies, AI innovators and researchers, to explore, discuss and commit to action on responsible AI innovation and development.
The outcome of the AI Safety Summit was a joint declaration by those attending, called the Bletchley Declaration. The declaration was high level in nature but intended to create international consensus around some common themes, in particular the use of “frontier AI”: general purpose or foundation models that could be used for harmful purposes, as well as specific AI systems that exhibit capabilities to cause harm. The declaration contained two key statements about how AI policy should be developed internationally:
- Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks.
- Building respective risk-based policies across countries to ensure safety in the development of AI. This should include: increased transparency by private organisations developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.
The EU’s approach to AI regulation
As noted above, GDPR already contains provisions on automated decision making under Article 22, which restrict AI use in certain scenarios.
In terms of future regulation in the EU, there is a far more stringent and centralised regulatory regime being created. The EU is not willing to take the flexible path which the UK is currently pursuing, evidenced by the proposed EU AI Act which is set to be the first AI-specific legal framework passed by a major regulator. It seeks to cement the place of the EU as, arguably, the world’s leading technology regulator, building on the reach created by GDPR in 2018.
There are some similarities between the EU and UK proposals, one key driver in both being increasing trust in AI. The way the EU AI Act will do this is by using a risk-based approach: the Act will designate four tiers of risk. The highest tier, ‘unacceptable risk’, will target the most socially harmful AI systems by banning them outright, these being systems considered to pose a “threat to the safety, livelihoods and rights of people”, such as “toys using voice assistance that encourages dangerous behaviour”.
A step down from that will be ‘high risk’, which will include AI used in areas such as construction, education, and the administration of justice. These AI systems will be subject to various requirements, such as having risk management systems in place, data governance, technical documentation, record keeping, provision of information, human oversight, accuracy and security.
Other types of AI systems will have much more limited requirements but these will include certain transparency obligations such as having to notify where a “deep fake” image or video has been used.
The proposed Act also comes with large sanctions for non-compliance: up to €30 million or 6% of turnover for breaching a prohibition, €20 million or 4% of turnover for an infringement of obligations, and €10 million or 2% of turnover for supplying misleading, incomplete, or incorrect information. An EU Artificial Intelligence Board will be established too, supervising the operation of the Act, making recommendations, and providing guidance among other duties.
The passing of the AI Act into legislation had seemed imminent, with the EU parliament due to finalise its position (which now looks likely to happen towards the end of April) before the Commission, Council and Parliament discuss final details. The idea had been for the Act to be passed by the end of 2023, but that deadline seems increasingly unlikely to be met, and how long the delay could be is not entirely clear. The disadvantage of the EU approach of passing a large piece of AI legislation is that the process moves far more slowly than the pace of growth of AI.
The interaction between the UK and EU regime
When in force, the EU AI Act could impact the UK’s regulatory position, potentially forcing the UK off the flexible path to a more comprehensive and legislated regime as many UK businesses seek to meet the standards of the EU for trading purposes.
Although the circumstances were different with GDPR (being regulation agreed prior to the UK leaving the EU) we have seen that UK business welcomed working to a single standard rather than designing processes that work just in the UK and have different arrangements for EU customers or EU operations. The same is likely to apply to AI. A UK development team will want to ensure that its AI tools and software can be used by customers in the EU, and it will not want to have to retrospectively change development processes to ensure this is the case.
The UK accounts for over one third of Europe’s AI companies, and the white paper seems focussed on maintaining or increasing that share. The relaxed approach may encourage more companies to base themselves in the UK, but as those companies mature they will most likely want to operate in the EU market, and that will be a driver to adopt EU regulations.
AI governance in the United States
The US is also beginning to make AI regulatory changes, proposing AI regulatory frameworks to accompany existing regulations. AI regulation is already in place in New York, Illinois and Maryland to reduce AI risks in employment decisions. New York law, for example, now requires automated employment decision tools to undergo bias audits on an annual basis.
Other states (such as California) are implementing data privacy rules that are similar to GDPR in dealing with automated decision making tools. There are also a number of state legislatures discussing draft legislation on AI. These tend to focus on the protection of individuals in high impact areas of AI, and also requiring transparency where AI tools are used. Pennsylvania is proposing a state registry of businesses operating AI systems which would have to include details on the AI systems used.
Congress has passed bills on US government AI systems, and executive orders and voluntary guidance have been issued. However, there is little sign of federal law being passed on the more general aspects of AI.
At a regulatory level, the Federal Trade Commission (FTC) has published ground rules largely aimed at increasing fairness and requiring AIs to be trained in a way so as to remove bias. The FTC has also taken enforcement actions against companies misusing AI, requiring them to delete certain algorithms and training data.
The White House has made some efforts to address AI issues with a Blueprint for an AI Bill of Rights and an Executive Order directing Federal agencies to combat algorithmic discrimination.
However, the most specific AI regulatory intervention yet in the US came when President Biden issued an Executive Order on Safe, Secure and Trustworthy AI on 30th October 2023. This Order contained the following:
- Companies developing any foundation models that pose a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.
- Standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy will be developed, including new standards for biological synthesis screening.
- Standards and best practices are to be developed for detecting AI-generated content and authenticating official content, such as watermarking to clearly label AI-generated content.
- A plan to establish a cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
- Ensuring that the United States military and intelligence community use AI safely, ethically, and effectively, and countering adversaries’ military use of AI.
- A call on Congress to pass bipartisan legislation to better protect US citizens’ privacy.
- A strengthening of certain federal-level privacy support and guidance.
- Other measures, including: advancing equity and civil rights; protection for consumers, patients, students and workers; promoting innovation and competition; advancing American leadership abroad; and ensuring responsible and effective government use of AI.
Elsewhere, Brazil has produced draft legislation on AI. Key principles under the draft include freedom of choice, auditability, and transparency. Risk assessments are also covered, with providers having to document the risks of using their AI system before putting it on the market. As in the EU, there will also be a designation of ‘high-risk’ AI, such as self-driving vehicles, as well as a prohibition of some harmful AI systems.
Canada has also begun to act, producing a bill for the Artificial Intelligence and Data Act (AIDA). Though many of Canada’s other acts such as The Bank Act already apply to AI, Canada’s AIDA would ensure artificial intelligence is subject to obligations under Canadian consumer protection law and human rights law as well as prohibiting malicious AIs.
Meanwhile, countries such as India and Australia have no planned artificial intelligence legislation, but it will be interesting to see if they change their approach in the coming year with increased discussion of AI around the world.
A country that has had quite prescriptive AI regulation for a little while now is, perhaps unsurprisingly, China, with regulation in force since March 2022. It has established an algorithm registry and, although public details of this are limited, it is understood to be collecting quite detailed information on AI algorithms used by tech companies in China. This is probably focused more on limiting information dissemination via AI so that it does not harm China’s national security. There are also questions as to whether a central government repository can really understand the impact of every AI algorithm submitted to it.
In conclusion, it is clear that the development of AI is demanding a lot of attention from regulators and that there is not yet a clear best approach. Countries such as the UK are focussed largely on the great benefits of AI, taking a light-touch, decentralised approach to regulating it. The EU is taking a top-down legislative approach, hoping that its AI Act will build on the broadly successful GDPR.
The speed of advancement of AI technology and the adoption of emerging AI trends might mean that, by the time the best approach is figured out, we are living in a world with pervasive AI that is too difficult to undo. However, it could be that large language models and generative AI are the wake-up call required to get our regulatory house in order before we become capable of creating strong, sentient AI, which will pose even greater challenges for our societies. That type of AI may need much stricter control than even the EU is currently proposing, together with the implementation of international regulatory frameworks.
23rd November 2023