What UK AI Regulations Mean for UK Business Innovation and Development

  • Dr Leslie Kanthan, CEO and Founder at TurinTech

  • 14.08.2023 10:45 am
  • #artificialintelligence

UK AI Regulation vs. the EU AI Act

Sam Altman, the CEO of OpenAI and a key figure behind the groundbreaking creation of ChatGPT, recently made a rather unexpected declaration: OpenAI may cease operating in the European Union. This surprising statement stems from the EU AI Act, which he argues would classify OpenAI's systems as high-risk and hamper the advancement of innovation.

In contrast to the legally binding nature of the EU AI Act, the UK Department for Science, Innovation and Technology (DSIT) has recently published a new white paper, "A pro-innovation approach to AI regulation". The paper commits to regulating AI only where it is deemed essential, offering an alternative approach to governing AI technologies.

Other notable distinctions include the UK's broader definition of AI, which allows the framework to adapt to future advances, and a context-specific methodology that assesses outcomes before enacting regulation. This approach fosters accountable innovation and sets the UK apart from the EU AI Act, whose more constraining rules may impede routine operations and even prompt AI companies such as OpenAI to consider withdrawing from the market.

By unpacking the concepts within the white paper, businesses and consumers can better understand the rules being established in the UK and the ramifications of these new guidelines.

With New AI Risks Comes the Responsibility of Regulation

AI, like a double-edged sword, harbours immense promise across various sectors, including medicine, science, and more. Nevertheless, we must also recognise its potential for harm and the adverse consequences that can stem from inadequate control, biased training data, and other variables. These risks encompass hazards to human rights, safety concerns, privacy infringements, and impacts on societal well-being. To guarantee the responsible and advantageous application of AI, proactive measures are essential.

However, there are further considerations. The present regulatory landscape is itself a catalyst for reform. Why? Firstly, current laws inadequately address the unique characteristics of AI products, leaving consumers exposed to the risks associated with AI. Secondly, despite the presence of multiple regulatory bodies overseeing the AI industry, there is little consistent coordination among them.

Government Objectives

The UK government is dedicated to advancing growth and prosperity through the cultivation of responsible innovation and the reduction of regulatory uncertainty. It also places significant emphasis on enhancing public trust by actively confronting and mitigating the risks connected to AI. By achieving these objectives, it aspires to solidify the UK's position as a world leader in AI, fulfilling its overarching global ambition.

Hence its decision to adopt a pro-innovation approach, which will ensure that potential risks are mitigated without hindering the development of AI. It will do this by regulating AI in a manner proportionate to the apparent risk. By considering outcomes and context, the government will avoid burdening the whole technology industry with cumbersome compliance requirements that would stifle innovation. It aims to publish clear rules, create designated enforcement authorities and offer compliance guidance. Businesses will benefit from this streamlined approach throughout the entire AI lifecycle. As AI develops, the government intends to remain alert to the evolving opportunities and risks it presents.

To support the acceptance of AI in society, the government is committed to releasing AI-positive communications that encourage the wider public to view AI as a trustworthy technology. It will also support a collaborative environment between itself, regulators and industry, fostering AI innovation and upholding the importance of diverse perspectives.

Five Principles of AI Regulation

Understanding the five guiding principles behind AI regulation is key to unpacking the white paper. The first is safety, security and robustness: companies must ensure their AI systems function reliably and that risks are properly managed. The second is appropriate transparency and explainability, which businesses can achieve by sharing the objectives, functioning, deployment strategies and decision-making procedures of their AI systems. The third is fairness: individuals must not be subjected to discrimination, and their legal rights must remain intact. The fourth is accountability and governance: businesses should proactively designate a responsible individual to oversee AI systems effectively. The fifth is contestability and redress: it must be possible to detect, challenge and rectify detrimental AI outcomes.

Impacts on Foundation Models and LLMs

Prior to delving into the effects of this regulation on consumers and business leaders, let us briefly analyse its consequences for two particular and influential forms of AI: Foundation models and LLMs (Large Language Models), as set out by the DSIT in the white paper.

Foundation models, as their name implies, serve as the fundamental basis for a wide range of AI tasks. They are trained on extensive datasets to learn patterns and develop a broad comprehension of text, images, and other forms of information. However, their internal mechanisms, capabilities, and potential risks can be difficult to understand, and concerns such as privacy breaches and biases may emerge.

To establish accountability, the DSIT recognises the necessity for comprehensive oversight throughout the lifecycle of a foundation model. The new framework proposes coordination through central functions, employing technical standards and assurance techniques to ensure effective compliance and evaluation. At the same time, the white paper cautions against hasty regulatory action targeting foundation models and LLMs, acknowledging the potential adverse effects on their beneficial applications. For LLMs, it puts forward recommendations on transparency measures, with ongoing monitoring and evaluation conducted by the government.

The Benefits for Businesses and Consumers

So how does the regulatory plan affect you, whether you are a business leader seeking to implement AI in your organisation or an end consumer utilising AI products? 

The promotion of responsible innovation is a key benefit of AI regulation in the UK. As a business leader, you are encouraged to actively contribute to the advancement of innovative solutions, while ensuring ethical and secure practices. By adhering to these standards, you have the opportunity to foster responsible and sustainable innovation that not only benefits the public but also mitigates potential risks.

Another advantage of AI regulation is the increased clarity and trust it brings to both businesses and consumers. As a business leader, you can navigate the AI landscape confidently, armed with knowledge of the rules and expectations. Similarly, as a consumer, you can have faith in the reliability of AI products and services, knowing that your privacy is safeguarded and your safety is ensured. This strengthens the bond between businesses and consumers, cultivating trust and loyalty and fostering overall growth.

Furthermore, AI regulation enables data sharing in a trustworthy manner. Once you have earned the public's trust as a business leader, consumers are more willing to entrust their personal data to your company. This data becomes a valuable asset that fuels innovation and enhances product performance, benefiting both you and your customers.

Finally, the implementation of government regulations ensures enhanced safety and quality in AI systems. These regulations guarantee that AI systems meet specific criteria, resulting in the development of higher-quality and safer AI systems with reduced biases. As a consumer, you can expect AI technologies that are more precise, reliable, and tailored to your needs.

The Costs for Businesses and Consumers

Adhering to regulatory requirements can impose significant costs on businesses, encompassing employee training and necessary adjustments. These expenses can be particularly burdensome for small businesses, as compliance costs are largely fixed and therefore consume a larger proportion of a smaller company's overall expenditure.

A consequence of regulatory compliance is the potential for price increases. Should businesses pass on the costs of regulatory compliance to consumers, it could lead to higher prices for AI products and services. As a consumer, this means you may need to assess the affordability and value of these offerings. Business leaders, on the other hand, must carefully consider the potential negative outcomes, such as a potential decrease in sales volume.

Regulatory compliance can also restrict or prohibit certain AI systems or outcomes, even when they are not inherently harmful. As a result, the effectiveness of AI systems in helping consumers achieve their goals may be constrained.

Moreover, there is a risk of missed opportunities. Rising costs, price hikes, and prohibitions on AI can result in financial losses for businesses, hindering their ability to invest in improving AI products and services. Consequently, consumers are deprived of enhanced AI offerings that could have improved their experiences and catered to their interests.

Future Outlook

The regulatory process in the UK is still in its early stages, actively seeking input and improved solutions from businesses, regulators, and individuals. The objective is to establish effective regulation while minimising negative consequences and avoiding obstacles to innovation.

After the consultation period, the government intends to carefully assess the feedback provided by diverse stakeholders and subsequently publish strategic plans for AI regulation. Over the next year, regulators will be tasked with creating guidance documents that align with the five principles set out in the white paper.

Alongside the white paper, the Prime Minister announced on 7 June that the UK will host the inaugural global summit on AI safety in September, providing a further avenue for active participation in the establishment and execution of AI regulations. It is therefore crucial to remain engaged and well-informed regarding the ongoing discussions surrounding AI regulation.
