On December 9th, policymakers from the European Parliament and the Council of the European Union finally reached an agreement on the world’s first law aimed at regulating artificial intelligence. This landmark bill is a significant milestone in protecting the end user from the potential dangers of AI.
When you consider the activity of the last month alone, it is clear that regulators are taking AI very seriously:
- On November 26th – The US Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) jointly released the Guidelines for Secure AI System Development. The guidelines focus on shifting the burden of insecure development practices away from the end user and onto the organizations that design and build AI systems.
- On November 1st and 2nd – The AI Safety Summit was hosted by the UK government alongside 21 other ministries and cybersecurity agencies from around the world. The main outcome was a declaration by 28 countries to work together on research into making AI safe.
Setting Regulatory Ground Rules for AI Development
From the perspective of instilling trust into a booming AI industry, this should indeed be viewed as a positive development.
Regulatory involvement is often a signal to the wider market that this industry is here to stay, and some ground rules need to be set to protect end users.
Look at industries like automobile manufacturing and pharmaceuticals – regulatory involvement thwarts the ability of any bad actors within these industries to gain an unfair advantage. It establishes agreed levels of standardization and quality.
Of course, there are opposing viewpoints that regulation increases costs and slows down the ability to experiment and iterate quickly.
In response to such viewpoints, one need only look at the last few years of headlines in the cryptocurrency industry to see how an impressive but unregulated technology can be hijacked by bad actors to the detriment of ordinary citizens.
Will Regulatory Rules Reduce Risk?
The EU AI Act and the Joint AI Guidelines have a clear common purpose to reduce the risk to end users. By prohibiting a range of high-risk applications of AI techniques, the risk of unethical surveillance and other means of misuse is certainly mitigated.
Even where high-risk biometric AI applications remain permitted for law enforcement purposes, limitations on the purpose and location of such applications help prevent their misuse (intentional or otherwise) in this sector.
Furthermore, those high-impact AI systems that remain permitted under the EU AI Act will still need impact assessments. This will require organizations that use them to understand and articulate the full spectrum of potential consequences.
This is a step in the right direction, and the proposed penalties will mean that the developers of such high-impact AI applications are rendered accountable for their outcomes.
Will It Hold Back Innovation?
A commonly held view in many industries is that regulation stifles innovation. On one hand, added rules and prohibitions do slow down the pace of development of AI tools, so this view is not unfounded.
However, by helping to create a safe and trustworthy AI ecosystem within the EU, the Act could lead to even wider adoption of the technology. In that sense, this regulation encourages innovation within defined parameters, which can only benefit the AI industry at large.
Secure AI by Design Within Parameters
For many entities in the space, this is going to mean an increased workload as they build Secure Design, Development, and Deployment practices into their workflows.
Adopting best practices often means changing how we work, which is always challenging in a rapidly evolving space. That is why it is vital to set the tone at the top so that the message of “Security First” permeates the teams responsible for developing AI systems.
Once the tone is set and there is sufficient awareness, it is time to build best practices into the development lifecycle. The first step is to assess the risks associated with the AI models in use, weighed against the minimum functionality the application actually requires. This is a key step, and it likely represents a shift in mindset for many developers.
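As a purely illustrative sketch (the structure, field names, and example values below are assumptions rather than anything prescribed by the EU AI Act or the CISA/NCSC guidelines), here is one way a team might record that comparison between what a model can do and what the application actually needs:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRiskAssessment:
    """Hypothetical record comparing a model's capabilities with what the app needs."""
    model_name: str
    required_capabilities: set[str]   # minimum functionality the application needs
    exposed_capabilities: set[str]    # everything the model can actually do
    identified_risks: list[str] = field(default_factory=list)

    def excess_capabilities(self) -> set[str]:
        # Capabilities the model exposes but the application does not require;
        # each one widens the misuse surface without adding value.
        return self.exposed_capabilities - self.required_capabilities

    def needs_review(self) -> bool:
        # Flag the model for review if it does more than the application requires
        # or if any risks were identified during the assessment.
        return bool(self.excess_capabilities() or self.identified_risks)


# Example usage: a hypothetical support chatbot that only needs summarization and Q&A.
assessment = ModelRiskAssessment(
    model_name="support-chatbot-llm",
    required_capabilities={"summarization", "question-answering"},
    exposed_capabilities={"summarization", "question-answering", "code-generation"},
    identified_risks=["prompt injection via user-supplied documents"],
)
print(assessment.excess_capabilities())  # {'code-generation'}
print(assessment.needs_review())         # True
```

In this hypothetical case, the model is flagged for review because it exposes a capability (code generation) that the application does not require, which is exactly the kind of excess functionality such an assessment should surface.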
Enabling transparency, a characteristic encouraged by CISA and the NCSC alike, is also key. This means sharing information on known vulnerabilities – and the general risks associated with using AI – to benefit the entire industry and its users.
This information might take the form of software bills of materials (SBOMs) or internal policies governing how vulnerabilities should be disclosed. Because many end users of AI applications have limited technical knowledge, the language also needs to be tailored to the audience so they can make well-informed decisions about how they interact with, and input data into, AI applications.
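To illustrate what such a machine-readable record might look like, the sketch below assembles a simplified, SBOM-style entry for an AI model component in Python. The field names, component type, vulnerability identifier, and disclosure-policy block are illustrative assumptions for this example and are not taken from any specific SBOM schema such as CycloneDX or SPDX:

```python
import json

# A simplified, SBOM-style record for an AI model component. The structure is
# illustrative only and is not a complete or schema-valid SBOM document.
model_sbom_entry = {
    "component": {
        "type": "machine-learning-model",        # assumed component type
        "name": "sentiment-classifier",          # hypothetical model name
        "version": "2.1.0",
        "supplier": "Example AI Vendor",          # hypothetical supplier
        "trainingDataSources": ["public-reviews-corpus-2023"],  # assumed field
    },
    "knownVulnerabilities": [
        {
            "id": "INTERNAL-2024-001",            # hypothetical internal identifier
            "description": "Susceptible to prompt injection in free-text inputs",
            "severity": "medium",
            "mitigation": "Input sanitization layer added in v2.1.0",
        }
    ],
    "disclosurePolicy": {
        # Plain-language summary intended for non-technical end users.
        "userFacingSummary": "This feature uses an AI model that may produce "
                             "inaccurate results; do not enter sensitive data.",
        "reportingContact": "security@example.com",
    },
}

print(json.dumps(model_sbom_entry, indent=2))
```

Pairing the technical details with a plain-language, user-facing summary reflects the point above: the same disclosure should serve both security teams and non-technical end users.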
An AI-Enabled Future
These guidelines are the first of their kind, and they certainly won’t be the last. Whether your view of an “AI-enabled” future is utopian or dystopian, it is not unreasonable to think that AI tools will become secure enough to be an everyday part of our economy and society.
To draw a simple analogy – nobody in this day and age would be willing to set foot on a commercial airliner if they did not have faith in the regulatory systems that govern the quality of its manufacture. Why should AI tools be any different?
Over time, as more regulatory frameworks are created around AI, the result should be an ecosystem that protects consumers while also allowing AI to continue growing and yielding benefits to end users in a controlled manner.
By Martin Davies, Audit Alliance Manager, Drata.