As AI, and ChatGPT in particular, has attracted huge public attention and usage, governments everywhere have realized they are behind the curve and have begun moving to regulate the sector. In the European Union (EU), which comprises 27 countries, progress has been relatively quick, and its parliament is expected to pass an AI Act before the end of 2023.
There are clear risks in enabling AI models to perform critical tasks, including the creation and spread of misinformation as democratic elections approach, the disruption of social norms, and the transformation of entire industries, with some estimates projecting that AI will affect 1 billion jobs globally by 2030.
As a result, governments are moving to fill the void, but proposed legislation is advancing at different speeds. In the United States and the UK, for example, progress has been significantly slower than in the EU, and in the US the effort is far more diffuse.
A wave of regulations is coming worldwide. Even if you're not in the EU, it's important to understand what the EU AI Act means, because it's likely to trigger more rules and changes in AI everywhere. This regulation will change how we build and use AI. Many layers of tooling are needed to comply, but underneath them all lies the AI infrastructure, which companies should not overlook as they prepare for the upcoming regulations.
In this article, we'll provide an overview of governments' legislative moves, dive deeper into the EU AI Act and examine its implications for AI infrastructure and the MLOps stack, and offer some predictions.
How Legislative Moves Across the World Differ
The EU
The EU's AI Act is the most detailed, developed, and wide-ranging, taking a tiered, risk-based approach:
- Unacceptable risk – AI applications that can harm human safety or fundamental rights will be banned
- High risk – AI that, while not outlawed, will be subject to detailed regulation, registration, certification, and formal enforcement
- Lower risk – this category will, for the most part, face a transparency requirement
- Other – AI applications in this category will not be regulated
The EU Artificial Intelligence Act (AIA) has been agreed on by the European Commission, the European Council, and the European Parliament. Legislators will still have to agree on the details with EU member states that have individual objections before the draft rules become formal legislation.
There are also bans on intrusive and discriminatory uses of AI relating to facial recognition and biometric surveillance. Some lawmakers are calling for a complete ban, while some EU countries are calling for exceptions to be made for national security, defense, and military purposes.
The EU has set four levels of fines for infringements of its AI Act:
- Non-compliance with prohibitions: up to €40 million or 7% of turnover
- Non-compliance with data and transparency requirements: up to €20 million or 4% of turnover
- Non-compliance with other obligations: up to €10 million or 2% of turnover
- Supplying incorrect, incomplete, or misleading information: up to €500,000 or 1% of turnover
U.S.
Steps to regulate the use of AI are spread across a range of government players and lack a unified focus. Congress held three hearings on AI in September as part of its efforts to create legislation to deal with the dangers of emerging technologies. Talks were held with senior officials from Microsoft, Nvidia, Meta Platforms, and Tesla. Before that, OpenAI CEO Sam Altman appeared before Congress and agreed that regulations were critical.
The White House has issued executive orders requesting that government agencies implement AI in their operations, but so far it has only published a Blueprint for an AI Bill of Rights and is seeking public comment on how best to legislate for the ways in which AI is used.
Meanwhile, the U.S. Federal Trade Commission (FTC), a powerful government agency, began a comprehensive investigation into OpenAI in July over claims that its practices violate consumer protection laws by endangering personal reputations and data.
On October 30, US President Joe Biden released an extensive executive order, commonly referred to as the EO, addressing artificial intelligence (AI) and aiming to advance the responsible development and use of AI by prioritizing safety, security, and trustworthiness.
With its comprehensive set of recommendations and actions, it is intended to influence a wide spectrum of businesses, ranging from those well-versed in AI implementation to newcomers in the field. Notably, the order’s definition of AI systems is expansive, encompassing a variety of systems developed over the past several years, rather than being confined to generative AI or those relying on neural networks:
“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
Here is an overview of the eight principles that the Executive Order underlines:
New Standards for AI Safety and Security: Introducing comprehensive measures to ensure developers share critical information, establish safety standards, and address potential risks, including cybersecurity, to guarantee safe and trustworthy AI systems.
Protecting Americans' Privacy: Prioritizing federal support for privacy-preserving techniques, funding research on cutting-edge technologies, evaluating and enhancing privacy guidance for agencies using AI, and developing guidelines to assess the effectiveness of privacy-preserving techniques.
Advancing Equity and Civil Rights: Implementing clear guidance to prevent discrimination through AI algorithms in housing, federal benefits programs, and federal contractors, addressing algorithmic discrimination in the criminal justice system, and developing best practices to ensure fairness.
Standing Up for Consumers, Patients, and Students: Promoting responsible AI use in healthcare and education, creating resources for educators deploying AI tools, and establishing safety programs to address harms in healthcare practices involving AI.
Supporting Workers: Developing principles and best practices to maximize benefits and mitigate harms for workers affected by AI, producing a report on AI's impact on the labor market, and identifying options for supporting workers facing disruptions.
Promoting Innovation and Competition: Catalyzing AI research, promoting a fair and competitive AI ecosystem, and using existing authorities to enhance the role of skilled immigrants in AI development, all aimed at maintaining leadership in AI innovation.
Advancing American Leadership Abroad: Expanding collaborations on AI through international engagements, accelerating the development of AI standards with global partners, and promoting responsible AI development globally to address shared challenges.
Ensuring Responsible and Effective Government Use of AI: Issuing guidance for agencies to use AI responsibly, facilitating efficient acquisition of AI products and services, and accelerating the hiring of AI professionals while providing training for government employees at all levels.
Additional details about the order can be found in the fact sheet provided by the White House.
G7
Leaders of the Group of Seven (G7) major global economies agreed in May on the need for governance of AI and immersive technologies. G7 ministers will report results by the end of 2023.
UK
Britain is aiming to take a global lead on AI safety by hosting the first major global summit in November. The meeting is due to investigate AI's risks, including those posed by frontier systems, and how they can be mitigated through internationally coordinated action. It also aims to provide a platform for countries to work together on developing a shared approach to lessening these risks.
Parliament’s Technology Committee has warned that failure to introduce draft legislation by November means a law won’t be passed until 2025 at the earliest.
China
Temporary regulations took effect in August in a bid to manage the country's generative AI industry. They require service providers to submit security assessments and receive clearance before releasing mass-market AI products.
Ireland
Laws are needed to supervise generative AI, but governing bodies must first create regulations that are realistic and workable, Ireland's data protection chief commented in April.
Israel
The country has been working on AI regulations for more than a year, aiming to strike a balance between innovation and maintaining human and civic rights. A draft AI policy document has been published and public input is being sought before a final draft legislative proposal is published.
United Nations
The U.N. Security Council held its first formal discussion on AI in July, where it looked into the military and non-military applications of AI. The aim was to ensure there are no serious consequences for global peace and security, according to Secretary-General Antonio Guterres.
Work will begin by the end of 2023 on a high-level AI advisory body to regularly review AI governance arrangements and offer recommendations.
The Impact of AI Infrastructure on the MLOps Stack
First, for clarity, let's distinguish between AI infrastructure and the MLOps stack:
AI infrastructure refers to data centers, GPU and CPU clusters, the networking between these machines, and all the software for infrastructure management and optimization that makes the complex computation of AI models possible and as efficient as possible.
Machine learning operations (MLOps) is the practice of creating new machine learning (ML) and deep learning (DL) models and running them through a repeatable, automated workflow that deploys them to production. An MLOps pipeline provides a variety of services to data science teams, including AI model version control, continuous integration and continuous delivery (CI/CD), model service catalogs for models in production, monitoring of live model performance, security, and governance.
The two are tightly coupled: AI infrastructure powers MLOps, and MLOps in turn enables data scientists to put AI models into the real world through repeatable workflows. AI infrastructure is the hardware plus the optimization and control of it, while MLOps is a pure software layer on top for managing data, models, and experiments.
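To make the version-control and governance services above more concrete, here is a minimal sketch of experiment tracking with the open-source MLflow library. This is just one of many MLOps tools, not one mandated by any regulation, and the experiment, parameter, and metric names are illustrative assumptions:

```python
# A minimal MLOps tracking sketch: train a model and record a versioned,
# auditable run. Assumes `pip install mlflow scikit-learn`.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-classifier")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Everything logged here becomes part of a reproducible record that
    # auditors and teammates can query later.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```

The point is less the specific tool and more the habit: every model that reaches production should be traceable to the code, parameters, and metrics that produced it.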
The common denominator of the new legislation is that AI initiatives must be closely supervised to ensure that they work for the public good and do not infringe on people's rights. In this context, it's clear that both AI infrastructure and the MLOps stack are crucial. Companies must keep a close eye on upcoming laws in their operating regions. Under the EU AI Act, there are vital issues that require action and teamwork among data science, IT, MLOps, and decision-making teams.
These issues include:
- Organizations that develop, deploy, or use AI technologies may need to adapt their AI infrastructure to meet compliance requirements and avoid legal consequences. These requirements may include data handling, privacy, security, and transparency standards.
- Data privacy and security are critical, so AI infrastructure will need to comply with data protection regulations, including data encryption, access controls, and secure data storage.
- AI infrastructure may need mechanisms for logging and auditing AI system activities to demonstrate compliance with legal requirements (see the sketch after this list).
- Organizations will need to maintain detailed records and documentation related to AI development and deployment, and update their infrastructure to enable record-keeping and reporting processes.
- Organizations will need to assess the potential liability implications of their AI infrastructure and obtain appropriate insurance coverage.
- AI infrastructure that operates internationally raises cross-border regulatory implications. Since compliance requirements may vary from one jurisdiction to another, infrastructure will need to be adapted accordingly.
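To illustrate the logging and auditing point above, here is a minimal sketch of tamper-evident audit logging for model predictions, using only the Python standard library. The field names, the model identifiers, and the hash-chaining scheme are illustrative assumptions, not a format prescribed by any regulation:

```python
# Append-only, hash-chained audit log for model predictions: each record
# embeds the hash of the previous one, so after-the-fact edits are detectable.
import hashlib
import json
import time

AUDIT_LOG = "predictions_audit.jsonl"

def append_audit_record(model_id: str, model_version: str,
                        input_digest: str, output: dict,
                        prev_hash: str) -> str:
    """Append one prediction event and return its hash for chaining."""
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        "input_sha256": input_digest,  # hash of the input, not the raw data
        "output": output,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]

# Usage: hash the request payload and log it alongside the model's decision.
payload = b'{"age": 42, "income": 55000}'
h = append_audit_record(
    model_id="credit-scoring",  # hypothetical model name
    model_version="1.4.0",
    input_digest=hashlib.sha256(payload).hexdigest(),
    output={"decision": "approve", "score": 0.87},
    prev_hash="genesis",
)
```

Storing a hash of the input rather than the input itself keeps the audit trail useful for compliance while limiting the personal data it retains.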
With these points in mind, among the major issues likely to impact companies working in AI are:
Copyright Liability: Reproducing data unchecked is of questionable legal validity and could expose providers to legal action.
Reporting Transparency: Reporting of energy usage and emissions, of AI providers' strategies for measuring emissions, and of the steps taken to reduce them is currently irregular and inconsistent. Both regulators and the public want to see real and honest reporting (a minimal measurement sketch follows this list).
Risk Assessment and Evaluation Standards: Comprehensive and transparent reporting is essential when it comes to AI-related risks, encompassing malicious use, unintentional harm, and systemic risks. Moreover, as clear evaluation standards and auditing mechanisms are still a work in progress, organizations are advised to maintain a transparent approach to reporting in these areas. If companies fail to report honestly, legislators may step in and set the rules themselves.
Infrastructure Preparedness: IT and MLOps engineers should start assessing their current infrastructure against AI legislative requirements to avoid bottlenecks affecting their internal teams. Infrastructure orchestration challenges that compromise the capacity to scale services within the regulatory framework may also lead to client losses.
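As a starting point for the reporting-transparency issue above, here is a minimal sketch of measuring GPU energy use. It assumes NVIDIA GPUs and the nvidia-ml-py bindings (`pip install nvidia-ml-py`); a production-grade report would aggregate per job and apply a grid carbon-intensity factor, both omitted here:

```python
# Estimate GPU energy over a sampling window by integrating power draw.
import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]

INTERVAL_S, SAMPLES = 1.0, 60  # sample once per second for one minute
energy_joules = 0.0
for _ in range(SAMPLES):
    # nvmlDeviceGetPowerUsage returns milliwatts; convert to watts.
    watts = sum(pynvml.nvmlDeviceGetPowerUsage(h) for h in handles) / 1000.0
    energy_joules += watts * INTERVAL_S
    time.sleep(INTERVAL_S)

pynvml.nvmlShutdown()
print(f"GPU energy over window: {energy_joules / 3.6e6:.6f} kWh")
```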
By addressing these issues collectively, organizations can ensure their compliance with AI legislation and demonstrate their commitment to responsible AI deployment and risk management, thus fostering a more conducive environment for the advancement of AI technologies.
AI Infrastructure Legislation Trend Predictions
The compliance demands set forth by the EU AI Act will require additional actions, leading to a substantial increase in the operational burden on companies' hardware infrastructure. Consequently, data science teams may find themselves struggling due to resource shortages.
Therefore, companies will need better scheduling and orchestration to absorb the added load on their computing platforms (a simple prioritization sketch follows below). Companies that have not paid much attention to AI infrastructure before are likely to feel this issue first.
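As a sketch of what such prioritization might look like, here is a toy scheduler for a shared GPU pool in which compliance-related jobs (audits, evaluation suites) are never starved by large training jobs. The Job shape, the priority values, and the job names are illustrative assumptions:

```python
# Priority-aware admission for a shared GPU pool, standard library only.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int  # lower value = runs first; only field used for ordering
    name: str = field(compare=False)
    gpus_needed: int = field(compare=False)

def schedule(jobs: list[Job], free_gpus: int) -> list[str]:
    """Admit jobs in priority order until the next one no longer fits."""
    heapq.heapify(jobs)
    admitted = []
    while jobs and jobs[0].gpus_needed <= free_gpus:
        job = heapq.heappop(jobs)
        free_gpus -= job.gpus_needed
        admitted.append(job.name)
    return admitted

queue = [
    Job(priority=0, name="quarterly-risk-audit", gpus_needed=2),
    Job(priority=1, name="bias-evaluation-suite", gpus_needed=1),
    Job(priority=5, name="llm-finetune", gpus_needed=8),
]
print(schedule(queue, free_gpus=4))
# -> ['quarterly-risk-audit', 'bias-evaluation-suite']
```

Real deployments would reach for an orchestrator (Kubernetes, Slurm, or a commercial scheduler) rather than hand-rolled logic, but the principle of reserving capacity for compliance workloads is the same.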
Another solution that we foresee is the acquisition of more resources (such as buying more on-prem GPUs and CPUs, or reserving them in the cloud). This is not a cost-efficient strategy, particularly given the current GPU shortage, but it may be unavoidable. This remains to be resolved.
With a recent Pew Research Center survey showing that 52% of Americans feel more concerned than excited about AI's increased use, citing privacy and control worries, the likelihood of new laws regulating AI infrastructure seems clear. That level of public concern is likely to weigh on legislators, who won't want to be caught out again as they were when their attempts to regulate social media proved too little, too late.
With the release of President Biden's comprehensive Executive Order, it's likely that this will have a strong domino effect on other countries, leading to legislation being passed sooner rather than later across the globe.
This is likely to be given a further boost by the so-called 'Brussels effect', whereby the EU's influence and market size lead organizations and corporations outside its borders to follow suit.
Final Words…
The need for AI legislation is clear, and tech giants are on board when it comes to working with legislators. For organizations in the AI sphere, the aim is a balance in which lawmakers' demands are not so onerous as to erase the potential benefits, so that they can operate profitably while the public benefits safely and securely.
Legislative moves gaining momentum, public concerns shaping the path forward, rapid progress in the EU, and the "Brussels effect" are likely to drive global legislative change, making AI regulation a pressing topic worldwide. Companies will need to adapt their AI infrastructure to meet compliance requirements encompassing data handling, privacy, security, and transparency standards.
They must also consider copyright liability, maintain reporting transparency on energy usage and AI risk reduction, and assess AI-related risks comprehensively. Most importantly, companies need to be aware that AI infrastructure can become a bottleneck for all of these new mitigations, and they should keep scalability, flexibility, and optimization in mind.
Collaboration between data science, MLOps, and IT teams is the cornerstone for success, allowing organizations to navigate these complexities and ensure both compliance and responsible AI innovation at scale. As AI regulations continue to evolve, this adaptability will be the key to not just surviving but thriving in a world increasingly shaped by artificial intelligence.