What is the EU’s AI Act and how will it affect you?

A Guide to the Groundbreaking EU AI Act

The EU’s AI Act is set to be a game-changer for businesses across Europe and beyond, including those in the UK. As the world’s first comprehensive legal framework on artificial intelligence, this legislation isn’t just about regulating technology—it’s about shaping the future of AI in a way that protects your business and your customers.

If your organisation uses AI in any capacity, whether it’s to improve customer service, enhance operations, or drive innovation, understanding this Act is crucial. The EU AI Act will require you to meet new standards, especially if you operate or trade with EU countries, but it also offers a clear roadmap to compliance, ensuring that you can continue to harness AI’s potential without disruption.

Taking a risk-based approach, the AI Act sets out clear guidelines for the development and use of artificial intelligence in business and society. It sets a precedent for global AI governance and propels the EU to the forefront of AI by establishing universal standards that protect the rights and safety of the public.

If your organisation is looking to leverage the benefits of AI, it’s vital that you understand the EU AI Act and how it affects your operations.

What is the EU AI Act?

The EU AI Act was devised by the European Parliament to ensure AI benefits society while protecting fundamental rights. It aims to strike a practical balance between advancements in technology and ethical considerations.

Based on risk, this new legislation prohibits some AI uses outright while enforcing strict rules around others. It sets out clear requirements and obligations for developers and users of AI while seeking to reduce the administrative and financial burden on businesses and small and medium-sized enterprises (SMEs) in particular.

The law emphasises the importance of transparency and compliance in AI. It also focuses on the use of generative AI tools and addresses the copyright concerns they raise.

The Act has far-reaching implications for providers and developers that stretch well beyond the borders of European Union countries. It applies to anyone whose AI systems, or AI-driven outputs, are used in the EU. For example, if a non-EU AI provider delivers services for an EU company, the provider is bound by the EU AI Act.

As the EU AI Act moves towards full application, organisations will be seeking clarity around strategy and compliance. Organisations that fail to comply with the legislation face significant penalties, ranging, depending on the infringement, from 7.5 million euros or 1% of worldwide annual turnover up to 35 million euros or 7% of worldwide annual turnover, in each case whichever is greater. For start-ups and other SMEs, the fine is capped at the lower of the two possible amounts.
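For illustration only, and not as legal guidance, the sketch below shows how the penalty caps quoted above combine a fixed amount with a share of worldwide annual turnover. The helper function and figures simply restate the ranges in this section.

```python
# Illustrative only, not legal guidance: a rough sketch of how the AI Act's
# penalty caps combine a fixed amount with a share of worldwide annual turnover,
# using the ranges quoted in the paragraph above.

def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, turnover_share: float,
                 is_sme: bool = False) -> float:
    """Return the applicable maximum fine for one violation tier.

    For larger companies the cap is the greater of the fixed amount and the
    turnover-based amount; for start-ups and SMEs it is the lower of the two.
    """
    turnover_based = turnover_eur * turnover_share
    return min(fixed_cap_eur, turnover_based) if is_sme else max(fixed_cap_eur, turnover_based)

# Top tier (35m EUR or 7% of worldwide turnover, whichever is greater):
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07))            # 70,000,000.0
# The same tier for an SME with 10m EUR turnover (lower of the two amounts):
print(max_fine_eur(10_000_000, 35_000_000, 0.07, is_sme=True))  # 700,000.0
```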

Key dates from the EU AI Act

  • 1 Aug 2024: The AI Act will enter into force;
  • 2 Feb 2025: Chapters I (general provisions) & II (prohibited AI practices) will apply;
  • 2 Aug 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers);
  • 2 Aug 2026: the whole AI Act will apply, except for Article 6(1) & its corresponding obligations (one of the categories of high-risk AI systems);
  • 2 Aug 2027: Article 6(1) & its corresponding obligations will apply.
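To make the phased timetable above easier to work with, here is a small illustrative sketch that reports which provisions already apply on a given date. It assumes the application dates listed above; the descriptions are paraphrased for readability, not official wording.

```python
# A small illustrative helper based on the application dates listed above.
# The descriptions are paraphrased for readability, not official wording.
from datetime import date

APPLICATION_DATES = {
    date(2025, 2, 2): "Chapters I and II (general provisions; prohibited AI practices)",
    date(2025, 8, 2): "Notifying authorities, general-purpose AI models, governance and "
                      "penalties (except Article 101), plus Article 78",
    date(2026, 8, 2): "The remainder of the Act, except Article 6(1) obligations",
    date(2027, 8, 2): "Article 6(1) and its corresponding obligations",
}

def provisions_applying(on: date) -> list[str]:
    """Return the provisions from the timetable that already apply on a given date."""
    return [text for start, text in sorted(APPLICATION_DATES.items()) if on >= start]

print(provisions_applying(date(2026, 1, 1)))  # the first two entries apply by this date
```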

What implications does the EU AI Act have for the UK?

Rob May, ramsac founder and Executive Chairman, shared his thoughts and expertise on what the EU AI Act means for the UK:

“The Act, though primarily affecting EU member states, does also have implications for the UK due to the interconnected nature of AI development and usage across borders.

  • Extraterritorial Reach: Much like the GDPR, the EU AI Act has extraterritorial provisions, meaning that any AI system developed in the UK that affects individuals within the EU needs to comply with the Act. This includes ensuring transparency, conducting risk assessments, and adhering to specific requirements for high-risk AI systems.
  • Impact on UK Businesses: UK companies that operate within or trade with the EU will need to align their AI practices with the EU AI Act. This includes firms using AI for customer interactions, processing data, or any AI application that might impact EU citizens. Failure to comply could lead to significant penalties, mirroring GDPR enforcement.
  • Influence on UK Legislation: Although the UK has opted for a more flexible approach to AI regulation, focusing on innovation-friendly policies without immediate legislative backing, the influence of the EU AI Act may drive UK regulators to adopt similar standards to ensure seamless operations and market access. The UK’s current regulatory approach aims to balance innovation with necessary safeguards, potentially incorporating aspects of the EU’s risk-based approach over time.
  • Practical Steps for Compliance: UK businesses, particularly those in sectors like healthcare, finance, and critical infrastructure, should start preparing for compliance by auditing their AI systems, classifying them according to risk, and implementing robust governance frameworks. This includes ensuring data quality, transparency, and human oversight of AI systems. Companies should also establish internal policies and training programmes to meet the EU requirements effectively (a minimal sketch of such an audit record follows below).

So whilst the UK is not directly bound by the EU AI Act, its wide-reaching impact means that anyone engaged with the EU market will need to align their AI practices with the new regulations to ensure compliance and continued market access.”
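As a practical starting point for the audit and risk-classification steps described above, the sketch below shows one hypothetical shape for an internal AI-system register entry. The class, its field names, and the `needs_eu_ai_act_review` helper are illustrative assumptions, not terms defined by the Act.

```python
# A minimal, hypothetical sketch of an internal AI-system register entry that a UK
# business might keep while preparing for the EU AI Act. The field names and risk
# tiers below are illustrative assumptions, not terms defined by the Act itself.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict obligations apply
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"             # freely usable

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    affects_eu_individuals: bool        # relevant to the Act's extraterritorial scope
    risk_tier: RiskTier
    human_oversight: str                # who reviews the system's outputs, and how
    transparency_measures: list[str] = field(default_factory=list)

    def needs_eu_ai_act_review(self) -> bool:
        """Flag systems that touch EU individuals and sit above minimal risk."""
        return self.affects_eu_individuals and self.risk_tier is not RiskTier.MINIMAL

# Example entry for a CV-screening tool used on applicants in the EU
cv_screen = AISystemRecord(
    name="cv-screening-assistant",
    purpose="Shortlist job applicants",
    affects_eu_individuals=True,
    risk_tier=RiskTier.HIGH,
    human_oversight="A recruiter reviews every shortlist before rejections are sent",
    transparency_measures=["Candidates are told AI assists the shortlisting process"],
)
print(cv_screen.needs_eu_ai_act_review())  # True
```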

Why is the EU AI Act important?

The primary aim of the EU AI Act is to build trust and confidence in what AI has to offer, for businesses and citizens with a presence in the European Union.

Most AI systems pose only minimal risk while delivering multiple benefits for users in the workplace. AI can speed up processes, increase efficiency, tackle complex problems, and perform time-consuming manual jobs that humans traditionally undertake.

While AI is transforming the way we work and interact, certain AI systems also create risks that must be addressed to prevent escalation and problematic outcomes. For instance, without proper governance, it would be tricky to ascertain whether someone applying for a job has been unfairly disadvantaged through AI’s intervention in the hiring process.

The purpose of the EU AI Act is to:

  • Address and mitigate risks created by AI systems and applications
  • Require deployers and providers of AI solutions to follow clearly defined obligations
  • Identify high-risk applications
  • Provide a thorough assessment of AI tools and systems before they’re released
  • Prohibit any AI practices that pose an unacceptable risk
  • Enforce strict AI governance once an AI system goes to market
  • Establish clear guidelines for high-risk AI systems
  • Create a European and global structure for AI development and its use by multiple operators

What is the EU AI Act’s approach to risk?

The EU AI Act categorises AI systems into four risk levels:

  1. Unacceptable risk: Banned outright. Prohibited practices include social scoring, emotion recognition in the workplace and in education, systems that exploit people’s vulnerabilities, untargeted scraping of facial images to build facial recognition databases, certain biometric categorisation and remote biometric identification systems, and predictive policing applications.
  2. High risk: Systems used in critical infrastructure, education, safety components of products, recruitment, access to essential services, law enforcement, migration and border control, and the administration of justice. Some exceptions apply.
  3. Limited risk: Focuses on transparency. Users must be informed when interacting with AI chatbots, and AI-generated content must be clearly labelled (see the sketch after this section).
  4. Low or minimal risk: AI systems that can be used freely, such as AI-enabled video games and spam filters.

The Act aims to regulate AI based on potential harm, with stricter rules for higher-risk categories. Most current AI systems fall under low or minimal risk.
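For limited-risk systems the main obligation is transparency: tell people when they are interacting with AI and label AI-generated content. The minimal sketch below shows one way a team might attach such a disclosure to a chatbot reply; the function name, message wording, and metadata fields are assumptions, not text prescribed by the Act.

```python
# Illustrative sketch of the transparency idea for limited-risk AI: disclose that
# the user is interacting with an AI system and label generated content. The
# wording and structure here are assumptions, not prescribed by the Act.

AI_DISCLOSURE = "You are chatting with an AI assistant."
CONTENT_LABEL = "This content was generated by AI."

def wrap_chatbot_reply(reply_text: str) -> dict:
    """Attach a user-facing disclosure and a machine-readable label to a reply."""
    return {
        "disclosure": AI_DISCLOSURE,
        "text": reply_text,
        "metadata": {"ai_generated": True, "label": CONTENT_LABEL},
    }

print(wrap_chatbot_reply("Your order has been dispatched."))
```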

How will the EU AI Act be enforced?

Enforcement of the AI Act and implementation of its rules will be overseen by the European AI Office, working with national authorities in each EU member state. By holding businesses and users to account, the aim is to create a digital environment in which AI technology does not harm human rights and follows an ethical code of practice.

But the AI Act is not only about regulation and enforcement: it also aims to foster innovation, research, and collaboration in AI, and to open a global dialogue that aligns countries on AI governance. Through this, the European AI Office hopes to establish itself as a leader in the sustainable and ethical development of future AI technologies, while holding those who ignore the EU AI Act to account.

We’ve answered ‘What is the EU AI Act?’ Now it’s time to take our AI Readiness Assessment

Our AI Readiness Assessment provides an in-depth analysis of your organisation’s preparedness to leverage AI technology in the workplace and improve your efficiency and operations. To speak with one of our friendly advisors, contact us today.
