
The EU AI Act is coming: What do security teams actually need to do?

22 Apr 2025


The conversation around artificial intelligence (AI) is rapidly shifting from potential to practical reality, and with it comes a regulatory spotlight. The European Union's AI Act is set to become a landmark piece of legislation, establishing global precedents for how AI systems are developed, deployed, and governed.

While headlines focus on safety, transparency, and fundamental rights, the critical question for security teams is: What does this mean in practice? How do we translate legal requirements into tangible security tasks and controls? It’s time to move beyond awareness and into action. To understand how DevSecAI can support your compliance journey, explore what DevSecAI offers.

Why regulation now? The shift to proactive AI governance

The EU AI Act isn’t just about rules; it reflects a growing recognition that AI’s unique risks require a dedicated governance framework. Unlike traditional software, AI systems learn, evolve, and can exhibit unexpected behaviours. Their impact - spanning loan applications, medical diagnoses, and critical infrastructure - demands a proactive approach to safety and security from the start. Waiting for failures is no longer viable.

Decoding the AI Act: actionable tasks for security practitioners

Operationalising risk management

The Act categorises AI systems by risk: unacceptable, high, limited, and minimal. The ban on ‘unacceptable risk’ systems has been in effect since February 2025. For high-risk systems (common in finance, healthcare, and critical infrastructure), a robust risk management system is mandatory throughout the AI lifecycle, with obligations applying from August 2026.

Security task: Implement and document AI-specific threat modelling and risk assessments early in the development process. Integrate security risk management into the MLOps pipeline, ensuring continuous evaluation as models and data evolve. Start with a free AI security assessment to benchmark your current posture.
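
For illustration, here is a minimal sketch of what such a continuous risk gate might look like inside an MLOps pipeline. It assumes risk thresholds were agreed during threat modelling and that evaluation metrics are available to the stage; the threshold names and values below are placeholders, not requirements from the Act.

```python
from dataclasses import dataclass

# Hypothetical risk thresholds agreed during threat modelling.
# The names and values are illustrative placeholders only.
@dataclass
class RiskThreshold:
    name: str
    max_value: float

THRESHOLDS = [
    RiskThreshold("false_negative_rate", 0.05),
    RiskThreshold("demographic_parity_gap", 0.10),
    RiskThreshold("adversarial_error_rate", 0.20),
]

def risk_gate(metrics: dict) -> list[str]:
    """Return a list of breached thresholds for the current model run."""
    breaches = []
    for t in THRESHOLDS:
        value = metrics.get(t.name)
        if value is None or value > t.max_value:
            breaches.append(f"{t.name}={value} exceeds limit {t.max_value}")
    return breaches

if __name__ == "__main__":
    # Metrics would normally come from the evaluation stage of the pipeline.
    current_metrics = {"false_negative_rate": 0.03,
                       "demographic_parity_gap": 0.14,
                       "adversarial_error_rate": 0.18}
    problems = risk_gate(current_metrics)
    if problems:
        raise SystemExit("Risk gate failed:\n" + "\n".join(problems))
    print("Risk gate passed; model may proceed to the next stage.")
```

A gate like this makes the documented risk assessment executable: if a retrained model drifts past an agreed threshold, the pipeline fails instead of silently promoting it.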

Securing the data pipeline (data governance)

The Act mandates high-quality, relevant, and representative training, validation, and testing data, alongside robust data governance practices.

Security task: Establish strong data integrity checks, provenance tracking, and access controls across the data lifecycle. Develop processes to assess and mitigate bias in datasets. Ensure compliance with privacy regulations, such as GDPR, and the Act’s AI-specific data requirements. Our Data AI Security Lab offers specialised expertise in exactly this area.
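
As a sketch of what integrity checks and provenance tracking can look like in practice, the snippet below hashes a dataset file and records where it came from. The file name and source label are illustrative assumptions; a real pipeline would store these records in a tamper-evident registry and verify the hash whenever the data is reused.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute a SHA-256 digest of a dataset file for integrity checks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def provenance_record(path: Path, source: str) -> dict:
    """Build a minimal provenance entry stored alongside the dataset."""
    return {
        "file": str(path),
        "sha256": sha256_of(path),
        "source": source,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # "train.csv" is a placeholder path for illustration.
    record = provenance_record(Path("train.csv"), source="internal data warehouse")
    print(json.dumps(record, indent=2))
```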

Ensuring technical robustness and safety

High-risk AI systems must be able to withstand errors, failures, and adversarial attacks. They require accuracy, fallback mechanisms, and cybersecurity tailored to their risks.

Security task: Conduct rigorous testing, including adversarial robustness testing, performance testing under diverse conditions, and security code reviews. Enforce secure coding practices for AI components. Design and test fail-safe mechanisms and secure shutdown procedures. Expertise from our Gen AI and Privacy Lab is vital for addressing generative AI vulnerabilities.
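
One common starting point for adversarial robustness testing is the Fast Gradient Sign Method (FGSM). The sketch below applies it to a toy logistic-regression model so the idea is self-contained; in practice the gradient would come from your ML framework, and the perturbation budget would follow your threat model.

```python
import numpy as np

# Toy logistic-regression "model"; in a real test the gradient would come
# from your actual framework (e.g. autograd in PyTorch, TensorFlow, or JAX).
w = np.array([1.5, -2.0, 0.7])
b = 0.1

def predict(x: np.ndarray) -> float:
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def input_gradient(x: np.ndarray, y: float) -> np.ndarray:
    """Gradient of the cross-entropy loss with respect to the input."""
    return (predict(x) - y) * w

def fgsm(x: np.ndarray, y: float, eps: float) -> np.ndarray:
    """Fast Gradient Sign Method: one-step perturbation within an L-inf ball."""
    return x + eps * np.sign(input_gradient(x, y))

if __name__ == "__main__":
    x = np.array([0.2, 0.4, 1.0])
    y = 1.0  # true label
    x_adv = fgsm(x, y, eps=0.3)
    print(f"clean score:       {predict(x):.3f}")
    print(f"adversarial score: {predict(x_adv):.3f}")
```

Even this minimal perturbation flips the toy model’s decision, which is exactly the kind of behaviour robustness testing is meant to surface before deployment.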

Enabling transparency and traceability

The Act requires high-risk systems to automatically log events so that their operation remains traceable for compliance. Transparency requirements for general-purpose AI models take effect in August 2025.

Security task: Implement comprehensive, secure, and immutable logging for AI operations, decisions, and data inputs. Ensure audit trails are protected and accessible for compliance verification.
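
Here is a minimal sketch of tamper-evident logging, assuming a simple hash-chained, append-only structure. Real deployments would add write-once storage, signing, and centralised collection, but the chaining idea is the core: each entry commits to the previous one, so later edits break verification.

```python
import hashlib
import json
from datetime import datetime, timezone

class HashChainedLog:
    """Append-only log where each entry commits to the previous entry's hash,
    so any later tampering breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("timestamp", "event", "prev_hash")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

if __name__ == "__main__":
    log = HashChainedLog()
    log.append({"model": "credit-scoring-v3", "decision": "reject", "input_id": "abc123"})
    log.append({"model": "credit-scoring-v3", "decision": "approve", "input_id": "def456"})
    print("chain intact:", log.verify())
```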

Facilitating human oversight

High-risk systems must allow meaningful human oversight, with technical standards still being defined.

Security task: Design and secure interfaces for effective human monitoring, intervention, and control. Ensure oversight mechanisms are tamper-proof.
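
As one illustrative pattern for tamper-proof oversight, the sketch below refuses to execute a high-impact decision unless a signed human approval exists for it, using an HMAC over a hypothetical decision identifier. Key management and the reviewer workflow are assumptions outside the snippet.

```python
import hmac
import hashlib

# Shared secret held by the oversight service, not by the AI system itself.
# In production this would live in a secrets manager; hard-coding it here is
# for illustration only.
OVERSIGHT_KEY = b"replace-with-secret-from-vault"

def sign_approval(decision_id: str, reviewer: str) -> str:
    """Oversight service issues a signed approval for one specific decision."""
    msg = f"{decision_id}:{reviewer}".encode()
    return hmac.new(OVERSIGHT_KEY, msg, hashlib.sha256).hexdigest()

def require_human_approval(decision_id: str, reviewer: str, signature: str) -> None:
    """The deployment side refuses to act without a valid approval signature."""
    expected = sign_approval(decision_id, reviewer)
    if not hmac.compare_digest(expected, signature):
        raise PermissionError(f"No valid human approval for decision {decision_id}")

if __name__ == "__main__":
    sig = sign_approval("loan-7421", reviewer="analyst@example.com")
    require_human_approval("loan-7421", "analyst@example.com", sig)
    print("Approved action may proceed.")
```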

Applying appropriate cybersecurity measures

High-risk systems need security measures to protect confidentiality, integrity, and availability against unauthorised access.

Security task: Apply tailored cybersecurity best practices—secure infrastructure configuration (cloud or on-prem), robust API security, vulnerability management for AI components, strong access controls, and data encryption. Expertise in AI Deployment Security is essential.
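
For example, encrypting model artifacts at rest is straightforward with an off-the-shelf library. The sketch below uses Fernet from the Python cryptography package, with an in-memory key standing in for one that would normally come from a KMS; the file names are placeholders.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_artifact(path: Path, key: bytes) -> Path:
    """Encrypt a model artifact at rest; returns the path of the encrypted copy."""
    token = Fernet(key).encrypt(path.read_bytes())
    out = path.parent / (path.name + ".enc")
    out.write_bytes(token)
    return out

def decrypt_artifact(path: Path, key: bytes) -> bytes:
    """Decrypt an artifact before loading it into the serving environment."""
    return Fernet(key).decrypt(path.read_bytes())

if __name__ == "__main__":
    # In production the key would come from a KMS or secrets manager;
    # "model.onnx" is a placeholder artifact name.
    key = Fernet.generate_key()
    encrypted = encrypt_artifact(Path("model.onnx"), key)
    restored = decrypt_artifact(encrypted, key)
    print(f"Encrypted {encrypted}, restored {len(restored)} bytes.")
```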

Beyond checkboxes: embedding compliance through DevSecAI

Meeting these requirements isn’t about last-minute compliance. It demands integrating security and regulatory considerations throughout the AI lifecycle—a core philosophy of DevSecAI. This means:

  • Thinking compliance early: Incorporate regulatory requirements during design and planning.

  • Automating security checks: Build security and compliance validation into CI/CD and MLOps pipelines (a minimal gate sketch follows this list).

  • Continuous assessment: Regularly evaluate the AI system’s compliance posture. A free AI security assessment provides a crucial baseline.
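
A minimal sketch of such an automated compliance gate, assuming a few hypothetical checks (the file paths are placeholders) that a CI/CD stage could run and fail on:

```python
import sys
from pathlib import Path

# Hypothetical compliance checks wired into a CI/CD or MLOps stage.
# Each returns (check_name, passed); the artefact paths are placeholders.
def checks() -> list[tuple[str, bool]]:
    return [
        ("risk assessment documented", Path("docs/risk_assessment.md").exists()),
        ("model card present", Path("docs/model_card.md").exists()),
        ("audit logging configured", Path("config/logging.yaml").exists()),
    ]

if __name__ == "__main__":
    results = checks()
    for name, ok in results:
        print(("PASS " if ok else "FAIL ") + name)
    sys.exit(0 if all(ok for _, ok in results) else 1)
```

Because the script exits non-zero on any failure, the pipeline blocks a release that is missing required evidence, which is the behaviour an auditor will expect to see.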

Preparing for a regulated AI future

The EU AI Act, alongside emerging global regulations, signals a new era for AI development. Security teams must upskill, adapt processes, and collaborate with data science, legal, and development teams. While the main obligations for high-risk systems begin in August 2026, and technical standards are still being finalised, proactive preparation is key. Viewing regulations as frameworks for building trustworthy, reliable, and valuable AI systems is essential. Innovation and robust security governance must go hand in hand.

How is your security team preparing for AI regulations like the EU AI Act? What are your biggest challenges?