Information Commissioner's Office

Five steps to protect your organisation from AI-powered cyber threats

Cyber criminals are increasingly using artificial intelligence (AI) to carry out attacks that are faster, more advanced and harder to detect. From AI-generated phishing emails that impersonate trusted contacts, to automated tools that scan for and exploit software vulnerabilities, the threat landscape is evolving rapidly.

With this scale and sophistication, cyber security must be a shared responsibility across every part of the economy. As the data protection regulator, we can provide clear expectations and practical support, but all organisations must take proactive steps to prepare themselves for emerging threats. 

By investing in cyber resilience and ensuring appropriate security measures are in place, you can build public trust and confidence in how your organisation protects the personal data you hold. 

Here are five practical steps you can take today to strengthen your resilience to AI-powered threats. 

1) Know what you’re up against

Horizon scanning and understanding potential threats is the foundation of effective security. The main AI-powered risks facing organisations include:

  • AI-enhanced phishing: attackers use AI to generate highly convincing, personalised messages impersonating colleagues, clients or trusted suppliers.
  • Deepfake social engineering: AI-generated audio and video can be used to impersonate colleagues or IT staff to trick employees into resetting credentials or granting system access.
  • Automated vulnerability scanning and exploitation: AI tools can rapidly scan systems, identify weaknesses and launch targeted attacks.
  • AI-powered malware: malicious code that adapts its behaviour in real time to evade detection by conventional antivirus and security tools.
  • Credential stuffing and password attacks: AI accelerates brute-force and credential stuffing attacks, making weak or reused passwords more vulnerable.
  • Data poisoning: where AI models are used in your services, attackers may attempt to corrupt training data or manipulate model outputs to cause harm or extract sensitive data.
  • Indirect prompt injection attacks: where malicious instructions are embedded in external content that an AI system processes and misinterprets as legitimate commands. This includes tool poisoning, where malicious instructions are hidden within the metadata of tools that an AI agent interacts with.

The National Cyber Security Centre (NCSC) has updated its Cyber Assessment Framework to reflect AI threats explicitly, with a greater emphasis on organisations understanding how criminals may use AI technologies so they can respond effectively.  

2) Get the basics right and layer your defences

Most successful cyber attacks exploit basic security failures. We expect organisations that are using or storing personal data to have in place the five technical controls outlined in the Cyber Essentials scheme and to have implemented the actions in the Cyber Governance Code of Practice.

But when it comes to AI-powered threats, foundational security alone is not enough. Layered defences are essential: multiple overlapping controls, so that if one fails, the others contain the damage.

AI tools identify and exploit known vulnerabilities at speed, so make sure there is a solid patching and updating process in place so that available security fixes are applied in a timely manner.
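A timely patching process starts with knowing which software is behind its minimum patched version. As a minimal sketch (the inventory and version table below are hypothetical examples, not real advisories), a simple numeric comparison can flag what needs updating:

```python
# Sketch: flag software running below a minimum patched version.
# The inventory and minimum-version table are illustrative only.

def parse_version(v: str) -> tuple:
    """Turn '2.4.51' into (2, 4, 51) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def outdated(installed: dict, minimum_patched: dict) -> list:
    """Return names of packages running below their minimum patched version."""
    return [
        name
        for name, version in installed.items()
        if name in minimum_patched
        and parse_version(version) < parse_version(minimum_patched[name])
    ]

installed = {"webserver": "2.4.51", "mailer": "1.9.2"}        # hypothetical estate
minimum_patched = {"webserver": "2.4.58", "mailer": "1.9.2"}  # fixes available here

print(outdated(installed, minimum_patched))  # ['webserver'] still needs patching
```

In practice this inventory would come from automated asset-discovery and vulnerability-scanning tools rather than a hand-maintained table, but the principle is the same: compare what is running against what has been fixed, and prioritise the gap.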

3) Restrict access points 

Weak access points are a primary target for cyber attacks, and that includes your third-party suppliers. You should implement multi-factor authentication (MFA) on all remote access, admin accounts and email, and enforce strong password policies. 

Apply the ‘principle of least privilege’: users, systems and applications should only access what they genuinely need. Audit privileged accounts regularly and remove access the moment it’s no longer required. If you are integrating AI into access control systems, make sure you understand the privacy and security implications of any behavioural and identity data used.
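The principle of least privilege can be sketched as a deny-by-default permission check: each role is granted only the specific permissions it needs, and anything not explicitly granted is refused. The role names and permissions below are illustrative, not a recommended schema:

```python
# Sketch of a deny-by-default, least-privilege permission check.
# Roles and permission strings are hypothetical examples.

ROLE_PERMISSIONS = {
    "hr_officer": {"read:employee_records"},
    "payroll":    {"read:employee_records", "write:payroll"},
    "it_admin":   {"reset:credentials"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Allow only permissions explicitly granted to the role; deny everything else."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("payroll", "write:payroll"))     # granted to this role
print(is_allowed("hr_officer", "write:payroll"))  # denied: never granted
print(is_allowed("contractor", "read:employee_records"))  # denied: unknown role
```

The important design choice is the default: an unknown role or an ungranted permission yields a refusal, so forgetting to configure something fails safe rather than open.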

Turning to your supply chain, map what your third parties can access and hold them to appropriate security standards. Ensure you include security requirements in contracts and conduct proportionate due diligence. We have detailed guidance on the responsibilities of data processors and controllers.

4) Improve your detection, monitoring and incident response

You should implement comprehensive security monitoring for suspicious activity such as unusual login patterns, unexpected data transfers, and abnormal API usage, as well as regularly identify weaknesses through vulnerability scanning and penetration testing. 
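Monitoring for unusual login patterns typically means comparing each event against a per-user baseline. As a minimal sketch (the baseline below is hand-written; a real system would derive it from historical logs and cover many more signals), a login can be flagged when it comes from an unseen IP address or outside the user's usual hours:

```python
# Sketch: flag logins that deviate from a user's usual pattern.
# The baseline (known IPs, usual hours) is a hypothetical example.

def flag_login(event: dict, baseline: dict) -> list:
    """Return the reasons a login looks suspicious (empty list = normal)."""
    reasons = []
    if event["ip"] not in baseline["known_ips"]:
        reasons.append("new IP address")
    start, end = baseline["usual_hours"]  # e.g. (7, 19) means 07:00-19:00
    if not (start <= event["hour"] < end):
        reasons.append("outside usual hours")
    return reasons

baseline = {"known_ips": {"203.0.113.5"}, "usual_hours": (7, 19)}

print(flag_login({"ip": "203.0.113.5", "hour": 10}, baseline))  # normal
print(flag_login({"ip": "198.51.100.9", "hour": 3}, baseline))  # two red flags
```

Production monitoring would feed such signals into a SIEM with alerting thresholds rather than printing them, but the pattern is the same: define normal, then surface deviations quickly.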

AI can also be used as a powerful tool for cyber security defence by flagging and containing threats at speed. However, it should operate within a clear framework of human oversight and accountability to prevent misuse and exploitation by malicious actors. 

You should also maintain and regularly test an incident response plan. Ensure staff know their roles and that contacts for reporting are clear. Keep key contact details and offline copies of critical documentation accessible if systems are compromised.

5) Protect personal data 

AI-powered attacks increasingly target personal data, which can also be used to facilitate further attacks. Your obligations under UK GDPR require you to implement appropriate technical and organisational measures to protect personal data. Depending on your organisation, measures could include:

  • Data minimisation and storage limitation: only collect and retain the personal data you genuinely need. The less you hold, the less there is to steal.
  • Data audits: regularly audit what personal data you hold, where it is stored and who has access to it. AI tools that process or are trained on personal data require particular attention.
  • Staff awareness: train staff to recognise AI-powered social engineering attacks such as AI-generated phishing, voice cloning and deepfake techniques. Training should be regular and updated to reflect the current threat landscape.
  • AI governance: if your organisation uses AI tools that process high-risk personal data, you should have a data protection impact assessment (DPIA) and appropriate safeguards in place, including against AI-targeting attacks. You should also follow the government’s AI Cyber Security Code of Practice.
  • Encryption and pseudonymisation: you could also consider these measures to reduce the impact of a breach.
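Pseudonymisation can be as simple as replacing a direct identifier with a keyed hash, so that datasets remain linkable for analysis without exposing the identifier itself. A minimal sketch using HMAC-SHA-256 (the key and identifier below are illustrative; in practice the key would be generated randomly and stored separately from the data, e.g. in a key management service):

```python
# Sketch: pseudonymise an identifier with a keyed hash (HMAC-SHA-256).
# Unlike a plain hash, the secret key stops an attacker who steals the
# pseudonymised data from re-deriving values by brute-force guessing.
import hashlib
import hmac

def pseudonymise(identifier: str, key: bytes) -> str:
    """Deterministic pseudonym: the same input and key always map to the same token."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"example-secret-key"  # illustrative only; keep real keys out of source code
token = pseudonymise("jane.doe@example.com", key)
print(len(token), token[:16] + "...")  # 64-character hex token replaces the email
```

Because the mapping is deterministic, records about the same person stay linkable across datasets, yet without the key the original identifier cannot feasibly be recovered; this reduces the impact of a breach rather than eliminating it, since pseudonymised data is still personal data under UK GDPR.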

None of this is new, but AI brings a renewed urgency and greater speed. Organisations can prepare themselves for future cyber threats by establishing robust security fundamentals early, applying layered defences, and ensuring human oversight in their detection and response processes. 

 

Channel website: https://ico.org.uk/

Original article link: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2026/05/five-steps-to-protect-your-organisation-from-ai-powered-cyber-threats/
