In all my years in tech – through the rise of the internet, cloud and mobile – nothing has seen the adoption curve that generative AI (GenAI) has. In such a short period of time, it’s gone from novelty to necessity, with people already growing to rely on it for everything from research to writing code.
And that’s largely due to its intuitive interface. Because GenAI speaks natural language, it doesn’t need system integration to generate value – it integrates directly with people.
Current threats: Familiar tactics, supercharged
While GenAI hasn’t yet led to brand-new attack vectors, it’s made existing ones far more dangerous:
- Visual impersonation is more convincing, thanks to deepfakes
- Phishing emails are nearly indistinguishable from real ones due to improved linguistic capabilities
- Process replication allows for mimicking internal workflows
- Scalability enables attackers to launch highly targeted, large-scale campaigns
Future threats: Unknown unknowns
As GenAI evolves into Agentic AI – tools that make independent decisions and act autonomously – the threats become harder to predict. State-sponsored “AI Threats-as-a-Service” could become a reality. Our best defense? Master the basics.
Preemptive defense: The first line
When threats are unpredictable, the fundamentals become critical:
- DDoS/IP reputation filtering
- Pre-authentication risk scoring
- Dynamic authentication flows that adjust based on risk level
- Two-layered access policies (SSO and app level)
Preemptive defense – a term coined by Gartner – starts before authentication. Blocking high-risk attempts, introducing step-up challenges for medium-risk ones and trusting but verifying low-risk traffic reduces exposure without compromising user access.
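That tiered model can be sketched in a few lines. This is an illustrative gate, not any vendor's implementation – the thresholds, the 0.0–1.0 score range and the action names are assumptions for the example:

```python
# Hypothetical sketch of a tiered preemptive-defense gate that runs
# before authentication. Thresholds and action names are illustrative.

def preemptive_gate(risk_score: float) -> str:
    """Map a pre-authentication risk score (0.0-1.0) to an action."""
    if risk_score >= 0.8:   # high risk: block before authentication even starts
        return "block"
    if risk_score >= 0.4:   # medium risk: introduce a step-up challenge
        return "step_up"
    return "allow"          # low risk: trust, but keep verifying downstream
```

The point is the ordering: the cheapest, highest-confidence decisions happen earliest, so most hostile traffic never reaches the login flow at all.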
AI-powered phishing
The sophistication of GenAI makes phishing much harder to detect. Attackers now create nuanced, human-like interactions that evade traditional filters. That’s why context-based authentication is essential:
- Adjust authentication factors dynamically based on risk
- Step up authentication within applications based on behavior
- Incorporate ID-verification challenges, particularly for sensitive access
And don’t rely on a single authentication method – use a blend: OTPs, passkeys, biometrics and ID verification – layered across access points.
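A blended, context-based policy might look like the sketch below. The factor names, risk tiers and sensitivity flag are assumptions for illustration, not a specific product's API:

```python
# Hypothetical sketch: selecting a blend of authentication factors
# based on risk tier and resource sensitivity.

def required_factors(risk: str, sensitive: bool) -> list[str]:
    factors = ["passkey"]                  # phishing-resistant baseline for everyone
    if risk in ("medium", "high"):
        factors.append("otp")              # step up with a one-time password
    if risk == "high" or sensitive:
        factors.append("id_verification")  # strongest check for sensitive access
    return factors
```

Layering this per access point – rather than fixing one factor set globally – is what keeps friction proportional to risk.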
Session hijacking
GenAI can lure users into unknowingly giving up session cookies – bypassing the need to authenticate altogether.
Recommendations:
- Enforce step-up authentication when users move laterally to sensitive apps
- Use phishing-resistant factors like FIDO2 or passkeys
- Require re-authentication when users access their SSO profiles
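The first two recommendations combine into a simple rule: a stolen cookie grants the session, but not a fresh phishing-resistant factor, so demand one at sensitive boundaries. The session structure and app names below are assumptions for the sketch:

```python
# Illustrative sketch of forcing step-up when a session moves
# laterally into a sensitive app. App names and the session dict
# shape are hypothetical.

SENSITIVE_APPS = {"payroll", "sso_profile"}

def needs_stepup(session: dict, target_app: str) -> bool:
    # A hijacked session carries the cookie but not a recent
    # phishing-resistant factor; require FIDO2 for sensitive targets.
    if target_app in SENSITIVE_APPS:
        return "fido2" not in session.get("recent_factors", [])
    return False
```

Because the check keys on a recent phishing-resistant factor rather than on the cookie itself, a replayed cookie alone never satisfies it.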
Shadow AI: What you can’t see can hurt you
Let’s talk about Shadow AI – the unsanctioned use of GenAI tools. Even with policies in place, there’s nothing stopping an employee from using ChatGPT on their phone and pasting the output into a report.
The best way to manage this risk? Remove the friction:
- Provide a corporate-controlled GenAI instance
- Maintain a register of AI usage (especially for compliance frameworks like the UK’s ATRS)
- Streamline procurement and integration of GenAI tools into your existing infrastructure
The goal isn’t to block GenAI – it’s to govern it.
Leveraging AI: Four core benefits
AI isn’t just a threat – it’s an opportunity. In the security world, it offers four key benefits:
- Simplify: Delivering the same capability more intuitively
- Accelerate: Improving speed and efficiency
- Fortify: Enhancing what already works
- Expand: Unlocking new functionality altogether
For example, OneLogin’s Vigilance AI uses machine learning (ML) to assess dozens of attributes per authentication attempt, assigning a dynamic risk score using Bayesian probability. This kind of automation improves accuracy and reduces false positives.
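To make the Bayesian idea concrete, here is a minimal naive-Bayes-style sketch: each observed attribute shifts the log-odds that an attempt is malicious. The attributes, likelihood values and prior are invented for illustration – Vigilance AI's actual model and features are not described here:

```python
import math

# Hedged sketch of Bayesian risk scoring over authentication
# attributes. All numbers below are invented for illustration.

def risk_posterior(observations: dict, likelihoods: dict,
                   prior: float = 0.01) -> float:
    """P(malicious | observations), assuming attribute independence."""
    log_odds = math.log(prior / (1 - prior))
    for attr, value in observations.items():
        p_mal, p_benign = likelihoods[attr][value]  # P(value|mal), P(value|benign)
        log_odds += math.log(p_mal / p_benign)
    return 1 / (1 + math.exp(-log_odds))            # back to a probability

# Hypothetical attribute likelihoods.
likelihoods = {
    "new_device":        {True: (0.9, 0.1),  False: (0.1, 0.9)},
    "impossible_travel": {True: (0.8, 0.01), False: (0.2, 0.99)},
}
score = risk_posterior({"new_device": True, "impossible_travel": True},
                       likelihoods)
```

Each attribute contributes evidence independently, which is why adding more attributes per attempt sharpens the score and reduces false positives.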
Expect to see even more innovation as vendors integrate Small Language Models (SLMs) into security tooling – ideal for tasks like analyzing predictable authentication data.
Securing AI: Managing non-human identities
As AI becomes embedded into business workflows, it’s reshaping our understanding of identity – especially non-human identities (NHIs). Traditionally, NHIs include workloads, machines and service accounts. But with Agentic AI, we’re seeing a new category: AI agents that can make decisions and take action.
These non-human identities must be governed as rigorously as human users, with measures including:
- Lifecycle Management: Track creation, role assignment, and deactivation
- Least Privilege Access: Enforce granular permissions and Just-In-Time (JIT) access
- Separation of Duties (SoD): Prevent conflicts or risky combinations of access
- Continuous Risk Assessment: Adapt authentication and access based on evolving behavior
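Least privilege and JIT access for an AI agent can be sketched as a scoped, time-boxed grant that is checked on every use. The class and scope names are assumptions for the example:

```python
import time

# Hypothetical sketch of a Just-In-Time (JIT) grant for an AI-agent
# identity: scoped to named permissions and expiring automatically.

class AgentGrant:
    def __init__(self, scopes: set[str], ttl_seconds: float):
        self.scopes = scopes
        self.expires_at = time.time() + ttl_seconds

    def allows(self, scope: str) -> bool:
        # Least privilege: only the named scopes, only while the grant lives.
        return scope in self.scopes and time.time() < self.expires_at
```

Because the grant expires on its own, an agent that is forgotten about loses access by default – deprovisioning becomes the failure mode, not the afterthought.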
And here’s a critical reminder: Don’t forget the kill switch. You must be able to immediately revoke access if an AI agent behaves unexpectedly – or worse, begins modifying its own code to prevent shutdown.
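A kill switch is only useful if it is one call that does everything. The sketch below assumes a simple in-memory registry; in practice this would map to disabling the identity in your IdP and revoking its tokens:

```python
# Minimal kill-switch sketch for an AI-agent identity. The registry
# and session-store shapes are assumptions for illustration.

def kill_switch(agent_id: str, identities: dict, sessions: dict) -> None:
    identities[agent_id]["enabled"] = False  # block any new authentication
    sessions.pop(agent_id, None)             # revoke every live session/token
```

The crucial property is that revocation does not depend on the agent's cooperation – it happens entirely on the identity provider's side.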
Final thoughts
AI is reshaping identity security from every angle. It’s powering new attacks, improving defenses and demanding entirely new governance models. The path forward requires balance: enabling innovation while managing risk.
Start with strong fundamentals, build layers of adaptive defense, embrace AI where it adds value – and never assume that today’s controls are enough for tomorrow’s threats.
Because in the world of AI, speed and adaptability aren’t just advantages – they’re requirements.