A major 2026 Global Incident Response Report just dropped, and one number stopped me cold.
The fastest 25% of attacks reached full data exfiltration in 72 minutes. The year before, that same group took 285 minutes (4.75 hours).
In one year, attackers got four times faster.
Your security team did not get four times faster.
Researchers at Palo Alto Networks analyzed more than 750 major cyber incidents between October 2024 and September 2025, across 50+ countries and every major industry. The report is detailed. The patterns are clear.
Two statistics stand out:
First, identity weaknesses played a role in nearly 90% of investigations. Not sophisticated zero-days. Not nation-state exploits. Identity. Stolen credentials, over-permissioned accounts, and abandoned service accounts that no one cleaned up.
Second, in more than 90% of breaches, the gap was preventable. Limited visibility. Inconsistent controls. Excessive trust left behind by routine operations.
These are related but distinct findings. Identity is the attack vector. Preventable gaps are why the attack succeeded.
Attackers are walking through doors your team forgot to close. It's NOT that they are outpacing you with superior technology.
AI has become a force multiplier for threat actors.
It compresses reconnaissance. It personalizes phishing at scale. It helps ransomware operators run multiple campaigns simultaneously without proportional increases in human effort. One case describes an extortion operation where the attacker read an AI-generated script word-for-word from a screen while visibly intoxicated. The script was coherent. The threat was real.
Your exposure is the enterprise AI tool already running inside your environment, not just external attackers armed with AI. It is the internal assistant with broad permissions that becomes a reconnaissance tool the moment an attacker gains access. They can query your own systems, pull runbooks, and map your network, using tools your team trusted.
This is what the report calls "Living off the AI land" (LOTAIL). It is the next evolution of living off the land attacks, and most organizations have no policy for it. (Related: practical examples of read-only automation patterns here: A Read-Only File Tree for Technical Writing).
The threat surface has changed. The tools protecting it need to catch up. There are three capability layers every organization should be evaluating right now.
Runtime visibility and protection.
You need a solution that discovers AI applications running across your cloud environment, monitors traffic between your apps, models, and data sources in real time, and blocks prompt injection and exfiltration attempts before they complete.
Traditional security tools were not built to inspect AI traffic. If yours cannot, you have blind spots you probably do not know about.
Without this visibility, an attacker can use a compromised AI assistant to query your Active Directory for hours before anyone notices.
Model integrity scanning.
Open-source and third-party models can arrive in your environment carrying malicious payloads, unsafe serialization formats, or silent backdoors. You need scanning that runs locally so your data stays in your control, and automated gates in your CI/CD pipeline that stop vulnerable models before they ever reach production.
Adversarial testing before an attacker does it for you.
Red teaming for AI systems means running structured attack simulations across safety, security, and compliance categories, mapping results to frameworks like OWASP Top 10 for LLMs and NIST RMF, and producing a risk score you can actually present to leadership. If you have not tested your AI environment this way, you do not know what is exposed.
Data privacy and AI governance.
Every time an employee pastes sensitive information into an AI tool, submits a prompt to a third-party model, or uses an AI assistant with broad data access, there is a privacy exposure. Most organizations have no visibility into what data is flowing through their AI stack, where it is going, or whether it is being retained or used for training. You need controls that enforce data classification at the AI layer, prevent sensitive data from leaving your environment through AI channels, and give you an auditable record of what your AI systems touched and when.
These reflect the exact patterns documented across 750 real incidents. The question is whether your current stack addresses them.
Stop treating detection as your primary defense. The fastest attacks are already past your perimeter before most teams finish reading an alert.
You need identity governance that treats permissions as a liability, not an afterthought. You need visibility that spans endpoints, cloud, SaaS, and AI tools in a single view. You need automated containment that does not wait for a human to approve every step.
And you need to assess your AI environment before someone else does it for you.
The organizations that close the gap are the ones that stopped assuming their current tools were enough; not necessarily the ones with the biggest budgets. Visibility, identity governance, AI risk assessment - none of these are optional anymore. The data is clear. The question is whether your organization is treating it that way.
I work with organizations navigating exactly this.
The 72-minute window is not a warning about the future. It is a description of what is already happening.
Bianca | Founder & CEO, ITRADE Innovations
Cybersecurity | AI | Workforce Solutions | SIM South Florida Board
Data sourced from the Palo Alto Networks 2026 Global Incident Response Report. All statistics referenced are attributed accordingly.



