Agentic AI gives Oklahoma cyber ops a powerful but ‘scary’ tool

Michael Toland, the Oklahoma state government’s top cybersecurity official, recently had a frightening experience.
It wasn’t a ransomware attack. It wasn’t a car accident. It was the day he finally took his AI agent off its leash.
Like most large government organizations, Oklahoma has for years used AI tools and machine learning models to trawl endless network logs for abnormal activity. But the growing barrage of cyberattacks fueled by the latest generative AI models has forced government agencies to respond in kind. After several years of running network monitoring software from the British firm Darktrace in “human confirmation mode,” Oklahoma last year allowed the company’s “Cyber AI Analyst” product to make decisions on its own.
“It’s a little scary. It was for my teams,” said Toland, who’s served as Oklahoma’s chief information security officer the past two years. “You tell it what to do, and it will decide how to do it. And anyone who’s had children knows the fear of handing control over. You have to do it if you want to get to a certain place, but that doesn’t make it easy.”
The “place” Toland wants to go is one where he can keep pace with his attackers, the antagonistic nation-states and opportunistic ransomware groups volleying digital shells at his network. Bad actors are using AI to draft more-convincing phishing emails, gather better intelligence and write effective malware faster than they could in years past. Toland said he has no choice but to embrace a similar level of AI-fueled automation.
“My staff isn’t going to get any bigger. My budget isn’t going to get any bigger,” he said. “You can’t buy your way out of the problem, you can’t hire your way out of the problem. You have to be able to meet the bad guys where they are, which is why we need these AI agents: to deal with their AI agents.”
Overseeing an IT security team of just 35 people, Toland said Oklahoma’s network repels about 28 billion potential cybersecurity threats each year. All of Oklahoma’s network traffic is funneled through a single digital canal where the Darktrace AI agent can make copies of network packets and scan them to alert staff of anything strange, like a foreign process or user, or a familiar one acting in new ways. In a recent month, the agent raised 3,000 alerts, 18 of which were identified as “critical.”
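The pattern Toland describes, copying packets at a chokepoint and flagging hosts that deviate from their own history, can be pictured with a minimal sketch. The Python below is purely illustrative: the per-host byte-count baseline, the z-score thresholds and the alert tiers are assumptions for the sake of the example, not Darktrace’s actual method.

```python
from collections import defaultdict
from statistics import mean, stdev

class AnomalyScorer:
    """Score each host's traffic against its own historical baseline."""

    def __init__(self, min_history=20):
        self.history = defaultdict(list)  # host -> bytes-per-interval samples
        self.min_history = min_history    # samples needed before scoring

    def observe(self, host, bytes_sent):
        """Record one interval's traffic volume and return an alert tier."""
        samples = self.history[host]
        tier = None
        if len(samples) >= self.min_history:
            mu, sigma = mean(samples), stdev(samples) or 1.0
            z = (bytes_sent - mu) / sigma
            if z > 6:
                tier = "critical"   # e.g. a sudden bulk transfer
            elif z > 3:
                tier = "warning"    # unusual but plausible activity
        samples.append(bytes_sent)
        return tier

# Illustrative use: a database server that normally moves ~1 KB per
# interval suddenly moves 250 KB, and the scorer flags it as critical.
scorer = AnomalyScorer(min_history=3)
for b in (1000, 1100, 900, 1050):
    scorer.observe("db-01", b)
print(scorer.observe("db-01", 250_000))  # -> "critical"
```

A real deployment would score many more features than raw volume, but the shape is the same: a familiar host acting in a new way is what triggers the alert.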
Toland estimated that the near-real-time scanning the Darktrace agent provides is the equivalent of as many as 500 human analysts.
“The bad guys have the same advantage,” he said. “They can do more things faster. They can do things with less knowledge. The attack surface is getting bigger, the attack vectors are evolving and the bad guys are getting smarter. In terms of pure human capital, even if I had 1,000 people on my staff, there’s a million out there trying to break in.”
Darktrace is among the cybersecurity vendors advertising products that are “agentic,” meaning they possess the agency to behave on their own, within predefined limits. Toland said the agent doesn’t have access to workstations, but it can put the network’s 4,000 devices on progressively longer time-outs when they behave oddly. And the agent is just one tool in a broad suite of protections: Toland said the state monitors everything, down to the tone people use in their emails, because a change in writing style could indicate nefarious activity is afoot.
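Those “progressively longer time-outs” suggest an escalating quarantine, something like exponential backoff. The sketch below is a hypothetical rendering of that idea; the base and maximum durations and the commented-out enforcement hook are assumptions, not details Toland provided.

```python
# Hypothetical sketch of progressively longer device time-outs.
# BASE_TIMEOUT, MAX_TIMEOUT and the enforcement hook are assumed
# values for illustration, not details from Oklahoma's deployment.

BASE_TIMEOUT = 60          # seconds for a first offense (assumption)
MAX_TIMEOUT = 24 * 3600    # one-day ceiling (assumption)

offenses: dict[str, int] = {}  # device id -> number of prior offenses

def quarantine(device_id: str) -> int:
    """Escalate the device's time-out and return its duration in seconds."""
    count = offenses.get(device_id, 0)
    offenses[device_id] = count + 1
    # Each repeat offense doubles the time-out, up to the ceiling.
    duration = min(BASE_TIMEOUT * 2 ** count, MAX_TIMEOUT)
    # quarantine_device(device_id, duration)  # placeholder network control
    return duration

print(quarantine("printer-7"))  # 60
print(quarantine("printer-7"))  # 120
```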
AI agents can serve four functions: sensing, sensemaking, decision-making and acting. They’re increasingly being granted autonomy across all four, according to Sounil Yu, chief technology officer of the Virginia software firm Knostic. Yu said cybersecurity professionals should be careful.
“With many cybersecurity tools in the past, for good reason, we did not let it make decisions and then act on those decisions autonomously, because what could possibly go wrong? There’s a lot of things that could possibly go wrong,” he said. “And that’s still the case today. I think a lot of people are going to be playing Russian roulette with security tools that they let go wild, without the proper decision-making and acting guardrails.”
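One way to picture the guardrail Yu describes is a pipeline that senses, makes sense and decides on its own, but only acts autonomously when that mode is explicitly enabled, which is essentially the “human confirmation mode” Oklahoma ran in for years. Everything in the sketch below, names and thresholds included, is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch of Yu's four agent functions, with a guardrail
# between deciding and acting. All names and thresholds are illustrative.

@dataclass
class Decision:
    device_id: str
    action: str
    rationale: str

def sense(event: dict) -> dict:
    """1. Sensing: extract the relevant fields from raw telemetry."""
    return {"device": event["device"], "z_score": event["z_score"]}

def make_sense(signal: dict) -> str:
    """2. Sensemaking: classify how abnormal the behavior looks."""
    return "critical" if signal["z_score"] > 6 else "benign"

def decide(signal: dict, verdict: str) -> Optional[Decision]:
    """3. Decision-making: choose a response to abnormal behavior."""
    if verdict == "critical":
        return Decision(signal["device"], "quarantine", "traffic spike")
    return None

def act(decision: Decision) -> None:
    """4. Acting: enforce the response (stubbed as a log line here)."""
    print(f"{decision.action} {decision.device_id}: {decision.rationale}")

def run_agent(event: dict, autonomous: bool,
              approve: Callable[[Decision], bool] = lambda d: False) -> None:
    signal = sense(event)
    verdict = make_sense(signal)
    decision = decide(signal, verdict)
    if decision is None:
        return
    # Guardrail: act autonomously only when explicitly allowed; otherwise
    # fall back to human confirmation.
    if autonomous or approve(decision):
        act(decision)
    else:
        print(f"held for review: {decision.action} {decision.device_id}")

run_agent({"device": "ws-0413", "z_score": 8.2}, autonomous=False)
```

Taking the agent “off its leash” amounts to flipping that one flag, which is why Yu argues the deciding and acting stages deserve the most scrutiny.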
But Toland pointed out that it’s critical he find abnormal activity as quickly as possible. Data breaches go unidentified for 194 days, on average, according to IBM. And during that time, bad actors can plant malware, ensuring that even organizations’ backups are contaminated.
Yu agreed that in cybersecurity the good guys are overwhelmed and underresourced, but he said it would be “extraordinarily foolish” for them to unleash agentic systems too early. He likened using AI agents to wielding the power of 100 interns: a potentially invaluable resource, but one that can go wrong quickly without proper supervision.
But long-term, Yu said, AI will favor the good guys.
“AI effectively levels the playing ground,” he said. “I think the advantage is moving over to the defender’s side because we [will] have a first-mover’s advantage. We can deploy things that are more secure from the beginning than we do today.”