Conversations from The‑C2: Smarter Security with AI?

I spent much of my time at The‑C2 conference this year speaking with business leaders, AI specialists, and security experts about the role of artificial intelligence in cyber security. The conversations were honest, sometimes challenging, and refreshingly sceptical. If there’s one message that came through loud and clear, it’s this: intelligent security tools can be useful, but only if you’re deliberate about when and how you use them.

AI Isn’t One Thing

A theme that came up again and again is that “AI” is a broad term that means very different things to different people. Delegates shared examples ranging from basic pattern matching tools to full-blown machine learning systems and LLM‑powered assistants. There’s no single AI capability that fits all. You need to start by understanding exactly what the tool does and whether that aligns with the problem you’re trying to solve.

For instance, one company talked about deploying AI to prioritise alerts in their SIEM environment. Another was using a model to improve detection of phishing emails by learning from user-reported examples. These aren’t interchangeable technologies, they’re fundamentally different approaches under the same “AI” label.

Don’t Use AI Unless You’ve Got a Real Problem to Solve

Plenty of people at the conference shared stories of tools being brought in because they were “clever” or “cutting edge,” only to gather dust because the organisation hadn’t clearly identified the problem they were supposed to address.

One delegate from a large financial firm said it plainly: “If you can’t define the pain point, you don’t need a robot to solve it.” That really stuck with me. Whether it’s an overwhelmed SOC, suspicious email patterns, or unexpected lateral movement across your estate, you have to start with something concrete. Without a specific challenge in mind, intelligent tooling is just noise with a price tag.

Data is the Foundation, and the Risk

The quality of your results depends entirely on the quality of your data. That was one of the most repeated points. An AI tool can only be as good as the inputs it’s trained on. If your logs are patchy, mislabelled, or inconsistent, the system might reinforce your blind spots instead of fixing them.

There was also a quiet but important conversation around data sensitivity. Feeding large datasets into external or opaque AI platforms raises questions about exposure and control. One technical lead told me they’d had to shut down a promising project because the data required for training was simply too sensitive to share with any third party, no matter how secure they claimed to be.

Skills Matter More Than the Tech

Many organisations simply don’t have the in-house capability to manage AI-based tools effectively. That’s not a criticism, just a reality. What’s key is recognising the gap early. Several security heads talked about investing in training internal teams to understand how the tools work, rather than relying blindly on vendor dashboards. Others opted to partner with experts who could guide deployment and tune the system based on real-world use.

Either way, the message was clear: if you don’t understand the tool, you can’t get value from it. And worse, you might make the wrong assumptions based on what it’s telling you.

AI Has Limits. Stay Curious.

Even the most advanced AI systems have blind spots. Delegates shared examples of tools that generated too many false positives, or failed to flag subtle, context-specific threats. The best results came when AI was treated as an assistant, not an oracle.

This is especially true in dynamic environments where threat patterns shift quickly. No matter how clever the model, it still needs retraining, human oversight, and a willingness to question its outputs. One conversation I had really drove this home: “If you’re not asking ‘why did it say that?’ at least once a week, you’ve probably stopped thinking.”

Bake in Security at Every Stage

Across nearly every conversation, people emphasised the importance of secure-by-design thinking. It’s not just about using an AI tool securely, it’s about making sure the tool itself has been developed and deployed in a way that doesn’t introduce new vulnerabilities.

This means proper data handling, secure APIs, resilience to adversarial inputs, logging, version control, and clear ownership models. I heard from companies who had already run into issues with things like prompt injection, model poisoning, and unexpected outputs in live environments. The message was: don’t bolt security on after the fact. Build it in from the start.

Risks and Rewards Are Tightly Coupled

What became obvious at The-C2 is that AI can amplify both strengths and weaknesses. On one hand, it can spot patterns at speed, summarise complex activity, and scale response efforts. On the other, it can expand the attack surface, introduce new threat vectors, or lull you into a false sense of confidence.

There was no one-size-fits-all answer, but the general view was that AI should be seen as part of a layered defence, not a silver bullet. Delegates advised using it where it supports decision-making, not where it replaces it.

Five Questions That Kept Coming Up

By the end of the conference, I’d noticed a set of questions that people kept asking themselves and each other:

  1. What exactly are we trying to fix?
  2. Do we have the data to make AI useful, and is it clean enough to trust?
  3. Who will actually run and maintain the system?
  4. What happens when the tool gets it wrong, what are the safeguards?
  5. Can we integrate this into our current setup without making things more fragile?

These conversations were grounded, experienced, and a bit wary. Which I think is exactly the right approach when adopting something as powerful as AI.