When the Bots Start Doing Billy’s Job (Part 3)

Written by: William White, CISSP

Chief Technology Officer, Ultimate Risk Services

(Part 3 in our AI vs AI series)

In a previous post (about getting your CISSP to keep your job), I stated:

“Try asking an AI to convince a senior executive to invest in a security initiative that won’t show ROI until after something bad happens. Exactly.”

But then I got to thinking again…hmmm…

Who would be more effective at convincing: a CISSP or a machine? This is within the purview of a CISO, after all.

Let’s each write a convincing letter to our CEO, Mr. Chuckles, and find out:

The Machine

Dear Mr. Chuckles and Esteemed Members of the Board,

I’d like to propose an investment that, if we are successful, will appear—on paper—to accomplish absolutely nothing.

No new revenue stream.
No eye-catching growth metric.
No celebratory press release.

In fact, the best-case outcome is so uneventful that it will be indistinguishable from inaction.

Naturally, I’m referring to a cybersecurity initiative.

Before this begins to sound like an elaborate budgetary prank, let me assure you: this is one of the more rational (if counterintuitive) decisions in front of us.

Out of curiosity—and in the spirit of efficiency—I asked an AI to make this case for me. It did an admirable job. It talked about risk reduction, probability curves, and cost avoidance. It was precise, articulate, and entirely unburdened by the reality of being accountable for the outcome. Which is, perhaps, the key difference.

Because while AI can explain why security matters, it doesn’t sit in this room when trade-offs are made. It doesn’t weigh short-term performance against long-term resilience. And it certainly doesn’t answer for what happens if we guess wrong.

That responsibility, inconveniently, remains ours.

Here’s the situation in plain terms:

We are currently accepting a level of cyber risk that is typical, tolerated, and—given the broader landscape—increasingly fragile. The proposed initiative reduces that risk. Not to zero (no honest person will promise that), but to a level more consistent with the scale and ambition of this organization.

The challenge is that its success will be measured in negatives:

  • Incidents that don’t occur

  • Headlines we don’t make

  • Revenue we don’t lose

  • Customers who never have a reason to question their trust

It is, essentially, an investment in non-events.

And yet, history is unambiguous on this point: when organizations delay these investments, they rarely eliminate the cost—they simply defer it to a moment when it is more expensive, more disruptive, and far less controllable. So the decision before us is not whether we will pay for security.
It is whether we will do so:

  • Proactively, with intent and leverage

  • Or reactively, with urgency and fewer options

If I may offer one final, uncharacteristically blunt framing:

Approving this initiative is choosing to spend money and wonder, occasionally, if it was necessary.

Declining it is choosing to save money and eventually find out.

Thank you for your time and consideration. I’m happy to discuss this further—preferably before it becomes urgently self-explanatory.

Respectfully,
EZ2USEAI, AI CISO

Yours Truly

Dear Mr. Chuckles and Board,

I recently conducted a small experiment that I think you’ll appreciate; it’s equal parts innovation and mild existential curiosity. I asked an AI to convince a senior executive to invest in a security initiative that, by design, will not demonstrate ROI unless something goes wrong.

The result? A polished, logically sound, and slightly soulless argument about risk, probability, and cost avoidance. It checked all the boxes. It used phrases like “proactively” and “reactively.” It was, in a word, correct.

And yet… I wouldn’t trust it to actually close the deal.

Because here’s the interesting part, Mr. Chuckles: while AI can explain security, it can’t own it. It doesn’t sit in your chair weighing trade-offs between growth and resilience. It doesn’t feel the weight of a decision that might only prove itself during a crisis. And it certainly doesn’t get a call at 2 a.m. when something goes sideways.

Which brings me to the point of this letter.

Investing in this security initiative is not about chasing ROI in the traditional sense. It’s about deciding, deliberately, what kind of risk we’re willing to carry as an organization.

Right now, we are effectively betting that nothing sufficiently bad will happen to expose the gaps we know exist. That’s a very common, very human, and very negligent calculation. It’s a bet with terrible odds for us.

This initiative simply changes the odds.

It won’t generate revenue next quarter. It won’t make a dashboard spike upward in a satisfying way. What it will do is reduce the likelihood that a single event disrupts operations, erodes customer trust, or forces us into reactive decisions under pressure.

In other words, it buys us the ability to handle a bad day without it becoming a defining one.

If another AI were writing this, it would probably end with a neat summary about “aligning security investments with business objectives.”

Instead, I’ll put it more plainly and humanly. We can choose to invest now, on our terms, with clarity and control. Or we can invest later, under duress, with significantly worse options.

I feel the AI understands that distinction intellectually.

You, however, get to decide what it means in practice.

Who did a better job? Me or the AI? As a CISSP or CISO, could you leverage an AI tool to make your life easier? If so… how? Leave a comment!