
I thought AI was going to make me more productive. Maybe even give me some of my time back. That was the plan 😅 It didn’t quite work out that way.
Instead, it made it a lot harder to ignore where things were actually broken.
If I’m being honest, the signs were already there. Over the past year of building with AI, I kept seeing the same pattern: the tools weren’t the bottleneck. The system was.
I wrote about some of that in A Year of Building, Learning, and Laughing My Way Through AI at Ford—but I don’t think I fully appreciated what it meant yet.
The model I started with
I was thinking about AI the same way most leaders are right now: As a tool. Something teams could use to move faster. Write code quicker. Summarize more. Do more in less time. And to be fair—it can do that.
But that framing assumes something important: That your system already works.
That your workflows are clean.
That your systems of record are reliable.
That your teams are operating with enough discipline for speed to matter.
If that’s true, AI helps.
If it’s not…
AI doesn’t fix it.
It exposes it.
What broke first
We had a few teams that were consistently late with their deliverables.
Nothing unusual. Plenty of reasonable explanations: capacity, priorities, dependencies… the usual suspects. Normally, that turns into a series of conversations. You try to figure out the root of the problem through long meetings, mile-long email chains, and tea-leaf reading.
But this time, I didn't start with conversations. I started with signals.

I started with a simple prompt. It was something like, “Use this persona…Here’s the problem I’ve been facing… Ask me questions one-by-one to come up with a plan of attack.”
From there, I set up a simple AI-driven view of a few things:
| Signal | What it reveals |
|---|---|
| PRs without review > 2 days | Bottlenecks / unclear ownership |
| Tickets picked up without story points | Weak planning discipline |
| PRs without descriptions | Missing context |
| Stuck tickets (no movement) | Work started without a clear path |
Nothing advanced. Just the basics.
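To show how basic, here's a sketch of the first two signals as plain functions over PR data. The field names (`opened_at`, `reviewed`, `description`) are hypothetical placeholders for whatever your source-control API returns, not a real schema:

```python
from datetime import datetime, timedelta, timezone

# Signal threshold from the table: a PR waiting more than 2 days without review.
STALE_AFTER = timedelta(days=2)

def stale_prs(prs, now=None):
    """Open PRs with no review after 2 days: bottlenecks / unclear ownership.

    `prs` is a list of dicts with hypothetical fields: 'id', 'opened_at'
    (a timezone-aware datetime), 'reviewed' (bool), 'description' (str or None).
    """
    now = now or datetime.now(timezone.utc)
    return [pr for pr in prs
            if not pr["reviewed"] and now - pr["opened_at"] > STALE_AFTER]

def missing_description(prs):
    """PRs opened without any description text: missing context."""
    return [pr for pr in prs if not (pr.get("description") or "").strip()]
```

The point isn't the code; it's that each signal is a one-line rule anyone can audit, so the conversation shifts from opinions to counts.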
And the pattern showed up almost immediately. We didn’t have a delivery problem. We had a fundamentals problem. AI didn’t help me identify the issue faster. It helped me stop accepting the wrong explanations.
It’s hard to keep saying “we’re overloaded” when the system itself is inconsistent at every step.
The second time it showed up
I kept wondering why teams were spending so much time doing “technical scoping.” The whole process just felt heavier than it should have been. So I went back and reviewed a set of approved product requirement documents.
And there it was again.
- requirements that weren’t fully ready
- missing constraints
- implied decisions that hadn’t been made
We were asking teams to move faster… on inputs that weren’t ready.
So we changed the approach. We built an AI Assistant that:
- surfaces gaps in requirements
- identifies what’s actually scoping-ready
- outlines what still needs clarity
And again—the impact wasn’t just speed. It was clarity. Earlier.
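The gap check itself doesn't have to be sophisticated to be useful. Here's a minimal sketch of the idea, assuming a simple checklist of sections; the section names are illustrative placeholders, not our actual criteria, and the real assistant uses a model rather than string matching:

```python
# Sections a "scoping-ready" requirements doc should cover.
# Placeholder names for illustration, not a real checklist.
REQUIRED_SECTIONS = ["problem", "constraints", "acceptance criteria"]

def requirement_gaps(doc_text):
    """Return the required sections missing from a requirements doc."""
    lowered = doc_text.lower()
    return [s for s in REQUIRED_SECTIONS if s not in lowered]

def scoping_ready(doc_text):
    """A doc is scoping-ready only when no required section is missing."""
    return not requirement_gaps(doc_text)
```

Even this crude version changes the default: instead of teams discovering the gaps mid-scoping, the doc's author sees them before handoff.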
The shift I didn’t expect
That’s when it clicked. AI wasn’t helping us move faster through the system. It was forcing us to look at the system itself. I stopped thinking about AI as something my teams needed to use more. And started thinking about what our systems needed to look like for AI to even be effective.
Because AI reflects the quality of your organization.
If your systems are messy,
if your workflows are inconsistent,
if your inputs are ambiguous—
AI doesn’t smooth that out. It amplifies it.
The uncomfortable part
I don’t think most organizations are structurally ready for full-scale AI adoption. Not because they lack tools. But because they’ve skipped the basics.
The work outlined in Accelerate—clear systems, measurable signals, disciplined execution—that still has to come first. AI doesn’t replace that work. It makes the absence of it obvious.
Where I’ve landed
AI isn’t a tool you simply hand to your team. It’s a constraint you design your systems around. That changes the job.
Less:
- pushing adoption
- chasing updates
- debating symptoms
More:
- designing systems of record
- enforcing clarity in inputs
- making work legible
- creating feedback loops that don’t rely on interpretation
What changed for me wasn’t just how I use AI. It’s how I think about leadership. This isn’t about getting better at using AI. It’s about becoming the kind of organization AI can actually work with.
In the next post, I’ll go deeper into what this looks like in practice:
moving from status chasing → system design
Because that’s where this shift really starts to show up.