AI Made Visible What Was Always There

Lately, I find myself talking about AI all the time. With colleagues, with other architects, with developers. And that's probably true for most of us right now. AI shows up in almost every technical conversation: discussions about productivity, quality, responsibility, or simply "how are you using it?"

At some point, you start noticing a pattern. You hear yourself explaining the same things again and again. How AI fits into your work, why certain approaches work well, and where the real risks actually are. There's a rule of thumb I've always liked:

If you've explained something three times, write a blog post.

So that's why I'm writing this.

Not to introduce AI, and certainly not to hype it, but to share where I currently stand.

AI tools have gradually become part of my day‑to‑day work over the years. I began using GitHub Copilot in May 2022, when it entered Technical Preview. It wasn't transformative at that moment; it simply felt like another tool that supported the way I already thought about software design and development.

Looking at my work today, I realize something important:

AI didn't fundamentally change how I work. It clarified it.

It amplified habits and principles I already relied on, making my way of thinking, both as an architect and as a software engineer, more explicit and more intentional.

This is how I experience that today.

Architect: Designing Boundaries in a World That Now Thinks Back

As an architect, I've never approached the role as one of prescribing exact solutions. My focus has always been on boundaries: shaping the space in which we can make good decisions together, safely and consistently.

For years, one principle guided how I worked with teams and codebases:

Make the right thing easy, and the wrong thing hard.

That idea shaped how systems were structured, how abstractions were introduced, how conventions emerged, and how developer experience was designed. Good architecture was never about control; it was about guidance. If people consistently made the wrong choice, I learned to look at the system first, not the people.
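To make that principle concrete, here is a minimal sketch in TypeScript (the ecosystem mentioned later in this post). The names and the validation rule are hypothetical, chosen only for illustration: by giving a type a private constructor and a single validating factory, invalid values become hard to construct, so the right thing is the easy thing.

```typescript
// Hypothetical sketch: "make the right thing easy, and the wrong thing hard"
// by making invalid states hard to represent in the first place.

class EmailAddress {
  // The constructor is private: the factory below is the only way in,
  // so every EmailAddress instance in the system is guaranteed valid.
  private constructor(readonly value: string) {}

  static parse(raw: string): EmailAddress | null {
    // Deliberately simple check for the sketch; a real system would be stricter.
    return /^[^@\s]+@[^@\s]+$/.test(raw) ? new EmailAddress(raw) : null;
  }
}

// Downstream code accepts EmailAddress and never needs to re-validate.
function sendWelcome(to: EmailAddress): string {
  return `welcome sent to ${to.value}`;
}

const good = EmailAddress.parse("dev@example.com");
const bad = EmailAddress.parse("not-an-email");

console.log(good !== null); // valid input parses at the boundary
console.log(bad === null);  // invalid input is rejected at the boundary
if (good) {
  console.log(sendWelcome(good));
}
```

The point is not the regex; it is that the wrong path (passing an unvalidated string deep into the system) simply does not type-check.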

Over time, I realized how directly this principle applies to AI.

AI doesn't just generate code anymore. It navigates repositories, interprets intent, reasons about structure, and proposes changes within an existing system. Just like us, it performs best when intent is explicit, responsibilities are clear, and boundaries are sharp. A well‑structured codebase makes it easy for AI to do the right thing, and surprisingly difficult for it to drift into unsafe, inconsistent, or unintended directions.

That insight didn't feel like a shift in direction. It felt like confirmation. The architectural work I had always done to help us make better decisions as a team turned out to be exactly what AI needs to operate responsibly inside a real codebase.

AI doesn't need perfect architecture.
But it does need clear architectural decisions.

In practice, that means the same things I've always focused on within teams: naming things with intent, drawing explicit boundaries between responsibilities, making conventions visible and easy to follow. And not just in code: architectural decision records, well‑written user stories, and clearly defined bounded contexts and use cases all serve the same purpose. They make intent explicit and traceable. When a team understands why a boundary exists, they respect it, and so does AI. Clear decisions reduce ambiguity, whether a colleague or a model is reading your code, your ADR, or your spec. That's also why this connects so naturally to spec‑driven ways of working: the more explicit your decisions are upfront, the better both people and AI can reason within them.
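An architectural decision record is one of the cheapest ways to make such a decision explicit and traceable. A minimal sketch, loosely following the common Nygard-style layout; the decision, names, and numbering here are invented purely for illustration:

```markdown
# ADR-007: Orders and Invoicing are separate bounded contexts

## Status
Accepted

## Context
Order handling and invoicing change for different reasons and are owned
by different teams. Sharing one data model has repeatedly caused
accidental coupling.

## Decision
Orders and Invoicing live in separate modules with their own models.
Invoicing consumes order data only through the published OrderCompleted
event, never by reading the Orders tables directly.

## Consequences
Some order data is duplicated into Invoicing. In exchange, either side
can evolve its model independently, and anyone reading the repository,
human or model, can see exactly where the boundary is and why it exists.
```

The "Context" and "Consequences" sections carry the why, which is precisely the part a reader, or an AI navigating the repository, cannot recover from the code alone.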

And when decisions are explicit and boundaries are visible, something else becomes possible: governance.

You can't govern what you can't see.

That has always been true for systems: boundaries, ownership, and the traces that make them observable. And it's no different for AI.

Being Transparent, Taking Ownership, and Feeling Responsible

Governance only works when there's transparency. And transparency, for me, goes beyond the technical. It's cultural.

AI should never be something that quietly "just happens" in the background. Teams need to be open about where they use AI, where they deliberately don't, and why. Not to justify its use, but to keep ownership clear.

Because regardless of how good the tooling becomes, one rule never changes:

You remain responsible for what you deliver. Always.

AI can help draft code, suggest designs, or explore options, but the result must still become your story: something you understand, can explain, can defend, and, most importantly, feel accountable for. The moment you stop being able to do that, you've given up responsibility, and that's where things break.

For me, using AI responsibly means being explicit about its role, while never hiding behind it. "The model suggested it" is not an explanation. The explanation is the reasoning you applied on top of that suggestion.

To practice what I preach: this blog post was written in Dutch and translated to English with AI. I'm not a native speaker, and AI helped me express these ideas in a language that isn't entirely my own. But every thought, every opinion, every nuance is mine. I reviewed every sentence, adjusted what didn't feel right, and made sure nothing was lost in translation.

This is 100% my story.

This mindset is something I already practiced with teams before AI. Design decisions were never owned by tools, frameworks, or patterns; they were owned by people. AI didn't change that rule. It only made it easier to accidentally forget it, which is why being intentional about transparency matters so much now.

Engineer: Experience, Learning, and the Role of Reasoning

Alongside architecture, I continue to work hands‑on as a senior software engineer. And for me, this is where AI becomes most interesting, because it intersects directly with how developers learn.

To me, experience has never meant "knowing everything."

Experience means recognizing wrong paths faster.

That becomes especially important with AI, because AI is very good at producing plausible solutions. Experience is what tells you when something that looks clean will fail under real‑world constraints.

When something is new, the first and second steps are often easy to find: getting started, understanding the mechanics. AI helps enormously there. But I was always someone who wanted to go further than that. To experiment, to connect ideas, to see how things behave outside the happy path. To understand how something truly works.

That mindset didn't change with AI. What changed is how clearly it now shows up in daily practice.

One of the biggest benefits of tools like GitHub Copilot was never speed for me. It was friction. If I wanted useful output, I had to explain what I wanted. And to explain it, I had to reason about intent, constraints, and trade‑offs. That was already true with early versions of Copilot, but with today's reasoning‑capable models something falls into place.

When I explain a problem now, the model reasons with me. It reacts to intent, challenges assumptions, proposes alternatives, and sometimes surfaces options I hadn't consciously considered yet. That feels familiar, because it's the same dynamic I've always valued when developers learn from each other.

Explaining has always been one of the strongest ways to learn. The difference now is that this happens continuously, while I'm building, not just during reviews or mentoring sessions.

Confidence: Not Recall, but Reasoning

This also changed how I think about confidence.

If I work in Python today, I still look up syntax line by line. I won't pretend fluency. But that doesn't stop me from using it. The confidence I'm training isn't recall, it's reasoning. I know what I want to achieve, I understand the structure of the solution, and I trust that I can express it, even if I need help with the details.

The same applies to frontend work. As a backend engineer, I've always worked alongside frontend specialists. I'll never be as strong as a dedicated React developer, and that's fine. What matters is that frontend is no longer a foreign language. I understand the concepts, the trade‑offs, the implications. I can contribute meaningfully.

That's the thing about reasoning, whether it happens in a team discussion or in a conversation with AI: it builds confidence naturally. Not by memorizing answers, but by repeatedly working through intent, constraints, and trade‑offs. Every time you reason through a problem, you understand it more deeply. And that understanding stays, even when the syntax doesn't.

Confidence isn't built by knowing the answer. It's built by reasoning through the problem.

With AI, that confidence no longer fades between projects. It compounds. Learning becomes a skill you keep building, not something you lose when you switch context.

Interestingly, this confidence even influences architectural decisions in my current team. We've found ourselves seriously considering Angular with TypeScript over Blazor with C#, not because C# is weaker, but because AI models currently reason significantly better within the TypeScript ecosystem. That adds a new dimension to technology choices, not replacing engineering judgment, but complementing it.

Reflection: Seeing Yourself and Your Team More Clearly

Looking back, I realize that I've never really learned in isolation. Almost everything that shaped me as an architect and engineer came from working with teams. Reasoning together, challenging assumptions, explaining ideas, and being open to the fact that clarity often emerges through discussion, not authority.

I've always believed that learning happens at every level within a team.

You're never too junior to teach something valuable, and never experienced enough to stop learning something new.

Long before AI became part of day‑to‑day work, I saw this play out constantly: a junior developer asking a question that exposed a hidden assumption, a colleague reframing a problem from a different angle, or a team discussion where no one had the full answer at the start, yet reasoning together made it clear.

That dynamic shaped how I worked. I never saw my role as providing answers. I reasoned with people. Architecture discussions, design sessions, code reviews. They were about making thinking explicit, not about being right immediately.

What AI changed is not that this suddenly exists, but that I now notice it much more clearly.

AI behaves like a permanent participant in the same reasoning process. It responds to explanations, exposes gaps, and forces intent to be articulated, much like a thoughtful teammate asking the question everyone else skipped. When something doesn't make sense, it becomes visible, fast.

That's when I realized:

What makes AI work well is exactly what makes a team work well.

The same mindset applies. Just like working with people, working with AI reminds you that you're never done learning. Insight doesn't have a seniority level. Its value lies in the reasoning it triggers, not in where it comes from.

In that sense, AI hasn't replaced collaboration.
It has reinforced it.

It surfaced something I had practiced for years without naming it: learning is shared, continuous, and grounded in explanation. AI simply made this loop faster and more visible.

So when I say AI feels like a mirror, it's not only reflecting how I think as an individual.
It reflects how I've always worked with teams: learning together, reasoning together, and staying open to being wrong.

That's why AI fits so naturally into my way of working. Not because it introduced something new, but because it made visible what was always there. The habits, the principles, the way of thinking, they were never about AI. They were about designing clear boundaries, making the right thing easy, taking responsibility for what you deliver, and reasoning together as a team. And today, AI makes it impossible to overlook.