The best teams don't rely on heroes; they rely on each other. Discover how Collective Responsibility transforms engineering culture, turning "it works on my machine" into shared success, better stability, and the psychological safety to take risks.
I want to share one of the most valuable things I've learned about leading software engineering teams. It’s one of those lessons that seems obvious, yet it is often overlooked and rarely given the attention it deserves. In an industry obsessed with the "10x Developer", it is easy to forget that software is a team sport.
Early in my career, I thought my value was defined by how much of the codebase only I understood. I wanted to be the one who could swoop in, type furiously for ten minutes, and save production.
I was wrong.
As I progressed through my career to CTO, I realised that the "Hero Developer" mindset is actually a liability. It creates bottlenecks, breeds anxiety, and eventually leads to burnout.
Today, my primary job isn’t just to architect systems; it is to architect the environment in which those systems are built. And the foundation of a high-performing, resilient engineering culture is Collective Responsibility.
Collective responsibility is the shift from "I wrote this feature, so I own it" to "We shipped this release, so we own it."
It means the team shares the credit for success and the burden of failure equally. When a bug hits production, the question isn't "Who pushed that commit?" but "How did our process let that slip through?" and "How do we swarm to fix it?"
When you successfully instil this mindset within your team, the chemistry changes. Here is what happens when ownership becomes shared.
For many, "psychological safety" might sound like HR fluff, but look at the data.
Google’s massive two-year study on team performance, Project Aristotle, revealed a startling conclusion. After analysing 180 teams, they found that the number one predictor of high performance wasn't IQ, seniority, or stack expertise. It was psychological safety.
Harvard researcher Amy Edmondson defines this as "a shared belief held by members of a team that the team is safe for interpersonal risk-taking."

Here is why this matters to a tech leader:
By instilling collective responsibility, you create a "sandbox" where interpersonal risk-taking—like proposing a wild architectural change or admitting "I don't know"—is rewarded, not ridiculed. The team acts as the safety net. If a junior dev breaks the build, a senior dev is there to help fix it—not to scold.
When the whole team is responsible for the product, "It works on my machine" is no longer an acceptable defence. Engineers stop throwing code over the wall to QA.

This isn't just a management philosophy; it is a measurable predictor of stability.
In this environment, engineers start asking, "Is this maintainable for the next person who touches it?" because the next person might be their teammate. The quality of the codebase becomes a shared point of pride, rather than a checklist for an individual.
The "Hero" mindset is a fast track to burnout. When one person carries the weight of a critical system, they can never truly disconnect. Collective responsibility acts as a load balancer—not just for requests, but for stress.

With the ubiquitous adoption of AI coding assistants (Copilot, ChatGPT, etc.), I’m seeing a dangerous new anti-pattern emerge: "The AI wrote it."
The 2025 DORA report and recent analysis on the "Stability Tax" of AI confirm something critical: AI is an amplifier. It does not fix broken processes; it magnifies them. If your team has a culture of "shipping fast and breaking things", AI will just help you break things much, much faster.

Collective responsibility in the age of AI means treating generated code with higher scrutiny than human code. If the AI introduces a security vulnerability or hallucinates a library, and we ship it, that isn't a "bot failure"—that is a failure of our collective review process.
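As a small, concrete example of building that scrutiny into the process rather than relying on individual vigilance, here is a sketch of a pre-merge check that verifies every declared dependency actually exists on PyPI—the classic "hallucinated library" failure gets caught by the pipeline before a human ever reviews the diff. This is an illustrative sketch, not a prescription: the `requirements.txt` location and the exit behaviour are assumptions to adapt to your own stack.

```python
#!/usr/bin/env python3
"""Pre-merge guard: fail the build if a declared dependency does not exist on PyPI.

A minimal sketch of encoding "collective scrutiny of generated code" into the
pipeline. The requirements.txt location is an assumption; adapt to your stack.
"""
import re
import sys
import urllib.error
import urllib.request


def package_exists(name: str) -> bool:
    """Return True if the package has an entry on the public PyPI JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False


def declared_packages(path: str = "requirements.txt") -> list[str]:
    """Extract bare package names, ignoring comments, extras and version pins."""
    names = []
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()
            if not line:
                continue
            match = re.match(r"^[A-Za-z0-9._-]+", line)
            if match:
                names.append(match.group(0))
    return names


if __name__ == "__main__":
    missing = [pkg for pkg in declared_packages() if not package_exists(pkg)]
    if missing:
        print(f"Unknown packages (possible hallucinations): {', '.join(missing)}")
        sys.exit(1)
    print("All declared dependencies exist on PyPI.")
```

In a culture of collective responsibility, a failing check like this is read as "the process caught it", not as evidence against whoever accepted the suggestion.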
You cannot mandate culture, but you can nurture it. As leaders, we have to model the behaviour we want to see. Here is how I approach it:
When things go wrong—and they will—host a Blameless Post-Mortem.
In my experience, it is remarkably rare that someone actually makes a reckless and stupid mistake. It is almost always down to the process, the tools, or the environment that allowed the individual to make such an error.
If you recognise that and own it, your job as a leader is then to help the rest of the team get on board and iron out the issue(s) that caused the error in the first place. Change your vocabulary, ensure your team understands this principle deeply, and encourage them to be part of the solution.
When a critical issue arises—whether it is a fatal bug in production or a developer hitting a brick wall on a new feature—encourage the team to stop starting new work and "swarm" the problem.
Crucially, your team needs to see you doing this too. Be the first one to ask, "How can I help?" or "What do you need?" There is always a fine line between helping and getting in the way, but trust your team to tell you where that line is. By showing up, you validate that asking for help is a strength, not a weakness.
In engineering, we tend to celebrate the person who merges the PR or closes the ticket, often overlooking the critical contributions that made that success possible.
Code reviews are a staple of engineering, yet they are rarely utilised to their full potential. We often treat them as a gatekeeping exercise—a final check to catch bugs before they hit production. While that is valuable, it misses the bigger picture.
The primary value of a code review is not quality control; it is knowledge transfer.
The Role of AI: This is where modern tooling can transform your workflow. AI-powered code review tools can now handle the pedantic parts of the process—the linting, the syntax checking, and the style guide enforcement. They don't have egos, and they don't get tired.
By offloading the "boring" checks to AI, you free up your human engineers to focus on the interesting and challenging parts of the code: architectural fit, business logic, and maintainability. This shift provides significantly more opportunities to learn about the codebase, rather than just arguing about variable names.
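To make "offloading the boring checks" concrete, here is a sketch of a tiny pre-review script that runs the mechanical checks before anyone asks a human for their time. The tool choices (ruff, mypy) and the `src/` path are assumptions—swap in whatever your team already uses; the point is that style and typing feedback arrive automatically so reviewers can spend their attention on design and intent.

```python
"""Run the mechanical review checks before requesting a human review.

A sketch only: the tool choices (ruff, mypy) and paths are assumptions; swap in
whatever your team already uses.
"""
import subprocess
import sys

# Each entry: (human-readable label, command to run).
CHECKS = [
    ("formatting & lint", ["ruff", "check", "."]),
    ("static types", ["mypy", "src/"]),
]


def run_checks() -> int:
    """Run every check and return the number of failures."""
    failures = 0
    for label, cmd in CHECKS:
        print(f"==> {label}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failures += 1
    return failures


if __name__ == "__main__":
    failed = run_checks()
    if failed:
        print(f"{failed} mechanical check(s) failed; fix these before requesting review.")
        sys.exit(1)
    print("Mechanical checks passed; over to the humans for the interesting part.")
```

Whether these checks run locally, in a bot, or in CI matters less than the agreement that a human reviewer should never be the first to point out a formatting issue.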
Collective responsibility can sometimes be misinterpreted as "design by committee" or result in the Bystander Effect, where everyone assumes someone else is handling the critical path.
To prevent ambiguity, you must distinguish between collective accountability and individual ownership: the team owns the result, but every piece of work still needs a named driver who owns the momentum, as the sketch below illustrates.
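Here is a deliberately tiny sketch of that rule—the names and the two-sign-off threshold are invented for illustration, not a real tool. Every work item has exactly one driver, so nothing sits in the bystander gap, but it only counts as done once teammates have signed off, so the outcome stays collectively owned.

```python
"""Illustrative sketch: collective accountability, individual momentum.

The field names and the two-sign-off rule are invented for illustration.
"""
from dataclasses import dataclass, field


@dataclass
class WorkItem:
    title: str
    driver: str                               # exactly one person keeps it moving
    sign_offs: set[str] = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        if reviewer == self.driver:
            raise ValueError("The driver cannot be their own sign-off.")
        self.sign_offs.add(reviewer)

    @property
    def done(self) -> bool:
        # Collective accountability: the result needs the team, not just the driver.
        return len(self.sign_offs) >= 2


item = WorkItem(title="Migrate billing to the new API", driver="Priya")
item.approve("Sam")
item.approve("Alex")
assert item.done
```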

Building a culture of collective responsibility isn't a box you check on a quarterly goal sheet. It is a daily practice of trust.
It requires us to suppress the natural human instinct to protect our own reputation when things go wrong, and instead lean into the discomfort of admitting, "We missed this; how do we fix it?"
As we move deeper into an era where AI can generate syntax in milliseconds, the true value of a software engineer—and a leader—is shifting. We are no longer just defined by the code we produce, but by the environment we cultivate. The syntax will change, the frameworks will rot, and the tools will evolve, but the way your team feels when they open their laptops on a Monday morning? That sticks.
Ultimately, the best software isn't built by the smartest person in the room. It is built by the team that feels safe enough to ask "Why?", brave enough to say "I don't know," and supported enough to know that if they stumble, they won't fall alone.
That is the only architecture that truly scales.