What you are doing is wrong!
Most engineering teams are tracking effort and calling it progress. Story points, commit frequency, PR cycle time, Definition of Done checklist items completed — these are process metrics dressed up as productivity metrics.
They tell you how busy your team is. They tell you almost nothing about whether your team is building things that matter. Stop confusing the two.
If you’re a software engineering manager or founder, this should bother you. Because somewhere in your organization, someone is probably celebrating a sprint velocity number while a feature your customers actually need has been sitting in the backlog for eight months.
Here’s a more honest way to think about output.
The four things worth measuring
1. Validated Ideas
An idea is not output. A meeting about an idea is not output. A Confluence page describing an idea is definitely not output.
A validated idea means someone tested the concept against reality — a user interview, a prototype, a technical spike — and made a decision: commit to building it or kill it.
Both outcomes are good.
The only bad outcome is leaving it in a gray zone where it consumes attention without moving forward.
If your team can’t tell you how many ideas were validated this quarter, you don’t have a development process. You have a wishlist with standups.
2. Proof of Concepts
POCs exist to answer one question: can this actually work? They are not mini-projects. They are not dress rehearsals for the real build. They are throwaway experiments with a specific question attached.
Most teams skip them because they feel like extra work. This is backwards. The POC that fails in week one is the cheapest thing your team will ever build. The feature that fails after three months of development is one of the most expensive. Choose accordingly.
Track POCs completed per quarter. Track how long they take. Celebrate the ones that get killed and plan next steps for those that worked — that’s the system working.
3. MVPs
An MVP has users: internal, external, or both. That’s what makes it an MVP and not a prototype. Real users, real feedback, real behavior you didn’t predict.
The teams that struggle to ship MVPs are usually the ones that keep expanding scope until the “minimum” version is actually a full product. That’s not caution. That’s fear of feedback dressed up as thoroughness.
Measure the time from concept to first user, or from concept to release. If that number isn’t decreasing over time, something is wrong — either with scope discipline, decision-making, or both.
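Tracking this takes nothing more than two dates per MVP. A minimal sketch, assuming you record a concept date and a first-user date for each one (the records below are hypothetical):

```python
from datetime import date
from statistics import median

# Hypothetical MVP records: when the concept was validated,
# and when the first real user touched it.
mvps = [
    {"name": "billing-v2", "concept": date(2024, 1, 8), "first_user": date(2024, 3, 1)},
    {"name": "sso", "concept": date(2024, 2, 12), "first_user": date(2024, 3, 20)},
    {"name": "audit-log", "concept": date(2024, 4, 2), "first_user": date(2024, 5, 10)},
]

# Concept-to-first-user lead time in days for each MVP.
lead_times = [(m["first_user"] - m["concept"]).days for m in mvps]

# Median is more robust than the mean when one project drags on.
print(f"median concept-to-first-user: {median(lead_times)} days")
```

Plot that median quarter over quarter; the trend line matters far more than any single number.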
4. Features in Production
Count what shipped. Not what was “nearly done.” Not what’s in final review. What is live, in front of customers, being used.
Beyond the count, ask whether you’re shipping more ambitious work over time. A team that ships twenty small features a quarter forever isn’t growing. A team that gradually takes on more complex, higher-value work is. Know the difference.
The backlog test
Open your backlog. Find items that have been there for many months. These are ideas your team has silently agreed are too hard, too risky, or not worth it.
Now ask: is that still true? Or has it just been sitting there because no one wanted to make the call and everyone is already too busy?
A backlog full of old items isn’t a roadmap. It’s a graveyard with a product manager’s name on the headstone.
Items migrating from “someday” into active development is one of the clearest signs a team is actually improving, not just staying busy.
What to stop tracking
- Story points — you’re measuring how well your team estimates, not how much they deliver
- Commit frequency — activity is not output
- PR cycle time — useful for spotting bottlenecks, useless as a success metric
- Lines of code — this one should have died in 1995!
None of these answer the question that actually matters: did we ship something a customer can use?
The uncomfortable truth
If your team can’t clearly answer “what did we ship this quarter and who is using it,” you don’t have a measurement problem.
You have a clarity problem. The metrics are just the symptom.
Start simple. Count what shipped. Track how long each stage took. Be honest about what got stuck and why.
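Counting what shipped needs nothing fancier than a ship log and a few lines of script. A sketch with hypothetical data:

```python
from collections import Counter
from datetime import date

# Hypothetical ship log: (feature, date it went live for customers).
shipped = [
    ("export-csv", date(2024, 1, 15)),
    ("dark-mode", date(2024, 2, 20)),
    ("webhooks", date(2024, 4, 3)),
]

def quarter(d: date) -> str:
    """Map a date to a label like '2024-Q1'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Features shipped per quarter — the count the article argues for.
per_quarter = Counter(quarter(d) for _, d in shipped)
print(dict(per_quarter))  # e.g. {'2024-Q1': 2, '2024-Q2': 1}
```

A spreadsheet does the same job; the point is that the raw input is a list of things that actually reached customers, with dates, not a burndown chart.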
Do that for two quarters and you’ll know more about your team’s actual capacity than any velocity dashboard has ever told you.
