Consider This While Measuring Your Software Engineering Team
We often help companies assess their software engineering team and the processes they use.
These relationships usually start with a business-oriented leader (the CEO or department head) asking us, “How do I know if my engineering team is doing a good job? Should they be able to work faster, or is that unrealistic?”
In my experience, the desire to measure a software engineering team’s success usually comes from a well-intentioned place. If you don’t know what’s wrong, you can’t fix it. And many leaders come from disciplines that are significantly easier to measure, like sales. It makes sense that their natural inclination is to apply the same thinking to engineering teams.
However, applying good measures to engineering is difficult. It’s not that the discipline is “harder” than any other, but rather that the business outcomes (both positive and negative) are spread out over a much longer period.
In short, it all comes down to how long your feedback loop is. Let’s compare two example disciplines.
Generally speaking, the feedback loop for measuring your sales team is slightly longer than your average sales cycle.
You need to understand:
- What is entering the top of your funnel
- How it progresses through the funnel
- The success rate of closing
- Some level of impact afterward (did the sales team set up your delivery team for success?)
I’m sure this is a simplification (nothing like a CTO telling everyone about sales metrics!), but it’s a reasonable approximation.
I read a great Twitter thread a while back about a team affected by an issue that originated in 1974.
It’s not that the bug literally dated from 1974, but it came from a string of decisions over the years that started back then.
This perfectly describes the hurdle of measuring software engineering teams: the feedback loop is as long as any of your software is used.
Here’s an example of why the length of the feedback loop is so important.
Take These Scenarios:
- The team continues at a steady pace and keeps on top of tests and refactoring.
- The team stops everything and refactors for a quarter, then is able to move faster for a while after that.
- The team takes on significant technical debt to get lots of features done in a quarter, but then is very slow for a while after that (or forever, if the technical debt isn’t resolved).
Now consider looking at a basic metric like “Features Delivered” (which in reality isn’t an easy metric to quantify at all, but bear with me).
- If you look at “Features Delivered” over a time period of one quarter, scenario #1 will look okay, #2 will look terrible, and #3 will look great.
- If you look at “Features Delivered” over one year: scenario #1 looks okay, #2 looks slightly worse but similar, and #3 starts to look bad.
- Finally, look at “Features Delivered” over two years: scenario #1 remains okay, #2 starts looking similarly good, and #3 looks terrible (assuming nothing is done to correct the issues).
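To make the effect of the measurement window concrete, here’s a quick sketch with made-up quarterly numbers (purely illustrative, not data from any real team) that totals “Features Delivered” over each window:

```python
# Hypothetical features delivered per quarter over two years (8 quarters).
# The numbers are invented to illustrate the three scenarios above.
scenarios = {
    "1: steady pace":    [5, 5, 5, 5, 5, 5, 5, 5],  # keeps up with tests/refactoring
    "2: refactor first": [0, 6, 6, 7, 7, 7, 7, 7],  # one "lost" quarter, then faster
    "3: take on debt":   [9, 3, 3, 3, 2, 2, 2, 2],  # fast start, then bogged down
}

for name, quarters in scenarios.items():
    print(f"{name}: Q1={quarters[0]:>2}  "
          f"year 1={sum(quarters[:4]):>2}  "
          f"two years={sum(quarters):>2}")
```

With these numbers, scenario #3 wins the first quarter (9 vs. 5 vs. 0), the one-year totals are roughly comparable, and over two years scenario #3 falls far behind, which is exactly the inversion the window length produces.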
In summary: because the length of the feedback loop is effectively forever, you need to carefully consider both short-term impact and long-term impact in any measure.
With all that in mind, let’s talk about some things to consider when measuring success.
Measure Business Value
At the end of the day, everything software teams do should positively impact the business.
Though this is true, it’s also where many misunderstandings come into play: if you optimize your measures around delivering business value right now, then you’ll get yourself in trouble. Your teams will optimize their behavior around right now and never refactor or work on long-term improvements. Which brings me to:
Business Value Over Any Time Period
Try your best to design your measures so that long-term business value is captured as well.
It’s difficult to provide good examples for this, since, in my experience, the best measures are designed for each organization individually.
I mentioned earlier that your team will optimize their behavior around the measures you have in place. That’s because of Goodhart’s Law (which, in my opinion, should be mentioned in every article about measures).
In case you’re unfamiliar, Goodhart’s Law is, “When a measure becomes a target, it ceases to be a good measure.”
In other words, be careful what you wish for. Whatever measurement you target is exactly what your team will optimize their behavior for.
Measuring your engineering team on strict KPIs can be counterproductive if you’re not careful. It can prioritize arbitrary metrics over others, giving you an incomplete picture of the value that your team actually brings. It can also encourage engineers to focus on metrics that might not actually be useful to your business’s bottom line in the long term.
My advice is to work alongside your senior engineering leadership to identify metrics that track both the short-term and long-term business value that your engineering team is delivering.
This will allow you to have the measures that you need to monitor performance, while simultaneously giving your engineering team the flexibility they need to work on long-term items when it’s impactful to do so.
I realize identifying these metrics is easier said than done, so I’ll be diving into that topic in more detail in an upcoming article. To read that upon release, sign up for my email list here.