Data-driven discussions — What is the cost of doing business in agile?

Data is a core pillar that supports working in an agile way (or in Agile). Good data can lead to better decision-making; poor-quality data can lead to ill-informed decisions; and poorly gathered data can lead to anti-patterns. Through agile practices we aim to visualise our data so that everyone can see it and use it to better inform their decisions. We use this data when sizing the effort for a particular piece of work, and while we can’t know everything up front, we can estimate some of the things we will need to do. We size the work with a degree of caution and, when uncertain, overestimate to cover potential failures along the way. In short, we attempt to accommodate the unknowable.

Not everything is unknowable though; there are processes in every company that are known, like how much effort it takes to get code moved into production, right? Well… maybe not. Each team might have a completely different experience in moving their code, depending on the nature of their application and the level of automation they have put into it. For some it might be a week-long exercise, for others a trivial push of a button. So when a large organisation asks its teams to estimate how long it will take to deliver a feature to production, it might not be thrilled with such a large variance.

So how can they help improve things?

Gather more data!

One method to help gather this data is to track the effort as a Capacity Draw, capturing the work as part of a team’s backlog. The draw is estimated at the start of each iteration and compared to the actual at the end of it. The intent is to identify areas that belong to the category of “the cost of doing business”: things that need to be done to realise value but that do not deliver any value directly themselves. By gathering this data the organisation can see where the most time is being spent, prioritise initiatives to improve those areas, and use the data to continually gauge the progress of those initiatives.
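The bookkeeping behind this can be very light: record an estimate for each draw at planning, record the actual at the end of the iteration, and express both as a share of the team’s capacity. A minimal sketch, where the categories and hour figures are illustrative rather than from any real team:

```python
# Minimal sketch of capacity-draw bookkeeping for one iteration.
# The categories and hour figures are illustrative, not from a real team.

def draw_percentage(draw_hours: float, capacity_hours: float) -> float:
    """Share of the team's capacity consumed by a non-value-add draw."""
    return round(100 * draw_hours / capacity_hours, 1)

iteration = {
    "capacity_hours": 300,  # whole-team hours available this iteration
    "draws": {
        # estimate captured at planning, actual captured at iteration end
        "move_to_production": {"estimate": 40, "actual": 55},
        "production_support": {"estimate": 20, "actual": 15},
    },
}

for name, d in iteration["draws"].items():
    est = draw_percentage(d["estimate"], iteration["capacity_hours"])
    act = draw_percentage(d["actual"], iteration["capacity_hours"])
    print(f"{name}: estimated {est}% of capacity, actual {act}%")
```

The percentage view is what makes the numbers comparable across teams of different sizes, which matters once the organisation starts looking at where the most time is going.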

Teams need to be very mindful when using this approach, as the data is only useful when you know how to use it. The methods below are crutches to help teams find a balance in their non-value-add activities and, like all crutches, should be set aside over time. The key goal of gathering the data has to be driving conversations on how to improve things. If it is simply a bookkeeping exercise it will be of no use, and in the worst cases it may drive anti-patterns.

So how can it be used correctly?

Hand-off costs

One area that can almost universally be improved is anything involving a hand-off, and one of the biggest for software companies is a move to production (MtP). As mentioned above, an MtP for Team A might take a week, compared to no time at all for Team B. Gathering the data for this capacity draw is relatively straightforward: capture the time each team spends preparing, executing and validating the MtP.

The conversation it drives, though, could be quite interesting. Team A might resent the additional overhead of capturing the data, as their process is already incredibly manual, which is why it takes a week. Team B can simply skip capturing the data, as the effort is always zero; it’s a push button. This is where sitting the two teams together and having a conversation is what is valuable. Team B is unlikely to have stumbled onto a magical push button; they were once in Team A’s shoes, with a week-long MtP. They got to where they are through a series of purposeful actions. If Team A is lucky, Team B will have captured those actions and be willing to share them.

Team A will now happily track the effort against their MtPs, as they have a goal to work towards: a push-button MtP. With each iteration and each MtP, they can see how they are progressing towards where Team B are. They can use the data in the capacity draw as a yardstick for progress, enabling them to ask for assistance where they are not advancing.
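One way to turn the yardstick into a concrete prompt for that conversation is to look at the draw’s trend across iterations and flag when it has stopped falling. A sketch of that idea, with invented iteration figures:

```python
# Sketch: use the capacity-draw history as a yardstick for MtP progress.
# The iteration figures below are invented for illustration.

def stalled(draw_hours: list, window: int = 3) -> bool:
    """True if the draw has not dropped below its pre-window level
    at any point in the last `window` iterations."""
    if len(draw_hours) <= window:
        return False  # not enough history to judge yet
    recent = draw_hours[-window:]
    baseline = draw_hours[-window - 1]
    return min(recent) >= baseline

# Hours spent on the MtP draw in successive iterations.
improving = [40, 32, 25, 18]           # steady reduction, keep going
plateaued = [40, 32, 25, 25, 26, 25]   # flat-lined: time to ask for help

print(stalled(improving))   # False
print(stalled(plateaued))   # True
```

The window size is a judgment call; the point is only that a plateau in the data is the trigger for a conversation, not a verdict on the team.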

The same incentive can be provided in any other area of hand-off, like Development to QA. Teams can use the data to drive towards a Continuous Integration model, highlighting the areas of high rework that result from the handover. Organisations can also use it to drive towards a DevOps strategy, gathering data on the time spent handing off from Development to Operations and back again.

Production Support

Another common data point for most companies is how much effort is required to keep what is in production alive and healthy. Here Team A have the advantage: as their MtP process takes so long, they have a zero-tolerance policy on defects, so everything that goes out is double- and triple-checked. While Team B’s push-button process made them quicker, they still have many issues once their code is out in production, and so spend much of their effort pushing out fixes.

Again, gathering the data and having a conversation can drive positive change. Team A can walk through their policies and how they arrived at them. Team B can then use the capacity draws to gauge their progress in implementing those policies through successive iterations, seeing a gradual reduction in the effort invested in fixing production issues and ultimately building towards a process where they don’t allow defects to escape.


Maintenance

This is one of the trickiest areas to gather data on, as it requires a high degree of discipline; technically, anything could be considered maintenance. It is very easy to say “I need to add this code to maintain the application”. Do you really? It sounds like new functionality! Couldn’t it go on the backlog and get prioritised with everything else? This is where teams need to be vigilant against potential gaming of the system. If it is easier to get maintenance approved than new functionality, teams can load up on “maintenance” work.

Anything that goes into this category needs to be true maintenance of an existing code base: something that is devoid of any value but still requires effort. A good rule of thumb is how loudly the team groans when the code in question is mentioned. If it is something that requires rock, paper, scissors, it is something the team needs to have a conversation about. That conversation should focus on how much engineering effort is being invested versus the business return from it. Given enough scrutiny, it may make more economic sense to retire the application; the data gathered by the teams can inform that decision, provided it is accurate and drives the right conversation.
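To make the economics of that retirement conversation concrete, the gathered draw data can be set against the application’s return. A toy sketch, where the hourly rate, iteration cadence and all figures are assumptions rather than real data:

```python
# Toy economics for the "retire it?" conversation. The hourly rate,
# iteration cadence and figures are all assumptions, not real data.

HOURLY_RATE = 75          # assumed blended engineering cost per hour
ITERATIONS_PER_YEAR = 26  # assumed two-week iterations

def annual_maintenance_cost(draw_hours_per_iteration: float) -> float:
    """Annualised cost of the maintenance capacity draw."""
    return draw_hours_per_iteration * ITERATIONS_PER_YEAR * HOURLY_RATE

def worth_keeping(draw_hours_per_iteration: float,
                  annual_business_return: float) -> bool:
    """False suggests the retirement conversation is worth having."""
    return annual_business_return > annual_maintenance_cost(draw_hours_per_iteration)

# A team drawing 20 hours per iteration on pure maintenance:
print(annual_maintenance_cost(20))  # 39000
print(worth_keeping(20, 50_000))    # return exceeds the draw's cost
print(worth_keeping(20, 30_000))    # the draw costs more than it returns
```

A spreadsheet does the same job; the value is not in the arithmetic but in having an agreed, data-backed number to open the conversation with.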


Training

Training is often seen as a cost of doing business; in many cases organisations need to up-skill their employees in the latest tools and technologies. It is tempting to capture this effort as a Capacity Draw, but again we need to be mindful of what we want to do with the data. If the intent is to justify to management that “this is what it costs to keep our employees skilled”, then it is not an efficient use of the teams’ time to track it. That can be done through the transcripts and bills from the training sessions.

A more useful conversation this could drive is where training is available to the team but cannot be applied to a feature they are working on. In an optimal situation, when picking up a completely new piece of work, a team will factor in an allotment of time for learning the new technologies involved, ideally as rapid prototypes that give them practical knowledge. In sub-optimal cases there might be classroom training available six months ahead of the feature start date. This is far from ideal, as the team can’t directly apply what they have learned, and the knowledge will have atrophied by the time they start the work.

Capturing this lag as “valueless” training enables a conversation about how the training organised by the company is not optimised for success. The team essentially sees attending it as a draw on their time. Had it been better positioned to align with the feature work, it would have come out as value to the customers in new feature development.

Remember the basics

Hopefully it is obvious that the world is not as black and white as Team A/B. Most teams will fall on a spectrum between the ideal and the not-so-ideal. In each of the conversations on these topics, everyone needs to be open to improving, using impartial data to drive meaningful discussions. This needs to be understood by all participants before starting on this journey. Moving towards minimum capacity draws and maximum value output is the idea, but it is not necessarily the end goal; in some cases, acknowledging the need to improve and acting on it is more important.

As with all agile development practices, we are simply uncovering better ways of developing software. That said, the one absolute rule here is to never make a decision on the data without a conversation. The data alone cannot tell the whole story; that can only be achieved through people and interactions.




Multidimensional Engineer working in HPE with an interest in many things.

Alan Duggan

