I’ve been mulling this post over for a while but was spurred into actual writing this week by this Twitter thread from Joel Chippindale on the issues with “technical debt” as a term. My response, as you can see in the replies, was that I’d found “technical debt” to be a very useful label for raising the issue with clients*, but, like many similar terms, it is really important to get specific about what you mean early on. Always, always, check your assumptions and make sure all stakeholders have a shared understanding of what is really going on.
To take this example, “technical debt” could potentially cover a wide variety of issues. Apparently it was conceived as a metaphor to explain to non-technical stakeholders why they would need to allocate budget for refactoring code later down the line. In other words, because the original code was, intentionally or not, flawed (perhaps because of a choice to prioritise quick development over “perfect” code, a choice for which there may be perfectly valid reasons). However, as this article describes, its usage has gone beyond this meaning to include other areas of technical development that might need redoing, such as choices of CMS or programming language, information architecture, or even documentation. The article also outlines the various ways in which people have tried to contain the definition, but as someone working directly on digital projects with clients, this kind of exercise is of little value to me**.
Even the tightest definition of “technical debt”, whatever that might be, is still too vague to have a useful conversation regarding what to do about it. Instead, there needs to be a detailed examination of the nature of the problem as it presents in this particular case, and the implications inherent in maintaining the current situation vs making changes.
Everybody involved in making decisions that relate to this has to understand how you are using the term in this instance.
Then you can go back to using the term “technical debt” within the project team or organisation as shorthand for all the specifics you’ve identified.
If this seems obvious, I can assure you that it doesn’t always appear to be on the ground. Too often, I have seen terminology relating to digital development being bandied about as if there can be no argument about its meaning. Take “Agile” (Please! Take Agile!) (I jest…). I’m not going to get into a debate about what “Agile” should mean, how it is meant to relate to Scrum or other methodologies, or what proper Agile looks like. Because, honestly, I don’t care. I care about picking the right methodology for the project, and that is based on a variety of factors including existing practices within the organisation, resources, and the type of product under development.
What I see in practice is that very few people are being purist about this stuff anyway. Even “Agile” web agencies might be picking a bit of Scrum, a bit of Kanban, a bit of Lean, a bit… etc. And within that they are likely doing some of the rituals slightly differently, ascribing them different functions and purposes, or leaving out some altogether. They can be heavily influenced by the tools used to support them, as well as the individuals running them. Making an assumption about what your project team means by “Agile” can easily cause problems down the line as different expectations aren’t met.
Likewise, role definitions are slippery. Throughout my digital career, I’ve been called a web designer, web manager, multimedia editor (ugh), multimedia producer, digital producer, product manager and product owner. Despite the different names, in each of those roles I was often doing very much the same thing. The changing terminology has sometimes left me feeling a bit wrong-footed, however, like I was the one who somehow was being left behind; the only one who didn’t understand the fine distinctions between each role. But now I’m realising that what is actually going on here is that nobody agrees precisely what each of these jobs should be, and often they need to adapt to the particular circumstances of a project anyway.
Let’s not even get started on what “digital” really means (see also, what is “art”, what is a “game”, etc).
So how do we manage this? Do we need to draw up a great Charter of Digital Definitions (and then spend a year arguing about what a “digital” definition even is)? I would suggest that no, this is not a good use of anybody’s time. The horse has bolted: these terms are always going to mean different things to different people, and will always be fluid to some degree.
The strategy has to be to get specific.
At the beginning of a project, check your assumptions, and be clear about the details. You need someone to oversee delivery of a product? Get specific about the details of the role and then choose the best title that fits, but always make sure that the details don’t get lost and that everyone involved is clear about them.
Have you hired a web development agency that says it is “agile”? Get specific about what that looks like for them, and make sure it is genuinely a good fit for your needs (such as demands on client time for participation in regular rituals, or budgeting parameters, or project objectives).
Is someone within the organisation saying that you need to put money towards resolving your “technical debt”? Get specific about what the problem is, what will happen if you don’t fix it, and what the costs and implications of fixing it are.
And for all of these, make sure all the stakeholders involved are clear***, not just the immediate development team. (Probably, this involves clear documentation and working in the open so that these decisions are transparent and shared.)
After that process is complete, you can merrily go back to using whichever title or term it is in discussion, safe in the knowledge that you aren’t going to have any fatally crossed wires later down the line (“Oh but I thought the product owner was looking after that! That’s what I was taught they were meant to do!” “Oh, by technical debt I thought you meant something really trivial/really huge and I failed to allocate budget/was really unnecessarily worried” etc).
To avoid using this terminology altogether because it fails to capture situational nuance is often throwing the baby out with the bathwater****. It is very useful to have a shared label for discussion, for awareness raising and for drawing related issues together, as long as any assumptions involved are exposed and dealt with as early as possible.
Please do share feedback/comments/examples and counter-examples!
*As a freelance consultant, I tend to work with small organisations. In these scenarios, terminology can get especially messy, as small organisations are often less experienced with running digital projects and these concepts might be quite new. It is more plausible to assume shared knowledge if you are working at, say, Microsoft and you are talking to someone else also at Microsoft (I am guessing, I have never worked for Microsoft). Still, I bet miscommunications do sometimes happen even at Microsoft, and it wouldn’t hurt even there for people to get a bit specific about the jargon they use, if only to make sure any newbies are up to speed and included.
I would like the above to be reassuring for those in smaller organisations: if this sounds like you, you may be finding these terms confusing because they actually aren’t very well defined, or because people are using them inconsistently. Instead of pointlessly battling that fact, we just have to work with it.
**I found the attempts to break down the term into different categories more useful, however, as part of the process of getting specific.
***It is quite likely that this needs to be some sort of shared process of agreement, rather than one person dictating terms, lest I give the impression otherwise.
****I am talking about technical terminology, not trying to make a wider point about the value or otherwise of various labels. Obviously if a term has offensive connotations or is really unhelpfully problematic, let’s junk it for something better.