Here's a story I've seen quite often in my career: a software product grows quickly during its initial months of development, then progress slows down significantly until a seemingly trivial change requires days of work and/or has a high chance of introducing regression bugs, and none of the hard work can be reused in other projects because the code is an undocumented mess dependent on deprecated technology.
In this case, there are only two real solutions to prevent abandoning the product's development altogether:
- Trash the code and start over from scratch
- Progressively refactor the code, which would take significantly more work than the previous solution, and that's not counting the additional features that must still be implemented on the old code during the transition period
The work required to apply one of these solutions, which could have been avoided if everyone had done their job correctly to begin with, is called technical debt.
But how does technical debt reach this point in the first place?
At first glance, it sounds like an issue with the developers. It's easy to lay blame on programmers and quality assurance (QA) testers for being lazy and/or incompetent.
While there may be some truth to that, the more likely primary cause is not the developers themselves, but their managers. Without realizing it, these managers often make decisions that boost productivity in the short term but decrease it in the long term, because the technical nature of the problem keeps them from grasping its severity, if they perceive it at all.
Let's review some of the most common anti-patterns I've witnessed in this regard to understand what I mean by that, along with some suggestions to help managers prevent them.
IMPORTANT: The following prevention suggestions do not apply to all development teams. They are only general recommendations based on my personal experience. Other solutions, or no solution at all, may be better suited in some cases, so make sure to consider the context of your team before applying them.
Pressure for fast deliveries

Small businesses need to secure clients as fast as possible to stay competitive and afloat, which encourages fast product deliveries. This philosophy may persist even in medium and large businesses, especially if the profitability risk is high.
When managers put constant pressure on developers to release a new feature or bug fix quickly, for example by imposing deadlines or quotas, it is almost certain that programmers will cut corners to speed up development, and that testers will skip more obscure test cases.
Worse, if expectations are unrealistic, it will burn out and/or demotivate developers and have the opposite effect from the one intended, or in some cases encourage them to falsify proof of their work, e.g., inflating story points or filing false bug reports.
To prevent this, first define a list of tasks that must be fulfilled for any piece of development, and ensure these tasks are completed before it can be shipped. This list, often called a Definition of Done, should contain items that enforce high development standards, such as documentation, reusable code, security guidelines, and verification of work.
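A Definition of Done like this can even be enforced mechanically as a checklist gate. Here is a minimal sketch in Python; the checklist items and function names are invented examples, not a prescribed list:

```python
# Hypothetical sketch: a Definition of Done expressed as a list of named
# checks that must all be completed before a work item can be shipped.

DEFINITION_OF_DONE = [
    "code reviewed",
    "unit tests added",
    "documentation updated",
    "security checklist verified",
]

def missing_items(completed_checks):
    """Return the DoD items still missing; an empty list means ready to ship."""
    return [item for item in DEFINITION_OF_DONE if item not in completed_checks]

def can_ship(completed_checks):
    """A work item ships only once every DoD item is completed."""
    return not missing_items(completed_checks)

still_missing = missing_items({"code reviewed", "unit tests added"})
# 'documentation updated' and 'security checklist verified' remain open
```

In practice this kind of gate usually lives in an issue tracker or a CI pipeline rather than in application code; the point is that the list is explicit and checked, not remembered.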
To ensure developers aren't slacking off, consider having the teams manage themselves through agile software development and measuring their overall output velocity over time. Also, celebrate their deliveries for positive reinforcement.
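Measuring velocity over time can be as simple as comparing a recent rolling average against the long-term average; a sustained drop may hint at accumulating technical debt. A minimal sketch, with made-up sprint numbers:

```python
# Hypothetical sketch: track delivered story points per sprint and compare
# a recent rolling average against the overall average to spot slowdowns.

def rolling_average(points, window):
    """Average of the last `window` sprints."""
    recent = points[-window:]
    return sum(recent) / len(recent)

sprint_points = [30, 32, 31, 28, 25, 22]  # made-up sprint outputs

recent_velocity = rolling_average(sprint_points, 3)         # (28 + 25 + 22) / 3 = 25.0
overall_velocity = sum(sprint_points) / len(sprint_points)  # 168 / 6 = 28.0
slowing_down = recent_velocity < overall_velocity
```

Note that velocity is a trend indicator for the team's own use, not a per-developer performance metric; using it punitively invites the story-point inflation described above.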
Prioritizing short-term investments

Many organizations prioritize investments that should bring immediate revenue over long-term investments that would bring more revenue overall, in an attempt to minimize risk.
This philosophy directly causes technical debt when applied to the development cycle itself, since preventing technical debt is a long-term investment.
In addition, feature prioritization may also be sub-optimal, reducing the growth of the organization's assets available for future investments, which increases the risk of the previous anti-pattern occurring and/or perpetuates it.
Instead of rejecting ideas due to a lack of resources to implement them, ask for a clear assessment of the impacts for and against implementation so you can make an informed decision, and don't be afraid of reasonable risks. If liquidity is an issue, consider looking for additional funding.
Relying on common sense

Writing detailed requirements takes a long time, so it seems natural to skip writing those that should be obvious to any sane person through common sense.
It is a fallacy to believe that what is obvious to one person is also obvious to another, especially if they have completely different points of view on the matter.
If requirements are not documented, there's no reason for a programmer to actually implement them, since doing so would be more work for them. Worse, it may lead to developers incorrectly guessing what those hidden requirements are, resulting in unintentional scope creep and/or counter-productive work that must be undone later.
Ensure that the Definition of Done contains all common requirements for any new work. For individual requests, implement practices that encourage communication between team members, so that everyone can discuss their interpretations of the requirements until there is common agreement. Also, always have a resource available to answer questions about requirements throughout development.
Custom work for every client

Some organizations specialize in delivering custom work that satisfies all of a client's requirements, closing the sale and growing their reputation at the same time.
After repeatedly implementing features designed for individual clients, potential or existing, that have little value for anyone else, the product eventually ends up with a pile of rarely-used features that require a lot of maintenance work and may not even be profitable because of it.
Analyze the business value for other potential and existing clients before implementing a new feature, and include the implementation of analytics in the Definition of Done so the feature's real value can be measured.
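As a rough illustration of the analytics side, usage events can be aggregated per feature to surface rarely-used ones. The event names and threshold below are invented for illustration:

```python
# Hypothetical sketch: aggregate raw usage events per feature so that
# rarely-used features become visible and can be reviewed for removal.
from collections import Counter

events = [  # made-up analytics events, one feature name per use
    "export_pdf", "search", "search", "export_pdf", "search",
    "legacy_import", "search",
]

usage = Counter(events)

def rarely_used(usage_counts, threshold):
    """Features used fewer than `threshold` times."""
    return sorted(f for f, n in usage_counts.items() if n < threshold)

candidates = rarely_used(usage, 2)  # features to question or deprecate
```

Real products would aggregate over a meaningful time window and user population, but even a crude count like this turns "we think nobody uses it" into something measurable.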
Configuration toggles

If a potential feature sounds like a good idea but may end up displeasing existing customers, an easy solution is to implement configuration settings that toggle between various modes of operation. This is an especially tempting solution when the impact is unknown.
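Such a toggle is trivial to add, which is part of its appeal. A minimal sketch, with a hypothetical setting name; the default preserves the old behavior and the new mode is opt-in:

```python
# Hypothetical sketch: a configuration setting that toggles between two
# modes of operation so existing customers keep the behavior they know.

def format_price(amount, config):
    """Render a price; the 'currency_position' setting selects the mode."""
    if config.get("currency_position") == "suffix":
        return f"{amount:.2f} $"   # new mode for the requesting client
    return f"${amount:.2f}"        # old mode, kept as the default

old_style = format_price(9.5, {})
new_style = format_price(9.5, {"currency_position": "suffix"})
```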
Every time a configuration setting is added to the product, it multiplies the number of potential test cases by its number of options, so the number of test cases grows exponentially. For example, if a module can be configured through 10 on/off switches, that's a total of 2^10 = 1024 possible combinations that should be considered for testing. Now also consider that you may need to execute these tests regularly to prevent regression bugs, and it quickly becomes apparent that no realistic team of testers can do the job.
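The arithmetic above is easy to verify by actually enumerating the combinations:

```python
# Each independent on/off switch doubles the number of distinct
# configurations, so n binary switches yield 2**n combinations
# (a three-valued option would triple the count instead).
from itertools import product

switches = 10
combinations = list(product([False, True], repeat=switches))
total = len(combinations)  # 2 ** 10 = 1024
```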
Automate the execution of test cases, including unit tests and front-end tests, and run them constantly during development. Include test automation in the Definition of Done. Also, consider removing rarely used or unnecessary configuration options based on analytics.
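The same enumeration that makes manual testing hopeless is cheap for a machine. A minimal sketch of running one check across every configuration combination; the `feature_enabled` rule is a toy example, and real projects would express this with a test framework such as pytest's parametrization:

```python
# Hypothetical sketch: generate every combination of two binary settings
# and run the same check against each, instead of testing by hand.
from itertools import product

def feature_enabled(config):
    """Toy logic under test: the feature is on unless the kill switch is set."""
    return config["enabled"] and not config["kill_switch"]

def run_all_configurations():
    """Return the configurations whose behavior deviates from the spec."""
    failures = []
    for enabled, kill_switch in product([False, True], repeat=2):
        config = {"enabled": enabled, "kill_switch": kill_switch}
        expected = enabled and not kill_switch
        if feature_enabled(config) != expected:
            failures.append(config)
    return failures

regressions = run_all_configurations()  # empty when all combinations pass
```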
QA as last defense
The traditional waterfall development model puts the verification phase of a product after the implementation phase. The reasoning is that this works under the model as long as proper quality guidelines are included as requirements at the beginning of the development cycle.
Many development teams continue this tradition even when the waterfall model isn't being used, or shouldn't be.
First, many organizations do not have proper quality guidelines in place, or, when they do, programmers tend to forget about them because they are not among their main priorities.
Second, because the only people responsible for product quality are the testers at the very end of the development cycle, errors in requirements and design issues can only be caught after designers and programmers have already wasted their time.
Lastly, testers are more likely to miss critical information needed to define important test cases because they were not involved earlier in the development process, for example if a potential use case was never documented, or if programmers implemented two different solutions to the same problem with an invisible trigger to work around a technical limitation.
Make all developers responsible for the product's quality, not just the testers, and throughout the entire development cycle. For example, consider adopting the Extreme Programming development methodology. Implement quality guidelines in the Definition of Done where applicable, in collaboration with programming and security experts.
Resistance to change
If something has worked in the past, it's natural to believe that it should be continued or repeated for future success, and to doubt other approaches. In addition, human habits are hard to break in general.
When you attempt to apply some of the previous suggestions, long-time developers (or managers) may have a hard time adapting to the changes, if they don't refuse them altogether. If they have a lot of influence over the rest of the team, the whole team may fail to adapt.
Make sure to communicate to all employees the importance of preventing technical debt and why these practices prevent it. Be open to their concerns and suggestions. If their concerns are not justified and all else fails, consider firing problematic employees, even if they were once your best assets, as they probably aren't anymore.
Obviously, this last suggestion should only be used as a last resort. While it would send a strong message that you are committed to change, it should not become a means to that end, given its extreme nature.