My main focus this year revolves around Software Maintainability. In one of my previous blogs I have already written about maintainability in the cloud.
At the moment we are trying to get everyone on board, and we have started training people on the concepts and on how to get your team to write maintainable software.
Often the discussion rushes straight into tooling, and that is where you get stuck: one team wants to use tool X, another team tool Y.
Well, guess what? No single tool, or even set of tools, will cover everything you want to check and track. There is always a tool gap.
Does this mean tools are useless? Of course not; tools can absolutely help you out. Think about Static Code Analysis and Software Composition Analysis. I will blog later about some of these tools and how to incorporate them into your automated pipelines.
Definition of Done (DoD)
What does the Definition of Done have to do with this? In one of my former teams, we ‘baked’ some items into our DoD. I’m not a big fan of Scrum, but think of it as a list of items you should adhere to before marking a piece of work as done. Regardless of which framework or flow you use, something like a DoD will be in there.
These are examples. Please note that they can differ per team, technology, and stack. This is my opinion and experience, not a golden rule. Take the time to find out what will help your team, but do not blindly enable some tools and think ‘I’m done!’.
- Code Coverage
Yes, I’m entering a world of pain here. What if the tests are no good? What if someone is just testing getters and setters?
Let us assume your team is professional and mature, and you peer-review each other’s work. This ensures your tests are sane and of good quality.
If that is the case, you could argue that a rule like ‘code coverage must stay the same or increase’ will give you better insight into your maintainability. Testability is one of the key metrics, and code coverage can help you get a feel for how testable your code is.
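The ‘same or higher’ rule above can be sketched as a tiny gate function. This is a minimal illustration, not the API of any specific coverage tool: the function name and the tolerance parameter are my own invention, and the percentages would come from whatever coverage reporter your pipeline already runs.

```python
# Hypothetical DoD gate: coverage on the target branch may never drop.
# The numbers would come from your coverage tool's report; the function
# name and tolerance parameter are illustrative, not from any real tool.

def coverage_gate(baseline_pct: float, new_pct: float, tolerance: float = 0.0) -> bool:
    """Return True when the pull request keeps coverage the same or higher.

    A small tolerance (e.g. 0.1) allows a tiny dip, so that deleting
    well-covered dead code does not block the merge.
    """
    return new_pct >= baseline_pct - tolerance

# A PR that raises coverage from 78.2% to 79.0% passes the gate:
assert coverage_gate(78.2, 79.0)
# A PR that drops coverage to 75.0% fails; fix it before calling it done:
assert not coverage_gate(78.2, 75.0)
```

Most CI systems let you fail the build on the boolean this returns, which is exactly the ‘baked into the DoD’ behaviour you want.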
- Code Smells
Using tools, you can automate measuring the ‘cleanliness’ of your code. Many tools help you figure out whether developers adhere to your standards, such as unit length or best practices.
Setting a bar in the DoD of a x-percentage, or grade (some tools use grades from A to F) will help your team understand what the baseline is.
Does your pull request bump the grade from A to B? Then you should fix that!
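As a sketch of how such an A-to-F grade gate could work: map a smell density to a letter grade and refuse any pull request that lowers it. The thresholds below are made up for illustration; real tools define their own grading rules, so tune these to your team’s baseline.

```python
# Hypothetical sketch: translate code-smell density into an A-F grade
# (some tools grade this way) and block a PR that lowers the grade.
# The thresholds are invented for illustration only.

GRADES = "ABCDEF"

def grade(smells_per_kloc: float) -> str:
    """Map smells per 1000 lines of code to a letter grade."""
    thresholds = [5, 10, 20, 40, 80]  # illustrative upper bounds per grade
    for letter, bound in zip(GRADES, thresholds):
        if smells_per_kloc <= bound:
            return letter
    return "F"

def grade_gate(baseline: str, candidate: str) -> bool:
    """Pass only when the candidate grade is at least as good as the baseline."""
    return GRADES.index(candidate) <= GRADES.index(baseline)

assert grade(4) == "A"
assert grade(15) == "C"
assert grade_gate("B", "A")      # improvement: fine
assert not grade_gate("A", "B")  # A dropped to B: fix it before merging
```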
- Technical Debt
The same goes for technical debt. Measuring it in newly written code could help your team out: make it part of your DoD so you do not silently enlarge the debt.
This will also help you focus on getting things fixed, and make the case to your stakeholders for dedicating development time to it.
- Library and Package updates
Using open-source (or really any third-party) packages? Make sure you enable some form of software composition analysis.
For example, if any of your packages needs updating due to a high-severity CVE, make that a big red button on your dashboard. State in your DoD that you will not allow high-severity vulnerabilities in packages, so you get to fix them ASAP.
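The ‘big red button’ check above can be sketched as a severity filter over a vulnerability report. The report format below is invented; every real SCA tool has its own schema, so treat this only as the shape of the gate.

```python
# Hypothetical sketch: fail the build when software composition analysis
# reports a high or critical vulnerability in any dependency.
# The report format is invented; real SCA tools each have their own schema.

BLOCKING = {"HIGH", "CRITICAL"}

def blocking_vulnerabilities(report: list[dict]) -> list[str]:
    """Return the package names carrying a build-blocking severity."""
    return sorted(v["package"] for v in report
                  if v["severity"].upper() in BLOCKING)

report = [
    {"package": "left-pad", "severity": "low"},
    {"package": "log4j-core", "severity": "critical"},
    {"package": "requests", "severity": "medium"},
]

offenders = blocking_vulnerabilities(report)
assert offenders == ["log4j-core"]  # this is your big red button
```

In a pipeline you would exit non-zero whenever the offender list is non-empty, which makes the DoD rule enforceable rather than aspirational.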
It is my opinion, and my experience, that if you dedicate yourself to setting the bar high, you will help yourself, your team, and, more importantly, the product you are building.
Instead of waiting for some audit to come along, baking this into your process will help you catch issues sooner, making your software more maintainable and more secure!
Thoughts? Please reach out on this page or Twitter!