There has been plenty of discussion about Generative AI and the pros and cons this new chapter of technology brings. I want to share how Generative AI can assist QA test engineers in their everyday work: by using Large Language Models to review your code and so help you write more sustainable code. In short, you can take advantage of AI code reviewers to achieve sustainable code.
As I mentioned in my blog, “A way to sustainability through a clean code”, you can make your code more sustainable by shifting focus when writing it: concentrate on clean, effective code that is easy to maintain and refactor. This can be enforced through high-quality code reviews, and when done right, AI can perform this task effectively.
Enabling quality code reviewers
To enable Generative AI to give quality code reviews, it is important to provide instructions so the feedback follows best practices. You also need to define the experience of the reviewer and how you want the comments to suggest improvements. When reading the AI’s feedback, it is important to stay critical. Giving the AI feedback on what needs to be improved in its review helps it improve. This starts a learning spiral in which the reviewer continuously improves over time, and it also helps you find where to tweak the instructions so the code review gradually delivers more value.
Where to begin?
When you begin using Generative AI in code review, a good starting point is to define a set of rules. The rules should capture your way of working when writing and reviewing code. I recommend starting with a short, concise list of rules: begin with your most important programming principles and gradually build the rule set over time through experimentation.
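As a concrete illustration, such a rule set can be kept as a plain list and assembled into a reusable instruction block for the model. This is a minimal sketch; the five rules below are my own examples of common clean-code principles, not a prescribed list.

```python
# Illustrative rule set for an AI code reviewer. The rules themselves are
# example principles; replace them with your team's own way of working.
REVIEW_RULES = [
    "Prefer small, single-purpose functions that are easy to refactor.",
    "Flag duplicated logic and suggest extracting it.",
    "Point out unclear names and suggest descriptive alternatives.",
    "Check that error cases are handled explicitly.",
    "Comment only on the code shown; do not invent missing context.",
]

def build_review_prompt(rules: list[str]) -> str:
    """Combine the rule list into one numbered instruction block."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, start=1))
    return "You are reviewing code. Follow these rules when commenting:\n" + numbered

print(build_review_prompt(REVIEW_RULES))
```

Keeping the rules as data rather than a hand-written prompt makes it easy to add, reorder, or remove rules as the experimentation progresses.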
The experience of the reviewer is also relevant. You can instruct the Generative AI to adopt different work experience, and there are benefits to testing feedback from both an experienced programmer and someone with experience in other fields. The given experience affects the code review both in how the code is analyzed and in how the comments and feedback are formulated. In this part of the instructions, it is especially important to experiment; experimentation will show you what gives you and your team the most value. It is possible, and advisable, to personalize this in order to maximize the benefit for each person on a team.
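One way to experiment with reviewer experience is to keep the personas separate from the shared rules and prefix whichever persona you want to try. The persona texts below are hypothetical examples of such “experience” instructions, not official wording.

```python
# Hypothetical reviewer personas. Each one changes the focus and tone of the
# review while the underlying rules stay the same.
PERSONAS = {
    "senior_dev": (
        "You are a senior developer with 15 years of experience. "
        "Focus on maintainability and the long-term cost of the design."
    ),
    "security_tester": (
        "You are a security-focused test engineer. "
        "Focus on input validation and unsafe assumptions."
    ),
}

def persona_prompt(persona_key: str, rules_block: str) -> str:
    """Prefix the shared rule block with a persona instruction."""
    return PERSONAS[persona_key] + "\n\n" + rules_block
```

Swapping the key is then enough to compare how, say, a senior developer and a security tester comment on the same change, which makes the experimentation the paragraph describes cheap to run.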
How you fine-tune the presentation of the feedback has the potential to yield the greatest value from the code reviews. You could give the Generative AI only the first two parts of the instruction and let it modify the code directly, which you would then have to review yourself. However, that approach does not give the same learning benefits as reading comments, making the improvements yourself, and gaining insights into different ways of writing code.
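The presentation instruction can be made explicit by telling the model to comment rather than rewrite, and by assembling it into a chat-style request. This is a sketch under my own assumptions: the wording of the format instruction and the helper name are illustrative, and the message structure follows the common system/user chat convention rather than any specific vendor's API.

```python
# Illustrative format instruction: ask for comments and hints, not rewritten
# code, so the author keeps the learning benefit of making the change.
FEEDBACK_FORMAT = (
    "Do not rewrite the code. Respond with a list of review comments. "
    "For each comment give the line or function it concerns, what to "
    "improve, why it matters for maintainability, and a hint rather than "
    "a full solution, so the author learns by making the change."
)

def review_request(code_snippet: str) -> list[dict[str, str]]:
    """Assemble chat-style messages; this shape fits most chat-model APIs."""
    return [
        {"role": "system", "content": FEEDBACK_FORMAT},
        {"role": "user", "content": f"Review this code:\n{code_snippet}"},
    ]
```

Putting the “comment, don’t rewrite” rule in the system message is the design choice that preserves the learning loop described above.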
If you are interested in utilizing and experimenting with code reviews using Generative AI, I would appreciate hearing from you: what are your experiences, and how do you plan to leverage this new assistance in your approach?
If you are planning to use Generative AI to review your code, there are several pitfalls to be aware of. Do not blindly trust the AI’s reviews; stay critical of the feedback you get, because, like humans, AI can make mistakes. Make sure you understand what the review is suggesting, and always validate it. I would not rely fully on AI code reviews; use them as a supplement to regular peer reviews. Try using AI for initial reviews during a trial period, running both the traditional review and the AI review side by side, and afterwards evaluate whether quality and efficiency have improved. If you decide to keep using AI for code review, continue these evaluations so you can track the AI’s performance. Also be aware that the reviewing AI may be based on outdated knowledge of the technology.
Before you start using Generative AI to review your code, take these aspects into consideration. If you have reflected on all of them, you may be ready to explore a new and exciting path toward writing sustainable code.
About Lars Snellingen
Lars Snellingen is one of Sogeti’s experts in technical testing, with deep knowledge of UI automation and API automation. He has been involved in, and fully responsible for, testing multiple system transitions from on-prem to cloud in the retail industry. As a lead technical tester, Lars has experience in multiple roles such as Test Developer, Test Analyst, and Test Coordinator, as well as managing test environments and mentoring. Lars is part of the technical core team in Norway, where he has given several presentations on different tools in API testing, and he has worked with people across Sogeti globally through a global API community. Lars is driven by taking on complex new challenges and has thereby earned great trust and responsibility from his customers. He is passionate about learning and developing, and strongly believes in delivering quality in his work.
More on Lars Snellingen.