AI is vindicating the role of the Software Tester
*Figure 1. Image generated by AI.*
Over the last few weeks, I've read and heard all sorts of things about AI (in software development), both good and bad. It's becoming increasingly clear who's in favour of AI and who's against it (although it seems the former are gaining the upper hand). If someone were to ask me my stance on the matter, I would reply that I view the issue with a certain degree of scepticism, but also with a certain degree of optimism (I cannot afford to be left behind on something that affects my entire professional career and which I also use daily at work). I can discuss my personal views on AI in another post.
Today I want to focus on something I've been mulling over for a while. Amid all the scaremongering surrounding AI-driven software development, one thing seems very clear to me: AI is turning us all into software testers.
Yes, you read that right: AI is turning us all into software testers.
I hope I don't get too tangled up in this, but I'll start from the beginning. When the first AI agents were released, people (both technical and non-technical) began experimenting with them and developing all sorts of applications. At the same time, it became very clear that many of these creations had critical bugs (as well as serious security vulnerabilities). Over time, software development practices using these agents became standardised, and slightly more robust applications began to appear (though still not free of defects and security vulnerabilities). There is a wealth of literature online on how to craft the perfect prompt to ask AI agents to generate solutions to the technical or business problems we face in our day-to-day work; such prompts help make the agents' work more efficient and consume fewer tokens.
There is a lot of talk now that, as developers and testers, we will spend most of our time (at least for now) orchestrating what AI agents do. And those saying so are not wrong. There is a lot involved in this orchestration: requirements analysis, planning, design, environment configuration, execution, monitoring and completion (reporting). Does all this sound familiar?
AI is gradually turning us all into software testers, although many of our developer colleagues (I'm not saying all, as there are exceptions) aren't keen on the idea of software testing. And this whole process is «manual» (though it would be more accurate to call it «exploratory»).
If we've jumped on the AI bandwagon, we now spend most of our time analysing whether it's doing what we've asked, and correcting or adjusting where necessary. Of course, that doesn't mean we won't still have to get our hands dirty at some stage of the software development process and write code ourselves (or solve something the AI can't).
Suggested readings:
- Bolton, M. (2024, June 25). Sharper Terms for “Manual Testing.” Developsense.com. https://developsense.com/blog/2024/06/sharper-terms-for-manual-testing
- Bolton, M. (2021, August 24). Alternatives to “Manual Testing”: Experiential, Attended, Exploratory. Developsense.com. https://developsense.com/blog/2021/08/alternatives-to-manual-testing-experiential-attended-exploratory
- Bolton, M. (2013, February 24). “Manual” and “Automated” Testing. Developsense.com. https://developsense.com/blog/2013/02/manual-and-automated-testing

