Software testing, cybersecurity & AI in modern times
Spanish version here.
It's been a while since I have written anything here. Life's responsibilities kept me from it, but now I have some time to dust off my blog.
On Friday 14 and Saturday 15 of November, I attended the Nerdearla event in Madrid, Spain (the first edition of this event to be held in Europe).
Figure 1. With Brais Moure (https://moure.dev/), a seasoned software engineer. I won one of his books.
The event consists of a series of talks and workshops covering the software world: development, testing, UI/UX design, cybersecurity, infrastructure, management, soft skills, and more. I am not trying to promote the event, but I highly recommend it.
I attended some talks that caught my attention (although unfortunately there were others I could not attend due to scheduling conflicts) about cybersecurity, AI, and testing, and I have confirmed what I have been thinking for some time: the roles of software tester and security engineer will be more essential than ever.
Why? Well, with the technological boom we are currently experiencing as increasingly powerful AI models make their way into software development, people, both technical and non-technical, are creating software that is far from good quality and far from secure. It is very worrying, for example, how the idea has been sold that developers, testers, designers, etc. are going to disappear because they will be replaced by AI; and it is just as worrying how people with zero technical knowledge of software development are creating applications left, right and centre without even asking themselves whether what they are generating is reliable and functional. Trust in AI has become completely blind.
Figure 2. Vibe Coding IS NOT (software) programming.
Do not take my last point the wrong way. I agree that non-technical people should get involved in software development. I have known several people from outside the IT field who have excelled as developers or testers. What I disagree with are all the utopian/dystopian ideas that various tech influencers have peddled, spouting a string of nonsense about AI. This also reminds me of another case that occurred during the pandemic: several of these same tech influencers were also promoting the idea that a university degree was useless and that the best option was to study at a bootcamp or take any online course to enter the IT industry (any serious and sensible person knows that all of the above are complementary to our university and professional degrees). And the worst part: they also sold the idea that working in IT was a bed of roses and that you could earn a lot. The negative consequences of all this are evident today.
And I insist: Please do not take my comments or opinions the wrong way. I am expressing all of this from my perspective, which reflects much of what I have observed in the industry and what other colleagues have told me. Please do not be too harsh on me when criticizing me.
So, as I mentioned earlier: The roles of tester and security engineer will be more essential than ever (and I hope that is the case).
Figure 4. DO NOT forget the foundations.
I was sharing with some friends who work in this noble IT industry that I have no doubt that in the future, with everything that is happening so rapidly with AI, development teams will consist of three people: a developer, a tester, and a designer. This model, for me, would be ideal and the healthiest (in every respect) possible.
But unfortunately, we are seeing the complete opposite, and it is very sad how people with the power to make big decisions, in addition to having the money to invest in whatever they want, do nothing—because they do not want to—to make this profession as humane as possible. I would not doubt, then, that in the future it will end up being a single person doing everything (the famous "Jack-of-all-trades").
Figure 5. Creating software IS NOT coding.
Perhaps some will tell me, “Armando, that is how capitalism works. Money rules, and we are driven by it. What is it to you?” or “Armando, are you suggesting a revolution? That is communist!” No, no, and no. Not at all. This is where, for me, the philosophical and theological aspects come into play. Some of you know that I am currently pursuing a master’s degree in Theology of the Body (which I hope to complete in the first half of 2026); and in my studies, I have increasingly explored how new technologies and current trends, along with various ideologies, have degraded human beings, reducing them to mere objects that are useful as long as they produce something good and profitable. In short, this is utilitarianism, and humanity is not defined by the ephemeral but by the transcendent.
The sad thing about this is that there are people who agree to accept working conditions that leave much to be desired because “that is the way it is, and there is nothing we can do about it. Let us just put up with it!”.
I have strayed from the topic, but that was intentional, because I wanted to express something I mentioned earlier that is also closely related to the whole AI issue. But, getting back to the beginning: I find the direction software development is heading very worrying. As a tester, and with some security knowledge, I cannot stop thinking about many things related to all of this: What kind of products are we developing for end users? Are we concerned about end users? Are we truly focused on the quality of what we are delivering? Or are we just focused on delivering, delivering, and delivering, regardless of how many millions of euros (or even human lives) are lost in the process? How closely is "planned obsolescence" related to software development? Have we looked closely enough at the necessary security measures to avoid exposing ourselves to potential cyberattacks that could cost us vast sums of money or, more importantly, human lives? Who has access to our data, and how is it being handled? What criteria will be used for the use of AI in sectors such as the military? How will we know that AI will not be used against us in armed conflicts? What ethical and moral implications are being considered in the further development of AI?
Figure 6. AI-first web browsers "nightmare": ChatGPT Atlas vs Perplexity Comet.
Finally, I use AI in my coding tasks. It has been a great ally for me. But I do not blindly trust it with everything. I recently had a technical interview where they asked me how I validated the outputs generated by the AI models when I asked them questions. I answered, "by intuition." And it is true. Every time I read the code the models generate, something inside me says, "This is too good to be true" or "something does not add up," and I start investigating further (but without using AI). As you know, AI can get carried away and generate overly complex or lengthy responses that, at the end of the day, are not very scalable, readable, or even maintainable. It can even invent things that seem to make sense at first glance but do not work. Critical thinking plays a very important role in all of this. Perhaps my years of experience have helped me develop these skills.