Reasoning behind automated tests
Throughout my apprenticeship at La Française Des Jeux (FDJ), I had the opportunity to delve into performance testing and expand my understanding of testing practices in general.
This post summarizes key ideas for a beginner software engineer.
Why test software?
A test is a method or process employed to assess or measure the knowledge, skills or abilities of an entity.
In a software context, tests are developed to ensure that a program or some code consistently behaves as expected.
To the best of my knowledge, a test can be either manual or code-based.
When I first began my journey in software development, a significant portion of the code I wrote was tested exclusively by hand, by me. Depending on the project, this approach can be perfectly fine.
As software engineers, our work revolves around the principles of engineering. Our objective is to maximize value while simultaneously minimizing costs, whether it pertains to time or financial resources.
Depending on the context, writing a complete test suite may provide little to no value and be time-consuming. That’s especially the case for small programs not meant to evolve.
Manual tests sufficed for my personal and learning projects because automated tests would have added little value there: these projects had exactly one user (me) and relatively small code bases.
However, in the industrial context of FDJ, where millions of euros in revenue can be generated within an hour, manual tests are not a suitable approach.
Tests, both manual and code-based, ensure the proper functioning of the program only at the moment of execution. The moment you make any modifications, all guarantees vanish, especially if your code is tightly coupled.
Manually testing a complex program after every change is theoretically feasible but not practical. Furthermore, manual tests are not reproducible, making them impossible to review, enhance, or iterate upon.
These are the reasons we heavily invest time and effort in code-based tests.
For instance, when my team and I worked on an API Gateway solution, we wrote tests not only for every feature we developed but also for every log and metric we added. At times this became quite extensive: we exhaustively tested the 'kin-openapi' library, which is responsible for validating requests against an OpenAPI document, and that alone required a considerable number of tests.
This was deemed (and continues to be) a worthwhile investment due to the paramount security implications involved.
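To give a concrete flavour of what such tests can look like, here is a minimal, hypothetical sketch of a table-driven Go test that validates an incoming HTTP request against a tiny OpenAPI document with kin-openapi. The spec, route, and test cases are invented for illustration; this is not the actual gateway code.

```go
package gateway_test

import (
	"context"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"

	"github.com/getkin/kin-openapi/openapi3"
	"github.com/getkin/kin-openapi/openapi3filter"
	"github.com/getkin/kin-openapi/routers/gorillamux"
)

// Minimal OpenAPI document, used only for this example.
const spec = `
openapi: 3.0.0
info: {title: demo, version: "1.0"}
paths:
  /players:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [name]
              properties:
                name: {type: string}
      responses:
        "201": {description: created}
`

func TestRequestValidation(t *testing.T) {
	// Load and validate the spec, then build a router to match incoming requests.
	loader := openapi3.NewLoader()
	doc, err := loader.LoadFromData([]byte(spec))
	if err != nil {
		t.Fatalf("loading spec: %v", err)
	}
	if err := doc.Validate(loader.Context); err != nil {
		t.Fatalf("invalid spec: %v", err)
	}
	router, err := gorillamux.NewRouter(doc)
	if err != nil {
		t.Fatalf("building router: %v", err)
	}

	cases := []struct {
		name    string
		body    string
		wantErr bool
	}{
		{"valid body", `{"name": "alice"}`, false},
		{"missing required field", `{}`, true},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			req := httptest.NewRequest(http.MethodPost, "/players", strings.NewReader(tc.body))
			req.Header.Set("Content-Type", "application/json")

			route, pathParams, err := router.FindRoute(req)
			if err != nil {
				t.Fatalf("finding route: %v", err)
			}

			// Validate the request body against the schema declared in the spec.
			err = openapi3filter.ValidateRequest(context.Background(), &openapi3filter.RequestValidationInput{
				Request:    req,
				PathParams: pathParams,
				Route:      route,
			})
			if (err != nil) != tc.wantErr {
				t.Errorf("got err = %v, wantErr = %v", err, tc.wantErr)
			}
		})
	}
}
```

The table-driven structure is what makes the effort pay off: adding a new edge case is one more entry in the slice, not a new manual test session.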
That being said, code-based tests bring no value if they are not run, at the very least, before every release. Continuous Integration (CI) is the best practice here, as it takes human error, such as forgetting to run the tests, out of the equation.
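As an illustration, here is a hypothetical CI configuration for a Go project (a GitHub Actions workflow; the file name, triggers, and Go version are assumptions, not our actual pipeline) that runs the test suite on every push and pull request:

```yaml
# .github/workflows/ci.yml — hypothetical workflow for illustration only
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"
      # Fail the build if any test fails, so untested code cannot reach a release.
      - run: go test ./...
```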
Conclusion
On the internet, many people present their advice as if it were universal truth:
- Indie hackers: Don't write tests.
- Test-driven development (TDD) proponents: Write tests first.
However, it's essential to form your own opinion and approach each situation with nuance, as no single solution fits every problem. Think in terms of value and cost, and tailor your approach accordingly.