That situation where there are failing tests in the pipeline, you ask someone about it, and the response you get is: ‘Oh, those tests are failing because such-and-such service is not running, so it’s fine; those tests can fail’. Sound familiar? I really dislike this response, for three reasons:
- Why did we write these tests if we are going to be fine with them failing?
- Surely if they fail, that should be a flag that something is wrong (an unreliable, flaky service)?
- If these tests do not give valuable feedback, just get rid of them. A failing test build should mean there is no release. If you release on a failing build and something goes wrong in production, then what are you going to do? The tests highlighted that something was wrong, and we chose to risk it anyway.
If tests need certain services running to pass, have those services running. If a service stops running for random reasons, how can you be confident it won’t do the same in production?
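One way to enforce this is a pre-flight check at the start of the pipeline: if a required service is down, fail the build loudly before the test stage runs, instead of letting its tests go red ‘as expected’. Here is a minimal sketch; the service names and health-check URLs are hypothetical placeholders, not anything from a real pipeline.

```python
# Pre-flight check: abort the build if a required service is unreachable,
# so test failures can never be explained away as "that service is down".
import sys
from urllib.request import urlopen

# Hypothetical service names and health endpoints -- substitute your own.
REQUIRED_SERVICES = {
    "payments": "http://payments.internal/health",
    "accounts": "http://accounts.internal/health",
}

def unavailable_services(services, probe):
    """Return the names of services whose health probe raises an error."""
    down = []
    for name, url in services.items():
        try:
            probe(url)
        except Exception:
            down.append(name)
    return down

def http_probe(url, timeout=5):
    """Probe a health endpoint; raise if it is unreachable or unhealthy."""
    with urlopen(url, timeout=timeout) as response:
        if response.status != 200:
            raise RuntimeError(f"{url} returned {response.status}")

if __name__ == "__main__":
    down = unavailable_services(REQUIRED_SERVICES, http_probe)
    if down:
        # A down dependency is a build failure, not an excuse for red tests.
        sys.exit(f"Required services unavailable: {', '.join(down)}")
```

The probe is injected so the check itself is testable without a network; in CI you would run this script as its own stage, before any tests execute.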
If tests need certain data to pass, have that data in place. That data would exist in live, right?
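Better still, have the tests create the data they depend on, rather than hoping someone left the right rows lying around in a shared environment. A small sketch of the idea, using an in-memory SQLite database; the schema and values are purely illustrative.

```python
# Sketch: each test seeds exactly the data it needs into a throwaway
# database, so it can never fail because "the data wasn't there".
import sqlite3

def seeded_connection():
    """Create a fresh in-memory database containing the fixture data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, status TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'active')")  # known fixture row
    conn.commit()
    return conn

def active_customer_count(conn):
    """The behaviour under test: count customers with status 'active'."""
    return conn.execute(
        "SELECT COUNT(*) FROM customers WHERE status = 'active'"
    ).fetchone()[0]

if __name__ == "__main__":
    conn = seeded_connection()
    # The test controls its own data, so this assertion is deterministic.
    assert active_customer_count(conn) == 1
```

The same principle applies with a real database and a fixtures or migrations tool: the seed step is part of the test, not a manual prerequisite.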
A test environment should simulate the live environment as closely as possible. We can test in it all we want, but the environment configuration should be almost identical.
Automated tests should always have a meaning and a purpose; otherwise there is just no point in having them.