Unit Tests

What is a unit test?

A unit test takes a very small piece of testable code and determines whether it behaves as expected. The size of the unit under test is not strictly defined; however, unit tests are typically written at the class level or around a small group of related classes. The smaller the unit under test, the easier it is to express its behaviour in a unit test, since the branch complexity of the unit is lower.

If a unit test is difficult to write, that can be a sign that the module should be broken down into smaller, more coherent pieces that can be tested independently.

Unit testing is a powerful design tool for shaping code and implementation, especially when combined with TDD.

What can/should you unit test?

  1. Test the behaviour of a module by observing changes in its state. This treats the unit as a black box tested entirely through its interface.
  2. Look at the interactions and collaborations between an object and its dependencies. The dependencies are replaced by mocks, stubs and the like, so that those interactions can be verified in isolation (both approaches are sketched below).
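
As a sketch of both approaches, here is a minimal JUnit and Mockito example. The Basket, Checkout and PriceService types are hypothetical, invented purely for illustration:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    import org.junit.Test;

    // Hypothetical classes, invented purely for this sketch.
    interface PriceService { int priceOf(String sku); }

    class Basket {
        private int totalInPence = 0;
        void add(int priceInPence) { totalInPence += priceInPence; }
        int total() { return totalInPence; }
    }

    class Checkout {
        private final PriceService prices;
        private int total = 0;
        Checkout(PriceService prices) { this.prices = prices; }
        void scan(String sku) { total += prices.priceOf(sku); }
    }

    public class CheckoutTest {

        // 1. State-based: treat the unit as a black box and observe
        //    its state changes through its public interface only.
        @Test
        public void addingAnItemIncreasesTheBasketTotal() {
            Basket basket = new Basket();
            basket.add(500); // price in pence
            assertEquals(500, basket.total());
        }

        // 2. Interaction-based: replace the dependency with a mock and
        //    verify the collaboration between the object and the mock.
        @Test
        public void checkoutAsksThePriceServiceForTheScannedItem() {
            PriceService prices = mock(PriceService.class);
            when(prices.priceOf("book")).thenReturn(500);

            Checkout checkout = new Checkout(prices);
            checkout.scan("book");

            verify(prices).priceOf("book");
        }
    }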

Purpose of unit tests

  • Constrain the behaviour of the unit
  • Give fast feedback when that behaviour changes

Failing tests in the pipeline

That situation where there are failing tests in the pipeline, you ask someone about it, and the response you get is ‘Oh, those tests are failing because so and so service is not running, so it’s fine; those tests can fail’. Sound familiar? I really dislike this response, for three reasons:

  1. Why did we write these tests if we are going to be fine with them failing?
  2. Surely if they fail, that should be a flag that something is wrong (an unreliable, flaky service)?
  3. If these tests do not give valuable feedback and are useless, just get rid of them. A failing test build should mean no release. If you release on a failing build and something goes wrong in production, what are you going to do? The tests highlighted that something was wrong and we chose to risk it.

If tests need certain services running to pass, have those services running. If a service stops running for random reasons, how can you be confident that the same won’t happen in production?

If tests need certain data to pass, have that data. That data would be in live, right?

A test environment should simulate the live environment as closely as possible: we can test in it all we want, but unless the configuration is almost identical the results tell us little about live.

Automated tests should always have a meaning and a purpose; otherwise there is just no point in having them.

Testing in an agile environment

Here at Sagepay we work in weekly iterations, i.e. the aim at the end of every iteration is to deliver something new to the user, whether that user is a developer, a customer or the manager of a business.

To help with this, we use Jenkins as our continuous integration tool. Once development is complete (we work on branches), the new code is merged to master. This triggers builds on the pipeline that deploy to the development environment and run the e2e tests (also on dev). The deployment to the QA environment is manual; I do this when a particular story is complete and ‘releasable’. I then run some basic manual tests on QA to make sure the basic flows work as expected (I have a checklist!).

What I have learnt working in this way is that you have to have 100% confidence in the automated tests on the pipeline and in the coverage those tests give you. Otherwise it would not be possible to release weekly, and QA would almost always block a release if it were manual.

This leads on to pair programming. When we have decided which stories we are going to work on in a sprint, I pair with a developer and we start discussing what tests we should have for this particular story. This usually starts by looking at the acceptance criteria as a baseline. We write some basic tests first and make them fail; e2e tests are the easiest to start with (a sketch of one follows below). Unit tests and integration tests follow. I will be writing a separate blog post about the different types of testing and the purpose of each level. Once these new tests pass and the regression tests still pass, I am confident that the new code we release will not break anything.
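
As a hedged sketch, a first failing e2e test for a hypothetical payment story might look like this (Selenium WebDriver with JUnit, assuming ChromeDriver is available; the URL, element IDs and expected text are invented and would come from the story’s acceptance criteria):

    import static org.junit.Assert.assertTrue;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class PaymentJourneyTest {

        private WebDriver driver;

        @Before
        public void openBrowser() {
            driver = new ChromeDriver();
        }

        // Drive the application the way a user would, based on the
        // story's acceptance criteria. This fails until the feature
        // is implemented - that is the point.
        @Test
        public void customerCanCompleteAPayment() {
            driver.get("https://dev.example.com/checkout");
            driver.findElement(By.id("cardNumber")).sendKeys("4111111111111111");
            driver.findElement(By.id("payButton")).click();

            assertTrue(driver.getPageSource().contains("Payment successful"));
        }

        @After
        public void closeBrowser() {
            driver.quit();
        }
    }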

One thing to note: my focus is not to test everything and anything. It is to test what matters to the user and what they will actually do. How are they using the application? That way you test what is necessary and what will give you the most useful feedback.

TDD/BDD

TDD (test driven development) and BDD (behaviour driven development) usually go hand in hand in an agile environment. However, they are sometimes used as if they mean the same thing, which is not true. TDD/BDD, what’s the difference, eh? Well, to me, here it is:

TDD – writing tests first and making them pass during the implementation phase makes the development process faster and more efficient (in terms of quicker, more useful feedback). Always write tests first when you can (sometimes it is difficult to write tests, or it may not make sense to test something so small – I will write another blog post about this).
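
As a minimal sketch of the red-green rhythm (the Discount class below is hypothetical, invented for illustration): the test is written first and fails, then just enough code is written to make it pass.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DiscountTest {

        // Red: written before Discount exists, so at first this
        // does not even compile, let alone pass.
        @Test
        public void tenPercentOffOrdersOverFiftyPounds() {
            Discount discount = new Discount();
            assertEquals(5400, discount.apply(6000)); // amounts in pence
        }
    }

    // Green: the simplest implementation that makes the test pass;
    // refactor afterwards with the test as a safety net.
    class Discount {
        int apply(int amountInPence) {
            return amountInPence > 5000 ? amountInPence * 90 / 100 : amountInPence;
        }
    }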

BDD – real user behaviour and interactions drive the tests and the development. You are focused on what the user does, whoever the user may be. You will consider the standard user at one end and the extreme user at the other (i.e. someone who will click every button on your site or do strange things like a GET on a URL that should only accept a POST).
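
As a sketch of the extreme-user end of BDD, here is a behaviour-style test for that last case, written as given/when/then. The endpoint URL is hypothetical; in practice this would run against the dev environment:

    import static org.junit.Assert.assertEquals;

    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Test;

    public class ExtremeUserTest {

        // Given a payment endpoint that should only accept POST,
        // when an extreme user does a GET on it,
        // then they should get a 405 Method Not Allowed, not a crash.
        @Test
        public void getOnAPostOnlyUrlIsRejectedGracefully() throws Exception {
            URL endpoint = new URL("https://dev.example.com/payments"); // hypothetical
            HttpURLConnection connection = (HttpURLConnection) endpoint.openConnection();
            connection.setRequestMethod("GET");

            assertEquals(405, connection.getResponseCode());
        }
    }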