TestBash Manchester 2016

Hello all,

I had the pleasure of going to TestBash in Manchester and it was absolutely awesome! I left feeling so inspired and in love with QA and testing!

I wanted to share with you some of the ideas that were talked about:

1) The concept of social and critical distance was discussed. Social distance is as implied: the distance between people affects the way in which we work. Critical distance is how much individuals' opinions can differ, and the consequences of this. The idea is that we want to create an environment that cultivates critical distance (we encourage it because a difference in opinion can create great ideas, and it is a good habit to challenge and question the things we do rather than just accept them) and that eliminates social distance, because teams should be in a place where they can collaborate effectively and challenge one another rather than just trying to please each other.

2) Shu Ha Ri – this comes from Japanese martial arts as a way of thinking about learning techniques.

Shu – you begin by following a rule. You follow the processes that are already there so you can learn the basic concepts and get yourself started.

Ha – you begin to break the rule. You start to understand more deeply the techniques and principles behind a particular technology or practice.

Ri – you are the rule. You learn for yourself and are seen as independent. You create your own approaches and can adapt techniques to your own style.

The idea here is that people are always at different levels of learning and you have to adapt to this. For example, someone who is quite junior may be content with just following what you tell them, as they are still at an early stage. However, someone who is quite experienced will usually want to understand the underlying principles of what you are trying to teach them, so you will need to take a different approach to teaching.

If you want to read more on this there is an article by Martin Fowler here: http://martinfowler.com/bliki/ShuHaRi.html


3) Another idea which I really liked was that shared documentation does not equal shared understanding. Pointing someone to a document that they can follow is not necessarily the best way to share knowledge, because they have no context for what they are reading. It is usually more effective to go through a document with someone than to have them read it alone and get confused. The danger with just reading documents is that people make incorrect assumptions and can end up doing something completely wrong. People have different perceptions, speeds and languages in which their minds think.

4) And finally, there was a great presentation about not using scenarios as test cases. An automation strategy should be decoupled from the acceptance scenarios, because scenarios can get quite overwhelming. For example, you can have a scenario that reads like 'given A, and B, and C, and D, and E, then F' – this should most certainly not be classified as a test, as it doesn't translate well into code. Instead, we should segregate behaviour and test it independently; in e2e tests those 'and' preconditions belong in the test setup, as in the sketch below.
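To make that concrete, here is a minimal sketch (JUnit 5, with hypothetical Basket/Customer/Item/Receipt domain types) of pushing the 'given ... and ... and ...' preconditions into setup so the test itself asserts one behaviour:

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class CheckoutTest {

    private Basket basket;      // Basket, Customer, Item and Receipt are
    private Customer customer;  // hypothetical domain types for illustration

    @BeforeEach
    void setUp() {
        // The 'given A and B and C and D and E' chain lives here, not in the test name
        customer = new Customer("test-user");
        customer.verifyEmail();
        basket = new Basket(customer);
        basket.add(new Item("book", 10.00));
        basket.applyDiscount("WELCOME10");
    }

    @Test
    void discountedBasketChecksOutSuccessfully() {
        // The test itself exercises and asserts a single behaviour
        Receipt receipt = basket.checkout();
        assertTrue(receipt.isPaid());
    }
}
```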

There are a few other things which I'm going to write about in another post 🙂


QCon

I had the great opportunity to attend QCon, an international software development conference held annually, from March 7th-9th. It was such a valuable experience; I took away a lot and heard from inspirational speakers who have influenced innovation and creativity in the engineering world. I wanted to share what I heard from the talks I attended. I will split this post into three so it's easier to read.

Monday 

Keynote – Unevenly Distributed by Adrian Colyer

Firstly, I would just like to say what a great speaker Adrian Colyer was! So engaging, great ideas and just overall a brilliant talk and a great start to QCon. His presentation was based on this idea: the foundations and principles on which the future builds are carefully researched, implemented, evaluated, reviewed and written down. Adrian reads a research paper every (week)day and posts a summary to his blog 'The Morning Paper'. I took away these thoughts from his talk:

  1. What do research papers provide to us? Applied lessons, thinking tools and they raise our expectations
  2. Is it always the question of ‘the more, the better?’
  3. The Scalable Commutativity Rule
  4. The art of testing less without sacrificing quality
  5. Holistic Configuration Management at Facebook

Continuous Delivery: Benefits explained by Lianping Chen

Lianping Chen works at Paddy Power, which offers betting/gambling services in regulated markets through shops, phones and mobile apps. He talked about how, a few years ago, releases used to be a scary experience: they were unpredictable and not frequent enough, delivery activities weren't efficient, and setting up environments could take weeks. Then continuous delivery came along – what is it? He recommended the book 'Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation'.

Continuous Delivery – ‘there is more to continuous delivery than wiring together Jenkins instances and buying a new automated deployment tool’.

  1. Reliable Releases – less risk, should be another normal day (no stress), deploy at peak times!?, deployment automation to eliminate human errors
  2. Aligning testing and production environments – saves time on releases
  3. Small batches

How do we achieve these things?

  • Collaborative culture and organization structure
  • Responsibility change – shift responsibility to developers, create trust between teams not friction and avoidance

He said something that stuck with me: 'DevOps isn't efficient without automation'. If a task can be done by clicking a button, automate it. Of course, you shouldn't automate tasks that would take more effort to automate than the value they return (those might as well be done manually). Anyone should be able to click a button!

When you are able to solve these problems, what do you get?

  • Accelerated time to market – reduced cycle time from creating a user story to deploying it. Start generating revenue
  • Get fast feedback – break big features down so that they can be finished in a short period of time
  • Better architecture – the architecture should allow adding features in small increments
  • Zero-downtime deployments

Some other points

  • Testing is not only about writing tests but also about design.
  • Improved product quality – customers don’t find bugs, TDD/BDD/ATDD, eliminate flaky (non-deterministic) tests, fixing failing tests is a priority as it means something
  • Sonar – for metrics on test coverage

Spending time on writing tests and coming up with elegant design is better than spending time on fixing bugs.

Building the right product

'You can't just ask customers what they want and then try to give that to them. By the time you get it built, they'll want something new.' – Steve Jobs

Work on hypotheses, not requirements! i.e. decisions should be based on data, not ‘opinions’

How to win hearts and minds by Chris Young and Kate Gray

This talk was around the topic of how to convince your colleagues when you have an idea or cause you want to champion. The talk was packed; I had to stand to listen to it! Chris Young and Kate Gray were great speakers. They used the example of electoral politics, i.e. the techniques politicians use when they run campaigns. How can you influence behaviour and decisions? What is the common ground? They described three techniques:

Segmentation

  • break down barriers
  • creating options
  • who is likely to support you?
  • you don’t need 100%
  • soft support and hard support: there are those who completely support you and understand your cause (hard support) and those who are not too sure whether they should support you (soft support). The soft supporters are the people you need to focus your attention on.

They described this concept of Impact Mapping.

Vision

  • how do you communicate?
  • powerful, persuasive messages
  • relevant, meaningful
  • emotion over mind, having an effect on people
  • ongoing dialogue with customers

Polling

  • keep people in the loop, respect their feedback and ask for their honest opinions


CD at LMAX: Testing into production and back again by Sam Adams

Continuous Delivery is not the same as Continuous Deployment. Continuous delivery means having the ability to deploy whenever you want; continuous deployment means every change that passes the pipeline is deployed to production automatically.

One of the biggest challenges of going straight from the development environment to production is that development teams usually do not have access to production environments, for security reasons. However, for the simple tasks that can be done with the click of a button, why don't we just use automation?

Sam also talked about having tests outside the pipeline (performance, integration, stress) that don’t necessarily block the pipeline but if any issues are identified then these are looked into.

Other points

  • Everyone owns the test suite
  • ‘Intermittency testing’ – the best way to understand a system is to try and fix a problem with it!
  • Autotrish – records test results, identifies patterns in failures
  • Reliability tests – kill/fail-over/recover suites
  • Feature toggles – gradually introduce new features by enabling them for only one or a few users at a time and then increasing the number of users (see the sketch below)
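As a rough illustration of that last point, here is a minimal sketch of a percentage-based feature toggle (all names hypothetical; not the LMAX implementation). Hashing the user id gives each user a stable bucket, so the same users stay enabled as the rollout percentage is increased towards 100%:

```java
public class FeatureToggle {

    private final int rolloutPercentage; // 0..100

    public FeatureToggle(int rolloutPercentage) {
        this.rolloutPercentage = rolloutPercentage;
    }

    public boolean isEnabledFor(String userId) {
        // floorMod keeps the bucket in 0..99 even for negative hash codes
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < rolloutPercentage;
    }
}

// Usage: new FeatureToggle(5).isEnabledFor("user-42") enables roughly 5% of users
```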

Acceptance testing for Continuous Delivery by Dave Farley

This talk was given by Dave Farley, one of the writers of the Continuous Delivery book I mentioned above. In this talk he discussed the meaning of acceptance criteria in automated tests.

Firstly, what does 'done' mean when you are writing automated tests? It is when the acceptance criteria have been fulfilled and the behaviour outlined has been automated. Acceptance tests should be an executable specification of the system behaviour.

Who owns the tests? This question got me thinking, as really developers, BAs and testers should all own the tests. Why not only testers? Developers write code, so they need to own the tests so they know their code works and doesn't break another part of the system in some other way. BAs want changes to the system to be made easily and smoothly, so they should know the test coverage and the value of the tests.

He also talked about focusing on what the tests are testing, not how it is being tested. By this he meant we should abstract implementations – use mocks, page drivers etc. – as we are interested in the behaviour when writing acceptance tests, not in how it has been implemented.

And last but not least, always use the language of the problem domain, for readability and ease of maintenance.
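Putting those last two points together, a minimal sketch (JUnit 5, with a hypothetical TradingDriver abstraction standing in for a page driver or API client) might look like this – the test reads in the language of the problem domain, and the driver hides how the system is exercised:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PlaceOrderAcceptanceTest {

    // TradingDriver is an assumed abstraction: it could be backed by a UI
    // page driver, an API client or a stub without the test changing
    private final TradingDriver trading = TradingDriver.forTestEnvironment();

    @Test
    void aLimitOrderIsAcceptedAndConfirmed() {
        trading.placeLimitOrder("GBP/USD", 1000, 1.4250);
        assertEquals("CONFIRMED", trading.statusOfLastOrder());
    }
}
```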

The Testing Pyramid

If you are a good developer/tester you probably ask yourselves these questions 🙂

  • How many tests of each type should I have?
  • Do I need to write more unit tests?
  • Do I need to have fewer e2e tests?

Well luckily, we have the test pyramid below that can be used as a guideline for how many tests, roughly, you should have of each type. It is fairly self-explanatory; have a lot of unit tests and very few e2e tests. Why you ask?

[Image: the test pyramid]

Unit tests are cheap in terms of power/CPU usage/processes etc., and e2e tests are the opposite. Unit tests give you fast feedback and tell you exactly where an issue is if one occurs. With an e2e test this is much more difficult: because an e2e test is most likely using all the clients and services (or at least two or three), you have to dig into the code to figure out what is going on.

The test pyramid also suggests a testing strategy, called 'Bottom Up', i.e.

  • Test the domain
  • Tests closer to the code
  • Integrate early
  • Use mocks or stubs
  • Visualise test coverage

What is testing?

I had a request, quite rightly, from someone who is not in the technology world to explain what exactly testing is. So, time to go back to basics.

To come up with one strict definition of testing is a bit difficult, but put simply, testing is ensuring the quality of a product or application before it is released to customers. It is very unlikely that you will have a piece of software that is completely bug free. But it is the tester's responsibility to ensure that the bugs that are critical and of high importance (i.e. customer facing / make the experience of the customer worse) are dealt with and fixed as soon as possible. It is also the tester's job to find bugs, whether through manual testing or automated tests. Find the flaws in the code and fix them (or get the developers to fix them!).

How do we test?

We test as new code is developed or as old code is refactored/changed. We write automated tests that run as part of our integration tools. That way we don't have to go back and keep re-testing the same things over and over again. Regression testing (the kind of testing where you check that previous functionality has not been broken) should ideally all be automated. So when a new piece of functionality has been developed, you will have automated tests for it, do some manual testing (to check the standard flows) and then run the regression tests (see the sketch below). Then you are done and can sign off for a release 🙂
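As a small sketch of how the regression suite can be wired into the integration tooling, assuming JUnit 5 and a hypothetical Pricing class: tagging tests lets CI run the regression suite as its own step.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PricingRegressionTest {

    @Test
    @Tag("regression") // CI can select and run all @Tag("regression") tests as one suite
    void existingDiscountRuleStillApplies() {
        // Pricing is a hypothetical domain class: previously released
        // behaviour must not change when new functionality is added
        assertEquals(90.00, Pricing.applyDiscount(100.00, 10), 0.001);
    }
}
```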

Another thing I do is change control. Because we release so frequently, I find it necessary to track the changes that are being made from one week to the next. What I keep track of (captured as a sketch after this list):

  1. The version number
  2. Details of changes
  3. Has the build passed?
  4. Is this the release version?
  5. When is it scheduled for release to production?
  6. Was the release successful or rolled back?
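For illustration only, these fields could be captured in something as simple as a record (Java 16+, hypothetical names):

```java
// One possible shape for a change-control entry; field names are illustrative
public record ReleaseRecord(
        String version,            // 1. the version number
        String changeDetails,      // 2. details of changes
        boolean buildPassed,       // 3. has the build passed?
        boolean isReleaseVersion,  // 4. is this the release version?
        String scheduledRelease,   // 5. when it is scheduled for production
        String outcome             // 6. successful or rolled back
) {}
```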

Monitoring

So if all is good and well, you have had your changes approved and you can release (yay!). But is it really yay? What if something goes wrong and you need to be able to fix it quickly? This is where monitoring comes in.

At the moment we are using Splunk and it is a really great log aggregator. You can take all your log files and get meaningful information about what your users are doing. How many transactions are succeeding/failing? What cards are customers using? When are peak times? And in the case of errors, it can show you on a graph the exact service that is returning the error. What is great about seeing your logs return useful information live is that they can also tell you when you are not doing great logging. And so, you can go and add better logging 🙂

A note about Splunk: the search mechanism it uses is based on filters and field extractors. For example, let's say you want to see transaction amounts against card type. You have to extract both of these fields from the logs and then run a search query based on those field extractors.

The key to using these kinds of tools usefully and successfully is to have meaningful logs in the first place. You have to have done that work. This is a place where you want to know instantly whether everything is okay or not…
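As a minimal sketch of what 'meaningful logs' can mean in practice (hypothetical class and field names): logging key=value pairs makes field extraction in a tool like Splunk straightforward, instead of relying on fragile free-text parsing.

```java
import java.util.logging.Logger;

public class PaymentLogger {

    private static final Logger LOG = Logger.getLogger(PaymentLogger.class.getName());

    public static void logTransaction(String cardType, double amount, boolean success) {
        // key=value pairs map directly onto field extractors in the log aggregator
        LOG.info(String.format(
                "event=transaction card_type=%s amount=%.2f status=%s",
                cardType, amount, success ? "success" : "failure"));
    }
}
```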


Unit Tests

What is a unit test?

A unit test takes a very small piece of testable code and determines whether it behaves as expected. The size of the unit under test is not strictly defined; however, unit tests are typically written at the class level or around a small group of related classes. The smaller the unit under test, the easier it is to express its behaviour in a unit test, since the branch complexity of the unit is lower.

If it is difficult to write a unit test, this can highlight when a module should be broken down into independent, more coherent pieces and tested individually.

Unit testing is a powerful design tool in terms of code and implementation, especially when combined with TDD.

What can/should you unit test?

  1. Test the behaviour of modules by observing changes in their state. This treats the unit as a black box tested entirely through its interface.
  2. Look at the interactions and collaborations between an object and its dependencies. These dependencies are replaced by mocks/stubs etc. (both styles are sketched below).
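A minimal sketch of both styles, assuming JUnit 5 and Mockito with hypothetical Account/AuditLog types:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

class AccountTest {

    @Test
    void depositIncreasesBalance() {            // 1. state-based: black box via the interface
        Account account = new Account(100);
        account.deposit(50);
        assertEquals(150, account.balance());
    }

    @Test
    void depositNotifiesAuditLog() {            // 2. interaction-based: verify the collaboration
        AuditLog audit = mock(AuditLog.class);  // dependency replaced by a mock
        Account account = new Account(100, audit);
        account.deposit(50);
        verify(audit).record("deposit", 50);
    }
}
```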

Purpose of unit tests

  • Constrain the behaviour of the unit
  • Fast Feedback


Failing tests in the pipeline

That situation where there are failing tests in the pipeline, you ask someone about it, and the response you get is 'Oh, those tests are failing because such-and-such service is not running, so it's fine; those tests can fail'. Sound familiar? I really dislike this response, for three reasons:

  1. Why did we write these tests if we are going to be fine with them failing?
  2. Surely if they fail, this should be a flag that something is wrong? (an unreliable, flaky service)
  3. If these tests do not give valuable feedback and are useless, just get rid of them. A failing test build should mean that there is no release. If you release with a failing test build and something goes wrong in production then what are you going to do? The tests highlighted that something was wrong and we chose to risk it.

If tests need certain services running to pass, have those services running. If a service stops running for random reasons, then how can you be confident it won't do the same in production?

If tests need certain data to pass, have that data. That data would be in live, right?
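One way to enforce the services point is to make a missing prerequisite fail loudly and early, rather than letting its tests fail 'acceptably'. A minimal sketch (JUnit 5, hypothetical health-check URL):

```java
import org.junit.jupiter.api.BeforeAll;
import java.net.HttpURLConnection;
import java.net.URL;
import static org.junit.jupiter.api.Assertions.fail;

class RequiresPaymentServiceTest {

    @BeforeAll
    static void paymentServiceMustBeRunning() {
        try {
            // hypothetical health endpoint for the service these tests depend on
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://test-env/payment/health").openConnection();
            if (conn.getResponseCode() != 200) {
                fail("Payment service is unhealthy - fix the environment, don't ignore the tests");
            }
        } catch (Exception e) {
            fail("Payment service is unreachable: " + e.getMessage());
        }
    }

    // ...tests that genuinely require the payment service go here...
}
```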

A test environment should simulate the live environment as closely as possible. We can test in it all we want, but the environment configuration should be almost identical.

Automated tests should always have a meaning and purpose; otherwise there is just no point in having them.