Wednesday, 13 December 2017

Pairing for skill vs. Pairing for confidence

I went to a WeTest leadership breakfast this morning. We run these breakfasts in a Lean Coffee format, and today we had a conversation about how to build confidence in people who have learned basic automation skills but seem fearful of applying those skills in their work.

I was fortunate to be sitting in a group with Vicki Hann, a Test Automation Coach, who had a lot of practical suggestions. To build confidence she suggested asking people to:
  • Explain a coding concept to a non-technical team mate
  • Be involved in regular code reviews
  • Practice the same type of coding challenge repeatedly

Then she talked about how she buddies these people within her testing team.

Traditionally, when you have someone who is learning, you would buddy them with someone who is experienced. You create an environment where the experienced person can transfer their knowledge or skill to the other.

In a situation where the person who is learning has established some basic knowledge and skills, their requirements for a buddy diversify. The types of activities that build confidence can be different to those that teach the material.

Confidence comes from repetition and experimentation in a safe environment. The experienced buddy might not be able to create that space, or the person who is learning may have their own inhibitions about making mistakes in front of their teacher.

Vicki talked about two people in her organisation who are both learning to code. Rather than pairing each person with someone experienced, she paired them with each other. Not day-to-day in the same delivery team, but they regularly work together to build confidence in their newly acquired automation skills.

In their buddy session, each person explains a piece of code that they’ve written to the other. Without an experienced person in the pair, both operate on a level footing. Each person has strengths and weaknesses in their knowledge and skills. They feel safe to make mistakes, correct each other, and explore together when neither knows the answer.

I hadn’t considered that there would be a difference in pairing for skill vs. pairing for confidence. In the past, I have attempted to address both learning opportunities in a single pairing by putting the cautious learner with an exuberant mentor. I thought that confidence might be contagious. Sometimes this approach has worked well and other times it has not.

Vicki gave me a new approach to this problem, switching my thinking about confidence from something that is contagious to something that is constructed. I can imagine situations where I’ll want to pair two people who are learning so that they can build their confidence together, each person developing a belief in their ability alongside a peer who is going through the same process.


Wednesday, 6 December 2017

Conference Budgets

There has been conversation on Twitter recently about conferences that do not offer speaker compensation. If you haven't been part of this discussion I would encourage you to read Why I Don't Pay to Speak by Cassandra Leung, which provides a detailed summary.

I take an interest in these conversations from two perspectives: I regularly speak at international conferences and I co-organise the annual WeTest conferences for the New Zealand testing community.

As an organiser, the events that I help to run cover all speaker travel and accommodation. We make a special effort to care for our conference speakers and have built a reputation in the international testing community as events that are worth presenting at.

WeTest is a not-for-profit company that is entirely driven by volunteers. How do we afford to pay all of our speakers?

Humble Beginnings

Our 2014 WeTest conference was a half-day event in a single city.

We had 80 participants who paid $20 per person. They received a conference t-shirt along with a catered dinner of pizza and drinks.

All of our speakers were local to the city, so there were no travel or accommodation expenses. Our budget was balanced by the support of our primary sponsor, Assurity.

Our total budget for this event was approximately $3,000. Our income and expenses were:

WeTest Budget 2014

Stepping Up

By 2016 we felt that we had built an audience for a more ambitious event. We embarked on a full-day conference tour with the same program running in two cities within New Zealand.

We had 150 participants in each city who paid $150 per person. This was a significant jump in scale from our previous events, so we had to establish a formal scaffold for our organisation. We registered WeTest as a company, created a dedicated bank account, and launched our own website.

This was also the first year that we invited international speakers. 25% of our speaker line-up, or three out of twelve speakers, traveled to New Zealand from overseas. Covering their travel and accommodation costs significantly altered the dynamics of our budget. Running the conference in two different cities meant that there were travel and accommodation costs for our New Zealand-based speakers and organisers too.

Our total budget for this event was approximately $50,000. Our income and expenses were:

WeTest Budget 2016

The Big League

Our 2016 events sold out quickly and we had long waiting lists. To accommodate a larger audience, we grew again in 2017. This meant securing commercial venues, signing contracts, paying deposits, registering for a new level of tax liability and formalising our not-for-profit status.

In 2017 we had around 230 participants in each city. We introduced an early-bird ticket at $150 per person, so that our loyal supporters would not experience a price hike and we could collect some early revenue to cover upfront costs. Our standard ticket was $250 per person.

40% of our speaker line-up, or four out of ten speakers, traveled to New Zealand from overseas. We incurred similar speaker travel and accommodation expenses to the previous year.

Our total budget for this event was approximately $100,000. Our income and expenses were:

WeTest Budget 2017

To reiterate, WeTest is a not-for-profit organisation that is volunteer-led. The profit from our 2017 events will be reinvested in the testing community and will help us to launch further events in the New Year.

Discussion about speaker reimbursement often happens in the abstract. I hope that these examples provide specific evidence of how a conference might approach speaker reimbursement, whether it is a small community event or a larger endeavour.

At WeTest we have consistently balanced our budget without asking speakers to pay their own way. We are proud of the diverse speaker programs that have been supported by this approach. In 2018 we look forward to continuing to provide a free platform for our speakers to deliver great content.

Sunday, 29 October 2017

Strategies for automated visual regression

In my organisation we have adopted automated visual regression in the test strategy for three of our products. We have made different choices in implementing our frameworks, as we use automated visual regression testing for a slightly different purpose in each team. In this post I introduce the concept of automated visual regression then give some specific examples of how we use it.

What is visual regression?

The appearance of a web application is usually defined by a cascading style sheet (CSS) file. Your product might use a CSS preprocessor like SCSS, Sass, or Less. They all describe the format and layout of your web-based user interface.

When you make a change to your product, you are likely to change how it looks. You might intentionally be working on a design task, e.g. fixing the display of a modal dialog. Or you might be working on a piece of functionality that is driven through the user interface, which means that you need to edit the content of a screen, e.g. adding a nickname field to a bank account. In both cases you probably need to edit the underlying style sheet.

A problem can arise when the piece of the style sheet that you are editing is used in more than one place within the product, which is often the case. The change that you make will look great in the particular screen that you're working in, but cause a problem in another screen in another area of the application. We call these types of problems visual regression.

It is not always easy to determine where these regression issues might appear because style sheets are applied in a cascade. An element on your page may inherit a number of display properties from parent elements. Imagine a blue button with a label in Arial font where the colour of the button is defined for that element but the font of the button label is defined for the page as a whole. Changing the font of that button by editing the parent definition could have far-reaching consequences.

We use automated visual regression to quickly identify differences in the appearance of our product. We compare a snapshot taken prior to our change with a snapshot taken after our change, then highlight the differences between the two. A person can look through the results of these image comparisons to determine what is expected and what is a problem.

Manufactured example to illustrate image comparison
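
To make the mechanics concrete, here is a minimal sketch of this kind of image comparison in Python, using the Pillow library. The file paths and the red highlight colour are illustrative choices of mine, not details from any of our frameworks.

from PIL import Image, ImageChops

# Snapshots taken before and after the change, assumed to be the same size.
before = Image.open("before/homepage.png").convert("RGB")
after = Image.open("after/homepage.png").convert("RGB")

# Per-pixel absolute difference; an entirely black result means no change.
diff = ImageChops.difference(before, after)

if diff.getbbox() is None:
    print("No visual change detected")
else:
    # Build a mask of changed pixels and paint them red over the 'after' image.
    mask = diff.convert("L").point(lambda p: 255 if p else 0)
    overlay = Image.new("RGB", after.size, (255, 0, 0))
    Image.composite(overlay, after, mask).save("homepage-diff.png")
    print("Differences highlighted in homepage-diff.png")

A person can then review the saved diff image to decide whether the change is expected or a problem.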

Team One Strategy

The first team to adopt automated visual regression in my organisation was the team behind our public website, a product with a constantly evolving user interface.

The test automation strategy for this product includes a number of targeted suites. There are functional tests written in Selenium that examine the application forms, calculators, and other tools that require user interaction. There are API tests that check the integration of our website to other systems. We have a good level of coverage for the behaviour of the product.

Historically, none of our suites specifically tested the appearance of the product. The testers in the team found it frustrating to repetitively tour the site, in different browsers, to try to detect unexpected changes in how the website looked. Inattentional blindness meant that problems were missed.

The team created a list of the most popular pages in the site based on our analytics. This list was extended to include at least one page within each major section of the website, defining an application tour for the automated suite to capture screenshots for comparison.

The automated visual regression framework was implemented to complete this tour of the application against a configurable list of browsers. It runs in BrowserStack, which means that it is able to capture images against desktop, tablet, and mobile browsers. The automated checks replace a large proportion of the cross-browser regression testing that the testers were performing themselves.
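
As a sketch of what such a tour might look like, the following Python drives Selenium's Remote WebDriver against the BrowserStack hub. The base URL, page list, capability dictionaries, and credentials are placeholders rather than our real configuration, and the desired_capabilities style reflects the Selenium 3 bindings that were current at the time of writing.

import os

from selenium import webdriver

HUB = "https://USERNAME:ACCESS_KEY@hub-cloud.browserstack.com/wd/hub"
BASE_URL = "https://www.example.com"

# The most popular pages from analytics, plus one page per major site section.
PAGES = ["/", "/calculators/home-loan", "/about-us"]

# The configurable browser list: desktop, tablet, and mobile.
BROWSERS = [
    {"browserName": "Chrome", "os": "Windows", "os_version": "10"},
    {"browserName": "Safari", "device": "iPad 5th", "realMobile": "true"},
]

os.makedirs("screenshots", exist_ok=True)

for caps in BROWSERS:
    driver = webdriver.Remote(command_executor=HUB, desired_capabilities=caps)
    try:
        for page in PAGES:
            driver.get(BASE_URL + page)
            name = page.strip("/").replace("/", "_") or "home"
            driver.save_screenshot(f"screenshots/{caps['browserName']}_{name}.png")
    finally:
        driver.quit()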

The team primarily use the suite at release, though occasionally make use of it during the development process. The tool captures a set of baseline images from the existing production version of the product and compares these to images from the release candidate. The image comparison is made at a page level: a pixel-by-pixel comparison with a fuzz tolerance for small changes.
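
A minimal sketch of that page-level, fuzz-tolerant comparison is below. The threshold values are illustrative assumptions; I don't know the exact numbers that a team would settle on.

from PIL import Image, ImageChops

FUZZ = 10          # ignore per-channel differences smaller than this
TOLERANCE = 0.001  # allow up to 0.1% of pixels to differ

def pages_match(baseline_path, candidate_path):
    baseline = Image.open(baseline_path).convert("RGB")
    candidate = Image.open(candidate_path).convert("RGB")
    if baseline.size != candidate.size:
        return False  # a change in page dimensions is always reported
    # Greyscale difference image: brighter pixels indicate bigger changes.
    diff = ImageChops.difference(baseline, candidate).convert("L")
    changed = sum(1 for p in diff.getdata() if p > FUZZ)
    return changed / (diff.width * diff.height) <= TOLERANCE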

Team Two Strategy

The second team to adopt automated visual regression was our UI toolkit team. This team develop a set of reusable user interface components so that all of our products have a consistent appearance. The nature of their product means that display problems are important. Even a difference of a single pixel can be significant.

The tester in this team made automated visual regression the primary focus of their test strategy. They explored the solution that the first team had created, but decided to implement their own framework in a different way.

In our toolkit product, we have pages that display a component in different states, e.g. the button page has examples of a normal button, a disabled button, a button that is being hovered on, etc. Rather than comparing the page as a whole with a fuzz tolerance, this tester implemented an exact comparison at a component level. This meant that the tests were targeted and would fail with a specific message, e.g. the appearance of the disabled button has changed.
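
Here is a sketch of what a component-level exact comparison could look like. The selector, paths, and helper names are hypothetical; the component image is cropped from a full-page screenshot using the element's location and size, and the find_element_by_css_selector call matches the Selenium 3 bindings of the time.

import io

from PIL import Image

def capture_component(driver, css_selector):
    # Crop a single component out of a full-page screenshot.
    element = driver.find_element_by_css_selector(css_selector)
    page = Image.open(io.BytesIO(driver.get_screenshot_as_png()))
    left, top = element.location["x"], element.location["y"]
    right, bottom = left + element.size["width"], top + element.size["height"]
    return page.crop((left, top, right, bottom)).convert("RGB")

def assert_component_unchanged(driver, css_selector, baseline_path, name):
    baseline = Image.open(baseline_path).convert("RGB")
    current = capture_component(driver, css_selector)
    # Exact comparison: a single changed pixel fails, with a targeted message.
    assert (
        baseline.size == current.size
        and list(baseline.getdata()) == list(current.getdata())
    ), f"The appearance of the {name} has changed"

# e.g. assert_component_unchanged(driver, ".btn--disabled",
#                                 "baselines/disabled_button.png", "disabled button")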

The initial focus for this framework was getting component-level coverage across the entire toolkit, with execution in a single browser. This suite was intended to run after every change, not just at release. The tester also spent some time refining the reporting for the suite, to usefully abstract the volume of image comparisons being undertaken.

Once the tests were reliable and the reporting succinct, the tester extended the framework to run against different browsers. Cross-browser capability was a lower priority than it was in Team One.

Team Three Strategy

A third team are starting to integrate automated visual regression into their test strategy. They work on one of our authenticated banking channels, a relatively large application with a lot of different features.

This product has mature functional test automation. There are two suites that execute through the user interface: a large suite with mocked back-end functionality and a small suite that exercises the entire application stack.

For this product, implementing automated visual regression as a simple application tour is not enough. We want to examine the appearance of the application through different workflows, not just check the display of static content. Rather than repeating the coverage provided by the existing large test suite, the team extended that suite's framework to add automated visual regression.

This suite is still under development and, of the three solutions, it is the largest, the slowest, and requires the most intervention by people to execute. The team created a configuration option to switch on screenshot collection as part of the existing functional tests. This generates a set of images that represent either the 'before' or the 'after' state, depending on which version of the application is under test.

Separate to the collection of images is a comparison program that takes the two sets of screenshots and determines where there are differences. The large suite of functional tests means that there are many images to compare, so the developers came up with an innovative approach to perform these comparisons quickly. They first compare a hash of each image and then, in the event that the hashes differ, they perform the pixel-by-pixel comparison to determine what has changed.
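
As a sketch, that screening step might look something like the following, assuming that the 'before' and 'after' screenshots are written to two directories with matching file names. The directory layout and function names are my own illustration.

import hashlib
from pathlib import Path

def file_hash(path):
    # A cheap fingerprint of the image file's bytes.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def needs_pixel_comparison(before_dir, after_dir):
    # Names of screenshots whose hashes differ; only these go on to the
    # slower pixel-by-pixel comparison.
    candidates = []
    for before in sorted(Path(before_dir).glob("*.png")):
        after = Path(after_dir) / before.name
        if not after.exists() or file_hash(before) != file_hash(after):
            candidates.append(before.name)
    return candidates

Identical files always hash identically, so matching hashes can safely be skipped. Differing hashes do not always mean a visible change, because the same pixels can be encoded as different bytes, which is why the pixel-by-pixel comparison still makes the final call.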

In this team the automated visual regression has a fractured implementation. The collection and comparison happen separately. The focus remains on a single browser and the team continue to iterate their solution, particularly by improving the accuracy and aesthetics of their reporting.

Conclusion

We use automated visual regression to quickly detect changes in the appearance of our product. Different products will require different strategies, because we are looking to address different types of risk with this tool.

The three examples that I've provided, from real teams in my organisation, illustrate this variety in approach. We use visual regression to target:
  • cross-browser testing, 
  • specific user interface components, and
  • consistent display of functional workflows. 
As with any test automation, if you're looking to implement automated visual regression, consider the problem that you're trying to solve and target your framework to that risk.

Thursday, 12 October 2017

Identifying and influencing how people in your team contribute to test automation

This is a written version of my keynote at The Official 2017 European Selenium Conference in Berlin, Germany. 
If you'd prefer to watch the talk, it is available on the Selenium YouTube channel.


How do your colleagues contribute to test automation?

Who is involved in the design, development and maintenance of your test suites?

What would happen if people in your team changed how they participate in test automation?

How could you influence this change?

This article will encourage you to consider these four questions.

Introduction

When I was 13 years old I played field hockey. I have a lot of fond memories of my high school hockey team. It was a really fun team to be part of and I felt that I really belonged.

When I was 10 years old I really wanted to play hockey. I have clear memories of asking my parents about it, which is probably because the conversation happened more than once. I can remember how much I wanted it.

What happened in those three years, between being a 10 year old who wanted to play hockey and a 13 year old creating fond memories in a high school hockey team? Three things.

The first barrier to playing hockey was that I had none of the gear. I lived in a small town in New Zealand, both my parents were teachers, and hockey gear was a relatively large investment for my family. To participate in the sport I needed a stick, a mouth guard, shin guards, socks. Buying all this equipment gave me access to the sport.

Once I had the gear, I needed to learn how to use it. It turned out that my enthusiasm for getting onto the field did not translate into a natural ability. In fact, initially I was quite scared of participating. I had to learn to hit the ball and trap it, the different positions on the field, and what to do in a short corner. Learning these skills gave me the confidence to play the game, which meant that I started to enjoy it.

The third reason that I ended up in a hockey team when I was 13 years old was because that is where my friends were. As a teenager, spending time with my friends after school was excellent motivation to be part of a hockey team.

Access, skills, and motivation. These separated me at 10 years of age from me at 13 years of age. These separated a kid who really wanted to participate in a sport from someone who felt like they were part of a team.

This type of division is relatable. Team sports are an experience that many of us share, both the feelings of belonging and those of exclusion. Access, skills, and motivation also underpin other types of division in our lives.

Division in test automation

If you look at a software development team at a stand-up meeting, they are all standing together. People are physically close to each other, not on opposite sides of a chasm. But within that group are divisions, and different divisions depending on the lens that you use.

If we think about division in test automation for a software development team then, given what I’ve written about so far, you might imagine something like this:

A linear diagram of division

People are divided into categories and progress through these pens from left to right. To be successful in test automation I need access to the source code, I need the skills to be able to write code, and I need to be enthusiastic about it. Boom!

Except, perhaps it’s not that simple or linear.

What if I’m a tester who is new to a team, and I have the coding skills but no permission to contribute to a repository? What if I’m enthusiastic, but have no idea what I’m doing? It’s not always one, then the other, then the other. I am not necessarily going to acquire each attribute in turn.

Instead, I think the division looks something like this.

A Venn diagram of division

A Venn diagram of division by access, skills, and motivation. An individual could have any one of these attributes, any combination of them, or none at all.

To make sense of this, I’d like to talk through real-life examples from teams that I have been part of, which feature five main characters:

Five characters of an example team

The grey goose represents the manager. The burgundy red characters represent the business: the dog is the product owner, the horse is the business analyst. Orange chickens are the developers, and the yellow deer are the testers.

Team One

Team One

This was an agile development team in a large financial institution. I was one of two testers. We were the only two members of the team who were committing code to the test automation suite. We are the two yellow deer right in the middle of the Venn diagram with access, skills and motivation.

The developers in this team could have helped us; they had all the skills. They didn’t show any interest in helping us, but we also didn’t give them access to our code. The three orange chickens at the top of the Venn diagram show that the developers had skills, but no motivation or access.

The business analyst didn’t even know that we had test automation, and there was no product owner in this team. However, there was a software development manager who was a vocal advocate for the test automation, though they didn’t understand it. The burgundy red horse at the top right is outside of the diagram, and the grey goose is in motivation alone.

The test automation that this farmyard created was low-level; it executed queries against a database. As we were testing well below the user interface, which is where the business felt comfortable, they were happy to have little involvement. The code in the suite was okay. It wasn’t as good as it could have been if the developers were more involved, but it worked and the tests were stable.

Team Two

Team Two

This was a weird team for me. You can see that as a tester, the yellow deer, I had the skills and motivation to contribute to test automation, but no access. I was brought in as a consultant to help the existing team of developers and business analysts to create test automation.

The developers and business analysts had varying skills. There were a couple of developers who were the main contributors to the suites. The business analysts had access and were enthusiastic, but they didn’t know how to write or read code. Then there were a couple of developers who had the access and skills, but firmly believed that test automation was not their job; they’re the chickens on the left without motivation.

This team built a massive backlog of technical debt in their test automation because the developers who were the main contributors preferred to spend their time doing development. The test code was elegant, but the test coverage was sparse.

Team Three

Team Three

In this team everyone had access to the code except the project manager, but skills and motivation created division.

I ended up working on this test suite with one of the business analysts. He brought all the domain and business knowledge, helped to locate test data, and made sure that the suites had strong test coverage across all of the peculiar scenarios. I brought the coding skills to implement the test automation.

In this team I couldn’t get any of the developers interested in automation. Half had the skills, half didn’t, but none of them really wanted to dive in. The product owner and the other BA who had access to the code were not interested either. They would say that they trusted what the two of us in the middle were producing, so they felt that they didn’t need to be involved.

I believe that the automation we created was pretty good. We might have improved with the opportunity to do peer review within a discipline. The business analyst reviewed my work, and I reviewed his, but we didn’t have deep cross-domain understanding.

Team Four

Team Four

This was a small team where we had no test automation. We had some unit tests, but there wasn’t anything beyond that. This meant that we did a lot of repetitive testing that, in retrospect, seems a little silly.

I was working with two developers. We had a business analyst and a product owner, but no other management alongside us. The technical side of the delivery team all had access to the code base and the skills to write test automation, but we didn’t have time or motivation to do so. The business weren’t pushing for it as an option.

You may have recognised similarities to your own situation in these experiences. Take a moment to consider your current team. Where would you put your colleagues in a test automation farmyard?

Contributors to test automation

Next, think about how people participate in test automation depending on where they fall in this model. Originally I labelled the parts of the diagram as access, skills, and motivation:


If I switch these labels to roles they might become:


A person who only has access to test automation is an observer. They're probably a passive observer, as they don't have the skills or the motivation to be more involved.

A person who only has skills is a teacher. They don't have access or motivation, but they can contribute their knowledge.

A person who only has motivation is an advocate. They're a source of energy for everyone within the team.

Where these boundaries overlap:


A problem solver is someone with access and skills who is not motivated to get involved with test automation day-to-day. These people are great for helping to debug specific issues, reviewing pull requests, or asking specific questions about test coverage. Developers often sit in this role.

Coaches have skills and motivation, but no access. They’re an outside influence who can offer positive and hopefully useful guidance. If you consider a wider set of colleagues, you might treat a tester from another development team as a coach.

Inventors are those who have access and motivation, but no skills. These are the people who can see what is happening and get super excited about it, but don’t have the skills to directly participate. In my experience they’ll throw out ideas; some are crazy and some are genius. These people can be a source of innovation.

And in the middle are the committers. These are generally the people who keep the test suite going. They have access, skills, and motivation.

How people contribute to test automation

Changing roles

Now that you have labels for the way that your colleagues contribute to your test automation, consider whether people are in the "right" place.

I’m not advocating for everyone in your team to be in the middle of the model. Being a committer is not necessarily a goal state; there is value in having people in different roles. However, there might be specific people who you can shift within the model to create a big impact for your team.

Consider how people were contributing to test automation in the four teams that I shared earlier.

Team One

In team one, all the developers were teachers. They had skills, but nothing else. In retrospect, if I were choosing one thing to change here, we should have given at least one of the developers access to the code so that they could step into a problem solver role and provide more hands-on help.

Team Two

In team two, I found it frustrating to coach the team without being able to directly influence the code. In retrospect, I could have fought harder for access to the code base. I think that as a committer I would have had greater impact on the prioritisation of testing work and the depth of test coverage provided by automation.

Team Three

In team three, it would have been good to have a peer reviewer from the same discipline. Bringing in a developer to look at the implementation of tests, and/or another business analyst to look at business coverage of the tests, could have made our test automation more robust.

Team Four

In team four, we needed someone to advocate for automation. Without a manager, I think the product owner was the logical choice for this. If they’d been excited about test automation and created an environment where it had priority, I think it would have influenced the technical team members towards automation.

Think about your own team. Who would you like to move within this model? Why? What impact do you think that would have?

Scope of change

To influence, you first need to think about what specifically you are trying to change. Let's step back out to the underlying model of access, skills, and motivation. These three attributes are not binary. If you are trying to influence change for a person in one of these dimensions, then you need to understand what exactly you are targeting, and why.

Access

What does it mean to have access to code? Am I granted read-only permissions to a repository, or can I edit existing files, or even create new ones? Does access include having licenses to tools, along with permission to install and set up a development environment locally?

In some cases, perhaps access just means being able to see a report of test results in a continuous integration server like Jenkins. That level of access may be enough to involve a business analyst or a product owner in the scope of automated test coverage.

When considering access, ask:
  • What are your observers able to see?
  • What types of problems can your problem solvers respond to?
  • How does access limit the ideas of your inventors?

Skill

Skill is not just coding skill. Ash Winter has developed a wheel of testing which I think is a useful prompt for thinking more broadly about skill:

Wheel of Testing by Ash Winter

Coding is one skill that helps someone contribute to test automation. Skill also includes test design, the ability to retrieve different types of test data, creating a strategy for test automation, or even generating readable test reports.

How do the skills of your teachers, coaches, and problem solvers differ? Where do you have expertise, and where is it lacking? What training should your team seek?

Motivation

Motivation is not simply "I want test automation" or "I don’t want test automation". There's a spectrum: how much does a person want it? You might have a manager who advocates for 100% automated test coverage, or a developer who considers anything more than a single UI test to be a waste of time.

How invested are your advocates? Should they be pushing for more or backing off a little?

How engaged are your coaches and inventors?

Wider Perspective

The other thing to consider is who isn’t inside the fences at all. The examples that I shared above featured geese, dogs, horses, chickens, and deer. Who is not in this list? Are there other animals around your organisation who should be part of your test automation farmyard?

Test automation may be helpful for your operations team to understand the behaviour of a product prior to its release. If you develop executable specifications using BDD, or something similar, could they be shared as a user manual for call centre and support staff?

A wider perspective can also provide opportunities for new information to influence the design of your test automation. Operations and support staff may think of test scenarios that the development team did not consider.

Conclusion

Considering division helps us to feel empathy for others and to more consciously split ourselves in a way that is "right" for our team. Ask whether there are any problems with the test automation in your team. If there are no problems, do you see any opportunities?

Next, think about what you can do to change the situation. Raise awareness of the people around you who don't have access that would be useful to them. Support someone who is asking for training or time to contribute to automation. Ask or persuade a colleague to move themselves within the model.

Testers have a key skill required to be an agent of change: we ask questions daily.

How do your colleagues contribute to test automation? 

Who is involved in the design, development and maintenance of your test suites?

What would happen if people in your team changed how they participate in test automation?

How could you influence this change?

Sunday, 17 September 2017

How to start a Test Coach role

I received an email this morning that said, in part:

I've been given the opportunity to trial a test coaching approach in my current employer (6-7 teams of 4-5 devs). 

I had a meeting with the head of engineering and she wants me to act almost like a test consultant in that I'm hands off. She expects me to be able to create a system whereby I ask the teams a set of questions that uncover their core testing problems. She's also looking for key metrics that we can use to measure success.

My question is whether you have a set of questions or approach that allows teams to uncover their biggest testing problems? Can you suggest reading material or an approach?

On this prompt, here is my approach to starting out as a Test Coach.

Avoid Assessment

A test coach role is usually created by an organisation who are seeking to address a perceived problem. It may be that the testers are slower to respond to change, or that testers are less willing to engage in professional development, or that the delivery team does not include a tester and the test coach is there to introduce testing to other disciplines.

Generally, the role is created by a manager who sits beyond the delivery teams themselves. They have judged that there is something missing. I think it is a bad idea to start a test coach role with a survey of testing practices intended to quantify that judgement. You might represent a management solution to a particular problem that does not exist in the eyes of the team. 

Your first interaction will set the tone of future interactions. If you begin by asking people to complete a survey or checklist, you pitch your role as an outsider. Assessments are generally a way to claim power and hierarchy, neither of which will benefit a test coach. You want to work alongside the team as a support person, not over them.

Assessment can also be dangerous when you enter the role with assumptions of what your first actions will be. If you think you know where you need to start, it can be easy to interpret the results of an assessment so that it supports your own bias.

But if not by assessment, how do you begin?

Initiation Interviews

Talk to people. One-on-one. Give them an hour of your time and really listen to what they have to say. I try to run a standard set of questions for these conversations, to give them a bit of structure, but they are not an assessment. Example questions might include:

  • Whereabouts do you fit in the organisation and how long have you been working here?
  • Can you tell me a bit about what you do day-to-day?
  • What opportunities for improvement do you see?
  • What expectations do you have for the coaching role? How do you think I might help you?
  • What would you like to learn in the next 12 months?

I don't ask many questions in the hour that I spend with a person. Mostly I listen and take notes. I focus on staying present in the conversation, as my brain tends to wander. I try to keep an open mind to what I am hearing, and avoid judgement in what I ask.

In this conversation I consciously try to adopt a position of ignorance, even if I think that I might know what a person does, or what improvements they should be targeting, or even where they should focus their own development. I start this conversation with a blank slate. Some people have said "This is silly, you know what I do!", to which I say "Let's pretend that I don't". This approach almost always leads me to new insights.

This is obviously a lot slower than sending out a bulk survey and asking people to respond. However, it gives you the opportunity as a coach to do several important things. You demonstrate to the people that you'll be working with that you genuinely want their input and will take the time to properly understand it. You start individual working relationships by establishing an open and supportive dialogue. And you give yourself an opportunity to challenge your own assumptions about why you've been brought into the test coach role.

Then how do you figure out where to start?

Finding Focus

When my role changed earlier in the year, I completed 40 one-on-one sessions. This generated a lot of notes from a lot of conversations, and initially the information felt a little overwhelming. However, when I focused on the opportunities for improvement that people spoke about, I started to identify themes.

I grouped the one-on-one discussions by department, then created themed improvement backlogs for each area. Each theme included a list of anonymous quotes from the conversations that I had, which together gave a rounded picture of the opportunities for improvement that the team could see.

I shared these documents back with the teams so that they had visibility of how I was presenting their ideas, then went to have conversations with the management of each area to prioritise the work that had been raised.

What I like about this approach is that I didn't have to uncover the problem myself. There was no detective work. I simply asked the team what the problems were, but instead of framing it negatively I framed it positively. What opportunities for improvement do you see? Generally people are aware of what could be changed, even when they lack the time or skills to drive that change themselves.

Then asking for management to prioritise the work allows them to influence direction, but without an open remit. Instead of asking "What should I do?", I like to ask "What should I do first?".

Measuring Success

The final part of the question I received this morning was about determining success of the test coach role. As with most measures of complex systems, this can be difficult.

I approach this by flipping the premise. I don't want to measure success, I want to celebrate it.

If you see something improve, I think part of the test coach role is to make sure that the people who worked towards that improvement are being recognised. If an individual steps beyond their comfort zone, call it out. If a team have collectively adopted and embedded a new practice, acknowledge it.

Make success visible.

I believe that people want to measure success when they are unable to see the impact of an initiative. As a test coach, your success is in the success of others. Take time to reflect on where these successes are happening and celebrate them appropriately.


That's my approach to starting in a test coach role. Avoiding assessment activities. Interviewing individuals to understand their ideas. Finding focus in their responses, with prioritisation from management. Celebrating success as we work on improvements together.

Thursday, 24 August 2017

The pricing model for my book

I have encountered mixed opinion about whether I should continue to offer my book for free, along with curiosity as to whether I am earning any money by doing so. In this post I share some information about purchases of my book, rationale for my current pricing model, and my argument for why you should invest in a copy.

What do people pay?

At the time of writing this blog, my book has 1,719 readers.

On average, 1 in 5 people have chosen to pay for my book. Of those who pay, 53% pay the recommended price, 10% pay more, and 37% pay less.

Based on a rough estimate of the number of hours that I spent writing and revising the book, I calculate that my current revenue gives an income of $8 an hour.

I incurred minimal cost in the writing process: the LeanPub subscription, gifts for my reviewers and graphic designer, and a celebratory dinner after I published. If I subtract these expenses, my income drops to $5 an hour.

The minimum wage in New Zealand is $15.75 per hour. This means that, based on current sales, I would have been better paid by working in a fast food outlet than by writing my book. I don't think that's a particularly surprising outcome for a creative pursuit.

Why offer it for free?

I wrote the book to collate ideas and experiences about testing in DevOps into a single place. I wanted to encourage people to develop their understanding of an emerging area of our industry. My primary motivation was to share information and to do that I think it needs to be accessible.

I have been told that offering something for free creates a perception that it is not valuable. That people are likely to download the book, but not to read it, as they haven't invested in the purchase. That people won't trust the quality of a book that a writer is willing to offer for nothing.

I weigh these arguments against someone being unable to access the information because they cannot afford it, or someone who is unwilling to enter their payment details online, or someone using cost as an excuse to avoid learning. These reasons make me continue to offer free download as an option.

I recognise that being able to set a minimum price of zero is a privilege. If I were writing for a living, then this would not be an option available to me. I do worry that my actions impact those writers who would be unable to do the same.

Why pay?

If my book is available for free, then why should you pay for a copy?

I believe that I have written a useful resource, but every writer is likely to feel that way about their book! Rather than just taking my word for it, I have collected reader testimonials from around the world including the USA, Netherlands, UK, India, Pakistan, Ecuador, and New Zealand.

The testimonials endorse the practical content, industry examples, and the breadth of my research in my book. They compliment my writing style as being clear and easy-to-read. They state that there is a wide audience for the book - anyone who holds a role in a software development team.

Alongside their opinions, you can preview a sample chapter and the table of contents prior to purchase. I hope that all of this information creates a persuasive case for the value contained within the book itself.

If you believe that the book will be useful to you, then I believe that the recommended price is fair.

A Practical Guide to Testing in DevOps is now available on LeanPub.

Tuesday, 15 August 2017

Encouraging testers to share testing

In theory, agile development teams will work together with a cross-functional, collaborative approach to allow work to flow smoothly. In practice, I see many teams with a delivery output that is limited by the amount of work that their testers can get through.

If testers are the only people who test, they can throttle their team. This can happen because the developers and business people who are part of the team are unwilling to perform testing activities. It can also be the tester who is unwilling to allow other people to be involved in their work. I have experienced both.

There is a lot of material to support non-testers in test activities. I feel that there is less to support the tester in feeling happy to allow others to help them. I'd like to explore three things that could prevent a tester from engaging other people to help with test activities.

Trust

Do you trust your team? I hope that most of you will work in a team where your instinct is to answer that question with 'yes'. What if I were to ask, do you trust your team to test?

In the past, I have been reluctant to hand testing activities to my colleagues for fear that they will miss something that I would have caught. I worried that they would let bugs escape. I trusted them in general, but not specifically with testing.

On reflection, I think my doubt centered on their mindset.

In exploratory testing, often a non-tester will take a confirmatory approach. This means that if the product behaves correctly against the requirements, it passes testing. It is easy for anyone, regardless of their testing experience, to fall into a habit of checking off criteria rather than interrogating the product for the ways in which it might fail.

In test automation, it is usually the developer who has the skill to assist. My observation is that developers will generally write elegant test automation. They can also fall into the trap of approaching the task from a confirmatory perspective. Automation often offers an opportunity to quickly try a variety of inputs, but developers don't always look at it from this perspective.

If you share these doubts, or have others that prevent you from trusting your team to test, how could you change your approach to allow others to help you?

One thing that I have done, and seen others do, is to have short handovers at either end of a testing task. If a non-tester is going to pick up a test activity, they first spend ten minutes with the tester to understand the test plan and scope. When the non-tester feels that they have finished testing, they spend a final ten minutes with the tester to share their coverage and results.

These short handovers can start to build trust as they pass the testing mindset to other people in the team. As time passes, the tester should find that their input in these exchanges decreases to a point where the handovers become almost unnecessary.

Identity

If your role in the team is a tester, this identity can be tightly coupled to test activities. What will you do if you don't test? If other people can test, then why are you there at all?

I particularly struggled with this as a consultant. I would be placed into an agile development team as a tester, but often the most value that I could deliver would be in encouraging other people to test. This felt a bit like cheating the system by getting other people to do my role. I believe that a lot of people write about the evolving tester role because of this dissonance.

The clearest way that I have to challenge the tester identity in a constructive way is the concept of elastic role boundaries that I co-developed with Chris Priest. This concept highlights the difference between tasks and enduring commitments. We can be flexible in taking ownership of small activities while still retaining a specialist identity in a particular area of the team.

In simpler terms, a colleague helping with a single testing activity does not make them a tester. I do not see a threat to our role in sharing test activities. I believe that a tester retains their identity and standing in the team as a specialist even when they share testing work.

Vulnerability

The third barrier that I see is that testers are unwilling to ask for help. This isn't an attribute that is unique to testers; many people are unwilling to ask for help. In an agile team, failing to pull your colleagues in to help with testing may limit your ability to deliver.

Even when you think that it is clear that you need help, don't assume that your colleagues understand when you are under pressure. In my experience, people are often blissfully unaware of the workload of others in the team. Even when the day begins with a stand-up meeting, it can be difficult to determine how much work sits behind each task that a person has in progress. Make it clear that you would welcome assistance.

You may be reluctant to ask for help because you see it as a sign of weakness. It might feel like everyone else in the team can maintain a certain pace while you are always the person who needs a hand. In my experience, asking for help is rarely perceived as weakness by others. More often, I have seen teams respond with praise for bravery, eagerness to alleviate stress, and relief that they have been given permission to help.

It can also be difficult to share testing when you work in a wider structure alongside other agile teams that have the same composition as your team. You might believe that your team is the only one where the testers cannot handle testing themselves. In this situation, remember the ratio myth. There are a lot of variables in determining the ratio of testers to the rest of the team. Sometimes a little bit of development can spin off a lot of testing, or vice versa. Test your assumptions about how other teams are operating.

If you are vulnerable, you encourage others to behave in the same way. A tester sharing testing can encourage others to seek the help that they need too. If you're unwilling to ask someone for help directly, start by making it clear to your team what your workload is and extend a general invitation for people to volunteer to assist.

***

If you work in an agile development team and feel reluctant to share test activities with your colleagues, you might be creating unnecessary pressure on both yourself and your team by limiting the pace of delivery. I'd encourage you to reflect on what is preventing you from inviting assistance.

If you doubt that your colleagues will perform testing to your standard, try a handover. If you worry about the necessity of your role if you share testing, perhaps elastic role boundaries will explain how specialists can share day-to-day tasks and retain their own discipline. If you are reluctant to ask for help directly, start by making your workload clear so that your team have the opportunity to offer.

I encourage you to reflect on these themes and how they influence the way that you work, to get more testers to share testing.