Ministry of Testing has just published the next in a series I’m writing on the subject of testing and management/leadership. This one is on managing the relationship between tester and line manager.
Posts by Ben Kelly
I recently came across this article, Evolving to Continuous Testing. Have a quick read and come on back.
I found myself initially nodding along with the author’s sentiments, but that didn’t last too long, I’m afraid. It is fair to say that software delivery will be as fast as your slowest component, and often that component will be testing if you hold a very narrow view of the term. If you view testing as a phase in a process, then the only bit you see is the part that tends to occur just before release. That view doesn’t take into account the testing that occurs during things like requirements/epic review, story grooming, pairing with programmers during implementation and so on.
The term ‘continuous testing’ seems to be an odd way of saying ‘sufficiently advanced monitoring’. I wonder if I’d have been so critical if the author had written about evolving monitoring. My first real reaction was to the concept of automating risk measurement. There are many different kinds of risk. I wonder how you automate measurement of things like potential harm to brand image for example. That’s not to say machine evaluation of known rules around known risk might not be useful, but that should be used to assist/supplement rather than to replace human evaluation.
The author talks of testing (manual and automated) being focused on passing or failing. Passing or failing is a very small part of what testing focuses on – the ‘known knowns’. I appreciate that the author is saying that there is more that can be done to point to potentially poor code quality, but at this point I’m on the alert to see what else he doesn’t understand about testing.
The ‘defect documentation to simulated replay’ section is one potential way of solving the issue of bug ping pong. Being able to set up a containerised environment with the required test pre-requisites does sound like a nifty thing to have up your sleeve, but it ignores the fact that bug ping-pong points to a systemic problem of relationships between roles. Testers and programmers should be working closely together. Preferably close enough that a tester can turn around and ask a developer if what they’ve just found is significant enough to fix immediately. Ticket ping-pong is an issue that crops up due to the tyranny of distance/time and to some extent of tribalism.
The author’s point about structured and unstructured data is about as close as he comes to actually understanding testing: gathering data from a host of unstructured sources and providing it to the audience that matters in a way that they can immediately digest.
The last couple of points are around speed of release and my initial reaction was to roll my eyes at those too. The old school tester in me says that you always want some sort of human intervention when rolling code to production. Someone to provide that sanity check to make sure that things really are as we need them to be. There are a lot of interesting things going on in the industry though. Take a look at Shopify, for example, with multiple developers releasing code to production multiple times per day. In many ways this seems antithetical to good testing, but they’re doing it successfully and have been for quite some time. It feels like there’s an opportunity to dig in and learn there.
Getting back to the article, the author is exhorting us to make use of monitoring to improve code quality and to assist with release decisions. That’s not a bad thing in and of itself. It would have been far more palatable if it hadn’t been hung on an emaciated understanding of testing.
I’ve been doing quite a lot of writing lately, just not much here 🙂
I do have a few things to share though and the first of those is a new post on the Ministry of Testing website – How I stopped worrying and learned to love management. Enjoy.
At the 2015 Let’s Test conference in Stockholm, I took the stage at the opening of the conference to make a short, but significant statement. As a representative of eBay, I announced that we will not adopt ISO29119 and reject the notion that software testing can be standardised.
The announcement seems to have been warmly received. As I stated on stage, this statement stands to show organisations they have options. Just because someone claims something to be ‘a standard’, it does not follow that it is useful (or worse, not harmful) or that any notice should be taken of it. There are many existing criticisms of ISO29119 and I’ll not rehash them here. Suffice it to say that this should give pause to anyone who might have thought that this standard was in their interests.
I attended a practice-run of a workshop by Jan Eumann and Phil Quinn at Let’s Test this May. If you’re going to the conference, you should check this workshop out. If you’re not going, you should buy a ticket now and go check this workshop out. It neatly captures many of the dynamics that occur when testers and programmers pair. It gave me the opportunity to reflect on some of the things I’ve learned during my time as a tester embedded in a team of programmers, especially around pairing.
I won’t go into all of the details of this workshop, but for a significant portion of it, participants spend time pairing on a program that requires both fixing and further development. Ostensibly, there should be one programmer and one tester. For the workshop, I got to take on the programmer role. Normally when pairing I’m very much a tester, so this was an eye-opening experience for me, more so than I was expecting. I got to see how things look when I’m doing more driving than observing/questioning. Programmer/Tester pairing is a bit different from 2xProgrammer pairing. In the latter there tends to be a fair amount of taking it in turns to drive and navigate. For a Programmer/Tester pairing, how much driving vs. navigating there is depends on things like how comfortable the tester is writing code and how complex the solution is (more complex solutions seem to correlate with the need to spend more time thinking deeply about test design and analysing possible failure modes, in my experience). As a programmer paired with a tester I saw for the first time how difficult it can be to facilitate the inclusion of a tester. It has given me a new appreciation for the skills of my fellow programmers at eBay. Here are a few of the things I noticed or was reminded of:
Remember to narrate as you code.
What are you thinking? Are you hunting for a file? What’s the test you’re writing now? Why that test? As I was coding, I was often silent. I knew what I was trying to do, but since the code was unfamiliar, I was spending a lot of time hunting. What I discovered was that my partner was feeling a bit useless because he felt he couldn’t contribute. As soon as he told me this, I started describing what I was trying to do and he was immediately able to start pointing me to sections of the code that he had fresh in his mind. One change required we refactor things in four different files. He reminded me of a couple of steps I’d missed on several occasions as well as noting a few typos that I completely missed. When you narrate your thoughts as you’re writing, you clarify what you’re doing not only for your partner, but often for yourself. Where you find you are hesitant, you might need to throw an idea around a bit more. You also give your partner the opportunity to make suggestions and ask questions.
As a tester, be sure to ask questions. It can be hard to ask questions that you think are dumb – especially when starting out. When I first started pairing as a tester, I felt reluctant to speak up because I didn’t want the programmer to feel like I was telling them how to do their job. I also didn’t want them to think I was stupid. I’ve not had any of the programmers I’ve worked with get defensive or treat me like an idiot. In fact, many things that I thought were stupid questions led to a discussion where we decided to use a different strategy than the one the programmer initially chose.
Thinking about solving a problem (programming) and thinking about how it might fail (testing) really are quite different
– even for people who are familiar with doing both. As my good friend Ilari Henrik Aegerter is fond of saying, it’s the difference between a finite solution space and an infinite problem space. The meeting of these two ways of thinking is why I think pairing testers and programmers can be so powerful. As I finished bits of functionality, my partner did some further testing and often found things I’d missed. I consider myself a fairly experienced tester and yet I rapidly fell into a pattern of wanting to get something written so that it works, and even though I thought I was considering edge cases and use patterns, I was overconfident in my ability to handle problems.
TDD is more about driving design than it is about testing, but it can help to facilitate a testing mindset. When you’re writing tests to drive your code, you’re also laying out a structure to your thinking. I find that gives you something visual to brainstorm with. Other test ideas naturally seem to crop up. The middle of the coding flow might not be the precise place to write them or follow them up, but you can certainly note them down to come back to them later.
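As a sketch of what this looks like in practice (a hypothetical example of my own, using Python’s built-in unittest rather than anything from the workshop), the test you write first lays out the structure of your thinking, and the extra test ideas that crop up mid-flow can be parked as skipped tests to come back to later:

```python
import unittest

# Hypothetical example: the first test below was written before
# discount() existed, driving its design. Further test ideas that
# occurred mid-flow are parked with skip markers rather than
# interrupting the coding flow.

def discount(price, percent):
    """Minimal implementation written to make the first test pass."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discount(100.0, 10), 90.0)

    @unittest.skip("test idea noted mid-flow: come back to this later")
    def test_negative_percent(self):
        ...

    @unittest.skip("test idea noted mid-flow: what about percent > 100?")
    def test_percent_over_one_hundred(self):
        ...
```

The skipped tests act as that visual brainstorm structure: they sit in plain view without demanding attention right now.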
For the workshop, I didn’t use TDD for the changes I made and not doing this made describing what I was doing more difficult. I think if I’d tried writing a test (check) first, then it would have been a lot simpler for my testing partner to know exactly what I was trying to achieve and to offer input and do further testing. If I’d written tests up front, I might have noticed other things that needed to be tested as well.
One of the things we did do was to talk about what to do next. It put me in mind of something else I see effective pairs doing.
Spend a little bit of time up-front to define success.
Discuss what you want to achieve before you dive into the code. Are you refactoring existing code? What do you want to achieve by refactoring? Are you exploring, looking for bugs to fix? Maybe a charter for specific kinds of issue will help you decide on what to fix now and what to note down for later. Maybe you’re writing new code. What do you want to get done by the end of the session? Do you have all the resources you need? Any mocks or stubs that you need to build? Any integrations you need to do? Is the task defined well enough to make progress?
Effective, productive pairing takes practice. If you know your own role in a pair, you can be effective. If you have experience from both sides of the pair, then I think that has the potential for a very powerful partnership. I work closely with brilliant programmers every day. I see my role as a tester often as facilitating a testing mindset in my programmer peers. It wasn’t until I actually had to step into the shoes of one that I realised that there’s a lot of facilitation on the programmer’s part as well to make sure that a tester has a detail-rich environment in which to work.
I found this workshop a humbling experience. I want to thank Jan and Phil for the opportunity.
There was to be a debate at Foo Café in Malmö in early November about the ISO29119 software testing standard. It was to feature Karen N. Johnson and myself debating Stuart Reid and Anne Mette Hass on whether the current volumes should be retracted and the upcoming volumes suspended. At the time I was invited to participate, my understanding was that all participants had agreed to the debate. The only thing left to sort out was the format and the moderator.
Sadly, the debate will no longer go ahead. Mr. Reid and Ms. Hass have pulled out. I am obviously disappointed. I was looking forward to finally having some of the questions and concerns raised by my testing colleagues addressed by people from the working group. To my knowledge that has not happened, with the exception of a response from Mr. Reid himself. His response inadequately addresses a small number of concerns raised by those opposed to the standard and misrepresents a number of others. If anything, my experience with this debate, in which Mr. Reid and Ms. Hass agreed to participate and then subsequently backflipped, has raised even more questions that need to be addressed.
I understand that, having poured so much time and effort into the standard, it must be difficult to hear people criticise it so strongly, but I am curious as to why no one from the working group seems to want to publicly defend their work. Neither Karen nor myself (nor indeed any of the other testers I’m proud to associate myself with) have any interest whatsoever in personally disparaging Mr. Reid, Ms. Hass or anyone else in the working group. The standard itself is what we take issue with. The issues are what we want the opportunity to discuss further. I do hope an opportunity will arise for Mr. Reid and co. to address the very real concerns my colleagues and I have raised about the standard.
I suppose most people will have filled their conference dance card by now, but in case you haven’t, here are some upcoming conferences that I’ll be presenting at:
Let’s Test Oz
Tasting Let’s Test – South Africa
Johannesburg, South Africa
All of these conferences have presentations from world class speakers (somehow they let me in too), so they’re well worth attending if you can.
I hope to see you there.
To the publisher(s) of the blog post entitled ‘Book burners threaten (old) new testing standard’ on professionaltester.com on August 20, 2014:
(I have attached an image of said blog’s text in case it should change or be removed in future)
At CAST2014, a number of like-minded professional testers got together after a very insightful presentation by James Christie on the subject of the proposed ISO 29119 standard. Out of this meeting of minds, two things emerged. One was a manifesto drafted by Karen N. Johnson about our beliefs as professional testers (http://www.professionaltestersmanifesto.org/). The other was a petition initiated by the International Society for Software Testing (ISST) to demonstrate a lack of consensus by professional software testers to the proposed standard ISO 29119 (http://www.ipetitions.com/petition/stop29119).
The petition exists to show that there are a significant number of software testing professionals who have significant, reasoned and substantial objections to the publication and subsequent adoption of the ISO29119 standard and therefore there is no consensus in the software testing industry that this standard is valid.
ISO’s own guidelines define consensus as:
ISO/IEC Guide 2:2004, definition 1.7
“General agreement characterised by the absence of sustained opposition to substantial issues by any important part of the concerned interests and by a process that involves seeking to take into account the views of all parties concerned and to reconcile any conflicting arguments.
NOTE Consensus need not imply unanimity.”
Over the past week, signatories of this petition and other concerned parties have been circulating the petition and encouraging others to sign it. As of writing, it has upward of 250 signatories.
Your blog seems to be a fairly low-brow effort to understand and respond to the concerns raised by the petition. I see few redeeming qualities upon closer reading. It denounces this petition in what I can only describe as insultingly inflammatory fashion. You appear to be using a number of fallacies to support your attempt at an argument. Let’s go through them.
You begin with a fairly vague appeal “Testers have been waiting many years for ISO29119”. I wonder, which testers specifically are you referring to? Aside from consultants waiting to sell services based on ISO certification and anyone related to the drafting of these documents, who exactly is clamouring for the publication of these documents?
Next up – guilt by association.
You are calling the ISST and the signatories of this petition ‘book burners’. There have been a number of groups known throughout history for burning books and one would be hard pressed not to think first of the Nazis. To the best of my knowledge, neither the ISST, nor any signatories have actually burned any books (Actually, members of ISST read a lot of books and quite a few also write them). If your intention was indeed to draw parallels between the two groups, then I find this repugnant and highly unprofessional. If your intent was otherwise, then by all means, please leave a comment here (my blog unlike yours is open for discussion) and enlighten me.
Since you bring up the subject of books, let’s take a quick look, shall we? The published volumes of the ISO29119 standard have bibliographies that refer predominantly to other ISO/IEEE publications. As far as I can see there are three publications referred to that are external sources, and one of those is a publication of ISTQB.
Here’s a small fraction of a list that I think could have been referred to or at least recommended as further reading:
Perfect software and other illusions about testing – Gerald M Weinberg
Adrenaline junkies and template zombies – DeMarco, Hruschka, Lister et al
Mistakes were made (but not by me): Why we justify foolish beliefs, bad decisions and hurtful acts – Carol Tavris
Introducing ethics – Dave Robinson
You are not so smart – David McRaney
Why software gets in trouble – Gerald M Weinberg
Antifragile: Things that gain from disorder – Nassim Nicholas Taleb
Lessons learned in software testing – Bach, Kaner, Pettichord
Bad software: What to do when software fails – Cem Kaner
Seeing like a state: How certain schemes to improve the human condition have failed – James C Scott
Tacit and explicit knowledge – Harry Collins
Leprechauns of software engineering – Laurent Bossavit
The structure of magic Volume 1 & 2 – Bandler, Grinder
Lateral thinking: Creative thinking step by step – Edward De Bono
Secrets of consulting – Gerald M Weinberg
An introduction to general systems thinking – Gerald M Weinberg
Becoming a technical leader – Gerald M Weinberg
The psychology of computer programming – Gerald M Weinberg
Kuhn vs. Popper: The struggle for the soul of science – Steve Fuller
Please understand me (2) – David Keirsey
Frogs into princes – Bandler, Grinder
Sherlock Holmes – the complete novels and stories – Sir Arthur Conan Doyle
You get the idea. There is a good deal more out there that software testers should familiarise themselves with. I’ve left out tomes that refer to specific technologies. They are easily found and I leave them as an exercise for the reader.
Returning to your blog post – you falsely assert that our issue with the standard is that
not everyone will agree with what the standard says.
This is at best a gross oversimplification. The text of the petition does not explicitly state what specific disagreements and opposition the signatories have, it simply states that such opposition exists and must be considered. The specifics are not difficult to find. There are a number of other professional testers who have written well-reasoned arguments about their opposition to software testing standards and that number is growing.
You go on to build the following strawman argument
…they don’t want there to be any standards at all. Effective, generic, documented systematic testing processes and methods impact their ability to depict testing as a mystic art and themselves as its gurus
Let’s look at the word ‘effective’ – Effective for what? One might assume for the orderly execution of software testing, but I would hate to put words in your mouth, so please, once again enlighten me as to what specifically you mean by effective and do please back this up with proof that this standard actually achieves this.
As for the rest of the sentence, what has the ISST or any other signatory of the petition said or done that leads you to believe that they gain from depicting testing as ‘a mystic art and themselves as its gurus’? I challenge you to prove this statement or withdraw it and make an apology.
Furthermore, I challenge you to publish your real name next to your blog post and stand behind it and defend it as best you are able – or, retract it and post an apology with your real name attached.
Professional Software Tester
Founding member of the International Society for Software Testing
Hang on a sec, didn’t I just get done saying testing is an activity and not a role? I did say that, didn’t I? Did I mean it? Well, it’s true in the same sense that Darth Vader killing Anakin Skywalker is true. As Obi-Wan said – ‘from a certain point of view’ – namely how we as testers pitch our role to non-testers.
How we position software testing to non-software testers is important. I have a strong sense that currently we explain the role of software testing to non-testers very much in terms of what testers do and others don’t (or can’t, or won’t). As testers we bring skills and experience that are different to those of a programmer, or UX or product management and so on and I think it’s important that the value of these skills be recognised. I think though, to say that only skilled testers can/should be responsible for exercising these skills is a bridge too far. I want both testers and programmers to think more fluidly in terms of what their role and responsibilities are. The short version is – mostly because I think the ‘that’s not my job’ mindset is super unhelpful to all involved in software development.
In the comments of my last post James Bach said ‘I think the role of testing is a very useful heuristic’. I agree. It is. I didn’t state that explicitly in my last post and really I should have. It’s a realisation that I have only come to recently and the realisation shocked me. I identified so strongly with the role of a tester that relaxing my grip on ‘tester’ as an identity was incredibly confronting.
It’s not that the role of testing as a concept is not useful, but like any other heuristic, it is fallible. If one is careless in describing the responsibilities and characteristics of testing in terms of what testing is and what other roles are not, it can help to reinforce stereotypes that are not useful. By way of example, here are a few beliefs that I’ve heard from testers about why programmers can’t test, that I think are unhelpful.
‘Programmers shouldn’t test their own code’
I think programmers should not be the only ones to test their own code if quality is at stake. What we think we’ve written is often not what we’ve actually written. Talk to a programmer about reading code they wrote more than a month ago and they’ll often say ‘I wonder wtf I was thinking’. If you talk to a screenwriter, or any other kind of writer really, they’ll often say the same thing. At the time of writing, we often lack the perspective to be effectively critical of what we’ve written. With all that said, if any programmer is writing anything that matters, they absolutely should be testing their own code.
‘Programmers and testers think too differently for either one to be good at each other’s job’
While I believe it’s true that the focus of a tester and that of a programmer are very different, that doesn’t mean we cannot have a good fundamental understanding of each other’s work. I would go so far as to say that if testers and programmers don’t have a good understanding of the fundamentals of each other’s craft, then they are almost certainly going to be less effective than someone who does have that knowledge. Just as a tester benefits from knowing how to code (knowing the basics of the technology stack the programmers are working with, and understanding the patterns they’re using and their advantages and disadvantages, helps in spotting possible problems), so too should coders have an understanding of testing fundamentals, not just whatever automated testing they’re doing. You should be able to talk to them about oracles, test heuristics, the various ‘ilities’ and risk without them wondering what the hell you’re on about.
‘Programmers are too tightly focused on what they’re building to see the bigger picture’
Which seems to be saying ‘programmers don’t know how to defocus and wouldn’t see the value of doing so if they did’. Like other testing skills, focusing and defocusing are learned skills and can be honed with practice. Full stack developers have practice doing this because they need to understand the different technologies they’re working with and how they interact, their various gotchas and pitfalls. It is a skill that can be learned and there is benefit for programmers to know how to do it.
There are lots of reasons out there for why programmers are bad at testing. Testers reinforce that mindset every time they trot these little truisms out. It doesn’t have to be that way. Rather than looking at the tester role as something that is altogether separate from a programmer role, consider how the two roles can interact.
The advent of test driven development in its various flavours has helped blur the lines between the roles. TDD is generally used as a way to drive design and thereafter support programmers as they maintain and change code. Programmers write failing tests and then use the support of their IDE to fill in the code to make that test work. They build small pieces one at a time, each supported by tests that exercise what was just written. If a test is difficult to write, it points to a possible problem in the intended implementation. The initial focus of the tests is to help the programmer implement code that is elegant and maintainable. The fact that it may also cover things we’re interested in from a higher level is a bonus. It’s not exactly testing in the way a tester might consider testing, but there is definitely a relationship there.
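That write-a-failing-test-then-fill-in-the-code rhythm can be sketched roughly like this (a hypothetical example of my own, in Python with unittest; the class and test names are invented for illustration):

```python
import unittest

# Hypothetical red/green sketch: each small piece of behaviour gets a
# failing test first, then just enough code is written to make it pass.

class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

class CartTests(unittest.TestCase):
    # Step 1: this test was written first and failed (ShoppingCart
    # didn't exist yet); the class above was filled in to pass it.
    def test_empty_cart_totals_zero(self):
        self.assertEqual(ShoppingCart().total(), 0)

    # Step 2: the next small piece of behaviour, again test-first.
    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add("book", 12)
        cart.add("pen", 3)
        self.assertEqual(cart.total(), 15)
```

If, say, `total()` had turned out to be awkward to test in isolation, that friction itself would have been the signal of a possible design problem, which is the sense in which TDD is driving design rather than testing in the tester’s sense.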
Automated acceptance testing seems to sit more squarely between the roles. Where unit testing is code-supporting, or tech facing (if you want to go to Brian Marick’s Agile Quadrants model), acceptance tests can potentially have aspects of both code supporting and product supporting tests (tech facing & business facing).
Good programmers write tests before they write code. Great programmers critically question the requirements they’re given before they start building and keep the big picture in mind as they code. In an agile context, well written user stories will help them to do that as the story itself describes the big picture, or is part of an epic that does. Great programmers who pair will often spot and correct issues in the code they write and they’ll use the conversations they have while working to highlight possible remaining problems. If necessary, they’ll ask for specialist help (ie a tester).
In my current team, there is a strong sense of shared ownership of what we build. The programmers I work with are highly motivated to get testing right, because if we put out a substandard product, we are all responsible. We succeed or fail as a unit based on our ability to deliver value to our stakeholders. We’re a pretty new unit, relatively untried. We have a couple of wins on the board, but the quality of the work we put out reflects on us as individuals, as a team and on the department we’re a part of (not to mention the company as a whole). That’s a fair amount of responsibility. When things don’t go to plan, as will inevitably occur, we don’t waste time and energy in finger pointing. By the same token, if someone screws up, they’re the first to put their hand up for it. We fix what we need to fix, work out what we can improve and crack on. We succeed or fail as a unit. We own it. That’s just the way it is and it’s pretty awesome, I have to say.
Is it perfect? Hell no. There’s lots I want to improve, but at the basic level is that shared belief of joint responsibility and that is something that I believe is lacking from most tester/programmer relationships. That’s a damn shame and I want that to change.
Why aren’t more teams out there like this? My hypothesis is twofold.
1. There are a lot of people out there that call themselves testers who are really, really crap at software testing. Unfortunately, most programmers have only encountered this type of ‘tester’.
2. There are several different flavours of the sentiment that ‘programmers can’t test because…reasons’. Programming and Testing are different skills. How you focus your thinking for each of these skills is different, but to say that a programmer can’t test is a fucking cop out and lets them off the hook for work they should be doing.
I think it is a reasonable expectation to hold that developers take some interest in improving at testing if their current abilities are close to nil. Having attained some level of competence in testing fundamentals, I also think it reasonable that they are able to improve further should they so choose.
I also think that programmers are unlikely to spend enough time practising or improving testing if we take that expectation away by saying things like ‘developers are crap at testing because they’re developers’. I’m not expecting that they’re as proficient as I am, but I do expect a significantly higher standard than ‘I wrote a few unit tests and the code does what it should’. I want to be able to chat freely with programmers about what oracles they used to test against, how they approached testing the code they’ve written and what they think still needs attention. That’s not an unreasonable expectation to have of a programmer who values their craft and shares responsibility with you the tester for delivering value.
Is that lazy? Am I expecting someone else to be doing my work for me? No. Not at all. A programmer who has a solid understanding of testing fundamentals will deliver higher quality code so that when I do get ahold of it, I have a challenge on my hands. The obvious holes have been thought of and plugged already. As a tester, I get to do what I do best – exercise my tester skills to find those issues that are both difficult to spot and a significant risk to delivering value.
The roles of programmer and tester contain significant overlap in terms of thinking, skills and activities. It makes sense to me that the duties of each likewise overlap. Knowledge of one does not and should not preclude understanding of the other. The better we understand how each other works, the better we can help each other do better work. It takes effort. You’ll have to do stuff that makes you uncomfortable or feel dumb. The programmers you work with may resist taking on the responsibilities of testing. You might have to have difficult conversations, maybe repeatedly. What works well in one team may not work well in another.
By sharing the work we do, by working closely with our non-testing peers, helping them understand the work we do and educating ourselves about their work, I believe we will better demonstrate the value of the tester’s skill set and better set expectations of what testing is, whether it be a skill set embodied in a specialist role, a set of activities that a team undertakes, or some combination of both.