Hang on a sec, didn’t I just get done saying testing is an activity and not a role? I did say that, didn’t I? Did I mean it? Well, it’s true in the same sense that Darth Vader killing Anakin Skywalker is true. As Obi-Wan said, ‘from a certain point of view’ – namely, how we as testers pitch our role to non-testers.

How we position software testing to non-testers is important. I have a strong sense that we currently explain the role of software testing to non-testers very much in terms of what testers do and others don’t (or can’t, or won’t). As testers we bring skills and experience that are different from those of a programmer, a UX designer or a product manager, and I think it’s important that the value of those skills be recognised. To say, though, that only skilled testers can or should be responsible for exercising them is a bridge too far. I want both testers and programmers to think more fluidly about what their roles and responsibilities are – mostly because I think the ‘that’s not my job’ mindset is super unhelpful to all involved in software development.

In the comments of my last post, James Bach said ‘I think the role of testing is a very useful heuristic’. I agree. It is. I didn’t state that explicitly in my last post and really I should have. It’s a realisation I’ve come to only recently, and it shocked me. I identified so strongly with the role of a tester that relaxing my grip on ‘tester’ as an identity was incredibly confronting.

It’s not that the role of testing is a useless concept, but like any other heuristic, it is fallible. If we are careless in describing testing in terms of what testers are and what other roles are not, we reinforce stereotypes that serve nobody. By way of example, here are a few beliefs I’ve heard from testers about why programmers can’t test, all of which I think are unhelpful.

‘Programmers shouldn’t test their own code’

I think programmers should not be the only ones to test their own code if quality is at stake. What we think we’ve written is often not what we’ve actually written. Talk to a programmer about reading code they wrote more than a month ago and they’ll often say ‘I wonder wtf I was thinking’. If you talk to a screenwriter, or any other kind of writer really, they’ll often say the same thing. At the time of writing, we often lack the perspective to be effectively critical of what we’ve written. With all that said, if any programmer is writing anything that matters, they absolutely should be testing their own code.

‘Programmers and testers think too differently for either to be good at the other’s job’

While I believe it’s true that the focus of a tester and that of a programmer are very different, that doesn’t mean we cannot have a good fundamental understanding of each other’s work. I would go so far as to say that if testers and programmers don’t understand the fundamentals of each other’s craft, they are almost certainly going to be less effective than someone who does. Just as it helps a tester to know how to code – to know the basics of the technology stack the programmers are working with, and to understand the patterns they’re using along with their advantages and disadvantages, all of which helps in spotting possible problems – so too should coders understand testing fundamentals, beyond whatever automated testing they’re already doing. You should be able to talk to them about oracles, test heuristics, the various ‘ilities’ and risk without them wondering what the hell you’re on about.

‘Programmers are too tightly focused on what they’re building to see the bigger picture’

This seems to be saying ‘programmers don’t know how to defocus and wouldn’t see the value of doing so if they did’. Like other testing skills, focusing and defocusing are learned skills and can be honed with practice. Full stack developers do this all the time because they need to understand the different technologies they’re working with and how they interact, along with their various gotchas and pitfalls. It is a skill that can be learned, and there is real benefit in programmers learning it.

There are lots of reasons offered for why programmers are bad at testing, and testers reinforce that mindset every time they trot one of these little truisms out. It doesn’t have to be that way. Rather than looking at the tester role as something altogether separate from the programmer role, consider how the two can interact.

The advent of test driven development in its various flavours has helped blur the lines between the roles. TDD is generally used as a way to drive design and thereafter support programmers as they maintain and change code. Programmers write a failing test and then, with the support of their IDE, fill in the code to make that test pass. They build small pieces one at a time, each supported by tests that exercise what was just written. If a test is difficult to write, that points to a possible problem in the intended implementation. The initial focus of the tests is to help the programmer implement code that is elegant and maintainable; the fact that they may also cover things we’re interested in at a higher level is a bonus. It’s not exactly testing in the way a tester might consider testing, but there is definitely a relationship there.
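That write-a-failing-test-first rhythm can be sketched in a few lines. Everything here is invented for illustration – the `slugify` helper and its tests come from no real codebase – and it’s a minimal sketch of the cycle, not anyone’s actual workflow:

```python
import re

# Step 1: the failing test is written first. It forces a decision about
# the function's name, signature and behaviour before any code exists.
def test_slugify():
    assert slugify("Testing Is An Activity") == "testing-is-an-activity"
    assert slugify("What's my role?") == "what-s-my-role"

# Step 2: just enough implementation to make the test pass.
def slugify(title: str) -> str:
    """Lower-case a title and join its alphanumeric runs with hyphens."""
    return "-".join(re.findall(r"[a-z0-9]+", title.lower()))

test_slugify()  # green - now refactor, with the test as a safety net
```

The point isn’t the tiny function; it’s the order of operations: red (the test fails because `slugify` doesn’t exist yet), green (minimal code to pass), then refactor under the protection of the test.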

Automated acceptance testing seems to sit more squarely between the roles. Where unit testing is code-supporting, or tech-facing (if you want to go to Brian Marick’s Agile Testing Quadrants model), acceptance tests can potentially have aspects of both code-supporting and product-supporting tests (tech-facing and business-facing).
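To make that distinction concrete, here’s a minimal sketch. The `Cart` class and its discount rule are invented purely for illustration; the point is that the check is phrased as a business rule a product owner could read, rather than in terms of any single function’s internals:

```python
# A toy domain object, invented for illustration. Prices are in whole
# cents so the arithmetic stays exact.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price_cents: int) -> None:
        self.items.append((name, price_cents))

    def total_cents(self) -> int:
        """Order total; orders of $100.00 or more get 10% off."""
        subtotal = sum(price for _, price in self.items)
        return subtotal - subtotal // 10 if subtotal >= 10_000 else subtotal

# Business-facing acceptance check: given a cart worth $100.00,
# the customer pays $90.00.
def test_bulk_orders_get_a_discount():
    cart = Cart()
    cart.add("widget", 6_000)
    cart.add("gadget", 4_000)
    assert cart.total_cents() == 9_000

test_bulk_orders_get_a_discount()
```

A unit test would poke at `total_cents` directly, edge cases and all; the acceptance check above expresses the same behaviour as something a stakeholder cares about, which is why it straddles the tech-facing and business-facing quadrants.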

Good programmers write tests before they write code. Great programmers critically question the requirements they’re given before they start building and keep the big picture in mind as they code. In an agile context, well-written user stories will help them do that, as the story itself describes the big picture or is part of an epic that does. Great programmers who pair will often spot and correct issues in the code they write, and they’ll use the conversations they have while working to highlight possible remaining problems. If necessary, they’ll ask for specialist help (i.e. a tester).

In my current team, there is a strong sense of shared ownership of what we build. The programmers I work with are highly motivated to get testing right, because if we put out a substandard product, we are all responsible. We succeed or fail as a unit based on our ability to deliver value to our stakeholders. We’re a pretty new unit, relatively untried. We have a couple of wins on the board, but the quality of the work we put out reflects on us as individuals, as a team and on the department we’re a part of (not to mention the company as a whole). That’s a fair amount of responsibility. When things don’t go to plan, as will inevitably occur, we don’t waste time and energy in finger pointing. By the same token, if someone screws up, they’re the first to put their hand up for it. We fix what we need to fix, work out what we can improve and crack on. We succeed or fail as a unit. We own it. That’s just the way it is and it’s pretty awesome, I have to say.

Is it perfect? Hell no. There’s lots I want to improve, but at the base of it all is that shared belief in joint responsibility, and that is something I believe is lacking from most tester/programmer relationships. That’s a damn shame and I want it to change.

Why aren’t more teams out there like this? My hypothesis is twofold.

1. There are a lot of people out there who call themselves testers but are really, really crap at software testing. Unfortunately, most programmers have only ever encountered this type of ‘tester’.

2. There are several different flavours of the sentiment that ‘programmers can’t test because… reasons’. Programming and testing are different skills, and how you focus your thinking for each is different, but to say that a programmer can’t test is a fucking cop out that lets them off the hook for work they should be doing.

I think it is reasonable to expect developers to take some interest in improving at testing if their current abilities are close to nil. Once they’ve attained some competence in testing fundamentals, I also think it reasonable to expect that they can improve further should they so choose.

I also think that programmers are unlikely to spend enough time practising or improving testing if we take that expectation away by saying things like ‘developers are crap at testing because they’re developers’. I’m not expecting them to be as proficient as I am, but I do expect a significantly higher standard than ‘I wrote a few unit tests and the code does what it should’. I want to be able to chat freely with programmers about what oracles they tested against, how they approached testing the code they’ve written and what they think still needs attention. That’s not an unreasonable expectation of a programmer who values their craft and shares responsibility with you, the tester, for delivering value.

Is that lazy? Am I expecting someone else to do my work for me? No. Not at all. A programmer who has a solid understanding of testing fundamentals will deliver higher quality code, so that when I do get hold of it, I have a challenge on my hands. The obvious holes have already been thought of and plugged. As a tester, I get to do what I do best – exercise my tester skills to find the issues that are both difficult to spot and a significant risk to delivering value.

The roles of programmer and tester contain significant overlap in terms of thinking, skills and activities. It makes sense to me that the duties of each likewise overlap. Knowledge of one does not and should not preclude understanding of the other. The better we understand how each other works, the better we can help each other do better work. It takes effort. You’ll have to do stuff that makes you uncomfortable or feel dumb. The programmers you work with may resist taking on the responsibilities of testing. You might have to have difficult conversations, maybe repeatedly. What works well in one team may not work well in another.

By sharing the work we do, by working closely with our non-testing peers, helping them understand the work we do and educating ourselves about their work, I believe we will better demonstrate the value of the tester’s skill set and better set expectations of what testing is, whether it be a skill set embodied in a specialist role, a set of activities that a team undertakes, or some combination of both.


If you’re a tester and the title of this post made your heart beat a little faster, then bear with me for a paragraph or two before you scroll down to the comments section to rant.

I’ve been doing this testing thing for a while now. I’ve worked the full spectrum from heavily conservative, highly process driven waterfall style development to Agile with all the bells and whistles and a bunch of hybrids in between. I’ve seen more definitions of what testing is from non-testers than I can count. I’ve seen almost as many definitions from people that call themselves testers.

I have this mental image of the role of testing as a pasty emo teenager railing to instatwitsnapbookplus about how nobody understands their pain. ‘My issues are so complex that it would take you too long to comprehend them, let alone understand them and the answer is you all need to change, and that’s clearly not going to happen, so leave me to wallow in my delicious, delicious pain. Oh and leave Britney alone.’

Blog post after blog post about how testers are devalued by anyone who isn’t a tester – I’ve written more than one myself. I go to testing conferences around the world and yes, it’s fun to catch up with my learned testing peers, but I’d be kidding myself if I thought I was making a difference to how we’re seen by non-testers. I might get through to the occasional meatbot that rote testing is dumb, but more and more I’m of the opinion that if we really want to be taken seriously as software development professionals, then we need to look hard at how we position ourselves in relation to our peers.

The first time I heard ‘testing is an activity, not a role’ I think my reaction was ‘what the fuck do you know, man. I’m a tester. It’s what I do and I do it well.’ Some time later (many months later), I was talking to someone about religion and how people tie belief to their identity, and the strongest reactions you’ll see are when you threaten beliefs that fundamentally make up someone’s identity.

A: ‘I’m an X’

B: ‘X is deeply flawed’

A: ‘I will fucking cut you’

Which made me think about my reaction to the ‘testing is an activity’ statement. At the time I first heard it, it sounded like a statement that trivialised something that I feel is part of my identity. Of course my reaction was a strong one. I am not so smart. My time with the team at eBay has given me serious cause to reassess my initial reaction.

I initially equated ‘testing is an activity’ to ‘anyone can do testing’. The easiest way to troll a tester is to tell them that anyone can do their job. Some people genuinely seem to believe that anyone can do testing. I vehemently disagree. That said, there are some things that testers do that are simple. They also happen to be the things that are the most visible, hence the confusion. Is X different to Y? Yes. Should it be? No. Ok, bug. That’s as complex as testing is to more than one software development professional I have interacted with.

Testers do some of that stuff and, you know, it’s stuff anyone can do. If you have a clear oracle to determine the correctness of something and you observe a deviation from it, you call it out. It’s not rocket surgery. Why the hell, as software testers, would we want to accept that this activity be ours alone, let alone demand it? Anything that simple should be handled by whoever sees it. It should be the responsibility of every member of the team to be on the lookout for that stuff. Bake it into how you develop software. Make it a basic expectation.

Oh, just ‘make it happen’. Easy for you to say. You landed in a team that ‘just gets it’. Ok. Sure. It’s not as easy as that, but that’s not really what I’m getting at. I think many of us as testers have felt like second class software development citizens for so long and fought for recognition so hard that allowing testing to be thought of as an activity as opposed to a role seems like a massive step backward. I also think it has quite a bit to do with ‘quality’ being a bit of a dirty word amongst learned software testers. ‘I don’t do Quality Assurance, I’m not an engineer, I’m not the quality police’ – we spend so much time trying to get misguided software testers to understand this that I think we’ve gotten tunnel vision. We’re not going to stop the zombie invasion. Rather than encourage testers to step back from ‘quality’, we need to encourage our non-tester peers to embrace it. Quality as a shared responsibility. Shared ownership of what we ship.

What we do as software testing specialists should not be to ‘test all the things’, but to enable every single person involved in our project to bring their skills to bear on improving product quality. Sometimes that will mean we get our hands dirty and use the product to find tricky, unexpected problems. Sure. We’re good at that. It also means things like facilitating reviews of the proposed solution, identifying useful, sensible quality criteria and working out where they should be tested. It means training your colleagues to do better testing and to recognise when they need specialist help, and learning more about what your colleagues are good at and what you can do to help them do their jobs better.

The demand for the skill set we have is not going away any time soon. We’ll do more good by letting go of the much maligned chunk of responsibility we’ve carved out for ourselves within professional software development and embracing testing as an activity than we will by demanding recognition that the role of testing is a special snowflake and deserving of special attention.

Trish Khoo wrote an excellent blog post on being a tester in a programming team – more specifically, a team that values testing and incorporates it into everything it does. I found myself nodding along with Trish’s post and identifying very strongly with her experiences. I fear whatever I write in addition will merely be gilding the lily; nonetheless I will add my voice and say that I find this a wonderful way for software development professionals to work together.

I’m fortunate enough to work with a highly talented team of programmers at eBay. I’ve worked closely with skilled developers before, formed strong and lasting friendships and been supported by them in my role. This is different. This is the first time I’ve worked with a group who value testing as much as these guys do – not as a role outside of programming, but as an activity that the team owns.

There are different strengths within the team. Mine happens to be testing. That doesn’t mean the responsibility for testing is abdicated to me. At the start of a sprint, we’ll identify the highest priority work to be done and we’ll talk about the complexity of each story not just in terms of getting a solution in place, but how we’ll know that we have a good enough solution of high enough quality. I’m generally not the one driving the conversation about quality. That’s a refreshing place to be.

One of the things I enjoy most about working this way is that the discussion around testability becomes a lot less contentious. It’s no longer a matter of developers doing you a favour, or becoming wary when you ask for access to their code (or horrified at the prospect of you committing changes); it’s something that just happens. It’s the difference between a conversation along the lines of ‘I think we need to think about the impact of testing and how it affects this work’ and one of ‘what else do we need to think about to make sure this work is valuable when we deliver it?’. It is a damn shame that this seems to be such an unusual situation to be in. It should be the norm.

Like Trish, I’ve found that the bulk of my involvement comes at the start, when we still know the least – be that at project inception when we’re gathering requirements, or during sprint planning and backlog grooming when we’re working out what we need to deliver next and how. I recently used James Bach’s heuristic test strategy model as a project planning tool, a way of eliciting questions to ask as we built a model of the project. The work is ongoing, but so far it seems to be something the entire team has found incredibly valuable. I still do a lot of exploratory testing. The difference here is that I don’t have to waste time on the trivial and the obvious. Most of that has been taken care of, and because I can trust the programmers to handle the basics, I have great freedom to delve deeper into the product and look for more crucial issues.

I’ve long been a proponent of the view that a tester’s mindset is quite different to that of a programmer, and there is some truth in that, but it doesn’t mean programmers are incapable of contemplating good testing, nor that they have no responsibility to do good testing. Lisa Crispin and Janet Gregory, in their book ‘Agile Testing’, make the distinction between code-supporting tests and product-supporting tests, and I find that a valuable distinction to call out when working with programmers. They get the need for code-supporting testing (TDD) – it drives the design of the solution and provides a safety net during creation and later maintenance. Where programmers sometimes have blinkers on is the testing that happens around the solution itself, from questioning whether we’re building the right thing in the first place to probing the solution to see where it misbehaves.

I’m the first person to admit that I’m not a highly skilled programmer. The more I pair with the coders on my team, the more I improve. The same goes for the testing skills of the coders I work with. We’re a multi-skilled team, and the ultimate aim is not that I become a rock star programmer or that they become kick arse testers, but that we each become proficient enough in the skills we have to know when we can handle the work in front of us and when we need specialist help – killing off the Dunning–Kruger effect and understanding the strengths of our team mates so we can draw on them when we need to.

My sincere hope is that this is a way of working that becomes commonplace for testers and programmers alike. That would be an awesome industry to work in.

uTest interviewed me earlier this month.

Here it is.

It’s been quiet on the blog for a while now, mostly because I’ve either been too busy to write, or because I wasn’t yet able to write about the stuff I wanted to write about.

After four plus years in Japan, I have left the land of the rising sun. Japan seems to be equidistant from the major testing destinations I go to, but just a little too far away to be convenient for any of them. Sure I’m a little sad to be leaving. I’ve made some fantastic friends and had some wonderful experiences there. No doubt I’ll be back to Japan at some point, but for the moment, my place is elsewhere. Specifically, I’ve taken up residence in England. I’ve accepted a position at eBay International working with Ilari Aegerter and his very solid group of testers. Early days yet, but so far I’m thoroughly enjoying it. There look to be some very cool things going on and I’m looking to get my hands dirty, especially with iOS driver and Selendroid, both championed and developed by some of my colleagues. I’m travelling for most of June, doing meet & greet and induction stuff in Germany, Switzerland and the USA. From July I should be (more or less) in England.

There are a great many testers in the UK and mainland Europe that I’ve never met, but know by reputation or by email exchange. I’m looking forward to putting faces to the names. Europe seems to be an exciting place to be for context driven testers right now. The Let’s Test conference has put a stamp on testing in Europe that anyone promoting commodity testing ignores at their peril. I get the feeling this is just the beginning and there is a good deal more to come. Exciting times.

The CAST2013 Call for Participation has been announced. I’m stoked to have been selected along with my very good friend Louise Perold as the program co-chairs. We chose the theme “Old Lessons applied and new lessons learned: advancing the practice and building a foundation for the future.” We think it reflects where we’re at as an industry and I’m excited to see what sort of presentations and what sort of conversations this subject will spur.

If you have some experiences you’d like to share about how you’ve changed your approach to testing based on the changes in technology we interact with, we’d love to hear from you. If you know someone you think has an awesome experience to share, please pass this on and encourage them to submit a proposal.

Either way, we hope you’ll come to CAST2013 and help us make it an awesome conference by testers, for testers.

If it’s true that zombie testers are being churned out faster than we can rehabilitate them, then what do we do about them? Asked in a perhaps less provocative way, how do we go about making sure that zombie-like testing behavior is neither encouraged, nor rewarded?

When you begin speaking with management types who have so far only experienced zombie testing – when you engage them about thinking testing – you may well meet reluctance, disbelief and suspicion. A more highly skilled testing group sounds like a good thing, but how do you measure it? How do you make sure people are doing what they’re supposed to be doing? Skepticism is okay. Testers should be able to explain themselves and their actions. Sometimes it’s a little more than that, though. Sometimes it’s a deeply ingrained cultural issue demanding adherence to procedures. To some extent, dealing with that sort of mindset and company culture goes beyond convincing them of the non-viability of zombies – there are apparently such things as zombie managers too. They tend to be the ones that roll out that little ‘what gets measured, gets done’ chestnut, to which I invariably reply ‘what gets measured gets gamed’. Nonetheless, thinking testing can be an accountable activity.

If someone wants to see numbers on how effective the testers are being, have members of the project answer a brief questionnaire on how the testers did. How well did they identify and report on risk during design? How well did they find and convey information during hands-on testing of delivered builds? Did they provide information that allowed you to make informed decisions about the project? And so on. Ask these questions of the people the testers interacted with and have them rate the testers on whatever scale you like, along with a few open-ended questions such as ‘what else could the testers do, or what other information could they provide, to be more effective?’. Then you can put the results into whatever pretty charts you like, and you have a basis from which to begin a conversation that doesn’t rely on nonsense like test cases executed or bugs reported. As an aside, you should be asking these questions throughout the project anyway. If you do, not only will you be able to alter your strategy when needed, you’ll give your peers points of reference when they’re filling in said questionnaire later.
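As a sketch of what that might look like once the answers are in (the question names and the 1–5 scale here are invented for illustration), aggregating the ratings into chart-ready numbers is trivial:

```python
from statistics import mean

# Hypothetical survey responses: one dict per respondent, each
# question rated on a 1-5 scale. Question keys are illustrative only.
responses = [
    {"identified_risk": 4, "conveyed_info": 5, "informed_decisions": 4},
    {"identified_risk": 3, "conveyed_info": 4, "informed_decisions": 5},
    {"identified_risk": 5, "conveyed_info": 4, "informed_decisions": 4},
]

def summarise(responses):
    """Average each question's rating across all respondents."""
    questions = responses[0].keys()
    return {q: round(mean(r[q] for r in responses), 2) for q in questions}

for question, average in summarise(responses).items():
    print(f"{question}: {average}")
```

The numbers themselves matter far less than the conversation they start; feed the averages (and especially the open-ended answers) back to the team rather than treating them as a scoreboard.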

We need the people we work with and the people we serve to understand what testing is in a way that is valuable to them – I spoke about this a little in the second post on this subject – not just within our own organization, but in a broader sense too. We need to make sure that programmers, program managers, analysts and upper management all understand the value testing adds, and that they hold software testers to a higher standard, or at least expect a higher standard from them.

Part 3 talked about taking steps in your own backyard. What about the wider world?

I semi-regularly attend gatherings for developers and have given presentations on what developers should expect from testers. Initial reactions were interesting. Some people said to me afterwards things like ‘I’d expect anyone with that sort of skill set to be managing a development group or test group’ or ‘you’re so wasted in testing, we need to get you into a programming role’. I think both comments stem from an overly low expectation of the value a thinking tester can add, and my reply to both was basically to say so.

Now that they see I am a tester by choice and not looking to bridge into a ‘better’ role, they can see the value of having someone around who speaks the same language and brings a different set of skills to the table. Now they contact me for advice or ask me to help them hire skilled testers. I’m happy to do it. If you’re not attending gatherings run by non-testers, have a look around for some in your area and turn up. Spend time with them. You don’t necessarily have to evangelize or proselytize; just spend time and let them get to know you and what you do.

Attend non-testing conferences. This is a tough one. I’d like to see the Association for Software Testing put a grant together to let testers do exactly this. I think there is a lot of good that we could do by attending, and perhaps better still, presenting at non-testing conferences.

Offer to give guest lectures at your local universities. Rather than have young computer scientists indoctrinated to believe that testing is either synonymous with debugging, or something that the unwashed masses do, help them understand how deep and diverse testing can be.

If you can’t find any gatherings for non-testers (or even if you can), invite non-testers to your tester gatherings, even if you have to bribe them with the promise of alcoholic beverages. Encourage your testing peers to do the same. They may come for the drinks, but they’ll likely stay for the conversation. Relationships formed off the clock can have a deep impact when it’s time to clock on again, whether that be in your own environment or someone else’s.

The fight against zombie testing will not be won overnight, and it won’t be won by taking out the zombies already in the field. Thinking testers need to get amongst the thinkers of our non-testing peers and help them understand how much more we can do. When they start expecting a higher standard from the testers they work with, zombie testers will find it more and more difficult to find work. Getting rid of the zombies will come down to thinking testers doing our part in whatever way we can. To mangle a phrase (probably) by Edmund Burke: the only thing required for zombies to triumph is for thinking testers to do nothing.

Previously on The Testing Dead
Part 1
Part 2
Part 3

Another interesting post linking Zombies and Testing


Previously on The Testing Dead, I talked about various forms of behavioral dysfunction that I call Zombie Testing, and why that’s a problem. So what can you do if you find yourself with a zombie infestation at your place of work?

Well, you could fire them, but you may not want to make that your first action. Some zombies can be rehabilitated. Some are thinking human beings mimicking zombie behavior because they don’t know any better, or perhaps don’t feel it’s their place to break the mold. So before you consign your zombie testers to the sweet flames of napalm death, do have a look around for ones that show signs of higher brain function and help them if you can.

Pairing them with experienced testers is one way of helping them learn new skills. Some testers with zombie tendencies believe they are skilled testers because they don’t know what they don’t know. Pairing them up with someone who *is* a skilled tester can help impart skills that they didn’t know they needed.

If you can, send them to courses such as Rapid Software Testing – if that doesn’t light a fire under them, nothing will.

For the ones that don’t want help becoming better testers: you cannot force change. All the helpful links to blog posts and exercises and offers of coaching will not help someone who isn’t interested. If that’s the case, perhaps you can help them out the door instead.

Identify how zombie testers get into the building and apply liberal defensive countermeasures. Take an active hand in recruiting new testers. You may need to educate your HR people or recruiters to get past buzzword bingo and identify real testers, or at least not throw away promising candidates because they lack the currently fashionable acronyms in their resumes. I’ve worked with a score of recruiters over the years and perhaps one or two had any real clue about what testing is. The typical tester’s resume I get from recruiters generally looks like this:

Zombie Tester Resume


Most recruiters I’ve worked with were open to learning more about testing. If they’re good, they’ll want repeat business from you, so it’s in their best interests to find out what sort of people you want. This is not an overnight process; you’re not going to sit someone down and lecture them on all the stuff they need to know. Tell them what you want up-front, sure. Give them the details that are important, but I find that real understanding takes more time. Develop a relationship with recruiters who make an effort to provide what you want. Catch up for lunch occasionally, or a Friday afternoon beverage of choice, and talk about what’s going on in the industry. Talk to them about the frustrations you have when hiring. Doing this has been well worth the time investment for me.

Give them a profile of the sort of person you’re looking for. If you’re looking for someone more experienced, look for a diverse background and a number of different skills – there’s a marked difference between someone with 12 years’ experience and someone with 3 years’ experience 4 times over. Look for someone who is constantly honing their skills; they should be able to talk about what they do to stay sharp. I wrote a post a while back about what I look for in a resume, and I share that sort of information with recruiters too. Make sure they know that not having certification is okay.

On that subject, I think certification is sometimes unfairly demonized. If people get value from studying for a certificate, that’s fine. I don’t think certification creates zombie testers, although I do think it acts as a neat form of spray-on credibility for them. My major objection to certification in its current form is that it is marketed as a measure of tester competence. It is no such thing. I would love to see certification bodies be more up-front about this, but they have a financial disincentive to do so – they grow fat off the ignorance of testers and of the people who hire them alike. Quite brilliant in a morally bankrupt kind of way, but I digress.

Rehabilitate the zombies you can, get rid of the ones you can’t, and do what you can to keep any more from getting into the building you work in. What about the wider testing community? The truth is, with the barrier to entry to the testing profession being no more difficult than knowing where a computer’s on switch is, zombie testers are being churned out far faster than we can hope to rehabilitate them. Does that mean it’s a lost cause? Far from it, I think, but focusing on the zombies won’t get the job done. I’ll talk about that some more in the fourth (and likely final) post.

The Testing Dead – Part 4

In my first post on The Testing Dead, I identified a number of patterns of behavior that I like to call Zombie Testing.

Is this really a problem we need to be concerned about?

I think it is, for a number of reasons.

I think Zombie Testing has the ability to infect an organization. It’s generally a less grisly process than with your traditional zombie, but the downside is that it takes a lot longer to die, and it’s only slightly less painful.

How does Zombie Testing infect non-testers? I mentioned in a previous post things like arbitrary entry/exit criteria. Have you ever seen programmers changing bug severity or priority (or reassigning bugs, or closing them) to meet these bogus criteria? Ever been in sign-off meetings where project managers argued about which bugs were severity 1 and which were severity 2, then went on to (re)define what those severities meant?

That one little artefact that says ‘no more than 1 severity 1 bug and 5 severity 2 bugs or else your code doesn’t get signed off’ is a sign that zombie testing has taken hold. Anyone with kids will tell you – it doesn’t matter whether it’s number one or number two, you just have to take action before it gets messy.

When Zombie Testers hold themselves up as the quality police, there’s a tendency for others to see them that way too. That invites dysfunction like the segregation of testers and programmers – because the dark gods forbid they should unduly influence one another. The testers need to remain “objective”. Segregation of testers and programmers is one of those ‘won’t someone think of the children’ arguments. It’s a solution in search of a problem that I’ve yet to actually see.

Imagine a straight-laced chaperone at a formal high school dance, insisting that testers and programmers may dance, but must keep at least two feet apart whilst they do so. That might seem very civilized and genteel, but everyone knows the real magic happens when the testers and programmers slip away behind the bike sheds and show each other their notes.

More recently I’ve heard and read about some programmers calling to do away with testers altogether. It’s a misguided notion, but I can understand where they’re coming from. If your only exposure to testing has been to people who enforce unhelpful rules, have an adversarial attitude, waste your time and otherwise make your life difficult (whilst adding questionable value), why wouldn’t you want to do away with them?

The problem for thinking testers isn’t so much that Zombie Testers exist. It’s that they’re so prevalent that they’re seen as the norm by non-testers. We need the people that hire testers, the people that manage testers and the people that testers serve to understand what it means to be a thinking, professional tester.

Moreover, we need them to understand it in a way that’s meaningful to them.

Easier said than done. It’s a tough sell.

Can we go to upper management and tell them that quality will improve as a result of our participation?

No.

It may indirectly, but that’s not generally something we have direct control over. We don’t make design decisions, we don’t hire or fire programmers, we don’t decide what gets fixed or deferred – we might influence one or more of these things, but the final decision is not ours.

Can we tell them their product will be released bug-free?

No. Finding bugs is part of what we do and while we can test for their presence, we cannot prove their absence. Some less scrupulous companies (who may well have a large stable of test zombies corralled somewhere) might say otherwise, but that’s not a claim a tester can make in good conscience.

What then?

The alternative we have is to tell them that we can reveal risks and problems to them much earlier than they might otherwise find out about them, giving them time to take action.

It doesn’t sound like a particularly attractive alternative. In my experience, people don’t want you to tell them about problems (unless you’re also telling them about how you fixed them). They want solutions.

Moreover, many people seem to cling to the broken Taylorist model that software development is mass production. Programmers turn out widgets that come down the conveyor belt. Testers pick up these widgets, compare them to spec and/or known good widgets and if they’re within tolerable limits of variance then all is well.

It’s an attractive fantasy. It’s measurable. It’s controllable. The workers can be switched in and out because it’s repeatable labor. Unfortunately (for those that believe it), it’s complete bullshit.

So how do we put that alternative in a way that is more palatable to an audience that needs to hear such a message, but may not be ready to accept it?

There are no easy answers to that question (that I know of). There’s no silver bullet. In part three I’ll talk about what can be done to help educate our non-testing peers about what software testing is, and what can be done about stemming the flow of zombie testers.

The Testing Dead – Part 3

The zombie apocalypse has occurred. They walk among us even now – The Testing Dead. These dead-eyed, soulless creatures make sounds that seem human, but they’re empty shells inside and will bite you if provoked. Left unchecked, Zombie Testers will infect an organization with their disease. Zombie testing is any rote application of testing practice or methodology without regard for how appropriate it is in that context. It often looks like one or more ‘skilled’ testers churning out test cases for meatbot automatons to execute, but there are no doubt also those who identify with context-driven methodologies, have missed the point, and follow the same go-to patterns regardless of context.

While there is some amount of tongue-in-cheek in this analogy, it does describe actual patterns of dysfunction that I’ve observed. I want to be clear at this point that I’m having a go at a kind of behavior. I’m not trying to demonize people.

There are a number of different flavours of Zombie. See if you recognize any of them.

The Misled

These are the ones who finished whatever secondary or tertiary education they did and decided they were done learning for the rest of their lives, and could they please have a job where they memorize and regurgitate the right answers like they did in school. Not particularly adventurous, they might have found some of the large amount of crap online about software testing and decided that was just fine, thanks. Give me a recipe to follow or a template to fill out, but the dark gods forbid I should have to think for myself.

The Template Weenie

A variant of the Misled. They discovered some testing templates online or perhaps their company had some already put together. Their belief is that if they fill out these templates and no gaps are left, then good testing will have been done. If they’ve got all the requirements covered and tests all trace back to them, and the test plan is all filled out and the schedule is set properly, then we’re all good. It appears that for them, reality is an obstacle to be managed with paperwork.

The Passenger

The passenger has fallen into testing but has no desire to be there; they’re using testing as a bridge to somewhere ‘better’ (typically programming, business analysis or project management). They tend to do only enough work to avoid reprimand and will often be found hanging around the group they’re trying to break into, as though they might be absorbed by osmosis.

I generally try to cut a deal with passengers should I encounter them. It is in our combined best interests to move them on, so I promise them I will do everything in my power to get them where they want to go if they agree to be the best tester they can be while they are with me. If they have a strong body of work I can show to the manager of the group they want to transition to, and I can honestly talk about the strength of their efforts, that tends to lend great credibility to their application. Sometimes that approach works, sometimes not. If they won’t let you help them get where they’re going, you may wish to help them out the door instead.

The Apathetic

Like the passenger, these zombies have no real desire to get better at testing; they simply want to turn up between 0900 and 1700, go home, rinse and repeat. They won’t think about or do testing in their spare time – it is merely a job. To some degree there’s nothing inherently wrong with this, but personally I’d rather work with inspired, passionate people who genuinely enjoy what they do and want to do it better.

In some respects having a few apathetic zombies around can make your life easier – they tend to be the ones who enjoy predictable monotony and there’s often no shortage of that in testing. If you have repetitive work that is difficult to automate, these people can be handy.

The Confused

This lot think they’re doing quality assurance when what they’re doing is testing. Quality assurance is really a collection of roles and actions that have a direct bearing on the quality of the product: the hiring and firing of programmers, architects and so on; decisions about what to include or leave out; which design to go with; which vendor to go with, and the like. In contrast, software testers reveal information about the product. Some testers write production code, but in their role as testers they do not directly influence the product itself. The Confused either do not get this, or vehemently believe that their role is to be the final bastion of quality before the software goes out into the world.

They tend to enjoy grandiose titles such as ‘Quality Assurance Engineer’ despite doing no quality assurance and not being engineers. They also seem to actively position themselves as the gatekeepers of the software release decision, apparently blissfully unaware that it’s a lose/lose situation for someone with the word ‘quality’ in their title. If they say no to a release, they’re either overridden by the people with real power (who probably have a better business sense of what needs to occur), or they’re seen as the ones holding everything up. If a release goes out and something screws up in production, they’re the ones who get fingers pointed at them and get asked questions like ‘Why did you let that bug out?’

I’ve seen questions on testing forums like ‘We had some bugs go to production. My manager is asking me why. Can someone give me some excuses I can tell them?’ Wow, just wow. The level of non-comprehension about one’s own job that this question requires is mind-boggling.

The sadness doesn’t end there, though. Not only do the Confused make their own lives hard, they like to make life harder for their non-testing peers too. Things like testing entry and exit criteria based on arbitrary bug counts of varying severity (e.g. no more than 1 severity 1 bug and 5 severity 2 bugs) tend to make people’s lives unnecessarily difficult.

The Priest

A variant on the Confused, these guys perform ritual testing. It’s testing theatre in much the same way that the TSA does airport security theatre. It may find some stuff, it may not. It gets applied to everything in the same way because that’s how it has to be done. It’s their religion: this is the way testing must be, for this is the one true way of testing. I’m not sure, but they may be an evolution of the Template Weenie, like some mutant fucking pokemon. Fortunately I haven’t encountered too many of these.

The Horde

The horde probably resembles their traditional zombie counterpart more closely than any other zombie type. Although they are a large group, they share nothing more than proximity and brain death. A website (or app or other software) will be left out in the open like a sacrificial virgin. The horde descends upon it under the guise of crowdsourced testing, whereupon individuals compete with the rest of the group to find something vaguely bug-shaped upon the surface, like little zombie rhesus monkeys. They are paid by the bug, so when you take their bug away from them – well, have you ever tried taking food away from a rhesus monkey? It’s a bit like that.

They are largely incapable of following instructions unless said instructions are very, very precise. Instead, these zombies have specific go-to patterns they use to find bugs, such as turning on Internet Explorer’s ‘bitch about everything javascript related’ mode. They will also report every single instance of the same bug, despite those instances clearly sharing a single, common root cause. Their bug reports are somewhere between readable and atrocious, because it doesn’t matter what quality the bug report is – it just has to be first. Once they have exhausted their suite of patterns, they will immediately leave the victim, generally unmolested but quite free of lice and other surface irritants.

There are no doubt more types of Zombie out there, but these are the ones I have encountered on my travels. There seems to be a common thread amongst zombie testers – the complete lack of desire to do anything differently to how they are doing it now. In a role that demands that we rapidly respond to a frequently changing environment, that seems antithetical to how a tester should operate.

In the next post I’ll talk about why zombie testing is a problem for thinking testers and what we can do about it.

 

The Testing Dead – Part 2