I attended a practice run of a workshop by Jan Eumann and Phil Quinn at Let’s Test this May. If you’re going to the conference, you should check this workshop out. If you’re not going, you should buy a ticket now and go check this workshop out. It neatly captures many of the dynamics that occur when testers and programmers pair. It gave me the opportunity to reflect on some of the things I’ve learned during my time as a tester embedded in a team of programmers, especially around pairing.

I won’t go into all of the details of this workshop, but for a significant portion of it, participants spend time pairing on a program that requires both fixing and further development. Ostensibly, there should be one programmer and one tester. For the workshop, I got to take on the programmer role. Normally when pairing I’m very much a tester, so this was an eye-opening experience for me, more so than I was expecting. I got to see how things look when I’m doing more driving than observing/questioning. Programmer/Tester pairing is a bit different from 2xProgrammer pairing. In the latter there tends to be a fair amount of taking it in turns to drive and navigate. In a Programmer/Tester pairing, the balance of driving versus navigating depends on things like how comfortable the tester is writing code and how complex the solution is (more complex solutions seem to correlate with the need to spend more time thinking deeply about test design and analysing possible failure modes, in my experience). As a programmer paired with a tester I saw for the first time how difficult it can be to facilitate the inclusion of a tester. It has given me a new appreciation for the skills of my fellow programmers at eBay. Here are a few of the things I noticed or was reminded of:

Remember to narrate as you code.
What are you thinking? Are you hunting for a file? What’s the test you’re writing now? Why that test? As I was coding, I was often silent. I knew what I was trying to do, but since the code was unfamiliar, I was spending a lot of time hunting. What I discovered was that my partner was feeling a bit useless because he felt he couldn’t contribute. As soon as he told me this, I started describing what I was trying to do and he was immediately able to start pointing me to sections of the code that he had fresh in his mind. One change required us to refactor things in four different files. On several occasions he reminded me of steps I’d missed, and he spotted a few typos that I’d completely overlooked. When you narrate your thoughts as you’re writing, you clarify what you’re doing not only for your partner, but often for yourself. Where you find you are hesitant, you might need to throw an idea around a bit more. You also give your partner the opportunity to make suggestions and ask questions.

As a tester, be sure to ask questions. It can be hard to ask questions that you think are dumb – especially when starting out. When I first started pairing as a tester, I felt reluctant to speak up because I didn’t want the programmer to feel like I was telling them how to do their job. I also didn’t want them to think I was stupid. I’ve not had any of the programmers I’ve worked with get defensive or treat me like an idiot. In fact, many things that I thought were stupid questions led to a discussion where we decided to use a different strategy than the one the programmer initially chose.

Thinking about solving a problem (programming) and thinking about how it might fail (testing) really are quite different
- even for people who are familiar with doing both. As my good friend Ilari Henrik Aegerter is fond of saying, it’s the difference between a finite solution space and an infinite problem space. The meeting of these two ways of thinking is why I think pairing testers and programmers can be so powerful. As I finished bits of functionality, my partner did some further testing and often found things I’d missed. I consider myself a fairly experienced tester, and yet I rapidly fell into a pattern of wanting to get something written so that it worked. Even though I thought I was considering edge cases and usage patterns, I was overconfident in my ability to handle problems.

TDD is more about driving design than it is about testing, but it can help to facilitate a testing mindset. When you’re writing tests to drive your code, you’re also laying out a structure to your thinking. I find that gives you something visual to brainstorm with. Other test ideas naturally seem to crop up. The middle of the coding flow might not be the precise place to write them or follow them up, but you can certainly note them down to come back to them later.

For the workshop, I didn’t use TDD for the changes I made, and that made describing what I was doing more difficult. I think if I’d tried writing a test (check) first, it would have been a lot simpler for my testing partner to know exactly what I was trying to achieve, to offer input and to do further testing. If I’d written tests up front, I might have noticed other things that needed to be tested as well.
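To make that concrete, here’s a minimal sketch of what test-first might have looked like. The function and tests are invented for illustration (the workshop code isn’t mine to share), but the rhythm is the point: the test states the intent before the code exists, which gives a pairing partner something explicit to react to.

```python
import unittest

# Hypothetical example of writing the check first. The tests below would be
# written (and seen to fail) before word_count existed; the implementation
# is then filled in to make them pass.

def word_count(text):
    """Count occurrences of each word, ignoring case."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

class WordCountTest(unittest.TestCase):
    def test_counts_repeated_words(self):
        expected = {"the": 2, "cat": 1, "sat": 1}
        self.assertEqual(word_count("The cat sat the"), expected)

    def test_empty_string_gives_empty_counts(self):
        # Edge cases written up front are exactly the sort of thing a
        # testing partner can spot and extend.
        self.assertEqual(word_count(""), {})
```

Run with `python -m unittest` against the file. Even in a toy like this, the named tests tell an observer what I’m trying to achieve before a line of production code appears.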

One of the things we did do was to talk about what to do next. It put me in mind of something else I see effective pairs doing.

Spend a little bit of time up-front to define success.
Discuss what you want to achieve before you dive into the code. Are you refactoring existing code? What do you want to achieve by refactoring? Are you exploring, looking for bugs to fix? Maybe a charter for specific kinds of issue will help you decide on what to fix now and what to note down for later. Maybe you’re writing new code. What do you want to get done by the end of the session? Do you have all the resources you need? Any mocks or stubs that you need to build? Any integrations you need to do? Is the task defined well enough to make progress?

Effective, productive pairing takes practice. If you know your own role in a pair, you can be effective. If you have experience from both sides of the pair, then I think that has the potential for a very powerful partnership. I work closely with brilliant programmers every day. I see my role as a tester often as facilitating a testing mindset in my programmer peers. It wasn’t until I actually had to step into the shoes of one that I realised that there’s a lot of facilitation on the programmer’s part as well to make sure that a tester has a detail-rich environment in which to work.

I found this workshop a humbling experience. I want to thank Jan and Phil for the opportunity.

There was to be a debate at Foo Café in Malmö in early November about the ISO29119 software testing standard. It was to feature Karen N. Johnson and myself debating Stuart Reid and Anne Mette Hass on whether the current volumes should be retracted and the upcoming volumes suspended. At the time I was invited to participate, my understanding was that all participants had agreed to the debate. The only thing left to sort out was the format and the moderator.

Sadly, the debate will no longer go ahead. Mr. Reid and Ms. Hass have pulled out. I am obviously disappointed. I was looking forward to finally having some of the questions and concerns raised by my testing colleagues addressed by people from the working group. To my knowledge that has not happened, with the exception of a response from Mr. Reid himself. His response inadequately addresses a small number of concerns raised by those opposed to the standard and misrepresents a number of others. If anything, my experience with this debate, in which Mr. Reid and Ms. Hass agreed to participate and then backflipped, has raised even more questions that need to be addressed.

I understand that, having poured so much time and effort into the standard, it must be difficult to hear people criticise it so strongly, but I am curious as to why no one from the working group seems to want to publicly defend their work. Neither Karen nor I (nor indeed any of the other testers I’m proud to associate myself with) have any interest whatsoever in personally disparaging Mr. Reid, Ms. Hass or anyone else in the working group. The standard itself is what we take issue with. The issues are what we want the opportunity to discuss further. I do hope an opportunity will arise for Mr. Reid and co. to address the very real concerns my colleagues and I have raised about the standard.

I suppose most people will have filled their conference dance card by now, but in case you haven’t, here are some upcoming conferences that I’ll be presenting at:

Let’s Test Oz
Sydney, Australia
September 15-17

Øredev
Malmö, Sweden
November 4-7

Tasting Let’s Test – South Africa
Johannesburg, South Africa
November 14

All of these conferences have presentations from world class speakers (somehow they let me in too), so they’re well worth attending if you can.

I hope to see you there.

To the publisher(s) of the blog post entitled ‘Book burners threaten (old) new testing standard’ on professionaltester.com on August 20, 2014:

(I have attached an image of said blog’s text in case it should change or be removed in future)

At CAST2014, a number of like-minded professional testers got together after a very insightful presentation by James Christie on the subject of the proposed ISO 29119 standard. Out of this meeting of minds, two things emerged. One was a manifesto drafted by Karen N. Johnson about our beliefs as professional testers (http://www.professionaltestersmanifesto.org/). The other was a petition initiated by the International Society for Software Testing (ISST) to demonstrate a lack of consensus by professional software testers to the proposed standard ISO 29119 (http://www.ipetitions.com/petition/stop29119).

The petition exists to show that a significant number of software testing professionals have reasoned and substantial objections to the publication and subsequent adoption of the ISO29119 standard, and that there is therefore no consensus in the software testing industry that this standard is valid.

ISO’s own guidelines define consensus as:

ISO/IEC Guide 2:2004, definition 1.7

“General agreement characterised by the absence of sustained opposition to substantial issues by any important part of the concerned interests and by a process that involves seeking to take into account the views of all parties concerned and to reconcile any conflicting arguments.

NOTE Consensus need not imply unanimity.”


Over the past week, signatories of this petition and other concerned parties have been circulating the petition and encouraging others to sign it. As of writing, it has upward of 250 signatories.

Your blog seems to be a fairly low-brow effort to understand and respond to the concerns raised by the petition. I see few redeeming qualities upon closer reading. It denounces this petition in what I can only describe as insultingly inflammatory fashion. You appear to be using a number of fallacies to support your attempt at an argument. Let’s go through them.

You begin with a fairly vague appeal “Testers have been waiting many years for ISO29119”. I wonder, which testers specifically are you referring to? Aside from consultants waiting to sell services based on ISO certification and anyone related to the drafting of these documents, who exactly is clamouring for the publication of these documents?

Next up – guilt by association.

You are calling the ISST and the signatories of this petition ‘book burners’. There have been a number of groups known throughout history for burning books, and one would be hard-pressed not to think first of the Nazis. To the best of my knowledge, neither the ISST nor any signatories have actually burned any books (actually, members of ISST read a lot of books and quite a few also write them). If your intention was indeed to draw parallels between the two groups, then I find this repugnant and highly unprofessional. If your intent was otherwise, then by all means, please leave a comment here (my blog, unlike yours, is open for discussion) and enlighten me.

Since you bring up the subject of books, let’s take a quick look shall we? The published volumes of the ISO29119 standard have bibliographies that refer predominantly to other ISO/IEEE publications. As far as I can see there are three publications referred to that are external sources and one of those is to a publication of ISTQB.

Here’s a small fraction of a list that I think could have been referred to or at least recommended as further reading:

  • Perfect software and other illusions about testing – Gerald M Weinberg

  • Adrenaline junkies and template zombies – DeMarco, Hruschka, Lister et al

  • Mistakes were made (but not by me): Why we justify foolish beliefs, bad decisions and hurtful acts – Carol Tavris, Elliot Aronson

  • Introducing ethics – Dave Robinson

  • You are not so smart – David McRaney

  • Why software gets in trouble – Gerald M Weinberg

  • Antifragile: Things that gain from disorder – Nassim Nicholas Taleb

  • Lessons learned in software testing – Bach, Kaner, Pettichord

  • Bad software: What to do when software fails – Cem Kaner

  • Seeing like a state: How certain schemes to improve the human condition have failed – James C Scott

  • Tacit and explicit knowledge – Harry Collins

  • Leprechauns of software engineering – Laurent Bossavit

  • The structure of magic Volume 1 & 2 – Bandler, Grinder

  • Lateral thinking: Creative thinking step by step – Edward De Bono

  • Secrets of consulting – Gerald M Weinberg

  • An introduction to general systems thinking – Gerald M Weinberg

  • Becoming a technical leader – Gerald M Weinberg

  • The psychology of computer programming – Gerald M Weinberg

  • Kuhn vs. Popper: The struggle for the soul of science – Steve Fuller

  • Please understand me (2) – David Keirsey

  • Frogs into princes – Bandler, Grinder

  • Sherlock Holmes – the complete novels and stories – Sir Arthur Conan Doyle

You get the idea. There is a good deal more out there that software testers should familiarise themselves with. I’ve left out tomes that refer to specific technologies. They are easily found and I leave them as an exercise for the reader.


Returning to your blog post – you falsely assert that our issue with the standard is that

not everyone will agree with what the standard says.

This is at best a gross oversimplification. The text of the petition does not explicitly state what specific disagreements and opposition the signatories have, it simply states that such opposition exists and must be considered. The specifics are not difficult to find. There are a number of other professional testers who have written well-reasoned arguments about their opposition to software testing standards and that number is growing.

You go on to build the following strawman argument

…they don’t want there to be any standards at all. Effective, generic, documented systematic testing processes and methods impact their ability to depict testing as a mystic art and themselves as its gurus

Let’s look at the word ‘effective’ – Effective for what? One might assume for the orderly execution of software testing, but I would hate to put words in your mouth, so please, once again enlighten me as to what specifically you mean by effective and do please back this up with proof that this standard actually achieves this.

As for the rest of the sentence, what has the ISST or any other signatory of the petition said or done that leads you to believe that they gain from depicting testing as ‘a mystic art and themselves as its gurus’? I challenge you to prove this statement or withdraw it and make an apology.

Furthermore, I challenge you to publish your real name next to your blog post and stand behind it and defend it as best you are able – or, retract it and post an apology with your real name attached.


Cordially,

Ben Kelly

Professional Software Tester

Founding member of the International Society for Software Testing

Hang on a sec, didn’t I just get done saying testing is an activity and not a role? I did say that, didn’t I? Did I mean it? Well, it’s true in the same sense that Darth Vader killing Anakin Skywalker is true. As Obi Wan said – ‘from a certain point of view’ – namely how we as testers pitch our role to non-testers.

How we position software testing to non-software testers is important. I have a strong sense that currently we explain the role of software testing to non-testers very much in terms of what testers do and others don’t (or can’t, or won’t). As testers we bring skills and experience that are different from those of a programmer, or UX, or product management and so on, and I think it’s important that the value of these skills be recognised. I think, though, that to say only skilled testers can or should be responsible for exercising these skills is a bridge too far. I want both testers and programmers to think more fluidly in terms of what their role and responsibilities are. The short version: I think the ‘that’s not my job’ mindset is super unhelpful to everyone involved in software development.

In the comments of my last post James Bach said ’I think the role of testing is a very useful heuristic’. I agree. It is. I didn’t state that explicitly in my last post and really I should have. It’s a realisation that I have only come to recently and the realisation shocked me. I identified so strongly with the role of a tester that relaxing my grip on ‘tester’ as an identity was incredibly confronting.

It’s not that the role of testing as a concept is not useful, but like any other heuristic, it is fallible. If one is careless in describing the responsibilities and characteristics of testing in terms of what testing is and what other roles are not, it can help to reinforce stereotypes that are not useful. By way of example, here are a few beliefs that I’ve heard from testers about why programmers can’t test, that I think are unhelpful.

‘Programmers shouldn’t test their own code’

I think programmers should not be the only ones to test their own code if quality is at stake. What we think we’ve written is often not what we’ve actually written. Talk to a programmer about reading code they wrote more than a month ago and they’ll often say ‘I wonder wtf I was thinking’. If you talk to a screenwriter, or any other kind of writer really, they’ll often say the same thing. At the time of writing, we often lack the perspective to be effectively critical of what we’ve written. With all that said, if any programmer is writing anything that matters, they absolutely should be testing their own code.

‘Programmers and testers think too differently for either one to be good at each other’s job’

While I believe it’s true that the focus of a tester and that of a programmer are very different, that doesn’t mean we cannot have a good fundamental understanding of each other’s work. I would go so far as to say that if testers and programmers don’t have a good understanding of the fundamentals of each other’s craft, then they are almost certainly going to be less effective than someone who does have that knowledge. Just as it helps a tester to know how to code, to know the basics of the technology stack the programmers are working with, and to understand the patterns they’re using (along with their advantages and disadvantages) when spotting possible problems, so too should coders have an understanding of testing fundamentals, not just whatever automated testing they’re doing. You should be able to talk to them about oracles, test heuristics, the various ‘ilities’ and risk without them wondering what the hell you’re on about.

‘Programmers are too tightly focused on what they’re building to see the bigger picture’
Which seems to be saying ‘programmers don’t know how to defocus and wouldn’t see the value of doing so if they did’. Like other testing skills, focusing and defocusing are learned skills and can be honed with practice. Full stack developers have practice doing this because they need to understand the different technologies they’re working with and how they interact, their various gotchas and pitfalls. It is a skill that can be learned and there is benefit for programmers to know how to do it.

There are lots of reasons out there for why programmers are bad at testing. Testers reinforce that mindset every time they trot these little truisms out. It doesn’t have to be that way. Rather than looking at the tester role as something that is altogether separate from a programmer role, consider how the two roles can interact.

The advent of test driven development in its various flavours has helped blur the lines between the roles. TDD is generally used as a way to drive design and thereafter support programmers as they maintain and change code. Programmers write failing tests and then use the support of their IDE to fill in the code to make that test work. They build small pieces one at a time, each supported by tests that exercise what was just written. If a test is difficult to write, it points to a possible problem in the intended implementation. The initial focus of the tests is to help the programmer implement code that is elegant and maintainable. The fact that it may also cover things we’re interested in from a higher level is a bonus. It’s not exactly testing in the way a tester might consider testing, but there is definitely a relationship there.
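One way to see the ‘difficult test points to a possible problem’ idea in miniature: code that reaches out to a hidden collaborator is awkward to test, and the act of making it testable tends to improve the design. A hypothetical sketch (both functions are invented for illustration):

```python
from datetime import date

# Hard to test: the result depends on when the test happens to run,
# because the function reaches out to the system clock itself.
def is_weekend_hardwired():
    return date.today().weekday() >= 5

# Easy to test: the date is passed in, so a test controls the input.
# The design is also better, since the function no longer hides a dependency.
def is_weekend(day):
    return day.weekday() >= 5
```

The struggle to write a test for the first version is the signal; the second version falls out of listening to it.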

Automated acceptance testing seems to sit more squarely between the roles. Where unit testing is code-supporting, or tech facing (if you want to go to Brian Marick’s Agile Quadrants model), acceptance tests can potentially have aspects of both code supporting and product supporting tests (tech facing & business facing).

Good programmers write tests before they write code. Great programmers critically question the requirements they’re given before they start building and keep the big picture in mind as they code. In an agile context, well written user stories will help them to do that as the story itself describes the big picture, or is part of an epic that does. Great programmers who pair will often spot and correct issues in the code they write and they’ll use the conversations they have while working to highlight possible remaining problems. If necessary, they’ll ask for specialist help (ie a tester).

In my current team, there is a strong sense of shared ownership of what we build. The programmers I work with are highly motivated to get testing right, because if we put out a substandard product, we are all responsible. We succeed or fail as a unit based on our ability to deliver value to our stakeholders. We’re a pretty new unit, relatively untried. We have a couple of wins on the board, but the quality of the work we put out reflects on us as individuals, as a team and on the department we’re a part of (not to mention the company as a whole). That’s a fair amount of responsibility. When things don’t go to plan, as will inevitably occur, we don’t waste time and energy in finger pointing. By the same token, if someone screws up, they’re the first to put their hand up for it. We fix what we need to fix, work out what we can improve and crack on. We succeed or fail as a unit. We own it. That’s just the way it is and it’s pretty awesome, I have to say.

Is it perfect? Hell no. There’s lots I want to improve, but at the basic level is that shared belief of joint responsibility and that is something that I believe is lacking from most tester/programmer relationships. That’s a damn shame and I want that to change.

Why aren’t more teams out there like this? My hypothesis is twofold.

1. There are a lot of people out there that call themselves testers who are really, really crap at software testing. Unfortunately, most programmers have only encountered this type of ‘tester’.

2. There are several different flavours of the sentiment that ‘programmers can’t test because…reasons’. Programming and testing are different skills. How you focus your thinking for each of these skills is different, but to say that a programmer can’t test is a fucking cop out and lets them off the hook for work they should be doing.

I think it is a reasonable expectation to hold that developers take some interest in improving at testing if their current abilities are close to nil. Having attained some level of competence in testing fundamentals, I also think it reasonable that they are able to improve further should they so choose.

I also think that programmers are unlikely to spend enough time practicing or improving testing if we take that expectation away by saying things like ‘developers are crap at testing because they’re developers’. I’m not expecting that they’re as proficient as I am but I do expect a significantly higher standard than ‘I wrote a few unit tests and the code does what it should’. I want to be able to chat freely with programmers about what oracles they used to test against and how they approached testing the code they’ve written and what they think still needs attention. That’s not an unreasonable expectation to have from a programmer who values their craft and shares responsibility with you the tester for delivering value.

Is that lazy? Am I expecting someone else to be doing my work for me? No. Not at all. A programmer who has a solid understanding of testing fundamentals will deliver higher quality code so that when I do get ahold of it, I have a challenge on my hands. The obvious holes have been thought of and plugged already. As a tester, I get to do what I do best – exercise my tester skills to find those issues that are both difficult to spot and a significant risk to delivering value.

The roles of programmer and tester contain significant overlap in terms of thinking, skills and activities. It makes sense to me that the duties of each likewise overlap. Knowledge of one does not and should not preclude understanding of the other. The better we understand how each other works, the better we can help each other do better work. It takes effort. You’ll have to do stuff that makes you uncomfortable or feel dumb. The programmers you work with may resist taking on the responsibilities of testing. You might have to have difficult conversations, maybe repeatedly. What works well in one team may not work well in another.

By sharing the work we do, by working closely with our non-testing peers, helping them understand the work we do and educating ourselves about their work, I believe we will better demonstrate the value of the tester’s skill set and better set expectations of what testing is, whether it be a skill set embodied in a specialist role, a set of activities that a team undertakes, or some combination of both.


If you’re a tester and the title of this post made your heart beat a little faster, then bear with me for a paragraph or two before you scroll down to the comments section to rant.

I’ve been doing this testing thing for a while now. I’ve worked the full spectrum from heavily conservative, highly process driven waterfall style development to Agile with all the bells and whistles and a bunch of hybrids in between. I’ve seen more definitions of what testing is from non-testers than I can count. I’ve seen almost as many definitions from people that call themselves testers.

I have this mental image of the role of testing as a pasty emo teenager railing to instatwitsnapbookplus about how nobody understands their pain. ‘My issues are so complex that it would take you too long to comprehend them, let alone understand them and the answer is you all need to change, and that’s clearly not going to happen, so leave me to wallow in my delicious, delicious pain. Oh and leave Britney alone.’

Blog post after blog post about how testers are devalued by anyone who isn’t a tester. I’ve written more than one myself. I go to testing conferences around the world and yeah it’s fun to catch up with my learned testing peers, but I’d be kidding myself if I thought I was making a difference to how we’re seen by non-testers. I might get through to the occasional meatbot that rote testing is dumb but more and more I’m of the opinion that if we really want to be taken seriously as software development professionals, then we need to seriously look at how we position ourselves in relation to our peers.

The first time I heard ‘testing is an activity, not a role’ I think my reaction was ‘what the fuck do you know, man. I’m a tester. It’s what I do and I do it well.’ Some time later (many months later), I was talking to someone about religion and how people tie belief to their identity, and the strongest reactions you’ll see are when you threaten beliefs that fundamentally make up someone’s identity.

A: ‘I’m an X’

B: ‘X is deeply flawed’

A: ‘I will fucking cut you’

Which made me think about my reaction to the ‘testing is an activity’ statement. At the time I first heard it, it sounded like a statement that trivialised something that I feel is part of my identity. Of course my reaction was a strong one. I am not so smart. My time with the team at eBay has given me serious cause to reassess my initial reaction.

I initially equated ‘testing is an activity’ to ‘anyone can do testing’. The easiest way to troll a tester is to tell them that anyone can do their job. Some people genuinely seem to believe that anyone can do testing. I vehemently disagree. That said, there are some things that testers do that are simple. They also happen to be the things that are the most visible, hence the confusion. Is X different to Y? Yes. Should it be? No. Ok, bug. That’s as complex as testing is to more than one software development professional I have interacted with.

Testers do some of that stuff and you know, it’s stuff anyone can do. If you have a clear oracle to determine the correctness of something and you observe a deviation from it, then you call it out. It’s not rocket surgery. Why the hell, as software testers, would we want to accept that this activity be ours alone, let alone demand it? Anything that simple should be handled by anyone who sees it. It should be the responsibility of every member of the team to be on the lookout for that stuff. Bake it into how you develop software. Make it a basic expectation.
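That kind of check really is mechanical, which is exactly why anyone on the team can do it. A hypothetical sketch, with both implementations invented for illustration: the oracle is a trusted reference, and any deviation from it gets called out.

```python
# The oracle: a trusted reference (here, Python's built-in sort).
def reference_sort(items):
    return sorted(items)

# Stand-in for the behaviour being checked; imagine this were the team's
# own implementation (a simple bubble sort, for the sake of the sketch).
def team_sort(items):
    result = list(items)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

# The simple testing activity: compare against the oracle and flag deviations.
def check_against_oracle(items):
    expected = reference_sort(items)
    actual = team_sort(items)
    if actual != expected:
        return "deviation: expected {} but got {}".format(expected, actual)
    return "ok"
```

Spotting the deviation takes no specialist skill at all; knowing which oracle to trust, and what to check when there isn’t a clear one, is where the specialist skill lives.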

Oh, just ‘make it happen’. Easy for you to say. You landed in a team that ‘just gets it’. Ok. Sure. It’s not as easy as that, but that’s not really what I’m getting at. I think many of us as testers have felt like second class software development citizens for so long and fought for recognition so hard that allowing testing to be thought of as an activity as opposed to a role seems like a massive step backward. I also think it has quite a bit to do with ‘quality’ being a bit of a dirty word amongst learned software testers. ‘I don’t do Quality Assurance, I’m not an engineer, I’m not the quality police’ – we spend so much time trying to get misguided software testers to understand this that I think we’ve gotten tunnel vision. We’re not going to stop the zombie invasion. Rather than encourage testers to step back from ‘quality’, we need to encourage our non-tester peers to embrace it. Quality as a shared responsibility. Shared ownership of what we ship.

What we do as software testing specialists should not be to ‘test all the things’, but to enable every single person involved in our project to bring their skills to bear to improve product quality. Sometimes that will mean we get our hands dirty and use the product, find tricky, unexpected things. Sure. We’re good at that. It also means things like facilitating reviews of the proposed solution, identifying useful, sensible quality criteria and working out where they should be tested. It means training your colleagues to do better testing and to recognise when they need specialist help and learning more about what your colleagues are good at and what you can do to help them do their job better.

The demand for the skill set we have is not going away any time soon. We’ll do more good by letting go of the much maligned chunk of responsibility we’ve carved out for ourselves within professional software development and embracing testing as an activity than we will by demanding recognition that the role of testing is a special snowflake and deserving of special attention.

Trish Khoo wrote an excellent blog post on being a tester in a programming team. More specifically, a team that values testing and incorporates it into everything they do. I found myself nodding along with Trish’s post and identifying very strongly with her experiences. I fear whatever I write in addition to her post will merely be gilding the lily, but nonetheless I will add my voice and say that I find this a wonderful way for software development professionals to work together.

I’m fortunate enough to work with a highly talented team of programmers at eBay. I’ve worked closely with skilled developers before, formed strong and lasting friendships, and been supported by them in my role. This is different. This is really the first time that I’ve worked with a group who value testing as much as these guys do, not as a role outside of programming, but as an activity that the team owns.

There are different strengths within the team. Mine happens to be testing. That doesn’t mean the responsibility for testing is abdicated to me. At the start of a sprint, we’ll identify the highest priority work to be done and we’ll talk about the complexity of each story not just in terms of getting a solution in place, but also in terms of how we’ll know that we have a good enough solution of high enough quality. I’m generally not the one driving the conversation about quality. That’s a refreshing place to be.

One of the things I enjoy most about working in this format is that the discussion around testability becomes a lot less contentious. It’s no longer a matter of developers doing you a favour, or becoming wary when you ask for access to their code (or horrified at the prospect of you committing changes); it’s something that just happens. It’s the difference between having a conversation along the lines of ‘I think we need to think about the impact of testing and how it affects this work’ and ‘what else do we need to think about to make sure this work is valuable when we deliver it?’. It is a damn shame that this seems to be such an unusual situation to be in. It should be the norm.

Like Trish, I’ve found that the bulk of my involvement comes at the start, when we still know the least, be that at project inception when we’re gathering requirements, or during sprint planning/backlog grooming when we’re working out what we need to deliver next and how. I recently used James Bach’s heuristic test strategy model as a project planning tool, a way of eliciting questions to ask as we built a model of the project. The work is ongoing, but thus far it seems to be something that the entire team has found incredibly valuable. I still do a lot of exploratory testing. The difference here is that I don’t have to waste time on the trivial and the obvious. Most of that has already been taken care of, and because I can trust the programmers to handle the basics, I have great freedom to delve deeper into the product and look for more crucial issues.

I’ve long been a proponent of the view that a tester’s mindset is quite different to a programmer’s, and there is some truth in that, but that doesn’t mean programmers are incapable of contemplating good testing, nor does it mean they have no responsibility to do good testing. Lisa Crispin and Janet Gregory, in their book ‘Agile Testing’, make the distinction between code-supporting tests and product-supporting tests, and I find that distinction a valuable one to call out when working with programmers. They get the need for code-supporting testing (TDD) – it drives the design of the solution and provides a safety net when creating the solution and later during maintenance. Where programmers sometimes have blinkers on is the testing that happens around the solution itself, from questioning whether we’re building the right thing in the first place to probing the solution to see where it misbehaves. I’m the first person to admit that I’m not a highly skilled programmer. The more I pair with the coders on my team, the more I improve. The same goes for the testing skills of the coders I work with. We’re a multi-skilled team, and the ultimate aim is not that I become a rock star programmer and they become kick arse testers, but that we each become proficient enough in our shared skills to know when we can handle the work in front of us and when we need specialist help – killing off the Dunning-Kruger effect and understanding the strengths of our team mates so we can draw on them when we need to.

My sincere hope is that this is a way of working that becomes commonplace for testers and programmers alike. That would be an awesome industry to work in.

uTest interviewed me earlier this month.

Here it is.

It’s been quiet on the blog for a while now, mostly because I’ve either been too busy to write, or because I wasn’t yet able to write about the stuff I wanted to write about.

After four-plus years in Japan, I have left the land of the rising sun. Japan seems to be equidistant from the major testing destinations I go to, but just a little too far away to be convenient for any of them. Sure, I’m a little sad to be leaving. I’ve made some fantastic friends and had some wonderful experiences there. No doubt I’ll be back to Japan at some point, but for the moment, my place is elsewhere. Specifically, I’ve taken up residence in England. I’ve accepted a position at eBay International working with Ilari Aegerter and his very solid group of testers. Early days yet, but so far I’m thoroughly enjoying it. There look to be some very cool things going on and I’m looking to get my hands dirty, especially with ios-driver and Selendroid, both championed and developed by some of my colleagues. I’m travelling for most of June, doing meet & greet and induction stuff in Germany, Switzerland and the USA. From July I should be (more or less) in England.

There are a great many testers in the UK and mainland Europe that I’ve never met, but know by reputation or by email exchange. I’m looking forward to putting faces to the names. Europe seems to be an exciting place to be for context driven testers right now. The Let’s Test conference has put a stamp on testing in Europe that anyone promoting commodity testing ignores at their peril. I get the feeling this is just the beginning and there is a good deal more to come. Exciting times.

The CAST2013 Call for Participation has been announced. I’m stoked to have been selected, along with my very good friend Louise Perold, as program co-chair. We chose the theme “Old Lessons applied and new lessons learned: advancing the practice and building a foundation for the future.” We think it reflects where we’re at as an industry, and I’m excited to see what sort of presentations and conversations this subject will spur.

If you have some experiences you’d like to share about how you’ve changed your approach to testing based on the changes in technology we interact with, we’d love to hear from you. If you know someone you think has an awesome experience to share, please pass this on and encourage them to submit a proposal.

Either way, we hope you’ll come to CAST2013 and help us make it an awesome conference by testers, for testers.