There are such things as stupid questions

I am making an appeal to those out there who are managing testers, or are in a position where their decisions influence testers.

I loosely follow several testing groups to see what interesting conversations are going on. I don’t often participate, mostly because I can’t commit to having the time to finish a conversation. If I can drop a comment I think will be helpful, then I will.

Then there are questions or statements that make me a little upset. I try not to respond, because I tend to say things that are unkind. There are some people out there who need help and seem to be unaware of it. If you manage a testing team and you don’t have a lot of experience as a tester, you may be one of them.

The questions that irritate me are mostly stupid questions asked by people who (according to their listed job title) should know better. Maybe stupid is not the right word. Lazy, ill-informed, ignorant – these are probably better adjectives.

Recently I saw the following questions asked:

How can testing be monitored? How can we measure the performance of testing?

That was it. No context, no qualification, no further explanation. I assert that because of this, they are stupid questions. I suspect what this person meant was ‘how do I gauge the efficiency and effectiveness of the work that my testers are doing?’, but that’s only one of many possible interpretations of the original questions.

Not to mention that I suspect what this person actually wants is a bunch of numbers they can use to reassure themselves that all is well, or possibly to threaten perceived underperformers with.

I don’t really want to get into a rant on metrics. It’s been done to death by far more qualified people than I (see ‘meaningful metrics’). What I do want to say is that if you are managing software testers and you are asking questions like this, you really should know better. Stop being lazy and educate yourself. I feel sorry for the people working for you.

For people who are managing testers, especially those of you who do not have experience in software testing, let me pose the following questions.

How do you measure the effectiveness of a systems administrator? Of a business analyst? How do you know when you have someone who is ineffective in these roles?

Both of these roles require quite specific knowledge. There is also a massive amount of variation within the definition of each role. A sysadmin who works on web servers is likely to have different strengths from one who specialises in SMTP administration. Of course there will be overlap in their skills, but they are different animals. Applying any sort of blanket measurement to them because they share the same title is stupid.

One thing these jobs do have in common (and in common with testing) is that they are knowledge and service related. If the people they are providing a service to are not satisfied, then you have an issue. You don’t go setting up metrics to count how many shell scripts an admin has written. You don’t judge a BA on how many use cases they write. You gauge their effectiveness by how their work is received by their peers and their clients.

Why would it be any different with testing? Testing requires specialised knowledge. How do you gauge the effectiveness of the people doing it? Look at the information they produce. Consult the people they produce it for. Speak with the testers themselves. Is the information useful, or not? Why/Why not? There you go. That’s how you measure the effectiveness of your testing.

Without the qualitative information these people can convey, you can collect all the metrics and numbers you like. You can draw all the conclusions from them you like, but those numbers alone are not enough to make any meaningful distinctions.

Worse still, if you’re judging the effectiveness of a testing effort during a project by things like bug counts and test case coverage, you’re potentially ignoring important information. A simple example – we have X open bugs. So what? What does that mean? So you’ve covered 100% of test cases – what does that mean? What other testing have you done? What did that find? What does that mean?

If you are using numbers as a barbiturate so that you can feel reassured about how the project is going – if you look at the numbers and nothing else, and you accept them at face value – then you are being lazy. If you don’t have time to dig deeper, then that’s a bigger problem, not an excuse.

If I sound a bit defensive, it’s because I am. I’m sick of seeing lazy people give testers a bad name because they don’t understand testing, aren’t interested in learning about testing and yet want to have some way to control testing because that’s how they think it works.

If you manage testers but you don’t trust them to be able to do their job, and if you aren’t willing to learn about the difference between effective testing and ineffective testing, then have the stones to step aside for someone who is. You are worse than useless in your current role. You are a hindrance, and you are in all likelihood making people deeply unhappy.

If you are willing to learn, do try to think it through. What is it you are trying to achieve? Are you doing it because it’s genuinely useful, or because that’s what everyone else seems to be doing? Do some research. Plenty has been written about it. At least that way, when you ask a question, it will be an informed one. When I read questions like the above, here’s how it comes across: ‘o hai, I ar new manager. Plz hope me teh metriks with xamplez kthxbai’.

I think that as managers of testers, it is high time we raised the bar. We have a duty of care to understand what testers do and the challenges they face, to support their work, and to maximise the effectiveness of their efforts. If you are not constantly seeking sources to help you do this, then you are doing them a disservice.

If you have your heart set on measurement, then please, at the very least learn the difference between first, second and third order measurement and how to apply them.

Additionally, it’d be a bonus if you understood more about testing and how testing fits into software development (please do your testers a favour and read this book).

Hopefully for you this is the tip of a very large iceberg. If you do these things, you will look back at questions like the ones posed above and wonder how you could ever have been that naive. If you can cultivate a habit of curiosity and a desire to learn, then you stand a good chance of becoming the sort of manager your testers need you to be.

3 thoughts on “There are such things as stupid questions”

  1. I agree that metrics in software testing aren’t the solution many seem to think they are. They are useful, but you’ve got to use your instinct and intuition just as much (if not more). If you argue that you shouldn’t have metrics at all, then it’s similar to arguing that you shouldn’t have a speed indicator on the dashboard of your car.

    A speed indicator in a car is very useful, even if we only use it a couple of times a day when driving. But we don’t need a speed indicator to tell us if we’re going too fast down a twisty back road when the wheels squeal as we go round a corner. Our instinct and intuition tell us that we’re pushing the boundaries. It doesn’t matter what the metric says; our instinct tells us that something is about to go wrong unless we correct it.

    Conversely, when we’re cruising down the motorway/interstate, it can be difficult for our instincts to gauge whether we’re going too fast or not. Cruising down the interstate, there may be no physical boundary to us going too fast. So we use a metric (the speed indicator) to guide us accordingly. And in this instance we might find ourselves referring to the speed indicator on a more regular basis.

    So to my mind it’s not so much a question of whether we should or shouldn’t use metrics in software testing. It’s more a case of learning to use metrics when they’re the right indicator, and learning when not to use them – when our instincts are a much better indicator.

    William Echlin
    SoftwareTesting.net

  2. Hi William.

    Thanks for stopping by.
    I’m not arguing against the use of metrics, I’m arguing against lazy thinking.
    I’m making a plea for people who are responsible for testers to make sure they have an understanding of the work before they try to measure it and that they understand the value of what that measurement is.

    There is no silver bullet for measuring effective testing. We need to stop asking stupid questions about a non-existent magic formula and take more interest in understanding how skilled testing adds value.

  3. Came across this question recently:

    I’m looking for the formula that predicts Post Production defects based on preproduction defects – I think it was by severity – Anyone familiar with this?

    And this one is a beaut too:

    How many test cases for a typical requirement?
