Tuesday, December 17, 2013

On Test Cases and Conversing with Unicorns

The other day, I was sitting quietly contemplating some measurement functions people were asking about.  Whilst sipping a nice coffee in a small coffee shop, I heard a voice beside me - someone clearing their throat and asking if they could join me.

"Are you Pete?  May I join you?"

Now normally, I'm not easily taken aback.  This time, I was.  It was a unicorn speaking with me.  Apparently he, I think it was a he, was waiting for a friendly griffin who did a mix of Java work for his day gig but was fluent in other languages.  Alas, the griffin was late.  You may not know it, but griffins are notorious for unpunctuality.

We got to talking about software and software development and software testing.  The unicorn asked me what was on my mind.  This struck me as odd.  I suspect he was simply being polite.  Unicorns can read the minds of non-magical humans, you see.

I explained that, like many people at many companies, I was trying to help others understand something that I thought was pretty fundamental.  The issue was one that a fair number of people seem to be wrestling with these days.

People are being asked to count things.  Tests.  Bugs.  Requirements.  Effort.  Time.  Whatever.

And the unicorn looked at me and asked "Why?"

It seems that people are looking to estimate work and measure effectiveness.  Their managers are trying to find ways to measure progress and estimate the amount of work remaining.

The unicorn started laughing - no, really.  He did.  Have you ever heard a unicorn laugh?  Yeah.  It's kind of interesting.

He looked at me and said "They've always wanted to know that stuff.  It seems things haven't progressed very far.  In the old days, we looked at the work and worked together to make really good software.  It would be ready as soon as it could be, and we could tell managers when we got close to it being ready.  Now, we expect people to be able to parse tasks and effort before they even figure out everything that needs to be done?  What are the odds of that actually happening?"

We sighed and sipped coffee for a moment.

The problem, of course, is that sometimes we're not quite sure what else can be counted.  And the issue with that, with the whole metrics thing?  When we latch onto the easy-to-count stuff, it seems the stuff we end up counting never matters very much to the actual outcome of the project.  Why is that?

So, the conversation flowed.  We each had another coffee.

My thoughts focused on test cases.  Why do so many folks insist on counting test cases and the number that passed and failed?  What does that tell us about the software?  If we could logically define, for every situation, what test cases should look like, and could identify instances where those definitions would always hold, then counting them might work.

My problem is simple:  I can't recall two projects ever conforming to the same rules.  No such set of rules seems to work in most of the environments I've worked in.

The unicorn seemed to understand.

He said "I tend to use failure points at steps in documented test scripts when I need them.  Some people use each failure point as a test case. They get many, many more test cases than I do.  Does that make their tests better?  Are they better testers because of the way they define their test cases?"

We both agreed that simply having more test cases means almost nothing about the quality of the testing.  That, in turn, tells us nothing about the quality of the software.

If "a test case" fails and there are ten or twenty bugs written up - one for each of the failure points - does that tell us something more or less and if ten or twenty test cases resulted in the same number of bugs being written - again - one for every failure point.

What does this mean? 

Why do we count test cases and all the other things we count? 

The unicorn looked at me and said that he could not answer that question.  He said that he preferred to consider more important things, like whether or not unicorns can talk with humans.

2 comments:

  1. I agree with the unicorn; however, management don't believe in unicorns, and they want to have an estimate of when I'll be 'done'. I also think I need more people, but before we get them, I have to quantify how many, with evidence to back it up. Can the unicorn help with that?
