Speaking at Agile Testing Days

Sunday, September 4, 2016

On Ego and False Dichotomy

I apparently have a memory that stretches back to the dark ages. Aside from my now-23-year-old grandson asking me what King Arthur and I talked about when we had lunch (he was 5 or so at the time), a fair number of my younger colleagues firmly believe that my start in software, on mainframes, with COBOL and RPG and JCL and other arcane (to them) things, must mean I began working with software shortly after code was written by hammering 1s and 0s into stone tablets with a hammer and chisel.

My response? "Not quite, but close. Not as old as dirt, but I do remember when fire was a new-fangled idea."

Really. I've used that recently. This last week.

I wonder if anyone else remembers the "dark ages" where "programmers" talked with people outside of tech? You know, like, people who would actually USE the software being worked on? People who were currently using it and had an idea that might make it better? You know, an "enhancement"?

I wonder if anyone else remembers the "dark ages" where "programmers" talked with those same people and figured out things like "requirements" and "expectations" and when pieces might be available, if not the whole thing?

I wonder if anyone else remembers the "dark ages" where "programmers" looked at application and system models and did the design work for how their changes, the ones they talked with non-tech people about, would fit into the whole scheme? Like, did the design and architecture work?

I wonder if anyone else remembers the "dark ages" where "programmers" looked at file systems and databases and came up with solutions that would work for their needs and not step on the toes of any other systems or the people who used them.

I wonder if anyone else remembers the "dark ages" where "programmers" figured out how to test the software they made? Figured out what the expected behavior was and how to handle situations that would almost certainly arise when something went haywire or "pear-shaped"? (Yeah, that is still kind of my favorite euphemism for SNAFU.) Or how about, worked hard to find faults in their presumptions?

I wonder if anyone else remembers the "dark ages" where "programmers" sat down with other programmers and worked with each other on their code? Where they went beyond "I'm stuck" and looked at "is this the best way?" Where they reviewed each other's code to look for potential problems or inconsistencies - like weird variable handling or dangling If-Then constructs - or worse - random periods - dots - that might get missed?

I wonder if anyone else remembers the "dark ages" where "programmers" carefully worked out what data they needed to test with before running the first test? What conditions they needed to meet and what combinations they needed to cover - and how to cover them?

I wonder if anyone else remembers the "dark ages" where "programmers" had other "programmers" look over their shoulder as they were running tests to help them see if they missed anything?

I wonder if anyone else remembers the "dark ages" where "programmers" worked with other "programmers" before testing the work of those other "programmers"?

I don't remember when this all changed.

It was gradual. It seems like it was getting tweaked and nudged and "improved" for some time. Before long, there were "designers" and "architects" and "business analysts" and "developers" and "testers" and... yeah. Next thing I knew, people needed special treatment. They were "specialists" in something. It seems that some needed to feel "special" - more "special" than others who were not as cool as them.

People began not talking to each other so freely - or at least not as colleagues - equals. People who were "testers" seemed less at-ease to talk with "developers" as equals. Instead of working towards solutions together, they saw themselves as part of a hierarchy - a pecking-order.

Extremely formalized models for developing software were put in place - models that, as far as I could see, prevented people from working together. Some were said to be focused on "maximizing individual skills" and "reducing distractions"; as far as I could see then, and still see now, they were about asserting power and control.

People began to see their work as more valuable than the work other people did. They felt - possibly believed, though I find most beliefs to be asserted feelings - that they were better than those "others," had greater skills than those "others," were worth more to the company than those "others," and so deserved to be paid more than those "others."

This was also about the time I began noticing there were fewer women working in software shops than there used to be. I don't know if they left because they got tired of dealing with jerks or if they simply found something better to do. I suspect one led to the other.

I began seeing more "frat-boy" behavior on teams I worked with. Behavior that would have gotten you fired a few years earlier was now the norm. I did not stick at those shops very long. I apparently was not a "cultural fit." So I quit, or was transferred to another team of "old people."

At the time, I was not sure what was going on. I did not like it very much. I spent my time honing skills I saw were missing in many of the "developers" I worked with. I became good at testing. I became a "tester."

A rebellion started.

People began looking for "rapid software development" and "light development methods" and ways to get software written and delivered faster, without the huge overhead imposed by the control-and-power folks.

I remember reading articles, then a book, on this new cool approach called "Extreme Programming."

Then I remember reading about this cool newness called "Agile." At the time, I shrugged. I still kind of shrug, frankly.

I'd seen too many "hot trends that would revolutionize software development" come and go. I did my job to the best of my ability and helped other people work better.

Then some folks got the idea that "Agile" might really be a thing. There were suddenly loads of books and papers on it. There were people talking about the "whole team" being responsible for "quality."

Whatever. For some of us, that had been the way we worked years before. Back in the "dark ages."

Now, don't get me wrong. Back in the "dark ages" there always were "programmers" who wrote code pretty well, but were better at designing solutions or figuring out file systems and structures or looking for ways to exercise the software.  Everybody was expected to do some pieces of all of this - and everybody was expected to contribute in ways that made sense.

Now, people have decided that "whole team" means "everyone must learn to code."

It took 30 years (or so... ahem) for things to come almost full circle.

I'm watching people freak out over changes where "everyone is a developer" and "developers" are made the same as... everyone else. People are freaking out over that. Why?

I'm not sure. Maybe ego? Maybe there is a bit of resentment - everyone is suddenly "equal" - everyone is suddenly the same - no one is special anymore. Or worse, maybe they are not better than other people anymore?

What I tend to see is this...

There are legions of people with a shallow understanding of what "whole team" means. There is a huge, potentially intentional, misunderstanding over what "whole team" means.

If everyone "learns to code," do you really expect people to be able to write production-facing code in a matter of days or weeks or months? Do you really expect people to be able to develop meaningful test approaches to that code in a matter of days or weeks or months if they have never had to do so in the past?

Even in the "dark ages" there were people who were better at something than others. That is still true. Make use of those differences in skills, abilities and viewpoints.

Don't let your ego get in the way. Don't let manufactured differences and false dichotomies undermine your ability to work together.

If you are not sure what I mean, consider the list of items at the beginning of this blog post.

In short:
Developers - put the ego in check and realize you can't do it all yourself;
Testers - look, you'll be treated like second-class citizens as long as you act like second-class citizens;
The rest of you - chill - do what you know how to do and help everyone else do their thing.

Act professionally. Perhaps more importantly, act as a mature adult.

Contribute to the team's success.

If your ego can't handle it, go tell your mummy I hurt your feelings.

Tuesday, August 23, 2016

On the Value of Software Testers

This was originally published under the title "Considering the Value of Software Testers" in StickyMinds, July 2014. The original, unedited version appears below. - Pete
I try hard to learn what other people think about testing and how to do it well. If you are like me, you have as well. In doing so, you've also heard a variety of answers from gurus, telling us what to focus on. If so, then I suspect you'll find these ideas familiar:
  • Software testers find bugs;
  • Software testers verify conformance to requirements;
  • Software testers validate functions.
There are different versions of these ideas; they may be expressed in different ways.  Some people focus exclusively on one item.  Some will look at two.  Sometimes these ideas are presented as the best way (or the “right way”) to deal with questions around testing. 

Some organizations embrace one or more of these ideas. They define and direct testing based on their understanding of what these ideas mean.  They insist that testing be done a specific way, mandating practices (or documents) under the belief that controlling practices will ensure maximum effectiveness and best possible results. Less training time and easier switching between projects are two common reasons to do this.

Frankly, even when they work, I find the results unsatisfying. For example, the result of "standardizing" often consists of detailed scripts. These scripts direct people's efforts, which often results in actively discouraging questions. The reasons for detailed scripts are often wrapped around concepts that many in the organization have a very shallow understanding of, such as Six Sigma in software development and repeatability of effort.

In Six Sigma, variation is viewed as the cause of error. A shallow understanding of Six Sigma leads to the belief that varying from the assigned steps in a test document will result in "error" in testing, making variation between test runs a cause of deep concern.

If the "expected results" explicitly state one thing, those executing the tests will soon find themselves looking only for that thing. As Matt Heusser has often said (and I've stolen the line time and again), "At the end of every expected result is another, undocumented statement that says '... and nothing else strange happened'."

The obvious solution, then, is to direct people to look at broader aspects than what is documented as "expected results." This sets up a conundrum around what is, and is not, part of what should be looked for.

Many of us would assert that the tester should, out of responsibility and professionalism, track down apparently anomalous behavior and investigate what is going on. Now consider a team that has reduced variation in this shallow way, with defined steps that take a known period of time to execute. Then add a little time pressure. What do you think happens when they encounter something that does not fit but is not to be checked for explicitly?

The human mind ignores these types of errors - often the most important errors, or at the very least a hint that might lead to the most important error. If you doubt this, here is an exercise for you: go to Gmail or Google and look for the banner ads. Your mind has been ignoring these for years. Do you notice how large and prominent they are? Funny how you don't notice them unless you look!

Of course management can insist that testers be “professional” and investigate off-script issues, but when the testers follow that advice, they will exceed the allotted time for the “test case.” If part of their performance review and the resultant pay/bonus is tied to those measures, can we really expect them to branch out from the documented steps?

Teams that rely on "click and get reports" automated tools for functional or UI testing are set up for a similar problem. Without careful investigation of the results in both the tool and application logs, the software will only report errors in the explicit results. That means the error has to be anticipated in advance in order for the automation code to look for it.
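To make that concrete, here is a minimal sketch in Python. The transfer scenario and every name in it are hypothetical, invented for illustration - not from any real project or tool. The first check asserts only the documented expected result; the second uses pytest's caplog fixture to also ask whether anything else strange happened in the logs:

    import logging

    def transfer(source, target, amount):
        """Hypothetical system under test: moves money, logs anything odd."""
        source["balance"] -= amount
        target["balance"] += amount
        if source["balance"] < 0:
            # The "something else strange" - visible only in the log.
            logging.warning("account %s overdrawn: %s", source["id"], source["balance"])

    def test_transfer_explicit_result_only():
        a = {"id": "A", "balance": 50}
        b = {"id": "B", "balance": 0}
        transfer(a, b, 100)
        assert b["balance"] == 100  # the documented expected result - this passes

    def test_transfer_and_nothing_else_strange(caplog):
        a = {"id": "A", "balance": 50}
        b = {"id": "B", "balance": 0}
        with caplog.at_level(logging.WARNING):
            transfer(a, b, 100)
        assert b["balance"] == 100
        assert not caplog.records  # fails here, surfacing the overdraft warning

The first test is the trap described above: it reports exactly what it was told to look for and nothing more. The second at least sweeps one place where "something else strange" tends to leave tracks.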

A Different Way

I've explored the consequences of these ideas and have tried them myself in my early career in testing. I don't believe they work as broadly as many say they do. Frankly, they fail my smell test. Can I suggest a fourth definition of testing - perhaps not academically thorough, but a working definition, based on the things I have seen that actually work?

Software Testing is a systematic evaluation of the behavior of a piece of software, based on some model.

Instead of looking for bugs, what happens if we look at the software's behavior? If we have a reasonable understanding of the intent of how the software is to be used, can we develop some models around that? One way might be to consider possible logical flows people using the software may take to do what they need to do. Noting what the software does, we can compare that behavior against the expectations of our customers.
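As a minimal sketch of what "based on some model" can look like in practice - the login flow and every name here are hypothetical, and observe() stands in for whatever harness actually drives your software - consider a small table of expected transitions, walked against observed behavior:

    # Hypothetical behavioral model of a login flow: (state, action) -> expected state.
    MODEL = {
        ("logged_out", "login_ok"):  "logged_in",
        ("logged_out", "login_bad"): "logged_out",
        ("logged_in",  "view_acct"): "logged_in",
        ("logged_in",  "logout"):    "logged_out",
    }

    def evaluate(start, actions, observe):
        """Walk one plausible user flow; note where behavior departs from the model.

        `observe` exercises the real software for one action and reports the
        state the software actually lands in.
        """
        state, notes = start, []
        for action in actions:
            expected = MODEL.get((state, action))
            if expected is None:
                notes.append(f"model is silent on {action!r} from {state!r}")
                break
            actual = observe(state, action)
            if actual != expected:
                notes.append(f"{action!r}: expected {expected!r}, observed {actual!r}")
            state = expected  # keep walking the modeled path even after a mismatch
        return notes

What comes back is a list of observations, not verdicts - the software may be wrong, or the model may be.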

These observations can serve as starting points for conversations with the product owners on their needs. The conversations can incorporate the documented requirements, of course, along with the product owners' expectations and expertise. This means the project team can choose the path they wish to examine next based on its significance and the likelihood of providing information the stakeholders are interested in evaluating.

Instead of following a rote checklist, testers working with product owners, the development team, and other stakeholders can compare their understanding of the software and ask the crucial question: "Will this software meet our needs?"

Comparing system behavior with the documented requirements means that testers can help initiate and participate in discussions around both the accuracy of the requirements (do they match the expectations?) and the way those requirements are communicated, thus helping reduce the chance of misunderstanding. This helps the Business and Requirements Analysts do a better job writing requirements and positions us for conversations around how to make requirements better.

By changing what we are looking for, from specific items in a checklist to overall behavior with specific touch points, we change what we do and how we are regarded - moving testing from an activity that has to happen to get the software out the door (a cost center to be minimized) to a value-add activity.

And You 

If you have served in this industry for any length of time, you have probably been offended, if not insulted, as a tester. Perhaps someone who had never done testing a day in their life defined a process for you, and you followed it while knowing that the work you were doing was low-value and would take too long. Perhaps worse is to be given a detailed, low-variation test plan, be measured on time, and also be told to investigate - a scenario where you can't win for losing!

If that is the case, it might be time to say something like this: "Let's talk about what I do as a tester." I know, you may be scared, worried about your review. A few testers I know have been fired over this, but only a few. Consider the alternative: keeping a job you don't really want to have.

Sometimes, the way to be most effective at your job is to act as if you don't care about keeping it.
[Image: Piper Kenneth McKay at Waterloo]




Saturday, July 23, 2016

On Community, Faith and Belonging

Several things have happened lately that made me consider what it is to be part of a community - any community - and how people relate to you, and you to them, as a result.

Merriam-Webster's Dictionary defines Community thus:
  • a group of people who live in the same area (such as a city, town, or neighborhood)
  • a group of people who have the same interests, religion, race, etc.
  • a group of nations
This is a fairly common definition.

The questions I have been mulling in my mind revolve around the second definition. People with the same interests, like testing. Or maybe the same religion, or lack thereof. Or something. I'm not sure.

Simple, Obvious Communities

These are the kind that, once upon a time, I would have shaken my head at - the lofty idea that these were some form of community. For example, voluntary communities like people in schools. Any level of school works, but let's consider a college or university.

One community meets at 8:00 AM five days a week for calculus, or differential equations or... something. Maybe another community is in the intro-chemistry lecture. There is nothing stopping members of one "community" being in the other. I had the good sense not to push my luck and take courses like that at the same time. Way more work than I wanted to do - which made another form of community.

Then there are people who work for the same company. Mind you, the first company I worked for was fairly large for where I was living - some 2,000 people worked there, all told, in many buildings on their "campus."

This could be narrowed down to people who work in the same building. Or maybe, we could limit it to the same Division or Department or Team. These might be individual communities.

Not terribly long-lived, but still a community of sorts. People change positions, job functions, or departments, or leave the company altogether. They are no longer part of one of those other communities, but they may be part of a new community.

There are other communities I might belong to - like musicians, classical music performers (at one time) or blues and jazz performers (another time) or traditional/folk performers (still another time) or pipe band music performers (still another.) Each of these forms a community based on what the members do. That I was in another community that overlapped all of them, percussion performance, is immaterial. At one time in my life I was in each of these - several at once to be precise.

Then there are other communities...

These are the ones people are born into. Like Caucasian Males for me - Caucasian Females for my sisters. There is the family community we were born into. In this case, most of us don't have much say in who our parents are. None of us have much control over what group we are born into. We may change that community to a point, but some things we really can't change. We're stuck with them - and the associated communities that go with them.

There are also communities that might be chosen for us, like what religion we are raised in, if any. We might have parents who take the course of teaching us about many religions, then explaining why "we" (they) follow the religion we/they do and the church they go to. Some of us stay with that religion out of conviction and belief. Others may wander away and choose another faith. Still others may choose to abandon organized religion altogether.

Wow. That is a complex "community" configuration.

And still...

That is a pretty common model for communities built on faith or belief. You begin in one community, because that is what those around you do. Then you change yourself... or not. As children grow and learn, it is not uncommon for them to push back, question, and challenge the religious tradition they are raised in. How this period of growth is handled varies from group to group and faith to faith.

This is part of the process of determining what the person really believes, and where they fall in with religious life.

The problem there is, most religions and belief systems have core tenets of faith and expected behaviors. These boil down to, "These are the things we believe and how we act or behave; if you do not act this way and do not believe these things you are not in communion with us."

So, if the religion says "Don't eat meat" then if you eat meat are you really in that community of faith? How about "Do not drink alcohol or caffeinated beverages (e.g., coffee)" and you do anyway, are you really in that community of faith? If the religion calls for respect and protection of women and children, even at the cost of your own life, and you willfully inflict pain on them, are you really in that community of faith?

What if your faith calls for complete non-violence?

What if one of the Commandments your religion says came directly from God is "You shall not commit murder"?

Most religions and faith-based communities have ways and means of dealing with members who have "fallen away" from the faith. Many leaders will see some variance as "youthful indiscretion," while others will see similar behavior as suitable for damnation. (For me, the latter category likely have never gone through the maturing process where their faith has been challenged. They have likely never experienced the pain of trying to understand why they believe certain things and not others.)

If a person violates the teachings of their faith or religion on a regular basis, in complete disregard of the orthodoxy and of the leaders of that faith community, are they part of that community? If a person "cherry picks" items so they feel better about their own lives, do they really believe what the community believes, or are they setting out for a different community?

What about people who insist they are right, that they understand the "true meaning" of what it is to be of that faith, and who fly in the face of the leaders of that community? Are they part of that community or are they part of something else?

Do people who claim to be part of a religious or faith community, who take self-directed actions "in the name of" that community, really do so for the community? For the faith? Or maybe for something else? Are they "faithful warriors" or are they attention seekers who only feel value if they can latch onto something?

What about testing?

In testing, we see people get identified (usually by others) as part of one "school" or self-identify as part of one school. This generally means you agree with certain tenets and theories (a bit like a religion.) The "community" is based on people agreeing on those tenets.

If those tenets are pretty loose, and the first is that the tenets are decent ideas and may need to be applied differently depending on the situation you find yourself in, how do you identify as part of that group?

Are you drawn by the "names" in the school? The community?

What if you question the bold statements and assertions of those "names"? Do you still belong? What if you disagree with those statements? Are you part of the club of those who declare their view is what is real and correct? "No real tester would..."

What if you see people slapped down (metaphorically) on twitter or some other social medium by these "names"?

When questioning is not allowed, or only permitted if the question is phrased very precisely, how do you teach others the "rightness" of your position?

Are the people slapping others down acting for the community or for their own self-glory? Do they need the attention on them? Instead of the message they claim to be spreading?





Tuesday, May 17, 2016

On Quality Engineering and Testing and Defect Prevention

Some time ago, I wrote a response to a post I read extolling the virtues of "Quality Engineering" over mere testing. (You can read my response here.) Since then I have received some emails and been in some conversations on the topic. I've also seen a variety of threads on twitter related to the discussions I've had with others.

This, then, is some more of my thinking around the topic, based on what people have said to me - mostly trying to convince me of the error of my thinking.

It was explained to me that Quality Engineering is, at its heart, the prevention of bugs and problems in software. Thus, a Quality Engineer is not looking for bugs; instead, a Quality Engineer focuses on bug prevention - keeping bugs from being created in the first place.

"A good QE works to avoid bugs in software."

That was precisely what I was told by a very nice young lady. It struck me a bit like "A good AO (Automobile Operator) works to avoid potholes in the road." Apparently I was not amusing to her (maybe she lived in a city, as I do, where there are myriad potholes on nearly every road.)

There were several examples presented.

One involved a QE finding a problem in a planned change to a DB table. The QE prevented a problem by identifying the flaw in the development group's intended change. Their workflow consists of proposing a DB change, reviewing it with the development team, then with the full Scrum team, then presenting it to the DBAs for review. It was in the Scrum team review that the QE identified the problem.

Another involved a QE identifying a problem in the design of some changes to an application. Again, the QE spoke up and raised an issue during review of the design with the Scrum team.

The third example was a QE speaking out over requirements that seemed contradictory. The reason was simple: they had not been understood and had been noted down incorrectly.

Each of these was presented to me as an example of what a good Quality Engineer does. They prevented bugs from being created.

Except...

My response was that, in each of these cases, the QE found a problem or inconsistency and raised the issue. They did not so much prevent a bug as find a problem (a bug) somewhere other than the working code. They found the problem earlier in the course of software development.

This, to me, is part of the role of testing and why testers need to be involved in the early discussions.

Taking the next logical step, including a tester who is familiar with the application in the initial discussions could benefit the entire process by helping other participants think critically about what the story/change/new feature is about.

By engaging in these discussions and exploring the intent and nuances around the request, the recorded notes, and the conversations on the work, a tester might be able to head off issues while they are in the "bounce ideas around" mode - while discussions are happening around what terms or concepts mean.

In an Agile team (whatever flavour your group uses) if people are engaged in working toward better quality software, the role of a critical thinker is necessary - whatever you call it.

Some folks tend to get rather, emmm, pedantic over how words get used. Here's what I mean...

Each person in a team is trained to do something. Usually, they are better at that than the other activities needed to be done. Ideally, each person can contribute to each task that needs to be done - but their expertise in certain areas is needed to support and lead the team when it comes to doing those tasks and activities they are trained in particularly.

Some people are trained, and very good, at eliciting and discovering requirements. Some are trained in building a usable design. Some are trained in developing production code. Some are trained in database design. Some people are trained in assembling components together into a working, functioning build and/or release.

Testers have a role in each of these tasks.

Testers can help requirements be defined better.
Testers can help the design be better.
Testers can help the person writing production code write better code and execute unit tests better.
Testers can help with DB work (this may shock some people.)
Testers can help verify and validate the builds are as good as they can be.

Testers can test each of these things. It is what we do.

Getting to a position where testers are trusted, welcome and encouraged to participate fully in each of these tasks takes time, effort and gaining the trust of others on the team.

People tell me that testers only test code.

Those people have no idea what testing can be in their organization.

What some people are calling Quality Engineering tasks are, from what I have been told (very patiently, in some cases), testing functions.

Think.

Test.

Saturday, May 14, 2016

On Releases and Making Decisions

I've gotten some interesting feedback in conversation and in email on this blog post.

It generally consisted of "Pete, that's fine for a small team or small organization. My team/department/organization is way too big for that to possibly work. We have very set processes documented and we rely on them to make sure each team with projects going in has met the objectives so we have a quality release."

To begin, I'm not suggesting you have no criteria around making decisions about what is in a release or if the release is ready to be distributed to customers. Instead, what if we reconsidered what it means to be "ready" to be distributed to customers?

In most organizations doing some form of "Agile" development, there is a product owner acting on behalf of the customers, looking after their needs, desires and expectations to the best of their ability. They are acting as the proxy for the customers themselves.

If they are involved in the regular discussions around progress of the development work, testing and results from the testing, and if they are weighing in on the significance of bugs found, is it not appropriate to have them meet and discuss the state of all the projects (stories) each team is working on for a given release?

Rather than IT representatives demanding certain measures be met, what if we were to have the representatives of our customers meet and discuss their criteria, their measures that need to be met for that release?

If each team is working on the most important items for their customers first, then does it matter if less important items are not included in the release, and are moved to the next? Does it matter if a team, working with the product owner, decides to spend more time on a given task than originally scheduled, as new information is discovered while working on it?

As we approach the scheduled release date, as the product owners from the various teams meet to discuss progress being made, is it really the place of IT to impose its own measures over the measures of the customers and their representatives?

I would suggest that doing so is a throw-back to the time when IT controlled everything, and customers got what they got and had to be content with it - or they would never get any other work done... ever.

I might gently suggest that whether your customers are internal or external, we, the people who are involved in making software, should give the decision on readiness to the customers and their representatives - the Product Owners. We can offer guidance. We can cajole and entreat. We should not demand.

Who is it, after all, that we are making the software for?

Friday, April 15, 2016

On Facts, Numbers, Emotions and Software Releases

A recent study published in Science Magazine looks at communication, opinion, beliefs and how they can be influenced, in some cases over very long terms, by a fairly simple technique: open communication and honest sharing.

What makes this particular study interesting is that it was conducted by two researchers who attempted to replicate the results of a previous study also published in Science Magazine on the same topic. The reason they were unable to do so was simple: The previous study had been intentionally fraudulent.

The results of the second study were, in some ways, more astounding than those of the first. In short, people are influenced to the point of changing opinions and views on charged, sensitive topics after engaging in non-confrontational, personal, anecdote-based conversation.

The topics covered everything from abortion to gay and transgender rights - hugely sensitive topics, particularly in the geographic areas where the studies were conducted.

In short, when discussing sensitive topics, basing your arguments in "proven facts" does little to bring about a change in perception or understanding with people with firmly held and different beliefs.

Facts don't matter.

Well-reasoned, articulate, fact-based dissertations will often do little to change people's minds about pretty much anything. They may "agree" with you so you will go away, but they really have not been convinced. There are scores of examples currently in the media; I won't bore (or depress) anyone (including myself) by listing any of them.

Instead, consider this: Emotions have a greater impact on most people's beliefs and decision making processes than the vast majority of people want to believe.

This is as true for "average voters" as it is for people making decisions about releasing software.

That's a pretty outrageous statement, Pete. How can you honestly say that? Here's one example...

Release Metrics

Bugs: If you have ever worked at a shop, large or small, that had a rule of "No software will be released to production with known P-0 or P-1 bugs," it is likely you've encountered part of this. It is amazing how quickly a P-1 bug becomes a P-2 bug, and the fix gets bumped to the next release, if there is a "suitable" work-around.

When I hear that, or read it, I wonder "Suitable to whom?" Sometimes I ask flat out what is meant by "suitable." Sometimes, I smile and chalk that up to the emotion of the release.

Dev/Code Complete: Another favorite is "All features in the release must be fully coded and deployed to the Test Environment {X} days (or weeks) before the release date. All code tasks (stories) will be measured against this, and the quality of the release will be compared against the percentage of stories done of all story tasks in the release." What?

That is really hard for me to say aloud, and it is kind of goofy in my mind. Rules like this make me wonder what happened in the past to put such strict guidelines in place. I can understand wanting to make sure there are no last-minute code changes going in. I have also found that changing people's behaviors tends to work better by using the carrot - not a bigger stick to hit them with.

Bugs Found in Testing: There is a fun mandate that gets circulated sometimes. "The presence of bugs found in the Test Environment indicates Unit Testing was inadequate." Hoo-boy. It might indicate that unit testing was inadequate. It might also indicate something far more complex and difficult to address by demanding "more testing." 

Alternatives?

Saying "These are bad ideas" may or may not be accurate. They may be the best ideas available to the people making "the rules." They may not have any idea on how to make them better.

Partly, this is the result of people with glossy handouts explaining to software executives how their "best practices" will work to eliminate bugs in software and eliminate release night/weekend disasters. Of course, the game there is that these "best practices" only work if the people with the glossy handouts are doing the training and giving lectures and getting paid large amounts of money to make things work.

And when they don't, more often than not the reason presented is that the company did not "follow the process correctly" or is "learning the process." Of course, if the organization tries to follow the consultant's model based on the preliminary conversations, the effort is doomed to failure and will lead to large amounts of money going to the consultant anyway.

Consider

A practice I encountered for the first time many years ago, before "Agile" was a cool buzzword, was enlightening. I was working on a huge project as a QA Lead. Each morning, early, we had a brief touch-point meeting of project leadership (development leads and managers, me as QA Lead, the PM, other boss-types) discussing the goal for the day in development and testing.

As we were coming close to the official implementation date, a development manager proposed a "radical innovation." At the end of one of the morning meetings, he went around the room asking the leadership folks how they felt about the state of the project. I was grateful because I was pushing hard to not be the gatekeeper for the release or the Quality Police.

How he framed the question of the "state of the project" was interesting: "Give a letter grade for how you think the project is going, where 'A' is perfect and 'E' is doomed." Not surprisingly, some of the participants said "A - we should go now, everything is great..." A few said "B - pretty good, but room for improvement..." A couple said "C - OK, but there are a lot of problems to deal with." Two of us said "D - there are too many uncertainties that have not been examined."

Later that day, he and I repeated the exercise in the project war-room with the developers and testers actually working on the project. The results were significantly different. No one said "A" or "B". A few said "C". Most said "D" or "E".

The people doing the work had a far more negative view of the state of the project than the leadership did. Why was that?

The leadership was looking at "Functions Coded" (completely or in some state of completion) and "Test Cases Executed" and "Bugs Reported" and other classic measures.

The rank-and-file developers and testers were more enmeshed in what they were seeing - the questions that were coming up each day that did not have an easy or obvious answer; the problems that were not "bugs" but were weird behaviors and might be bugs; a strong sense of dread of how long it was taking to get "simple, daily tasks" figured out.

Upshot

Management had a fit. Gradually, the whiteboards in the project room were covered with post-its and questions written in colored dry-erase markers. Management had a much bigger fit.

Product owner leadership was pulled in to weigh in on these "edge cases," which led to IT management having another fit. The testers were raising legitimate questions. When the scenarios were explained to the bosses of the people actually using the software, they tried the software themselves. And sided with the testers and the developers: there were serious flaws.

We reassessed the remaining tasks and worked like maniacs to address the problems uncovered. We delivered the product some two months late - but it worked. Everyone involved, including the Product Owner leadership who were now regularly in the morning meetings, felt far more comfortable with the state of the software.

Lessons

The "hard evidence" and metrics and facts all pointed to one conclusion. The "feelings" and "emotions" and "beliefs" pointed to another.

In this case, following the emotion-based decision path was correct.

Counting bugs found and fixed in the release was interesting, but did not give a real measure of the readiness of the product. Likewise, counting test cases executed gave a rough idea of progress in testing and did nothing at all to look at how the software actually functioned for the people really using it.

I can hear a fair number of folks yelling "PETE! That is the point of Agile!"

Let me ask a simple question - How many "Agile" organizations are still relying on "facts" to make decisions around implementation or delivery?

Saturday, March 5, 2016

On Visions and Things Not There

When I was playing in an Irish folk band, one thing we did each March was visit elementary schools to play music and talk a bit about Ireland, in an attempt to get away from the image of dancing leprechauns and green beer and "traditional Irish food" like corned beef and cabbage.

One year, we were playing for a room full of kindergartners when one of them asked, "Are leprechauns real?" The teacher smiled and chuckled a bit, and for some reason the other four guys in the band looked at me, and one said "This one is yours, Pete."

I looked at the little girl who asked the question and said "Just because you don't see something does not mean it is not there." This made the teacher smile and nod. It also got us out of a pickle.

A few days ago, our tomcat, Pumpkin, was staring intently at something neither my lady-wife nor I could see. He was clearly watching something, and it was moving. He looked precisely as if he was stalking something. My lady-wife asked if I knew what he was watching - I had no idea.

Now, we live with three cats in the house. All of them, at different times, will watch something very intently. The fact that the humans could not see anything did not matter in the least.

Software is a bit like that. You know something is wonky, and you can stare at that bit all day knowing something isn't right. And not see a blasted thing.

You know something is there. You see bits that don't seem right. No one else seems to see it. You see odd behavior and sometimes you can recreate it - but often, you repeat the same steps and ... nothing is there.

So you keep looking. You might find it. You might lose interest and move on. I find it a good idea to write myself a note on what I saw and what I thought might be factors in the behavior.

Because it is likely to come back again.