Testing in a Throwaway Culture

[Image: Caterpillar 826C landfill compactor, via Wikipedia]

When was the last time that a prized possession of yours broke before its time? Did it make you angry and disappointed?  Were you surprised or were you half-expecting it to break?

Craftsmanship is a word we no longer associate with many of the things that come into our possession.  This was brought to my attention recently when I had to buy a new motor for my very pricey KitchenAid Architect II dishwasher.  As software quality professionals, we are all on the other side of this.  How many tests were you able to run?  How well did you really vet that app?  Did you understand the app?  How much of your testing went according to plan, however much planning you had?  Did the plan really matter anyway?

Last week, a good friend of mine wrote about his frustration at not having enough time to execute tests because of other test activities such as shaking out requirements, managing others, etc.  I kind of know how he feels because, as a test army of one, I am responsible for many of the same activities.  I’ve done all sorts of reports and activities that will pad my resume as a QA resource, but, in the end, this is not why I do the job that I do.

Here is a post from Chris McMahon’s blog that is, in contrast, ALL ABOUT why I am very content as a technical QA.  The utter hack-itude of the exploits described in this post is exactly the domain of the tester I try to be every day.  But then, I have the bug reports to fill out, the test plans to create, and the inevitable smoothing over of dev egos.  These things slowly but surely chip away at my day.   My friend’s blog is a description of how, for more senior test professionals, this becomes their whole job, and my friend isn’t the only tester I’ve noticed lately lamenting the strategy tasks that take up their time (you know who you are).

We live in a throwaway culture where breadth is valued way over and above depth, and it seems to me that this can heavily influence our careers, sometimes for better and sometimes for worse.  This includes software development AND QA.  I’ve worked in this type of environment, not as a tester but as a CM.  I noticed that in every role, CM, tester, or dev, as soon as people became technical experts at what they were doing, they were expected to start managing, whether they wanted to or not, whether they made good managers or not.  Am I right or am I right?  What’s missing here is an association of depth with value, on both the technical side and the management side.

What does this mean, specifically, for testing?  What does it mean to be an expert craftsman in testing?  Does it mean that I can switchblade an app with heuristics, any time, any place? Or does it mean that I will find a way to make some assessment of quality if given the most mountainous of systems to test in extremely adverse conditions?  My personal goal is to work hard at both.  I use test management activities mainly as a way to keep things DRY (don’t repeat yourself) and to get on with the tests.  It’s almost as if there is a sliding scale with test execution at one end and management of test activity at the other end.  That seems a rather one-dimensional approach, and careers are not one-dimensional.

When you are asked to stop testing so much and to start managing more, what will you say?  Are you ready to give up depth as a tester and increase your breadth as a manager?  Is this really a one-dimensional issue?  For some, and maybe even for me at some point, this can be a great decision.  In some places, maybe there is less of a tradeoff than what I’ve seen.  For some, participating more in the management process might mean better quality for an entire team.  If the entire team improves, maybe the software will break less.  If the team is testing KitchenAid dishwashers, maybe the dishwashers will break less, and I won’t have buyer’s remorse for my fancy kitchen appliances.

Love it? Hate it?  Comments are always welcome.


6 thoughts on “Testing in a Throwaway Culture”

  1. Hmm…I read through that link, and I’m not sure I’m totally on the same page. Being told to do something in a way you feel is inferior is one issue.

    There’s also seeing your job responsibilities slowly shift from what you may have originally intended.

    I think the connection you see is that I’m lamenting the lack of appreciation for deep expertise. There are companies who have this appreciation, but many are caught up in the “plug-and-play” of matrix management.

    Let me know if my interpretation is correct.

  2. He’s clearly not saying exactly what you’re saying.

    I think you’re on target that in both testing and programming so much is known about how to do the job right, but so little of that knowledge is put to work. There are many reasons for this; some are bad, some are at least understandable, but all of them make me feel like I wasted a lot of time becoming skilled at my job.

    Also, we may live in a throwaway culture, but don’t believe for a minute that quality is dead. GM died last week because, among other things, its cars were crap when Japan was making high-quality, built-to-last machines. At least in the shrink-wrap software world, people are willing to jump ship when things get bad enough. Consider Windows Vista (although I’m running it right now without too much drama).

    I want to believe that as transparency increases due to the Internet, there will be a real penalty for producing crap software, even for custom application developers. Then we will have depth as well as breadth, and maybe then testers will get their due.

  3. Well said, Gordon.

    I don’t think quality is dead, by any means. My point is that we are less encouraged to spend time going too deep with quality on any one project, and therefore, cruft happens.

    I’m totally with you about transparency. We need more, and it is happening. What will be interesting to watch is the clash between the transparency of the web and the compromises we have to make when we go back to work.

  4. I wonder why the quality of Windows Vista is considered so low.

    I know that there are only a few reasons why I dislike it, and most of them are pretty minor. The glaring problem for me is the lack of regard for the user experience, especially the poor performance. I think overestimating the tolerance most people have for waiting was a big problem. The heavy-handed, user-interrupting security features are the other issue. The best security just stops attacks without bothering the user. Other than that, the default has to be the best possible thing for the customers. It doesn’t matter if you “can shut it off” when the first impression is already made. That’s sort of like slapping someone on the face and then telling them, “You can ask me to stop.” They will ask you to stop but still have a stinging face and a bad impression of you, even if what you had done was kill a bee that was going to sting them. Instead they want you to swat the bug away or put up netting between the two of you so that no harm is done.

    My analogies have gone too far, but my point is, I think overall Vista got LOTS of testing. I mean the project went on forever! If I think about how many tests were run, I’m sure that it dwarfs even the largest projects I’ve ever worked on. Yet a few decisions being off really limited the quality and success of the end result.
