The Tester’s Paradox

This week I am taking care of a co-worker’s dog while he and his family visit the United States.  It is bittersweet because my own dog, Laika, will not be joining me in Australia.  She was two days away from being put on a plane to Sydney when my mother put her foot down and said, “no.”  There’s no arguing with mom.  End of story.

Of course, I am heartbroken, and I’m guessing that I’m not the only one.

Sniff...sniff...sigh.

However, I take Laika, my beautiful girl, with me to work every day.

A selection from Laika's bookshelf

I am a tester.  That means that I spend most of my days living in a hypercritical, hypersensitive, paranoid-beyond-belief state of awareness.  If a link changes … I FRAKKING KNOW IT.  If I find that the cheese has been moved and that users will be required to think, I WILL TELL YOU.  In fact, I WON’T SHUT UP UNTIL YOU FRAKKING FIX IT!!!!!

and yet…

This same attitude is unhealthy for teamwork, and it’s no way to treat other people.  It is easy, as a tester, to be intimidating, self-righteous, and insistent that everything be exactly as I want it, down to the pixel and bug priority.  I don’t have to work hard at making the devs I work with feel nervous, break a sweat or stutter.  At this point, I know what fear looks like because I see it every day from developers as I am walking in their direction.  I will leave the guesswork about how I ended up with these qualities to my shrink (he’s on speed dial, and I know yours is too, so don’t bother trying to play stupid; we’re all testers here).  Testing, at its worst, is all about blame and intimidation.  Think about it.  Have you ever gone looking for problems just to prove that a developer’s work was crap?  I think we all have.  In that case, testing has nothing to do with making better software and everything to do with power.

This used to be how I treated people, in general.  My strategy for getting someone to do something used to involve the following: manipulation, blame, shame, fear, intimidation and also lots and lots OF SHOUTING.  I used to think that customer service representatives were for screaming practice.  Of course, I feel quite bad about this now.  At the time, I thought it was the only way.

I persisted with this horrible attitude about motivation and working with others until I got Laika.  When I got her as a puppy, I knew that I did not want to ever

  • hit her (even if she pooped on the floor)
  • scream at her
  • or make her feel bad for something beyond her control

Knowing this, and knowing that my own tactics for discipline were absolutely terrible, I signed Laika and myself up for dog training.  What I didn’t know about dog training is that it’s actually dog-owner-training.  Rather than training us dog owners how to scream at our dogs, we were trained on how to be sure our dogs understood whatever the hell it was we wanted them to do.  It was explained pretty clearly to us owners that if our dogs didn’t understand what we wanted, we had no one to blame but ourselves.  The assumption here is that dogs want to do what you tell them.  Most dogs don’t want to lead (unless they are alpha, and if you chose the alpha, you can’t blame the dog for being who they are).

What does it mean to use positive reinforcement?  It means that when you tell the dog to do something and they do it, you give them lots of treats and praise, mainly praise unless you want your dog to be fat.  But praise is fun!  I have more fun saying, “Oh you’re so great” than if I had to scream, “bad dog.”  The power of positive reinforcement was most apparent when I trained Laika to use the command, “leave it.”  This command means “drop whatever that is and walk away.”  If Laika ever picked up something I didn’t want her to pick up or she was going somewhere I didn’t want her to go, I would say, “leave it.”  She would immediately turn around and walk away.  When she did that, because she did what she was supposed to do, I could give her positive reinforcement by saying, “Good dog! Good leave it!”  Instead of screaming at her for touching something I didn’t want her to touch, I could tell her “good dog” for doing what she was supposed to do.  No drama!  No hard feelings!

Laika...shake!

I worked with Laika every day on the commands we were taught in dog class.  She loved it.  I loved working with her.  It was a great feeling to understand each other and to reach the goal of communicating together.  This extended way outside of the classroom.  It meant that I could take her anywhere she was allowed and know that she would listen to me.  We became very comfortable with each other because we knew what to expect from each other and we enjoyed each other’s company.  There was respect and love on both sides.

Good Dog!

Working through this training changed my whole attitude.  I realized that positive reinforcement applies to working with people as well, and it’s something that I use every day.  I still log defects, but I also take every opportunity I can find to tell devs and other teammates when they do something well.  Here are some examples:

  • I’m glad you brought that point up in our meeting
  • Thank you for identifying that risk
  • I really like this feature.  These issues need fixing, but I think people will love it.  Good job!

But this is about much more than just saying nice things.  It’s at the heart of why I like working in software.  It’s because I enjoy knowing that I’ve reached a goal by communicating well with others and treating others the way I want to be treated.  I might be an outspoken individual, but I prefer team wins.  Team wins are not about ego, power or intimidation.  There are no extra points for making developers feel shitty when they are already working as hard as they can (which I believe they are in most cases).  These wins are about a team picking up each other’s slack and getting each other through when things are confusing or difficult.  Let’s face it: in software, as in life, confusing and difficult is most of the time.  For testers, it’s about finding the weaknesses in the team and getting the team to address those with the least amount of damage possible.  When I say damage, I include emotional damage.  Minimizing this damage is not a soft skill.  It is an extremely hard skill which takes more patience, practice and time than causing the damage in the first place.

My dog taught me this.  She’s amazing, and I miss her.

Thanks Laika :)

CAST 2010: Software Testing the Wiki Way

Rock on Atlassian!
Image by Lachlan Hardy via Flickr

On Saturday, I will be flying from Sydney to Grand Rapids, Michigan for this year’s CAST.  I feel quite privileged for two reasons: the organizers of CAST have included a presentation of mine in their scheduled sessions, and Atlassian (in particular, my boss at Atlassian) has seen enough potential in the presentation to fund my trip.

Software testing with a wiki is a topic that I started working on quite a while ago, and it’s still an ongoing process.  My presentation began as a potential blog post last winter.  After a bit of nudging by Matt Heusser and Chris McMahon, I submitted my would-be blog post as a CAST session and it was accepted.  It is quite humbling to be approached and encouraged by testers/people I admire the way I admire Chris and Matt.  I was also humbled by the retweets I got from testers showing their desire to see me present at CAST.

I could take the easy way out with my topic and write up another “talking head” presentation, but I’ve decided to go all WTANZ and do this thang Aussie-style (fearlessly and head-first).  I’ve made a prezi which I will quickly present at the beginning of the session.  I’m posting it here, because I hope everyone looks at it before they show up.  I hope y’all bring your laptops to the session charged and ready.  After I breeze through the prezi, the rest of the session will be all testing.  It will be done through a wiki.  It will be fun.


Facing the Future

A Streamgraph of clintjcl's listening habits.

There is something interesting going on at Facebook that has nothing to do with their exceptionally awful privacy policy.  It does have something to do with data visualization and everything to do with test automation.  A few years ago, I read that Facebook had hired Lee Byron.  Lee has done a lot of work on streamgraphs.  His initial claim to fame was the streamgraph he created for visualizing listening history on Last.fm.  If you go to Flickr and search for Last.fm, you will see a lot of these.  Here’s the stunning part of that story: this was a student project.  He ended up working for the New York Times’s famous graphics department over a summer and assisting in the production of this glorious and award-winning streamgraph of box office revenue.  I haven’t seen him mentioned since I read that he went to work for Facebook.

Today, I read this post about Facebook’s testing. This was an interesting read because:

  1. Despite a privacy/data policy that’s so ridiculous I rarely use it anymore, Facebook works and works well.
  2. I see more and more automation in my future.  Definitely not less.
  3. It also mentions that Facebook has hired Charley Baker, one of the lead developers/architects of Watir.  You can listen to some podcasts with him here.
  4. There is a chance, however small, that Charley Baker will, one day, meet up with Lee Byron.

Microsoft is easy to beat up over their test automation practices, partly because their products are so many and so ubiquitous.  Let’s face it: it’s easy to hate on Windows.  Facebook, however, has one product, and it does not go down like Twitter.  Chances are you have a friend who uses Facebook way, way too much because it’s working for them.

If software is lucky, Lee Byron and Charley Baker will find themselves working together at some point.  I don’t know either of these people and this scenario I have in my head is probably unlikely to happen, but one can dream.  If they did meet, and hit it off, I am imagining the great things that would come out of it for the visualization of software testing.  This Facebook post is a great reminder of why these areas in software come together for me.  It is also a look at what one company on the edge of technology is doing to make their software better.  You can complain about it, deny it and hope it goes away (it won’t) but there’s no denying the fact that you’ve probably been logging into Facebook for quite a while without knowing that it’s mostly automated tests behind the scenes.  Intriguing…

WTANZ 06: Visually thinking through software with models

Paper for Models (or scribbles)

Last year, I visited Microsoft to give a presentation.  Alan Page, one of the authors of How We Test Software at Microsoft was my host.  When he introduced me to the audience, he gave me an autographed copy of his book and a pad of quadrille paper (paper with squares instead of lines).  He told me it was for drawing models.  Apparently this is quite the popular way for testers to understand software at Microsoft (and I hear, at Google as well).  I’ve read a lot of HWTSM but I must admit, I had not looked very closely at the model based testing chapter.

The paper Alan gave me has made it to Australia, and I’ve been using it for keeping up with life and stuff.  Every time I look at it, however, I keep thinking, “What the hell is this model-based testing about?”  So I decided we would check it out for this week’s weekend testing.  I told Alan of my plans on twitter and he replied:

Hmm…keeping this in mind, off I went to read his chapter on model-based testing.  The models are really just finite state machines.  If you’ve taken a discrete math class, you’ve seen these.  If you haven’t…it’s pretty simple.  Have a look at Wikipedia’s example and you’ll see what I’m talking about.  In reading through HWTSM, I noticed that emphasis was placed on using models to understand specifications.  Dr. Cem Kaner’s sessions with Weekend Testing from a while ago confirmed this for me.  I read through the transcript from Weekend Testing 21 in which Dr. Kaner describes his suggested use of models.

Dr. Kaner suggests two ways in which models can be helpful.  The primary reason he suggests for using a model is when there is so much information that a tester is having difficulty wrapping their head around all of the possible states they can create and need to test within a system.  In this case, model-based testing is used to deal with information overload.  It looked to me as though he was less concerned with necessarily having a finite state machine and more concerned with the tester having some way of visually mapping the system in a way that made sense.

The secondary reason for using a model was as a way to approach sensitive devs about holes in their logic.  Saying to a dev, “Here’s this diagram I made of the system, but I seem to have a gap; can you help me fill this in?” is much less confrontational than approaching them with their spec and telling them they forgot stuff.

Dr. Kaner’s primary reason for using a model intrigued me because it is contrary to Alan Page’s suggestion in HWTSM that models can get too big.  Dr. Kaner is using models as a remedy for information overload and he uses a decision tree he made showing reasons to buy or sell stock as an example.  It’s not a small picture, but maybe that’s because I’m not used to testing with models or even looking at them on a daily basis.  Here’s what Alan has written about the size of models:

Models can grow quite quickly.  As mine grow, I always recall a bit of advice I heard years ago: “‘Too Small’ is just about the right size for a good model.”  By starting with small models, you can fully comprehend a sub-area of a system before figuring out how different systems interact.

-Alan Page, How We Test Software at Microsoft.  p162 (inspired by Harry Robinson)

This week’s mission is to make some models and compare notes about how successful this strategy is for varying levels of model size/complexity.  Since all iGoogle gadgets have some type of specification, I picked a few Google gadgets:

Small:  The Corporate Gibberish Generator

Medium:  XKCD

Large: Thinkmap Visual Thesaurus

Since models can also be useful for APIs, if anyone is feeling super-geeky, you can try modelling some API calls from Twitter.  I just blogged about using Twitter with curl, so that might help those choosing to do API modelling.  Alan writes about how modelling can be useful for testing APIs, and it made me very curious.  (Off topic, I have to wonder: what happens when you throw some models at a genetic algorithm?  Learning?  Useful tests?  Who knows.  I’m saving that one for later.)

I also have what I’ll call a modeling “experiment.”  This may or may not work.  It may or may not teach you something, but I think it will make your afternoon/evening/morning interesting to say the least.  This link is to a painting in the Museum of Modern Art.  The painting is The City Rises by Umberto Boccioni.  As I read about Dr. Kaner’s approach of using modeling to combat information fatigue, I was immediately reminded of this painting.  There is so much going on in this painting and, if you explore, you will find relationships that make it a masterpiece.  Can a model pull out and define the power of this painting?  Let’s find out.

For graphics software, I suggest Gliffy.  It is browser based so no worries about operating system, and for our purposes, sign-up is not required.

90 Days of Manual Testing

Atlassian + Tourists
Image by Marlena Compton via Flickr

My “probationary” period at Atlassian has recently finished.  This period has lasted 90 days, although I feel like I’ve been here much longer.  Lots has happened since I’ve shown up in Sydney.  As was pointed out to me on twitter by @esaarem, I’ve participated in/facilitated 4 sessions of Weekend Testing Australia/New Zealand.  I’ve turned in a paper for CAST.  I flew back to the States for a brilliant Writing-About-Testing conference, and I went on a vision quest, of sorts, in the canyons of Grand Gulch.

What hasn’t shown up on my blog is all of the testing I’ve done for Confluence.  This testing and work environment is such an utter departure from what I was doing previously.  Before, I was looking at a command line all day, every day and writing awk and shell scripts as fast as I could to analyze vast amounts of financial data.  This was all done as part of a waterfall process which meant that releases were few and far between.  To my previous boss’s credit, our group worked extremely well together and he did as much as he could to get the team closer to more frequent releases.

I am now testing Confluence, an enterprise wiki, which is developed in an Agile environment and completely web-based.  I haven’t run a single automated test since I’ve started so it’s been all manual testing, all the time.  This doesn’t mean that we don’t have automated tests, but they haven’t been any responsibility of mine in the past 90 days.  My testing-focus has been solely on exploratory testing.  So what are my thoughts about this?

On Living the “Wiki Way”

Since everything I’ve written at work has been written on a wiki, I haven’t even installed Microsoft Office on my Mac at work.  I’ve been living in the wiki, writing in the wiki and testing in the wiki.  If the shared drive is to be replaced by “the cloud,” then I believe the professional desktop will be increasingly replaced by wikis.  Between Atlassian’s issue tracker, JIRA, and Confluence, there’s not much other software I use in a day.  Aside from using Confluence to write test objectives and collaborate on feature specifications, I’ve been able to make a low-tech testing dashboard that has, so far, been very effective at showing how the testing is going.  I’ll be talking about all of this at my CAST session.

On the Agile testing experience:

For 5 years, I sat in a cubicle, alone.  I had a planning meeting once a week.  Sometimes I had conversations with my boss or the other devs.  It was kind of lonely, but I guess I got used to the privacy.  Atlassian’s office is completely open.  There are no offices.  The first few weeks of sitting at a desk IN FRONT OF EVERYONE were hair-raising until I noticed that everyone was focusing on their own work.  I’ve gotten over it and been so grateful that my co-worker, who also tests Confluence, has been sitting next to me.

During my waterfall days, I had my suspicions, but now I know for sure:  dogfooding works, having continuous builds works, running builds against unit tests works.

On Manual, Browser Based Testing:

This is something that I thought would be much easier than it was.  I initially found manual testing to be overwhelming. I kept finding what I thought were bugs.  Some of them were known, some of them were less important and some of them were because I hadn’t set up my browser correctly or cleared the “temporary internet files”.  Even when I did find a valid issue, isolating that issue and testing it across browsers took a significant amount of time.  All of this led to the one large, giant, steaming revelation I’ve had in the past 90 days about manually testing browser based applications:  browsers suck and they all suck in their own special ways. IE7 wouldn’t let me copy and paste errors, Firefox wouldn’t show me the errors without having a special console open and Apple keeps trying to sneakily install Safari 5 which we’re not supporting yet.

Aside from fighting with browsers, maintaining focus was also challenging.  “Oh look there’s a bug.  Hi bug…let me write you…Oh!  There’s another one!  But one of them is not important…but I need to log it anyway…wait!  Is it failing on Safari and Firefox too?”  I don’t have ADD, but after a year of this I might.  Consequently, something that suffered was my documentation of the testing I had done.  I was happy not to have to fill out Quality Center boxes, but it would be nice to have some loose structure to use per feature.  While I was experiencing this, I noticed a few tweets from Bret Pettichord that were quite intriguing:

Testing a large, incomplete feature. My “test plan” is a text file with three sections: Things to Test, Findings, Suggestions
1:53 PM Jun 22nd via TweetDeck
Things to test: where i put all the claims i have found and all my test ideas. I remove them when they have been tested.
1:54 PM Jun 22nd via TweetDeck
Findings: Stuff i have tried and observed. How certain features work (or not). Error messages I have seen. Not sure yet which are bugs.
1:55 PM Jun 22nd via TweetDeck
Suggestions: What I think is most important for developers to do. Finishing features, fixing bugs, improve doc, whatever.
1:57 PM Jun 22nd via TweetDeck

This is something I’m adding to my strategy for my next iteration of testing.  It made me laugh to see this posted as tweets.  Perhaps Bret knew that some testing-turkey, somewhere was gonna post this at some point.  I’m quite happy to be that testing-turkey as long as I don’t get shot and stuffed (I hear that’s what happens to turkeys in Texas).  After I do a few milestones with this, I will blog about it.

Because of my difficulties with maintaining focus, I’ve now realized that while it’s easy to point the finger at developers for getting too lost in the details of their code, it’s just as easy for me, as a tester, to get lost in the details of a particular test or issue.  I am a generalist, but I didn’t even notice that there was a schedule with milestones until we were nearly finished with all of them.  That’s how lost I was in the details of everyday testing.  Jon Bach’s recent blog post resonates with me for this very reason.  He writes about having 20 screens open and going back and forth between them while someone wants updates, etc.  Focus will be an ongoing challenge for me.

One of the few tools that has helped me maintain my focus is using virtual machines for some of the browsers.  They may not be exactly the same as using actual hardware, but being able to copy/paste and quickly observe behavior across the different browsers was hugely important in helping me maintain sanity.

The past 90 days have been intense, busy and fascinating in the best possible way.  Does Atlassian’s culture live up to the hype?  Definitely.  I’ve been busier than at any other job I’ve ever had, and my work has been much more exposed, but I’ve also had plenty of ways to de-stress when I needed it.  I’ve played foosball in the basement, I laughed when one of our CEOs wore some really ugly pants (that I suspect were pajama bottoms) to work, I got to make a network visualization for FedEx day, and my boss took me out for a beer to celebrate the end of my first 90 days.  I like this place.  They keep me on my toes, but in a way that keeps me feeling creative and empowered.

By the way, Atlassian is still hiring testers.  If you apply, tell ’em I sent ya ;)


Playing with REST and Twitter

At my new job, we all take a particular specialization.  One of my co-workers helps the rest of us learn about testing security while another champions the testing of internationalization.  We share what we learn about our specializations with each other so that everyone benefits.  When I was asked what I wanted my specialty to be, it didn’t take me 2 seconds to say, “I wanna specialize in API testing.”  This means I need to know about REST (REpresentational State Transfer).

Here is the simplest explanation of REST I can muster: you use a URL to call some method that belongs to another application.  So, basically, it’s using the basic HTTP calls of POST, GET, PUT or DELETE.  The data that comes back is usually either XML or JSON.

When I was at the Writing-About-Testing conference, our host, Chris McMahon knew that I was trying to brush up on REST so he put together a few slides and did a talk for the group. While he was talking about it, I noticed @fredberinger tweeting about a slidedeck for a discussion on the management of twitter’s data. It was serendipitous because Twitter uses REST heavily.

Looking through the slides led me to googling about REST and Twitter, which led to a fun discovery.  If you have a Mac, you have the “curl” command.  Straight from the man page: “Curl offers a busload of useful tricks.”  At its simplest, curl is a command for getting data.  What makes curl great is that it allows you to submit commands that normally require interaction with a few screens.  For example, you can use curl to submit authentication credentials.  Curl will also retrieve data for you after you’ve submitted your request.  It does all of this with one command line.  In the world of shell scripting, since curl will return XML or JSON data, the data is easily saved off into an XML or JSON file.
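As a quick sketch of what that saving-off looks like (the Twitter v1 endpoint and Basic Auth credentials shown in the comment are placeholders for long-retired specifics, so a local file stands in for the live API here):

```shell
# Illustrative shape of the call -- the endpoint and credentials are placeholders:
#   curl -u marlenac:**** -o timeline.xml http://api.twitter.com/1/statuses/friends_timeline.xml
# The same one-liner works against any URL curl understands.  A local file
# plays the part of the API below so the example runs anywhere:
printf '<status><text>hello from curl</text></status>' > /tmp/status.xml
curl -s -o /tmp/timeline.xml file:///tmp/status.xml   # -o saves the response body to a file
cat /tmp/timeline.xml
```

From there, the saved XML or JSON file is ready for whatever shell script needs it next.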

This means that curl can be used to interact with Twitter’s api calls.  Since I’m on twitter way more than I should be (my husband will back me up on this claim), I thought this would be an excellent way to do some playing with REST calls.

This document from Twitter’s wiki has some examples of using REST with curl.  It’s down at number 8.  I am posting my own examples as well.

Example of a GET:

curl -u marlenac:**** http://api.twitter.com/1/statuses/friends_timeline.xml

This gets some of the most recent updates in my timeline.  I noticed that the XML retrieved 17 updates at 11:00 pm in Sydney.  I’m not going to post them all, but here is a screenshot of one status update from Andrew Elliott, aka @andrew_paradigm.  There’s a lot more information here than I see in TweetDeck.

Status update in xml

Notice that the URL above ends in “xml.”  One aspect of REST is that you have to know which data types are valid so that you can state that in the request.  Now I’ll make the same call, but change the ending to “json” like so:

curl -u marlenac:**** http://api.twitter.com/1/statuses/friends_timeline.json

This call produces what should be the same data, but in the JSON format.  If you are not familiar with JSON, it’s just another data format.  If you do not like the “pointy things” aspect of XML, you might prefer JSON.  Here is a screenshot of the same status update in the JSON format.

Status Update in JSON

The JSON that was returned did not have any line feeds.  This would make it easier to parse since there is only a need to look for braces and commas.  I inserted the line feeds in the first half of the status so that you can make sense of the data format.
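If you do want those line feeds back while eyeballing a response, one quick option (assuming a Python install, which Macs ship with) is to pipe the JSON through Python’s json.tool module.  The sample payload below is a stand-in for a real API response:

```shell
# A one-line status like the API returns (sample data, not a real tweet):
json='{"id": 123, "text": "a lovely day in Sydney", "user": {"screen_name": "marlenac"}}'

# json.tool re-indents the JSON, inserting the line feeds for human eyes
echo "$json" | python3 -m json.tool
```

In a pipeline you would replace the echo with the curl call itself.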

Just as it is possible to get data from Twitter with REST, you can also make updates from the command line using a POST command.

curl -u marlenac:*** -d status="It's a lovely day in Sydney" http://api.twitter.com/1/statuses/update.json

A command line tweet

Here is the JSON that resulted from that update:

Update status JSON

This is obviously just the beginning of my playing around with REST.  I now have to practice using REST with the Atlassian product I test, Confluence.  While I was putting this post together, I was thinking about what I would want to test in these API calls.  Since I am learning about this, I welcome input about tests that I could add to this list:

  • Check that the information retrieved from different data formats is the same.
  • Test the different parameters that are possible with each REST URL and make sure that they work.  The API should only accept well-formed parameters and, if a parameter is unusable, should return an informative error message.
  • Check that authentication works and that users are only able to make calls within their access rights.
  • Test the limits of how many calls can be made.
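As a rough sketch of the first idea on that list (the sample payloads stand in for live friends_timeline responses, and Python does the parsing), a shell check might pull the same field out of each format and compare:

```shell
# The same status fetched as XML and as JSON should carry identical data.
# The payloads below are stand-ins for real API responses.
xml='<status><text>a lovely day in Sydney</text></status>'
json='{"text": "a lovely day in Sydney"}'

# Extract the text field from each format
from_xml=$(echo "$xml" | python3 -c 'import sys, xml.etree.ElementTree as ET; print(ET.fromstring(sys.stdin.read()).findtext("text"))')
from_json=$(echo "$json" | python3 -c 'import sys, json; print(json.load(sys.stdin)["text"])')

# The check itself: flag any divergence between the two formats
if [ "$from_xml" = "$from_json" ]; then echo "formats agree"; else echo "MISMATCH"; fi
```

The same shape of script could loop over several calls and both formats once pointed at a live endpoint.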

If you look through the Twitter documentation, you will notice that they place limits on the number of calls that can be made to some URLs.  This is an indication of the fact that REST can be extremely useful for obtaining lots of data for the purpose of aggregating it.  This aggregated data can make a much better data source for visualizations than, for example, comma-delimited data.  One reason I haven’t been doing much visualization lately is because my inner Scarlett O’Hara has vowed that “as Tufte is my witness, I will never use comma-delimited data ever again!”  (You can laugh, and believe me, I understand that rules were made to be broken.)

For those who have read this entire post, here’s a special treat.  If you are on a Mac, you may or may not be familiar with the “say” command.  Oliver Erlewein turned me onto this during our recent Weekend Testing session.  If you type: say “Shall we play a game?” your Mac will, vocally, ask you, “Shall we play a game?”  There’s also a parameter that lets you choose the voice.  I decided to try piping the REST calls I made using curl to the say command.  I could “say” what happens, but that would ruin the surprise.  ;)  Have some fun with that.

A Desert Tale of Rocks and Ruins

Hiking into the Canyon

After attending the Writing About Testing conference in Durango (which you can read about here), I spent a few days in the desert canyons of Utah.  Finishing a masters degree, presenting at PNSQC, getting a new job and moving to a different hemisphere have all taken their toll.  Thus, my challenge was to clear all the shit out of my head that’s been accumulating for the past 12 months.

I started out with a pack full of “stuff” I thought I would need to survive in the desert.  The picture to the left shows me as I’m hiking down into the canyon.  I’ve only ever backpacked in mountainous environments, such as the Appalachian mountains of the Eastern U.S., where there are plenty of streams and water is never a problem.  This time, I was hiking in a place where the environment is so harsh and water is so scarce, it is all but inaccessible for 2/3 of the year.  Thus, I filled up my pack with lots of water and the “stuff,” intent on keeping nature at bay.

The area of Utah I visited was formerly inhabited by a Native American people called the Anasazi.  From what archaeologists can tell, the Anasazi were able to thrive for quite a while in a place modern-day humans would call unliveable.

For the first couple of days, I was very careful not to get my stuff too dirty.  I was worried about getting my camping equipment through customs in Sydney because they are strict about camping equipment being clean on re-entry.  As we hiked on, however, I quit caring and began to let the sand of the desert into my boots and quiet of the desert into my head.

The canyons were full of ruins and rock art.  The Anasazi are long gone, but the dry desert air has preserved many of their dwellings and art.  My host, Chris McMahon, calls it, “the museum of experience.”  We were able to walk right up to the art and look through the dirt for pottery shards (which we left in place).  I put myself in the shoes of an Anasazi artist as I sketched the figures I saw carved and painted on the sandstone walls.

You were here and so was I

The art in this desert was not on a canvas, and because its creators are so long gone, there is no way to know exactly why it exists.  The Hopi believe they are descendants of the Anasazi, so they probably have some very good ideas, but there will probably never be solid answers about the creation history of this art.  I love that.  Because I love questioning and imagining, I made up a thousand stories for every painting I saw.  I devoured every paint stroke and compared it to every other paint stroke I saw.  If the wind blew while I was looking at a painting, I questioned the direction in which it was blowing and whether or not the painter felt the wind coming down the canyon the same way I was feeling it as I observed the results of their labor.

One way to get a more intimate connection with any piece of art is to find your own way of reproducing or re-interpreting it.  I took a small sketchbook with me and sketched out a ruin and some of the paintings.  I’ve also sketched a few more paintings from photographs my husband and I took.  Although the Anasazi paintings are quite ancient, I found figures that were well-drawn by a practiced hand.  “I don’t know who you were, but you did some good work here,” I found myself whispering as my eyes looked over the figure you see in the photograph below.

Painted by a well-practiced hand

As I immersed myself in the evidence of an ancient culture, I began letting go of the present.  My mind began to wander past the 140 character restrictions of my everyday life and the ruins of my own existence.  As my thinking shifted, my physical needs changed as well.

By the third day, I had taken everything I hadn’t yet used or worn and sheepishly stuffed it into the bottom half of my pack.  Why did I bring two pairs of pants?  I guess I thought the desert would be cold (ha ha).  I also didn’t need my rain jacket.  There were a few other items I didn’t need either, and they were much on my mind as I dragged it all on my back through the heat and the sand.

Two-Story Ruin

My husband and I were going through a similar dilemma with our tent.  On night one, we set up the tent and the rainfly we had brought.  The second night, we ditched the rainfly because it was hot, even at night.  On our third and last night we finally came to our senses and slept outside, not even bothering to pitch the tent.

I’ve always had a fear of night creatures when I’m camping.  I have heard bears and mountain lions growl at night.  On one trip, an animal brushed against our tent and I couldn’t go to sleep afterward.  Outside, in the canyon, I lay awake in my sleeping bag, watching the light cast by the moon on the canyon wall across from our campsite.  I thought about why my fear had vanished.

Aside from the environment itself, there is nothing to fear in the desert.  Everything except for the animals small enough to subsist on the bare minimum of water has already fled.  I had succeeded in leaving all of my anxieties about nocturnal predators and life itself behind. I went to sleep in the light of the moon, picturing the motion of hands making brush strokes over warm sandstone.

We hiked out the next morning.  We had been hiking through sand for most of our trip, and my feet were relieved to finally feel the earth pushing back against them.  As we ascended out of the canyon, thoughts about work and “real life” began to come back.  I made peace with them as we drove away from the wilderness area.

This trip allowed me to gather my strength, and I felt fearless as I left the desert.  It was the same feeling I’ve had before when I’ve ascended from a cave on a rope.  Once you’ve crawled out of a 90-foot pit with nothing holding you up except for a rope, priorities shift and you remember what you really care about in life.  The same holds true for slogging through mile after sandy mile in Grand Gulch.  Fear has departed.

A View of Grand Gulch

Post Context: Here are the tweets that led to my diversity post

Jolly Roger flown by Calico Jack Rackham. Bott...
Image via Wikipedia

One rule I’ve had for myself since I started this blog is to do my best not to write posts in direct reaction to what someone else has said.  This blog is about me.  It is my writing.  It shows what’s on my mind.  There are very few situations in life when I can unabashedly and honestly say, it is all about me.  This blog is it.  There hasn’t been a post about diversity before my previous one because I’m not thinking about gender stuff most of the time.

I broke my own rule with my last post, and I did it for a good reason.  If you want to know why I wrote that last post, please take a look at this twitter transcript put together by Rick Scott.  He also saw the whole thing happen and has blogged his own reaction to it.

I was pretty angry as the tweets unfolded, but most of what I said stands.  The only tweet I would change is the one where I said the context school is a pile of crap. That was wrong of me. I know many people involved in the context school of testing who have much more fair-minded ideas about diversity and gender than those expressed by the Bach brothers in the transcript.  Unsurprisingly, James Bach has blocked me from his twitter account.  Jon Bach has also blogged his version of what happened on twitter.  I think it is important for people to have the opportunity to draw their own conclusions from the actual conversation rather than relying solely on one person’s account, which is why I’ve stayed pretty silent about this for the past week.  I love my blog, and if I can’t write with respect, I don’t see any point in writing at all.

The irony in all of this is that I still think James Bach’s contributions to software and software testing are brilliant.  We do not, however, see eye to eye on diversity or even, as the transcript points out, workplace ethics.

Thus, I am officially hoisting my own pirate flag as an agitator for women’s empowerment in technology.


My post about gender and diversity

Stump in Red Hills
Image by cliff1066™ via Flickr

This is a post I’ve put off writing on purpose because it’s not my favorite topic of discussion.  That’s not because I feel shy about it; it’s because people have usually already made up their minds on this particular topic, which makes the opportunity cost of the discussion high.  I’ve noticed Lisa Crispin and others making valiant efforts to have this discussion in testing.  I agree with her and the others involved that it is time.

We need to talk about the role of gender and diversity in testing.

I am tired of hearing about how my being a woman is important to the way I test.  It’s a poor definition of “woman” that I don’t believe holds up well if it’s really dissected.  There are many different ways to be a woman, and I’m not going to highlight all of them here.  I’ll just point out one stereotype that needs to go:  women have babies.  I don’t have babies and I don’t know that I’ll ever have a baby.  In fact, I have plenty of women friends who never want to have a baby.  Are we still women?

The flip side of this is that each person has differences that make them valuable on a test team.  Hopefully these advantages are obvious enough that there’s no need to go through that argument.  The problem arises when we start stereotyping individuals into monolithic groups that are actually quite varied.

Lately I’ve been thinking about this in terms of levels of measurement.  Maybe that’s because the book I learned this from, Stephen Kan’s Metrics and Models in Software Quality Engineering, really goes for the throat in the examples it uses to illustrate levels of measurement.

Nominal: Classifying elements into categories.  Kan uses the example of religion by saying that,  “if the attribute of interest is religion, we may classify the subjects of the study into Catholics, Protestants, Jews, Buddhists, and so on.”

Ordinal: Ranking is introduced.  Kan writes that, “we may classify families according to socio-economic status: upper class, middle class, and lower class.”

Interval Scale: At this level, there are exact, standardized differences between points of measurement.  Elements can be compared using addition and subtraction, which depends on having some standard unit of measurement.  Kan uses a KLOC example to illustrate this one: “assuming products A, B, and C are developed in the same language, if the defect rate of software product A is 5 defects per KLOC and product B’s rate is 3.5 defects per KLOC, then we can say product A’s defect level is 1.5 defects per KLOC higher than product B’s defect level.”

Ratio Scale: This level is differentiated from the interval scale only by the presence of an absolute zero.
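To make the four levels concrete, here is a small Python sketch of my own (the values and category names are made up for illustration, apart from Kan’s defects-per-KLOC figures). The point is which comparisons are meaningful at each level, not the code itself:

```python
# Nominal: categories support only equality/inequality checks.
religion_a, religion_b = "Catholic", "Buddhist"
print(religion_a == religion_b)  # equality is the only meaningful comparison

# Ordinal: ranks can be ordered, but differences between ranks mean nothing.
status_rank = {"lower": 1, "middle": 2, "upper": 3}
print(status_rank["upper"] > status_rank["lower"])  # ordering is meaningful
# (status_rank["upper"] - status_rank["lower"] is NOT a meaningful quantity)

# Interval: differences are meaningful against a standard unit.
defect_rate_a = 5.0   # defects per KLOC, from Kan's example
defect_rate_b = 3.5
print(defect_rate_a - defect_rate_b)  # A is 1.5 defects/KLOC higher than B

# Ratio: an absolute zero makes ratios meaningful as well as differences.
print(defect_rate_a / defect_rate_b)  # "A's rate is ~1.43 times B's"
```

Each level inherits the meaningful operations of the levels below it; the trouble starts when we apply higher-level arithmetic to lower-level data, which is exactly what crude labels for people invite us to do.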

Where do humans fit on this scale?  The classifications we have for each other are nominal and ordinal categorizations, but I don’t think that the levels of measurement come anywhere close to defining the measure of human experience.  Gender is what I get thrown in my face because I happen to have a vagina.  Never mind the fact that I am the one earning the money in my family, I don’t have children and I don’t wear pink or even bake.

There is a nasty undercurrent in testing at the moment that tries to define me as a “woman tester.”  You don’t have to look very hard to find it.  When I see it, I will call it out, and I will call it out loudly.  I call it out because it undermines not just the hard work done every day by women who show up for their tech jobs, but also the respect shown to women by the hordes of male geeks who want things to be better.  Guys:  I hear you.  I know you want me to feel happy and comfortable at work.  I know you want more diversity in testing and technology.  It means the world to me that you feel this way.  I hope that we are far enough along with this problem that others, male, female, transgendered, will call it out with me.

So if you are among those who think we all ought to be wearing badges announcing how great it is that we fit some cultural stereotype/straitjacket, I hope you take some time to rethink that stance.  It’s a waste of time we could be spending on other problems in testing.


Here’s the story behind the testing dashboard tweets

Low-Tech Dashboard in Confluence

I can’t believe just how hungry testers are for the low-tech testing dashboard.  It’s a great illustration of how we are evolving in our desire for tools that do not push us around or presume to tell us how we should be testing.

At my last job, I ditched Quality Center for a test cycle and decided to use James Bach’s low-tech testing dashboard with my team’s incredibly unsophisticated dokuwiki.  There was nothing expensive involved and certainly no rules… just enough structure, and more creativity in my testing.  I loved it!  By getting rid of Quality Center, I made my testing activities much more visible to the rest of my team, not to mention more open to creative exploratory testing.

I’ve been tweeting about it because we’re starting to use it for Confluence testing efforts.  The reason I haven’t blogged it yet is because I’m putting it in a paper for this year’s CAST. Those of you attending CAST will have the opportunity to see my full presentation.  Because of the overwhelming interest, I thought I’d give y’all a few tidbits and a push in the right direction.  If I get permission, I’ll post the full paper.  If my arm is gently twisted, I might add a few more tidbits.

This is very easy to put together, can be free (as in beer) and is 115% customizable.

The ingredients:

A whiteboard or a wiki (I like Confluence and I’m not biased AT ALL.)

The low-tech testing dashboard pdf

Some idea of what *you* want to track in *your* testing

The most challenging aspect of using this dashboard is in deciding what you want to track.  My co-worker and I discussed it for a while and are still undecided on a few points.  Although the pdf suggests that putting this online is less than optimal, I think a wiki is a perfect leap.  I link all of my test objective pages to the components I list in the dashboard.  I also link important issues in the comments section. Thus, a wiki is shallow enough but has the ability to give added depth when and where it is necessary.

Mr. Bach’s dashboard is 11 years old, and thus, is a bit weathered but still very good stuff.  It’s ripe for a bit of botox here and there.  Those of you who decide to take this and “make it your own” are welcome to share how you’ve made changes or which areas of the dashboard could use a bit of tweaking in your environment.  I hope to see some dashboard photos on twitter. (There’s a meatloaf joke in there somewhere.)