
Author Archive

WTANZ 06: Visually thinking through software with models

Paper for Models (or scribbles)

Last year, I visited Microsoft to give a presentation. Alan Page, one of the authors of How We Test Software at Microsoft, was my host. When he introduced me to the audience, he gave me an autographed copy of his book and a pad of quadrille paper (paper with squares instead of lines). He told me it was for drawing models. Apparently this is quite a popular way for testers to understand software at Microsoft (and, I hear, at Google as well). I've read a lot of HWTSM, but I must admit I had not looked very closely at the model-based testing chapter.

The paper Alan gave me has made it to Australia, and I've been using it for keeping up with life and stuff. Every time I look at it, however, I keep thinking, "What the hell is this model-based testing about?" So I decided we would check it out for this week's Weekend Testing session. I told Alan of my plans on Twitter and he replied:

Hmm…keeping this in mind, off I went to read his chapter on model-based testing. The models are really just finite state machines. If you've taken a discrete math class, you've seen these. If you haven't, it's pretty simple: have a look at Wikipedia's example and you'll see what I'm talking about. In reading through HWTSM, I noticed that emphasis was placed on using models to understand specifications. Dr. Cem Kaner's sessions with Weekend Testing from a while ago confirmed this for me. I read through the transcript from Weekend Testing 21, in which Dr. Kaner describes his suggested use of models.

Dr. Kaner suggests two ways in which models can be helpful. The primary reason he gives for using a model is when there is so much information that a tester has difficulty wrapping their head around all of the possible states they can create and need to test within a system. In this case, model-based testing is used to deal with information overload. It looked to me as though he was less concerned with necessarily having a finite state machine and more concerned with the tester having some way of visually mapping the system that made sense to them.
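
To make the finite-state-machine idea concrete, here is a minimal sketch of what such a model can look like in code. This is my own toy example (a made-up login dialog), not anything from HWTSM or from Dr. Kaner's session: states, the actions that move between them, and a random walk that spits out candidate test sequences.

    import random

    # A tiny, hypothetical model of a login dialog: each state maps the
    # available actions to the state the model says should come next.
    MODEL = {
        "logged_out":  {"enter_valid_credentials": "logged_in",
                        "enter_bad_credentials": "error_shown"},
        "error_shown": {"dismiss_error": "logged_out"},
        "logged_in":   {"log_out": "logged_out"},
    }

    def random_walk(model, start="logged_out", steps=6):
        """Walk the model at random and return the path taken.

        Each walk is a candidate test: perform the actions against the
        real application and check that it ends up in the states the
        model predicts.
        """
        state, path = start, []
        for _ in range(steps):
            action, next_state = random.choice(sorted(model[state].items()))
            path.append((state, action, next_state))
            state = next_state
        return path

    if __name__ == "__main__":
        for step in random_walk(MODEL):
            print("%s --%s--> %s" % step)

Even a toy like this earns its keep: the moment you try to fill in a state's transitions and can't, you've found either a gap in your understanding or a gap in the spec.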

The secondary reason for using a model was as a way to approach sensitive devs about holes in their logic. Saying to a dev, "Here's this diagram I made of the system, but I seem to have a gap; can you help me fill this in?" is much less confrontational than approaching them with their spec and telling them they forgot stuff.

Dr. Kaner's primary reason for using a model intrigued me because it runs contrary to Alan Page's advice in HWTSM to keep models from getting too big. Dr. Kaner uses models as a remedy for information overload, and as an example he uses a decision tree he made showing reasons to buy or sell stock. It's not a small picture, but maybe that's because I'm not used to testing with models or even looking at them on a daily basis. Here's what Alan has written about the size of models:

Models can grow quite quickly. As mine grow, I always recall a bit of advice I heard years ago: "'Too Small' is just about the right size for a good model." By starting with small models, you can fully comprehend a sub-area of a system before figuring out how different systems interact.

-Alan Page, How We Test Software at Microsoft.  p162 (inspired by Harry Robinson)

This week's mission is to make some models and compare notes on how successful this strategy is at varying levels of model size and complexity. Since all iGoogle gadgets have some type of specification, I picked a few Google gadgets:

Small:  The Corporate Gibberish Generator

Medium:  XKCD

Large: Thinkmap Visual Thesaurus

Since models can also be useful for APIs, if anyone is feeling super-geeky, you can try modeling some API calls from Twitter. I just blogged about using Twitter with curl, so that might help those choosing to do API modeling. Alan writes about how modeling can be useful for testing APIs, and it made me very curious. (Off topic, I have to wonder: what happens when you throw some models at a genetic algorithm? Learning? Useful tests? Who knows. I'm saving that one for later.)

I also have what I’ll call a modeling “experiment.”  This may or may not work.  It may or may not teach you something, but I think it will make your afternoon/evening/morning interesting to say the least.  This link is to a painting in the Museum of Modern Art.  The painting is The City Rises by Umberto Boccioni.  As I read about Dr. Kaner’s approach of using modeling to combat information fatigue, I was immediately reminded of this painting.  There is so much going on in this painting and, if you explore, you will find relationships that make it a masterpiece.  Can a model pull out and define the power of this painting?  Let’s find out.

For graphics software, I suggest Gliffy. It is browser-based, so there are no worries about operating systems, and for our purposes sign-up is not required.

90 Days of Manual Testing

Atlassian + Tourists
Image by Marlena Compton via Flickr

My "probationary" period at Atlassian has recently finished. It lasted 90 days, although I feel like I've been here much longer. Lots has happened since I showed up in Sydney. As @esaarem pointed out to me on Twitter, I've participated in or facilitated four sessions of Weekend Testing Australia/New Zealand. I've turned in a paper for CAST. I flew back to the States for a brilliant Writing-About-Testing conference, and I went on a vision quest, of sorts, in the canyons of Grand Gulch.

What hasn't shown up on my blog is all of the testing I've done for Confluence. The testing and the work environment are such an utter departure from what I was doing previously. Before, I was looking at a command line all day, every day, and writing awk and shell scripts as fast as I could to analyze vast amounts of financial data. This was all done as part of a waterfall process, which meant that releases were few and far between. To my previous boss's credit, our group worked extremely well together, and he did as much as he could to get the team closer to more frequent releases.

I am now testing Confluence, an enterprise wiki, which is developed in an Agile environment and is completely web-based. I haven't run a single automated test since I started, so it's been all manual testing, all the time. This doesn't mean that we don't have automated tests, but they haven't been my responsibility in the past 90 days. My testing focus has been solely on exploratory testing. So what are my thoughts about this?

On Living the “Wiki Way”

Since everything I've written at work has been written on a wiki, I haven't even installed Microsoft Office on my Mac at work. I've been living in the wiki, writing in the wiki and testing in the wiki. If the shared drive is to be replaced by "the cloud," then I believe the professional desktop will be increasingly replaced by wikis. Between Atlassian's issue tracker, JIRA, and Confluence, there's not much other software I use in a day. Aside from using Confluence to write test objectives and collaborate on feature specifications, I've been able to make a low-tech testing dashboard that has, so far, been very effective at showing how the testing is going. I'll be talking about all of this at my CAST session.

On the Agile testing experience:

For five years, I sat in a cubicle, alone. I had a planning meeting once a week. Sometimes I had conversations with my boss or the other devs. It was kind of lonely, but I guess I got used to the privacy. Atlassian's space is completely open; there are no offices. The first few weeks of sitting at a desk IN FRONT OF EVERYONE were hair-raising, until I noticed that everyone was focusing on their own work. I've gotten over it and am so grateful that my co-worker, who also tests Confluence, sits next to me.

During my waterfall days, I had my suspicions, but now I know for sure:  dogfooding works, having continuous builds works, running builds against unit tests works.

On Manual, Browser Based Testing:

This is something that I thought would be much easier than it was. I initially found manual testing to be overwhelming. I kept finding what I thought were bugs. Some of them were known, some of them were less important, and some of them were because I hadn't set up my browser correctly or cleared the "temporary internet files." Even when I did find a valid issue, isolating it and testing it across browsers took a significant amount of time. All of this led to the one large, giant, steaming revelation I've had in the past 90 days about manually testing browser-based applications: browsers suck, and they all suck in their own special ways. IE7 wouldn't let me copy and paste errors, Firefox wouldn't show me the errors without a special console open, and Apple keeps trying to sneakily install Safari 5, which we're not supporting yet.

Aside from fighting with browsers, maintaining focus was also challenging. "Oh look, there's a bug. Hi bug…let me write you up…Oh! There's another one! But one of them is not important…but I need to log it anyway…wait! Is it failing on Safari and Firefox too?" I don't have ADD, but after a year of this I might. Consequently, something that suffered was my documentation of the testing I had done. I was happy not to have to fill out Quality Center boxes, but it would be nice to have some loose structure to use per feature. While I was experiencing this, I noticed a few tweets from Bret Pettichord that were quite intriguing:

Testing a large, incomplete feature. My “test plan” is a text file with three sections: Things to Test, Findings, Suggestions
1:53 PM Jun 22nd via TweetDeck
Things to test: where i put all the claims i have found and all my test ideas. I remove them when they have been tested.
1:54 PM Jun 22nd via TweetDeck
Findings: Stuff i have tried and observed. How certain features work (or not). Error messages I have seen. Not sure yet which are bugs.
1:55 PM Jun 22nd via TweetDeck
Suggestions: What I think is most important for developers to do. Finishing features, fixing bugs, improve doc, whatever.
1:57 PM Jun 22nd via TweetDeck

This is something I’m adding to my strategy for my next iteration of testing.  It made me laugh to see this posted as tweets.  Perhaps Bret knew that some testing-turkey, somewhere was gonna post this at some point.  I’m quite happy to be that testing-turkey as long as I don’t get shot and stuffed (I hear that’s what happens to turkeys in Texas).  After I do a few milestones with this, I will blog about it.

Because of my difficulties with maintaining focus, I've now realized that while it's easy to point the finger at developers for getting too lost in the details of their code, it's just as easy for me, as a tester, to get lost in the details of a particular test or issue. I am a generalist, but I didn't even notice that there was a schedule with milestones until we were nearly finished with all of them. That's how lost I was in the details of everyday testing. Jon Bach's recent blog post resonates with me for this very reason. He writes about having 20 screens open and going back and forth between them while someone wants updates, etc. Focus will be an ongoing challenge for me.

One of the few things that has helped me maintain focus is using virtual machines for some of the browsers. They may not be exactly the same as using actual hardware, but being able to copy and paste and quickly observe behavior across the different browsers was hugely important in helping me maintain my sanity.

The past 90 days have been intense, busy and fascinating in the best possible way. Does Atlassian's culture live up to the hype? Definitely. I've been busier than at any other job I've ever had, and my work has been much more exposed, but I've also had plenty of ways to de-stress when I needed it. I've played foosball in the basement, I laughed when one of our CEOs wore some really ugly pants to work that I suspect were pajama bottoms, I got to make a network visualization for FedEx day, and my boss took me out for a beer to celebrate the end of my first 90 days. I like this place. They keep me on my toes, but in a way that keeps me feeling creative and empowered.

By the way, Atlassian is still hiring testers.  If you apply, tell ’em I sent ya ;)


Playing with REST and Twitter

At my new job, we each take on a particular specialization. One of my co-workers helps the rest of us learn about testing security, while another champions the testing of internationalization. We share what we learn about our specializations with each other so that everyone benefits. When I was asked what I wanted my specialty to be, it didn't take me two seconds to say, "I wanna specialize in API testing." This means I need to know about REST (REpresentational State Transfer).

Here is the simplest explanation of REST I can muster: you use a URL to call some method that belongs to another application. So, basically, it's using the basic HTTP methods of POST, GET, PUT or DELETE. The data that comes back is usually either XML or JSON.

When I was at the Writing-About-Testing conference, our host, Chris McMahon, knew that I was trying to brush up on REST, so he put together a few slides and did a talk for the group. While he was talking about it, I noticed @fredberinger tweeting about a slide deck for a discussion on the management of Twitter's data. It was serendipitous, because Twitter uses REST heavily.

Looking through the slides led me to Googling about REST and Twitter, which led to a fun discovery. If you have a Mac, you have the "curl" command. Straight from the man page: "curl offers a busload of useful tricks." At its simplest, curl is a command for getting data. What makes curl great is that it allows you to submit commands that normally require interaction with a few screens. For example, you can use curl to submit authentication credentials, and it will also retrieve data for you after you've submitted your request. It does all of this with one command line. In the world of shell scripting, since curl will return XML or JSON data, the data is easily saved off into an XML or JSON file.

This means that curl can be used to interact with Twitter's API. Since I'm on Twitter way more than I should be (my husband will back me up on this claim), I thought this would be an excellent way to do some playing with REST calls.

This document from Twitter's wiki has some examples of using REST with curl. It's down at number 8. I am posting my own examples as well.

Example of a GET:

curl -u marlenac:**** http://api.twitter.com/1/statuses/friends_timeline.xml

This gets some of the most recent updates in my timeline. I noticed that the XML retrieved 17 updates at 11:00 pm in Sydney. I'm not going to post them all, but here is a screenshot of one status update from Andrew Elliott, aka @andrew_paradigm. There's a lot more information here than I see in TweetDeck.

Status update in xml

Notice that the URL above ends in "xml." One aspect of REST is that you have to know which data formats are valid so that you can state the one you want in the request. Now I'm going to make the same call, but change the ending to "json" like so:

curl -u marlenac:**** http://api.twitter.com/1/statuses/friends_timeline.json

This call produces what should be the same data, but in JSON format. If you are not familiar with JSON, it's just another data format. If you do not like the "pointy things" aspect of XML, you might prefer JSON. Here is a screenshot of the same status update in JSON format.

Status Update in JSON

The JSON that was returned did not have any line feeds.  This would make it easier to parse since there is only a need to look for braces and commas.  I inserted the line feeds in the first half of the status so that you can make sense of the data format.
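
If you would rather not insert the line feeds by hand, a couple of lines of Python will re-indent whatever curl saved off. This is just a convenience sketch; "timeline.json" is a hypothetical file holding the output of the curl call above.

    import json

    # Pretty-print the single-line JSON that curl returned.
    # "timeline.json" is a hypothetical file saved from the call above.
    with open("timeline.json") as f:
        statuses = json.load(f)

    print(json.dumps(statuses, indent=2, sort_keys=True))
    print("statuses returned: %d" % len(statuses))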

Just as it is possible to get data from Twitter with REST, you can also make updates from the command line using a POST command.

curl -u marlenac:*** -d status="It's a lovely day in Sydney" http://api.twitter.com/1/statuses/update.json

A command line tweet

Here is the JSON that resulted from that update:

Update status JSON

This is obviously just the beginning of my playing around with REST. I now have to practice using REST with the Atlassian product I test, Confluence. While I was putting this post together, I was thinking about what I would want to test in these API calls. Since I am learning about this, I welcome input about tests that I could add to this list:

  • Check that the information retrieved in different data formats is the same (a rough sketch of this check follows the list).
  • Test the different parameters that are possible with each REST URL and make sure they work. The API should accept only well-formed parameters and, if a parameter is unusable, it should return an informative error message.
  • Check that authentication works and that users can only make calls within their access rights.
  • Test the limits on how many calls can be made.
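
For the first idea, here is a rough sketch of how I might start checking that the XML and JSON formats carry the same data. The filenames are hypothetical (output saved from the two curl calls above), I am only comparing a couple of fields to keep it short, and the element and key names are my assumptions based on the screenshots rather than anything official.

    import json
    import xml.etree.ElementTree as ET

    # Hypothetical files saved from the curl calls shown earlier.
    with open("timeline.json") as f:
        json_statuses = json.load(f)

    xml_statuses = ET.parse("timeline.xml").getroot().findall("status")

    # Same number of statuses, and the same id and text for each one.
    assert len(json_statuses) == len(xml_statuses), "different number of statuses"

    for j, x in zip(json_statuses, xml_statuses):
        assert str(j["id"]) == x.findtext("id"), "id mismatch"
        assert j["text"] == x.findtext("text"), "text mismatch"

    print("XML and JSON agree on %d statuses" % len(json_statuses))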

If you look through the Twitter documentation, you will notice that they place limits on the number of calls that can be made to some URLs. This is an indication that REST can be extremely useful for obtaining lots of data for the purpose of aggregating it. This aggregated data can make a much better data source for visualizations than, for example, comma-delimited data. One reason I haven't been doing much visualization lately is because my inner Scarlett O'Hara has vowed, "As Tufte is my witness, I will never use comma-delimited data ever again!" (You can laugh, and believe me, I understand that rules were made to be broken.)

For those who have read this entire post, here's a special treat. If you are on a Mac, you may or may not be familiar with the "say" command. Oliver Erlewein turned me on to this during our recent Weekend Testing session. If you type say "Shall we play a game?" your Mac will, vocally, ask you, "Shall we play a game?" There's also a parameter that lets you choose the voice. I decided to try piping the REST calls I made using curl to the say command. I could "say" what happens, but that would ruin the surprise. ;) Have some fun with that.

A Desert Tale of Rocks and Ruins

Hiking into the Canyon

After attending the Writing About Testing conference in Durango (which you can read about here), I spent a few days in the desert canyons of Utah. Finishing a master's degree, presenting at PNSQC, getting a new job and moving to a different hemisphere have all taken their toll. Thus, my challenge was to clear all the shit out of my head that had been accumulating for the past 12 months.

I started out with a pack full of "stuff" I thought I would need to survive in the desert. The picture to the left shows me hiking down into the canyon. I've only ever backpacked in mountainous environments, such as the Appalachian Mountains of the eastern U.S., where there are plenty of streams and water is never a problem. This time, I was hiking in a place where the environment is so harsh and water is so scarce that it is all but inaccessible for two-thirds of the year. Thus, I filled up my pack with lots of water and the "stuff," intent on keeping nature at bay.

The area of Utah I visited was formerly inhabited by a Native American tribe called the Anasazi. From what archaeologists can tell, the Anasazi were able to thrive for quite a while in a place modern-day humans would call unlivable.

For the first couple of days, I was very careful not to get my stuff too dirty. I was worried about getting my camping equipment through customs in Sydney because they are strict about camping gear being clean on re-entry. As we hiked on, however, I quit caring and began to let the sand of the desert into my boots and the quiet of the desert into my head.

The canyons were full of ruins and rock art.  The Anasazi are long gone, but the dry desert air has preserved many of their dwellings and art.  My host, Chris McMahon, calls it, “the museum of experience.”  We were able to walk right up to the art and look through the dirt for pottery shards (which we left in place).  I put myself in the shoes of an Anasazi artist as I sketched the figures I saw carved and painted on the sandstone walls.

You were here and so was I

The art in this desert was not on a canvas, and because its creators are so long gone, there is no way to know exactly why it exists. The Hopi, who are still very much here, believe they are descendants of the Anasazi, so they probably have some very good ideas, but there will probably never be solid answers about the creation history of this art. I love that. Because I love questioning and imagining, I made up a thousand stories for every painting I saw. I devoured every paint stroke and compared it to every other paint stroke I saw. If the wind blew while I was looking at a painting, I questioned the direction in which it was blowing and whether or not the painter felt the wind coming down the canyon the same way I was feeling it as I observed the results of their labor.

One way to get a more intimate connection with any piece of art is to find your own way of reproducing or re-interpreting it.  I took a small sketchbook with me and sketched out a ruin and some of the paintings.  I’ve also sketched a few more paintings from photographs my husband and I took.  Although the Anasazi paintings are quite ancient, I found figures that were well-drawn by a practiced hand.  “I don’t know who you were, but you did some good work here,” I found myself whispering as my eyes looked over the figure you see in the photograph below.

Painted by a well-practiced hand

As I immersed myself in the evidence of an ancient culture, I began letting go of the present. My mind began to wander past the 140-character restrictions of my everyday life and the ruins of my own existence. As my thinking shifted, my physical needs changed as well.

By the third day, I had taken everything I hadn't yet used or worn and sheepishly stuffed it into the bottom half of my pack. Why did I bring two pairs of pants? I guess I thought the desert would be cold (ha ha). I also didn't need my rain jacket. There were some other items I didn't need either, and they were much on my mind as I dragged it all on my back through the heat and the sand.

Two-Story Ruin

My husband and I were going through a similar dilemma with our tent.  On night one, we set up the tent and the rainfly we had brought.  The second night, we ditched the rainfly because it was hot, even at night.  On our third and last night we finally came to our senses and slept outside, not even bothering to pitch the tent.

I’ve always had a fear of night creatures when I’m camping.  I have heard bears and mountain lions growl at night.  On one trip, an animal brushed against our tent and I couldn’t go to sleep afterward.  Outside, in the canyon, I lay awake in my sleeping bag, watching the light  cast by the moon on the canyon wall across from our campsite.  I thought about why my fear had vanished.

Aside from the environment itself, there is nothing to fear in the desert.  Everything except for the animals small enough to subsist on the bare minimum of water has already fled.  I had succeeded in leaving all of my anxieties about nocturnal predators and life itself behind. I went to sleep in the light of the moon, picturing the motion of hands making brush strokes over warm sandstone.

We hiked out the next morning.  We had been hiking through sand for most of our trip, and my feet were relieved to finally feel the earth pushing back against them.  As we ascended out of the canyon, thoughts about work and “real life” began to come back.  I made peace with them as we drove away from the wilderness area.

This trip allowed me to gather my strength, and I felt fearless as I left the desert. It was the same feeling I've had before when I've ascended from a cave on a rope. Once you've crawled out of a 90-foot pit with nothing holding you up except a rope, priorities shift and you remember what you really care about in life. The same holds true for slogging through mile after sandy mile in Grand Gulch. Fear has departed.

A View of Grand Gulch


Post Context: Here are the tweets that led to my diversity post


One rule I’ve had for myself since I started this blog is to do my best not to write posts in direct reaction to what someone else has said.  This blog is about me.  It is my writing.  It shows what’s on my mind.  There are very few situations in life when I can unabashedly and honestly say, it is all about me.  This blog is it.  There hasn’t been a post about diversity before my previous one because I’m not thinking about gender stuff most of the time.

I broke my own rule with my last post, and I did it for a good reason.  If you want to know why I wrote that last post, please take a look at this twitter transcript put together by Rick Scott.  He also saw the whole thing happen and has blogged his own reaction to it.

I was pretty angry as the tweets unfolded, but most of what I said stands. The only tweet I would change is the one where I said the context school is a pile of crap. That was wrong of me. I know many people involved in the context school of testing who have much more fair-minded ideas about diversity and gender than those expressed by the Bach brothers in the transcript. Unsurprisingly, James Bach has blocked me on Twitter. Jon Bach has also blogged his version of what happened. I hope people draw their own conclusions based on the actual conversation rather than relying solely on one person's account; giving them the opportunity to do that is why I've stayed pretty silent about this for the past week. I love my blog, and if I can't write with respect, I don't see any point in writing at all.

The irony in all of this is that I still think James Bach’s contributions to software and software testing are brilliant.  We do not, however, see eye to eye on diversity or even, as the transcript points out, workplace ethics.

Thus, I am officially hoisting my own pirate flag of agitator for women’s empowerment in technology.


My post about gender and diversity


This is a post I've put off writing on purpose because it's not my favorite topic of discussion. That's not because I feel shy about it; it's just that people have usually already made up their minds on this particular topic, which makes the opportunity cost of the discussion high. I've noticed Lisa Crispin and others making valiant efforts to have this discussion in testing. I agree with her and the others involved that it is time.

We need to talk about the role of gender and diversity in testing.

I am tired of hearing about how my being a woman is important to the way I test.  It’s a poor definition of “woman” that I don’t believe holds up well if it’s really dissected.  There are many different ways to be a woman, and I’m not going to highlight all of them here.  I’ll just point out one stereotype that needs to go:  women have babies.  I don’t have babies and I don’t know that I’ll ever have a baby.  In fact, I have plenty of women friends who never want to have a baby.  Are we still women?

The flip side of this is that each person has differences that make them valuable on a test team.  Hopefully these advantages are obvious enough that there’s no need to go through that argument.  The problem arises when we start stereotyping individuals into monolithic groups that are actually quite varied.

Lately I've been thinking about this in terms of levels of measurement. Maybe that's because the book I learned them from, Stephen Kan's Metrics and Models in Software Quality Engineering, really goes for the throat in the examples used to illustrate levels of measurement.

Nominal: Classifying elements into categories.  Kan uses the example of religion by saying that,  “if the attribute of interest is religion, we may classify the subjects of the study into Catholics, Protestants, Jews, Buddhists, and so on.”

Ordinal: Ranking is introduced.  Kan writes that, “we may classify families according to socio-economic status: upper class, middle class, and lower class.”

Interval Scale: At this level, there are exact, standardized differences between points of measurements.  Elements can be compared using addition and subtraction and depend on having some standard of measurement.  Kan uses a KLOC example to illustrate this one, “assuming products A, B, and C are developed in the same language, if the defect rate of software product A is 5 defects per KLOC and product B’s rate is 3.5 defects per KLOC, then we can say product A’s defect level is 1.5 defects per KLOC higher than product B’s defect level.”

Ratio Scale: This level is differentiated from the interval scale only by the presence of an absolute zero point.

Where do humans fit on this scale?  The classifications we have for each other are nominal and ordinal categorizations, but I don’t think that the levels of measurement come anywhere close to defining the measure of human experience.  Gender is what I get thrown in my face because I happen to have a vagina.  Never mind the fact that I am the one earning the money in my family, I don’t have children and I don’t wear pink or even bake.

There is a nasty undercurrent in testing at the moment that tries to define me as a "woman tester." There's no need to even look that hard if you want to find it. When I see this, I will call it out, and I will call it out loudly. I call it out because it undermines the hard work done every day by women who show up for their tech jobs, and it undermines the respect shown to women by hordes of male geeks who want things to be better. Guys: I hear you. I know you want me to feel happy and comfortable at work. I know you want more diversity in testing and technology. It means the world to me that you feel this way. I hope that we are far enough along with this problem that others, male, female, transgendered, will call it out with me.

So if you are among those who think we all ought to be wearing badges announcing how great it is that we fit some cultural stereotype/straitjacket, I hope you take some time to rethink that stance. It's a waste of time we could be spending on other problems in testing.


Here’s the story behind the testing dashboard tweets

Low-Tech Dashboard in Confluence

I can't believe just how hungry testers are for the low-tech testing dashboard. It's a great illustration of how we are evolving in our desire for tools that do not push us around or presume to tell us how we should be testing.

At my last job, I ditched Quality Center for a test cycle and decided to use James Bach's low-tech testing dashboard with my team's incredibly unsophisticated DokuWiki. There was nothing expensive involved and certainly no rules… just enough structure, and more creativity in my testing. I loved it! By getting rid of Quality Center, I made my testing activities much more visible to the rest of my team, not to mention more creative in terms of exploratory testing.

I've been tweeting about it because we're starting to use it for Confluence testing efforts. The reason I haven't blogged about it yet is that I'm putting it in a paper for this year's CAST. Those of you attending CAST will have the opportunity to see my full presentation. Because of the overwhelming interest, I thought I'd give y'all a few tidbits and a push in the right direction. If I get permission, I'll post the full paper. If my arm is gently twisted, I might add a few more tidbits.

This is very easy to put together, can be free (as in beer) and is 115% customizable.

The ingredients:

A whiteboard or a wiki (I like Confluence and I’m not biased AT ALL.)

The low-tech testing dashboard pdf

Some idea of what *you* want to track in *your* testing

The most challenging aspect of using this dashboard is deciding what you want to track. My co-worker and I discussed it for a while and are still undecided on a few points. Although the PDF suggests that putting this online is less than optimal, I think a wiki is a perfect leap. I link all of my test objective pages to the components I list in the dashboard. I also link important issues in the comments section. Thus, a wiki is shallow enough, but has the ability to give added depth when and where it is necessary.

Mr. Bach's dashboard is 11 years old, and thus a bit weathered, but it is still very good stuff. It's ripe for a bit of botox here and there. Those of you who decide to take this and "make it your own" are welcome to share how you've made changes, or which areas of the dashboard could use a bit of tweaking in your environment. I hope to see some dashboard photos on Twitter. (There's a meatloaf joke in there somewhere.)

Being a writer makes me a better tester


This weekend, I've left the kangaroos and beaches behind for the mountains and red rock of Durango, Colorado. I'm participating in the Writing-About-Testing peer conference organized by Chris McMahon. In an effort to keep the cost and logistics light and easy, there are only 15 people. We are all software testers who write blogs or conference presentations.

At the beginning of this year, one of my predictions for 2010 was that software testers would begin to take writing skills more seriously. I hope that this conference is a means to that end. One of the unspoken skills in testing is our power to communicate. I can spend all day long finding problems with an application, but what happens if I don't have the skills to let someone know, or to clue someone in as to why they need to care? Testers need good writing skills because:

  • We must be able to concisely state what is going wrong in a defect report, especially in the summary line. I've been reading the book "Don't Make Me Think," which is about usability testing, and one of its points rang true for writing defects: humans like to scan. This makes defect titles especially important. I've actually blogged about this point before because I liked what the book How We Test Software At Microsoft had to say about the subject.
  • We use persuasive writing any time we write to developers in an effort to convince them of the need to fix something we feel is especially important. Persuasive writing is a skill unto itself. My father is an attorney and has made a career out of persuasive writing.
  • Writing out tests, or explaining how we've tested, is an especially difficult and necessary task. If you work in an environment where what you are testing is under intense scrutiny, then this is doubly important. I consider writing tests adjacent to technical writing because we have to explain in very specific terms what we are trying to do, what happens when we do it, and why we are trying to do it in the first place.

My writing experience at work involves each of the three points above and more.  If you’d like to see some examples of my writing, you can check out the Atlassian bug reports I’ve written thus far.

Although I keep time constraints in mind, I agonize over the title of every single bug and the repro steps I record probably much more than I should.  When I am describing my tests or writing test objectives, I make every effort to be extremely precise.  My bonus for this effort is that I am always finding new words I can use to describe the features I am testing and the testing that I do.   Am I a nerd if this makes me happy?  I frakking hope so!

Here's a little-known fact about me: I competed in a spelling bee for charity when I was in my twenties. I participated with some co-workers from my job. I remember that my team took home a ribbon other than first, but I don't remember exactly what our placement was. I do remember a conversation that I had with someone else on my team. We must have spent 20 minutes or so talking about words that we like. My challenge to every reader of this blog, whether you are involved in testing or not: find a new word and use it to make your own daily writing more descriptive and informative.

If I’m lucky you’ll leave me a comment and tell me about it.  I want to know about your new word, because I’d also like to use it ;)


A Visualization of Comments from Jira and Confluence


Last week, I happily participated in my first FedEx at Atlassian.

FedEx is a quarterly competition held by the company in an effort to keep its staff feeling creative and, hopefully, to produce some exciting new functionality for Atlassian's products. Pairing is also encouraged, so I worked with Confluence developer Anna Katrina Dominguez.

We produced a network visualization that shows where users of Confluence and Jira have left comments. I found something similar (and maybe a bit more, ahem, polished) in this blog post of Jeffrey Heer’s about his project Exploring Enron. He visualized which members of Enron’s staff had been sending each other emails.  My post from a few days ago explains why I chose this for my FedEx project.

I've done a write-up of our project on Atlassian's website. It also includes a PDF of the visualization. The colors are pretty bad, and the UI designer was just in agony that we weren't able to remove the outline around each shape. But since we went, in 24 hours, from zero lines of code and a vague idea of what we were doing to a simple implementation of a semantic triple store and a visualization that does have some meaning, I'd say we did OK.

Python, and Anna's skill at writing it, were impressive. Python is so well-suited to this type of project that I'm not looking forward to going back and re-doing some of this in Java. I want to, though, because I like the idea of building up some classes that handle semantic data. I've noticed that Python is frequently mentioned as one of the best languages for pulling data together, and now I know why. We didn't have to waste any lines of code setting up containers, and the data we got out of our test instances of Confluence and Jira was easy to process.
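
Our FedEx code isn't in this post, but the shape of the thing is easy to sketch. A triple store is really just a pile of (subject, predicate, object) tuples, and the network visualization only needs the edges counted. Something along these lines, with invented names and data standing in for what we actually pulled out of Confluence and Jira:

    # A toy triple store: (subject, predicate, object) tuples.
    # The data here is invented; ours came from test instances of
    # Confluence and Jira via XML-RPC.
    triples = [
        ("anna",    "commented_on", "release-notes"),
        ("marlena", "commented_on", "release-notes"),
        ("marlena", "commented_on", "test-dashboard"),
        ("anna",    "commented_on", "test-dashboard"),
        ("marlena", "commented_on", "test-dashboard"),
    ]

    # Edge weights for the network visualization: how often each
    # (person, page) pair turns up.
    edges = {}
    for subject, predicate, obj in triples:
        if predicate == "commented_on":
            edges[(subject, obj)] = edges.get((subject, obj), 0) + 1

    for (person, page), weight in sorted(edges.items()):
        print("%s -> %s (weight %d)" % (person, page, weight))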

The one disappointment, aside from our My Little Pony colors, was REST. We didn't get very far with it and ended up grabbing data through XML-RPC instead. I couldn't figure out whether this was because we just hadn't used it that much or because we couldn't get the data we wanted with it. It's clear that I need to do some more poking at REST. I've got some links, Atlassian's technical docs on REST and O'Reilly Safari at my disposal. Looks like I might do some blogging about this.

If you’d like to see a few more of the FedEx projects from this round, you can look through the deliveries on this page.


Owning My Celeb-u-tester

Nada Surf taught me all I need to know about being Popular

I wrote this blog for quite a while with the attitude of, "I know that no one is reading my blog…whoopee!" It's a very special feeling, being able to post whatever you want and knowing that no one is reading. My first posts were written as part of my independent study of the semantic web/web 2.0 and morphed into posts reviewing games as part of my summer-semester video games class. (Hell yeah, I know you're jealous!) After that, the blog languished as I slogged through a project management class that nearly killed me.

Something happened in that semester of project management. I attended GTAC, and the blog turned into something real. I met David Burns, who writes The Automated Tester and is doing great things with Selenium and .NET. I was so impressed with how seriously he took his blog. It rubbed off, and when I returned, I started thinking about how I could get more serious about my own.

I knew that I wanted a new job, and not in the city where I was currently residing. I also knew that, unfortunately, my school has negative street credibility outside of Atlanta, GA. If you want to debate me on that point, just ask the people around you if anyone has heard of Southern Polytechnic State University. You won't see very many hands.

I decided to blog everything that I did in school. At least I would be able to say, "Hi, prospective employer X. I've done really great things at school; here they are on my blog." As part of this whole getting-more-serious thing, I moved my blog from Blogger to WordPress. I still had that "whoopee" feeling that no one was reading my blog. I just thought that people would be reading it later.

I kept working my way through classes, having fun with visualization and writing my thesis. My advisor pushed me to submit my thesis to PNSQC. I was very shocked when it was accepted. I had no idea it would be chosen; I honestly thought I would get the "thanks, but no thanks" email. It made me nervous to think about presenting my work, but I figured I would just stand up in some tiny back basement room filled with two people and stumble through some slides. Then this post of Alan Page's happened.

At this point, I had read most of Alan's book (no small feat considering I had classes requiring lots of attention) and decided it was my FAVE-O-RITE testing book. I can't tell you how many times I've used this book. I actually did a report on equivalence class partitioning as part of my Formal Methods class. There is no way I can possibly overstate how important this book was, and is, for me. In fact, I was unable to post a review of it because the one I wrote was just so fangirl it even made me want to barf. To have one of the authors saying that he couldn't wait to see MY presentation at PNSQC was enough to make me hyperventilate. If your favorite tester called you up and told you how much they liked what you were doing, how would you feel?

I felt watched.  I couldn’t write for a week.  I jumped at every noise and kept turning around because I could feel someone behind me.  I was spooked.

Here’s the thing about knowing that people read your blog…you KNOW people are reading your blog.  I know that when I go into work tomorrow, my boss, several co-workers and my former neighbor will have read this post.

How would you feel if you felt your every word being read by someone who mattered to you, in real time, as you wrote it down? Are you spooked yet?

I got over it, which is a good thing because, at this point, not only have I appeared on Alan's blog, I've made appearances on Matt Heusser's blog and Chris McMahon's blog, and had James Bach and Mike Kelly comment on my posts. I've even appeared in the Carnival of Testing a couple of times. It was pointed out to me recently that my blog is number 38 on a list of software testing blogs.

So I now have an admission to make that might sound callous but is, in reality, very difficult for me: people read my blog. Not only do people read my blog, they like it! When my post shows up in their reader, they actually take time out of their day to process what I've written. They give me comments, they tweet about it, they write about it. They tell other people to read what I've written.

My reaction to this has continuously been a huge forehead smack and a very loud, "WHO KNEW!!!!" After PNSQC, I totally lost control and had a big, fat, happy cry about it over pancakes at Lowell's in Pike Place Market as I watched the ferries chug past the window. I mean, my make-up was smeared and everything. I just couldn't help it.

So what have I done with this success? I could have tried getting a job at Microsoft or Google, but instead, I chose to take a job with Atlassian, a company I want to see succeed on a grand scale. What does that mean?

I've moved. To A-U-S-T-R-A-L-I-A.

This move has forced me to sit up and take notice of my own involvement in a community of well-known software testing bloggers. In the past week, I've been catching up on what I've missed.

It's time for me to acknowledge my own role as a voice in the testing community. I've ignored it and tried to pretend that no one reads my blog. I guess this was an effort to persist in the state of "whoopee!", but I'm at a point where I'm not sure that's the most appropriate way to live my life as a blogger anymore. I'm not an oracle or a buzzword-inducing testing savant, but I seem to have a voice that people enjoy hearing. The challenge, at this point, is for me to stay true to myself and understand where I fit in this mix of testing, technology and visualization.

That also means I need to recognize the gift I've been given and find a way to participate despite the distance I've created between myself and my fellow von Testerbloggers. This wasn't a challenge I anticipated creating for myself when I moved, but it's proving to be a tough one. I had no intention of crumpling up all of the relationships I've developed over the past year or two and tossing them into the Tasman Sea, but I feel like that's exactly what I've done.

Networks: the best ones do not involve business cards and, once cultivated, are worth the effort of maintaining.

I've got no idea how I will make this work, but I'm committed to meeting this challenge I've created for myself, and I know that I can do it.