90 Days of Manual Testing

[Image: Atlassian + Tourists, by Marlena Compton via Flickr]

My “probationary” period at Atlassian has recently finished.  It lasted 90 days, although I feel like I’ve been here much longer.  Lots has happened since I showed up in Sydney.  As @esaarem pointed out to me on Twitter, I’ve participated in or facilitated 4 sessions of Weekend Testing Australia/New Zealand.  I’ve turned in a paper for CAST.  I flew back to the States for a brilliant Writing-About-Testing conference and went on a vision quest, of sorts, in the canyons of Grand Gulch.

What hasn’t shown up on my blog is all of the testing I’ve done for Confluence.  This testing and work environment is such an utter departure from what I was doing previously.  Before, I was looking at a command line all day, every day, writing awk and shell scripts as fast as I could to analyze vast amounts of financial data.  This was all done as part of a waterfall process, which meant that releases were few and far between.  To my previous boss’s credit, our group worked extremely well together and he did as much as he could to move the team toward more frequent releases.

I am now testing Confluence, an enterprise wiki that is developed in an Agile environment and is completely web-based.  I haven’t run a single automated test since I started, so it’s been all manual testing, all the time.  This doesn’t mean we don’t have automated tests; they just haven’t been my responsibility in the past 90 days.  My focus has been solely on exploratory testing.  So what are my thoughts about this?

On Living the “Wiki Way”

Since everything I’ve written at work has been written on a wiki, I haven’t even installed Microsoft Office on my work Mac.  I’ve been living in the wiki, writing in the wiki and testing in the wiki.  If the shared drive is to be replaced by “the cloud”, then I believe the professional desktop will be increasingly replaced by wikis.  Between JIRA, Atlassian’s issue tracker, and Confluence, there’s not much other software I use in a day.  Aside from using Confluence to write test objectives and collaborate on feature specifications, I’ve been able to make a low-tech testing dashboard that has, so far, been very effective at showing how the testing is going.  I’ll be talking about all of this at my CAST session.

On the Agile Testing Experience

For 5 years, I sat in a cubicle, alone.  I had a planning meeting once a week.  Sometimes I had conversations with my boss or the other devs.  It was kind of lonely, but I guess I got used to the privacy.  Atlassian’s office is completely open.  There are no offices.  The first few weeks of sitting at a desk IN FRONT OF EVERYONE were hair-raising until I noticed that everyone was focusing on their own work.  I’ve gotten over it and have been so grateful that my co-worker, who also tests Confluence, has been sitting next to me.

During my waterfall days, I had my suspicions, but now I know for sure:  dogfooding works, having continuous builds works, running builds against unit tests works.

On Manual, Browser-Based Testing

This is something I thought would be much easier than it turned out to be.  I initially found manual testing overwhelming.  I kept finding what I thought were bugs.  Some of them were known, some were less important, and some were there because I hadn’t set up my browser correctly or cleared the “temporary internet files”.  Even when I did find a valid issue, isolating it and testing it across browsers took a significant amount of time.  All of this led to the one large, giant, steaming revelation I’ve had in the past 90 days about manually testing browser-based applications:  browsers suck, and they all suck in their own special ways.  IE7 wouldn’t let me copy and paste errors, Firefox wouldn’t show me the errors without a special console open, and Apple keeps trying to sneakily install Safari 5, which we’re not supporting yet.

Aside from fighting with browsers, maintaining focus was also challenging.  “Oh look, there’s a bug.  Hi bug…let me write you up…Oh!  There’s another one!  But one of them is not important…but I need to log it anyway…wait!  Is it failing on Safari and Firefox too?”  I don’t have ADD, but after a year of this I might.  Consequently, something that suffered was my documentation of the testing I had done.  I was happy not to have to fill out Quality Center boxes, but it would be nice to have some loose structure to use per feature.  While I was experiencing this, I noticed a few tweets from Bret Pettichord that were quite intriguing:

Testing a large, incomplete feature. My “test plan” is a text file with three sections: Things to Test, Findings, Suggestions
1:53 PM Jun 22nd via TweetDeck
Things to test: where i put all the claims i have found and all my test ideas. I remove them when they have been tested.
1:54 PM Jun 22nd via TweetDeck
Findings: Stuff i have tried and observed. How certain features work (or not). Error messages I have seen. Not sure yet which are bugs.
1:55 PM Jun 22nd via TweetDeck
Suggestions: What I think is most important for developers to do. Finishing features, fixing bugs, improve doc, whatever.
1:57 PM Jun 22nd via TweetDeck

This is something I’m adding to my strategy for my next iteration of testing.  It made me laugh to see this posted as tweets.  Perhaps Bret knew that some testing-turkey, somewhere, was gonna post this at some point.  I’m quite happy to be that testing-turkey as long as I don’t get shot and stuffed (I hear that’s what happens to turkeys in Texas).  After I do a few milestones with this, I will blog about it.
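To give myself a starting point, here’s a rough sketch of the kind of plain text file I have in mind.  The three section names come straight from Bret’s tweets; the feature and every entry under them are made up purely for illustration:

Things to Test
- claim: a draft of a page is autosaved while editing (made-up example)
- what happens if the session times out mid-edit?
- does the behaviour match across IE7, Firefox and Safari?

Findings
- draft recovered after closing the tab in Firefox, but the recovery message wording is confusing
- no autosave indicator shows up in IE7 (not sure yet whether this is a bug)

Suggestions
- finish the draft recovery flow before polishing the indicator
- make the “unsaved changes” warning clearer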

Because of my difficulties with maintaining focus, I’ve now realized that while it’s easy to point the finger at developers for getting too lost in the details of their code, it’s just as easy for me, as a tester, to get lost in the details of a particular test or issue.  I am a generalist, but I didn’t even notice that there was a schedule with milestones until we were nearly finished with all of them.  That’s how lost I was in the details of everyday testing.  Jon Bach’s recent blog post resonates with me for this very reason.  He writes about having 20 screens open and going back and forth between them while someone wants updates, etc.  Focus will be an ongoing challenge for me.

One of the few tools that has helped me maintain my focus is using virtual machines for some of the browsers.  They may not be exactly the same as using actual hardware, but being able to copy/paste and quickly observe behavior across the different browsers has been hugely important in helping me maintain my sanity.

The past 90 days have been intense, busy and fascinating in the best possible way.  Does Atlassian’s culture live up to the hype?  Definitely.  I’ve been busier than at any other job I’ve ever had, and my work has been much more exposed, but I’ve also had plenty of ways to de-stress when I needed it.  I’ve played foosball in the basement, I laughed when one of our CEOs wore some really ugly pants to work that I suspect were pajama bottoms, I got to make a network visualization for FedEx Day and my boss took me out for a beer to celebrate the end of my first 90 days.  I like this place.  They keep me on my toes, but in a way that keeps me feeling creative and empowered.

By the way, Atlassian is still hiring testers.  If you apply, tell ’em I sent ya ;)


1 thought on “90 Days of Manual Testing”

  1. I sympathize on the multiple screens open when testing. When I test at work I have a notebook and a monitor, so two screens. I usually have the test plan open in one and the browser in the other, but if I’m having to, say, capture with Fiddler at the same time, or have Outlook or Word open as I write up a bug report, it can be a challenge at times to manage. Good post, I enjoyed it. (Although I’ll admit I don’t agree that the wiki will conquer all; it may become more ubiquitous for some, though.)
