A Tableflip Guide: Transitioning from Tester to Developer

Writing code is power.

As someone who has found success as a software tester, I’ve long counted myself as one of the many people who “fell into testing.” I told myself over and over that I was content to be a “technical tester” and restrict my coding habits to testing scripts, test automation frameworks, and tools. Never mind that I kept having to prove my technical knowledge over and over again: once as a tester and again as a woman.

Today, I consider testing (along with support) as the space I’ve been allowed to occupy in the power structure of tech. If you go to a testing conference you’ll find people talking about how you can stay in testing forever and how it is a great career path. I’ve noticed that, often, the testers who shout the loudest about staying in testing forever have carved out their own place in the power structure of the software testing industry.

Something else you will notice at software testing conferences is the long lines for the women’s bathroom or that people in testing talk about how it’s actually one of the more diverse areas of tech. Why is that? Hmm…I wonder.

Here’s another perspective on this phenomenon of testing being more diverse: tech has pushed women of all races, and people of color of all genders, into non-developer roles, telling us that it’s because of skills, smarts, or that our brains work differently. We’re told that our work is just as valued and necessary even though the pay is much lower, and “prove it again” bias means we will have to demonstrate our skills over and over. I’ve now decided that the software testing career path is not so much about falling as it is about being squeezed into a corner and kept there.

Transitioning out of this has been a difficult, rebellious, empowering act also known in this year of 2015 as a tableflip.

A friend of mine recently made a similar switch.

I hope that every tester no matter how long they have been in testing or how old they are considers whether they want, at some point, to try being a developer. In saying that, I understand that I’ll get 5 people telling me how they never want to code and love testing so much. If you are one of those people, have fun with that. This post isn’t for you.

This post is for testers who see that their reach could extend beyond testing into more areas of tech. In particular, this is for women and members of all marginalized groups in testing who have likely ended up in the field due, in some part, to cultural bias. If you find yourself writing little scripts to help you do your job, learning how to write automated tests, or learning so much about the application you test that you are constructing API calls and looking at application code, please give yourself the gift of asking how much further your technical skills could take you, and whether you might want to switch to a developer career path.

Before I share what helped me, it’s worth recognizing some of my own privilege. I’m a cis, white woman. I don’t have kids and I’ve purposely sought out jobs that offer work/life balance. That means nights and weekends are mine. I make a good salary and live in the tech hub of San Francisco so I have a good amount of access and a great network of friends. I did not, however, have the time or money to attend a bootcamp so you won’t be reading about that experience here. That doesn’t mean I think they are bad. It means that they are not an opportunity for everyone, even with a scholarship. Thinking the only way into programming is through a CS degree or a bootcamp is its own bias. There are plenty of us WHO ALREADY WORK HERE. We deserve our own bridge into developer land. That bridge is something I have found to be all too rare in the tech world. I hope that changes.

Here is some of what worked for me.

Attend community workshops

Here in San Francisco, there is always a meetup. I’ve been to Railsbridge, Girl Develop It! and some Women Who Code meetups. I suggest these as a first step because they will get you oriented and help you feel supported. Also, each has its strong points, so attending different types of these meetups will help you figure out which workshop will be most advantageous at each stage of your journey.

Pick a stack and stick with it

Because I enjoyed Girl Develop It! so much, and because I was surrounded at work by developers working in JavaScript all day, I focused on learning web development through JavaScript and React.js. Having been to some JavaScript conferences, I generally find the JavaScript community around the Cascadia.js conference and the JavaScript/Node groups in SF to be very supportive and encouraging. Eventually I branched out to learn Rails, but it helped my confidence to have a solid foundation in JavaScript, Node and React.js.

Pick an editor and learn all of the shortcuts you can

Any developer you seek help from will want to sell you on using their editor. Don’t let them do it. Pick one and make them help you with that one. If you have a mentor relationship with someone in particular, it can be worth learning their editor, but the truth is that you need to pick an editor that feels comfortable for you and learn its ins and outs. This will keep you coding.

Keep learning the shortcuts in that editor, because being able to quickly navigate your code without thinking about it is only going to help when you are building something. Also, if you are asking questions or showing someone your code, it looks better if you are using shortcuts vs. mousing around and clicking. Trust me, developers will give you more respect if you use shortcuts. My favorite editor is pretty much anything in the JetBrains family, particularly RubyMine and WebStorm. Sublime Text is also beginner-friendly and used by many of my friends who are developers.

Find a workplace that will support you

I went through multiple jobs in my quest to switch careers and found that support varied from place to place. One thing I did find helpful was to be open about what I was trying to do, once I knew what that was. It could be that you decide to do this while you are working at a job that won’t support you. That happened to me. The only solution was to find a job that would support me. Although it sounds drastic, I found that once I was in a more supportive workplace, everything was easier.

Talk to a Career Counselor

This option is pricey and, sadly, not available to everyone, but I found it to be worth every penny. Tech is so broad and there are so many different types of developer jobs. This also helped to settle the question of whether this is really what I want and not just something I’m chasing because the developer job title is power.

[Mini-rant: Personally, I’ve found that so many people tell me I would make a great product manager and maybe that’s true, but the job that I do every day needs to be in line with my heart’s desire and not other people’s perceptions of my talent. Having worked with some brilliant product managers, I’ve seen what that work involves and I know it’s not what I want. Also, I suspect that women are often steered towards the role of product manager for reasons rooted in cultural bias.]

Talking to a career counselor helped me prioritize what I was looking for in a job and a company. As an example, I found out that I care less about being emotionally invested in the problem I am solving. I care more about how I am building software. This helped me shift focus away from companies building products to consultancies focused on pairing and TDD.

Build something

Once you’ve done some workshops and are comfortable with your editor, it’s time to build something! The thing doesn’t have to be particularly original or even interesting. In fact, you might simply rebuild something you made in a workshop, with a name change or a tweak. Lately, I’ve built several versions of the same Rails app, but with a different name every time. Every time I’ve rebuilt it, I’ve learned something new or deepened my understanding in some way. At some point you will start thinking about building a thing you can release. This will help you when you start interviewing because it gives you something to point at and say, “I built that.”

Let other things go

I’ve just had about 6 months of not doing a lot of things I wanted to do. Once you are in the serious prepping for interviews stage, it’s time to Stop Doing Other Things. This is a painful but necessary step. Women are already pressured at every turn to take on more activities than we really have time for and to always be helpful and willing to sacrifice our time for someone else or their cause.

I found getting ready for tech interviews the ultimate excuse for taking a look at my responsibilities and removing myself from a lot of them. There were a couple of weeks when all I did was send out emails and talk to people to extricate myself from commitments. It wasn’t fun, but it created the space I needed to proceed with deep prep.

Mock Interviews and Interview practice

I had former co-workers and friends mock interview me several times. In between, I would work on the feedback I received after the mock interview. This helped me strengthen and tweak what I was doing. In fact, since I focused on pairing interviews, it made me a better pair. Also, be sure to ask about what you did right. It is such a confidence boost to know that you have some things covered and don’t need to worry about them in an interview situation.

Don’t find a mentor — ask questions!

There is a general refrain I have heard over and over at different tech events: “find a mentor!” I wish people would say, instead, “keep asking questions!!” The truth is that you will need many people, some of whom will be more involved than others. Rather than explicitly looking for and asking people to mentor you, focus on reaching out with questions. You will need to practice and you will need to reach out and ask questions even if you are tired, even if you are shy and even if you don’t want to. At the other end of that reaching out, you will find people who will only half answer your question or people who prefer answering with their own hubris rather than the information you need, but you might, if you’re lucky, find someone who not only answers your questions, but will tell you to ask a few more. That is a mentor.

A good mentor will answer your questions and guide you in the right direction. A great mentor will be there for you many times in ways you didn’t even know you would need them. They will give you confidence when you don’t have any left and they will help you shape your career path in a way that best suits you. This is not a relationship you’ll be able to trivially go out and find just anywhere, but if you end up with one of these, thank your lucky stars and your mentor for it is rare and a true gift.

Don’t give up

There were so many days when I looked at my broken code and thought, “why am I doing this?” Especially if you are not at the beginning of your tech career, you will have some days when you seriously question this strategy of switching to developer. Currently, in San Francisco, there is an ongoing parade of people marching through coding bootcamps and straight into jobs. This is discouraging to watch if you are someone who can’t afford to leave your current job for 3–6 months. Also, if you are older, especially in the Bay Area, it’s easy to look around and draw the wrong conclusion that development is for the under-30 crowd. Development should be for people who find it interesting and who are curious.

Curiosity is one of the most valuable skills a tester can have. In fact, I’ve told people for a long time that curiosity is necessary for great testing. The thing about curiosity is that it doesn’t have edges. This is the root of being someone who is engaged and who cares about what they do at work. If you are a tester and you are curious about where else you might fit into tech or where else you might find a challenge, you should follow that instinct and tech (and also other software testers) should be supporting you.

What is quality? What is art? Part deux

I’m so appreciative of the discussion that developed from my previous post. I could see that people commenting were really digging deep, so I decided to address some of what was said in this follow-up post.

Here are some of the comments about the definition of quality:

Michael Bolton shared his perspective on Jerry Weinberg’s definition: “To be clear, Jerry’s insight is that quality is not an attribute of something, but a relationship between the person and the thing. This is expressed in his famous definition, ‘quality is value to some person(s).’ ”

Rikard Edgren’s definition: “Quality is more like ‘good art’ than ‘art’, but anyway: I can tell what ‘quality to me’ is when I see it. I can tell what ‘quality to others’ is when I see it, if I know a lot about the intended usage and users.” Rikard also wrote a post where he clarifies his position a bit.

Andrew Prentice wrote about what he feels is missing from Weinberg’s definition: “I like Weinberg’s definition of quality, but I’m not convinced that it is sufficient for a general definition of quality. Off the top of my head I can think of two concepts that I suspect are important to quality that it doesn’t seem to address: perfection and fulfillment of purpose.”

The definition of quality that I learned is from Stephen Kan’s book, Metrics and Models of Software Quality Engineering. Interestingly, Kan shows a hearty and active disdain for what he calls the “popular” definition of quality. “A popular view of quality,” he writes, “is that it is an intangible trait—it can be discussed, felt, and judged, but cannot be weighed or measured. To many people, quality is similar to what a federal judge once commented about obscenity: ‘I know it when I see it.’” Sounding familiar, no? Here is where the pretension begins to flow: “This view is in vivid contrast to the professional view held in the discipline of quality engineering that quality can, and should, be operationally defined, measured, monitored, managed, and improved.” Easy, tiger. We’ll look at this again later.

Jean-Leon Gerome’s painting of Pygmalion and Galatea brings this discussion to mind. This is a link to the myth of Pygmalion and Galatea.

I’ve seen this painting in person, at the Met. Interesting to note is that the artist painted himself as Pygmalion. (And I like listening to “Fantasy” by The xx while I look at it.)

The relationship in this painting is not limited to the one between Pygmalion and Galatea; the viewer is drawn into the relationship as well, and the artist himself is also participating. In this painting, Pygmalion has been completely drawn in by his own creation. The artist was so drawn in by the story that he painted himself into it. I was, and still am, so drawn in by the painting that it is simply painful for me to tear my eyes away from it. It slays me. When I see it, I feel the painting. I guess you could say that emotion is an attribute of this painting, but in this case, I think it’s more. In this case, the emotion is the painting. Why else does the painting exist? Would this painting work at all if the chemistry were missing? I don’t think it would. What Gerome has accomplished here is the wielding of every technique at his disposal to produce a painting with emotion as raw, basic and tantalizing as the finest sashimi.

But there is more to this relationship than just the fact that Gerome has painted himself as Pygmalion. Let’s examine the relationships that exist in this painting and what they tell us. Starting with just the painting itself, we have the man and the woman locked in their embrace. They are surrounded by many objects. (I encourage all readers to click through to the Met’s website; if you double-click on the painting there, you can move around and zoom in and out to get a closer, more focused look.) What do you notice about all of the objects in the room? I’ve no doubt that some of you are wondering if these objects take away from the focus of the painting. If that were the case, if the painting consisted of only the man and the woman, how would we know that the man was an artist? So why do we need these particular objects? The painting could be restricted to just the hammer and chisel, so what’s with all the stuff? This is where our relationship with the painting deepens, should we choose to follow the breadcrumbs…

An overview of Gerome’s life clarifies his choices. As a young artist, he spent a year in Rome, which he felt was one of the happiest years of his life. At the time Pygmalion and Galatea was painted, Gerome was grieving the deaths of several relatives and friends. By surrounding himself with artifacts from his youth, the artist is traveling back in time to a younger, more “Roman”-tic time in his life. However depressed he may have been when he painted this, Gerome was also experiencing an artistic breakthrough in his sculpting career. Notice the breakthrough in the painting? Now that you know a bit more history, how do you feel about the painting? Does it change your perspective? This has made the painting very introspective for me. The emotion that flows from this depiction of romantic love is one of vitality and power. Perhaps Gerome is evoking these feelings as a way of tapping into his own creative powers. I remember thinking to myself when I first saw this painting at the Met, before I knew anything at all about it, “She is rescuing him.”

To describe quality as a relationship gives it a larger meaning and captures something neglected and dismissed by the literature of the “software crisis” era, e.g. books such as Stephen Kan’s. Is quality as a relationship mutually exclusive with quality as an attribute of software? I don’t agree with describing quality as just an attribute. To say that quality is an attribute de-emphasizes the holistic approach to quality I try to take, and for which I’m assuming Michael, Jerry Weinberg (going by his definition here only), agile, context, et al. are striving. (Full disclosure: I haven’t read any of Jerry Weinberg’s books. That does NOT mean they are not on my list. I just got out of school and the only thing I’m reading lately is visa paperwork, so give me a break here.)

The software we test has its creators, and it has an audience of users as well. Just as Gerome had his own relationship with this painting, developers know what they want to see, which leads to the building of their own relationship with the software they make. How does this affect the relationship between the software and its audience?

How does value fit into this? I value the painting because of how it makes me feel when I look at it. After the examination I did, I now understand why I value the painting. As someone who is constantly seeking artistic inspiration, I am happy to go where Gerome and his muse take me. What does this say for value in software? Does the relationship between an audience of users and software create value for the audience members whether they are paying guests or not? The more I dig into this definition, the more I like it because it allows for gatecrashers, those who we did not think would be using our software, but who may find it so invaluable, they become our software’s greatest fans.

I’m going to marinate on this while I think about the 2nd part of Andrew’s comment, namely, that Mr. Weinberg’s definition of quality does not address perfection and fulfillment of purpose. After all, Kan’s two definitions of quality of “fitness for use” and “conformance to requirements” are fairly widely accepted in software.

What are you thinking? Is there something missing from Jerry Weinberg’s definition? How does measurement fit into what I’ve been writing about if it fits at all?

I leave you to think about this and the painting above. If you haven’t already, take a few minutes to click through and take a good, honest, languorous look. Put down the twitter, the kid, the spreadsheet, the reality tv show. Take some deep breaths and give yourself a few moments alone with Pygmalion and Galatea.

to be continued…


Look Up, Don’t Look Down: Testing in 2010

[Image: “Goodbye Blue Sky” by -Alina- via Flickr]

This post reflects what I’d like to see for software testing in 2010. It is a purely selfish list. Most of what I’ve written about below will find its way into my blog over the next year. The list is not in any particular order, which is why I excluded numbers for each item. I’m just so damn excited about all of it. (And yes, I stole the title from TonchiDot.) Btw, I’ve changed my template, my “about” page and my blogroll.

How does my list compare with what you would like to see?

Testers get fed up with their massive tables of data and turn to visualization
Ok, so no surprise here, but I wouldn’t have picked it for a thesis if I didn’t think it was important. Testing meta-data is all around us, and we’ve yet to fully make sense of it. What is it trying to tell us? Even if we don’t want to boil everything down to a single metric, that doesn’t mean the meta-data, or the secrets it keeps, is going away. In reality, we will only have more meta-data. The challenge lies not only in getting our data into a visualization but also in knowing what and how to explore without wasting time. When should we use a scatterplot vs. a treemap vs. a plain-and-simple bar graph? This goes way beyond anything the Excel wizard will tell us, but that doesn’t mean we won’t need a little magic.

Functional Programming Shows Up on Our Doorstep
I’ve been seeing devs tweet about FP all year, and I’m quite jealous.  If a dev gave you unit tests written in Haskell or Erlang, what would you do?  Testers aren’t the only ones with meta-data overdrive.  Our massively connected world is producing too much info to be processed serially.  Get ready for an FP invasion.  Personally, I’m looking at Scala.

Weekend Testing Spreads
Indie rock fans will smell BS if they see an indie rock countdown for 2009 without Grizzly Bear (had to work it in somehow).  Weekend Testing is obviously the Grizzly Bear of Software Testing for 2009 and their momentum sets a blistering pace.  Markus Gaertner has just announced that it’s expanding to Europe and I’m certain it will spread across the Pacific as well.  This is a bottom up method for learning how to test, and I hope that instructors of testing take note.  I am no expert at testing and want to do whatever I can to set the bar as high as possible.  Hey Weekend Testers, count me in!

Testers who don’t blog start to care about their writing skills
With an emphasis on tools that get software process out of our frakking way, we’ll be left with our writing. Ouch. What’s a comma splice? Hey, I’m going for my Strunk & White. All the great collaboration tools in the world aren’t going to help us if our writing skills suck.

Links Between the Arts and Software Testing Will Be Strengthened
Chris McMahon started us off with his chapter in Beautiful Testing. Shrini Kulkarni blogged about learning the power of observation by looking at art. I’ve been reading about exploratory analysis using data and visualization. By the end of the year, I want software testers besides those of us who self-identify as arty or musical to be talking about why arts education is vital for being a good software tester.

More testers start to care about understanding the fundamentals of measurement and the basics of statistics
Think fast: what is the difference between a ratio and a proportion? When does the mean not tell an accurate story about a set of numbers? It’s very clear that there are some serious pitfalls in the usage of metrics. What I haven’t seen are lots of testers with a thorough understanding of basics such as levels of measurement or what a distribution will tell you. I wonder how many testers back away from using these because they don’t understand exactly how they can be harmful, or because they just don’t understand how they work in the first place. One assignment I’ve given my blog for the year is to tackle some basics as applied to testing. Rejecting metrics because you see how they can harm is one thing; rejecting metrics because you don’t understand them is unfortunate. If you count yourself as a tester who is not totally comfortable with math, you’re not alone and, believe me, I understand how you feel.
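To make the mean pitfall concrete, here’s a tiny sketch (all numbers invented) where the mean hides the typical value:

```python
# Hypothetical bug-fix times in hours for ten defects; one outlier.
from statistics import mean, median

fix_times = [1, 1, 2, 2, 2, 3, 3, 3, 4, 40]

print(mean(fix_times))    # 6.1 -- dragged up by the single 40-hour defect
print(median(fix_times))  # 2.5 -- much closer to a "typical" fix
```

Report only the mean here and the team looks nearly three times slower than it usually is. The distribution tells the story; a single number rarely does.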

Collective Intelligence Comes into Play
If I had my way, this list would be vote-able and each reader would have the ability to vote items to the top or bottom. Wouldn’t that be interesting? Unfortunately, I don’t have that…today ;o) But we’re so close! If we’ve got the technology together to analyze the hell out of our blogs through web analytics, what about our tests? I’m picturing myself writing out tests in a wiki with a zemanta-like tool suggesting tests from similar stories that have previously caught bugs. I might not always use these suggested tests, but it would be a great help for brainstorming.
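Here’s one way that Zemanta-like suggestion idea could work, as a toy sketch (all story names, tests, and the similarity threshold are invented): rank past stories by word overlap and surface the tests that caught bugs on the most similar ones.

```python
# Toy "suggest tests from similar stories" sketch using Jaccard word overlap.

def jaccard(a, b):
    # Similarity = shared words / total distinct words across both stories.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Hypothetical history: story -> tests that previously caught bugs on it.
past_stories = {
    "user login with expired password": ["test lockout after retries"],
    "export report as csv": ["test embedded commas in fields"],
}

def suggest(new_story, threshold=0.2):
    tests = []
    for story, bug_catchers in past_stories.items():
        if jaccard(new_story, story) >= threshold:
            tests.extend(bug_catchers)
    return tests

print(suggest("user login with new password"))
# ['test lockout after retries']
```

A real version would need smarter text matching, but even this naive overlap shows how past bug-catching tests could surface as brainstorming prompts while writing a new story.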

I’ll have an Open Source Project Up and Running for Visualizations to be Used with Testing
This is not a resolution, it’s something I didn’t finish from last year. I am just so late on this. Oh well, giving myself a conduct cut. Seems I had a little conference talk to deal with which quickly morphed into a little talk at Adobe, followed by a little talk at Microsoft. Needless to say, I’ve got some unfinished business that has to do with treemaps. The PNSQC experience was a semester in and of itself. Time to get back into the visualizations.

It’s not lost on me that my last few posts have been sort of personal and high-level. I’ve had big changes and events happening in my life, which has made maintaining focus, well, difficult. You’ll hear all about it soon enough. Trust me, it’ll be good.


PNSQC Slides and Paper Are Up

My thesis defense is tomorrow which is why I haven’t posted in a couple of weeks.  If all goes well, I’ll be posting a link to that in the next week.

This is just a quick post to say that my paper and presentation have been added to the PNSQC website, along with everyone else’s papers and presentations: click here and have fun exploring.

Since I don’t read from powerpoint slides, you won’t find much in the way of explanatory verbiage in the slides, but I’m happy to answer questions if there’s something you’d like me to clarify. Ideally, you would download the paper and read it while you have the slides up. They told me to make all of the pictures for the paper extremely small and grayscale…sigh. Kind of kills the whole visualization aspect of my paper, but I understand they had good reasons for asking. This is exactly why Edward Tufte took out a second mortgage on his house and self-published his first book. I would write more about that because it’s a post in and of itself, but my recursion ain’t workin’, gotta go!

Visualizing Defect Percentages with Parallel Sets

Prof. Robert Kosara’s visualization tool, Parallel Sets (Parsets) fascinates me. If you download it and play with the sample datasets, you will likely be fascinated as well. It shows aggregations of categorical data in an interactive way.

I am so enamored with this tool, in particular, because it hits the sweet spot between beauty and utility. I’m a real fan of abstract and performance art. I love crazy paintings, sculptures and whatnot that force you to question their very existence. This is art that walks the line between brilliant and senseless.

When I look at the visualizations by Parsets, I’m inclined to print them off and stick them on my cube wall just because they’re “purty.” However, they are also quite utilitarian as every visualization should be. I’m going to show you how by using an example set of defects. Linda Wilkinson’s post last week was the inspiration for this. You can get some of the metrics she talks about in her post with this tool.

For my example, I created a dataset for a fictitious system under test (SUT). The SUT has defects broken down by operating system (Mac or Windows), who reported them (client or QA) and which part of the system they affect (UI, JRE, Database, Http, Xerces, SOAP).

Keeping in mind that I faked this data, here is the format:

DefectID,Reported By,OS,Application Component
Defect1,QA,MacOSX,SOAP
Defect2,Client,Windows,UI
Defect3,Client,MacOSX,Database

The import process is pretty simple: I click a button, choose my csv file, and it’s imported. More info on the operation of Parsets is here. A warning: I did have to revert to version 2.0. Maybe Prof. Kosara could be convinced to allow downloads of 2.0.

I had to check and recheck the boxes on the left to get the data into the order I wanted. Here is what I got:

[Image: see the highlighted defect.]

So who wants to show me the pie chart they think is perfectly capable of showing this? Oh wait, PIE CHARTS WON’T DO THIS. A pie chart can only show you one variable. This one has four.

This is very similar to the parallel coordinate plot described by Stephen Few in Now You See It, and it shows Wilkinson’s example of analyzing who has reported defects. She was showing how to calculate a percentage for defects. See how the QA at the top is highlighted? There’s your percentage. Aside from who has reported the defects, Parsets makes it incredibly easy to see which OS has more defects and how the defects are spread out among the components. If I had more time, I would add a severity level to each defect. Wouldn’t that tell a story?
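For anyone who wants the numbers behind the picture, here is a small sketch of the same “who reported it” percentage that Parsets shows interactively, using the invented CSV format from earlier in this post:

```python
import csv
import io
from collections import Counter

# The same invented defect data from earlier in the post.
data = """DefectID,Reported By,OS,Application Component
Defect1,QA,MacOSX,SOAP
Defect2,Client,Windows,UI
Defect3,Client,MacOSX,Database
"""

rows = list(csv.DictReader(io.StringIO(data)))
by_reporter = Counter(row["Reported By"] for row in rows)

for who, count in by_reporter.items():
    print(f"{who}: {count}/{len(rows)} = {count / len(rows):.0%}")
```

Swap `"Reported By"` for `"OS"` or `"Application Component"` and the same three lines answer the other questions, though without the interactive tracing that makes Parsets so pleasant.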

Parallel Sets is highly interactive.  I can reorder the categories by checking and unchecking boxes.  I can remove a category by unchecking a box if I wish.

[Image: I took away the individual defects.]

By moving the mouse around, I can highlight and trace data points.  Here I see that Defect 205 is a database defect for Mac OS X.  Although I didn’t do it here, I bet that I could merge the Defect ID with a Defect Description and see both in the mouse over.

[Image: see the highlighted defect.]

Parallel Sets is still pretty young, but is just so promising.  I’m hoping that eventually, it will be viewable in a browser and easier to share.  Visualizations like this one keep me engaged while providing me with useful information for exploratory analysis.  That’s the promise of data viz, and Parallel Sets delivers.

Automated Test Confessions

My life as a tester is evolving and I’m feeling less like a newbie. I’ve also had yet another “James Bach” moment. This time, a friend of mine forwarded me an article her husband had read and passed along to her. He’s a developer who, I guess, is going through the whole “unit testing: what does it all mean?” phase of life. The email contained a few links. Among them was James Bach’s paper from 1999, “Test Automation Snake Oil.” As I read through what I now know is a classic, I realized I’d been recognizing some of what Bach writes about in my own tests. His paper crystallized much of what I’ve come to think about them.

At this point, I’ve been a software tester for about two and a half years. From my perspective, this is not a very long time. The past year, however, has been insanely intense for me intellectually and academically. There have been many times during the past year when I have felt myself back in the Interdisciplinary Studies program I took as a freshman and sophomore at Appalachian State University. We were given 100+ pages of reading per night, ranging all over the humanities and sometimes the sciences. This reading was in addition to lectures and other “programming” we were expected to attend. Between the Software Engineering classes, the job as a software tester and the runaway fascination with data visualization, I’ve put myself through a similar gamut of reading and working. The result, for my job as a software tester, is that I am not the tester I was last year.

At all.

Previously, I was really smitten with HP Quality Center because it gave me structure for which I was desperately searching.  This was a great improvement over the massive, disorganized and growing spreadsheets surrounding me that contained all of my test information.  All of my tests could finally be organized, and thanks to the HP online tutorial I knew my tests were organized well.  I felt liberated!  Now I could stop concentrating on how the tests should be organized and concentrate more on the actual testing itself.

This led to the realization that there was NO WAY I would EVER be able to test EVERYTHING.  I was frustrated.  Why were my test cycles so short?  Why did I always feel like a bottleneck?  Was I not good enough at testing?  Was I not fast enough?  “I must find a way to test faster,” I told myself.

After attending the 2008 Google Test Automation Conference, I turned to unit testing and automation.  I mean, I can write code.  It doesn’t scare me at all.  This doesn’t mean that I’m great at it, but I enjoy it enough to spend significant amounts of time doing it.  I decided to use my coding skills to write repeatable tests that could be run over and over and over again.  After all, I’m pulling my group, by the hair, towards automated builds, and smoke tests have to be automated.  Business just LOVES these.  I was told that having automated tests was making my group look really good.  I came out with my system test automation framework written with bash shell scripts and awk and felt so “smaht.”  Never mind that I didn’t fully vet my framework the way I do the system I test.  Never mind that certain pieces of our system are not stable and can change drastically from one release to the next.  I just knew there was a big green button at the end of the automation tunnel.  I pictured myself pushing CTRL-T.
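Something in the spirit of what I built might look like this. To be clear, this is a toy sketch, not my real framework: the function name, the commands and the expected strings are all invented for illustration.

```shell
#!/usr/bin/env bash
# Toy sketch of a bash-and-awk smoke-test runner (invented names throughout).
# Each test is a command plus a string we expect to find in its output.
set -u

run_smoke_test() {
  local name="$1" cmd="$2" expected="$3"
  local output
  output=$(eval "$cmd" 2>&1)
  # awk does the matching: exit 0 if the expected string appears anywhere
  if printf '%s\n' "$output" | awk -v want="$expected" 'index($0, want) {found=1} END {exit !found}'; then
    echo "PASS: $name"
  else
    echo "FAIL: $name"
    return 1
  fi
}

# An illustrative "suite" entry
run_smoke_test "echo works" "echo hello world" "hello"
```

The seduction is obvious: one big green button, a neat PASS/FAIL list at the end. The fragility is less obvious, which is the next part of the story.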

Then I started using my creation.  When I realized how fragile my system was, all I could do was sigh and shake my head at several tests my system was telling me had passed even though I knew they had F-A-I-L-E-D.  Not only had they F-A-I-L-E-D, they were false positives.  Maybe you’re thinking, “well this must be what happened to her last year.”  Uh…no.  This was about three months ago.
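Here is a contrived illustration, invented for this post, of how a check like mine could report a pass it shouldn’t: a harness that trusts exit status alone will happily say PASS while the output is screaming otherwise.

```shell
# Contrived stand-in for a system under test (function and messages are
# invented): it prints an error but still exits 0, as many tools do.
generate_report() {
  echo "ERROR: report incomplete"
  return 0
}

# A harness that only checks the exit status reports a pass anyway.
if generate_report > /dev/null; then
  echo "PASS"   # the "green" result I was seeing
else
  echo "FAIL"
fi
```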

Now that I realize the fragility of automation, I feel a weight on my back.  Even worse, because this automation is perceived as such a “win,” I have fears that my fragile tests will propagate and turn into the suite of tests Bach describes in Reckless Assumption #8:  tests that maintainers are scared to throw out because they might be important.  I’ve also realized that while I was spending so much time on automation, there was something I forgot.  I forgot that I’m supposed to be TESTING.  This scared me the most.  After all, if I’m not concentrating on assessing my SUT because I’m spending so much time on automating my older tests, how am I really benefitting this project?

Thus, this paper of James Bach’s landed in my mailbox during a very interesting time in my life as a tester.  I feel like I’ve been through this whole evolution over the past year of realizing the power of automation, wanting to automate everything and then realizing that I can’t automate absolutely everything, nor should I.  These realizations triggered an identity crisis.  Am I a developer who is writing tests or am I a tester who likes to develop?  I decided that I am definitely the latter, and that I need to back off the hardcore automation for a bit in favor of re-examining my SUT as a manual tester.

My group has recently completed a rather large release, and we’re testing more incrementally.  I have fewer features to test with small releases, so I’ve put down the automation for at least the next couple of cycles in favor of straight-up manual testing.  I printed out every set of testing heuristics I could find, and have been reading through them to find the most appropriate heuristics for my tests.

What has this meant for my testing?  There has been both good and bad.  The worst is that Quality Center utterly breaks with this process.  I am convinced that Quality Center was not designed for a human being engaged in the cognitive process of exploratory analysis for testing.  (My last post was about exploratory analysis.)  I think that Quality Center was designed exclusively for the Waterfall process of software engineering.  To be clear:  that is not a compliment.  Another downside is that I have had times when I have been looking at the screen thinking, “what’s next?”

The biggest advantage is that, of the bugs I have found, far fewer have been trivial.  Once I removed all thoughts of test automation from my working memory, I have found that much more of my working memory is focused on the process of exploring and testing.  I’ve been living through the observation that, “a person assigned to both duties will tend to focus on one to the exclusion of the other.”

The most memorable paragraph in Bach’s paper is at the end.  He describes an incredibly resilient system of mostly irrelevant tests.  That’s what I was building.  I will probably be automating less, but I’m confident that the automation I write will be more relevant.

Underpants Gnomes Among Us: Exploratory Analysis for Visualization and Testing

Here’s a picture of tester dog, Laika, with Dr. James Whittaker’s new book, Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design. It showed up on my doorstep last week, and is my first free testing book ever (thanks Dr. Whittaker!)

i can haz testr buk.
Tester Dog

In reading through Stephen Few’s new book, Now You See It, I came across a completely separate perspective on looking at graphics in an “exploratory” manner. I can literally hold a book preaching the value of “exploratory testing” in one hand and a book preaching the value of “exploratory analysis” in the other. They are the same concept. If you have ever wondered what interdisciplinary means, this is a great example of an interdisciplinary concept.

Stephen Few does a great job of explaining exploratory analysis with pictures:

where's the profit?
Exploratory Analysis

Half of the people reading this now understand the underpants gnome tie-in. For those who don’t get it, here’s a link to the original South Park clip (NSFW).

Jokes aside, I’m going to start with the picture, discuss what this says to me about testing and see if it meshes with JW’s definition of exploratory testing. I will then look at how this applies to visualization. At the end, the two will either come together or not. At this point, I’m not sure if they will. I’ll just have to keep exploring until I have an answer or a comment telling me why my answer is crap (which is fine with me if you have a good point).

Starting with the picture and testing: I’m assuming the “?” means “write tests,” the eyeball means analyze, and the light bulb is the decision of pass or fail. The illustration of directed analysis looks like the process HP Quality Center assumes. QC assumes you’ve written your tests and test steps, based on written requirements, before testing. Then you test. After you’ve tested, you have an outcome.

The second line for “exploratory” analysis looks like a much more cognitive and iterative process. This says that the tester has the opportunity to interact with the system-under-test (SUT) before formulating any tests (eyeball). After playing with the SUT, the tester pokes it with a few tests (“?”). At this point the tester may decide some stuff works and keep poking, or decide that some stuff has failed and write defects (light bulb). Chapter 2 of Exploratory Testing describes how JW defines exploratory testing: “Testers may interact with the application in whatever way they want and use the information the application provides to react, change course and generally explore the application’s functionality without restraint (16).” So far this is looking very similar.

Now that I’ve looked at how the exploratory analysis paradigm applies to testing, here’s how it applies to visualization. As an example visualization, I’m looking at a New York Times graphic, How Different Groups Spend Their Day. When I open this graphic, I can see that it’s interactive, so I immediately slide my mouse across the screen. I notice the tool tips. Reading these gets me started reading the labels and eventually the description at the top. Then I start clicking. The boxes on the top right act as a filter. There is also a filter that engages when a particular layer is clicked.

Few’s point in describing directed analysis vs. exploratory analysis is that in the wild, when we look at visualizations, we use exploratory analysis. It’s not like I knew what I was going to see before I opened the visualization. Few describes the process known as “Shneiderman’s mantra” (for Ben Shneiderman of treemap fame) in more detail, saying that we make an overall assessment (eyeball), take a few specific actions (“?”), then reassess (eyeball). Although Few doesn’t say that there is a decision made at some point in this process, I’m assuming there is because of the light bulb in the picture (84).

Recently, Stephen Few asked for industry examples of people using visualization to do their work. Some of the replies were from the airline industry, a mail order warehouse and a medical center. Software engineers should be included in this mix and apparently from page 130 in JW’s book showing a treemap of Vista code complexity, already are. Given that both use the same form of exploratory analysis, I can see why.

Exploratory analysis of software testing and visualization diverge, however, when you look at the scale of data for which each is effective. Visualization requires a large dataset. This could be multiple runs of a set of tests or, as in JW’s example, analysis of large amounts of source code. Exploratory testing as JW describes can occur at a high level such as in the case of a visualization or at the level of an individual test.

One thing my exercise has shown me for sure is that I have to read more of Exploratory Testing.

Training without a Net

Trapeze School New York Beantown at Jordan's F...
Image by StarrGazr via Flickr

Those of us who like to be actively involved in the meetings we attend surely notice the effect a giant flat screen presentation has on meeting conversation.  It can be stultifying.  In the case of training, the presence of slides on a flat screen is the equivalent of showing a really bad talk show like Jerry Springer.  Nobody learns or remembers anything unless there is a fight or petty squabble.

This week I had to train some of our system’s users on how to write usable bug reports.  I had an outline and an example that I thought was interesting enough to keep people awake and focused on the topic. In order to make the information stick, I decided to go without slides, and see what it got me.

Preparation
You would think there would be less to prepare, but in truth, you have to come up with something that will keep your audience occupied.  In my case, I put together a group exercise by creating a scenario involving a bug.

Tip #1: Don’t create a trivial scenario

My scenario was too far removed from our daily situation to ring true.  I could tell that some of the users felt I was wasting their time by having them work with an example they felt was “too simple.”  Once I noticed this, I threw it out and said, “let’s just talk in terms of our system.”  This seemed to make people more comfortable.

Tip #2: Have an outline IN REALLY BIG LETTERS
Chris McMahon suggested this over Twitter (thx!) and it did help.  The only problem with my outline is that I couldn’t see it very well without picking it up.  There wasn’t much on it anyway, so I could easily have enlarged the font.

Keeping Your Audience Awake
Chris also suggested that I move around and stay animated. Since I have natural talent as a drama queen, this is typically not a problem for me, but it is worth a mention. Raise your hand if you’ve seen a speaker able to put you to sleep merely with the narcoleptic power of their voice. There are also speakers who mumble, in which case you won’t be able to understand what they are saying even if their voice keeps you awake because it’s “nails on a blackboard.” In that case, maybe you really do need the slides. Am I getting too off-topic with this?

Keeping the Focus on Topic
Once the flat screen has been removed, you will find that people have stuff they want to get off their chests.  If you are the only person holding meetings without slides:  Guess what?   They will choose your meeting to unload.  I noticed people communicating more about our defect process than I had anticipated, and not necessarily in ways that I had planned.

Tip #3: Be flexible and open to some change in the agenda
Mid-way through our exercise, we had chucked my example and were discussing the pieces of our system that should be documented in describing the environment of a crash.  The users were talking to developers about the challenges they have in reporting their environment and I noticed some holes in our defect process. We were still on topic, but I let the users talk to us about what they typically see when they have problems running the system.

Tip #4: Don’t let meeting participants change your whole agenda
Since people were talking to each other and sharing information about our software process, the discussion was pretty intense.  I found myself circling back to my outline a number of times.  Some discussion was worthwhile, but obviously needed to happen in a separate meeting.  Sometimes attendees will resist moving on, but I find that a quick, “we’ll schedule another meeting, moving on to <next point goes here>…” will get the job done.

Would I do this again?
Absolutely.  Even though I write about and study visualization, there are times when we really do need to sit in a circle with the talking stick and communicate with each other.  In fact, even Prof. Edward Tufte recognizes that there’s no need to have the monitor on all the time.  In his lectures, he shows you the graphic, tells you what to look at and then TURNS IT OFF.

Training without slides is not for the faint of heart, but, in the end, I think my work colleagues respected the fact that I wanted them thinking through the material and not just gaping at a flat screen.

Plagues aren’t just for blog posts

Vibrio cholerae with a Leifson flagella stain ...
Image via Wikipedia

For the past couple of months, James Whittaker has been writing about the “plagues of testing.” As he’s been posting, I’ve been reading through a book about a real plague.

As software testers, we see a system from a perspective that developers and business types rarely, and may never, see.  We know our tests, we know how well they ran.  We know our system under test and which components are picky.  If you are like me, and have access to the code base, you also know the code.  In the case of both the system and its code, you know what should be better.  Sometimes this is not a big deal, but sometimes it is a warning that it’s time to polish the resume. I have not seen this, personally, but I know that there are testers who find themselves in this position.

John Snow was a doctor in this position. He was an expert in the new art of anesthesiology in 1850s London. He was also deeply involved in the study of cholera and concluded that it was a waterborne illness. This was very much counter to the prevailing theory of the time that cholera was passed through a sheer volume of stench, or miasma (I can hear Dr. Evil saying this word). Health authorities in London were so convinced that miasma, or extreme smelliness, was the reason for disease that they passed a law in 1848 requiring Londoners to drain their waste into a sewage system that deposited into the Thames river. Unfortunately, the Thames was also a main source of drinking water for the city. Snow knew all of this and could see a health crisis in the making.

In 1854, when a major outbreak of cholera erupted in a neighborhood very close to where Snow lived, he conducted a thorough investigation in order to prove his theory that cholera was passed through water. He had a list that detailed the names and addresses of 83 people who had died from cholera.  He also had an invaluable resource of detailed information about the neighborhood’s residents in the form of local clergyman Henry Whitehead.  While Whitehead tracked down and questioned everyone he possibly could about their drinking habits, Snow analyzed this data to find patterns in who had been drinking the water, who had died and, just as importantly, who had NOT died.  Not only were Snow and Whitehead able to convince the local parish board to remove the handle of a contaminated water pump, they knew their data so well that they were able to figure out the index, or original, case that had started the outbreak.

John Snow Map

After the outbreak had subsided, Snow put the analysis from his investigation together with a very famous map into a monograph that circulated among London’s health professionals.  His monograph slowly but effectively turned the tide of thinking among health professionals away from the miasma theory.  This and a very smelly Thames convinced authorities to build a new sewer system that drained into the sea.

Frequently, when I talk to people about data visualization, they ask me how I know what to visualize.  The Ghost Map, by Steven Johnson, illustrates this perfectly.  John Snow went from having a theory to proving his theory with visualization in a convincing way.  There’s no wizard or easy button for this one.  It takes knowing what you are trying to say and knowing, inside and out, the data you are using to prove your theory.  For testers, this means digging into test runs and test case organization.  Where are the tests that failed?  How many times did they fail?  How are they grouped together?  Does an object that makes one test fail make others fail as well?  If you know which tests are failing, what do you know about the code you were exercising?  How complex is it?  I know this is awful, but I would so go here if I thought it made a difference: who coded it, and do I respect their coding talent?  Even if I think they are solid, did they know what they were supposed to be doing?  You have to know exactly why you are telling the PM you think there is a serious problem and have a way to show it.  I lump business people in with doctors and politicians as having the same short attention span.  My blog post probably lost them at “James Whittaker.”
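A first pass at “how many times did they fail, and where?” doesn’t have to be fancy. Here’s a made-up example (the log format and names are invented, just to show the idea): a tab-separated results log and an awk one-liner that tallies failures per component, exactly the kind of grouping you’d want to know before building a chart.

```shell
# Invented results log: component, test name, outcome (tab-separated).
printf 'auth\tlogin_ok\tPASS\nauth\tlogin_bad_pw\tFAIL\nbilling\tinvoice\tFAIL\nauth\tlogout\tFAIL\n' > results.tsv

# Tally FAIL rows per component; prints one line per component with its
# failure count (order unspecified).
awk -F'\t' '$3 == "FAIL" {fails[$1]++} END {for (c in fails) print c, fails[c]}' results.tsv
```

Those counts are the raw material; the visualization is what makes them land with a PM in the five seconds of attention you get.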

Snow and Whitehead could not drag the dead people’s relatives in front of London’s public health community and force the doctors to listen.  These days, doctors’ short attention spans are because of insurance, but I’m sure there were other reasons back in the day.  A good visualization does not take a long time for the viewer to process.  That is its special power.  Snow’s map is much more concise than a table of 83 names and addresses along with their individual stories.  Visualization can quickly show your groups of tests that are failing.  It can show that severe defects are increasing, and not decreasing, over time.  Business may drive the ship/do-not-ship decision, but a good tester will know why a seriously ailing system is in so much trouble.  A great tester can effectively communicate this to a business team.

Test Patterns

This will be the next-to-last week of my design patterns class, and I’m working on my final project. We were told to pick some category of design pattern and to do write-ups of the patterns in our category. Some of the example categories were security patterns, anti-patterns and concurrency patterns. I chose test patterns so it would be reusable for work.

So far, what I’ve found is that “test pattern” can mean just about anything in testing. In fact, I question whether there is really a difference between “test heuristic” and “test pattern.” It’s all just ways of categorizing abstract testing concepts that can be reapplied in different scenarios, right?

I looked up test patterns in How We Test Software at Microsoft, whose authors have also defined some of their own test patterns. In HWTSM they pretty much refer the reader to a great, fat brick of a book titled Testing Object-Oriented Systems by Robert Binder. I know that this book is a brick because I’ve purchased it and have been losing weight by carrying it around when I’m not reading through it. (Maybe Oprah should try this.)

This book not only has test patterns, but categorizes the test patterns into several chapters. Included are Results-Oriented Test Strategy, Classes, Reusable Components, Subsystems, Integration, Application Systems and Regression Testing. As an example, the Integration chapter contains the patterns Big-Bang Integration, Bottom-Up Integration, Top-down Integration, Collaboration Integration, Client/Server Integration, and a few more.

As I’ve been schlepping through this huge book, I’ve noticed just how technical and detailed it is. This leads to my next question: how many people knowingly use test patterns as test patterns? It’s not like most of us in testing trained for this, and the only places I’ve found straight-up definitions of test patterns are the Microsoft post and this particular book. When I use Quality Center, it’s not like I’m separating out my tests by pattern or heuristic. Should I be? I’ve also read of testers who felt that their success was due to the fact that they weren’t following a pattern, but acting as a user. But then, isn’t that a pattern too?

I’ll post some of the stuff I’ve done for this project in a week or so. Very interested in what people think about using test patterns for testing.
