
Playing with REST and Twitter

At my new job, we all take on a particular specialization. One of my co-workers helps the rest of us learn about security testing while another champions the testing of internationalization. We share what we learn about our specializations with each other so that everyone benefits. When I was asked what I wanted my specialty to be, it didn’t take me 2 seconds to say, “I wanna specialize in API testing.” This means I need to know about REST (REpresentational State Transfer).

Here is the simplest explanation of REST I can muster: you use a URL to call some method that belongs to another application. So, basically, it’s using the basic HTTP verbs: GET, POST, PUT, or DELETE. The data that comes back is usually either XML or JSON.

When I was at the Writing-About-Testing conference, our host, Chris McMahon, knew that I was trying to brush up on REST, so he put together a few slides and did a talk for the group. While he was talking, I noticed @fredberinger tweeting about a slide deck for a discussion on the management of Twitter’s data. It was serendipitous because Twitter uses REST heavily.

Looking through the slides led me to googling about REST and Twitter, which led to a fun discovery. If you have a Mac, you have the “curl” command. Straight from the man page: “curl offers a busload of useful tricks.” At its simplest, curl is a command for getting data. What makes curl great is that it lets you submit requests that would normally require interaction with a few screens. For example, you can use curl to submit authentication credentials, and it will retrieve the response for you after you’ve submitted your request. It does all of this in one command line. In the world of shell scripting, since curl returns XML or JSON data, that data is easily saved off into an .xml or .json file.

This means that curl can be used to interact with Twitter’s API calls. Since I’m on Twitter way more than I should be (my husband will back me up on this claim), I thought this would be an excellent way to do some playing with REST calls.

This document from Twitter’s wiki has some examples of using REST with curl. It’s down at number 8. I am posting my own examples as well.

Example of a GET:

curl -u marlenac:****

This gets some of the most recent updates in my timeline. I noticed that the XML retrieved 17 updates at 11:00 pm in Sydney. I’m not going to post them all, but here is a screenshot of one status update from Andrew Elliott, a.k.a. @andrew_paradigm. There’s a lot more information here than I see in TweetDeck.

Status update in xml

Notice that the url above ends in “xml.” One aspect of REST is that you have to know which data formats are valid so that you can state the one you want in the request. I’m now going to make the same call, but change the ending to “json,” like so:

curl -u marlenac:****

This call produces what should be the same data, but in the JSON format. If you are not familiar with JSON, it’s just another data format. If you do not like the “pointy things” aspect of XML, you might prefer JSON. Here is a screenshot of the same status update in JSON format.

Status Update in JSON

The JSON that was returned did not have any line feeds. This makes it easier to parse, since there is only a need to look for braces and commas. I inserted the line feeds in the first half of the status so that you can make sense of the data format.
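
If you’d rather let a computer put the line feeds back, a couple of lines of Python will re-indent the compact JSON. The field names below are made up for illustration; they are not Twitter’s exact schema:

```python
import json

# A compact, line-feed-free JSON string like the one the API returns.
# The fields here are illustrative, not Twitter's exact schema.
raw = '{"id":123,"text":"testing from curl","user":{"screen_name":"marlenac"}}'

status = json.loads(raw)                # parsing only cares about braces, brackets, commas
pretty = json.dumps(status, indent=2)   # put the line feeds back for human eyes

print(pretty)
```

Once it’s parsed, you can pull out individual fields (like `status["user"]["screen_name"]`) instead of scanning the raw text.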

Just as it is possible to get data from Twitter with REST, you can also make updates from the command line using a POST command.

curl -u marlenac:*** -d status="It's a lovely day in Sydney"

A command line tweet

Here is the JSON that resulted from that update:

Update status JSON

This is obviously just the beginning of my playing around with REST. I now have to practice using REST with the Atlassian product I test, Confluence. While I was putting this post together, I was thinking about what I would want to test in these API calls. Since I am learning about this, I welcome input about tests that I could add to this list:

  • Check that the information retrieved from different data formats is the same.
  • Test the different parameters that are possible with each REST URL and make sure that they work. The API should only accept well-formed parameters, and if a parameter is unusable, it should return an informative error message.
  • Check that authentication works and that users can only make calls within their access rights.
  • Test the limits of how many calls can be made.
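
The first idea on that list can be sketched in a few lines of Python. The XML and JSON bodies below are made-up stand-ins; a real test would fetch the same resource twice, once with a .xml ending and once with .json:

```python
import json
import xml.etree.ElementTree as ET

# Stand-in response bodies; a real check would request the same resource
# in both formats and compare what comes back.
xml_body = "<status><id>123</id><text>hello</text></status>"
json_body = '{"id": 123, "text": "hello"}'

root = ET.fromstring(xml_body)
from_xml = {"id": int(root.findtext("id")), "text": root.findtext("text")}
from_json = json.loads(json_body)

# The two formats should carry exactly the same information.
assert from_xml == from_json
```

The comparison itself is trivial; the interesting testing work is in deciding which fields should be identical and which (like formatting-specific escapes) are allowed to differ.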

If you look through the Twitter documentation, you will notice that they place limits on the number of calls that can be made to some URLs. This is an indication that REST can be extremely useful for obtaining lots of data for the purpose of aggregating it. This aggregated data can make a much better data source for visualizations than, for example, comma-delimited data. One reason I haven’t been doing much visualization lately is because my inner Scarlett O’Hara has vowed that, “as Tufte is my witness, I will never use comma-delimited data ever again!” (You can laugh, and believe me, I understand that rules were made to be broken.)

For those who have read this entire post, here’s a special treat. If you are on a Mac, you may or may not be familiar with the “say” command. Oliver Erlewein turned me onto this during our recent Weekend Testing session. If you type: say “Shall we play a game?” your Mac will, vocally, ask you, “Shall we play a game?” There’s also a parameter that lets you choose the voice. I decided to try piping the REST calls I made using curl to the say command. I could “say” what happens, but that would ruin the surprise. ;) Have some fun with that.

A Desert Tale of Rocks and Ruins

Hiking into the Canyon

After attending the Writing About Testing conference in Durango (which you can read about here), I spent a few days in the desert canyons of Utah. Finishing a master’s degree, presenting at PNSQC, getting a new job and moving to a different hemisphere have all taken their toll. Thus, my challenge was to clear all the shit out of my head that’s been accumulating for the past 12 months.

I started out with a pack full of “stuff” I thought I would need to survive in the desert.  The picture to the left shows me as I’m hiking down into the canyon.  I’ve only ever backpacked in mountainous environments, such as the Appalachian mountains of the Eastern U.S., where there are plenty of streams and water is never a problem.  This time, I was hiking in a place where the environment is so harsh and water is so scarce, it is all but inaccessible for 2/3 of the year.  Thus, I filled up my pack with lots of water and the “stuff,” intent on keeping nature at bay.

The area of Utah I visited was formerly inhabited by a Native American people called the Anasazi. From what archaeologists can tell, the Anasazi were able to thrive for quite a while in a place modern-day humans would call unliveable.

For the first couple of days, I was very careful not to get my stuff too dirty.  I was worried about getting my camping equipment through customs in Sydney because they are strict about camping equipment being clean on re-entry.  As we hiked on, however, I quit caring and began to let the sand of the desert into my boots and quiet of the desert into my head.

The canyons were full of ruins and rock art.  The Anasazi are long gone, but the dry desert air has preserved many of their dwellings and art.  My host, Chris McMahon, calls it, “the museum of experience.”  We were able to walk right up to the art and look through the dirt for pottery shards (which we left in place).  I put myself in the shoes of an Anasazi artist as I sketched the figures I saw carved and painted on the sandstone walls.

You were here and so was I

The art in this desert was not on a canvas, and because its creators are so long gone, there is no way to know exactly why it exists. The Hopi, who are still very much alive, believe they are descendants of the Anasazi, so they probably have some very good ideas, but there will probably never be solid answers about the creation history of this art. I love that. Because I love questioning and imagining, I made up a thousand stories for every painting I saw. I devoured every paint stroke and compared it to every other paint stroke I saw. If the wind blew while I was looking at a painting, I questioned the direction in which it was blowing and whether or not the painter felt the wind coming down the canyon the same way I was feeling it as I observed the results of their labor.

One way to get a more intimate connection with any piece of art is to find your own way of reproducing or re-interpreting it.  I took a small sketchbook with me and sketched out a ruin and some of the paintings.  I’ve also sketched a few more paintings from photographs my husband and I took.  Although the Anasazi paintings are quite ancient, I found figures that were well-drawn by a practiced hand.  “I don’t know who you were, but you did some good work here,” I found myself whispering as my eyes looked over the figure you see in the photograph below.

Painted by a well-practiced hand

As I immersed myself in the evidence of an ancient culture, I began letting go of the present.  My mind began to wander past the 140 character restrictions of my everyday life and the ruins of my own existence.  As my thinking shifted, my physical needs changed as well.

By the third day, I had taken anything I hadn’t yet used or worn and sheepishly stuffed it into the bottom half of my pack.  Why did I bring 2 pairs of pants?  I guess I thought the desert would be cold (ha ha).  I also didn’t need my rain jacket.  There were some other items I also didn’t need and they were much on my mind as I dragged it all on my back through the heat and the sand.

Two-Story Ruin

My husband and I were going through a similar dilemma with our tent.  On night one, we set up the tent and the rainfly we had brought.  The second night, we ditched the rainfly because it was hot, even at night.  On our third and last night we finally came to our senses and slept outside, not even bothering to pitch the tent.

I’ve always had a fear of night creatures when I’m camping.  I have heard bears and mountain lions growl at night.  On one trip, an animal brushed against our tent and I couldn’t go to sleep afterward.  Outside, in the canyon, I lay awake in my sleeping bag, watching the light  cast by the moon on the canyon wall across from our campsite.  I thought about why my fear had vanished.

Aside from the environment itself, there is nothing to fear in the desert.  Everything except for the animals small enough to subsist on the bare minimum of water has already fled.  I had succeeded in leaving all of my anxieties about nocturnal predators and life itself behind. I went to sleep in the light of the moon, picturing the motion of hands making brush strokes over warm sandstone.

We hiked out the next morning.  We had been hiking through sand for most of our trip, and my feet were relieved to finally feel the earth pushing back against them.  As we ascended out of the canyon, thoughts about work and “real life” began to come back.  I made peace with them as we drove away from the wilderness area.

This trip allowed me to gather my strength, and I felt fearless as I left the desert.  It was the same feeling I’ve had before when I’ve ascended from a cave on a rope.  Once you’ve crawled out of a 90 foot pit with nothing holding you up except for a rope, priorities shift and you remember what you really care about in life.  The same holds true for slogging through mile after sandy mile in Grand Gulch.  Fear has departed.

A View of Grand Gulch

Post Context: Here are the tweets that led to my diversity post

Jolly Roger flown by Calico Jack Rackham
Image via Wikipedia

One rule I’ve had for myself since I started this blog is to do my best not to write posts in direct reaction to what someone else has said.  This blog is about me.  It is my writing.  It shows what’s on my mind.  There are very few situations in life when I can unabashedly and honestly say, it is all about me.  This blog is it.  There hasn’t been a post about diversity before my previous one because I’m not thinking about gender stuff most of the time.

I broke my own rule with my last post, and I did it for a good reason.  If you want to know why I wrote that last post, please take a look at this twitter transcript put together by Rick Scott.  He also saw the whole thing happen and has blogged his own reaction to it.

I was pretty angry as the tweets unfolded, but most of what I said stands.  The only tweet I would change is the one where I said the context school is a pile of crap. That was wrong of me. I know many people involved in the context school of testing who have much more fair-minded ideas about diversity and gender than those expressed by the Bach brothers in the transcript.  Unsurprisingly, James Bach has blocked me from his twitter account.  Jon Bach has also blogged his version of what happened on twitter.   I hope people draw their own conclusions based on the actual conversation rather than relying solely on one person’s account, and I do feel that it is important for people to have an opportunity to draw their own conclusions.  That’s why I’ve stayed pretty silent about this for the past week.  I love my blog and if I can’t write with respect, I don’t see any point in writing at all.

The irony in all of this is that I still think James Bach’s contributions to software and software testing are brilliant.  We do not, however, see eye to eye on diversity or even, as the transcript points out, workplace ethics.

Thus, I am officially hoisting my own pirate flag of agitator for women’s empowerment in technology.


My post about gender and diversity

Stump in Red Hills
Image by cliff1066™ via Flickr

This is a post I’ve put off writing on purpose because it’s not my favorite topic of discussion. That’s not because I feel shy about it; it’s just because people have usually already made up their minds on this particular topic, which makes the opportunity cost of the discussion high. I’ve noticed Lisa Crispin and others making valiant efforts to have this discussion in testing. I agree with her and the others involved that it is time.

We need to talk about the role of gender and diversity in testing.

I am tired of hearing about how my being a woman is important to the way I test.  It’s a poor definition of “woman” that I don’t believe holds up well if it’s really dissected.  There are many different ways to be a woman, and I’m not going to highlight all of them here.  I’ll just point out one stereotype that needs to go:  women have babies.  I don’t have babies and I don’t know that I’ll ever have a baby.  In fact, I have plenty of women friends who never want to have a baby.  Are we still women?

The flip side of this is that each person has differences that make them valuable on a test team.  Hopefully these advantages are obvious enough that there’s no need to go through that argument.  The problem arises when we start stereotyping individuals into monolithic groups that are actually quite varied.

Lately I’ve been thinking about this in terms of levels of measurement. Maybe that’s because the book I learned this from, Stephen Kan’s Metrics and Models in Software Quality Engineering, really goes for the throat in the examples used to illustrate levels of measurement.

Nominal: Classifying elements into categories.  Kan uses the example of religion by saying that,  “if the attribute of interest is religion, we may classify the subjects of the study into Catholics, Protestants, Jews, Buddhists, and so on.”

Ordinal: Ranking is introduced.  Kan writes that, “we may classify families according to socio-economic status: upper class, middle class, and lower class.”

Interval Scale: At this level, there are exact, standardized differences between points of measurement. Elements can be compared using addition and subtraction, which depends on having some standard unit of measurement. Kan uses a KLOC example to illustrate this one: “assuming products A, B, and C are developed in the same language, if the defect rate of software product A is 5 defects per KLOC and product B’s rate is 3.5 defects per KLOC, then we can say product A’s defect level is 1.5 defects per KLOC higher than product B’s defect level.”

Ratio Scale: This level is differentiated from the interval scale only because there is a true zero element present, which makes ratios meaningful: 4 defects per KLOC is twice as many as 2 defects per KLOC.

Where do humans fit on this scale?  The classifications we have for each other are nominal and ordinal categorizations, but I don’t think that the levels of measurement come anywhere close to defining the measure of human experience.  Gender is what I get thrown in my face because I happen to have a vagina.  Never mind the fact that I am the one earning the money in my family, I don’t have children and I don’t wear pink or even bake.

There is a nasty undercurrent in testing at the moment that tries to define me as a “woman tester.” There’s no need to look that hard if you want to find it. When I see this, I will call it out, and I will call it out loudly. I call it out because it undermines hard work done every day, and not just by the women who show up for their tech jobs. It also undermines the respect shown to women by hordes of male geeks who want things to be better. Guys: I hear you. I know you want me to feel happy and comfortable at work. I know you want more diversity in testing and technology. It means the world to me that you feel this way. I hope that we are far enough along with this problem that others, male, female, or transgendered, will call it out with me.

So if you are among those who think we all ought to be wearing badges announcing how great it is that we fit some cultural stereotype/straitjacket, I hope you take some time to rethink that stance. It’s a waste of time we could be spending on other problems in testing.


Here’s the story behind the testing dashboard tweets

Low-Tech Dashboard in Confluence

I can’t believe just how hungry testers are for the low-tech testing dashboard.  It’s a great illustration of how we are evolving in our desire for tools that do not push us around or deign to tell us how we should be testing.

At my last job, I ditched Quality Center for a test cycle and decided to use James Bach’s low-tech testing dashboard with my team’s incredibly unsophisticated dokuwiki. There was nothing expensive involved and certainly no rules… just enough structure, and more creativity in my testing. I loved it! By getting rid of Quality Center, I made my testing activities much more visible to the rest of my team, not to mention more creative in terms of exploratory testing.

I’ve been tweeting about it because we’re starting to use it for Confluence testing efforts. The reason I haven’t blogged about it yet is because I’m putting it in a paper for this year’s CAST. Those of you attending CAST will have the opportunity to see my full presentation. Because of the overwhelming interest, I thought I’d give y’all a few tidbits and a push in the right direction. If I get permission, I’ll post the full paper. If my arm is gently twisted, I might add a few more tidbits.

This is very easy to put together, can be free (as in beer) and is 115% customizable.

The ingredients:

A whiteboard or a wiki (I like Confluence and I’m not biased AT ALL.)

The low-tech testing dashboard pdf

Some idea of what *you* want to track in *your* testing

The most challenging aspect of using this dashboard is in deciding what you want to track.  My co-worker and I discussed it for a while and are still undecided on a few points.  Although the pdf suggests that putting this online is less than optimal, I think a wiki is a perfect leap.  I link all of my test objective pages to the components I list in the dashboard.  I also link important issues in the comments section. Thus, a wiki is shallow enough but has the ability to give added depth when and where it is necessary.

Mr. Bach’s dashboard is 11 years old, and thus, is a bit weathered but still very good stuff.  It’s ripe for a bit of botox here and there.  Those of you who decide to take this and “make it your own” are welcome to share how you’ve made changes or which areas of the dashboard could use a bit of tweaking in your environment.  I hope to see some dashboard photos on twitter. (There’s a meatloaf joke in there somewhere.)

Being a writer makes me a better tester

Durango and Silverton Narrow Gauge Railroad
Image by fritzmb via Flickr

This weekend, I’ve left the kangaroos and beaches behind for the mountains and red rock of Durango, Colorado.  I’m participating in the Writing-About-Testing peer conference being organized by Chris McMahon. In an effort to keep the organization cost and logistics light & easy, there are only 15 people.  We all write a blog or write presentations for conferences and are software testers.

At the beginning of this year, one of my predictions for 2010 was that software testers would begin to take writing skills more seriously. I hope that this conference is a means to that end. One of the unspoken skills in testing is our power to communicate. I can spend all day long finding problems with an application, but what happens if I don’t have the skills to let someone know, or to clue someone in as to why they need to care? Testers need good writing skills because:

  • We must be able to concisely state what is going wrong in a defect report, especially in the summary line. I’ve been reading the book “Don’t Make Me Think,” which is about usability testing, and one of its points rang true for writing defects: humans like to scan. This makes defect titles especially important. I’ve actually blogged this point before because I liked what the book How We Test Software at Microsoft had to say about the subject.
  • We use persuasive writing anytime we’re writing a communication to developers in an effort to convince them of the need to fix something we feel is especially important.  Persuasive writing is a skill unto itself.  My father is an attorney and has made a career out of persuasive writing.
  • Writing out tests or explaining how we’ve tested is an especially difficult and necessary task.  If you work in an environment where you are testing something that is under intense scrutiny, then this must be doubly important.  I consider writing tests adjacent to technical writing because we have to explain in very specific terms what we are trying to do, what happens when we do it and why we are trying to do that in the first place.

My writing experience at work involves each of the three points above and more.  If you’d like to see some examples of my writing, you can check out the Atlassian bug reports I’ve written thus far.

Although I keep time constraints in mind, I agonize over the title of every single bug and the repro steps I record probably much more than I should.  When I am describing my tests or writing test objectives, I make every effort to be extremely precise.  My bonus for this effort is that I am always finding new words I can use to describe the features I am testing and the testing that I do.   Am I a nerd if this makes me happy?  I frakking hope so!

Here’s a little known fact about myself:  I competed in a spelling bee for charity when I was in my twenties.  I participated with some co-workers from my job.  I remember that my team took home a ribbon other than first, but I don’t remember exactly what our placement was.  I do remember a conversation that I had with someone else on my team.  We must have spent 20 minutes or so talking about words that we like.  My challenge to every reader of this blog whether you are involved in testing or not:  find a new word and use it to make your own daily writing more descriptive and informative.

If I’m lucky you’ll leave me a comment and tell me about it.  I want to know about your new word, because I’d also like to use it ;)

Reblog this post [with Zemanta]

A Visualization of Comments from Jira and Confluence

A FedEx Express Airbus A310 taxis for takeoff ...
Image via Wikipedia

Last week, I happily participated in my first FedEx at Atlassian.

FedEx is a quarterly competition held by the company in an effort to keep its staff feeling creative and, hopefully, produce some exciting new functionality for Atlassian’s products. Pairing is also encouraged, so I worked with Confluence developer Anna Katrina Dominguez.

We produced a network visualization that shows where users of Confluence and Jira have left comments. I found something similar (and maybe a bit more, ahem, polished) in this blog post of Jeffrey Heer’s about his project Exploring Enron. He visualized which members of Enron’s staff had been sending each other emails.  My post from a few days ago explains why I chose this for my FedEx project.

I’ve done a write-up of our project on Atlassian’s website. It also includes a pdf of the visualization. The colors are pretty bad, and the UI designer was just in agony that we weren’t able to remove the outline around each shape. Since we went from having 0 lines of code and a vague idea of what we were doing to a simple implementation of a semantic triple store and a visualization that has some meaning in 24 hours, I’d say we did ok.

Python, and Anna’s skill at writing it, were impressive. Python is so well-suited to this type of project that I’m not looking forward to going back and re-doing some of this in Java. I want to, though, because I like the idea of building up some classes that handle semantic data. I’ve noticed that Python is frequently mentioned as one of the best languages for getting data together, and now I know why. We didn’t have to waste any lines of code setting up containers, and the data we got out of our test instances of Confluence and Jira was easier to process.
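
For the curious: at its simplest, a semantic triple store is just a collection of (subject, predicate, object) facts. This sketch is my own illustration of the idea, not the actual code from our FedEx project:

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Hypothetical comment data of the kind we pulled from Confluence and Jira.
add("anna", "commented_on", "CONF-1")
add("marlena", "commented_on", "CONF-1")
add("marlena", "commented_on", "JIRA-42")

# Who commented on CONF-1? Edges like these feed the network visualization.
commenters = sorted(t[0] for t in query(p="commented_on", o="CONF-1"))
print(commenters)
```

Python’s tuples and sets give you all of this for free, which is exactly the “no lines wasted on containers” feeling I mean.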

The one disappointment, aside from our My Little Pony colors, was REST. We didn’t get very far with using it and ended up grabbing data through XML-RPC instead. I couldn’t figure out if this was because we just haven’t used it that much or because we couldn’t get the data we wanted with it. It’s clear that I need to do some more poking at REST. I’ve got some links, Atlassian’s technical docs on REST, and O’Reilly Safari at my disposal. Looks like I might do some blogging about this.

If you’d like to see a few more of the FedEx projects from this round, you can look through the deliveries on this page.


Owning My Celeb-u-tester

Nada Surf taught me all I need to know about being Popular

I wrote this blog for quite a while with the attitude of, “I know that no one is reading my blog… whoopee!” It’s a very special feeling, being able to post whatever you want and knowing that no one is reading. My first posts were written as part of my independent study of the semantic web/web 2.0 and morphed into posts reviewing games as part of my summer semester video games class. (Hell yeah, I know you’re jealous!) After that, the blog languished as I slogged through a project management class that nearly killed me.

Something happened in that semester of project management.  I attended GTAC, and the blog turned into something real.  I met David Burns who writes The Automated Tester and is doing great things with Selenium and .NET.  I was so impressed with how seriously he was involved with his blog.  It rubbed off and when I returned, I started thinking of how I could get more serious about my own blog.

I knew that I wanted a new job and not in the city where I was currently residing.  I also knew that, unfortunately, my school has negative street credibility outside of Atlanta, GA.  If you want to debate me on that point, just ask the people around you, if anyone has heard of Southern Polytechnic State University. You won’t see very many hands.

I decided to blog everything that I did in school. At least I would be able to say, “Hi, prospective employer X. I’ve done really great things at school; here they are on my blog.” As part of this whole getting-more-serious thing, I moved my blog from Blogger to WordPress. I still had that “whoopee” feeling that no one was reading my blog. I just thought that people would be reading it later.

I kept working my way through classes, having fun with visualization and writing my thesis.   My advisor pushed me to submit my thesis to PNSQC.  I was very shocked when it was accepted.  I had no idea it would be chosen.  I honestly thought I would get the “thanks, but no thanks” email.  It made me nervous to think about presenting my work, but I figured I would just stand up in some tiny back, basement room filled with 2 people and stumble through some slides.  Then this post of Alan Page’s happened.

At this point, I had read most of Alan’s book, (no small feat considering I had classes requiring lots of attention) and decided it was my FAVE-O-RITE testing book.  I can’t tell you how many times I’ve used this book.  I actually did a report on Equivalence class partitioning as part of my Formal Methods class.  There is no way I can possibly overestimate how important this book was/is for me.  In fact, I was unable to post a review of it because the one I wrote was just so Fangirl it even made me want to barf.  To have one of the authors saying that he couldn’t wait to see MY presentation at PNSQC was enough to make me hyperventilate.  If your favorite tester called you up and told you how much they liked what you were doing, how would you feel?

I felt watched.  I couldn’t write for a week.  I jumped at every noise and kept turning around because I could feel someone behind me.  I was spooked.

Here’s the thing about knowing that people read your blog…you KNOW people are reading your blog.  I know that when I go into work tomorrow, my boss, several co-workers and my former neighbor will have read this post.

How would you feel if you felt your every word being read by someone who mattered to you in real time as you wrote them down?  Are you spooked yet?
I got over it which is a good thing because, at this point, not only have I appeared on Alan’s blog, I’ve made an appearance on Matt Heusser’s blog, Chris McMahon’s blog and had James Bach and Mike Kelly make comments on my posts.  I’ve even appeared in the Carnival of Testing a couple of times.  It was pointed out to me recently that my blog is number 38 on a list of software testing blogs.
So I now have an admission to make that might sound callous but is, in reality, very difficult for me:  People read my blog.  Not only do people read my blog, they like it! When my post shows up in their reader they actually take time out of their day to process what I’ve written.  They give me comments, they tweet about it, they write about it.  They tell other people to read what I’ve written.
My reaction to this has continuously been a huge forehead smack and a very loud, “WHO KNEW!!!!”  After PNSQC, I totally lost control and had a big, fat, happy cry about it over pancakes at Lowell’s in Pike Place Market as I watched the ferries chug past the window.  I mean, my make-up was smeared and everything.  I just couldn’t help it.
So what have I done with this success?  I could have tried getting a job at Microsoft or Google, but instead, I chose to take a job with Atlassian, a company I want to see succeed on a grand scale.  What does that mean?
I’ve moved.  To A-U-S-T-R-A-L-I-A.
This move has forced me to sit up and take notice of my own involvement in a community of well-known software testing bloggers.  In the past week, I’ve been reading about what I’ve missed:
It’s time for me to acknowledge my own role as a voice in the testing community.  I’ve ignored it and tried to pretend that no one reads my blog. I guess this was an effort to persist in the state of “whoopee!” but I’m at a point where I’m not sure that’s the most appropriate way to live my life as a blogger anymore.  I’m not an oracle or a buzzword-inducing, testing savant, but I seem to have a voice that people enjoy hearing.  The challenge, at this point, is for me to stay true to myself and understand where I fit in this mix of testing, technology and visualization.
That also means I need to recognize the gift I’ve been given and find a way to participate despite the distance I’ve created between myself and my fellow von Testerbloggers.  This wasn’t a challenge I anticipated I would be creating for myself when I moved, but it’s proving to be a tough one.  I had no intention of crumpling up all of the relationships I’ve developed over the past year or two and tossing them into the Tasman Sea, but I feel like that’s exactly what I’ve done.
Networks:  the best ones do not involve business cards and, once cultivated, are worth the effort of maintaining.
I’ve got no idea how I will make this work, but I’m committed to meeting this challenge I’ve created for myself, and I know that I can do it.

Putting Pieces Together: The Semantic Web & Data Visualization

NodeXL Twitter Network Graphs: CHI2010
Image by Marc_Smith via Flickr

Disclosure: I wrote this in February, but never posted it.  It’s one of those pieces I just sort of coughed up in 10 minutes and forgot about.  This past week, I’ve been rolling around in semantic web concepts.  I feel like writing about them but don’t want to leave y’all wondering what I’m talking about.

Twitter has become such a part of my professional life. It’s also extremely difficult to filter. Everyone who uses twitter has some algorithm for deciding when to follow, not follow or unfollow someone.

Since my blog template now includes the nifty little blue button up there in the top left corner, I’ve been getting followers from my blog. If this is how you decided to follow me on twitter…welcome to the party :)

Currently, when someone follows me on twitter, I have to go through a really awful, discombobulated process of deciding 1) are they a spammer? and 2) should I follow them back? Having to figure out these two things by hand is one manifestation that our regularly scheduled internetz is not working anymore. We need the semantic web and we need it now. We need data visualization and we need it now. We need the two to work together like peanut butter and chocolate and we needed it yesterday.

The Semantic Web:

I always link to this YouTube video of Sir Tim Berners-Lee. It’s there for a reason, so if you haven’t watched it yet, please have a look (I just watched it again). It is the vision of the semantic web. The reason why this particular problem can, I believe, only be solved by semantic search is that judging someone on twitter is so freaking hard. It’s not just who they know. It’s about their interests, it’s about what they are working on, it’s about how much they tweet and what they tweet about.

I am a tester, but I follow developers too. I enjoy following testers who are local but I also enjoy following testers who live in other parts of the world. I’ve got a few friends on twitter who I know in real life. The only celebrities I currently follow are celeb-u-testers, but I won’t embarrass them by calling them out here. In fact, there’s one guy I follow just because he’s an uber-nerd who is always tweeting about uber-nerd types of things. Much of it is over my head, but I find it’s a great way to keep my eye on the pulse of tech emanating from San Francisco.

What do these people have in common? Maybe some of them have a few things in common, some of them have a lot in common and some of them have nothing in common with each other but there is something about them that interests me.

Data visualization

At this point a network visualization would really be helpful. If you’ve ever looked at a family tree, you’ve seen something very similar to a network graph. Network graphs are all about showing and judging relationships.

A love match:

In this case, when I am notified of a new follower on twitter, I would really like to see that person’s network graph in relation to my own. However, and this is where it gets interesting, I don’t want the network graph to ONLY consist of people. I don’t just want to see who, I want to see why. Do people work in the same place? Do we share an interest in a computer language even if I am a tester and the other person is a dev? Were we both residents of Haus Berlin in Wuerzburg, Germany from Fall 1994 to Summer 1995? Does this person tweet when their nose hairs grow an inch (which, btw, is a really gross thing to tweet) or do they only ever tweet when they write a new blog post? (In which case they are probably already in my reader so I don’t need to follow them either.)
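As a thought experiment, the kind of comparison I’m wishing for could start as simply as set overlap. Everything in this sketch (the handles, the interests, the scoring) is invented for illustration; a real tool would pull this data from twitter and a semantic store:

```python
# Hypothetical sketch: sizing up a new follower against my own network.
# All handles and interests below are made up for illustration.

me = {
    "follows": {"chrismcmahon", "mheusser", "ubernerd"},
    "interests": {"testing", "data-viz", "rest"},
}
new_follower = {
    "follows": {"chrismcmahon", "spamfriend1"},
    "interests": {"testing", "rdf"},
}

def jaccard(a, b):
    """Overlap between two sets: 0.0 (nothing shared) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Score the follower on both who they know and what they care about.
people_overlap = jaccard(me["follows"], new_follower["follows"])
interest_overlap = jaccard(me["interests"], new_follower["interests"])

print(f"people: {people_overlap:.2f}, interests: {interest_overlap:.2f}")
```

It’s crude (real semantic-web tooling would use RDF triples and richer relationships than plain sets), but even this much would beat eyeballing a stranger’s timeline.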

The 17 twitter visualizations summarized by Nathan Yau on his Flowing Data blog dance all around the big picture of visualizing twitter. Most of them show network relationships between people, some of them show locality, some show the volume of tweets per hashtag, but none of them has found a way to integrate all of this information. The team that accomplishes this will be making a huge breakthrough.

This is a new level of complexity for me, and I’m guessing, many others as well. Unfortunately, these are not problems that can be solved with an email and a pie chart.  I’ve started working on using pieces that go into a semantic web application.  I’ve done this partly because I feel there is promise for testing here.  If you see me posting about non-testy stuff like REST or RDF, it’s because these are my attempts to fully understand how I can engage semantic web concepts for testing.  Exploratory testing through exploratory data analysis:  bring it!


Bugs 30% Off!!!

Update: If you’re reading this in a feed reader, the formatting just crapped out on me this time.  Click through and it looks much better.

Seeing a sign that says 30% off in a store might make me stop and take a look.  Of course, it depends on the store.   I am much more likely to stop and look if the store is Target vs. oh, let’s say, Versace.  Why is that?  Because I already know that anything Versace has at 30% off probably started out at thousands of dollars.

Look at the pie chart on the right.  Aside from the fact that its slices add up to more than 100%, how many people were involved in the survey? It could have been 3 or it could have been 300.
How is this relevant for testing?  Let’s look at some bugs.  30% of them are user interface bugs.  Are you thinking this is good or bad?  Actually, it doesn’t tell you much of anything at all.  Why?  Because, for starters, there’s no way to tell how many bugs we are talking about.
If there are 9 bugs total, that means 3 of them are user interface bugs.  In this case, the percentage doesn’t mean very much because the number is so small.   You can make a fancy pie chart out of that (it’s more likely your boss will) but it will be a meaningless pie chart that wastes everyone’s time.   If you’ve got a number that’s lower than about 30, there’s no point in using a percentage at all.  Just use a table.  There won’t be any flash, but your data will be clear.
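To see how jumpy a percentage gets at this scale, here’s a quick back-of-the-envelope calculation (the bug counts are invented):

```python
# With only 9 bugs, reclassifying a single bug as "user interface"
# swings the percentage by more than 11 points.

def ui_share(ui_bugs, total_bugs):
    """Percentage of the bug count that is user interface bugs."""
    return 100 * ui_bugs / total_bugs

print(round(ui_share(3, 9), 1))  # 33.3 -- "a third of our bugs are UI bugs!"
print(round(ui_share(4, 9), 1))  # 44.4 -- same 9 bugs, one relabeled
```

One bug moved the needle by a ninth of the whole pie, which is exactly why a plain table of counts is more honest at this size.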
But that’s not the only problem in this scenario.  The larger problem is that 30% says nothing about the size or complexity of:
- each bug
- the application
- the testing effort
What if I’ve got 49 bugs that are cosmetic and 1 bug that is causing memory leaks, data loss or volcanic eruptions in Iceland?  (Think of the children!!!!!)
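A tiny sketch of that scenario (the counts are made up, matching the example above):

```python
# Invented bug counts: a percentage breakdown by category says
# nothing about which bug actually matters.
bugs = {"cosmetic": 49, "memory leak": 1}
total = sum(bugs.values())

for category, count in bugs.items():
    print(f"{category}: {count} ({100 * count / total:.0f}% of total)")

# The chart would proudly say "98% cosmetic" -- and bury the one
# bug that loses your data.
```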
A percentage of a count that is too small is unnecessary, and a percentage used to obscure large numbers that deserve attention in their own right is an oversimplification.  I’ve been seeing a lot of percentages lately when the count is really low or when there is no count at all. When I see this, I immediately mistrust any other data included with the percentage.
Percentages are still very useful, but they must be used with care.  Just because they are highly scannable and easily processed by the human brain doesn’t mean they carry any meaning.  I think I once read a blog post of James Bach’s where he compared a test case to a briefcase: it might be big or small, but there’s really no way to tell what’s inside.  The same can be said for percentages.