Anatomy of Switching Selenium from RC to Webdriver

Selenium logo

If you’ve ever wanted to see what it looks like to switch tests from the Selenium-RC API to the Selenium-WebDriver API, this github pull request will give you a very good idea of the changes that have to be made.

There are a few reasons why these tests are a particularly good example of what happens.

  • We use page objects.  If there’s ever been proof that page objects help ease the pain of refactoring, this would be it.  We made significant changes to our page objects, but the changes made to tests are relatively minor.
  • We have over 100 tests covering many of the areas in addons.mozilla.org.  It’s all open source, so you can see the tests, the website they are testing AND the code of the website they are testing.
  • Our tests are written in Python, which is a pretty easy language to read and get to know.

A few highlights and bogans:

  • When you’ve got CSS locators for list items and you are using “nth”, you have to bump all of your numbers up by one because RC starts counting at 0 but WebDriver starts at 1.  (Insert expletives here.)
  • When assigning locators, the format changes and makes the flavor of your locator much more obvious than it was in Selenium-RC.  It’s not that the information is COMPLETELY MISSING in Selenium-RC, it just isn’t in my favorite font-type OF ALL TIME. (<3 <3 <3)
  • Instead of using get_css_count, you get the list of matching items and test its length (see the sketch after this list).
  • Native events don’t work as well on Mac, so we had to set up Windows VMs in our Selenium Grid.  This was not a trivial task and deserves its own post, which I will write if I get over the VM trauma of the past couple of weeks.
  • You can see Webdriver navigating across the different elements of a page which is really freaking cool!
  • You can write tests for hover elements.  We’ve got lots of these in addons at present, and Selenium-RC was preventing us from writing tests that covered them.  (Open the diff view and search for “hover”!)
  • Instead of working mainly with locators, Selenium-WebDriver uses web elements.  If you don’t know, these are DOM elements and have a bit more meat to them than locators.  Whereas a locator is a string describing a location in a web page, a web element encompasses a tag and everything within it.
  • Currently, we have to run the tests sequentially in Firefox even though they can be run concurrently in other browsers.  This is, apparently, not a new problem.  Fortunately, after talking with the Firefox devs, they are working hard on a fix for us which should benefit anyone wanting to run webdriver tests in Firefox.
  • There are lots of lovely methods which can be chained together to mimic user actions.   I know it’s a machine and not a person with a brain running these tests, but personally, I can’t wait to write a mobile test that uses the “double tap” method.  Cue Zombies!!!!
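If you’d like to see roughly what those changes look like in code, here’s a minimal Python sketch of a few of them. It isn’t taken from the AMO suite; the page and locators are made up, and only the WebDriver calls themselves are real.

```python
# A minimal sketch of some of the RC-to-WebDriver changes above.
# The page and locators are hypothetical; only the WebDriver API calls matter.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Firefox()
driver.get("https://addons.mozilla.org/")

# RC-style locator strings like "css=.addon:nth-child(0)" become
# (strategy, value) tuples, and nth-child counting starts at 1, not 0.
_addon_items = (By.CSS_SELECTOR, ".island .addon")
_first_addon = (By.CSS_SELECTOR, ".island .addon:nth-child(1)")

# Instead of get_css_count(), fetch the elements and take the length.
items = driver.find_elements(*_addon_items)
assert len(items) > 0

# Hover interactions use chained actions on web elements.
ActionChains(driver).move_to_element(items[0]).perform()

driver.quit()
```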

There’s much more to this change, but I’m dog tired and it’s Friday.  Knock yourselves out with that pull request!

Special thanks to the team that wrote all of that code: Teodosia Pop, Florin Strugariu, and Zac Campbell, with special guest-star appearances by Dave Hunt.


Zeitgeist

Seal birth 13 (2:32pm) by nutmeg66, on Flickr

(If you like to listen to music while you read my posts, I suggest “Violent Dreams” by Crystal Castles.)

 

A couple of weeks ago, I presented at PNSQC on not being a testing asshole.  Although my presentation was the end of a lengthy personal journey for me, there was a moment in the middle of the journey which is preserved in a blog post.

 

You see, I had to find my way through a wall of my own emotional trauma in order to stand up in front of a paying audience and say that IT IS NOT OK TO BE A TESTING ASSHOLE.  We all have moments like this when we reach the bottom of something we feel will be endless.  For me this moment is preserved in my post, “Let’s Destroy the World.”  There’s no way you would know, but I was a weepy mess when I pressed publish on that one, as I was letting go of some really awful things.  I found myself at a bottom.

 

When you’re at the bottom of a tectonic shift happening in your life, it is not uncommon to question.  Where am I? Who am I? Am I going the right way or will I spiral back down again?  I asked myself all of these questions and more as I pressed publish on “Let’s Destroy the World.”

Birth of Stars (NASA, Chandra, 10/7/08)

 

This is the point where you start looking for signs.  I once had a beautiful friend who decided to join a monastery, and for him, the sign was a white rose.

 

For me, “the sign” was this year’s announcement of the Google Test Automation Conference, “Test is Dead,” which was serendipitously posted 5 days after “Let’s Destroy the World.”  My friend and fellow testing blogger Chris McMahon called it Zeitgeist.  It should come as no surprise that I will be at GTAC this week.  (Hey attendees…us Mozillians are throwing y’all a party.  I’ll be the girl in the elevator between 7:00 and 8:00, wearing the red leopard print skirt.  ;)  )

 

There is plenty of criticism that can be heaped on those of us writing and presenting on the theme “testing is dead,” but as I wrote in my last post, I feel it as more of a transformation or rebirth.  There are people in this world who have no problem dealing with the messiness, chaos and defiance of logic that come with birth and transformation.  I suggest that if you’re so attached to logic that change is inconceivable, you suspend your belief for a few moments and play with what “could be” instead of what you insist upon as “the way.”  Not everything we dream of will remain, but it’s the best way I’ve found of clearing a path into the maze of the unknown.

 

With GTAC, I once more find myself racing into a labyrinth of unknowns and uncertainties, but I now know that this is where I live and feel most comfortable.  These are the people I work with, the people I play with and the people who feed my dreams.  You see, I don’t live in the future or the past, but when I look in the mirror I see them.  This week my mirror is GTAC.

 

Hey testers…how soon is now?

 

Apart from the upcoming GTAC, this post was inspired by an interview with William Gibson tweeted by @chris_blain.

Stunt Hamster Alert: Elisabeth Hendrickson will be giving a talk at Mozilla next Thursday!

© Quality Tree Software Inc.

There are testers I follow, there are developers I follow…and then there’s Elisabeth Hendrickson.  Aside from being a consultant for agile testing, Elisabeth is the founder of Entaggle.  When it comes to software and building it in a collaborative way, she knows.  When I was getting started in testing, her cheat sheet helped me out many times. (I think everyone in testing must use it at one time or another.)  When I met her at the first Writing About Testing conference, I didn’t tell her, but I was in a state of fangirl awe.  Elisabeth is a pragmatic wonderwoman of software, as evidenced in posts such as:

 

Testing is a whole team activity.

Agile Backlash? Or Career Wakeup Call?

Specialized Test Management Systems Are an Agile Impediment

Why Test Automation Costs Too Much

 

Next Thursday, October 6 at 12:30 pm (PDT), Mozilla is lucky to have Elisabeth give her talk “Lessons Learned from 100+ Simulated Agile Transitions,” which she previously gave at Agile Testing Days in Berlin.  The talk will be broadcast on air.mozilla.com, which means you are also invited.  If you are deliberating whether or not this will be worth an hour of your time, I suggest you read this blog post of hers.  Oh…and then there are the stunt hamsters.  (I’m not joking!!!)

 

By the way, Elisabeth is giving a rare public 3-day class on Agile Testing at Agilistry Studios which includes the word count simulation.  Believe me, if I weren’t giving a talk at PNSQC, I would be going.

 

Update… Fellow tweep @mubbashir put together this world clock for the talk so you can find the time closest to your locale.

Tilt Visualization and CSS Performance

Facebook City

Today, I’m blogging from the HTML5Dev Conference.  The house is packed, and I’m breaking out of my shell as a tester to have a look at all of this from the dev perspective.  Of course, the tester in me has tagged along.  What’s great about conferences in the age of twitter is that you are never in a bubble during these things.  One of my co-workers, Greg Koberger, tweeted about a performance testing win for the latest Firefox release, Firefox 7.

Greg's tweet

In taking a look at the lifehacker article he tweeted, I noticed that there was a category for CSS performance tests.  Since I spend all day, every day looking at selenium tests that have CSS locators, and I sit next to a CSS dev (who has excellent taste in rap), CSS has been on the brain lately.  “Hmm…CSS performance…LET’S GOOGLE.”  If you google for CSS performance, the first link is an article titled Performance Impact of CSS Selectors, by Steve Souders.  Granted, the article is talking about Firefox 3.0, so it’s obviously a bit long in the tooth, but it interests me for not one, but two reasons:

  1. I’m waiting for the talk “High Performance HTML5” to begin.  It’s being given by Steve Souders.  (Serendipity or Coincidence?  I shall leave you, dear reader, to ponder.)
  2. The article has a list of different websites and the number of DOM elements in each one.

Since this is a guerilla blog post I’m finishing up as a talk starts, I have no intention of delving deeply into the guts of the article at the moment.  What I can share, however, is a new Firefox addon I’ve been playing with called Tilt.  The picture you see above is my Facebook page, turned on its side in Tilt.  Tilt visualizes a web page’s DOM elements in 3D and was developed by a Mozilla intern who has recently been hired.  So if we filter the list in the article, Google has the fewest elements while Facebook has the most.  It should be no surprise that the Google home page has had its performance tweaked to oblivion.  Here we can compare the DOM of Google visually with the DOM of Facebook.
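If you want to pull a quick DOM-element count yourself rather than eyeballing the Tilt view, here’s a hedged little Selenium-and-Python sketch. The URLs are just the two home pages compared below; the counts are whatever the live pages happen to contain on the day you run it.

```python
# Count the DOM elements on a page, the same figure the article tabulates.
from selenium import webdriver

driver = webdriver.Firefox()
for url in ("https://www.google.com/", "https://www.facebook.com/"):
    driver.get(url)
    count = driver.execute_script(
        "return document.getElementsByTagName('*').length;")
    print(url, count)
driver.quit()
```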

Facebook:

Facebook Tilt

 

Google:

Google tilt

 

I’m definitely forming a hypothesis about the CSS performance of these two pages based on their tilt results.

And that’s your guerilla blog post from the HTML5Dev  Conf.   They really ought to make this thing 2 days next year.  It’s pretty cool.

Continuous Deployment and Data Visualization

Mozilla-firefox-usage-data
Image via Wikipedia

A phrase I hear a lot around Mozilla is “continuous deployment.”  I hear there’s this product Mozilla makes that’s competing with some other product that has rapid release cycles.  So, yeah, we’re working on continuous deployment.

 

I’ve noticed that a main resource around our office for information about continuous deployment is this video from Etsy.  Hearing, “We’re moving to continuous deployment,” is nothing new for me.  This is the 2nd job I’ve had where it’s been a major focus.  Since I’d already heard of the Flickr version, I decided to watch this Etsy video.

 

Picture yourself at your computer about to hit the big button and deploy a feature you’ve been working on.  You are fairly confident that nothing catastrophic will happen, but you don’t know.  (I’m writing this from a dev perspective, but even if you’re a tester…come on…you never know, even if you’ve tested the hell out of something).  In the talk, this is what is referred to frequently as, “the fear.”  It’s actually referred to as either, “THE FEAR” or “the fear.”

 

“Fear for startups is the biggest no-no.”

“Fear is what keeps you from deleting your database.”

“Fear doesn’t go with creative work.”

 

This rings true for me because I frequently deploy selenium tests for addons.mozilla.org.  My teammates and I have talked about “THE FEAR.”  We have strategies for coping with it such as holding one’s breath, saying a prayer or running the 90+ tests one more time.  When Etsy talks about “The Fear” I know exactly what they mean.

 

Etsy’s video fascinates me because of how they have conquered “The Fear.”  It’s been on my mind every day since I watched the video.  What’s the special-continuous-deployment-sekrit-sauce-that-makes-everything-all-better?

 

Etsy combats “the fear” with visibility.  You see, at Etsy, EVERYTHING IS GRAPHED ALL THE TIME.

 

Here are some of the things they mentioned graphing in the video:

  • How many visitors are using this thing?
  • Can we deploy that to 100%?
  • Did we make it faster?
  • Did I just break something?
  • How long is it taking to generate a page?
  • How many users are logged in?
  • How is the bandwidth?
  • What’s the database load?
  • What’s the requests per second?

 

If you look at the graphs, they are simple bar or line graphs.  They are not exceptionally fancy, but they are numerous and the maintenance admittedly takes work.  They are not, however, maintained by specialists working in a silo.  The graphs are created by the engineers themselves.  Here are some numbers:

 

  • 20,000 lines/second of log traffic, at times
  • 16,000 “metrics” organized through dashboards
  • 25 engineers committing code to dashboards
  • 20 dashboards

 

I doubt that when Etsy decided to start graphing everything they woke up one day with 25 dashboards.  It sounded very much like they put the tools in the developers’ hands and lovingly nudged them along.

 

This is a serious commitment to data.
Data doesn’t just happen.  It takes a persistent effort to include log messages in your code. It takes servers and databases capable of handling the traffic created by the log messages and staff to maintain them.  It takes investing in huge monitors all around the office and giving people the bandwidth to figure out how to work with the data & graphics stack.  Most importantly, it takes trust so that employees are allowed to see the data without making them jump through hoops.

So how can a team move closer to the graphing part of continuous deployment?
According to Etsy:

  • Give people access to production data — without making them wait months for a special password or even log in every time.
  • Make the data real time instead of daily.  When I say access, I mean feeds.  This goes well beyond a spreadsheet.
  • Create copious amounts of log messages.  If someone clicks a link, goes full screen or downloads something…log it.  (There’s a minimal sketch of this after the list.)
  • Once you have the data, make graphs for features before you release them.
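To make the “log everything” bullet a little more concrete, here’s a minimal sketch of the kind of fire-and-forget counter this style of graphing relies on. The host, port, wire format and metric names are my assumptions, not anything from the video.

```python
# Hypothetical fire-and-forget counters sent over UDP in a "name:value|c"
# format, so instrumenting a click or a download never blocks the app.
import socket

_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
_STATS_ADDR = ("graphs.example.com", 8125)  # hypothetical collector

def count(metric, value=1):
    """Increment a counter; dropping a packet is fine, blocking is not."""
    payload = "{0}:{1}|c".format(metric, value).encode("utf-8")
    _sock.sendto(payload, _STATS_ADDR)

# Sprinkle these wherever something interesting happens:
count("download.button.clicked")
count("video.fullscreen.entered")
```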

 

I love data, but will be the first to admit that it is not pretty.  The plain truth about data is that it takes patience, because combing through and refining it can be tedious, monotonous work.  It is very easy to buy a bunch of monitors and put them on a wall showing an inst-o-matic graph that came with your bug tracker (I’ve seen this done.  O hai, expensive wallpaper!).  It takes more time to ask deeper, meaningful questions.  It takes even more time to filter the data into something graph-able.  After that, you have to find the right way to share it.  Note that even if you do all of this and the data successfully tells a story, you’ll have to spend time dealing with, “and why did you use those colors?”  What was I saying? Oh yes, data is not pretty.

Now that I’m working every day with tests I visualized a couple of years ago, I’m continuing my quest for deeper questions about tests.  In my context, the tests are the selenium tests I work with day in and day out, so besides coming to grips with “THE FEAR,” I’ve also been thinking about, “THE FAIL.”  But wait!  That’s another blog post.

 

If you want to read more about Etsy’s graphs and data, they have written their own post about it.


Next Week: moz-grid-config workshop for setting up a local selenium grid

Next Friday will be my 2nd AMO Automation testday at Mozilla and the first one I’ve helped to organize. AMO stands for addons.mozilla.org, and it’s the website for which I’ve been writing and reviewing automated checks (sometimes I call them tests, but I like referring to them as checks).

Grid by msmail

 

As part of the last test day, I ran a github workshop to help people figure out how to set up and work with github.  Here’s a link to that blogpost.

For the upcoming testday, I’ll be running a workshop to help people who would like to run our addons selenium checks with our grid configuration.  Although it’s easier, when starting out, to write and execute tests using a standalone selenium jar, it’s good to understand how to run tests with Selenium Grid as well.  All of the tests for AMO are run using grid, so this is the setup I use when I’m doing code review or writing checks.
This is the link to the github repo containing our grid configuration.  I’ll be posting again next week with a few more instructions on how to modify the repo for running checks locally.
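For anyone wondering what “running against the grid” means from the test’s point of view, here’s a hedged Python sketch. It assumes a grid hub is already running locally on the usual port 4444; the page and the assertion are just for illustration and aren’t taken from the AMO checks.

```python
# Point a test at a local Selenium Grid hub instead of a standalone server.
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.FIREFOX)
try:
    driver.get("https://addons.mozilla.org/")
    assert "Add-ons" in driver.title
finally:
    driver.quit()
```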

 

The testday starts at UTC/GMT 15:00:00 and continues throughout most of the day until 5:00pm Pacific time. (We’re global, baby!)

A few local times for the moz-grid-config workshop:
UTC/GMT: 17:00:00
Pacific:  10:00am
London: 6:00pm
Bucharest: 8:00pm
Bangalore: 10:30pm

 

Here is the link to info about our testday on the Mozilla QA Blog.

 

By the way…if you’re in the UK and close to London, you should check out the London Selenium User’s Meetup, which is happening next week. (I hear these are frakking awesome)  :)

What Happened at Writing About Testing 2

 

A black cloud has hung over my writing for the past year and it’s been frustrating. It has stretched far beyond writer’s block. You could say I’ve had some angst.  Let’s chalk it up to “mysterious writing ailments.” There are many artists who do their best work in the dark, when they are messed up, depressed, down, and/or just plain crazy. Unfortunately, that’s not me.  I credit this conference, organized by Chris McMahon, and the gorgeous Durango weather with breaking me out of my writing funk.  So how did this happen?

 

Writing!!!  There was lots and lots of writing!!!  We had a few writing assignments.  Chris started us off with memories of high school English.  He asked us to write a 5 paragraph essay.  This is an essay with an introductory paragraph, 3 main paragraphs and a concluding paragraph.  After being programmed by sadistic English teachers in my American high school, I can write these in my sleep.  It was a great illustration of how putting some artificial structure around ideas that have grown wild can reinforce and bring them together.
animas river & trail

I took this opportunity to pair with Zeger von Hese on the back porch of the library.  The picture above shows the view from where we were sitting.

 

Zeger writes the blog Test Side Story.  His posts on Testing, Art and Philosophy have been a bright light during my dark writing period.  Zeger has a degree in Cultural Studies, which I consider part of the Interdisciplinary Studies family.  (I have my own Interdisciplinary Studies degree in German Studies.)  Since Zeger lives in Belgium and I live in the U.S., this was a unique opportunity for us to physically sit down together with our ideas about art, observation and testing. I remember one particular moment when Zeger was doing some really deep thinking while standing up.  I was typing what he was saying just to capture the thought process, and in that particular moment some big ideas came together for me.  I don’t know where else this would have happened, and yes, Zeger and I have plans to share what we were talking about.

 

The 2nd writing exercise was to write a story with a beginning, a middle and an end.  I immediately knew what I wanted to write and have a rough cut together.  It’s inspired by an anecdote Trish Khoo shared with me.  I was surprised at how quickly the story came together, actually.  She wasn’t at the conference, but I’ve been talking with her about fleshing the story out a bit.  That piece will also be showing up somewhere in a blog or maybe one of the testing magazines.

 

Besides writing, most of the attendees presented on a topic. My impression of these was that most of them were half-baked and brilliant. There are not enough places to work with half-baked but brilliant ideas in testing. I enjoyed every presentation and was happy to see the attendees embrace mine.

 

My own presentation was based on my “Are you a testing asshole” post.  In the wake of that post, I’ve been researching why I think testers are automatically at a disadvantage on most software teams.  If you count my blog post as the first time I talked about this topic, WAT2 is my 2nd time discussing it. Every time I talk about it, I am more alarmed by the feedback I get. We’ve got a serious asshole problem in our community.  This is a challenging, non-trivial problem, but not an impossible one.  We need to learn ways to improve the way we talk with others and the way we argue when we disagree.

 

The Pacific Northwest Software Quality Conference agrees with me that our asshole problem needs some addressing.  I submitted an abstract to them on this topic and it’s just been accepted.  I’m gonna go listen to some Phoenix.  See y’all in Portland.

Paradigm Shifts: A Year of Getting the Visualization Stack in Order

Kuhn used the duck-rabbit optical illusion to ...
Image via Wikipedia

In which you learn why Marlena has so woefully neglected her blog.

What a heavy, freaking year it’s been.  Considering that I moved to a different hemisphere, that’s no surprise, but I’m not even talking about the move itself.  One of my goals for the year, which I don’t think I ever articulated even to myself, was to work on a paradigm shift in my own approach to visualizing data.  My effort to write a treemap application in Processing with Java was the last straw.  I guess my experiments with Erlang and Scheme corrupted me.  They showed me that there is a way to break free of the loops within loops within loops.  Aside from language choice, I had a “Gone with the Wind” moment of deciding that I would never use a spreadsheet as part of a visualization process again.  I don’t want to be stuck with static data forever.  It’s time to get closer to working with real time feeds, as they are the best way to suck in extremely large amounts of data.  The sum total of these decisions has been a year spent building new skills.  I’ve learned how difficult that can be in the midst of a new job that requires my full attention, growing Weekend Testing in my new corner of the planet and enduring my husband’s experiments with Australian cuisine (he doesn’t read my “nerdy” blog, so don’t y’all tell him I said that).

Part I of the Epic Visualization Quest:  A language (or two)

For most of this year, I’ve been on a quest for a new language.  I tried on Python and attended Pycon which happened to take place in Atlanta a couple of weeks before I moved.  I’ve done a lot of work with ruby which felt more comfortable for me than python (who knows why, I certainly can’t explain these things.)   At the end of the year, I started learning javascript.  When I predicted, at the beginning of 2010, that functional programming would show up on my doorstep, I had NO idea that javascript is a highly functional language.  This really hit me hard when I sat down to write a javascript program with a colleague of mine and we both stared at the screen for 5 minutes before uttering a bunch of sentence fragments that went something like, “well you need a class…”  (ain’t no classes in javascript.)

I’m a fan of not just learning a language, but of understanding the headspace of that language.  This makes it harder to get started, but ultimately means that I won’t be trying to force java concepts that  don’t belong  into a javascript program.  I’ve also tried to understand which parts of javascript I might not want to use.  David Burns, The Automated Tester, suggested I give Douglas Crockford’s Javascript: The Good Parts, a read.  I’m halfway through, and while it’s not as hands on as some programming books I’ve read, it’s showing me the headspace I should be in to take better advantage of what JS has to offer.  It’s taking me some time, but I have more confidence that what I write will be better code.

Part 2 of the Epic Visualization Quest: Data Access

Most of the experiments I’ve done with data viz have involved spreadsheets, comma delimited data or tab delimited data.  I’m completely over using all three.  I can’t tell you how much time I spent schlepping data files from application to application in order to get my data in good enough shape to import into a visualization app.  Since the files were usually pretty big this turned into going and getting some coffee while Excel would open the file.  It was SO annoying.  When I attended the Writing About Testing conference in May, Chris McMahon did a short presentation on REST and it opened my eyes.  Over the rest of the year, I gradually built up my knowledge of REST and JSON which culminated in an example Ruby script you can use to pull data from JIRA, the Atlassian Issue tracker.

Part 3 of the Epic Visualization Quest:  A Visualization Library

Just as important as choosing a language is choosing a graphics library.  The two major JavaScript libraries for data visualization are Processing.js and Protovis.  Previously, I’ve worked my way through all of the examples in Ben Fry’s “Visualizing Data.” This was the book that initially introduced me to data visualization and convinced me that I needed to read everything by Edward Tufte.  Since Ben Fry is one of the creators of the Processing language, the code in the book is Processing and Java.  This makes Processing.js a no-brainer, but then I took a look at Protovis.  I’m so intrigued with their example of a parallel coordinate plot that I have to give it a try.  I also think that their syntax will be slightly easier to use.

This has been a lot of change on top of change to digest and it’s made the year frustrating.  I am still horrible at writing javascript, but I’m also determined to be patient.  Good visualization takes time.  It is all about details and refinement which requires patience.  This patience means that my blog will probably continue to suffer but I’m hoping it also means I’ll have my visualization stack in order which will lead to better focus for 2011.

Btw…next Weekend Testing is on Sunday, January 23.  This month we’ll be pushing further into critical thinking.


Beyond the Canvas: Testing HTML5

HTML5 fist, after A List Apart
Image by justinsomnia via Flickr
One of my favorite aspects about being a tester at Atlassian is working with developers who love pushing the envelope by trying out new technology as soon as they can get their hands on it.  Recently, I’ve been testing a feature that uses HTML5.   Before my testing, I had heard of HTML5 as the apocalypse of testing because of the canvas tag.  Canvas is, to be sure, controversial
BUT…
There is much more to HTML5 than the canvas tag.

I’m always tweeting about “dev-nial”, but if all we do, as testers, is scream bloody-murder about the canvas tag, we’re going to miss some bugs as HTML5 creeps into the web applications we test.  I was pointed, in a work training session, to Google’s HTML5 Rocks web pages.  This post is about the new functionality revealed in the HTML5 Rocks slides and my thoughts about testing features made with HTML5.
If you go through the slides, do yourself a favor and use a browser that has some IDE-ness to it, so you can see what happens with storage or you can inspect elements.  The slides were intended to be very hands on and the site, itself, includes a playground.
Web Storage & Web SQL Database

Remember Google Gears?  I remember when it came out and thinking that having the ability to save at least some content offline would make web stuff much more fun.  Now that I live in a country with download limits, I see the ability to view web content offline from a completely different perspective.  For a feature exploiting offline web storage, I would test that content is still there even if my internet connection is turned off.  If you’re looking through the JS you have to test and you see “localStorage.setItem”, it’s an indication that something is being stored on the client side.
Security testing is also a consideration.  On the html5rocks page explaining the “web sql database”, I was originally able to store an unescaped XSS string in the web storage.  To be fair, on this page, the user is entering the value to be stored on their own computer, BUT this implies that it could be possible to insert XSS into a user’s web storage without telling them.  (Cheers to the html5rocks team for fixing this possible exploit…y’all might want to have a look at what happens after session timeout ;)
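If you’d like a starting point for testing web storage, here’s a hedged Selenium-and-Python sketch of the “content survives a reload” idea; the page URL and the key name are made up.

```python
# Poke at localStorage from a test: set a value, reload, check it survived.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://example.com/offline-notes")  # hypothetical page

driver.execute_script("window.localStorage.setItem('draft', 'hello');")
driver.refresh()
draft = driver.execute_script(
    "return window.localStorage.getItem('draft');")
assert draft == "hello"
driver.quit()
```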
WebSockets bring networking up to browser-level

I’m still wrapping my head around this one, but I think it will be quite important to understand WebSockets thoroughly wherever they are used.  In my brief experience with sockets, which was a few years ago, I noticed that it’s really easy to leave a socket open or to get confused about which socket is sending/receiving data.  I would want to know where this is used so I can test that the sockets are open when they should be and closed when they are no longer needed.
Concurrency hits browser based testing

When I was choosing a master’s thesis, one of my possible topics was testing and concurrency.  I floated this topic in conversations I had at the Seattle GTAC and got crickets (the only person who cared was a guy working at, surprise, Google).  I’m guessing that since HTML5 includes “web workers”, known to me in Java as “threads”, concurrency will become an issue.  That means race conditions, contention, etc.
Geolocation

This is something I have absolutely no experience testing.  If it’s more available to developers, are they more likely to use it in applications where they haven’t before?  If I had to test something with this feature, I’d have to figure out how to mock a location.  This would also be interesting to test as it relates to offline storage.  If you have part of the application that relies on coordinates, but then the app goes offline, is there a way for the user to type in their coordinates?
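One way I could imagine mocking a location, sketched in Python with Selenium. The coordinates and the page are made up, and this only works if the override runs before the app asks for a position.

```python
# Fake a geolocation fix by overriding getCurrentPosition in the page.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://example.com/map")  # hypothetical page
driver.execute_script("""
    navigator.geolocation.getCurrentPosition = function (success) {
        success({coords: {latitude: -33.87, longitude: 151.21, accuracy: 10}});
    };
""")
# Anything the app asks of geolocation from here on gets the fake coordinates.
```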
Device Orientation

Whatever computing device you are using to read this, iPad, laptop, desktop…yes, even desktop, tilt it. If you are on a mac, you will see interesting things happen.  If you are on windows, not so much.  Since I’ve already noticed a difference in the behavior of the same browser on different operating systems, this is definitely something I would test across devices  and operating systems.  So if I’m lost in the urban jungle with my laptop, I should be able to orient myself with google maps on a mac, but if I have a Windows 7 netbook, I’ll just be screwed?  [Update:  I submitted this as a bug and the HTML5Rocks team pointed out that it’s because my netbook does not have an accelerometer.  I’m kind of surprised that the MacBook has one.  #learnsomethingneweveryday]
There are many more slides to digest at HTML5 rocks.  If you look through the slides and have some thoughts about testing, I encourage you to leave a comment or, even better, write your own post.  If you find a few more bugs, I’m sure  Google would absolutely welcome and cherish any bugs reported.  When the next version of Confluence is released, I’ll write more about my own exploits with HTML5.  I was taken by surprise at how quickly it showed up for testing and very impressed by the dev who decided it was time to make use of it.  If you use Confluence, I think you’re going to like the next release, and I’m not biased at all ;)

If I blog about visualizing defects in JIRA it means I will do it.

@woodybrood's tweet

Sometimes peer pressure is a good thing.  Today I got an unexpected tweet from Daniel Woodward a.k.a @woodybrood asking about a FedEx project where I visualize JIRA issues.  I’ve now done 2 FedExes.  In my first one, I collaborated with Anna Dominguez to create a network graph based on comments in JIRA and Confluence. It was a map of who was commenting on whose issues and pages.  I wrote a blog post about that one here.

My second FedEx was an attempt at visualizing code churn as a horizon graph using data from Atlassian’s source code analysis tool, Fisheye.  It turned out to be one of my more unsuccessful attempts at visualizing.  I couldn’t get code churn data that really meant anything and I realized that code churn should not be visualized as a horizon graph when I started putting the graphic together.  (That was painful.)

So, Daniel, the short answer to your question of whether I have a great way to visualize defects in JIRA is:  not really.  I have put JIRA issues into a treemap before, but I wouldn’t recommend that either.

Where does that leave things?

It leaves me feeling a little frustrated but still curious.  I refuse to believe that the data from bugs is not to be visualized.  My gut tells me that I just haven’t found the right questions to ask or the right style to use for visualization.  I do have one more trick up my sleeve before I’m completely out of ideas.  Here are the pieces:

I have, in the past, visualized counts of fake defects with the parallel coordinate plot software, ParallelSets.  I already hear screams about visualizing bug counts so at least let me explain before you flame me.  When I did this, I was very happy with the results.  However, the version of the software I was using was not robust enough for me to use it on a regular basis.  It’s been updated to be more robust with data, but the newer version has the side effect of not showing the data points individually, plus it only works on windows and I’m more into mac.  When it was working for me, I really loved it.  They really nailed the interaction between the user and the data.  It really pains me to say it, but I’ve moved away from using parallel sets for now.

What I noticed a while ago is that the visualization language protovis has an example of parallel coordinate plots.  I encourage everyone who is interested in visualization to play with protovis.  It’s from a group at Stanford and uses javascript.  I’ve followed the first tutorial for protovis published by Dr. Robert Kosara, and it’s pretty cool.

So I’ve got my visualization idea.  How do we get it out of JIRA?  In the past I’ve gotten information about JIRA defects into a spreadsheet and maybe also csv.  One of the reasons I liked Atlassian so much before they hired me was that data exports pretty cleanly from JIRA. There is no way to overstate how much easier that makes visualization.  Unfortunately, once I got the data out, it did not work in a treemap.  Even though getting the data out through the UI is not that bad, I’d like to try something I’ve done some fiddling around with in the past year:  REST.

JIRA 4.2 has just been released, and it contains a new version of the REST API; I love the documentation they have for it.  It makes working with the API extremely accessible, much like the docs for Twitter’s API.  They’ve got curl examples and a script you can use to make a graph of links.  They also have a page for simple REST examples which is not completely filled in.

Here’s what I’m gonna do:

Use the new JIRA REST API to create a data set in Ruby to be used for creating a parallel coordinate plot of JIRA issues.  If I’m lucky, I’ll get 20% time to do this.  I know some guys who, I’m guessing, will help me get the example in Protovis to work with the JIRA data.  I bet I can have something together by next month.  The goal would be to provide a ruby example for the page on “The Simplest Possible JIRA REST examples” along with some javascript that shows the data in a parallel coordinate plot using Protovis.
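Here’s roughly the shape of that script as a hedged sketch (in Python rather than the planned Ruby). The REST path and the field names are placeholders, not the real JIRA 4.2 endpoint; check the JIRA REST docs for those. The output is a flat JSON file that a Protovis parallel-coordinate example could read.

```python
# Pull issues from a JIRA REST search and flatten them into rows that a
# parallel coordinate plot can use. The URL and field names are placeholders.
import json
import urllib.request

SEARCH_URL = "https://jira.example.com/rest/api/latest/search?jql=type=Bug"

with urllib.request.urlopen(SEARCH_URL) as response:
    issues = json.load(response)["issues"]

rows = [
    {
        "key": issue["key"],
        "priority": issue["fields"]["priority"]["name"],
        "status": issue["fields"]["status"]["name"],
        "comments": len(issue["fields"]["comment"]["comments"]),
    }
    for issue in issues
]

with open("jira_issues.json", "w") as out:
    json.dump(rows, out, indent=2)
```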

Readers are officially allowed to hassle me about finishing this before Christmas 2010 on twitter and in the comments here.