PNSQC Slides and Paper Are Up

My thesis defense is tomorrow which is why I haven’t posted in a couple of weeks.  If all goes well, I’ll be posting a link to that in the next week.

This is just a quick post to say that my paper and presentation have been added to the PNSQC website, along with everyone else’s papers and presentations: click here and have fun exploring.

Since I don’t read from PowerPoint slides, you won’t find much in the way of explanatory verbiage in them, but I’m happy to answer questions if there’s something you’d like me to clarify. Ideally, you would download the paper and read it while you have the slides up. They told me to make all of the pictures for the paper extremely small and grayscale…sigh. That kind of kills the whole visualization aspect of my paper, but I understand they had good reasons for asking. This is exactly why Edward Tufte took out a 2nd mortgage on his house and self-published his first book. I would write more about that because it’s a post in and of itself, but my recursion ain’t workin’, gotta go!

Integrity in Data Visualization: Part 2 of 2

In the second part of this series about data visualization (Part 1 is here), I will show how, according to Edward Tufte, information visualizations can trick the viewer. Since reading the relevant chapter in his book, The Visual Display of Quantitative Information, I’ve noticed compromises of graphical integrity in the most surprising of places. Some of these are included in this post as examples.

Currently, I’m writing a paper for my metrics class on software visualization. After the class, it will be expanded to include how software visualization can work for testers. Part of knowing how to use visualization for any project is understanding what constitutes a good or bad visualization. I’m guessing that if you found my blog you might be a tester, and in that case, understanding the basics of data visualization will help you understand where I’m going with some of my upcoming posts. Outside of testing, understanding complex visualization is a skill we all need to have because we live in an age of data.

Labeling should be used extensively to dispel any ambiguities in the data. Explanations should be included for the data, and events happening within the data should be labeled.

For this example, I’d like to use a graphic from a post that appeared on TechCrunch while I was writing this one. It was posted by Vic Gundotra, who is VP of Engineering for Mobile and Developer Products at Google. You can read his full post here. I’m calling out a couple of his graphics because I was pretty shocked that he would post these. I’ve seen some of his presentations on YouTube, and they were awesome. Using graphics as bad as the ones he posted on TechCrunch only diminishes an otherwise strong message and will make me think twice about any visuals he presents in the future.

Here is the first of Vic Gundotra’s graphics. Notice how he does not label the totals on the graph, but separately at the bottom. There is no way a viewer can compare the data he’s representing.

In time series graphics involving money, monetary units should be standardized to account for inflation.

OK, this might be a controversial example, but please read the explanation before flaming me. I freaking love this award-winning interactive graphic about movies created by the storied New York Times graphics department…BUT I have a bone to pick with it. The graphic shows movies from 1986 to 2008, and it does not account for inflation. There are some other things going on with this graphic as well, but since it’s about movies and not our budget crisis, I give its lack of adjustment for inflation a “meh.” It just goes to show that nothing is perfect.
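In case you’ve never had to do it, here is a minimal sketch of what standardizing dollars for inflation looks like. The CPI figures below are made up for illustration; real values come from a source like the Bureau of Labor Statistics.

```python
def to_real_dollars(nominal, cpi_then, cpi_now):
    """Restate a nominal dollar amount in current dollars using a CPI ratio."""
    return nominal * (cpi_now / cpi_then)

# Hypothetical numbers: a $100M box-office gross in a year when CPI was 110,
# restated for a year when CPI is 215.
print(to_real_dollars(100_000_000, 110, 215))  # → 195454545.45...
```

Comparing raw grosses from 1986 with grosses from 2008 without this step is exactly the kind of apples-to-oranges comparison Tufte warns about.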

All graphics must contain a context for the data they represent.

Here’s another Vic Gundotra graphic. Notice how there is no total for the number of users represented, yet Gundotra is trying to say that 20 times more people are using the T-Mobile G1. By not including this number, Gundotra is not providing an accurate picture of how many people are using either one. It could be 20 people using the G1, or it could be 20,000. There’s no way to know. The fact that he didn’t include this number says, to me, that maybe the number of people is embarrassingly small, but that’s another post for another blog.

Numbers represented graphically should be proportional to the numeric quantities they represent.

This is all about scale in graphics. The pieces of a graphic you are looking at should be to scale. Tufte actually has a formula for what he calls the “Lie Factor.” This link has a couple of illustrations and also shows how the formula is calculated.
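For the curious, the Lie Factor is simple enough to compute yourself: it is the size of the effect shown in the graphic divided by the size of the effect in the data. Here is a small sketch using the numbers from Tufte’s well-known fuel-economy example in the book (mileage standards rising from 18 to 27.5 mpg, drawn as lines 0.6 and 5.3 units long):

```python
def effect_size(first, second):
    """Relative change between two values."""
    return (second - first) / first

def lie_factor(data_first, data_second, graphic_first, graphic_second):
    """Tufte's Lie Factor: effect shown in the graphic / effect in the data."""
    return effect_size(graphic_first, graphic_second) / effect_size(data_first, data_second)

print(round(lie_factor(18, 27.5, 0.6, 5.3), 1))  # → 14.8
```

A Lie Factor of 1 means the graphic is honest; anything much above (or below) 1 means the drawing exaggerates (or understates) the change in the data.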

The number of dimensions carrying information should not exceed the number of dimensions in the data.

You know all of those 3-D pie chart and bar graph templates in the Microsoft Excel chart wizard? Don’t use them anymore, and yes, I’ve used them myself in the not-too-distant past. They qualify as “chartjunk” from Tufte’s perspective.

Variation should be shown for the data only, not for the design of the graphic.

I looked but couldn’t find a good example of this on the web. (If you see one, let me know.) One of the graphs Tufte uses to illustrate this point is a bar graph where the bars for years deemed “more relevant” are popped out into a separate, larger section using a really heinous 3-D effect. It’s on page 63.

As always, comments are welcome.


What is Data Visualization: Part 1 of 2 Characteristics of Excellent Visualizations

In this post, I will be answering the question, “What is data visualization?” and writing about some of Edward Tufte’s principles for “excellent” data visualizations. These can be an aid in creating better graphs or in evaluating the graphs you see. In subsequent posts, I will relate these fundamental principles to visualizations for use in software testing.

In his first book, The Visual Display of Quantitative Information, Tufte outlines several principles for use in the creation and interpretation of quantitative graphics. If you get the chance, I highly recommend flipping through it. If you have questions about the statistics concepts, you might want to look at Head First Statistics by Dawn Griffiths. I’ve been hitting this book up regularly, especially for the metrics class I’m currently taking.

In the comments of my post “Exploring Data Visualization,” Eric asked me, “What is data visualization?” When I say data visualization, I’m talking about a graphical depiction of statistical information that tells a story. These depictions can be simple or complex, and they all have a point they are trying to make. According to Edward Tufte, an excellent visualization expresses “complex ideas communicated with clarity, precision and efficiency” (13).

To illustrate this, have a look at one of my favorite interactive web graphics: “A Year of Heavy Losses,” from The New York Times. It illustrates the change in market capitalization of banks from 2007 to 2008. Be sure to click on the square at the top left to see the change. You can see not only the number of banks dwindling, but also their capitalization in the market shrinking. You can also mouse over each bank to see more granular data.

According to Tufte, these are some characteristics of excellent visualizations:
1. Lots of numbers packed into a tiny space
2. Data represented is not distorted
3. Extremely large data sets have coherency
4. Comparison between different pieces of data is easy
5. Data is revealed at a micro level and at a macro level
6. The data’s purpose is clear
7. Integration between the statistical and verbal descriptions of the data is tight

Here is an illustration Tufte uses as an example:

Train Schedule by E.J. Marey

It is a French train schedule from the 1880s. Take some time to look at it and understand it, then look back at the characteristics I just listed. Did you notice how the cities on the left are not listed at regular intervals? That is because Marey spaced them apart proportionally to their actual distance from each other. Because he did that, when you look at the slope of the lines, you are seeing not only arrival and departure times, but also the relative speed with which the train will get you from one place to the next. If you depend on trains to get around, this can be very important information.

This graphic also illustrates the concept of multivariate data, which, according to Tufte, is also a quality of excellent visualizations. I’m going to break down what’s in the train illustration into univariate, bivariate and multivariate data. If I miss something, just add a comment.

Let’s start with the concept behind this illustration. It depicts the arrival and departure times of trains in France. It shows the route the trains take, and the relative speed with which they travel from one station to the next.

Univariate data shows the frequency/probability of one variable.
Some univariate data from this graphic: the number of trains arriving at or departing from a station, the number of trains arriving at stations at any one time, the number of arrivals at a station each day, the number of departures from Chagny station each day. Each of the variables I have described is a frequency (Head First Statistics 609).

Bivariate data shows 2 values for an observation.
Bivariate data from this graphic:
(x) Time of day
(y) Number of trains arriving/departing at Chagny station

For this observation you need two variables (Head First Statistics 610).

Multivariate data shows multiple values for an observation.
If we take the observation from the bivariate data example and add stations, the observation becomes multivariate and is what you see in Marey’s illustration.
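If it helps to see the progression in code, here is a sketch of the three kinds of data. The numbers are made up; only the shape of each structure matters:

```python
# Univariate: one variable, a frequency -- departures from Chagny per day.
departures_per_day = [12, 12, 11, 13, 12, 9, 8]

# Bivariate: two values per observation -- (hour of day, trains at Chagny).
trains_by_hour = [(6, 2), (9, 3), (12, 1), (18, 4)]

# Multivariate: add the station as a third variable, and the observations
# take the shape of Marey's graphic -- (station, hour of day, train count).
trains_by_station_hour = [
    ("Paris", 6, 5),
    ("Chagny", 9, 3),
    ("Lyon", 12, 4),
]
```

Each step adds one more value per observation, which is exactly what turns a simple frequency count into something rich enough to draw as Marey did.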

I’ve just covered a lot of material, and I hope it gives you a good idea of what data visualization and the field of information visualization are all about. In my next post, I’ll be covering the ways in which graphs can lie. I’ve seen this happen at work and just completed a reading assignment for school where it was also an issue. These are complicated topics that software engineers should understand if they are to use visualization in ways such as a tester’s heads-up display.

Questions and comments are always welcome.


Exploring Data Visualization

For the past few months I’ve been obsessively learning about data visualization, so I’m posting about my exploration with links to everything (books, blogs, graphics, people, etc.). This topic fascinates me because it brings together all of my studies, including art, art history, theatrical design, computer science and software engineering.

Last fall, I found the book Visualizing Data: Exploring and Explaining Data with the Processing Environment by Ben Fry. I can’t remember how I found it. Maybe it was in the O’Reilly email of new titles. Since I work at a credit reporting agency, there is no end to the data. It seemed like the perfect opportunity to learn about graphics, so I started typing out Fry’s examples and then applying them to my data. Fry is one of the creators of a graphics library called Processing, which uses Java. This made the examples pretty easy to understand. I’m not finished with this book yet. The examples get more and more challenging the further you go, but the author seems to enjoy interacting with his readers and wants people to have a positive experience with his code.

So last fall, I was having fun with these examples, and then I went to GTAC. I know I’ve already written about James Whittaker’s keynote, but bear with me. Seeing how transfixed the crowd was by the few data visualizations he uses for testing, I felt something click in my head. There aren’t many moments in life when we get total clarity, but I finally had a huge one and decided not to let it go.

Before, I had just been playing with data visualization and happy that it fed my artistic side, but now, I was in it for keeps. When I came home, I looked at the other books that Ben Fry was referencing and found the ultimate classic of data visualization. If you only ever read one book about this topic, that book should be Edward Tufte’s The Visual Display of Quantitative Information, 2nd edition. After I read this book, I had a talk with my thesis advisor and decided to do a thesis on data visualization and software testing.

Please note that Tufte will not tell you what type of graph to use in any particular situation. For that, I turned to Head First Statistics, which covers the topic in Chapter 1 and is the most accessible statistics book I have ever read.

Since blogging is such a great fount of information, I went out looking for blogs and found several that I really enjoy. There are definitely other worthwhile blogs on data visualization; these are the ones I’m reading regularly:

Jorge Camoes’ Charts
Visual Business Intelligence (Stephen Few’s blog)
Excel Charts and Tutorials by Peltier Technical Services
Information Ocean

A few weeks ago, Edward Tufte offered a seminar in Atlanta, and I was fortunate enough to go. It’s pricey, but all four of his lovely hardback books are included, which somewhat offsets the cost of admission. I found some excellent notes, taken at the Raleigh session a few days later, on Justin Wehr’s blog. I could tell that Dr. Tufte had given his prezo a few (hundred) times, but seeing him present his material provoked some really deep thinking. When the presentation was over, I walked to a bench in the hotel lobby and put together the bones of my thesis. Visionaries such as Dr. Tufte always inspire my best thinking.

Currently, I’m reading through lots of research papers about the visualization of source code. I’ll make a separate blog post for that. Well, there might be several separate blog posts for that. For the first time in my life, I feel completely engaged in what I’m doing at work and at school.