Exploring Data Visualization

For the past few months I’ve been obsessively learning about data visualization, so I’m posting about my exploration with links to everything: books, blogs, graphics, people, etc. This topic fascinates me because it brings together all of my studies, including art, art history, theatrical design, computer science, and software engineering.

Last fall, I found the book Visualizing Data: Exploring and Explaining Data with the Processing Environment by Ben Fry. I can’t remember how I found it; maybe it was in the O’Reilly email of new titles. Since I work at a credit reporting agency, there is no end to the data. It seemed like the perfect opportunity to learn about graphics, so I started typing out Fry’s examples and then applying them to my data. Fry is one of the creators of a graphics library called Processing, which is built on Java. This made the examples pretty easy to understand. I’m not finished with this book yet. The examples get more and more challenging the further you go, but the author seems to enjoy interacting with his readers and wants people to have a positive experience with his code.

So last fall, I was having fun with these examples, and then I went to GTAC. I know I’ve already written about James Whittaker’s keynote, but just bear with me. Seeing how transfixed the crowd was by the few data visualizations he used for testing, I felt something click in my head. There aren’t many moments in life when we get total clarity, but I finally had a huge one and decided not to let it go.

Before, I had just been playing with data visualization, happy that it fed my artistic side, but now I was in it for keeps. When I came home, I looked at the other books Ben Fry referenced and found the ultimate classic of data visualization. If you only ever read one book about this topic, it should be Edward Tufte’s The Visual Display of Quantitative Information, 2nd edition. After I read this book, I had a talk with my thesis advisor and decided to do a thesis on data visualization and software testing.

Please note that Tufte will not tell you what type of graph to use in any particular situation. For that, I turned to Head First Statistics. It goes over this in Chapter 1 and is the most accessible statistics book I have ever read.

Since blogging is such a great fount of information, I went out looking for blogs and found several that I really enjoy. There are definitely other worthwhile blogs on data visualization. These are the ones I’m reading regularly:

Jorge Camoes’ Charts
Visual Business Intelligence (Stephen Few’s blog)
Excel Charts and Tutorials by Peltier Technical Services
Information Ocean

A few weeks ago, Edward Tufte offered a seminar in Atlanta, and I was fortunate enough to go. It’s pricey, but all four of his lovely hardback books are included, which somewhat offsets the cost of admission. I found some excellent notes on Justin Wehr’s blog, taken a few days later in Raleigh. I could tell that Dr. Tufte had given his prezo a few (hundred) times, but seeing him present his material provoked some really deep thinking. When the presentation was over, I walked to a bench in the hotel lobby and put together the bones of my thesis. Visionaries such as Dr. Tufte always inspire my best thinking.

Currently, I’m reading through lots of research papers about the visualization of source code. I’ll make a separate blog post for that. Well, there might be several separate blog posts for that. For the first time in my life, I feel completely engaged in what I’m doing at work and at school.

A few more Vampire Testing Lessons

I just love it when bloggers mix pop culture with testing. Recently, the Testy Redhead posted a few lessons about testing she adapted from her reading of the Twilight books. I love these books in all their somewhat-poorly-written-but-ultimately-addicting glory so I couldn’t help but put together a few lessons of my own. The last lesson is a spoiler, but I’m guessing that there are not hordes of Twi-hards reading this blog.

You can build a working car from a bunch of disparate parts.
Jacob totally rebuilds a Volkswagen Rabbit using parts that he collects over the course of a couple of the books. This reminds me of some of the great open source tools that are now available for testing, such as Bugzilla and Selenium. It also reminds me of the Automated System Test Framework I’m building from scratch at my job. I started out with a bunch of short scripts that I wrote, but with some perseverance, I’m close to having a system in place that will greatly assist me in testing.

Sometimes the yellow Porsche really is what you need.
I’ve noticed a real disdain for expensive tools among testers, but sometimes they are the right answer the same way the yellow Porsche was the right car for Bella and Alice in New Moon. When I started my testing job, I was not a tester and I did not know what I was doing. My tester friends in another group had shown me HP Quality Center, and I realized that I desperately needed this assistance with test case management. It helped me transition off of spreadsheets and gave me a structure for repeatable testing.

Don’t read the last one if you don’t want to read the spoiler.

Testers ARE the Shield
In the last book, Bella protects everyone using her special super power. Testers also have a super power, and that power is the right to say, “This product really stinks and is not ready to be released with our team’s name on it.” This is not the most obvious power, but it can protect a team or even a company from releasing a product and regretting it. I’ve had to say this before to a most “busy and important” developer who let me know how busy and important he was, but I knew that I was protecting consumers by saying it.

Is complexity really all about the source code?

At this point, the work I’m doing on my master’s thesis is starting to congeal. I’ve been reading about data visualization and how it can assist quality. There seem to be several levels at work. There is the source code level, where developers and, to some extent, QA examine individual pieces of code. This level is addressed by unit tests, generally written by a developer or a specialized white-box tester. Higher up is the system test level, which may or may not be automated. Recently, frameworks and tools such as STAF/STAX and Selenium have helped automate and bring more consistency to some of this system testing. At the highest level, quality is less about tests and more about metrics, in particular lines of code.

In my research, I have found many papers about providing analysis for source code. There are also plenty of papers that address the production of metrics. Software and user interfaces have developed to the point where the idea of zooming is going to bring these layers together. Think about Google Maps and its zooming capability, and then think about how that type of zooming could be applied to the software development process. James Whittaker, the architect of Microsoft Team Test, certainly has. His vision is that this type of interaction will result in a Heads Up Display for quality.

Assuming that this is where we are going, I can see some challenges in how projects are organized. The main challenge will be mapping from one level to the next. Moving from a system test level to a unit test level will highlight how system tests are mapped to unit tests. This goes back to requirements and design, which I know is still an issue at plenty of software shops. Apart from unit-to-system test mapping, there is also the issue of how LOC is examined and how that maps to source code and maybe even system tests. What if two developers have worked on the same code? How would management assess the LOC count?
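
To make the mapping idea concrete, here is a toy sketch in Python. The requirement IDs, file names, and test names are all made up; in real life this data would live in a requirements or test management tool, but the “zoom” operation itself is just a lookup across levels.

```python
# A made-up traceability map: requirement -> source files, unit tests,
# and system tests. With something like this, "zooming" from a failing
# system test down to the unit tests and code behind it is a lookup.
traceability = {
    "REQ-101": {
        "source_files": ["billing/invoice.py"],
        "unit_tests": ["test_invoice_total", "test_invoice_rounding"],
        "system_tests": ["ST-billing-end-to-end"],
    },
    "REQ-102": {
        "source_files": ["billing/tax.py"],
        "unit_tests": ["test_tax_rate_lookup"],
        "system_tests": ["ST-billing-end-to-end"],
    },
}

def zoom(system_test):
    """Find the requirements, unit tests, and files under a system test."""
    return [
        (req, info["unit_tests"], info["source_files"])
        for req, info in traceability.items()
        if system_test in info["system_tests"]
    ]

print(zoom("ST-billing-end-to-end"))
```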

I know that, technically, system tests, unit tests, and source code are all supposed to be organized based on requirements, but some of us live in the real world! There are places where requirements are not great but still usable. That said, plenty of testers work in shops without the benefit of usable requirements, or where the requirements are usable only in a very loose sense of the word. A tester working in a group with bad requirements is still responsible for the bugs they don’t catch.

In this case, complexity is not just about source code anymore, but about how well tests cover the system.

Software Testing Conferences

In my work as a one-woman test team, figuring out best practices in testing has taken some work. Books help a lot. Advice from friends at my company who also test has helped too, but I have a need to connect with other testers, most of whom probably work on teams with much more structured testing guidelines. As if that weren’t enough, I have this master’s thesis I’m putting together that will deal with testing, and I’m looking for places to submit it as a presentation or a poster.

Attending the Google Test Automation Conference last year opened my eyes to the fact that it is important to keep up with the best practices of other companies. Talking with other professionals in testing helped me gauge what I’ve been doing right at my job and where I can stand to improve. I’ve been looking around for other conferences to attend, and have found a few of them. Reading the blog posts of people who have attended or presented at these conferences is also interesting. In addition to including the links for some testing conferences, I’m also including links to some blog posts about those conferences (when I could find them).

Obviously my list is not exhaustive and will probably get dated over time, but I don’t mind adding to it. Likewise, if you have blogged about a testing conference you enjoyed or have comments about one of the conferences I listed, feel free to leave a comment.

Star East
Star West: JW on Test
PNSQC: Testy Redhead
CAST: Adam Goucher
GTAC: You can read my other posts or, for something completely different, check out The Automated Tester
Swiss Testing Day

Project Management Class and Scrum

This past fall, the only class I took was Project Management. I signed up for this class because I thought it would be a nice break from the classes I had been taking, like Distributed Systems and Artificial Intelligence. Ha ha ha. Now I know why the PMs at work have such long hours and pained looks on their faces (or maybe it’s because we use waterfall).

In order to give students in the class a full-on project management experience, our instructor broke us into groups and gave us some requirements from which we had to produce a working application. I ended up being elected the PM for my group. Since there has been a serious lack of coverage of Agile software development in my classes, I organized us to use Scrum because of an interaction I’d had at the Google Test Automation Conference (GTAC).

When I attended GTAC last fall, I had a great conversation with a man named Pete from F5 Networks. We were sitting at a lunch table where we were supposed to be discussing Agile development. He told me that he had used Scrum and been pretty happy with it on his team. I didn’t get the chance to ask him how he had implemented Scrum, but he did share an interesting fact: he had read somewhere that of all the teams implementing XP or Agile software development, the teams that used Scrum were more likely to stick with the process and have success with it. This stuck with me when I returned home and had to quickly organize our group.

Since I didn’t know anything about Scrum, I looked for a reference on O’Reilly Safari and found The Enterprise and Scrum by Ken Schwaber. Appendix A, “Scrum 1-2-3,” was most helpful. Not only did it give an overview of the whole process, but it also included examples of the documentation typically used for a Scrum project.

Scrum documentation for a project manager consists of a product backlog and a burndown chart. The product backlog is a high-level listing of activities that must be completed, the number of hours each activity is expected to take, and a listing of actual hours spent as the weeks progress. Activities are grouped on the chart according to sprints (the sprint is the basic unit of time around which activity is organized, typically two to four weeks). The burndown chart is a more detailed listing of who has taken responsibility for which activity and how they are progressing. This chart produces a weekly graph of how the team is progressing.
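
To make the burndown idea concrete, here is a toy calculation in Python with made-up activities and hours. Our real charts were spreadsheets, but the arithmetic behind the weekly graph is exactly this:

```python
# A minimal sketch of computing burndown data from a product backlog.
# The activities and hours below are made up for illustration.

backlog = [
    # (activity, estimated hours, actual hours burned per week)
    ("Write requirements doc", 10, [4, 6, 0, 0, 0]),
    ("Implement login page",   20, [0, 8, 8, 4, 0]),
    ("System test pass",       15, [0, 0, 5, 5, 5]),
]

total_estimate = sum(est for _, est, _ in backlog)   # 45 hours
weeks = len(backlog[0][2])

remaining = total_estimate
for week in range(weeks):
    burned = sum(actuals[week] for _, _, actuals in backlog)
    remaining -= burned
    print(f"Week {week + 1}: burned {burned}h, {remaining}h remaining")
```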

Based on our academic calendar, I broke our project into five one-week sprints. It took, no kidding, eight hours just to break down our tiny set of requirements into the two charts. Following the Scrum principle of self-organizing teams, once I had the activities listed, I went back to my team and asked team members to volunteer for assignments.

As the weeks progressed, we worked hard on our project. We filled out the chart every week, activities were finished and our application was completed. No one stayed up all night and we delivered exactly what we said we would. Oh yeah, and I made an A! So, what worked and what didn’t?

Stuff that didn’t work so well:

Team members who don’t pull their weight mess up all of the assignments. We had one team member who volunteered in front of the whole team to take half of the programming assignments. Afterward, he decided not to do any work. Since the burndown chart was already filled in with his name on several assignments, it was difficult to figure out how to deal with the hours he was supposed to work but didn’t. We handled it by reshuffling several activities among team members and reorganizing the chart, which threw off our initial estimates.

Activities that slipped became difficult to track. Since each activity is supposed to be listed under a single sprint, I wasn’t sure how to track an activity that wasn’t finished by the end of its assigned sprint. We ended up adding a column for the actual total hours required, since our chart initially had only estimated total hours.

One of the basics of Scrum is a daily 15-minute “stand-up” meeting where everyone answers three questions: “What did you do yesterday?” “What are you doing today?” “What is blocking you?” Those daily 15-minute meetings did not happen for my group because our schedules did not match up at all. That said, I think these meetings are crucial to the success of a Scrum project. People need to know what’s going on and who is blocking whom. If we had been having the daily meetings, we would have known much earlier that our slack programmer was slacking. To me, this also means that Scrum works better if teams are co-located. My team was not co-located, which put a serious damper on having any meetings. This was also partly because not all of us had equal access to computers at home.

Stuff that really worked:

We were able to re-organize every week depending on how well we’d met the previous week’s goals. This is really the greatest part of Scrum organization for me. When the one guy decided he didn’t want to program (to be fair, it was his last semester), we were able to completely re-organize for the next week’s work. We were also able to adjust the workload depending on who had more or less time to do work.

The charts made bottlenecks very clear. For about a week, I was the Scrum Master and the lead programmer. Everyone on the team, including me, knew that we couldn’t finish the semester this way, and our instructor noticed it from our chart and the number of assignments with my name attached. Because there is so much detail to be tracked for the product backlog and the burndown chart, there is no way one person can shoulder updating the charts and producing significant amounts of code. As I said, the whole team knew that we’d have to make changes, and we did. I handed the Scrum Master duties to someone else when we were halfway through the project.

Testing happened throughout the project which meant that no one had to stay up all night at the end. For this reason, I think that Scrum or any iterative process is highly effective. It was still challenging to keep the test team busy during the early and middle stages of the project, but we all felt better knowing that what we had produced was working correctly.

My conclusion about Scrum:

Project tracking, in general, is much more complicated than I had anticipated. Producing the artifacts for Scrum took some time, and they were not always easy to maintain. Even so, while I can’t speak for the whole team, I know that I had a much better grasp of what was happening because of our documentation. Documentation is a challenge on any project, and the fact that we were able to keep up with ours is extra points for Scrum. I think that iterative software development is much more effective and gives teams a much better chance to course-correct if something goes awry. The self-organizing aspect of choosing assignments also appealed to me. I am a tester, but I could see helping out with documentation or maybe switching and doing some coding on certain assignments. This aspect of Scrum seems likely to keep team members from burning out project-to-project while also providing opportunities to learn something new. I liked using Scrum, and hopefully I’ll have opportunities to work on Scrum teams in the future.

A Semester of Software Metrics and Formal Methods

Another semester is starting tomorrow, and I’m set to take a Software Metrics class and a Formal Methods class. Even though two classes plus 40 hours of work every week is a lot, I’ve been looking forward to these two. I’ve seen at work how important metrics can be. They form the basis for a lot of project planning and are ultimately a big part of how a software department is judged successful or unsuccessful (aside from revenue).

Formal Methods has consistently been described to me as the most useless class in the Software Engineering program. I haven’t taken it yet, but I have to disagree. In terms of decomposing requirements into test cases, I can see where formal methods would be quite helpful. Why are people so freaking scared of propositional logic and discrete math? If we’re all interested in computers, then why is this such a big deal? It reminds me that 75% of the people in my school’s CS, SWE, and IT programs are only there for a paycheck, which is saddening. I live in the hope that the faculty and staff will stop pandering to these wusses and include more classes like Formal Methods in the curriculum.

Somehow this semester, I will also have to complete about half of a thesis paper.  Over the break, I’ve been reading Ben Fry’s book about Data Visualization and working with his graphics program, Processing.  I’ve found a paper about visualizing quality, and hopefully the pair will help me along.

Happy New Year!

5 Lessons from the 2008 Google Test Automation Conference

I’m back from Seattle and ready to write some tests. Here is a short list of what I learned (or what was reinforced) at the 2008 Google Test Automation Conference.

1. If your group isn’t building continuously, they are behind the times.
In hearing people talk about how tests are run, it was generally taken for granted that builds are now being continuously integrated using a tool such as Hudson or Bamboo. I found this incredibly large spreadsheet comparing CI tools.
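
At its core, a CI server’s job is simple enough to sketch in a few lines of Python. The repo path and build script below are placeholders; real tools like Hudson add scheduling, distributed builds, and reporting on top of this basic loop.

```python
# A toy continuous-integration loop: poll the repo, rebuild on change.
# The repo path and build command are hypothetical placeholders.
import subprocess
import time

REPO_DIR = "/srv/checkout/myproject"   # hypothetical working copy
BUILD_CMD = ["./build_and_test.sh"]    # hypothetical build script

def head_revision():
    out = subprocess.run(
        ["git", "-C", REPO_DIR, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

last_built = None
while True:
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
    rev = head_revision()
    if rev != last_built:
        result = subprocess.run(BUILD_CMD, cwd=REPO_DIR)
        print(f"Built {rev}: {'PASS' if result.returncode == 0 else 'FAIL'}")
        last_built = rev
    time.sleep(60)  # poll every minute
```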

2. Everyone loves a visual representation.
Throughout the conference, besides hearing people talk about Selenium, people were absolutely crazy about a visual representation of the Windows Vista source code presented by James Whittaker in his keynote. This link is for a treemap created by The New York Times showing the change in the financial sector’s market capitalization over the past year. I’ve circulated it at work as an example of how powerful good visualization can be.
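
The idea behind a treemap is easy to sketch: carve a rectangle so that each item’s area is proportional to its value. Here is a toy one-level “slice” layout in Python with made-up numbers; published treemaps like the Times’ use fancier squarified layouts and nested categories, but the principle is the same.

```python
# A toy one-level treemap layout: slice a rectangle top to bottom so that
# each item's area is proportional to its value. Data values are made up.

def treemap_slice(items, x, y, w, h):
    """Return (label, x, y, w, h) rectangles tiling the region."""
    total = sum(value for _, value in items)
    rects = []
    for label, value in items:
        slice_h = h * value / total      # height proportional to value
        rects.append((label, x, y, w, slice_h))
        y += slice_h                     # next slice starts below this one
    return rects

# Hypothetical market-cap changes, in billions.
data = [("Bank A", 90), ("Bank B", 45), ("Bank C", 30), ("Bank D", 15)]
for label, x, y, w, h in treemap_slice(data, 0, 0, 100, 100):
    print(f"{label}: {w:.0f} wide x {h:.0f} tall at ({x:.0f}, {y:.0f})")
```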

3. Unit testing is ubiquitous.
I talked with people about the types of testing that happen at their workplaces, and the consensus is that much more emphasis is being placed on creating automated tests and unit tests. What surprised me is how many shops actually do use unit testing. This is not some crazy new-fangled thing anymore; this is what teams are doing in conjunction with continuous builds.
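
For anyone who hasn’t seen one, here is the flavor of a unit test that runs on every build. The function under test is made up:

```python
# A minimal unit test of the kind run on every continuous build.
# parse_price is a made-up function under test.
import unittest

def parse_price(text):
    """Parse a price string like '$1,234.56' into a float."""
    return float(text.replace("$", "").replace(",", ""))

class ParsePriceTest(unittest.TestCase):
    def test_simple_price(self):
        self.assertEqual(parse_price("$5.00"), 5.0)

    def test_thousands_separator(self):
        self.assertEqual(parse_price("$1,234.56"), 1234.56)

    def test_garbage_input_raises(self):
        with self.assertRaises(ValueError):
            parse_price("not a price")

if __name__ == "__main__":
    unittest.main()
```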

4. Live blogging isn’t easy at all.
Before I sat down for the first presentation of this conference, I thought that live blogging would be a piece of cake. It turned out to be a challenge because I had to balance reporting what the speakers were saying against writing my opinion of their presentations. Add in talking with the people next to me about what the speaker was saying and, at times, just trying to understand the presentation myself, and suddenly writing about it becomes quite challenging. Blogging all of them got to be too much, but luckily, I wasn’t the only one blogging the conference. David Burns of The Automated Tester was also blogging. Between the two of us, I think we got most, if not all, of the presentations blogged. I’m happy to link to any other GTAC blogs.

5. There is always someone smarter.
It’s always tough to admit that you are not the smartest person in the room (or even close to it), but in situations like this one, I really don’t mind. This conference put me in such close proximity to so many brilliant testers that I couldn’t help but be inspired. Every conversation I had was really amazing and helped me grow as a tester. I will always take any opportunity I’m offered to hang out in such great company.

It was great to meet so many smart and amazing people at GTAC. For anyone who has the opportunity to attend this conference, I highly recommend it.

Automated Model Based Testing of Web Applications by Atif M. Memon and Oluwaseun Akinmade

Testing event-driven software

Oh boy…AM starts talking about different states of event driven systems and is referring to state S-0 but he’s not saying zero. He is saying naught. S-naught…otherwise known as Snot. I can’t decide which is better. Snot or THUD.

Ok…back to testing. His problem domain is testing event-driven systems in all of their various states. He’s using a tool in Windows XP called Computer Management. He gives it a large number that’s obviously invalid and watches it fail. Then he goes to another dialog and it fails. He seems mainly concerned with generating all possible combinations of events for a web app. Elfriede Dustin mentioned a tool for this yesterday.

He’s about to crash the United Airlines site, but there are too many people (including me) on the wifi. Tip: always use slides for demos.

The 1st part of this talk is a basic summary of the current state of web testing. He talks about creating lots of tests through the UI, although yesterday this strategy was somewhat debunked in the session by Markus Clermont. MC pointed out that building up lots of these end-to-end tests running through a GUI can take a really long time if you have too many.

He wants to see some type of zoom in for these tests which would be really cool. This implies a hierarchical structure within these event driven tests.

GUI Model-Based Testing
Event Flow Graph: He uses Paint as an example to show how event interactions are diagrammed. Fellow blogger David Burns has done a tutorial on this type of testing using graphs.

His app will rip the GUI, generate an event flow graph, and use this to generate test cases. He has a demo.
After test cases are run, his app will analyze the run time to track the interactions. This is all done on desktop GUI apps, and he’s working on the web app side of it. He’s expecting some audience feedback for this.
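
To give a flavor of what generating test cases from an event flow graph looks like, here is a toy sketch in Python. The graph below is made up; his tool derives the real graph by ripping the GUI.

```python
# A minimal sketch: enumerate event sequences (test cases) up to a fixed
# length from an event flow graph. The graph itself is made up.

# event -> events that may follow it
event_flow = {
    "open_menu":  ["click_save", "click_exit"],
    "click_save": ["open_menu"],
    "click_exit": [],
}

def event_sequences(graph, start, max_len):
    """Yield every event sequence of length <= max_len starting at start."""
    stack = [[start]]
    while stack:
        seq = stack.pop()
        yield seq
        if len(seq) < max_len:
            for nxt in graph[seq[-1]]:
                stack.append(seq + [nxt])

for case in event_sequences(event_flow, "open_menu", 3):
    print(" -> ".join(case))
```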

His student is also presenting an independent study she did. She put together a basic page with typical web app inputs that would generate different states, and she generated a state machine model of this basic web page. Her final state is getting to the next page. Her test cases are event sequences plus expected states. Once she has the possible states, she creates a table. Most of her work was centered around getting the web ripping and the generation of test cases to work.

He’s asking for ideas on how to test this type

Q: Oh, this is what I wanted to hear; they are asking about the test case explosion problem I mentioned.
A: He’s expecting large state machines that can be executed in a distributed manner. They don’t expect testers to use this to automate their process; they want testers to generate the graph so that they can find more interesting test cases.

Q (really a suggestion): Have a best practice for a web app to generate a graph from which a more testable web app can be created.

Q: They aren’t considering data-dependent states. How do you get data into test cases?
A: We use a partitioning method and come up with constraints between field elements. This problem, however, isn’t solved.

GTAC Liveblogging: Context Driven Test Automation: How to Build the System You Really Need by Pete Schneider

6 common tasks that mattered for their testing
test distribution and run control
test case set up
test case execution
test case evaluation
test case tear down
results reporting
They had written this code 11 different times in 11 different places. PS’s group was in charge of consolidating these.
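
Consolidating those six tasks essentially means defining one interface that every team plugs its own steps into. A skeleton might look something like this (all names are hypothetical):

```python
# A skeleton of the six common tasks consolidated into one harness.
# All names are hypothetical; each team would plug in its own steps.

class TestHarness:
    def distribute(self, test, machines):
        """Task 1: push the test to target machines and control the run."""
        raise NotImplementedError

    def setup(self, test):
        """Task 2: prepare fixtures, config, and environment."""
        raise NotImplementedError

    def execute(self, test):
        """Task 3: run the test case."""
        raise NotImplementedError

    def evaluate(self, test, output):
        """Task 4: decide pass/fail from the output."""
        raise NotImplementedError

    def teardown(self, test):
        """Task 5: release resources and restore state."""
        raise NotImplementedError

    def report(self, results):
        """Task 6: publish results (web page, database, email...)."""
        raise NotImplementedError
```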

They were in agreement until they started talking about how each task should be implemented and its importance. What they realized was that they all had different priorities for their approach to automated testing.

To resolve the conflict, PS started asking questions: Who is writing the tests? Who looks at the results? They started grouping the tools that they had.

They came up with different contexts: individual dev, dev team, project, and product line. When most people talk about automation, they are talking about the project context. The product line context is for when a product has been released. Tests can be reused in different contexts; the difference is the framework being used.

Dev Context: unit tests; see xUnit Test Patterns by Gerard Meszaros.

Dev Team Context: focused on a subsystem of the product. These tests don’t require knowledge of internals. Ex.: tests his team wrote to catch race conditions.

Object Context: are builds more or less stable? Focus on user functionality. Infrastructure is more complex, and you have to think about dependencies, etc.

Project Context: this is where graphical tools are useful.

Product Line Context: long-term stability tests… when a release is approved, they test for backward compatibility. Ex.: they have run something for 97 days and then found an out-of-memory error.

Case Studies of Their Tools
ITE (Integrated Test Environment) is built on top of STAF/STAX. Built by testers for testers, with lots of code written in Python. The primary design criterion was stability. All tests have metadata describing the intended hardware and version of the product. They wanted to reduce setup and teardown.

How the 6 tasks are addressed in ITE
Tests and the framework are distributed as a Linux chroot. Verification is left to the test writer; the framework does health checks. Results of runs are stored in a database accessible via a web page.
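
I imagine the per-test metadata looks something like this; all of the values here are made up:

```python
# Hypothetical test metadata of the kind ITE attaches to every test:
# the intended hardware and product version. All values are made up.
TEST_METADATA = {
    "name": "failover_under_load",
    "product_version": "9.4.x",                 # made-up version string
    "intended_hardware": ["lb-6400", "lb-8800"],  # made-up hardware models
    "timeout_minutes": 30,
}
```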

XBVT is a Perl-based system for developer use. In designing it, they wanted tests to run inside or outside the tool with little overhead.
Dist/runtime control: stored in source control. Runtime is controlled by test manifests.
Results verification: left to the writer.
Teardown: left to the writer.
Reporting: a text file with pass/fail is generated and stored on a web server. These results are not stored in a database; you have to search for the file.

Instead of building “one framework to test them all,” they are focusing on modules.

He sees the next step as some visualization and some type of THUD.

Ask yourself:
who is going to write and maintain the framework?
who will build and maintain the tests?
how are the tests going to be used?
how long will the tests live?

I wish my company would take this kind of initiative, but F5 looks a lot smaller than mine.

Q: What is meant by test target…tests useful across multiple contexts
A: He liked the idea from yesterday of small, medium, and large tests, because it breaks things up in a similar way to the team, project, and product line contexts. Tests can be identical, but different stakeholders want to see results in different ways. For example, management wants a graph.

Q: Did you discover lessons about people writing tests for the wrong context?
A: We didn’t run into the case of people writing tests for the wrong context.

Q: You mentioned 2 tools.
A: One team’s strategy addresses all 4 contexts. One tool is for the individual dev context, and devs use that tool. Those tests are accessible to testers, but end-to-end tests work through a different tool. The tools work in one direction: the testers’ tool can pick up developer tests, but not the other way around.

Q: ITE stores results in a database; is this shared?
A: One database for everyone. Right now it’s difficult to look at results across multiple runs. They want the ability to aggregate runs to generate a report for a build on all platforms.

Q: Was the choice of a chroot environment because of virtualization?
A: The decision was made 2 years ago, when VMware wasn’t an option. He wasn’t with the company when the decision was made. They did this because of the heavy packet switching they work with.
Follow-up: If that weren’t the case, would you be looking at virtualization?
A: Yes, and they are looking at it. They have test harnesses consisting of multiple boxes and are looking at how to virtualize each one as a chunk. They are trying to come up with a schema for a test harness in the cloud.

They are using Django for the reporting UI and for data-driven tests, and they use a code review tool called Review Board. I also have it on good authority that FogBugz, when used with WebSVN and CruiseControl, works really well and has similar functionality.
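
A reporting UI like that presumably sits on a results table. Here is a minimal, made-up Django model of the kind that could back it:

```python
# A made-up Django model for storing automated test results,
# the kind of table a reporting UI could be built on.
from django.db import models

class TestResult(models.Model):
    test_name = models.CharField(max_length=200)
    build_id = models.CharField(max_length=40)
    platform = models.CharField(max_length=60)
    passed = models.BooleanField()
    duration_seconds = models.FloatField()
    run_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        status = "PASS" if self.passed else "FAIL"
        return f"{self.test_name} on {self.platform}: {status}"
```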

GTAC Liveblog: Using Cloud Computing to Automate Full-Scale System Tests by Mark Elian-Begin and Charles Loomis

Mark is presenting a project called ETICS, a European project funded by CERN. CERN has most recently been in the news for the Large Hadron Collider.

Automating System Tests
They use machines to investigate problems but have trouble getting machines allocated, which means that by the time he can run a test, the system has already been deployed and the problem exists in production. This is partly because the release dynamics are spread across stakeholders, which creates complexity for automation. To get around this, he started using Amazon’s Web Services to run some of the tests.

ETICS’ goal was to show a vision of automated testing. Mark wanted to “capture people’s worries in a common automated platform.” To do this, his team built a testing framework on top of a backend grid system called Condor. This is where problems started to appear. Challenges included limited CM, limited sysadmin support, and limited resources to implement automated testing. People were using this to run tests, but their tests kept crashing because of inconsistencies in the complex grid environment. To simplify management of the system, the decision was made to move the backend from Condor to Amazon’s Web Services.
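
For reference, renting a disposable test box from EC2 takes only a few lines. This sketch uses today’s boto3 SDK rather than whatever ETICS actually used, and the AMI ID and key name are placeholders:

```python
# A sketch of renting a disposable EC2 box for a test run.
# Uses boto3 (the current AWS SDK); AMI ID and key name are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI with the test env
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="test-harness-key",        # placeholder key pair
)
box = instances[0]
box.wait_until_running()
box.reload()                           # refresh attributes like the IP
print(f"Test machine {box.id} is up at {box.public_ip_address}")

# ... run the deployment test against the box ...

box.terminate()                        # stop paying the moment you're done
```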

Lessons Learned
Tools MUST be simple or people won’t use them. ROI must be obvious.

The use case Mark presented was for the Diligent project, which used ETICS to automate testing. Diligent goes through a large set of documents for queries and used ETICS for deployment tests.
Full process automation is not required for ROI.
If you don’t have access to the machine, troubleshooting becomes difficult. I can see why they chose to use Web Services in this instance; Amazon allows an amazing amount of control over the boxes being used.

Cloud Computing
build -> package -> install -> deploy -> test
The cloud helps with building lots of test beds.

He prefers using RESTful Web Services because of their ease of use.

Mark touched on the hardware virtualization runtime environment called Xen and says Microsoft is about to announce their cloud solution. I remember reading about the different ways to use AWS, and they really seem to push using it through REST.

The next step, he says, is KVM, which moves from paravirtualization (Xen) to full virtualization (KVM).

Public cloud vs. private cloud: they are seeing a need for clouds that work with sensitive services.

Q&A
Q: Are there projects being worked on with this tech?
A: “I wish I could be more open about this.” That means YES.
Q: I looked into the cost of 24×7 AWS and it was cost-prohibitive. If you use AWS for 10 minutes, they charge for an hour; how do you keep costs down?
A: It’s like mobile phone contracts. It’s a problem; personally, he’d like to see this go away.
A (from the audience): I had dinner with a friend doing performance testing who uses AWS to attack his system. He uses it for an hour and then turns it off.
Q: Performance testing?
A: They wrote a study comparing grid and cloud; it’s on the CERN website. Amazon uses compute units to guide users in their choices. We have little information on how Amazon puts their back end together.

OMG the crowd RAN OUT of questions.
