
Submitted an Abstract to PNSQC

I’m posting the abstract I just submitted to PNSQC. It’s also the abstract of the thesis I’m writing for my master’s. I’ve submitted a poster to the Grace Hopper Conference before, but never a full-on paper requiring a full-on presentation. I chose PNSQC for two reasons: the focus is more on the practical side, unlike some of the ACM conferences, and the conference is in Portland, Oregon. God, I love Portland.

Anyway, here is what I submitted:

Visualizing Software Quality

Moving quality forward will require better methods of assessing quality more quickly for large software systems. Assessing how much to test a software application is a consistent challenge for software testers, especially when requirements are less than clear and deadlines are constrained.

For my graduate research and my job as a software tester, I have been looking at how visualization can benefit software testing. In assessing the quality of large-scale software systems, data visualization can be used as an aid. Visualizations can show complexity in a system, coverage of system or unit tests, where tests are passing vs. failing and which areas of a system contain the most frequent and severe defects.

In order to create visualizations for testing with a high level of utility and trustworthiness, I studied the principles of good data visualizations vs. visualizations with compromised integrity. Reading about these led me to change some of the graphs I had been using for my QA assessments and to adopt newer types of visualizations, such as treemaps, to show me where I should be testing and which areas of source code are more likely to have defects.

This paper will describe the principles of visualization I have been using, the visualizations I have created and how they are used, as well as anecdotal evidence of their effectiveness for testing.

First Attempt at Visualizing Tests and Defects

This post is about a visualization I created to show test execution status with related defects, using data obtained from HP Quality Center. I’m using Excel to create this chart and deliberately stayed away from “fancy tricks.” If you want to recreate this, some steps will be different if you don’t use Quality Center or Excel. In that case, you get to figure it out.

My weekly status meeting drove me to create this chart. Every week, I sit in this meeting with some pretty important people. Whenever I’m in a testing phase, I have to show what it is I’ve been working on. Previously, I’ve used the report templates from Quality Center, but they really are crap. Wait, that’s not big enough…They really are C-R-A-P. Not only is it difficult to jostle the correct data into place, but the reports are ugly and none of my superiors actually trusts the Quality Center visuals. Since I’ve been doing all this reading about data viz, I realized that I could easily create something much better in Excel, in less time.

There are several steps to creating any visualization, and I’ll break the creation of this chart down into the following seven, which come from Ben Fry’s excellent book, Visualizing Data: Exploring and Explaining Data with the Processing Environment:
Acquire – getting your data
Parse – structuring and sorting your data
Filter – removing stuff you don’t need from your data
Mine – applying mathemagic in the form of statistics, data mining, or whatever to show patterns in your data
Represent – choosing the type of graph or chart you will be using
Refine – polishing your chart so that it has clarity and makes people want to look at it
Interact – adding ways for the user to explore your chart

Acquire
Looking at your test set in Quality Center’s Test Lab, right-click, and QC will show you an option at the bottom for “Save As.” Click this and choose Excel sheet. HTML would work too, since Excel can parse QC’s HTML.

Parse
For this chart, I like to create three groupings: Passed, Failed, and No Run. You can have other groupings, but you probably want to make sure that the “other” groupings stay together. In Excel, I sort the data by status into the three groups (a minimal sketch of this appears after the Filter step below). If you are using a test set from the past, you might only have two groupings.

Filter
This will change depending on how large your team is and how the work is divided. Since I’m the only tester, we all know who executed the tests in my chart. I remove the columns for Attachment, Planned Host, Responsible Tester, Execution Date, Time, and Subject. This leaves only two columns: Test Name and Status.
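If you’d rather script the Parse and Filter steps than do them by hand, here is a minimal sketch using pandas. It assumes the test set was exported from Quality Center as tests.xlsx and that the column names match the ones listed above; both the file name and the columns are assumptions you would adjust for your own export.

```python
# A minimal sketch of the Parse and Filter steps, assuming a QC export
# named tests.xlsx with "Status" and "Test Name" columns.
import pandas as pd

df = pd.read_excel("tests.xlsx")  # Acquire: the QC "Save As" export

# Parse: sort so Passed, Failed, and No Run rows are grouped together.
order = {"Passed": 0, "Failed": 1, "No Run": 2}
df["sort_key"] = df["Status"].map(order).fillna(3)  # other statuses last
df = df.sort_values("sort_key").drop(columns="sort_key")

# Filter: keep only the two columns the chart needs.
df = df[["Test Name", "Status"]]
df.to_excel("chart_data.xlsx", index=False)
```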

Mine
No mining in this one; it’s just straight-up data.

Represent
This chart uses a stacked bar format that looks similar to a stemplot, but isn’t really a stemplot.

Refine
Here’s where the magic really happens. See that column for status? Add a column between “Test Name” and “Status.” Each cell in the new column gets a color according to that test’s status: Passed=green, Failed=red, No Run (and anything else)=gray. If you use the basic colors in Excel, your colors will be too bright. In Excel 2003, you can change this by navigating to Tools>Options>Colors and choosing less saturated versions of red and green. You’ll still need the very bright red later, so don’t save another color over it.

Use fill color to change the colors in the empty column next to the status column. Now you can see each test and you can see how much is passed and how much is failed. Since the status is now indicated by color, you can get rid of the column with the status as text. If you’d rather keep it, you could have one column with the fill and the font color set the same. This would mask the text of the status inside the cell.
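If you ever want to automate this recoloring instead of doing it by hand in Excel, here is a minimal sketch using the openpyxl library. It assumes the sorted sheet from the earlier sketch, with the test name in column A, the new color column in B, and the status text in C; the specific hex values are just my guesses at muted red and green, and bright red is reserved for the defect trick described below.

```python
# A minimal sketch of the Refine step with openpyxl, assuming the
# layout described above (name in A, color column in B, status in C).
from openpyxl import load_workbook
from openpyxl.styles import PatternFill

fills = {
    "Passed": PatternFill("solid", start_color="FF9BBB59"),  # muted green
    "Failed": PatternFill("solid", start_color="FFC0504D"),  # muted red
}
gray = PatternFill("solid", start_color="FFBFBFBF")          # No Run, etc.
bright_red = PatternFill("solid", start_color="FFFF0000")    # save for defects

wb = load_workbook("chart_data.xlsx")
ws = wb.active
for row in ws.iter_rows(min_row=2):   # skip the header row
    status = row[2].value             # column C holds the status text
    row[1].fill = fills.get(status, gray)
wb.save("status_chart.xlsx")
```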

Gridlines can be very distracting, so clear all of them. To do this navigate to Tools>Options>View>Window options and clear the Gridlines check box. Doing this should immediately relax your eyes when you look at the chart.

To make it obvious which test has which status, right-justify the column for test name.

If you want to show defects on your chart, you can do this using color. Remember where I said not to save over the bright red? If you have defects that you need to show on the chart, change the color of the related test case to bright red and add the ID and title of the defect to a column to the right of the status. This works when you typically have one defect per test case. If you have more, I would just color that test bright red and keep a list of the defects elsewhere. Showing defects this way highlights the need for accurate and concise defect descriptions. I was reading in How We Test Software at Microsoft that their testers work very hard at creating good defect descriptions. In fact, a friend of mine had a great post that received excellent comments about this. Here is where the defect description can make a big difference. Ditto the test names. Everyone in the status meeting sees these and asks questions.

Edward Tufte would probably say that I should print this chart out on a really big piece of paper, but he doesn’t work in my group. My status meeting is paperless, so this will be displayed on a big fancy conference room flat screen, ‘cause that’s how we roll. Actually, I’m not joking about the rolling part. The person who leads the meeting has a tendency to compulsively scroll up and down, so I designed my chart to be displayed within the width of his laptop screen. That’s why the summary is out to the side. Also, I made the text of the title and the summary much larger than the font of the individual tests. I’m using Trebuchet MS for the font.

Interact
At the most basic level, this chart lets users interact by focusing on the smaller, individual tests if they choose to. They can also look at the defect information if they would rather. As an improvement, I might figure out how to show some information for each test when it is moused over.

This chart probably won’t scale if you have multiple testers, multiple projects, or copious amounts of tests and defects. I know that’s most testers, but I really made this chart specifically for my needs. It shows the very busy and important people what they need to see in a way that they can trust.

As always, comments are welcome. Love it, hate it? Let me know. I’m still learning about this stuff and welcome the feedback.

Integrity in Data Visualization: Part 2 of 2

In the second part of this series about data visualization (Part 1 is here), I will show how, according to Edward Tufte, information visualizations can trick the viewer. Since I read through this chapter in his book, The Visual Display of Quantitative Information, I’ve noticed compromises of graphical integrity in the most surprising of places. Some of these are included in this post as examples.

Currently, I’m writing a paper for my metrics class on software visualization. After the class, it will be expanded to include how software visualization can work for testers. Part of knowing how to use visualization for any project is understanding what constitutes a good or bad visualization. I’m guessing that if you found my blog you might be a tester, and in that case, understanding the basics of data visualization will help you understand where I’m going with some of my upcoming posts. Outside of testing, understanding complex visualization is a skill we all need to have because we live in an age of data.

Labeling should be used extensively to dispel any ambiguities in the data. Explanations should be included for the data. Events happening within the data should be labeled.

For this example, I’d like to use a graphic from a post that went up on TechCrunch while I was writing this one. It was posted by Vic Gundotra, VP of Engineering for Mobile and Developer Products at Google. You can read his full post here. I’m calling out a couple of his graphics because I was pretty shocked that he would post these. I’ve seen some of his presentations on YouTube, and they were awesome. Using graphics as bad as the ones he posted on TechCrunch only diminishes an otherwise strong message and will make me think twice about any visuals he presents in the future.

Here is the first of Vic Gundotra’s graphics. Notice how he does not label the totals on the graph itself, but separately at the bottom. There is no way a viewer can compare the data he’s representing.

In time series graphics involving money, monetary units should be standardized to account for inflation.

OK, this might be a controversial example, but please read the explanation before flaming me. I freaking love this award-winning interactive graphic about movies created by the storied New York Times graphics department…BUT I have a bone to pick with it. The graphic shows movies from 1986 to 2008, and it does not account for inflation. There are some other things going on with this graphic as well, but since it’s about movies and not our budget crisis, I give its lack of adjustment for inflation a “meh.” It just goes to show that nothing is perfect.

All graphics must contain a context for the data they represent.

Here’s another Vic Gundotra graphic. Notice how there is no total for the number of users represented, yet Gundotra is trying to say that 20 times more people are using the T-Mobile G1. By not including this number, Gundotra is not providing an accurate picture of how many people are using either platform. It could be 20 people using the G1, or it could be 20,000. There’s no way to know. The fact that he didn’t include this number says, to me, that maybe the number of people is embarrassingly small, but that’s another post for another blog.

Numbers represented graphically should be proportional to the numeric quantities they represent.

This is all about scale in graphics. If you are looking at a graphic, the pieces you are looking at should be to scale. Tufte actually has a formula for what he calls “The Lie Factor.” This link has a couple of illustrations and also shows how the formula is calculated.
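For reference, Tufte’s formula is simply a ratio of the effect as drawn to the effect in the underlying numbers:

\[
\text{Lie Factor} = \frac{\text{size of effect shown in graphic}}{\text{size of effect in the data}}
\]

A value of 1 means the graphic is honest; Tufte treats values much above 1.05 or below 0.95 as substantial distortion.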

The number of dimensions carrying information should not exceed the number of dimensions in the data.

You know all of those 3D pie chart and bar graph templates in the Microsoft Excel chart wizard? Don’t use them anymore, and yes, I’ve used them myself in the not-too-distant past. They qualify as “chart junk” from Tufte’s perspective.

Variation should be shown for data only and not the design of the graphic.

I looked but couldn’t find a good example of this on the web. (If you see one, let me know.) One of the graphs Tufte uses to illustrate this point is a bar graph where the bars for years deemed “more relevant” are popped out in a separate, larger section using a really heinous 3D effect. It’s on page 63.

As always, comments are welcome.


Bug Titles: Points to Remember

When creating a defect report, the title can be very important, as it is sometimes the only part of a defect the developer will genuinely process. Not only that, but in Quality Center, the title of a defect is what gets emailed around. In his book, How We Test Software at Microsoft, Alan Page has some pointers on what to title a defect report.

He reiterates the importance of a good title and says that, when scanned as a list, bug titles can form an overall picture of a system’s defects. Some of the particulars of creating a good title include limiting the number of characters to about 80; that’s a little more than half of what you get on Twitter. Apart from that, you need to walk the fine line of being descriptive, but not overly descriptive.

His example of a good bug title is, “Program crash in settings dialog box under low-memory conditions.”

The description is for any notes that don’t fit in the title. So, if you are including actual and expected behavior, that would go in the description and not the title.
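To make the guidelines concrete, here is a tiny, hypothetical sanity check you could run over exported defect reports. The field names and rules are my own assumptions for illustration, not anything from the book or from Quality Center.

```python
# A minimal, hypothetical lint check for the guidelines above: keep
# titles near 80 characters and push expected/actual behavior into
# the description.
MAX_TITLE_LEN = 80

def check_defect(title: str, description: str) -> list[str]:
    warnings = []
    if len(title) > MAX_TITLE_LEN:
        warnings.append(f"Title is {len(title)} chars; aim for <= {MAX_TITLE_LEN}.")
    if any(word in title.lower() for word in ("expected", "actual")):
        warnings.append("Expected/actual behavior belongs in the description.")
    if not description.strip():
        warnings.append("Description is empty; add repro notes.")
    return warnings

# Alan Page's example title passes cleanly.
print(check_defect(
    "Program crash in settings dialog box under low-memory conditions",
    "Expected: settings save. Actual: crash with out-of-memory error.",
))
```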


What is Data Visualization? Part 1 of 2: Characteristics of Excellent Visualizations

In this post, I will answer the question, “What is data visualization?” and write about some of Edward Tufte’s principles for “excellent” data visualizations. These can be an aid in creating better graphs or in reading them. In a subsequent post, I will relate these fundamental principles to visualizations for use in software testing.

In his first book, The Visual Display of Quantitative Information, Tufte outlines several principles for use in the creation and interpretation of quantitative graphics. If you get the chance, I highly recommend flipping through it. If you have questions about the statistics concepts, you might want to look at Head First Statistics by Dawn Griffiths. I’ve been hitting this book up regularly, especially for the metrics class I’m currently taking.

In the comments of my post “Exploring Data Visualization,” Eric asked me, “What is data visualization?” When I say data visualization, I’m talking about a graphical depiction of statistical information that tells a story. These depictions can be simple or complex, and they all have a point they are trying to make. According to Edward Tufte, an excellent visualization expresses “complex ideas communicated with clarity, precision and efficiency” (13).

To illustrate this, have a look at one of my favorite interactive web graphics, “A Year of Heavy Losses,” from The New York Times. It illustrates the change in market capitalization of banks from 2007 to 2008. Be sure you click on the square at the top left to see the change. You can see not only the number of banks dwindling, but also their capitalization in the market. You can also mouse over each bank to see more granular data.

According to Tufte, these are some characteristics of excellent visualizations:
1. Lots of numbers packed into a tiny space
2. Data represented is not distorted
3. Extremely large data sets have coherency
4. Comparison between different pieces of data is easy
5. Data is revealed at a micro level and at a macro level
6. The data’s purpose is clear
7. Integration between the statistical and verbal descriptions of the data is tight

Here is an illustration Tufte uses as an example:

Train Schedule by E.J. Marey

It is a French train schedule from the 1880s. Take some time to look at it and understand it, then look back at the characteristics I just listed. Did you notice how the cities on the left are not listed at regular intervals? This is because Marey spaced them proportionally to their actual distance from each other. Because he did that, when you look at the slope of the lines, you are not only seeing arrival and departure times, but also the relative speed with which the train will get you from one place to the next. If you depend on trains to get around, this can be very important information.

This graphic also illustrates the concept of multivariate data, which, according to Tufte, is also a quality of excellent visualizations. I’m going to break out what’s in the train illustration into univariate, bivariate, and multivariate data. If I miss something, just add a comment.

Let’s start with the concept behind this illustration. It depicts the arrival and departure times of trains in France. It shows the route the trains take and the relative speed with which they travel from one station to the next.

Univariate data shows the frequency/probability of one variable.
Some univariate data from this graphic: the number of trains arriving at or departing a station; the number of trains arriving at stations at any one time; the number of arrivals at a station each day; the number of departures from Chagny station each day. Each of the variables I have described is a frequency (Head First Statistics 609).

Bivariate data shows 2 values for an observation.
Bivariate data from this graphic:
(x) Time of day
(y) Number of trains arriving/departing at Chagny station

For this observation you need two variables (Head First Statistics 610).

Multivariate data shows multiple values for an observation.
If we take the observation from the bivariate data example and add stations, the observation becomes multivariate and is what you see in Marey’s illustration.
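To see how the pieces fit together, here is a minimal sketch of a Marey-style schedule in matplotlib. The stations, distances, and stop times are all invented for illustration; the point is that spacing the y-axis by distance, as Marey did, makes the slope of each line encode the train’s speed.

```python
# A minimal Marey-style schedule sketch; all data below is invented.
import matplotlib.pyplot as plt

# Stations with hypothetical distances from Paris in km.
stations = {"Paris": 0, "Fontainebleau": 60, "Sens": 113, "Laroche": 155, "Dijon": 315}

# Each train is a list of (hour-of-day, station) stops.
trains = [
    [(6.0, "Paris"), (7.1, "Fontainebleau"), (8.0, "Sens"), (8.8, "Laroche"), (11.5, "Dijon")],
    [(9.0, "Paris"), (9.8, "Fontainebleau"), (10.5, "Sens"), (11.1, "Laroche"), (13.0, "Dijon")],
]

fig, ax = plt.subplots()
for stops in trains:
    hours = [h for h, _ in stops]
    dists = [stations[s] for _, s in stops]
    ax.plot(hours, dists)  # slope = speed between stations

# Space the station labels proportionally to distance, as Marey did.
ax.set_yticks(list(stations.values()))
ax.set_yticklabels(list(stations.keys()))
ax.set_xlabel("Time of day (hours)")
ax.invert_yaxis()  # Paris at the top, like the original
plt.show()
```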

I’ve just covered a lot of material, and I hope it gives you a good idea of what data visualization and the field of information visualization are all about. In my next post, I’ll be covering the ways in which graphs can lie. I’ve seen this happen at work and just completed a reading assignment for school where it was also an issue. These are complicated topics that software engineers should understand if they are to use visualization in ways such as a tester’s heads-up display.

Questions and comments are always welcome.


Exploring Data Visualization

For the past few months, I’ve been obsessively learning about data visualization, so I’m posting about my exploration with links to everything (books, blogs, graphics, people, etc.). This topic fascinates me because it brings together all of my studies, including art, art history, theatrical design, computer science, and software engineering.

Last fall, I found the book Visualizing Data: Exploring and Explaining Data with the Processing Environment by Ben Fry. I can’t remember how I found it; maybe it was in the O’Reilly email of new titles. Since I work at a credit reporting agency, there is no end to the data. It seemed like the perfect opportunity to learn about graphics, so I started typing out Fry’s examples and then applying them to my data. Fry is one of the creators of a graphics library called Processing, which uses Java. This made the examples pretty easy to understand. I’m not finished with this book yet. The examples get more and more challenging the further you go, but the author seems to enjoy interacting with his readers and wants people to have a positive experience with his code.

So last fall, I was having fun with these examples, and then I went to GTAC. I know I’ve already written about James Whittaker’s keynote, but just bear with me. Seeing how transfixed the crowd was by the few data visualizations he uses for testing, I felt something click in my head. There aren’t many moments in life when we get total clarity, but I finally had a huge one and decided not to let it go.

Before, I had just been playing with data visualization and happy that it fed my artistic side, but now, I was in it for keeps. When I came home, I looked at the other books that Ben Fry was referencing and found the ultimate classic of data visualization. If you only ever read one book about this topic, that book should be Edward Tufte’s The Visual Display of Quantitative Information, 2nd edition. After I read this book, I had a talk with my thesis advisor and decided to do a thesis on data visualization and software testing.

Please note that Tufte will not tell you what type of graph to use in any particular situation. For that, I turned to Head First Statistics. It goes over this in Chapter 1 and is the most accessible statistics book I have ever read.

Since blogging is such a great font of information, I went out looking for blogs and found several that I really enjoy. There are definitely other worthwhile blogs on data visualization. These are the ones I’m reading regularly:

Jorge Camoes’ Charts
Visual Business Intelligence (Stephen Few’s blog)
Excel Charts and Tutorials by Peltier Technical Services
Information Ocean

A few weeks ago, Edward Tufte offered a seminar in Atlanta, and I was fortunate enough to go. It’s pricey, but all four of his lovely hardback books are included, which somewhat offsets the cost of admission. I found some excellent notes on Justin Wehr’s blog, taken a few days later in Raleigh. I could tell that Dr. Tufte had given his prezo a few (hundred) times, but seeing him present his material provoked some really deep thinking. When the presentation was over, I walked to a bench in the hotel lobby and put together the bones of my thesis. Visionaries such as Dr. Tufte always inspire my best thinking.

Currently, I’m reading through lots of research papers about the visualization of source code. I’ll make a separate blog post for that. Well, there might be several separate blog posts for that. For the first time in my life, I feel completely engaged in what I’m doing at work and at school.

A few more Vampire Testing Lessons

Yellow Porsche Carrera GT (image via Wikipedia)

I just love it when bloggers mix pop culture with testing. Recently, the Testy Redhead posted a few lessons about testing she adapted from her reading of the Twilight books. I love these books in all their somewhat-poorly-written-but-ultimately-addicting glory, so I couldn’t help but put together a few lessons of my own. The last lesson is a spoiler, but I’m guessing that there are not hordes of Twi-hards reading this blog.

You can build a working car from a bunch of disparate parts.
Jacob totally rebuilds a Volkswagen Rabbit using parts that he collects over the course of a couple of the books. This reminds me of some of the great open source tools that are now available for testing, such as Bugzilla and Selenium. It also reminds me of the Automated System Test Framework I’m building from scratch at my job. I started out with a bunch of short scripts that I wrote, but with some perseverance, I’m close to having a system in place that will greatly assist me in testing.

Sometimes the yellow Porsche really is what you need.
I’ve noticed a real disdain for expensive tools among testers, but sometimes they are the right answer, the same way the yellow Porsche was the right car for Bella and Alice in New Moon. When I started my testing job, I was not a tester and I did not know what I was doing. My tester friends in another group had shown me HP Quality Center, and I realized that I desperately needed its help with test case management. It got me off of spreadsheets and gave me a structure for repeatable testing.

Don’t read the last one if you don’t want to read the spoiler.

Testers ARE the Shield
In the last book, Bella protects everyone using her special super power. Testers also have a super power, and that power is the right to say, “This product really stinks and is not ready to be released with our team’s name on it.” This is not the most obvious power, but it can protect a team or even a company from releasing a product and regretting it. I’ve had to say this before to a most “busy and important” developer who let me know how busy and important he was, but I knew that I was protecting consumers by saying it.


Is complexity really all about the source code?

At this point, the work I’m doing on my master’s thesis is starting to congeal. I’ve been reading about data visualization and how it can assist quality. There seem to be several levels at work. There is the source code level, where developers and, to some extent, QA examine individual pieces of code. This level is addressed by unit tests, generally written by a developer or a specialized white box tester. Higher up is the system test level, which may or may not be automated. Recently, frameworks and tools such as STAX/STAF and Selenium have helped automate and provide more consistency for some of the system test level. At the highest level, quality is less about tests and more about metrics, in particular, lines of code.

In my research, I have found many papers about providing analysis for source code. There are also plenty of papers that address the production of metrics. Software and user interfaces have developed to the point where the idea of zooming is going to bring these layers together. If you think about Google Maps and its zooming capability, think about how that type of zooming could be applied to the software development process. James Whittaker, the architect of Microsoft Team Test, certainly has. His vision is that this type of interaction will result in a heads-up display for quality.

Assuming that this is where we are going, I can see some challenges in how projects are organized. The main challenge will be mapping from one level to the next. Moving from a system test level to a unit test level will highlight how system tests map to unit tests. This goes back to requirements and design, which I know is still an issue at plenty of software shops, though not all. Apart from unit-to-system test mapping, there is also the issue of how LOC is examined and how that maps to source code and maybe even system tests. What if two developers have worked on the same code? How would management assess the LOC count?
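Part of why LOC is slippery is that even a naive count requires decisions. Here is a minimal sketch of one way to count it, assuming a Python codebase under a hypothetical src directory; every choice in it (skipping blanks, skipping comment lines) changes the totals a manager would see.

```python
# A naive lines-of-code count; real LOC tools make many more decisions
# (generated code, docstrings, multi-line statements), which is exactly
# why the metric is hard to pin down.
from pathlib import Path

def loc(root: str) -> dict[str, int]:
    counts = {}
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        # Skip blank lines and pure comment lines; even this choice
        # changes the totals.
        counts[str(path)] = sum(
            1 for line in lines if line.strip() and not line.strip().startswith("#")
        )
    return counts

print(sum(loc("src").values()))  # "src" is a placeholder path
```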

I know that, technically, system tests, unit tests, and source code are all supposed to be organized based on requirements, but some of us live in the real world!!!! I know that there are places where requirements are maybe not great, but usable. That said, there are plenty of testers who don’t work in shops with the benefit of usable requirements, or the requirements may be usable only in a very loose sense of the word. If a tester is working in a group with bad requirements, they are still responsible for the bugs they didn’t catch.

In this case, complexity is not just about source code anymore, but about how well tests cover the system.

Software Testing Conferences

In my work as a one-woman test team, figuring out best practices in testing has taken some work. Books help a lot. Advice from friends at my company who also test has helped too, but I need to connect with other testers, most of whom probably work on teams with much more structured testing guidelines. As if that weren’t enough, I have this master’s thesis I’m putting together that will deal with testing, and I’m looking for places to submit it as a presentation or a poster.

Attending the Google Test Automation Conference last year opened my eyes to the fact that it is important to keep up with the best practices of other companies. Talking with other professionals in testing helped me gauge what I’ve been doing right at my job and where I can stand to improve. I’ve been looking around for other conferences to attend, and have found a few of them. Reading the blog posts of people who have attended or presented at these conferences is also interesting. In addition to including the links for some testing conferences, I’m also including links to some blog posts about those conferences (when I could find them).

Obviously my list is not exhaustive and will probably get dated over time, but I don’t mind adding to it. Likewise, if you have blogged about a testing conference you enjoyed or have comments about one of the conferences I listed, feel free to leave a comment.

Star East
Star West: JW on Test
PNSQC: Testy Redhead
CAST: Adam Goucher

GTAC: You can read my other posts or, for something completely different, check out The Automated Tester
Swiss Testing Day

Project Management Class and Scrum

This past fall, the only class I took was Project Management. I signed up for this class because I thought it would be a nice break from the classes I had been taking, like Distributed Systems and Artificial Intelligence. Ha ha ha. Now I know why the PMs at work have such long hours and pained looks on their faces (or maybe it’s because we use waterfall).

In order to give students in the class a full-on project management experience, our instructor broke us into groups and gave us some requirements from which we had to produce a working application. I ended up being elected the PM for my group. Since there has been a serious lack of coverage of Agile software development in my classes, I organized us to use Scrum because of an interaction I’d had at the Google Test Automation Conference (GTAC).

When I attended GTAC last fall, I had a great conversation with a man named Pete from F5 Networks. We were sitting at a lunch table where we were supposed to be discussing Agile development. He told me that he had used Scrum and been pretty happy with it on his team. I didn’t get the chance to ask him how he had implemented Scrum, but he did tell me an interesting fact he had read: of all the teams implementing XP or Agile software development, the teams that used Scrum were more likely to stick with and have success with that process. This stuck with me when I returned home and had to quickly organize our group.

Since I didn’t know anything about Scrum, I looked for a reference on O’Reilly Safari and found The Enterprise and Scrum by Ken Schwaber. Appendix A, “Scrum 1-2-3,” was most helpful. Not only did it have an overview of the whole process, but it also included examples of the documentation typically used for a Scrum project.


Scrum documentation for a project manager consists of a Product Backlog and a Burndown Chart. The product backlog is a high-level listing of activities that must be completed, the number of hours each activity is expected to take, and a listing of actual hours spent as the weeks progress. Activities are grouped on the chart according to sprints (the sprint is the basic unit of time around which activity is organized, usually 2-4 weeks). The burndown chart is a more detailed listing of who has taken responsibility for which activity and how they are progressing. This chart produces a weekly graph of how the team is progressing.
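To show what that weekly graph looks like, here is a minimal burndown sketch in matplotlib. The hour totals are invented for a five-sprint class project like ours; the shape of the two lines is the point, not the numbers.

```python
# A minimal burndown chart sketch with invented weekly numbers.
import matplotlib.pyplot as plt

weeks = [0, 1, 2, 3, 4, 5]
estimated_remaining = [200, 160, 120, 80, 40, 0]  # the ideal, even burn
actual_remaining = [200, 175, 150, 95, 50, 0]     # what really happened

plt.plot(weeks, estimated_remaining, label="Estimated hours remaining")
plt.plot(weeks, actual_remaining, label="Actual hours remaining")
plt.xlabel("Week (one sprint per week)")
plt.ylabel("Hours of work remaining")
plt.legend()
plt.show()
```

When the actual line sits above the estimated line, the team is behind; the gap is what we used each week to decide how to reshuffle assignments.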

Based on our academic calendar, I broke our project into five one-week sprints. It took, no kidding, 8 hours just to break down our tiny set of requirements into the two charts. Following the Scrum principle of self-organizing teams, once I had the activities listed, I went back to my team and asked team members to volunteer for assignments.

As the weeks progressed, we worked hard on our project. We filled out the chart every week, activities were finished and our application was completed. No one stayed up all night and we delivered exactly what we said we would. Oh yeah, and I made an A! So, what worked and what didn’t?

Stuff that didn’t work so well:

Team members who don’t pull their weight mess up all of the assignments. We had one team member who volunteered in front of the whole team to take half of the programming assignments. Afterward, he decided not to do any work. Since the burndown chart was already filled in with his name on several assignments, it was difficult to figure out how to deal with the hours he was supposed to work but didn’t. We handled this by reshuffling several activities among team members and reorganizing the chart, which threw off our initial estimates.

Activities that slipped became difficult to track. Since each activity is supposed to be listed in one sprint category, I wasn’t sure how to track an activity that wasn’t finished by the end of its assigned sprint. We ended up adding a column for the actual total hours required, since our chart initially had only estimated total hours.

One of the basics of Scrum is a 15-minute daily “stand up” meeting where everyone answers the questions: “What did you do yesterday?” “What are you doing today?” “What is blocking you?” Those daily 15-minute meetings did not happen for my group because our schedules did not match up at all. That said, I think these meetings are crucial to the success of a Scrum project. People need to know what’s going on and who is blocking whom. If we had been having the daily meetings, we would have known much earlier that our slack programmer was slacking. To me, this also means that Scrum works better if teams are co-located. My team was not, which put a serious damper on having any meetings. This was also partly because not all of us had equal access to computers at home.

Stuff that really worked:

We were able to re-organize every week depending on how well we’d met the previous week’s goals. This is really the greatest part of Scrum organization for me. When the one guy decided he didn’t want to program (to be fair, it was his last semester), we were able to completely re-organize for the next week’s work. We were also able to adjust the workload depending on who had more or less time to do work.

The charts made bottlenecks very clear. For about a week, I was the Scrum Master and the lead programmer. Everyone on the team, including me, knew that we couldn’t finish the semester this way, and our instructor noticed it from our chart and the number of assignments with my name attached. Because there is so much detail to track in the Product Backlog and the Burndown Chart, there is no way one person can shoulder updating the charts and producing significant amounts of code. As I said, the whole team knew that we’d have to make changes, and we did. I handed the Scrum Master duties to someone else about halfway through the project.

Testing happened throughout the project which meant that no one had to stay up all night at the end. For this reason, I think that Scrum or any iterative process is highly effective. It was still challenging to keep the test team busy during the early and middle stages of the project, but we all felt better knowing that what we had produced was working correctly.

My conclusion about Scrum:

Project tracking, in general, is much more complicated than I had anticipated. Producing the artifacts for Scrum took some time, and they were not always easy to maintain. Even so, I can’t speak for the whole team, but I know that I had a much better grasp of what was happening because of our documentation. Documentation is a challenge on any project, and the fact that we were able to keep up with ours is extra points for Scrum. I think iterative software development is much more effective and gives teams a much better chance to course-correct if something goes awry. The self-organizing aspect of choosing assignments also appealed to me. I am a tester, but I could see helping out with documentation or maybe switching and doing some coding on certain assignments. This aspect of Scrum seems likely to keep team members from burning out project-to-project while also providing opportunities to learn something new. I liked using Scrum, and hopefully I’ll have opportunities to work on a Scrum team in the future.
