Six common tasks mattered for their testing:
test distribution and run control
test case set up
test case execution
test case evaluation
test case tear down
results reporting
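The six tasks above can be sketched as one runner loop. This is a minimal illustration, not F5's actual code; all class and function names are invented.

```python
# Minimal sketch of the six common tasks as one runner loop.
# All names here are illustrative, not from F5's frameworks.

class TestCase:
    def __init__(self, name, body, expected):
        self.name = name
        self.body = body
        self.expected = expected

    def setup(self):             # test case set up
        self.result = None

    def execute(self):           # test case execution
        self.result = self.body()

    def evaluate(self):          # test case evaluation
        return self.result == self.expected

    def teardown(self):          # test case tear down
        self.result = None


def run_suite(cases):
    report = {}                  # results reporting target
    for case in cases:           # distribution/run control decides this list
        case.setup()
        try:
            case.execute()
            passed = case.evaluate()
        finally:
            case.teardown()
        report[case.name] = "PASS" if passed else "FAIL"
    return report


cases = [TestCase("adds", lambda: 1 + 1, 2),
         TestCase("broken", lambda: 1 + 1, 3)]
print(run_suite(cases))  # {'adds': 'PASS', 'broken': 'FAIL'}
```

Writing this loop 11 separate times is exactly the duplication the consolidation effort was meant to remove.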
They had written this code 11 different times in 11 different places. PS's group was in charge of consolidating these.
They were in agreement until they started talking about how each task should be implemented and how important it was. What they realized was that they all had different priorities in their approach to automated testing.
To resolve the conflict, PS started asking questions: Who is writing the tests? Who looks at the results? They started grouping the tools that they had.
They came up w/ different contexts: individual dev, dev team, project, and product line. When most people talk about automation, they are talking about the project context. The product line context is when a product has been released. Tests can be reused in different contexts; the difference is the framework being used.
Dev Context: unit tests; see xUnit Test Patterns by Gerard Meszaros.
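A minimal example of the dev-context style using Python's standard unittest module, which follows the xUnit patterns the book describes. The Stack class under test is a toy invented for illustration.

```python
import unittest


class Stack:
    """Toy class under test (illustrative only)."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()


class StackTest(unittest.TestCase):
    def setUp(self):
        # xUnit-style fixture setup: runs before every test method
        self.stack = Stack()

    def test_push_then_pop(self):
        self.stack.push(42)
        self.assertEqual(self.stack.pop(), 42)

    def test_pop_empty_raises(self):
        with self.assertRaises(IndexError):
            self.stack.pop()

# Run with: python -m unittest <module>
```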
Dev Team Context: focused on a subsystem of the product. These tests don't require knowledge of internals. Ex. tests his team wrote to catch race conditions.
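The actual race-condition tests weren't shown; here is a toy illustration of the pattern, with an artificial delay inserted to make the lost-update race reproducible rather than flaky.

```python
import threading
import time


class UnsafeCounter:
    """Toy counter with an unprotected read-modify-write (illustrative only)."""
    def __init__(self):
        self.value = 0

    def increment(self):
        v = self.value
        time.sleep(0.05)      # artificially widen the race window
        self.value = v + 1


def stress(counter, nthreads=8):
    """Fire concurrent increments and report the final count."""
    threads = [threading.Thread(target=counter.increment)
               for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value


final = stress(UnsafeCounter())
print(final)  # far less than 8: lost updates reveal the race
```

A team-context test like this exercises the subsystem's concurrency behavior without needing access to its internals.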
Project Context: are builds more or less stable? Focus on user functionality. The infrastructure is more complex and you have to think about dependencies, etc. This is where graphical tools are useful.
Product Line Context: long-term stability tests; once a release is approved they test for backward compatibility. Ex. they ran something for 97 days and then found an out-of-memory error.
Case Studies of Their Tools
ITE (Integrated Test Environment): built on top of STAF/STAX. Built by testers for testers, with lots of code written in Python. The primary design criterion was stability. All tests have metadata describing the intended hardware and version of the product. They wanted to reduce setup and teardown.
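ITE's actual metadata schema wasn't shown; here is a plausible sketch of per-test "intended hardware and version" tags and a selection filter, with all names and the dict format invented for illustration.

```python
# Hypothetical per-test metadata in the spirit of ITE's "intended hardware
# and product version" descriptors; the real schema wasn't described.
TESTS = [
    {"name": "failover_basic",   "hardware": {"hw-a"},         "min_version": (10, 0)},
    {"name": "throughput_smoke", "hardware": {"hw-a", "hw-b"}, "min_version": (9, 4)},
]


def select_tests(tests, hardware, version):
    """Keep only tests whose metadata matches the target test bed."""
    return [t["name"] for t in tests
            if hardware in t["hardware"] and version >= t["min_version"]]


print(select_tests(TESTS, "hw-b", (9, 6)))  # ['throughput_smoke']
```

Filtering on metadata up front is one way to cut wasted setup and teardown on boxes a test was never meant for.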
How the 6 tasks are addressed in ITE:
Tests and framework are distributed as a Linux chroot. Verification is left to the test writer; the framework does health checks. Results of runs are stored in a database accessible via a web page.
XBVT: a Perl-based system for developer use. In designing it they wanted tests to run inside or outside the tool with little overhead.
Dist/runtime control: stored in source control; runtime is controlled by test manifests.
Results Verification: left to writer
Teardown: left to writer
Reporting: a text file with pass/fail is generated and stored on a web server. These results are not stored in a database; you have to search for the file.
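XBVT's manifest and report formats weren't shown; this is one way a manifest-driven run with flat-file reporting could look. Everything here, from the manifest syntax to the test names, is assumed for illustration.

```python
# Sketch of XBVT-style run control and reporting (all formats assumed):
# a manifest names the enabled tests, and results land in a flat
# pass/fail text file rather than a database.

MANIFEST = """\
smoke/login.t
smoke/config.t
# regression/slow.t   -- commented out of this run
"""


def parse_manifest(text):
    """Enabled test scripts, skipping comments and blank lines."""
    return [ln.strip() for ln in text.splitlines()
            if ln.strip() and not ln.strip().startswith("#")]


def write_report(results):
    """Render the flat pass/fail text that would be dropped on a web server."""
    return "\n".join(f"{name}: {'PASS' if ok else 'FAIL'}"
                     for name, ok in sorted(results.items())) + "\n"


tests = parse_manifest(MANIFEST)
print(tests)  # ['smoke/login.t', 'smoke/config.t']
print(write_report({t: True for t in tests}), end="")
```

The flat-file design keeps the tool lightweight, at the cost the notes mention: there's no database to query, so you have to go find the file.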
Instead of building “one framework to test them all” they are focusing on modules.
He sees the next step as some visualization and some type of THUD.
Ask yourself:
who is going to write and maintain the framework?
who will build and maintain the tests?
how are the tests going to be used?
how long will the tests live?
I wish my company would take this kind of initiative, but F5 looks a lot smaller than mine.
Q: What is meant by test target…tests useful across multiple contexts
A: He liked the idea from yesterday of small, medium, and large tests, because it breaks things up in a similar way to the team, project, and product line contexts. Tests can be identical, but different stakeholders want to see results in a different way. For example, management wants a graph.
Q: Did you discover any lessons about retaining people writing tests for the wrong context?
A: We didn't run into the case where people were writing tests for the wrong context.
Q: You mentioned 2 tools
A: One team's strategy addresses all 4 contexts. There is 1 tool for the individual dev context, and devs use that tool. Those tests are accessible to testers, but end-to-end tests work through a different tool. The tools have a one-way direction: the testers' tool can pick up developer tests, but it won't go the other way.
Q: ITE stores results in a database; is this shared?
A: One database for everyone. Right now it is difficult to look at results across multiple runs. They want the ability to aggregate runs to generate a report for a build on all platforms.
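The aggregation they want could look something like the sketch below: collapsing per-platform run rows into one per-build summary. The row schema is invented; ITE's actual database layout wasn't described.

```python
# Sketch: aggregating per-platform run results into one per-build report
# (row schema invented for illustration; ITE's real layout wasn't shown).
from collections import defaultdict

runs = [
    {"build": "10.1.0-123", "platform": "linux-x86", "passed": 40, "failed": 2},
    {"build": "10.1.0-123", "platform": "linux-ppc", "passed": 38, "failed": 4},
]


def per_build(runs):
    """Sum pass/fail counts across all platforms for each build."""
    totals = defaultdict(lambda: {"passed": 0, "failed": 0})
    for r in runs:
        totals[r["build"]]["passed"] += r["passed"]
        totals[r["build"]]["failed"] += r["failed"]
    return dict(totals)


print(per_build(runs))  # {'10.1.0-123': {'passed': 78, 'failed': 6}}
```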
Q: Was the choice of a chroot environment because of virtualization?
A: The decision was made 2 years ago, when VMware wasn't there. He wasn't with the company when the decision was made. They did this because of the heavy packet switching they work with.
Q (follow-up): If that weren't the case, would you be looking at virtualization?
A: Yes, and they are looking at it. They have test harnesses consisting of multiple boxes and are looking at how to virtualize that as one chunk. They are trying to come up with a schema for a test harness in the cloud.
They are using Django for the reporting UI and for data-driven tests, and a code review tool called Review Board. I also have it on good authority that FogBugz, when used with WebSVN and CruiseControl, works really well and has similar functionality.