Dear Desktop Engineering Reader:
So, I switch over to a new Mac workstation. A client calls. He has an important document that needs fixing right away. I send back my edits, and my application -- the Mac version of the exact, fully compatible application my client uses on his PC -- produces a file he can't read. I open it to see what's going on. My application not only gags on the file it made, it gets so stuck that even after I force a quit, reboot, and fire up a different doc, the application offers me the gagged doc and the beachball of no return. Hours wasted.
Nothing I wanted to do and nothing I was working with exceeded what I'm told the application can handle -- or what I've done with it before. But somewhere within the compatibility test planning, development, and execution stages, something went phooey. Whether the problem I uncovered was dismissed during the defect management stage, never seen, never tested for, not retested after another problem was fixed, or put aside for another day, I can only speculate.
On its face, my hassles exposed a software quality testing failure. But, frankly, that's speculative too. See, QA is only as good as the process you use to compile and manage a product's requirements and specifications. According to PTC, poorly written or poorly communicated requirements are responsible for 50% to 70% of software project failures, and 56% of all errors are introduced in the requirements phase. Ergo, if QA gets poor-quality requirements, you can expect unpleasant and expensive surprises as well as ticked-off customers someday. It's only a matter of time.
That brings us to the subject of today’s Check It Out white paper: Formalized requirements management. The paper -- “Requirements-Based Testing: Encourage Collaboration Through Traceability” -- is from PTC. It focuses on the company’s Integrity platform for software application lifecycle management. It’s an interesting read.
Six pages long, this PDF explains how you can enable your QA team to create a requirements-based testing process that includes, among other attributes, identification of vital test areas, methodologies to validate requirements, traceability throughout the process, defect resolution, and metrics. The paper contrasts manual and point-tool test management processes with a unified application lifecycle management process. The contrast is vivid.
I’m running out of room here, so I’ll just mention that you’ll also find a link to a short video clip on managing the ideas that become your requirements. Look around the page it’s on and you’ll find a longer webinar that goes deeper into application lifecycle management.
You can have the greatest QA people going, but if your software application lifecycle management processes are less than they should be, they’re flying by the seat of their pants. And you’re going to pay for that big time. Now, I’m off to write a snarky letter to an outfit that obviously missed something during QA.
Thanks, Pal. -- Lockwood
Anthony J. Lockwood
Editor at Large, Desktop Engineering