Several years ago I was asked to write a chapter for a book called Methods for Testing and Evaluating Survey Questionnaires, so a couple of colleagues and I wrote something on testing online questionnaires. That led me to scratch the surface of the contemporary software testing literature, where I learned that the industry had more or less run up the white flag on zero defects: software had become so complicated, and the competitive pressure to get releases out quickly so intense, that most people had quietly given up on the idea of a first release being bug-free. This struck me as analogous to what has happened in MR over the last decade: research designs have become more complex, the questionnaires that support them have followed suit, yet the timelines clients insist on keep getting shorter. So questionnaires are more convoluted, with more lines of code and more numbers to check, but less time to check them.
Of course, we all insist to our clients that we check it all, and that remains the goal. But even if we had all the time we needed, there are two lessons I took from the software QA literature, and they have to do with the priorities that should guide our approach to QA:
- Focus first on the most important things: the sections of the questionnaire, the lines of code, and the analytic outputs that will create the biggest problems if they are wrong.
- Focus next on the places where an error is most likely: where the questionnaire or code is most complex and the numbers are hardest to compute.
Then check everything else. Clients rightfully expect that every deliverable we give them be 100 percent correct. Getting there is not easy.
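The two priorities above amount to classic risk-based test ordering: rank each QA target by the impact of an error times its likelihood. A minimal sketch, with hypothetical section names and made-up 1–5 ratings (none of which come from the original text):

```python
# Risk-based QA prioritization sketch: rank targets by impact x likelihood.
# Section names and ratings below are illustrative assumptions only.

def qa_priority(items):
    """Sort QA targets by risk score (impact * likelihood), highest first."""
    return sorted(items, key=lambda it: it["impact"] * it["likelihood"], reverse=True)

sections = [
    {"name": "screener logic",      "impact": 5, "likelihood": 2},
    {"name": "conjoint design",     "impact": 5, "likelihood": 5},
    {"name": "demographic recodes", "impact": 2, "likelihood": 1},
]

for item in qa_priority(sections):
    print(item["name"], item["impact"] * item["likelihood"])
```

Everything still gets checked; the scoring only decides what gets checked first, when timelines are tight.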