We're just winding down now from our work on the Consumers Guide to Grants Management Systems
- a project that we worked on for more than six months. It was the biggest, most research-intensive report we've ever done as Idealware, and I learned a lot from the process. Here are some of my takeaways as I reflect on the project overall.

Surveys aren't a great fit for gathering information about software
Surveys often seem like a great idea - a straightforward way to gather a lot of information about what people think of the software packages they use. But in fact it's very difficult to get results that tell you much. Distribution is challenging. Getting a representative sample of anyone is infeasible on a limited budget (it would require defining a specific group of people and then trying to get a 50-60% response rate from them - classically, through monetary incentives, follow-up calls, and the like), so you have to default to more informal distribution methods.
But it's difficult to ensure that these informal methods gather information from a useful cross-section of people, and they're quite prone to being distorted by a few individuals. You don't need evil intent to distort an informal survey - if one enthusiastic user forwards the survey request on to the user group for their favorite package, your usage numbers are suddenly way off.
And it's hard to interpret the data you've gathered. You need to be very limited and specific in your questions in order to get useful results. For my money, individual interviews - or even focus groups - are a better bang for the buck.
If I had it to do again, I wouldn't have spent nearly as much time on our grants management survey as we did. However, it was very useful for one particular purpose: it helped us finalize the vendor list. By listing tools and including an "other" category, we heard about all the tools in use by the people who answered the survey, and it prompted vendors who weren't listed to contact us.

It's hard to define an evaluation framework before you've reviewed tools
I like defined processes, and my tidy brain really wants to interview a bunch of folks about the features they find useful in a software package, translate that into a framework for evaluating tools, and then evaluate tools using that framework. Only one problem: that doesn't work. You can (and we did) translate the interview data into a set of questions to ask, but it's really impossible to determine the key aspects that will be important in comparing software until you've done a number of reviews.
For instance, Document Management was a major theme in interviews, and we asked vendors a number of questions about their features in this area. But for 90% of those features, not a single product had them. While we certainly need to highlight this gap, it's pointless to have a whole evaluation category just to show that every product scores poorly.
So in practice, you need to define the questions to ask vendors, do at least four or five reviews, and then come back around to define the evaluation framework. This seems weird and inefficient - for instance, you need to first write up your demo notes so you'll remember what you saw (products blend together mighty fast when you demo 5-10 of them in a week or two) and then come back later and translate those notes into your review format. It's way faster to go straight from notes to review - but if you don't have the evaluation framework completed, you'll need to go back through those reviews later to make them consistent in language and in what's evaluated. Which is what we did on this project. It's inefficient, and so tedious and difficult that I worry it can't be done accurately... without the superhuman diligence of Katie Guernsey, research assistant extraordinaire (thanks, Katie!).

Reporting features are hard to evaluate
I'm not entirely happy with our coverage of the reporting features - our comparison is valid, but the vast majority of products were rated "Advanced" by our scoring system. Are most grants management systems really advanced in reporting? I don't know. Most had quite flexible ad-hoc reporting, where you could do a lot of slicing and dicing on almost any data element. The real differences, it felt to me, were around ease of use - are the canned reports useful? Can you actually use the ad-hoc tools? This stuff was very difficult to evaluate without specific scenarios - useful for what? To whom?

Quick summary reviews work well in tandem with detailed reviews
For this report, we spent two to three hours demoing and reviewing each of nine tools, but we also did quick half-hour demos of another eight. These quick looks were more useful than I expected. While we weren't able to do the same kind of detailed comparison for these packages, we got a good sense of each one's strengths and weaknesses - enough to put them in context in the report. I think that doing a number of detailed reviews *first* really helped, though - it gave us a lot of knowledge about what to look for and ask about in the summary reviews.

...But they're really hard to proof
We asked each vendor to review their reviews and summaries for errors of fact, and interestingly, it took as long or longer to deal with the comments on the products that had only a short summary blurb as on the products that had five- or six-page detailed reviews. The detailed reviews deal mostly with facts, so often there was no arguing with them. The summary reviews, on the other hand, generalize - e.g., product X is strong in this area but weak in that one - and vendors found plenty to argue with there. We obviously don't need a vendor to agree that they're weak in a certain area, but we did want to make sure we hadn't gotten any important facts wrong, which was much harder for these summary blurbs.
All in all, I'm really happy with how the report came out. Those of you who have taken a look, what do you think? Are there things you think came out particularly well, or not so well, in the report?