Problems in evaluating mid-size enterprise software

Enterprise software products are difficult to evaluate. This is not earth-shattering news to anyone who's ever been in the position of having to compare one content management system (CMS) package to another, or been responsible for implementing a new customer relationship management (CRM) system for their organization. Enterprise packages are difficult to evaluate because of the size of the market (lots of packages), the complexity and sheer number of features in the software (lots of features), and the data-intensive nature of most enterprise tools (you need lots of test data).

To do a proper evaluation, you must be willing to commit generous amounts of time and effort: doing your homework on the market, getting to know the software and its strengths and weaknesses, and figuring out how your business processes will really interact with the system.

Done in a totally rational manner, software evaluation is a seemingly simple purchasing decision, and these decisions usually follow a similar process. Even if you're a satisficer rather than a maximizer, the stages outlined by business scholar Philip Kotler are generally applicable. Purchasers of software go from many choices (say, 1,000 possible options) down to a final decision (the one you buy and install), just as they would when purchasing a car, a house, or any other consumer good. Kotler traces the narrowing of choice through the following sets (a rough sketch in code follows the list):

  1. Total Set – everything that exists in the market (e.g., 1,000)
  2. Awareness Set – everything that you're aware exists (e.g., 100)
  3. Consideration Set – what you're willing to look at (e.g., 10)
  4. Choice Set – your final options (e.g., 3)
  5. Decision – the package you go with (e.g., 1)
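
As a rough illustration, here is the same funnel as a few lines of Python. The package names and cut-offs are invented; the point is simply how quickly the field narrows from everything on the market down to the one you install.

```python
# A minimal sketch of Kotler's narrowing sets applied to a software search.
# The catalogue and the cut-offs are made up purely for illustration.

total_set = [f"Package {n}" for n in range(1, 1001)]    # ~1,000 products exist

awareness_set = total_set[:100]         # the ~100 you actually hear about
consideration_set = awareness_set[:10]  # the ~10 that pass a first screening
choice_set = consideration_set[:3]      # the 2 or 3 you trial seriously
decision = choice_set[0]                # the one you buy and install

for label, packages in [("Total", total_set), ("Awareness", awareness_set),
                        ("Consideration", consideration_set), ("Choice", choice_set)]:
    print(f"{label} Set: {len(packages)} packages")
print(f"Decision: {decision}")
```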

It all seems so straightforward. But it's not. Here are some of the problems I've found to be inherent in selecting an enterprise software package:

The "you don't know what you don't know" problem: Your Awareness Set (what you know is out there) is a fraction of the Total Set (all packages that exist), and you have the nagging feeling that the piece of software that will make your life perfect is somewhere in that Total Set; you just haven't found it yet. This is where speciality websites like CMSWatch for content management systems, or expensive analysts like Gartner and Forrester, come in handy: they keep track of what's out there so you don't have to. Analyst firms provide you with a default Awareness or Consideration Set of packages to choose from, with Gartner narrowing your choices down even further thanks to their Magic Quadrant reports. While that's a good start, you'll still be faced with the question, "But does it meet our needs?" And to answer that question, you'll need to ask an even more difficult one: "What are our needs anyway?"

The "feature shoot-out" problem: Fully evaluating the 100 enterprise software products that made it into your Awareness Set is a non-starter, but you have to prune down your big list somehow. You decide to do it based on features. You create your big list of products and catalogue the features of each package in a comparative Excel spreadsheet of epic proportions. Side by side, you attempt to see who does what and what's missing. Decision making based on this approach can be useful, but it is prone to the fallacy that bigger is better, favouring a more "comprehensive" package simply because it's got a ton of features. Psychologist Barry Schwartz, in his book "The Paradox of Choice," points out that people often make comparative decisions based on "more vs. less," but when it comes to post-purchase satisfaction, the experience of the product is what really counts; the usability of the features matters, not the total number. And that often means that "less" wins. How many features does your VCR have? How many do you use? How important were those features when you were shopping for it? How about after owning it for a year?
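
To make the "more vs. less" trap concrete, here is a toy version of that spreadsheet in Python. The packages, features, and must-have list are all invented; the point is that the package with the biggest feature count is not necessarily the one that covers the features you'll actually use.

```python
# A toy feature shoot-out with invented packages and features: raw feature
# counts can point one way while coverage of your must-haves points the other.

packages = {
    "BigSuite CMS": {"workflow", "versioning", "multisite", "portlets", "forums",
                     "wiki", "polls", "digital-assets", "crm-sync", "reports"},
    "SmallCo CMS":  {"workflow", "versioning", "search", "templating"},
}

must_haves = {"workflow", "versioning", "search", "templating"}

for name, features in packages.items():
    coverage = len(features & must_haves) / len(must_haves)
    print(f"{name}: {len(features)} features, {coverage:.0%} of must-haves covered")

# BigSuite CMS "wins" on sheer feature count (10 vs. 4), yet SmallCo CMS
# covers 100% of the must-haves while BigSuite covers only 50%.
```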

The "realistic simulation" problem: Once you get down to your final products to evaluate, you need to work with the software and test it just as you would in your real job. If it's a complex piece of software like a CRM or a web content management system, that means loading a lot of meaningful data into the system, going through your actual business processes, and using the system the way you would in real life. You manage 30,000 documents with your current CMS? Well, how does the new one perform in the same context? This can be frustrating: as many people who install new software recognize, it can take days, weeks, or even months to become truly productive with a multi-user enterprise package. And during this time, you keep wondering, "Is it me? Is it the software? Why isn't this working right? Is it because I'm not proficient with this yet? Or does this software not meet my needs...? Is there another feature buried in here that I don't know about? Am I missing something?"
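
One practical way to take some of the pain out of this step is to script the data loading rather than evaluate with a handful of hand-entered samples. The sketch below is hypothetical: `cms_client` and its `import_document` method stand in for whatever bulk-import mechanism a candidate package actually offers, and the document shape would need to mirror your real content.

```python
# A rough sketch of seeding a trial CMS with a realistic volume of content
# before judging it. The cms_client object and its import_document method are
# hypothetical stand-ins for a candidate package's real bulk-import mechanism.

import random
import time

def make_document(i):
    """Build one synthetic document, roughly shaped like your real content."""
    return {
        "title": f"Policy document {i}",
        "body": "lorem ipsum " * random.randint(50, 500),             # varied sizes
        "tags": random.sample(["hr", "finance", "legal", "ops", "it"], 2),
    }

def seed_trial_system(cms_client, count=30_000):
    """Load `count` documents and report how long the import takes."""
    start = time.monotonic()
    for i in range(count):
        cms_client.import_document(make_document(i))                  # hypothetical call
    elapsed = time.monotonic() - start
    print(f"Imported {count} documents in {elapsed:.1f} seconds")
```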

The "not-so-objective evaluation criteria" problem: On paper, evaluation criteria can be given different weights (cost gets 30%, features get 50%, etc.) and look quite objective and quantitative in nature. Spreadsheets comparing features can be summed and totalled, resulting in an official-looking number. But when it comes to actually using the system, another part of our brain gets involved: "I just don't like the way it... feels." During a recent evaluation, I kept coming back to that little voice in my head that was saying, "This package just isn't right. It feels wrong." It had nothing to do with the number of features per se or the data coming out of it, but rather the interaction design, the aesthetics of the system, and the way it behaved. These are powerful feelings in users and evaluators, feelings that are difficult to ignore or to leave out of the scoring criteria.
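
For what it's worth, that official-looking number is easy to reproduce. The sketch below uses the weights mentioned above (cost 30%, features 50%) plus an assumed 20% for support, along with made-up scores for two hypothetical packages; notice that "it just feels wrong" never gets a row in the calculation.

```python
# The weighted-spreadsheet total as code. The cost and features weights come
# from the text; the 20% support weight and all scores are invented examples.

weights = {"cost": 0.30, "features": 0.50, "support": 0.20}

scores = {                                  # criterion scores out of 10
    "Package A": {"cost": 7, "features": 9, "support": 6},
    "Package B": {"cost": 8, "features": 6, "support": 8},
}

for package, criteria in scores.items():
    total = sum(weights[c] * criteria[c] for c in weights)
    print(f"{package}: weighted score {total:.1f} / 10")

# Package A totals 7.8 and Package B totals 7.0, but neither number captures
# how the system feels to use.
```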

Finally, there are the other hard-to-compare criteria beyond features and function: service and support, vendor and platform longevity, technology architecture, and of course, cost. But few of these criteria get people as excited or as frustrated as actually using the software itself.

Evaluating large enterprise software packages is a tough job. If you can design your evaluation process to mitigate these risks, you might not find the perfect piece of software, but you might save yourself some frustration along the way.