can they really HELP US?
Looking for the right HR software often leads potential buyers to look up reviews and ask for recommendations. But we need to be realistic about their true value, says Denis W Barnard*
When talking about selecting HR & payroll software systems (HRIS), I often use the analogy of choosing a car.
A car that goes fast, packs in a family of five and is fuel-efficient can come in various guises, so we need to be specific about what we want parked on the drive. And that isn’t all; driving 500 miles in one car can leave a very different impression from that of another.
Beyond this point the analogy begins to wobble, because you can road test a car but it’s not so easy with an HRIS. No matter how many demonstrations, reference sites and dummy data runs you have, you really have a very limited feel for what is going to happen until you are driving live “on the road” – by which time it is really too late if you have chosen the wrong thing.
There are ratings for pretty much everything now, and doubts have been raised about their honesty and impartiality. With HR software, though, that isn’t the main problem. I’ve always felt that ratings are too subjective, coloured by the client’s personal perception of why the software was good or bad.
Think about these points:
1. Is the reviewer a current user? If not, then how long ago were they a user?
2. How long before a review becomes outdated by product changes, and which version of the software is being reviewed?
Time may have made the review less than useful if issues (good or bad) have since changed. As with point 1, the review has to be current, and producing a new review after each update or upgrade would be extremely labour-intensive, even if users were prepared to invest the time in doing it.
3. What were the reviewer’s requirements for this software?
Remembering the car analogy, users have their own particular needs, and a review that is full of praise because the software worked for them could be irrelevant to a prospect with different criteria.
It follows that if the wrong HRIS was selected, the purchaser is going to blame the vendor, even though they should be shouldering a significant part of the blame themselves. A disgruntled client is going to give a poor review.
4. Has the software actually been used as intended?
This seems obvious, but I have seen plenty of cases where the client changed the purpose of the software at some point after it was installed.
There are cases where proper user training was skimped on to save money, or where the staff originally trained as users had moved on and their replacements were not given comprehensive training on the system. The resulting dissatisfaction will affect the tone of the review or recommendation, even though the vendor may be largely blameless.
Customer service is one area where providers can differ, and it’s inevitable that some will be better than others at looking after their clients. Is there a dedicated client manager (and not one looking after 250 other customers or with a huge region to manage!), and is their brief to sell more products, or to work with the client and encourage them to get more out of the system?
Bear in mind that service can fluctuate, and a change of provider policy or personnel can lead to a deterioration of the relationship.
So what about impartial reviews by software experts? These certainly have value, but rather like a car road test, they can only provide an indication of the software’s capabilities in terms of functionality and features.
Like-for-like software costs are fairly similar, but the real costs relate to the project itself, and these will fluctuate according to how ready the client is and the resources they put behind the project. One key question is the typical implementation time required; in this game, time certainly does mean money.
These, then, are the issues behind HRIS reviews, and they need to be taken into account when setting off on a software selection exercise.