
How to define the ‘best’ apps for accountants

by Dayle Rodriguez

With a surfeit of software solutions out there for accountants, picking the right app can be a time-consuming and expensive process. The good news is that there are ways to help firms define what tech tool works best for them. Dayle Rodriguez from Kreston Reeves shares a matrix they have developed for choosing new software.

3rd Jul 2024

The word “best” can be subjective and objective. This might sound contradictory but hear me out. We can all agree that a sprinter is measured on speed. They are the best if they are consistently first over the line – that is, the fastest. In this example “best” is objective.

However, when it comes to software, best is often subjective. What is right for one business may not be right for another, and myriad variables need to be taken into account to define what is best.

In my day-to-day work at Kreston Reeves, I help to curate and validate what is best for our clients at a departmental level (creating a preferred providers list). I also work on bespoke projects with clients with complex needs and help define what is best for them.

I know from experience the overwhelm that can occur when finding the best software solution(s) for your business or clients. You have to consider other users or departments, data and privacy laws, budget, implementation time, training and connectivity to other systems.

And on top of all that you have various suppliers claiming their product is “the best”. These are but a few of the reasons why defining “best” can be difficult.

Need x execution analysis

The good news is that we often define “best” in our day-to-day lives and in many cases, we do it without thinking. For example, how would you define the best car? Or better put, how would you define the best car for you? The engine power, fuel economy, fuel type, number of doors, boot space, safety features, price and so on.

Without writing anything down or measuring anything, based on your personal preferences and circumstances, you have probably thought of a few factors that might rule out several options and bump a couple up the list.

When deciding on a large or long-term purchase in your personal life, what you are actually doing is a need x execution analysis. The exciting thing is, you can codify this process and apply it to software or app selection.

Need x execution (or N x E) is a simple method that can be used to help define “best” by creating quantifiable and measurable metrics.

  • Need reflects how strongly you require a feature and is measured on a scale of 1 to 5, with 1 being a “nice to have” and 5 being a “must have”.
  • Execution reflects how well the software delivers that feature and is measured on a scale of 0 to 10, with 0 meaning the feature does not exist in the product and 10 meaning it is delivered in the best possible way.

Using N x E, you can score each software feature on a scale of 0 to 50. You then combine the scores of multiple features to quantify which solution is best: the one with the highest overall score.
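To make the arithmetic concrete, here is a minimal sketch in Python; the feature names and scores are invented purely for illustration:

```python
# N x E scoring: need (1 = nice to have, 5 = must have) multiplied by
# execution (0 = feature absent, 10 = delivered in the best possible way).
# Each feature scores 0-50; a solution's total is the sum across features.

features = {
    # feature: (need, execution) -- invented scores for illustration
    "processing speed": (5, 8),
    "accuracy": (5, 9),
    "handwritten bill detection": (1, 3),
}

total = sum(need * execution for need, execution in features.values())
print(total)  # 5*8 + 5*9 + 1*3 = 88
```

Because need acts as a weight, a mediocre implementation of a must-have feature can still outscore a perfect implementation of a nice to have.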

As an example, when I led the Kreston Reeves project to select our preferred optical character recognition (OCR) provider, we had to determine essential functionality (5) versus nice to haves (1). We reviewed Dext, HubDoc, AutoEntry and Basecone.

For our OCR project, speed and accuracy were both considered a level 5, essential to the functionality of the products we reviewed, whereas the ability to detect handwritten bills and invoices was considered a nice to have (1).

Below is an example chart with some of the criteria we used as part of our OCR project, with the suppliers anonymised and dummy data used for the scores. 

[Image: Kreston Reeves app matrix. Source: Kreston Reeves]
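As a stand-in for the chart, the sketch below shows what such a matrix might look like in Python. The suppliers are anonymised and all scores are dummy data, as in the original; the criteria echo those mentioned in the article and discussion, and the need weightings are illustrative:

```python
# Dummy N x E matrix for an OCR comparison. Needs are fixed per criterion;
# execution scores vary per (anonymised) supplier. All figures are invented.

needs = {
    "processing speed": 5,       # must have
    "accuracy": 5,               # must have
    "ease of editing": 3,
    "handwritten detection": 1,  # nice to have
}

execution = {
    "Supplier A": {"processing speed": 9, "accuracy": 7, "ease of editing": 6, "handwritten detection": 2},
    "Supplier B": {"processing speed": 7, "accuracy": 9, "ease of editing": 8, "handwritten detection": 0},
    "Supplier C": {"processing speed": 6, "accuracy": 8, "ease of editing": 5, "handwritten detection": 5},
}

# Total per supplier: sum of need x execution over every criterion.
totals = {
    supplier: sum(needs[c] * scores[c] for c in needs)
    for supplier, scores in execution.items()
}

for supplier, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{supplier}: {total}")
```

Run as-is, this ranks Supplier B first (104), Supplier A second (100) and Supplier C third (90); in a real evaluation the execution scores would come from the structured testing described in the five steps below.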

Software decisions validated

The beauty of this method is it is robust and scalable. Whether you have one or two staff or a wider group of stakeholders, this method allows you to define what is best for you by making sure you prioritise what is needed – and how well what is needed can be delivered.

A byproduct of quantifying “best” is that it can help with internal adoption and change management. Individuals who’ll be using it every day can see that due diligence has been conducted and the decision to use new software has been validated, rather than made purely on price or a whim.

With our evaluation matrix, I’m confident that everyone at Kreston Reeves can say “We use software X for reasons 1, 2, 3, but there are competitors that do 4, 5, 6 better.”  

The N x E process can be broken down into five steps.

  1. Establish what you need. This can be as simple as a basic summary of requirements, problems or desired outcomes, for example cutting down on paper in the office or speeding up tax return turnaround time. 
  2. Consider client, staff and your own needs (at various levels of seniority).
  3. It’s time to matrix! Create your evaluation matrix using the 1-to-5 need and 0-to-10 execution scales (N x E).
  4. Contact vendors and share the summary of requirements you created in step 1 with them.
  5. Test! Conduct tests linked to the criteria outlined in the evaluation matrix and fill in the boxes (one way of turning raw measurements into scores is sketched below).
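The article doesn’t prescribe how raw test results become 0-to-10 execution scores. One simple approach (purely an assumption on my part, not part of the Kreston Reeves method) is to scale each measurement linearly between the worst and best results observed across the suppliers under test:

```python
# One possible way (not prescribed in the article) to turn raw test
# measurements into 0-10 execution scores: scale linearly between the
# worst and best results observed across the suppliers under test.

def execution_score(value: float, worst: float, best: float) -> float:
    """Map a raw measurement onto 0-10 (works whether lower or higher is better)."""
    if worst == best:
        return 10.0  # all suppliers performed identically
    return round(10 * (value - worst) / (best - worst), 1)

# Average OCR processing times in seconds (dummy data); lower is better,
# so 'best' is the smallest time and 'worst' the largest.
times = {"Supplier A": 42.0, "Supplier B": 61.5, "Supplier C": 95.0}

worst, best = max(times.values()), min(times.values())
for supplier, t in times.items():
    print(supplier, execution_score(t, worst, best))
# Supplier A 10.0, Supplier B 6.3, Supplier C 0.0
```

Linear scaling rewards the best observed product with a full 10 even if every product tested is mediocre; scoring against an absolute benchmark avoids that, so the choice is worth agreeing before testing starts.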

Replies (11)


By FactChecker
03rd Jul 2024 20:06

"The beauty of this method is it is robust and scalable" ...
... but at its heart the results are entirely driven by subjective scores.

Whilst that's in line with the valid comment made at the start ("when it comes to software, best is often subjective") ... doesn't it rather undermine the proposition that 'the method' gives you a set of answers which "define what is best for you .. and how well what is needed can be delivered"?

Thanks (4)
Replying to FactChecker:
By Dayle Rodriguez
04th Jul 2024 10:21

Thank you for taking the time to respond. The subjectivity can be limited by ensuring the testing is uniform and objective.

Using OCR processing speed as an example; we created several dummy supplier invoices then had them processed by each OCR provider. We timed, with a stopwatch, how long it took from sending the data to the OCR system to when the OCR system marked the invoice as complete.

The test was conducted by more than one person on different accounts. The recorded times were aggregated and an average processing speed was determined.

Our method of testing, underpinned by the matrix, provided quantifiable and objective evidence.

Thanks (1)
Replying to Dayle Rodriguez:
By FactChecker
04th Jul 2024 14:14

You appear to have misunderstood my point ... which was that the scores (not the methodology) are subjective.

For instance, in your OCR example one of the criteria is "Ease of editing if required".
Any measure that starts with 'Ease' (and there are two others in that list) is by definition subject to the personality/experience/etc of the person giving the score.
So "more than one person on different accounts" is good (and would have been worth mentioning upfront) - although it introduces 'interesting' concepts, such as do you get different 'profiles' for different potential users of the system (as in size of business / depth of experience of users / personality types / etc)?

Basically I'm all for a more rigorous approach and regard yours as a good first step ... but just a little worried that people may think it is entirely mechanistic and therefore that any 'results' are unimpeachable and valid for all.

Thanks (2)
Replying to FactChecker:
By Dayle Rodriguez
05th Jul 2024 09:05

You are 100% correct, FactChecker: N x E is a good first step, and you are right that there are other layers to the process that make it more robust, but I only had 600 words. I haven't shared other aspects of the testing process.

When you say "just a little worried that people may think it is entirely mechanistic and therefore that any 'results' are unimpeachable and valid for all", I agree. In fact, it is one of the core messages of the article: it can't be one size fits all, and you shouldn't just follow other people. "Best" needs to be defined by you.

Some things are empirical and other things are subjective. When testing you are balancing the empirical with the subjective to determine best.

Thank you for the comments; I think we have a similar mindset with regard to software selection and testing. Your comments have made me dig a little deeper and provide more detail. Hopefully others find our exchange helpful.

Thanks (1)
By PhilHobdenTech
04th Jul 2024 09:38

Really like this systematic approach. Too many people approach this in an unstructured way and risk implementing the wrong technology at the wrong time, or end up with buyer's remorse.

Thanks (4)
By ArianBloodwood
04th Jul 2024 10:12

Thanks so much for this - it's really helpful!

Thanks (2)
By Ivor Windybottom
04th Jul 2024 10:22

In addition to the usability factors, etc., we are now consciously including supplier status, namely how the "Iris effect" will impact choice.

I define this effect as the risk that a supplier may be bought by large private equity firms and consolidated/dropped/price hiked.

This can be an important discussion before choosing a supplier, but unfortunately it's not clear (to me!) whether bigger is better or whether smaller suppliers are preferable.

Thanks (3)
Replying to Ivor Windybottom:
By PhilHobdenTech
04th Jul 2024 10:57

Interesting point.

I have also been asked (in the past) about roadmaps and cash runways as key questions when choosing tech. Firms want to see that there is long-term investment and a future in their choice.

Thanks (0)
Replying to Ivor Windybottom:
By Dayle Rodriguez
04th Jul 2024 11:44

Ivor, a really good point. I like how you dub it the "Iris effect" hahaha. It's funny but true.

"Whether bigger is better or whether smaller suppliers are preferable" – I think that is one of the debates most of us are having: the bundling or unbundling of software. It comes and goes in cycles; at the moment we are seeing a lot of bundling/feature consolidation.

Thanks (0)
By stepurhan
05th Jul 2024 09:47

Why is the example chart so horrendously fuzzy? If you're going to post something as part of an article, surely a decent quality image is in order.

Thanks (0)
Replying to stepurhan:
By Tom Herbert
05th Jul 2024 10:48

Apologies stepurhan, that's our fault. Will liaise with Dayle and see if I can procure a better version.

Thanks (0)