
‘Working smarter, not harder’: What reviewers think of using technology in audits

New research by Assistant Professor of Accountancy Scott Emett and KPMG Professor of Accountancy Steve Kaplan shows that even when audit quality is the same, external reviewers still perceive a difference between the data and analytics approach and the traditional approach.

By Jane Larson

Auditing firms are spending billions of dollars on technology to help them work faster and more accurately, but there is a catch: Some firms feel their innovative approaches get unfair scrutiny from external reviewers.

Reviewers, for their part, profess to have open minds. The quality of financial statement audits is paramount, they say, regardless of whether the audits get done in traditional or technological ways.

Assistant Professor of Accountancy Scott Emett and KPMG Professor of Accountancy Steve Kaplan wanted to find out if reviewers were biased against audits done with data and analytics technology, and if so, what interventions could be made to reduce such bias.

Audit firms that had invested in technology and training began hearing that reviewers may be less accepting of the new tools than of the traditional tools, Kaplan says. “There’s a tendency for people to be cautious about accepting new changes in technology,” he says, “and while there were those anecdotes, it was unclear why that might be. What was the underlying, specific reason, and how might we change that?”

Kaplan and Emett spotted a timely problem to solve in the accountancy industry.

We were primarily motivated in solving this tension that exists between audit firms and regulators, especially these external reviewers.


– Assistant Professor of Accountancy Scott Emett

External reviews of audits are generally done by the Public Company Accounting Oversight Board, a nonprofit corporation created by Congress to oversee the audits of public companies, or the American Institute of CPAs, which reviews audits of privately held companies. Reviewers report any deficiencies they find to the auditing firms and recommend remedies. Firms that fail to carry out those remedies may have the reviewers' report made public, which can cause them to lose clients. Even internal reports that find deficiencies can have a chilling effect on the careers of the partners who led those audits. As a result, auditing firms care deeply about how reviewers perceive the quality of their work.

Until Emett and Kaplan’s team’s work, little research had been done on what external reviewers thought about auditors’ use of data and analytics.

Along with colleagues from the University of Missouri-Columbia and the University of Mississippi, the two conducted an experiment that found reviewers did tend to regard audits using time-saving technology as lower in quality than audits done in traditional ways. But why?

Falling prey to a psychological shortcut

The team suspected a basic principle of psychology was at play: the effort heuristic. It says we humans tend to think that when someone spends more effort on a task, the result will be of higher quality than the result of a task that took less effort. The effort heuristic is a natural shortcut in decision-making, but it also raises questions like these:

Who is the better artist — the one who spends months creating a perfect landscape, or the one who brings more exceptional natural talent to the work? Who is the better baseball player — the one who gets to the ballpark early to put in hours of batting practice, or the one who has shown star-quality skills since Little League days?

The results might be the same, with each painting selling for the same amount, and each ballplayer achieving the same batting average. But because the time the artist and the ballplayer spend at their craft is transparent and observable, people tend to rely on the effort heuristic and think that the two who spend more time and effort produce work of higher quality than those who put in less time and effort.

“In most situations, it does take a lot of effort to produce something that’s high quality,” Emett says. “But using that short cut, that effort heuristic, can lead to bad judgment in some circumstances. In this study, data and analytics can cut down on a lot of the manual effort that goes into an audit. If you’re an external reviewer and you’re focusing on what the engagement team has performed, data and analytics are going to decrease the amount of effort they put in. If you’re using the effort heuristic, you might think, ‘Oh, they didn’t put in very much effort, so it must be a low-quality audit.’”

In auditing, the traditional approach takes time and effort. Auditors first draw a sample from all of a company's transactions and then identify exceptions within the sample, meaning instances where supporting documents don't match up. They then manually examine those exceptions in detail to identify those with significant misstatements, and they extrapolate those misstatements to the entire population of transactions.

The data and analytics approach rearranges the steps in the process, with technology replacing much of the manual effort. Auditors use technology to look through all of a company's transactions and flag every exception, then draw a sample from among the exceptions. They examine those sampled exceptions in detail to identify those with significant misstatements and extrapolate them to the entire population of transactions.

One significant difference between the approaches is the number of transactions each reviews manually. The traditional method may draw a sample of 150 transactions and then examine all of them by hand to find exceptions, while the data and analytics approach uses technology to surface every exception in the population and then may draw a sample of 35 exceptions to examine manually.
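
For readers who think in code, here is a minimal sketch of the two workflows in Python. Only the 150- and 35-item sample sizes come from the example above; the population size, error rate, and matching rule are invented for illustration.

```python
import random

random.seed(1)

def make_txn(i):
    """Hypothetical transaction: booked amount vs. amount per supporting documents."""
    booked = round(random.uniform(100, 5000), 2)
    # Illustrative assumption: ~2% of transactions have a documentation mismatch.
    documented = booked - 510 if random.random() < 0.02 else booked
    return {"id": i, "booked": booked, "documented": documented}

population = [make_txn(i) for i in range(10_000)]

def is_exception(txn):
    # An exception: the books and the supporting documents don't match.
    return txn["booked"] != txn["documented"]

# Traditional approach: draw a sample first, then hunt for exceptions manually.
traditional_sample = random.sample(population, 150)
traditional_exceptions = [t for t in traditional_sample if is_exception(t)]

# Data and analytics approach: technology flags every exception in the full
# population, and auditors then draw their sample from among the exceptions.
all_exceptions = [t for t in population if is_exception(t)]
da_sample = random.sample(all_exceptions, min(35, len(all_exceptions)))

print(f"Traditional: {len(traditional_exceptions)} exceptions found in a 150-item sample")
print(f"Data and analytics: {len(all_exceptions)} exceptions flagged; {len(da_sample)} sampled for manual review")
```

The manual step that follows, examining the sampled exceptions for misstatements, is the same in both workflows; what changes is how much of the population the technology has already inspected before any sampling happens.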

Rating quality, before and after

Emett and Kaplan first designed an experiment to compare how reviewers judged the quality of the two approaches. They had 60 audit partners and senior managers with peer review experience read about an audit of sales revenue at a fictional firm. Some of the reviewers read that the audit team used data and analytics tools, and others read that the team used the traditional approach. Unknown to either group, the researchers had designed the experiment so that both audits had the same results: one-third of the sampled transactions contained misstatements of $510 each, which extrapolated to a $2.77 million overstatement of revenue.
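
The extrapolation itself is simple ratio arithmetic. The sketch below works backward from the figures reported above; the sample and population sizes are hypothetical, since the article gives only the $510 misstatements, the one-third rate, and the $2.77 million projection.

```python
# Projecting sample misstatements to the population with a simple ratio
# estimate. Only the $510 amount, the one-third misstatement rate, and the
# ~$2.77 million result come from the study as described; the sample and
# population sizes below are hypothetical.
sample_size = 150                      # hypothetical
misstated_items = sample_size // 3     # one-third of the sample
misstatement_each = 510.00

sample_misstatement = misstated_items * misstatement_each        # $25,500
avg_misstatement_per_item = sample_misstatement / sample_size    # $170 per item

population_size = 16_300               # hypothetical, chosen to land near $2.77M
projected_overstatement = avg_misstatement_per_item * population_size
print(f"Projected overstatement: ${projected_overstatement:,.0f}")  # ~$2,771,000
```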

Nevertheless, when asked to rate the quality of the auditors’ work, the group that reviewed the data and analytics approach ranked the audit lower in quality than the group that reviewed the traditional method. Additional questions revealed that the data and analytics reviewers thought their audit took less effort and left more exceptions unexamined.

The researchers ran a second experiment with 98 participants, having one group read a speech emphasizing the importance of audit effort and the other read a statement stressing the importance of audit execution. When given the same fictional audit to review, the group primed to think about audit effort judged the quality of the data and analytics approach lower than that of the traditional method. The group primed to think about audit execution, however, judged the two approaches to be of similar quality.

“The intervention, at its core, is trying to get people comfortable with the idea that high quality does not equal just high effort. High-quality audits are possible without a ton of manual effort — the idea of ‘work smarter, not harder,’” Emett says. “The more we can get external reviewers comfortable with the notion that in today’s day and age we can produce high-quality audits without huge amounts of effort from the engagement team … they make different judgments than they would otherwise. They stopped exhibiting this effort heuristic … and they started judging these two audit approaches equal in quality.”

Kaplan says the research suggests that external reviewers should receive training to reinforce the idea that there are other measures of quality besides effort. The study also matters because if reviewers are punishing the use of data and analytics for reasons that have nothing to do with audit quality, it could impede the adoption of technology that improves quality and efficiency.

The two say their research breaks new ground because it shows that even when the audits’ quality was the same, external reviewers still perceived a difference between the data and analytics approach and the traditional approach. It shows the effort heuristic applies in the context of audits and suggests it could apply in many other contexts, Emett says. Besides diagnosing the source of the problem, the two say, their research also provides an intervention that could reduce reviewers’ unintentional use of the effort heuristic in judging quality and could level the playing field for the two approaches.

The bottom line

Emett and Kaplan say the research has these takeaways for key stakeholders in the auditing process:

For audit firms: Your fears about negative perceptions of the data and analytics approach are well-founded. External reviewers are likely to give undue scrutiny to audits that take advantage of technology. To withstand extra scrutiny, firms should deliver quality above and beyond what they have done in the past.

For external reviewers and regulators: You likely are falling prey to the effort heuristic and making judgments about the data and analytics approach for reasons unrelated to quality. The good news is that a low-cost intervention, such as thinking about audit execution rather than effort, can resolve the issue.

For companies hiring auditors: Data and analytics approaches can be as effective and efficient as traditional approaches. The technology can mean high-quality audits at a lower cost, but the downside is that these audits may draw undue scrutiny from external reviewers. Consider discussing with audit firms how much technology they plan to use when doing your audits.
