Audit firms step up AI adoption while seeking safeguards
Audit and accounting firms are adopting artificial intelligence in both strategy and day-to-day work, while calling for stronger safeguards, oversight, and common standards, according to a global IDC study sponsored by Caseware.
The research surveyed 1,005 audit and accounting professionals in senior decision-making roles. It found that 66% said AI is already embedded in their firm's strategy, widely used in select functions, or being tested through pilot projects, suggesting the technology has moved beyond experimentation for many organisations.
As adoption rises, firms want more certainty about how AI is governed. Many audit leaders said they would accept lower AI performance in exchange for stronger security or safety measures. IDC reported that 55% were willing or very willing to make that trade-off.
The findings suggest governance is increasingly viewed as part of deployment, not a separate task. They also reflect the risks of using automated systems in work involving financial statements, controls testing, and audit judgments.
Governance focus
The profession is entering a new phase of AI use. Early adoption is giving way to a more structured period in which firms assess how tools fit within risk management practices and professional standards.
A central theme is standardisation. Two-thirds of respondents (66%) said there is an urgent need for a globally harmonised AI framework for audit and assurance. Such a framework could guide how firms evaluate AI tools, document their use, and demonstrate safeguards to regulators, clients, and internal governance bodies.
The call for harmonisation reflects the cross-border nature of audit and accounting networks and the needs of multinational clients seeking consistent assurance practices across jurisdictions. Respondents also saw value in shared expectations for controls, transparency, and accountability when AI is used in assurance workflows.
Bias concerns
Managing bias emerged as a major concern. Nearly four in five respondents (79%) rated the risk of algorithmic bias in AI systems used for critical functions as moderately, very, or extremely significant.
The survey referenced use cases including risk assessment, fraud detection, and decision support. These tasks involve prioritising accounts, transactions, or entities for additional testing and can influence how auditors evaluate evidence and respond to anomalies. Bias in models or training data can skew those decisions.
The results suggest professionals see bias as a practical risk, not a theoretical one. They signalled that governance should include processes to identify, manage, and document model risk, particularly when AI is applied to tasks that can affect audit quality and the reliability of financial reporting.
Vendor role
Caseware, which sponsored the research, sells AI-enabled audit and assurance software. It said the findings reflect a shift in what firms expect from technology providers as AI becomes more embedded in core processes.
David Marquis, Caseware's Chief Executive Officer, described a shift in focus from adoption to deployment practices.
"With two thirds of firms already embedding AI into their strategies, the profession has crossed a pivotal threshold. The question is no longer whether to adopt but how to deploy AI in a way that the profession can truly depend on. That conviction sits at the heart of everything we build at Caseware. We have earned our place in the profession by delivering purpose-built AI intelligence across every stage of the workflow - innovation that meets the exacting standards the work demands and that empowers professionals to do their best work with confidence."
IDC also pointed to where competitive differentiation could emerge. Mickey North Rizza, Group Vice-President for Enterprise Software at IDC, said firms will gain advantage from governance and standards alignment, not adoption alone.
Her comments reflect a broader pattern in enterprise software: early adoption creates visibility, but sustained benefits depend on operational controls, training, and consistent execution across teams.
"The profession is entering a defining phase of AI maturity. Adoption is accelerating, but the competitive advantage will come from how effectively firms embed governance, harmonize with standards and operationalize responsible AI at scale. Those that act decisively now to align innovation with trust, security and transparency will be best positioned to lead in the next era of audit and assurance."
Survey scope
The largest share of respondents worked in the United States (39.8%). The United Kingdom accounted for 15%, followed by Canada (14.9%), Australia (10.1%), Germany (10%), and the Netherlands (10%).
Participants came from organisations of varying sizes. About a quarter worked at firms with 11 to 50 employees, while 24.8% were from organisations with 500 or more. The remainder represented mid-sized firms with 51 to 100 or 101 to 249 employees.
The sample comprised senior decision-makers, including directors, vice presidents, senior and executive vice presidents, C-suite executives, and partners.
Further insights from the study are due to be published later in March.