Saudi Journal of Economics and Finance (SJEF)
Volume-10 | Issue-03 | 118-127
Review Article
How Reliable are AI-Generated Financial Disclosures Compared to Human-Written Disclosures, and what Audit Procedures are Necessary to Ensure their Accuracy and Integrity?
Ghazala Parveen, Salma Shaheen Shaik
Published : March 30, 2026
DOI : https://doi.org/10.36348/sjef.2026.v10i03.006
Abstract
The rapid adoption of generative artificial intelligence (GenAI), particularly large language models (LLMs), is reshaping how companies prepare and communicate financial information. Finance teams increasingly use GenAI to draft the narrative portions of annual reports, management discussion and analysis (MD&A), sustainability disclosures, and earnings releases, attracted by substantial efficiency gains and more consistent messaging. At the same time, regulators, standard setters, and audit firms warn of new risks to reliability, including hallucinations, bias, loss of explainability, and weak controls over AI workflows. This paper offers a conceptual and normative examination of (1) the reliability of AI-generated financial disclosures relative to human-written disclosures and (2) the audit procedures and governance mechanisms needed to ensure their accuracy and integrity. Drawing on recent empirical and conceptual literature in auditing and accounting information systems, together with practitioner reports and regulatory guidance, we synthesize evidence on how AI affects the quality of financial reporting and audit. Research in banking and external auditing indicates that AI use is positively associated with financial reporting quality and external audit quality, largely through improved information processing. At the same time, survey evidence reveals concerns about ethical dilemmas, professional due care, and professional judgment when AI is applied extensively in the audit process. We propose a conceptual framework for assessing the reliability of AI-generated disclosures that encompasses: (a) the design of AI systems and training data; (b) internal control over financial reporting (ICFR) and AI-specific controls; (c) governance and oversight by management, audit committees, and regulators; and (d) independent verification by external auditors.
Building on existing auditing standards (for example, ISA 315 and ISA 330) and emerging technology guidance, we outline a set of risk-based audit procedures tailored to AI-generated narrative disclosures, including data lineage testing, model governance evaluation, analytical procedures applied to AI-generated text, and expanded documentation requirements. The paper concludes that AI-generated financial disclosures can be at least as reliable as human-written disclosures, and along some dimensions more reliable, provided that entities implement strong AI governance and human-in-the-loop review, and that auditors adapt their methodologies to explicitly address AI-related risks. In the absence of such controls and audit responses, GenAI can materially undermine the reliability, auditability, and credibility of financial reporting.
Scholars Middle East Publishers
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
© Copyright Scholars Middle East Publisher. All Rights Reserved.