AI Training & Governance for a Top 10 Accounting Firm
The Problem
A top 10 accounting firm had rolled out AI copilot tools across the organization but was seeing uneven adoption and significant anxiety about compliance risk. The firm's internal AI governance policy created three distinct lanes for data handling: public data (minimal restrictions), enterprise-protected client data (requiring specific authorization chains), and personal/tax return data (tightly restricted under IRC §7216 and firm policy). Partners and staff understood the policies existed. They didn't understand how to apply them in practice.
The gap between "we have an AI policy" and "our people know how to use AI within that policy" was where the risk lived. Without hands-on training grounded in realistic scenarios, the firm faced two bad outcomes. Teams would either avoid the tools entirely (wasting the investment) or use them carelessly (creating compliance exposure). Both were already happening.
The Solution
We designed and delivered a multi-session training curriculum for accounting professionals, built around live demonstrations using realistic financial data and mapped directly to the firm's own governance framework.
Synthetic datasets, real complexity. We built a complete set of training data for a fictional multi-location retail business: a full trial balance with 36 accounts across three locations, intentionally embedded anomalies (disproportionate marketing spend, potential leasehold impairment indicators, inventory-to-COGS ratio questions), and enough depth to support the full audit engagement lifecycle. A second dataset used publicly available SEC filings to build an interactive financial analysis dashboard. Both datasets were designed to be immediately recognizable to the audience as the kind of work they do every day.
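A dataset like the one described above can be sketched in a few lines. This is a hypothetical illustration, not the firm's actual training data: the account names, location names, dollar ranges, and the 8x anomaly multiplier are all assumptions chosen to show the pattern of deliberately embedding an analytical-review anomaly in otherwise plausible figures.

```python
import random

# Abbreviated chart of accounts; the real training dataset had 36 accounts.
ACCOUNTS = ["Cash", "Accounts Receivable", "Inventory", "COGS",
            "Marketing Expense", "Rent Expense"]
LOCATIONS = ["Downtown", "Mall", "Outlet"]  # illustrative location names

def build_trial_balance(seed=42):
    """Generate synthetic per-location balances with one planted anomaly."""
    rng = random.Random(seed)  # fixed seed so every training session matches
    rows = []
    for loc in LOCATIONS:
        for acct in ACCOUNTS:
            amount = round(rng.uniform(10_000, 250_000), 2)
            # Planted anomaly: one location's marketing spend is wildly
            # disproportionate, giving analytical review something to find.
            if acct == "Marketing Expense" and loc == "Outlet":
                amount = round(amount * 8, 2)
            rows.append({"location": loc, "account": acct, "amount": amount})
    return rows
```

Seeding the generator is the design choice worth noting: it makes the "live" data reproducible, so every cohort sees the same anomalies and the facilitator's walkthrough always lands.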
Prompt sequences that follow the engagement lifecycle. Rather than showing isolated "ask the AI a question" demos, we walked participants through a seven-prompt sequence that mirrors how an actual engagement unfolds: validate the data, produce standard deliverables (financial statements, ratio analysis), perform analytical review procedures, generate adjusting entry recommendations, and draft a client summary memo. Each prompt built on the previous one, so the audience saw the tool maintaining context across a realistic workflow.
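The lifecycle sequence above can be outlined as a single chained conversation. The prompt wording below is illustrative (the source describes the stages, not the exact prompts), and `client.respond` is an assumed chat-style API, not a specific vendor SDK:

```python
# Hypothetical seven-prompt sequence mirroring the engagement lifecycle.
ENGAGEMENT_PROMPTS = [
    "Validate the trial balance: confirm debits equal credits by location.",
    "Produce draft financial statements from the validated balances.",
    "Compute key ratios: gross margin, inventory turnover, current ratio.",
    "Perform analytical review: flag accounts that deviate from expectation.",
    "Investigate each flagged anomaly and explain plausible causes.",
    "Recommend adjusting journal entries with supporting rationale.",
    "Draft a client summary memo covering findings and adjustments.",
]

def run_sequence(client, prompts=ENGAGEMENT_PROMPTS):
    """Send every prompt in one conversation so the model keeps context."""
    history = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = client.respond(history)  # assumed API: takes full history
        history.append({"role": "assistant", "content": reply})
    return history
```

The point of the structure is that each prompt sees the accumulated history, which is what lets step 6 reference anomalies surfaced in step 4 without restating them.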
Compliance mapping in real time. Before every demonstration, we mapped the data being used to the firm's three-lane governance framework. Public data, synthetic data, and enterprise-protected data each carry different requirements. Participants saw exactly which lane they were operating in and what approvals would be needed if they substituted real client data. The governance framework became practical rather than theoretical.
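The three-lane framework lends itself to a simple lookup that can be shown on screen before each demo. The lane names and the specific approval steps below are illustrative assumptions; the firm's actual authorization chains were its own:

```python
from enum import Enum

class Lane(Enum):
    PUBLIC = "public"            # minimal restrictions
    ENTERPRISE = "enterprise"    # client data behind an authorization chain
    PERSONAL_TAX = "personal_tax"  # tax return data under IRC section 7216

# Hypothetical approval requirements per lane.
APPROVALS = {
    Lane.PUBLIC: [],
    Lane.ENTERPRISE: ["engagement partner sign-off", "data governance review"],
    Lane.PERSONAL_TAX: ["written taxpayer consent", "compliance review"],
}

def required_approvals(lane):
    """Return the approvals needed before data in this lane touches an AI tool."""
    return list(APPROVALS[lane])
```

In the training, synthetic data ran in the public lane, and the facilitator would name the lane a real-client substitution would fall into and what `required_approvals` that would trigger.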
Review gates as a feature, not a footnote. The firm's policy requires that all AI output be reviewed by an engagement team member for accuracy and bias before inclusion in any client deliverable. We built the review step into the demonstration flow rather than mentioning it as an afterthought. Participants saw what a proper human review of AI-generated work product looks like, including what to check, what to flag, and how to document the verification.
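A review gate of the kind described can be reduced to a checklist plus a documented pass/fail record. The checklist items below are illustrative of an accuracy-and-bias review, not the firm's actual policy language:

```python
# Hypothetical review checklist for AI-generated work product.
REVIEW_CHECKLIST = [
    "figures tie to source data",
    "calculations re-performed on a sample",
    "no unsupported or biased language in narrative sections",
    "client facts and citations verified",
]

def document_review(output_id, reviewer, items_checked):
    """Record a human review; approve only when every checklist item passed."""
    approved = set(items_checked) >= set(REVIEW_CHECKLIST)
    return {
        "output": output_id,
        "reviewer": reviewer,
        "checklist": sorted(items_checked),
        "approved": approved,
    }
```

Returning a structured record, rather than a bare boolean, is the point: the gate produces the verification documentation the policy requires as a side effect of performing the review.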
The Result
Participants left with a concrete understanding of how to apply AI tools within their firm's compliance framework. The training materials, including the synthetic datasets and prompt sequences, were designed for reuse across the firm's practice groups. The approach established a repeatable pattern: realistic data, governance-mapped demonstrations, and embedded review gates that make compliance a natural part of the workflow rather than an obstacle to adoption.