Your Team Is Already Using AI. Two Regulators Just Published the Rulebook.

Somewhere in your finance org right now, an analyst is pasting actuals into ChatGPT. An FP&A lead is pressure-testing a board narrative with Claude. A controller is running Gemini against a rec that won’t tie out. None of it shows up in your control matrix. And until a couple of weeks ago, there was no finance-specific governance framework designed to help you manage any of it.
That changed fast. On February 19, the U.S. Treasury released the Financial Services AI Risk Management Framework, a governance toolkit built for finance teams with 230 control objectives mapped by deployment stage. Five days later, ECB Banking Supervision signaled it would tighten scrutiny on generative AI in banks, zeroing in on third-party concentration risk. Two regulators, one week, one message: document what you’re running, who owns it, and what happens when it fails.
The timing is uncomfortable. Only 14% of CFOs in a recent RGP survey said they’ve seen measurable ROI from AI investments. Meanwhile, 53% of investors expect AI projects to deliver returns within six months, a timeline only 16% of CEOs consider realistic. CFOs are absorbing pressure from both directions, and either instinct, sprinting ahead without documentation or freezing until the framework is perfect, will fail.
The finance leaders posting real AI ROI share one structural habit: they built the control layer before they scaled use cases, then used that control layer to accelerate the next deployment. The Treasury framework makes this concrete. Its stage-based control matrix means a pilot-mode team applies a different control set than one running AI across the full close cycle.
For the analysts, accountants, and FP&A staff already using AI tools daily, a few ground rules matter. Enterprise-tier tools typically carry SOC 2 Type II attestations and contractual commitments not to train on your inputs; verify both before assuming either. Free-tier accounts carry none of those protections. Keeping a simple log of which tool, which workflow, and what data goes in transforms you from “person using unsanctioned tools” to “person who documented the use cases now part of the official rollout.” And the single most important habit: never let AI output go straight into a deliverable without checking it against source data.
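That usage log needs no special tooling; a shared spreadsheet or a few lines of script will do. Here is a minimal sketch in Python. The field names are illustrative assumptions, not a schema from the Treasury framework, so adapt them to your own control matrix.

```python
# Hypothetical AI usage log for a finance team. Fields (tool, workflow,
# data classification, verification flag) are illustrative, not mandated
# by any framework.
import csv
import io
from dataclasses import dataclass, asdict, fields

@dataclass
class AIUsageEntry:
    tool: str                         # e.g. "ChatGPT Enterprise"
    workflow: str                     # e.g. "variance commentary draft"
    data_classification: str          # e.g. "internal actuals, no PII"
    output_verified_against_source: bool

def write_log(entries, stream):
    """Write entries to a CSV stream with a header row."""
    fieldnames = [f.name for f in fields(AIUsageEntry)]
    writer = csv.DictWriter(stream, fieldnames=fieldnames)
    writer.writeheader()
    for entry in entries:
        writer.writerow(asdict(entry))

entries = [
    AIUsageEntry("ChatGPT Enterprise", "variance commentary draft",
                 "internal actuals, no PII", True),
    AIUsageEntry("Claude", "board narrative pressure-test",
                 "summarized figures, no account-level data", True),
]
buf = io.StringIO()
write_log(entries, buf)
print(buf.getvalue())
```

The point is not the code but the habit: every row is evidence you can hand to an auditor, and the verification flag enforces the rule that no output ships unchecked.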
Three things to do before your next quarter close: run the AI inventory before someone else does, map existing SOX controls before layering on new ones, and tie your AI value case to an operating metric that runs through the P&L. The governance floor just got poured. The regulators handed you the vocabulary, the control structure, and the staging logic. Use it as the foundation, and you deploy faster with less risk. Ignore it, and you’ll be explaining the gap when an auditor asks what your analysts have been running for the past eighteen months.