AI governance has everyone’s attention – and it should have yours. IDC expects 50% of organizations to struggle to keep up with fast-moving compliance standards. On the brighter side, it sees 40% finally starting to tackle technical debt, though many continue to lack a clear return on AI investment.
Executives who treat governance as a bolt-on function rather than a business-aligned strategy risk getting left behind – or worse, finding themselves fined into the headlines.
As AI cements its status as a business and leadership imperative, here’s a collection of activities that bear watching – and why they matter.
1. U.S. State-Level Data Laws Are Gaining Steam
Next month, the Tennessee Information Protection Act (TIPA) goes into effect, joining a growing list of state-level data privacy laws modeled after the Virginia Consumer Data Protection Act. These laws emphasize consumer control over personal data and impose new requirements for data transparency, deletion rights, and opt-out mechanisms.
While each state’s framework is slightly different, the message is clear: AI solutions that rely on personal data must be built and deployed with privacy-first architecture in mind. And if your vendors touch this data? You’re responsible for their compliance too.
2. The European Commission is Making GDPR Easier (For Some)
Good news for small and mid-sized businesses operating in Europe: The European Commission is proposing reduced record-keeping obligations under GDPR. This simplification could cut annual administrative costs significantly for organizations with fewer risk-prone data activities.
But don’t get too comfortable. GDPR remains one of the strictest data protection laws in the world. The new proposals are not a rollback of accountability, just a smarter allocation of resources. Larger enterprises and those using AI must still be able to demonstrate lawful data processing, model transparency, and user consent mechanisms.
3. NIST’s Privacy Framework is Now AI-Aware
In April, the National Institute of Standards and Technology (NIST) released an initial public draft of Privacy Framework Version 1.1, aligning more closely with its Cybersecurity Framework 2.0 and explicitly integrating AI risk considerations.
This is a big deal. For the first time, privacy, cybersecurity, and AI governance are being treated as interconnected disciplines, not silos. The framework encourages proactive identification of privacy risks from AI models, and supports the use of impact assessments, bias testing, and explainability protocols. NIST welcomes stakeholder feedback on the Privacy Framework 1.1 IPD by June 13, 2025.
4. SaaS Security is a New AI Governance Frontier
The Cloud Security Alliance’s 2025 State of SaaS Security report highlights a silent risk in many AI adoption plans: third-party SaaS integrations.
Their January 2025 survey revealed:
- 76% of organizations increased their SaaS security budgets this year
- Top concerns include shadow IT, visibility gaps, and over-privileged access
- AI models deployed within SaaS ecosystems often go unaudited by the buyer
This report underscores why AI governance must include vendor oversight. You can’t outsource accountability, especially when your data is flowing through APIs and integrations you don’t fully control.
5. California’s CPPA is (Loudly) Enforcing Privacy
In a recent enforcement action, the California Privacy Protection Agency (CPPA) hit menswear retailer Todd Snyder, Inc. with a $345,000 penalty for data privacy failures.
The lesson? Compliance is not a checkbox. It’s an ongoing, documented, and auditable process—especially when AI systems are making or informing decisions based on customer data.
This action signals that regulators are watching, and that privacy violations tied to AI use will carry financial and reputational costs.
Read the CPPA enforcement story →
6. AI ROI Is Still Murky. Without Governance, It Stays That Way.
Back to that IDC report: many organizations are still struggling with unclear ROI on their AI investments. That’s not just a finance issue—it’s a governance issue.
When oversight is weak, models are misused. When metrics are unclear, results can’t be trusted. When no one owns AI performance, optimization stalls. Governance isn’t just about reducing risk. It’s about increasing return on innovation.
Download IDC’s CIO Agenda for 2025 →
So What, Now What?
Start with visibility. What AI systems do you use? Where is your customer data flowing? Who’s accountable for compliance?
Then build from there:
- Align governance to business goals, not legal panic.
- Embed privacy, ethics, and risk controls in every AI touchpoint.
- Include your vendors in your governance plans.
- Invest in ongoing education, not just enforcement.
Let BlueSky Help You Get There
If you’re feeling the pressure to catch up with AI regulations—or just want a head start before the next wave hits—BlueSky’s AI Governance Accelerator was built for this moment.
It’s a quick-start program that helps leadership teams assess AI risk, design scalable governance structures, and embed oversight into everyday operations.