AI Regulatory Compliance
Self-Assessment Checklist
A practical checklist for CTOs and technology leaders at mid-market companies ($50M–$500M in annual revenue). Check the boxes that apply to your organization, then score yourself at the end.
1. Federal AI Regulations
Active federal requirements that apply to companies using AI in business operations. These are enforced now.
SEC AI Guidance
AI capability disclosure. If you use AI for client-facing services (portfolio analysis, risk scoring, recommendations), you have documented and disclosed these capabilities to clients.
An SEC examination priority for 2025–2026; investment advisers and broker-dealers must disclose.
AI monitoring policies. You have written policies for monitoring AI system outputs, including drift detection and accuracy tracking.
Third-party AI vendor data protection. You have assessed and documented how third-party AI models (ChatGPT, Claude, Copilot, etc.) handle client data, including data retention and training opt-outs.
FTC AI Enforcement ("Operation AI Comply")
No deceptive AI claims. Your marketing materials accurately describe what your AI does. No claims of “AI-powered” without actual AI behind the feature.
Penalty: Section 5 enforcement actions, FTC consent decrees
Fair AI data practices. Your AI systems do not collect or use consumer data in ways that are unfair, deceptive, or cause substantial injury.
Consumer Financial Protection Bureau (CFPB) — ECOA
Non-discriminatory AI credit decisions. If you use AI in lending, underwriting, or credit decisions, you have tested for discriminatory outcomes across protected classes.
Penalty: Active enforcement against algorithmic bias under ECOA
Adverse action explainability. Your AI-driven credit decisions can produce specific, individualized reasons for adverse actions (not just “the model decided”).
Banking Regulators (Fed/OCC/FDIC) — Model Risk Management
AI model inventory. You maintain a documented inventory of all AI/ML models in production, including purpose, data inputs, and model owners.
Model validation. AI models used in BSA/AML/OFAC compliance, fraud detection, or risk assessment undergo independent validation on a defined schedule.
2. State AI & Privacy Regulations
State-level requirements with enforcement deadlines in 2025–2026. Multiple states may apply if you have customers or employees in those states.
Colorado AI Act (SB24-205) — Effective June 30, 2026
High-risk AI identification. You have identified whether any AI systems make “consequential decisions” (employment, lending, insurance, housing, education, healthcare, legal services).
Impact assessments completed. For each high-risk AI system, you have completed a documented impact assessment covering purpose, data used, known risks, and mitigation measures.
Penalty: Up to $20,000 per violation
Consumer notification. Consumers are notified when a consequential decision is made or substantially influenced by AI, and provided with an explanation and appeal process.
California (CCPA/CPRA) — AI Provisions Active
AI data processing disclosure. Your privacy policy discloses the use of AI/automated decision-making in processing consumer data, including profiling.
Opt-out mechanism. California consumers can opt out of automated decision-making that produces legal or similarly significant effects.
Penalty: $7,500 per intentional violation (CPRA)
Data retention policy. You have documented and disclosed how long AI-processed consumer data is retained and for what purposes.
NYC Local Law 144 — Active Now
Bias audit for hiring tools. If you use AI/automated tools in hiring or promotion decisions for NYC-based candidates, you have completed an annual independent bias audit.
Public audit results. A summary of the most recent bias audit results is publicly posted on the employment section of your website before the tool is used, and the audit is no more than one year old.
Penalty: $500–$1,500 per violation per day
Illinois Biometric Information Privacy Act (BIPA)
Biometric consent. If your AI processes biometric data (facial recognition, fingerprints, voiceprints), you obtain informed written consent before collection.
Penalty: Private right of action, $1,000–$5,000 per violation
3. International — EU AI Act Applicability
The EU AI Act applies if you serve EU customers, process EU citizen data, or deploy AI systems within the EU. Even US-based companies may be in scope.
EU AI Act applicability assessment. You have determined whether any of your AI systems fall under the EU AI Act (do you have EU customers, partners, or data subjects?).
Risk classification. If applicable, your AI systems are classified by risk tier (Unacceptable, High-Risk, Limited, Minimal) per EU AI Act Article 6.
Conformity documentation. High-risk AI systems have technical documentation, quality management systems, and human oversight mechanisms as required by EU AI Act Articles 9–15.
Transparency obligations. AI systems that interact with people (chatbots, generated content) are disclosed as AI to users.
4. Industry-Specific AI Implications
How existing regulations extend to AI systems in your industry. These are not new laws — they are existing compliance requirements applied to new AI capabilities.
Sarbanes-Oxley (SOX) — AI in Financial Reporting
AI controls documentation. AI systems that affect financial reporting (revenue forecasting, fraud detection, automated journal entries) are documented in your SOX control framework.
AI change management. Changes to AI models affecting financial data follow your change management and testing procedures with appropriate sign-off.
HIPAA — AI in Healthcare Data
BAA coverage for AI vendors. AI vendors processing PHI (patient data, health records, clinical notes) are covered under Business Associate Agreements.
Minimum necessary principle. AI systems only access the minimum PHI necessary for their function — not broad access to entire patient databases for model training.
De-identification before AI processing. PHI is de-identified per HIPAA Safe Harbor or Expert Determination method before being processed by AI models, unless clinical necessity requires identified data.
PCI-DSS — AI in Payment Processing
AI systems in cardholder data environment. AI systems that process, store, or transmit cardholder data are within your PCI-DSS scope and subject to all applicable controls.
AI logging and monitoring. AI-driven fraud detection and transaction processing systems have logging sufficient to support forensic investigation (Requirement 10).
5. AI Governance Fundamentals
Regardless of specific regulations, these are the governance basics every company using AI should have in place. Most enforcement actions cite governance failures, not technical failures.
AI System Inventory
Complete AI inventory. You have a documented list of every AI/ML system in use — including shadow AI (teams using ChatGPT, Copilot, Midjourney without IT approval).
Include: system name, purpose, data inputs/outputs, model type, vendor, owner, deployment date, last review date (a sample record is sketched below)
Shadow AI policy. You have a written policy on unapproved AI tool usage, and employees know how to request approval for new AI tools.
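To make the inventory item concrete, here is a minimal sketch of what a single inventory record could capture, written in Python purely for illustration. The field names mirror the "Include" list above; the record class, the example values, and the one-year review check are our assumptions, not a mandated schema.

    # Illustrative only: one way to capture the inventory fields listed above.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AISystemRecord:
        name: str                  # system name
        purpose: str               # business purpose
        data_inputs: list[str]     # data inputs
        data_outputs: list[str]    # data outputs
        model_type: str            # e.g., "vendor LLM", "in-house gradient boosting"
        vendor: str                # "internal" if built in-house
        owner: str                 # accountable person or team
        deployment_date: date
        last_review_date: date

        def review_overdue(self, today: date, max_days: int = 365) -> bool:
            """Flag records whose last review is older than the chosen interval."""
            return (today - self.last_review_date).days > max_days

    # Example entry (hypothetical system)
    record = AISystemRecord(
        name="invoice-fraud-scorer",
        purpose="Flag suspicious vendor invoices for manual review",
        data_inputs=["invoice metadata", "vendor history"],
        data_outputs=["fraud risk score"],
        model_type="gradient-boosted classifier",
        vendor="internal",
        owner="Finance Systems team",
        deployment_date=date(2024, 3, 1),
        last_review_date=date(2025, 1, 15),
    )
    print(record.review_overdue(date(2026, 3, 1)))  # True: review is more than a year old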
Risk Assessment
Risk classification framework. Each AI system is classified by risk level (e.g., High, Medium, Low) based on impact on people, data sensitivity, and regulatory exposure (see the scoring sketch below).
Documented risk mitigations. For each high-risk AI system, specific mitigation measures are documented and reviewed quarterly.
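One lightweight way to operationalize the risk classification item above is a simple scoring rubric. The sketch below is illustrative only: the three factors come from the checklist item, but the point values and thresholds are assumptions to tune to your own risk appetite.

    # Illustrative rubric: scores and thresholds are assumptions, not regulatory values.
    def classify_risk(impacts_people: bool, data_sensitivity: int, regulatory_exposure: int) -> str:
        """data_sensitivity and regulatory_exposure are scored 0 (none) to 3 (severe)."""
        score = (3 if impacts_people else 0) + data_sensitivity + regulatory_exposure
        if impacts_people and score >= 6:
            return "High"
        if score >= 3:
            return "Medium"
        return "Low"

    # A resume-screening tool using sensitive data in a regulated hiring context:
    print(classify_risk(impacts_people=True, data_sensitivity=3, regulatory_exposure=3))  # High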
Bias Testing & Fairness
Bias testing protocol. AI systems making decisions about people (hiring, lending, insurance, customer service prioritization) are tested for bias across protected classes before deployment and on a regular schedule.
Fairness metrics defined. You have defined what “fair” means for each AI system (demographic parity, equalized odds, individual fairness) and measure it.
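As a concrete illustration of the fairness metrics item, the sketch below computes a demographic parity gap and a true-positive-rate gap (one component of equalized odds) between two groups. It is toy arithmetic on made-up data, not a complete bias audit methodology.

    # Minimal sketch: demographic parity and equalized-odds-style gaps between two groups.
    def selection_rate(preds, group, g):
        rows = [p for p, gr in zip(preds, group) if gr == g]
        return sum(rows) / len(rows)

    def demographic_parity_diff(preds, group, g1, g2):
        """Difference in positive-outcome rates between two groups (0 = parity)."""
        return selection_rate(preds, group, g1) - selection_rate(preds, group, g2)

    def true_positive_rate(preds, labels, group, g):
        pos = [(p, y) for p, y, gr in zip(preds, labels, group) if gr == g and y == 1]
        return sum(p for p, _ in pos) / len(pos)

    # Toy data: binary predictions, true labels, and group membership per individual.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    labels = [1, 0, 1, 0, 0, 1, 1, 0]
    group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_diff(preds, group, "a", "b"))   # 0.75 - 0.25 = 0.5
    # Equalized odds also requires comparing false-positive rates across groups.
    print(true_positive_rate(preds, labels, group, "a")
          - true_positive_rate(preds, labels, group, "b"))   # TPR gap of 0.5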
Transparency & Explainability
AI disclosure to users. When users interact with AI (chatbots, generated content, automated decisions), they are informed that AI is involved.
Decision explainability. For consequential AI decisions, you can provide a human-understandable explanation of why the AI made that specific decision.
Human override mechanism. Users affected by AI decisions have a way to request human review and override of the AI decision.
Data Governance for AI
Training data documentation. You know what data your AI models were trained on, including whether it includes customer data, copyrighted material, or data from unauthorized sources.
Data input/output logging. AI system inputs and outputs are logged for audit purposes, with retention periods aligned to regulatory requirements (see the logging sketch below).
Vendor data handling. For third-party AI services, you have verified and documented: where data is processed, whether data is used for model training, data retention policies, and deletion capabilities.
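For the input/output logging item above, a minimal audit-log wrapper might look like the sketch below. The field names, the file-based sink, and the redaction comments are our assumptions; retention and pseudonymization should follow your own legal and privacy requirements.

    # Minimal sketch of an AI input/output audit log; adapt fields and sink to your stack.
    import json
    import uuid
    from datetime import datetime, timezone

    def log_ai_call(system_name, model_version, user_id, prompt, response, sink):
        """Write one audit record per AI call and return its ID for traceability."""
        record = {
            "record_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system_name,
            "model_version": model_version,
            "user_id": user_id,   # consider pseudonymizing before logging
            "input": prompt,      # redact PII/PHI here if your policy requires it
            "output": response,
        }
        sink.write(json.dumps(record) + "\n")
        return record["record_id"]

    # Usage: append to a dedicated audit file; retention is handled by your log pipeline.
    with open("ai_audit.log", "a") as sink:
        log_ai_call("support-chatbot", "v2.3", "user-1042",
                    "Where is my order?", "Your order shipped on Tuesday.", sink)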
Score Yourself
Count the boxes you checked in each section. Be honest — this is for you, not us.
1. Federal AI Regulations
____ / 10
2. State AI & Privacy Regulations
____ / 11
3. International (EU AI Act)
____ / 4
4. Industry-Specific
____ / 7
5. AI Governance Fundamentals
____ / 11
Total
____ / 43
35–43: Strong
You have a solid AI governance foundation. Focus on maintaining it as regulations evolve.
18–34: Gaps Exist
You have some controls in place but significant exposure in key areas. Prioritize the unchecked items — enforcement is active.
0–17: At Risk
Your AI compliance posture has major gaps. With the Colorado AI Act taking effect June 30, 2026, and FTC/SEC enforcement already active, this needs attention now.
Need help with the items you couldn’t check?
Most mid-market companies score between 12 and 22 on this checklist. That is not a failure — it is normal. The regulatory landscape moved faster than anyone’s compliance program.
We help CTOs close these gaps with a clear, prioritized plan — not a 200-page audit report that sits on a shelf.
Schedule a 15-minute call and we will walk through your specific exposure.
Schedule a Call