Navigating GLBA, SOX, FINRA, and banking regulations when implementing AI and LLM solutions in financial institutions
Financial institutions face some of the strictest regulatory requirements for data protection and AI usage. Between the Gramm-Leach-Bliley Act (GLBA), Sarbanes-Oxley (SOX), FINRA rules, SEC guidance, and federal banking regulators, implementing LLMs requires careful attention to customer privacy, model risk management, and vendor oversight. Understanding these requirements is critical for banks, credit unions, broker-dealers, investment advisors, and fintech companies.
Financial institutions are subject to oversight from multiple regulators: OCC, FDIC, Federal Reserve, CFPB, SEC, FINRA, and state banking departments. Each has issued guidance on AI, model risk management, and third-party vendor oversight. Violations can result in enforcement actions, civil money penalties, and reputational damage.
Multiple federal regulations govern how financial institutions can use LLMs and handle customer data:
Gramm-Leach-Bliley Act (GLBA)
Scope: All financial institutions (banks, credit unions, insurance companies, securities firms, mortgage lenders, debt collectors)
Key Requirements: The Privacy Rule (privacy notices and opt-out rights before sharing nonpublic personal information with nonaffiliated third parties), the Safeguards Rule (a written information security program with administrative, technical, and physical safeguards), and protections against pretexting.
LLM Implication: Customer financial information (account numbers, balances, transaction history, credit scores) sent to LLM vendors may constitute "sharing" under GLBA unless the vendor qualifies as a service provider with contractual safeguards.
Sarbanes-Oxley Act (SOX)
Scope: Publicly traded companies, including public banks and financial services firms
Key Requirements: Section 302 (CEO/CFO certification of financial reports), Section 404 (management assessment and auditor attestation of internal control over financial reporting), and Section 802 (records retention and penalties for document destruction).
LLM Implication: If LLMs are used in financial reporting processes (e.g., generating MD&A sections, analyzing revenue recognition), controls over LLM outputs must be documented and tested as part of SOX compliance.
FINRA Rules
Scope: Broker-dealers, registered representatives, securities firms
Key Rules: Rule 2210 (communications with the public must be fair and balanced, with principal approval of retail communications), Rule 3110 (supervision of associated persons and their communications), and Rule 4511 (books and records, retained per SEC Rule 17a-4).
LLM Implication: LLM-generated customer communications (emails, marketing materials, research reports) must be reviewed and approved by a registered principal before distribution. All prompts and outputs must be retained per retention schedules.
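One way to approach the retention requirement is to capture every prompt/response pair as an append-only audit record. Below is a minimal Python sketch; the field names and hashing scheme are illustrative assumptions, and actual SEC Rule 17a-4 storage additionally requires non-rewriteable, non-erasable media or an audit-trail alternative, which this does not provide.

```python
import datetime
import hashlib
import io
import json

def audit_record(user_id: str, prompt: str, response: str) -> dict:
    """Build an audit record for one LLM interaction (illustrative fields)."""
    body = {
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
        "timestamp_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Content hash lets a reviewer detect after-the-fact tampering.
    body["sha256"] = hashlib.sha256(
        json.dumps(
            {k: body[k] for k in ("user_id", "prompt", "response")},
            sort_keys=True,
        ).encode()
    ).hexdigest()
    return body

def append_jsonl(stream, record: dict) -> None:
    """Append one record as a JSON line (stand-in for WORM storage)."""
    stream.write(json.dumps(record) + "\n")

# Usage: in production the stream would be an append-only, retention-managed store.
log = io.StringIO()
append_jsonl(log, audit_record("rep-042", "Draft a client email", "Dear client, ..."))
```

Hashing each record individually is the simplest tamper-evidence scheme; chaining each hash to the previous record's hash would strengthen it further.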
Investment Advisers Act
Scope: Registered investment advisors (RIAs)
Key Requirements: Fiduciary duty to act in clients' best interest, Form ADV disclosure of material facts and conflicts of interest, a written compliance program under Rule 206(4)-7, and books-and-records obligations under Rule 204-2.
LLM Implication: If LLMs provide investment recommendations, advisors remain fully responsible for ensuring advice is suitable and in the client's best interest. Disclosures may be required if AI significantly influences advice.
Federal Banking Regulators (OCC, FDIC, Federal Reserve)
Key Guidance: SR 11-7 and OCC Bulletin 2011-12 (supervisory guidance on model risk management), plus the 2023 Interagency Guidance on Third-Party Relationships (risk management expectations for vendor arrangements).
LLM Implication: LLMs used for credit decisions, fraud detection, or customer segmentation are considered "models" under SR 11-7 and require formal model risk management frameworks.
GLBA protects "nonpublic personal information" which includes personally identifiable financial information provided by customers or obtained through transactions:
Account Information: Account numbers, balances, transaction history, payment records
Credit Information: Credit scores, credit reports, loan applications, loan terms
Personal Identifiers: Social Security numbers, driver's license numbers, and contact details collected in connection with providing a financial product or service
Combined Information: Any list, description, or grouping of consumers derived from nonpublic financial information, such as a list of an institution's customers
Information that is publicly available is NOT NPI and can be used in LLM prompts: information lawfully made available to the general public from government records, widely distributed media, or disclosures required by federal, state, or local law.
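A common guardrail is to scrub suspected NPI before any prompt leaves the institution's environment. The sketch below assumes a few simple regex patterns; a production scrubber would need far broader coverage (names, addresses, balances) and should fail closed when uncertain.

```python
import re

# Illustrative patterns only -- not a complete NPI detector.
NPI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,17}\b"),        # bare account-number-like runs
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-like digit runs
}

def redact_npi(text: str) -> tuple[str, list[str]]:
    """Replace suspected NPI with typed placeholders; report what was found."""
    found = []
    for label, pattern in NPI_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED-{label}]", text)
    return text, found

# Usage: run on every outbound prompt; block or escalate if anything was found.
clean, hits = redact_npi("Customer 123-45-6789 asked about account 12345678901.")
```

Pattern-based redaction is cheap but imperfect, which is why it is usually paired with the contractual service-provider safeguards described above rather than relied on alone.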
Federal Reserve guidance SR 11-7 requires banks to establish robust model risk management frameworks. LLMs used in business decisions are considered "models" and must comply:
SR 11-7 defines a model as a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.
LLMs used for credit decisions, fraud detection, customer segmentation, or regulatory reporting are models.
Model Development, Implementation, and Use. Requirements for deploying LLMs: a documented statement of purpose and intended use, assessment of input data quality and model limitations, and testing before production deployment.
Model Validation. Independent review of model performance and limitations: evaluation of conceptual soundness, ongoing monitoring including benchmarking against alternatives, and outcomes analysis comparing outputs to actual results.
Ongoing Monitoring. Continuous oversight of model performance: tracking output quality and error rates, watching for drift (including silent vendor model updates), and triggering revalidation when performance degrades.
Governance. Oversight structure for model risk: board and senior management accountability, a firm-wide model inventory, documented policies and procedures, and effective challenge from parties independent of model development.
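The governance pillar typically includes a firm-wide model inventory. A sketch of what an LLM entry might look like follows; the fields, example values, and the one-year revalidation threshold are illustrative assumptions, not a regulatory template.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    """One row of an SR 11-7-style model inventory (illustrative fields)."""
    model_id: str
    description: str
    owner: str                          # accountable business unit
    risk_tier: int                      # 1 = highest-impact decisions
    vendor: str | None = None           # third-party LLM provider, if any
    last_validated: date | None = None
    limitations: list[str] = field(default_factory=list)

    def validation_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag models whose independent validation is stale or missing."""
        if self.last_validated is None:
            return True
        return (today - self.last_validated).days > max_age_days

# Usage: a hypothetical entry for an LLM used in fraud alert triage.
entry = ModelInventoryEntry(
    model_id="LLM-FRAUD-001",
    description="LLM-assisted fraud alert triage",
    owner="Fraud Operations",
    risk_tier=1,
    vendor="(example vendor)",
    last_validated=date(2024, 1, 15),
    limitations=["non-deterministic outputs", "opaque reasoning"],
)
```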
LLMs present unique validation challenges compared to traditional statistical models: outputs are non-deterministic, internal reasoning is opaque, responses can include confidently stated but fabricated facts, results are sensitive to small changes in prompt wording, and vendor-hosted models can change behavior without notice.
Solution: Implement robust output validation, human review for high-stakes decisions, and continuous monitoring.
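The human-review requirement can be operationalized as a gate that routes LLM output to a reviewer queue unless every automated check passes. The checks below are illustrative assumptions; a real deployment would add groundedness checks, policy classifiers, and model confidence signals.

```python
def requires_human_review(output: str, decision_impact: str,
                          max_length: int = 2000) -> bool:
    """Decide whether LLM output must go to a human reviewer before use.

    `decision_impact` is a hypothetical tag attached by the calling
    application; the categories and heuristics here are examples only.
    """
    # High-stakes decision categories are always reviewed, per policy.
    if decision_impact in ("credit_decision", "adverse_action"):
        return True
    # Unusually long output is a cheap proxy for off-script behavior.
    if len(output) > max_length:
        return True
    # Phrases where the model itself signaled uncertainty.
    hedged = ("i am not sure", "as an ai", "i cannot verify")
    if any(phrase in output.lower() for phrase in hedged):
        return True
    return False

# Usage: gate output before it reaches a customer or a decision system.
needs_review = requires_human_review("Approve the loan.", "credit_decision")
```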
Regulatory guidance requires financial institutions to perform due diligence on third-party service providers, including LLM vendors:
Before selecting an LLM vendor, assess: data handling and retention practices (including whether customer inputs are used for model training), security posture and certifications (e.g., SOC 2), use of subcontractors and data residency, financial stability, and regulatory or litigation history.
Ensure vendor contracts include: restrictions on data use and retention, confidentiality and safeguards commitments sufficient to qualify the vendor as a GLBA service provider, breach notification timelines, audit rights, and termination provisions covering return or destruction of data.
After engagement, continuously monitor the vendor: review updated SOC reports and security attestations, track service performance and incidents, and reassess risk whenever the vendor changes models, subcontractors, or terms of service.
Avoid over-reliance on a single LLM vendor: maintain a documented exit strategy, design integrations behind a provider-agnostic interface, and periodically test an alternate provider.
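Concentration risk is easier to manage when business code sits behind a thin, provider-agnostic interface, making a vendor switch a configuration change rather than a rewrite. A minimal sketch follows, with placeholder classes standing in for real vendor SDK calls (which differ per vendor).

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Thin abstraction so business code never imports a vendor SDK directly."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class PrimaryProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder for the primary vendor's SDK call.
        return f"[primary] {prompt}"

class BackupProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder for a second vendor, exercised in failover tests.
        return f"[backup] {prompt}"

def complete_with_failover(prompt: str, providers: list[LLMProvider]) -> str:
    """Try providers in order; escalate only if all of them fail."""
    last_error = None
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as exc:  # vendor SDKs raise their own error types
            last_error = exc
    raise RuntimeError("all LLM providers failed") from last_error

# Usage: the provider list becomes configuration, not code.
result = complete_with_failover("hello", [PrimaryProvider(), BackupProvider()])
```

Routinely routing a small fraction of traffic to the backup provider keeps the failover path exercised rather than theoretical.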
For broker-dealers and investment advisors, Material Nonpublic Information (MNPI) is information about a company that hasn't been disclosed publicly and would likely affect the stock price if known. Using MNPI for trading violates insider trading laws.
Transmitting MNPI to third-party LLM vendors could constitute improper disclosure. Examples of MNPI: pending mergers or acquisitions, unannounced earnings results, major contract wins or losses, significant regulatory actions, and unannounced executive changes.
Many financial institutions maintain information barriers to prevent MNPI from reaching individuals who could trade on it. LLM usage must respect these barriers: prompts, retrieved context, and shared conversation histories must not carry MNPI from the private side of the wall to the public side.
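One enforcement pattern is to let a user's side of the wall determine which document collections may be included in a prompt or retrieval step. A minimal sketch, where the collection names and the access mapping are illustrative assumptions:

```python
# Illustrative mapping: which document collections each side of the
# information barrier may feed into an LLM prompt.
BARRIER_ACCESS = {
    "public_side": {"research_notes", "public_filings"},
    "private_side": {"deal_documents", "public_filings"},
}

def allowed_sources(user_side: str, requested: set[str]) -> set[str]:
    """Drop any collection the user's side of the wall may not see.

    Unknown sides get an empty set, so the check fails closed.
    """
    return requested & BARRIER_ACCESS.get(user_side, set())

# Usage: a public-side analyst requesting deal documents gets none of them.
sources = allowed_sources("public_side", {"deal_documents", "public_filings"})
```

Filtering at the retrieval layer (rather than trusting prompt authors) keeps the barrier check in one auditable place.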
Financial institutions can safely use LLMs for many workflows that don't involve NPI or MNPI:
Public Market Research: Analyze publicly available earnings transcripts, SEC filings (10-K, 10-Q), news articles.
Requirement: Only public information; no MNPI
Internal Policy Drafting: Draft internal policies, update employee handbooks, create training materials.
Requirement: No customer data; human review required
Regulatory Analysis: Summarize new regulations, compare requirements across jurisdictions, identify compliance gaps.
Requirement: Public regulatory documents only
Code Review Assistance: Generate code comments, identify potential bugs, suggest refactoring improvements.
Requirement: No customer data in code; security review
Customer FAQ Content: Generate answers to common questions about products, rates, and services.
Requirement: Public information only; compliance review before publishing
Meeting Summaries: Summarize non-sensitive meeting notes, action items, and decisions.
Requirement: No NPI, MNPI, or confidential strategy discussions
Higher-risk use cases CAN be supported by LLMs but require additional safeguards, such as NPI redaction or anonymization, mandatory human review of outputs, and documentation under the institution's model risk management framework.
We can help you navigate GLBA, SR 11-7, FINRA, and other financial regulations to safely implement LLMs while maintaining compliance.
Schedule a Consultation