Financial Services & Banking LLMs

Navigating GLBA, SOX, FINRA, and banking regulations when implementing AI and LLM solutions in financial institutions

Financial institutions face some of the strictest regulatory requirements for data protection and AI usage. Between the Gramm-Leach-Bliley Act (GLBA), Sarbanes-Oxley (SOX), FINRA rules, SEC guidance, and federal banking regulators, implementing LLMs requires careful attention to customer privacy, model risk management, and vendor oversight. Understanding these requirements is critical for banks, credit unions, broker-dealers, investment advisors, and fintech companies.

⚖️ Regulatory Landscape

Financial institutions are subject to oversight from multiple regulators: OCC, FDIC, Federal Reserve, CFPB, SEC, FINRA, and state banking departments. Each has issued guidance on AI, model risk management, and third-party vendor oversight. Violations can result in enforcement actions, civil money penalties, and reputational damage.

1. Key Financial Regulations for LLM Usage

Multiple federal regulations govern how financial institutions can use LLMs and handle customer data:

Gramm-Leach-Bliley Act (GLBA)

Scope: All financial institutions (banks, credit unions, insurance companies, securities firms, mortgage lenders, debt collectors)

Requirements:

  • Safeguards Rule: Implement administrative, technical, and physical safeguards to protect customer information
  • Privacy Rule: Provide privacy notices; limit sharing of nonpublic personal information (NPI)
  • Pretexting Protection: Protect against unauthorized access to customer information

LLM Implication: Customer financial information (account numbers, balances, transaction history, credit scores) sent to LLM vendors may constitute "sharing" under GLBA unless the vendor qualifies as a service provider with contractual safeguards.

Sarbanes-Oxley Act (SOX)

Scope: Publicly traded companies, including public banks and financial services firms

Key Requirements:

  • Section 302: CEO/CFO certification of financial statement accuracy and internal controls
  • Section 404: Management assessment of internal controls over financial reporting (ICFR)
  • Section 409: Real-time disclosure of material changes in financial condition

LLM Implication: If LLMs are used in financial reporting processes (e.g., generating MD&A sections, analyzing revenue recognition), controls over LLM outputs must be documented and tested as part of SOX compliance.

FINRA Rules (Broker-Dealers)

Scope: Broker-dealers, registered representatives, securities firms

Key Rules:

  • Rule 3110: Supervision of employees and customer communications
  • Rule 2210: Communications with the public (must be fair, balanced, not misleading)
  • Rule 4511: Books and records retention (email, communications, orders)

LLM Implication: LLM-generated customer communications (emails, marketing materials, research reports) must be reviewed and approved by a registered principal before distribution. All prompts and outputs must be retained per retention schedules.
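A minimal sketch of what retaining prompts and outputs can look like in practice: capture every prompt/response pair as an immutable, hashed record at the API call site. The `build_retention_record` helper and the six-year retention period are illustrative assumptions, not a FINRA-prescribed schema; confirm your actual schedule with compliance.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch: persist every prompt/response pair so it can be
# produced under FINRA Rule 4511. Retention period is an assumption.
RETENTION_YEARS = 6

def build_retention_record(user_id: str, prompt: str, response: str) -> dict:
    now = datetime.now(timezone.utc)
    payload = {
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
        "captured_at": now.isoformat(),
        "retain_until_year": now.year + RETENTION_YEARS,
    }
    # Content hash over the payload fields lets auditors verify the
    # record was not altered after capture.
    payload["sha256"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return payload

record = build_retention_record(
    "rep-1042", "Draft a client email about rate changes", "Dear client..."
)
```

In a real deployment the record would be written to an append-only (WORM-style) store rather than held in memory.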

SEC Investment Advisor Regulations

Scope: Registered investment advisors (RIAs)

Key Requirements:

  • Fiduciary Duty: Act in the best interest of clients; disclose conflicts of interest
  • Form ADV: Disclosure of services, fees, conflicts, and disciplinary history
  • Custody Rule: Safeguarding client assets and information

LLM Implication: If LLMs provide investment recommendations, advisors remain fully responsible for ensuring advice is suitable and in the client's best interest. Disclosures may be required if AI significantly influences advice.

Federal Banking Regulators (OCC, FDIC, Fed)

Key Guidance:

  • SR 11-7 (Model Risk Management): Banks must validate, test, and monitor models used in decision-making
  • Third-Party Risk Management: Due diligence, ongoing monitoring, and contractual protections for vendors
  • Fair Lending (Reg B, ECOA): Ensure credit decisions don't discriminate based on protected classes

LLM Implication: LLMs used for credit decisions, fraud detection, or customer segmentation are considered "models" under SR 11-7 and require formal model risk management frameworks.

2. Nonpublic Personal Information (NPI) Under GLBA

GLBA protects "nonpublic personal information" (NPI), which includes personally identifiable financial information that customers provide or that the institution obtains through transactions:

Examples of NPI That Cannot Go to LLMs Without Safeguards

Account Information:

  • Account numbers
  • Account balances
  • Transaction history
  • Payment history

Credit Information:

  • Credit scores
  • Credit reports
  • Loan applications
  • Debt-to-income ratios

Personal Identifiers:

  • Social Security numbers
  • Driver's license numbers
  • Tax ID numbers
  • Passport numbers

Combined Information:

  • Name + account number
  • Name + credit score
  • Name + loan amount
  • Any info obtained from transactions

✅ Safe to Send (Publicly Available Information)

Information that is publicly available is NOT NPI and can be used in LLM prompts:

  • Information from government records (property records, bankruptcy filings)
  • Publicly available directories (phone books, public websites)
  • Financial institution's own marketing materials
  • General industry data (interest rates, market indices)

3. Model Risk Management for LLMs (SR 11-7)

Federal Reserve guidance SR 11-7 requires banks to establish robust model risk management frameworks. LLMs used in business decisions are considered "models" and must comply:

📋 What Qualifies as a "Model" Under SR 11-7?

A quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories and techniques to process input data into output estimates or predictions.

LLMs used for credit decisions, fraud detection, customer segmentation, or regulatory reporting are models.

1. Model Development & Implementation

Requirements for deploying LLMs:

  • Documentation: Model purpose, methodology, limitations, assumptions
  • Testing: Validate outputs on representative datasets before production use
  • Approval: Senior management and board awareness; sign-off for high-risk models
  • Conceptual Soundness: Ensure the model design is appropriate for the intended use

2. Model Validation

Independent review of model performance and limitations:

  • Independent Review: Validation performed by staff not involved in model development
  • Benchmarking: Compare LLM outputs to alternative models or human judgment
  • Outcomes Analysis: Track actual vs. predicted results over time
  • Sensitivity Analysis: Test how prompt variations affect outputs

3. Ongoing Monitoring

Continuous oversight of model performance:

  • Performance Metrics: Track accuracy, error rates, false positives/negatives
  • Drift Detection: Monitor for degradation in model performance over time
  • Version Control: Document when LLM vendor updates underlying models
  • Incident Tracking: Log and investigate unexpected or incorrect outputs

4. Governance & Controls

Oversight structure for model risk:

  • Model Inventory: Catalog all LLMs in use, their purpose, and risk tier
  • Policies & Procedures: Written standards for model development, validation, and use
  • Three Lines of Defense: Business units (1st), model validation (2nd), internal audit (3rd)
  • Board Reporting: Regular updates on model risk profile and issues
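A model inventory can start as a simple structured catalog. The field names and risk tiers below are illustrative assumptions, not a regulatory schema; most institutions extend this with validation status, upstream data sources, and vendor version identifiers.

```python
from dataclasses import dataclass

# Illustrative SR 11-7 inventory entry; fields and tiers are assumptions.
@dataclass
class ModelInventoryEntry:
    model_id: str
    purpose: str
    vendor: str
    risk_tier: str        # e.g. "high", "medium", "low"
    last_validated: str   # ISO date of last independent validation
    owner: str            # accountable first-line business owner

inventory = [
    ModelInventoryEntry("llm-001", "fraud triage", "VendorX", "high", "2024-01-15", "fraud-ops"),
    ModelInventoryEntry("llm-002", "policy drafting", "VendorX", "low", "2024-03-02", "compliance"),
]

# Board reporting often starts from a risk-tier slice of the inventory.
high_risk = [m.model_id for m in inventory if m.risk_tier == "high"]
```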

⚠️ Challenges with LLM Validation

LLMs present unique validation challenges compared to traditional statistical models:

  • Black Box Nature: Cannot inspect model weights or decision logic (proprietary)
  • Non-Deterministic: Same prompt may produce different outputs on different runs
  • Vendor Updates: Model may change without notice when vendor updates underlying system
  • Hallucinations: Model may generate plausible but false information

Solution: Implement robust output validation, human review for high-stakes decisions, and continuous monitoring.
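One common mitigation for non-determinism is a consistency gate: sample the model several times and escalate to human review unless the answers agree. The sketch below stubs the vendor call with a hypothetical `call_llm` function; the sample count and agreement threshold are assumptions to tune per use case.

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    # Stub standing in for a vendor API call; replace with your client.
    return "APPROVE"

def consistency_gate(prompt: str, samples: int = 3, min_agreement: float = 1.0):
    """Return (answer, needs_human_review). Escalate unless the same answer
    appears in at least min_agreement of the sampled runs."""
    answers = Counter(call_llm(prompt) for _ in range(samples))
    top_answer, count = answers.most_common(1)[0]
    needs_review = count / samples < min_agreement
    return top_answer, needs_review

answer, needs_review = consistency_gate("Classify this alert: fraud or not fraud?")
```

For high-stakes decisions (credit, fraud), many programs route every output to human review regardless of agreement and use the gate only as a triage signal.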

4. Third-Party Risk Management for LLM Vendors

Regulatory guidance requires financial institutions to perform due diligence on third-party service providers, including LLM vendors:

Pre-Engagement Due Diligence

Before selecting an LLM vendor, assess:

  • Financial Stability: Review financial statements, funding, going concern viability
  • Security Posture: SOC 2 reports, penetration testing, incident history
  • Data Handling: Where data is stored, who has access, data retention policies
  • Subprocessors: Does vendor use other third parties? (cloud providers, data annotators)
  • Regulatory Experience: Does vendor work with other financial institutions?

Contractual Protections

Ensure vendor contracts include:

  • Data Ownership: Bank retains ownership of all input data and prompts
  • No Training: Explicit prohibition against using bank data for model training
  • Deletion Rights: Ability to request deletion of all data upon contract termination
  • Audit Rights: Right to audit vendor's security controls and data handling
  • Breach Notification: Timely notification of security incidents (within 24-48 hours)
  • Business Continuity: Vendor's disaster recovery and business continuity plans

Ongoing Monitoring

After engagement, continuously monitor vendor:

  • Annual Reviews: Re-assess vendor risk profile annually
  • SOC 2 Updates: Review updated SOC 2 reports when issued
  • Performance Metrics: Track SLA compliance, uptime, incident response times
  • Regulatory Changes: Monitor if vendor faces regulatory enforcement or data breaches

Concentration Risk

Avoid over-reliance on a single LLM vendor:

  • Diversification: Consider multi-vendor strategies for critical applications
  • Exit Strategy: Maintain ability to migrate to alternative providers
  • Vendor Lock-In: Avoid proprietary prompt formats or vendor-specific APIs when possible

5. Material Nonpublic Information (MNPI)

For broker-dealers and investment advisors, Material Nonpublic Information (MNPI) is information about a company that hasn't been disclosed publicly and would likely affect the stock price if known. Using MNPI for trading violates insider trading laws.

❌ NEVER Send MNPI to LLM Services

Transmitting MNPI to third-party LLM vendors could constitute improper disclosure. Examples of MNPI:

  • Unannounced merger or acquisition plans
  • Financial results before public release (earnings, revenue misses)
  • Major contract wins or losses not yet disclosed
  • Changes in executive leadership before announcement
  • Pending regulatory enforcement actions
  • Product recalls or safety issues not yet public

Information Barriers ("Chinese Walls")

Many financial institutions maintain information barriers to prevent MNPI from reaching individuals who could trade on it. LLM usage must respect these barriers:

  • Do not use shared LLM systems across departments that should be separated (e.g., investment banking and trading)
  • Implement access controls to prevent MNPI from being entered into LLM prompts
  • Audit LLM usage logs to ensure no MNPI was inadvertently transmitted
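An after-the-fact audit of retained prompt logs can be sketched as a scan against a restricted-terms watchlist (for example, code names of live deals behind the wall). The watchlist entries and log format below are hypothetical; keyword matching is a coarse first pass, not a complete surveillance program.

```python
# Illustrative audit sketch: scan retained LLM prompt logs for terms on a
# restricted/watch list. Entries and log schema are hypothetical examples.
WATCHLIST = {"project atlas", "acme merger"}

def flag_mnpi_hits(prompt_log: list[dict]) -> list[dict]:
    """Return log entries whose prompt mentions a watchlist term."""
    hits = []
    for entry in prompt_log:
        text = entry["prompt"].lower()
        if any(term in text for term in WATCHLIST):
            hits.append(entry)
    return hits

log = [
    {"user": "ib-analyst-7", "prompt": "Summarize Project Atlas deal terms"},
    {"user": "research-2", "prompt": "Summarize the latest 10-K"},
]
assert [e["user"] for e in flag_mnpi_hits(log)] == ["ib-analyst-7"]
```

Flagged entries would feed the firm's existing surveillance and escalation process rather than being handled ad hoc.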

6. Safe LLM Use Cases for Financial Institutions

Financial institutions can safely use LLMs for many workflows that don't involve NPI or MNPI:

✅ Market Research & Analysis

Analyze publicly available earnings transcripts, SEC filings (10-K, 10-Q), news articles.

Requirement: Only public information; no MNPI

✅ Policy & Procedure Documentation

Draft internal policies, update employee handbooks, create training materials.

Requirement: No customer data; human review required

✅ Regulatory Research

Summarize new regulations, compare requirements across jurisdictions, identify compliance gaps.

Requirement: Public regulatory documents only

✅ Code Review & Documentation

Generate code comments, identify potential bugs, suggest refactoring improvements.

Requirement: No customer data in code; security review

✅ Customer Service FAQs

Generate answers to common questions about products, rates, and services.

Requirement: Public information only; compliance review before publishing

✅ Internal Meeting Summaries

Summarize non-sensitive meeting notes, action items, and decisions.

Requirement: No NPI, MNPI, or confidential strategy discussions

⚠️ Use Cases Requiring Extra Controls

These use cases CAN be done with LLMs but require additional safeguards:

  • Credit Decisioning: Subject to SR 11-7 model risk management; fair lending testing required
  • Fraud Detection: Must validate accuracy; false positives can harm customers; model monitoring required
  • Customer Communications: FINRA/SEC review required; must comply with Reg B, fair lending
  • Financial Reporting: SOX controls required; outputs must be validated before inclusion in reports

Best Practices for Financial Institutions

DO

  • Obtain contractual commitments from LLM vendors that data won't be used for training
  • Implement model risk management for LLMs used in credit, fraud, or compliance decisions
  • Perform third-party due diligence including SOC 2 review and security assessments
  • Maintain audit logs of all LLM usage and retain per FINRA/SEC requirements
  • Test for bias and fair lending compliance before deploying LLMs in credit decisions
  • Establish acceptable use policies and train employees on GLBA/MNPI restrictions

DON'T

  • Send customer account numbers, SSNs, or transaction history to LLM services
  • Use LLMs for credit decisions without model risk management framework
  • Transmit Material Nonpublic Information (MNPI) to third-party LLM vendors
  • Deploy LLMs in production without independent validation and testing
  • Publish LLM-generated customer communications without compliance review
  • Rely solely on LLM outputs for SOX financial reporting without validation

Need Help with Financial Services AI Compliance?

We can help you navigate GLBA, SR 11-7, FINRA, and other financial regulations to safely implement LLMs while maintaining compliance.

Schedule a Consultation