Public Mortgage Lender (Illustrative Profile)
v1 · Financial Services · Generated 4/13/2026, 9:40:00 PM
Public Mortgage Lender (Illustrative Profile) presents critical AI risk that would likely result in a decline recommendation. The composite risk score of 100 reflects 5 primary risk drivers across 5 mapped claims scenarios. Score confidence should be evaluated in conjunction with the evidence readiness metrics below.
Risk Dimensions
Inherent Harm: 4.8 / 5.0
Top Risk Drivers
What safeguards exist to detect and prevent algorithmic bias in lending, underwriting, or customer-facing financial decisions?
The Mobley v. Workday trajectory has made algorithmic bias a class-action vector, not just a regulator question. Documented testing and mitigation form the defense that survives discovery.
Does the organization conduct regular fair lending analysis specifically on AI-driven credit decisions?
Disparate-impact testing plus a documented less-discriminatory-alternatives search is the SR 11-7 and ECOA fusion that carriers and examiners have converged on. The absence of testing is the most expensive gap in credit AI.
Are all AI/ML models and systems documented in a formal inventory with version control and ownership tracking?
Without a central inventory, no one can answer "what AI is running here, and who owns it?" — which is the first question every carrier, regulator, and board committee asks after an incident.
Remediation Roadmap
Bias detection, testing, and mitigation for high-impact AI
Identify AI decisions that affect people (employment, credit, housing, healthcare, pricing, content distribution) and define the protected and consequential cohorts. Run pre-deployment bias testing using the disparate impact ratio, equalized odds, or the metric appropriate to the decision type, plus continuous testing in production; a minimal metric sketch follows this item. Establish a mitigation playbook: re-training, reweighting, thresholding, or removal when thresholds are breached.
Done looks like: A bias-testing standard that names the metrics, thresholds, and cadence for each AI system. Test reports for all in-scope systems in the last 12 months. A documented mitigation action on any system where thresholds were breached, reviewed by governance.
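For illustration only, the sketch below computes the two metrics named above: the disparate impact ratio (selection-rate ratio, with the four-fifths rule as a common trigger) and equalized-odds gaps. Cohort labels, synthetic data, and the thresholds are assumptions for the example, not the organization's standard.

```python
# Minimal pre-deployment bias checks: disparate impact ratio and
# equalized-odds gaps. Group labels and the 0.80 / 0.05 triggers are
# illustrative, not a prescribed standard.
import numpy as np

def disparate_impact_ratio(y_pred, group, protected, reference):
    """Selection rate of the protected group divided by the reference group's."""
    sel_prot = y_pred[group == protected].mean()
    sel_ref = y_pred[group == reference].mean()
    return sel_prot / sel_ref

def equalized_odds_gaps(y_true, y_pred, group, protected, reference):
    """Absolute TPR and FPR differences between the two groups."""
    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        tpr = yp[yt == 1].mean()  # share of true positives approved
        fpr = yp[yt == 0].mean()  # share of true negatives approved
        return tpr, fpr
    tpr_p, fpr_p = rates(group == protected)
    tpr_r, fpr_r = rates(group == reference)
    return abs(tpr_p - tpr_r), abs(fpr_p - fpr_r)

# Illustrative usage on synthetic approval decisions.
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)   # hypothetical cohorts
y_true = rng.integers(0, 2, size=1000)      # ground-truth outcomes
y_pred = rng.integers(0, 2, size=1000)      # model approvals

dir_value = disparate_impact_ratio(y_pred, group, "A", "B")
tpr_gap, fpr_gap = equalized_odds_gaps(y_true, y_pred, group, "A", "B")
if dir_value < 0.80 or tpr_gap > 0.05:      # illustrative thresholds
    print("Threshold breached; trigger the mitigation playbook")
```

In practice the same checks run in production on a fixed cadence, with each breach routed to the mitigation playbook and recorded for governance review.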
Fair-lending testing regime for AI/ML credit models
Run pre-deployment and continuous fair-lending testing on credit models: disparate-impact analysis across protected classes, comparative-file review, and proxy-variable investigation (a testing sketch follows this item). Establish thresholds that trigger mitigation. Document the less-discriminatory-alternatives search; CFPB Circular 2022-03 makes the search itself a compliance artifact. Obtain statistician-of-record sign-off on the methodology.
Done looks like: Fair-lending testing standard with named metrics and thresholds, quarterly testing reports for all in-scope models, documented LDA search for each material model change, and at least one model-mitigation decision in the last 18 months where testing surfaced an issue.
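As a hedged sketch of the recurring testing described above, the example below computes approval-rate ratios for each protected class against a reference group and runs a simple proxy-variable screen. The field names, group labels, and correlation cutoff are hypothetical placeholders, and comparative-file review remains a manual step.

```python
# Sketch of a recurring fair-lending test: per-class approval-rate ratios
# against a reference group, plus a simple proxy-variable screen.
# Field names, groups, and cutoffs are illustrative only.
import numpy as np
import pandas as pd

def approval_rate_ratios(df, decision_col, class_col, reference):
    """Approval rate of each class divided by the reference class's rate."""
    rates = df.groupby(class_col)[decision_col].mean()
    return (rates / rates[reference]).drop(reference)

def proxy_screen(df, feature_cols, class_col, reference, cutoff=0.30):
    """Flag features whose correlation with protected-class membership exceeds the cutoff."""
    member = (df[class_col] != reference).astype(float)
    flags = {}
    for col in feature_cols:
        corr = np.corrcoef(df[col], member)[0, 1]
        if abs(corr) > cutoff:
            flags[col] = round(float(corr), 3)
    return flags

# Illustrative usage on synthetic application data.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "approved": rng.integers(0, 2, 500),
    "class": rng.choice(["ref", "grp1", "grp2"], 500),
    "zip_income_index": rng.normal(size=500),
    "dti": rng.normal(size=500),
})
print(approval_rate_ratios(df, "approved", "class", "ref"))
print(proxy_screen(df, ["zip_income_index", "dti"], "class", "ref"))
```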
Stand up a formal AI/ML model inventory
Create a single source of truth for every production and staged AI system: foundation models, fine-tuned variants, classical ML, and rule-based decision engines. Each record captures owner, business purpose, data inputs, decision outputs, deployment environment, dependencies, validation status, last review date, and mapped risk tier (one possible record shape is sketched after this item). Assign an accountable owner with sign-off authority for each system.
Done looks like: A model registry (e.g. in a governance platform or a version-controlled catalog) listing every AI system in use, each with a named human owner and mapped to a business process. An auditable update history. Quarterly reconciliation against production telemetry to catch unregistered systems.
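One way the inventory record and the quarterly reconciliation could look in code is sketched below. The ModelRecord fields mirror the list above; the names, example systems, and telemetry set are illustrative rather than a prescribed schema.

```python
# A sketch of an inventory record and a reconciliation check that flags
# systems seen in production telemetry but missing from the registry.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    model_id: str
    owner: str                      # named human with sign-off authority
    business_purpose: str
    data_inputs: list[str]
    decision_outputs: list[str]
    deployment_env: str
    dependencies: list[str] = field(default_factory=list)
    validation_status: str = "pending"
    last_review: date = field(default_factory=date.today)
    risk_tier: int = 3

def reconcile(registry: list[ModelRecord], telemetry_model_ids: set[str]) -> set[str]:
    """Return model IDs observed in production telemetry but absent from the registry."""
    registered = {r.model_id for r in registry}
    return telemetry_model_ids - registered

# Illustrative usage with hypothetical systems.
registry = [ModelRecord("credit-score-v4", "Jane Doe", "Underwriting decisioning",
                        ["bureau_data"], ["approve/decline"], "prod")]
print(reconcile(registry, {"credit-score-v4", "marketing-ltv-v2"}))
# -> {'marketing-ltv-v2'}  (an unregistered system to investigate)
```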
Claims Scenarios (5)
Evidence Confidence
Band: high
Tier: 4
Margin: ±2
Score Range: 98–100
Documented: 47%
By Area