January 15, 2026 · Negotiate The Future
Consumer Protection in the AI Era
Liability, transparency, and the rights of people who interact with automated systems
When an algorithm denies your loan, terminates your employment, or determines your insurance rate, existing consumer protection law is poorly equipped to respond. The frameworks we built for human decision-makers do not translate cleanly to systems that process millions of decisions at scale, often without legible reasoning.
The Federal Trade Commission has authority over unfair and deceptive practices. That authority applies to AI-powered systems — but enforcement requires technical capacity the agency is still building. The Consumer Financial Protection Bureau has issued guidance on algorithmic credit decisions. The Equal Employment Opportunity Commission has flagged AI hiring tools. Coordination across these agencies remains ad hoc.
What is missing is a coherent right to contest automated decisions affecting material interests. Not a right to know the proprietary model weights — but a right to know the basis of a decision, to challenge it, and to have the challenge reviewed by a human with authority to override.
This is not a radical position. It is the same procedural logic we apply to government administrative action. Extending it to consequential private AI systems is overdue.