The Texas Responsible Artificial Intelligence Governance Act has been in force since January 1, 2026, making Texas the largest American jurisdiction with a general-purpose AI statute live on the books. Enforcement sits exclusively with Attorney General Ken Paxton's office, which is standing up the consumer-complaint portal the statute requires and building out the civil-investigative-demand workflow the law authorizes.
TRAIGA reaches any private entity that operates in Texas, offers products or services to Texas residents, or develops or deploys AI in the state. The drafting is deliberate. A California vendor selling to a Houston customer is inside the statute's reach, regardless of where the model is trained or hosted.
The prohibitions are narrower than Colorado's risk-classification regime and narrower than the European Union's tiered approach. TRAIGA bars AI systems intentionally developed or deployed to incite self-harm or harm to others, facilitate criminal activity, infringe constitutional rights, or unlawfully discriminate against a protected class, and it separately bars systems that produce child sexual abuse material or generate unlawful deepfakes. The statute explicitly excludes disparate-impact claims from the discrimination prong: intent is required.
Penalties scale with curability. A curable violation carries a civil penalty of $10,000 to $12,000; an uncurable violation carries $80,000 to $200,000. Continuing violations accrue $2,000 to $40,000 per day. State agencies may also suspend or revoke licenses and impose separate penalties of up to $100,000. A 60-day cure period runs from the date the attorney general issues a notice of violation, and the office may file suit only once that window closes without cure.
Several architectural choices distinguish TRAIGA from peer legislation. There is no private right of action, a departure from the direction Oregon and Illinois have taken. An affirmative defense is available to companies that discover and address violations through internal testing or adversarial review conducted consistently with the NIST AI Risk Management Framework. Banks and insurers already regulated under federal regimes are treated as compliant for those activities, and developers are shielded from liability for end-user misuse of otherwise lawful systems.
The statute also creates one of the country's first state-level AI regulatory sandboxes. The Texas Department of Information Resources administers a 36-month testing program under which participating firms face relaxed enforcement in exchange for quarterly performance reporting. Texas is using the sandbox to pull developers into a supervised space rather than chasing them into adjacent jurisdictions, a bet that controlled visibility is worth more than deterrence.
An advisory body, the Texas Artificial Intelligence Council, has been seated. The seven-member council has no rule-making authority; its role is to brief legislators on emerging issues and to develop legislative recommendations between sessions. The council's early agenda is expected to focus on governmental use of AI, a category the statute regulates more heavily than private deployment.
Enforcement posture in the opening months has been preparatory rather than prosecutorial. The attorney general's office has prioritized building the complaint intake system, hiring investigators, and issuing initial guidance to regulated entities. Public-facing action is most likely to appear first in the categories the statute treats as uncurable: intentional deception that induces self-harm, deepfake sexual imagery, and deployments that target constitutional rights.
The federal context complicates the picture. The White House's March National Policy Framework for Artificial Intelligence urged Congress to preempt state AI laws that impose undue burdens, and Senator Marsha Blackburn's TRUMP AMERICA AI Act discussion draft pushes in the same direction. Texas Republicans have mixed views on that preemption push.
The state's attorney general is on record defending TRAIGA's role as a backstop to federal inaction, and the sandbox provision gives Texas a friendlier narrative for industry than most state statutes offer.
For companies deploying AI at scale, Texas now matters in a specific way that it did not on December 31. The combination of intent-based prohibitions, a cure period, the NIST-aligned affirmative defense, and the sandbox creates a compliance playbook that is easier to execute than it is to ignore.


