AI Governance Takes Shape: Breaking Down Washington’s Latest AI Frameworks
In a matter of days, Washington saw a flurry of competing artificial intelligence (AI) policy frameworks, signaling that the push for comprehensive federal AI legislation has entered a new, consequential phase. Most notably, the White House released its National AI Policy Framework. This nonbinding yet highly influential legislative roadmap calls for targeted action on children’s safety, copyright, community protections and American AI competitiveness, while advocating for federal preemption of most state AI laws. Sen. Marsha Blackburn (R-TN) also unveiled a discussion draft of her national AI framework, the TRUMP AMERICA AI Act, staking out a broader and more prescriptive position than the White House recommends.
Below is a detailed overview and comparison of key proposals, along with an assessment of next steps as congressional leadership and key committees begin developing their responses to the administration’s recommendations.
White House National Policy Framework on AI
The White House’s National Policy Framework on AI takes a middle-ground approach, addressing key concerns around children, copyright, conservatives and communities (the “four C’s”), while advocating for a light-touch regulatory posture and federal preemption of most state AI laws. The document is a nonbinding legislative recommendation that will likely carry significant influence as Congress moves to address AI regulation broadly. It does not prescribe a specific legislative vehicle, but its key elements are expected to inform bills already circulating among Republicans.
A. Protecting Children and Empowering Parents
The framework urges Congress to equip parents with the tools to manage their children’s online privacy, screen time, content and accounts. It calls for age-assurance requirements for certain AI platforms and mandates safety features to reduce risks of sexual exploitation and self-harm. Importantly, it urges Congress to preserve states’ authority to enact their own child protection laws, including prohibitions on child sexual abuse material (CSAM).
B. Safeguarding and Strengthening American Communities
The framework addresses four key areas under this pillar:
- Energy Costs. Calls for codifying the Trump administration’s Ratepayer Protection Pledge to shield consumers from electricity cost increases driven by data center expansion and for streamlining federal permitting for AI infrastructure.
- Scams. Urges Congress to strengthen efforts combating AI-enabled impersonation and fraud, particularly targeting vulnerable populations.
- National Security. Calls for ensuring national security agencies have the technical capacity to assess frontier AI models and associated risks.
- Small Businesses. Urges Congress to provide resources to small businesses to support AI adoption and deployment.
C. Respecting Intellectual Property Rights and Supporting Creators
The administration takes the position that training AI on copyrighted material does not violate existing copyright law, while leaving final resolution to the courts. It calls on Congress to consider a licensing framework for collective rights systems and to establish federal protections against unauthorized distribution or commercial use of AI-generated replicas of a person’s voice, likeness or other attributes, with clear exceptions for parody, satire, news reporting and other First Amendment-protected expression. Congress is urged to monitor fair use developments and fill any gaps if needed, while refraining from near-term action that could interfere with ongoing judicial resolution.
D. Enabling Innovation and Ensuring American AI Dominance
The framework calls for regulatory sandboxes to advance U.S. AI leadership and for federal dataset accessibility for training purposes, and it explicitly directs Congress not to create a new federal rulemaking body to regulate AI.
E. Educating Americans and Developing an AI-Ready Workforce
The framework urges Congress to integrate AI into education and workforce training through nonregulatory means, and to expand research into task-level workforce shifts driven by AI.
F. Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws
The framework directs Congress to preempt overly burdensome state AI laws while preserving traditional state police powers, zoning authority and states’ ability to govern their own use of the technology. It cautions that preemption must cover areas best suited to a national standard, including rules governing AI development, Americans’ use of AI and liability frameworks that would penalize developers for third-party misuse.
TRUMP AMERICA AI Act Discussion Draft – Sen. Marsha Blackburn (R-TN)
Sen. Marsha Blackburn’s (R-TN) bill proposes a comprehensive federal framework governing AI, online platforms and digital content with the stated goal of promoting AI innovation while safeguarding children, workers, creators, consumers and national security. The legislation is intentionally expansive, combining technology policy, labor reporting, platform accountability, child safety, intellectual property (IP) and national security oversight into a single statutory structure.
The bill applies broadly to companies and entities engaged in the development, deployment or operation of AI technologies, particularly AI chatbots, advanced AI systems and algorithmically driven online platforms. It relies on a multiagency enforcement and coordination model, with responsibilities allocated across the Federal Trade Commission (FTC), Department of Justice (DOJ), Department of Energy (DOE), National Institute of Standards and Technology (NIST), Department of Labor (DOL), Department of Commerce (DOC) and the National Science Foundation (NSF). While the bill establishes significant new federal standards, it largely preserves state authority through savings clauses, preempting state law only where direct conflicts arise.
This bill is best understood as a negotiating tool with the Trump administration as congressional negotiations accelerate. It is strategically broad and designed to preserve Sen. Blackburn’s status as a key negotiator for any final legislative product.
A. AI Chatbots and Duty of Care (Title I)
A central component of the bill is the creation of a federal duty of care for AI chatbot developers. Developers would be required to exercise reasonable care to prevent and mitigate foreseeable harms caused by their systems, with liability assessed under a negligence-style standard. The bill empowers the FTC to issue rules defining minimum safeguards and to enforce violations as unfair or deceptive practices. State attorneys general would also be empowered to bring enforcement actions, raising questions about the extent to which AI developers could face overlapping federal and state scrutiny.
B. AI and Workforce Impacts (Title II)
The bill addresses the labor impacts of AI by mandating regular disclosure of AI-related job effects. Covered entities, including publicly traded companies, federal agencies and certain private firms, would be required to report data on layoffs, hiring, retraining and unfilled positions attributable to AI adoption. The bill tasks DOL with collecting, analyzing and publicly releasing this information.
C. Repeal of Section 230 (Title III)
The bill sunsets and repeals Section 230 of the Communications Decency Act after a two-year delay. The repeal eliminates a long-standing immunity for online platforms with respect to third-party content, exposing companies to substantially heightened litigation and compliance risk.
D. Protecting Children Online (Title IV)
Child protection is a major focus of the legislation. The bill establishes a statutory duty of care for platforms likely to be used by minors. Platforms must implement default safety protections, limit personalized recommendation systems that encourage compulsive use, provide robust parental controls, and maintain reporting and response mechanisms for harmful content. Covered platforms would be subject to transparency audits and enforcement by the FTC and state attorneys general.
E. AI Companions and Criminal Provisions (Title V – GUARD Act)
The bill further targets AI systems that simulate emotional or interpersonal relationships through the GUARD Act, which restricts so-called AI companions. Minors would be prohibited from accessing these systems, and covered entities would be required to deploy meaningful age-verification processes to distinguish adults from minors.
F. Risk-Based Framework for Advanced AI (Title VI)
The bill establishes a risk-based regulatory framework for advanced AI systems, defined by computational training thresholds. Developers of such systems would be required to participate in a DOE-led evaluation program involving pre-deployment testing, reporting and risk assessment. Noncompliance would trigger severe civil penalties.
G. AI Liability and Consumer Protection (Title VII – AI LEAD Act)
The bill establishes a federal AI liability regime under the AI LEAD Act. It clarifies standards for developer and deployer liability, introduces strict liability for defective or unreasonably dangerous AI systems, and prohibits contractual provisions that waive or unduly limit liability. A federal cause of action would allow suits by individuals, classes, states and the federal government. Foreign AI developers would also be required to register U.S. agents, creating accountability for overseas firms accessing the U.S. market.
H. Bias, Transparency, and Censorship Concerns (Title VIII)
Concerns over bias and censorship are addressed through mandatory audits of high-risk AI systems, with a particular emphasis on political viewpoint and affiliation discrimination. Covered entities would be required to conduct annual independent audits and provide ethics training for relevant personnel.
I. AI Innovation, Standards, and Research (Titles IX–XI)
Alongside the regulatory components, the bill invests heavily in AI innovation and infrastructure. It expands federal testbeds, research programs and standards development led by NIST, promotes international cooperation with allied nations, and restricts access for foreign adversaries.
J. Individual Voice and Visual Likeness Protections (Title XII – NO FAKES Act)
Title XII of the bill includes the NO FAKES Act, which establishes a new federal intellectual-property-style right that protects an individual’s voice and visual likeness against unauthorized digital replication. This right is specifically geared toward highly realistic, computer-generated likenesses, while carving out exceptions for legitimate uses such as authorized remastering or editing. The bill creates civil liability for publicly distributing, displaying or offering products or services that use an unauthorized digital replica, as well as for distributing tools primarily designed to create unauthorized replicas.
K. Rightsholder Subpoena Power (Title XIII – TRAIN Act)
Title XIII creates a subpoena mechanism that authorizes courts, at the request of rightsholders, to compel the production of documents “sufficient to identify with certainty the copyrighted works … that were used by the developer to train the generative artificial intelligence model.” Courts are directed to grant the subpoena request so long as it is in proper form and includes a properly executed declaration stating that the rightsholder has a subjective good-faith belief that the developer used some or all of at least one copyrighted work. Developers that fail to comply face a presumption that they made copies of copyrighted works. Rightsholders are prohibited from sharing the records they obtain, but do not appear to be prohibited from sharing information describing those records.
Practically speaking, this would function as a pre-discovery tool for rightsholders that suspect their copyrighted works were used by developers to train AI models.
L. Content Provenance Information Requirements (Title XIV)
The bill’s content provenance requirements can be split into three categories.
First, the bill directs federal authorities to facilitate the development of technical standards for provenance information and the detection of synthetic or synthetically-modified content. The bill assigns NIST responsibility for research, development and public education on provenance systems, synthetic content detection, and methods to authenticate or label digital content.
Second, the bill requires entities developing synthetic (or synthetically-modified) content to include content provenance information that indicates that the content is synthetic. The bill prohibits the removal of the content provenance information.
Third, the bill prohibits the nonconsensual training of an AI system with copyrighted material, so long as the material has content provenance information attached. The covered content is specifically the material described in 17 U.S.C. § 102. Notably, these protections only apply when content provenance information is attached.
This section allows for enforcement by the FTC (violations are considered unfair or deceptive acts or practices), state attorneys general and through private actions.
M. Fair Use (Title XV)
The bill clarifies that the “unauthorized reproduction, copying, or computational processing of copyrighted works” for virtually any AI purpose (training, fine-tuning, developing or creating) is not fair use under Section 107. Similarly, AI models created “through inference, distillation, or similar processes” are presumed to incorporate copyrighted material unless the developer can establish by clear and convincing evidence that the AI was developed using authorized materials or that the training information included no copyrighted expression. The bill also clarifies that any generated content that reproduces or derives from copyrighted works constitutes infringement.
Regarding derivative works, the bill clarifies that derivative works created with copyrighted expression included in the training data are deemed infringing works. Those works are not eligible for copyright protection.
Comparing the Frameworks
The White House’s National AI Policy Framework stakes out a middle ground between Sen. Blackburn’s sweeping proposal and narrower approaches, advocating for a light-touch regulatory posture, federal preemption of most state AI laws and targeted action across children’s safety, copyright, community protections and American competitiveness.
AI Framework Table
| | TRUMP AMERICA AI Act | WH National Policy Framework for AI | Winning the AI Race: AI Action Plan |
|---|---|---|---|
| Type of Action | Discussion Draft | Legislative Recommendation | Executive Order |
| Date | March 18, 2026 | March 20, 2026 | July 23, 2025 |
| Champion | Sen. Marsha Blackburn (R-TN) | Michael Kratsios and David Sacks | President Trump |
| Overview | Broad, wide-ranging bill which bundles AI safety and innovation, children’s online privacy, Section 230 repeal, copyright/IP protections and labor impacts into a single package. | A middle-ground approach: reducing regulatory uncertainty, reinforcing U.S. dominance, addressing copyright/IP concerns and guarding against censorship. | Nonbinding executive strategy document across three pillars: accelerate innovation, build infrastructure and lead internationally. Accompanied by three executive orders. Pro-innovation, deregulatory. |
| State Preemption | Limited approach. Generally applicable state laws are preserved. No broad preemption of AI-specific state rules. | Yes. Calls for preemption of burdensome AI state laws in favor of a national standard, while carving out exceptions for online privacy, consumer protection and fraud prevention. | Strong federal preference. Calls for a national legislative framework to preempt onerous state AI laws. |
| Center for AI Standards and Innovation (CAISI) | Codified. | Not addressed. | Expands and Redirects CAISI. Directs CAISI to evaluate Chinese AI models; also calls for revising the NIST AI Risk Management Framework. |
| National AI Research Resource (NAIRR) | Codified. Less granular on governance and mechanics. | Not addressed. | Reaffirms and Expands. |
| AI Safety | Yes. Creates a risk-based evaluation program. | Not addressed. | Selective. Emphasizes national security risk evaluations for frontier AI models. Frames safety primarily through a security lens rather than developer mandates. |
| Liability | Yes. Establishes developer and deployer liability for harms. Creates a federal cause of action; requires foreign AI developers to register U.S. agents. | Not addressed. | Not addressed. |
| Online Safety | Yes. Imposes duty-of-care obligations on AI chatbot developers, incorporates the Kids Online Safety Act and bans minors from using AI companion apps. | Yes. Calls for legislation protecting children from AI harms, including age-assurance requirements, parental controls over privacy and content and platform safeguards against minor sexual exploitation. | Not addressed. |
| Section 230 | Yes. Repeals Section 230. | Not addressed. | Not addressed. |
| Copyright | Yes. Requires content provenance standards for synthetic content. Clarifies that AI training on copyrighted materials is not fair use. Deems AI-generated derivative works infringing unless developer proves authorized material was used. | Yes. Calls for legislation shielding American creators, publishers and innovators from infringing AI-generated outputs, balancing IP protections with free speech and innovation concerns. On unsettled questions around fair use, favors judicial resolution over congressional intervention. | Not addressed. |
| Workforce | Yes. Requires public companies and large private employers to disclose AI-related workforce impacts. | Yes. Urges Congress to promote AI literacy and workforce training through nonregulatory means, and to expand research into AI-driven workforce disruption. | Yes. Directs creation of an AI Workforce Research Hub; promotes AI literacy, skills development and worker retraining. |
| AI Bias/Censorship | Yes. Requires AI systems to undergo bias audits and mandates ethics training for developers. | Yes. Urges Congress to bar government coercion of tech companies to suppress or alter content for partisan or ideological purposes, and to establish recourse mechanisms for violations. | Yes. Directs NIST to remove DEI and bias-related language from AI RMF. Frames ideological neutrality as the anti-bias mechanism. |
| International | Yes. Directs NIST to build international coalitions on AI innovations and standards alignment. Less operationally detailed. | Indirect. Calls for equipping national security agencies with the technical capacity to assess frontier AI models and associated risks. | Yes. Export American AI as a global standard; counter Chinese influence in international governance bodies; align allies on export controls; ensure U.S. is a leader in AI standards globally. |
| Data Centers | Yes. Ratepayer protection provision restricting utilities from passing AI data center infrastructure costs onto residential consumers. | Yes. Urges Congress to codify the Ratepayer Protection Pledge requiring AI companies to fund new power generation and all infrastructure upgrades needed for data centers. | Yes. Streamlines federal permitting for data centers; focuses on expanding the power grid and building high-security data centers. |
| Name, Image and Likeness | Yes. Codifies voice and visual likeness rights, protecting individuals from unauthorized AI-generated replicas. Includes the NO FAKES Act. | Yes. Calls for a federal framework protecting individuals from unauthorized AI replicas of their voice or likeness, with exceptions for parody, satire and other First Amendment-protected uses. | Adjacent. Calls for combating synthetic media in the legal system. |
THIS DOCUMENT IS INTENDED TO PROVIDE YOU WITH GENERAL INFORMATION REGARDING AI POLICY FRAMEWORKS. THE CONTENTS OF THIS DOCUMENT ARE NOT INTENDED TO PROVIDE SPECIFIC LEGAL ADVICE. IF YOU HAVE ANY QUESTIONS ABOUT THE CONTENTS OF THIS DOCUMENT OR IF YOU NEED LEGAL ADVICE AS TO AN ISSUE, PLEASE CONTACT THE ATTORNEYS LISTED OR YOUR REGULAR BROWNSTEIN HYATT FARBER SCHRECK, LLP ATTORNEY. THIS COMMUNICATION MAY BE CONSIDERED ADVERTISING IN SOME JURISDICTIONS.