AI Quarterly | A Review of AI Law, Policy & Practice | April 2026

Our AI Quarterly publication brings together the firm’s latest AI-focused practical insights, key developments, and upcoming programs all in one place, including relevant writings, events, and firm news.

Insights

Cybercrime Trends to Watch: Takeaways from the FBI’s 2025 IC3 Annual Report

On April 6, 2026, the Federal Bureau of Investigation (FBI) released its 2025 IC3 Annual Report, which provides key trends, case data, and other statistics related to the FBI’s ongoing efforts to combat emerging cybersecurity threats. For the first time, the IC3 report documents the growing use of AI by cybercriminals to conduct successful fraud schemes by generating convincing phishing emails, synthetic video content, and voice cloning. The FBI received more than 22,000 complaints referencing AI, with adjusted losses exceeding $893 million.

“Show Your Work, AI”: Congress Pushes for AI Model Transparency

On March 26, 2026, a bipartisan group of U.S. lawmakers introduced H.R.8094, the AI Foundation Model Transparency Act of 2026 (AI FMTA). At its core, the AI FMTA would require developers of certain large AI models, like ChatGPT or Claude, to publicly disclose key information about how the models are trained, what the models are designed to do, where the limitations and risks lie, and how the models are evaluated and monitored. The purpose is to provide the public with transparency but not to regulate AI.

Key AI, Cybersecurity, and Privacy Takeaways from the NAIC 2026 Spring Meeting

On March 22–25, the National Association of Insurance Commissioners (NAIC) held its 2026 Spring National Meeting in San Diego. During the meeting, the Innovation, Cybersecurity, and Technology Committee, along with its working groups on Third-Party Data and Models, Big Data and Artificial Intelligence, and Cybersecurity, addressed key developments in oversight of third-party data and models, insurer use of artificial intelligence, cybersecurity preparedness, and consumer privacy.

California Jumps into AI Procurement with State Governing Principles in an Executive Order

On March 30, 2026, California Governor Gavin Newsom signed executive order N-5-26, aimed at governing the responsible procurement and deployment of generative artificial intelligence across California’s government. The order builds on executive order N-12-23, issued in September 2023, by directing a series of actions across multiple state agencies, with most deliverables due within 120 days.

The Trump Administration’s AI Framework: Key Federal Policy Priorities and Legislative Recommendations

On March 20, 2026, the Trump Administration released its National Policy Framework for Artificial Intelligence, a legislative recommendation document intended to guide Congress in establishing a unified federal approach to artificial intelligence governance. The White House’s new AI Framework follows Senator Marsha Blackburn’s March 18, 2026, legislative discussion draft, the Trump America AI Act. Blackburn’s draft generally reflects the priorities outlined in the AI Framework, with notable differences in the areas of copyright protections, liability for AI developers, and the proposed repeal of Section 230 of the Communications Act.

EU Privacy Regulators Weigh In on the Proposed EU Biotech Act: Key Takeaways for Life Sciences Companies

On March 10, 2026, the European Data Protection Board and the European Data Protection Supervisor issued a joint opinion on the European Commission’s proposed EU Biotech Act—a forthcoming legislative framework expected to materially affect how clinical trials are designed, conducted, and governed in the EU. The proposal, introduced in December 2025, would amend key EU life sciences legislation, including the EU Clinical Trials Regulation, and introduce new requirements for the use of advanced technologies such as artificial intelligence across the medicinal product lifecycle.

U.S. Senator Marsha Blackburn Proposes National AI Legislative Framework

On March 18, 2026, U.S. Senator Marsha Blackburn issued an AI legislative framework discussion draft, the Trump America AI Act. According to Blackburn, the draft is intended to codify President Trump’s December 11, 2025, Executive Order establishing a uniform federal AI policy. Blackburn stated, “[President Trump] called on Congress to pass federal standards and protections to solve the patchwork of state laws that has hindered AI innovation.” The AI discussion draft, Blackburn said, is intended to broadly protect “children, creators, conservatives, and communities from exploitation, abuse, and censorship and ensure American AI companies can innovate without cumbersome regulation.”

Threat Actors Exploit Google’s Gemini to Accelerate Cyberattacks

Google Threat Intelligence Group (GTIG) has reported that cybercriminals—in particular, state-sponsored threat actors from North Korea, Iran, China, and Russia—are misusing Gemini, Google’s large language model, to support all stages of their attack life cycle. Specifically, GTIG observed threat actors using Gemini to code and script tasks, accelerate reconnaissance, research publicly known vulnerabilities, and enable malware development and post-compromise activity.

Federal Court Rules Using AI Tools Can Waive Privilege, Even if Privileged Information Is Input into Them

On February 10, 2026, the Southern District of New York held that a criminal defendant could not claim attorney-client privilege over documents he produced using a commercially available artificial intelligence tool—even though he had input privileged information from his lawyers into the tool. This case is of likely interest to companies working to manage internal uses of AI tools, as well as to corporate legal departments.

California Attorney General Announces Investigative Sweep into “Surveillance Pricing”

On January 28, 2026, California Attorney General Rob Bonta announced an investigative sweep targeting “surveillance pricing” practices among businesses in the retail, grocery, and hotel sectors. The investigation focuses on companies that use consumers’ personal information to set individualized prices.

Spanish DPA Highlights Privacy Risks in GenAI Content Creation

In early January 2026, the Spanish Data Protection Authority (Agencia Española de Protección de Datos, or AEPD) issued new guidance on the privacy and data protection risks associated with uploading images or photos—whether directly or indirectly identifying individuals—into generative AI tools. The guidance is particularly focused on situations where those images are hosted by third-party online services or digital platforms.

FTC Reverses Rytr Consent Order Amid Push for Federal AI Standards

On December 22, 2025, the Federal Trade Commission (FTC) set aside its 2024 consent order against Rytr, a generative-AI-powered company, concluding that the original complaint “failed to satisfy the legal requirements of the FTC Act” and that the order unduly burdened AI innovation in violation of the Trump Administration’s January 2025 AI Executive Order and America’s AI Action Plan, which prioritize fostering AI adoption.

New York Regulates Large Artificial Intelligence Models

On December 19, 2025, just eight days after President Trump issued the Executive Order Ensuring a National Policy Framework for Artificial Intelligence to challenge burdensome state laws that regulate artificial intelligence, New York Governor Kathy Hochul signed the Responsible Artificial Intelligence Safety and Education (RAISE) Act. The RAISE Act imposes transparency, compliance, safety, and reporting requirements on certain developers of large “frontier” AI models. The RAISE Act took effect March 19, 2026.

Publications

  • April 6, 2026 – As a part of Alston & Bird’s Health Data Monetization video series, Jennifer Everett presents “Designing Privacy-First Monetization,” discussing how considerations such as AI governance, downstream use, and national-security rules shape modern data strategies. Dan Felz presents “From Data to Durable Growth,” outlining common monetization models, from internal analytics and AI to partnerships and research, and underscoring why market-ready governance is essential to scaling these initiatives responsibly.
  • March 31, 2026 – Dorian Simmons discusses the uptick in lawsuits involving AI voice agents and AI-powered call monitoring services, providing insights on what the courts are saying and actions businesses can take to mitigate litigation risk in Alston & Bird’s Class Action & MDL Roundup.
  • January 2026 – Valarie Williams and Alvaro Montenegro published “California Prohibits Shared Pricing Algorithms and Eases Antitrust Pleading Standards” in The Journal of Robotics, Artificial Intelligence & Law.

Speaking Engagements

  • April 22, 2026 – Lance Taubin will speak on the panel “Your Company’s AI Systems: The New Target for Cyber Attackers” at the Incident Response Forum D.C. 2026.
  • April 13–15, 2026 – Sean Sullivan spoke on the panel “Transformative Tech Transactions – Contracting Strategies for Embracing Digital Health and Patient Engagement” at the AHLA Health Care Transactions Conference, examining key legal, regulatory, and contracting considerations for digital health, AI, interoperability, remote patient monitoring, and patient engagement technologies.
  • April 14, 2026 – Gillian Clow and Kaitlin Owen presented “AI in Litigation: Ethical Considerations and Practical Tips” for the 13th program in the Alston & Bird AI Legal Insights: Shaping Tomorrow webinar series.
  • March 30, 2026 – Jennifer Everett moderated and Cynthia Cole spoke on the panel “What the Next Generation of Oversight and Enforcement Should Look Like,” focusing on AI legislation, enforcement, and policy, at Alston & Bird’s luncheon during the IAPP Global Privacy Summit.
  • March 25–27, 2026 – Alex Brown moderated and spoke on the panel “Surveillance Pricing: Who’s Watching Your Wallet?” during the 2026 ABA Antitrust Spring Meeting, which detailed how companies using AI pricing tools are navigating increased regulation.
  • March 18, 2026 – Gillian Clow presented “Litigating the Algorithm: AI Underwriting, Bias Claims, and Emerging Case Law” as a part of the ABA TIPS Life, Health & Disability Insurance Committee’s Lunch and Learn webinar series.
  • March 18, 2026 – Julie Mediamolle and Courtney Quirós presented “Avoid the PAIn: AI Disclosures and Emerging Securities Liability” and Cynthia Cole and Dorian Simmons presented “Data Democracy to Digital Anarchy: Where Are We Headed Now” during the 12th program in the Alston & Bird AI Legal Insights: Shaping Tomorrow webinar series that took place during Alston & Bird’s 2026 Annual Alumni, Friends & Client CLE.
  • March 12, 2026 – Jennifer Pike and Sara Pullen discussed key takeaways from the 2026 HIMSS Global Health Conference, including how health care organizations are moving beyond AI experimentation to focus on maturity, ROI, and governance.
  • March 3–4, 2026 – Dorian Simmons presented “AI Product Counseling: Dos and Don’ts” during the 2026 Privacy & Technology Law Forum.
  • February 27, 2026 – Sean Sullivan presented “Deep Dive into Due Diligence and Contracting: Key Steps to Take When Vetting Vendors and AI Tools for Healthcare Applications” at the ACI Healthcare AI Summit.
  • February 13, 2026 – Sean Sullivan presented “Drinking from the Data Firehose: Legal Strategies for Deploying AI to Manage RPM, Interoperability, Data Integration, and Patient Engagement Tools in a Fragmented Digital Ecosystem” at the AHLA Winter Institute: Advising Providers and AI in Health Care.
  • January 27, 2026 – Kaitlin Owen and Gillian Clow presented “The Ethical Use of AI in Litigation” at a program hosted by the Southern California Chapter of the Association of Corporate Counsel.
  • January 15, 2026 – Yuri Mikulka presented “Litigator’s Guide to AI: Practical Applications and Insights.”
  • January 13, 2026 – David Keating, Hyun Jai Oh, and Dorian Simmons presented “The Rise of Agentic Commerce: Innovation, Opportunity, and Risk,” the 11th program in the Alston & Bird AI Legal Insights: Shaping Tomorrow webinar series.

News

January 26, 2026 – Alex Brown is featured on the podcast ABA Antitrust Law Section: Our Curious Amalgam discussing “What’s Happening with AI and Data Privacy? An Update from the PRIS Committee.”

Press Releases

Alston & Bird Adds Technology & Privacy Partner in Silicon Valley

Alston & Bird announced that Cynthia Cole has joined its Technology & Privacy Group as a partner in the firm’s Silicon Valley office, advancing the firm’s growth in technology transactions, data privacy, artificial intelligence, and cybersecurity throughout California and across the firm’s 13 offices.


AI Quarterly is produced by Alston & Bird's Artificial Intelligence (AI) Team, led by Cari Dawson, Brian Elsworth, and Sean Sullivan. It is edited by Alex Brown and Jennifer Everett.

You can subscribe for future updates by completing our publications subscription form.


Media Contact
Alex Wolfe
Communications Director