Grassley Pushes for Stronger AI Rules Following Judicial Use of ChatGPT and Perplexity – PYMNTS.com

Two federal judges have acknowledged that members of their staff used artificial intelligence tools in drafting court orders that U.S. Senate Judiciary Committee Chairman Chuck Grassley described as “error-ridden,” according to Reuters.
In letters released Thursday by Grassley’s office, U.S. District Judge Henry Wingate of Mississippi and U.S. District Judge Julien Xavier Neals of New Jersey confirmed that their respective chambers used AI-generated materials in decisions that bypassed normal review procedures. Both judges said they have since strengthened internal protocols to prevent similar issues.
According to Reuters, Judge Neals, who serves in Newark, said in his letter that a draft decision in a securities lawsuit “was released in error – human error – and withdrawn as soon as it was brought to the attention of my chambers.” He explained that a law school intern had used OpenAI’s ChatGPT for research without approval or disclosure. Neals stated that his chambers has since implemented a written AI policy and improved its review process. Reuters had earlier reported that AI-generated content had appeared in the draft, citing a source familiar with the situation.
Related: OpenAI Introduces ChatGPT Atlas, Challenging Google’s Dominance in Web Browsing
Judge Wingate, based in Jackson, Mississippi, said in his letter that a law clerk in his office used Perplexity “as a foundational drafting assistant to synthesize publicly available information on the docket.” He described the public posting of the draft as “a lapse in human oversight.” The original order, part of a civil rights case, was later removed and replaced, with Wingate previously attributing the change to “clerical errors,” per Reuters.
Grassley initiated the inquiry after attorneys involved in the cases reported factual inaccuracies and other errors in the rulings. In a statement released Thursday, the senator commended the judges for their transparency and urged the judiciary to establish clearer policies governing AI use. “Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law,” Grassley said.
Across the country, courts have been grappling with the implications of AI in legal practice. Lawyers have faced fines and other penalties for submitting filings containing unverified or fabricated information produced by AI tools, according to Reuters.
Source: Reuters