The legal boundary between a tool and an accomplice is being tested in Florida. Attorney General James Uthmeier has initiated a criminal investigation into OpenAI after reviewing chat logs between its flagship AI, ChatGPT, and Phoenix Ikner, a 20-year-old student charged in a university shooting that killed two people and wounded six last year. The probe seeks to determine whether the "significant advice" the bot allegedly provided to Ikner gives rise to criminal liability under the state's aiding and abetting statutes.
In a public statement, Uthmeier framed the investigation through a stark hypothetical: "if ChatGPT were a person," he argued, the nature of its interactions with the suspect would warrant murder charges. The case represents a rare and aggressive attempt to apply criminal law to the outputs of a large language model. While social media platforms and software providers have long enjoyed broad immunity from liability for what users do with their products, Florida's move suggests a growing appetite to treat generative AI as an active agent with its own set of responsibilities.
For OpenAI, the investigation underscores the limitations of the safety guardrails designed to prevent the software from facilitating harm. The company has asserted that the bot is not responsible for the user’s actions, yet the case forces a difficult conversation about the predictability of these systems. As AI moves from passive information retrieval to providing tactical or strategic guidance, the legal system is being forced to decide whether a corporation can be held accountable for the "logic" its machines provide to those intent on violence.
With reporting from Ars Technica.