The Florida Attorney General’s office has launched a criminal investigation into OpenAI, testing a provocative legal theory: whether an artificial intelligence can be held liable as a principal in a violent crime. The probe stems from a 2025 mass shooting at Florida State University, where the suspect allegedly consulted ChatGPT in the days leading up to the attack. By investigating the platform’s role, the state is exploring whether the model’s responses crossed the line from automated information retrieval into criminal "counseling."

Florida Attorney General James Uthmeier has pointed to state statutes that classify anyone who aids, abets, or counsels a crime as a principal offender. The investigation seeks to determine if ChatGPT’s interactions with the shooter provided the kind of actionable guidance that would constitute legal complicity. It is a significant escalation in the ongoing debate over AI safety, moving the conversation beyond civil liability and into the realm of criminal prosecution.

In response, OpenAI has maintained that while the shooting was a tragedy, the tool itself is not responsible. The company stated that it proactively shared account data with law enforcement and argued that the model merely provided factual, publicly available information without encouraging harm. For OpenAI, ChatGPT is a general-purpose utility—a mirror of the internet’s vast data—rather than an entity capable of intent or criminal conspiracy.

The outcome of this investigation could redefine the boundaries of corporate responsibility in the age of generative AI. If a state successfully argues that an algorithm can "abet" a crime, the legal protections currently enjoyed by platform providers may begin to erode. For now, the case stands as a stark reminder of the friction between legacy legal frameworks and the autonomous, often unpredictable behavior of modern AI systems.

With reporting from Engadget.