In what might be the most bizarre technological hiccup of 2026, the Heber City Police Department in Utah has found itself at the center of a viral news storm after their AI report-writing software officially documented an officer "shape-shifting into a frog" during a routine arrest. The incident, which has resurfaced this week as a prime example of artificial intelligence hallucinations, occurred when the department's cutting-edge transcription tool inadvertently transcribed a Disney movie playing in the background of a body camera recording.

The "Princess and the Frog" Incident

The error originated from Draft One, a generative AI software by Axon designed to draft police narratives automatically from body camera audio. According to Heber City Police, an officer was processing a scene while the 2009 animated film The Princess and the Frog played nearby. The AI, unable to distinguish between the officer's dialogue and the movie's plot, seamlessly wove the film's magical elements into the criminal case file.

"The body cam software and the AI report-writing software picked up on the movie that was playing in the background," explained Sergeant Rick Keel. "That's when we learned the importance of correcting these AI-generated reports." Instead of a standard arrest log, the official draft claimed the officer had physically transformed into an amphibian, a plot point belonging to the fictional Prince Naveen, not a Utah law enforcement agent.

Efficiency vs. Accuracy: A Digital Dilemma

Despite the amphibious mix-up, Heber City officials remain optimistic about the technology's potential. Sgt. Keel noted that the software saves him approximately 6 to 8 hours of paperwork weekly, a significant efficiency boost for the department. The tool is designed to listen to audio from body-worn cameras and generate a first draft of the police report, which officers are then required to review and edit.

However, this incident highlights a critical vulnerability known as "hallucination," where AI models present incorrect or nonsensical information as fact. While a frog transformation is an obvious error that is easily caught, experts worry about more subtle mistakes—such as misquoted suspects or incorrect legal contexts—that could slip through unnoticed and impact criminal justice outcomes.

Sparking Legislative Action in Utah

The "Frog Cop" story has gained renewed traction this week as Utah lawmakers scramble to pass new AI regulations before the legislative session ends on March 6, 2026. The incident is serving as a colorful case study for bills like S.B. 205, which aims to establish strict transparency and oversight requirements for government use of generative AI.

Legislators are citing the error as proof that human oversight is non-negotiable. The proposed "Law Enforcement Artificial Intelligence Amendments" would require:

  • Mandatory disclaimers on any police record created with AI assistance.
  • Certification by the author that the report has been reviewed for accuracy.
  • Strict policies defining which AI tools are permissible for official use.

The Future of AI in Policing

As police departments across the United States increasingly adopt AI tools to combat staffing shortages and administrative burnout, the Heber City incident serves as a humorous but stark warning. Axon, the developer of Draft One, has emphasized that its software is intended to be a "force multiplier," not a replacement for human judgment.

For now, the Heber City Police Department has corrected the record: no officers have hopped away from duty, and their shape-shifting capabilities remain purely fictional. But as the story continues to circulate in "News of the Weird" columns this week, it stands as a lasting reminder that while AI can write the report, it can't always understand reality.