
BLOG

Welcome to the Viabl blog — where we explore the real-world impact of AI and software engineering on business efficiency, risk management, and product success. From practical frameworks to cautionary tales, these posts share lessons, strategies, and stories that help teams make smarter technology decisions. Whether you're building with AI or just starting to evaluate it, you’ll find actionable insights here.

That Time AI Almost Killed a Family

Artificial intelligence can offer tremendous value, but only when we carefully manage the potential harm caused by errors. Without thoughtful safeguards, the consequences can be severe or even life-threatening.

Viabl has developed a simple framework for minimizing AI harm:

[Image: Viabl's AI framework]

In this post, we'll explore a recent example of an AI-generated product gone dangerously wrong: an AI-authored guidebook intended to help readers identify edible mushrooms, which tragically resulted in a family being poisoned.

Although this isn't a software system, Viabl’s framework is applicable to any AI-driven task, so let's take a closer look.


What Happened?

Recently, a family in the UK ended up hospitalized after following advice from a guidebook titled "Mushrooms UK: A Guide to Harvesting Safe and Edible Mushrooms." The book, purchased from a major online retailer, claimed to be a trustworthy resource, featuring detailed descriptions and images intended to assist readers in safely identifying edible mushrooms.

Unfortunately, the guidebook contained dangerously inaccurate information.

A closer investigation revealed that the mushroom images were AI-generated and included misleading or entirely incorrect identifications. The text also showed telltale signs of AI generation, including randomly inserted questions and leftover conversational phrases. A particularly alarming example was:

“In conclusion, morels are delicious mushrooms which can be consumed from August to the end of Summer. Let me know if there is anything else I can help you with.”

Where Did the AI Design Go Wrong?

Viabl's AI Framework stresses minimizing harm in high-stakes scenarios. Here are the major mistakes the “authors” of the book made (a code sketch of these checks follows the list):

  • Ignoring the high risk of harm: Mushroom identification is a high-stakes task; incorrect information can result in serious illness or death.

  • No human oversight: The guidebook’s content and images were mostly, if not entirely, AI-generated without expert verification, meaning crucial safety decisions relied on unchecked AI outputs.

  • No clear disclosure: The fact that the guidebook’s content was AI-generated was never disclosed, misleading users into believing they were receiving expert-authored advice, thus removing the final possible human check!
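
To make these checks concrete, here's a minimal sketch of how they might be encoded as a publication gate. This is our own illustration rather than the framework diagram above; the RiskLevel enum, the ContentPlan fields, and the gating rules are all hypothetical:

    from dataclasses import dataclass
    from enum import Enum

    class RiskLevel(Enum):
        LOW = 1
        MODERATE = 2
        HIGH = 3  # errors can cause serious illness or death

    @dataclass
    class ContentPlan:
        """Hypothetical description of an AI-assisted publication."""
        risk: RiskLevel
        expert_reviewed: bool   # has a qualified human vetted the output?
        ai_use_disclosed: bool  # does the reader know AI was involved?

    def safe_to_publish(plan: ContentPlan) -> bool:
        """Gate publication on the three checks the guidebook failed."""
        if plan.risk is RiskLevel.HIGH and not plan.expert_reviewed:
            return False  # no human oversight on a high-stakes task
        if not plan.ai_use_disclosed:
            return False  # disclosure is the reader's last line of defense
        return True

    # The mushroom guidebook, as published:
    guidebook = ContentPlan(risk=RiskLevel.HIGH,
                            expert_reviewed=False,
                            ai_use_disclosed=False)
    assert not safe_to_publish(guidebook)

Note that disclosure is a hard requirement at every risk level: it is the one check that lets readers apply their own caution when everything upstream has failed.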

What Would Have Been a Better Design?

For high-stakes tasks like mushroom identification, rigorous validation is essential. As always, there is a trade-off between speed and caution.

Here's how Viabl recommends approaching similar scenarios:

  1. Maximum caution: Have an expert mycologist write the book. AI can be used for light editing, but the final output must be vetted by an expert.

  2. Moderate caution: AI can draft initial content, but it must be extensively reviewed by qualified experts before publication. AI-generated images should be replaced by authentic, verified photography.

For something as high-stakes as mushroom identification, we recommend maximum caution.
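
Whichever level you choose, the common invariant is that nothing reaches readers without expert sign-off. Here's a minimal sketch of that gate; the Review type and both functions are our own hypothetical illustration, and in a real workflow expert_review is a person, not a function:

    from dataclasses import dataclass

    @dataclass
    class Review:
        """Outcome of a human expert's pass over a draft."""
        approved: bool
        corrected_text: str
        notes: str = ""

    def expert_review(draft: str) -> Review:
        # Stand-in for the human step: a qualified mycologist vets every
        # claim. This toy version rejects anything not yet hand-checked.
        return Review(approved=False, corrected_text="",
                      notes="No expert has verified these identifications.")

    def publish(draft: str) -> str:
        """Nothing ships unless a qualified expert has signed off."""
        review = expert_review(draft)
        if not review.approved:
            raise RuntimeError(f"Publication blocked: {review.notes}")
        return review.corrected_text  # only expert-vetted text reaches readers

    # An AI-drafted claim like the one quoted above never makes it out:
    try:
        publish("Morels can be consumed from August to the end of summer.")
    except RuntimeError as err:
        print(err)

The point of the sketch is where the failure surfaces: the draft is blocked at publication time, before any reader can act on it.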


What's Next?

In our next installment, we’ll examine a days-old AI disaster and break down how careful design could have prevented it.