AI Disasters, Pt. 1: That Time NYC's Chatbot Kindly Entrapped Local Businesses
Artificial intelligence can be transformative, but only when it’s built on thoughtful system design. Ignore the fundamentals and the same technology can do real harm—to your company and to the customers who trust you.
Viabl has developed a framework for using AI effectively that will help you steer clear of these disasters.
Viabl's Framework for AI System Design
In this post, we’ll unpack the high‑profile failure of New York City’s MyCity AI chatbot and explore how it could have been prevented.
What Happened?
In October 2023, New York City rolled out the MyCity Business chatbot, promising entrepreneurs a simpler way to navigate thousands of pages of local regulations. Instead of combing through dense legal text, business owners could ask the bot a question and get a clear, direct answer.
The reality, however, didn’t live up to the promise.
An investigation by The Markup revealed that many straightforward questions submitted to the chatbot drew answers directing businesses to break the law.
Where did the AI design go wrong?
Viabl's framework tells us that the consequences of an error here are very high and that there’s no way to put a human in the loop.
For example, here’s a clear case documented by The Markup of the chatbot getting a critical law wrong:
Question: “Do landlords have to accept tenants who use rental assistance?”
Chatbot’s answer: “No, landlords are not required to accept tenants on rental assistance.”
Reality: In New York City, landlords generally must accept tenants regardless of their lawful source of income—including rental assistance—with only one narrow exception.
A landlord who relied on that faulty guidance could break the law, face costly lawsuits, damage their reputation, and harm prospective tenants.
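To make that combination concrete, here’s a minimal sketch of the gate it implies. This is an illustration of the idea, not Viabl’s full framework; the function name and labels are ours.

```python
def allow_ai_generated_answer(error_consequence: str,
                              human_in_loop: bool) -> bool:
    """Illustrative gate, not Viabl's full framework: let the AI answer
    freely only when mistakes are cheap or a reviewer can catch them."""
    if error_consequence == "high" and not human_in_loop:
        return False
    return True

# MyCity's situation: legal stakes, no reviewer between model and user.
assert allow_ai_generated_answer("high", human_in_loop=False) is False
```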
What would have been a better design?
At its core, MyCity’s bot is an information‑retrieval assistant. That approach is perfectly adequate for low‑stakes questions, like finding the hours of a neighborhood library. But when the answers carry legal or financial weight, extra safeguards are essential.
So how do you build those safeguards? Start by deciding just how cautious you need to be (a code sketch of all three levels follows this list):
Maximum caution: Skip AI‑generated answers altogether. Have the chatbot point users to a vetted source. Ideally, this is a human‑written FAQ that explains the rule in plain language.
Moderate caution: Let AI draft the FAQ, but require a subject‑matter expert to review and approve every entry before it goes live.
Baseline caution: Allow the chatbot to link directly to the official regulation and provide a brief AI‑generated summary, rather than a direct answer.
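Here’s a minimal sketch of what routing by caution level might look like. Everything in it (the CautionLevel names, the toy keyword matcher, the FAQ contents, and the placeholder URL) is an illustrative assumption, not MyCity’s actual implementation.

```python
from enum import Enum, auto

class CautionLevel(Enum):
    MAXIMUM = auto()   # vetted, human-written answers only
    MODERATE = auto()  # AI-drafted entries, expert-approved before launch
    BASELINE = auto()  # link the regulation, attach a labeled AI summary

# Hypothetical vetted FAQ: topic -> (approved answer, official source URL).
VETTED_FAQ = {
    "rental assistance": (
        "In NYC, landlords generally must accept tenants regardless of "
        "lawful source of income, including rental assistance.",
        "https://example.gov/source-of-income",  # placeholder URL
    ),
}

def classify_topic(question: str) -> str | None:
    """Toy keyword matcher standing in for a real intent classifier."""
    q = question.lower()
    return next((topic for topic in VETTED_FAQ if topic in q), None)

def ai_summarize(url: str) -> str:
    """Stub for an LLM summarization call; not implemented here."""
    return "(AI-generated summary of the regulation would go here)"

def answer(question: str, level: CautionLevel) -> str:
    topic = classify_topic(question)
    if topic is None:
        # Out of scope for the vetted content: defer instead of guessing.
        return "We don't have vetted guidance on that yet. Please contact 311."
    text, url = VETTED_FAQ[topic]
    if level is CautionLevel.MAXIMUM:
        # No generation at all: serve the human-written answer.
        return f"{text} (Official source: {url})"
    if level is CautionLevel.MODERATE:
        # The entry was AI-drafted but expert-approved before going live,
        # so serving it verbatim is still safe at query time.
        return f"{text} (Expert-reviewed. Source: {url})"
    # BASELINE: never assert the rule ourselves; link out, plus a summary
    # that is clearly labeled as machine-generated.
    return f"See the official regulation: {url}\n{ai_summarize(url)}"
```

Notice that in all three modes, the chatbot never invents a legal answer at query time; the riskiest thing it can do is summarize a document it links to.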
For high‑stakes systems like this one, Viabl also recommends adding AI‑prompt validation and retrieval‑quality scoring. We’ll cover those topics in upcoming posts.
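As a teaser, here’s a minimal sketch of the second idea: refuse to answer when the retrieved evidence is weak. It assumes a retriever that returns (passage, similarity) pairs scored in [0, 1]; the retriever and generate callables and the 0.8 threshold are illustrative assumptions.

```python
MIN_RETRIEVAL_SCORE = 0.8  # tune per domain; higher stakes, higher bar

def guarded_answer(question, retriever, generate):
    """Only generate an answer when retrieval found strong evidence."""
    passages = retriever(question)  # e.g. [("passage text", 0.91), ...]
    if not passages or max(s for _, s in passages) < MIN_RETRIEVAL_SCORE:
        # Weak evidence: refusing beats confidently citing the wrong rule.
        return ("I couldn't find a reliable source for that. "
                "Please check the official regulations or call 311.")
    context = "\n\n".join(text for text, _ in passages)
    return generate(question, context)  # answer grounded in retrieved text
```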
What's Next?
Next week we’ll look at another disaster: the time an AI poisoned a family.