We love to think that algorithms and AI are infallible and all-knowing. The reality in 2026 is that they still aren’t. Sometimes AI is a confident guesser with bad days and zero shame. When it screws up, it often doesn’t apologize, or it apologizes profusely before doing the same thing again.
An Uber Driver Stuck Going the Long Way Around
This one happened to me about a week ago. I was at the West Palm Beach Brightline station waiting on my Uber. My driver was sitting on S Rosemary Ave in a white Jeep, pointed straight north, with the pickup spot literally one block over. Easy, right? Turn right on Datura and you’re there. Instead, the app drew a giant U-shape: way west to Quadrille, north past the pickup, then back east on Datura to come in from the other side. 0.7 miles to go four blocks. I watched the Jeep icon do laps and wondered what made the app do that to my driver. Several minutes later he finally pulled up, and I asked him about it. He just shrugged and said that kind of thing happens “all the time.” Not a glitch, not a one-off, just Tuesday. The app was routing, just not in a way that gives anyone confidence.
McNuggets to Infinity and…
Back in June 2024, McDonald’s ended its three-year drive-thru AI partnership with IBM, probably due in part to viral TikToks showing the system going nuts. One video showed two customers begging the AI to stop as it kept piling Chicken McNuggets onto their order, eventually hitting 260. Another order got bacon added to ice cream. The shutoff was complete across 100-plus test locations by late July. Confidently wrong is still wrong, folks.
The Chatbot That Invented an Airline Policy
In late 2022, Jake Moffatt asked Air Canada’s website chatbot about bereavement fares after his grandmother died. The chatbot told him he could apply for the discount within 90 days of buying the ticket. He bought, flew, applied, and got denied, because the actual policy requires the request before travel. Air Canada’s defense before the tribunal was, no joke, that the chatbot was “a separate legal entity that is responsible for its own actions.” The tribunal called that submission “remarkable” and in February 2024 ordered Air Canada to pay him CAD $812.02.
Apple Intelligence Tells the BBC the Wrong News
In December 2024, Apple’s new AI notification summarizer pushed a fake BBC headline to iPhones claiming that Luigi Mangione, the suspect in the UnitedHealthcare CEO killing, had shot himself. He hadn’t. The BBC formally complained. Reporters Without Borders called for the feature to be pulled, noting that facts cannot be decided by a roll of the dice. Apple eventually paused news summaries to fix it, because no one wants a “feature” that has news agencies thinking about lawsuits.
So What Do We Actually Do About It?
Stop treating the little AI icon, the chatbot, or the auto-summary as always right. The AI or algorithm doesn’t know the route is dumb, doesn’t know the policy doesn’t exist, doesn’t know your driver is doing laps. You do. Your eyeballs and your gut are still the best sensors in the room. Override the robot when it’s clearly confused or just plain wrong. “The app told me to” has never held up as a defense, and it never will.
Sources: CNBC, Restaurant Business Online, AI Incident Database, CBS News, American Bar Association, AI Business, CNN, The Register, Tom’s Guide, Reporters Without Borders