It sounds pretty great to have a chatty robot on your computer fetching files, but there are plenty of times when chatbots fail. And when they do, it’s nothing short of hilarious. Your boss may feel really excited to leverage conversational interfaces as a collaboration tool, but being a card-carrying member of the IT club means a few things:
- You don’t like when tech trends get blown out of proportion
- You hate meetings about “leveraging collaboration”
- Deep down, you’ve got an irresistible urge to troll some chatbots
Believe me, I feel the same way watching what might be the beginning of a chatbot-powered productivity revolution. According to PwC, 42 percent of consumers use chatbot messaging services, while 72 percent of execs have adopted a digital assistant.
On top of that, simple Facebook Messenger bots are happy to recite the weather prediction for your ZIP code, while many brands are employing bots to help you pick winter-ready outerwear or the perfect makeup color. In fact, you can even take a free massive open online course (MOOC) through Udemy to learn how to build a bot in an hour. But until this emerging chatbot technology reaches maturity, there are bound to be some mistakes.
Learning how and when chatbots fail
As the popularity of messaging apps skyrockets, there’s been an explosion of bots ready to talk to you, or just about anyone, really. Realizing the opportunity this presented, I took a broad sample of public-access bots to test quality in an exercise that felt a little like speed dating. It wasn’t long before I noticed that while AI messaging interface quality varies drastically, a few patterns are emerging, and I’m loving them.
1. The drunk bot
Inspired by tales of Comedy Central’s drunk George Washington bot, I headed straight for Drunk Eliza, one of the oldest bots on the block. Considered a proto-chatbot, the original Eliza was first developed in 1966. Her code was later programmed to decay over time, causing more typos to appear the longer you chat and making her appear more than a little drunk. Eliza may not be the most sophisticated conversational interface, but she’s considered an early case study in how easily humans assign personalities and develop connections to chatbots.
Given their ostensible intoxication, Drunk Eliza and her fellow drunk bots can get away with pretty limited AI programming:
Her personality and sophistication might not be anything to write home about, but there’s something really comforting about that clunky green-on-black interface. Drunk Eliza’s even been featured in an art show due to human tendencies to hit on her in moments of extreme loneliness.
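Drunk Eliza’s trick is easy to approximate. Below is a minimal, hypothetical sketch of an ELIZA-style responder: keyword patterns reflected back at the user, plus a decay function that injects more typos the longer the conversation runs. The patterns, probabilities, and names here are my own illustrative assumptions, not Drunk Eliza’s actual source.

```python
import random
import re

# ELIZA-style rules: match a keyword pattern, reflect the captured phrase back.
# These rules are invented for illustration, not taken from any real Eliza code.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please tell me more."

def decay(text, turns, rng):
    """Simulate 'drunkenness': randomly garble letters, more often as turns grow."""
    chars = list(text)
    for i in range(len(chars)):
        if chars[i].isalpha() and rng.random() < min(0.02 * turns, 0.5):
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def respond(message, turns, rng=None):
    """Pick the first matching rule's reply, then run it through the decay filter."""
    rng = rng or random.Random(0)  # seeded for reproducible demos
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            reply = template.format(m.group(1))
            break
    else:
        reply = FALLBACK
    return decay(reply, turns, rng)
```

At turn zero the bot answers cleanly (`respond("I feel lonely", 0)` gives "Why do you feel lonely?"); by turn twenty, nearly half the letters in each reply get scrambled.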
2. The broken bot
You really don’t have to look hard to find a bot that’s been hacked or abandoned. In fact, nearly half the bots I tested through Facebook Messenger didn’t have much to say at all. Move along, folks, nothing to see here:
3. The boring bot
With bots proliferating across Messenger and other apps, there was no shortage of bots with limited capabilities. One example was Dad Joke Bot. Sadly, like many other bots, he was more into broadcasting messages from a presumably limited database than actually having a conversation about his humor. He couldn’t even recognize parts of a joke he’d told me within the last 30 seconds:
4. The sassy bot
It didn’t take long to dig up another trend in relatively useless chatbots. WillBot is a prime example of perfect chatbot entertainment, complete with flowery language and a bunch of sass. These sassy bots have plenty to say, but it’s not very useful.
While it may be a buttload of fun to sit back and trade barbs with chatbots drawing from a database of retorts, there’s not much potential for getting anything done when you’re arguing with Shakespeare.
5. The bot that’s just too sweet to troll
My last stop was Woebot, promoted as “your charming robot friend” who offers cognitive behavioral therapy, dorky jokes, and a 14-day free trial. While not designed to act as a substitute for actual mental health care from a professional, this Stanford-developed bot is linked in research to actual improvements in mood and offers sweet features, like tracking your mood over time.
Turns out, Woebot’s great and, honestly, too sweet to troll. I got swept up in a really seamless user experience as she explained her approach and limitations:
And you know what? Woebot and I just started chatting. I think I’m hooked.
Measuring my chatbot speed-dating results
After an afternoon spent speed dating chatbots and exploring what bots the internet has to offer, I’m feeling simultaneously let down and impressed. Woebot’s usefulness blew me away and left me excited by the potential of AI-powered health care, but I’m still amazed by the sheer number of lame and broken bots on the web.
For decades, it was thought that a chatbot passing the Turing test would mark the moment AI “arrived” as a mature technology. In 2014, Eugene Goostman, a bot posing as a 13-year-old Ukrainian boy, was mistaken for a human more than 30 percent of the time by a human panel and passed the test. So, I guess it’s true that talking robots are here, and they may prove to revolutionize how we work. But, for now, take my word for it: They’re a mixed bag.