Chatbots could be the future of interaction. They’re programs, capable only of computed responses, yet they can be almost indistinguishable from humans. The main purpose of chatbots is automating access to information, a harmless task. However, chatbots can be used in many other ways, some of which are malicious.
What can we expect from a world where chatbots are a primary form of human-computer interaction?
Chatbots run on computers, and computers are known for being pretty darn smart. So chatbots can be programmed to perform pretty much any task. What if you could text someone any question and get an answer almost instantly, every time? Chatbots are capable of that and much more.
At their core, chatbots are just programs. They are a way of interfacing with information that a computer has access to. In many ways chatbots are like websites or apps; they just have a different way of interacting with you. This new, chat-based form of interaction is probably their biggest benefit. Chatbots are faster and more intuitive than traditional interfaces.
While computers are super smart, they can also seem pretty dumb sometimes. Chatbots only know what their programmers teach them, and some chatbots are built to perform very specific functions. This can lead to serious limitations, and to frustration for people who aren’t aware of those limits.
For instance, consider a weather chatbot. If you tell a weather chatbot your zip code, it will tell you the forecast for the day. However, if someone asks for this Friday’s forecast instead, the chatbot might not know how to respond. This can be a frustrating experience if the chatbot isn’t programmed to handle its own limitations gracefully. Therefore, creating chatbots is a complex process that will have to be refined before they see widespread use.
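The weather-chatbot scenario above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any real chatbot framework: the forecast table stands in for a real weather API, and the point is the fallback reply that admits the bot’s limits instead of failing silently.

```python
import re

# Stand-in for a real weather service; keys are ZIP codes (hypothetical data).
FORECASTS = {"90210": "Sunny, high of 75"}

def reply(message: str) -> str:
    """Return today's forecast for a ZIP code, or admit the bot's limits."""
    zip_match = re.search(r"\b(\d{5})\b", message)
    if zip_match:
        zip_code = zip_match.group(1)
        forecast = FORECASTS.get(zip_code)
        if forecast:
            return f"Today's forecast for {zip_code}: {forecast}"
        return f"Sorry, I don't have a forecast for {zip_code}."
    # The graceful-failure path: state the limitation explicitly.
    return ("I can only look up today's forecast by ZIP code. "
            "Try sending me a 5-digit ZIP.")

print(reply("What's the weather in 90210?"))
print(reply("What about this Friday?"))
```

A question like “What about this Friday?” matches no pattern the bot knows, so it lands in the fallback branch; the difference between a frustrating bot and a usable one is often just how honestly that branch is written.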
Lastly, as with any program, chatbots are capable of malfunctioning. However, chatbot errors have the potential to be exceptionally troublesome, since the range of possible interactions is essentially endless. Once again, it falls to programmers to anticipate and handle every possible input.
Chatbots aren’t exactly Terminators, but they can be dangerous. Just like computer viruses take advantage of our expectations for how we interact with the internet, malicious chatbots can do the same. However, chatbots hold the added danger of being able to disguise themselves as humans.
While chatbots are far from perfect, and you can generally tell when you’re talking to one, they are also capable of doing a convincing imitation of human behavior. This is already being exploited to create chatbot scams, misleading advertisements, and astroturfing “socialbots”. Just like with viruses, the trick doesn’t have to be perfect, just good enough to fool the right people.
So, yes, chatbots open up a wide range of possibilities for how we interact with computers. Unfortunately, as with all new technology, they open up a new category of dangers, some of which we likely won’t be able to recognize until it’s too late.
In many ways, chatbots are no different from any other computer program. However, the introduction of artificial intelligence into everyday use is a topic that has been the subject of fiction for decades. It’s no wonder that people are both excited and skeptical about seeing it come to life.