Malicious chatbots are frequently used to fill chat rooms with spam and advertisements by mimicking human behaviour and conversations, or to entice people into revealing personal information, such as bank account numbers. They are commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.[44]
“We believe that you don’t need to know how to program to build a bot; that’s what inspired us at Chatfuel a year ago when we started the bot builder. We noticed bots becoming hyper-local, e.g. a bot for a soccer team to keep in touch with fans, or a small art community bot. Bots are efficient, and when you let anyone create them easily, magic happens.” — Dmitrii Dumik, Founder of Chatfuel
Oh, and by the way: we’ve been hard at work on some interesting projects at Coveo, one of them focusing squarely on the world of chatbots. We’ve taken our insight engine and enabled it to work within the confines of your preferred chat tool: the power of Coveo, in chatbot form. The best part about our work in the field of chatbots? The code is out in the wild waiting for you to use it, provided you are already a Coveo customer or partner. All you need to do is head over to the Coveo Labs GitHub page, download it, and get your hands dirty!
Back in April, National Geographic launched a Facebook Messenger bot to promote its new show about Albert Einstein's work and personal life. Developed by 360i, the charismatic Einstein bot reintroduced audiences to the scientific figure in a more intimate setting, inviting them to learn about the lesser-known aspects of his life through a friendly, natural conversation with the man himself.

ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[9] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
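To make the mechanism concrete, here is a minimal sketch of that ELIZA-style keyword matching, written in Python. It is not Weizenbaum's original program; the rule patterns, canned responses, and fallback prompt are illustrative assumptions, but the structure mirrors the technique described above: scan the input for clue words and return a pre-programmed reply, falling back to a generic prompt that keeps the conversation moving.

```python
import re

# Each rule pairs a clue-word pattern with a canned response.
# These specific rules are illustrative, not from the original ELIZA script.
RULES = [
    (re.compile(r"\bmother\b|\bfather\b|\bfamily\b", re.IGNORECASE),
     "Tell me more about your family."),
    (re.compile(r"\bi feel\b", re.IGNORECASE),
     "Why do you feel that way?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE),
     "Is that the real reason?"),
]

# Generic prompt used when no clue word matches, so the conversation
# still appears to move forward.
FALLBACK = "Please go on."

def respond(user_input: str) -> str:
    """Return the first pre-programmed response whose clue pattern matches."""
    for pattern, response in RULES:
        if pattern.search(user_input):
            return response
    return FALLBACK

if __name__ == "__main__":
    print(respond("My mother never listens to me."))  # Tell me more about your family.
    print(respond("I feel ignored."))                 # Why do you feel that way?
    print(respond("The weather is nice."))            # Please go on.
```

Note that nothing here models meaning: the illusion of understanding comes entirely from the reader's willingness to interpret the canned replies as relevant.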