In my earlier blog about chatbot security I laid out a troubling question: Why should we worry about chatbots? They are such a new phenomenon that we don't have enough information yet to fully estimate the threats they pose to security.
It's also reasonable to ask what all the fuss is about and to suspect that chatbots are just another internet hype.
I’m not a fortune teller, but it is important to understand the context where chatbots can – and most likely will – flourish. The way people use social media is changing: Today, we spend more time on messaging apps than on social networks. It’s clear that more services such as chatbots will be developed to complement the messaging apps.
The flip side is that chatbots call for extra caution when it comes to security. Here is how to approach it:
The intelligence of chatbots rests very much on their learning capabilities. The more they know about the user, the more personalised service they can offer. This means that they have to collect data.
For security reasons, especially inside enterprise networks, it’s important to know the following:
- What data is collected?
- Where is it stored?
- How long is it stored?
- Who else has access to my data?
Communication on chatbots should be encrypted to make sure the information can’t be accessed from outside. With public channels, this is an obvious risk. Last summer, Facebook started to test end-to-end encryption.
While encryption is recommendable, it introduces new problems. Could a chatbot be utilised as a new channel to spread malware, nicely hidden under encryption? Could it be a command and control channel for malware or botnets?
Still, there are more obvious advantages than downsides to encrypting chatbot conversations.
Chatbots present security challenges similar to those of email, for which we already have effective remedies.
Most email clients are quite good at detecting phishing emails and sending them directly to junk mail. Some bad emails still make it to our mailboxes, but luckily we are quite adept at spotting them and know not to click those nasty links, just as I wrote in my earlier blog post.
With chatbots the reality is totally different. They are still so new that technology can't yet detect malicious bots accurately. We must do the job ourselves.
When using chatbots in your messaging applications, be extra cautious. Don't automatically hand over your personal information, especially credit card details, to a friendly-looking chatbot.
Read more about our approach to security in our whitepaper.