Robo-assistants are slowly becoming an integral part of our lives. Asking Siri, Alexa, or Cortana for information, or just teasing them with random questions, has become a natural way of interacting with technology. While these are the flagships of AI, simpler chatbots are still useful, and we can expect to see more of them soon. In fact, driven by cost constraints and the appeal of 24/7 availability, many companies are considering replacing their call-center teams with chatbots, either text- or voice-based. The main obstacle so far has been that the algorithms powering them are not sensitive enough to handle every situation.
The goal of text analysis is to develop tools that enable machines to understand humans the way individuals understand each other. This is a reverse-engineering process: it looks at specific words, identifies entities (names, places, etc.), classifies the components into parts of speech, and looks for collocations (words that usually appear next to each other).
After performing these operations, the algorithm makes sense of the message by looking at verbs, negations, and conjunctions. Duke University identified three levels of text analysis: text marking, natural language processing, and sentiment analysis.
This tool is necessary if you intend to build a human-like chatbot. It gives the bot the ability to answer questions sensibly. It also provides the chatbot with some personality and makes it more conversational. It is based on syntax (grammar), semantics (meaning) and pragmatics (purpose).
The challenges of NLP are mostly related to ambiguity. The same word can have different meanings, sentences can be parsed in different ways, and the speaker's intention can differ from the obvious one, as in the case of sarcasm or jokes.
The classic way to do NLP is to detect the language, split the text in a pre-processing phase, model it, and produce the output. Recently, this process has been enhanced by deep learning, where the machine strives to understand the meaning of words from context. In this case, the training sets should be carefully labeled so the algorithm can learn to distinguish these meanings.
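The classic pipeline above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the language check and the "modeling" placeholder are hypothetical stand-ins for real NLP components.

```python
import re

def detect_language(text: str) -> str:
    """Toy language-detection step: assume English if common
    English stopwords appear in the message."""
    stopwords = {"the", "is", "and", "a", "to"}
    return "en" if set(text.lower().split()) & stopwords else "unknown"

def preprocess(text: str) -> list[str]:
    """Splitting step: lowercase and tokenize, dropping punctuation."""
    return re.findall(r"[a-z']+", text.lower())

def run_pipeline(text: str) -> dict:
    """Chain the steps: detect language, pre-process, then hand off
    to a model. The model is a placeholder here; in practice this is
    where an NLP or deep-learning component interprets the tokens."""
    return {"language": detect_language(text), "tokens": preprocess(text)}
```

For example, `run_pipeline("The service is great!")` tags the message as English and yields the cleaned token list the downstream model would consume.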
After the text is prepared, the algorithm looks for specific keywords, groups of words, and speech patterns, which it compares with a predefined dictionary. Each entity receives a score on different dimensions. The scale can be binary, happy (1)/sad (0), with intermediary ratings, or it can be multi-dimensional, much like Facebook's reaction buttons (like/love/laughing/angry/etc.). Aggregating the individual scores through specialized software such as InData Labs' text analytics tools can give an overall rating and show a client's transition through different stages.
For example, they might start neutral and get angry or frustrated if they can’t find the answer or solution they are looking for. On the other hand, a quick reply from the chatbot can brighten up their day. Use thresholds and flags to identify the switch between states.
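A dictionary-based scorer with thresholds might look like the following sketch. The lexicon entries, weights, and threshold values are illustrative assumptions, not figures from any particular text-analytics product.

```python
# Hypothetical sentiment lexicon: word -> score in [-1.0, 1.0].
SENTIMENT_LEXICON = {
    "great": 1.0, "thanks": 0.8, "perfect": 1.0,
    "slow": -0.5, "broken": -0.8, "useless": -1.0, "angry": -1.0,
}

def score_message(tokens: list[str]) -> float:
    """Aggregate step: average the scores of tokens found in the
    lexicon; unmatched messages default to neutral (0.0)."""
    hits = [SENTIMENT_LEXICON[t] for t in tokens if t in SENTIMENT_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def client_state(score: float) -> str:
    """Map the aggregate score to a coarse state via thresholds,
    flagging the switch between neutral, happy, and frustrated."""
    if score <= -0.5:
        return "frustrated"
    if score >= 0.5:
        return "happy"
    return "neutral"
```

Tracking `client_state` across the turns of a conversation is one simple way to spot the moment a neutral client turns frustrated.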
Creating a chatbot is more accessible than a regular app from a graphical-design perspective, since it will resemble, or even be embedded in, an existing chat app like Facebook Messenger, Skype, or WhatsApp. The real challenge resides in creating a pleasant user experience: making a machine hold an engaging and informative conversation is hard when even many humans struggle to do so.
Most chatbots have a single, clear purpose, unlike generalists such as Siri. Therefore, it makes sense to build the conversation around this primary goal, like giving information about reservations, booking tickets, or acting as a personal assistant offering support and reminders. A chatbot's success is measured by whether it provides the necessary information as quickly as possible and whether it can steer a free-ranging conversation back on track in a polite or even fun way.
Depending on the purpose and degree of sophistication you want for your chatbot, you can use retrieval models or generative ones. With the first option, you use text analysis to create pre-written answers and then design rules to select one of these answers from a library. Usually, these chatbots conduct only a superficial text analysis and look for keywords. They are practical but can sound too robot-like. One way to make this more bearable is to create a library of answers expressing the same thing and select among them at random.
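A retrieval model of this kind reduces to keyword rules over a canned-answer library, with randomized phrasings to sound less robotic. The keywords and replies below are made up for illustration.

```python
import random

# Hypothetical answer library: keyword -> variants of the same reply.
ANSWER_LIBRARY = {
    "hours": [
        "We are open from 9 am to 6 pm on weekdays.",
        "Our weekday opening hours are 9 am to 6 pm.",
    ],
    "refund": [
        "Refunds are processed within five business days.",
        "You should receive your refund within five business days.",
    ],
}

# Fallback replies for messages no rule matches.
FALLBACK = [
    "I'm not sure I understood. Could you rephrase that?",
    "Sorry, I didn't catch that. Can you say it another way?",
]

def reply(message: str) -> str:
    """Superficial analysis: scan for known keywords, then pick one
    of the equivalent canned phrasings at random."""
    tokens = message.lower().split()
    for keyword, variants in ANSWER_LIBRARY.items():
        if keyword in tokens:
            return random.choice(variants)
    return random.choice(FALLBACK)
```

Any message containing "refund" draws one of the refund phrasings; everything else falls through to a polite request to rephrase.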
Generative chatbots go through the entire text-analysis process to understand the message and build a customized answer, sometimes incorporating some of the input to make it more personal and human-like. The bad news is that these models, which rely on machine learning, require vast amounts of training data to learn response patterns. If you have thousands of hours of call-center logs, this could be a solution, as long as you are sure the staff always provided the right answers. Otherwise, it is garbage in, garbage out.
Regardless of whether you use generative or canned models, you can still benefit from sentiment analysis. For canned models, the selection can be restricted to safer options, or, if the sentiment surpasses a certain threshold, it can trigger a procedure to forward the case to a human agent.
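The escalation rule can be as simple as one comparison against a threshold. The threshold value and the "safe" replies below are assumed for the sake of the example.

```python
import random

# Assumed cut-off: scores at or below this hand the case to a human.
ESCALATION_THRESHOLD = -0.6

# "Safer" canned options reserved for unhappy clients.
SAFE_REPLIES = [
    "I'm sorry about the trouble. Let me look into that for you.",
    "Apologies for the inconvenience. I'll do my best to help.",
]

def route(sentiment_score: float) -> str:
    """Decide who handles the next turn based on the running
    sentiment score of the conversation."""
    if sentiment_score <= ESCALATION_THRESHOLD:
        return "human_agent"
    return "chatbot"

def next_reply(sentiment_score: float) -> str:
    """If the bot keeps the turn but sentiment is negative,
    restrict the selection to the safer canned options."""
    if sentiment_score < 0:
        return random.choice(SAFE_REPLIES)
    return "How else can I help you today?"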
More sophisticated models can cautiously build their answers taking into consideration the sentiments of the person talking to the chatbot and adapting at each stage.
Text analysis alone can't create effective chatbots, but it is an integral part of the design process. The design process should always take the user's perspective and build a strategy both for uninitiated users and for malicious ones.
The next step is to integrate deep learning and machine learning so well that the chatbot learns from its past conversations and improves every day. The real test is whether the algorithm can detect unwanted input and filter it out before it learns an incorrect pattern.