Illustration by Quovantis

Last year, when one of our healthcare partners (we refer to our clients as partners) was looking to build a conversational AI chatbot, I was apprehensive about guiding them. I had only worked on Level 2 bots (out of the five levels of conversational AI), but this time our partner wanted to build a contextual, consultative AI-powered chatbot assistant.

I was concerned about how the bot would understand end users' problems. What features could we build to make it more humanistic? Could it ever replace human care? Would it convey the same empathy, compassion, and warmth?
