Say “Hi” to Lily the Listening Bot: Share Some Feelings. Soon.

BY GLENN P. ALLEN, MSOD

I’ve spent nearly 40 years in learning and organization development. The past five years of my career have been spent working in healthcare. Rather than using space in this article to describe my motivations for developing Lily the Listener, and the professional experience that made it possible, I refer you to this link: https://myconvoconnect.com/founding_story/#Why.
I developed an active listening bot called “Lily the Listener.” Then, I broke her. I broke her because she couldn’t handle “context,” not even the name of the person who just introduced themselves. I am now starting over, focusing intensively on context, so Lily will have a unique Theory of Mind about each person with whom she interacts. To do this, I have to learn programming. An AI is teaching me. The next bot will still be called “Lily,” and will retain all the functions proven in her alpha version, several of which are described below and can be used by you now, even without a bot.
LILY’S ENVISIONED ROLE IN HEALTHCARE
Artificial intelligence (AI) is spreading quickly and broadly across all facets of healthcare, from AI-driven HR tools (reference 1) to complex AI models that accelerate the translational medicine pipeline (reference 2). Working in healthcare, I see the gap between the need for empathy and the caseloads that leave so many providers, including case managers, convinced they cannot be as empathetic as they would like to be with their patients and with each other. Lily is an AI that can help healthcare workers learn to show more empathy to patients and to one another, improvements that should be reflected in patient and employee satisfaction scores.
THIS ARTICLE’S FORMAT
My writing goal is to give you relevant definitions about Lily’s core functionality, which you can also put into practice via the examples in the “CM Application” sections. I hope you find this interesting and useful.
DEFINITIONS

“Lily the Listener” uses ChatGPT (reference 3) to guide natural conversations and encourage users to share their emotions and feelings through open-ended questions and active listening techniques. Lily aims to teach active listening based on your personal experience with her, like having a mentor.

CM Application: One of the Gallup Q12 (reference 4) Employee Engagement questions, #5, is “My supervisor, or someone at work, seems to care about me as a person.” One of those people for me is one of our VPs, whose main responsibility is implementing LEAN processes across the hospital. We both focus on the “Why?” behind problems. He asks “Why” again and again until a root cause is found. By actively listening and pausing, I find that people give their “Why” without being asked. I “double dip” by demonstrating empathy and getting the “Why.” “Why?” can generate defensiveness, so when I must ask, I ask, “What will having that do for you/us?” Healthcare is so solution-focused that we’d do well to “double dip” by working to understand problems through empathetic, active listening.
The Paraphrase. The active listening paraphrase (reference 5) is the behavioral core of empathy. Lily uses a Large Language Model (LLM) (reference 6) to paraphrase message senders’ meanings. Meaning is defined as a statement with subject/object words linked to feeling/emotion words. The LLM generates synonyms for these keywords and constructs a sentence to return the message sender’s meaning in a new form that simulates understanding and empathy. Lily modifies the standard paraphrasing algorithm by accepting sentences without feeling/emotion words if the message sender’s statements repeatedly align with rules extrapolated from 200 AI-generated statements that autistic people (reference 7) with varying degrees of autism spectrum disorder (ASD) might make. Under this condition, she will also decrease the use of idioms and metaphors in her paraphrases (reference 8), as she will also do if she perceives that the message sender is a non-native American English speaker (again, using rules extrapolated from statements generated by ChatGPT-4 as if made by non-native American English speakers).

CM Application: I was once asked to help resolve tensions between nurses and social workers affecting care. The issue? Nurses felt undermined by social workers suggesting medications, while social workers felt their psychosocial assessments were ignored. Using the social workers’ counseling training, I reintroduced active listening with a card game where they’d match subject/object and feeling/emotion words, like “cafeteria food” and “disgusted.” They practiced creating empathetic paraphrases using synonyms for these terms and soon realized they could use this technique to show respect for the nurses. Roleplaying helped social workers understand that paraphrasing concerns with acceptance could foster mutual respect and improve collaboration.
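The paraphrasing recipe above — find the subject/object and feeling/emotion keywords, swap in synonyms, and return the meaning in a new sentence — can be sketched in a few lines. This is a minimal illustration, not Lily's actual implementation; the word lists and synonym tables are assumptions invented for the example.

```python
# Toy sketch of the active listening paraphrase: replace the sender's
# subject/object and feeling/emotion keywords with synonyms and return
# the meaning in a new form. Both tables are illustrative placeholders.

FEELING_SYNONYMS = {
    "disgusted": "put off",
    "frustrated": "fed up",
    "worried": "anxious",
}

TOPIC_SYNONYMS = {
    "cafeteria food": "the meals here",
    "my schedule": "how your time is arranged",
}

def paraphrase(topic: str, feeling: str) -> str:
    """Link a synonym for the feeling word to a synonym for the
    subject/object words, simulating understanding of the meaning."""
    new_feeling = FEELING_SYNONYMS.get(feeling, feeling)
    new_topic = TOPIC_SYNONYMS.get(topic, topic)
    return f"It sounds like you feel {new_feeling} about {new_topic}."

print(paraphrase("cafeteria food", "disgusted"))
# -> It sounds like you feel put off about the meals here.
```

In Lily, an LLM generates the synonyms and the sentence; the lookup tables here simply stand in for that step.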
Context. Context in AI (reference 9) refers to two types: one that predicts words during conversations and another, akin to memory, which stores information from chats. Most chatbots lack this second context, responding only to programmed prompts without retaining past interactions. For example, if asked to recall a user’s name after a greeting, Lily might fail. To address this shortcoming related to emotions, Lily currently employs prediction techniques akin to how infants learn feelings by sensing and interpreting bodily sensations (interoception [reference 10]), which later evolve into complex emotional responses that help maintain homeostasis. Lily modifies her initial, synonym-based paraphrase by checking the selected emotion word against bodily sensations that typically occur with (or precede) the emotion and, where applicable, selects a more “interoceptive” feeling word.

CM Application: Maria, a managed care professional, strives to provide more cost-effective and beneficial care to a patient, Mr. Lopez. She invites Mr. Lopez to “describe the daily challenges you face managing your health.” Mr. Lopez shares, “It is hard to get to the doctor, and my budget makes it tough to eat right and afford all my sugar drugs. I’m trying. My poor wife and kids.” Noticing a lump in her throat corresponding with a memory of her recently deceased grandfather, Maria uses interoception to generate a rich paraphrase: “You sound weighed down by these obstacles and want to make things easier for your family, which will remove the heavy burden you feel.” Mr. Lopez says, “Exactly. I’m trying to get well again for myself and my family.” Hearing no new information, Maria takes her turn as speaker by offering helpful options. In response to these options, Mr. Lopez exclaims, “Oh, that would be a godsend for me and my family. Yes. Thank you so much!” Maria says, “What else comes to mind?” Mr. Lopez responds with a twinkle in his eye, “Well, you could help me find a good salsa club I can take my wife to. It’s been a while!” and laughs. Maria laughs, too, and says, “I’ll see what I can do.”
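The interoceptive adjustment described above — checking the chosen emotion word against the bodily sensations that typically accompany it and preferring the more bodily wording — reduces to a conditional substitution. The mapping entries below are assumptions made up for this sketch, not Lily's actual emotion-to-sensation table.

```python
# Toy sketch of the interoceptive rewrite: after a synonym-based feeling
# word is selected, swap in a bodily-sensation word when one is mapped.
# The mapping is an illustrative assumption, not Lily's real data.

INTEROCEPTIVE = {
    "sad": "weighed down",
    "anxious": "tight in the chest",
    "overwhelmed": "carrying a heavy burden",
}

def interoceptive_feeling(emotion: str) -> str:
    """Prefer a bodily-sensation word over the abstract emotion word;
    fall back to the original word when no mapping exists."""
    return INTEROCEPTIVE.get(emotion, emotion)

print(interoceptive_feeling("overwhelmed"))  # -> carrying a heavy burden
print(interoceptive_feeling("curious"))      # -> curious (unchanged)
```

This is the move Maria makes in the vignette: “overwhelmed” becomes “weighed down” and “carrying a heavy burden,” words that land closer to what the listener's own body registered.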
Questions. There’s an old maxim: Questions aren’t. Problem-solving is so ingrained that we often miss the emotion, and the subsequent potential for empathy, when we hear the upturned intonation indicating a question we’re programmed to answer. Lily’s programming ignores the question mark and finds the meaning.

CM Application: In a VA hospital, Mr. Jay, a Whipple surgery patient and retired Marine, questions the point of his strenuous recovery to the physical therapist (PT), saying, “How long am I going to live, anyway? Maybe five years?” The PT, hearing a question, says, “To get you as much strength as possible!” Mr. Jay looks down and mutters “Pointless.” At that moment, the rounding IRF CM joins in and, instead of answering the question, hears its meaning. She takes a knee and paraphrases, “It sounds like you’re feeling the awfulness of this situation.” Mr. Jay says, “Wife’s dead. Kids moved on,” which prompts the CM to paraphrase with an analogy to his combat days, “It’s like a new battle, but alone.” Mr. Jay nods. With no new information, the CM adopts the speaker role, asking, “How did you make it through?” The Marine responds with a cold stare directly into the CM’s eyes and says, “My battle buddies.” The CM’s skillful paraphrasing unveils an opening, and she says, “I know two other Marines in hospital now who’ve had this surgery, and they told me to let them know if there’s anyone else who would like to join their ‘recovery squad.’” This ignites Mr. Jay’s spirit, leading him back to his workout with a resolute “Ooh-rah!”
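The “ignore the question mark and find the meaning” behavior can be sketched as a tiny dispatcher: strip the question mark, scan the utterance for emotional cues, and paraphrase the feeling instead of answering the literal query. The cue list and responses are stand-in assumptions for illustration, not Lily's detector.

```python
# Toy sketch of treating a question as a statement: drop the question
# mark and respond to the feeling the utterance carries rather than
# answering it literally. Cue phrases are illustrative assumptions.

DESPAIR_CUES = {"how long", "what's the point", "why bother", "pointless"}

def respond(utterance: str) -> str:
    """Ignore the question mark; paraphrase the meaning if an
    emotional cue is found, otherwise invite the sender to go on."""
    text = utterance.rstrip("?!. ").lower()
    if any(cue in text for cue in DESPAIR_CUES):
        return "It sounds like you're feeling the awfulness of this situation."
    return "Tell me more."

print(respond("How long am I going to live, anyway?"))
# -> It sounds like you're feeling the awfulness of this situation.
```

The PT in the vignette answers the literal question; the CM runs something like this routine instead, and the conversation opens up.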
Bias. Common sources of bias are greatly reduced in active listening conversations because, although my personal experiences, prejudices and biases might influence my search for meaning and my selection of paraphrases, every paraphrase is validated by the message sender. If the message sender does not affirm the paraphrase, Lily will restate. If the message sender objects, Lily will listen and learn, like someone open to struggling with their biases.

CM Application: A hospital case manager wants to open a dialogue with a frustrated dietician. Using empathic paraphrasing, the CM says, “It sounds like you feel strongly the patient needs more convincing about their food choices.” The dietician affirms the accuracy of the paraphrase, at which point input about the patient’s living situation in a food desert and the need to explore accessible food substitutions would be more persuasive simply because the dietician no longer feels the need to “get the CM to understand.” The paraphrase already demonstrated that understanding.
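The bias-limiting loop above is a small protocol: check each paraphrase with the message sender; a non-affirmation triggers a restatement, and an objection triggers listening and learning. A minimal sketch, with the reply categories simplified to keyword sets invented for the example:

```python
# Toy sketch of the paraphrase-validation loop: the sender's reply
# determines the listener's next move. The affirmation/objection word
# sets are illustrative assumptions, not Lily's classifier.

AFFIRMATIONS = {"yes", "exactly", "that's it", "right"}
OBJECTIONS = {"no", "that's not it", "you misunderstand"}

def next_action(sender_reply: str) -> str:
    """Decide the listener's next move from the sender's reply."""
    reply = sender_reply.strip().lower().rstrip(".!")
    if reply in AFFIRMATIONS:
        return "continue"   # understanding confirmed; the sender keeps the floor
    if reply in OBJECTIONS:
        return "listen"     # set the paraphrase aside and learn from the sender
    return "restate"        # offer the meaning again in different words

print(next_action("Exactly."))  # -> continue
print(next_action("No."))       # -> listen
```

Because the sender adjudicates every paraphrase, the listener's biases never get the last word; they are corrected turn by turn.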

GIVE LILY A TRY!
Keep your eye out for “Lily the Listener.” She will be back soon, and this time, she’ll remember that your pet was sick, and she’ll ask you, days later, if Fluffy is feeling better!
ENDNOTES
1. Chowdhury, S., Dey, P., Joel-Edgar, S., Bhattacharya, S., Rodriguez-Espindola, O., Abadie, A., & Truong, L. (2023). Unlocking the value of artificial intelligence in human resource management through AI capability framework. Human Resource Management Review, 33(1), 100899.
2. Toh, T. S., Dondelinger, F., & Wang, D. (2019). Looking beyond the hype: Applied AI and machine learning in translational medicine. EBioMedicine. doi:10.1016/j.ebiom.2019.08.027
3. OpenAI. (2023). ChatGPT-4 [AI language model]. OpenAI. https://www.openai.com/chatgpt.
4. Gallup, Inc. (2023). Gallup Q12 Employee Engagement Survey. [Survey instrument]. Gallup. https://www.gallup.com/q12/.
5. Rogers, C. R., & Farson, R. E. (1987). Active listening. In R. G. Newman, M. A. Danziger, & M. Cohen (Eds.), Communicating in business today (pp. 164-169). D.C. Heath and Company.
6. Amazon Web Services. (n.d.). What are Large Language Models? AWS. Retrieved March 30, 2024, from https://aws.amazon.com/what-is/large-language-model/.
7. Kenny, L., Hattersley, C., Molins, B., Buckley, C., Povey, C., & Pellicano, E. (2016). Which terms should be used to describe autism? Perspectives from the UK autism community. Autism, 20(4), 442-462. https://doi.org/10.1177/1362361315588200.
8. Barbu, E., Martín-Valdivia, M. T., Martínez-Cámara, E., & Ureña-López, L. A. (2015). Language technologies applied to document simplification for helping autistic people. Expert Systems with Applications, 42(12), 5076-5086. https://doi.org/10.1016/j.eswa.2015.02.044.
9. Brézillon, P. (1999). Context in Artificial Intelligence: II. Key elements of contexts. Comput. Artif. Intell., 18(5), 425-446.
10. Tsakiris, M., & De Preester, H. (Eds.). (2018). The interoceptive mind: From homeostasis to awareness. Oxford University Press.

Glenn P. Allen, MSOD, is a 1995 graduate of the American University/NTL Institute Master’s in Organization Development program, a member (on a one-year health hiatus) of the NTL Institute, author of the 2000 book “Nameless Organizational Change,” and researcher and program manager of the Reaction Reflector Change Reaction Assessment.
The post Say “Hi” to Lily the Listening Bot: Share Some Feelings. Soon. appeared first on Case Management Society of America.

