
Amid the hype about the capabilities of artificial intelligence, there are assertions that AI can replace humans even in situations where empathy and caring are especially important.
But is that true? Consider John Haugeland’s famous line: “The trouble with artificial intelligence is that computers don’t give a damn.”
As Chris Tessone commented —
“The philosopher John Haugeland explained why we should not trust software with life-and-death decisions: ‘Computers don’t give a damn.’ They interpret our world as a meaningless collection of data points, not as a dynamic reality. They are always stuck in the present moment, with no sense of time. Worst of all, they cannot understand that all existence is finite. Things run out: energy, natural resources, human lives. Computers are blind to the time-bound, connected lives that we experience from birth to death. Even the most advanced models are dangerously indifferent to our world.
Human intelligence consists precisely in trying to get life-and-death matters right. Our knowledge is rooted in the awareness that we will die, that things end. Martin Heidegger, the thinker who most inspired Haugeland, warned that technology also deceives us. The magic of automation leads us to believe that the world’s resources are endless. Death can be indefinitely delayed, it seems. Nature cannot hold us back. We start to think like our models.”
For more, see:
*Haugeland, J. (1979). Understanding Natural Language. The Journal of Philosophy, 76(11), 619–632. [PDF] [Cited by]
*Tessone, C. (2023). Heidegger’s bots: The birth and death of responsible artificial intelligence. Epoché Philosophy Monthly, 65. [Cited by]
And are we starting to think like our models? Yes; the examples are all around us. Think, for one, of the billionaires obsessed with longevity, rich people who want to live forever.
Yes, AI has shown that it can do some tasks very quickly, though not always very well. That speed and apparent comprehensiveness make it seem flawless, powerful, able to defy natural limits. Yet those very qualities make it inferior to human beings in the many situations where caring and empathy are most needed, such as medicine, mental health treatment, and teaching.
What does the research say?
*Rubin, M., Arnon, H., Huppert, J. D., & Perry, A. (2024). Considering the Role of Human Empathy in AI-Driven Therapy. JMIR Mental Health, 11, e56529. [PDF] [Cited by]
“Recent breakthroughs in artificial intelligence (AI) language models have elevated the vision of using conversational AI support for mental health, with a growing body of literature indicating varying degrees of efficacy. In this paper, we ask when, in therapy, it will be easier to replace humans and, conversely, in what instances, human connection will still be more valued. We suggest that empathy lies at the heart of the answer to this question. First, we define different aspects of empathy and outline the potential empathic capabilities of humans versus AI. Next, we consider what determines when these aspects are needed most in therapy, both from the perspective of therapeutic methodology and from the perspective of patient objectives. Ultimately, our goal is to prompt further investigation and dialogue, urging both practitioners and scholars engaged in AI-mediated therapy to keep these questions and considerations in mind when investigating AI implementation in mental health.”
*Rubin, M., Li, J. Z., Zimmerman, F., Ong, D. C., Goldenberg, A., & Perry, A. (2025). Comparing the value of perceived human versus AI-generated empathy. Nature Human Behaviour, 9(11), 2345-2359. [Cited by]
“Artificial intelligence (AI) and specifically large language models demonstrate remarkable social–emotional abilities, which may improve human–AI interactions and AI’s emotional support capabilities. However, it remains unclear whether empathy, encompassing understanding, ‘feeling with’ and caring, is perceived differently when attributed to AI versus humans. We conducted nine studies (n = 6,282) where AI-generated empathic responses to participants’ emotional situations were labelled as provided by either humans or AI. Human-attributed responses were rated as more empathic and supportive, and elicited more positive and fewer negative emotions, than AI-attributed ones. Moreover, participants’ own uninstructed belief that AI had aided the human-attributed responses reduced perceived empathy and support. These effects were replicated across varying response lengths, delays, iterations and large language models and were primarily driven by responses emphasizing emotional sharing and care. Additionally, people consistently chose human interaction over AI when seeking emotional engagement. These findings advance our general understanding of empathy, and specifically human–AI empathic interactions.”
*Editorial (2025). Reclaiming care in the age of AI. The Lancet, 406(10512), 1535. [PDF]
“Care of an individual begins and ends with a human being. AI can help that person be seen more clearly—free from the noise of paperwork, distraction, and exhaustion. The next phase of progress in health care will depend less on technical capacity and more on ethical stewardship and the health-care community’s ability to keep humans at the centre of design and deployment. If done properly, AI will not replace care; rather, it could help us rediscover it.”
Questions? Please let me know (engelk@grinnell.edu).

