AI chatbots and “human” language: do words actually mean anything coming from a large language model?

From: Salaj, Ron. (2024). Artificial Intelligence is the Other of human. European Alternatives.
Is there any humanity in the words/text that come from large language models (LLMs) and AI?

‘The most astonishing thing about ChatGPT and every subsequent AI chatbot has always been that these programs are the first nonhumans to be fluent in our language. That simple fact is also, as Deb Roy argues in an essay in The Atlantic, among the most alarming things about these products.

Speech is not only about communicating information. When people write and converse, their words carry consequences. Humans, as Roy puts it, have a “moral dimension of language—the fact that speakers are vulnerable, dependent, and answerable.” Large language models are none of these things. Instead, chatbots allow people to instantaneously produce speech without obligation—essays, feedback to a colleague, personal notes—eroding human dignity and conscientiousness. As Roy argues: “Apologies become costless. Responsibility becomes theatrical. Care becomes simulation.”’

From:

Wong, Matteo. (2026, February 20). Atlantic Intelligence. The Atlantic.

Roy, Deb. (2026, February 15). Words Without Consequence: What does it mean to have speech without a speaker? The Atlantic.

See also —

*Youssef, A., Stein, S., Clapp, J., & Magnus, D. (2023). The Importance of Understanding Language in Large Language Models. The American Journal of Bioethics: AJOB, 23(10), 6–7. [PDF] [Cited by]

“Recent advancements in large language models (LLMs) have ushered in a transformative phase in artificial intelligence (AI). Unlike conventional AI, LLMs excel in facilitating fluid human–computer dialogues. LLMs in chatbots and ChatGPT have proven capable of mimicking human-like interactions—meeting a demand for various services. These services span from answering electronic health inquiries to acting as mental health support chatbots. The potential for LLMs to transform how we perceive, write, communicate, and utilize AI is profound, emphasizing the importance of understanding the impact of LLMs on human communication.

But this perspective misses something important about the nature of language. Language is a tool that people use. People do things with words. The same utterance can mean very different things to different individuals in different contexts. Saying “Do you know what time it is?” can be a request for someone to tell you the time. It can be a complaint about running late when made while tapping your watch, trying to get your partner to finish getting dressed. It can be a literal question as part of a diagnostic evaluation when uttered to someone whose cognition is being tested. Focusing on just the outputs misses important aspects of language. What do we take from the data cited in the preceding paragraphs when thinking about AI-mediated communication between physicians and patients?”

*Haugeland, J. (1979). Understanding Natural Language. The Journal of Philosophy, 76(11), 619–632. [PDF] [Cited by]

“The trouble with Artificial Intelligence is that computers don’t give a damn-or so I will argue by considering the special case of understanding natural language. Linguistic facility is an appropriate trial for AI because input and output can be handled conveniently with a teletype, because understanding a text requires understanding its topic (which is unrestricted), and because there is the following test for success: does the text enable the candidate to answer those questions it would enable competent people to answer? The thesis will not be that (human-like) intelligence cannot be achieved artificially, but that there are identifiable conditions on achieving it.”

In addition —

AI and Caring … and not thinking like machines


Questions? Please let me know (engelk@grinnell.edu).