
It is common these days to see people walking their dogs, strolling along streets, or sitting in parks while staring at or speaking to their smartphones. Yes, they are physically out in the “real” world, but in reality they are deeply immersed in the artificial world created by AI, social media, and, yes, the Internet.
In this hybrid existence, what is real? What actually exists? What and where are those bedrock facts of life and society that we can experience and agree on? Experiences and facts that are self-evident? Is there anything self-evident anymore? What are the impacts of this?
“‘Seeing is believing’ or is it? There once was a time when we could have confidence that what we saw depicted in photos and videos was real. Even when Photoshopping images became popular, we still knew that the images started as originals. Now, with advances in artificial intelligence, the world is becoming more artificial, and you can’t be sure what you see or hear is real or a fabrication of artificial intelligence and machine learning.” (From: Marr, Bernard. (2019). Artificial Intelligence Is Creating A Fake World — What Does That Mean For Humans? Forbes.)
“Knowledge of the world isn’t the same as experiencing the world. Why does online connection so often seem to lack aliveness, as compared to encounters with the world of flesh-and-blood people, nature, and material things? This paragraph from Karl Ove Knausgaard struck me as eloquent: ‘It feels as if the whole world has been transformed into images of the world, and has thus been drawn into the human realm, which now encompasses everything. There is no place, no thing, no person or phenomenon that I cannot obtain as image or information. One might think this adds substance to the world, since one knows more about it, not less, but the opposite is true: it empties the world, it becomes thinner. That’s because knowledge of the world and experience of the world are two fundamentally different things. While knowledge has no particular time or place and can be transmitted, experience is tied to a specific time and place and can never be repeated. For the same reason, it also can’t be predicted. Exactly those two dimensions – the unrepeatable and the unpredictable – are what technology abolishes. The feeling is one of loss of the world.’” (From: Burkeman, Oliver. (2025). The Imperfectionist: Five Short Thoughts. And from: Knausgaard, Karl Ove, translated by Olivia Lasky and Damion Searls. (2025). The Reenchanted World: On Finding Mystery in the Digital Age. Harper’s Magazine.)
See also —
*Garry, M., Chan, W. M., Foster, J., & Henkel, L. A. (2024). Large language models (LLMs) and the institutionalization of misinformation. Trends in Cognitive Sciences, 28(12), 1078–1088. [Cited by]
“Chat-based large language models (LLMs), such as ChatGPT, detect patterns among words and generate text by predicting a sequence of words. But their predictions are probabilistic, which means LLMs can generate misinformation.
Although societies have always grappled with misinformation, LLMs exploit weaknesses in how we monitor our world to determine what is real and what is not.
Because LLMs generate vast quantities of information, we now face an unprecedented rush of false claims, crafted and delivered with techniques that encourage people to think those claims are true. Worse still, this information then gets fed back into datasets that are used to train future generations of LLMs.
As models send misinformation to each other and return it to us, it will become harder and harder for all of us to determine what is real.”
*Vukov, Joseph. (2024). Staying Human in an Era of Artificial Intelligence. New City Press. [Cited by]
“AI poses a real and present danger. It contains the capacity to amplify social problems, drive a wedge further into our already-polarized society, and sow seeds of distrust in communities and personal relationships. When approached without a robust sense of human dignity, AI also threatens to undermine our self-understanding. To a degree beyond any previous technology, AI can make us forget ourselves. In this new era of AI, we must consciously make a choice: to stay human. In this book I provide a map and the tools for doing just that.”
*Gesnot, Renald. (2025). The Impact of Artificial Intelligence on Human Thought. arXiv. [PDF]
“This research paper examines, from a multidimensional perspective (cognitive, social, ethical, and philosophical), how AI is transforming human thought. It highlights a cognitive offloading effect: the externalization of mental functions to AI can reduce intellectual engagement and weaken critical thinking. On the social level, algorithmic personalization creates filter bubbles that limit the diversity of opinions and can lead to the homogenization of thought and polarization. This research also describes the mechanisms of algorithmic manipulation (exploitation of cognitive biases, automated disinformation, etc.) that amplify AI’s power of influence. Finally, the question of potential artificial consciousness is discussed, along with its ethical implications. The report as a whole underscores the risks that AI poses to human intellectual autonomy and creativity, while proposing avenues (education, transparency, governance) to align AI development with the interests of humanity.”
*Phubbing: psychology, harms, and more — more fallout from technology addiction
*AI companions: addiction and privacy concerns
*Negative impacts of smartphone and social media use on adolescent mental health
Questions? Please let me know (engelk@grinnell.edu).

