
“… AI tools to generate and edit content are getting more advanced, easier to operate, and cheaper to run—all reasons why the US government is increasingly paying to use them. We were well warned of this, but we responded by preparing for a world in which the main danger was confusion. What we’re entering instead is a world in which influence survives exposure, doubt is easily weaponized, and establishing the truth does not serve as a reset button. And the defenders of truth are already trailing way behind.”
From: O’Donnell, James (February 2, 2026). The Algorithm. MIT Technology Review.
Featured Article:
Clark, S., & Lewandowsky, S. (2026). The continued influence of AI-generated deepfake videos despite transparency warnings. Communications Psychology, 4(1), 13. [PDF]
“Advances in artificial intelligence (AI) have made it easier to create highly realistic deepfake videos, which can appear to show someone doing or saying something they did not do or say. Deepfakes may present a threat to individuals and society: for example, deepfakes can be used to influence elections by discrediting political opponents. Psychological research shows that people’s ability to detect deepfake videos varies considerably, making us potentially vulnerable to the influence of a video we have failed to identify as fake. However, little is yet known about the potential impact of a deepfake video that has been explicitly identified and flagged as fake. Examining this issue is important because current legislative initiatives to regulate AI emphasize transparency. We report three preregistered experiments (N = 175, 275, 223), in which participants were shown a deepfake video of someone appearing to confess committing a crime or a moral transgression, preceded in some conditions by a warning stating that the video was a deepfake. Participants were then asked questions about the person’s guilt, to examine the influence of the video’s content. We found that most participants relied on the content of a deepfake video, even when they had been explicitly warned beforehand that it was fake, although alternative explanations for the video’s influence, related to task framing, cannot be ruled out. This result was observed even with participants who indicated that they believed the warning and knew the video to be fake. Our findings suggest that transparency is insufficient to entirely negate the influence of deepfake videos, which has implications for legislators, policymakers, and regulators of online content.”
See also —
AI and Large Language Models: shortcomings and mistakes
AI, social media, the Internet and how we experience the world; what is real? what are the impacts?
Questions? Please let me know (engelk@grinnell.edu).