
The easy answer is that it is much faster and requires far less effort to click a button and get a custom summary of academic papers and their connections than to spend the time and mental energy doing it yourself.
Or is this preference just an extension of the utility of abstracts, which have been used to summarize academic papers for decades, long before the public rise of artificial intelligence?
Or does it go deeper than that? Is it not just a preference that saves time and effort but one that also involves trust? Do we trust artificial intelligence more than we trust our own human abilities? Do speed, ease, and apparent power equal confidence? Do we trust the machine more than ourselves?
Featured articles:
*Vanneste, B. S., & Puranam, P. (2024). Artificial Intelligence, Trust, and Perceptions of Agency. Academy of Management Review, amr.2022.0041. [PDF] [Cited by]
“Modern artificial intelligence (AI) technologies based on deep learning architectures are often perceived as agentic to varying degrees—typically, as more agentic than other technologies but less agentic than humans. We theorize how different levels of perceived agency of AI affect human trust in AI. We do so by investigating three causal pathways. First, an AI (and its designer) perceived as more agentic will be seen as more capable, and therefore will be perceived as more trustworthy. Second, the more the AI is perceived as agentic, the more important are trustworthiness perceptions about the AI relative to those about its designer. Third, because of betrayal aversion, the anticipated psychological cost of the AI violating trust increases with how agentic it is perceived to be. These causal pathways imply, perhaps counterintuitively, that making an AI appear more agentic may increase or decrease the trust that humans place in it: success at meeting the Turing test may go hand in hand with a decrease of trust in AI. We formulate propositions linking agency perceptions to trust in AI, by exploiting variations in the context in which the human–AI interaction occurs and the dynamics of trust updating.”
*Cabrero-Daniel, B., & Sanagustín Cabrero, A. (2023). Perceived trustworthiness of natural language generators. In Proceedings of the First International Symposium on Trustworthy Autonomous Systems, Article No. 23, 1-9. [PDF] [Cited by]
“Natural Language Generation tools, such as chatbots that can generate human-like conversational text, are becoming more common both for personal and professional use. However, there are concerns about their trustworthiness and ethical implications. The paper addresses the problem of understanding how different users (e.g., linguists, engineers) perceive and adopt these tools and their perception of machine-generated text quality. It also discusses the perceived advantages and limitations of Natural Language Generation tools, as well as users’ beliefs on governance strategies. The main findings of this study include the impact of users’ field and level of expertise on the perceived trust and adoption of Natural Language Generation tools, the users’ assessment of the accuracy, fluency, and potential biases of machine-generated text in comparison to human-written text, and an analysis of the advantages and ethical risks associated with these tools as identified by the participants. Moreover, this paper discusses the potential implications of these findings for enhancing the AI development process. The paper sheds light on how different user characteristics shape their beliefs on the quality and overall trustworthiness of machine-generated text. Furthermore, it examines the benefits and risks of these tools from the perspectives of different users.”
*Molina, M. D., & Sundar, S. S. (2024). Does distrust in humans predict greater trust in AI? Role of individual differences in user responses to content moderation. New Media & Society, 26(6), 3638-3656. [Cited by]
“When evaluating automated systems, some users apply the “positive machine heuristic” (i.e. machines are more accurate and precise than humans), whereas others apply the “negative machine heuristic” (i.e. machines lack the ability to make nuanced subjective judgments), but we do not know much about the characteristics that predict whether a user would apply the positive or negative machine heuristic. We conducted a study in the context of content moderation and discovered that individual differences relating to trust in humans, fear of artificial intelligence (AI), power usage, and political ideology can predict whether a user will invoke the positive or negative machine heuristic. For example, users who distrust other humans tend to be more positive toward machines. Our findings advance theoretical understanding of user responses to AI systems for content moderation and hold practical implications for the design of interfaces to appeal to users who are differentially predisposed toward trusting machines over humans.”
*Merine, R., & Purkayastha, S. (2022). Risks and Benefits of AI-generated Text Summarization for Expert Level Content in Graduate Health Informatics. In 2022 IEEE 10th International Conference on Healthcare Informatics (ICHI). IEEE. [Cited by]
“AI-generated text summarization (AI-GTS) is now a popular topic in applied computer science education. It has proven helpful in various sectors, but its benefits and risks in education have not been thoroughly investigated. Few researchers have demonstrated the benefits of employing AI-generated text summaries in learning to generate ideas swiftly and to explore insights and hidden knowledge. AI-GTS has made it easier for students to understand electronically-available critical information. On the other hand, the risks linked with its implementation in education are understudied. Some anticipated risks include harming pupils’ writing skills, overdependence, reduced critical thinking capacity, and increased plagiarism. This paper presents the application of AI-generated text summarization in a graduate health informatics course and discusses the risks and benefits to students. Furthermore, utilizing the Bidirectional Encoder Representations from Transformers (BERT) model, we demonstrate that the current state-of-the-art AI-generated text summarization has the potential to create expert knowledge content. We conducted a study with 58 health informatics graduate students in the Fall of 2019 to write annotated bibliography for 25 articles each, to which we also added the AI-generated article summaries. We then asked the students to peer grade and distinguish the AI-generated annotations from the student-written summary. Using the Kruskal-Wallis test, we found no significant difference in the peer grades between the two. The robustness of such AI-generated text summarization raises important questions for educators teaching in health informatics.”
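For readers curious what the workflow described in that last abstract looks like in practice, here is a minimal sketch, not the authors' actual pipeline, of its two technical steps: extractive summarization with a BERT-based model and a Kruskal-Wallis comparison of peer grades. The bert-extractive-summarizer package, the input file name, and the grade values are illustrative assumptions, not details taken from the paper.

# Minimal sketch of the two steps named in the Merine & Purkayastha abstract.
# Package choice, file name, and grade values are illustrative assumptions.

from summarizer import Summarizer      # pip install bert-extractive-summarizer
from scipy.stats import kruskal

# 1. Generate an extractive summary of an article with a BERT-based model.
model = Summarizer()
article_text = open("article.txt").read()          # hypothetical input file
ai_summary = model(article_text, num_sentences=5)  # keep the top 5 sentences

# 2. Compare peer grades for AI-generated vs. student-written summaries.
ai_grades = [4, 5, 3, 4, 4]        # placeholder peer grades for AI summaries
student_grades = [5, 4, 4, 3, 5]   # placeholder peer grades for student summaries
stat, p_value = kruskal(ai_grades, student_grades)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the paper's finding of
# no significant difference between the two groups of summaries.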
Follow the story at Artificial Intelligence and Education.
Questions? Please let me know (engelk@grinnell.edu).