Many academic librarians believe context matters when artificial intelligence (AI) tools such as ChatGPT are used by students and faculty to assist with their work, according to “AI in Higher Education: The Librarians’ Perspectives,” a recent survey of 125 librarians published this month by Helper Systems, developers of the free kOS PDF organization and indexing app.
While only eight percent of respondents said that they believe it is cheating when students use AI products for research—compared with 49 percent who said it was not—42 percent said that it was “somewhat” cheating.
“Context is essential,” one respondent explained. “AI can have a role in brainstorming and iterating, as well as critique and to a lesser extent research. But a whole paper written via AI would be unacceptable, especially if unlabeled as such.” Another said that they have already begun hearing faculty opinions on the issue, noting that “I’ve heard some professors say they are fine with students using AI to compose a first draft, while others have said they consider any use cheating. I tend to err on the side of, if it’s not cheating, it is at least likely to impede their ability to think critically.”
Of those who believed that the use of AI was not cheating, one wrote that “using AI to search for or analyze big or unstructured data has nothing to do with cheating,” while another explained that “some of us are old enough to remember that spellchecker was going to replace editors and Google was going to replace libraries. AI is now going to save time for researchers who are already using spellchecker and Google technology for their work.”
Results were slightly different when librarians were asked whether it was ethical for professionals to use AI products in the workplace. Over half (52 percent) said yes, 14 percent said no, and 33 percent said somewhat.
“As long as a researcher is transparent about using AI at a particular stage of a research process and there is no harm to research subjects or bias, it is ethical,” one respondent wrote. Another wrote that the extent of AI use could become problematic, noting that “AI is a tool like any other that can be used for work purposes. However, I do think it will encourage some professionals to engage in lazy work that relies on the machine to do their thinking for them.”
Respondents do not believe that AI should be a required part of current higher-ed curricula. When asked if it should be mandatory for students and professors to use AI products, only five percent of respondents said yes, 14 percent said maybe, and 81 percent said no.
“I think that students and professors need to learn to use AI products appropriately and ethically, but mandating tool use could be an infringement on intellectual freedom and simply not appropriate for a lot of courses,” one explained.
When asked an open-ended question about what excites them most about the potential uses of AI products in higher education, librarians’ responses varied. One respondent noted how AI might facilitate the research process, writing that “a chat AI that can take many or most reference questions is really interesting, or at least [one that could] suggest keywords effectively. An AI that summarizes complicated content in search results and then links to items in our catalog is already really appealing and would get more students to engage with academic research.” Another wrote that the tools have the potential “to replace manual, librarian-led literature searching and systematic reviews. I see AI as a faster, more efficient solution to what is now a very hands-on, time-consuming process. I see real promise here.”
Writing about the future of AI products in higher ed, one respondent saw potential for personalized learning applications to boost student engagement and motivation, adaptive testing to assess student knowledge in real time, student support services, and AI-driven analysis systems to “help researchers analyze large amounts of data more quickly and accurately than traditional methods. This can help researchers identify patterns and insights that may not be immediately apparent, leading to new discoveries and innovations.”
But when asked a separate question regarding their concerns about AI, respondents wrote about its potential to facilitate cheating, reduce or eliminate critical thinking and originality, or replace human jobs. One bluntly stated, “I’m a librarian, so not a damn thing [excites me about AI]. In fact, it’s one step closer to the end of this occupation as we know it.” Another wrote that “people are lazy, so if tools are developed that allow them to produce mediocre or passable results without thinking, they will stop even trying to think.”
Only 13 percent of respondents said that their library currently offers AI products to researchers, but 24 percent said that they were considering it. Sixty-three percent said “no,” with one respondent explaining that “there would have to be an extraordinarily compelling case for AI for us to consider [allocating budget resources] when we are struggling to maintain even fundamental academic databases.”