Artificial intelligence is not a solution—it’s a tech tool that is only useful when it actually solves problems for learners and librarians.
AI is everywhere you look today, from the big three search engines to the local library. While companies around the world have raced to jump on the generative AI bandwagon, it has proven neither as profitable nor as effective as its early advocates had hoped. Gale’s parent company, Cengage Group, has been at the forefront of exploring AI’s potential to support academic areas from research to reskilling, and our experience has taught us to take a thoughtful approach to incorporating generative AI into our offerings.
As with any other technological innovation, choosing the best tool for the job starts with asking the right questions. While many leaders are asking, “How quickly can we introduce our new generative AI tool?” our focus has been on asking, “How can we make our user experience better, and can AI play a role?”
For us in the edtech space, this means evaluating how AI can enhance learning so that we offer students a more personalized experience and create efficiencies that allow librarians and educators to focus on learner relationships. As Gale partners with libraries to provide educational resources to millions, the five key principles guiding us may prove helpful to librarians who are evaluating or implementing AI tools.
1) Demystify AI rather than trying to predict what it might do.
Research shows that nearly half of academic librarians believe generative AI has the potential to benefit library services and operations—yet they still have legitimate concerns about their own preparedness and the technology’s privacy and security implications. Rather than focusing on what generative AI might do, we treat AI like any other developing technology. We say “developing” not “new” because, while many generative AI functions have been launched recently, artificial intelligence has been around for years.
Like other companies who deliver content online, Gale has leveraged machine learning to enable text and data mining in Gale Digital Scholar Lab and to bring curriculum-aligned content into classroom instruction with Gale In Context: For Educators.
As AI becomes a fact of life in the library and classroom, everyone dedicated to education and research serves learners best by focusing not on a hypothetical future but on the practical present. Developing a publicly available AI policy is an essential first step. As you put that policy into action, always bear in mind the skills learners will need to benefit from AI, as well as the skills they might learn from using it.
2) Always focus on solving learners’ actual problems.
The excitement around AI has created disconnects between what students want from AI and the preparedness of schools and libraries to use it. The rush to appear cutting-edge by integrating AI into processes and products has led some organizations and institutions to adopt a “when-you-have-a-hammer-everything-looks-like-a-nail” mindset.
Providing AI tools for their novelty value is tempting, but rolling out something new is only productive when it is actually more effective for the job at hand than what you’re already using. Gale faced this sort of choice recently when we were looking to translate a large body of online content into several foreign languages. We considered using a large language model (LLM), but our testing showed that “lower-tech” machine translation outperformed the LLM in its current state, so that is the tool we chose.
So rather than offering a generative AI solution to a problem your patrons and learners don’t actually have, conduct user surveys to discover their current challenges. With that information in hand, convene a workshop to determine if AI might be able to contribute to a solution. For example, when Gale conducted a survey at the ALA Annual Conference last summer, we found enormous interest in applying AI to deliver better student outcomes or to lighten the burden on educators. At the same time, respondents were clear that they don’t want answer-oriented AI tools that short-circuit the learning process or limit learners’ opportunities to develop critical thinking skills.
To keep our users’ needs front and center, any new generative AI tool Gale releases will be in beta so we can benefit from the symbiotic relationship we’ve built with the librarians, educators, researchers, and students who use our products every day. This approach can also apply to any tool a library introduces: as patrons learn to use the new tool, librarians learn how to make it work better for them.
3) Distinguish real human intelligence from artificially created “intelligence.”
LLM chatbots can be astonishing. Their slick writing styles, embedded knowledge, and simulated self-awareness can make them seem like glib, infallible oracles. The new agentic AI systems, with their ability to make decisions, work with other AI agents, and perform tasks, are especially impressive. They are certainly a vast improvement over a web search engine that spews out millions of weblinks, but chatbots can also be Trojan horses, delivering made-up facts with no awareness or accountability. A large language model trained on specious material can become a vector for spreading falsehoods.
By contrast, a retrieval-augmented generation (RAG) approach harnesses the general conversational power of an LLM while grounding the chatbot’s responses in specific, academically relevant sources retrieved at query time, such as peer-reviewed articles. While an unrestricted LLM might base its answers on anonymous content from a subreddit, RAG can deliver information only from authoritative articles published with a byline in peer-reviewed journals with clear editorial and academic standards. These articles can be cited, reproduced, and, on occasion, corrected according to ethical guidelines.
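For readers curious about the mechanics, the RAG pattern can be illustrated with a minimal sketch. Everything here is illustrative: the toy corpus, the word-overlap retriever (a stand-in for a real embedding-based search), and the prompt wording are all assumptions, not Gale’s implementation. A production system would use a vector index and pass the prompt to an LLM API.

```python
# Minimal RAG sketch: retrieve citable sources, then constrain the
# model's answer to those sources. Corpus and sources are invented
# for illustration only.

CORPUS = [
    {"id": "A1", "source": "Journal of Library Science",
     "text": "Information literacy instruction improves student research outcomes."},
    {"id": "B2", "source": "Review of Educational Research",
     "text": "Personalized learning tools can increase student engagement."},
]

def retrieve(query, corpus, k=1):
    """Rank passages by naive word overlap with the query
    (a stand-in for embedding-based similarity search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    """Instruct the model to answer only from the retrieved, citable sources."""
    context = "\n".join(
        f"[{p['id']}] ({p['source']}) {p['text']}" for p in passages
    )
    return (
        "Answer using ONLY the sources below and cite them by ID. "
        "If they do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

query = "How does information literacy affect research outcomes?"
hits = retrieve(query, CORPUS)
prompt = build_prompt(query, hits)
```

The key design point is the last step: because the prompt carries identified, bylined sources, the chatbot’s answer can be traced back to citable material rather than to whatever surfaced in the model’s training data.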
4) Protect valuable human intelligence and intellectual property.
Given the importance of training AI on credible, authoritative sources, tech companies have taken a variety of approaches to training their LLMs, resulting in an onslaught of lawsuits over the use of copyrighted material. Failure to protect intellectual property (IP) is an avoidable financial risk, so libraries should take protecting that material seriously as well. Consider what should go behind a firewall and who should have access to the protected material, and find an LLM tool that respects the boundaries you want around your IP. To maintain trust with authors and publishers, Gale makes sure they know how we are keeping their IP safe, and we secure the appropriate licenses for content we intend to use as training material for AI tools. Transparency goes a long way.
5) Safeguard data.
The consequences of failing to protect learner data can be severe. A recent cautionary tale is AllHere Education, an edtech company that had $12 million in startup funding and seven-figure contracts to deliver its AI-powered chatbot Ed. Following concerns about how it was safeguarding student data, the company is now bankrupt and under federal investigation. A measured, learner-focused approach to AI must include a clear data protection policy—and the mechanisms to enforce it.
AI is constantly evolving, and our approach to the technology will need to evolve with it. Rather than declaring, “Here’s a hard-and-fast truth we’re certain of,” keep up to date with the latest developments, continually review the ethics of your AI integration, and keep asking yourself, “What problem am I solving for learners and patrons, and could AI be part of the solution?”
To learn more about AI at Gale, visit: https://www.gale.com/generative-ai
Paul Gazzolo is the Senior Vice President & Global General Manager of Gale, a part of Cengage Group. He can be reached via LinkedIn.
Darren Person is Executive Vice President & Chief Digital Officer at Cengage Group. He can be reached via LinkedIn.