In this AI Watch, we discuss:
On our podcast, while we cover the need to address serious issues around ownership and representation, we take a relatively positive view of AI. That view, however, is far from universal in the library domain. Many librarians have strong, valid objections to AI.
One of the best arguments against generative AI I’ve seen was put forth by Violet Fox. She recently released a zine entitled “A Librarian Against AI.” In the zine, she states:
“Generative AI is a destructive force. As a librarian, I think it goes against everything we hold dear in librarianship. AI is ‘just a tool,’ of course, but it a) confidently provides inaccurate information, b) oversimplifies responses to queries and strips context from answers, and c) relies heavily on stolen intellectual property. All the while it burns through energy and water at alarming rates!”
This is not just a quick screed. She builds careful arguments that weigh AI against the ALA Code of Ethics. It is a great counterpoint that is worth reading and thinking about.
I’m excited to see Dave highlighting Violet’s work; she was one of my teaching assistants at the University of Washington’s MLIS program. Related to her work, I just published an article (with Tyler Youngman) posing such questions as: How do we, as information professionals, think about AI tools? How do we think about using them ethically? And what is our role in training librarians and incorporating these skills into their education?
Second, I recently presented at the State Libraries and AI Technologies (SLAAIT) Working Group Summative Meeting. While getting ready for my talk, I decided to ask ChatGPT a question: “Who was the first Black child to integrate schools in Alabama?” It responded with “Linda Brown.” I was pretty surprised by this because I know from history and my own personal experience that the correct answer is my dad, Sonnie Wellington Hereford IV. Linda Brown was the student at the center of the landmark 1954 Brown v. Board of Education case in Topeka, Kansas.
I reported the answer as incorrect to ChatGPT. Then, before recording our podcast a month later, I asked the question again. It still answered with “Linda Brown.” I clarified the question and asked about elementary schools in Alabama, and it told me the answer was Vivian Malone—also incorrect.
Finally, I asked ChatGPT about my dad: “Who is Sonnie Hereford IV?” And guess what? It replied that he was the first Black child to integrate a public school in Alabama on September 9, 1963. Absolutely correct.
So, as we think about how we use these AI technologies (including Google and Bing search result summaries), remember to be careful about accepting reported “facts” without cross-checking. I partially tie this back to a lack of records about Black history and similar topics. AI built and trained on incomplete or partial content will still try to provide answers, albeit wrong ones.
This month, I reflected on how I personally use AI, which has changed over the past six months.
First, I record my own live music, and then I like to tease apart the different tracks. There are some great free tools to do this, such as Gaudio Studio. LALAL.AI has a reasonable cost structure (base package $20) for vocal removal and music source separation. I often use Audacity, a popular free audio editing and recording program. It's absolutely fantastic, and it recently added an AI effects suite with features such as track isolation and noise reduction.
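For readers who prefer a scriptable workflow, here is a minimal sketch of the same kind of music source separation using the open-source Spleeter library. Spleeter is an illustrative stand-in, not one of the tools named above, and the file names are hypothetical.

```python
# Minimal sketch: music source separation with the open-source Spleeter library
# (an illustrative stand-in for the GUI tools mentioned above).
# Install with: pip install spleeter
from spleeter.separator import Separator

# Load the pretrained 2-stem model (vocals + accompaniment).
separator = Separator("spleeter:2stems")

# "my_live_recording.wav" is a hypothetical file name; the separated stems are
# written to the "separated/" folder as vocals.wav and accompaniment.wav.
separator.separate_to_file("my_live_recording.wav", "separated/")
```

Spleeter also offers 4- and 5-stem models if you want drums and bass pulled out as separate tracks.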
Second, I use AI for audio transcription. For example, this column is based on the AI Watch segment from our podcast. TurboScribe is a free AI tool that provides an accurate transcript in about 15 seconds. We then edit extensively, but having the transcript is a tremendous time-saver.
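As an illustrative alternative to a web service, OpenAI's open-source Whisper model can produce a similar transcript locally. This is a sketch under that assumption, with a hypothetical file name; it is not a description of how TurboScribe works.

```python
# Minimal sketch: local AI transcription with the open-source Whisper model.
# Install with: pip install openai-whisper
import whisper

# Smaller models are faster; larger ones ("medium", "large") are more accurate.
model = whisper.load_model("base")

# "podcast_episode.mp3" is a hypothetical file name.
result = model.transcribe("podcast_episode.mp3")
print(result["text"])
```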
And lastly, I started using AI for images. Although Dave typically handles our graphics, I've used DALL·E 3 via ChatGPT to generate images. With a few prompts it quickly returns a reasonable image that can be further refined.
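For those who would rather script image generation than prompt inside ChatGPT, the sketch below calls DALL·E 3 through the OpenAI Python SDK. The prompt is hypothetical, and an OPENAI_API_KEY environment variable is assumed.

```python
# Minimal sketch: generating an image with DALL·E 3 via the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set. Install with: pip install openai
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="A warm illustration of a public library reading room",  # hypothetical prompt
    size="1024x1024",
    n=1,
)

# The response includes a URL to the generated image (valid for a limited time).
print(response.data[0].url)
```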
However, I’ve had considerable feedback questioning the ethics of this type of AI software, since it was trained on millions of images without permission from the original artists. The critics raised some very good points about how AI systems are built on massive crunching of content without obtaining intellectual property rights. Based on that feedback, I reflected on my complicity and decided to remove the AI-generated image and replace it with a legally obtained stock photo.
We will definitely discuss the nature and implications of ethical AI use in more depth on an upcoming podcast.