I recently returned from the AI4LAM/Fantastic Futures conference, where the goal of ethical or responsible AI was a theme in a number of the presentations. For an implementer like me, however, responsible AI can't just be a call to action. I need examples, criteria, and how-tos for building ethical AI solutions. At this stage of the AI game, those aren't always obvious -- to me, or to the folks calling for responsibility in AI.
One presentation, however, gave me a list! Scott Young and Jason Clark from Montana State are part of an IMLS-funded project on Responsible AI; they had run a workshop asking participants what an irresponsible AI project at a university library would look like. Framed that way -- such a simple but powerful inversion -- the question yielded a list that technologists like me can use as a roadmap to more ethical AI projects:
- Boast to administrators about the staff time gained & suggest they can replace staff with machines
- Lack of integration into existing digital preservation infrastructures; no consideration of storage space or preservation
- Ignore existing workflows and procedures
- Abdicate all design to computer science faculty -- they have the AI expertise
- Metadata terms determined by natural language processing (AI) without human input; no interrogation of the natural language processing schema
- No consideration of whether these objects should be made public or computationally available; no review of the type of content or the people depicted; release the model without caveats or notes on limitations
- No consideration of data scraping by large language models
- No internal discussion about the ethical responsibilities of using AI for this purpose
- No ethical or value-driven ground-truthing with computer science
- No human review of metadata or OCR; don't consult collections staff about the decision to outsource; disregard collections staff questions or potential input on how the metadata should be scoped
- No accuracy checks
- No user testing
This "what not to do" list is great! Many of these scenarios are what I'd call "good software design"; only some are specific to AI. But I'm thrilled to have a list of concerns so what we build is more responsible.