Just as with information you might find on the internet, it's up to you to evaluate the accuracy and reliability of the content generated for you by an AI tool. If you choose to use AI instead of researching with pre-vetted, quality-proven library resources, you are responsible for verifying the quality of the AI tool you use and of the information it provides.
One way to critically evaluate AI tools and the content they generate is to use the acronym VALID-AI. Ask yourself the questions below as you work through each letter.
Validate data: Is the data representative, unbiased, and relevant to the problem at hand?
Analyze algorithms: What information about the algorithm can you find?
Legal and ethical considerations: Are there potential risks or unintended consequences of relying on this technology?
Interpret how it works: When you use the AI, is it clear how it operates on the back end?
Diversity and bias: Is there obvious bias in the AI outputs that needs to be corrected?
Accuracy check: When you check AI-generated results against real-world examples, does the generated content reflect verifiable facts?
I (you): Are you using AI tools ethically?
From the University of Toronto Libraries' Artificial Intelligence for Image Research guide.
Developed by S. Hervieux & A. Wheatley at The LibrAIry, the ROBOT test is another handy acronym-based method for evaluating the quality of an AI tool:
Reliability: How reliable is the information available about the AI technology?
Objective: What is the objective of the AI technology, and of the information shared about it?
Bias: What could create bias in the AI technology, and are there ethical issues associated with it?
Owner: Who developed the AI technology, and who is responsible for it?
Type: What type of AI technology is it, and what kind of information system does it rely on?