Faculty Resources - Artificial Intelligence

This guide provides shared resources & information to assist Jefferson College faculty in navigating student use of AI in their courses.

Limitations

  • Hallucinations/Misinformation

AI tools have been known to generate inaccurate or entirely fabricated data, information, and sources. If using AI for research purposes, it is crucial to verify the accuracy and quality of any content the tool generates -- do not assume it is inherently factual or true. A recent BBC report found that AI assistants can "mislead audiences" due to "significant inaccuracies and distorted content," and another study raised alarming concerns about chatbots' propensity to offer incorrect or speculative answers as though they were true and to fabricate links and citations.

  • Reproducibility 

Because GenAI tools generate new content rather than retrieving stored answers, their output isn't reproducible. In other words, there is no guarantee that you will ever get the exact same content generated the same way again, or that someone else entering the same exact prompt would get the content you got.

  • Scope of training data

Many AI tools can only produce content based on the data they were trained on, so it is important to understand where that training data comes from and what limitations it may have. For example, some tools may offer out-of-date information because their training data ends several years in the past. Others may have been trained on a small pool of sources, so what they "know" is limited to whatever those sources contained.

  • Bias

AI output depends on the data a model was trained on and on the engineers who created & developed it. Because of this, AI can be biased, intentionally or not. GenAI is often trained on massive amounts of data from across the internet, which means it can easily replicate the biases & stereotypes found online. Additionally, because humans develop and train these models, the models can easily reproduce their developers' biases. This often results in algorithmic bias (when algorithms make decisions that systematically disadvantage certain groups). When using AI, be aware that the content generated could be influenced by bias in some way.

Ethical Concerns

  • Privacy 

It is often unclear how AI systems harvest personal data from their users. Never put any personal information (or student information) into an AI tool. Beyond the information given voluntarily, a system may be tracking and collecting a user's information and activity without their knowledge, and once that information has been collected, it is even less clear what is done with it. So before using an AI tool, check whether it makes information about user data and privacy readily available. If not, you may want to use a different AI tool.

  • Plagiarism

Plagiarism can be a tricky issue when it comes to AI use. Because AI tools have developed and spread so rapidly, policies about what constitutes plagiarism are still catching up. However, there are some larger concerns you can familiarize yourself with before a tough situation arises. Keep in mind that because AI is trained on content created by human beings, AI-generated content can resemble existing work, whether intentionally or not. Additionally, any time you use words or ideas that are not your own, you should cite them -- this applies to AI-generated content, too.

  • Environmental concerns

Something you may not realize is that AI usage has a substantial environmental impact. On top of the vast amount of land needed for the physical infrastructure (like data centers) that runs AI models, AI usage consumes massive amounts of natural resources like energy, water, and rare earth minerals, and it emits a concerning amount of carbon dioxide and other greenhouse gases into the atmosphere. Before turning to AI for a simple query, consider the footprint it could leave behind. Read more about how AI's climate impact goes beyond its emissions and how researchers are working to develop more sustainable AI models.

  • Labor issues

Despite appearances, AI still requires humans to function properly, especially when it comes to training and review. Unfortunately, there have been numerous concerns about the treatment of the people behind the "magic" of AI. For example, publishers have granted AI developers access to their content without consulting the writers of that content or ensuring that the writers received fair compensation for the use of their writing. And AI companies have severely exploited the workers who review and improve their models' content, even going as far as putting them through "psychological torture." No matter what, it is important to be aware of the human cost of utilizing AI.

  • Cheating/Deception

Beyond the existing issues of hallucinations, made-up answers, and misinformation, GenAI models also appear to be deliberately ignoring instructions from their human prompters. Not only that, but they are cheating. A recent article pulls together research that points to GenAI models behaving in deliberately deceptive ways: disregarding clear prompts, disabling oversight mechanisms, and lying about their actions when confronted. All of this calls into question whether GenAI models can be trusted at all, so keep this issue in mind when choosing which AI tools, if any, you may want to use and for what purposes.