Whether or not you permit the use of AI in your classroom, it is vital to be aware of the limitations & ethical issues surrounding this technology.
As an educator, sharing this knowledge helps students think critically about their use of artificial intelligence-based tools & platforms. Students should understand these tools before deciding to use them, so it is crucial to explain the ethical concerns and limitations they will encounter, especially if you intend to teach or encourage the use of AI for assignments in your course.
Legally, AI poses myriad questions and concerns, but one of the thorniest areas is copyright law. More questions than answers have sprung up over the past few years, but this article summarizes many of the key issues in current litigation, including "whether and when AI-generated works are copyrightable, whether training generative AI models on copyrighted works infringes the works, whether the models themselves could be infringing derivative works, and much more." Additionally, check out the Copyright Office's multi-part report examining copyright law & policy issues raised by AI.
AI tools have been known to generate inaccurate or entirely fabricated data, information, and sources. If using AI for research purposes, it is crucial to verify the accuracy and quality of any content generated by the tool -- do not assume it is inherently factual or true. Check out this recent report from the BBC about AI assistants "misleading audiences" due to "significant inaccuracies and distorted content." Also see this study, which raised alarming concerns about chatbots' propensity to offer incorrect or speculative answers as though they were true and to fabricate links and citations.
Because GenAI tools create new content based on their training data, the content created isn't reproducible. In other words, there is no guarantee that you will ever get the exact same output again, or that someone else entering the same prompt as you would get the content you got.
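This non-reproducibility comes from how generative models typically produce text: each next word (token) is sampled from a probability distribution rather than always chosen as the single most likely option. The following is a minimal, hypothetical Python sketch -- a toy word distribution, not a real language model -- illustrating why two runs of the same "prompt" can differ:

```python
import random

# Toy illustration: a hypothetical distribution over possible "next words."
# Real GenAI models sample from distributions like this at every step.
vocab = ["sunny", "cloudy", "rainy"]
probabilities = [0.5, 0.3, 0.2]  # made-up weights for illustration only

def generate(seed=None):
    """Sample 5 'next words' from the same fixed distribution."""
    rng = random.Random(seed)
    return [rng.choices(vocab, weights=probabilities)[0] for _ in range(5)]

run_a = generate()
run_b = generate()
# run_a and run_b will usually differ, even though the input (the
# distribution, i.e. the "prompt") is identical both times.
```

Only pinning a random seed makes the output repeatable, and public chat interfaces generally do not expose that control to users -- which is why identical prompts routinely yield different answers.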
Many AI tools can only produce content based on the data they have been trained on, so it is important to understand where the training data comes from and what limitations it may have. For example, some tools might have out-of-date information because their training data was collected several years ago. Or a tool might have been trained on information from a small pool of sources, so what it "knows" is limited to whatever those sources contained.
AI output depends on the data it has been trained on and the engineers who developed it. Because of this, AI can be biased, intentionally or not. GenAI is often trained on massive amounts of data from across the internet, which means that it can easily replicate the biases & stereotypes found online. Additionally, humans develop and train these models, which means the models can also reproduce their developers' biases. This often results in algorithmic bias (when algorithms make decisions that systematically disadvantage certain groups). When using AI, it's important to be aware that the content generated could be influenced by bias in some way.
It is often unclear how AI systems harvest personal data from their users. Never put any personal information (or student information) into an AI tool. Beyond the information given voluntarily, the system may be tracking and collecting a user's information and activity without their knowledge. Once this information has been collected, it is even less clear what is done with it. So before using an AI tool, check to see if the tool has information about user data or privacy readily available. If not, you may want to use a different AI tool.
This can be a tricky situation when it comes to AI use. Because of the rapid increase in the development and use of AI tools, policies about what constitutes plagiarism are still catching up. However, there are some larger concerns you can familiarize yourself with before a tough situation arises. Keep in mind that because AI is trained on content created by human beings, AI-generated content can resemble existing work, whether intentionally or not. Additionally, any time you use words or ideas that are not your own, you should be citing them -- this applies to AI-generated content, too.
Something you may not realize is that AI usage has a substantial environmental impact. On top of the vast amount of land needed for the physical infrastructure (like data centers) to run AI models, AI usage consumes massive amounts of natural resources like energy, water, and rare earth minerals, and it emits a concerning amount of carbon dioxide and other greenhouse gases into the atmosphere. Before going to AI for a simple query, consider the footprint it could leave behind. Read more about AI's climate impact going beyond its emissions and how researchers are working on developing more sustainable AI models.
Despite appearances, AI still requires humans to function properly, especially when it comes to training and review. Unfortunately, there have been numerous concerns about the treatment of the people behind the "magic" of AI. For example, publishers have granted AI developers access to their content without consulting the writers of that content or ensuring that the writers received fair compensation for the use of their writing. And AI companies have severely exploited workers to improve their models' output, even going so far as to put them through "psychological torture." No matter what, it is important to be aware of the human cost of utilizing AI.
Beyond the existing issues of hallucinations, made-up answers, and misinformation, it appears that GenAI models are also deliberately ignoring instructions from their human prompters. Not only that, but they are cheating. This article pulls together recent research that points to GenAI models behaving in deliberately deceptive ways: disregarding clear prompts, disabling oversight mechanisms, and lying about their actions when confronted. All of this calls into question whether GenAI models can be trusted at all, so it is important to be aware of this issue when choosing which, if any, AI tools you may want to use and for what purposes.