The "unauthorized" use of any person or technology that assists in a student's assignment, project, or paper is considered cheating under the UNT Student Academic Integrity Policy (UNT Policy 6.003). Unless a professor or instructor gives explicit "authorization," AI cannot be used to complete assignments, projects, or papers. Doing so will result in a "cheating" violation.
AI should not be used to assist in writing papers, searching for sources, or creating citations. Citations provided by AI are often fabricated by mimicking existing bodies of work. In many cases, AI will attribute direct quotes to existing sources or make up information about a source in order to answer a query.
AI can be used ethically to help you develop an outline for a paper, generate ideas, and learn a citation style. Talk to your subject librarian or professor about how you can use AI ethically.
Guidance from MLA on how to cite generative AI in MLA format
"Prompt Given to ChatBot" prompt. Title of the AI tool, version of AI tool, Chatbot Publisher, Date content was generated, URL for the tool or conversation
"Describe the symbolism of Anne's red hair in Anne of Green Gables by L.M. Montgomery" prompt. ChatGPT, 24 May version, OpenAI, 12 June 2023, https://chat.openai.com/share/6850690a-1480-406f-939d-856d171af11d
("Describe the symbolism")
Guidance from the APA on how to cite generative AI in APA format
Name/Author of the Chatbot/Model. (Year of the version you used). Title of the model (Version of the model) [Type of model]. URL
OpenAI. (2023). ChatGPT (May 24 version) [Large language model]. https://chat.openai.com/chat
(OpenAI, 2023).
Guidance on how to cite generative AI in Chicago Style format
The Chicago Manual of Style recommends that you cite the generative AI tool as a footnote but not list it in a bibliography, treating the exchange like a personal communication.
Text generated by AI Tool, Date, Tool Creator/Owner, URL
Text generated by ChatGPT, March 7, 2023, OpenAI, https://chat.openai.com/chat
ChatGPT, response to "explain the significance of Anne's red hair in Anne of Green Gables by L.M. Montgomery," March 7, 2023, OpenAI, https://chat.openai.com/chat
In the first example, a footnote is attached to the quoted or paraphrased AI-generated text, but the reader is not told what instructions you gave the tool. In that case, include the prompt in your footnote, as shown in the second example.
*Our thanks to the University of Wisconsin Milwaukee for these examples.*
Disclaimer: Always check with your professor or publisher to see their guidelines for using AI in assignments or publishing.
Who is the author? Could their view be biased in any way?
Text or images generated by AI tools have no human author, but they are trained on materials created by humans with human biases. Unlike humans, AI tools cannot reliably distinguish between biased material and unbiased material when using information to construct their responses.
Who is the intended audience?
Generative AI tools can be used to generate content for any audience based on the user’s prompt.
What is the intended purpose of the content? Was it created to inform, to make money, to entertain?
Generative AI tools can create convincing text and images that can be used to propagate many different ideas without being clear that the information or images could be false.
Where was it published? Was it in a scholarly publication, a website, or an organization page?
Generative AI has already been used to create content for websites and news outlets. Considering whether the source is scholarly, has a good reputation, and has a clear history of providing reliable information will help you judge whether the information you find is trustworthy or misleading.
Does it provide sources for the information?
Articles, news outlets, and websites that provide sources may be more reliable, but the sources themselves should be assessed: follow the links and citations to verify the information before concluding that it is trustworthy.
Limitations of AI: Hallucinations and Fake News
Generative AI natural language processing tools, language models, and chatbots like ChatGPT have been shown to hallucinate, or provide unsubstantiated information. Text generated by AI also tends to sound very confident, so it can be difficult to tell which AI-generated information is trustworthy and which is not. (To learn more, read about the six fake cases cited by ChatGPT in the Steven A. Schwartz and Peter LoDuca court case.)
(Adapted from AI Tools and Resources/University of South Florida Libraries)
Copyright © University of North Texas. Some rights reserved. Except where otherwise indicated, the content of this library guide is made available under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Suggested citation for citing this guide when adapting it:
This work is a derivative of "Artificial Intelligence", created by [author name if apparent] and © University of North Texas, used under CC BY-NC 4.0 International.