As with any new technology or teaching strategy, incorporating AI into your classroom comes with critical considerations and best practices for course design, facilitation, and grading. The following questions and answers provide information about common areas of curiosity and concern.
For more detailed information, consult the Best Practices for Generative AI (PDF, 2024). Also, consider attending AI events, affiliating with the AI² Center, or reaching out to another faculty member to push the conversation forward.
- AI (Artificial Intelligence): The development or use of computer systems that can perform tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and perception.
- ML (Machine Learning): A subset of AI involving algorithms and statistical models to enable machines to learn from data without being explicitly programmed.
- DL (Deep Learning): A subset of ML that involves the use of neural networks with multiple layers to learn complex patterns in data.
AI is a computer program or a machine’s ability to take action without being explicitly directed to do so. The growing AI field focuses on developing systems to perform tasks that would take humans a long, often unrealistic or impractical, amount of time to complete.
One of the hallmarks of AI is its ability to compile data and detect patterns. This means AI can do many things, including image generation, translation, speech recognition, agricultural predictions, medical research, loan processing, and more.
- AI is for everyone—AI should be used in ways that respect human values, rights, and dignity, and that promote social, economic, and environmental benefits for all.
- AI should be transparent, explainable, and accountable, and individuals should be informed and empowered to understand, challenge, and remedy AI-based outcomes.
- AI should be reliable, safe, and secure, and its accuracy and effectiveness should be evaluated and verified for the intended use.
- AI should be inclusive and accessible, and its development and implementation should consider diverse perspectives and experiences.
- AI should be used in ways that uphold academic freedom, integrity, and excellence, and that foster interdisciplinary collaboration and innovation.
- AI should be used in ways that adhere to the principles of data privacy, applicable laws and regulations, and that protect personal data and intellectual property rights.
- AI should be used in ways that avoid and mitigate bias and discrimination, and that proactively identify and avoid causing any harm.
- AI should be used in ways that are transparent and ethical, and that never deceive or spread misinformation.
- AI usage should be subject to continuous assessment and improvement, and responsive to the rapid advancements and challenges of AI technology.
- Increased efficiency in course preparation: AI can assist with course preparation including lesson plans, learning objectives, materials, and custom content.
- Generates diverse content: AI can create case studies from diverse viewpoints, generate custom datasets for assignments, develop roleplaying scenarios, and more.
- Personalized learning: AI can help provide supplemental learning through prompts for student self-quizzing, study guides, tutoring, and more.
- Student preparation: AI can simulate real-world scenarios, provide instant feedback, and give students opportunities to critically evaluate experiences.
- Supports different abilities: AI has the power to assist students with different abilities and non-native learners who may need assistance with interpretation, structure, or alternative formats.
- Increased access and assistance: AI can extend learning with 24/7 access for students who may not be able to contact their instructors or have geographical constraints. For instance, AI-powered tutoring can offer personalized support to students outside traditional hours.
- AI literacy and readiness: As with all digital literacy, students, faculty, and staff must develop AI literacy skills, including use of AI tools and critical evaluation of AI output.
- Biases and limitations: AI can replicate existing stereotypes and biases present in society. Students must be prepared to encounter and handle biases. AI’s accuracy cannot be guaranteed.
- Authorship and ownership issues: It’s important to reflect on ethical considerations for generative AI use within your discipline. Content input into AI systems may have been gathered without consent from writers, artists, and content creators. Deciding how AI will be cited and acknowledged in your course is important.
- Privacy and security: Protecting student privacy is crucial as you select AI tools. Examine the information students provide and teach them to understand how to protect their data.
- Equity and access: It is also important to ensure affordable, equitable access to the AI tools you select.
- Academic integrity concerns: Providing clear expectations to guide student work is essential. AI detection software cannot be relied on to identify AI-generated content; to date, it has been shown to be unreliable and biased against non-native English writers.
- Machine learning: Machine learning is a type of artificial intelligence which uses an algorithm and data to “learn” something. By learning, we mean the ability to be exposed to an input and output and be able to make a guess at the appropriate output when exposed to something new, based on what was “learned.”
- Algorithm: An algorithm is simply a procedure. A very simple example might be an algorithm called “Add 2” which simply adds two to any number it is given.
- Data: Data is whatever information is fed to an algorithm. If your data consists of a dataset of one number, the number 7, and you feed it to your algorithm “Add 2”, you obviously get 9. Your algorithm could be much more complicated than “Add 2”, and your dataset could be billions or trillions of characters instead of one.
- Model: A model is a combination of an algorithm and a dataset.
- LLM: The development of the large language model (LLM) was the game-changer. The LLM goes well past the elemental model explained in this list. An LLM uses a specific type of machine learning trained on massive quantities of text, often harvested from the breadth of the internet itself. The LLM can be “tuned” in a variety of ways, but it is the fundamental basis that allows the computer to converse in natural human languages.
- Generative AI: Generative AI is a broad term for AI models that produce some sort of content or media. This may include human language (written or electronically spoken), visual media such as images or animations, aural content such as music, or specific types or combinations of any of the above such as computer code or movies or other immersive content.
- GPT: The Generative Pre-Trained Transformer is a particular type of LLM, upon which ChatGPT and other tools are built. Although part of the name of OpenAI’s ChatGPT product, a GPT is a type of LLM, not a specific product.
- Token: Tokens are chunks of language that an LLM uses as part of its model, or when ingesting text from a natural language interaction with a user. Tokens are often used to limit the use of a model, or to charge for its use, as the amount of computing power utilized maps to the number of tokens used in a fairly linear way.
- Training: Training (that’s the “P” in Pre-Trained) is the process of cleaning and organizing data, tokenizing that data, and preparing the LLM to output useful responses based on its design intent. Training frequently comes up in discussion about ethics and intellectual property around generative AI models, because once a model is trained on data, there is no longer a direct link from the model’s “knowledge” back to the data that it was trained with.
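The glossary terms above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: the function and variable names (`add_two`, `model`, `count_tokens`, the per-token rate) are invented for this example and do not come from any real AI library, and real LLMs use subword tokenizers rather than whitespace splitting.

```python
def add_two(x):
    """Algorithm: a procedure -- the glossary's 'Add 2' example."""
    return x + 2

data = [7]                             # Data: information fed to the algorithm.
print([add_two(x) for x in data])      # -> [9]

# Model: an algorithm paired with a dataset. A toy "learning" step:
# from example (input, output) pairs, estimate the constant being
# added, then use that estimate to predict outputs for new inputs.
examples = [(1, 3), (5, 7), (10, 12)]
learned_offset = sum(y - x for x, y in examples) / len(examples)

def model(x):
    return x + learned_offset

print(model(100))                      # -> 102.0

# Token: a naive whitespace split stands in for real tokenization,
# showing how usage cost maps roughly linearly to token count.
def count_tokens(text):
    return len(text.split())

cost_per_token = 0.00001               # hypothetical billing rate
prompt = "Explain machine learning in one sentence"
print(count_tokens(prompt) * cost_per_token)
```

The "learning" here is deliberately trivial (averaging the difference between inputs and outputs), but it captures the glossary's point: the model's behavior on new inputs is derived from the data it was exposed to, not from explicit programming.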
Syllabus guidelines may vary by college, department, and/or course. Consider your plan for integrating AI into assessments and include clear statements in your syllabus and assignment instructions that align with your approach.
The following considerations are general approaches that may help address AI in your course:
AI-Permitted: Generative AI tools may be required in this course. Generative AI use is promoted in some assignments and will be clarified in assignment instructions. Any work that is done using generative AI must be cited in your submission.
Some AI: Generative AI tools may be used to enhance some assignments in this course. Assignment instructions will differentiate between distinct human and AI tasks. Any work that is done using generative AI must be cited in your submission.
No AI: The learning that takes place in this course requires your unique perspective and human experience. Use of AI would make it harder to evaluate your work. It is not permitted to use any generative AI tools in this course, and the use of AI will be treated as an academic integrity issue.
AI tools commonly used in the classroom span several categories, ranging from assistive tools (e.g., autocorrect, autofill, and text-to-speech) to large language models (LLMs) that enable generative and conversational AI.
Consider these best practices as you explore AI Tools:
- Sign up for the tool(s) and familiarize yourself with their main functions. For instance, if you plan to ask students to share a conversation they had with an AI tool, test whether this capability is enabled.
- If you plan to use AI in an assignment, test your prompts and assignment instructions in the tool you selected before sharing them with students. Niche subject matter may surface inaccuracies or limitations.
- Refer to Fast Path Solutions for updates on data usage and compliance.
Navigate students through the basic use of AI tools by including the following:
- Provide instructions on how to create an account or access an existing account for the tool(s) used in the class.
- Introduce the AI tool through a low-stakes or ungraded assignment, such as this AI Introductory Discussion.
Promote opportunities for students to critically evaluate issues related to privacy, ethics, and biases in AI. Consider the following:
- Ensure students understand safety and security considerations and/or use a protected account, such as their UF login with NaviGator AI or Microsoft Copilot.
- Refer to Fast Path Solutions for updates on data usage and compliance.
- Explore and give students the AI Prompt Cookbook as a tool to help develop AI literacy.
- Ask students to evaluate the output from AI for potential biases or inaccuracies or purposely design assessments to incorporate practice for evaluation, such as this AI Literacy assignment.
Build authentic assessments that support meaningful engagement with real-world scenarios.
Building authentic assessments helps students apply their knowledge and experience to real-world situations and problems. Examples of authentic assessments include personal reflections, analysis and problem-solving in case studies, role playing, debates, simulations, and more. You can explore a variety of authentic assessments that leverage AI in the Elevated Recipes (Authentic + Artificial) section of the AI Prompt Cookbook.
Emphasize transparency in purpose and expectations for assessments.
Transparency in Learning and Teaching (TILT) provides a framework that emphasizes transparency and clarity in purpose, tasks, and criteria for success. Use this transparent assignment template, based on the TILT model, which also includes ways to clarify AI-usage expectations for each assignment.
Scaffold assessments for frequent, low-stakes opportunities for feedback.
Breaking assessments into multiple stages, or scaffolding, provides students with multiple opportunities to receive feedback and guidance. This is also a great strategy to reduce the pressures students may feel with higher-stakes assessments, potentially reducing academic integrity concerns.
Follow Universal Design for Learning (UDL) principles to optimize the relevance, value, and authenticity of assessments.
UDL principles recommend multiple means of engagement, representation, and expression in the teaching and learning experience. By providing multiple ways for students to engage with course materials and multiple submission options (e.g., video, text, and auditory), students can show more creativity and autonomy in their submissions.
The University of Florida promotes the innovative and responsible use of AI applications to enhance academic and operational activities.
Adherence to these security guidelines and policies ensures a protected and ethically sound environment for all users. By using AI applications within the University, you agree to comply with these guidelines and uphold the integrity and security standards set forth by the institution. Use the links provided to learn more about policies and guidelines.
- Compliance with University Policies: Users must adhere to all existing University policies, including but not limited to, those related to data security, privacy, intellectual property, academic integrity, and the university’s Acceptable Use Policy.
- Copyright and Intellectual Property: Use of copyrighted materials with AI should comply with all existing copyright laws and intellectual property policies and guidelines. Any work or output produced by AI applications should correctly acknowledge sources and contributions, following the University’s policies on academic integrity outlined in the Student Honor Code and Student Conduct Code.
- Data Protection: Users are responsible for ensuring that any data input into AI applications is handled in compliance with UF’s Data Classification Policies. Sensitive and restricted information must be anonymized or appropriately secured to protect personal and institutional privacy, following UF Data Security Guidance.
- Ethical Use: AI applications must be used ethically, respecting the rights and dignity of all individuals, ensuring AI is employed in a manner that is fair, transparent, and accountable. Usage should align to UF’s Regulation 1.0104, which further references the State of Florida’s Code of Ethics for Public Officers and Employees, Chapter 112, Florida Statutes, as well as other applicable University of Florida Regulations, Guidelines, Policies and Procedures.
- Risk Management: Users should only install or use AI applications that are approved by the University’s Integrated Risk Assessment process. It is essential to keep these applications updated and report any security vulnerabilities immediately.
AI Course Designations
AI learning experiences at UF are categorized into five core AI literacies:
- Know & Understand AI: Know the basic functions of AI and how to use AI applications.
- Use & Apply AI: Apply AI knowledge, concepts, and applications in different scenarios.
- Evaluate & Create AI: Higher-order thinking skills (predict, design, etc.) with AI applications.
- AI Ethics: Human-centered considerations (e.g., fairness, bias, transparency, safety).
- Enabling AI: Recognizes courses that are not fully AI-focused but support knowledge and skill development that promotes better understanding of AI.
The AI Curriculum Committee reviews courses and awards these designations.
“I choose not to spend my time trying to mitigate cheating. I want to spend my time educating students and helping them learn. And learning myself and engaging with other faculty and having great conversations. I want my experiences to be positive, not negative, and I can choose that.” —Joel Davis, Clinical Professor, Warrington College of Business
Learn More
Read the Full Guide
The advice on this page is drawn from “Best Practices for Generative AI.” Find additional guidance about the implementation and use of generative AI for instructors, students, researchers, and HR professionals.
Got a Minute?
UF’s AI Minute podcast has ideas for you.
Contact AI at UF
Mailing Address
P.O. Box 113175
University of Florida
Gainesville, FL 32611
Physical Address
105 Ayers Building at Innovation Square
720 SW Second Ave.
Gainesville, FL 32601
Call us: (352) 294-1895