Consult with your professors regarding acceptable use of AI for your classes!
Also know F&M's policies: F&M's Academic Honesty Policy and F&M's Student Code of Conduct
AI alignment: Refers to ensuring an AI system's goals and actions match the intentions of its creators or broader human values. The primary objective of AI alignment is to prevent scenarios where AI systems, especially highly autonomous and intelligent ones, might act in ways that are harmful or contrary to human interests.
AI ethics: A field of applied ethics focused on the development and use of AI in a way that aligns with moral principles, particularly fairness, transparency, accountability, and respect for human values.
AI safety: An interdisciplinary field aiming to mitigate risks from AI systems. It encompasses technical solutions to ensure reliable AI function, aligning AI goals with human values, and developing safeguards against misuse and unintended consequences.
Algorithm: A finite set of well-defined instructions for performing a specific task. It operates on defined inputs and produces a corresponding output through a series of steps that is guaranteed to finish.
Artificial general intelligence, or AGI: A hypothetical type of AI that mimics human-like intelligence. Unlike regular AI, which is designed for specific tasks (such as playing chess, grammar correction, or speech translation), AGI is characterized by its general cognitive abilities, which means it can perform any intellectual task that a human can, adapt to new situations, and improve its performance over time.
Artificial intelligence, or AI: The endeavor of creating intelligent agents, which are systems that reason, learn, and act autonomously in pursuit of goals. This field encompasses diverse approaches like machine learning, symbolic reasoning, and optimization to simulate human-like cognitive abilities in machines.
Bias: Refers to systematic prejudice within an algorithm or model. This can arise from imbalanced training data reflecting societal biases, or limitations in the algorithm's design. Biased AI can lead to unfair or discriminatory outcomes.
Chatbot: A program simulating conversation with human users through text or voice commands.
Deep learning: A subfield of machine learning inspired by the structure and function of the human brain. It utilizes artificial neural networks with multiple hidden layers of interconnected nodes to process information. These layers extract increasingly complex features from data, enabling deep learning models to handle intricate tasks like image recognition, speech understanding, and natural language processing.
Diffusion: A process that progressively adds noise to data, transforming it from a clean state towards a state of random noise. Training involves learning to reverse this diffusion process, essentially denoising the data to recover the original distribution. This allows the model to generate new, realistic data (like images or text) by starting from pure noise and iteratively removing it.
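To make the idea concrete, here is a minimal, illustrative Python sketch of the forward (noising) half of diffusion. The linear noise schedule, function name, and example values are assumptions for illustration, not taken from any particular diffusion model.

```python
import numpy as np

def add_noise(x0, t, num_steps=1000):
    """Forward diffusion: blend clean data x0 with Gaussian noise.

    As t approaches num_steps, the signal is almost entirely noise.
    The linear schedule used here is purely illustrative.
    """
    alpha = 1.0 - t / num_steps          # how much of the original signal remains
    noise = np.random.randn(*x0.shape)   # standard Gaussian noise
    return np.sqrt(alpha) * x0 + np.sqrt(1.0 - alpha) * noise

# Example: a "clean" data point drifts toward pure noise as t grows.
x0 = np.array([1.0, -0.5, 2.0])
print(add_noise(x0, t=10))    # mostly signal
print(add_noise(x0, t=990))   # mostly noise
```

A trained diffusion model learns to run this process in reverse, starting from pure noise and removing it step by step.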
Generative adversarial networks, or GANs: A class of deep learning models for generating new data. They consist of two neural networks: a generator that creates new data points, and a discriminator that tries to distinguish real data from the generator's creations. Through an adversarial training process, the generator learns to mimic the real data distribution, while the discriminator becomes adept at spotting forgeries. This competition drives both networks to improve, ultimately enabling the generator to produce high-fidelity, realistic data.
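The adversarial training loop can be sketched in a few lines. The toy example below assumes PyTorch and invents a trivial "real" data distribution (numbers drawn from a Gaussian around 3); the network sizes, learning rates, and step counts are illustrative, not tuned.

```python
import torch
import torch.nn as nn

# Toy setup: "real" data is drawn from a 1-D Gaussian with mean 3 and std 0.5.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # samples from the real distribution
    fake = generator(torch.randn(64, 8))       # generator maps random noise to candidates

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call its samples real (label 1).
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)).detach().flatten())  # samples should cluster near 3
```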
Generative AI: Refers to artificial intelligence techniques that create new data, like images, text, or music. These models learn the underlying patterns and structure of existing data and use that knowledge to generate novel content that resembles the training data. Generative AI leverages deep learning architectures and techniques like generative adversarial networks (GANs) to achieve this.
Hallucination: Refers to outputs from a generative model that deviate significantly from reality or the source data. These can be nonsensical creations, factual errors, or content biased by the training data. Hallucinations arise from limitations in the model's understanding of the underlying data distribution.
Large language model, or LLM: A complex AI system trained on massive amounts of text data. These models leverage deep learning architectures like transformers to analyze and process information, enabling them to perform diverse tasks in natural language processing.
Machine learning, or ML: A subfield of AI where algorithms improve their performance on a specific task through experience. They learn from data, identifying patterns and relationships without explicit programming. This allows them to make predictions, classifications, or decisions on new data, constantly refining their abilities.
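A small Python sketch shows what "learning from data" looks like in practice. It assumes scikit-learn is available, and the tiny dataset and task (classifying numbers as "small" or "large") are made up for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Toy task: learn to classify numbers as "small" (0) or "large" (1)
# from labeled examples, rather than from hand-written rules.
X_train = [[1], [2], [3], [10], [11], [12]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)           # the "experience": learning from labeled data

print(model.predict([[2.5], [9.5]]))  # predictions on new, unseen inputs -> [0, 1]
```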
Multimodal AI: Refers to AI systems that process and learn from multiple data types, like text, images, audio, and sensor data. It employs data fusion techniques to combine information from these modalities, leading to a richer understanding of the data compared to single-modality approaches. This enables multimodal AI to perform tasks like image captioning, video question answering, and robot perception in the real world.
Natural language processing (NLP): A subfield of AI concerned with enabling computers to understand and manipulate human language. It employs techniques from computer science, linguistics, and statistics to analyze and process written or spoken language. NLP tasks include extracting meaning from text, generating human-like text, and enabling communication between humans and machines.
Neural network: Computational models inspired by the structure and function of the brain. They consist of interconnected nodes (artificial neurons) arranged in layers. These nodes process information and transmit signals to other nodes, mimicking how neurons fire in the brain. By adjusting the connections between nodes (learning), neural networks can perform complex tasks like image recognition, speech understanding, and natural language processing.
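The layered structure can be illustrated with a few lines of Python using NumPy. The weights below are random and untrained; in a real network, learning means repeatedly adjusting these numbers so the output becomes useful.

```python
import numpy as np

def layer(inputs, weights, biases):
    """One layer of artificial neurons: a weighted sum followed by an activation."""
    return np.maximum(0, inputs @ weights + biases)   # ReLU activation

rng = np.random.default_rng(0)

# A tiny network with made-up, untrained weights. During learning, these
# weights (the "connections between nodes") would be adjusted.
x = np.array([0.5, -1.2, 0.3])                          # input features
h1 = layer(x, rng.normal(size=(3, 4)), np.zeros(4))     # first hidden layer
h2 = layer(h1, rng.normal(size=(4, 4)), np.zeros(4))    # second hidden layer
output = h2 @ rng.normal(size=(4, 1))                   # output node
print(output)
```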
Prompt: A user-provided input that guides the model's generation process. It can be text instructions, descriptions, or even existing data.
Prompt engineering: The art of crafting instructions for artificial intelligence models, specifically generative AI models. Imagine it as giving a detailed recipe to a cook instead of just throwing a bunch of ingredients at them. By carefully wording prompts and choosing the right format, you can guide the AI to create exactly what you want, like a specific kind of text, computer code, or even creative writing.
Style transfer: A technique for applying the visual style of one image (e.g., a painting) to another (e.g., a photograph). Deep learning models analyze the "style" (texture, brushstrokes, colors) and "content" (objects, shapes) of both images. The model then generates a new image that preserves the content of the target image but renders it with the artistic style of the reference image.
Temperature: A parameter that controls how random a language model's output is. A higher temperature means the model takes more risks.
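Temperature is easiest to see as a number that rescales the model's scores before it picks the next word. The sketch below is a minimal illustration in Python with made-up scores; real language models use the same idea over a vocabulary of many thousands of words.

```python
import numpy as np

def word_probabilities(logits, temperature):
    """Convert raw model scores (logits) into probabilities, scaled by temperature."""
    scaled = np.exp(np.array(logits) / temperature)
    return scaled / scaled.sum()

logits = [2.0, 1.0, 0.1]                    # made-up scores for three candidate words
print(word_probabilities(logits, 0.5))      # low temperature: probability concentrates on the top word
print(word_probabilities(logits, 2.0))      # high temperature: choices spread out, so output is "riskier"
```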
Training data: The datasets used to help AI models learn, such as collections of text, images, or code.
Transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, such as the words in a sentence or the parts of an image. Instead of analyzing a sentence one word at a time, it can look at the whole sentence and understand the context.
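The mechanism that lets a transformer relate every word to every other word is called self-attention. The NumPy sketch below is a simplified illustration: it uses the word embeddings directly as queries, keys, and values, whereas a real transformer learns separate projection matrices for each.

```python
import numpy as np

def self_attention(X):
    """Simplified scaled dot-product self-attention over a whole sequence at once."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                                   # how strongly each word relates to every other word
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ X                                              # each word becomes a context-weighted mix of all words

# Three "words", each represented by a made-up 4-dimensional embedding.
sentence = np.random.default_rng(1).normal(size=(3, 4))
print(self_attention(sentence))
```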
Zero-shot learning: A type of machine learning technique where a model is able to recognize and classify objects or perform tasks it has never seen before, based on the knowledge it has learned from other related tasks or objects.
--adapted from Syracuse University Libraries
Unless otherwise noted, the content of this guide is either adapted or taken from "A student guide to navigating college in the artificial intelligence era" by Elon University under the Creative Commons Attribution-NonCommercial 4.0 International License