Consult with your professors regarding acceptable use of AI for your classes!
Also be familiar with F&M's policies: the Academic Honesty Policy and the Student Code of Conduct.
[Image: "Creating a positive AI-infused world is up to us," used via Creative Commons]
Here are some ethical issues related to AI:
Copyright and ownership: AI models are frequently trained on copyrighted works without the creators' permission. This raises legal and ethical questions about ownership. Treat AI output like any other information – don't assume it's free to use without checking the rights on the original source. Favor AI tools that are transparent about their training data and licensing.
Environmental impact: The development and use of AI systems are increasing energy demand, as are cryptocurrency mining, streaming video, and other computing-intensive technologies. Don't think of these tools as limitless resources that can be carelessly wasted or used for trivial purposes.
Privacy and surveillance: Sharing personal information with an AI system may put you or others at risk. The information could be accessed and used in ways beyond your control by government authorities or private corporations. This is especially relevant in authoritarian states, where political control and mass surveillance are common. Configure AI settings so your data is not used for training new models, and never share information about others without their permission.
Bias and accuracy: AI models are developed using vast amounts of information, much of it drawn from English-language sources. As a result, AI models can inherit prejudices and factual errors found in the source materials. AI output may also be influenced by government propaganda and censorship or by corporate restrictions. It is up to AI users to recognize biases and not perpetuate false or misleading narratives and stereotypes.
Misuse and disinformation: AI tools can generate fake photos, videos, and audio. They can provide instructions for mounting disinformation campaigns, launching cyberattacks, or making weapons. Even seemingly innocuous uses, such as AI-created social media posts, can spin out of control. Use these powerful technologies with utmost care.
As generative AI becomes increasingly integrated into research workflows, it brings both transformative potential and pressing challenges. This table highlights key benefits, such as enhanced productivity, personalization, and scientific discovery, alongside critical concerns, including data privacy, misinformation, and ethical use. Understanding both dimensions is essential for researchers to engage responsibly with AI tools and contribute to the ongoing conversation about their role in academia and society.
| Challenges of Generative AI | Benefits of Generative AI |
| --- | --- |
| Reliability, transparency & misinformation: Ensuring AI-generated content is accurate and trustworthy remains a challenge. Risks of misinformation highlight the need for verification and transparency in scholarly contexts. | Efficiency, productivity & innovation: Automates routine and complex tasks, boosting innovation in science, engineering, healthcare, and the arts. |
| Cultural impacts: AI systems can perpetuate cultural and linguistic bias, necessitating awareness of diverse representation and social impact. | Personalization & enhanced user experiences: Delivers customized content in education, healthcare, and entertainment, improving outcomes and engagement. |
| Privacy & data security: Use of personal or sensitive data raises ethical concerns and demands secure data handling standards in AI research. | Advanced data analysis for decision-making: Enables large-scale, complex data analysis for precision in science, medicine, climate modeling, and economics. |
| Intellectual property & ethical use: Raises questions around ownership, plagiarism, and potential misuse such as deepfakes or misinformation campaigns. | Breaking down language barriers: AI translation tools support global collaboration in academia, business, and diplomacy. |
| Access inequality & economic impacts: The digital divide and potential job disruption from automation highlight the need for equitable AI access and research. | Improving accessibility & healthcare: Assists individuals with disabilities and enhances medical diagnostics, treatments, and surgical precision. |
| Environmental sustainability: Training large AI models consumes significant energy; researchers are exploring sustainable approaches. | Enhancing education & environmental sustainability: Enables personalized learning and helps manage ecosystems and forecast climate-related events. |
| Regulatory, legal & ethical frameworks: Lack of comprehensive regulation necessitates research into governance, fairness, and ethical AI deployment. | Smarter business strategies & market insights: Predicts trends, analyzes markets, and supports agile business decisions through AI-powered insights. |
| Accessibility & inclusivity: AI systems must be designed to accommodate diverse users, including those with disabilities, ensuring equitable access. | Strengthening safety & public welfare: Improves cybersecurity, emergency response, and public safety through real-time data analysis and threat detection. |
| Psychological & societal dynamics: AI influences social norms and personal identity; researchers must assess its psychological and cultural effects. | Cultivating creativity & expanding the arts: Enables co-creation of music, visual art, storytelling, and interactive media by artists and non-artists alike. |
| Global governance & technological equity: Concentration of AI power among a few companies and nations poses global fairness and governance concerns. | Fostering scientific discovery: AI accelerates discovery in fields like physics, biology, and medicine by simulating, analyzing, and modeling faster than humans can. |
Unless otherwise noted, the content of this guide is adapted or taken from "A student guide to navigating college in the artificial intelligence era" by Elon University, used under the Creative Commons Attribution-NonCommercial 4.0 International license.