Concerns surrounding generative artificial intelligence (AI) have escalated on college campuses, with discussions primarily focusing on student cheating. However, these conversations often overlook broader ethical issues that higher education institutions and technology companies must address, including the use of copyrighted material in AI training and the protection of student privacy.

As a sociologist specializing in AI and its impact on work, I recognize the importance of evaluating these ethical questions from multiple perspectives. The responsibility for the ethical use of AI should not rest solely on students. Instead, it is crucial that technology companies and higher education institutions share this burden.

Challenges of Banning AI Tools

Some universities have opted to ban generative AI products like ChatGPT due to concerns about academic integrity. While there is evidence that some students misuse these tools, outright bans ignore research demonstrating that generative AI can enhance academic achievement, as well as studies highlighting its potential benefits for students with disabilities.

Colleges and universities have an obligation to prepare students for AI-integrated workplaces. A growing number of institutions have begun incorporating generative AI into their curricula, offering students free access through school accounts. However, this approach presents ethical challenges of its own. Not all students have equal access to these tools: at institutions without such agreements, students who cannot afford subscriptions may be left behind.

Moreover, students who rely on free AI tools face significant privacy risks. When they use these platforms, such as when asking for help brainstorming paper ideas, they generate valuable data that companies use to improve their models. Paid services, by contrast, often provide stronger data protections and clearer privacy terms. Institutions can address both the equity and the privacy problem by negotiating vendor licenses that guarantee student privacy and provide free access to AI tools.

Reassessing Academic Integrity Policies

In the book “Teaching with AI,” authors José Antonio Bowen and C. Edward Watson argue for a reevaluation of academic integrity policies. I share their perspective but emphasize that when integrating generative AI into curricula, institutions must scrutinize the ethical implications of using student data.

Penalizing students for “stealing” words from large language models raises ethical questions of its own, given that tech companies gather training data from across the web, often without attribution. These companies use copyrighted materials, some allegedly sourced from piracy sites, to train their models. Asking a chatbot to write an essay is not the same as training a model on scraped text, but both practices involve taking others’ words without credit, and both deserve ethical scrutiny.

Higher education institutions should examine AI model outputs with the same rigor they apply to student work. If they have not thoroughly vetted those outputs before signing vendor agreements, they stand on shaky ground when pursuing traditional academic integrity cases against students. This calls for rethinking academic integrity policies to reflect the realities of AI technology.

Student data management under AI vendor agreements poses a further concern. Students may worry that their interactions are logged, linked to their identities and later used as evidence in academic integrity investigations. To address these concerns, institutions should openly communicate the terms and conditions of their AI agreements to the campus community. If college leaders do not fully understand these agreements, or are unwilling to disclose them, that is a sign the institution’s AI strategy needs a comprehensive review.

The privacy stakes are heightened by the fact that people increasingly turn to generative AI for personal support. According to OpenAI, approximately 70% of ChatGPT usage is for non-work-related purposes, and CEO Sam Altman has acknowledged that users often seek life advice and emotional support from the chatbot. The tragic case of a teenager who died by suicide after extended interactions with ChatGPT underscores these risks and highlights the need for institutions to prioritize student safety alongside privacy.

To reduce the risk of students forming unhealthy emotional attachments to chatbots, institutions could set clear guidelines that school-provided AI tools are to be used for academic purposes only, paired with regular reminders about mental health resources on campus. Training faculty and students in responsible AI use will also be vital to promoting ethical engagement with these technologies.

Ultimately, higher education institutions must recognize their responsibilities in this evolving landscape. If they find the ethical obligations that come with AI too burdensome, they should reconsider whether and how to deploy the technology. Addressing these challenges proactively is essential to ensuring that the integration of generative AI into education is both beneficial and ethically sound.