Generative artificial intelligence (AI) tools are now embedded in popular software, tempting students and educators alike to regard these commercial applications as reliable “copilots” and “tutors.” As scholars whose research and teaching are closely bound up with the reading, analysis, and writing of texts, English department instructors believe it is important to underscore that “generative AI” is the product of centuries of human labor. This includes the immense troves of writing used as “training data” without consent or credit; the work of technologists in fields like computer science and linguistics responsible for statistically “modeling” that data; and the labor of human annotators whose ongoing work is necessary to make such “AI” seem more reliable and human-like.
Discussing the ethics of generative AI (and AI more generally) requires developing and expanding new critical literacies. As humanities scholars and educators, we are deeply committed to this project. Although AI chatbots are commercial tools that were not designed with education in mind, they are now being heavily promoted despite limited research on their impact. Indeed, early studies on the use of chatbots for tutoring, writing, and brainstorming suggest that these systems can undermine learning, produce homogeneity, and diminish students’ confidence and self-efficacy. The underlying models on which generative systems are currently built have been shown to recapitulate historical biases and stereotypes; infringe on copyright protections; surveil users, including teachers and students; leak data; expend enormous amounts of energy, water, and investment; and concentrate tremendous power and resources in the hands of a tiny elite.
At Rutgers, student learning goals in the humanities are carefully crafted to emphasize skills in critical thinking, research, textual analysis, and the use of evidence. That is particularly true of writing courses that aim to develop habits of reading and writing that students need to meet rhetorical challenges creatively and to take risks intellectually. Learning goals in literature courses emphasize the ability to evaluate and critically assess sources and use the conventions of attribution and citation correctly, as well as to analyze and synthesize information and ideas from multiple sources to generate new insights. Depending on how they are used, generative AI tools can undermine all these goals.
In accordance with Kathryn Conrad’s “Blueprint for an AI Bill of Rights for Education,” we recommend that 1) all instructors be able to make decisions about the use of AI in their classrooms so as best to achieve the learning goals for their courses; and that 2) RU-NB administration consult with faculty before purchasing and/or endorsing the use of AI technology for coursework. We further endorse the current OTEAR recommendation that all Rutgers syllabi include clear policies to help students adhere to the university’s standards for academic integrity and preclude the loss of crucial opportunities to develop skills and acquire learning. We recognize that, at present, AI detection tools subject students to data surveillance while delivering faulty and even discriminatory results.
Generative applications are now widely available, and it is important for students to understand how they work, what they are capable of, and how to contend with their ethical implications. Rutgers should strive to provide resources for building such critical AI literacies from the ground up, in dialogue with students, instructors, and other stakeholders. We regard the attainment of critical AI literacies as a process of equipping students with the knowledge they need to exercise judgment about whether or how to use these imperfect and, so far, largely untested commercial technologies.