Statement on Using Generative AI


Statement Production Details

Contributors: Ava Bindas, Pearl Chaozon Bauer, Kristen Layne Figgins, H Fogarty, Ryan Fong, Cherrie Kwok, Indu Ohri, Sophia Hsu, Adrian S. Wisnicki (lead)

Publication Date: 2023

Full Statement

The release of ChatGPT on 30 November 2022 launched a new era of cultural interest in artificial intelligence (AI) – especially generative AI – in many facets of daily life and work. Some have responded with excitement to the diverse opportunities created by generative AI; others have responded with concern, raising fears about AI unpredictability, the large-scale spread of disinformation, and even the extinction of humanity.

Such polarization hearkens back to other eras marked by the rapid rise of multiple technologies, including the Victorian era, which scholars have recently begun to connect to developments in AI (see, e.g., Ward, Goodlad, Hsu et al.). Those eras remind us of the transitory nature of such polarization, the capacity of humans to take proactive roles in the face of technological revolution, and the inherent limitations of public discussion about technology. Despite widespread current debates about generative AI, for example, issues related to colonialism, race, class, gender, sexuality, and ableism have received comparatively minimal attention, especially from Big Tech, the leaders of the current AI development drive.

At Undisciplining the Victorian Classroom (UVC), we are very mindful of these issues and complexities, and we realize that generative AI is not necessarily a quick fix or neutral tool. We recognize the democratizing potential of generative AI, especially when it is made freely available for everyone on the internet. Yet we also contend, in solidarity with organizations like the American Council of Learned Societies (ACLS), that it is essential to place “digital tools and technologies in the hands of communities that have historically been alienated from them.”

Additionally, we acknowledge the problematic labor practices in the history of how generative AI has been built and continues to be developed, the environmental impact of training generative AI models, and the potential for generative AI to perpetuate the worst biases of society. Such elements run counter to UVC’s commitments to, for example, fostering equitable labor practices and building a site centered on minimal energy consumption.

That said, we at UVC also believe that the present moment offers a unique opportunity to engage generative AI in the service of our mission to support innovative pedagogy and take teaching seriously as a critical practice. In doing so, we hope to model how scholars-at-large might use generative AI technologies thoughtfully in their pedagogy, rather than shy away from them. In fact, we believe that engaging with generative AI directly gives us and our students the opportunity to interrogate these technologies and critically determine the role that they play in our lives and scholarship.

Actively using generative AI also presents a chance to underscore that these technologies cannot substitute for the unique, multifaceted expertise that we humanities scholars and instructors bring to the academy and to our teaching. Such technologies can support our work, but not replace it. As a result, we are open to our contributors embedding and using generative AI in every facet of their submitted materials, whether it be when drafting methodological essays or syllabi, or as part of engaging students through lesson plans and assessments, or for many, many other uses we can only start to imagine. Nothing is off the table.

However, we also ask that the use of generative AI in materials submitted to UVC be done critically, ethically, openly, and intentionally. For example:

  • Scholars should cite, document, and critically reflect on any significant use of generative AI, although defining “significant” may prove tricky as AI becomes more and more embedded in modern life;
  • AI-enabled publications might also include supporting items, such as prompts used to engage with generative AI or screenshots of conversations with specific large language models (LLMs) such as ChatGPT, Claude, Bard, Bing, etc.;
  • All AI-generated text, ideas, and other materials should be confirmed to be accurate and fact-based;
  • Finally, the submission materials should clearly explain why the incorporation of generative AI in potential UVC materials supports the overall antiracist, anticolonialist, and antiableist mission of UVC.

We are likewise happy to work with our contributors to brainstorm appropriate ways to acknowledge and reflect on AI use in the materials.

Working with the latest AI technologies, of course, is a work-in-progress for all of us; we’re all inventing this as we go along. As a result, we envision this statement as the start of a conversation with our contributors and readers about the use of generative AI in scholarship, not the end of it. We welcome feedback on the statement itself as well as suggestions for AI policies that will help ensure that the use of AI on our site aligns with the overall goals, practices, and mission of UVC.

Works Cited

Anderson, Nick. “WVU’s Plan to Cut Foreign Languages, Other Programs Draws Disbelief.” The Washington Post, Aug. 18, 2023.

“Generative Artificial Intelligence.” Wikipedia, 2023.

Goodlad, Lauren M. E., ch. Critical AI. Rutgers, 2023.

Goodlad, Lauren M. E., ed. Critical AI. Duke University Press, 2023.

Hsu, Sophia, et al. “Victorian ‘Artificial Intelligence’: A Call to Arms (Re-recording).” Critical AI, YouTube, 2023.

Kumar, Ajay, and Tom Davenport. “How to Make Generative AI Greener.” Harvard Business Review, July 20, 2023.

Nurse, Keyanah. “ACLS Community Message for August 2023.” American Council of Learned Societies, Aug. 9, 2023.

Perrigo, Billy. “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” Time, Jan. 18, 2023.

Sandra Day O’Connor College of Law. “ASU Law to Permit Use of Generative AI in Applications.” Arizona State University, July 27, 2023.

Scott, Brianna, et al. “How AI Could Perpetuate Racism, Sexism and Other Biases in Society.” NPR, July 19, 2023.

Tan, Rebecca, and Regine Cabato. “Behind the AI Boom, an Army of Overseas Workers in ‘Digital Sweatshops.’” The Washington Post, Aug. 28, 2023.

“Values and Practices.” Undisciplining the Victorian Classroom, 2021.

Ward, Megan. Seeming Human: Artificial Intelligence and Victorian Realist Character. The Ohio State University Press, 2018.

Tile/Header Image Caption

Lovelace, Ada. “Diagram for the Computation by the Engine of the Numbers of Bernoulli,” excerpt, colors inverted, significant magnification. Print, 1842. Wikimedia Commons. Public domain. In this diagram and the accompanying note, Lovelace describes an algorithm for Charles Babbage’s Analytical Engine which, Wikipedia indicates, “is considered to be the first published algorithm ever specifically tailored for implementation on a computer.”

Page Citation (MLA)

Ava Bindas, Pearl Chaozon Bauer, Kristen Layne Figgins, H Fogarty, Ryan Fong, Cherrie Kwok, Indu Ohri, Sophia Hsu, Adrian S. Wisnicki (lead). “Statement on Using Generative AI.” Undisciplining the Victorian Classroom, 2023, https://undiscipliningvc.org/html/about/statement_on_using_generative_ai.