Report on 2025 MLA Generative AI Initiatives Survey
In spring 2023, the MLA-CCCC Joint Task Force distributed a survey and collected feedback from its constituencies on Generative AI. The group used results from that survey to direct its activities in 2023 and 2024, which resulted in blog posts, working papers, and webinars.
In spring 2025, the MLA Generative AI Initiatives issued a follow-up survey to assess changes over time. The survey was intended to learn more from organization members about their needs and interests related to Generative AI, particularly in relation to advocacy efforts and professional resources that the MLA can offer. This blog post reports back to members on trends in the survey, which received 637 responses.
Question 6 asked members “In relation to public advocacy, which, if any, of the following do you think the MLA’s AI task forces should focus on?” providing a selection of options to prioritize in addition to an opportunity for free response. Respondents could choose multiple options. Responses are shown in the bar graph below.

Figure 1: Bar graph of survey responses about priorities for public advocacy.
The most-selected responses indicate a desire for advocacy efforts on the following:
| Advocacy priority | Respondents selecting |
| --- | --- |
| Public awareness of the continuing value of human reading and writing processes, human feedback, and writing, literature, and language teachers in an age of AI | 90.5% |
| Authors’ rights to control whether their text is used to train AI systems | 71% |
| The environmental impact of AI | 57% |
| Proper source attribution in AI outputs that include text or ideas from specific human authors | 55.43% |
Consensus among MLA members focused on communicating publicly about the value of human-led reading and writing, authors’ copyrights, GenAI’s contribution to climate change, and proper citation of AI outputs that are based on human authors’ texts and ideas. Interventions in the problematic designs and uses of GenAI require experts across multiple disciplines, so the MLA is well-poised to advocate on behalf of its members in some areas but not others. For example, the Task Force and the MLA are well-positioned to address three of the four concerns above. We have drafted two statements (forthcoming spring 2026): one advocates for the Humanities in reading and writing, and one argues against the integration of AI into learning management systems and against allowing agentic AI to complete assignments within them; both provide principles and guidance on source attribution.
We point to other resources on the environmental impact of AI. While we may not be positioned to participate in the design of data centers, we can use the following sources as pedagogical tools to raise our students’ and colleagues’ awareness of the energy and water costs of even a single prompt. The four examples below offer quick ways to discuss this issue with students; many other resources abound.
- Nine takeaways: webpage comparing energy consumption
- What Uses More
- We did the math on AI’s Energy Use
- Explained: Generative AI’s environmental impact

Figure 3: Bar graph showing responses about member use of AI.
Over half of respondents (50.55%) indicated that they were not using AI at all; the next most common response was “For personal use” (30.59%), followed by “For administrative tasks, emails, writing letters, or department or program tasks” (19.81%). These data from closed-ended questions are mirrored by the open-ended responses to Question 9, which raise important questions that educators need to grapple with. Both critical adoption and GenAI refusal require a baseline of knowledge: do educators need to be users of the technology to understand it and then take a stance? Can one develop a full understanding of what GenAI can do without experimenting with prompting, evaluating output, and trying different chatbots, each of which has some distinct features? What responsibilities to our students and colleagues are we abdicating if we leave education on the uses of AI technology to others? And how can we guide students through this increasingly ubiquitous technology without a full understanding of its capacities?
Summary of Question 9: Share anything else related to your professional needs on the topic of AI
Over 200 respondents shared a range of professional needs, of which we highlight the most significant: designing meaningful assignments that reduce students’ unauthorized use of GenAI; strategies for resisting hype discourse from GenAI tech companies, especially when our administrators buy into it; methods of convincing students of the value of their own learning, critical thinking, voice, and style; and resources for teaching AI literacy, both in thinking about what it means to write under the conditions of GenAI access and in raising students’ awareness of the costs of GenAI use to their learning, the environment, Black labor, and intellectual property rights.
Respondents also used this space to express their real and valid concerns about GenAI. While some respondents expressed cautious optimism about GenAI, calling for strategies to meet this moment and center the value of the Humanities, many more suggested that LLMs as currently designed are destructive to the environment and diminish the worth of the Humanities and of college education in general.
Responses to Question 9 suggest members need support with a variety of critical AI literacy practices:
- refusing GenAI outright;
- strategizing what LLM integration looks like while prioritizing human critical thinking and creativity;
- imagining the theoretical implications of LLMs for writing, literature, and languages.
We note the multiple perspectives on GenAI present in the survey results. Students are exposed to a variety of approaches, from refusal to enthusiastic adoption and all points in between. Each approach will teach them different things and cultivate valuable skills. As a Task Force, we believe it is okay to have multiple positions, including contradictory ones, as long as we have some ground rules to prevent abuses.
However, across all comments, regardless of opposing views, we note that 90% of respondents (over 500 members) want the MLA to advocate for its membership and promote a human-centered approach to teaching, learning, and research across writing, language, literature, and the Humanities more broadly. To honor this request, the MLA Task Force on AI in Research and Teaching will write two public-facing one-page statements. The first is a statement on the value of human-centered reading and writing practices, even when instructors allow a hybrid AI approach; this statement responds to the Trump Administration’s AI Action Plan. The second addresses the proliferation of AI integration into educational technologies and the concern that AI agents will be allowed to autocomplete student work within learning management software.
Finally, we wish to share the continued affirmation of the following values that members expressed, whether oriented toward critical adoption of GenAI or aligned with refusal. We might ground our professional conversations in these values as a common starting point.
- Critical thinking and critical reading: Our disciplines have centered engagement with the reception and production of language (oral, textual, and multimedia), which leads us to construct truths about the human experience. In what ways can these practices continue to be at the center of our work with students, and how might they be challenged by (or adapted to) navigating decisions around GenAI use?
- Process (writing, reading, creating, responding): In our areas of expertise, process has sometimes been taken as a given, but GenAI has ignited a re-articulation of our appreciation for process and our recognition of its necessity. How might we make the process of reading, writing, and learning not just an assumption but a core, documented, and visible part of the work?
- Ethical responsibility with technology (what, if any, are responsible ways of using it): Our discipline has crafted strategies that blend students’ information literacy with digital literacy, highlighting the importance of academic integrity and the avoidance of plagiarism. GenAI expands our awareness of how using LLMs affects the environment and draws on the physical, creative, and cognitive labor of human beings. How can our expertise in broader conversations about the creation and design of writing technologies inform our teaching about the extractive nature of GenAI tools?
- Labor and expertise: Language and literacy educators have spent time and energy developing expertise, and they continue to labor in a knowledge economy that has been disrupted by GenAI. How do we intervene in some administrators’ efforts to use GenAI in ways that exacerbate current austerity measures across university and college campuses? How can we raise awareness of our discipline’s value and authority in this realm and claim a central leadership role in decision-making at the intersection of language, literacy, and AI?