Introduction
While the integration of artificial intelligence has transformed the higher education landscape over the last decade (Grassini, 2023), new technologies in academic contexts often provoke negative attitudes and concerns (Ismatullaev and Kim, 2024). As seen in the proliferation of commentary across higher education communication platforms and mainstream media after the public release of ChatGPT in November 2022, the arrival of generative AI in higher education is no exception. Concerns raised by faculty included the lack of value and benefit of ChatGPT, the risk of cheating and plagiarism, teachers’ lack of experience in using it for teaching purposes, and perceived difficulties in use (Iqbal et al., 2022). While the study reported by Iqbal et al. was conducted soon after the release of ChatGPT, these themes and attitudes have remained relatively constant, as found in similar studies published since then.
Several studies have highlighted how negative attitudes are often tied to the risk of plagiarism and cheating (Nguyen, 2023). This is one of the most recurring concerns, legitimized by the fact that language models are trained on massive amounts of existing data. Traces of this data can surface in AI-generated text, organized in ways that cannot be easily detected without appropriate tools (Khalil and Er, 2023). While there is ongoing debate as to whether the use of generative AI constitutes plagiarism, using it to write complete texts is considered cheating in most situations.
Another source of reluctance about generative AI is the loss of critical thinking and cognitive effort that could result from students using such tools (Gandhi et al., 2023). If critical thinking and cognitive effort are essential to the learning process, it is logical to assume that overreliance on generative AI would be detrimental. To prepare for the likely expansion of AI tools in education, and to prevent them from becoming a hindrance to critical thinking, a number of studies have focused on how to adapt the use of AI tools so as to turn them into a resource for learning (see Dergaa et al., 2023; Qawqzeh, 2024; Wu, 2024).
At the same time, a number of potential benefits of generative AI have also been highlighted in studies on faculty and students’ attitudes, notably that it shortens and facilitates working processes (Lee and Perrett, 2022), that it could be used as a customized feedback generator (Kim and Kim, 2022), or that it could give students quick access to information and thereby help them streamline their writing process (Iqbal et al., 2022) while developing a broader viewpoint on researching information (Darwin et al., 2024). These possibilities and others, such as automatic text generation or translation, have been widely studied in the literature on generative language models (Guo and Lee, 2023; Gao et al., 2023; Huang and Tan, 2023; Imran and Almusharraf, 2023).
Irrespective of AI’s benefits or drawbacks, it is crucial to examine current attitudes toward generative AI for supporting academic writing, particularly because such attitudes can affect the integration of AI in Writing Centers. Regarding the differing perceptions of students and teachers, for example, teachers have been found to be stricter and more cautious about AI than students, stating more frequently that AI is detrimental to learning or that it should be banned from education altogether (Ma, 2023). This difference could be tied, among other factors, to differences between students’ and teachers’ actual use and knowledge of AI-powered tools. While students are often frequent users of AI (Schiel and Bobek, 2023), teachers’ AI usage rates and skills tend to be lower (Chounta et al., 2022; Dilzhan, 2024). Lower frequency of use and lower competency are potential factors affecting how AI is accepted (Galindo-Domínguez et al., 2024).
The research we report in this chapter addresses the anticipated growth of AI tools in higher education. In this context, understanding the attitudes of both students and teachers is important for strategically accompanying that growth. At the Graduate School Writing Center at Clermont Auvergne University (France), peer writing tutors have encountered a range of student reactions to AI tools. Some students have outright refused the use of ChatGPT during sessions, while others have been hesitant due to concerns about plagiarism, doubts about the quality of AI-generated text, or restrictions imposed by their instructors. These reactions have made it difficult to incorporate AI-assisted techniques into writing tutoring sessions.
To address these concerns, we have conducted a perception study involving both students at the Writing Center and their teachers. The study aims to answer the following research questions:
- What are teachers’ and students’ main concerns regarding the use of AI in student learning and academic writing tasks?
- How do negative perceptions of AI impact students’ willingness to use AI in their writing processes?
Addressing these questions is crucial for our goal of establishing research-based principles to more effectively integrate generative AI into Writing Centers and support students in developing essential writing skills. In our concluding discussion, we propose strategies for overcoming negative perceptions and concerns about AI among faculty and students. We explore ways to train tutors on how to discuss and use AI effectively during tutoring sessions. Additionally, we suggest outreach initiatives for Writing Centers to engage faculty and promote best practices for utilizing AI in academic writing.
Method
In June 2024, 173 Master’s-level student users of the Writing Center and sixteen teachers were contacted by email. Thirty-two students and ten teachers agreed to participate in our study. To measure each cohort’s use and perception of AI, we developed two separate questionnaires, based on Zablot et al. (2025) and Demonceaux et al. (2025). The surveys asked participants about their use of AI in academic contexts, covering frequency of use, perceived skill level, and AI’s impact on their academic activities. Another section sought their opinions about AI in higher education, including views on banning it, perceived risks, and beliefs about the attitudes of other students and teachers toward AI. All responses to the questionnaires were anonymous. Additionally, we conducted short semi-structured interviews with two Writing Center tutors to explore their experiences and feelings about students’ reactions to using AI in tutoring sessions.
Results
Concerning the use of AI technology, responses to the student survey indicate that only three of the thirty-two students had never used any form of generative AI; in contrast, only three out of ten teachers had used it for their university activities. The teachers’ main concerns about adopting generative AI included doubts about the accuracy of the information generated and a lack of understanding of its purpose.
Teachers’ relatively low interest in generative AI contrasts with students’ usage. Figure 1 shows the frequency of AI use by students for university assignments, while Figure 2 indicates the main purposes for which students use AI. As can be seen in Figure 1, nearly two-thirds of the students indicated a relatively high rate of use, either daily or several times a week. Students principally used generative AI for help with different aspects of their writing (Figure 2).
Figure 1. The frequency of students’ AI use for university assignments (n = 32)
Figure 2. Students’ purposes in using AI (n = 32)
Students’ frequency of use is further reflected in terms of general attitudes towards AI use. Figures 3 and 4, respectively, display results for students’ and teachers’ general attitudes about AI use in higher education. Answers ranged from strongly disagree to strongly agree, with an opt-out (‘I don’t know’).
As shown in Figure 3, a large majority of students agreed that aspects of conversational assistants could be beneficial to learning and disagreed with prohibiting their use in academic contexts. However, concerns appear as well: two-thirds of the students agreed that the use of conversational assistants could limit their ability to learn independently, and one-third thought that such use was not compatible with the development of critical thinking. In addition, students raised concerns about the unreliability of AI-generated content, leading some to think that conversational assistants should not be used (Figure 3).
Figure 3. Students’ attitudes towards AI use in higher education (n = 32)
Similarly, Figure 4 shows that most teachers are favorably disposed toward using AI in teaching and agree that it can be compatible with students’ critical thinking. Opinions about the potential threats of AI are evenly spread. However, teachers are proportionally more cautious than students about AI use and lean more toward banning its use at the university level. They also show greater divergence in attitudes than students, which is in line with their lower frequency of use.
Figure 4. Teachers’ attitudes towards AI use in higher education (n = 10)
Concerning the potential threats of AI for higher education, in the teacher survey, participants were asked to select three major potential threats from a list of eight. Teachers most frequently considered the weakening of students’ critical thinking abilities and the potential negative effect on their writing skills to be the main threats posed by AI. In addition, plagiarism was considered to be a central concern. Figure 5 presents these results.
Figure 5. Teachers’ perceptions about potential threats of AI for higher education (n = 10)
Discussion
Overall, the most notable observations from our perception study are the largely positive attitudes towards AI among participants and the lack of significant rejection of AI. However, this contrasts with the writing tutors’ interviews, in which they reported that many students were reluctant to use AI during tutoring and that refusals were common. The discrepancy between tutors’ comments and our survey results suggests that AI-reluctant students may not have participated in the survey. To address this, we cross-referenced the survey results with the tutors’ interviews to gain a better understanding of the reasons behind students’ avoidance. This will help us identify the main attitudes towards AI and develop strategies for the Writing Center to mitigate negative impressions, as discussed below.
For example, the threat of plagiarism was shown in the surveys to be an important concern for teachers and cited by writing tutors as the most common reason students refused to use AI during tutoring sessions. This concern likely stems from uncertainty about how to properly integrate AI-generated material without crossing ethical boundaries. This caution is understandable given the severe consequences of plagiarism. However, the Writing Center aims to model the use of ChatGPT in ways that do not lead to plagiarism or replace students’ written work with AI-generated text.
Tutors also believed that students’ reluctance was fueled by a lack of understanding of how AI can be used and by teachers’ frequent prohibitions, which cited plagiarism risks. Despite tutors’ explanations that using AI does not necessarily lead to plagiarism or cheating, students often refused due to these concerns. Research has shown that insufficient knowledge about a technology and a lack of trust in it can lead to its avoidance (Galindo-Domínguez et al., 2024; Ismatullaev and Kim, 2024). This situation presents a challenge for tutors, who must balance suggestions for AI use with ensuring students’ comfort. Tutors sometimes avoided recommending AI when they sensed it would not be well received. Both tutors interviewed reported feeling uneasy when suggesting ChatGPT during tutoring sessions. This unease stemmed from frequent negative student reactions and from the tutors’ own initial doubts about AI. The issue complicates the tutor-student relationship and hinders the study of AI’s impact on learning.
A short-term solution could involve changing how AI is introduced in tutoring sessions. For example, instead of merely suggesting AI, tutors could directly use ChatGPT to generate a text with issues and ask students to identify and improve these issues. This approach avoids plagiarism and could engage students more effectively by presenting a quick challenge, potentially stimulating their engagement and motivation (Hamari et al., 2016; Khan et al., 2017).
In the survey results, teachers also indicated concern that AI might weaken students’ critical thinking skills. This frequent concern highlights the need to foster a critical use of AI that aligns with students’ learning needs. The Writing Center’s approach with ChatGPT in tutoring sessions aimed to guide students in using AI as a tool, not a replacement, in the learning process. For instance, if a student has a recurring writing issue, ChatGPT can generate a text with similar problems, and the tutor can critically guide the student to identify and correct these issues. In this regard, ChatGPT can serve as an infinite source of solved examples to train specific skills. We believe it is important for potential users to understand that AI does not necessarily replace the user and can be beneficial for those who explore its use in new ways.
The unreliability of AI-generated content was also a major factor in negative attitudes, with teachers frequently citing it as a reason for their apprehension. Among students, nine out of thirty-two agreed that conversational assistants should not be used due to content reliability issues. Concerns about generating misleading or incorrect information are significant barriers to AI adoption in academic contexts (Peters and Visser, 2023). In response, the Writing Center’s objective is to promote a critical and informed use of AI, developing students’ ability to evaluate the reliability of AI-generated content. By clearly stating that language models can make mistakes, we aim to encourage students to reconsider information critically. This approach mitigates misinformation risks and trains students to use AI as a supplementary resource rather than a definitive answer.
The most significant difference between students and teachers found in our study is the frequency of AI use: students use AI tools much more frequently than teachers for various tasks (Schiel and Bobek, 2023). Frequent use helps individuals discover useful aspects of AI and feel more comfortable using it; conversely, not using AI makes it harder to experience its benefits. This discrepancy leads to differences in knowledge and skill levels, widening the usage gap and affecting attitudes towards AI integration in academia. While teachers in our study showed high interest in AI, they perceived their skill level as low, as similarly observed by Galindo-Domínguez et al. (2024). This suggests that negative perceptions may stem mainly from a lack of skill and knowledge among some teachers. Similarly, one tutor who was initially doubtful about AI became more confident and supportive of its use after learning how it works and discovering its benefits.
These observations highlight the impact of actual use on the intention to use AI. Since AI avoidance is a concern teachers often relay to students in their classrooms, teachers’ advice against using AI significantly affects students’ attitudes. This makes tutor guidance a key factor in improving students’ perception of AI and their intention to use it, in turn helping them learn to use it effectively.
Conclusion
This study highlights the diverse attitudes towards AI among students and faculty at the Graduate School Writing Center at Clermont Auvergne University. While there is general recognition of AI’s potential benefits and little outright rejection, concerns remain, particularly about plagiarism and academic consequences on the part of students and, for teachers, about the reliability of AI-generated content and its impact on critical thinking. These findings align with other studies on attitudes towards AI in higher education (Al Darayseh, 2023; Ma, 2023).
We observed significant differences in AI use and attitudes between students and teachers, with students using AI more frequently and being less concerned about its consequences. We also found that infrequent use, skill level and knowledge level contribute to less positive attitudes towards AI in academia. Tutors’ experiences revealed that these attitudes also affect their confidence in suggesting AI use during sessions.
This study was limited by the low response rate and potential participant bias, as those strongly opposed to AI might have chosen not to answer the survey. To address this, we supplemented our findings with the writing tutors’ interviews to gain a deeper understanding of faculty and student attitudes towards AI. We believe Writing Centers and tutor training can play a significant role in mitigating concerns about AI by integrating its use in ways that stimulate critical thinking and avoid plagiarism, enhancing students’ understanding of AI, and reassuring them about its actual risks. With these insights, we will adapt future tutoring sessions at the Writing Center to better address these perceptions, allowing for more effective studies on the impact of AI on the learning of academic writing.