Research-based guidelines for building more targeted Writing Center actions: Faculty and student views on AI for academic writing

Recommandations fondées sur la recherche pour des actions plus adaptées en Writing Center : Perceptions des enseignants et des étudiants sur l’usage de l’IA pour l’écriture académique

Abstracts

There is mounting evidence that artificial intelligence can be beneficial for learning academic writing tasks. Accordingly, its potential for supporting writing peer tutors’ work with students in Writing Centers has been garnering increasing attention. For example, AI can quickly assess the structure of a text and generate feedback for tutors based on the criteria they specify in their prompts. Despite the enormous potential of AI to support student learning during writing peer tutoring sessions, there has been considerable pushback from faculty and higher education institutions, concerned about its negative impact on students’ critical learning. These negative attitudes can in turn affect students’ receptiveness toward learning how they might use AI strategically to support their writing tasks. This study examines the role Writing Centers might play in mitigating these negative perceptions. The context for the study is a graduate school Writing Center at a mid-sized university in France. Two comprehensive surveys, targeting 10 faculty members from STEM and HHS fields and 32 graduate students from the same fields, were designed to explore instructors’ and students’ perceptions of the benefits and drawbacks of AI in academic writing tasks and its potential usefulness in tutoring sessions. This study is part of a broader inquiry into perceptions of AI across higher education institutions in France and Quebec. The results reveal underlying attitudes and beliefs about AI, providing research-informed principles for orienting Writing Center actions, including training tutors in how to communicate about and work with AI during their sessions, and expanding outreach initiatives to bring faculty on board with emerging best practices. Such studies further lay the groundwork for future explorations of how AI can be leveraged in Writing Centers to help students develop core writing competencies.

Résumé court

Il existe de plus en plus de preuves que l’intelligence artificielle peut être bénéfique pour l’apprentissage de la rédaction académique. En conséquence, son potentiel pour soutenir le travail des tuteurs d’écriture avec les étudiants dans les Writing Centers attire de plus en plus l’attention. Il a été observé, par exemple, que la structure des textes peut être rapidement évaluée par l’IA, qui génère ensuite des retours pour les tuteurs tout en tenant compte des critères qu’ils ont proposés dans les prompts. Malgré le potentiel de l’IA pour soutenir l’apprentissage des étudiants lors des séances de tutorat en écriture, il y a eu une résistance considérable de la part des enseignants et des institutions d’enseignement supérieur, préoccupés par son impact négatif sur l’apprentissage critique des étudiants. Ces attitudes négatives peuvent à leur tour affecter la réceptivité des étudiants à apprendre comment ils pourraient utiliser l’IA de manière stratégique pour soutenir leurs tâches de rédaction. Cette étude examine le rôle que les Writing Centers pourraient jouer pour atténuer ces perceptions négatives. Le contexte de l’étude est un Writing Center d’une école de troisième cycle dans une université de taille moyenne en France. Deux enquêtes complètes ciblant 10 enseignants et 32 étudiants de troisième cycle ont été conçues pour explorer les perceptions des enseignants et des étudiants sur les avantages et les inconvénients de l’IA dans les tâches de rédaction académique, et son utilité potentielle dans les séances de tutorat. Cette étude fait partie d’une enquête plus large sur les perceptions de l’IA dans les institutions d’enseignement supérieur en France et au Québec. Les résultats révèlent des attitudes et des croyances sous-jacentes concernant l’IA, fournissant des principes fondés sur la recherche pour orienter les actions des Writing Centers, y compris la formation des tuteurs sur la manière de communiquer et de travailler avec l’IA pendant leurs séances, et la manière dont les Writing Centers pourraient étendre leurs initiatives de sensibilisation pour rallier les enseignants aux meilleures pratiques émergentes. De telles études posent en outre les bases d’explorations futures sur la manière dont l’IA peut être exploitée dans les Writing Centers pour aider les étudiants à développer des compétences fondamentales en rédaction académique.

Résumé long

Introduction et état de l’art

L’apparition de l’intelligence artificielle a transformé le paysage de l’enseignement supérieur (Grassini, 2023) et elle y suscite souvent des préoccupations, notamment le manque de fiabilité de ChatGPT, le risque de triche et les difficultés d’utilisation (Iqbal et al., 2022), ou encore le fait que son usage pourrait affaiblir la pensée critique des étudiants ou réduire leurs efforts cognitifs (Gandhi et al., 2023). Pour éviter cela, certaines études proposent d’adapter l’utilisation des outils d’IA afin de les transformer en facteurs de développement de la compétence rédactionnelle à l’université (Dergaa et al., 2023 ; Qawqzeh, 2024 ; Wu, 2024). La recherche montre que l’IA générative offre également des avantages potentiels, entre autres le fait de faciliter des processus de travail (Lee et Perret, 2022), de générer du feedback personnalisé (Kim et Kim, 2022) ou d’aider à la recherche d’informations (Iqbal et al., 2022). Ces possibilités et d’autres, comme la traduction automatique, ont été largement étudiées dans la littérature sur les modèles de langage génératifs (Guo et Lee, 2023 ; Gao et al., 2023 ; Huang et Tan, 2023 ; Imran et Almusharraf, 2023). Cependant, les attitudes négatives des utilisateurs font obstacle à leur découverte de ces points positifs. Au Writing Center de l’Université Clermont Auvergne, les tuteurs en écriture ont rencontré diverses réactions des étudiants face à l’IA. Certains ont refusé l’utilisation de ChatGPT, tandis que d’autres étaient hésitants en raison de préoccupations concernant le risque de plagiat, ou à cause des restrictions imposées par leurs enseignants. Ces réactions ont rendu difficile l’intégration de l’IA dans les séances de tutorat.

L’étude que nous rapportons cherche à clarifier les perceptions des étudiants et des enseignants à l’égard de l’IA à l’université, afin d’accompagner la croissance rapide de son utilisation. Pour répondre à ces préoccupations, nous avons mené une étude de perception auprès des étudiants du Writing Center et de leurs enseignants. L’étude vise à répondre aux questions suivantes :

Quelles sont les principales préoccupations concernant l’utilisation de l’IA dans l’apprentissage et les tâches de rédaction académique ?

Comment les perceptions négatives de l’IA influencent-elles la volonté des étudiants d’utiliser l’IA dans leurs processus de rédaction ?

Répondre à ces questions est crucial pour établir des principes permettant d’intégrer l’IA générative dans les Writing Centers et d’aider les étudiants à développer des compétences rédactionnelles essentielles.

Méthode

En juin 2024, 173 étudiants et seize enseignants ont été contactés pour participer à l’étude. Trente-deux étudiants et dix enseignants ont accepté de participer. Nous avons élaboré deux questionnaires distincts, un pour chaque cohorte, afin de mesurer l’utilisation et la perception de l’IA dans des contextes académiques. De plus, nous avons mené des entretiens semi-directifs avec deux tutrices du Writing Center pour explorer leurs expériences et leurs sentiments à propos des réactions des étudiants à l’utilisation de l’IA.

Résultats

Concernant la fréquence d’utilisation, les réponses des étudiants indiquent que seulement trois sur trente-deux n’avaient jamais utilisé d’IA générative ; en revanche, seuls trois des dix enseignants l’avaient déjà utilisée pour leurs activités universitaires. Ainsi, les étudiants utilisent fréquemment l’IA, à l’inverse des enseignants, ce qui constitue la différence la plus significative entre les deux cohortes.

Au sujet des objectifs d’utilisation de l’IA, les étudiants déclarent des usages très variés : traduction de texte, aide à l’écriture et à la compréhension de notions, recherche d’informations et génération de feedback sur le travail en cours.

Concernant les perceptions, une grande majorité d’étudiants se sont dits d’accord avec le fait que les assistants conversationnels pourraient être bénéfiques à l’apprentissage et ne souhaitaient pas qu’ils soient interdits à l’université. Cependant, des inquiétudes se sont aussi manifestées, principalement concernant le plagiat, le manque d’indépendance dans l’apprentissage que leur usage pourrait engendrer, et la difficulté d’exercer une pensée critique lors de leur utilisation. Le manque de fiabilité des contenus générés par l’IA a également suscité des inquiétudes.

Du côté des enseignants, les résultats montrent que la plupart d’entre eux ont exprimé des avis positifs à l’égard de l’IA dans l’enseignement et pensent qu’elle n’est pas incompatible avec la pensée critique, mais ils ont émis proportionnellement plus d’avis tranchés en défaveur de l’IA que les étudiants. Leurs principales inquiétudes étaient le plagiat ainsi que la détérioration de la pensée critique et des capacités d’écriture des étudiants.

Discussion

Les résultats ont montré des attitudes largement positives envers l’IA, qui contrastent avec les réticences observées dans les entretiens des tutrices en écriture : celles-ci ont affirmé que la plupart des étudiants du Writing Center n’étaient pas ouverts à l’IA. Cette divergence entre les résultats et l’expérience de terrain suggère que les étudiants réticents à l’IA n’ont peut-être pas participé à l’enquête. Parmi les principales inquiétudes des participants à l’étude, la peur du plagiat est probablement l’une des plus importantes. Elle a été identifiée comme une préoccupation majeure pour les enseignants et est fréquemment citée par les tutrices comme la raison pour laquelle les étudiants refusent d’utiliser l’IA pendant les séances. Une solution à court terme pourrait consister à changer la manière dont l’IA est introduite lors des séances de tutorat et à expliquer que le Writing Center ne cautionne pas le plagiat et vise à définir des usages qui l’évitent.

Une autre croyance forte est celle selon laquelle l’IA serait incompatible avec la pensée critique, ce qui signale qu’il est important de réfléchir à des usages de l’IA qui laissent libre cours à la pensée critique des apprenants. Pour expérimenter dans cette direction, les assistants conversationnels peuvent par exemple être utilisés pour générer un texte comportant des erreurs ou des incohérences, puis l’apprenant est invité à évaluer ce texte et à expliquer en quoi il pose problème. Cette approche serait plus engageante pour les apprenants et mettrait à l’épreuve leurs propres jugements critiques, comme lors de la relecture d’un texte authentique. Elle permettrait également d’exposer les apprenants à de nouvelles manières d’utiliser l’IA pour entraîner leur compétence rédactionnelle, et de reconsidérer cet usage s’il ne leur convient pas.

Nous pensons que les Writing Centers peuvent jouer un rôle significatif dans la promotion d’un usage éclairé de l’IA. Les informations obtenues lors de cette étude permettront d’adapter les futures séances de tutorat pour mieux correspondre aux attentes des apprenants.

Index

Mots-clés

intelligence artificielle, IA générative, plagiat, rédaction académique, centres d’écriture, perceptions des étudiants, perceptions des enseignants, utilisation de l’IA, pensée critique

Keywords

artificial intelligence, generative AI, plagiarism, academic writing, Writing Centers, student attitudes, teacher attitudes, AI usage, critical thinking

Full text

Introduction

While the integration of artificial intelligence has transformed the higher education landscape over the last decade (Grassini, 2023), new technologies in academic contexts often provoke negative attitudes and concerns (Ismatullaev and Kim, 2024). As seen in the proliferation of commentary across higher education communication platforms and mainstream media after the public release of ChatGPT in November 2022, the arrival of generative AI in higher education is no exception. Concerns raised by faculty included the lack of value and benefit of ChatGPT, the risk of cheating and plagiarism, teachers’ lack of experience in using it for teaching purposes, and perceived difficulties in use (Iqbal et al., 2022). While the study reported by Iqbal et al. was conducted soon after ChatGPT’s release, these themes and attitudes have remained relatively constant, as found in similar studies published since then.

Several studies have highlighted how negative attitudes are often tied to the risk of plagiarism and cheating (Nguyen, 2023). This is one of the most recurrent concerns, and it is lent some legitimacy by the fact that language models are trained on massive amounts of existing text, which can resurface in AI-generated output in ways that cannot be easily detected without appropriate tools (Khalil and Er, 2023). While there is ongoing debate as to whether the use of generative AI constitutes plagiarism, using it to write complete texts is considered cheating in most situations.

Another source of reluctance about generative AI is the reduction in critical thinking and cognitive effort that could result from students using such tools (Gandhi et al., 2023). If critical thinking and cognitive effort are essential to the learning process, it is logical to assume that overreliance on generative AI would be detrimental. To prepare for the likely expansion of AI tools in education, and to prevent them from becoming a hindrance to critical thinking, a number of studies have focused on how to adapt the use of AI tools so as to transform them into a factor of skill development (see Dergaa et al., 2023; Qawqzeh, 2024; Wu, 2024).

At the same time, a number of potential benefits of generative AI have also been highlighted in studies on faculty and students’ attitudes, notably that it shortens and facilitates working processes (Lee and Perret, 2022), that it can be used as a customized feedback generator (Kim and Kim, 2022), and that it can give students quick access to information, thereby helping them streamline their writing process (Iqbal et al., 2022) while developing a broader viewpoint on researching information (Darwin et al., 2024). These possibilities and others, such as automatic text generation or translation, have been much studied in the literature on generative language models (Guo and Lee, 2023; Gao et al., 2023; Huang and Tan, 2023; Imran and Almusharraf, 2023).

Irrespective of AI’s benefits or drawbacks, it is crucial to examine current attitudes toward generative AI for supporting academic writing, particularly because such attitudes can impact the integration of AI in Writing Centers. Regarding the differing perceptions of students and teachers, for example, teachers have been found to be stricter and more cautious about AI than students, stating more frequently that AI is detrimental to learning or that it should be banned from education altogether (Ma, 2023). This difference could be tied, among other factors, to differences in students’ and teachers’ actual use and knowledge of AI-powered tools. While students are often frequent users of AI (Schiel and Bobek, 2023), teachers’ AI usage rates and skills tend to be lower (Chounta et al., 2022; Dilzhan, 2024). Less frequent use and lower competency are potential factors affecting how AI is accepted (Galindo-Domínguez et al., 2024).

The research we report in this article addresses the anticipated growth of AI tools in higher education. In this context, understanding the attitudes of both students and teachers is important for strategically accompanying that growth. At the Graduate School Writing Center at Clermont Auvergne University (France), peer writing tutors have encountered a range of student reactions to AI tools. Some students have outright refused to use ChatGPT during sessions, while others have been hesitant due to concerns about plagiarism, doubts about the quality of AI-generated text, or restrictions imposed by their instructors. These reactions have made it difficult to integrate AI-based techniques into writing tutoring sessions.

To address these concerns, we have conducted a perception study involving both students at the Writing Center and their teachers. The study aims to answer the following research questions:

  • What are teachers’ and students’ main concerns regarding the use of AI in student learning and academic writing tasks?

  • How do negative perceptions of AI impact students’ willingness to use AI in their writing processes?

Addressing these questions is crucial for our goal of establishing research-based principles to more effectively integrate generative AI into Writing Centers and support students in developing essential writing skills. In our concluding discussion, we propose strategies for overcoming negative perceptions and concerns about AI among faculty and students. We explore ways to train tutors on how to discuss and use AI effectively during tutoring sessions. Additionally, we suggest outreach initiatives for Writing Centers to engage faculty and promote best practices for utilizing AI in academic writing.

Method

In June 2024, 173 Masters-level student users of the Writing Center and sixteen teachers were contacted by email. Thirty-two students and ten teachers agreed to participate in our study. To measure each cohort’s use and perception of AI, we developed two separate questionnaires, based on Zablot et al. (2025) and Demonceaux et al. (2025). The surveys asked participants about their use of AI in academic contexts, covering frequency of use, perceived skill level, and AI’s impact on their academic activities. Another section sought their opinions about AI in higher education, including views on banning it, perceived risks, and beliefs about the attitudes of other students and teachers toward AI. All responses to the questionnaires were anonymous. Additionally, we conducted short semi-structured interviews with two Writing Center tutors to explore their experiences and feelings about students’ reactions to using AI in tutoring sessions.

Results

Concerning the use of AI technology, responses to the student survey indicate that only three of the thirty-two students had never used any form of generative AI; in contrast, only three of the ten teachers had ever used it for their university activities. The teachers’ main concerns about adopting generative AI included doubts about the accuracy of the information generated and a lack of understanding of its purpose.

Teachers’ relatively low interest in generative AI contrasts with students’ usage. Figure 1 shows the frequency of AI use by students for university assignments, while Figure 2 indicates the main purposes for which students use AI. As can be seen in Figure 1, nearly two-thirds of the students indicated a relatively high rate of use, either daily or several times a week. This use principally involved getting help with different aspects of their writing (Figure 2).

Figure 1. The frequency of students’ AI use for university assignments (n = 32)

Figure 2. Students’ purposes in using AI (n = 32)

Students’ frequency of use is further reflected in their general attitudes towards AI. Figures 3 and 4, respectively, display results for students’ and teachers’ general attitudes about AI use in higher education. Answers ranged from strongly disagree to strongly agree, with an opt-out (‘I don’t know’).

As shown in Figure 3, a large majority of students agreed that aspects of conversational assistants could be beneficial to learning and disagreed with prohibiting them in academic contexts. However, concerns appear as well: two-thirds of the students agreed that the use of conversational assistants could limit their ability to learn independently, and one-third thought that their use was not compatible with the development of critical thinking. In addition, students raised concerns about the unreliability of AI-generated content, leading some to think that conversational assistants should not be used (Figure 3).

Figure 3. Students’ attitudes towards AI use in higher education (n = 32)

Similarly, Figure 4 shows that most teachers are favorable to using AI in teaching and agree that it can be compatible with students’ critical thinking. Opinions about the potential threats of AI are evenly spread. At the same time, teachers are proportionally more cautious than students about AI use and lean more toward banning it at the university level. They also show stronger divergences in attitude than students, which is consistent with their lower frequency of use.

Figure 4. Teachers’ attitudes towards AI use in higher education (n = 10)

Concerning the potential threats of AI for higher education, participants in the teacher survey were asked to select three major potential threats from a list of eight. Teachers most frequently considered the weakening of students’ critical thinking abilities and the potential negative effect on their writing skills to be the main threats posed by AI. In addition, plagiarism was considered a central concern. Figure 5 presents these results.

Figure 5. Teachers’ perceptions about potential threats of AI for higher education (n = 10)

Discussion

Overall, the most notable observations from our perception study are the largely positive attitudes towards AI among participants and the absence of significant rejection of AI. However, this contrasts with the writing tutors’ interviews, in which they reported that many students were reluctant to use AI during tutoring and that refusals were common. The discrepancy between tutors’ comments and our survey results suggests that AI-reluctant students may not have participated in the survey. To address this, we cross-referenced the survey results with the tutors’ interviews to gain a better understanding of the reasons behind students’ avoidance. This will help us identify the main attitudes towards AI and develop strategies for the Writing Center to mitigate negative impressions, as discussed below.

For example, the threat of plagiarism was shown in the surveys to be an important concern for teachers and cited by writing tutors as the most common reason students refused to use AI during tutoring sessions. This concern likely stems from uncertainty about how to properly integrate AI-generated material without crossing ethical boundaries. This caution is understandable given the severe consequences of plagiarism. However, the Writing Center aims to model the use of ChatGPT in ways that do not lead to plagiarism or replace students’ written work with AI-generated text.

Tutors also believed that students’ reluctance was fueled by a lack of understanding of how AI can be used and by teachers’ frequent prohibitions of it on the grounds of plagiarism risk. Despite tutors’ explanations that using AI does not necessarily lead to plagiarism or cheating, students often refused due to these concerns. Research has shown that insufficient knowledge about a technology and a lack of trust in it can lead to its avoidance (Galindo-Domínguez et al., 2024; Ismatullaev and Kim, 2024). This situation presents a challenge for tutors, who must balance suggestions for AI use with ensuring students’ comfort. Tutors sometimes avoided recommending AI when they sensed it would not be well received. Both tutors interviewed reported feeling uneasy when suggesting ChatGPT during tutoring sessions. This unease stemmed from frequent negative student reactions and from the tutors’ own initial doubts about AI. This issue complicates the tutor-student relationship and hinders the study of AI’s impact on learning.

A short-term solution could involve changing how AI is introduced in tutoring sessions. For example, instead of merely suggesting AI, tutors could directly use ChatGPT to generate a text containing problems and ask students to identify and correct them. This approach avoids plagiarism and could engage students more effectively by presenting a quick challenge, potentially stimulating their engagement and motivation (Hamari et al., 2016; Khan et al., 2017).
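To make this concrete, here is a minimal sketch of how such a flawed practice text could be generated programmatically. It is an illustration only and rests on assumptions not in the article: it uses the OpenAI Python client and an API key, whereas the sessions described here relied on the ChatGPT web interface, and the prompt wording, model name, and error types are hypothetical examples rather than the Writing Center’s actual protocol.

# Minimal sketch (assumptions: the OpenAI Python client >= 1.0 is installed and
# an OPENAI_API_KEY environment variable is set). The tutors described in this
# article used the ChatGPT web interface; this only restates the idea in code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt: request a short academic paragraph that deliberately
# contains a few unlabeled problems for the student to find and correct.
prompt = (
    "Write a 150-word paragraph for a master's thesis introduction that "
    "deliberately contains three problems: one unsupported claim, one "
    "paragraph-structure issue, and one ambiguous pronoun reference. "
    "Do not label or explain the problems."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# The tutor shows the student only the generated text; the student then locates
# and corrects the three problems before comparing notes with the tutor.
print(response.choices[0].message.content)

Because the text is generated on the spot and is never submitted as the student’s own work, the exercise sidesteps plagiarism concerns while still giving the student something concrete to critique.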

In the survey results, teachers also indicated concern that AI might weaken students’ critical thinking skills. This frequent concern highlights the need to foster a critical use of AI that aligns with students’ learning needs. The Writing Center’s approach to ChatGPT in tutoring sessions aims to guide students in using AI as a tool, not a replacement, in the learning process. For instance, if a student has a recurring writing issue, ChatGPT can generate a text with similar problems, and the tutor can guide the student in critically identifying and correcting them. In this regard, ChatGPT can serve as a virtually limitless source of practice examples for training specific skills. We believe it is important for potential users to understand that AI does not necessarily replace the user and can be beneficial for those who explore new ways of using it.

The unreliability of AI-generated content was also a major factor in negative attitudes, with teachers frequently citing it as a reason for their apprehension. Among students, nine out of thirty-two agreed that conversational assistants should not be used because of content reliability issues. Concerns about generating misleading or incorrect information are significant barriers to AI adoption in academic contexts (Peters and Visser, 2023). In response, the Writing Center’s objective is to promote a critical and informed use of AI, developing students’ ability to evaluate the reliability of AI-generated content. By clearly stating that language models can make mistakes, we aim to encourage students to reconsider information critically. This approach mitigates misinformation risks and trains students to use AI as a supplementary resource rather than a definitive answer.

The most significant difference between students and teachers found in our study is the frequency of AI use. Students use AI tools much more frequently than teachers for various tasks (Schiel and Bobek, 2023). Frequent use helps individuals discover useful aspects of AI and feel more comfortable using it. Conversely, not using AI makes it harder to experience its benefits. This discrepancy leads to differences in knowledge and skill levels, widening the usage gap and affecting attitudes towards AI integration in academia. While teachers in our study showed high interest in AI, they perceived their skill level as low, as similarly observed by Galindo-Domínguez et al. (2024). This suggests that negative perceptions may stem mainly from lack of skill and knowledge among some teachers. Similarly, one tutor, initially doubtful about AI, became more confident and supportive of its use after learning how it works and discovering its benefits.

These observations highlight the impact of actual use on the intention to use AI. Because teachers often relay their concerns about AI to students in their classrooms, their advice against using AI significantly shapes students’ attitudes. This, in turn, makes tutor guidance a key factor in improving students’ perceptions of AI and their intention to use it, ultimately helping them learn to use it effectively.

Conclusion

This study highlights the diverse attitudes towards AI among students and faculty at the Graduate School Writing Center at Clermont Auvergne University. While there is general recognition of AI’s potential benefits and little outright rejection, concerns remain, particularly about plagiarism and academic consequences on the part of students and, for teachers, about the reliability of AI-generated content and its impact on critical thinking. These findings align with other studies on attitudes towards AI in higher education (Al Darayseh, 2023; Ma, 2023).

We observed significant differences in AI use and attitudes between students and teachers, with students using AI more frequently and being less concerned about its consequences. We also found that infrequent use and lower levels of skill and knowledge contribute to less positive attitudes towards AI in academia. Tutors’ experiences revealed that these attitudes also affect their confidence in suggesting AI use during sessions.

This study was limited by the low response rate and potential participant bias, as those strongly opposed to AI might have chosen not to answer the survey. To address this, we supplemented our findings with the writing tutors’ interviews to gain a deeper understanding of faculty and student attitudes towards AI. We believe Writing Centers and tutor training can play a significant role in mitigating concerns about AI by integrating its use in ways that stimulate critical thinking and avoid plagiarism, enhancing students’ understanding of AI, and reassuring them about its actual risks. With these insights, we will adapt future tutoring sessions at the Writing Center to better address these perceptions, allowing for more effective studies on the impact of AI on the learning of academic writing.

Bibliography

Andersdotter Karolina, 2023, “Artificial Intelligence Skills and Knowledge in Libraries: Experiences and Critical Impressions from a Learning Circle”, Journal of Information Literacy, Vol. 17, No. 2, p. 108-130.

Al Darayseh Abdulla, 2023, “Acceptance of artificial intelligence in teaching science: Science teachers’ perspective”, Computers and Education: Artificial Intelligence, No. 4, [https://doi.org/10.1016/j.caeai.2023.100132].

Chan Cecilia Ka Yuk and Hu Wenjie, 2023, “Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education”, International Journal of Educational Technology in Higher Education, Vol. 20, No. 1, p. 43.

Choung Hyesun, David Prabu and Ross Arun, 2023, “Trust in AI and its role in the acceptance of AI technologies”, International Journal of Human–Computer Interaction, Vol. 39, No. 9, p. 1727-1739.

Chounta Irene-Angelica, Bardone Emanuele, Raudsep Aet and Pedaste Margus, 2022, “Exploring teachers’ perceptions of artificial intelligence as a tool to support their practice in Estonian K-12 education”, International Journal of Artificial Intelligence in Education, Vol. 32, No. 3, p. 725-755.

Darwin, Rusdin Diyenti, Mukminatien Nur, Suryati Nunung, Laksmi Ekaning D. and Marzuki, 2024, “Critical thinking in the AI era: An exploration of EFL students’ perceptions, benefits, and limitations”, Cogent Education, Vol. 11, No. 1, [https://doi.org/10.1080/2331186X.2023.2290342].

Davis Fred D., 1989, “Perceived usefulness, perceived ease of use, and user acceptance of information technology”, MIS Quarterly, Vol. 13, No. 3, p. 319-340.

Demonceaux Sophie, Lima Feirouz, Malpel Sébastien and Picard Ariane, 2025 (forthcoming), “Représentations et utilisation des outils d'intelligence artificielle générative : enquête à l'université auprès des enseignants des filières de l'éducation et de la formation”, paper at the XXIVe Congrès de la SFSIC, Société Française des Sciences de l'Information & de la Communication (SFSIC), PREFICS (Université Rennes 2), 18-20 juin, Rennes.

Dergaa Ismail, Chamari Karim, Zmijewski Piotr and Saad Helmi Ben, 2023, “From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing”, Biology of Sport, Vol. 40, No. 2, p. 615.

Dilzhan Balnur, 2024, Teaching English and Artificial Intelligence: EFL Teachers’ Perceptions and Use of ChatGPT, Master’s thesis, SDU University, [https://doi.org/10.35542/osf.io/fwy92].

Galindo-Domínguez Héctor, Delgado Nahia, Campo Lucía and Losada Daniel, 2024, “Relationship between teachers’ digital competence and attitudes towards artificial intelligence in education”, International Journal of Educational Research, No. 126, [https://doi.org/10.1016/j.ijer.2024.102381].

Gandhi Tejal K., Classen David, Sinsky Christine A., Rhew David C., Vande Garde Nikki, Roberts Andrew and Federico Frank, 2023, “How can artificial intelligence decrease cognitive and work burden for front line practitioners?”, JAMIA Open, Vol. 6, No. 3, [https://doi.org/10.1093/jamiaopen/ooad079].

Gao Catherine A., Howard Frederick M., Markov Nikolay S., Dyer Emma C., Ramesh Siddhi, Luo Yuan and Pearson Alexander T., 2023, “Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers”, NPJ Digital Medicine, Vol. 6, No. 75.

Grassini Simone, 2023, “Shaping the future of education: exploring the potential and consequences of AI and ChatGPT in educational settings”, Education Sciences, Vol. 13, No. 7, p. 692.

Guo Ying and Lee Daniel, 2023, “Leveraging ChatGPT for enhancing critical thinking skills”, Journal of Chemical Education, Vol. 100, No. 12, p. 4876-4883.

Hamari Juho, Shernoff David J., Rowe Elizabeth, Coller Brianno, Asbell-Clarke Jodi and Edwards Teon, 2016, “Challenging games help students learn: An empirical study on engagement, flow and immersion in game-based learning”, Computers in Human Behavior, No. 54, p. 170-179.

Huang Jingshan and Tan Ming, 2023, “The role of ChatGPT in scientific communication: writing better scientific review articles”, American Journal of Cancer Research, Vol. 13, No. 4, p. 1148.

Imran Muhammad and Almusharraf Norah, 2023, “Analyzing the role of ChatGPT as a writing assistant at higher education level: A systematic review of the literature”, Contemporary Educational Technology, Vol. 15, No. 4, p. 464.

Iqbal Nayab, Ahmed Hassan and Azhar Kaukab Abid, 2022, “Exploring teachers’ attitudes towards using ChatGPT”, Global Journal for Management and Administrative Sciences, Vol. 3, No. 4, p. 97-111.

Ismatullaev Ulugbek V. U. and Kim Sangho H., 2024, “Review of the factors affecting acceptance of AI-infused systems”, Human Factors, Vol. 66, No. 1, p. 126-144.

Kelly Sage, Kaye Sherrie-Anne and Oviedo-Trespalacios Oscar, 2023, “What factors contribute to the acceptance of artificial intelligence? A systematic review”, Telematics and Informatics, No. 77, [https://doi.org/10.1016/j.tele.2022.101925].

Khalil Mohammad and Er Erkan, 2023, “Will ChatGPT Get You Caught? Rethinking of Plagiarism Detection”, in Panayiotis Zaphiris, Andri Ioannou (dir.), Learning and Collaboration Technologies. HCII 2023. Lecture Notes in Computer Science, Vol. 14040, Cham, Springer, p. 475-487, [https://doi.org/10.1007/978-3-031-34411-4_32].

Khan Amna, Ahmad Farzana Hayat and Malik Muhammad Mudassir, 2017, “Use of digital game based learning and gamification in secondary school science: The effect on student engagement, learning and gender difference”, Education and Information Technologies, Vol. 22, p. 2767-2804.

Kim Nam Ju and Kim Min Kyu, 2022, “Teacher’s perceptions of using an artificial intelligence-based educational tool for scientific writing”, Frontiers in Education, Vol. 7, Frontiers Media SA, [https://doi.org/10.3389/feduc.2022.755914].

Lee Irene and Perret Beatriz, 2022, “Preparing high school teachers to integrate AI methods into STEM classrooms”, Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, No. 11, p. 12783-12791.

Ma Emily, 2023, “Impressions of ChatGPT: Using Survey Results to Inform AI Policy in Education”, Journal of Student Research, Vol. 12, No. 3, [https://doi.org/10.47611/jsrhs.v12i3.4871].

Nguyen Quynh Hoa, 2023, “AI and Plagiarism: Opinion from Teachers, Administrators and Policymakers”, Proceedings of the AsiaCALL International Conference, Vol. 4, p. 75-85.

Park Claire Su-Yeon, Kim Haejoong and Lee Sangmin, 2021, “Do less teaching, do more coaching: toward critical thinking for ethical applications of artificial intelligence”, Journal of Learning and Teaching in Digital Age, Vol. 6, No. 2, p. 97-100.

Peters Tobias M. and Visser Roel W., 2023, “The Importance of Distrust in AI”, in Luca Longo (dir.), Explainable Artificial Intelligence. First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part III, Cham, Springer Nature Switzerland, p. 301-317, [https://doi.org/10.1007/978-3-031-44070-0_15].

Qawqzeh Yousef Kamel, 2024, “Exploring the Influence of Student Interaction with ChatGPT on Critical Thinking, Problem Solving, and Creativity”, International Journal of Information and Education Technology, Vol. 14, No. 4, p. 596-601, [https://doi.org/10.18178/ijiet.2024.14.4.2082].

Schiel Jeff and Bobek Becky L., 2023, “High School Students’ Use and Impressions of AI Tools”, Lumina Foundation for Education, [https://coilink.org/20.500.12592/qfttmhk].

Williamson Ben and Eynon Rebecca, 2020, “Historical threads, missing links, and future directions in AI in education”, Learning, Media and Technology, Vol. 45, No. 3, p. 223-235.

Wu Yi, 2024, “Critical Thinking Pedagogics Design in an Era of ChatGPT and Other AI Tools—Shifting From Teaching ‘What’ to Teaching ‘Why’ and ‘How’”, Journal of Education and Development, Vol. 8, No. 1, [https://doi.org/10.20849/jed.v8i1.1404].

Zablot Solène, Boulc’h Laetitia, Pironom Julie, Sardier Anne and Drot-Delange Béatrice, 2025 (forthcoming), “Robots conversationnels : des objets de savoirs ? Usages et représentations chez des étudiants en sciences de l’éducation et de la formation”, RITPU (Revue internationale des technologies en pédagogie universitaire), Vol. 22, No. 1.

How to cite this article

Electronic reference

Hamza Miftah, Dacia Dressen-Hammouda and Christine Blanchard Rodrigues, “Research-based guidelines for building more targeted Writing Center actions: Faculty and student views on AI for academic writing”, À tradire [Online], 3 | 2024, published online 27 March 2025, accessed 26 April 2025. URL: https://atradire.pergola-publications.fr/index.php?id=450; DOI: https://dx.doi.org/10.56078/atradire.450

Authors

Hamza Miftah

hamza.miftah[à]doctorant.uca.fr
Université Clermont Auvergne, ACTé, F-63000 Clermont-Ferrand, France

Hamza Miftah is a doctoral student in education sciences, specializing in linguistics and textual analysis. His doctoral thesis examines the impact of using AI-powered conversational assistants during peer writing tutoring sessions to develop students’ core academic writing skills.

Dacia Dressen-Hammouda

dacia.hammouda[à]uca.fr
Université Clermont Auvergne, ACTé, F-63000 Clermont-Ferrand, France

Dacia Dressen-Hammouda is a full professor at Université Clermont Auvergne. Her research focuses on the interactions between sociocultural context and specialized communication practices. Her current projects address professional and scientific digital literacies, writing center tutor training, and the implications of indexicality for international scholarly publishing.

Christine Blanchard Rodrigues

christine.blanchard[à]uca.fr
Université Clermont Auvergne, ACTé, F-63000 Clermont-Ferrand, France

Christine Blanchard Rodrigues is an associate professor in language sciences at Université Clermont Auvergne. She is a permanent member of the LRL laboratory. Her research interests include online co-writing, vocabulary learning in multimedia environments, CALL for French as a foreign language and English as a foreign language, and open and distance learning.

Copyright

Licence Creative Commons – Attribution 4.0 International – CC BY 4.0