A study published in the journal Education and Information Technologies finds that students who are more conscientious tend to use generative AI tools like ChatGPT less frequently, and that using such tools for academic tasks is associated with lower self-efficacy, worse academic performance, and greater feelings of helplessness. The findings highlight the psychological dynamics behind AI adoption and raise questions about how it may shape students’ learning and motivation.
Generative AI refers to computer systems that can create original content in response to user prompts. Large language models, such as ChatGPT, are a common example. These tools can produce essays, summaries, and explanations, and can even simulate conversation—making them attractive to students looking for quick help with academic tasks. But their rise has also sparked debate among educators, who are concerned about plagiarism, reduced learning, and the ethical use of AI in classrooms.
“Witnessing excessive reliance among some of my students on generative AI tools like ChatGPT made me wonder whether these tools had implications for students’ long-term learning outcomes and their cognitive capacity,” said study author Sundas Azeem, an assistant professor of management and organizational behavior at SZABIST University.
“It was particularly evident that for those activities and tasks where students relied on generative AI tools, classroom participation and debate were considerably lower, as similar responses from these tools increased student agreement on topics of discussion. With reduced engagement in class, these observations sparked my concern about whether learning goals were actually being met.
“At the time we started this study, most studies on students’ use of generative AI were either opinion-based or theoretical, exploring the ethics of generative AI use,” Azeem continued. “The studies exploring academic performance seldom considered academic grades (CGPA) as academic outcomes, and also ignored individual differences such as personality traits.
“Despite the widespread use, not all students relied on generative AI equally. Students who were otherwise more responsible, punctual, and participative in class seemed to rely less on generative AI tools. This led me to investigate whether there were personality differences in the use of these tools. This gap, coupled with rising concerns about fairness in grading and academic integrity, inspired this study.”
To explore how students are actually engaging with generative AI, and how their personality traits influence this behavior, the researchers surveyed 326 undergraduate students from three major universities in Pakistan. The students were enrolled in business-related programs and spanned from their second to eighth semester. Importantly, the study used a three-wave, time-lagged survey design to gather data over time and minimize common biases in self-reported responses.
At the first time point, students reported their personality traits and perceptions of fairness in their university’s grading system. Specifically, the researchers focused on three personality traits from the Big Five model: conscientiousness, openness to experience, and neuroticism. These traits were selected because of their relevance to academic performance and technology use. For example, conscientious students tend to be organized, self-disciplined, and achievement-oriented. Openness reflects intellectual curiosity and creativity, while neuroticism is associated with anxiety and emotional instability.
At the second time point, participants reported how frequently they used generative AI tools—especially ChatGPT—for academic purposes. In the third and final wave, students completed measures of academic self-efficacy (how capable they felt of succeeding academically) and learned helplessness (a belief that efforts won’t lead to success), and reported their cumulative grade point average.
Among the three personality traits studied, only conscientiousness was significantly linked to AI use. Students who scored higher in conscientiousness were less likely to use generative AI for academic work. This finding suggests that conscientious individuals may prefer to rely on their own efforts and are less inclined to take shortcuts, aligning with prior research showing that this personality trait is associated with academic honesty and self-directed learning.
“Our study found that students who are more conscientious are less likely to rely on generative AI for academic tasks due to higher self-discipline and perhaps also higher ethical standards,” Azeem told PsyPost. “They may prefer exploring multiple sources of information and other more cognitively engaging learning activities like researching and discussions.”
Contrary to expectations, openness to experience and neuroticism were not significantly related to AI use. While previous research has linked openness to a greater willingness to try new technologies, the researchers suggest that students high in openness may also value originality and independent thought, potentially reducing their reliance on AI-generated content. Similarly, students high in neuroticism may feel uneasy about the accuracy or ethics of AI tools, leading to ambivalence about their use.
The researchers also examined how perceptions of fairness in grading might shape these relationships. But only one interaction—between openness and grading fairness—was marginally significant. For students high in openness, perceiving the grading system as fair was associated with lower AI use. The researchers did not find significant interactions involving conscientiousness or neuroticism.
“One surprising finding was that fairness in grading only marginally influenced generative AI use, and only for the personality trait openness to experience, showing that regardless of grading fairness, generative AI is gaining widespread popularity,” Azeem said. “This is telling, given that we had anticipated students would rely more on generative AI tools, aiming to score higher grades, when they perceived grading as unfair. Also, while individuals high in openness to experience are generally early adopters of technologies, our study reported no such findings.”
More broadly, the researchers found that greater use of generative AI in academic tasks was associated with several negative outcomes. Students who relied more heavily on AI reported lower academic self-efficacy. In other words, they felt less capable of succeeding on their own. They also experienced greater feelings of learned helplessness—a state in which individuals believe that effort is futile and outcomes are beyond their control. Additionally, higher AI use was linked to slightly lower academic performance as measured by GPA.
These patterns suggest that while generative AI may offer short-term convenience, its overuse could undermine students’ sense of agency and reduce their motivation to engage deeply with their coursework. Over time, this reliance might erode critical thinking and problem-solving skills that are essential for long-term success.
Further analysis revealed that the use of generative AI also mediated the link between conscientiousness and academic outcomes. Specifically, students who were more conscientious were less likely to use AI, and this lower use was associated with better academic performance, greater self-efficacy, and less helplessness.
“A key takeaway for students, teachers, as well as academic leadership is the impact of students’ reliance on generative AI tools on their psychological and learning outcomes,” Azeem told PsyPost. “For example, our findings that generative AI use is associated with reduced academic self-efficacy and higher learned helplessness are concerning as students may start believing that their own efforts do not matter. This may lead to reduced agency where they believe that academic success is dependent on external tools rather than internal competence. As the overuse of generative AI erodes self-efficacy, students may doubt their ability to complete assignments or challenging problems without the help of AI. This may make students passive learners, hesitating to attempt tasks without support.
“When they feel less in control or doubt themselves for a long time, it may lead to distorted learning habits, as they may come to believe generative AI will always provide the answer. This may also make academic tasks feel boring rather than challenging, further stunting resilience and intellectual growth. Our findings imply that while generative AI is here to stay, its responsible integration into academia through policymaking as well as teacher and student training is key to its effective outcomes.”
“Our findings did not support the common idea that generative AI tools help perform better academically,” Azeem explained. “This makes sense given our findings that generative AI use increases learned helplessness. Academic performance (indicated by CGPA in our study) relies more on individual cognitive abilities and subject knowledge, which may be adversely affected with reduced academic self-efficacy. Accordingly, teachers, students, as well as the general public should exercise caution in relying on generative AI tools excessively.”
Like all research, the study has some limitations. The sample consisted of business students from Pakistani universities, which may limit the generalizability of the findings to other cultures or academic disciplines. The researchers also relied on self-reported measures, though they took steps to reduce bias by spacing out the surveys and using established scales.
“The self-reported data may be susceptible to social desirability bias,” Azeem noted. “In addition, while our study followed a time-lagged design that enables temporal separation between data collection waves, causal directions between generative AI use and its outcomes can be better mapped through a longitudinal design. Likewise, in order to design necessary interventions and training plans, it may help future studies to investigate the conditions under which generative AI use leads to more positive and less negative learning outcomes.”
“In the long term, I aim to conduct longitudinal studies that investigate long-term student development, such as creativity, self-regulation, and employability, over multiple semesters. This may help bridge the emerging differences in the literature regarding the positive versus harmful effects of generative AI for students. I also intend to explore other motivational traits besides personality that may influence generative AI use. Perhaps this stream of studies may empower me to design interventions for integrating AI literacy and ethical reasoning for effective generative AI use among students in the long run.”
The findings raise larger questions about the future of education in an era of accessible, powerful AI. If generative tools can complete many academic tasks with minimal effort, students may miss out on learning processes that build confidence, resilience, and critical thinking. On the other hand, AI tools could also be used to support learning, for example, by helping students brainstorm, explore new perspectives, or refine their writing.
“While our study alerts us to the potential adverse effects of generative AI for students, literature is also available supporting its positive outcomes,” Azeem said. “Therefore, as AI tools become increasingly embedded in education, it is vital that policymakers, educators, and edtech developers go beyond binary views of generative AI as either inherently good or bad. I believe that guiding responsible use of generative AI while mitigating risks holds the key to enhanced learning.”
“To be specific, instructor training for designing AI-augmented learning activities can help foster critical thinking. These activities can encourage student reflection on AI-generated content in order to address some caveats of generative AI use in the classroom. Likewise, promoting fair and transparent grading systems may reduce incentives for misuse. With unchecked and unregulated use of generative AI among students, learned helplessness is likely to become prevalent. This may impair the very capacities that education is intended to develop: independence, critical thinking, and curiosity. Amid all the buzz of educational technology, our study emphasizes that technology adoption is as much a psychological issue as it is a technological and ethical one.”
The study, “Personality correlates of academic use of generative artificial intelligence and its outcomes: does fairness matter?,” was authored by Sundas Azeem and Muhammad Abbas.
