
Frontline Learning Research Vol. 13 No. 1 (2025)
76 - 83
ISSN 2295-3159
The University of Hong Kong, Hong Kong
Article received 16 June 2024 / Article revised 12 March 2025 / Accepted 19 March 2025 / Available online 4 April 2025
GenAI (Generative Artificial Intelligence) will have a growing role within formal education. What should that role be? How do we treat GenAI as an opportunity to enhance and reenergise teaching and learning? This theoretical article suggests that answers to these questions should start with our foundational psychological theories about what students need to function and develop well in educational environments. The article outlines how psychological needs theory, focusing on students' basic psychological needs for competence and relatedness, might offer a path forward. Teacher behaviours supporting these psychological needs (i.e., involvement and structure), which have established relationships with learning outcomes, are used as a base for discussing the potential roles of human and AI instructors. A co-piloting model that draws on the strengths of each instructor is framed and suggested as a possible way forward for research and practice in this area. Further research is needed to continue to develop this initial theoretical framing of co-piloting and to assess if and how it can help herald a brighter future for students across educational levels and contexts.
Keywords: GenAI; Psychological Needs; Co-Piloting; Structure; Involvement
As GenAIs surge into nearly every area of human endeavour (Kaplan, 2024), the question has become what GenAIs cannot do, not what they can (Jiang et al., 2022). One presumed area of eventual adoption is education (Chen et al., 2022). On its face, this premise seems reasonable, given that GenAIs have already demonstrated the ability to manage more information than any living human (Wenzlaff & Spaeth, 2022). They could become tireless teachers, able to answer every query a student might have. Isn't this the teacher that humankind has waited for?
This is most likely a question that humankind will be considering for decades to come. While instructional frameworks have been posited for employing AI in education (e.g., Holstein et al., 2020), from the perspective presented in this article, they tend to miss a critical step: answering two questions, namely a. how humans learn, and b. how teaching and teachers best support the learning process. Humans are remarkable creatures, with a substantial ability to exceed their physiological limitations, adapting to environments far beyond their original evolutionary settings (Moran, 2022). However, there are fundamental constraints on both humankind's physiology and psychology that must be taken into consideration when seeking to support students in effective learning (Sweller & Chandler, 1991).
This article begins by introducing some longstanding psychological theories which provide direction for addressing psychological constraints on learning, and which thus have the potential to support educators as they seek to integrate rapidly developing GenAI agents into the complexities of classroom learning. Co-piloting is put forth as a practical frame for discussing the application of psychological theory to these issues. This is followed by the application of two theoretical perspectives on learning and student-teacher classroom interaction (the psychological needs for competence and relatedness). The article concludes with some preliminary suggestions for how AI and human teachers might work together effectively (i.e., co-pilot classroom learning) to ensure students the world over reap the benefits of the coming age of AI.
To make clear what is meant by a constraint on learning, a biological constraint is a good place to start. An example of a biological-psychological constraint that AI instructors might be in a strong position to address is cognitive load. Cognitive Load Theory's (Sweller & Chandler, 1991) main and well-established contention is that humankind's cognitive architecture has limited short-term memory capacity. Furthermore, this limited capacity can be further constrained by the way in which information is presented for learning. This has a plethora of implications for how instructional materials are organised and presented (e.g., Sweller, 1994). Cognitive load is an issue that a potential AI instructor would need to address to be successful. This constraint, however, is not insurmountable. An AI might learn and apply a set of rules for how, and how much, material to present to an individual.
A psychological constraint that might be a bridge too far, and the focus of this commentary, is captured by our modern theories of psychological needs (e.g., Baumeister & Leary, 1995; Deci & Ryan, 2000; Dweck, 2017). Many researchers from a broad array of areas (Carver & Scheier, 1982; Mischel & Shoda, 1995) have supported the theory that there is a set of basic psychological needs that must be satisfied for human beings to function well and, in the case of children, develop to their fullest capacity. While many psychological needs have been put forth, basic psychological needs are distinguished by being built in rather than developed over time (as, e.g., autonomy is positioned by Dweck, 2017). Different models of basic needs have been suggested, with Self-Determination Theory's arrangement taking centre stage for much of the past four decades (Ryan & Deci, 2017): a. autonomy, b. relatedness, c. competence. Self-Determination Theory focuses on the first, which is central to its overarching theory of wellbeing through internal regulation of motivation. The latter two and other psychological needs (e.g., self-esteem and trust) have received less attention (Patall et al., 2024). A more recent arrangement of psychological needs of different types has provided a balanced and inclusive perspective on this issue (i.e., Dweck, 2017). For Dweck, the psychological needs for competence and relatedness (i.e., acceptance) stand at the heart of psychological needs, creating a partial overlap with Self-Determination Theory's widely used model. Not included by Dweck as basic is the psychological need for autonomy, which is still important but is positioned as an amalgamation of other psychological needs that develops with maturity. The Dweck-SDT overlap focuses our attention on relatedness and competence, which have been well researched and are strongly associated with elements of teaching (García-Rodríguez et al., 2023; Patall et al., 2024).
Corresponding to students' need for competence is what is called structure (Skinner, 1995). Structure refers to the ways in which teachers, and the learning environments they create, inform students about their developing competence (e.g., feedback). It also refers to how teachers provide opportunities for students to enhance (e.g., working on materials at the edge of their growing knowledge in a domain of study) and demonstrate (e.g., tests, projects, and presentations through which to both experience mastery and note gaps in understanding) their competence to themselves and others (Skinner, 1995). This aspect of teaching is in many ways an area of obvious strength for a potential GenAI instructor. GenAIs have an increasing ability to quickly and effectively construct teaching-learning materials across a number of domains: e.g., formative/summative assessments, practice materials, and a variety of individual and group classroom learning activities (Holmes & Miao, 2023). GenAI's increasing ability to efficiently develop precise curricula, offering students information about their learning progress and opportunities to practise, covers work that would be extremely time consuming for a human teacher. Furthermore, curricula provided by a human teacher are generally one-size-fits-all. Curricula organised and delivered by a GenAI instructor could be increasingly individualised as the course of instruction progressed, providing personalised structure. While this basic psychological need seems potentially addressable by GenAI, perhaps in many ways better than by a human instructor, the second psychological need raised presents a more complex task for GenAI, current and future.
The second basic need is relatedness; the teaching behaviours that support it are referred to as involvement. Involvement refers to teaching behaviours that support students in feeling more connected to the teacher and, more broadly, the learning environment. More precisely, involvement includes teachers' expressions of caring and affection. It also refers to teachers providing resources and being physically and emotionally available/accessible to students (Skinner et al., 1998).
With much of the research around basic psychological needs being undertaken within Self-Determination Theory, the relentless focus has been on the psychological need for autonomy, which is central to much of these researchers' theorising, with competence and then relatedness receiving far less relative attention. This does not make good practical (or theoretical) sense, given that beliefs related to the need for competence (e.g., self-efficacy) are some of the strongest correlates of achievement (Richardson et al., 2012). Furthermore, a preeminent and widely cited study (Skinner & Belmont, 1993) made clear that when involvement (i.e., teaching that meets the need for relatedness) was controlled for, students' perceptions of autonomy-support had no statistically significant impact on their learning behaviours and emotions. In fact, involvement, in addition to supporting future motivations for learning and learning behaviours, also increased teachers' autonomy-support and structure. A more recent systematic review of meta-analyses has confirmed the central role of the need for relatedness, emphasising its broad and pervasive impact on supporting students' motivation to learn and achieve (Jansen et al., 2022).
Clearly, involvement is important for student learning. This is due to its paired, direct impact on student behaviours and motivation, and on teacher behaviours which support other psychological needs (structure and autonomy-support; Skinner & Belmont, 1993). It might be the most important part of what many teachers do. A question for AI researchers that arises from this contention is therefore: Is involvement an aspect of teaching that an AI instructor is situated to offer at least as well as human teachers? To begin to answer this question, the components of involvement must be carefully reflected upon. Two complex components that might be considered are: a. caring and affection or warmth from the teacher, and b. physical and emotional availability/accessibility. Can an AI currently, or in the near future, convincingly express caring and affection? In its recent instantiations (e.g., ChatGPT, Gemini, Claude, etc.), it is conceivable that GenAI might express this aspect of involvement. The question is how effectively and for how long. It is easy to imagine how the veil of warmth might not be convincing to some students. It is also reasonable to see how an initial veil of involvement might be inadvertently and irretrievably shattered by a poor-quality exchange that informs the student that they are being taught by an algorithm. Both weaknesses likely contributed to the failure of early efforts to use AI as learning partners in classrooms (e.g., Fryer et al., 2017) and more broadly (Okonkwo & Ade-Ibijola, 2021). Of course, AI actually expressing warmth likely lies only on our distant technological horizon (e.g., De Togni et al., 2021).
The second component of involvement that should be considered is the physical and emotional availability/accessibility of the teacher (Skinner, 1995). As genuine physical presence is not conceivable within a reasonable timeframe, the question is again whether GenAI can currently, or in the near future, convincingly convey these qualities. Emotional availability, much like caring and affection, might be conveyed with increasing effectiveness as AI continues to develop. However, both types of availability might have to wait for a combination of level four (or five) AI (see OpenAI's framework; e.g., Velu, 2024) and life-like robotics to be possible.
At least for the moment, involvement is the crucial component of teaching that AI will have substantial difficulty bridging. Unlike structure, which GenAI instructors could effectively provide, involvement will be something that they can only fake. The quality of this facsimile will be a barrier to GenAI instructors' substantive effectiveness. This makes it an area that developers should be actively seeking to improve and rigorously test.
The first fact that developers need to face is that, much like structure being about personalising learning experiences based on individual competences, involvement will also need to be personalised to be effective. Unlike structure, however, the GenAI teacher cannot simply give the student a test to estimate the necessary involvement. Some clues regarding age-appropriate instructor involvement might be extracted from what we know about human psychological development (e.g., Flavell, 1977; Lerner, 2001). Other clues might be elicited through adept interaction with students. This will rely on experimentation and the development of robust and adaptive communication skills. In addition to trying different instructional strategies, AIs will need to gather emotional signals from humans, many of which are non-verbal. AIs are already being trained for this (Jaiswal et al., 2020), but integrating these signals with human developmental theory and verbal interaction will be a challenge even to create a facsimile of AI instructor involvement.
While novel sounding, co-piloting is an academic idea which has been decades in development. The idea of human-computer partnerships has a long history within computer science. Early frameworks discussed technical augmentation (Engelbart & English, 1968) or a more fluid conceptual symbiosis between "men and electronic computers" (Licklider, 1960, p. 4). After two decades of technological development, early conceptions of shared/adjustable autonomy of software agents matured (e.g., Bradshaw, 1997). In the most recent decade, there has been an increasing focus on Human-Centred AI, which seeks a balance between human and AI control while optimally supporting "humans' self-efficacy, creativity and responsibility" (Shneiderman, 2020, p. 495), with a strong focus on human-AI collaboration in workplaces (Wilson & Daugherty, 2020). The term co-piloting has recently been popularised by GitHub Copilot, a popular and increasingly indispensable AI tool for programmers (Wikipedia, n.d.).
Education took a parallel route, while building on the same original aspirations of humans being augmented by, or working in symbiosis with, computers. Intelligent Tutoring Systems (for a review see Nwana, 1990) and a wide range of increasingly animated pedagogical agents (Dehn & van Mulken, 2000) were developed and tested. With the increasing power of these systems (Luckin et al., 2016) and then the eventual addition of GenAI (GPT-3 in 2020), supports for learning blossomed across a broad range of educational domains and contexts: through games like Minecraft for programming skills, as support for classroom teachers' general materials development, or as additional tutoring/practice for students (Microsoft, 2025, March 5). Together, these developments come together as co-piloting in education, which might be summed up by four central components that partially overlap with workplace conceptions of co-piloting and with the areas of teaching-learning to which GenAI is well positioned to contribute: a. collaboration (augmenting and blending with teachers; Luckin et al., 2016) and b. efficiency (automating tasks and saving teachers' time) are the most straightforward; c. building on Intelligent Tutoring Systems' decades-long effort to personalise learning means meeting students at their knowledge level; and d. probably most practically, co-piloting in education means building more and better feedback into the learning experience. This final component comes through students' both direct and indirect access to information. The indirect route arises from teachers gaining a better understanding of students' knowledge development. Teachers are then in a stronger position to talk not just about where students are and need to go, but also how to get there (Hattie, 2012).
Co-piloting has not, to our knowledge, been the focus of research in educational contexts. It is, however, a worthy avenue for discussion because, as noted to this point, AIs have the potential to become better at providing personalised structure than many human classroom teachers ever could (Joshi et al., 2021). However, structure's impact on student learning is further enhanced by involvement (i.e., support for relatedness needs; Skinner et al., 1998), which will be provided by teachers. Such a co-piloted classroom can readily be imagined. Lessons are created by the AI, then reviewed and perhaps adjusted by the teacher. These lessons would be individualised in parts by the GenAI to ensure all students are working in their zone of proximal development. At the same time, the teacher would ensure that important parts of these individualised lessons intersect across students to foster sufficient social learning experiences (ensuring opportunities for relatedness needs to be met). Teachers would keep abreast of students' progress through visualised analytics and ensure students know that teachers understand and care about their progress, and that they are available to discuss difficulties in groups or one-on-one. At the same time, teachers would dedicate a substantive amount of classroom time to small and large group interactions, listening and building relationships with and between students.
If teachers were left to focus on involvement, it is possible that they might even meaningfully improve the quality of the involvement they provide students. Teacher education and the research that supports involvement as a teaching skill could home in on how involvement is best delivered to different students, at different ages and stages of development. In this way, co-piloting could, potentially, dramatically improve the teaching profession.
Given the short format of this article, its scope was necessarily narrow, focusing on the introduction of co-piloting between a human teacher and a GenAI agent, aiming to improve students' learning experiences. A specific psychological approach was taken in this introductory effort, but many other (or additional) theories might have been applied to this AI-human educational interface. Further theoretical work to a. elaborate the presently proposed perspective of psychological needs (e.g., Dweck, 2017; Ryan & Deci, 2017), b. extend thinking to other well-established, connected theories of education (e.g., Skinner et al., 2022), and c. diversify our understanding of this human and digital educational merger is necessary and called for (e.g., Macgilchrist, 2021). Furthermore, empirical research (experimental and longitudinal) is critical to test the ideas put forth here and the other theorising that must follow. These tests need to be sensitive to students (e.g., students' culture and prior knowledge) and context (e.g., level and aims of education) if they are to be meaningful. AIs are unlikely to match all educational needs; understanding where they are useful and where they are not will be critical (e.g., see Dinsmore & Fryer, 2025). Finally, questions of structure and relatedness support by current and future AI raise ethical questions regarding students' understanding of who or what they are learning from. This issue has both expertise and emotional valences that will need to be addressed by research going forward.
Educational technology has always had a gap between the technology available and the psychologically framed research that examines best teaching and learning practices (Means, 2022). There is a danger, as AIs rapidly develop and begin to look like an answer to many of our educational questions, that this gap turns into a gulf. Before this happens, we should recognise what AI is likely to excel at and what it can only hope to offer a facsimile of. AI needs to be properly harnessed to ensure it drives better learning experiences and outcomes. Psychological needs theory (specifically the needs for relatedness and competence) is a firm theoretical foundation upon which to develop co-piloting instruction. Through such paired instruction (i.e., co-piloting), AI and human teachers might each maximise their respective strengths to the benefit of students for generations to come.
Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117, 497–529. http://dx.doi.org/10.1037/0033-2909.117.3.497
Bradshaw, J. M. (1997). Software Agents. MIT Press.
Carver, C. S., & Scheier, M. F. (1982). Control theory: A useful conceptual framework for personality-social, clinical, and health psychology. Psychological Bulletin, 92, 111–135. http://dx.doi.org/10.1037/0033-2909.92.1.111
Chen, X., Zou, D., Xie, H., Cheng, G., & Liu, C. (2022). Two decades of artificial intelligence in education. Educational Technology & Society, 25(1), 28–47.
De Togni, G., Erikainen, S., Chan, S., & Cunningham-Burley, S. (2021). What makes AI 'intelligent' and 'caring'? Exploring affect and relationality across three sites of intelligence and care. Social Science & Medicine, 277, 113874. https://doi.org/10.1016/j.socscimed.2021.113874
Dinsmore, D., & Fryer, L. K. (2025, March 5). What does current genAI actually mean for student learning? Learning and Individual Differences. http://dx.doi.org/osf.io/f8z56_v1
Dweck, C. S. (2017). From needs to goals and representations: Foundations for a unified theory of motivation, personality, and development. Psychological Review, 124(6), 689–719. https://doi.org/10.1037/rev0000082
Engelbart, D. C., & English, W. K. (1968). A research center
for augmenting human intellect. Proceedings of the December 9-11,
1968, Fall Joint Computer Conference, Part I on - AFIPS ’68 (Fall,
Part I), 395. https://doi.org/10.1145/1476589.1476645
Flavell, J. H. (1977). Cognitive development.
Prentice-Hall.
Fryer, L. K., Ainley, M., Thompson, A., Gibson, A., & Sherlock, Z. (2017). Stimulating and sustaining interest in a language course: An experimental comparison of Chatbot and Human task partners. Computers in Human Behavior, 75, 461–468. https://doi.org/10.1016/j.chb.2017.05.045
García-Rodríguez, L., Iriarte Redín, C., & Reparaz Abaitua, C.
(2023). Teacher-student attachment relationship, variables
associated, and measurement: A systematic review. Educational
Research Review, 38, 100488.
https://doi.org/10.1016/j.edurev.2022.100488
Hattie, J. (2012). Know thy impact. Educational Leadership, 70(1), 18–23.
Holmes, W., & Miao, F. (2023). Guidance for generative AI in education and research. UNESCO Publishing.
Holstein, K., Aleven, V., & Rummel, N. (2020). A Conceptual
Framework for Human–AI Hybrid Adaptivity in Education. In I. I.
Bittencourt, M. Cukurova, K. Muldner, R. Luckin, & E. Millán
(Eds.), Artificial Intelligence in Education (Vol.
12163, pp. 240–254). Springer International Publishing.
https://doi.org/10.1007/978-3-030-52237-7_20
Jaiswal, A., Raju, A. K., & Deb, S. (2020, June). Facial emotion detection using deep learning. In 2020 International Conference for Emerging Technology (INCET) (pp. 1–5). IEEE.
Jiang, Y., Li, X., Luo, H., et al. (2022). Quo vadis artificial intelligence? Discover Artificial Intelligence, 2, 4. https://doi.org/10.1007/s44163-022-00022-8
Jansen, T., Meyer, J., Wigfield, A., & Möller, J. (2022).
Which student and instructional variables are most strongly
related to academic motivation in K-12 education? A systematic
review of meta-analyses. Psychological Bulletin, 148
(1-2), 1–26. https://doi.org/10.1037/bul0000354
Joshi, S., Rambola, R. K., & Churi, P. (2021). Evaluating
artificial intelligence in education for next generation. In
Journal of Physics: Conference Series (Vol. 1714, No. 1,
p. 012039). IOP Publishing.
Kaplan, J. (2024). Generative artificial intelligence: What everyone needs to know. Oxford University Press.
Lerner, R. M. (2001). Concepts and theories of human
development. Psychology Press.
Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4–11.
Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson.
Macgilchrist, F. (2021). Theories of postdigital heterogeneity: Implications for research on education and datafication. Postdigital Science and Education, 3(3), 660–667. https://doi.org/10.1007/s42438-021-00232-w
Means, B. (2022). Making insights from educational psychology and educational technology research more useful for practice. Educational Psychologist, 57(3), 226–230. https://doi.org/10.1080/00461520.2022.2061974
Microsoft (2025, March 5). Delivering greater impact with Copilot and the power of agents. Retrieved 5 March 2025, from https://www.microsoft.com/en-us/education/blog/2025/01/delivering-greater-impact-with-copilot-and-the-power-of-agents/
Mischel, W., & Shoda, Y. (1995). A cognitive-affective system theory of personality: Reconceptualizing situations, dispositions, dynamics, and invariance in personality structure. Psychological Review, 102, 246–268. http://dx.doi.org/10.1037/0033-295X.102.2.246
Moran, E. F. (2022). Human adaptability: An introduction to ecological anthropology. Routledge.
Okonkwo, C. W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Computers and Education: Artificial Intelligence, 2, 100033. https://doi.org/10.1016/j.caeai.2021.100033
Patall, E. A., Yates, N., Lee, J., Chen, M., Bhat, B. H., Lee, K.,
Beretvas, S. N., Lin, S., Man Yang, S., Jacobson, N. G., Harris,
E., & Hanson, D. J. (2024). A meta-analysis of teachers’
provision of structure in the classroom and students’ academic
competence beliefs, engagement, and achievement. Educational
Psychologist, 59(1), 42–70.
https://doi.org/10.1080/00461520.2023.2274104
Richardson, M., Abraham, C., & Bond, R. (2012). Psychological
correlates of university students’ academic performance: A
systematic review and meta-analysis. Psychological Bulletin,
138(2), 353–387.
https://doi.org/10.1037/a0026838
Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. The Guilford Press.
Skinner, E. A., & Belmont, M. J. (1993). Motivation in the
classroom: Reciprocal effects of teacher behavior and student
engagement across the school year. Journal of Educational
Psychology, 85(4), 571–581.
https://doi.org/10.1037/0022-0663.85.4.571
Skinner, E. A. (1995). Perceived control, motivation, & coping. SAGE Publications. https://doi.org/10.4135/9781483327198
Skinner, E. A., Zimmer-Gembeck, M. J., Connell, J. P., Eccles, J.
S., & Wellborn, J. G. (1998). Individual Differences and the
Development of Perceived Control. Monographs of the Society
for Research in Child Development , 63(2/3).
https://doi.org/10.2307/1166220
Skinner, E. A., Kindermann, T. A., Vollet, J. W., & Rickert,
N. P. (2022). Complex Social Ecologies and the Development of
Academic Motivation. Educational Psychology Review, 34(4),
2129–2165.
https://doi.org/10.1007/s10648-022-09714-0
Sweller, J., & Chandler, P. (1991). Evidence for cognitive load theory. Cognition and Instruction, 8(4), 351–362. https://doi.org/10.1207/s1532690xci0804_5
Sweller, J. (1994). Cognitive load theory, learning difficulty,
and instructional design. Learning and Instruction, 4(4),
295–312.
https://doi.org/10.1016/0959-4752(94)90003-5
Tontini, G. E., & Neumann, H. (2021). Artificial intelligence: Thinking outside the box. Best Practice & Research Clinical Gastroenterology, 52, 101720. https://doi.org/10.1016/j.bpg.2020.101720
Velu, D. (2024). AI's 5-level framework to AGI. Medium. Retrieved 7 March 2025, from https://medium.com/@dheeren.velu/ais-s-5-level-framework-to-agi-2d0ef4880f95
Wenzlaff, K., & Spaeth, S. (2022). Smarter than humans? Validating how OpenAI's ChatGPT model explains crowdfunding, alternative finance and community finance. SSRN. http://dx.doi.org/10.2139/ssrn.4302443
Wikipedia (n.d.). GitHub. Retrieved 5 March 2025, from https://en.wikipedia.org/wiki/GitHub