Our starting point in this very first lesson is a set of questions: What, if anything, have you used AI for? Why did you use it? Are you concerned about the effects of AI on society more generally? Are you concerned about the effects of AI in your educational context? Do you think AI is ‘over-hyped’? I have not used AI very much; in fact, I have not even used Grammarly yet. I am concerned about students using AI to cheat on assignments, so in this instance I share the sentiments of Troy Jollimore (2025), who says, “I used to teach students. Now I catch ChatGPT cheats”.
The slides the Professor showed in class were created by ChatGPT, which led to a number of questions we could ask ourselves about the experience. Since I had seen this done in a previous course, it was not as much of a surprise to me as it had been the first time. Even so, I had no way of knowing that the slides were produced by ChatGPT until the Professor told us.
Other questions raised in class will be answered as I progress through the course. For example: does AI rupture or augment the implicit relationship that is so fundamental to learning and teaching?


Doroudi (2023) contains many good ideas and useful background that I will return to, as well as a history of artificial intelligence and education that will serve as a good reference.
Doroudi (2023) reported that Simon and Newell postulated a theory of how people learn to solve problems:
“In a production system, each routine has a bipartite form, consisting of a condition and an action. The condition defines some test or set of tests to be performed on the knowledge state... If the test is satisfied, the action is executed; if the test is not satisfied, no action is taken, and control is transferred to some other production.” “Learning then becomes a matter of gradually accumulating the various production rules necessary to solve a problem.”
I find this very interesting and I wonder if these ideas are still valid today. Reading this again now, two days later, I realize that I don’t understand what this means.
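To help myself unpack the quotation, here is a minimal sketch, in Python, of what I understand a production system to be: each rule pairs a condition (a test on the current knowledge state) with an action that changes that state, and learning amounts to accumulating more rules. The equation-solving example and all of the names are my own invention, not something taken from Doroudi (2023) or from Simon and Newell.

```python
# A minimal sketch of a production system: each production pairs a condition
# (a test on the current knowledge state) with an action (a change to that state).
# The example domain (solving x + a = b) and all names are my own illustration.

def make_production(name, condition, action):
    return {"name": name, "condition": condition, "action": action}

def run(productions, state, max_steps=10):
    """Repeatedly fire the first production whose condition matches the state."""
    for _ in range(max_steps):
        for rule in productions:
            if rule["condition"](state):
                rule["action"](state)
                print(f"fired {rule['name']}: {state}")
                break
        else:
            break  # no condition was satisfied, so stop
    return state

# A single production rule for equations of the form x + a = b.
productions = [
    make_production(
        "isolate-x",
        condition=lambda s: "a" in s and "b" in s and "x" not in s,
        action=lambda s: s.update({"x": s["b"] - s["a"]}),
    ),
]

# "Learning" in this view is accumulating more such rules over time,
# e.g. later adding a rule for equations of the form a * x = b.
state = run(productions, {"a": 3, "b": 10})
print("solution:", state["x"])  # -> 7
```

Writing it out this way helps me see the point of the quotation: the “program” for solving a class of problems is nothing more than the growing collection of condition–action rules.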
Doroudi (2023) wrote about the work of Zhu and Simon on using worked examples and problem-solving exercises to teach algebra and geometry, and their claim that carefully constructed problems, based on how experts solve problems, can be an efficient form of instruction. This interests me because I have used worked examples to teach Mathematics myself, and I also know about worked examples for teaching Computer Science.
Doroudi (2023) described the different approaches to AI, beginning with symbolic AI, or good old-fashioned AI (GOFAI). The other approach is non-symbolic AI, which includes machine learning; within machine learning is deep learning, which has its roots in simulating learning via artificial neural networks. I find this history instructive and interesting, as it enlightens me about the path to today’s popular artificial neural networks.
Doroudi (2023) reported that Papert and Minsky believed that the mind must consist of many interacting components, and that it is the interaction of these pieces that makes up intelligence and gives rise to learning. This leads to the question of how the mind represents knowledge. Doroudi (2023) explained that Minsky described a representation of knowledge he called “frames”:
Here is the essence of the theory: When one encounters a new situation (or makes a substantial change in one’s view of the present problem) one selects from memory a structure called a Frame. This is a remembered framework to be adapted to fit reality by changing details as necessary. A frame is a data-structure for representing a stereotyped situation, like being in a certain kind of living room, or going to a child’s birthday party. Attached to each frame are several kinds of information. Some of this information is about how to use the frame. Some is about what one can expect to happen next. Some is about what to do if these expectations are not confirmed.
Frames allow for navigating unforeseen situations in terms of situations one has seen before. It means that early on, one might make mistakes by extrapolating based on a default version of a frame, but as a situation becomes clearer, one can customize the frame (by filling in certain “terminals” or “slots”) to meet the needs of the situation. Importantly, frames were meant to be relevant to a variety of areas of artificial intelligence, including computer vision, language processing, and memory.
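To make the idea concrete for myself, here is a small Python sketch of a frame as a data structure: slots start out filled with default expectations and are overridden as the actual situation becomes clearer. The birthday-party slots are my own toy example in the spirit of Minsky’s description, not code from the paper.

```python
# A sketch of a Minsky-style frame: a stereotyped situation represented as
# slots ("terminals") holding default expectations that can be overridden
# once the real situation is better known. The slot names are my own example.

class Frame:
    def __init__(self, name, defaults):
        self.name = name
        self.slots = dict(defaults)   # start from the default expectations

    def fill(self, **observations):
        """Customize the frame as details of the real situation come in."""
        self.slots.update(observations)

    def expect(self, slot):
        return self.slots.get(slot)

# Default expectations for a child's birthday party.
party = Frame("birthday-party", {"cake": True, "gifts": True, "games": "unknown"})

print(party.expect("cake"))                    # True -- extrapolated from the default frame
party.fill(cake=False, games="treasure hunt")  # reality corrects the defaults
print(party.expect("cake"))                    # False
print(party.slots)
```

The early mistake the quotation mentions is visible here: before any observations arrive, the frame confidently predicts cake.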
Doroudi (2023) also included Minsky’s own description of his Society of Mind:
I’ll call Society of Mind this scheme in which each mind is made of many smaller processes. These we’ll call agents. Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join these agents in societies—in certain very special ways—this leads to true intelligence.
Doroudi (2023) reported that Minsky’s Society of Mind and connectionism both propose “a bottom-up process that gives rise to learning. Like Minsky’s agents, each individual neuron is not sophisticated, but it is the connections between many neurons that can result in learning to do complex tasks”. Doroudi (2023) quoted Minsky and Papert in explaining that, unlike neurons, agents in Minsky’s Society of Mind are sophisticated: “the marvelous powers of the brain emerge not from any single, uniformly structured connectionist network but from the highly evolved arrangements of smaller, specialized networks which are interconnected in very specific ways”.
Doroudi (2023) then described Papert’s pioneering ideas on constructionist learning: “learning is not an expert transmitting certain rules to a student, but rather the student picking up “little nuggets of knowledge” as they experiment and discover a world for themselves”. Given that in my everyday practice as a teacher I carry around the famous mantra “every student can learn, just not on the same day, or the same way”, I find it instructive and useful that Doroudi wrote about the same idea, quoting Papert: “No two people follow the same path of learnings, discoveries, and revelations. You learn in the deepest way when something happens that makes you fall in love with a particular piece of knowledge”. On Papert’s influence on educational theory, Doroudi (2023) indicated that Papert’s theory of constructionism is Piaget’s theory of constructivism augmented with “the idea that a student’s constructions are best supported by having objects (whether real or digital) to build and tinker with”, which led to today’s maker movement.
Doroudi’s (2023) presentation of the history of AI and education contains some very useful nuggets of knowledge. For example, Doroudi (2023) explained that while Noam Chomsky and others had developed models of language based on syntax, Schank focused on semantics, that is, on the concepts and meaning the words contain: “In conceptual dependency theory, two sentences would share the same conceptual representation if they shared the same meaning, regardless of the language and syntax of each sentence”.
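To see for myself what “sharing the same conceptual representation” might look like, here is a toy Python sketch in which two differently worded sentences are mapped, by hand, to the same structure. The hand-built lookup and the “transfer-possession” label are my own simplification and merely stand in for Schank’s actual formalism and primitives.

```python
# A toy illustration of the idea behind conceptual dependency: two sentences
# with different syntax map to the same underlying conceptual structure.
# The mapping is hand-written; a real system would derive it from the text.

def conceptualize(sentence):
    # Hand-built lookup standing in for a real parser / semantic analyzer.
    representations = {
        "John gave Mary a book": {
            "act": "transfer-possession", "agent": "John",
            "object": "book", "recipient": "Mary",
        },
        "Mary received a book from John": {
            "act": "transfer-possession", "agent": "John",
            "object": "book", "recipient": "Mary",
        },
    }
    return representations[sentence]

s1 = conceptualize("John gave Mary a book")
s2 = conceptualize("Mary received a book from John")
print(s1 == s2)  # True: same meaning, same conceptual representation
```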
Another is case-based reasoning, which caught my interest because I had read about case-based reasoning when I was a student, and at that time it was new. Doroudi (2023) reported on Schank’s work on case-based teaching: using stories to teach, and designing interactive teaching software that puts students into authentic problem-solving situations; when they need support, students receive a story which they can hopefully apply to help them solve the problem. Other architectures developed were “simulation-based learning” and “cascaded problem sets”.
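As I understand it, the heart of case-based reasoning is a retrieval step: given a new problem, find the stored case (here, a teaching story) that most resembles it and offer it as support. Here is a minimal Python sketch of that step; the case library and the feature-overlap similarity measure are invented placeholders, not Schank’s actual architecture.

```python
# A sketch of the retrieval step in case-based reasoning: pick the stored
# case (here, a teaching "story") whose features best match the new problem.
# The feature sets and cases are invented placeholders for illustration.

CASE_LIBRARY = [
    {"story": "How a shop owner priced items to break even",
     "features": {"algebra", "word-problem", "money"}},
    {"story": "How a surveyor used triangles to measure a river's width",
     "features": {"geometry", "measurement", "triangles"}},
]

def retrieve(problem_features):
    """Return the case sharing the most features with the new problem."""
    return max(CASE_LIBRARY,
               key=lambda case: len(case["features"] & problem_features))

best = retrieve({"algebra", "money", "discounts"})
print(best["story"])  # -> the shop-owner story
```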
I wonder whether case-based learning would work for students who are not motivated to learn, or for neurodivergent students with, for example, a learning disability.
Most important to me is the footnote, attributed to John Seely Brown, suggesting that instead of intelligent tutoring systems (ITS) being a part of AI, AI should become a part of ITS.
Another important nugget for me is the reference to Tuomi’s paper arguing that a particular state-of-the-art neural network could not accurately learn concepts in the way humans do (human learning as outlined by Lev Vygotsky).
Doroudi (2023) wrote that Papert reminded us about the big cosmic questions in knowledge and human learning: “Can we make a machine to rival human intelligence? Can we make a machine so we can understand intelligence in general?”, while lamenting that the field’s successes came in computer programs that can assemble cars and do accounting. In my case, however, I am interested in computer programs that can teach.
Equally interesting was Doroudi’s (2023) reporting on work by other researchers who see AI as “a front-end technology-of-the-mind through which educators can represent, experiment with and compare their practices at a fine-grained level of detail and engage in predictive analyses of the potential impact of their actions on individual learners”.
Doroudi (2023) concluded his paper with some interesting questions for future research. I will simply copy here the one that interests me, for my own reference and to revisit as I work through this course: Can researchers develop connections between AI and socio-cultural theories of learning? The associated research that Doroudi (2023) reported is fascinating to me: models of learning that can account for multi-agent interactions, and agent-based models that could describe learning as a cultural process.
References
Baldassarre, G., & Mirolli, M. (Eds.). (2013). Intrinsically motivated learning in natural and artificial systems. Springer.
Doroudi, S. (2023). The Intertwined Histories of Artificial Intelligence and Education. International Journal of Artificial Intelligence in Education, 33(4), 885–928.
Jollimore, T. (2025). I Used to Teach Students. Now I Catch ChatGPT Cheats. The Walrus. Retrieved from https://thewalrus.ca/i-used-to-teach-students-now-i-catch-chatgpt-cheats/
MacLellan, C. J., & Koedinger, K. R. (2022). Domain-general tutor authoring with apprentice learner models. International Journal of Artificial Intelligence in Education, 32(1), 76–117.

