The Selwyn (2024) paper is so important that I am just going to embed the whole paper in my Learning Portfolio here and reread the whole thing every time I come back.

“There is no I in AI” by Steven Pemberton. I listened to this video and rewound it so many times that I lost count. I started listening at 8 p.m. and kept replaying it over and over until past midnight, when I finally decided to stop. So here are just a few points that I want to record in my Learning Portfolio. I learned so much from this talk that I am not surprised it is related to an Introduction to AI course that Steven Pemberton teaches.

First nugget for me: the machine is learning, but this is not true intelligence. The AI cannot explain itself. Interestingly, humans cannot always explain themselves either. “There is no I in AI (yet)”. In the example Steven Pemberton used, he showed that the learning program cannot work out for itself that all corners in Tic Tac Toe are the same; that knowledge must be encoded into the learning program by the programmer. There is no intelligence in AI: AI can learn, but it is not intelligent. I recall Greg Brockman's talk on ChatGPT where he said the same thing. ChatGPT can add two 40-digit numbers, yet it gets the answer wrong when it tries to add a 40-digit number to a 35-digit number. So the AI has learned to add 40-digit numbers but has not acquired the intelligence to do addition. And there are many other examples we know of, such as the question about the world record for crossing the English Channel on foot. Steven Pemberton gave another great example: “If an orchestra of 120 players takes 40 minutes to play Beethoven's 5th, how long would it take 60 players?” The AI's answer was twice as long, when of course the piece takes 40 minutes no matter how many players there are. So I now know the secret behind the statement “There is no I in AI”.
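
To make the Tic Tac Toe point concrete for myself, here is a minimal sketch (my own illustration, not Pemberton's actual program) of the kind of symmetry knowledge the programmer has to hand-encode: under the eight rotations and reflections of the board, the four corners all map onto one another, so an opening move in any corner is strategically the same move. The 0–8 cell numbering is my own assumption.

```python
# A hand-encoded fact the learning program cannot discover on its own:
# under the board's symmetries, all four corners are the same square.
# Cells are numbered 0..8:
#   0 | 1 | 2
#   3 | 4 | 5
#   6 | 7 | 8

def rotate(cell):
    """Rotate the board 90 degrees clockwise: (row, col) -> (col, 2 - row)."""
    row, col = divmod(cell, 3)
    return col * 3 + (2 - row)

def reflect(cell):
    """Mirror the board left to right: (row, col) -> (row, 2 - col)."""
    row, col = divmod(cell, 3)
    return row * 3 + (2 - col)

def equivalent_cells(cell):
    """All cells this cell can reach under the 8 board symmetries."""
    reached = set()
    for start in (cell, reflect(cell)):
        for _ in range(4):
            reached.add(start)
            start = rotate(start)
    return reached

corners = {0, 2, 6, 8}
# Every corner is equivalent to every other corner.
assert all(equivalent_cells(c) == corners for c in corners)
print("Corner 0 is the same move as corners:", sorted(equivalent_cells(0)))
```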

Second, why do humans think AI programs are intelligent? Steven Pemberton explains that it is because of pareidolia: humans have a way of interpreting everything in terms of ourselves. We have a tendency to think of non-intelligent computer programs as intelligent when they are really just doing machine learning, and then we are surprised when AI programs make mistakes.

Steven Pemberton then brilliantly explains how ChatGPT works and why ChatGPT is a stochastic parrot. Quoting Pemberton from his slides: “The phrase (stochastic parrot) was coined in a paper ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ by Google employees Emily Bender, Timnit Gebru, and others. It reflects that like a parrot, ChatGPT etc. are just parroting text, and don’t have any true understanding of what they are saying. The paper covers the risks, environmental and financial costs, inscrutability and biases, the inability of the models to understand, and the potential for deceiving people. It resulted in Gebru and Mitchell losing their jobs at Google!”

In his illustration of how ChatGPT works, Steven Pemberton showed that ChatGPT is simply using the statistical techniques he had just demonstrated, only on a vast scale, because it has read the whole Internet. ChatGPT just generates text related to what you typed; it does not have real understanding. That is, ChatGPT is just parroting stuff back at you based on what you typed.
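
To check my understanding of the “stochastic parrot” idea, here is a toy sketch of my own (this is not how ChatGPT is actually built; ChatGPT uses a neural network trained on vastly more text): a bigram model that picks each next word purely from the statistics of which word followed it in the training text. Scale this idea up enormously and you get text that looks related to the prompt, with no understanding behind it.

```python
# A toy "stochastic parrot": generate text by sampling, for each word,
# one of the words that followed it in the training text.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
)

# Count which words follow each word in the training text.
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def parrot(prompt_word, length=10):
    """Generate text by repeatedly sampling a statistically likely next word."""
    output = [prompt_word]
    for _ in range(length):
        choices = followers.get(output[-1])
        if not choices:
            break  # no continuation was ever seen in training
        output.append(random.choice(choices))
    return " ".join(output)

print(parrot("the"))  # e.g. "the cat sat on the rug and the dog saw the"
```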

Then Steven Pemberton talked about Generalized Intelligence, or what I knew as Artificial General Intelligence (AGI). Pemberton said that a new arms race is on for Generalized Intelligence, when there really is an I in AI. When will it happen? What will happen when computers are more intelligent than us? Interestingly, Pemberton believes that it is not a question of whether it will happen but when, because it is only a matter of time. It could be next year, or it could be 50 years from now.

Another interesting point that caught my attention is that from data we can extract knowledge, and from knowledge we can extract wisdom. Since machines are good at finding patterns, it may be that machines can extract knowledge from the huge quantities of data that we produce, and hopefully we humans can then extract wisdom from that knowledge.
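
Here is a tiny sketch of my own (not from the talk) of what “extracting knowledge from data” can look like at its simplest: the raw data is a list of noisy measurements, and the extracted knowledge is the rule hiding behind them, recovered here with ordinary least squares.

```python
# Data -> knowledge: recover the rule behind noisy measurements.
# The hidden rule here is y = 2x + 1; least squares finds it from the data.

data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]  # raw data

n = len(data)
sum_x = sum(x for x, _ in data)
sum_y = sum(y for _, y in data)
sum_xy = sum(x * y for x, y in data)
sum_xx = sum(x * x for x, _ in data)

# Ordinary least squares for slope and intercept.
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n

print(f"Extracted knowledge: y ≈ {slope:.2f}x + {intercept:.2f}")
# Deciding what to *do* with that rule is the wisdom part,
# and that still falls to us.
```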

Ezra Klein started by asking these questions: if you have this technology, why would you not use it to do things for you? Read your book, write your essay, do your math homework, write that reference letter?

Rebecca Winthrop, speaking with Ezra Klein, agrees that we now have AI that can write that essay, do the AP exams, and pass the Bar exam. She says that the skills that matter most are how motivated and engaged students are to learn new things: whether they are motivated and engaged enough to want to dig in and do the hard learning. That may be one of the most important skills to have in a time of uncertainty, meaning an AI-infused world. That they are go-getters. They are way-finders. Things are going to shift and change, and they will be able to navigate and constantly learn new things, and be excited to learn new things. Because when kids are motivated, that is the greatest predictor of how well they do.

Engagement is very powerful. It is how motivated you are to dig in and learn: to show up, participate in class, do homework. It relates to how you feel: do you find school interesting? Is it exciting? It is related to how you think: are you cognitively engaged? Are you taking what you learned in one class and applying it to another class, or to your life outside of class? It is also about how proactive you are about your learning. All those dimensions work together in education, and engagement is a powerful construct for predicting how well students do: better grades, better mental health, better understanding of content, more enrollment in further education, and other benefits.

Rebecca Winthrop said that they found students engage in four different ways, or engagement modes: (1) passenger mode, (2) achiever mode, (3) resistor mode, and (4) explorer mode. Passenger mode is when students are coasting; achiever mode is when they are trying to get perfect outcomes; resistor mode is when they are avoiding and disrupting; and explorer mode is when they love what they are learning, dig in, and are super proactive.

Rebecca Winthrop explained passenger mode in depth. Passenger mode, according to Winthrop, is difficult for parents and teachers to spot because many students in passenger mode get really good grades but are bored to tears. They show up to school and do the homework, but they have dropped out of learning. Passenger mode is when students are coasting and doing the bare minimum. One sign of this is that the student does the work as fast as possible; another is that they say “school is boring, I learn nothing”. Students often end up in passenger mode when school is too easy. Another route into passenger mode is when school is too hard. For example, you can have a neurodivergent student who does not feel they belong and so is not tuning in, or students who are missing certain pieces of the skill set they need; because knowledge and education are cumulative in many ways, they get overwhelmed and need special attention.

Rebecca Winthrop says that we need learning experiences that motivate students to dig in and be engaged and excited to learn. She said that there are three parts to the answer.

(1) Why do you want an education? What is the purpose of education? Because now we have AI that can write that essay, do the AP exams and pass the Bar exam, we need to rethink the purpose of education.

(2) How do kids learn? She said we know a lot about this.

(3) What should they learn? What is the content? What are the skills? KUDo. More than that: learning to live with people, knowing yourself and developing flexible competencies to navigate an uncertain world.

It is absolutely essential that they learn content so as to know what is real and what is fake.

Kids need to build the muscle to do hard things, because AI will make a frictionless world for them. (AI would be able to write any essay, read any article, do any math problem, and pass any test for the kid.)

Ezra Klein took the position of an AI enthusiast and described a utopian future of personalized learning, in the vein of “an amazing AI personal tutor for every student and an amazing AI teaching assistant for every teacher”.

Rebecca Winthrop thinks that the goal inside schools is not a 100% personalized learning journey for every student, as imagined in that utopian vision of personalized learning.

Rebecca Winthrop says that teachers do many, many things in the classroom; I believe she means that teaching is complex. And students learn in relationships with other humans. We have evolved to do that, so we will never move away from it. This is the same theme as in the other readings: teaching and learning is a human enterprise, and we will never be able to get away from that.

She says that teachers do knowledge transfer and skills development when they teach and work with students. She says that AI can be good here and we could see this in Adaptive Learning Systems and Intelligent Tutoring Systems.

She says that in the future you could have a model of education with multiple people working with a student, and one of those could be an AI tutor helping with skills development and knowledge acquisition. I can see this, since even today we have the classroom teacher and SERTs working with students. So students might spend part of the day with adaptive learning software or intelligent tutoring software on key academic subjects and spend the rest of the day interacting with peers and others in the community. She feels, however, that the model should not be one where students sit in front of intelligent tutoring software most of the time.

Some of the other interesting points that Rebecca Winthrop made are the following. Students learn in relationships with other humans, meaning peers and teachers. Grades do not show how engaged students are. Schools today are not designed to give students agency; rather, they are designed for students to comply. What you need are feedback loops beyond just grades and behaviour to know how a student is developing agency over their learning: that is, are they able to reflect on what they are learning in a way that lets them identify what is interesting, and do they have the skills to pursue new information? That is the core skill for learning new things in an uncertain world. Make sure students are learning to interact with other human beings, working with peers or connecting with community members. There will be a premium on human interaction as more and more skills get automated and done by AI.

References

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Klein, E. (Host). (2025, May 13). Rebecca Winthrop, director of the Center for Universal Education at the Brookings Institution, discusses how A.I. is transforming what it means to work and be educated [Video podcast episode]. The Ezra Klein Show. https://www.youtube.com/watch?v=HQQtaWgIQmE

SAI Conference. (2024, December 11). There’s no I in AI [Video]. YouTube. https://www.youtube.com/watch?v=lS4-QSR1sNk

Selwyn, N. (2024). On the Limits of Artificial Intelligence (AI) in Education. Nordisk Tidsskrift for Pedagogikk & Kritikk, 10(1). https://doi.org/10.23865/ntpk.v10.6062

