AI and “Knowledge”

Do you think there is a useful distinction to be made between a chatbot and an AI assistant? Why or why not? Yes, in my mind we can make a distinction: chatbots are basically stochastic parrots that carry on a conversation, even if they may be using Large Language Models (LLMs). An AI assistant can be more agentic: an intelligent software agent that can use tools and has memory, so it can maintain the context of an interaction and pursue goals. An AI assistant can also be multimodal, meaning it can handle text, audio, and video.
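The distinction can be made concrete in code. This is only a toy sketch with hypothetical names (not any particular product's design): the chatbot is a stateless text-in/text-out function, while the assistant adds the agentic pieces mentioned above, namely memory and tool use.

```python
# Hypothetical sketch: a chatbot is roughly a stateless text-in/text-out
# function, while an assistant adds memory and tools.

def chatbot_reply(prompt: str) -> str:
    """A 'stochastic parrot': responds from the prompt alone, keeps no state."""
    return f"A plausible continuation of: {prompt!r}"

class Assistant:
    """Agentic sketch: keeps conversational context and can call tools."""
    def __init__(self, tools):
        self.memory = []     # conversation context persists across turns
        self.tools = tools   # tool name -> callable

    def handle(self, user_msg: str) -> str:
        self.memory.append(("user", user_msg))
        # A real assistant would let the model decide which tool to invoke;
        # here we just look for a tool's name in the message.
        for name, fn in self.tools.items():
            if name in user_msg:
                result = fn(user_msg)
                self.memory.append(("tool", result))
                return f"Used {name}: {result}"
        reply = chatbot_reply(user_msg)
        self.memory.append(("assistant", reply))
        return reply

assistant = Assistant(tools={"calculator": lambda msg: str(2 + 2)})
print(assistant.handle("calculator: what is 2 + 2?"))
print(len(assistant.memory))  # context accumulated across the turn
```

The point of the sketch is the structural difference: the chatbot function has no `self.memory` and no `self.tools`, so each reply is independent, whereas the assistant's state lets it maintain context and act toward a goal.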

What did the interviewees outline as limitations of LLMs? For what purposes did they NOT recommend the use of LLMs? Among the concerns with LLMs is the problem of hallucination, meaning they make up information or provide misleading information. Most surprising, and new to me, is that the models can be overconfident. Another is that the models can be deceptive: they know, but they don’t want to tell you what they know. I felt like “Whoa!” I agree, you don’t want to sound like you are treating these LLMs like humans. The researchers warned against using these models in certain high-stakes situations, such as relying on them for medical or legal advice. So I wonder, what about education? We know these models are not perfect and can make mistakes, so should they be used for educating young students?

In The Conversation (2025) webinar, AI expert guests Prof. Mayank Kejriwal and Prof. Lu Wang answer questions about AI. ChatGPT was described as autocomplete on steroids, which is a fairly good description. I recall the other description of ChatGPT as a stochastic parrot. Large Language Models (LLMs) such as ChatGPT are purely text-based, while multimodal models can handle text, audio, and video and can be used to create interactive agents.
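The “autocomplete on steroids” description can be illustrated with a toy example. This is a deliberately tiny bigram model, not how real LLMs work internally (they use neural networks trained on vast corpora), but it shares the core loop: repeatedly predict the next token from what came before.

```python
import random

# Toy "autocomplete" sketch: a bigram table counts which word follows
# which, then text is generated one token at a time.
corpus = "the cat sat on the mat and the cat ran".split()

bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def autocomplete(start, length, seed=0):
    """Generate text one next-token prediction at a time."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        options = bigrams.get(tokens[-1])
        if not options:
            break  # no known continuation; a real LLM never runs dry
        tokens.append(rng.choice(options))
    return " ".join(tokens)

print(autocomplete("the", 5))
```

Sampling from the continuation counts is also why “stochastic parrot” fits: the output is a probabilistic remix of sequences seen in the training text.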

LLMs seem to be expanding in a couple of ways, and one of these is that the models can have reasoning abilities; these are called reasoning models. The researcher said it has been shown that LLMs are capable of reasoning: they are not just memorizing facts and regurgitating them in a different form, but demonstrating reasoning abilities, in that reasoning models can make original inferences from a given set of facts. This was new and surprising for me. I recall Steven Pemberton saying that there is no I in AI.

The researchers lamented the overuse of AI tools in the classroom to do work students should be doing themselves. They point out the danger that students do not learn fundamental knowledge and skills, and that these skills, such as basic reading and writing, degrade when people stop using them. For example, people are not reading and writing the way we used to. I agree, and I recall the finding that students’ attention spans have degraded because of the overuse of social media. There is also a parallel with nomophobia, where people fear losing their mobile phones and have difficulty turning away from them; there is a similar risk that people will develop a dependency on AI tools.

The researchers warn about students losing fundamental skills such as critical thinking. This is a point I know about: because of AI, we should focus on teaching students critical thinking, and critical thinking is becoming more important as people compete against AI in the workplace. We also need to treasure and encourage the things that make us more human: conversations and interpersonal relationships with other humans, human-to-human interaction, social skills, speaking skills, and oracy.

AI tools can be put to good uses, for example as tools to assist in teaching. Prof. Lu Wang said that one application they are building is a version of the model customized for each course. She also pointed to the potential of AI tools to serve as a personal tutor for every student. This is a familiar idea: AI and personalization, and using AI to build Intelligent Tutoring Systems (ITS).

Many of the use cases of AI that the researchers talk about point to the idea of intelligence augmentation (IA).

References

The Conversation. (2025, April 18). How to use AI safely – and what to watch out for [Video]. YouTube. https://www.youtube.com/watch?v=G3pMJSfvOkY

