AI to be used in classrooms

I came across this article online that says Google is trying to get AI into schools, and I believe that is a bad idea. It will cut down on critical thinking skills, an effect we can already see in the paper linked below, which ties AI use in education to laziness and a loss of decision-making ability. This goes doubly so because the article in question doesn’t really state how the AI will be used, and the current Google LLM, Gemini, doesn’t really encourage critical thinking, mostly giving surface-level answers to questions instead.

Article: https://www.reuters.com/business/retail-consumer/pearson-google-team-up-bring-ai-learning-tools-classrooms-2025-06-26/
Nature paper: Impact of artificial intelligence on human loss in decision making, laziness and safety in education | Humanities and Social Sciences Communications

I just read the Reuters article now, and I get why you think it is a bad idea. But I also think you’ve missed a very important point in it. It says:

The tools will adapt to each student’s pace and needs, while also helping teachers track performance and tailor lessons… AI could help reshape school education by replacing uniform teaching methods with personalized learning paths tailored to individual students.

I don’t think this decreases brain activity in any way, as these tools would help both students and teachers. It is common knowledge that students learn differently and teachers teach differently. If everyone gets their own learning and teaching methods through AI tools, I don’t think that’s a bad idea.

Plus, there are also articles online showing that AI can help build critical thinking skills, like this one here.

So I think it’s not about bringing AI into classrooms but about how AI will be used in classrooms.


You have a point there, but the truth is, as fast and easy as using AI seems, you still need to construct the prompt properly to get a good result, and that process of creating a good prompt will itself exercise your critical thinking.
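To make that concrete, here is a minimal sketch of the difference, using Google’s google-generativeai Python SDK; the model name, API key, and both prompts are just illustrative placeholders, not anything from the article:

```python
# Minimal sketch: prompt construction as a thinking exercise.
# Assumes Google's google-generativeai SDK; model name, key, and
# prompts are illustrative placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# Lazy prompt: no framing, so it invites a surface-level answer.
lazy = "Explain the French Revolution."

# Constructed prompt: you must first decide scope, audience, and what
# counts as a good answer -- that decomposition is the critical thinking.
constructed = (
    "I'm a high-school student arguing that economic crisis, not "
    "Enlightenment ideas, was the main cause of the French Revolution. "
    "List the three strongest counterarguments to my thesis, and for "
    "each one, name the kind of primary source that would support it."
)

for prompt in (lazy, constructed):
    response = model.generate_content(prompt)
    print(response.text[:300], "\n---")
```

Writing the second prompt forces you to frame the problem yourself before the model ever sees it, which is exactly the kind of thinking the lazy prompt skips.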

I can see your point there, as the findings and views of the EFL students show that they can use AI as a tool. But it is also clear that they already have a solid grasp of critical thinking skills, which is what lets them use AI as the tool it is. However, this study also shows limitations in AI, such as:

P4: “While AI can perform complex calculations quickly, it can’t provide the ‘why’ behind the data, which is crucial for critical evaluation. […] relying too much on AI for statistical analyses can lead to a form of cognitive offloading, where I might neglect the critical thinking process that should accompany data interpretation.”

P6: “AI literary analysis tools may highlight certain themes or motifs but might miss out on the human elements like irony, humor, or emotional nuance. […] such tools can sometimes lead me to form interpretations that are technically accurate but lack depth or human understanding”

It seems clear to me that if these tools are used incorrectly, or if generative AI LLMs are forced in over actual educational planning, they will reduce critical thinking, as students will just accept the surface-level understanding without question. So, regarding what you said:

It is clear that there needs to be an actual, focused limit on bringing in AI. And considering how often AI is used as a catch-all, and how often LLMs like Gemini are pushed into unnecessary roles, I don’t expect Google to act ethically with this, and thus it will cause the loss of critical thinking skills, as that is not part of the curriculum. Especially as the Reuters quote does seem more like marketing than an action plan.

The tools will adapt to each student’s pace and needs, while also helping teachers track performance and tailor lessons, the companies said in a statement.

If there is a control mechanism to make sure Google keeps to its old motto:

Don’t be evil

Then it could be a great way of helping students learn; however, I don’t think you can blindly trust Google to do the right thing.


I’ll put up a different take on this. What’s more important: thought development or practical skills? I would argue that the application of AI is a real skill that people will need to have. Especially as more roles open up in the field, such as prompt engineers and the like, we will need to have a higher proficiency with AI tools in order to get the most out of them. That should be taught, just like how some argue financial literacy should be taught (or other practical life skills).

I think the critical thinking of learners will only be affected if students rely on the AI to provide surface-level answers to their academic prompts and assignments. However, proper use of AI, for instance during inquiry, can enhance their critical thinking.

So then the question is, how is that even remotely enforced? As things stand right now, there is no reliable way to monitor how much a student relies on AI to write their report (and even detection tools are dicey at best). Do you think there will be a good way to moderate how much a student is allowed to use tools like LLMs in their academics?

the current Google LLM, Gemini, doesn’t really encourage critical thinking

Has it occurred to anyone that not only does AI seduce us into less critical thought, but AI in schools might lead future generations to put blind faith in AI?

I am impressed by LLMs and want to use them for more, but someone who has used AI since childhood will be less impressed and more expectant. I don’t know much about how child development shapes our relationship to authority figures, but perhaps that is the path we are traveling.

I think it will go beyond just using LLM tools in classrooms, whether we agree or disagree. Since the introduction of ChatGPT and others, humanity has passed a point of no return. LLMs will forever be part of our lives from now on, no matter what. I also believe many jobs will disappear, and unfortunately teaching might be one of them. However, a small group of skilled teachers will still be needed to help build and shape LLM teachers that can teach our kids critical thinking as a skill.

I do have to agree with this view: companies have a massive interest in expanding LLMs into all fields, and education is included. Teachers will be required to adapt to this new status quo.

However, this is a stretch, even assuming a full digital Aristotle for every student. Schools have other roles, such as childcare during parents’ working hours and socialisation. I don’t see a world without any form of teacher, even if they don’t “teach” anymore, as childcare and school socialisation are still needed.

Maybe it’s a stretch, maybe I’m reading too much into the future. I also hope that schools never disappear; they are more than just places to learn, they’ve become more like social clubs. However, when it comes to caregivers in any sector, it’s not even new that robotics and technology have been slowly crawling forward to replace the current workforce. We laughed at laundry-folding robots and called Boston Dynamics robots brainless. But I can’t dismiss the fact that the missing piece in advanced robotics has been LLMs and agentic AI. Robots lacked communication, reasoning, and planning, and now they’re getting all of that with LLMs. I believe there will soon be a revolution in the humanoid industry that will change the face of the caregiving industry forever. Just my two cents and how I’m interpreting the trend.

That seems to be the case: LLMs and AI are great at pattern recognition, which makes them suitable for robotics control if properly trained. It’s gotten to the point where robotics lags behind the current growth of AI and now has to catch up. That’s why I would like to specialise in robotics when possible.