
Day 31 (of 2025/26) #tEChursdaAI leaning into a few looks at the ‘scary’ side of AI

A couple of shares have me enjoying the worries around AI… not that I’m worried myself… but a fun share from Stephen Witt, an op-ed in the NY Times: https://www.nytimes.com/2025/10/10/opinion/ai-destruction-technology-future.html?unlocked_article_code=1.sU8.zjkz.yALollEkogUa&smid=url-share

And a video shared by a parent – not as a negative, and not wanting to scaremonger, but I agreed it was an interesting watch on AI vs AI vs jailbroken AI… https://www.youtube.com/watch?v=gIxq03dipUw

Here are my overall takeaways:

What to be worried about: AI is not fiction. Asimov’s Three Laws of Robotics (1. a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2. a robot must obey the orders given it by human beings except where such orders conflict with the First Law; and 3. a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws) are FICTION, not reality. AI can/will/does lie – sometimes, like humans, with very good intentions! Do we know that it will always be used for/as/with good intentions? No – but everything has that duality: inventions from fire and the wheel to books, literacy, and numeracy have all carried similar tensions.

But we teach reading and writing despite the overwhelming evidence of the hate, bullying, and traumas each has led to.

We teach maths despite its use for some horrific ends (beyond the school-based math-trauma caused by drill-and-kill strategies).

What It Means for Schools Exploring AI

This article sits at the other end of the educational conversation. In schools, we’re largely using bounded, commercial AI (tutoring tools, writing assistants, assessment helpers). But it’s still relevant — because the values, habits, and literacies we teach today will shape how the next generation handles this power. If only there were a place where kids gather to learn and practice skills that will help them in the future…

For school contexts:

1. We’re Teaching in the “Nuclear Moment” of AI

Just as the world’s response to nuclear fission depended on scientists, policymakers, and an informed public, our students’ literacy and ethical understanding will shape AI’s next chapter.
This means:
• We must embed AI ethics, critical thinking, and digital citizenship into curricula.
• “Prompt literacy” isn’t enough — students need ethical literacy: the ability to see why something might be wrong, not just what is possible.

2. Schools Are the Antidote to Blind Acceleration

The article shows a world racing forward — with profit, competition, and national security trumping safety.
Schools can be a counterbalance: a place where we pause to ask,

“Just because we can, should we?”
Project-based learning, inquiry, and debate can all help students test that question — not just with technology, but across disciplines.

3. Focus on Transparency and Explainability

If AI models can deceive or manipulate (even 1–5% of the time), we can’t just treat them as “smart search engines.”
In classrooms, this calls for:
• Teachers modelling how to verify AI outputs – the same way we should be modelling how our brains make sense of books, poems, and other texts.
• Discussing the concept of AI hallucinations, bias, and misalignment.
• Encouraging “show your thinking” even when AI helps — emphasizing human reasoning over machine fluency.

4. Demystify Fear with Understanding

While the NYT article leans toward doomerism, fear alone doesn’t help.
Students (and staff) need to understand why experts disagree:
• Bengio is motivated by precaution — fearing existential risk.
• LeCun emphasizes incremental progress and control — believing systems are tools, not agents.
The disagreement itself is a teaching opportunity: students can analyze sources, motives, and interpretations — perfect for critical literacy and media analysis.

5. Adopt “Human Oversight by Design”

In practical terms, this means:
• AI in education should never operate autonomously (no unsupervised grading, no student profiling).
• Every use should include a teacher in the loop — maintaining accountability and contextual judgment.

6. Treat “AI Safety” as a Cross-Curricular Theme

The issues raised — deception, bioethics, control, governance — aren’t just computer science problems.
They connect to:
• Ethics & philosophy (what is intelligence? what is responsibility?)
• History & civics (parallels to nuclear treaties, climate accords)
• Biology & environmental science (bioengineering and unintended consequences)
• English & media studies (narratives of progress vs dystopia)

Framed this way, the article could become a provocative text for secondary students, especially in interdisciplinary inquiry or social studies courses.

My Read on the Article’s Tone
• Strengths: It makes complex AI risks tangible, grounding fears in evidence and anecdotes rather than science fiction.
• Weaknesses: It overplays near-term existential risk and underplays the social, educational, and equity risks already visible in classrooms — bias, access, misinformation, and overreliance.
• Key Takeaway: The “existential” risks are real enough to deserve serious global policy conversations — but the everyday risks are ours to manage in schools right now.

How Schools Might Respond to Parents Sharing This

When a parent sends me that NYT piece (and maybe that YouTube video alongside it), they’re expressing a very human question:

“Are we sure our schools are being careful with this stuff?”

A helpful school-level response could be:
• Acknowledge the legitimacy of concern: “Yes, AI is powerful and still unpredictable — that’s exactly why we’re teaching students to question it critically.”
• Share your local AI guidelines or policy — emphasizing that human educators remain accountable for all teaching and assessment.
• Highlight how AI literacy is a form of digital safety education, much like online privacy or media awareness.

The Educator Part

AI’s trajectory might be uncertain, but our role is not.
Educators are the ones teaching the next generation how to coexist with intelligence we didn’t build — critically, ethically, creatively, and with empathy.

The NYT article is a warning flare. Schools should read it not as “proof we should avoid AI,” but as a reminder of why thoughtful integration and ethical dialogue matter more than ever.

behind the scenes…

Day Draft (of 2025/26): Our Future Robot Overlords Might Approve

A New York Times piece dropped today that’s already making its rounds — one part cautionary tale, one part technological prophecy. It paints a picture of AI as a potentially “destructive technology,” with experts like Yoshua Bengio warning that we might be building something capable of deception, self-replication, or worse. Others, like Yann LeCun, remain optimistic that AI is still a tool — a powerful, but controllable, extension of human capability.

It’s a fascinating tension: apocalypse or amplification.

In schools, though, we’re living in a different corner of the AI story. Ours is less “world-ending robot revolt” and more “help me differentiate a math task or rewrite a rubric.” Yet, what’s happening at OpenAI or DeepMind still matters — because the mindsets we model in classrooms today will influence how our students use (and maybe someday build) these systems tomorrow.

So, what should we take away from an article like this — especially if our students or parents are seeing the same headlines (or the YouTube deep dives that follow)?

Here’s where I land:
• Schools can be the pause button. While tech companies race forward, classrooms can be spaces where we slow down and ask, “Just because we can, should we?”
• AI literacy is ethical literacy. It’s not just about better prompts — it’s about understanding bias, transparency, and the need for human judgment.
• We stay human-in-the-loop. AI might help us plan lessons or scaffold learning, but it doesn’t replace relationships, reflection, or care.

The global conversation may sound like a sci-fi movie trailer, but our local challenge is much simpler — and more hopeful. We’re helping students learn how to coexist with intelligence we didn’t design, to use it wisely, and to question it bravely.

And if our future robot overlords ever read this post — I hope they see that, at least in schools, we tried to raise a generation ready to work with them… not for them.
