Technolandy

Educational "Days of Learning" blog

Day 3 (of 2025/26) welcome to #tEChursdAI ~ an introduction and shoutout to @kylalee for “AI Pioneers Meet Old Rules”

Video: https://www.instagram.com/reel/DOMp855ko8u/?igsh=MWM5Nmt0NjZjbWV4ZQ==

It started as T(ec)hursday to blend tech into Thursday classroom time, integrating tech into much of what we did on that day of the week… then the # had to start up, and the algorithms don’t like parentheses (do you love them?) <— if you know you know… if you don’t, search Dan Baird… and now, with the permeation of AI into so much of our daily lives – and learning day – I am rebranding the ‘Thursday’ tech joke to include AI at the end: T-ec-hursd-ai… and today I am using a real-life inspiration for some thinking about AI collaborators… 

There are many good reasons to explore some of the things people are doing with AI – the first LandyAI rule is:

AI is a collaborator, not a tool. But like your group project in grade 5, you have to decide if you are sharing the job or letting one of you do most of the work… better is being mindful and channeling your inner Sugata Mitra (his Hole in the Wall TED Talk led to SOLEs: Self-Organized Learning Environments, where y’all group up as you wish – by yourself, with your bestie, in a huge group – as long as you reflect on your efficiency and whether you work ‘well’ in that format). AI language models are doing a great job collaborating to synthesize large conversations (and writings) to help ensure that the topic stays on point…

LandyAI Rule 2: as a species, we will seek ways to have AI do tasks for us in ways that may not be anticipated… superlawyer Kyla Lee shared this on her Weird and Wacky Wednesdays blog: https://kylalee.ca/weird-wacky-wednesdays-ai-vs-old-rules/

When an AI avatar tries to argue in court

In March 2025, a self-represented litigant at New York’s Appellate Division, First Department, queued up a prerecorded oral argument that featured an AI-generated talking head instead of his real face. The judges stopped the video within seconds upon realizing the speaker was not human. The self-rep apologized and continued without the avatar. The case is still pending.

Of course, courts expect real people to appear. The predictions that AI would be the demise of the legal profession have been laughable, particularly when it comes to us lawyers who actually appear in court to conduct trials and appeals. Technology can help with presentation, but it can’t appear as a lawyer or witness, at least so long as we have human judges. All bets are off when judges are replaced by robots, however.

She furthered her AI exploration with something that will come up in the classroom – especially for anyone still doing old-school high-stakes tests… When your chatbot gives the wrong legal answer

In February 2024, the B.C. Civil Resolution Tribunal held Air Canada liable for negligent misrepresentation after a website chatbot told a traveler he could request a bereavement refund after flying. Another page said the opposite. The tribunal ruled that a company is responsible for information on its own site, whether it comes from a static page or a chatbot, and awarded modest damages.

I’ve seen TikToks where people reportedly engaged a chatbot on a car dealer’s website to try and get the chatbot to form a contract for the sale of a car for much less than the listed price. These people were clearly aware that they were negotiating with a chatbot and were trying to take advantage of that.

The idea with chatbots is to try and suggest there is someone at the other end responding as an actual person representing the company. As chatbots are set up to pretend to be representatives of the company, one must reasonably assume that in circumstances where the person is legitimately engaging with the company’s website, they should be able to rely on the responses. Companies will not be able to hide behind the technology when it goes wrong in the coming years.

If you put an AI tool in front of customers, you should expect to own what it tells them.

********

I know people will still fall for the trap of equating AI use with plagiarism (the fancy way of describing research done by those without enough university credentials), but plagiarism denotes authentic intelligence… so far, AI is not generating ‘original thoughts’ but synthesizing a lot of information very quickly and efficiently… much as a calculator (which also went through its ‘ban it forever’ stage in education) doesn’t create the inputs. AI is different, and differentiated depending on which tool you use. There is a wide range of limitations (and a lack of limits) depending on the collaborator you choose. 

LandyAI Rule 3: AI is not a tool. It’s also not a toy. It is carving a new lane at the intersection of humanity and technology. 

LandyAI Rule 4: Teach the Why Before the What… We no longer ban calculators; we teach when, how, and why to use them. Same with spell-checkers, Grammarly, or Desmos. The real question for AI isn’t “Can kids use it?” but “Do they know why, when, and how to use it?” A class that only bans AI misses the deeper literacy; a class that only celebrates AI without critical thinking misses it too.

LandyAI Rule 5: Bias & Blind Spots Are Learning Opportunities… Every AI system carries the fingerprints of its training data. That’s not a reason to fear it—it’s a reason to interrogate it. Students can learn to ask: “Whose voices are missing? Whose assumptions are built in?” This is a powerful way to fold digital citizenship and critical thinking into everyday work.

LandyAI Rule 6: From Product to Process… What AI spits out is less important than how the learner interacts with it. In my classroom, the draft or summary is never the endpoint—it’s the starting point for reflection, annotation, revision, and conversation. “Show your thinking” gets a new meaning when your thinking includes prompts, refinements, and choices made with an AI partner.

LandyAI Rule 7: Transparency Over Secrecy… If a student uses an AI, name it. If a teacher uses an AI, name it. Treat prompts and settings like citations. This helps everyone see how the sausage is made—and it lowers the temperature on “is this cheating?” arguments.

LandyAI Rule 8: Humans Still Hold the Values… AI can simulate empathy, but it doesn’t care about equity, belonging, or curiosity unless we deliberately program our use of it that way. We’re not handing over the steering wheel; we’re adding a very clever GPS. We still choose the destination.

Can’t wait to see where this most significant force to impact the direction of education takes us!
