In the podcast, Zahra Suarez discusses the implications of integrating generative AI tools like ChatGPT and Claude into high school classrooms. She highlights concerns about reshaping how students engage with knowledge, ethical implications, equity issues, and the potential impact on teacher-student relationships. Zahra advocates for a cautious approach, emphasizing the importance of human-centered education, ethical frameworks, and pedagogical supports before fully integrating AI tools in education. She suggests investing in writing centers and peer review programs and fostering authentic dialogue to promote genuine learning experiences. Ultimately, she encourages educators to ask difficult questions and prioritize students' critical thinking and growth in a human-centered educational environment.
Welcome, everyone. My name is Zahra Suarez, and in today's episode, hopefully not the only episode, we're going to be talking about generative AI tools, such as ChatGPT, Claude, and Gemini, and whether they belong in high school classrooms. Now, I know it's a hot topic. Some educators are excited, some are skeptical, and many are somewhere in between. I want to make a case for caution. Not fear, not rejection, just a thoughtful pause, because when we introduce gen-AI into high school learning environments, we're not just adding a tool. We're reshaping how students engage with knowledge, how they write, how they even think, and that deserves some serious scrutiny.
Let's start with what's at stake here. High school isn't just about content mastery. It's about forming your identity as a person, developing ethically, and learning how to think critically in a world full of noise. When we hand students a tool that generates essays, solves math problems, or summarizes text in seconds, we risk short-circuiting the very processes we're trying to teach. Writing, for example, isn't just about producing a polished paragraph. It's about wrestling with ideas, making choices, revising, and discovering your own voice.
Gen-AI can mimic that voice. It can even sound convincing, but it doesn't understand. It doesn't wrestle. It predicts. And that's a fundamentally different kind of process than the one we want students to develop. Now let's talk about equity. Yeah, I know, big leap. But this is where things get even more complicated. Not every student has access to gen-AI at home. Not every family has the digital literacy to guide their children through ethical use of these tools.
And not every school has the infrastructure to support responsible integration. If we normalize gen-AI in classrooms without addressing these disparities, we risk widening them. Students with more resources will learn how to use these tools strategically. Students without them may fall further behind, or worse, be penalized for using them incorrectly. And for "incorrectly," I'm using air quotes. Let's be honest: the line between support and cheating is blurry. If a student uses AI to generate an outline, is that okay? What about a full draft? What if they tweak just a few sentences, or a couple of paragraphs? These are not just technical questions.
They're ethical ones. And we haven't built the scaffolding to answer them just yet. High school students are still developing cognitively, emotionally, and ethically, along with their self-esteem. They're still learning how to form arguments, how to cite sources, how to take intellectual risks, even during their senior year of high school. Introducing gen-AI at this stage can be destabilizing. It can lead to over-reliance, loss of confidence, and confusion about what counts as their own work.
I've seen students who feel ashamed because their writing doesn't sound as polished as that of classmates who use AI. I've seen others submit AI-generated work without understanding what's even on the paper. And I've seen teachers struggle to assess learning when they're not sure who, or what, produced the assignment. This isn't just about plagiarism. It's about pedagogy. Hopefully I'm saying that right. If we can't trace the thinking behind students' work, how do we track their growth? How do we support it? Here's something I don't think we talk about enough.
The role of struggle in learning. Struggle isn't a flaw. It's a feature. When students wrestle with a concept, when they revise a messy draft, when they ask for feedback and try again, that's where the magic happens. That's where learning sticks. Gen-AI can bypass that struggle. It makes the easy road look finished. It can offer clean answers, elegant phrasing, and instant solutions. But that's not the same as learning. It's performance without process.
And if we start designing classrooms around efficiency rather than depth, we risk turning education into a transaction, not a transformation. Let's shift gears for a moment and talk about teachers, because this isn't just a student issue. It's also a professional one. Teachers are already navigating an overloaded curriculum, standardized testing, and shifting expectations, especially in high school. Adding generative AI into the mix without clear guidelines, training, or support can feel overwhelming. Some teachers are even being asked to use AI to generate lesson plans, to grade papers, or to write feedback on a project or a paper.
And while that might sound helpful, it raises serious questions about professional autonomy, labor, and even trust between the teacher and the student. Are we valuing teachers' expertise, or are we asking them to outsource their judgment, to replace it with algorithms? There's also a bit of a slippery slope here, and I think it needs to be addressed. Once gen-AI becomes normalized in classrooms all over America, especially in high school classrooms, it's going to be hard to draw boundaries.
If it's okay to use AI for brainstorming, why not for drafting? If it's okay for teachers to use it for feedback, why not for grading? And once we start relying on AI for core pedagogical tasks, we risk losing something even more essential: the human relationship at the heart of learning. Students don't just learn from content. They learn from connection, from the way a teacher responds to their ideas, challenges their assumptions, and celebrates their growth.
Gen-AI can't replicate that, and we shouldn't pretend it can. So what am I advocating for? Not a ban, per se, not a rejection, but a pause. A pause to ask the difficult questions. What kind of learning do we want to cultivate for our future generation? What kind of thinkers do we want to raise, and what kind of relationships do we want to protect? The relationship between a computer and a student, or the relationship between a teacher and a student? Gen-AI might have a role in education someday, but right now, in high school classrooms, I believe we need to tread very carefully.
We need to build ethical frameworks, pedagogical supports, and equity safeguards before we integrate these tools, because once they're in, there's no going back, or at least it's very hard to go back. And if you're wondering, okay, but what's the alternative? Here's what I'd offer, and I'm not claiming to be a professional in any way, shape, or form. Invest in writing centers and peer review programs. Teach strategies that help students reflect on their own thinking, so they understand what they're writing and how to express it well.
Use low-tech scaffolds like sentence starters, graphic organizers, and revision checklists. Create space for authentic dialogue, inquiry, and creative risk-taking, because that dialogue between a reader and a writer is so important. Now that I've been in college for a semester, I realize that more than I ever did as a senior in high school. These approaches aren't flashy, they're not algorithmic, but they're human, and they actually work. Okay, so that's my case. It's not against technology whatsoever. I love technology, but use it thoughtfully and ethically.
Human-centered education should be what we're aiming for, not computer-centered, not AI-driven thinking. If you're a teacher listening to this, I hope you feel affirmed, or at least believe that your work truly matters. Your judgment matters, and your voice in this conversation is essential. Let's keep asking the hard questions. I know for me, asking the hard questions is really difficult. I'm such an introvert, but sometimes I just gotta push, get out of my shell, and ask them.
Let's keep protecting this space for students to think, struggle, and grow. And let's remember, education isn't just about what teachers teach. It's about who we become together. Alright, that was my closing. I hope whoever listened to this podcast thinks it's decent, at least. But that's it. Signing off, see you later!