Policy First, Panic Never: Guiding AI the Right Way

What We Have Learned About Responsible AI in K–12 Education

By Dr. Adam Phyall

We’ve reached a tipping point in K–12 education. AI isn’t some futuristic trend on the horizon; it’s here. It’s in our phones, our classrooms, our district offices, and yes, even grading our essays.

So the question is no longer “Should we use AI?” It’s “Are we using it responsibly, and who’s holding the reins?”

I had the honor of moderating a high-energy, thought-provoking panel at this year’s GAMEIS Conference, titled “AI by Design: Implementing Responsible AI in K–12 Education.” Thanks to the sponsorship of ProLogic ITS, we were able to bring together an incredible lineup of voices from both the front lines of education and the cutting edge of technology. With a packed room of edtech leaders, district teams, and policy thinkers, we rolled up our sleeves and got down to business about what it takes to lead with AI in an ethical, equitable, and effective manner.

This session wasn’t about shiny tools or sci-fi hype. It was about people, policy, and purpose.

Meet the Panel

Our expert panelists, including Laura Schroeder, Daniel Rivera, and Dr. Gardner, came from both the tech industry and the education trenches.

Their perspectives were diverse, their examples powerful, and their insights as real as it gets.

1. Policy Before Platform: Building AI Governance from the Ground Up

We kicked things off with a message districts don’t hear enough: policy must come before product.

Too often, AI adoption starts with someone demoing a cool tool, and suddenly the tech tail is wagging the policy dog. But if you’re not leading with governance, ethics, and equity, you’re playing catch-up from day one.

That’s why I shared the AI by Design Guide, a free resource from Future Ready Schools. This guide helps districts:

  • Form cross-functional AI governance teams
  • Craft policies aligned to district goals and community values
  • Build a shared vision that doesn’t just react to AI, but directs it

We emphasized the importance of involving everyone, including tech leaders, instructional coaches, administrators, parents, and, yes, even students. Because if we’re doing school to kids instead of with them, we’ve already lost the plot.

2. From Vape Sensors to Virtual Fences: AI for School Safety

Next, we explored one of the most high-stakes applications of AI: physical safety and daily operations.

Laura Schroeder shared real-world use cases from schools utilizing AI-powered sensors to detect vaping, trigger safety alerts, and even monitor perimeter breaches. One story stuck with me: a student wandered beyond a digital fence and was found near a pond—thanks to AI-triggered alerts, staff reached the child in time. That’s not theoretical. That’s life-saving.

But we didn’t sugarcoat it. This type of monitoring raises significant privacy concerns, particularly in schools. Are we normalizing surveillance? Where do we draw the line?

“Safety often trumps privacy in education,” one panelist said, “but that’s a choice—not a given.”

It’s time for districts to make intentional decisions about how, where, and why AI is being deployed. Transparency matters. So does trust.

3. Ethics Aren’t Extra—They’re Everything

We then zoomed in on the ethical landscape of AI.

Daniel Rivera and Dr. Gardner reminded us that AI is only as good as the data it’s trained on, and when that data is biased, the outcomes can be too. Predictive analytics are powerful, but without checks and balances, we risk turning into the educational version of “Minority Report.”

We discussed:

  • Data bias and explainability
  • The risk of AI automating disciplinary actions without human context
  • The importance of clear definitions (Are we talking about AI broadly—or generative AI specifically?)

One of my favorite moments? When Dr. Gardner brought up Asimov’s Three Laws of Robotics. That sparked a lively debate about how today’s large language models can be “jailbroken” by clever users, who bypass their safety protocols through social engineering. Students are already doing this. It’s not hypothetical.

“Our students can spot AI use in a heartbeat,” I reminded the room. “They know when their teachers used ChatGPT to write a prompt, so we’d better start owning that and teaching them how to use it responsibly.”

4. The Human Element Must Stay Center Stage

Throughout the session, a consistent theme emerged: AI is a tool, not a replacement for people.

Sure, AI can streamline operations. It can analyze data in seconds. But it can’t build relationships, inspire curiosity, or understand a student’s rough morning before they even walk in the door.

Dr. Gardner said it best: “AI won’t replace security personnel. But those who refuse to use AI might be replaced by others who do.”

This resonated when we discussed the now-common classroom "AI loop":

  • A student uses AI to answer a prompt…
  • That prompt was created by AI…
  • And the teacher uses AI to grade it.

It’s a loop with no humans in it—and our students notice.

5. Students Are Watching. Let’s Model Integrity.

One of the most powerful takeaways came from a Future Ready student panel I referenced. A student said:

“Our teacher says we can’t use AI. But we can tell that our teacher used AI to write the questions. Our teacher doesn’t talk like that.”

If we expect students to use AI responsibly, we have to walk the talk ourselves. That means modeling curiosity, transparency, and discernment. It means acknowledging double standards. And it definitely means bringing students and families into the conversation.

6. Want to Lead with AI, Not Chase It? Join the Bootcamp.

To close out the session, I shared an opportunity that’s changing the game for district leaders:

The Future Ready AI Leadership Bootcamp

This six-week, fully virtual experience helps district teams:

  • Create ethical, equitable AI implementation plans
  • Align AI efforts with district goals
  • Engage stakeholders and avoid top-down missteps
  • Build capacity to lead—not just follow—the AI conversation

And thanks to philanthropic support, scholarships are available for small, rural, and under-resourced districts.

So yes, there’s a path forward. And no, you don’t have to figure it out alone.

Final Word: Let’s Design the Revolution

AI is already reshaping how we learn, teach, and lead. We can either let it happen to us… or we can shape it, guide it, and humanize it.

Let’s ensure we’re not just discussing the future of education.
Let’s be the ones writing it.


Want to learn more?

AI by Design Guide
Apply for the AI Bootcamp
Apply for a Scholarship
Connect with me: @AskAdam3

Meet The Author


Adam A. Phyall III, Ed.D.
Director of Professional Learning and Leadership

Meet Adam