Literacy Over Limits: Ravenscroft's Strategic Framework for Teaching and Learning with AI
Stacy Calfo
November 1, 2025
In this article:
- Under the leadership of Associate Head of School for Academic Affairs Elizabeth Helfant and Chief Information and Innovation Officer Louis Tullo, Ravenscroft is focused on how AI can be used as a tool to benefit teaching and learning.
- This fall, the school partnered with expert Eric Hudson to offer professional development for all faculty and staff as well as a conversation for parents.
- This ongoing, long-term effort is focused on four priorities: literacy over policy, augmentation over automation, design over technology, and vision over decisions.
Artificial intelligence is no longer on the horizon. It is already part of daily life. From homework helpers to programs that create art and music, AI is woven into how people work, learn, and create.
Associate Head of School for Academic Affairs Elizabeth Helfant and Chief Information and Innovation Officer Louis Tullo are partnering to lead Ravenscroft’s efforts to use technology wisely and safely while keeping curiosity, ethics, and creativity at the center—for students and faculty members.
That commitment came into focus this fall when Eric Hudson visited campus to work with faculty, staff, and parents. Hudson is a veteran educator who has spent more than 20 years helping schools navigate innovation. He previously served as chief program officer at Global Online Academy and now works as an independent consultant, guiding schools as they respond to changing educational landscapes.

Technology and education expert Eric Hudson led a professional development day for Ravenscroft faculty and families at the beginning of the school year.
During his visit, Hudson challenged the Ravenscroft community to focus on four priorities that will shape how the school approaches artificial intelligence: literacy over policy, augmentation over automation, design over technology, and vision over decisions. These priorities now guide the way teachers, students, and families are learning to use AI as a thoughtful, human-centered tool.
Literacy Over Policy
At Ravenscroft, building knowledge comes first. Rather than setting limits right away, the school is equipping students, teachers, and families with a clear understanding of AI and how to use it well.
“While our early efforts focused on providing guardrails, the call this year was to start a deeper conversation about developing AI literacy,” Tullo said. “That shift is helping teachers move beyond simple dos and don’ts and toward lessons that build curiosity and critical thinking.”
Helfant is focused on guiding and supporting faculty as they explore how AI fits meaningfully into teaching and learning. She notes that AI is already embedded into many of the tools faculty use, often in ways that are invisible.
“Much like the external world, AI is already part of the culture,” she explained. “It is being integrated into tools that support teaching and learning so much that it is possible not to even know you’ve stepped into an AI-assisted place.”
Faculty are using tools like Gemini to help design lessons and assessments. “They remain the primary author, or the human in the loop,” Helfant said. “But AI can help them be more efficient and can be a thought partner, making suggestions on how to make something more interesting or more relevant.”
In tandem with this work, Helfant is helping faculty rethink assessment design, shifting the focus from final products to the learning process itself. “These are things educators have been working on for years. AI has simply made the work more urgent,” she said.
This emphasis on thoughtful practice connects directly to Ravenscroft’s Lead From Here framework, which provides a shared language for reflection and self-awareness. “We are thinking carefully about how we talk about AI with students so they understand the cost to their learning if they rely on it too heavily,” Helfant said. “This means helping them develop stronger metacognitive and reflective skills.”
A key part of building AI literacy is modeling use directly with students. “By not being afraid to use it with them and by showing how to question outputs, we help students learn to do the same,” Helfant explained. “The day with Eric Hudson was invaluable in getting faculty to a place of comfort with AI. We have moved from trying to guard against it to being intentional and explicit about use.”
To sustain this work, faculty need both time and resources. “They need exposure to examples of good use and some guidance in understanding how to use the tools,” she said. “Some of this comes from experimenting for their own use, and some requires more explicit training. Fortunately, Ravenscroft has a number of people who can support faculty in this work, along with online courses, webinars, and readings.”

Parents and faculty members participate in a conversation about helping students navigate AI tools.
Augmentation Over Automation
At Ravenscroft, AI is used to enrich learning rather than replace it. Teachers are experimenting with tools like Google Gemini and Flint to give quick feedback, spark ideas, or help students practice skills.
“AI enables us to accomplish things that were previously inconceivable and perhaps even impossible,” said Tullo. “There are things that are distinctly human that educators should continue to do and insist our students do as well.”
Teachers are finding that when AI is positioned as a partner, students engage more deeply. Middle School Social Studies Teacher Meredith Stewart has found powerful ways to integrate AI into her speech and debate class. For their first introductory speech, she wanted students to focus on delivery skills such as eye contact, voice, and confidence rather than the stress of writing a polished text from scratch. Together, the class developed a prompt and had AI generate their speeches.
“It was fascinating,” Stewart explained. “Having AI write that first speech lowered their stress levels, which let us focus on performance. But when it came time for their second speech, students overwhelmingly chose to write their own because they realized after hearing the AI-generated intros how similar they all sounded. That experience showed them AI’s limitations in capturing their own voices.”
She also uses AI to provide feedback on student debates. By uploading audio recordings and asking the system for feedback, she offers students critiques that feel both clear and emotionally neutral. “They appreciated that the AI feedback was kind but more explicit than what peers might give,” she said. “It allowed them to focus on the actual suggestions for improvement without the emotional weight that sometimes comes with teacher or peer critique.”

Middle School Social Studies Teacher Meredith Stewart works with students on an AI-assisted project.
Director of Global Education and Social Studies Teacher Melanie Spransy is taking a similar approach in her 9th-grade world history class. She and her students are experimenting with AI not to boost productivity but to support deeper thinking and problem-solving. For an upcoming assignment, students will write thesis statements based on a shared set of documents, input their theses into Gemini to generate a five-paragraph essay, then deconstruct that essay into an outline and evaluate the strength of its argument.
“This assignment calls students to think critically about AI-generated writing and discover for themselves the strengths and limitations of Gemini’s ability to construct historical arguments,” Spransy said. “I want to create assignments that require students to think for themselves, even if they are using AI.”
Tullo notes that tools like Flint also help teachers tailor their instruction. By surfacing where students are struggling, Flint allows teachers to adjust their lessons and focus on the human work of coaching, discussion, and feedback. “It’s like giving every student a personal tutor while still letting the teacher focus on the uniquely human parts of teaching,” he said.

Faculty and staff members participated in an all-day AI professional development workshop before the start of the academic year.
Design Over Technology
The spirit of intentionality carries into how Ravenscroft designs its curriculum and assessments. Rather than chasing the latest tools, the focus is on thoughtful planning that ensures any use of AI serves clear learning objectives. The school aims to create experiences that keep students thinking, questioning, and creating, even when AI is part of the mix.
“We set out to reimagine assessments so students could not offload all their thinking to AI,” Tullo explained. “We wanted them to use AI strategically to create opportunities for learning that did not exist before.”
This commitment to intentional design led to the creation of the AI Acceptable Use Scale, a five-level framework ranging from “No AI Use” to “AI as a Co-Creator.” It asks students to document and cite their AI use and, when needed, submit chat logs. By embedding these expectations into assignments, teachers give students a clear structure for using AI as a support for learning rather than a shortcut.
Helfant plays a key role in ensuring consistency across departments. “One of our biggest priorities is to make sure students have a clear, unified understanding of how AI is used in their classes,” she said. “The Acceptable Use Scale gives teachers a shared language and helps align expectations so students aren’t navigating a patchwork of different policies.”
That alignment builds on Ravenscroft’s long-standing leadership in AI education. Years before ChatGPT became a household name, the school introduced a course in artificial intelligence. Today, students can take Seminar in AI Foundations and Prompt Engineering, which blends core principles with hands-on work. “The course gives students the chance to think critically about how these tools function and how they can be used creatively and responsibly,” Helfant said.
Vision Over Decisions
Ravenscroft is looking beyond day-to-day classroom needs to focus on what students will need long after graduation. The school’s work with AI is anchored in a clear purpose: helping young people grow into thoughtful, ethical, and innovative learners who know how to use new tools wisely.
Along with the AI Acceptable Use Scale, the school is weaving this vision into conversations about student well-being through the Brains and Belonging framework. Associate Head of School for Student Affairs Kendra Varnell sees a natural overlap between the two.
“Brains and Belonging focuses on how students think and connect with others,” she explained. “As AI becomes part of their daily lives, we want to help them set healthy boundaries, think critically, and engage with these tools in ways that reflect their values.”
Varnell also notes that this work is evolving as AI evolves. “AI is changing all the time, so our approach has to evolve, too,” she said. “We are always asking where students are encountering it, where we’re seeing it used well or misused in the world, and how we can give them the guardrails and the education they need to use it thoughtfully.”
This philosophy is evident in the school’s digital citizenship curriculum, where students learn about AI through practical, age-appropriate activities. In health class, for example, students might use AI to generate recipes from a set of ingredients, a way to explore its helpfulness as a tool without mistaking it for human expertise. Health instructors also compare AI-generated fitness plans with expert evaluations to show how bias and a lack of personal context can make AI’s advice unreliable, even dangerous. These tangible lessons give students a realistic understanding of both the possibilities and the limits of AI.
Parent and community engagement is a key piece of this vision. This fall, the school hosted several workshops and information sessions to build a partnership between school and home around healthy technology habits.
“When parents understand the language and expectations we use at school, it creates a shared approach,” Varnell said. “It helps students hear consistent messages about thoughtful use of technology.”
Tullo noted the inescapable impact of AI technology. “Our design challenge has been, and will continue to be, integrating the human brain with this new artificial one,” he said.