Four years after his college graduation, Max Rios '21 returned to campus to discuss AI's evolving role in education and pathways to high-tech careers. Photo by Dan Loh.
by MaryAlice Bitts-Jackson
Just four years after graduating from Dickinson, Max Rios '21 is working at one of the world’s top tech organizations. He recently returned to campus to connect with students and share insights on the evolving role of AI in education.
Rios transferred to Dickinson as a rising junior through the college’s community college partnership program. A computer science major, he tutored fellow Dickinson students and volunteered remotely through a Microsoft tutoring program. After graduating, he landed a coveted job at Google, where he’s a software engineer.
Assigned tasks and passion projects keep Rios constantly learning. Highlights include early experimentation with AI, work on agentic chatbots for fellow “Googlers” and the co-creation of a plagiarism-detection tool. Rios’ work as part of the Google Classroom team connects him with teachers and students as well as fellow tech pros—and informs his unique perspective on how technology can be used in education.
During his two-day visit to campus, Rios spoke about his path from Dickinson to Google as he visited with senior-year computer science majors and with members of the Computer Science & Math Club. He also met one-on-one with students interested in tech careers. And, in a Sept. 18 public lecture, “Intersectionality Between AI & Education,” Rios framed AI use in education as a complex issue that demands a collective and informed approach.
“For Gen Alpha, AI is going to be like what smartphones were for Gen Z—a tool that’s always been there, so it’s especially important that young people’s voices are heard.” —Max Rios ’21
Rios shared his enthusiasm for the ways AI can personalize learning, boost efficiency and detect plagiarism and AI fakes, and he listed accessible tools that students and educators can start using today.
But he didn’t shy away from the critical questions: Are we ready for these tools? What are their limitations? How do we ensure they are accessible and equitable? How do we address issues of data privacy, bias, plagiarism and inaccurate AI-produced content?
“It’s important to know that sometimes AI can be wrong, and that sometimes it’s really hard to know when it’s hallucinating,” Rios warned. That’s especially true when users become over-reliant on AI or overestimate how much they know about the subject at hand. He encouraged students not to shortcut the research process—and to always read AI outputs closely and critically.
“If you’ve done the research, you’re better able to recognize problems,” he said. The bonus: Those close reads also keep critical-thinking skills sharp in the AI age—a major societal concern.
“I sometimes think that jobs at gold-standard places like Google are out of reach, and that the people who get jobs there must be robots with ... a superhuman ability. … But learning that Max went to Dickinson, just like me, was really inspiring.” —Mandy Grawl ’28
Rios ended the lecture with a call to action: improve AI literacy and foster open, ongoing campuswide conversations about responsible use. He encouraged students to speak up—in the classroom, with friends, at club events and at hackathons, campus forums and other venues.
“For Gen Alpha, AI is going to be like what smartphones were for Gen Z—a tool that’s always been there,” he said. “So it’s especially important that young people’s voices are heard.”
The message resonated with students like Mandy Grawl ’28, a computer science major who left the event with renewed excitement and confidence.
“I sometimes think that jobs at gold-standard places like Google are out of reach,” she said, “and that the people who get jobs there must be robots with Ivy League connections and a superhuman ability to program. But learning that Max went to Dickinson, just like me, was really inspiring.”
Published September 24, 2025