Alumni in AI: How to Be Human in a Digital World

Julia Feerrar '12

Top tips from a digital-literacy expert

by MaryAlice Bitts-Jackson

Waiting in line at the grocery store at the end of a long day, my faith in humanity was waning fast. Then a colorful hydrangea tree lit up my feed, flooding me with dopamine. I “liked” the post. I didn’t pause to think: Hydrangeas don’t grow on trees. 

How can you stay a step ahead of mis- and disinformation, and what should you do when others misstep? Julia Feerrar ’12 can help.

Feerrar, a former English major, is a librarian, associate professor and digital-literacy director at Virginia Tech. She has offered expert advice for the digital age through several regional and national news outlets, including a live spot on the Weather Channel discussing AI and hurricane misinformation. As she notes, while many of us know the basics of evaluating online content, there's an additional dimension to consider:

How we think about digital information—and how we interact with each other through it—influences our relationships and lives.

Four tips for humane living in a digital world

1. Be curious—and skeptical.

First, the obvious: Think critically about what you see online and evaluate the source or publisher. Look for claims out of context, lack of citations and signs of AI-generated content (weird hands—and hydrangeas on trees—for example). Verify facts through trusted sources (Snopes, PolitiFact, etc.).

“I try to balance curiosity with healthy skepticism in my approach, especially as I talk with students,” Feerrar says. “I ask questions like, Who created this information? Who has a stake in this piece of media? Who will be helped or harmed by this tool?”

2. Consider the emotional context.

“Questionable content is often built to prey directly on our fears, biases and preconceptions,” Feerrar notes. Understanding the emotional contexts of our online interactions can help us recognize our own danger zones—and respond to someone else’s faux pas with empathy (see below).

If you notice your emotions running high, that’s your cue to step back. Worried that Martians will invade Earth? Approach space-travel content with extra caution. Excited to see the Barbie sequel? Fact-check before sharing on-set photos. 

3. Put empathy first.

When friends post something questionable, remind yourself that their posts have emotional contexts too. Then respond accordingly.

“What keeps me grounded is coming back to the human dimensions of digital literacy,” Feerrar says. “We all can be susceptible to AI fakes and other kinds of misinformation.”

4. One size does not fit all.

Tailor your responses to others’ posts according to the potential for harm as well as your relationship to that person. An AI-generated photo of an impossibly lavish hotel is probably less worth calling out than a post with erroneous health information. Whether those posts were shared by a distant acquaintance or a loved one would factor in too.

“Different kinds of misinformation will call for differing responses, but I tend toward reaching out to someone privately first and having a source ready to back up my claim,” Feerrar says.

The big picture

While generative AI and other technologies that spread mis- and disinformation are ever-evolving, the big challenges they bring to the fore—propaganda, misinformation, questions of access to information, authorship and ownership—are far from new, Feerrar stresses. Take comfort in that—and capitalize on cultural lessons learned.

“I think that meeting the challenges of the digital age means recognizing those throughlines and looking back as we look forward,” Feerrar says.

By recognizing the dynamics at play and keeping empathy front and center, we can interact with media and with each other more intentionally and humanely, she adds. “These are important and evolving parts of digital literacy. It’s going to take all of us to keep figuring out what it means to be a person in our digital world.”

Read more Alumni in AI features.

Published January 8, 2025