
New Center to Guide Ethical Tech Policy

September 23, 2025
Sepehr Vakil and Claudia Haase co-direct the Center for Responsible Technology, Policy and Public Dialogue.

Northwestern’s School of Education and Social Policy has launched a multidisciplinary research hub called the Center for Responsible Technology, Policy, and Public Dialogue to explore questions of ethics, fairness, and innovation in the age of artificial intelligence.  

By bringing together diverse voices — from anthropologists and artists to economists, learning scientists and psychologists — organizers hope to help shape a just technological future.

“Technology is profoundly emotional on multiple levels,” SESP Dean Bryan Brayboy said in opening remarks. “People argue, fight and disagree about everything from how it affects the environment and our children’s brains to what it means for data to be capitalized and commodified. These are vital debates. That’s why our center is so important — it’s a space for dialogue across disciplines and communities.”

It’s also a place to shape the response to artificial intelligence, rather than simply react to it, said Megan Bang, the James Johnson Professor of Learning Sciences. “But that takes governance, imagination and inclusion at every level,” she said.

Housed in Northwestern’s School of Education and Social Policy (SESP), the center also introduced a new master’s program in Technology, People, and Policy to train future leaders at the intersection of innovation and ethics.

The launch, marked by a university symposium, was made possible in part by a $1 million grant from the Kapor Foundation, a national leader in efforts to diversify and democratize the tech industry.

Guiding Tech Toward Fairness and Ethics

The center is led by Sepehr Vakil, associate professor of learning sciences, and Claudia Haase, associate professor of human development and social policy. Together, they are asking urgent questions about the role of technology in society — now a part of everything from education and economics to mental health and the arts.

“It’s a luxury to have computer scientists, political scientists, psychologists, and learning scientists all working together,” Vakil said. “This center is not just about technology — it’s about people. It’s about ensuring that as AI advances, our ethics, equity and communities advance with it.”

Haase, a developmental psychologist, emphasized the social and emotional implications of emerging technologies. Some people, she noted, are turning to AI for emotional support—whether students seeking advice or individuals using it for comfort or decision-making.

“Others are concerned about what these developments mean for human connection and the relationships we have with each other and ourselves,” Haase said. “The rising use of AI in mental health suggests people are searching for healing — and AI is becoming part of that search. We need to talk about what this might mean for human well-being and what future we want for ourselves and our children.”

Highlights from the symposium:

Alumnus Dan Perlman (BS12) of New York City and Chicago-based artists Samir Abdul and Mike Knight will serve as the center’s first artists in residence. Perlman, a writer, filmmaker and comedian, reflected on the growing use of AI in creative industries, where companies increasingly use algorithms to read or assist in writing scripts.

“You can cut a script using AI and rubber-stamp it without ever engaging,” he said. “But I don’t understand the creative process if you’re not struggling, hitting walls, getting breakthroughs. That’s what makes the work real. If our brains aren’t turned on while making something, I don’t expect anyone’s brain to turn on while consuming it.”

Perlman invoked writer Toni Morrison, who said, “Struggling through the work is more important than publishing it.”

“It’s important to continually incentivize the process of progress through struggle,” he said. “To keep our brains turned on.”

Tech Policy Is No Longer Just Technical — It’s Political

One of the day’s central themes was the need to translate research and ethical concerns into public policy. A panel featuring tech policy leaders Nik Marda and Nicol Turner Lee — both instructors in the master’s program in Technology, People, and Policy — highlighted the growing political dimensions of AI.

Marda, former technical lead for AI governance at Mozilla and a White House Office of Science and Technology Policy adviser, noted that AI has shifted from a purely technical challenge to a political force.

“Policy alone isn’t enough,” he said. “We need people working at all levels of the AI stack — the technical, the policy and the political. AI is backed by trillions of dollars and shaping society. It needs to be guided with the public interest at the center.”

Turner Lee, director of the Center for Technology Innovation at the Brookings Institution and author of Digitally Invisible: How the Internet Is Creating the New Underclass, underscored the dangers of exclusion.

“If we don’t change how we think about who deserves to benefit from AI, we’ll keep punishing those just trying to participate,” she said. “We need to ensure AI is a tool of inclusion, not exclusion.”

Why Experts Need to Act Fast and Speak Clearly

In a panel on engaging with industry, policymakers and the public, Professor Kirabo Jackson, a labor economist and former member of the White House Council of Economic Advisers, emphasized the importance of speed and clarity.

“People don’t have three years to wait for research — they may not even have three weeks,” said Jackson, the Abraham Harris Professor of Human Development and Social Policy. “If someone’s going to make an educated guess, it should be you — the expert. Not someone else.”

He urged scholars to step outside academia and write for broader audiences — blogs, op-eds and policy briefs. “Stop writing only for journal editors,” he said. “Start writing for people.”

Jackson shared lessons from his time in Washington, where he deliberately avoided jargon such as “exogenous” or “positionality” in favor of plain language.

“I wasn’t dumbing things down,” he said. “I was making my ideas stronger — because they were easier to understand.”

AI as a Tool for Civic Infrastructure

Many educational disparities stem from systemic fragmentation — not just a lack of resources, said Nichole Pinkard, the Alice Hamilton Professor of Learning Sciences. AI, she said, has the potential to improve entire learning ecosystems.

“The opportunity gap is often a coordination gap,” she said. “AI can help cities visualize participation across schools, parks and libraries — aligning fragmented systems and supporting equity.”

Her work, including the Cities Learn Platform, reimagines AI as a civic tool, helping communities align around shared educational resources.

“Learning doesn’t just happen in schools. It happens at home, at church, at community centers,” Pinkard said. “AI gives us the ability to connect these fragmented systems — but right now, we still operate in silos.”

How We Perceive AI Shapes Its Impact

Professor David Rapp’s research on misinformation explores how psychology influences trust in AI-generated content.

“When people read the same article labeled as AI-written versus human-written, their trust levels change,” said Rapp, the Walter Dill Scott Professor of Learning Sciences. “Certainty in tone affects trust — but only when it’s from a human. With AI, nuance gets flattened.”

Understanding these dynamics is key to building trustworthy and transparent AI systems, he said.

AI and the Blurring of Human-Machine Boundaries

Sylvester Johnson, a scholar of race, religion and technology in the Weinberg College of Arts and Sciences, tackled the cultural and ethical implications of emotionally responsive AI — from AI companions to therapy bots.

“We talk to our devices. We seek empathy from machines,” he said. “But who decides where the boundary between human and machine lies?”

Drawing parallels to historical struggles over who counts as fully human, Johnson stressed the need for new frameworks of governance and ethics.

Wheels on Horses

Elizabeth Gerber, co-director and founder of Northwestern’s Center for Human-Computer Interaction + Design, imagined a future where AI serves everyone — not just the powerful.

“A dyslexic student uses personalized AI tools. A rural mother uses her phone to diagnose illness. A small farmer uses AI to grow her business.”

To achieve that future, Gerber said, we must design with communities, not for them — ensuring inclusion, transparency and AI literacy at all levels.

She described her shift toward a co-created classroom model, where students help shape the course. “It’s more work, but it’s more human,” she said.

“We don’t yet know what the future model should look like. It’s like putting wheels on horses.”

“Let’s not underestimate the power we have. Northwestern helped lead a national student voting movement. We can do that again — with AI literacy, inclusive design and a commitment to equity.”

Should AI Be Regulated?

Illinois State Rep. Abdelnasser Rashid, co-chair of the state’s AI Task Force, detailed recent legislation aimed at regulating AI — including laws to prevent algorithmic employment discrimination and protect educators from being replaced by automation.

“Technology is evolving faster than regulation,” Rashid said. “We’ve seen what happens when we’re slow to act — look at the consequences of unchecked social media. We can’t let that happen again with AI.”

“That’s why academic partnerships, like this one with Northwestern, are so crucial,” he said.