Civic Innovation Initiative Launches with Broad Dialogue on Trust in Artificial Intelligence

Researchers, educators, ethicists, and industry professionals gathered for the initiative's first board meeting to shape community-driven research priorities around one of today's most urgent issues: trust in AI.

A diverse group of researchers, educators, ethicists, industry professionals, and students gathered for the first board meeting of the Civic Innovation Initiative. The initiative is led by program director Dr. Jonathan Garlick, a Tufts University professor and Fellow of the Auster Center for Applied Innovation at Tufts. Under Dr. Garlick's direction, the Civic Innovation Initiative will focus on shaping community-driven research priorities around one of today's most urgent issues: trust in artificial intelligence.

The gathering marked the first step in an extensive planning process to design a Topic Prioritization Event (TPE), a structured community forum whose outcomes will inform the direction of the Auster Center's new Civic Innovation Seed Grant program. The grant is intended to catalyze transdisciplinary research that develops solutions prioritized by the community itself.

The participants of this first meeting make up the Civic Innovation Board – a group of individuals with diverse lived experiences and professional backgrounds brought together to discuss the broad topic of trust in artificial intelligence. The board includes AI ethics philosophers, national security researchers, computer scientists, humanities educators, corporate professionals, and more. The diversity of the group was intentional: no single discipline or institution was meant to dominate.

The core of the session was a structured "concern gathering," a key method used by the Civic Science Collective for community engagement. Through a series of round-robin questions, participants shared what trust in AI means to them, their main professional and personal worries, and the concerns of communities often left out of these discussions.

Responses painted a complex and pressing picture. On the question of what trust in AI truly means, participants stressed the importance of human oversight and accountability beyond technical performance alone. Technical risks mentioned included AI "hallucinations," bias, and data misuse, which led the conversation to how we can trust a system that makes these kinds of mistakes. Several pointed out that trust should be directed at the ecosystems and institutions that govern AI, not the technology itself, reinforcing the idea that AI is only as trustworthy as the people and systems behind it.

Another concern among the group was the social impact of AI: many worried about effects on critical thinking, on vulnerable areas such as elder care and mental health, and on the deepening of social inequalities. There was also an emphasis on current events, highlighting political threats such as surveillance, authoritarian empowerment, and AI in warfare. A key recurring theme was that those most affected by AI – the elderly, immigrants, low-income groups, and youth – are the least involved in shaping the development of artificial intelligence and advanced technology. There was a shared sense of irony that the very communities AI could most benefit are the ones most at risk from its unchecked deployment.

Many points and concerns were shared across the group, but there were also differences that reflected a real tension between hope and worry. Some participants saw AI's transformative potential, while others feared that hype driven by profit motives was outpacing meaningful safeguards. This difference in opinion reflected the board's deliberately diverse composition, which will continue to drive constructive and critical conversation.

The meeting ended with participants expressing gratitude for the wide range of perspectives in the room. Several described the discussion as a "breath of fresh air" compared to the more isolated, technocratic conversations they encounter in their daily work.

The Civic Innovation Board will meet again in the coming weeks to design the structure of the upcoming Topic Prioritization Event, finalize communication agreements for participants, and identify who should be invited to the larger community forum. The ultimate goal: a grant program shaped not by any single institution or discipline, but by the concerns and hopes of the communities AI most deeply touches.

Learn more about the Civic Innovation Initiative and how we are building research that communities trust.