Siya Verma

Hi, I’m Siya!

I’m a researcher, founder, and builder fascinated by how capital and technology reshape the world. From co-authoring papers on human–AI interaction to launching a startup and podcasting with the people defining what's next, my experiences span deep research, product strategy, storytelling, and navigating high-stakes decisions in fast-moving environments.

Email: siyaver[at]sas.upenn.edu

Focus

I work at the intersection of research, policy, and product insight, translating qualitative and quantitative findings into strategies for product teams, policymakers, and academic audiences.

Core areas

  • AI governance and trust
  • Human–AI interaction and user mental models
  • Deliberative polling and public opinion
  • Product-facing reliability and transparency signals

Methods

  • Qualitative coding and thematic analysis
  • Survey and deliberative polling design
  • Mixed-methods synthesis for stakeholders
  • Applied prototyping for trust scoring and reliability

Research projects

Selected work across human-AI interactions, education policy, and cross-cultural perceptions of AI agents.

AI–Human Interaction Dynamics

Lead Author · APSA 2025, Stanford AI Industry Symposium · Forthcoming book chapter (2026)
HCI · Public opinion · Qualitative coding

Led qualitative coding of 4,000+ data points from AI community forum transcripts to analyze public opinion on human–AI interaction. Findings are being translated into internal insights for a global AI firm.

AI in Education – Global Deliberative Polling Initiatives

Project and Research Lead · Speaker at Yale AI Governance Conference, iCivics · Organized Congressional Briefing
Policy · Education · Deliberative polling

Designed and launched three national education polling events engaging 300+ students, teachers, and school board members in the U.S. Polls have since launched globally in Greece and Albania, with more underway at individual U.S. schools. Results informed education policy in Albania and at the Council of Europe.

Youth Perspectives on AI and Social Media

Lead Author · APSA 2025 · Under review
Governance · Youth vote · Thematic analysis

Analyzed 400+ responses from first-time voters in “America in One Room: Youth Vote” using Excel-based thematic analysis, focusing on youth sentiment around AI, privacy, and content governance.

Mental Models of AI Agents: US and India

Lead Author · To be presented at APSA 2026 · Ongoing
Cross-cultural · Safety messaging · Frontier AI forum

Comparing public mental models of AI among U.S. and Indian participants in the Frontier AI Industry Forum, organized by Stanford University and companies including Meta, DoorDash, and Cohere. Exploring implications for trust, alignment, and safety messaging.

Designing Consumer-Facing AI Agents Users Want to Use

Research Assistant, Co-Author · Wharton Human-AI Lab · Ongoing
Product design · Agent UX · Landscape study

Identifying key features across 50+ consumer-facing AI agent applications, with the aim of developing an agent to test with a randomized sample of 100 participants. Findings will inform practitioners designing agents users trust.

Applied work and writing

Product-driven reliability research, hackathon work, and a weekly research synthesis newsletter.

Ezio: LLM Hallucination Detection and Trust Scoring

Co-Founder · Top 10 Finalist, AI Hackathon · Launching January 2026
Reliability · Trust scoring

Designed a trust-scoring framework for LLM outputs. Developed scoring weights for accuracy, sycophancy, and reliability, grounded in AI and HCI literature.
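A weighted trust score of this kind can be sketched in a few lines. The dimension names and weight values below are illustrative assumptions for the sketch, not Ezio's actual framework:

```python
# Hypothetical sketch of a weighted trust score for LLM outputs.
# The dimensions and weights are illustrative, not Ezio's actual values.

ILLUSTRATIVE_WEIGHTS = {
    "accuracy": 0.5,      # factual agreement with reference sources
    "sycophancy": 0.2,    # resistance to uncritically agreeing with the user
    "reliability": 0.3,   # consistency across repeated or rephrased queries
}

def trust_score(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores in [0, 1] into one weighted trust score."""
    if set(dimension_scores) != set(ILLUSTRATIVE_WEIGHTS):
        raise ValueError("scores must cover exactly the weighted dimensions")
    return sum(ILLUSTRATIVE_WEIGHTS[d] * s for d, s in dimension_scores.items())

score = trust_score({"accuracy": 0.9, "sycophancy": 0.8, "reliability": 0.7})
```

In practice, the per-dimension scores would come from upstream detectors (e.g. a hallucination check feeding the accuracy term), with the weights tuned against findings from the literature.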

Penn AI for Social Good Hackathon Winner

Best Overall Product (2025)
Civic tech · Tax assistant · LLM workflow

Pitched and built an MVP for an LLM-based tax assistant for international students and gig workers. Chosen as Best Overall Product out of 100+ hackathon entries.

The Weekly Thesis

Substack newsletter · Weekly
Research synthesis · AI policy · Safety

Weekly synthesis of technical papers and recent headlines on AI policy and safety, written to make AI more approachable for researchers, builders, and policymakers.