AI–Human Interaction Dynamics
I’m a researcher, founder, and builder fascinated by how capital and technology reshape the world. From co-authoring papers on human-AI interaction to launching a startup and podcasting with the people defining what's next, my experience spans deep research, product strategy, storytelling, and high-stakes decision-making in fast-moving environments.
I work at the intersection of research, policy, and product insight, translating qualitative and quantitative findings into strategies for product teams, policymakers, and academic audiences.
Selected work across human-AI interactions, education policy, and cross-cultural perceptions of AI agents.
Led qualitative coding of 4,000+ data points from AI community forum transcripts to analyze public opinion on human-AI interaction. Findings are being translated into internal insights for a global AI firm.
Designed and launched three national education polling events engaging 300+ students, teachers, and school board members in the U.S. Polls have since launched in Greece and Albania, with more underway in individual U.S. schools. Results informed education policy in Albania and at the Council of Europe.
Analyzed 400+ responses from first-time voters in “America in One Room: Youth Vote” using Excel-based thematic analysis, focusing on youth sentiment around AI, privacy, and content governance.
Comparing public mental models of AI between U.S. and Indian participants in the Frontier AI Industry Forum, organized by Stanford University and companies including Meta, DoorDash, and Cohere. Exploring implications for trust, alignment, and safety messaging.
Identifying key features across 50+ consumer-facing AI agent applications, with the aim of developing an agent to test with a randomized group of 100 individuals. Findings will inform practitioners designing agents users trust.
Product-driven reliability research, hackathon work, and a weekly research synthesis newsletter.
Designed a trust-scoring framework for LLM outputs, developing scoring weights across accuracy, sycophancy, and reliability, grounded in AI and HCI literature.
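As a minimal illustrative sketch only: the dimension names (accuracy, sycophancy, reliability) come from the framework description above, but the specific weights, the linear combination rule, and the sycophancy inversion are hypothetical assumptions, not the actual scoring method.

```python
# Hypothetical sketch of a weighted trust score for an LLM output.
# The dimensions match the framework description; the weights and the
# linear combination are illustrative assumptions only.

WEIGHTS = {"accuracy": 0.5, "sycophancy": 0.2, "reliability": 0.3}

def trust_score(scores: dict) -> float:
    """Combine per-dimension scores in [0, 1] into one trust score.

    Sycophancy is treated here as a penalty dimension, so it is
    inverted before weighting (another assumption of this sketch).
    """
    adjusted = dict(scores)
    adjusted["sycophancy"] = 1.0 - adjusted["sycophancy"]
    return sum(WEIGHTS[d] * adjusted[d] for d in WEIGHTS)

# Example: accurate, slightly sycophantic, fairly reliable output
score = trust_score({"accuracy": 0.9, "sycophancy": 0.2, "reliability": 0.8})
```

A weighted linear score like this keeps each dimension auditable on its own; in practice, the literature-derived weights would replace the placeholder values above.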
Pitched and built an MVP for an LLM-based tax assistant for international students and gig workers. Chosen as Best Overall Product out of 100+ hackathon entries.
Weekly synthesis of technical papers and recent headlines related to AI policy and safety, written for researchers, builders, and policymakers.