How AI Shapes Human Judgment: Confidence, Trust, and Over‑Reliance Across Three Original Studies (As a High Schooler)

by Shiven Kharidehal | at Minnebar20

This session presents findings from three original research studies examining how humans interact with AI — and how AI interacts with us. Across topics including LLM confidence calibration, AI‑generated phishing, and the effects of AI explanations on trust, this talk explores a central question: How does AI influence human judgment, accuracy, and over‑reliance?

I’m a high school student at Mounds View High School and very new to presenting at events like Minnebar, so this session is designed to be especially accessible, fast‑paced, and easy to follow. Despite being early in my research journey, I’ve spent the past year studying how people trust (and sometimes over‑trust) AI systems — and I’ll be sharing what surprised me most.

You’ll learn, drawing on the results of my three papers:

How often large language models are confidently wrong, and why self‑reported confidence fails as a reliability signal

Why AI‑generated phishing emails are harder to detect — and why people are more confident when they’re wrong

How explanations (simple or detailed) increase trust and over‑reliance, even when the AI is incorrect

What these findings mean for cybersecurity, education, and everyday AI use

Whether you’re an AI researcher, developer, educator, or just curious about how humans and AI influence each other, you’ll walk away with a clearer understanding of the risks and opportunities in our rapidly evolving AI ecosystem.
