## Overview

This project investigates how subtle variations in AI-generated responses—such as the level of detail or confidence—can shape user beliefs. We conducted a large-scale experiment to understand not just whether users accept AI suggestions, but how these features influence both the stance and strength of their beliefs.

## Key Contributions

## How It Works

*Figure: Study interface showing the AI comment and response options. Participants read AI-generated comments and indicated their agreement level.*

Participants were shown claims on various topics and asked to indicate their initial beliefs. They then received AI-generated comments that varied in detail level and confidence tone, after which they reported their updated beliefs.
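
As a rough illustration of how the comment conditions could be generated, the sketch below crosses two detail levels with two confidence tones using the OpenAI chat completions API. The prompt wording, condition labels, and helper function are illustrative assumptions, not the study's actual materials.

```python
# Minimal sketch (not the actual study code): generate one AI comment per
# (detail level, confidence tone) condition for a given claim.
# Prompt wording and condition labels are illustrative assumptions.
from itertools import product
from openai import OpenAI

client = OpenAI()

DETAIL = {
    "low": "Reply in one or two sentences.",
    "high": "Reply in a detailed paragraph with supporting reasoning.",
}
CONFIDENCE = {
    "hedged": "Use tentative language (e.g., 'it seems', 'possibly').",
    "assertive": "Use confident, definitive language.",
}

def generate_comments(claim: str) -> dict:
    """Return one comment per (detail, confidence) condition for a claim."""
    comments = {}
    for (d_key, d_inst), (c_key, c_inst) in product(DETAIL.items(), CONFIDENCE.items()):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": f"Comment on the claim below. {d_inst} {c_inst}"},
                {"role": "user", "content": claim},
            ],
        )
        comments[(d_key, c_key)] = response.choices[0].message.content
    return comments
```

Crossing the two factors this way yields a 2×2 set of comments per claim, so each participant can be assigned to exactly one condition.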

## My Role & Tech

### My Role

- Designed the experimental protocol
- Created the AI prompt variations
- Built the study interface
- Conducted statistical analysis
- Led paper writing
### Tech Stack

- **Survey Platform:** Custom web application
- **LLM:** GPT-4 for response generation
- **Analysis:** Python and R for statistics
- **Participants:** Prolific recruitment
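
On the analysis side, a minimal sketch of how belief change could be estimated is shown below, assuming a tidy per-response table with pre- and post-exposure agreement ratings. The file name, column names, and model specification are hypothetical, not the study's actual pipeline.

```python
# Illustrative analysis sketch: relate belief change to the two manipulated
# factors (detail level, confidence tone), with a random intercept per
# participant to account for repeated measures. Column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # hypothetical columns: participant, detail, confidence, pre_agreement, post_agreement
df["belief_shift"] = df["post_agreement"] - df["pre_agreement"]  # change in agreement rating

model = smf.mixedlm(
    "belief_shift ~ C(detail) * C(confidence)",
    data=df,
    groups=df["participant"],
).fit()
print(model.summary())
```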

## Outcome

*Under Review*

This work offers insights into how AI communication design impacts human judgment—raising both opportunities and ethical considerations for LLM-powered systems.