This project investigates how subtle variations in AI-generated responses, such as their level of detail or tone of confidence, can shape user beliefs. We conducted a pre-registered experiment to understand not just whether users accept AI suggestions, but how these response features influence both the stance and the strength of their beliefs.
## Key Contributions
- Pre-registered experiment with 304 participants across fact-checking and opinion tasks
- Systematically varied AI response detail (brief vs. rich) and confidence tone (see the sketch after this list)
- Measured changes in both stance direction and belief strength
- Found that rich detail paired with a moderate-confidence tone was most effective at shifting beliefs
- Derived practical insights for ethical LLM communication design
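As a rough illustration of the second point, here is a minimal sketch of how the detail × confidence prompt variations might be generated; the tone levels, template wording, and example claim are hypothetical, not the study's actual materials:

```python
from itertools import product

# Hypothetical condition grid: the study crossed detail (brief vs. rich)
# with confidence tone; the exact tone levels are assumed here.
DETAIL_LEVELS = ["brief", "rich"]
CONFIDENCE_TONES = ["low", "moderate", "high"]

# Illustrative template only; the study's real prompts are not shown here.
PROMPT_TEMPLATE = (
    "Write a {detail} comment on the claim below, "
    "in a {tone}-confidence tone.\n\nClaim: {claim}"
)

def build_prompts(claim: str) -> dict:
    """Return one generation prompt per (detail, tone) condition."""
    return {
        (detail, tone): PROMPT_TEMPLATE.format(detail=detail, tone=tone, claim=claim)
        for detail, tone in product(DETAIL_LEVELS, CONFIDENCE_TONES)
    }

prompts = build_prompts("Daily coffee consumption improves memory.")  # example claim
print(prompts[("rich", "moderate")])
```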
## How It Works
Participants were shown claims on a range of topics and rated their initial agreement with each. They then read AI-generated comments that varied in detail level and confidence tone, after which they reported their updated beliefs.
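A minimal sketch of the two outcome measures, assuming responses on a signed agreement scale (e.g. -3 to +3) where the sign encodes stance direction and the magnitude encodes belief strength; the study's actual scale is not specified in this summary:

```python
def belief_change(pre: int, post: int) -> dict:
    """Decompose a pre/post agreement pair into the two outcome measures.

    Assumes a signed scale (e.g. -3 .. +3): the sign encodes stance
    direction, the magnitude encodes belief strength.
    """
    return {
        "stance_flipped": pre * post < 0,         # sign change = stance reversal
        "strength_change": abs(post) - abs(pre),  # + firmer, - weaker
        "shift": post - pre,                      # raw movement on the scale
    }

# Example: a participant moves from mild disagreement (-2) to mild agreement (+1).
print(belief_change(pre=-2, post=1))
# {'stance_flipped': True, 'strength_change': -1, 'shift': 3}
```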
## My Role & Tech
### My Role
- Designed the experimental protocol
- Created the AI prompt variations
- Built the study interface
- Conducted statistical analysis
- Led paper writing
### Tech Stack
- **Survey Platform:** Custom web application
- **LLM:** GPT-4 for response generation
- **Analysis:** Python and R for statistics (see the sketch after this list)
- **Participants:** Prolific recruitment
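As a minimal sketch of the kind of factorial analysis this design supports, using Python with statsmodels; the column names and placeholder data are hypothetical, and the pre-registered analysis may have differed (for example, mixed-effects models fitted in R):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data with hypothetical column names; not the study's data.
rng = np.random.default_rng(0)
n = 304  # matches the reported sample size
df = pd.DataFrame({
    "detail": rng.choice(["brief", "rich"], size=n),
    "tone": rng.choice(["low", "moderate", "high"], size=n),
    "belief_shift": rng.normal(size=n),  # stand-in outcome variable
})

# Two-way factorial model: belief shift as a function of detail level,
# confidence tone, and their interaction.
model = smf.ols("belief_shift ~ C(detail) * C(tone)", data=df).fit()
print(model.summary())
```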
## Outcome
Under Review
This work offers insights into how AI communication design impacts human judgment, raising both opportunities and ethical considerations for LLM-powered systems.