Detecting Coordination in Civic Discourse
Real-World Context
Imagine you’re scrolling through social media during a local election. You notice dozens of accounts posting very similar messages about a candidate, all within minutes of each other. Are these real people expressing genuine opinions, or is this coordinated manipulation?
What We’re Looking For
Organic discourse happens when real people naturally discuss civic issues (a sketch of how the timing signals can be measured follows this list):
- Posts spread out over time as people see news and react
- Different people use different wording and offer different perspectives
- Activity follows human daily rhythms (busier at lunch and in the evenings)
- A mix of original posts, shares, and replies
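The timing items above can be made concrete with a few simple statistics over post timestamps. The sketch below is illustrative only: the function name, the choice of features, and the reading of "large, irregular gaps spread across many hours" as organic are assumptions for this example, not any platform's API.

```python
# Minimal sketch of timing features computed from post timestamps alone.
from datetime import datetime
from statistics import mean, stdev

def timing_features(timestamps: list[datetime]) -> dict:
    """Summarize when a set of posts appeared, using only timestamps."""
    ts = sorted(timestamps)
    # Gaps between consecutive posts, in seconds.
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    # How many distinct hours of the day the activity covers.
    active_hours = {t.hour for t in ts}
    return {
        "mean_gap_s": mean(gaps) if gaps else None,
        "gap_stdev_s": stdev(gaps) if len(gaps) > 1 else None,
        "distinct_hours": len(active_hours),
    }
```

Organic discussion tends to produce large, irregular gaps spread across many hours of the day; a run of near-identical gaps packed into a single hour is a signal worth a closer look (an illustrative reading, not a hard rule).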
Coordinated manipulation shows suspicious patterns (a sketch for spotting the recycling and burst signals follows this list):
- Bot farms: Networks of fake accounts posting similar content simultaneously
- Burst campaigns: Sudden spikes of activity that don’t match natural human behavior
- Content recycling: The same message copied and pasted across many accounts
- New account brigades: Recently created accounts all acting in coordination
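Two of these patterns, content recycling and burst campaigns, can be sketched as a grouping problem over (account, timestamp, text) records. Everything below is an assumption made for illustration: the record format, the whitespace-and-case normalization, and the 10-minute / 5-account thresholds.

```python
# Rough sketch: flag message texts posted by many accounts in a short window.
import re
from collections import defaultdict
from datetime import timedelta

def normalize(text: str) -> str:
    """Collapse whitespace and case so copy-pasted variants hash together."""
    return re.sub(r"\s+", " ", text.strip().lower())

def find_copy_paste_bursts(posts, window=timedelta(minutes=10), min_accounts=5):
    """posts: iterable of (account_id, timestamp, text) tuples."""
    by_text = defaultdict(list)            # normalized text -> [(time, account)]
    for account_id, ts, text in posts:
        by_text[normalize(text)].append((ts, account_id))

    suspicious = []
    for text, entries in by_text.items():
        entries.sort()
        times = [t for t, _ in entries]
        accounts = {a for _, a in entries}
        # Many distinct accounts, all posting the same text within one window.
        if len(accounts) >= min_accounts and times[-1] - times[0] <= window:
            suspicious.append((text, len(accounts)))
    return suspicious
```

The same grouping idea extends to the other patterns: replace "same normalized text" with "account created in the same recent week" to sketch a new-account brigade check.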
Real Examples
- 2016 election: Thousands of fake accounts pushed divisive content during key political moments
- Local politics: Bot networks amplified negative stories about municipal candidates right before elections
- Issue advocacy: Coordinated campaigns made fringe positions appear to have widespread support
The Privacy Challenge
We want to detect manipulation without invading privacy:
- Can’t read private messages or personal content
- Can’t track individual users across platforms
- Can’t access names, locations, or identifying information
But we can look at behavioral patterns that indicate coordination while preserving anonymity.
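One way to honor those constraints, sketched below under assumed field names, is to strip each record down to salted one-way fingerprints plus a timestamp before any coordination analysis runs, so the analysis never sees handles or message text.

```python
# Minimal sketch of privacy-preserving feature extraction: only salted
# hashes and timestamps leave this function, never raw handles or text.
import hashlib

def anonymize_record(account_handle: str, text: str, timestamp, salt: bytes) -> dict:
    """Replace identifying fields with one-way fingerprints."""
    def fingerprint(value: str) -> str:
        return hashlib.sha256(salt + value.encode("utf-8")).hexdigest()[:16]

    return {
        "account_fp": fingerprint(account_handle),   # same account -> same fingerprint
        "content_fp": fingerprint(text),             # same text -> same fingerprint
        "timestamp": timestamp,                      # timing is kept as-is
    }
```

Because identical inputs map to identical fingerprints, the timing and copy-paste checks sketched earlier still work on these anonymized records.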