After years researching how people from different cultures perceive and respond to digital threats, I've come to believe that one-size-fits-all security training is doing more harm than good. Security policies written in one cultural context (usually Western, usually individualistic) get deployed globally and then we wonder why adoption rates vary so dramatically.
The gap isn't apathy. It's framing.
What my dissertation found
My doctoral research at Purdue examined trust in cybersecurity contexts across multiple national populations. What I found, again and again, was that the same security behavior, like clicking a suspicious link, was interpreted through completely different cognitive and social frameworks depending on the participant's cultural background.
In high-collectivist cultures, trust is often extended based on group membership or in-group signals. An email that appears to come from a "known" organization carries more implicit legitimacy, regardless of technical indicators of compromise. In high-individualist cultures, skepticism tends to kick in earlier, but that doesn't mean those users are immune. They just get caught by different attacks.
"The attack surface isn't just technical. It's cognitive, social, and cultural."
Why this matters for practitioners
Security awareness training that doesn't account for these differences isn't just ineffective; it can actively erode trust in security teams. When training feels irrelevant to someone's actual lived experience, they learn to ignore it. And that learned ignorance is far more dangerous than simple lack of knowledge.
The fix isn't to create dozens of culturally separate security programs. It's to design security communication with cultural humility baked in, acknowledging that the threats are the same, but the context in which people process those threats is not.
Where I'm going next
I'm currently exploring how these cultural dynamics interact with AI-mediated security systems: specifically, whether users from different backgrounds apply the same culturally shaped trust heuristics to AI decision-making that they do to human ones. My suspicion is that they do. And that means the AI security tools we're building right now are inheriting the same cultural blind spots as the awareness training that came before.
More on that soon.