October 31, 2018

Multiple-platform measurement looks at evolution of hateful, racist memes

Written by

Jeremy Blackburn, Ph.D.

Racist memes are prevalent in fringe web communities, and a substantial number of political memes make their way from these communities to mainstream ones, supporting claims that memes can be used to enhance or harm politicians' public images, according to a new study from the University of Alabama at Birmingham and collaborators.

Internet memes have become a powerful vehicle for the spread of ideas and culture due to their viral nature and ability to evolve. Some memes are completely innocent, but many are also being used as weapons with racist and aggressive undertones.

“We want to understand the origination and evolution of memes alongside how they influence web users,” said Jeremy Blackburn, Ph.D., assistant professor in the UAB Department of Computer Sciences. “If we can characterize these memes and their origination communities, then we can build systems to help networks identify and block the dissemination of hateful and racist memes.”

Posts gathered from mainstream and fringe web communities, including Twitter, Reddit, 4chan and Gab, were analyzed over a 13-month period. The posts showed that memes with hateful and racist content are shared often, with the most popular clusters of memes being the anti-Semitic "Happy Merchant" meme and the controversial Pepe the Frog.
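The release does not detail how the researchers grouped millions of images into clusters of meme variants, but a common approach to this kind of task is to compare perceptual hashes of images. The sketch below is illustrative only, assuming 64-bit perceptual hashes have already been computed; the file names, hash values, and distance threshold are made up for the example:

```python
# Illustrative sketch: group images into clusters of near-duplicate memes
# by comparing 64-bit perceptual hashes. Hash values and the distance
# threshold here are invented for the example, not the study's parameters.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def cluster_hashes(hashes: dict, threshold: int = 8) -> list:
    """Single-linkage clustering via union-find: images whose hashes
    differ by at most `threshold` bits land in the same cluster."""
    parent = {name: name for name in hashes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    names = list(hashes)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if hamming(hashes[a], hashes[b]) <= threshold:
                parent[find(a)] = find(b)  # merge the two clusters

    clusters = {}
    for name in names:
        clusters.setdefault(find(name), set()).add(name)
    return list(clusters.values())

# Toy example: two near-identical meme variants and one unrelated image.
memes = {
    "meme_v1.png": 0xF0F0F0F0F0F0F0F0,
    "meme_v2.png": 0xF0F0F0F0F0F0F0F1,  # one bit away from v1
    "cat.png":     0x0F0F0F0F0F0F0F0F,  # 64 bits away from v1
}
print(cluster_hashes(memes))  # the two meme variants cluster; cat.png stands alone
```

The key property this relies on is that perceptual hashes change only slightly when an image is cropped, recolored, or captioned, so variants of the same base meme end up a few bits apart.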

The findings show that the most influential meme ecosystem is 4chan, where the Politically Incorrect board (/pol/) substantially influences the meme ecosystem because of the sheer number of memes it produces. Meanwhile, the subreddit The_Donald has a higher success rate in pushing memes to other communities. Reddit and Twitter users tend to post "fun" memes, while Gab and /pol/ post racist or political memes.

“Our work is the first attempt to provide a multiple-platform measurement across the meme ecosystem, with a focus on fringe and potentially dangerous communities,” Blackburn said. “This is the first step in building systems to protect against the distribution of harmful ideologies.”

Previous studies from Blackburn and colleagues have already been used by social network providers to help identify hateful content, such as Facebook's ban on Pepe the Frog memes used in a hateful context. The researchers' methodology allows social networks to automatically identify hateful variants of known memes.
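The release does not specify how such automatic identification would work, but one plausible sketch, again assuming perceptual hashes, is to flag an incoming image whose hash falls within a few bits of a curated set of known hateful memes. The hash values and threshold below are invented for illustration:

```python
# Illustrative sketch: flag an uploaded image if its perceptual hash is
# within `threshold` bits of any hash in a curated set of known hateful
# memes. Hash values and threshold are invented for the example.

KNOWN_HATEFUL_HASHES = {0xF0F0F0F0F0F0F0F0, 0xAAAA5555AAAA5555}

def is_hateful_variant(img_hash: int, threshold: int = 8) -> bool:
    """True if img_hash is a near-duplicate of any known hateful meme."""
    return any(bin(img_hash ^ h).count("1") <= threshold
               for h in KNOWN_HATEFUL_HASHES)

print(is_hateful_variant(0xF0F0F0F0F0F0F0F3))  # True: 2 bits from a known hash
print(is_hateful_variant(0x0123456789ABCDEF))  # False: far from both hashes
```

In practice the curated set would come from human-annotated clusters like those identified in the study, and a large-scale system would use an index structure rather than a linear scan, but the matching principle is the same.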

The paper was presented at the Internet Measurement Conference on Wednesday, Oct. 31, and won the distinguished paper award. Collaborators on the paper include the Cyprus University of Technology, University College London and King's College London.