Two of the largest alt-right web communities displayed explosive growth in anti-Semitic and racist rhetoric over the course of 18 months beginning in 2016, according to a University of Alabama at Birmingham study. The study, A Quantitative Approach to Understanding Online Antisemitism, discusses how the anti-Semitism trend in alt-right communities, which began as early as the months preceding the 2016 election, influenced web communities in mainstream ecosystems such as Twitter and Reddit.
Researchers at the University of Alabama at Birmingham, Princeton University, the Cyprus University of Technology, the University of Illinois at Urbana-Champaign and the Network Contagion Research Institute conducted the first large-scale quantitative analysis of the rise of online anti-Semitism and how anti-Semitic content flows across mainstream and fringe web communities. The study looked at Politically Incorrect (/pol/), a subcommunity on 4chan that offers an image-based discussion forum where users are anonymous, and Gab.ai, an alt-right Twitter clone.
“We analyzed more than 100 million posts from the largest alt-right-affiliated social platforms,” said Jeremy Blackburn, Ph.D., assistant professor in the UAB Department of Computer Science. “We are facing a new socio-technical problem. The web has brought people closer together but has also been harnessed for ill intent by people aiming to spread hateful ideology. There may be 100 racists in your town, but in the past they would have to find each other in the real world. Now they just go online.”
Researchers found that major news events such as the 2016 election, the inauguration and the Charlottesville “Unite the Right” rally in 2017 correlated with changepoints in anti-Semitic and racist rhetoric on these channels. The team also used quantitative tools to decipher the intentionally ambiguous but hostile, coded language these communities invent.
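The article does not detail the changepoint method the team used, but the idea can be illustrated with a minimal sketch: given a weekly series of hate-term frequencies, find the split point that best divides it into two segments with different means (a least-squares, single-changepoint search; the counts below are invented for illustration).

```python
def detect_changepoint(series):
    """Return the index that best splits the series into two segments
    with different means, by minimizing within-segment squared error."""
    n = len(series)
    best_idx, best_cost = None, float("inf")
    for k in range(1, n):
        left, right = series[:k], series[k:]
        mean_l = sum(left) / len(left)
        mean_r = sum(right) / len(right)
        cost = (sum((x - mean_l) ** 2 for x in left)
                + sum((x - mean_r) ** 2 for x in right))
        if cost < best_cost:
            best_cost, best_idx = cost, k
    return best_idx

# Hypothetical weekly counts of a slur per 10,000 posts:
# the level jumps after the sixth week, e.g. around a news event.
counts = [3, 4, 3, 5, 4, 3, 9, 11, 10, 12, 11]
print(detect_changepoint(counts))  # → 6, the index where the jump begins
```

Real analyses would test whether the detected shift is statistically significant and search for multiple changepoints, but the core signal is the same: an abrupt, sustained level shift in rhetoric aligned in time with an external event.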
Using an analysis pipeline that categorizes memes from tens of millions of images across numerous web communities, the team analyzed the makeup and flow of the “Happy Merchant” meme, an infamous anti-Semitic meme that has immense popularity on both /pol/ and Gab. Investigators first demonstrated how the Happy Merchant infects other popular memes within /pol/, a process that generated tens of thousands of Merchant variants over the sample period.
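A common building block for image-categorization pipelines like this (the article does not specify the team's exact one) is a perceptual hash: a compact fingerprint that stays nearly identical when an image is lightly edited, so near-duplicate meme variants cluster together. A minimal sketch of an average hash over an 8x8 grayscale image, with synthetic pixel data standing in for real images:

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale image (64 pixel values):
    each bit is 1 if that pixel is brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(h1, h2):
    """Number of differing bits; a small distance means near-duplicates."""
    return bin(h1 ^ h2).count("1")

# Synthetic 8x8 images: an edited copy of a meme hashes close to the
# original, while an unrelated (inverted) image hashes far away.
base = [10 * (i % 13) for i in range(64)]
edited = [p + 2 for p in base]        # small uniform brightness shift
different = [255 - p for p in base]   # inverted image

print(hamming(average_hash(base), average_hash(edited)))     # → 0
print(hamming(average_hash(base), average_hash(different)))  # → 64
```

Grouping millions of images by small hash distances is what lets a pipeline count every Happy Merchant variant as one meme family despite endless small edits.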
The flow of the meme was analyzed within and between web communities, including mainstream platforms such as Reddit and Twitter as well as fringe subcommunities such as Reddit’s The_Donald, to examine the influence different communities have on one another in spreading the “Happy Merchant.” The group found that 4chan’s /pol/ exhibited the greatest overall influence on the meme’s spread to all other web communities, acting as a metaphorical “red zone” of contagion. Reddit’s The_Donald exerted less influence overall but was the most effective at disseminating the meme per instance of its appearance.
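The distinction between overall influence and per-instance efficiency can be made concrete with a small sketch. The numbers below are invented, not from the study: `attributed` counts meme appearances elsewhere that are attributed to each source community, and `posts` counts that community's own meme postings.

```python
# Hypothetical example numbers, NOT figures from the study.
attributed = {"/pol/": 9000, "The_Donald": 1200, "Twitter": 800}
posts = {"/pol/": 120000, "The_Donald": 4000, "Twitter": 50000}

# Overall influence: raw count of downstream appearances attributed
# to the community.
total = max(attributed, key=attributed.get)

# Per-instance efficiency: attributed appearances divided by the
# community's own number of postings.
per_post = max(attributed, key=lambda c: attributed[c] / posts[c])

print(total)     # → /pol/       (most influence overall)
print(per_post)  # → The_Donald  (most influence per posting)
```

This mirrors the study's finding in miniature: a very large community can dominate total spread while a smaller, tightly focused one is more effective each time the meme appears there.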
“We are seeing an explosive and uncontrolled growth in anti-Semitic content on these fringe networks and finding evidence of their influence in the mainstream information ecosystem,” said Princeton University psychologist Joel Finkelstein, lead author and director of the Network Contagion Research Institute. “Qualitative methods, such as those used by the ADL or the SPLC to monitor and counter racism and anti-Semitism, cannot keep pace with the instantaneous and viral nature of internet content; but we hope the methods we showcase here can point toward ways to better diagnose and ultimately mitigate anti-Semitic memes and rhetoric.”
Blackburn and Finkelstein are co-founders of the Network Contagion Research Institute, or NCRI. The NCRI deploys machine learning tools to expose hate on digital social networks within a cross-platform, public-minded and global framework. The institute is a multidisciplinary group of scientists and engineers who apply technical skills to further public insight into the problem of online hate, examining how hateful images and language grow within and between web communities and how the infection of hate spreads between the online and real worlds.