This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.
Authors:
(1) Alyvia Walters, Rutgers University, USA;
(2) Tawfiq Ammaris, Rutgers University, USA;
(3) Kiran Garimella, Rutgers University, USA;
(4) Shagun Jhaver, Rutgers University, USA.
Table of Links
- Abstract & Introduction
- Background and Related Work
- Methods
- The Tools of CRT Meme Production
- Knowledge Production in a Post-Truth World
- Conclusion & References
Methods
Data Collection
In this work, we focused on the popular images shared in the discussion around critical race theory. To identify these images, we collected public Facebook posts and images published between May 2021 and May 2022 that discussed critical race theory. During that year, discussion of CRT on Facebook spiked multiple times in line with real-world events such as the Virginia gubernatorial election, making this time span appropriate for analysis.
We used CrowdTangle (2022), a tool provided by Meta that enables searching and analyzing public content from Facebook. We collected all Facebook posts that contained the term “critical race theory” and had a minimum of 100 interactions, as we were interested in analyzing the images with the largest reach. We did not include the term “CRT”, a popular abbreviation of critical race theory, in our search query because our early sampling and review of search results indicated a high false positive rate for that term (e.g., posts about CRT televisions). This yielded 5,662 posts for the period May 2021–May 2022. Since a majority of these posts (around 70%) contained images, we decided to focus on images. The final dataset consisted of 3,906 images that were accessible and downloadable.*
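To make this filtering step concrete, the sketch below shows how posts exported from CrowdTangle might be filtered in Python. The CSV file name and column names (“Message”, “Total Interactions”, “Type”) are illustrative assumptions, not fields documented in the paper.

```python
# Hedged sketch of the post-filtering step, assuming a CrowdTangle CSV export.
# File name and column names are illustrative assumptions.
import pandas as pd

posts = pd.read_csv("crowdtangle_export.csv")  # hypothetical export file

# Keep posts that mention the full phrase and clear the interaction threshold.
mask_phrase = posts["Message"].str.contains("critical race theory", case=False, na=False)
mask_reach = posts["Total Interactions"] >= 100
filtered = posts[mask_phrase & mask_reach]

# Focus on posts that include an image attachment.
with_images = filtered[filtered["Type"] == "Photo"]
```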
Clustering
Once all the images were collected, the next step was to identify the popular images among them. We defined an image’s popularity as the number of times it appears in our dataset. We borrowed Zannettou et al. (2018)’s method of using image hashing, specifically pHash (Monga and Evans 2006) values, to identify similar images. pHash is a perceptual hashing algorithm (Farid 2021) that returns a compact fingerprint (‘hash’) for any given image. The key property of this hash is that perceptually similar images (e.g., images that are slightly cropped, or have a watermark but are otherwise the same image) have similar pHash values. Given the pHash values for two images, we can compute the distance between them to infer whether the two images are similar.
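As an illustration, the following sketch computes pHash values and their pairwise distance using the Python `imagehash` library; the library choice and the distance threshold mentioned in the comment are our assumptions, since the paper does not specify an implementation.

```python
# Minimal sketch of perceptual hashing with the `imagehash` library (an assumption;
# the paper does not name its pHash implementation).
from PIL import Image
import imagehash

def phash_distance(path_a: str, path_b: str) -> int:
    """Return the Hamming distance between the pHash values of two images."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # imagehash overloads `-` as bitwise Hamming distance

# Small distances (e.g., <= 8 on 64-bit hashes) suggest near-duplicate images;
# the exact threshold is a tunable assumption, not a value reported in the paper.
```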
Clustering is a technique for identifying and grouping similar objects into the same cluster based on a specific property. We used DBSCAN (Ester et al. 1996), a density-based clustering algorithm, to group these near-identical images. DBSCAN treats clusters as dense regions of data points, handles clusters of arbitrary shapes well, and is robust to noise and outliers. We performed clustering based on the distance between the hashes, which gave us 190 clusters. Each cluster contained multiple images, with cluster sizes ranging from 3 to 28 images.
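A minimal sketch of this clustering step, assuming scikit-learn’s DBSCAN over a precomputed matrix of pairwise pHash distances, is shown below; the `eps` and `min_samples` values are illustrative and not the parameters reported in the paper.

```python
# Sketch of DBSCAN clustering over pairwise pHash distances using scikit-learn.
# eps and min_samples are illustrative assumptions.
import numpy as np
from PIL import Image
import imagehash
from sklearn.cluster import DBSCAN

def cluster_by_phash(image_paths, eps=8, min_samples=3):
    hashes = [imagehash.phash(Image.open(p)) for p in image_paths]
    n = len(hashes)

    # Pairwise Hamming distances between hashes.
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = hashes[i] - hashes[j]

    labels = DBSCAN(eps=eps, min_samples=min_samples, metric="precomputed").fit_predict(dist)
    return labels  # -1 marks noise; other labels index image clusters
```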
Coding & Critical Discourse Analysis
Qualitative Content Analysis
Because we undertook an iterative image coding process, we included images in the analysis until we reached thematic saturation (Low 2019), which occurred at 35 clusters. Within this set of 35, several clusters/images were so rhetorically similar that we collapsed them into one category, leaving us with 27 distinct images for analysis. Initially, we analyzed images apart from their contextualizing captions and comments, but in cases where it was not clear which code an image should receive, we considered the surrounding text and reactions on the Facebook post where the image was shared to get a better understanding.
We carried out the analysis in an iterative manner. First, we decided on the categories/dimensions for which the images should be coded, the most basic being a binary categorization of pro-CRT or anti-CRT. Then, we began qualitatively coding for emergent themes. These codes were refined over multiple iterations until we grouped similar codes together to create organized parent codes. Though we created parent codes for multiple image categories (e.g., ‘type’, ‘origin’), we primarily focused on the “role” of images. The set of “role” codes captures how an image is deployed and the message or intent it attempts to convey.
Critical Discourse Analysis
After coding these memes to better understand their rhetorical functions, we finally engaged in critical discourse analysis (CDA) in order to introduce questions of power into our analysis of semiosis. In this methodology, language is never read as neutral and is instead analyzed for its ideological underpinnings. According to Fairclough, CDA provides a methodology to “systematically explore often opaque relationships of causality and determination between (a) discursive practices, events and texts, and (b) wider social and cultural structures, relations and processes; to investigate how such practices, events and texts arise out of and are ideologically shaped by relations of power and struggles over power” (Fairclough 2018, p. 93).
He theorizes discourse as a “three-dimensional” structure which is made up of discourse events, discursive practices, and social practice. Discourse events, he posits, are the actual “text” to be analyzed—“text” meaning any culturally-situated object of study—and these discourse events are composed of both discursive practices and social practices (Fairclough 2018).
In the case of the present study, memes are the discourse events that we analyze for their discursive practices (what the text and image, together, are discursively creating and reflecting) and for their social practices (how these discourses are tied up in sociocultural contexts). The content analysis allowed us to see trends in the discursive practices of these memes, and situating these trends within the social contexts of political and hegemonic power relations allowed us to make our ultimate arguments about why these memes, as discourse moments, matter in a crowded field of political discourse.
∗We will share a link to this dataset after peer review is completed.