News Bureau | ILLINOIS

Anthropology professor Kendra Calhoun studies the creative language people use on social media platforms to trick algorithms that might falsely categorize content as “inappropriate” or “offensive.” Calhoun spoke to Diana Yates, Life Sciences Editor at the News Bureau, about this phenomenon, which she calls “linguistic self-censorship.”

In a recent report, you and your co-author Alexia Fawcett of the University of California, Santa Barbara, write that TikTok’s content moderation practices sometimes delete or suppress posts that address controversial topics such as suicide, race, gender, and sex or sexuality. Does this have a disproportionate impact on some users’ free speech?

Yes. These practices disproportionately affect TikTok users who are already structurally marginalized by society because of their identity and background. Our research found that users from all backgrounds engage in linguistic self-censorship, but those who expressed the greatest fear of suppression of their content were primarily people from marginalized groups.

Which communities are most likely to have their posts removed or suppressed?

There are no publicly available statistical data on TikTok’s moderation practices, but TikTok content creators from communities marginalized based on race, gender, and/or sexuality have been some of the most vocal critics when it comes to having their content suppressed or removed. A large portion of the examples in our study came from creators who are Black, transgender, and/or queer. People with social and political beliefs that conflict with the beliefs of those in power are another group whose content may be at risk of suppression. We saw this recently with pro-Palestinian content in the wake of Hamas’s October 7, 2023, attack on Israel and Israel’s retaliatory war.

Are social media companies doing anything to better protect the free expression of marginalized groups or others while protecting users from harmful content that violates the platform’s policies?

All social media platforms have publicly available community guidelines designed to govern their content moderation practices. But these policies are still subject to interpretation by moderators, so there is always room for uneven application. Years of research into content moderation and censorship on platforms like YouTube, Twitter, Instagram, and now TikTok show that suppressing the speech of marginalized people is an ongoing problem. But content moderation decisions are internal to companies, so social media users don’t know if and how these companies are trying to resolve this conflict.

How do users adapt their own language to avoid censorship on TikTok?

The main goal of linguistic self-censorship online is to avoid writing or speaking certain words or phrases that might be detected by an algorithmic filter. Users do this in many creative ways, manipulating spelling, sound, and meaning, and using digital linguistic resources such as emojis and speech recognition technology. Someone might avoid the word “gay” by spelling it “ghey,” or replace the word “porn” with the emoji for the rhyming word “corn,” as in “🌽 star.” Users also replace words with homophones that can be correctly interpreted in context, such as “sir-come-sized” for “circumcised.”
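
As a rough illustration of why these substitutions work, consider a toy exact-match keyword filter sketched in Python below. This is purely hypothetical: TikTok has not disclosed how its moderation algorithms operate, and the blocklist and function here are invented for demonstration. The point is simply that a respelling or emoji substitute never matches the blocked string.

```python
# Toy exact-match keyword filter -- an illustrative sketch only.
# TikTok's real moderation system is proprietary and far more complex.
BLOCKED_TERMS = {"gay", "porn"}  # invented example blocklist, not TikTok's

def is_flagged(caption: str) -> bool:
    """Return True if any blocked term appears as a word in the caption."""
    words = (w.strip(".,!?\"'") for w in caption.lower().split())
    return any(w in BLOCKED_TERMS for w in words)

print(is_flagged("gay rights on tiktok"))   # True:  exact string is caught
print(is_flagged("ghey rights on tiktok"))  # False: respelling never matches
print(is_flagged("🌽 star"))                # False: emoji stands in for the word
```

Real filters are more sophisticated than this sketch, which is why users keep layering new strategies, such as homophones and context-dependent descriptions, on top of simple respellings.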

Please share with us some of the most playful examples.

One of the most creatively censored words in our study was “white,” which was censored because of the widespread belief that it is harmful to acknowledge or talk about race, including whiteness. Some forms change the spelling, using terms like “whyte,” but many others use references to white-colored objects. Emojis for white objects, for example 🦷, 🧻, 🚽, are used as literal replacements for “white,” such as “🦷 people.” Expressions like “blank Google document” or “8.5 x 11” – both of which refer to a white sheet of paper – use written descriptions of white objects instead of “white person/people.” Since there are so many white objects in the world, the possibilities are seemingly endless.

How do the new expressions contribute to community building?

Many self-censored forms are based on in-group knowledge, such as existing slang or past trends on the platform or cultural practices. When people use these forms, they signal to others in the group that they have some kind of social connection. The phrase “le dollar bean” originated from the censored form “le$bean”—an abbreviation for “lesbian”—being mispronounced by TikTok’s text-to-speech feature. The video that popularized the form was posted by a lesbian couple, and “le dollar bean” became a popular self-description phrase among lesbian TikTok users.

Some forms also signal a particular social or political viewpoint, so by using one of these forms you can show others that you are like-minded. “Isnotreal” and “Israhell” – both for “Israel” – are two recent examples of self-censored expressions that signal that a user is critical of Israel’s current war against the Palestinians in the Gaza Strip.

Do these linguistic innovations persist in other contexts?

Some of these innovations are niche, or only enjoy a brief moment of popularity before fading into the background, as is the case with many TikTok trends. Other self-censored forms have had long lives, migrating to other social media platforms or offline contexts. The word “unalive” for “kill” remains in use on TikTok, and can also be easily found with a keyword search on Twitter. “Accountant,” as a censored form of “sex worker,” emerged before TikTok even existed, but its use on the platform helped it spread to new communities. It’s not uncommon for words that emerge online to become part of people’s everyday vocabulary—think “rizz” or “trolling.” So there’s a high chance that as more innovative forms emerge, they’ll find their way into our offline interactions.
