The growing intimacy between online platforms and individuals is intensifying the malicious impact of cyberbullying and hate speech. On Nov. 2, students protested the lax management of Everytime, an online student community where cyberbullying frequently occurs under the cover of anonymity.
Hong Sung-soo, associate professor of law at Sookmyung Women’s University, defined hate speech as speech that draws on existing discrimination and reinforces stigma against social minorities. According to Hong, neglecting hate speech can suppress the activities of social minorities.
“I call it minority stress,” Hong said. “Hate speech not only leads to actual crimes; it also inflicts traumatic experiences on individuals.”
Hong further explained that leaving hate speech unaddressed, or failing to label it as such, can lead people to believe discrimination is inevitable.
To mitigate the problem, attempts to detect malicious comments with algorithms are underway. Major platforms such as Twitter, Facebook and Reddit have already deployed algorithms that warn users when a comment may contain hate speech targeting social minorities. Naver has also recently updated Cleanbot, its profanity-filtering algorithm, so that it can detect malicious intent hidden in context rather than only explicit swear words.
To explore the preconditions for the active development and spread of such algorithms and applications, Ewha Voice interviewed Lee Soo-jin, a data scientist at ST Unitas. In 2019, Lee and a teammate built, as a personal project, an algorithm that detects whether a sentence is written with ill intent.
Lee emphasized the importance of a dataset of malicious comments, the collected set of labeled examples used to train a machine learning algorithm.
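The article does not show Lee's actual data, but the kind of dataset he describes can be sketched as a list of comments paired with labels. The comments and labels below are invented for illustration, not drawn from the project itself.

```python
# Hypothetical sketch of a labeled dataset for malicious-comment
# detection: each example pairs a comment with a label
# (1 = malicious, 0 = benign). All entries here are invented.
dataset = [
    ("you are pathetic and everyone knows it", 1),
    ("great match, well played everyone", 0),
    ("nobody wants you on this server", 1),
    ("thanks for the carry, see you next game", 0),
]

# A roughly balanced label distribution helps a trained model avoid
# simply predicting the majority class.
malicious = sum(label for _, label in dataset)
benign = len(dataset) - malicious
print(f"{malicious} malicious, {benign} benign")  # 2 malicious, 2 benign
```

In practice such a dataset would contain thousands of examples, and the labels themselves reflect the judgment of the people who collected them, which is why Lee stresses the collection stage.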
To build the dataset, Lee and his teammate collected profanity and maliciously intended vocabulary from online gaming communities. He shared the main difficulties the team faced while gathering appropriate vocabulary.
“There were several cases where a sentence was not ill-intended despite including profanity,” Lee said.
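Lee's point can be illustrated with a minimal sketch of the naive approach his team had to move beyond: a filter that flags any sentence containing a banned word. The word list and example sentences are hypothetical, chosen only to show the two failure modes.

```python
# A naive keyword filter flags any sentence containing a banned word.
# BANNED is a hypothetical word list for illustration.
BANNED = {"damn"}

def keyword_filter(sentence: str) -> bool:
    """Return True if the sentence contains any banned word."""
    words = sentence.lower().split()
    return any(w.strip(".,!?") in BANNED for w in words)

# False positive: profanity without ill intent is still flagged.
print(keyword_filter("This damn printer broke again"))  # True

# False negative: abuse with no banned word passes the filter.
print(keyword_filter("You are worthless"))  # False
```

Both failures stem from ignoring context, which is why approaches like the updated Cleanbot try to judge intent from the surrounding sentence rather than from individual words.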
Lee also gave an example in which context entirely determines whether a sentence is meant to mock its target. In some extremely conservative communities, words referring to liberal parties were used as profanity, while in extremely liberal communities the opposite usage could be observed.
“I agree with what was said in a developer community a few years ago. It is most important that the members or managers of each community decide on regulations defining what is acceptable and what is not,” Lee said. “Shared interest is also required, since a community’s members know best which remarks are meant to mock.”
Asked what changes students want online platforms to make, “Ewhabyunnal” responded. Ewhabyunnal is an organization that stands up for the voices of social minorities and participated in the Nov. 2 protest.
“It is ironic how the notion of freedom of speech is used to belittle the voices of social minorities as emotional or too idealistic,” Ewhabyunnal said.
Ewhabyunnal was most concerned by the phenomenon in which hate speech, once uploaded, gathers anonymous agreement and comes to be seen as the mainstream opinion. It argued that more attention and care from the school and from platform operators are needed for students to socialize in an equal and safe environment.
Ewhabyunnal believes it is crucial that online platform managers accept guideline suggestions from youth human rights organizations and create a reporting category for hate speech.
Overall, Ewhabyunnal hoped that a shared awareness of hate speech would be fostered among users.
“While community regulations are essential to stop the rapid spread and reproduction of hate speech, users’ voluntary efforts are the most crucial,” Ewhabyunnal said.