Would a voice-only comment system revolutionise online discourse today?

DISCLAIMER: THIS IS JUST AN IDEA CREATED ON A WHIM. IT MAY NOT BE ORIGINAL AND IT MAY BE COMPLETELY UNREALISTIC. I DO NOT BELIEVE THAT IT WOULD INSTANTLY TRANSFORM THE ONLINE SPHERE TODAY. IT IS SIMPLY AN IDEA.

If you haven’t encountered it yet, the rise of online hate speech is a pressing issue that has increasingly dominated headlines. Discussions around this topic are often framed in the present tense, as if it were a new phenomenon, but in reality, hate speech has been a persistent problem since the dawn of the internet. The anonymity afforded by digital platforms allows individuals to spread hate with significant consequences. In these discussions, influential figures such as Elon Musk and Tommy Robinson frequently come into focus. With just a brief, insinuating tweet, they can mobilise their dedicated followers to amplify their messages. For instance, during the August Southport riots, Musk publicly questioned Robinson’s arrest under anti-terror laws, asking what actions were deemed “terrorism”. This conversation coincided with Robinson’s banned documentary, which garnered 33 million views on X. Unsurprisingly, Robinson praised Musk, claiming he was “the best thing to happen for free speech in this century”. At that time, Robinson had over 900,000 followers on X.

In light of this ongoing issue, I propose the introduction of a voice-only comment system as a potential solution. By requiring users to articulate their thoughts verbally, we can encourage more thoughtful expression and reduce the impulsivity often associated with text-based communication. This shift could foster a more accountable and respectful online discourse, which is essential in combating hate speech effectively.

According to Statista, during the second quarter of 2024, Facebook removed 7.2 million pieces of hate speech content, down from 7.4 million in the first quarter. Online hate can range from a negative remark about someone’s body on an Instagram post to a racist rant on X that results in riots and disruption across countries (as seen a few months ago in Southport). The results of online hate aren’t hard to find, and its effects aren’t to be downplayed. The cult of celebrity and online hate are connected in this way, and are part of the iceberg of online hate that we are most commonly exposed to on a regular basis. Controlling this digital monster has proved challenging for countries, and has been compared to a game of ‘whack-a-mole’: when one hateful account is banned, another rises to the surface. With access to online spaces comes a social responsibility for individuals to navigate them in the same manner they would a physical space, yet this is unrealistic. Our digital personas are far from an extension of our physical selves when partnered with anonymity and the influence of inflammatory content.

Carnegie Mellon University conducted a study on the rise of racist hate speech during the pandemic and discovered that automated accounts, also known as social bots, not only amplified the amount of hate in online conversations about race during the pandemic; they also managed to shift the targets of that hate, shaping social media dialogue about race. Double-checking a source before reading it isn’t a universally practised habit these days, and these bots can seriously shape the information we find on social media platforms. In 2023, The Guardian reported on the influence of bot activity on X around the false claim that Donald Trump had won the election. Using Alexandria Digital, a tool developed to monitor and identify the spread of misinformation and disinformation, researchers discovered a sprawling bot network of 1,305 accounts pushing out the false claim that the former president had won. Researchers found that the bots tended to disseminate the same information on a topic, in the same way, more than five times, with some tweeting up to 662 times a day. The momentum at which these bots work, partnered with the sheer network of influence they have on individuals actively spreading online hate, makes for a monster of hate that is hard to tackle.

Meghan Markle and Prince Harry looked at the spread of online hate back in 2021 during their Netflix documentary, and found that 55 primary hate accounts and 28 secondary hate accounts (which helped propel the content of the primary accounts) were responsible for 70% of all original hate targeting Markle. Bots, partnered with these small yet wide-reaching accounts dedicated to pushing hate into the digital stratosphere, are replacing single-purpose hate accounts. The equation of influence around online hate doesn’t simply involve hundreds of individuals pushing out the same hateful agenda on a certain topic; these results point to a hatred machine far more complex than that.
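To make the scale of the problem concrete, here is a minimal sketch, in Python, of the kind of rate-and-repetition heuristic a monitoring tool might apply to flag bot-like accounts. The thresholds are my own assumptions, loosely drawn from the figures above; this is not Alexandria Digital’s actual method.

```python
from collections import Counter

# Hypothetical thresholds, loosely based on the figures reported above:
# the bots in the study repeated the same claim five or more times,
# with some accounts posting up to 662 times a day.
MAX_DAILY_POSTS = 200       # assumed ceiling for plausible human activity
MAX_DUPLICATE_POSTS = 5     # assumed repetition limit for one message

def looks_like_bot(posts_today: list[str]) -> bool:
    """Flag an account whose daily volume or repetition exceeds
    human-plausible limits. A toy heuristic, not a real detector."""
    if len(posts_today) > MAX_DAILY_POSTS:
        return True
    if not posts_today:
        return False
    # Count how often the account's single most repeated post appears.
    top_repeat = Counter(posts_today).most_common(1)[0][1]
    return top_repeat >= MAX_DUPLICATE_POSTS

# An account pushing the same claim 662 times in a day is flagged.
print(looks_like_bot(["the same false claim"] * 662))  # True
```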

The psychology of anonymity on social media is a crucial factor to consider when looking at the inclination to disseminate online hate. Xinyu Pan (2023) focuses on the boldness and bravery that cyberspace breeds through this dimension of anonymity, finding that anonymity online ‘creates a condition under which individuals can freely express their views without having to concern about public pressure or regulatory repression, which in turn promotes discussion of controversial issues’. Judgement from others doesn’t have quite as much influence over individuals’ digital habits when they can detach from their identity altogether, meaning that this anonymity dangerously ‘reduces the cost of immoral behaviour’. A key finding of Pan’s research was that ‘individuals with higher perceived anonymity online show a greater tendency to commit cyber aggression’. Australia has been hugely vocal about cracking down on anonymous X accounts, even pushing for a complete ban in 2021. These calls were knocked down by the argument that anonymity has been ‘central to social debate across history and individual human development in repressive societies’, allowing people to explore their ‘heritage, sexuality, gender identity’. Taking away the right to anonymity online means taking away rights that are crucial to individuals’ protection from danger within society.

During the August attacks, I came up with a mock strategy against online hate whilst on a drive. Despite being aware of its flaws, I would find its implementation fascinating. The strategy involves introducing a voice-only comment system to platforms like X and Instagram (where most online hate takes place), encouraging users to vocally record their thoughts instead of typing them out. Provided the system were user-friendly and accompanied by editing tools to control the length of comments, users would still be able to vocally react and respond online; a rough sketch of what that might look like follows below.
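As a minimal sketch only, here is what the core of such a system might look like in Python. The sixty-second cap, field names and `VoiceComment` class are all my own assumptions; the original idea says only that comment lengths would be controlled.

```python
from dataclasses import dataclass

MAX_COMMENT_SECONDS = 60  # assumed cap; the idea only says lengths are limited

@dataclass
class VoiceComment:
    """A voice-only comment: the recorded clip plus basic metadata."""
    user_id: str
    audio_path: str          # path to the recorded audio clip
    duration_seconds: float

    def validate(self) -> None:
        # Reject clips that exceed the platform's length limit,
        # mirroring the editing tools described above.
        if self.duration_seconds > MAX_COMMENT_SECONDS:
            raise ValueError(
                f"Comment is {self.duration_seconds:.0f}s; "
                f"limit is {MAX_COMMENT_SECONDS}s."
            )

comment = VoiceComment("user_42", "reply.ogg", duration_seconds=48.0)
comment.validate()  # passes; a 90-second clip would raise ValueError
```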

This voice-only strategy comes with its clear flaws. Conversation on social media is upheld by the right to comment, and restrictions on text-based comments can be perceived as a form of censorship. Censorship, however, usually refers to the suppression of speech regarded as harmful, often by an authority, and a voice-only approach would not be designed with the purpose of silencing individuals, but with the aim of reshaping the manner in which dialogue takes place on social media. Inclusion is still promoted through this voice-only medium, carried out through a design that includes features like automatic transcription and subtitles.
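A sketch of that accessibility layer, assuming the open-source openai-whisper package as one possible speech-to-text backend (my assumption; the idea names no specific tool):

```python
# Every voice comment is automatically transcribed so it can be read
# as plain text or rendered as subtitles alongside the audio.
# pip install openai-whisper
import whisper

model = whisper.load_model("base")  # small general-purpose model

def transcribe_comment(audio_path: str) -> str:
    """Return the text of a voice comment for display as a subtitle."""
    result = model.transcribe(audio_path)
    return result["text"].strip()

print(transcribe_comment("reply.ogg"))
```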

The goal of this approach centres on reflection about what an individual contributes to social media. Saying something out loud involves hearing yourself speak it back. Although many who run hate accounts may voice the same views publicly in their social circles, the act of proliferating hate speech aloud bears more social responsibility. This approach doesn’t involve taking away the right to anonymity, but promotes a sense of accountability aimed at those who use that anonymity as a digital weapon. Social media has made it remarkably easy to leave a comment. Whether you are on a train or on a walk, typing requires little effort beyond a few swipes of a thumb. Speech can also seem easy to carry out, yet in a public space, a hateful message proves harder to propel into the digital stratosphere when one is surrounded by others. Again, the weight of responsibility is made heavier through this approach.

This approach also redefines the role of bots in the digital realm. As touched on earlier, text-based bots are relatively easy to create, yet building bots that convincingly produce natural-sounding vocal hate comments at the same rate of production as before (over 600 hate comments a day) requires a far more complex creation process. Human conversation also takes place at a rapid pace, and bots may find it impossible to keep up, especially where people expect immediate responses and emotional input. Every human voice comes with its own individual nuances, and these cannot be evoked in the same way by bots, no matter how technologically advanced. Whilst hate comments from bots may still arise, the ability to spot a comment as bot-made softens the blow of online hate. Bots may be easier to manage through this voice-only system, and, accompanied by emotion-detection algorithms, it could help social media platforms recognise aggressive or hateful tones more efficiently.
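A rough sketch of that moderation hook, assuming Hugging Face’s audio-classification pipeline with a publicly available speech-emotion checkpoint; the model name, the “ang” (anger) label and the threshold are assumptions for illustration, not a vetted moderation setup:

```python
# Run each voice comment through a speech-emotion classifier and
# queue angry-sounding clips for human review.
# pip install transformers torch librosa
from transformers import pipeline

classifier = pipeline("audio-classification",
                      model="superb/wav2vec2-base-superb-er")

def needs_review(audio_path: str, threshold: float = 0.6) -> bool:
    """Flag a clip whose dominant predicted emotion is anger."""
    scores = classifier(audio_path)  # list of {"label": ..., "score": ...}
    top = max(scores, key=lambda s: s["score"])
    return top["label"] == "ang" and top["score"] >= threshold

if needs_review("reply.ogg"):
    print("Queued for human moderation.")
```

Tone detection alone would not catch calmly delivered prejudice, so in practice it would sit alongside transcription-based text moderation rather than replace it.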

Harmful speech would still persist through this system, whether via advanced voice modulation software or the simple manipulation of the voice itself, such as a put-on accent or emotionally charged rhetoric falsely framed as meaningful and positive. Even when conversation is carried out in the physical world it is subject to interpretation, and microaggressions can still emerge, with prejudice expressed without the use of overtly hateful language. This approach would not completely transform the landscape of digital hate, and it would certainly anger a lot of people. Typing a comment, response or reaction online is a simple act that has granted access to digital spaces since the dawn of the internet. Taking away this basic liberty would leave individuals befuddled over their approach to the online realm, and perhaps expose a new social media user within them.

A voice-only approach to social media dialogue is not a flexible or realistic weapon against the spread of online hate, but it may play a meaningful part when looking at the role of social responsibility and the manipulation of anonymity in the pandemic of online hate we are facing today. A voice-only approach would require its own set of management rules (such as the emotion-detection algorithms sketched above), just like any tool on social media, as there will always be a digital strain that gains immunity to hate-prevention tools over time (as we have seen with the mass rise of bots). We are constantly encouraged to say how we feel, and a voice-only approach takes this to a whole new digital dimension when utilised by social media giants in the takedown of online hate.
