Commentary: Singapore’s move to regulate online safety faces challenges, but society can help

Singapore is considering two proposed codes of practice to promote online safety for users, both announced last week by the Ministry of Communications and Information (MCI).

The Code of Practice for Online Safety will require social media services identified as having high reach or high risk to put in place processes that protect users, while the Content Code for Social Media Services will grant the Infocomm Media Development Authority (IMDA) the power to direct social media services to disable access to harmful content or accounts.

The two codes are expected to work together to introduce greater protections for online users, and MCI’s announcement follows a global trend of authorities reviewing legislation amid growing concerns about online harms.

For example, the proposed European Union (EU) Digital Services Act, when passed, would require online platforms to introduce practices to identify risks, undergo independent audits, and share data with authorities and researchers.

Singapore’s move reflects concerns about online safety, especially for at-risk groups. As social media has become an integral part of daily life, even for minors, instances of harmful content and abuse have increased at an alarming rate.

You may have heard of the “Nth Room” case in South Korea, where several individuals, including Cho Ju-bin, used Telegram from 2018 to 2020 to blackmail more than 100 women and girls, and to distribute and sell sexually explicit content of them.

Similarly, in Singapore, a Telegram chat group called SG Nasi Lemak was set up in 2018 as a platform for users to share obscene videos and photos of women and girls.

In 2020, TikTok users reported distress and trauma after watching a video of a man killing himself on the popular video-sharing service. These are just a few examples of the online harms that have emerged in recent years.

The two proposed codes are distinct from previous online legislation, such as the Protection from Online Falsehoods and Manipulation Act (Pofma) and the Protection from Harassment Act (POHA).

While Pofma targets falsehoods, POHA offers protections against harassment and stalking, making it more relevant to the issue of online harms.

However, POHA alone is insufficient to cover the range of scenarios associated with online harms. For example, individuals may not even know that explicit content about them is being circulated, and so cannot file complaints under POHA.

POHA is also designed for victims of harassment to seek recourse; it does not engage social media platforms directly.

While social media platforms such as TikTok and Meta have issued statements welcoming the codes, working with the authorities means that the platforms will need to collect or synthesize data to compile the required reports, and shoulder greater responsibility for the harms that emerge on their platforms.

Increased monitoring and sharing of data can also be seen as at odds with privacy concerns, which have grown among end users who do not always understand the extent to which their personal data is anonymized.

Developing frameworks and guidelines that can be used to guide platform practices associated with user data is crucial, especially for vulnerable people such as minors.

For example, exemptions will need to be defined while outlining the data rights of individuals, and clarification will also be needed on how data is stored and/or disposed of in different online harm scenarios.

Looking ahead, it’s worth mentioning that instant messaging apps such as WhatsApp and Telegram are currently not covered by the codes, perhaps for technical reasons.

Instant messaging platforms such as WhatsApp use end-to-end encryption and self-destructing messages, which means messages are accessible only to the sender and recipient(s), or disappear after a set time.

Yet such features have been observed to fuel past cases of online harm, as in the Nth Room and SG Nasi Lemak cases. Questions remain about how to mitigate the role of instant messaging platforms in perpetuating online harms.

Another challenge concerns user-related practices.

In my own research, I’ve spoken to many parents who complain that they don’t know how to handle their children’s social media use, largely because, as non-users, they don’t always understand the features associated with different social media platforms.

Tech-savvy children and young people can also find ways to circumvent content moderation settings and protections.

These issues point to the fact that the codes alone are inadequate: instant messaging platforms can be out of reach because of the very way they are designed, and tech-savvy minors can find ways around the safeguards in place.

But viewing the codes as the sole solution to online harms would be a missed opportunity. Social media has become an integral part of our lives, and we need to come together as a society to tackle online harms.

This means that, as parents, we need to engage in dialogue with our children, and with one another, about their use of online platforms.

As educators, we can do more to go beyond teaching digital literacy and guide our students to become responsible digital citizens.

As fellow online users, we can look out for one another. As a society, we will need to bring our collective voices to bear on how we define online safety and the terms of engagement.

Online harms are a problem of our time, and society is better off when we all play our part in tackling them.

ABOUT THE AUTHOR:

Dr. Natalie Pang is a Senior Lecturer and Deputy Head in the Department of Communications and New Media, and a Senior Research Fellow at the Centre for Trusted Internet and Community, both at the National University of Singapore. She is also a member of the Sunlight Alliance for Action’s research working group to combat online harms, particularly those targeting women and girls.