At a glance: Singapore's proposed rules to reduce online harm, Latest Tech News - The New Paper

The Ministry of Communications and Information is currently conducting consultations with the tech industry and the public on new codes of practice for social media platforms.

When passed, the Code of Practice for Online Safety and the Content Code for Social Media Services will require platforms to implement safety standards and content moderation processes to minimise users' risk of exposure to harmful online content like terrorist propaganda.

The platforms will also need to ensure additional safeguards for users who are under 18 years old, including tools to help them or their parents minimise their exposure to inappropriate content such as sexual or violent videos, and unwanted interactions like online stalking and harassment.

The new rules are aimed at codifying these standards in law and giving the authorities powers to take action against platforms that fail to meet the requirements. They are expected to be added to the Broadcasting Act.

The ministry has not released many details on the specific stipulations of the codes as they are still being developed in collaboration with the tech industry, though it has said the Infocomm Media Development Authority (IMDA) will be empowered to direct social media services to disable access to harmful online content for Singapore users.

Platforms will also be required to produce annual accountability reports to be published on the IMDA website.

These reports will need to include metrics to show the effectiveness of their systems and processes.


Here are five types of online harm the Code of Practice for Online Safety and the Content Code for Social Media Services could cover.

1. Violence and terrorism

Social media platforms will need to proactively detect and remove violent videos or other content that promotes violence, such as terrorist propaganda.

One example is an incident in 2019, in which a gunman stormed a mosque in Christchurch, New Zealand, and fired on Muslim worshippers while live-streaming the terrorist attack on Facebook using a helmet-mounted camera. Clips of the footage quickly made their way onto other platforms like Twitter.

In another incident, rioters who stormed the US Capitol in Washington last year used social media to organise themselves and amplify their messages.

Similar material that incites violence, as well as content that encourages self-directed violence such as suicide, will also be covered under the new codes of practice.

2. Dangerous viral challenges

Social media "challenges" that encourage dangerous behaviour are another example of harmful online trends that the codes aim to reduce.

Last year, a 10-year-old Italian girl died after taking part in an online "blackout challenge" which encouraged users to choke themselves until they pass out.

Some platforms have taken steps to remove such challenges.

Viral video platform TikTok blocked hashtags and videos related to the "milk crate challenge" last year over concerns that participants could be seriously injured. The trend involved users filming themselves stacking milk crates into a tower and then climbing over them. Many videos showed users falling to the ground while attempting the stunt.

3. Sexual exploitation, abuse and harassment

The codes will also aim to minimise users' risk of exposure to sexual content and abuse, including child pornography. Platforms will be required to detect and remove child sexual exploitation and abuse material, as well as content that promotes sexual violence.

Sexual harassment, online stalking and threats of "revenge porn" - or non-consensual sharing of sexual images - will also be tackled. Social media platforms will be required to ensure users can easily report such unwanted interactions, assess the reports and take appropriate action in a timely manner.

One example of sexual harassment is a poll circulated on social media last year inviting people to rank local female asatizah (Muslim religious teachers) according to their sexual attractiveness.

4. Threats to public health

Content that threatens public health could also run afoul of the new rules.

During the Covid-19 pandemic, conspiracy theories and viral social media posts may have contributed to vaccine hesitancy and encouraged people to ignore measures implemented to control the spread of the virus.

Similar posts could constitute a threat to public health and may be covered under the rules to combat online harm.

5. Threats to racial and religious harmony

The new codes will take Singapore's unique context into account, including sensitive issues like race and religion. Offensive content or incidents that could stoke racial or religious tension will be covered.

One example is a case in which a man was charged with stoking racial tensions after he posted racially offensive tweets under the persona of a Chinese woman with the pseudonym Sharon Liew.

Another is a 2020 post by a person using a profile called "NUS Atheist Society" which depicted the Bible and the Quran as alternatives to be used in the event of a toilet paper shortage.
