Social Media Regulation - Inside Story (UK News)
Should social media be regulated?
Main question:
"Is regulation the answer and can it be done without violating personal freedoms?"
While sites including Facebook and Twitter allow us to share information, they have also become places for illegal and harmful content to thrive.
- Big tech giants should be held accountable for regulating the media on their platforms
"To help keep our community safe we haven't waited for regulation; we've created new technology, hired expert reviewers, worked with external specialists, and reviewed our policies to ensure they're fit for evolving challenges we face online".
-YouTube UK
"We have clear rules about what is said and what isn't allowed on our platforms and are investing billions in safety. We look forward to carrying on the discussion with the government, Parliament, and the rest of Industry as this process continues."
-Facebook UK
The regulation of web content is a complex area which touches on issues such as freedom of speech, censorship, and jurisdiction. There's also the question of defining what is subjectively offensive and what actually qualifies as harmful.
Offensive vs harmful content
- Looking at who the audience is: children, for example, need stricter censorship of violent content, backed by age limits
- Laws against hate speech, incitement to violence, etc. already exist in the public sector, but this raises the question of whether they should also apply in the private sector
The speed of the internet is also a huge issue because of the copious amounts of content being uploaded and how quickly it spreads. Because platforms are the ones held responsible, they may take down more content than necessary: there is a great deal at stake for these large tech companies once they are held accountable.
Responsibility is shifting more and more to online platforms as legislation adopts the approach of regulating illegal or potentially harmful content, with a focus on its swift removal. However, this raises human rights and freedom of expression concerns for online users, especially where platforms are required to remove content within a short timeframe. Such deadlines leave little room for content providers, or online users in general, to challenge the platforms' decisions, and they create a risk of excessive removal of legitimate speech.
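To make the over-removal incentive concrete, here is a toy model in Python. It is only a sketch: the fine, the wrongful-removal cost, and the simulated classifier scores are all invented for illustration, not taken from any real platform. The point is that when the penalty for leaving one illegal post up dwarfs the cost the platform bears for wrongly deleting a legal one, the cost-minimising takedown threshold drops and legitimate speech gets swept away.

```python
import random

random.seed(0)

FINE = 54_000_000        # hypothetical cost of one missed illegal item (a maximum fine)
WRONGFUL_REMOVAL = 100   # hypothetical cost to the platform of removing one legal item

# Simulated classifier scores: illegal items tend to score high, legal items low.
items = [(random.betavariate(8, 2), True) for _ in range(100)]      # illegal posts
items += [(random.betavariate(2, 8), False) for _ in range(9_900)]  # legal posts

def total_cost(threshold):
    """Platform's cost if every item scoring above `threshold` is removed."""
    missed = sum(1 for score, illegal in items if illegal and score <= threshold)
    over_removed = sum(1 for score, illegal in items if not illegal and score > threshold)
    return missed * FINE + over_removed * WRONGFUL_REMOVAL, over_removed

for t in (0.9, 0.5, 0.1):
    cost, over = total_cost(t)
    print(f"threshold {t}: cost {cost:,}, legal posts wrongly removed {over}")
# The fine dwarfs the cost of a wrongful removal, so the cheapest strategy
# is a very low threshold -- in other words, rational over-removal.
```

Under these assumed numbers, aggressive deletion is simply the rational strategy; any regime that fines under-removal but not over-removal tilts the maths the same way.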
We need to understand the distinction between content that is harmful but legal and content that actually violates existing national laws. Cyberbullying and other harmful societal phenomena online do not necessarily meet the threshold of being illegal from the perspective of national law.
Legal but harmful content
- YouTube currently employs about 10,000 people in monitoring, removing content, and policy development
- Facebook and Instagram have more than 35,000 people working on safety and security
- Ofcom, which also monitors the media, currently employs about a thousand people
It is not necessarily an issue of freedom of expression, but rather that platforms such as YouTube and Google depend on user-generated content: this is how they make their money and profits, and that dependence extends to illegal content and copyrighted material as well.
Over-removal and under-removal (of illegal content)
Human rights issues/illegality issues:
- Online harassment
- Abuse
- Cyberbullying
- Governments are responding to the concerns of their citizens and trying to tackle these issues
Importance of self-regulation
Difficulties with Journalism
- The question of how platform companies judge the validity of news stories, some of which contain disturbing content, e.g. violence or violent imagery
- Who makes the call over whether these stories are allowed to be viewed on social media or not
- Since people get their news more and more from platforms such as Facebook and Google rather than from news organisations, regulations will make it harder for those organisations to push out the news stories they want to release
Internet regulation and content:
Germany: "Social media companies must remove banned content within 24 hours or face fines of up to $54 million"
Australia: can face prison for failing to remove extremely violent content
New Zealand: 'Christchurch Corps' was created in response to Christchurch mosque shootings that were live streamed to Facebook to limit the spread of violent content
China: 'The Great Firewall' where several websites are banned outright and content is strictly regulated
Regulation will set minimum human rights standards and establish an environment of legal certainty, so that companies as well as users understand their duties and responsibilities.
- Transparency
- Automation
Artificial Intelligence (AI):
- General data protection - privacy to be protected
- Back-end processing - monetizing content, which can fuel illegal content
How far can we go in monitoring and managing content through AI in order to ensure legality?
There should be a focus on what is illegal and not on debating what is harmful content
AI needs to be accountable and transparent
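As a sketch of what accountable and transparent AI moderation could look like in practice (a minimal illustration only; the class, thresholds, and stand-in classifier below are hypothetical, not any platform's real system): automatically remove only high-confidence cases, send the grey zone to human reviewers, and log every decision so it can be audited and appealed.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ModerationPipeline:
    """Hypothetical pipeline: act automatically only on high-confidence cases,
    defer the grey zone to humans, and log every decision for auditability."""
    classifier: Callable[[str], float]  # returns an estimated P(illegal) for a post
    remove_above: float = 0.95          # assumed auto-removal threshold
    review_above: float = 0.60          # assumed human-review threshold
    audit_log: list = field(default_factory=list)

    def moderate(self, post_id: str, text: str) -> str:
        score = self.classifier(text)
        if score >= self.remove_above:
            decision = "removed"
        elif score >= self.review_above:
            decision = "sent_to_human_review"  # humans make the borderline calls
        else:
            decision = "kept"
        # Transparency: record the score and decision so users can appeal
        # and regulators can audit over- and under-removal rates.
        self.audit_log.append({"post": post_id, "score": round(score, 3),
                               "decision": decision})
        return decision

# Usage with a stand-in classifier (a real system would use a trained model):
pipeline = ModerationPipeline(classifier=lambda text: 0.99 if "attack" in text else 0.10)
print(pipeline.moderate("p1", "holiday photos"))       # kept
print(pipeline.moderate("p2", "planning the attack"))  # removed
print(pipeline.audit_log)
```

The audit log is what turns automation from a black box into something a regulator, or an affected user, can actually interrogate.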
Should there be a consolidated global approach, where all countries come together and make a plan despite their varying approaches (with reference to cultural values and what is acceptable, e.g. sexuality, religion)?
Bias coding - gender and race
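One concrete way to check for such bias (a sketch with synthetic data; the groups and numbers below are hypothetical) is to compare the rate at which a moderation system wrongly removes legal posts across demographic groups; a large gap means one group's legitimate speech is being suppressed more than another's.

```python
# Sketch: compare wrongful-removal rates across two demographic groups.
# Each entry is (was_removed, was_actually_legal); data here is synthetic.

def wrongful_removal_rate(decisions):
    """Share of legal posts that the system removed anyway."""
    legal = [removed for removed, is_legal in decisions if is_legal]
    return sum(legal) / len(legal) if legal else 0.0

group_a = [(True, True), (False, True), (False, True), (True, False)]
group_b = [(True, True), (True, True), (False, True), (True, False)]

print(f"group A: {wrongful_removal_rate(group_a):.0%} of legal posts removed")  # 33%
print(f"group B: {wrongful_removal_rate(group_b):.0%} of legal posts removed")  # 67%
# A persistent gap like this across gender or racial groups is evidence
# that the moderation model is biased, not merely imperfect.
```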
Any solution has to work for everybody in all parts of the world. This is something that can only really be done on a human scale, and it needs to be done by humans first before we try to automate it.
Companies are global on the internet but regulation is local
The British government has said that it will protect online users' rights by safeguarding free speech, promoting technology, and ensuring businesses are not unduly impacted.