Facebook, Twitter, Other Internet Firms to Be Responsible for Content Under UK Plan
Following a consultation, the UK government said on Wednesday it planned to legislate to ensure companies had systems in place to tackle harmful content such as child abuse, cyberbullying and terrorist propaganda.
The policy, which will be developed in the coming months, would not place an undue burden on business, the government said. Penalties had not yet been decided, but it said the new rules would be enforced in a “fair, proportionate and transparent way.”
Governments globally are wrestling with how to better control content on social media platforms, which are often blamed for encouraging abuse and the spread of online pornography, and for influencing or manipulating voters.
Germany introduced tough social media regulations in 2018 under which platforms can be fined if they do not review and remove illegal content within 24 hours of it being posted. Australia has passed similar legislation.
“As the internet continues to grow and transform our lives it is essential that we get the balance right between a thriving, open and vibrant virtual world, and one in which users are protected from harm,” Britain’s Digital Minister Nicky Morgan and Interior Minister Priti Patel said in a statement.
The new regulations will apply to platforms on which user-generated content is shared, for example through comments, forums or video sharing.
The regulator, most likely media watchdog Ofcom, must be able to take action against tech bosses who do not take online safety seriously, the government said, adding that it would set out its position on senior manager liability in the coming months.
Ben Packer, a lawyer at Linklaters who has advised technology companies, said the proposals showed Britain was committed to implementing one of the most ambitious regulatory frameworks yet, which would have a significant impact on tech giants.
BETTER REGULATION
Facebook and Google said they would work with the UK government on the new regulations.
Facebook said it had long called for better regulation.
“New rules are needed so that we have a more common approach across platforms and private companies aren’t making so many important decisions alone,” said Rebecca Stimson, Facebook’s head of UK public policy.
“This is a complex challenge as any new rules need to protect people from harm without undermining freedom of expression or the incredible benefits the internet has brought.”
Keeping people safe was something Facebook took extremely seriously, she said, and in recent years the company had tripled the number of people working on the issue to 35,000 and was using artificial intelligence to find and remove harmful content.
Social media companies have largely self-regulated, as the law has struggled to keep up with technology.
The managing director of Google’s YouTube UK, Ben McOwen Wilson, said the platform looked forward to working with the government to ensure a free, open and safer internet.
“To help keep our community safe, we haven’t waited for regulation; we’ve created new technology, hired expert reviewers, worked with external specialists, and reviewed our policies to ensure they are fit for the evolving challenges we face online,” he said.
Britain first announced last year that it would develop new online safety laws, saying they would be the toughest in the world.
Packer said the proposals announced on Wednesday moved away from the previous debate about whether social media companies should be classified as ‘publishers’, and therefore subject to libel and other laws, and focused instead on making platforms responsible for the systems they had in place to deal with harmful content.
(Editing by Guy Faulconbridge, Louise Heavens and Susan Fenton)