The new regulations, proposed on Monday, are needed to combat the spread of violent content and disinformation, the government says, insisting that young people must be shielded from the dangers of a virtual world that encourages suicide, cyberbullying, and inappropriate behaviour.
Search engines, online messaging services, and file-hosting sites will also come under the remit of a new United Kingdom regulator.
Peter Wanless, head of the National Society for the Prevention of Cruelty to Children, welcomed the proposals: "For too long social networks have failed to prioritize children's safety and left them exposed to grooming, abuse, and harmful content", he said.
The government outlined plans for an industry-funded regulator that would police the technology companies' platforms for harmful content, such as incitement to terrorism and child sexual exploitation.
For Silicon Valley, the UK's rules could amount to the most severe regulatory repercussion the tech industry has faced globally for not cleaning up a host of troubling content online.
Facebook, YouTube, and the more niche 8chan came in for severe criticism just last month after the suspected Christchurch shooter livestreamed his attack on two mosques and footage of the event spread across their platforms.
"The era of self-regulation for online companies is over", Digital Secretary Jeremy Wright said, adding he wanted the sector to be "part of the solution".
Industry lobbying bodies representing Facebook, Google, and other big tech firms say the proposed laws are too vague and may harm competition. Google declined to comment. But calls for big tech to be regulated have grown in recent years following a spate of controversial incidents, the most recent of which was the live-streaming of the mass shooting in New Zealand on Facebook.
Giving the government the power to dictate what content is appropriate sets a risky precedent, Mark Littlewood, director-general of the Institute of Economic Affairs, said.
Damian Collins, a member of the UK's ruling Conservative Party who chairs the Digital, Culture, Media and Sport Committee, cited the New Zealand terrorist attack as a reason for introducing regulation. He said the panel would hold hearings on the government's proposal in the coming weeks.

The government published its blueprint for regulation in a white paper on Monday, a precursor to actual legislation. "Online harms are widespread and can have serious consequences", the paper states. But regulators ultimately could play a role in scrutinizing a broader array of online harms, the UK said, including content "that may not be illegal but are nonetheless highly damaging to individuals or threaten our way of life in the UK".

The document offers a litany of potential areas of concern, including hate speech, coercive behaviour, and underage exposure to legal but age-inappropriate services, such as dating apps meant for people over 18. Many details, such as how harmful content will be defined and how long companies will have to take it down, have yet to be hammered out.
The proposals represent a shift from an older model, in which companies have often been shielded from liability for content their users disseminate by rules written a generation ago.
"Put simply, the tech companies have not done enough to protect their users and stop this shocking content from appearing in the first place", Home Secretary Sajid Javid said in a statement released by his office.