Leading tech giants TikTok, X (formerly Twitter), Google, and Meta have submitted official responses to the UK Parliament’s Science, Innovation and Technology Committee following their appearance at a February hearing.
The inquiry, focused on social media safety, misinformation, and harmful algorithms, prompted a series of follow-up questions from the committee chair.
The committee has now published the companies’ replies, offering insight into their approach to content moderation, misinformation, and the handling of harmful online material during critical incidents, such as last summer’s riots.
Meta’s Response on Content Moderation and UK Safety Rules
Meta acknowledged that recent updates to its Hateful Conduct Policy could be interpreted as permitting offensive statements, such as racist remarks, examples of which had been highlighted in leaked internal documents.
The company confirmed it has no current plans to expand its Third-Party Fact-Checking Programme or introduce Community Notes in the UK, although these features were launched in the US. Meta indicated it would evaluate UK-specific regulations, including the Online Safety Act, before making any policy changes.
In response to questions on research or risk assessments, Meta offered limited detail, stating that no single process governs changes to its Community Standards.
X Details Algorithm Process but Avoids Ad Revenue Questions
X outlined how its main feed algorithm and Community Notes system operate. Community Notes relies on input from users with differing viewpoints to determine whether a note is helpful, with the aim of improving trust in the accuracy of posts.
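To make that cross-viewpoint idea concrete, the sketch below shows one way a “bridging” consensus rule can work. It is a minimal illustration, not X’s actual implementation, which is more elaborate and, per X’s published Community Notes documentation, uses matrix factorisation over rater behaviour rather than fixed clusters; the cluster labels, ratings, and threshold here are invented.

```python
from statistics import mean

# Hypothetical ratings: note_id -> list of (viewpoint_cluster, rating),
# where a rating of 1.0 means "helpful" and 0.0 means "not helpful".
ratings = {
    "note_a": [("cluster_1", 1.0), ("cluster_1", 1.0),
               ("cluster_2", 1.0), ("cluster_2", 1.0)],
    "note_b": [("cluster_1", 1.0), ("cluster_1", 1.0),
               ("cluster_2", 0.0), ("cluster_2", 0.0)],
}

APPROVAL_THRESHOLD = 0.5  # invented cut-off for per-cluster agreement


def is_bridging_helpful(note_ratings):
    """Mark a note helpful only if raters in every viewpoint cluster,
    on average, found it helpful; agreement within a single cluster
    is not enough."""
    by_cluster = {}
    for cluster, rating in note_ratings:
        by_cluster.setdefault(cluster, []).append(rating)
    return all(mean(rs) > APPROVAL_THRESHOLD for rs in by_cluster.values())


for note_id, note_ratings in ratings.items():
    print(note_id, "shown" if is_bridging_helpful(note_ratings) else "not shown")
# note_a shown      (both clusters found it helpful)
# note_b not shown  (only one cluster found it helpful)
```

The design point is that a note cannot be surfaced by one side alone rating it up; it must also win over raters who usually disagree with that side.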
However, X refused to disclose whether advertisers paused campaigns during the summer riots, citing business confidentiality, despite earlier public statements suggesting an ad freeze had taken place.
X confirmed that, as of September 2024, only 1,275 employees worldwide were involved in content moderation, following widespread job reductions.
Google Addresses Misinformation but Avoids Financial Transparency
Google responded to concerns about false claims related to the Southport killings, saying it had demonetised the offending site, Channel3Now, within two days. However, the company did not disclose how much revenue it or the site earned from ads during this period.
The company also declined to comment on how its advertising systems may have contributed to public unrest or how much YouTube, which it owns, has earned from ads on content promoting eating disorders.
Google did, however, offer brief details on its efforts to prevent ads from appearing alongside harmful content.
TikTok Cites Confidentiality in Withholding Algorithm Details
TikTok refused to provide specific information on how its recommender algorithm and trust and safety features operate, citing the need to protect commercially sensitive data and prevent misuse by bad actors.
Committee Reaction and Ongoing Concerns
The committee welcomed the fact that some platforms acknowledged the importance of UK safety regulations, such as the Online Safety Act, in shaping future decisions on content moderation. There was also interest in X’s explanation of how its bridging algorithm supports more balanced contributions to Community Notes.
However, MPs expressed concern that several companies failed to fully address questions regarding the monetisation of harmful or misleading content, as well as transparency around their algorithms and advertising practices.
The committee stressed that while the tech giants had previously committed to transparency and accountability before Parliament, their limited responses suggest there is still a long way to go.
