WASHINGTON (BLOOMBERG) - Executives from Facebook, Twitter and Google told US lawmakers they are combating disinformation on a range of subjects, including the 2020 election, and have taken down videos, posts and messages deemed false or a risk to health and safety.
"Looking ahead to the November election, we are aware that the Covid-19 pandemic, widespread protests and other significant events can provide fodder for nation state sponsored disinformation campaigns," Richard Salgado, the director of law enforcement and information security at Google, said during a House Intelligence Committee hearing Thursday (June 18).
Facebook has studied the conversation surrounding 200 elections around the world as it prepares for November, said Nathaniel Gleicher, the company’s head of cybersecurity policy.
As the hearing got underway, Facebook said it removed posts and ads from Donald Trump’s campaign team that associated an upside-down triangle – a symbol Nazis used to identify political prisoners – with Antifa.
"We removed these posts and ads for violating our policy against organised hate,” Facebook said in a statement. "Our policy prohibits using a banned hate group’s symbol to identify political prisoners without the context that condemns or discusses the symbol."
"When we identify something like this we bank it within our system so we can look at other instances where it might appear so we can find it and remove it automatically," Gleicher told the lawmakers.
Committee chairman Adam Schiff, a Democrat from California, scheduled the hearing to focus on the influence of foreign actors on the social networks. He cited threats of manipulation from Russia, China and Iran and asked if tech companies can keep up.
"The nature of your platforms, all of them, is to embrace and monetise virality," Schiff said. "The more sensational, the more divisive, the more shocking or emotionally charged, the faster it circulates."
Google's YouTube service removed more than 200,000 videos and over 100 million ads to stem disinformation about the coronavirus pandemic and prevent advertisers from profiting from it, according to written testimony Salgado submitted ahead of the hearing.
Facebook has more than 35,000 people working on safety and security, three times the number in 2017, according to Gleicher.
Twitter has tracked the threat of disinformation related to the recent protests over racism and police brutality spurred by the death of George Floyd at the hands of police in Minneapolis, said Nick Pickles, the company's director of global public policy strategy and development.
"The public conversation on Twitter has highlighted the deep-rooted nature of issues related to race, justice, and equality," Pickles said, in a written statement. "While we have not seen evidence of concerted foreign state-backed efforts to manipulate the public conversation in recent weeks, we remain vigilant."
Similarly, Facebook has not found evidence of foreign state-backed manipulation tied to the protests, but it has seen financially motivated scammers try to profit from them – for example, by selling "non-existent T-shirts" to protesters, said Gleicher.
Representative Jim Himes, a Democrat from Connecticut, criticised Facebook’s algorithm for promoting "polarisation, division and anger" to increase engagement.
"If every single American household is full of toxic, explosive gas, as I think it is today, all it takes is a match from Russia, or Iran, or North Korea, or from China to set off a conflagration," he added.
When Facebook’s Gleicher disputed that claim, Himes requested data to back up the executive’s stance.
Meanwhile, Schiff characterised Google’s reputation as one of "keeping its head down and avoiding attention to its platform while others draw heat" – a claim that the company disputed in the hearing.
The executives from Twitter and Facebook said they’ve seen bad actors evolve their disinformation tactics over time.
Twitter has witnessed state-controlled media and government accounts being used to influence US opinion on the coronavirus pandemic and the protests – for example, Chinese state actors comparing the US police response to the recent protests with the policing response in Hong Kong.
"That shift from platform manipulation to overt state assets is something we’ve observed," Pickles said. "We have to keep one step ahead of this and keep looking at how bad actors change their behaviour."