US lawmakers target big tech 'amplification' but what does that really mean?


NEW YORK (NYTIMES) - United States lawmakers have spent years investigating how hate speech, misinformation and bullying on social media sites can lead to real-world harm. Increasingly, they have pointed a finger at the algorithms powering sites like Facebook and Twitter, the software that decides what content users will see and when they see it.

Some lawmakers from both parties argue that when social media sites boost the performance of hateful or violent posts, the sites become accomplices. And they have proposed bills to strip the companies of a legal shield that allows them to fend off lawsuits over most content posted by their users, in cases when the platform amplified a harmful post's reach.

The House Energy and Commerce Committee discussed several of the proposals at a hearing last Wednesday (Dec 1). The hearing also included testimony from Ms Frances Haugen, the former Facebook employee who recently leaked a trove of revealing internal documents from the company.

Removing the legal shield, known as Section 230, would mean a sea change for the Internet, because it has long enabled the vast scale of social media websites.

Ms Haugen has said that she supports changing Section 230, which is a part of the Communications Decency Act, so that it no longer covers certain decisions made by algorithms at tech platforms.

But what, exactly, counts as algorithmic amplification?

And what, exactly, is the definition of harmful?

The proposals offer very different answers to these crucial questions. And how they answer them may determine whether the courts find the bills constitutional.

Here is how the bills address these thorny issues.

1. What is algorithmic amplification?

Algorithms are everywhere. At its most basic, an algorithm is a set of instructions telling a computer how to do something. If a platform could be sued any time an algorithm did anything to a post, products that lawmakers are not trying to regulate might be ensnared.

Some of the proposed laws define the behaviour they want to regulate in general terms. A Bill sponsored by Minnesota Democrat Senator Amy Klobuchar would expose a platform to lawsuits if it "promotes" the reach of public health misinformation.

Ms Klobuchar's Bill on health misinformation would give platforms a pass if their algorithm promoted content in a "neutral" way. That could mean, for example, that a platform that ranked posts in chronological order would not have to worry about the law.

Other legislation is more specific. A Bill from Democrat representatives Anna Eshoo (California) and Tom Malinowski (New Jersey) defines dangerous amplification as doing anything to "rank, order, promote, recommend, amplify or similarly alter the delivery or display of information".

Another Bill written by House Democrats specifies that platforms could be sued only when the amplification in question was driven by a user's personal data.

"These platforms are not passive bystanders - they are knowingly choosing profits over people, and our country is paying the price," said Representative Frank Pallone Jr, chair of the Energy and Commerce Committee, in a statement when he announced the legislation.

Mr Pallone's new Bill includes an exemption for any business with five million or fewer monthly users. It also excludes posts that show up when a user searches for something, even if an algorithm ranks them, and Web hosting and other companies that make up the backbone of the Internet.
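In code, the distinction the bills are drawing might look roughly like the sketch below. It is a deliberately simplified, hypothetical illustration, not any platform's actual system: the post fields, scoring weights and function names are all invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical post record; the fields are invented for this example.
@dataclass
class Post:
    text: str
    posted_at: datetime
    likes: int
    shares: int

def rank_chronologically(posts: list[Post]) -> list[Post]:
    """'Neutral' ordering: newest first, blind to engagement and to the viewer."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def rank_by_predicted_engagement(posts: list[Post],
                                 user_interests: set[str]) -> list[Post]:
    """Personalised ordering: boosts posts the viewer is predicted to engage
    with, combining signals about the post with data about the user."""
    def score(post: Post) -> float:
        engagement = post.likes + 2 * post.shares  # invented weights
        # Boost posts that mention topics drawn from the user's personal data.
        personal_boost = sum(topic in post.text.lower()
                             for topic in user_interests)
        return engagement * (1 + personal_boost)
    return sorted(posts, key=score, reverse=True)
```

Under Ms Klobuchar's Bill, the first, chronological function would likely count as "neutral". Under the House Democrats' approach, it is something like the second function - because it draws on the user's personal data to boost certain posts - that could cost a platform its legal shield.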

2. What content is harmful?

Lawmakers and others have pointed to a wide array of content they consider to be linked to real-world harm.

There are conspiracy theories, which could lead some adherents to turn violent. Posts from terrorist groups could push someone to commit an attack, as one man's relatives argued when they sued Facebook after a member of Hamas fatally stabbed him.

Other policymakers have expressed concerns about targeted advertisements that lead to housing discrimination.

Most of the bills now in Congress address specific types of content.

Ms Klobuchar's Bill covers "health misinformation". But the proposal leaves it up to the Department of Health and Human Services to determine what, exactly, that means.

"The coronavirus pandemic has shown us how lethal misinformation can be, and it is our responsibility to take action," Ms Klobuchar said when she announced the proposal, which was co-written by New Mexico Democrat Senator Ben Ray Lujan.


The legislation proposed by Ms Eshoo and Mr Malinowski takes a different approach. It applies only to the amplification of posts that violate three laws - two that prohibit civil rights violations and a third that prohibits international terrorism.

Mr Pallone's Bill is the newest of the bunch and applies to any post that "materially contributed to a physical or severe emotional injury to any person".

This is a high legal standard: Emotional distress would have to be accompanied by physical symptoms. But it could cover, say, a teenager who views posts on Instagram that diminish her self-worth so much that she tries to hurt herself.

Some Republicans expressed concerns about that proposal last Wednesday, arguing that it would encourage platforms to take down content that should stay up.

Representative Cathy McMorris Rodgers of Washington, the top Republican on the committee, said it was a "thinly veiled attempt to pressure companies to censor more speech".

3. What do the courts think?

Judges have been sceptical of the idea that platforms should lose their legal immunity when they amplify the reach of content.

In the case involving the attack for which Hamas claimed responsibility, most of the judges who heard it agreed with Facebook that its algorithms did not cost it the protection of the legal shield for user-generated content. If Congress creates an exemption to the legal shield - and it stands up to legal scrutiny - courts may have to follow its lead.

But if the bills become law, they are likely to attract significant questions about whether they violate the First Amendment's free-speech protections.

Courts have ruled that the government cannot make benefits to an individual or a company contingent on the restriction of speech that the Constitution would otherwise protect.

"The issue becomes: Can the government directly ban algorithmic amplification?" said Jeff Kosseff, an associate professor of cyber security law at the US Naval Academy. "It's going to be hard, especially if you're trying to say you can't amplify certain types of speech."
