Congress urges tech giants to fortify against foreign meddling before next election

Attorneys for Facebook, Google and Twitter testified before Congress this week about Russia's meddling in last year's presidential election.

How did the most powerful tech companies in America let Russia and other malicious actors co-opt their platforms to sow discord and undermine U.S. democracy in the run-up to the 2016 presidential election?

That question hung over congressional hearings this week as lawmakers grilled top executives from Facebook, Twitter and Google. Members cajoled the tech giants to shoulder more responsibility for their outsized role in American political discourse as next year’s midterm election nears.

"I'm sure as you began your businesses and they grew, it was the idea of bringing people together and not tearing people apart," Rep. Brad Wenstrup, R-Ohio, said at the Nov. 1 House Intelligence Committee hearing. "The Wright brothers never intended the airplane to be used as a weapon of mass destruction. But that's what we're faced with in this world today, and we have to deal with it."

The scale of the Russian operation is staggering: On Facebook alone, more than 150 million people were likely exposed to Moscow’s disinformation campaign before the 2016 presidential election. This far-reaching effort included posts aimed at suppressing voter turnout by duping Americans into believing they could cast ballots via text message, as well as political ads on controversial topics calculated to inflame sentiments on both sides of an issue.

"This activity by the Russians is to go down in history as the greatest covert action campaign in the history of Mother Russia," said Rep. Will Hurd, R-Texas. "It has eroded trust in our public institutions, like our press, like our Congress, like some of our great American companies."

The tech giants sought to reassure Congress they’d redoubled their efforts to blunt future foreign interference. But lawmakers urged a stronger response, from legislation requiring more transparency in political advertising to closer cooperation with law enforcement. Here are some of the ideas that received a hearing this week.

Connecting more dots

Several lawmakers probed whether the tech firms’ procedures for identifying bad actors lurking on their platforms were enough to prevent future interference.

The companies’ representatives explained that their detection methods rely on an array of "signals" to suss out a user’s identity. In the case of suspected Russian meddling, for instance, they might look at an account’s country of registration, whether the phone number or email associated with the account is Russian or uses a Russian network, and whether the account has a Russian internet address. They may also look at banking information.

Some lawmakers blasted the tech firms for failing to act on obvious connections, such as the fact that some Russian ads were paid for with Russian rubles, and panned their track record of detecting and blocking hostile accounts.

"How did Facebook, which prides itself on being able to process billions of data points and instantly transform them into personal connections for its user, somehow not make the connection that electoral ads paid for in rubles were coming from Russia?" Sen. Al Franken, D-Minn., said at the Senate Judiciary Committee. "Those are two data points, American political ads and Russian money, rubles. How could you not connect those two dots?"

Franken asked the tech firms to promise not to accept foreign currency for American political ad buys, adding, "My goal is for you to think through this stuff a little bit better."

Other lawmakers struck a more conciliatory tone and encouraged stronger cooperation between the government and the tech industry to combat misinformation.

"Your companies are just beginning to come to grips with the scale and the depth of the problem," said Richard Burr, R-N.C., who chairs the Senate Select Committee on Intelligence. "That's encouraging, but know this: We do better when you do better. I'd urge you to keep that in mind and to work with us proactively to find the right solution to a very constant challenge."

The executives welcomed more help from the intelligence and law enforcement communities to identify threats and other bad actors, singling out the intelligence community’s January 2017 assessment that publicly revealed for the first time the extent of Russia’s foreign influence campaign.

"The intelligence assessment was a very important piece of information that caused us to look further," Facebook general counsel Colin Stretch told the House Intelligence Committee.

Who’s behind social media political ads?

The dizzying pace of technology sometimes puts lawmakers in the position of playing catch-up. Under current election law, for instance, political ads that run on television, on radio and in print must disclose their funding source, but no such requirement exists for social media.

Sen. Amy Klobuchar, D-Minn., used the congressional hearings to draw attention to a bipartisan bill she’s co-sponsoring, the Honest Ads Act, which would require the same transparency and disclosure from tech giants.

Both Twitter and Facebook borrowed ideas from the legislation to put in place new transparency measures.

Klobuchar gave a nod to these efforts but said without legislation "we're going to have a patchwork of ads from different companies ... without actual enforcement."

Separately, some lawmakers encouraged the tech firms to place labels on political ads to easily distinguish them from other content.

Both Twitter and Google said they were developing icons that users could click to access information about an advertiser's identity.

"We will be identifying very clearly whether or not something is a political ad, so that you can see it right away," Sean Edgett, Twitter’s acting general counsel, told lawmakers. "You'll either hover over or click on a spot to then see a full transparency center that gives you all that information right away."

Raising public awareness

Several lawmakers raised the issue of whether tech firms have a duty to inform users they’ve been exposed to false information.

"If you were in a medical facility, and you got exposed to a disease, the medical facility would have to tell the folks who were exposed," said Sen. Mark Warner, D-Va. "TV and radio make corrections. I think it's an interesting question about what obligation you might have."

Stretch of Facebook declined to say his company has such an obligation. Rather, he said, Facebook's obligation is to stop the activity, investigate it, and share the information with industry and government partners when necessary.

Likewise, Twitter gave little cause for optimism on this front. During the hearings it repeatedly touted its users’ ability to self-police falsehoods, and cited the voter suppression tweet as an example.

"We often see there's a lot of activity on the platform to correct false narratives," Twitter’s Edgett said. "The number of tweets that were counteracting (the voter suppression tweet) as false, and telling people not to believe that, was like between eight and 10 times what we saw on the actual tweets."

One outside expert called to testify before the Senate Judiciary Committee proposed a rating system for media outlets, with assessments based on an outlet’s accuracy and the amount of opinion versus reporting it publishes.

"I propose the equivalent nutrition labels for information outlets, a rating icon for news-producing outlets displayed next to their new links and social media feeds and search engines," said Clint Watts, a fellow at the Foreign Policy Research Institute. "Users wanting to consume information from outlets with a poor rating wouldn't be prohibited and if they are misled about the truth, they have only themselves to blame."

Lawmakers sought permission from the tech giants to publicize more of the offending content, in hopes of educating the public about the telltale signs of disinformation.