Google, Facebook, Twitter CEOs will face US lawmakers again: How to watch
March 19, 2021
It’s not the first time that Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey and Google CEO Sundar Pichai have been grilled by lawmakers about how they moderate content, but the coronavirus pandemic and the election season have put a larger spotlight on the topic. The virtual hearing comes as US lawmakers consider new regulation that could put more pressure on online platforms to do a better job of combating lies.
The House subcommittee on communications and technology and the House subcommittee on consumer protection and commerce are holding the joint hearing.
“For far too long, big tech has failed to acknowledge the role they’ve played in fomenting and elevating blatantly false information to its online audiences. Industry self-regulation has failed. We must begin the work of changing incentives driving social media companies to allow and even promote misinformation and disinformation,” Energy and Commerce Committee Chairman Frank Pallone Jr. (D-New Jersey), Communications and Technology Subcommittee Chairman Mike Doyle (D-Pennsylvania), and Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (D-Illinois) said in a statement in February.
Democrats, civil rights groups, celebrities and others have scrutinized tech companies for not doing enough to address this problem. At the same time, the platforms are also trying to fend off accusations they’re censoring speech from conservatives, which they repeatedly deny.
The hearing is called “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation.” Here’s what you need to know:
The hearing is scheduled for Thursday at 12 p.m. ET/9 a.m. PT.
What to expect
Facebook, Google and Twitter will likely outline all of the steps they’ve taken to curb the spread of misinformation and disinformation, but that probably won’t be enough to satisfy lawmakers.
The companies have labeled misinformation and directed people to more authoritative sources of information both during the 2020 US presidential election and the pandemic, although it’s not clear how effective these efforts have been. Facebook partners with third-party fact-checkers to flag misinformation and says it will show these posts lower within people’s feeds.
Twitter has been working on a new community-driven forum called Birdwatch that lets users identify misleading tweets. Google-owned YouTube said it also reduces recommendations for harmful misinformation and that “human evaluators” help determine whether a claim is inaccurate or a conspiracy theory. The three platforms have also removed health misinformation that includes false claims that could lead to physical harm.
All these efforts, though, haven’t stopped misinformation from spreading. False claims that 5G caused the coronavirus and misinformation that vaccines are toxic continue to permeate social media. During the election, misinformation about voter fraud, the QAnon conspiracy theory and other online lies spread on social media. Social networks have also had to grapple with fake accounts created to sow discord and disinformation during elections.
Former President Donald Trump was also notorious for spreading misinformation on social media during the election season. All three platforms suspended Trump over concerns about inciting violence following the deadly Jan. 6 Capitol Hill riot. Facebook’s oversight board is currently weighing whether to keep Trump’s ban in place, Twitter’s ban is permanent, and YouTube said it would lift the ban on Trump’s channel when the risk of violence decreases.