Pro-China misinformation group continues spreading messages, researchers say.
The network has promoted a variety of falsehoods about human rights abuses and the coronavirus.
Two years ago, researchers uncovered details about a disinformation network that made a coordinated effort to push Chinese government messaging outside the country. Now, a separate research group has said that the network is still at it, despite efforts by social media companies to stop it.
More than 2,000 accounts continued to spread Chinese propaganda in the last year, according to a new report from the disinformation research group Miburo. They have promoted such falsehoods as the denial of human rights abuses in China’s Xinjiang region, where the Communist Party has carried out repressive policies against the Uyghurs, a Muslim ethnic minority, and Covid-19 misinformation, like the conspiracy that the U.S. military developed the coronavirus as a bioweapon.
The accounts point to a “well-resourced, high-skill actor that keeps reappearing,” said Nick Monaco, the director of China research at Miburo. He added that the timing and messaging of the posts in the network aligned perfectly with public messaging put out by the Chinese government in the last year.
Miburo said it was difficult to determine whether the influence campaign was organized by the ruling Communist Party or whether some accounts were run by nationalist citizens. But “knowing who pressed the enter key is less important” than the implication of a well-known actor spreading Chinese propaganda “at a high volume on international social media networks,” Mr. Monaco said in a blog post about the campaign.
China is known to use social media to broadcast its political messages with the aim of shaping global opinion. In June, The New York Times and ProPublica revealed the existence of thousands of videos orchestrated by the Chinese government in which citizens denied abuses in Xinjiang. This week, The Times reported on a set of documents that showed how Chinese officials tap private businesses to generate propaganda on demand.
Miburo said the network, nicknamed “Spamouflage” by researchers, was first discovered by the research group Graphika in a 2019 report. Though some posts have since been removed, Miburo tracked around 2,000 more accounts on Facebook, YouTube and Twitter from January 2021 to this month that the companies largely failed to remove.
Miburo found nearly 8,000 YouTube videos in the network in the past year that collected over 3.6 million views, and links to the videos were posted on both Facebook and Twitter. The researchers also found 1,632 accounts in the network on Facebook, including some accounts that used fake profile photos generated with the help of artificial intelligence and Bangladeshi Facebook pages that later changed their names and started to post about China.
In early December, 287 YouTube channels spreading the Chinese propaganda were still up, Mr. Monaco said. All were removed after the researchers sent their data set to YouTube.
Farshad Shadloo, a YouTube spokesman, said the channels were terminated in the last month as part of YouTube’s continuing investigation into coordinated influence operations linked to China. He said most of the channels had uploaded “spammy content” and that “a very small subset uploaded content in Chinese and English about China’s Covid-19 vaccine efforts and social issues in the U.S.”
Twitter said it permanently suspended a number of accounts based on Miburo’s report under its platform manipulation and spam policy. Margarita Franklin, a Facebook spokeswoman, said that the company would continue to work with researchers to detect and block the attempts of networks “to come back, like some of the accounts mentioned in this report.”
Facebook said that while some of the accounts flagged by Miburo resembled the behavior of Spamouflage, it could not yet confirm their connection to the network without more research. A handful of accounts spotted by Miburo were false positives, the company said.
In January, according to Miburo’s report, a Facebook user linked to a YouTube video that spread propaganda about coronavirus vaccines. “Many countries [prefer to] buy Chinese vaccines first, U.S. vaccines have side effects,” the post said.
By August and September, several Facebook accounts began pushing the false conspiracy that Covid-19 was developed in Fort Detrick, an American military base in Maryland, and alleged that the U.S. military was behind the coronavirus.
But Mr. Monaco argued that the most troubling new aspect of this version of the Spamouflage campaign was “the malice of spreading propaganda that denies human rights atrocities on a mass scale” by posting about Xinjiang.
On June 27, two different Facebook pages in the network posted identical messages within 10 minutes of each other, falsely denying forced labor and genocide in Xinjiang and characterizing it as “the lie of the century,” an unattributed quote from Zhao Lijian, a Chinese Ministry of Foreign Affairs spokesman.
Last month, reporting on newly disclosed financial documents showed that Kenya’s president, Uhuru Kenyatta, and members of his family were linked to 13 offshore companies with hidden assets of more than $30 million. The findings, part of the leaked documents known as the Pandora Papers, initially generated outrage online among Kenyans.
But within days, that sentiment was hijacked on Twitter by a coordinated misinformation campaign, according to a new report published by the nonprofit Mozilla Foundation. The effort generated thousands of messages supporting the president, whose term is ending, and criticizing the release of the documents.
“Like clockwork, an alternative sentiment quickly emerged, supporting the president and his offshore accounts,” said Odanga Madung, a fellow at Mozilla and an author of the report.
“Kenyan Twitter was awash in Pandora Paper astroturfing,” he said.
The research underscores how online platforms based in the United States still struggle to police inauthentic behavior abroad. Internal documents obtained by Frances Haugen, the former Facebook product manager turned whistle-blower, repeatedly showed how the social network failed to adequately police hate speech and misinformation in countries outside North America, where 90 percent of its users reside.
Ann-Marie Lowry, a Twitter spokeswoman, said in a statement that the company’s uniquely open nature empowered research such as Mozilla’s. “Our top priority is keeping people safe, and we remain vigilant about coordinated activity on our service,” Ms. Lowry said. “We are constantly improving Twitter’s auto-detection technology to catch accounts engaging in rule-violating behavior as soon as they pop up on the service.”
Mr. Madung and another researcher, Brian Obilo, looked at over 10,000 tweets discussing mentions of Mr. Kenyatta in the Pandora Papers over a four-week period. They found a campaign of nearly 5,000 tweets with thousands of likes and shares that was “clearly inauthentic” and “coordinated to feign public support,” according to the research.
The 1,935 accounts that participated in the campaign tweeted for days only about the Pandora Papers and Mr. Kenyatta, and pushed hashtags like #phonyleaks and #offshoreaccountfacts onto Twitter’s dedicated sidebar for trending topics by posting them repeatedly. The researchers noted that many of these accounts had been part of a previous disinformation campaign in May, tweeting pro-government propaganda, which they had flagged to Twitter. The company took down some of those accounts but allowed others to remain up.
“Before chest thumping and yawning mercilessly that H.E had stolen your money, know first when the offshore accounts were acquired #OffshoreAccountFacts,” said one tweet posted as part of the campaign, using a shorthand for “His Excellency” to refer to Mr. Kenyatta. The post collected 341 likes and shares on Twitter before the account was suspended.
A similar campaign was attempted on Facebook, but the researchers found only 12 posts there, with under 100 interactions, Mr. Madung said.
After the researchers shared the report with Twitter’s policy team, the company suspended more than 230 accounts for violating its platform manipulation and spam policies. It added that it would continue to work with third-party organizations that helped to identify tweets or accounts that violated the social network’s policies.
[Chart: Share of election-related posts on social platforms linking to videos making claims of fraud]
YouTube’s stricter policies against election misinformation were followed by sharp drops in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released on Thursday, underscoring the video service’s power across social media.
Researchers at the Center for Social Media and Politics at New York University found a significant rise in election fraud YouTube videos shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud that were shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.
But the proportion of election fraud claims shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of election fraud content from YouTube that was shared on Twitter had dropped below 20 percent for the first time since the election.
The proportion fell further after Jan. 7, when YouTube announced that any channels that violated its election misinformation policy would receive a “strike,” and that channels that received three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.
The trend was replicated on Facebook. A postelection surge in sharing videos containing fraud theories peaked at about 18 percent of all videos on Facebook just before Dec. 8. After YouTube introduced its stricter policies, the proportion fell sharply for much of the month, before rising slightly before the Jan. 6 riot at the Capitol. The proportion dropped again, to 4 percent by Inauguration Day, after the new policies were put in place on Jan. 7.
To reach their findings, researchers collected a random sampling of 10 percent of all tweets each day. They then isolated tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using a Facebook-owned social media analytics tool, CrowdTangle.
From this large data set, the researchers filtered for YouTube videos about the election broadly, as well as about election fraud using a set of keywords like “Stop the Steal” and “Sharpiegate.” This allowed the researchers to get a sense of the volume of YouTube videos about election fraud over time, and how that volume shifted in late 2020 and early 2021.
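The filtering step the researchers describe can be sketched in a few lines. This is a hypothetical illustration, not the NYU team’s actual pipeline: the keyword list, tweet format and helper names are stand-ins for whatever the study used.

```python
import re

# Hypothetical keyword list standing in for the researchers' actual set.
FRAUD_KEYWORDS = ["stop the steal", "sharpiegate", "election fraud"]

# Matches links to YouTube videos, including the youtu.be short domain.
YOUTUBE_LINK = re.compile(r"https?://(?:www\.)?(?:youtube\.com|youtu\.be)/\S+")

def is_youtube_share(tweet_text: str) -> bool:
    """True if the tweet contains a link to a YouTube video."""
    return bool(YOUTUBE_LINK.search(tweet_text))

def mentions_fraud(tweet_text: str) -> bool:
    """True if the tweet matches any election-fraud keyword."""
    text = tweet_text.lower()
    return any(kw in text for kw in FRAUD_KEYWORDS)

def daily_fraud_share(tweets):
    """Fraction of YouTube-sharing tweets that also match fraud keywords."""
    yt_tweets = [t for t in tweets if is_youtube_share(t)]
    if not yt_tweets:
        return 0.0
    return sum(mentions_fraud(t) for t in yt_tweets) / len(yt_tweets)
```

Run daily over a sample of tweets, a measure like this would produce the time series the study tracks: the share of YouTube-linking posts that concern election fraud, and how it shifts after a policy change.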
Misinformation on major social networks has proliferated in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has toughened its policies, such as banning all antivaccine misinformation and suspending the accounts of prominent antivaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.
Ivy Choi, a YouTube spokeswoman, said that YouTube was the only major online platform with a presidential election integrity policy. “We also raised up authoritative content for election-related search queries and reduced the spread of harmful election-related misinformation,” she said.
Megan Brown, a research scientist at the N.Y.U. Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos that promoted election fraud. It is also possible that interest in the election fraud theories dropped considerably after states certified their election results.
But the bottom line, Ms. Brown said, is that “we know these platforms are deeply interconnected.” YouTube, she pointed out, has been identified as one of the most-shared domains across other platforms, including in both of Facebook’s recently released content reports and N.Y.U.’s own research.
“It’s a huge part of the information ecosystem,” Ms. Brown said, “so when YouTube’s platform becomes healthier, others do as well.”
Google said it will no longer display advertisements on YouTube videos and other content that promote inaccurate claims about climate change.
The decision, by the company’s ads team, means that websites and YouTube creators will no longer be permitted to earn advertising money via Google for content that “contradicts well-established scientific consensus around the existence and causes of climate change.” Nor will Google allow ads promoting such views to appear.
“In recent years, we’ve heard directly from a growing number of our advertising and publisher partners who have expressed concerns about ads that run alongside or promote inaccurate claims about climate change,” the company said.
The policy applies to content that refers to climate change as a hoax or a scam, denies the long-term trend that the climate is warming, or denies that greenhouse gas emissions or human activity is contributing to climate change.
Google limits or restricts advertising alongside certain sensitive topics or events, such as firearms-related videos or content about a tragic event. This is the first time Google has added climate change denial to the list.
Facebook, Google’s main rival for digital advertising dollars, does not have an explicit policy outlawing advertisements denying climate change.
In addition to not wanting to be associated with climate change misinformation, ad agencies, in an echo of their shift away from the tobacco business decades earlier, have begun to re-evaluate their association with fossil-fuel clients. Agencies such as Forsman & Bodenfors have signed pledges to no longer work for oil and gas producers. Calls have increased to ban the industry from advertising on city streets and sponsoring sports teams.
Greenpeace USA and other environmental groups filed a complaint with the Federal Trade Commission earlier this year accusing Chevron of “consistently misrepresenting its image to appear climate-friendly and racial justice-oriented, while its business operations overwhelmingly rely on climate-polluting fossil fuels.” Exxon faces lawsuits from Democratic officials in several states accusing it of using ads, among other methods, to deceive consumers about climate change.
Publications such as the British Medical Journal, The Guardian and the Swedish publications Dagens Nyheter and Dagens ETC have limited or stopped accepting fossil fuel ads. The New York Times prevents oil and gas companies from sponsoring its climate newsletter, its climate summit or its podcast “The Daily,” but it allows the industry to advertise elsewhere.
YouTube said on Wednesday that it was banning the accounts of several prominent anti-vaccine activists from its platform, including those of Joseph Mercola and Robert F. Kennedy Jr., as part of an effort to remove all content that falsely claims that approved vaccines are dangerous.
In a blog post, YouTube said it would remove videos claiming that vaccines do not reduce rates of transmission or contraction of disease, and content that includes misinformation on the makeup of the vaccines. Claims that approved vaccines cause autism, cancer or infertility, or that the vaccines contain trackers, will also be removed.
The platform, which is owned by Google, has had a similar ban on misinformation about the Covid-19 vaccines. But the new policy expands the rules to misleading claims about long-approved vaccines, such as those against measles and hepatitis B, as well as to falsehoods about vaccines in general, YouTube said. Personal testimonies relating to vaccines, content about vaccine policies and new vaccine trials, and historical videos about vaccine successes or failures will be allowed to remain on the site.
“Today’s policy update is an important step to address vaccine and health misinformation on our platform, and we’ll continue to invest across the board” in policies that bring its users high-quality information, the company said in its announcement.
In addition to barring Dr. Mercola and Mr. Kennedy, YouTube removed the accounts of other prominent anti-vaccination activists such as Erin Elizabeth and Sherri Tenpenny, a company spokeswoman said.
The new policy puts YouTube more in line with Facebook and Twitter. In February, Facebook said it would remove posts with erroneous claims about vaccines, including assertions that vaccines cause autism or that it is safer for people to contract the coronavirus than to receive vaccinations against it. But the platform remains a popular destination for people discussing misinformation, such as the unfounded claim that the pharmaceutical drug ivermectin is an effective treatment for Covid-19.
In March, Twitter introduced its own policy that explained the penalties for sharing lies about the virus and vaccines. But the company applies a five-strike rule before it permanently bars people for violating its coronavirus misinformation policy.
The accounts of high-profile anti-vaccination activists like Dr. Mercola and Mr. Kennedy remain active on Facebook and Twitter, although Instagram, which Facebook owns, has suspended Mr. Kennedy’s account.
YouTube started looking into broadening its policy on anti-vaccine content shortly after creating a set of rules around Covid-19 vaccine misinformation in October, according to a person close to the company’s policymaking process, who would speak only anonymously because he was not permitted to discuss the matters publicly. YouTube found many videos about the coronavirus vaccine spilled over into general vaccine misinformation, making it difficult to tackle Covid-19 misinformation without addressing the broader issue.
But creating a new set of rules and enforcement policies took months, because it is difficult to rein in content across many languages and because of the complicated debate over where to draw the line on what users can post, the person said. For example, YouTube will not remove a video of a parent talking about a child’s negative reaction to a vaccine, but it will remove a channel dedicated to parents providing such testimonials.
Misinformation researchers have for years pointed to the proliferation of anti-vaccine content on social networks as a factor in vaccine hesitation — including slowing rates of Covid-19 vaccine adoption in more conservative states. Reporting has shown that YouTube videos often act as the source of content that subsequently goes viral on platforms like Facebook and Twitter, sometimes racking up tens of millions of views.
“One platform’s policies affect enforcement across all the others because of the way networks work across services,” said Evelyn Douek, a lecturer at Harvard Law School who focuses on online speech and misinformation. “YouTube is one of the most highly linked domains on Facebook, for example.”
She added: “It’s not possible to think of these issues platform by platform. That’s not how anti-vaccination groups think of them. We have to think of the internet ecosystem as a whole.”
Prominent anti-vaccine activists have long been able to build huge audiences online, helped along by the algorithmic powers of social networks that prioritize videos and posts that are particularly successful at capturing people’s attention. A nonprofit, the Center for Countering Digital Hate, published research this year showing that a group of 12 people were responsible for sharing 65 percent of all anti-vaccine messaging on social media, calling the group the “Disinformation Dozen.” In July, the White House cited the research as it criticized tech companies for allowing misinformation about the coronavirus and vaccines to spread widely, sparking a tense back-and-forth between the administration and Facebook.
Several people listed in the Disinformation Dozen no longer have channels on YouTube, including Dr. Mercola, an osteopathic physician who took the top spot on the list. His following on Facebook and Instagram totals more than three million, while his YouTube account, before it was taken down, had nearly half a million followers. Dr. Mercola’s Twitter account, which is still live, has over 320,000 followers.
YouTube said that in the past year it had removed over 130,000 videos for violating its Covid-19 vaccine policies. But this did not include what the video platform called “borderline videos” that discussed vaccine skepticism on the site. In the past, the company simply removed such videos from search results and recommendations, while promoting videos from experts and public health institutions.
Daisuke Wakabayashi contributed reporting. Ben Decker contributed research.
Facebook has become more aggressive at enforcing its coronavirus misinformation policies in the past year. But the platform remains a popular destination for people discussing how to acquire and use ivermectin, a drug typically used to treat parasitic worms, even though the Food and Drug Administration has warned people against taking it to treat Covid-19.
Facebook has taken down a handful of the groups dedicated to these discussions. But dozens more remain up, according to recent research. In some of those groups, members discuss strategies to evade the social network’s rules.
Media Matters for America, a liberal watchdog group, found 60 public and private Facebook groups dedicated to ivermectin discussion, with tens of thousands of members in total. After the organization flagged the groups to Facebook, 25 of them closed down. The remaining groups, which were reviewed by The New York Times, had nearly 70,000 members. Data from CrowdTangle, a Facebook-owned social network analytics tool, showed that the groups generate thousands of interactions daily.
Facebook said it prohibited the sale of prescription products, including drugs and pharmaceuticals, across its platforms, including in ads. “We remove content that attempts to buy, sell or donate for ivermectin,” Aaron Simpson, a Facebook spokesman, said in an emailed statement. “We also enforce against any account or group that violates our Covid-19 and vaccine policies, including claims that ivermectin is a guaranteed cure or guaranteed prevention, and we don’t allow ads promoting ivermectin as a treatment for Covid-19.”
In some of the ivermectin groups, the administrators — the people in charge of moderating posts and determining settings like whether the group is private or public — gave instructions on how to evade Facebook’s automated content moderation.
In a group called Healthcare Heroes for Personal Choice, an administrator instructed people to remove or misspell buzzwords and to avoid using the syringe emoji.
An administrator added, referring to video services like YouTube and BitChute: “If you want to post a video from you boob or bit ch ut e or ru m b l e, hide it in the comments.” Facebook rarely polices the comments section of posts for misinformation.
Facebook said that it broadly looks at the actions of administrators when determining whether a group breaks the platform’s rules, and that when moderators do break the rules, their violations count as strikes against the overall group.
The groups also funnel members into alternative platforms where content moderation policies are more lax. In a Facebook group with more than 5,000 members called Ivermectin vs. Covid, a member shared a link to join a channel on Telegram, a messaging service, for further discussion of “the latest good news surrounding this miraculous pill.”
“Ivermectin is clearly the answer to solve covid and the world is waking up to this truth,” the user posted.
After The Times contacted Facebook about the Ivermectin vs. Covid group, the social network removed it from the platform.
Two decades ago, Wikipedia arrived on the scene as a quirky online project that aimed to crowdsource and document all of human knowledge and history in real time. Skeptics worried that much of the site would include unreliable information, and frequently pointed out mistakes.
But now, the online encyclopedia is often cited as a place that, on balance, helps combat false and misleading information spreading elsewhere.
Last week, the Wikimedia Foundation, the group that oversees Wikipedia, announced that Maryana Iskander, a social entrepreneur in South Africa who has worked for years in nonprofits tackling youth unemployment and women’s rights, will become its chief executive in January.
We spoke with her about her vision for the group and how the organization works to prevent false and misleading information on its sites and around the web.
Give us a sense of your direction and vision for Wikimedia, especially in such a fraught information landscape and in this polarized world.
There are a few core principles of Wikimedia projects, including Wikipedia, that I think are important starting points. It’s an online encyclopedia. It’s not trying to be anything else. It’s certainly not trying to be a traditional social media platform in any way. It has a structure that is led by volunteer editors. And as you may know, the foundation has no editorial control. This is very much a user-led community, which we support and enable.
The lessons to learn from, not just with what we’re doing but how we continue to iterate and improve, start with this idea of radical transparency. Everything on Wikipedia is cited. It’s debated on our talk pages. So even when people may have different points of view, those debates are public and transparent, and in some cases really allow for the right kind of back and forth. I think that’s the need in such a polarized society — you have to make space for the back and forth. But how do you do that in a way that’s transparent and ultimately leads to a better product and better information?
And the last thing that I’ll say is, you know, this is a community of extremely humble and honest people. As we look to the future, how do we build on those attributes in terms of what this platform can continue to offer society and provide free access to knowledge? How do we make sure that we are reaching the full diversity of humanity in terms of who is invited to participate, who is written about? How are we really making sure that our collective efforts reflect more of the global south, reflect more women and reflect the diversity of human knowledge, to be more reflective of reality?
What is your take on how Wikipedia fits into the widespread problem of disinformation online?
Many of the core attributes of this platform are very different than some of the traditional social media platforms. If you take misinformation around Covid, the Wikimedia Foundation entered into a partnership with the World Health Organization. A group of volunteers came together around what was called WikiProject Medicine, which is focused on medical content and creating articles that then are very carefully monitored because these are the kinds of topics that you want to be mindful around misinformation.
Another example is that the foundation put together a task force ahead of the U.S. elections, again, trying to be very proactive. [The task force supported 56,000 volunteer editors watching and monitoring key election pages.] And the fact that there were only 33 reversions on the main U.S. election page was an example of how to be very focused on key topics where misinformation poses real risks.
Then another example that I just think is really cool is there’s a podcast called “The World According to Wikipedia.” And on one of the episodes, there’s a volunteer who is interviewed, and she really has made it her job to be one of the main watchers of the climate change pages.
We have tech that alerts these editors when changes are made to any of the pages so they can go see what the changes are. If there’s a risk that, actually, misinformation may be creeping in, there’s an opportunity to temporarily lock a page. Nobody wants to do that unless it’s absolutely necessary. The climate change example is useful because the talk pages behind that have massive debate. Our editor is saying: “Let’s have the debate. But this is a page I’m watching and monitoring carefully.”
One big debate that is currently happening on these social media platforms is this issue of the censorship of information. There are people who claim that biased views take precedence on these platforms and that more conservative views are taken down. As you think about how to handle these debates once you’re at the head of Wikipedia, how do you make judgment calls with this happening in the background?
For me, what’s been inspiring about this organization and these communities is that there are core pillars that were established on Day 1 in setting up Wikipedia. One of them is this idea of presenting information with a neutral point of view, and that neutrality requires understanding all sides and all perspectives.
It’s what I was saying earlier: Have the debates on talk pages on the side, but then come to an informed, documented, verifiable, citable kind of conclusion on the articles. I think this is a core principle that, again, could potentially offer something to others to learn from.
Having come from a progressive organization fighting for women’s rights, have you thought much about misinformers weaponizing your background to say it may influence the calls you make about what is allowed on Wikipedia?
I would say two things. I would say that the really relevant aspects of the work that I’ve done in the past is volunteer-led movements, which is probably a lot harder than others might think, and that I played a really operational role in understanding how to build systems, build culture and build processes that I think are going to be relevant for an organization and a set of communities that are trying to increase their scale and reach.
The second thing that I would say is, again, I’ve been on my own learning journey and invite you to be on a learning journey with me. How I choose to be in the world is that we interact with others with an assumption of good faith and that we engage in respectful and civilized ways. That doesn’t mean other people are going to do that. But I think that we have to hold on to that as an aspiration and as a way to, you know, be the change that we want to see in the world as well.
When I was in college, I would do a lot of my research on Wikipedia, and some of my professors would say, ‘You know, that’s not a legitimate source.’ But I still used it all the time. I wondered if you had any thoughts about that!
I think now most professors admit that they sneak onto Wikipedia as well to look for things!
You know, we’re celebrating the 20th year of Wikipedia this year. On the one hand, here was this thing that I think people mocked and said wouldn’t go anywhere. And it’s now become legitimately the most referenced source in all of human history. I can tell you just from my own conversations with academics that the narrative around the sources on Wikipedia and using Wikipedia has changed.
More than three years ago, Mark Zuckerberg of Facebook trumpeted a plan to share data with researchers about how people interacted with posts and links on the social network, so that the academics could study misinformation on the site. Researchers have used the data for the past two years for numerous studies examining the spread of false and misleading information.
But the information shared by Facebook had a major flaw, according to internal emails and interviews with the researchers. The data included the interactions of only about half of Facebook’s U.S. users — the ones who engaged with political pages enough to make their political leanings clear — not all of them, as the company had said. Facebook told the researchers that the data about users outside the United States, which had also been shared, did not appear to be inaccurate.
“This undermines trust researchers may have in Facebook,” said Cody Buntain, an assistant professor and social media researcher at the New Jersey Institute of Technology and a member of Social Science One, the group of researchers given the user activity information.
“A lot of concern was initially voiced about whether we should trust that Facebook was giving Social Science One researchers good data,” Mr. Buntain said. “Now we know that we shouldn’t have trusted Facebook so much and should have demanded more effort to show validity in the data.”
The company apologized to the researchers in an email this week, writing, “We sincerely apologize for the inconvenience this may cause and would like to offer as much support as possible.” Facebook added that it was updating the data set to fix the issue but that, given the large volume of data, the work would take weeks to complete.
Representatives of the company, including two members of Facebook’s Open Research and Transparency Team, held a call with researchers on Friday, apologizing for the mistake, according to two people who attended the meeting.
The Facebook representatives said only about 30 percent of research papers relied on U.S. data, said the people on the call, who agreed to speak only anonymously. But the representatives said that they still did not know whether other aspects of the data set were affected.
Several researchers on the call complained that they had lost months of work because of the error, the people on the call said. One researcher said doctoral degrees were at risk because of the mistake, while another expressed concern that Facebook was either negligent or, worse, actively undermining the research.
“From a human point of view, there were 47 people on that call today and every single one of those projects is at risk, and some are completely destroyed,” Megan Squire, one of the researchers, said in an interview after the call.
Mavis Jones, a Facebook spokeswoman, said the issue was caused by a technical error, “which we proactively told impacted partners about and are working swiftly to resolve.”
The error in the data set was first spotted by Fabio Giglietto, an associate professor and social media researcher from the University of Urbino, in Italy. Mr. Giglietto said he discovered the inaccuracy after he compared data that Facebook released publicly last month about top posts on the service with the data the company had provided exclusively to the researchers. He found that the results of the two were different.
“It’s a great demonstration that even a little transparency can provide amazing results,” Mr. Giglietto said of the chain of events leading to his discovery.
This is the second time in recent weeks that researchers and journalists have found discrepancies in data sets that Facebook has released in the name of greater transparency. In late August, Politico reported that tens of thousands of Facebook posts from the days before and after the Jan. 6 riots on Capitol Hill had gone missing from CrowdTangle, an analytics tool owned by the social network that is used by journalists and researchers.
Ryan Mac contributed reporting.
As California’s Sept. 14 election over whether to recall Gov. Gavin Newsom draws closer, unfounded rumors about the event are growing.
Here are two that are circulating widely online, how they spread and why, state and local officials said, they are wrong.
Rumor No. 1: Holes in the ballot envelopes were being used to screen out votes that say “yes” to a recall.
On Aug. 19, a woman posted a video on Instagram of herself placing her California special election ballot in an envelope.
“You have to pay attention to these two holes that are in front of the envelope,” she said, bringing the holes close to the camera so viewers could see them. “You can see if someone has voted ‘yes’ to recall Newsom. This is very sketchy and irresponsible in my opinion, but this is asking for fraud.”
The idea that the ballot envelope’s holes were being used to weed out the votes of those who wanted Gov. Newsom, a Democrat, to be recalled spread rapidly online, according to a review by The New York Times.
The Instagram video collected nearly half a million views. On the messaging app Telegram, posts that said California was rigging the special election amassed nearly 200,000 views. And an article about the ballot holes on the far-right site The Gateway Pundit reached up to 626,000 people on Facebook, according to data from CrowdTangle, a Facebook-owned social media analytics tool.
State and local officials said the ballot holes were not new and were not being used nefariously. The holes were placed in the envelope, on either end of a signature line, to help low-vision voters know where to sign it, said Jenna Dresner, a spokeswoman for the California Secretary of State’s Office of Election Cybersecurity.
The ballot envelope’s design has been used for several election cycles, and civic design consultants recommended the holes for accessibility, added Mike Sanchez, a spokesman for the Los Angeles County registrar. He said voters could choose to put the ballot in the envelope in such a way that didn’t reveal any ballot marking at all through a hole.
Instagram has since appended a fact-check label to the original video to note that it could mislead people. The fact check has reached up to 20,700 people, according to CrowdTangle data.
Rumor No. 2: A felon stole ballots to help Governor Newsom win the recall election.
On Aug. 17, the police in Torrance, Calif., published a post on Facebook that said officers had responded to a call about a man who was passed out in his car in a 7-Eleven parking lot. The man had items such as a loaded firearm, drugs and thousands of pieces of mail, including more than 300 unopened mail-in ballots for the special election, the police said.
Far-right sites such as Red Voice Media and Conservative Firing Line claimed the incident was an example of Democrats’ trying to steal an election through mail-in ballots. Their articles were then shared on Facebook, where they collectively reached up to 1.57 million people, according to CrowdTangle data.
Mark Ponegalek, a public information officer for the Torrance Police Department, said the investigation into the incident was continuing. The U.S. postal inspector was also involved, he said, and no conclusions had been reached.
As a result, he said, online articles and posts concluding that the man was attempting voter fraud were “baseless.”
“I have no indication to tell you one way or the other right now” whether the man intended to commit election fraud with the ballots he collected, Mr. Ponegalek said. He added that the man may have intended to commit identity fraud.
Facebook said on Tuesday that it had removed a network of accounts based in Russia that spread misinformation about coronavirus vaccines. The network targeted audiences in India, Latin America and the United States with posts falsely asserting that the AstraZeneca vaccine would turn people into chimpanzees and that the Pfizer vaccine had a much higher casualty rate than other vaccines, the company said.
The network violated Facebook’s foreign interference policies, the company said. It traced the posts to a marketing firm operating from Russia, Fazze, which is a subsidiary of AdNow, a company registered in Britain.
Facebook said it had taken down 65 Facebook accounts and 243 Instagram accounts associated with the firm and barred Fazze from its platform. The social network announced the takedown as part of its monthly report on influence campaigns run by people or groups that purposely misrepresent who is behind the posts.
“This campaign functioned as a disinformation laundromat,” said Ben Nimmo, who leads Facebook’s global threat intelligence team.
The influence campaign took place as regulators in the targeted countries were discussing emergency authorizations for vaccines, Facebook said. The company said it had notified people it believed had been contacted by the network and shared its findings with law enforcement and researchers.
Russia and China have promoted their own vaccines by distributing false and misleading messages about American and European vaccination programs, according to the State Department’s Global Engagement Center. Most recently, the disinformation research firm Graphika found numerous antivaccination cartoons that it traced back to people in Russia.
Security analysts and American officials say a “disinformation for hire” industry is growing quickly. Back-alley firms like Fazze spread falsehoods on social media and meddle in elections or other geopolitical events on behalf of clients who can claim deniability.
The Fazze campaign was carried out in two waves, Facebook said. In late 2020, Fazze created two batches of fake Facebook accounts that initially posted about Indian food or Hollywood actors. Then in November and December, as the Indian government was discussing emergency authorization for the AstraZeneca vaccine, the accounts started pushing the false claim that the vaccine was dangerous because it was derived from a chimpanzee adenovirus. The campaign extended to websites like Medium and Change.org, and memes about the vaccine’s turning its subjects into chimpanzees proliferated on Facebook.
The Fazze campaign went silent for a few months, then resumed in May when the inauthentic accounts falsely claimed that Pfizer’s vaccine had caused a much higher “casualty rate” than other vaccines. There were only a few dozen Facebook posts targeting the United States and one post by an influencer in Brazil, and there was almost no reaction to the posts, according to the company. Fazze also reached out to influencers in France and Germany, who ultimately exposed the disinformation campaign, Facebook said.
“Influence operations increasingly target authentic influential voices to carry their messages,” Facebook said in its report. “Through them, deceptive campaigns gain access to the influencer’s ready-made audience, but it comes with a significant risk of exposure.”
AdNow, the parent company of Fazze, did not immediately respond to a request for comment.
Facebook said it had also removed 79 Facebook accounts, 13 pages, eight groups and 19 Instagram accounts in Myanmar that targeted domestic citizens and were linked to the Myanmar military. In March, the company barred Myanmar’s military from its platforms, after a military coup overthrew the country’s fragile democratic government.
Twitter on Tuesday suspended Representative Marjorie Taylor Greene, Republican of Georgia, from its service for seven days after she posted that the Food and Drug Administration should not give the coronavirus vaccines full approval and that the vaccines were “failing.”
The company said this was Ms. Greene’s fourth “strike,” which means that under its rules she can be permanently barred if she violates Twitter’s coronavirus misinformation policy again. The company issued her third strike less than a month ago.
On Monday evening, Ms. Greene said on Twitter, “The FDA should not approve the covid vaccines.” She said there were too many reports of infection and spread of the coronavirus among vaccinated people, and that the vaccines were “failing” and “do not reduce the spread of the virus & neither do masks.”
The Centers for Disease Control and Prevention’s current guidance states, “Covid-19 vaccines are effective at protecting you from getting sick.”
In late July, the agency also revised its indoor mask policy, advising that people wear a mask in public indoor spaces in parts of the country where the virus is surging to maximize protection from the Delta variant and prevent possibly spreading the coronavirus. A recent report by two Duke University researchers who reviewed data from March to June in 100 school districts and 14 charter schools in North Carolina concluded that wearing masks was an effective measure for preventing the transmission of the virus, even without six feet of physical distancing.
Ms. Greene’s tweet was “labeled in line with our Covid-19 misleading information policy,” Trenton Kennedy, a Twitter spokesman, said in an emailed statement. “The account will be in read-only mode for a week due to repeated violations of the Twitter Rules.”
In a statement circulated online, Ms. Greene said: “I have vaccinated family who are sick with Covid. Studies and news reports show vaccinated people are still getting Covid and spreading Covid.”
Data from the C.D.C. shows that of the so-called breakthrough infections among the fully vaccinated, serious cases are extremely rare. A New York Times analysis of data from 40 states and Washington, D.C., found that fully vaccinated people made up fewer than 5 percent of those hospitalized with the virus and fewer than 6 percent of those who had died.
Twitter has stepped up enforcement against accounts posting coronavirus misinformation as cases have risen across the United States because of the highly contagious Delta variant. In Ms. Greene’s home state, new cases have increased 171 percent in the past two weeks, while 39 percent of Georgia’s population has been fully vaccinated against the virus.
Ms. Greene’s Facebook account, which has more than 366,000 followers, remains active. Her posts on the social network are different from her posts on Twitter. She also has more than 412,000 followers on Instagram, which Facebook owns.
On Telegram, the encrypted chat app that millions flocked to after Facebook and Twitter removed thousands of far-right accounts, Ms. Greene has 160,600 subscribers.
As coronavirus cases and hospitalizations surge across the country, driven by the spread of the Delta variant, some conservatives have pinned the blame on migrants crossing the southern border — without providing any evidence.
Faced with rapidly rising cases in their states and criticized by President Biden for their opposition to mask mandates, the governors of Florida and Texas have pointed to the administration’s border policies as a primary cause of the new cases. That sentiment has also echoed on social media, among members of Congress and among the unvaccinated.
“He’s imported more virus from around the world by having a wide open southern border,” Gov. Ron DeSantis of Florida said of Mr. Biden on Wednesday. “Whatever variants are across the world, they’re coming through that southern border.”
Gov. Greg Abbott of Texas made a similar claim on Fox News on Monday: “The Biden administration is allowing people to come across the southern border, many of whom have Covid, most of whom are not really being checked for Covid.”
Officials have said that positive test results among migrants have increased in recent weeks. A spokesman for Hidalgo County in Texas, which is in the Rio Grande Valley, where many migrants cross the border, said that the positivity rate for migrants was about 16 percent this week, as of Thursday.
But public health experts said there was no evidence that migrants were driving the surge in coronavirus cases. The positivity rate for residents of Hidalgo County — excluding migrants — was 17.59 percent this week.
While Texas is experiencing many more cases than a couple of months ago, many of the major outbreaks are occurring in states — such as Missouri and Arkansas — that do not border Mexico, said Dr. Jaquelin P. Dudley, associate director of the LaMontagne Center for Infectious Disease and a professor of molecular biosciences at the University of Texas at Austin.
Max Hadler, the Covid-19 senior policy expert at Physicians for Human Rights, a nonprofit advocacy group, said positive rates were increasing in every state in the country.
“It’s not a border issue or a migrant issue, it’s a national issue. And it’s a particularly major issue in states with lower vaccination rates,” Mr. Hadler said. “That’s the clearest and most important correlation, and it has nothing to do with migrants but rather with rates of vaccination among people living in those states.”
A recent report from the Kaiser Family Foundation found that those not fully vaccinated accounted for between 94 percent and 99.8 percent of reported coronavirus cases in the 23 states and Washington, D.C., that collect breakthrough case data.
There is no evidence that any of the four variants of concern tracked by the Centers for Disease Control and Prevention initially entered the country through the southern border. The variants of concern, those that are more transmissible or cause more severe illness, are called Alpha, Beta, Gamma and Delta.
Dr. Benjamin Pinsky, the director of the Clinical Virology Laboratory for Stanford Health Care, which tracks new variants, said the lab’s findings did not support Mr. DeSantis’s assertion that variants were “coming through” the southern border.
The first identified cases of the Alpha and Beta variants in the United States were patients in Colorado and South Carolina with no travel history, according to the C.D.C. The first identified case of the Gamma variant was a patient in Minnesota, who had traveled to Brazil.
Dr. Katherine Peeler, an instructor at Harvard Medical School, noted that the Delta variant — first identified in India last year — is more widespread in the United States than in most of Latin America. The first case of the Delta variant detected in the United States occurred in March, according to the C.D.C.
“As such, this is not an issue of increasing Delta variant from the southern border and those seeking asylum,” Dr. Peeler said.
Mr. Abbott’s office did not respond when asked for evidence that migrants were not being tested.
Christina Pushaw, Mr. DeSantis’s press secretary, said the governor never implied that migration was the only reason for the spread of the virus, but rather he was simply highlighting “the paradoxical nature of the Biden administration’s support for additional restrictions on Americans and lawful immigrants,” like vaccine passports, “while allowing illegal migrants to cross the border and travel through the country freely.”
Most migrants trying to cross the southern border are turned away by officials. Out of 1.1 million encounters on the southern border so far this fiscal year, more than 768,000 have led to expulsions.
Of the remaining apprehended migrants, some are detained and some are released as they await decisions on their asylum applications. Several local governments and charities across the Texas border where migrants have been released told The New York Times that many, if not most, migrants in their care are tested and then quarantined if they test positive.
A spokesman for Customs and Border Protection said that the agency provided migrants with personal protective equipment as soon as they were taken into custody, and the migrants were required to keep their masks on at all times. Anyone who exhibits signs of illness is taken to a local health center and is tested and treated there. Once migrants are transferred out of C.B.P. custody, they are released to a nongovernmental organization, a local government, Immigration and Customs Enforcement or, in the case of unaccompanied minors, the Department of Health and Human Services.
The Department of Homeland Security has “taken significant steps to develop systems to facilitate testing, isolation, and quarantine of those individuals who are not immediately returned to their home countries after encounter,” David Shahoulian, the assistant secretary for border and immigration policy at the department, said in a government court document filed this week. Mr. Shahoulian said that the department and I.C.E. had set up processing and testing centers along the border to aid with the surge in migrants.
Joseph Mercola, who researchers say is a chief spreader of coronavirus misinformation online, said on Wednesday that he would delete posts on his site 48 hours after publishing them.
In a post on his website, Dr. Mercola, an osteopathic physician in Cape Coral, Fla., said he was deleting his writings because President Biden had “targeted me as his primary obstacle that must be removed” and because “blatant censorship” was being tolerated.
Last month, the White House, while criticizing tech companies for allowing misinformation about the coronavirus and vaccines to spread widely, pointed to research showing that a group of 12 people were responsible for sharing 65 percent of all anti-vaccine messaging on social media. The nonprofit behind the research, the Center for Countering Digital Hate, called the group the “Disinformation Dozen” and listed Dr. Mercola in the top spot.
Dr. Mercola has built a vast operation to disseminate anti-vaccination and natural health content and to profit from it, according to researchers. He employs teams of people in places like Florida and the Philippines, who swing into action when news moments touch on health issues, rapidly publishing blog posts and translating them into nearly a dozen languages, then pushing them to a network of websites and to social media.
An analysis by The New York Times found that he had published more than 600 articles on Facebook that cast doubt on Covid-19 vaccines since the pandemic began, reaching a far larger audience than other vaccine skeptics. Dr. Mercola criticized The Times’s reporting in his post on Wednesday, saying it was “loaded with false statements.”
Dr. Mercola said in his blog post that he would remove 15,000 past posts from his website. He will continue to write daily articles, he said, but they will only be available for 48 hours before being removed. He said it was up to his followers to help spread his work.
Rachel E. Moran, a researcher at the University of Washington who studies online conspiracy theories, said the announcement by Dr. Mercola was “him trying to come up with his own strategies of avoiding his content being taken down, while also playing up this martyrdom of being an influential figure in the movement who keeps being targeted.”
Aaron Simpson, a Facebook spokesman, said, “This is exactly what happens when you are enforcing policies against Covid misinformation — people try extreme ways to work around your restrictions.”
Facebook, he said, “will continue to enforce against any account or group that violates our rules.”
YouTube said that it had clear community guidelines for Covid-19 medical misinformation, that it had removed a number of Dr. Mercola’s videos from the platform and that it had issued “strikes” on his channel. The company also said it would terminate Dr. Mercola’s channel if it violated its three strikes policy.
Twitter said that it had taken enforcement action on Dr. Mercola’s account in early July for violations of its Covid-19 misinformation policy, putting his account in read-only mode for seven days.
“Since the introduction of our Covid-19 misinformation policy, we’ve taken enforcement action on the account you referenced for violating these rules,” said Trenton Kennedy, a Twitter spokesman. “We’ve required the removal of tweets and applied Covid-19 misleading information labels to numerous others.”
Because of an editing error, an earlier version of this article misstated a portion of the post from Joseph Mercola. He said a previous article in The New York Times was full of false “statements,” not false “facts.”
On Instagram, a detail from a medieval painting was superimposed with words suggesting Jews were responsible for the deaths of children.
On Twitter, a photoshopped image of world leaders with the Star of David on their foreheads was posted above the hashtag #JewWorldOrder.
And on YouTube, a video of the World Trade Center on fire was used as a backdrop for an argument that Jews were responsible for the terrorist attacks on the towers 20 years ago.
All are examples of anti-Semitic content explicitly banned by social media companies. They were shared on social media and were allowed to remain up even after they were reported to social media companies, according to a report released on Friday by the Center for Countering Digital Hate, a nonprofit organization.
The study, which found that social media companies acted on fewer than one in six reported examples of anti-Semitism, comes alongside a report with similar findings from the Anti-Defamation League. Both organizations found that anti-Semitic content was being widely shared on major social media platforms and that the companies were failing to take it down — even after it was reported to them.
“As a result of their failure to enforce their own rules, social media platforms like Facebook have become safe places to spread racism and propaganda against Jews,” the Center for Countering Digital Hate said.
Using the tools the platforms created for users to report posts that contain hate speech, nudity and other banned content, the center’s researchers spent six weeks reporting hundreds of anti-Semitic posts to Facebook, Instagram, Twitter, YouTube and TikTok. In all, the posts they analyzed were seen by up to 7.3 million people.
They found that Facebook and Twitter had the poorest rates of enforcement action. Of the posts reported to them as anti-Semitic, Facebook acted on roughly 10.9 percent. Twitter, the report said, acted on 11 percent. YouTube, by comparison, acted on 21 percent and TikTok on 18.5 percent.
There were millions of views of the anti-Semitic content on both YouTube and TikTok. On Twitter and Facebook, the views were in the hundreds of thousands.
“While we have made progress in fighting anti-Semitism on Facebook, our work is never done,” said Dani Lever, a Facebook spokeswoman. She added that the prevalence of hate speech on Facebook was decreasing, and she said that, “given the alarming rise in anti-Semitism around the world, we have and will continue to take significant action through our policies.”
A Twitter spokesperson said the company condemned anti-Semitism and was working to make Twitter a safer place for online engagement. “We recognize that there’s more to do, and we’ll continue to listen and integrate stakeholders’ feedback in these ongoing efforts,” the spokesperson said.
TikTok said in a statement that it proactively removes accounts and content that violate its policies, and that it condemns anti-Semitism and does not tolerate hate speech. “We are adamant about continually improving how we protect our community,” the company said.
YouTube said in a statement that it had “made significant progress” in removing hate speech over the last few years. “This work is ongoing and we appreciate this feedback,” said Ivy Choi, a YouTube spokeswoman.
The Anti-Defamation League’s survey was similar but smaller. It reported between three and 11 pieces of content on each of the same platforms, as well as on Reddit, Twitch and the gaming platform Roblox. It gave each platform a grade, such as a C- for Facebook and TikTok and a D for Roblox, based on how quickly the companies responded and removed the posts. The highest-rated platform, Twitter, received a B-.
“We were frustrated but unsurprised to see mediocre grades across the board,” said Jonathan Greenblatt, the chief executive of the organization. “These companies keep corrosive content on their platforms because it’s good for their bottom line, even if it contributes to anti-Semitism, disinformation, hate, racism and harassment.”
“It’s past time for tech companies to step up and invest more of their millions in profit to protect the vulnerable communities harmed on their platforms,” he added.
Bill Gates has been a favorite target of people spreading right-wing conspiracy theories in the past year. In posts on YouTube, Facebook and Twitter, he has been falsely portrayed as the mastermind behind Covid-19 and as a profiteer from a virus vaccine.
The popularity of those falsehoods has given new life to at least two other unfounded claims about him, according to new research: that he has been colluding with the Chinese Communist Party, and that he is behind moonshot plans to stem climate change.
“While it has been a significant accelerator over the past year and a half, the global pandemic isn’t the origin of many of the conspiracy theories about Bill Gates currently circulating across media,” said Jennifer Granston, head of insights at Zignal Labs. “Rather, it is the gasoline being poured on a fire that’s been smoldering for more than a decade.”
According to research from the media insights company Zignal Labs, which tracked narratives about Mr. Gates on social media and cable television and in print and online news outlets from June 2020 to June 2021, as many as 100,000 mentions were made in the last year about Mr. Gates’s connections to the Chinese government.
In one example, an article on The National Pulse, a far-right website, suggested without evidence that Mr. Gates, a co-founder of Microsoft, could have influenced the U.S. relationship with China because a relative had once worked in a government job loosely related to U.S.-China relations when President Biden was vice president. Another article in The National Pulse listed several instances in which Microsoft worked with Chinese companies, and people online pointed to this as evidence that Mr. Gates must be conspiring with the Chinese government. Both articles potentially reached hundreds of thousands of followers on Facebook, according to data from CrowdTangle, a Facebook-owned social media analytics tool.
Mr. Gates was mentioned another 260,000 times in falsehoods about climate change, according to Zignal. One unfounded claim is that Mr. Gates was funding a plan to dim the sun. (In reality, he is financially backing a small-scale experiment from Harvard University that aims to look at whether there are aerosols that could reduce or eliminate the loss of the ozone layer.) In another, conspiracy theorists say that Mr. Gates is pushing a plan to force people in rich countries to eat only “100 percent synthetic beef” because he had a financial stake in a company making those products. (Mr. Gates did say it was a good idea for developed nations to consider the idea, but it was part of a larger conversation about tech breakthroughs and energy policies to tackle the effects of climate change.)
Those falsehoods, while popular, still pale in comparison to those about his profiteering off the coronavirus. In one popular unfounded claim, Mr. Gates is accused of wanting to surveil the population with microchip vaccination implants (159,000 mentions). Mr. Gates’s philanthropy work in distributing vaccines to developing countries had also been twisted into unfounded accusations that he was trying to cull the global population (39,400 mentions). And a third popular falsehood pushed by conspiracy theorists is the notion that Mr. Gates advocated vaccine passports in order to further a tech-enabled surveillance state (28,700 mentions).
According to Zignal Labs, tweets linking Mr. Gates to the vaccine passport narrative spiked around the time Mr. Gates announced his divorce from Melinda French Gates, his wife of 27 years, with whom he ran the Bill and Melinda Gates Foundation. The breakup has set off new scrutiny of his conduct in work-related settings.
“Bill Gates: privacy please everyone,” said one tweet, which was liked and shared more than 30,400 times. “Also Bill Gates: we need vaccine passports.”