Wednesday, May 27, 2020

Bot Twitter Accounts Discussing COVID-19

Karen Hao (via John Gruber):

Kathleen M. Carley and her team at Carnegie Mellon University’s Center for Informed Democracy & Social Cybersecurity have been tracking bots and influence campaigns for a long time. Across US and foreign elections, natural disasters, and other politicized events, the level of bot involvement is normally between 10 and 20%, she says.

But in a new study, the researchers have found that bots may account for between 45 and 60% of Twitter accounts discussing covid-19. Many of those accounts were created in February and have since been spreading and amplifying misinformation, including false medical advice, conspiracy theories about the origin of the virus, and pushes to end stay-at-home orders and reopen America.

Virginia Alvino Young:

To analyze bot activity around the pandemic, CMU researchers since January have collected more than 200 million tweets discussing coronavirus or COVID-19. Of the top 50 influential retweeters, 82% are bots, they found. Of the top 1,000 retweeters, 62% are bots.

[…]

Many factors of the online discussions about “reopening America” suggest that bot activity is orchestrated. One indicator is the large number of bots, many of which are accounts that were recently created. Accounts that are possibly humans with bot assistants generate 66% of the tweets. Accounts that are definitely bots generate 34% of the tweets.

These are extraordinary claims, both because the numbers are so high and because lots of real people are also talking about COVID-19. Some of them are spreading misinformation, and some are in favor of reopening sooner. In my own Twitter feed I have seen very few, if any, COVID-19 tweets that look bot-related. How did the researchers arrive at these counts, with such apparent certainty?

Neither of these articles shows actual examples of bots. I could not find a published paper, data, methodology, or code. Professor Carley did give a seminar on March 31, which has more details than the news release (via Tess Owen). One of the precise claims is:

Overall in the discussion around corona virus about 45% of the users are more than 50% likely to be bots

This is a bit less sensational, and it clarifies that these are not numbers based on humans looking at the tweets and accounts and categorizing them as bot or not-bot. Rather, they are counting accounts that were assigned bot percentages by a machine learning model.
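To see why this distinction matters, here is an illustrative sketch of how a headline like “45% of users are more than 50% likely to be bots” can fall out of thresholding classifier scores rather than anyone inspecting accounts. The scores and the threshold below are made up for illustration; the CMU team’s actual model and data have not been published.

```python
# Illustrative only: counting accounts whose (hypothetical) bot-probability
# score from a classifier exceeds a cutoff. No manual labeling is involved,
# so the headline figure depends entirely on the model and the threshold.

def bot_share(scores, threshold=0.5):
    """Fraction of accounts whose model score exceeds `threshold`."""
    flagged = sum(1 for s in scores if s > threshold)
    return flagged / len(scores)

# Hypothetical per-account bot probabilities output by a classifier.
scores = [0.10, 0.35, 0.51, 0.55, 0.62, 0.70, 0.20, 0.45, 0.80, 0.52]

print(f"share above 0.5: {bot_share(scores):.0%}")        # 60%
print(f"share above 0.8: {bot_share(scores, 0.8):.0%}")
```

Note how many of the “flagged” scores sit just above 0.5: a modest change in the threshold, or a modestly miscalibrated model, moves the headline number substantially, which is why the definition of “bot” and the cutoff chosen matter so much here.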

Darius Kazemi:

The short of it is: knowing what we know about the study, which is very little, it seems like these researchers have in the past used a very loose and nearly useless definition of “bot”

[…]

Also worth looking at is this informal audit of a few “bots” that were identified by these researchers back in April, some of which are humans with faces and lives who post videos of themselves like, talking and living and stuff

[…]

Also if you’re interested in this you can check out my blog post on “The Bot Scare” which is not peer-reviewed but I try to cite lots of sources and make a decent argument that most of this kind of research is pretty flimsy.

Yoel Roth (Twitter Head of Site Integrity):

There’s no right or wrong way to use Twitter — and many “bot” studies wind up dismissing a lot of real activity as inauthentic.

Even if you take “bot” to mean “automated spam,” there’s little evidence that the dramatic conclusions of the #COVID19 study are accurate.

That’s not to say that spam isn’t an issue. We know that discussions about #COVID19 are a prime target for all sorts of platform manipulation. Since March, our proactive systems have challenged millions of spammy accounts Tweeting about COVID.

[…]

Why not just suspend accounts immediately, or share information about our other actions in our APIs? Doing so would make it easier for adversaries to know we’ve caught them, and adapt to evade our detections.

Possibly the bot threat is exaggerated, but that’s not exactly comforting, either.

Joey D’Urso:

Bots do exist, and there have been several concerning stories in recent years about foreign bots attempting to influence elections in the UK, US, and elsewhere.

But a lot of the time, what looks like foreign bot activity is nothing of the sort.

The truth is often something even harder to get your head around — people voluntarily choosing to copy and paste identikit slogans on social media to spread a partisan message or simply wind up their opponents.

4 Comments

I think there's a pretty strong consensus among researchers that a lot of this activity comes from bots; this isn't just one team's opinion. The same talking points these bots push are mirrored in known Russian propaganda outlets like RT, so it isn't really a secret that this is being pushed by foreign interests.

Any time you have crowd-based surfacing algorithms, you invite this kind of behavior.

@Lukas I just don’t see this stuff in Tweetbot. Do I happen to be following people who don’t retweet it? Is this activity more prevalent if you use the Twitter Web site or official client? Or do the bots mostly interact with each other?

I'm pretty sure you're using Twitter differently from most other people. I'm not sure how Tweetbot works, I'm assuming it just shows your linear timeline of the people you follow? But that's not what most people see, they see the "smart" timeline which surfaces "suggested content" based on "signals." And then they see the comments below other tweets, which are also ordered based on "signals."

These things, alongside stuff like trending hashtags, are what the bots are manipulating, and how this content becomes visible to people.

@Lukas Yeah, the timeline just shows the people I follow, though I don’t recall seeing any bots in the replies, either.
