Elon Musk’s obsession with bots does nothing to stop Twitter spam.

Twitter reports that fewer than 5% of its accounts are fakes or spammers, commonly referred to as "bots." Since his offer to buy Twitter was accepted, Elon Musk has repeatedly questioned these estimates, even dismissing the public response of Chief Executive Officer Parag Agrawal.

Musk later put the deal on hold and demanded more proof.

So, why do people argue about the percentage of bot accounts on Twitter?

As the creators of Botometer, a popular bot detection tool, our group at Indiana University's Observatory on Social Media has been studying inauthentic accounts and manipulation on social media for over a decade. We brought the concept of the "social bot" to the foreground and first estimated their prevalence on Twitter in 2017.

Based on our knowledge and experience, we believe that estimating the proportion of bots on Twitter is a very difficult task, and that debating the accuracy of any estimate may be missing the point. Here's why.

What exactly is a bot?

Measuring the prevalence of problematic accounts on Twitter requires a clear definition of what is being measured. Terms such as "fake account," "spam account" and "bot" are used interchangeably, but they have different meanings. Fake or false accounts are accounts that impersonate people. Accounts that mass-produce unsolicited promotional content are defined as spammers. Bots, on the other hand, are accounts controlled in part by software; they may post content or perform simple interactions, such as retweeting, automatically.

These types of accounts often overlap. For instance, one can create a bot that impersonates a human and automatically posts spam. Such an account is simultaneously a bot, a spammer and a fake. But not every fake account is a bot or a spammer, and vice versa. Coming up with an estimate without a clear definition only yields misleading results.
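The overlap described above can be pictured as set operations. The following is a toy sketch; the account names and labels are invented purely to illustrate that the three categories intersect without coinciding.

```python
# Hypothetical accounts tagged with the three (overlapping) categories
# from the text: fake, spammer, bot. All names and labels are made up.
accounts = {
    "impersonator_bot": {"fake": True, "spammer": True, "bot": True},
    "human_impersonator": {"fake": True, "spammer": False, "bot": False},
    "news_bot": {"fake": False, "spammer": False, "bot": True},
    "promo_blaster": {"fake": False, "spammer": True, "bot": False},
}

bots = {name for name, flags in accounts.items() if flags["bot"]}
fakes = {name for name, flags in accounts.items() if flags["fake"]}
spammers = {name for name, flags in accounts.items() if flags["spammer"]}

# One account can fall into all three categories at once...
print(bots & fakes & spammers)  # {'impersonator_bot'}
# ...while fake accounts need not be bots, and vice versa.
print(fakes - bots)             # {'human_impersonator'}
print(bots - fakes)             # {'news_bot'}
```

Any prevalence estimate implicitly picks one of these sets (or a union of them), which is why a number quoted without a definition is hard to interpret.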

Defining and distinguishing account types can also inform appropriate interventions. Fake and spam accounts degrade the online environment and violate platform policy. Malicious bots are used to spread misinformation, inflate popularity, exacerbate conflict through negative and inflammatory content, manipulate opinions, influence elections, commit financial fraud and disrupt communication. However, some bots can be harmless or even useful, for example by helping disseminate news, delivering disaster alerts and conducting research.

Simply banning all bots is not in the best interests of social media users.

For simplicity, researchers use the term "inauthentic accounts" to refer to the collection of fake accounts, spammers and malicious bots. This also appears to be the definition Twitter uses. However, it is unclear what Musk has in mind.

Hard to count

Even if a definition is agreed upon, there are still technical challenges in estimating prevalence.

A network of coordinated accounts spreading COVID-19 information from low-credibility sources on Twitter in 2020. Pik-Mai Hui

External researchers do not have access to the same data as Twitter, such as IP addresses and phone numbers. This hinders the public's ability to identify inauthentic accounts. But even Twitter acknowledges that the actual number of inauthentic accounts may be higher than it has estimated, because detection is challenging.

Inauthentic accounts evolve and develop new tactics to evade detection. For example, some fake accounts use AI-generated faces as their profile pictures. These faces can be indistinguishable from real ones, even to humans. Identifying such accounts is hard and requires new technologies.

Another difficulty is posed by coordinated accounts that appear to be normal individually but act so similarly to each other that they are almost certainly controlled by a single entity. Yet they are like needles in the haystack of hundreds of millions of daily tweets.
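One common way to surface such coordination, sketched minimally below, is to represent each account by its behavior, for instance the links it shares, and flag pairs whose behavior is suspiciously similar. The accounts, link counts and cutoff here are invented for illustration; real detection pipelines are far more elaborate.

```python
# Minimal sketch: flag pairs of accounts with near-identical sharing
# behavior as possibly coordinated. All data here is hypothetical.
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# How often each account shared each link (fabricated counts).
shared_links = {
    "acct_a": {"site1.com": 9, "site2.com": 8},
    "acct_b": {"site1.com": 8, "site2.com": 9},   # near-clone of acct_a
    "acct_c": {"cats.example": 5, "news.example": 3},
}

SUSPICION_THRESHOLD = 0.95  # arbitrary cutoff chosen for this sketch
flagged = [
    (a, b)
    for a, b in combinations(shared_links, 2)
    if cosine(shared_links[a], shared_links[b]) > SUSPICION_THRESHOLD
]
print(flagged)  # [('acct_a', 'acct_b')]
```

The "needle in a haystack" problem is visible even here: the signal lives in pairwise comparisons, whose number grows quadratically with the number of accounts examined.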

Finally, inauthentic accounts can evade detection through techniques such as swapping handles or automatically posting and deleting large volumes of content.

The distinction between inauthentic and genuine accounts is becoming more and more blurry. Accounts can be hacked, bought or rented, and some users "donate" their credentials to organizations that post on their behalf. As a result, so-called "cyborg" accounts are controlled by both algorithms and humans. Similarly, spammers sometimes post legitimate content to disguise their activity.

We have observed a broad spectrum of behaviors mixing the characteristics of bots and people. Estimating the prevalence of inauthentic accounts requires applying a simplistic binary classification: authentic or inauthentic. No matter where the line is drawn, mistakes are inevitable.

Missing the big picture

The focus of the recent debate on estimating the number of Twitter bots oversimplifies the issue and misses the point of quantifying the harm of online abuse and manipulation by inauthentic accounts.

Screenshot of the BotAmp application comparing the likelihood of bot activity for two topics on Twitter. Kai-Cheng Yang

Recent evidence suggests that inauthentic accounts may not be the only culprits responsible for the spread of misinformation, hate speech, polarization and radicalization. These issues typically involve many human users. For instance, our analysis shows that misinformation about COVID-19 was disseminated overtly on both Twitter and Facebook by verified, high-profile accounts.

BotAmp, a new tool in the Botometer family that anyone with a Twitter account can use, reveals that the presence of automated activity is not evenly distributed. For instance, the discussion about cryptocurrencies tends to show more bot activity than the discussion about cats. Therefore, whether the overall prevalence is 5% or 20% makes little difference to individual users; their experiences with these accounts depend on whom they follow and the topics they care about.

Even if it were possible to precisely estimate the prevalence of inauthentic accounts, this would do little to solve the underlying problems. A meaningful first step would be to acknowledge the complex nature of these issues. That will help social media platforms and policymakers develop meaningful responses.

This article was written by Kai-Cheng Yang, doctoral student in informatics, Indiana University, and Filippo Menczer, professor of informatics and computer science, Indiana University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
