
Bots Used to Amplify Influence Across Twitter


During the past decade, Twitter has become a digital extension of the public sphere and a potent platform for spreading political and commercial messages. Likes, retweets, and replies from Twitter followers can amplify these messages to reach a broader audience, and a Twitter user’s follower count is often perceived as an indicator of their popularity or influence. Under these dynamics, Twitter followers and engagements have begun to function as a form of social currency.

Thanks to the platform’s open API, which lends itself to the creation of bots, Twitter followers and engagements are far easier to counterfeit than most real-world currencies. Twitter bots are software programs linked to one or more Twitter accounts that automate certain activities on the social media platform. Under the control of an individual known as a botmaster, Twitter bots can search the Twitter API for posts containing a fixed set of specified phrases or hashtags and instruct linked accounts to like, retweet, or comment on them. Twitter bots can also follow and direct message (DM) other Twitter users.

Flashpoint analysts have observed services pertaining to Twitter bots on the Deep & Dark Web (DDW), as well as on the surface web. These services include the sale of bots, tutorials on how to create bots, and ready-made code for programming bots. In some cases, vendors advertise bot-related services in which clients never have direct control over the bots. For example, in the image shown below, a DDW vendor advertises the ability to provide clients with 1,000 Twitter followers, presumably leveraging their network of bots.

Image 1: A vendor on a DDW marketplace advertises “1000+ TWITTER FOLLOWERS [USA][HQ]” at a price of $1.77 USD. The vendor claims to be able to provide an “unlimited number of Twitter followers” and offer the “cheapest prices on the market.”

While the purpose of bots can vary and is sometimes unclear, many are used to amplify political or commercial messages. The majority of Twitter accounts followed by bots appear to be verified, high-profile users such as politicians, celebrities, media outlets, and consumer brands. Moreover, despite violating Twitter’s automation guidelines, the creation and sale of these bots occupy a legal gray area.

Twitter has publicly expressed its commitment to combating misinformation bots while also acknowledging that bots can be a “positive and vital tool” for legitimate purposes, such as customer support and public safety. For example, SF QuakeBot automatically live tweets when earthquakes occur in the San Francisco Bay area.

Image 2: A surface-web website that allows users to insert an API access key and specify tasks for ready-made Twitter bots to perform, such as direct messaging new followers, favoriting tweets that receive a certain number of retweets, retweeting and commenting on tweets from certain individuals, sending automated public replies, and adding users who use a specified hashtag to a Twitter list.

How Twitter Bots are Created

The process of creating a Twitter bot typically follows a set pattern. First, a Twitter account is used to obtain an account-specific access key for the Twitter API. Next, scripts written in languages such as Ruby, JavaScript, PHP, Google Apps Script, or Python use that key to automate the account’s actions. Bots are often designed to abide by the terms of the Twitter API, limiting their activity within a given time period in order to avoid being flagged for suspicious behavior.
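The search-and-amplify loop described above can be sketched in Python. The `TwitterClient` class below is a hypothetical in-memory stand-in for a real API wrapper (no actual Twitter credentials or endpoints are used), and the 50-action cap is an illustrative nod to the rate limiting bots use to stay under the radar:

```python
from dataclasses import dataclass


@dataclass
class Tweet:
    tweet_id: int
    text: str


class TwitterClient:
    """Hypothetical stand-in for a real API wrapper. A real bot would
    authenticate here with an account-specific access key."""

    def __init__(self):
        self.timeline = []   # tweets visible to the bot
        self.retweeted = []  # IDs the bot has retweeted
        self.liked = []      # IDs the bot has liked

    def search(self, phrase):
        # Return every timeline tweet containing the phrase or hashtag.
        return [t for t in self.timeline if phrase.lower() in t.text.lower()]

    def retweet(self, tweet):
        self.retweeted.append(tweet.tweet_id)

    def like(self, tweet):
        self.liked.append(tweet.tweet_id)


class HashtagBot:
    """Amplifies any tweet matching a fixed set of phrases, while
    staying under a per-run action cap to mimic rate limiting."""

    def __init__(self, client, phrases, max_actions_per_run=50):
        self.client = client
        self.phrases = phrases
        self.max_actions = max_actions_per_run

    def run_once(self):
        actions = 0
        seen = set()
        for phrase in self.phrases:
            for tweet in self.client.search(phrase):
                if tweet.tweet_id in seen or actions >= self.max_actions:
                    continue
                self.client.like(tweet)
                self.client.retweet(tweet)
                seen.add(tweet.tweet_id)
                actions += 2  # one like + one retweet
        return actions
```

Run against a seeded timeline, the bot likes and retweets every matching post until its cap is reached, which is the amplification pattern the vendors above are selling.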

One challenge botmasters face is Twitter’s requirement that each account be associated with a unique email address. This requirement can be circumvented in several ways: bots can be scripted to confirm email addresses automatically, especially when a real email address is used, and Twitter accounts with unconfirmed email addresses are still allowed to follow users, retweet, and like posts.

Another challenge botmasters face when trying to create and control multiple Twitter accounts is that each must be associated with a phone number in order to access Twitter’s API. Flashpoint analysts speculate botmasters are circumventing this measure by centralizing control of their bots, thus creating a Twitter botnet. The central bot can then create and control additional bots. This eliminates the need for hundreds of bots to access the Twitter API and be linked to phone numbers.
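The centralized control structure speculated here can be illustrated with a short Python sketch. The class names and command format are hypothetical, and no real Twitter API calls are involved; the point is that only the master needs API access, while puppet accounts merely execute fanned-out commands:

```python
class CommandBot:
    """A puppet account that executes whatever command it is handed.
    It never touches the Twitter API directly, so it needs no phone
    number or API key of its own."""

    def __init__(self, name):
        self.name = name
        self.log = []  # record of commands received

    def execute(self, command):
        self.log.append(command)


class Botmaster:
    """The single API-verified account that creates and controls the
    rest of the botnet, per the centralization pattern described above."""

    def __init__(self):
        self.puppets = []

    def enroll(self, bot):
        self.puppets.append(bot)

    def broadcast(self, command):
        # Fan one command out to every puppet; return how many acted.
        for bot in self.puppets:
            bot.execute(command)
        return len(self.puppets)
```

A single `broadcast(("retweet", tweet_id))` call thus translates into coordinated action across the whole network, without hundreds of accounts ever authenticating individually.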

Motivations for Using Twitter Bots

At a high level, Flashpoint analysts assess that there are three common motives for using bots:

  • Self-Promotion: Individuals or entities hoping to build a brand or simply to appear more influential on social media may benefit from boosting the number of followers they have. Businesses may also attempt to use Twitter bots to inflate their number of retweets and likes when rolling out announcements and promotions. The New York Times reports that public figures such as reality television stars, professional athletes, comedians, TED speakers, pastors, and models have purchased fake social media followers, acknowledging that a large Twitter following can lead to endorsement deals and other business opportunities.
  • Political Influence: Research by the Atlantic Council’s Digital Forensic Research Lab (DFRL) reveals that Twitter bots can be used to intensify discourse around divisive issues. In September 2017, the DFRL published a report detailing how Twitter bots were being used to amplify hashtags related to protests by athletes in the National Football League (NFL). Under the control of unknown botmasters, the bots promoted hashtags on both sides of the issue. Russian actors allegedly used Twitter bots for this purpose during the 2016 U.S. presidential election. With the ability to like, retweet, and comment on posts, as well as follow users, Twitter bots can make some topics—including falsified stories—trend, thereby boosting their visibility on the Twitter platform.
  • Click Fraud: Twitter bots are also used to support illegitimate activities such as so-called “click fraud.” In this scam, automated accounts create posts that direct users to sites, usually ones hosting a handful of advertisements controlled by the botmaster, in order to inflate traffic volume. These accounts increase the number of times that the advertisement or “paid content” is sent, shared, and “liked,” thereby boosting its visibility.

Identifying Twitter Bots

While it’s difficult to pinpoint any universal defining characteristics of bots, the DFRL has identified three general indicators for assessing whether a Twitter account is a bot:

  • Activity: Compared to regular Twitter users, bots tend to have abnormally high levels of activity, with some researchers regarding anything more than 50 tweets per day as suspicious.
  • Anonymity: Bots typically include little if any personal information on their user profiles. When biographical or location information is included, it is usually too vague to offer any means of identification. In many cases, bots’ usernames follow a formulaic pattern, such as a name followed by eight random digits.
  • Amplification: The activity of bots typically supports the amplification of a message through retweeting, liking tweets, and quoting other users. A bot’s profile typically contains little if any original content, favoring word-for-word quotes taken from news headlines, direct links to articles, and other easily programmable content that amplifies the message the botmaster wishes to spread.
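As a rough illustration, the three indicators above can be combined into a toy scoring heuristic. The thresholds, field names, and username regex below are illustrative assumptions for this sketch, not the DFRL’s actual methodology:

```python
import re


def bot_score(profile):
    """Toy heuristic applying the DFRL's three indicators: activity,
    anonymity, and amplification. `profile` is an assumed dict with keys
    username, bio, tweets_per_day, and original_tweet_ratio."""
    score = 0
    # Activity: some researchers regard > 50 tweets/day as suspicious.
    if profile["tweets_per_day"] > 50:
        score += 1
    # Anonymity: an empty bio...
    if not profile["bio"].strip():
        score += 1
    # ...or a formulaic username (a name followed by eight digits).
    if re.fullmatch(r"[A-Za-z]+\d{8}", profile["username"]):
        score += 1
    # Amplification: little or no original content among recent tweets.
    if profile["original_tweet_ratio"] < 0.1:
        score += 1
    return score  # 0 (likely human) .. 4 (strongly bot-like)
```

A profile like `{"username": "Patriot19382475", "bio": "", "tweets_per_day": 120, "original_tweet_ratio": 0.02}` trips all four checks, while a typical human account trips none; real detectors, of course, weigh many more signals than this.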

Fed up with the overabundance of politically charged propaganda bots, University of California, Berkeley students Ash Bhat and Rohan Phadte created the “Botcheck.me” browser extension for Google Chrome, which uses machine learning to assess whether a Twitter account is a bot. The plugin is able to detect only bots created to spread propaganda about U.S. politics. Since not all bots share common characteristics, programmers have yet to find a universally effective way of automatically detecting them.

Assessment

With the abundance of resources available on both the surface web and the DDW to help create and leverage Twitter bots, Flashpoint assesses with moderate confidence that Twitter bots will likely continue to be used to influence and distort information flows within the Twitter landscape.


About the author: Amina Bashir


Amina Bashir is an intelligence analyst at Flashpoint. Amina has conducted extensive research on IoT security and taught as an adjunct computer science lecturer at Hunter College, from which she holds a Bachelor of Arts in Computer Science. Amina’s research on "SpEED-IoT: Spectrum Aware Energy Efficient Routing for Device-to-Device IoT Communication" was recently published in Elsevier’s Future Generation Computer Systems journal, and she will present her research on collaborative adversarial modeling for spectrum-aware IoT communications at the International Conference on Computing, Networking and Communications (ICNC) 2018. She is fluent in Hindi, Urdu, and Punjabi, and she is also intermediately proficient in Spanish.

About the author: Liv Rowley

Liv Rowley is an Intelligence Analyst at Flashpoint. She speaks fluent Spanish and specializes in analyzing threats emerging from the Spanish-language underground with an emphasis on Latin America. Prior to Flashpoint, Liv’s passion for Latin America and the Middle East led her to pursue extensive research on the languages, culture, and political climate of these regions. She has studied abroad in Madrid, Spain and holds a bachelor’s degree in International Relations with a concentration in International Security from Tufts University.