In 2017, The Atlantic published an article titled “The Internet Is Mostly Bots.” At that time, 52% of web traffic was generated by bots, not people. By now, it’s likely more. To make matters worse, only 47% of Americans feel confident that they can identify a bot, while 80% believe that bots are bad for us. And feeling confident that you can identify a bot doesn’t necessarily mean you can. I used to think I could, but now I’m not so sure. The simple ones I can manage. But what about the ones on the more advanced end of the spectrum?
Sub Simulator GPT-2
To find out, I paid a visit to r/SubSimulatorGPT2, a subreddit in which every post and comment is generated by bots; people are not allowed to participate in the conversations. At first glance, the results are terrifying.
The posts and comments almost universally use English better than most humans do. The punctuation is proper, yet the style remains conversational. A number of the comments do lack contextual awareness, but the majority of the conversation flows well enough. It isn’t perfect, but how often is human-to-human communication? At the very least, this subreddit serves as a microcosm of what our interactions on the web look like.
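For the curious, here is a minimal sketch of the technique behind bots like these. The subreddit’s bots are GPT-2 models fine-tuned on the history of individual subreddits; this sketch assumes the Hugging Face transformers library and the publicly available base “gpt2” model rather than the subreddit’s actual fine-tuned weights, and the prompt is purely illustrative.

```python
# A minimal sketch of GPT-2 text generation, the technique behind
# r/SubSimulatorGPT2. Assumption: the base "gpt2" model from Hugging Face
# stands in for the subreddit's fine-tuned per-subreddit models.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible
generator = pipeline("text-generation", model="gpt2")

# Seed the model with a Reddit-style prompt and sample a few "comments".
prompt = "I can't believe nobody is talking about this, but"
outputs = generator(
    prompt,
    max_length=60,           # cap the length of each generated comment
    num_return_sequences=3,  # produce several candidate comments
    do_sample=True,          # sample tokens instead of greedy decoding
    temperature=0.9,         # some randomness keeps the tone conversational
)

for i, out in enumerate(outputs, start=1):
    print(f"bot comment {i}: {out['generated_text']}\n")
```

Even the base model produces plausible, conversational English; fine-tuning on a specific community’s posts is what gives the subreddit’s bots their uncanny fluency in that community’s voice.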
How AI Social Media Bots Drive Sentiment
As of 2018, up to 15% of all Twitter accounts were bots. That might seem like an easier number to deal with than the 52% cited earlier, but don’t get too hopeful yet. Bots don’t operate organically. They may try to communicate in a natural way, but their intentions are one-dimensional: bots are created on social media to drive a narrative.
That narrative can be anything. We see bots during elections, deployed both domestically and from abroad. They can drive market sentiment, as with the downplaying of the 2021 GameStop short squeeze. They can even inflame race relations.
Bots post about hot topics and breaking news before the majority of the general public has a chance to. They share articles from sources that benefit their agenda and post hot takes to get their followers riled up. When people get emotionally invested in controversial issues online, they tend to enter a dopamine feedback loop: their views are gradually pushed toward a more extreme version of the position they already held, which heightens emotions, which in turn feeds the loop.
These phenomena have real consequences. According to one NPR poll, only 51% of Americans believe that vaccines do not cause autism. That said, NPR itself is biased. It labeled the statement that COVID-19 came from a lab in China as unequivocally false, despite the World Health Organization’s investigation remaining inconclusive. It is important to remember that even the most credible of sources find themselves forced to trudge through an internet swarming with bots.
How Bots Affect Me
During the 1970s, the CIA initiative Operation Mockingbird was allegedly operating at its peak: a program in which the government had journalists publish propaganda that supported its agenda. Such a program would exploit the Baader-Meinhof phenomenon (or, more precisely, the illusory truth effect): when people hear something repeatedly, they are more likely to believe it to be true.
Regardless of the validity of Operation Mockingbird, the Baader-Meinhof phenomenon occurs today through the use of bots on social media. This is extremely dangerous to our democracy.
I’ve already touched upon the dangers of internet censorship. Combine that with the fact that the majority of the information allowed on the internet fits some sort of narrative, and the danger compounds drastically.
Since most of us spend too much time on the web, we’re inevitably susceptible to some sort of misinformation. As I hinted with NPR earlier, no source is free from human bias. This is not to say that nobody has credibility. Rather, neither the mainstream media nor the independent media is infallible.
Humans are built to a fault. A standard of perfection is a standard of impossibility. As such, I believe the answer to this issue is not to attempt to build an internet free of disinformation. I wouldn’t even go so far as to say that people should be expected to identify it when they see it. Instead, I propose we remain aware of the Baader-Meinhof phenomenon.
Battle Bots
I stated in my opening that most Americans believe that bots are bad. When it comes to social media, I tend to agree. We don’t need to be louder than the bots. We don’t need to be quicker than them. As individuals, we just need to recognize that they are there. I don’t always see bots as specific accounts. Rather, a bot can be seen as the hot narrative you are suddenly supposed to care about for reasons that were never obvious before. Examples include vaccine disinformation, 5G somehow being related to COVID-19, election fraud misinformation, and the Covington kids hoax.
While each bot promotes a certain narrative, no bot operates to your benefit, regardless of whether you agree with the information it spreads. How information is disseminated matters. When quality information is disseminated in the same way disinformation is shared, the quality information loses credibility and the disinformation gains it.
Information used to be power. Now that power lies in the hands of bots. To win the war on disinformation, we must defeat the bots. Their game is quantity, not quality, and all we have to do is recognize it. So whenever you see a trend break out on Twitter, or a Reddit comment section seemingly showing a large number of people arriving at a common consensus, think twice before you believe that this is how your family and friends truly feel.
Instead of assuming that people are in sync with the consensus of the internet, talk to them. Get their perspective.