by Kathryn Blake (she/they)

Before writing this, I was only vaguely aware of bots and their slightly threatening internet presence: that they were somehow partly responsible for the outcome of the 2016 American election, and at the same time could offer gentle self-care advice just when I needed it. I had no idea of the magnitude of the botosphere and what it means for the meaninglessness of The World on the Internet, which is to say, the world as we know it. While bots exist in every murky corner of social media, it’s on Twitter that they have the means to capitalise on a formula which happily mushes together truth, believability and what sells. There are a few basic categories of bots. At the saddest and most simple level, bots are the go-to for start-ups and influencers looking to expand their popularity with a whole host of ‘paid-for friends’ who subscribe to their channels, hype up and like their content. Bots that generate content work off what’s called a Markov chain: essentially a predictive-text algorithm that’s slightly less basic than a Nokia 3310’s, stringing words together according to which word has been seen to follow which. And because bots can essentially be anyone without necessarily doing anything, they have raised alarm bells in recent years over their potential power to influence real-world events.
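The Markov-chain trick is simple enough to sketch in a few lines of Python. This is a minimal illustration, not the code of any actual bot, and the corpus here is invented: the chain just records which word has been seen to follow which, then walks those records to babble out something new.

```python
import random
from collections import defaultdict

# Invented corpus for illustration; a real bot would train on scraped tweets.
corpus = (
    "the bots retweet the news and the news retweets the bots "
    "and nobody knows what the news means any more"
).split()

# Map each word to the list of words observed directly after it.
chain = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    chain[current_word].append(next_word)

def babble(start, length=8):
    """Generate up to `length` words by walking the chain from `start`."""
    words = [start]
    for _ in range(length - 1):
        followers = chain.get(words[-1])
        if not followers:  # dead end: no observed successor
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(babble("the"))
```

Every word in the output follows a word it has genuinely followed before, which is why the result reads as plausible nonsense rather than noise.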

The bots you see retweeting all those conspiracy theories have been a primary catalyst for the digital ethics debate. There’s a lot of fear-mongering and speculation about the ‘power’ of bots to swerve election results, propagate protests and direct revolutions. But while it might be easy to throw the blame at bots for stirring political furore in keyboard warriors across the globe, it’s not exactly true. Researchers at the University of California studied the effect and influence of bots on conspiracies surrounding the Covid-19 pandemic and came up with a lot of… well, nothing. Bots are definitely around these conversations, but they’re not starting them. They’re disinterested creatures: they know not what they do; they’re not even ‘doing it’. It’s a programmed, cumulative reflex, with no thought or intention behind it. Bots do have a tendency to surge during times of political unrest (which, on Twitter during the last six years, is always) and huddle around accounts that push conspiracy theories and exclamatory talk. But these bots didn’t generate content; they simply liked and retweeted ad infinitum (well, ad ‘until suspended’). They’re not changing our beliefs, merely amplifying them. Bots have the potential to realign the hyperbolic cacophony of real life on the internet (which is to say, not real life at all) simply by holding a mirror up to it. Who we are, they become.

The news – like the actual news – on Twitter is an echo chamber of vapid sensationalism. The all-caps doublespeak churned out 24/7 is accepted because we are fallible, busy and bored humans with horroristic headlines thrown at us constantly, so a bot-derived headline or the latest trending absurdity doesn’t read untrue to us. Enter ‘Two Headlines’, Darius Kazemi’s bot, which scrapes the pages of Google News and mashes two stories together: real news chewed up and spat out as a syntactically accurate, semantically nonsensical, almost entirely believable headline. ‘John Cena’s efforts to build a Trump Tower in Moscow went on longer than he has previously acknowledged’ is one example. And at a glance – and what internet readership isn’t ‘at a glance’? – it can ring true. The fact that a bot can become the only true thing on a platform shows us that what news thrives on, and what Twitter depends on, is post-modern catastrophe capitalism.
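The core move of ‘Two Headlines’ can be sketched in a few lines. This is a toy version: the real bot pulls live headlines and finds named entities automatically, whereas here both headlines and the entities are hard-coded, and entity spotting is reduced to a plain string swap.

```python
def mash(donor_entity: str, headline: str, target_entity: str) -> str:
    """Graft the entity from one headline into another by replacing
    only the first mention of target_entity (so incidental mentions,
    like 'Trump Tower' below, survive the swap)."""
    return headline.replace(target_entity, donor_entity, 1)

print(mash(
    "John Cena",
    "Trump's efforts to build a Trump Tower in Moscow "
    "went on longer than he has previously acknowledged",
    "Trump",
))
# Syntactically accurate, semantically nonsensical.
```

The grammar of the original headline is untouched; only the subject changes, which is exactly why the result still scans as news.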

And there are bots out there doing god’s own work: sharing pictures of cats, or posting randomly generated quotes from the TV show Mr. Robot on the hour, every hour. And it’s because bots can be anything – because they can exist across the spectrum of political debate, conspiracy and literal nonsense – that they end up diffusing the seriousness of what is supposed to be serious and allowing meaning to collapse in on itself.

Bots replicate and double down on a constantly evolving digital dialect which, in turn, reflects and responds to bot-ish speech. In the same way that memes become templates for jokes which in turn become their own references, bot-speak creates a grammar that becomes a shorthand for the way we talk to each other on the internet. This allows bots like @oliviataters – designed to resemble the being and nothingness of a teenage girl by generating random content from actual teenagers’ language – to sound real when they talk shite. Directly appropriating teenagers’ actual language to generate its random content means a linguistic hybrid emerges that is not at all fake, but absolutely unreal. As our way of communicating on the internet develops, so it does for the bots. They can be surprisingly difficult to parse from the endlessly churning Twitter feed because, at a quick glance, they sound ‘enough’ like real people.

The boundary between the internet and reality is gossamer thin. As Zuckerberg’s metaverse encroaches upon us, we can expect bots to become a natural extension of the spam we ourselves create. They toe the line between the sublime and the ridiculous, then step over it for the sake of it. Anything we make of them we make ourselves; they’re just watching, waiting and retweeting, because anything we can do, they can do bot-ter.

