Words: Grace Edward (she/her)
It is no surprise that, in recent years, the rapid rise of AI technology has spurred new debate about how society should operate, with political and moral ambiguities dominating and complicating these discussions. The integration of AI into everyday life has been met with much resistance, driven by fear, whether inspired by sci-fi films, social media, or scientific studies, of the unknown capabilities of the Pandora's box we are opening. This global fear has been dubbed, reassuringly, the 'AI Doomsday'.
The issue of AI and its legislation has become so pressing that a summit will be held in November, fittingly located at Bletchley Park, the wartime home of Alan Turing's codebreaking work and the birthplace of Colossus, one of the world's first programmable electronic computers. It seems appropriate that such an important summit on AI should take place at the home of early computing. The hope, according to the UK Government website, is that the talks will focus on 'building a consensus on rapid international action to advance safety at the frontier of AI technologies.' We can expect regulations on the use of AI to follow. However, how can restrictions be imposed successfully when the extent of the danger of AI is not yet fully understood?
The IBM website defines AI as technology that 'leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.' In other words, AI acts like a mechanical mind, mirroring our cognitive abilities. Herein lies a problem: unlike the human mind, no matter how sophisticated the machine, it does not understand emotion, morals, or ethics.
Unlike in films, AI presents itself in forms that are not always as crude or tangible as an evil robot. One prevalent form is social media, through which the macro-political influence of AI operates on an alarmingly microscopic scale. The Netflix documentary The Great Hack examines lobbying firms such as the now-defunct Cambridge Analytica. According to Amnesty International, 'Cambridge Analytica improperly obtained data from up to 87 million Facebook profiles – including status updates, likes and even private messages.' It seems our votes may no longer be our own.
This methodical manipulation has contributed to a substantial increase in polarisation on both sides of the political spectrum, a change that would not have occurred without the drastic expansion of AI technology. There is a strong argument that, through the devices we use daily, our political views and decisions are being unconsciously warped to fit the agendas of those in power, whether that be Trump or Elon Musk. If so, we no longer have full control over our political decisions, and it is reasonable to assume that AI will have a significant influence on the elections of 2024.
Currently there is little real protection against this global use of AI, as the law has struggled to keep up with an ever-evolving non-human entity. Across the globe, some four billion people are preparing to vote (in the US, Britain, India, Indonesia, Mexico, and Taiwan), while political campaigns prepare to harness the persuasive potential of artificial intelligence. All we can do is issue a warning: look out for disinformation, synthetic propaganda, hyper-realistic deepfakes, and micro-targeting through highly personalised messaging. We must research every source thoroughly and seek out opposing views, so as to gain a better understanding of our society. It seems the only defence against the weaponisation of AI lies in the hands of the individual.
In a study from Auburn University, cognitive neuroscientists found that 'most of our decisions, actions, emotions, and behaviour depends on the 95% of brain activity that goes beyond our conscious awareness.' This is where AI works most efficiently. Be it through government imposition or the targeting of our subconscious, the influence of AI is nearly impossible to avoid.
A more immediate threat to human life lies in the military implementation of AI, where there are proposals to replace human decision-making in battle with AI technologies. The US government agency DARPA (Defense Advanced Research Projects Agency), which helped pioneer the internet, GPS, satellite technology, and the mRNA research behind the Moderna vaccine, is exploring this idea. The British Army, however, insists, mainly through its recruitment adverts, that although AI may be used in surveillance and drones, a human soldier remains the best military technology it has.
Evidently, AI is seeping into every aspect of our existence. While we are hesitant to admit it, the potential of AI is beyond our comprehension. While a human soldier is valued today, who knows where our trust will lie tomorrow? Either way, we should develop our understanding of the full power of AI in order to implement effective restrictions.
However, it can still be argued that the fusion of AI into civilian society is more dangerous than its military use. The military seems to have a more thorough grasp of AI technologies and of where and when they can be used, whereas in civilian life there are no rules or ranks to follow. A self-driving car in the hands of a civilian with limited knowledge of AI may be accountable to the law, but it cannot be trusted in the way a trained soldier is accountable to their commander. These legal ambiguities present difficult questions about responsibility and liability.
Ultimately, however, it seems the use of AI should be minimal and somewhat restricted in order to prevent the further escalation of a possible 'Doomsday'. While it is doubtful that the AI summit will reach a global consensus on legislation controlling AI, international conversation is always to be encouraged. But we must not wait for those in charge to make a plan. We as individuals need to take a more cautious approach when opening up to, and unlocking, the endless abyss of perilous uncertainty that is Artificial Intelligence.