Between 1642 and 1651, England was in turmoil as Royalist and Parliamentarian forces fought over the kingdom’s future. One man, deeply disturbed by a conflict that left the future uncertain and a death toll of around 200,000, strove to understand how such an outcome could have been prevented. Thomas Hobbes, born in 1588, was a political philosopher who popularised the idea of the ‘social contract’ and conceived an imaginary Leviathan, which he believed would show how to prevent civil war, which he defined as the “death” of the state.
Written in 1651, Leviathan is Hobbes’ most renowned work, although he had already explored the naturally chaotic state of men in his 1642 book De Cive. Influenced by the horrors he witnessed during the English Civil Wars, Hobbes undoubtedly had noble intentions in seeking eternal concord through the creation of the Leviathan: an artificial communion of men, much like the artificial intelligence so prevalent today. This Leviathan would answer only to one higher authority, God, whom Hobbes calls the Artificer. In this regard, artificiality is a key component of Hobbes’ work, for the Leviathan is not a natural occurrence. Also called the Commonwealth, the Leviathan is “but an artificial man”, and its parts resemble our own as biological beings: the officers are its joints, the citizens’ wealth its strength, the counsellors its memory, the laws its will, and civil war its death.
How and why is this artificial man created, and beyond its resemblance to a natural body, how is its artificiality connected to humankind as a whole? First, Hobbes’ account of the natural state of men is rather bleak: life is short and brutish. In his view, where no system has been established and complete anarchy prevails, all men are capable of murder and evil, for there is no sovereign dictating morals. There is no security whatsoever, and trust between fellow men is nonexistent. To escape such a vile state of life, there is no option but to form a social contract that includes all individuals and creates the Commonwealth by reducing “all their wills, by plurality of voices, unto one will”. This will is the Leviathan itself. The collective sovereign, composed of many people, has the sole responsibility of ensuring safety and can only be created if the people hand over their powers to it voluntarily, much like the terms and conditions we accept when we use artificial intelligence today.
AI itself is not necessarily capable of enforcing security, nor does it have the Hobbesian Leviathan’s capacities for waging war, judging or organising. At the same time, the current state of AI is not enough to tell us whether it will attain such powers in the near future, even as a social contract is already being formed between AI and its users. Over-dependence on AI is hugely controversial albeit widespread, with people attributing human emotions to a non-sentient, artificial database. As this social contract takes shape, we are handing over our “will” to AI, much as Hobbes would have wanted the Leviathan to operate. The natural man might have formed such a contract to prevent bodily harm, yet as the modern person faces many different problems, the same will is being transferred to AI in search of safety in other aspects of life: interpersonal problems, work disputes and even the articulation of one’s own thoughts are now entrusted to AI at the cost of our own wills.
Nonetheless, AI usage is not exclusive to private persons; states also employ it in the field of law enforcement. Abusing facial recognition technologies and monitoring social media posts, some states target alleged dissidents, while others have set out to create smart cities centrally controlled by AI in the name of the greater good of all. In workplaces, employers turn to AI to monitor their employees’ actions, often disregarding their personal autonomy. These are arguably different social contracts signed with AI by Leviathans themselves, and thus AI emerges as a Leviathan above all Leviathans: the Meta-Leviathan.
Yet in transferring their authority to such artificial systems, these minor Leviathans begin to lose their own sovereignty: the creation starts to command the creator. By contrast, the Hobbesian Leviathan is constrained by the collective will of the people and answers only to one Artificer. If we take authoritarian states to be the new Leviathans, their dependency on AI inevitably leaves them vulnerable in an unexpected way: they have to put their trust in AI, but AI owes no such trust in return. These Leviathans do not answer to the Artificer, but to their own creation, which has risen above them. This dilemma will surely deepen as AI usage spreads through different fields and goes beyond what it currently is.
Hobbes had, although not with great emphasis, also acknowledged the Leviathan’s responsibility toward the communion of men. Although the Leviathan is created for the purpose of safety, safety here does not mean “a bare preservation, but also other contentments of life.” In other words, the Leviathan’s legitimacy depends not only on ensuring security from war but also on enabling the conditions for its subjects’ well-being. AI, on the other hand, has no obligation to its users and no moral framework. Responsible AI and AI ethics, two relatively new fields, aim to fill this gap by facilitating human oversight through the involvement of specialists, legal experts, scientists and organisations. Such collaboration should ensure that AI is fair, transparent and protective of the data it is entrusted with, which takes us back to “one will”. If AI is the Meta-Leviathan, with all the wills combined, how can it be kept in check by the participants of the social contract? The social contract shall not be broken, after all, for as Hobbes argues, “if the essential rights of sovereignty be taken away, the Commonwealth is thereby dissolved, and every man returneth into the condition and calamity of a war with every other man, which is the greatest evil that can happen in this life”. As technology advances and AI gains partial autonomy, the challenge will be to maintain oversight and ensure that this new Meta-Leviathan continues to serve human interests rather than undermine them.
AI as the Meta-Leviathan extends Hobbes’ artificial sovereign beyond human limits, yet it lacks moral responsibility or conscience. Unlike the Hobbesian Leviathan, it answers to no Artificer and obeys only the big data it is fed, leaving all the individuals bound by the social contract, persons and states alike, dependent on a system that owes them no trust in return. Hobbes’ warning remains relevant: order and security demand vigilance, but the true test lies in ensuring that the artificial sovereign serves humanity, and not the other way around.




















