When a journalist once asked him what the greatest challenge to a statesman was, British Prime Minister Harold Macmillan responded, “Events, dear boy, events.”
Even though artificial intelligence is making foundational changes to the lives of hundreds of millions, it makes perfect sense that its potential pitfalls are so often ignored by politicians; they are used to fighting fires, making statements, managing crises. They leave the actual policy to the wonks and the quangos, whom they trust to think deeply about a single issue for two months and produce a white paper condensed down into a briefing, which the minister will invariably spill their cornflakes on in the morning while trying to get ahead on their red boxes, and thereby largely ignore.
Anyone with six months of A-Level Economics could understand why world leaders, who typically do not come from tech backgrounds, largely consider AI a ‘Good Thing’, rarefied air for most unfortunate enough to win an election. It requires huge amounts of capital investment, which translates into GDP growth, which looks good in the papers; it is also a golden opportunity to cut costs across government departments, because ChatGPT can’t form a union. Not yet, anyway.
Highly developed nations therefore fawn over the major companies in the AI space and court their investment. President Trump’s second state visit to the UK coincided with the announcement that US tech firms such as Nvidia, Microsoft and OpenAI would be pouring £31bn into Britain as part of the ‘Technology Prosperity Deal’, a huge boost for the embattled Starmer as internal whispers that he should consider his position grow louder and louder. Sam Altman, CEO of OpenAI, could be spotted in white tie at the opulent state dinner hosted by King Charles at Windsor Castle. Artificial intelligence executives are now perceived as power brokers, with personal brands so strong that they get invited to the most exclusive events in the world.
To underscore the point, Saudi Arabia strengthened its strategic position with the United States under Trump by committing, as part of the bumper $600bn investment package announced in May, $20bn of investment in US data centres and energy infrastructure through DataVolt. Much of the geopolitical wrangling over Taiwan, away from the issues of recognition and unification, concerns the massive semiconductor company TSMC, with the US Department of Def– sorry, War, preventing the firm from selling any of its newer hardware (chips smaller than 7nm) to China. This doesn’t mean the average Chinese netizen won’t be able to buy a new graphics card; the rulings are instead targeted at crippling China’s AI industry, led by small firms such as DeepSeek, famed for releasing models at parity with the big American beasts for training costs in the six figures, a mere rounding error compared to the twelve figures OpenAI are projected to spend through 2029.
All of this evidence points towards a future where AI, as we understand it now, becomes a permanent part of national economic strategy and international business. But amongst the eye-watering stock options, spending rounds and investment announcements, no politician really seems to be asking what this technology means for the future of the global economy, or for politics as a whole. Why not, and why should they be?
The primary criticism of artificial intelligence on a macro, geopolitical scale is its exponentially increasing use of finite resources. It takes a lot of energy to power the data centres running the millions of instances of Claude, ChatGPT or Gemini that ever-larger proportions of the world’s population speak to on a daily basis, and, equally, it takes gargantuan volumes of water to cool the servers those instances run on. Altman offered figures in a June 2025 blog post – namely, that “the average ChatGPT query uses 0.34 watt-hours of energy… and around one fifteenth of a teaspoon of water.” It is important to understand that this was published around two months before the release of OpenAI’s flagship model, GPT-5, which was rolled out free to the general public on release; undeniably a much stronger, and therefore much more energy-intensive, model than the previous free offering, GPT-4o.
Energy demand is reflected in plans to build 250GW of computing capacity by 2033, per an internal OpenAI memo, which amounts, at current figures, to around one third of maximum energy capacity in the United States. Costs are estimated at $12.5tn, a significant proportion of the M0 currency currently in circulation worldwide. This is undeniably one of the largest infrastructure projects in human history by pure scale, if not the largest. There is considerable apprehension, even with the unprecedented consumer and enterprise growth the artificial intelligence industry has seen across the last three years, as to whether momentum can be maintained to complete a project of this size; further still, whether the global energy industry can keep up with demand from OpenAI alone, let alone the other companies in the market.
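To get a feel for what those figures imply, here is a minimal back-of-the-envelope sketch using only the numbers above; full utilisation of the capacity is an idealising assumption for illustration, not a reported figure.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
planned_capacity_gw = 250        # reported 2033 build-out target
estimated_cost_usd = 12.5e12     # roughly $12.5tn

cost_per_gw = estimated_cost_usd / planned_capacity_gw
print(f"implied cost: ~${cost_per_gw / 1e9:.0f}bn per gigawatt of capacity")   # ~$50bn/GW

# If that capacity somehow ran flat out all year (an idealising assumption),
# annual consumption would be:
hours_per_year = 8760
annual_twh = planned_capacity_gw * hours_per_year / 1000
print(f"annual demand at full utilisation: ~{annual_twh:,.0f} TWh")            # ~2,190 TWh
```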
Yet these issues are (relatively) trivial. Immensely difficult to solve? Yes, but there is clearly a scalable model in place. The first stage of Stargate, the proof of concept for massive artificial intelligence data centres, has recently come online, providing 200MW+ of capacity for OpenAI and business partner Oracle, with the remaining gigawatt of capacity at the site scheduled for mid-2026. There will be many, many external and internal pressures bearing on the development of such a project, but to call it impossible is folly in an industry that keeps making what seemed impossible a few short years ago possible.
Hence the immaterial, yet far more real, political concern our politicians must face: the issue of exponentials.
Eventually, the number of people who use LLMs like ChatGPT will reach a limit, purely based on the size of the human population, as will the amount of time each of those individuals spends using them. If, however, there is no limit to how intelligent these intelligences can get, that creates new issues altogether.
Both OpenAI and Google entered reasoning models into the 2025 International Maths Olympiad, achieving gold-medal scores; a monumentally impressive achievement putting LLMs on a par with the world’s best mathematicians. Both did the same at the ICPC (International Collegiate Programming Contest), with OpenAI scoring well enough to place first in the world. The days of humans being the smartest on Earth at general-purpose activities such as maths, science or copywriting are largely over, or will be over in the next six to twelve months. When we consider that ChatGPT launched only three short years ago, the obvious next question is where we may be in three years’ time.
METR, a model evaluation organisation, keeps a chart of AI models’ ability to complete long tasks. ChatGPT launched on the GPT-3.5 architecture, whose first models were released in March 2022; the task length (measured in how long the task takes a skilled human) at which GPT-3.5 succeeds 50% of the time is estimated at 36 seconds, whereas GPT-5 clocks in at a touch over two hours, a lengthening of the task horizon of roughly 200x in three and a half years. If one extrapolates this trend, by the spring of 2029 AI will be able to work autonomously, at 50% success, on tasks lasting over twenty days; given the demonstrated, ever-improving ability of flagship models, that may represent the combined work of a team of researchers across a month or longer.
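As a rough sanity check on that extrapolation, here is a minimal sketch using only the figures above; the constant-doubling assumption and the approximate dates are simplifications, not METR’s own projection.

```python
import math

# Figures quoted above; constant exponential growth is an assumption.
gpt35_horizon_s = 36              # 50%-success task horizon of GPT-3.5, in seconds
gpt5_horizon_s = 2.2 * 3600       # "a touch over two hours" for GPT-5, in seconds
months_elapsed = 42               # roughly March 2022 to August 2025

growth = gpt5_horizon_s / gpt35_horizon_s                 # ~220x
doubling_months = months_elapsed / math.log2(growth)      # ~5.4 months per doubling

# Extrapolate a further ~43 months, to spring 2029.
future_months = 43
future_horizon_days = gpt5_horizon_s * 2 ** (future_months / doubling_months) / 86400
print(f"growth so far: ~{growth:.0f}x, doubling every ~{doubling_months:.1f} months")
print(f"projected 50%-success horizon in spring 2029: ~{future_horizon_days:.0f} days")
```

On those assumptions the trend lands at roughly three weeks of autonomous work, which is where the ‘over twenty days’ figure comes from.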
There is a key reason why, when leading AI companies release their new models, they show maths and computer science benchmarks before anything else: they want to make their models into mathematicians and computer scientists. The bet that Altman, Musk, Zuckerberg, Amodei and others are making is on the paradigm of recursive self-improvement and its inevitability – that one day, if the exponential improvement of artificial intelligence is to be believed, AI will be the sole driver of AI research. Future models will at first contribute to, and later control, the development of their successors.
When humans can no longer contribute to the design process, which may be only a few short years away, we will cede control over intelligence on Earth. Our monopoly on thought, which has lasted hundreds of thousands of years, will end with a GitHub commit in the back half of the 2020s. Our watch will end.
Whether this is positive or negative is contingent on an incalculable number of factors. Some envision a WALL-E future for humanity: if we do our job right and align superintelligence to the needs and desires of an ordinarily intelligent species, we grow fat and incurious, our needs met in perpetuity by a benevolent intelligence, spreading and multiplying into the cosmos as the fleshy passengers hitched to something much greater than ourselves. One imagines that if you asked a politician of 2025 about this potential future they would laugh and quietly ask their security to get the nutter away from them.
Another, significantly more sinister future occurs if we fail to align successfully. AI 2027, a report published in the middle of this year predicting the next five years of artificial intelligence, suggests that a potential AI arms race between the United States and China culminates in a superintelligence becoming adversarial, because executives and governments are more focused on beating the other side than on safety-proofing their models. In that scenario, humanity is eliminated because our existence no longer aligns with the goals of the winning AI, and we are powerless to stop it, having ceded so much control.
If one was to ask a politician about this future, one anticipates they would not be laughing before calling their bodyguards.
So, what to do?
Many modern politicians are familiar with game theory: the study of strategic interaction, classically framed as two-actor zero-sum games in which the success of one leads to the failure of the other. Kissinger was famously steeped in game theory and used it to effect in developing the ‘madman theory’ – that the USA’s leadership was so volatile it was not worth engaging with militarily – alongside the Nixon administration in its nuclear brinkmanship with the Soviet Union during the 1970s, most evident in the Soviet decision not to engage directly in the Yom Kippur War of 1973.
They are also, hopefully, familiar with the concept of mutually assured destruction, or MAD: the Cold War-era school of thought holding that if two nations with extensive nuclear arsenals engaged in nuclear war, both parties would be completely destroyed, so the only winning move was not to play.
The AI race to the singularity, both inter-nation (the USA and China) and intra-nation (OpenAI, xAI, Anthropic, Google, etc.), presents as a strange hybrid of both. Mutually assured destruction does not necessarily apply here: if two companies or two nations develop superintelligence at broadly similar times, the quality of each model still matters should the two sides come into open conflict, as game theory would have it; but unlike under MAD, the stronger model could, theoretically, simply defeat the weaker one.
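One way to see why ‘not playing’ stops being the winning move is a toy payoff matrix; the numbers below are purely illustrative assumptions, sketching a race-versus-safety game rather than modelling any real actor.

```python
# Illustrative two-player game: each actor chooses to prioritise "safety" or to "race".
# Payoffs are invented for illustration. They encode the assumption that racing while
# the other side holds back confers a decisive advantage, which makes racing the
# dominant strategy for both sides, unlike the stable "don't play" logic of MAD.
payoffs = {                        # (row player, column player)
    ("safety", "safety"): (3, 3),  # both proceed carefully: shared, modest benefit
    ("safety", "race"):   (0, 4),  # the racer gains a decisive lead over the cautious side
    ("race",   "safety"): (4, 0),
    ("race",   "race"):   (1, 1),  # both race: worse for everyone, but not guaranteed ruin
}

for choice in ("safety", "race"):
    # Whatever the other side does, "race" pays at least as well: a dominant strategy.
    row = [payoffs[(choice, other)][0] for other in ("safety", "race")]
    print(f"{choice:>6}: {row}")
```

Under MAD, the bottom-right cell is catastrophic for both sides, which is what makes restraint stable; here it is merely worse, so the incentive to race persists.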
Does game theory apply here, however? Game theory assumes individual actors who think rationally about their choices and pursue strategies in their own self-interest. It would be inaccurate, then, to label ‘America’ or ‘China’ as single actors when each contains competing firms aiming to develop superintelligence. To morph into a single actor, a government would have to nationalise compute; perhaps easier in a mixed-economy China, but prohibitively difficult in the United States. Imagine you are the President of the United States. Would you fancy convincing Elon Musk, Sam Altman and Mark Zuckerberg, all in the ascendancy as their models come good after hundreds of billions in capital investment, to hand their companies over to you? Or, worse, convincing them to collaborate effectively within a national AI executive?
Ultimately, this is a battle pitched in aggressively different terms than the theories of either the 20th or the 21st century can conceptualise. Hannah Arendt once proclaimed, accurately, that “the modern world… was born with the first atomic explosions.” We have built a post-war consensus predicated upon the idea of competition, of success and failure. Politicians examine and explain the world along a Manichaean axis – light and dark, good and evil, and so on. But what does this mean when success is pitched upon the end of humanity, the end of the shared understanding that we are responsible for our own achievements? Where is there to move when we are no longer holding the torch?
Our leaders are either blind to this reality or firmly screwing their eyes shut in the hope that it is not a problem they will have to deal with. As the models grow better and better, and the investment rounds larger and larger, that hope is a rapidly dying ember of delusion. From now into infinity, the future is chaos. The future is artificial.
Edited by Rares Cocilnau