By Ephraim Agbo
The AI story that matters right now isn’t just about flashy models or viral demos — it’s about where the heavy lifting happens. Over the past year, governments and corporations have shifted their attention from research labs to hard infrastructure: gigawatts of power, racks of GPUs, and sprawling data centers. Those investments are creating a new global geography of compute power — and with it, winners and losers.
Below I unpack what’s happening, why it matters for countries outside the U.S. and China, and what governments, companies and development institutions might do about it.
Big money, bigger stakes
Early in 2025 a high-profile pledge crystallized the commercial and political appetite for AI infrastructure: OpenAI announced the Stargate Project, a plan to invest $500 billion over four years to build AI infrastructure in the United States. The initiative frames compute not just as a business asset but as national economic policy and job-creation machinery.
Around the same time, Apple committed to spending and investing more than $500 billion in the U.S. over the next four years, explicitly linking part of that plan to AI, silicon engineering and workforce development. That kind of headline capital — half a trillion dollars from a single private company and another half-trillion pledged for AI infrastructure — helps explain why data centers and chip supply chains now sit at the center of geopolitics.
Europe and other actors are responding. The European Commission launched InvestAI, a plan to mobilize €200 billion for AI investment across the bloc — from AI “gigafactories” to cloud capacity and public-private funds intended to keep more of the stack on European soil. These are efforts to build both capability and regulatory leverage.
Taken together, these announcements aren’t marketing copy. They’re the start of a capital-intensive phase where compute capacity — not just algorithms — becomes the gatekeeper to economic opportunity and regulatory control.
Who actually owns the pipes and servers?
Behind the headlines is an empirical fact: AI compute is geographically concentrated. Recent academic work mapping the "political geography of AI infrastructure" finds that only a few dozen countries host the kind of data centers used to train and run the most powerful AI models. That concentration maps onto economic power: the U.S. and China together host the bulk of AI compute capacity and most hyperscale providers. The result is that countries without local compute become dependent on foreign infrastructure and the legal/regulatory regimes that govern it.
That concentration has supply-chain consequences, too. NVIDIA — the company whose data-center GPUs are the workhorses of modern model training — held an estimated share of more than 90% of the data center GPU market in recent years, meaning hardware bottlenecks and export controls ripple through the whole ecosystem. This hardware centralization amplifies the geopolitical stakes of where data centers and chip fabs are located.
What this means on the ground: two short case studies
Córdoba, Argentina — improvisation against the odds
Nicholas, a computer science professor in Córdoba, runs one of Argentina’s most advanced AI hubs — in a converted room full of 2012-era servers retrofitted with GPUs. He and his team cobble together hardware, chase loans from multilateral lenders, and argue for investment on sovereignty grounds: without local compute and the talent that depends on it, Argentina risks outsourcing both the benefits of AI and the rules that govern it. That mix of ingenuity and constraint is typical of research hubs in countries with limited capital budgets. (This profile comes from field reporting and interviews.)
Nairobi, Kenya — time zones and queuing for scarce compute
In Nairobi, engineers sometimes schedule heavy workloads overnight, when U.S. demand is low, to get access to remotely hosted GPU capacity. The workaround is a workable stopgap, but it is inefficient and leaves businesses and researchers beholden to foreign platforms. The practical upshot: innovation cycles slow, and sensitive data sits abroad, outside national jurisdiction.
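To make the workaround concrete, here is a minimal sketch of the “queue it overnight” pattern: a script that waits until a presumed low-demand window in the provider’s home time zone before launching a job on rented GPU capacity. The window hours, the UTC framing, and the `train.py` command are illustrative assumptions, not details from the Nairobi reporting.

```python
from datetime import datetime, timedelta, timezone
import subprocess
import time

# Illustrative assumption: demand on U.S.-hosted GPUs is lowest
# roughly 02:00-06:00 U.S. Eastern, i.e. 07:00-11:00 UTC.
OFF_PEAK_START_UTC = 7   # hour, inclusive
OFF_PEAK_END_UTC = 11    # hour, exclusive

def seconds_until_off_peak(now: datetime) -> float:
    """Seconds to wait until the next off-peak window opens (0 if open now)."""
    if OFF_PEAK_START_UTC <= now.hour < OFF_PEAK_END_UTC:
        return 0.0
    start = now.replace(hour=OFF_PEAK_START_UTC, minute=0, second=0, microsecond=0)
    if now.hour >= OFF_PEAK_END_UTC:
        start += timedelta(days=1)  # today's window already passed; wait for tomorrow's
    return (start - now).total_seconds()

if __name__ == "__main__":
    wait = seconds_until_off_peak(datetime.now(timezone.utc))
    print(f"Sleeping {wait / 3600:.1f}h until the off-peak window opens...")
    time.sleep(wait)
    # Hypothetical training command; substitute your real job launcher.
    subprocess.run(["python", "train.py", "--epochs", "10"], check=True)
```

The design choice is the telling part: the schedule is keyed to someone else’s time zone, which is exactly the dependency the case study describes.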
The risks of concentrated compute
- Regulatory leverage — Hosting infrastructure gives states jurisdictional power: the ability to enforce standards, compel data access, and apply export or privacy rules. If most compute sits outside your territory, your regulatory choices are constrained.
- Economic capture — When the major hyperscalers and chip vendors concentrate market power, they also capture rents: the best jobs, the tax base, and the ancillary services that multiply local GDP. This can accelerate brain drain from middle- and low-income countries.
- Resilience and security — Concentration raises systemic risk: supply bottlenecks for chips or power (and export controls) can disrupt entire national AI programs. The recent tightening of U.S. export controls on advanced chips illustrates how easily hardware policy becomes strategic policy.
- Inequitable model development — Models trained on infrastructure concentrated in certain jurisdictions will reflect the commercial priorities and datasets available there. Languages, use cases and data relevant to other regions may be underrepresented.
Why some places still innovate despite constraints
Constraints breed specific kinds of innovation. African developers and Argentine researchers are building “frugal AI” workflows: model distillation, edge-first compute strategies, federated learning and software that runs on smaller, cheaper hardware. These approaches are often more cost-efficient for local problems (health diagnostics, agriculture, low-bandwidth applications) — and sometimes better aligned with privacy and sovereignty goals.
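To ground one of those techniques, here is a minimal sketch of knowledge distillation in PyTorch: a small “student” network is trained against the softened outputs of a frozen “teacher,” producing a model that fits on cheaper hardware. The layer sizes, temperature, and loss weighting below are illustrative assumptions, not a recipe taken from the teams described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative models: a large "teacher" and a much smaller "student".
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

T = 4.0       # temperature: softens the teacher's output distribution
ALPHA = 0.7   # weight on the distillation term vs. the hard-label loss
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(x, labels):
    with torch.no_grad():                 # the teacher is frozen
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened distributions, scaled by T^2
    # (the standard correction from Hinton et al., 2015).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    loss = ALPHA * soft_loss + (1 - ALPHA) * hard_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch: in practice x would be real features and labels real targets.
x = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
print(f"distillation loss: {distillation_step(x, labels):.4f}")
```

The temperature matters: it softens the teacher’s distribution so the student learns from relative class similarities rather than just the top label, which is what lets a much smaller model recover most of the larger one’s behavior.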
But innovation alone is rarely sufficient. To scale, you need three enablers: capital, power and cooling infrastructure, and a local skilled workforce — each of which requires sustained public and private investment.
Policy levers and practical responses
If we accept that compute is becoming infrastructure in the same sense as ports or power plants, then governments and development actors can deploy a set of targeted responses:
- Strategic public investment in compute (grants, public cloud capacity, or subsidized access for academia and startups). Public compute pools can reduce entry barriers and build local demand. Examples like India’s IndiaAI compute portal show how state-led platforms can democratize access at scale.
- Public-private partnerships that tie foreign investment to skills development, local supply chains and data-sovereignty provisions. Investment commitments are more valuable when they include domestic training and manufacturing clauses.
- Incentives for regional providers — funding for regional cloud and edge providers that understand local compliance needs, similar in spirit to the EU’s InvestAI effort to mobilize regional capital.
- Smart regulation — policies that combine data protection with pragmatic open-access provisions for research. Export controls should be calibrated to mitigate risk without cutting off research partners.
- Targeted workforce retention — scholarships, competitive fellowships and returning-researcher incentives can blunt brain drain.
The commercial reality: investments follow returns
Despite the moral case for global equity, capital allocation is guided by return on investment. Major private players will continue to fund infrastructure where scale, energy price, regulation, and logistics make it profitable. That’s why the U.S., China, and a handful of other markets will keep pulling ahead unless deliberate public action changes the calculus. The pledges from OpenAI, Apple and the EU demonstrate two things: (1) private capital can be marshalled at unprecedented scale, and (2) public policy can shape, but not entirely dictate, where that capital lands.
Where this conversation should go next
The next decade of AI will look less like a contest of algorithms and more like a contest over who controls the hardware, the standards, and the rules. For developing countries, the immediate choices are pragmatic: build local compute where feasible, design partnerships that transfer skills and ownership, and develop regulatory frameworks that protect citizens without repelling investment.
For global institutions and donors, the challenge is to treat compute access as a development priority. Funding pools that subsidize local AI compute, investments in energy and cooling infrastructure, and coordinated international efforts to expand chip manufacturing would lower barriers meaningfully.
Final thought
AI is not only a technology; it’s an infrastructure stack — and like railways or ports, its location shapes whose economies get built and whose values get encoded into daily life. The question for the rest of the world isn’t whether the AI race will continue — it will — but whether that race will be governed by a few centers of power or by a broader, more distributed set of actors that can shape AI’s benefits and norms.