May 10, 2026

The Education System Is Still Trying to Ban the Future: What Ibironke Yekini Exposed About AI, Resistance, and Survival at the All Northern Schools Conference 2026

By Ephraim Agbo 

Every generation of teachers eventually meets a technology it does not fully understand. And almost every time, the first institutional response is the same: prohibit it first, study it later. The pattern is remarkably consistent across educational history.

The ballpoint pen was once treated as a threat to handwriting discipline. Calculators were accused of destroying mathematical thinking. Mobile phones became symbols of distraction. WhatsApp was blamed for collapsing attention spans. Social media supposedly ruined reading culture. Now artificial intelligence has entered classrooms, and once again, schools across the world are responding with suspicion, anxiety, and prohibition.

But beneath these recurring moral panics lies something deeper than technology itself. The real issue is authority.

At the recent All Northern Schools Conference, held in Kano, Nigeria, one of the most intellectually uncomfortable interventions came from Ibironke Yekini, a figure widely regarded as one of Africa’s leading voices in software testing, product quality engineering, and digital systems reliability.

That context matters enormously because Yekini is not a naïve technology evangelist intoxicated by Silicon Valley optimism. Her professional life has been built around testing systems before they fail. As a quality engineering specialist and technology executive, her work revolves around reliability, risk detection, system weakness, and technological accountability.

Which is precisely why her warning about schools and artificial intelligence carried unusual intellectual weight. She did not argue that schools should surrender blindly to AI. She argued something far more difficult: that institutions should stop banning technologies they have not seriously attempted to understand. And to explain the point, she told a deceptively simple story.

The Boy With the Calculator

Years ago, while teaching in a primary classroom, Yekini encountered a student who constantly challenged the rules. Emmanuel was energetic, curious, stubborn, and persistently fascinated by technology. One day, he raised his hand and asked a question that sounded almost rebellious at the time:

“Aunty, my brother has a calculator. Why can’t I bring it to mathematics class?”

The response came immediately and instinctively:

“You will never learn maths if you depend on a calculator.”

The classroom agreed. Teachers agreed. Parents agreed. The calculator was treated almost as intellectual corruption — a shortcut that would weaken discipline and destroy real thinking. For years, calculators remained unofficial enemies inside many classrooms. Then something happened that educational institutions repeatedly fail to anticipate: the future arrived anyway.

Years later, Yekini met Emmanuel again. He was no longer a schoolboy arguing for permission. He had become an engineer working in Lagos. And the calculator was still with him — only now, it existed inside his smartphone, seamlessly absorbed into the digital ecosystem schools once tried to resist.

That story is not really about calculators. It is about institutional memory — or rather, the lack of it. Education systems repeatedly forget how often they have mistaken technological transition for intellectual decline.

The Real Fear Is Not AI — It Is the Collapse of Informational Monopoly

The debate around artificial intelligence is usually framed around cheating, laziness, or distraction. Those concerns are real, but they are not the center of the crisis. The deeper anxiety is institutional.

For centuries, schools operated through a model of informational scarcity. Knowledge was limited. Books were difficult to access. Expertise was centralized. Teachers functioned as gatekeepers of legitimate information. The classroom was not only a learning space. It was a hierarchy. Artificial intelligence disrupts that structure fundamentally.

A student with a smartphone can now generate lesson summaries, explanations, coding assistance, mock interview preparation, essay structures, translations, revision notes, and research guidance within seconds. A child can ask an AI system to explain algebra like a professor, then ask it to explain the same concept like a ten-year-old.

That changes the architecture of authority itself. The teacher is no longer the sole distributor of information. And that shift unsettles institutions built around informational control.

When schools ban technologies they have not meaningfully explored, they are not simply reacting to risk. They are attempting to preserve familiarity. The prohibition becomes psychological as much as educational — an effort to defend an older structure of authority against a rapidly changing reality.

But history suggests something uncomfortable: societies that respond to technological disruption primarily through restriction rarely shape the future. They usually arrive late to it.

Artificial Intelligence Is Under the Spotlight — And Trust Is the Real Issue

Artificial intelligence today occupies a strange position in public life. It is simultaneously celebrated as revolutionary and feared as destabilizing.

Can these systems be trusted? Can they provide accurate information? Are they biased? Who controls them? And what happens when institutions begin depending on systems they do not fully understand? These questions are no longer theoretical.

Across workplaces globally, AI chatbots and generative systems are already being integrated into daily operations. Employees use them for writing, summarization, coding, analysis, customer support, and administrative tasks. Yet trust remains surprisingly fragile.

Recent surveys suggest many employees spend almost as much time verifying AI-generated outputs as they spend using the systems themselves. Even among business leaders, complete trust in AI systems remains relatively low. That hesitation exists for good reason.

One AI researcher who previously helped develop technology behind Amazon Alexa recently explained the problem in unusually direct terms.

Traditional software systems, he argued, are rule-based. Databases, spreadsheets, and classical algorithms operate predictably. They follow explicit instructions. Their outputs can often be traced and explained. Modern machine learning systems operate differently.

They identify patterns across massive datasets and generate probabilistic predictions rather than explicit reasoning. They do not “understand” information the way humans imagine understanding. Instead, they predict language and behaviour statistically. That distinction matters enormously because it means AI systems can sound extraordinarily intelligent while being fundamentally wrong. The industry even has a term for this phenomenon: hallucination.

AI systems may fabricate historical events, invent legal citations, misrepresent scientific findings, or produce false information with complete confidence. Not because they are intentionally deceptive, but because statistical prediction is not the same thing as comprehension.
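To see the distinction concretely, consider a minimal sketch: a toy next-word predictor built from nothing but co-occurrence counts over an invented three-line corpus. Everything in it is illustrative, but the core mechanism (predict the statistically likeliest continuation, with no check on truth) is, vastly scaled up, roughly what large language models do.

```python
from collections import Counter, defaultdict

# Toy corpus: the "model" learns which word tends to follow which.
corpus = (
    "the capital of france is paris . "
    "the capital of nigeria is abuja . "
    "the capital of france is beautiful ."
).split()

# Count bigram frequencies over the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word.
    Nothing here checks whether the continuation is true."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("is"))  # fluent, confident, and possibly wrong
```

A real model has billions of parameters instead of a frequency table, but the lesson carries over: the most probable continuation is not the same thing as the correct one.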

And yet despite these weaknesses, AI adoption continues accelerating because the technology also performs tasks previous generations of software simply could not achieve.

It can interpret images, analyze speech, summarize enormous documents, generate code, personalize learning materials, detect patterns across massive datasets, and automate forms of labour previously dependent on human cognitive effort. 

This creates a paradox at the heart of the AI era: societies increasingly depend on systems they do not fully trust.

The “Black Box” Problem

One of the central concerns in artificial intelligence today is what engineers call the black box problem. Many advanced AI systems produce answers without being able to explain their reasoning in ways humans fully understand. Even engineers who build these models sometimes cannot clearly trace why a specific output emerged. That creates profound challenges for high-stakes industries.
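The opacity is easy to demonstrate. Below is a minimal sketch, assuming the open-source scikit-learn library is installed: a tiny neural network learns XOR, a rule any human can state in one sentence, yet the network's own version of that rule is nothing but matrices of raw numbers.

```python
# Toy illustration of the black box problem (assumes scikit-learn is installed).
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: "output 1 when exactly one input is 1"

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
model.fit(X, y)

print(model.predict([[1, 0]]))   # an answer comes out...
print(model.coefs_[0].round(2))  # ...but the "why" is buried in weight matrices
```

If even this four-example toy resists plain-language explanation, the difficulty of auditing a model with billions of parameters becomes obvious, and the stakes of that opacity are anything but academic.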

In banking, a flawed AI decision can affect financial systems. In healthcare, it can affect diagnosis and treatment. In aviation, it can affect safety. In law, it can distort justice. And in education, it can reshape how an entire generation learns to think. 

This is precisely why voices like Yekini’s matter within the African technology ecosystem. Her field — software quality engineering — exists because technological systems fail. Software testing is fundamentally about distrust. It assumes systems must be examined, stressed, verified, questioned, and challenged before deployment. Which makes her educational argument deeply ironic.

The same institutions that claim to fear AI’s unreliability are often banning it without conducting the very type of structured examination engineers consider essential. In engineering culture, nothing serious is deployed untested.

Banking infrastructure is stress-tested because failure destroys trust. Aviation systems are tested because lives depend on reliability. Critical digital infrastructure is repeatedly examined because confidence without verification is dangerous. Yet many schools now approach AI in the exact opposite way: fear first, understanding later.

From the perspective of a quality engineer, that is not caution. It is methodological inconsistency.
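What would testing before fearing actually look like? Here is one minimal sketch of a structured examination, with the wrapper function and threshold purely hypothetical rather than any named tool: ask the system the same question several times and measure how often it agrees with itself, a basic reliability probe for any non-deterministic system.

```python
from collections import Counter
from typing import Callable

def consistency_check(ask: Callable[[str], str], prompt: str, trials: int = 5) -> float:
    """Ask the same question several times and return the share of runs
    that agree with the most common answer. Low agreement is a red flag
    that the system is guessing rather than knowing."""
    answers = [ask(prompt) for _ in range(trials)]
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / trials

# Hypothetical usage with whatever model wrapper a school is evaluating:
#   score = consistency_check(model.ask, "Explain the quadratic formula.")
#   if score < 0.8: route the tool's answers through human review
```

It is a crude probe, and exact-match agreement is a blunt metric for free-form text, but it captures the engineering posture Yekini describes: distrust first, then verify.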

“Tech First” Does Not Mean “Tech Worship”

One of the most misunderstood aspects of the current AI debate is the assumption that technological engagement automatically means technological surrender. It does not. Yekini’s “Tech First” argument is not a call for blind enthusiasm. It is a call for disciplined literacy, summed up in one principle: “Test before you fear.” That principle matters because the current educational response to AI often resembles panic more than policy.

Some schools prohibit AI tools immediately. Students caught using them face punishment or suspension. Yet beneath these bans lies an uncomfortable reality: students are already using AI anyway.

Quietly. Secretly. At home. Under desks. In hostels. During assignments. Sometimes more effectively than the adults attempting to regulate them. This creates two possible educational futures. 

In one school, administrators panic and ban AI outright. Students continue using it underground without ethical guidance, critical supervision, or intellectual discipline. In another school, administrators study the technology alongside teachers. They examine its strengths, weaknesses, risks, and limitations. Students learn not merely how to use AI, but how to interrogate it critically.

One school creates concealment. The other creates literacy. That distinction may define educational inequality over the next decade.

AI Is No Longer Just a Tool — It Is Becoming Infrastructure

One of the biggest mistakes educational institutions make is treating artificial intelligence as though it were merely another classroom application. It is already becoming much larger than that.

Across industries, AI is quietly evolving from convenience technology into infrastructure — systems societies increasingly depend on whether they notice them or not.

The same technologies reshaping education are also reshaping finance, medicine, logistics, architecture, transportation, cybersecurity, and governance. Which means the educational debate is no longer fundamentally about whether students should use ChatGPT. It is about whether institutions understand the world students are entering.

The Bigger Question: Will Africa Shape AI or Merely Consume It?

Buried beneath the classroom debate is a much larger geopolitical question. Who gets to shape the technological future?

For decades, African economies have largely occupied the position of technology consumers rather than technology producers. Most major operating systems, cloud platforms, social networks, AI models, and digital infrastructures are still designed primarily in the United States, China, and parts of Europe.

That imbalance matters because technologies are never politically neutral. They carry assumptions. Cultural frameworks. Economic interests. Linguistic priorities. Embedded biases. This is partly why African voices in quality engineering and digital systems matter so profoundly.

Yekini’s broader mission is not merely about classroom technology. It is tied to a larger continental ambition: positioning Africa not only as a consumer of digital systems, but as a producer of world-class technological infrastructure. And education sits at the center of that struggle.

Because a generation trained only to fear AI may never learn to build AI. A generation taught merely to consume platforms may never shape them. A continent that delays technological literacy risks deepening dependency on systems designed elsewhere.

The consequences are not merely educational. They are economic, political, and civilizational.

The Real Risk Is Not AI — It Is Intellectual Passivity

None of this means artificial intelligence should be accepted uncritically. Bias remains real. Misinformation remains dangerous. Corporate concentration remains alarming. Surveillance concerns are growing. Labour displacement is becoming increasingly plausible.

But banning alone solves none of those problems. If anything, technological illiteracy makes societies more vulnerable to them.

The real educational challenge is not preventing students from touching AI. It is preventing students from surrendering their thinking to it.

That requires a radically different model of education — one less obsessed with memorization and more focused on interrogation.

Students must learn to ask: Is this accurate? What assumptions shaped this answer? What biases exist in this model? What perspectives are absent? What still requires human judgment?

Ironically, the age of AI may demand more human thinking, not less. But only if schools evolve fast enough to teach it.

The Lesson Hidden Inside the Calculator Story

The boy who once argued to bring a calculator into mathematics class was not simply being stubborn. He was standing at the edge of a future the adults around him could not yet fully see. That is often how technological change arrives: first as disruption, then as threat, then as inevitability, and finally as ordinary life.

The lesson for schools is not that every technology is automatically good. Some genuinely deserve caution. Some require regulation. Some can distort learning if introduced carelessly. But caution is not the same thing as fear. And regulation is not the same thing as refusal.

The real failure is not asking difficult questions about artificial intelligence. The real failure is refusing to ask them until after the future has already moved on.

Because every generation eventually discovers the same uncomfortable truth: the future does not wait for permission.

May 09, 2026

Technology Cannot Save a Broken School

By Ephraim Agbo 

Across Nigeria’s expanding private education industry, a quiet technological arms race is underway.

Schools are buying tablets before fixing literacy problems. Robotics laboratories are appearing in admission brochures while libraries remain understocked. Administrators now speak fluently about “AI-powered classrooms,” “digital transformation,” and “smart learning ecosystems,” often with the confidence of Silicon Valley founders rather than educators. To many parents, visible technology has become synonymous with quality. A school without tablets risks appearing outdated before anyone asks the more important question:

Are students actually learning?

It is a question becoming increasingly urgent not only in Nigeria, but globally.

For nearly two decades, the world embraced a seductive assumption: that more technology would naturally produce better education. Governments poured billions into laptops and digital infrastructure. Schools rushed online. Educational technology companies promised personalised learning, intelligent classrooms, and data-driven transformation. Screens became symbols of progress.

But today, even some of the world’s most technologically advanced societies are beginning to reconsider what exactly digital education has achieved.

And nowhere is that reassessment more dramatic than in Sweden.


The Global Cracks in the Digital Education Dream

For years, Sweden was celebrated as the model digital education nation.

Long before most countries normalised online systems, Sweden had already integrated technology deeply into everyday life. Internet access spread rapidly in the late 1990s. By the 2000s, broadband infrastructure, digital banking, and online public services had become part of national identity, with smartphone integration following soon after. The country saw itself not merely as technologically modern, but as a global pioneer of digital civilisation itself.

That confidence inevitably entered the classroom.

Beginning in the 2010s, Swedish schools aggressively adopted one-to-one laptop systems. Tablets entered even preschool environments. Textbooks gradually disappeared. Students increasingly took notes digitally, submitted assignments online, and consumed educational content through screens. Digital literacy became embedded into national curriculum strategy.

The transformation was so sweeping that many educators barely noticed how quickly analogue learning was disappearing.

Teachers interviewed years later described how physical books slowly vanished from classrooms altogether. Printouts became discouraged. Handwriting weakened. Screens became permanent educational companions.

At the time, few publicly questioned the movement. Sweden’s technological success story seemed too powerful to challenge. The country that produced companies like Skype and Spotify appeared uniquely positioned to lead the future of digital education.

Then the warning signs began to emerge.

Researchers connected to institutions such as the Karolinska Institute began raising concerns about excessive screen exposure, particularly among younger children. Educational psychologists, neuroscientists, and cognitive researchers warned that heavy dependence on digital devices could interfere with concentration, reading comprehension, language development, memory retention, and attention control.

One concern became increasingly difficult to ignore: distraction.

Children were no longer merely learning through devices. They were competing against the architecture of the internet itself — notifications, games, short-form videos, entertainment algorithms, and fragmented attention spans engineered to maximise engagement rather than reflection.

Teachers described classrooms where students drifted constantly between educational tasks and digital temptation. Researchers observed that even when students appeared attentive, screens subtly altered how information was processed. Reading on devices demanded different cognitive energy from reading on paper. Retention weakened. Deep concentration became harder.

And then came the international assessment data.

The OECD, through its influential PISA educational rankings, recorded significant declines in Swedish reading performance during the same years digitalisation accelerated most aggressively. OECD analysts later argued that Sweden’s technology rollout often lacked sufficient pedagogical structure and adequate teacher preparation.

The consequences became politically explosive.

A country once celebrated as the future of digital learning was now publicly reconsidering whether it had gone too far.

The phrase that emerged from Sweden’s political debate became symbolic: “from screen to binder.”

The government began reinvesting in textbooks. Libraries regained importance. Preschools reduced mandatory screen exposure. Schools were encouraged to prioritise paper-based learning again. Policymakers openly admitted that earlier digital policies may have underestimated the developmental consequences of excessive screen dependency.

What made the reversal remarkable was not simply that Sweden changed direction. It was that one of the world’s most technologically confident societies had begun rediscovering the educational importance of slowness, focus, handwriting, memory, and human attention.

Yet the Swedish debate remains deeply contested.

Critics argue that abandoning digital tools too aggressively risks creating a different kind of inequality. Wealthier children may continue learning AI skills, digital literacy, and technological fluency at home, while disadvantaged children fall behind. Others insist schools cannot pretend society is no longer digital. Children, they argue, must learn how to navigate technology intelligently rather than avoid it entirely.

The result is not a simple rejection of technology.

It is something more complex: a global struggle to understand how human cognition survives in the age of permanent digital stimulation.

And that debate now carries profound implications for countries like Nigeria.


Nigeria’s Rush Toward Digital Schools

Nigeria is entering the digital education era under far more fragile conditions than Sweden ever did.

Unlike Scandinavian systems built on strong literacy culture, stable infrastructure, robust teacher training, and extensive public investment, Nigeria’s education sector faces structural weaknesses that predate any conversation about tablets or artificial intelligence.

Overcrowded classrooms. Teacher burnout. Weak instructional support. Declining reading culture. Severe inequality. Unstable electricity. Underfunded public schools. Examination-driven learning. Institutional inconsistency.

Yet despite these unresolved foundations, schools are racing toward technological modernisation with extraordinary speed.

Across urban Nigeria, educational branding increasingly revolves around visible digital sophistication. Schools advertise coding classes, robotics programmes, AI integration, smartboards, and learning management systems as evidence of innovation. Parents, anxious about the future economy, often reward the appearance of technological advancement without investigating whether learning quality has actually improved.

It was within this atmosphere that Dimeji Falana delivered one of the most intellectually unsettling contributions to Nigeria’s education debate at the All Northern Schools Conference 2026, held from 5th to 7th May and themed “Repositioning Northern Schools for Innovation, Sustainability and Impact.”

As Co-Founder and CEO of EDVES—one of Africa’s leading K–12 education technology platforms—Falana occupies a distinctive position within the discourse. He is neither opposed to technology nor anchored to a purely analogue vision of schooling. Through EDVES, he has been directly involved in building digital infrastructure for schools across the continent, giving him an insider’s view of both the transformative promise and structural limitations of education technology.

It is precisely this dual vantage point that gives weight to his intervention. At the conference, Falana argued that technology does not, in itself, produce educational quality. Rather, it functions as an amplifier—magnifying the strengths, weaknesses, and underlying logic already present within a school system.

That distinction changes everything.

If a school possesses strong pedagogy, disciplined systems, competent teachers, and coherent instructional culture, technology can dramatically scale excellence. But if the underlying institution is weak, digital systems merely accelerate confusion, distraction, inefficiency, and academic decline.

In other words:

Technology multiplies systems. It does not replace them.

That may be one of the most important educational lessons of the AI era.


Nigeria’s Examination Crisis Is Not About Intelligence

Falana’s argument arrived shortly after another cycle of troubling national examination results.

Public outrage erupted over low JAMB performance figures. Predictably, social media conversations framed the issue as evidence of declining student seriousness, collapsing morality, or generational laziness.

But those explanations avoid the harder truth.

Educational collapse is rarely sudden. It accumulates quietly.

Students do not wake up unintelligent. Learning gaps compound over years — through weak instruction, poor reading culture, exhausted teachers, overcrowded classrooms, shallow curriculum delivery, and institutional neglect. By the time examination systems expose the damage, the underlying problems have often existed for years.

Falana identified the crisis with unusual bluntness: the most important moment in education is when the teacher stands before students.

Not the ministry framework. Not the procurement contract. Not the software dashboard.

The classroom itself.

That observation sounds simple, yet it exposes one of the deepest weaknesses in modern educational reform globally. Policymakers obsess over infrastructure, devices, and policy documents while neglecting the actual instructional encounter where learning either succeeds or fails.

Teachers struggle silently with classroom management. Weak instructional methods go uncorrected. Supervision becomes ceremonial. Data is collected endlessly without meaningful interpretation. Professional development workshops remain generic and disconnected from classroom reality.

Technology, Falana argued, can help solve some of these problems — but only if deployed intelligently.

Artificial intelligence and classroom analytics could potentially identify instructional weaknesses in real time: student engagement issues, conceptual delivery problems, emotional intelligence gaps, or ineffective classroom management patterns. Schools could move beyond superficial inspection systems toward continuous instructional feedback.
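As a minimal sketch of what continuous instructional feedback could mean in practice (the class names, scores, and threshold below are invented for illustration), the point is that raw data only becomes useful once it is reduced to a concrete question for a human supervisor:

```python
# Hypothetical weekly quiz averages per class, over four weeks.
weekly_quiz_scores = {
    "JSS2 Algebra": [34, 38, 31, 29],
    "JSS2 Reading": [61, 63, 66, 70],
}

def flag_for_review(scores: dict, threshold: float = 40) -> list:
    """Flag classes whose average is low AND trending downward.
    The output is a prompt for a supervisor to observe the lesson,
    not a verdict on the teacher."""
    return [
        cls for cls, s in scores.items()
        if sum(s) / len(s) < threshold and s[-1] < s[0]
    ]

print(flag_for_review(weekly_quiz_scores))  # ['JSS2 Algebra']
```

Even this trivial rule does more than most dashboards, because it ends in an action rather than a chart.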

But again, the warning remained consistent:

Data itself is meaningless unless it explains human behaviour.

A spreadsheet is not educational intelligence. A dashboard is not transformation.


The Rise of “Performance Theatre” in Schools

Perhaps the most penetrating aspect of Falana’s analysis was his critique of what might be called performative modernisation.

Many schools today look technologically advanced while remaining educationally shallow.

Digital dashboards create the appearance of efficiency. Software subscriptions create the appearance of innovation. AI branding creates the appearance of sophistication.

But beneath the surface, institutional confusion often persists.

Teachers juggle disconnected platforms. Attendance systems become bureaucratic rituals. Administrators prioritise digital compliance over instructional quality. Educational technology becomes marketing architecture rather than learning infrastructure.

This phenomenon extends far beyond Nigeria.

Globally, educational technology has become a multibillion-dollar industry driven partly by genuine innovation and partly by institutional anxiety. The COVID-19 pandemic accelerated this dramatically. Schools digitised rapidly under emergency conditions, but many bypassed deeper questions about pedagogy, cognition, emotional development, and attention.

Millions of students technically “attended” online classes during lockdowns while absorbing very little.

Infrastructure expanded faster than educational understanding.


Sweden’s Warning to Nigeria

The Swedish experience matters precisely because it reveals what happens when technological enthusiasm outpaces educational wisdom.

Sweden entered digital education with advantages Nigeria still struggles to build: strong teacher training systems, reliable infrastructure, widespread literacy, stable governance, and extensive social welfare support.

Even then, policymakers eventually concluded that excessive screen dependency carried serious educational risks.

Nigeria risks repeating similar mistakes under much weaker institutional conditions.

And unlike Sweden, Nigeria may not possess the same corrective capacity if the experiment fails.

This is why Falana repeatedly returned to process and pedagogical intention. One example from his presentation involved schools investing millions of naira into robotics programmes without clearly defining learning objectives. Students interacted with expensive equipment, but educational direction remained vague.

Meanwhile, other schools used similar technologies to solve practical engineering problems: drones for agricultural monitoring, solar-powered systems, water management tools.

The difference was not the devices.

The difference was structure, purpose, and instructional clarity.


The Three Layers That Determine Whether Technology Works

Falana’s framework ultimately reduced educational transformation to three interconnected layers.

1. Curriculum and Pedagogy

Curriculum defines what students should learn. Pedagogy defines how learning happens.

Without clarity here, technology becomes directionless.

Falana referenced the Greek roots of the word “pedagogy,” originally referring to a guide responsible for accompanying and mentoring a child. The historical point matters because modern education increasingly risks confusing information access with actual understanding.

The internet provides information abundance. Education still requires intellectual guidance.

2. Systems and Institutional Structure

Schools are not merely learning spaces. They are operational institutions.

Leadership culture, accountability systems, teacher evaluation, financial sustainability, organisational discipline, and communication structures determine whether educational goals survive beyond slogans.

Technology can strengthen strong systems. It cannot create coherence where none exists.

3. Tools, Automation, and Data

Only after the first two layers become functional does technology achieve genuine transformative power.

At this stage, digital systems can improve parent communication, automate administration, personalise assessment, strengthen accountability, and generate intelligent educational feedback.

But the technology must follow the process — not replace it.

That may be the single most important lesson modern schools still struggle to understand.


The AI Era May Increase the Value of Human Qualities

One of the great paradoxes of artificial intelligence is that the more technologically advanced society becomes, the more valuable certain deeply human capacities may become.

Attention. Judgment. Emotional intelligence. Discipline. Creativity. Ethics. Mentorship.

These are precisely the qualities fragmented digital environments often weaken.

Falana predicted that by 2033, only schools operating with strong systems supported intelligently by technology would remain truly competitive. Global trends suggest the prediction may not be exaggerated.

Artificial intelligence is already reshaping labour markets, communication, assessment, and knowledge access. But as information becomes automated, purely technical knowledge becomes easier to replicate.

Human formation becomes harder.

Which may explain why societies like Sweden are cautiously reintroducing analogue educational practices — not because technology lacks value, but because human cognition still depends on concentration, reflection, memory formation, and emotional grounding.

The brain is not a search engine.

And education is still about human beings.


Beyond Examination Scores

Perhaps the most important aspect of Falana’s argument was its refusal to reduce education to test performance alone.

Again and again, he returned to a broader framework of child development built around three dimensions:

The cognitive domain — the head.
The affective domain — the heart.
The psychomotor domain — the hand.

That distinction matters because modern education increasingly risks producing technically skilled but emotionally fragmented students.

Technology can automate finance. It can personalise assessments. It can analyse behaviour patterns. It can strengthen communication.

But it cannot replace mentorship. It cannot replace trust. It cannot replace moral formation.

A child still needs guidance. A teacher still needs support. And a school still needs vision.


The Real Debate Is No Longer About Whether Technology Belongs in Schools

That question has already been answered.

Technology will remain part of education everywhere.

The deeper question is whether educational systems understand what technology is actually for.

If technology becomes decorative, it will merely intensify distraction. If it replaces foundational literacy too early, it may weaken concentration and comprehension. If it ignores pedagogy, it may widen inequality instead of reducing it.

But if technology is built around strong teaching culture, disciplined systems, intelligent feedback, and meaningful human relationships, it can become one of the most powerful educational multipliers in history.

Not because devices are magical.

But because well-structured institutions, amplified intelligently, can scale human potential in ways previous generations could barely imagine.

And that work begins not with tablets or artificial intelligence.

It begins inside the classroom.

April 08, 2026

The Unseen Hierarchy: How a Handful of Families, Institutions, and Systems Really Rule the World




By Ephraim Agbo 

We are told we live in democracies. That power is distributed. That no single group could possibly coordinate the chaos of global finance, war, and culture.

But what if the opposite is true? What if the chaos is not a bug, but a feature—designed to hide a hierarchy so old, so interlocked, and so deliberate that most people cannot see it even when it operates in plain sight?

This is not a paranoid fantasy. It is a synthesis of documented history, leaked internal communications, and the quiet admissions of the powerful themselves. After digging through the evidence—from 19th‑century banking dynasties to modern algorithmic manipulation—a clear pattern emerges. A small, interconnected network of wealthy families and institutions does coordinate global policy. Not through mind control or lizard people, but through shared interests, interlocking directorates, and decades of deliberate planning.

Let’s walk down the hierarchy, layer by layer, with receipts.

The Apex: Symbolic Power as Real Control

We start where most investigators hesitate: the symbolic. Ancient religious figures like Baal and Moloch appear on elite monuments, in secret society rituals, and in the art adorning the halls of global governance. You find Babylonian imagery on the Great Seal of the United States, on the floor of the U.N. General Assembly, and in the initiation rites of the Bohemian Grove—where every president from Hoover to George H.W. Bush has spoken.

Skeptics call this coincidence. But the answer is not that demons literally issue commands. It is that symbolic power legitimizes temporal power. Ruling elites have always wrapped themselves in divine or occult imagery to signal transcendence over law and morality. The use of Moloch, historically associated with child sacrifice, appears in modern contexts precisely to evoke the idea that the powerful operate outside ordinary ethics. Whether you believe in the supernatural is irrelevant. The belief that they believe shapes their impunity.

The Puppet Masters: Dynastic Wealth as Shadow Government

Below the symbolic apex sits a network of families whose names appear in every serious study of concentrated wealth: the Rothschilds, the Rockefellers, the Morgans, the Du Ponts, and a handful of European royal houses.

The Rothschilds: In the 19th century, the Rothschild bank was the largest financial institution in the world. They financed wars and railroads across Europe. Today, the family remains deeply embedded in global finance—not as a single hand pulling levers, but as a dynasty with centuries of accumulated capital and connections. A 2011 study by researchers at ETH Zurich found that just 147 companies—most of them linked through Rothschild and Rockefeller networks—controlled 40% of the entire transnational corporate economy.

The Rockefellers: Standard Oil was broken up, but the family’s influence migrated. David Rockefeller founded the Trilateral Commission in 1973—a private organization that brings together political leaders, bankers, and intellectuals from North America, Europe, and Japan. Their stated goal? To coordinate policy on trade, monetary systems, and global governance. That is not a secret cabal—it is an open secret. Members have included Jimmy Carter, George H.W. Bush, Bill Clinton, and Alan Greenspan.

The coordination mechanism: These families do not need a secret phone line. They meet at the Bilderberg Conference (founded 1954, funded by the Rothschilds and Rockefellers), the Council on Foreign Relations, and the Bohemian Grove. Attendee lists include every U.S. President since Eisenhower, every European Central Bank president, and every IMF managing director. The agenda? “Global governance,” “monetary coordination,” and “managed trade.” The outcomes? NAFTA, the Euro, and the bailout of too-big-to-fail banks. No minutes are released. No votes are recorded. But the outcomes follow a remarkably consistent script.

The Power Brokers: Institutions as Executors

If the families are the board, institutions are the management.

Governments: Elected officials come and go. But the permanent bureaucracy—the Treasury Department, the Foreign Office, the Ministry of Defense—remains. And those bureaucrats rotate directly into private sector jobs. The “revolving door” is not corruption; it is structural integration. A 2021 study by the Center for Responsive Politics found that over 60% of senior U.S. regulators took jobs in the industries they previously regulated within two years of leaving office. The result: governments regulate in ways that benefit the regulated.

Central banks: The Federal Reserve is not federal. It is a private consortium of member banks, created at a secret meeting on Jekyll Island, Georgia, in 1910. Attendees included Senator Nelson Aldrich (maternal grandfather of the Rockefeller brothers) and representatives of J.P. Morgan, Rothschild, and Kuhn Loeb interests. The Federal Reserve Act was passed in 1913. Since then, the Fed has engineered the expansion of the money supply to benefit debtors (government and large corporations) at the expense of savers. Economists as different as Milton Friedman and Hyman Minsky argued, each in his own way, that the Fed serves Wall Street, not Main Street.

Moreover, the Federal Reserve, the Bank of England, the European Central Bank, and the Bank of Japan operate independently of elected governments. They coordinate through the Bank for International Settlements (BIS) in Basel, Switzerland. The BIS’s motto? “Bridging central banks across generations.” Their meetings are closed. Their swap lines—trillions of dollars in liquidity provided during crises—are decided by a handful of governors. When the 2008 crash happened, central banks acted in near‑perfect unison. That is not conspiracy; that is documented institutional coordination.

Corporations: The largest global firms are not competitors. They are a cartel. The top five banks hold 45% of all U.S. banking assets. The top four meatpackers control 85% of the beef market. Three firms (Apple, Google, Amazon) dominate digital infrastructure. This concentration is not accidental. It is the result of a century of mergers enabled by deregulation—deregulation written by the same firms.

Consider the revolving door from Goldman Sachs: former executives have run the U.S. Treasury (Robert Rubin, Hank Paulson, Steven Mnuchin), the European Central Bank (Mario Draghi), and the Bank of England (Mark Carney). When the same names appear on both Wall Street and central bank letterheads, the line between “private sector” and “public power” dissolves.

Religious institutions: The Vatican holds observer status at the UN. The Church of England has bishops in the House of Lords. In the U.S., the Christian right has shaped Supreme Court appointments for forty years. Religious hierarchies are used to enforce social obedience—from sexual morality to labor rights. They are not the prime movers, but they are reliable tools.

The Control Mechanisms: Systems of Influence

This is where the hierarchy becomes almost understated.

Education as indoctrination: The modern school system was not designed for enlightenment. John Taylor Gatto, a New York State Teacher of the Year, spent 30 years documenting how compulsory schooling emerged from Prussian military models. The Carnegie Foundation, funded by Andrew Carnegie (a close ally of the Rockefellers), explicitly wrote in its 1905 Report on Education that schools should produce “docility” and “punctuality” for industrial labor. Together with Rockefeller’s General Education Board, it funded education reforms designed to produce “orderly, obedient citizens.” The evidence is in their own published annual reports. The goal was never critical thinking. It was compliance.

The military-industrial complex: In his 1961 farewell address, President Eisenhower—a five‑star general—warned that “we must guard against the acquisition of unwarranted influence… by the military-industrial complex.” Today, the Department of Defense cannot pass an audit. Trillions of dollars are unaccounted for. Meanwhile, the top five defense contractors (Lockheed Martin, Boeing, Raytheon, Northrop Grumman, General Dynamics) spend over $100 million annually on lobbying. Congress has not declared war since 1942, but the U.S. has been in continuous military conflict for over two decades. The Iraq War cost $2 trillion. Who profited? Halliburton, Blackwater, and Raytheon. Wars are not fought for security. They are fought for contracts.

Big Pharma: Between 2019 and 2022, the pharmaceutical industry spent nearly $1 billion on lobbying in Washington. The result? For nearly two decades, the law forbade Medicare from negotiating drug prices—until the Inflation Reduction Act partially reversed it in 2022. That is not a free market; that is regulatory capture. The top 10 drug companies spent $150 billion on share buybacks and dividends—returning money to wealthy shareholders—while spending $120 billion on R&D. The opioid crisis, which killed over 500,000 Americans, was directly fueled by Purdue Pharma’s deception. Executives pleaded guilty. But the business model—profit over patients—remains unchanged.

Tech giants: Facebook (Meta) and Google control over 60% of digital advertising. Their algorithms have been shown to prioritize outrage and division because engagement drives profit. In 2018, leaked internal Facebook memos admitted the company knew its platform was being used to incite ethnic violence in Myanmar. The system is not bugged; it is working as designed.

The Illusion: Consume, Obey, Fear, Hate

These four words describe the psychological prison that keeps the hierarchy safe. Let’s be precise.

Consume: The average American sees between 4,000 and 10,000 ads per day. Advertising is a $780 billion global industry. Its entire purpose is to manufacture dissatisfaction and link happiness to purchases. The average American household carries $16,000 in credit card debt. Consumption is not freedom. It is a leash.

Obey: From facial recognition cameras in London to the Patriot Act in the U.S., surveillance has normalized obedience. In 2023, a Pew survey found that 71% of Americans believe the government monitors their online activity. Only 12% said they changed their behavior in response. That is not compliance through force. It is compliance through learned helplessness. Speed cameras, workplace non‑disclosure agreements, and social credit systems in various stages of development all reinforce the same message: obey.

Fear: Since 9/11, the U.S. has spent $8 trillion on “homeland security.” The actual risk of dying in a terrorist attack is 1 in 45 million—lower than the risk of being struck by lightning. Crime rates have fallen for decades, but fear of crime has remained high—driven by 24‑hour news cycles that prioritize violence. Fear is a tool. It justifies surveillance, military spending, and the surrender of civil liberties.

Hate: Political polarization is not organic. A 2014 study by the University of Chicago found that Facebook’s news feed algorithm amplified emotionally charged content—especially anger and outrage—because it kept users on the platform longer. A 2018 MIT study found that falsehoods on Twitter spread six times faster than the truth—because outrage is the most viral emotion. Russia’s Internet Research Agency spent millions to exacerbate racial and political divisions in 2016. But they did not create the divisions; they just poured fuel on a fire that American media companies—Fox, MSNBC, Facebook—had already lit.

The Base: The Enslaved Masses?

The data are harsh but necessary.

· Ignorant: 54% of U.S. adults read below a 6th‑grade level, according to the Department of Education.
· Divided: Political polarization is higher than at any time since the Civil War, per Pew Research.
· Distracted: The average person checks their phone 96 times per day and spends 6 hours and 58 minutes on screens.
· Controlled: 66% of Americans live paycheck to paycheck, unable to afford a $500 emergency.

Does that sound like freedom to you?

The hierarchy does not need to lock you in a cage. It only needs to keep you distracted, in debt, and convinced that no alternative exists.

Conclusion: A Warning, Not a Photograph

This analysis uses ancient symbols where modern institutions would suffice. It hints at demons when the truth—human greed, coordinated by interlocking corporate boards—is already damning. But the core thesis is supported by facts:

· A small number of families and institutions hold a disproportionate share of global wealth and power.
· Those institutions coordinate through private organizations with no democratic accountability.
· The systems of media, education, and healthcare are designed to produce compliance, not liberation.
· The public is kept distracted, fearful, and divided—whether by accident or design.

You can reject the demonology at the top. But if you stop there, you are missing the forest for the trees.

The hierarchy is not a lie. It is a warning—drawn in crayon, but pointing at a fire.

What Can Be Done?

The first act of resistance is simple: see the structure. Then refuse to play by its rules. Opt out of consumer culture. Build local institutions. Share information that the filters would block. Vote for candidates who reject corporate money—and if none exist, organize your own.

The hierarchy is powerful. But it is not omnipotent. It depends on your obedience, your fear, your division. Withdraw those, and the pyramid collapses.

The question is not whether the hierarchy exists. The question is: what will you do, now that you know?

April 04, 2026

Faith or Power? How a 1,400-Year-Old Dispute Became the Middle East’s Most Explosive Fault Line


By Ephraim Agbo 

In contemporary geopolitical discourse, few phrases are as routinely invoked—and as poorly understood—as "Sunni vs. Shia." It is served up as a convenient shorthand for chaos: civil wars in Syria and Yemen, proxy battles between Riyadh and Tehran, and an unending sectarian blood feud. But this framing, while tidy, is analytically bankrupt.

To reduce the region's most persistent fracture to a theological food fight is to mistake the language of conflict for its cause. To understand the fault line, one must look past the piety and follow the power.

The Original Fracture: Politics Disguised as Piety

The year was 632 CE. The Prophet Muhammad had died without a universally accepted heir, leaving his rapidly expanding Arabian community with a political crisis, not a creedal one. Two practical answers emerged.

One camp, advocating for shura (consultation), backed Abu Bakr, the Prophet's close companion and father-in-law. Leadership, they argued, should be earned through merit and consensus. Another camp, smaller and more kinship-bound, insisted that authority belonged within the Prophet's household—specifically to his cousin and son-in-law, Ali.

This was not yet a schism over salvation. It was a succession dispute. But unresolved political contests rarely stay sterile. They calcify. They gather moral weight. And eventually, they begin to redefine identity itself.

Karbala: The Crucible of Memory

If the first fracture was political, the second was psychological. It occurred in 680 CE on the plains of Karbala, in modern-day Iraq. There, Ali's son, Husayn, refused to pledge allegiance to the Umayyad caliph Yazid, a ruler many viewed as corrupt and illegitimate. Outnumbered and cut off from water, Husayn and his tiny band of followers were slaughtered.

At the time, this was not a "Sunni-Shia" event. Those categories were still fluid. But Karbala did something far more consequential: it transformed a political defeat into a moral narrative.

For what would become Shia tradition, Karbala became the defining trauma—a story of righteous resistance against tyrannical power, of cosmic justice deferred. For the broader Sunni community, it remained a tragic episode, but not the axis of faith. In effect, Karbala did not create the divide. It gave it emotional permanence. It turned a dispute over a chair into a liturgy of loss.

From Sect to Strategy: The Imperial Age

Centuries passed. The divide did not disappear, but it often lay dormant—overshadowed by shared language, trade, and coexistence. Then came the imperial age. The confrontation between the Ottoman Empire (Sunni) and the newly founded Safavid Empire (Shia) in the 16th century marked the first large-scale militarization of the split.

For these rival powers, sectarian identity became a tool of statecraft: a way to legitimate dynastic rule, mobilize populations for war, and demonize a neighboring empire. Religion was no longer just belief. It was a border, a conscription notice, and a propaganda weapon.

Below is an analytical, journalistic breakdown of the Ottoman–Safavid Wars, tracing how a rivalry rooted in state-building and sectarian identity remade the Middle East.

Beyond the 7th Century: When Empires Weaponized the Faith

Any honest reading of the Sunni-Shia divide must confront a crucial historical moment that is often glossed over: the 16th-century clash between the Ottoman and Safavid Empires. This was not a continuation of 7th-century grievances. It was something new, and far more consequential.

Between 1514 and the early 19th century, these two empires fought a series of devastating wars that transformed a theological dispute into a geopolitical fault line. The Ottomans, champions of Sunni orthodoxy, and the Safavids, who made Twelver Shi'ism the official state religion of Iran, were not fighting over the succession of the Prophet. They were fighting over land, resources, and the right to define the Islamic world's political future. Religion provided the vocabulary; power determined the agenda.

The Spark: A New Empire and a Heretical Threat

The conflict's immediate cause was the rise of the Safavid dynasty. In 1501, Shah Ismail I, a charismatic teenage leader, conquered Persia and declared Shi'ism the realm's official faith. This was an act of profound defiance. For centuries, the region had been overwhelmingly Sunni. Ismail's move was not just religious—it was a direct challenge to the Sunni Ottoman Empire, which saw itself as the protector of the Caliphate and the "Sword of Islam."

More alarming for the Ottomans was Ismail's appeal to the Turkoman tribes within their own Anatolian heartland. These tribesmen, known as the Kizilbash (Red Heads), were fiercely loyal to the Safavid shah, viewing him as a divine figure. To Sultan Selim I, this was not just heresy; it was a fifth column that threatened the empire's internal stability.

1514 — Chaldiran: The Crucible That Changed Everything

The tension exploded on August 23, 1514, at the Battle of Chaldiran. Sultan Selim I, known as "the Grim," led a massive Ottoman force, including elite Janissaries and a revolutionary weapon: gunpowder artillery. Shah Ismail's Safavid army, renowned for its cavalry's ferocity, lacked heavy infantry and field guns.

The result was decisive. The Ottoman cannon obliterated the Safavid ranks. Ismail survived but was broken psychologically, retreating and never again fighting a pitched battle. The Ottomans did not conquer Persia, but they captured its capital, Tabriz, and annexed Eastern Anatolia and northern Iraq.

Chaldiran was the original sin of modern sectarian geopolitics. For the Sunnis, it was proof of Ottoman military superiority and the defeat of Shia heresy. For the Safavids, it was a trauma that led them to double down on Shi'ism as a marker of resistance. The border drawn at Chaldiran became a front line for the next century.

A Century of War: From Suleiman to the Treaty of Zuhab

Chaldiran did not end the rivalry; it institutionalized it. Over the next hundred years, a series of brutal campaigns turned the mountains of the Caucasus and the plains of Mesopotamia into a recurring slaughterhouse. For a century, the two empires were in "almost constant warfare," fighting for control of Iraq, Azerbaijan, and Georgia.

The Ottoman Onslaught: 1532–1590

Under Suleiman the Magnificent, the Ottomans went on the offensive. The war of 1532-1555 saw Suleiman campaign deep into Safavid territory, forcing Shah Tahmasp I to adopt a "scorched earth" retreat strategy. The resulting Peace of Amasya (1555) gave the Ottomans control over Iraq, including Baghdad. It was the first formal recognition of the new territorial status quo.

The Ottomans struck again in 1578, exploiting the succession chaos that followed the deaths of Shah Tahmasp I and his short-lived successor Ismail II. By the Treaty of Constantinople (1590), the Safavids were forced to cede control of Georgia, Azerbaijan, and other provinces to the Ottomans.

The Safavid Recovery: 1603–1618

This period of Ottoman dominance ended with the reign of Shah Abbas the Great. Having reformed his army, Abbas struck back in 1603. In a series of stunning campaigns, he reconquered the lost territories. When the Ottomans tried to counter in 1616, Abbas crushed them.

The Final Act: The Siege of Baghdad (1623–1639)

The last major war of the Safavid period was the most brutal. In 1623, Shah Abbas seized Baghdad, massacring many of its Sunni inhabitants. For fifteen years, Baghdad remained in Safavid hands. But the Ottomans, led by the iron-fisted Sultan Murad IV, were determined to reclaim the city. In 1638, after a devastating siege, Murad IV personally led his troops into Baghdad. A massive massacre of the city's Shia population followed.

The 1639 Treaty of Zuhab ended the war. It gave Iraq to the Ottomans, formally partitioned the Caucasus, and established a border that largely remains the frontier between Turkey, Iran, and Iraq today.

The Legacy: Sectarian Entrenchment

The century-long Ottoman–Safavid war left two catastrophic legacies. First, it entrenched the Sunni-Shia divide. Before this era, the boundaries were fluid. Afterward, they hardened into state-sponsored identities. The Ottomans actively promoted a strict Sunni orthodoxy to counter Safavid propaganda. The Safavids forcibly converted Iran's population to Shi'ism to create a loyal base. Sectarian identity became a tool of state-building and mass mobilization.

Second, the conflict created the political map of modern Iraq's dysfunction. The Ottoman preference for Sunni governance in Baghdad meant that Iraq's Shia majority was systematically excluded from power for nearly four centuries. This created a structural imbalance: Sunnis gained the administrative and military experience that would allow them to dominate the post-Ottoman state, while the Shia majority remained politically frozen out. This imbalance, baked in by the Safavid–Ottoman rivalry, directly foreshadowed the sectarian power struggles that would erupt after the 2003 U.S. invasion of Iraq.

The Ottoman–Safavid Wars were not a religious war in the simplistic sense. They were a geopolitical struggle in which faith became the most effective tool of mass mobilization. They transformed Karbala from a historical tragedy into a political rallying cry. They turned the theological debates of the 7th century into the state policies of the 16th.

In this sense, the Safavid–Ottoman conflict was the dress rehearsal for the modern Middle East. The rivalry between Riyadh and Tehran is not a new sectarian war. It is the latest chapter in a 500-year-old story of empires weaponizing belief, drawing borders with blood, and leaving behind a region where the lines of power and piety are forever entangled.

The Colonial Engine: Sykes-Picot and the Invention of the Modern State

Then came the 20th century, and with it, a rupture far more consequential for today's landscape than any 7th-century succession dispute. As the Ottoman Empire—the last great Sunni caliphate—teetered on the edge of collapse during World War I, Britain and France saw an opportunity. In 1916, they signed the secret Sykes-Picot Agreement, a pact that would carve up the Middle East into colonial spheres of influence with little regard for the people who actually lived there.

The borders drawn by Sykes-Picot did not correspond to the actual sectarian, tribal, or ethnic realities on the ground. This was not a failure of execution; it was a feature of the design.

The most glaring example of this imperial cartography was Iraq. The British cobbled together three former Ottoman provinces into a single state: a Kurdish north, a Sunni center, and a Shia south. The result was an artificial entity whose communities were forced to coexist under an imported Sunni Hashemite monarch. The French, in turn, carved out Lebanon as a haven for Maronite Christians, creating a state whose delicate sectarian balance would eventually implode into a fifteen-year civil war.

One year after Sykes-Picot, the Balfour Declaration of 1917 added another layer of complexity. In a declaration of just 67 words, British Foreign Secretary Arthur Balfour expressed support for "the establishment in Palestine of a national home for the Jewish people." This promise, made to a European Zionist movement, directly contradicted assurances of Arab independence given to local populations in exchange for their revolt against the Ottomans.

The implementation of Sykes-Picot and Balfour created weak states lacking political and social cohesion. By the time the British and French withdrew after World War II, they left behind a collection of fragile states with hardened borders—and the latent sectarian tensions that those borders had been designed to contain and exploit.

The American Umbrella: GCC–US Strategic Alliance

Into this vacuum of weak post-colonial states stepped a new superpower: the United States. The Cold War brought Washington firmly into the Middle East, but the defining pivot came after the 1979 Iranian Revolution. Suddenly, the Shia theocracy in Tehran was not just a rival to Sunni monarchies—it was an existential threat.

In 1981, the Gulf Cooperation Council (GCC) was formed, uniting Saudi Arabia, Kuwait, Qatar, the UAE, Bahrain, and Oman. But the alliance that truly mattered was the one with Washington. The GCC–US strategic partnership became the bedrock of Sunni Gulf security. In exchange for preferential access to oil and billions of dollars in arms purchases, the United States offered a security umbrella, advanced weaponry, intelligence sharing, and the permanent basing of naval and air forces.

This arrangement fundamentally altered the Sunni-Shia balance of power. The GCC states, despite their small populations and limited military capacity, could now project force and deter Iranian aggression—but only as long as Washington remained committed. For Iran, the US presence in Bahrain, Qatar, and Saudi Arabia was a provocation and an encirclement. For the Sunni Gulf monarchies, the American alliance was a lifeline, allowing them to maintain their grip on power without fully developing indigenous military or economic resilience.

The result was a region frozen in a delicate, dangerous equilibrium. The GCC–US alliance militarized the sectarian divide, turning a historic religious difference into a frontline of great-power competition. Every American carrier group in the Persian Gulf, every F-35 sold to the UAE, every sanctions regime targeting Iran was read in Tehran as a Sunni-American conspiracy—and in Riyadh as necessary deterrence against Shia expansionism.

Modern Echoes: The Iran–Saudi Axis of Power

With this backdrop, the 21st-century fault line comes into focus. On one side stands Saudi Arabia, backed by the full military and diplomatic weight of the United States, projecting Sunni leadership. On the other, Iran, isolated by sanctions but skilled in asymmetric warfare, positioning itself as the defender of Shia interests and the architect of a resistance axis. This is not a direct war. It is a shadow contest, fought across multiple theaters.

· Iraq: After the 2003 U.S. invasion toppled a Sunni-minority regime, the new order empowered Shia majorities. But the collapse of centralized authority unleashed militias, Iranian influence, and a power vacuum that the GCC–US alliance could not fill without putting boots on the ground—a price Washington was unwilling to pay.
· Syria: What began as a popular uprising against the Assad regime quickly absorbed regional fault lines. Iran-backed forces mobilized to save the government, while Sunni-majority opposition groups received backing from Gulf states. The US stood by, wary of direct intervention, while its GCC allies funded rival factions—often at cross-purposes with American counterterrorism goals.
· Yemen: In the purest proxy theater, a local Houthi insurgency (with Zaydi Shia roots) escalated into a regional standoff. Saudi Arabia, backed by US logistics and intelligence, led a coalition against Iran-linked actors through air power and indirect warfare. The result has been a catastrophic humanitarian crisis—and a demonstration of the limits of the GCC–US alliance when fighting a guerrilla enemy.

The Sectarian Framing vs. Geopolitical Reality

While militias in the Middle East are often described as "Shia" or "Sunni," their primary drivers are rarely purely theological. Instead, sectarian identity serves as a powerful mobilization tool for deeper struggles: state collapse, foreign intervention, anti-imperialism (Shia axis), or counter-revolution and jihad (Sunni axis).

· Shia militias have evolved as state-proxy networks primarily orchestrated by Iran’s Islamic Revolutionary Guard Corps (IRGC). Their unifying ideology is velayat-e faqih (guardianship of the jurist) – loyalty to Iran’s Supreme Leader – not just shared Shi'ism.
· Sunni militias are far more fragmented, ranging from nationalist/tribal forces (backed by Turkey or Gulf states) to transnational jihadist groups (Al-Qaeda, ISIS) that reject nation-states entirely.

Shia Militias: The IRGC’s Franchise System

Iran has built the most successful trans-state militia network in the modern Middle East: the Axis of Resistance. Unlike Sunni groups that often clash, Shia militias operate under centralized strategic direction from Tehran, with standardized training, weapons (ballistic missiles, drones, anti-ship missiles), and command structures.

Hezbollah (Lebanon) – The Prototype

· Origin: Founded 1982 by IRGC instructors following Israel’s invasion of Lebanon. Blended Shi’ite social services with guerrilla warfare.
· Evolution: Transformed from a militia into a hybrid actor – a political party with parliamentary seats, a social welfare empire, and a military force stronger than the Lebanese army.

Hezbollah’s power lies in deterrence by resilience. It survived multiple Israeli wars by embedding its missile launchers in civilian areas, making any Israeli retaliation politically costly.
· Current role: After the 2006 war with Israel, and especially from 2012, it shifted to propping up Assad in Syria, protecting Iran's land bridge to the Mediterranean. This exposed its sectarian face, eroding its pan-Arab resistance credibility.

Iraq’s Popular Mobilization Forces (PMU / Hashd al-Shaabi)

· Origin: Formed in 2014 after Grand Ayatollah Sistani issued a fatwa for jihad against ISIS. Initially a cross-sectarian call, but IRGC-aligned factions quickly dominated.
· Structure: The PMU is an umbrella of 40 factions. Crucially, not all are equal:
  · Kataib Hezbollah (KH): The IRGC’s direct arm in Iraq. Responsible for rocket attacks on U.S. bases. Operates as a state-within-a-state.
  · Asa’ib Ahl al-Haq (AAH): Broke from Muqtada al-Sadr’s movement and aligned fully with the IRGC under Qasem Soleimani, the Quds Force chief assassinated in 2020.
  · Badr Organization: The oldest, with deep, decades-old ties to Iran’s intelligence and security services.
  · Saraya al-Salam (Sadr’s militia): Populist, anti-Iran, anti-U.S. – a reminder that Shia militias are not monolithic.

The PMU’s institutionalization into the Iraqi state (as part of the armed forces) is a double-edged sword. It gives Iran legal cover but also forces factions to balance Tehran’s orders with Iraqi nationalist sentiment. The 2020 U.S. assassination of Soleimani and PMU commander Abu Mahdi al-Muhandis fractured this balance, leading to intra-Shia fighting in 2022.

Houthis (Yemen) – The Unlikely Proxy

· Origin: Zaydi Shia revivalist movement (a Shia branch doctrinally closer to Sunni Islam than Twelver Shi'ism). Its slogan – “Death to America, Death to Israel, Curse on the Jews, Victory to Islam” – echoes Iran’s revolutionary rhetoric.
· Transformation: In 2014, the Houthis seized Sana’a. Iran began supplying them with missiles and drones, turning a local insurgency into a regional threat.

The Houthis are less a proxy, more a partner – they have their own agenda (control of Yemen) that aligns with Iran’s goal of harassing Saudi Arabia. Iran uses them as a cost-imposing tool: cheap missiles force Saudi Arabia to spend billions on air defense, draining its economy.
· Current capability: Now claim to field hypersonic ballistic missile technology, likely derived from Iran or North Korea. Their Red Sea ship attacks (2023–present) have had a global economic impact.

Afghan & Pakistani Shia Militias in Syria

· Liwa Fatemiyoun (Afghan Shia) and Liwa Zainebiyoun (Pakistani Shia) – recruited from refugees and Shia minorities.

These groups reveal Iran’s demographic limitation – it uses non-Iranian Shia as expendable cannon fodder in Syria, preserving Iranian lives. Many are young men promised citizenship or money. This is a classic imperial proxy technique.

Sunni Militias: Fragmentation and Rivalry

No single power unites Sunni militias. Instead, multiple patrons (Turkey, Saudi Arabia, UAE, Qatar) sponsor different groups, often against each other. Sunni militias also face a jihadist vs. nationalist schism.

The "Nationalist" Sunni Militias (State-adjacent)

· Iraq’s Tribal Mobilization Forces (al-Hashd al-Ashairi): Formed during the anti-ISIS campaign, backed by the UAE and Jordan. These are anti-Iran but also anti-jihadist. After ISIS’s defeat, they were marginalized by the Shia-dominated PMU, leading to latent Sunni grievance.
· Syrian National Army (SNA): A Turkish-backed umbrella of Sunni Arab and Turkmen factions in northern Syria. Their primary enemy is the Kurdish YPG (which Turkey equates with PKK), not Assad. This makes them tools of Turkish foreign policy, not a sectarian revolution.
· Libyan National Army (LNA) – under Khalifa Haftar: Though nominally secular, Haftar relies on Madkhali Salafi fighters and elite units such as the Benghazi-based Al-Saiqa Brigade. Backed by Egypt, the UAE, and Russia – a coalition of Sunni autocracies arrayed against Islamist militias.

The Jihadist Militias (Transnational)

· Al-Qaeda (global): Prioritizes attacking the “far enemy” (the US and the West) but operates through local affiliates and offshoots: Hayat Tahrir al-Sham (HTS) in Syria (which formally severed ties with Al-Qaeda in 2016), Al-Shabaab in Somalia, and Jama'a Nusrat ul-Islam wa al-Muslimin (JNIM) in the Sahel. Al-Qaeda has evolved into a decentralized ideological brand, not a command hierarchy.
· ISIS (Daesh): Broke from Al-Qaeda in 2013. Its key innovation was state-building – seizing territory, running bureaucracies, selling oil, and committing genocide against Yazidis and Shia. Its territorial collapse (2019) led to an insurgency 2.0 in Iraq and Syria and affiliates in Afghanistan (ISIS-K), Congo, and Mozambique.

ISIS uses spectacular violence (mass beheadings, suicide bombings) to compensate for its loss of territory, aiming to provoke overreaction by Shia militias – which in turn drives more Sunnis into its ranks.

The "Muslim Brotherhood" Model (Non-jihadist but Islamist)

· Hamas (Palestine): Sunni Islamist, but its alliance with Iran against Israel puts it in a strange category. Hamas does not fight alongside Shia militias in Syria – it broke ties with Assad over his massacres of Sunnis. Yet Iran still funds Hamas’s military wing (the al-Qassam Brigades). Iran supports Hamas because it hurts Israel, not because of theology.
· Syrian Opposition factions (e.g., Ahrar al-Sham, Failaq al-Rahman): Now largely crushed or absorbed into HTS. They were backed by Turkey and Qatar as a counterweight to both Assad and ISIS.

The Myth of "Sectarian War"

Media often frames conflicts as Shia vs. Sunni. But the data shows:
· Intra-sectarian violence is often deadlier: In Iraq, Shia-on-Shia fighting (Sadr vs. IRGC-aligned factions) killed dozens in 2022. In Syria, jihadist groups (Sunni) fought each other more than they fought Assad.
· Sectarian rhetoric serves elites: Leaders like Iran’s Supreme Leader Ali Khamenei or Saudi Arabia’s former Crown Prince Mohammed bin Nayef have used sectarian language to rally their bases, but their actual actions (e.g., the Saudi–Iranian rapprochement of 2023) show pragmatism.
· Minority militias exist: Christian militias in Iraq (Babylon Brigades, backed by Iran), Druze militias in Syria (backed by Assad regime), and Alawite militias (Shabiha) complicate the binary.

The Dangerous Myth of "Ancient Hatred"

It is here that the most pervasive myth must be confronted directly: the idea that Sunnis and Shias have been locked in an eternal, 1,400-year cycle of bloodshed. This is not just misleading. It is analytically lazy.

For long stretches of Islamic history, the two communities coexisted peacefully—sharing cities, markets, marriage ties, and even Sufi lodges. Conflict tends to emerge not from theology but from identifiable material conditions: weak states, power vacuums, external intervention, and political elites cynically weaponizing identity. The borders drawn by Sykes and Picot did not create the Sunni-Shia divide, but they trapped it inside fragile, multi-sectarian states. The GCC–US alliance then militarized that trap, ensuring that any local conflict would risk escalation into a regional—and potentially global—confrontation. In other words, sectarianism is activated—not inherited.

Karbala as Blueprint, Not Echo

This brings us back to Karbala. Today, Husayn's stand is invoked far beyond religious ritual. For some, it is a universal call to resist oppression, irrespective of creed. For Iran's leadership, it is a potent tool for political mobilization and legitimacy—a usable history, not a static memory. For the GCC states and their American ally, countering that narrative requires framing Sunni governance as stability itself.

This dual use—spiritual and strategic—explains why an event from 680 CE still resonates in drone strikes, diplomatic cables, and carrier strike groups. Karbala is not just history. It is a blueprint.

Conclusion: Faith as Language, Power as Driver

So, is the Sunni-Shia divide about faith or power? The answer is both—but not in equal measure.

Faith provides the vocabulary: the symbols, the wounds, the moral grammar of loyalty and betrayal. But power determines the sentence. What began as a 7th-century succession dispute was later frozen into place by the straight lines of a colonial map, then militarized by Cold War alliances and petrodollar patronage. Today, that dispute has evolved into a geopolitical fault line, shaped by the US–GCC umbrella and Iran's asymmetric response—shaping alliances, wars, and the very map of the Middle East.

And as long as states remain fragile, external patrons remain committed, and power remains contested, the legacy of Karbala—and the divisions it came to symbolize—will continue to define the region's most volatile conflicts.

April 02, 2026

The Erosion of the Predictable

By Ephraim Agbo 

A month of war. A president promising victory. Markets that refuse to believe him. Allies drifting toward separate horizons. A blind man running toward a finish line only strangers can help him see. And beneath it all, the slow, relentless collapse of the ground six centuries of farmers have stood upon.

This is not a collection of disparate headlines. It is a single condition expressing itself across every domain of human activity: the erosion of the predictable. The certainties that once anchored power—military, economic, climatic, technological—have fractured. What remains is not chaos, but something more unsettling: a fog so thick that even the winners cannot tell if they have won.

The Theatre of Victory

Donald Trump stood before the American people in his first prime-time address since the war with Iran began. He spoke of "overwhelming victories." He declared the conflict "nearing completion." He listed America's military achievements alongside the great wars of the past—the First and Second World Wars, Vietnam—as if the arc of history bends toward a clean, legible conclusion.

But the address was a document of evasion, not explanation.

There was no mention of the 15-point peace plan the US had been urging Iran to accept. No clarity on what "success" actually means. No acknowledgment that the Strait of Hormuz—that slender choke point through which a fifth of global oil passes—remains effectively blockaded, with global energy prices climbing toward $107 a barrel as he spoke.

Instead, the president offered a curious abdication. "We don't need it," he said of the Strait. "The countries of the world that do receive oil through the Hormuz Strait must take care of that passage. They must cherish it, they must grab it and cherish it."

This was not diplomacy. It was a declaration that the United States no longer considers itself bound by the architecture of global public goods it helped build after 1945. The message to allies in Asia and Europe was unmistakable: the guarantor of last resort is retreating behind a doctrine of contingent interest. The Strait is your problem now.

Markets understood immediately. Oil jumped five percent. Stocks tumbled. In Singapore, traders described a "direct reflection of disappointment." What they needed was a clear outline of the peace plan or a ceasefire. What they got was a president who threatened, if no deal was made, to bomb Iran's power plants—an act that would constitute a war crime—while insisting a deal was not necessary and simultaneously suggesting the war was almost over.

The Two Wars

Behind the White House's carefully staged unity lies a growing divergence between Washington and Jerusalem that could reshape the Middle East for a generation. For Israel, this war has always been existential. Not because Iranian missiles can destroy the country—though they can—but because the regime in Tehran represents a threat that mere military degradation cannot resolve. Netanyahu has spoken of Iran's nuclear program, its ballistic missiles, its network of proxies stretching from Lebanon to Yemen. The Israeli objective, never fully articulated in public but unmistakable in private conversations among security officials, is regime change.

Trump's objective appears far narrower: degrade Iranian capabilities sufficiently to declare victory and withdraw.

This mismatch is not new. But the pressure is now acute. Israel endured four missile salvos in a single day during the Passover holiday, with children among the casualties. The public, initially united behind the war following dramatic opening strikes, is fracturing. Some Israelis believe the war must continue until the regime falls. Others argue for a ceasefire after maximum damage has been inflicted. A growing number simply want it to end.

What unites them is a creeping suspicion that the United States may not stay long enough to finish the job. Commentators in Jerusalem warn that the war, while severely damaging Iran's military, has radicalised the regime rather than toppled it. Given time, they argue, Tehran will rebuild—and the next conflict will be bloodier.

The White House, meanwhile, faces its own political calculus. Trump's approval ratings are sinking. His own supporters are beginning to murmur betrayal. "You are abandoning all the goals you set for us," some say. The address was aimed at a domestic audience, not a foreign one. And domestic audiences want wars to end, not to escalate.

The Global South Pays the Price

While Washington and Jerusalem debate timelines, the rest of the world is counting barrels. Ninety percent of the oil that transits the Strait of Hormuz is destined for Asia. China, Japan, South Korea, India, and the nations of Southeast Asia are bearing the immediate cost of the blockade. South Korea's president has declared a "wartime footing." He is urging parliament to pass an emergency budget. He is telling citizens to take shorter showers.

In the Philippines, in Vietnam, in Indonesia and Malaysia, governments are scrambling. Fuel price caps have been imposed—a temporary palliative that drains national treasuries. Strategic reserves are being drawn down. Officials are scouring global markets for alternative supplies, competing against each other in a zero-sum scramble.

The aviation industry is in crisis. Jet fuel prices have more than doubled since the conflict began. Korean Air has entered emergency management. Airlines are questioning the viability of Middle Eastern routes. The ripples extend to tourism, to trade, to the price of manufactured goods shipped by air.

This is the hidden architecture of war. Not the missile strikes and the prime-time addresses, but the slow, grinding erosion of ordinary life half a world away from the front lines. A fisherman in Indonesia paying triple for diesel. A factory owner in Vietnam calculating whether to pass on fuel costs or absorb them. A family in Manila choosing between rice and transportation.

War is never contained. It spreads along the contours of global supply chains, infecting economies that have no stake in the outcome and no voice in the negotiations.

The Moon as Distraction

It is perhaps no accident that NASA chose this week to launch Artemis 2, the first crewed mission around the moon in more than half a century. The spectacle was magnificent: the Space Launch System creeping upward on pillars of blinding flame, four astronauts strapped inside a spacecraft that had never carried humans, the promise of lunar landings and Martian exploration shimmering on the horizon.

"We're back in the business of sending astronauts to the moon," NASA's administrator declared.

But what business, exactly? The Apollo missions were a product of Cold War rivalry, a demonstration of technological supremacy in an age of clear ideological division. Artemis operates in a different era—one defined not by superpower competition but by fragmentation. The moon is no longer a frontier to be claimed. It is a destination to be shared, or contested, depending on which nation's press release you read.

China has its own lunar ambitions. So does India. So does a consortium of private companies backed by billionaire visionaries. The new space race is not bipolar but multipolar, governed by no clear rules and animated by no unifying purpose.

For ten days, the Artemis 2 crew will loop around the moon and back, taking photographs, conducting assessments, testing systems. Then they will splash down in the Pacific and return to a world still at war, still burning fossil fuels at an unsustainable rate, still unable to agree on the most basic facts about its own future.

The irony is crushing: the same species that cannot keep the Strait of Hormuz open is preparing to send humans deeper into the solar system than ever before. Our engineering ambitions outpace our political wisdom. We can build rockets that reach the moon. We cannot build institutions that secure a strait.

The Digital Wasteland

While rockets fly and missiles fall, another war is being litigated in American courtrooms—this one over the architecture of attention itself.

A jury has found Meta and Google liable for seriously harming the mental health of a young woman named Kaylee, who became addicted to social media as a child. The verdict has been called a "tobacco moment" for big tech—a legal inflection point where the industry may finally be forced to acknowledge that its products are not neutral tools but engineered environments designed to maximise engagement at any cost.

The science is not settled. Researchers caution that the evidence for population-level harm remains inconclusive. "It's not clear-cut," one professor acknowledged. Social media may cause harm to some individuals while benefiting others. The net effect is maddeningly difficult to quantify.

But the legal system does not require population-level certainty. It requires proof of harm in specific cases. And Kaylee's case was compelling enough to persuade a jury.

The implications are enormous. Australia has already announced a ban on social media for young people. France is implementing restrictions for under-15s. The European Commission is investigating TikTok's infinite scroll feature. At least thirty countries are considering similar measures.

Parents like Lori Schott, whose 18-year-old daughter Annalee took her own life in 2020, see the verdict as long overdue. Lori blames social media for making her daughter feel inadequate about her appearance. "Overdue," she said simply, standing outside the courthouse.

But the deeper question remains unanswered: what, exactly, are we protecting young people from? Is it the platforms themselves? Or is it a broader crisis of meaning, of community, of the slow erosion of shared rituals and intergenerational connection? Social media did not invent adolescent anxiety. It merely amplified it, optimised it, turned it into a revenue stream.

The lawsuits will continue. The regulations will multiply. But no court can restore what has already been lost: the experience of growing up without a quantified self, without algorithmic curation, without the constant, exhausting performance of identity for an invisible audience.

Running Blind

Clark Reynolds, who calls himself Mr. Dot, is about to run a marathon. He is blind. He will have no sighted guide, no tether. Instead, he will wear smart glasses equipped with cameras, streaming live to volunteers around the world who will serve as his eyes.

"Hey, Be My Eyes," he says. Within thirty seconds, a stranger appears—from anywhere, from nowhere—and begins describing the path ahead. "There's a parked car. Swerve to the left."

The technology is remarkable, but that is not the story. The story is what Reynolds says about the experience: "It's not AI." His volunteers tell him, "I'm getting your steps in for you today." They are not only being his eyes; they are also being his cheerleaders.

The connection is the point. In an age of algorithmic isolation, of social media engineered to maximise outrage and minimise empathy, Reynolds has found a use for technology that does the opposite. His volunteers are not paid. They are not certified. They are simply human beings willing to help another human being navigate the world.

If his guide is from northern England, they might say something is "really big." If from America, "it's a garbage bin." The differences are not obstacles to communication but textures within it. The technology fades. The humanity remains.

It is a small story. It will be forgotten by most within days. But it is also a rebuke to the grand narratives of technological determinism that dominate our discourse. The same week that juries held social media giants accountable for algorithmic harm, a blind man demonstrated that the internet can still be a place of genuine, unmediated human connection—not because of its design, but in spite of it.

The Salt Beneath Our Feet

The most profound story of all, however, is unfolding not in courtrooms or launch pads or war rooms, but in the Little Rann of Kutch, a vast salt marsh in western India where the Agariyas have harvested salt for six centuries.

These are not industrial operations. The Agariyas are nomadic tribal families who live for eight months of the year in makeshift shelters—bamboo poles covered with burlap, clay floors layered with dung to keep them cool. They dig brine from shallow wells, evaporate it under the sun, rake the crystallising salt by hand. They earn perhaps three percent of the final value of their product. They live with no savings and crushing debt.

And climate change is washing them away.

"The seasons were regular," says Jagdish, a 30-year-old Agariya who lives in the desert with his wife, his parents, his uncle, and his daughter. "In winter it was cold. In summer it was hot. Now it has changed—hotter than ever, and rains out of season."

Last season, unseasonable rain destroyed 250 tons of his salt—12 percent of his yield. The Rann flooded. The roads washed out. The trucks couldn't reach him. The salt dissolved before it could be sold.

Even when the weather holds, the groundwater is harder to find. Until three years ago, wherever you dug, you found water. Now it can take five attempts. The aquifer is dropping. The brine is more dilute. The crystals are smaller than they should be.

Scientists are trying to help. Solar pumps replace diesel, cutting costs and emissions. Green concrete linings for the salt pans keep the brine pure and the crystals white. But no technology can stop the rains from coming early or the heat from rising beyond anything the old seasonal calendars predicted.

"Climate is changing in such a manner that you cannot predict," one researcher said.

The forest department, meanwhile, is serving eviction notices. The Rann is a sanctuary for the endangered wild ass, they argue. Humans should not be there. Never mind that the Agariyas arrived six centuries before the wildlife department existed. Never mind that the wild ass population has actually increased. The logic of conservation, detached from the reality of human habitation, is being weaponised against the very people who have stewarded this landscape for generations.

If the Agariyas are forced out, who will harvest India's salt? Industrial operations cannot easily replace them—the Rann's terrain is too remote, too variable, too demanding of local knowledge. The answer, probably, is that India will import more salt. The carbon footprint will grow. The price will rise. And a way of life that survived Mughals, British colonisers, and the Green Revolution will finally succumb to climate change and bureaucratic indifference.

The Common Thread

What connects a war in the Middle East, a moon mission, a social media verdict, a blind marathon runner, and salt farmers in India? Not geography. Not scale. Not the attention of news editors. What connects them is the erosion of the predictable. The collapse of the frameworks that once allowed human beings to plan, to invest, to trust that tomorrow will resemble yesterday.

Seasons no longer arrive when they should. Wars do not end when leaders declare them nearly complete. Markets lurch not on fundamentals but on the ambiguity of prime-time addresses. Technologies that promised connection deliver addiction and alienation. The same week that humanity launches its most powerful rocket toward the moon, farmers who have worked the same land for six hundred years cannot count on the sun.

This is the age of unraveling. Not collapse—not yet—but a slow, grinding dissolution of the certainties that made modernity possible. The post-1945 order is fragmenting. The climate is destabilising. The digital public square is poisoned. And the stories we tell ourselves about progress, about victory, about the arc of history bending toward justice—these stories are fraying at the edges.

Trump said the Strait of Hormuz would "open up automatically once the war has ended." But what does "ended" mean? Without clear objectives, without agreed endpoints, without a shared understanding of what victory looks like, wars do not end. They mutate. They become frozen conflicts, simmering tensions, periodic eruptions that never quite resolve.

In Jerusalem, they worry the US will leave too soon. In Seoul, they worry about fuel. In the Rann of Kutch, they worry about rain in October. In Cape Canaveral, four astronauts are hurtling toward the moon, and for ten days, at least, their problems are purely technical.

Perhaps that is the only certainty left: that while we argue over straits and sanctions, while we litigate the harms of social media and the ambiguities of war, the Earth continues to turn. The sun still rises over the salt pans. The moon still waits to be visited. And somewhere, a blind man is about to say four magic words—"Hey, Be My Eyes"—and begin to run.

The finish line may be invisible to him. But he is moving toward it anyway. That, perhaps, is the only response the age of unraveling permits: to keep moving, to keep helping, to keep telling stories that connect us across the widening gaps.

Whether that is enough—whether it will ever be enough—is a question no prime-time address can answer.

March 29, 2026

Grabbed by the Throat: How the Houthis Are Choking the Red Sea While Iran Tightens Its Grip on the Strait of Hormuz


By Ephraim Agbo 

If the wider war involving Iran has a geography, it does not stop at the Strait of Hormuz. It stretches south, toward Yemen, where the Houthis are now turning the Red Sea into a pressure point of their own. On March 28, 2026, Iranian-backed Houthi forces launched missiles at Israel in what is described as the first direct strike from Yemen since the latest escalation began, underscoring how quickly the conflict can widen beyond its main battlefield. A 2024 UN-linked assessment said Iran and Hezbollah helped build the Houthis into a far more capable military actor than they once were.

That matters because the Houthis are not merely acting out of solidarity. They are operating inside a broader strategic system shaped by Iran. Their value lies less in their ability to defeat Israel militarily than in their ability to create friction in places where global commerce is most vulnerable. That is why the Red Sea matters. That is why Bab al-Mandab matters. And that is why the Houthis’ actions should be read not as a side story, but as a proxy front in a wider conflict that is already reverberating through markets, shipping insurance, and naval deployments.

Bab al-Mandab is one of the world’s most important maritime chokepoints. It connects the Red Sea to the Gulf of Aden, and most petroleum and natural gas exports from the Persian Gulf that move through the Suez Canal or SUMED pipeline pass through both Bab al-Mandab and the Strait of Hormuz. The World Bank has said the Red Sea crisis slashed vessel traffic through the Suez Canal and Bab al-Mandab by roughly three-quarters by the end of 2024, after the route had accounted for about 30 percent of global container traffic. That is not a symbolic disruption. It is a direct hit on the plumbing of global trade.

This is where the Houthis’ real leverage begins. They do not need a navy to matter. They only need the ability to make ships hesitate. In January 2024, container ships were already avoiding the Suez route as attacks lifted freight costs and lengthened voyages, forcing vessels to sail around Africa instead. Once shipping firms begin to price in uncertainty, the disruption spreads far beyond the battlefield: insurance rises, delivery times stretch, inventories tighten, and inflation pressures reappear in places far from Yemen. In other words, a non-state actor can wound the world economy without sinking a single major vessel.

The Iran dimension makes this more dangerous. The Strait of Hormuz remains the other great pressure point in the system. The EIA says oil flow through Hormuz averaged 20 million barrels per day in 2024, about 20 percent of global petroleum liquids consumption. Last week, Barclays warned a prolonged closure could cut 13 to 14 million barrels per day from supply, an energy shock large enough to rattle every major market. Seen together, Hormuz and Bab al-Mandab form a dual-chokepoint problem: one controlled by Iranian leverage, the other vulnerable to Iran-backed disruption. That is the strategic nightmare now hanging over the region.

The Houthis’ missile fire at Israel on Saturday should not be read only as a gesture of solidarity. It is also a signal that the Iran war can be exported sideways, through proxies and chokepoints, into the arteries of global trade. The battlefield is no longer only Gaza, Tehran, or the Gulf. It is also the narrow sea lanes where shipping routes, energy flows, and economic confidence can be held hostage by the threat of escalation. That is the deeper meaning of the Houthi move: not just a strike on Israel, but a reminder that the war with Iran may be fought as much through maritime pressure as through missiles.

