By Ephraim Agbo
Every generation of teachers eventually meets a technology it does not fully understand. And almost every time, the first institutional response is the same: prohibit it first, study it later. The pattern is remarkably consistent across educational history.
The ballpoint pen was once treated as a threat to handwriting discipline. Calculators were accused of destroying mathematical thinking. Mobile phones became symbols of distraction. WhatsApp was blamed for collapsing attention spans. Social media supposedly ruined reading culture. Now artificial intelligence has entered classrooms, and once again, schools across the world are responding with suspicion, anxiety, and prohibition.
But beneath these recurring moral panics lies something deeper than technology itself. The real issue is authority.
At the recent All Northern Schools Conference, held in Kano, Nigeria, one of the most intellectually uncomfortable interventions came from Yekini, a figure widely regarded as one of Africa’s leading voices in software testing, product quality engineering, and digital systems reliability.
That context matters enormously because Yekini is not a naïve technology evangelist intoxicated by Silicon Valley optimism. Her professional life has been built around testing systems before they fail. As a quality engineering specialist and technology executive, her work revolves around reliability, risk detection, system weakness, and technological accountability.
Which is precisely why her warning about schools and artificial intelligence carried unusual intellectual weight. She did not argue that schools should surrender blindly to AI. She argued something far more difficult: that institutions should stop banning technologies they have not seriously attempted to understand. And to explain the point, she told a deceptively simple story.
The Boy With the Calculator
Years ago, while teaching in a primary classroom, Yekini recalled one student who constantly challenged the rules. Emmanuel was energetic, curious, stubborn, and persistently fascinated by technology. One day, he raised his hand and asked a question that sounded almost rebellious at the time:
“Aunty, my brother has a calculator. Why can’t I bring it to mathematics class?”
The response came immediately and instinctively:
“You will never learn maths if you depend on a calculator.”
The classroom agreed. Teachers agreed. Parents agreed. The calculator was treated almost as intellectual corruption — a shortcut that would weaken discipline and destroy real thinking. For years, calculators remained unofficial enemies inside many classrooms. Then something happened that educational institutions repeatedly fail to anticipate: the future arrived anyway.
Years later, Yekini met Emmanuel again. He was no longer a schoolboy arguing for permission. He had become an engineer working in Lagos. And the calculator was still with him — only now, it existed inside his smartphone, seamlessly absorbed into the digital ecosystem schools once tried to resist.
That story is not really about calculators. It is about institutional memory, or rather the lack of it: education systems repeatedly forget how often they have mistaken technological transition for intellectual decline.
The Real Fear Is Not AI — It Is the Collapse of Informational Monopoly
The debate around artificial intelligence is usually framed around cheating, laziness, or distraction. Those concerns are real, but they are not the center of the crisis. The deeper anxiety is institutional.
For centuries, schools operated through a model of informational scarcity. Knowledge was limited. Books were difficult to access. Expertise was centralized. Teachers functioned as gatekeepers of legitimate information. The classroom was not only a learning space. It was a hierarchy. Artificial intelligence disrupts that structure fundamentally.
A student with a smartphone can now generate lesson summaries, explanations, coding assistance, mock interview preparation, essay structures, translations, revision notes, and research guidance within seconds. A child can ask an AI system to explain algebra like a professor, then ask it to explain the same concept like a ten-year-old.
That changes the architecture of authority itself. The teacher is no longer the sole distributor of information. And that shift unsettles institutions built around informational control.
When schools ban technologies they have not meaningfully explored, they are not simply reacting to risk. They are attempting to preserve familiarity. The prohibition becomes psychological as much as educational — an effort to defend an older structure of authority against a rapidly changing reality.
But history suggests something uncomfortable: societies that respond to technological disruption primarily through restriction rarely shape the future. They usually arrive late to it.
Artificial Intelligence Is Under the Spotlight — And Trust Is the Real Issue
Artificial intelligence today occupies a strange position in public life. It is simultaneously celebrated as revolutionary and feared as destabilizing.
Can these systems be trusted? Can they provide accurate information? Are they biased? Who controls them? And what happens when institutions begin depending on systems they do not fully understand? These questions are no longer theoretical.
Across workplaces globally, AI chatbots and generative systems are already being integrated into daily operations. Employees use them for writing, summarization, coding, analysis, customer support, and administrative tasks. Yet trust remains surprisingly fragile.
Recent surveys suggest many employees spend almost as much time verifying AI-generated outputs as they spend using the systems themselves. Even among business leaders, complete trust in AI systems remains relatively low. That hesitation exists for good reason.
One AI researcher who previously helped develop technology behind Amazon Alexa recently explained the problem in unusually direct terms.
Traditional software systems, he argued, are rule-based. Databases, spreadsheets, and classical algorithms operate predictably. They follow explicit instructions. Their outputs can often be traced and explained. Modern machine learning systems operate differently.
They identify patterns across massive datasets and generate probabilistic predictions rather than explicit reasoning. They do not “understand” information the way humans imagine understanding. Instead, they predict language and behaviour statistically. That distinction matters enormously because it means AI systems can sound extraordinarily intelligent while being fundamentally wrong. The industry even has a term for this phenomenon: hallucination.
AI systems may fabricate historical events, invent legal citations, misrepresent scientific findings, or produce false information with complete confidence. Not because they are intentionally deceptive, but because statistical prediction is not the same thing as comprehension.
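The statistical-prediction point can be made concrete with a deliberately tiny sketch (illustrative only; real models operate at vastly greater scale, but the principle is the same). The toy "model" below only counts which word tends to follow which in its training text, then generates the statistically likely continuation. It has no concept of truth, only of frequency, so it can confidently produce a fluent falsehood:

```python
from collections import defaultdict

# Toy training text: note that "is" is followed by "paris" more often
# than by anything else, purely as a matter of frequency.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is paris ."
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def predict(word):
    # Pick the most frequent successor: pure statistics, no understanding.
    options = follows[word]
    return max(set(options), key=options.count)

def generate(seed, n):
    # Extend the seed one statistically likely word at a time.
    words = seed.split()
    for _ in range(n):
        words.append(predict(words[-1]))
    return " ".join(words)

# Fluent, confident, and wrong: a miniature "hallucination".
print(generate("the capital of spain", 3))
# → the capital of spain is paris .
```

The model never "lied"; it simply continued the seed with the most probable words it had seen. That is the mechanism, in miniature, behind confident fabrication.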
And yet despite these weaknesses, AI adoption continues accelerating because the technology also performs tasks previous generations of software simply could not achieve.
It can interpret images, analyze speech, summarize enormous documents, generate code, personalize learning materials, detect patterns across massive datasets, and automate forms of labour previously dependent on human cognitive effort.
This creates a paradox at the heart of the AI era: societies increasingly depend on systems they do not fully trust.
The “Black Box” Problem
One of the central concerns in artificial intelligence today is what engineers call the black box problem. Many advanced AI systems produce answers without being able to explain their reasoning in ways humans fully understand. Even engineers who build these models sometimes cannot clearly trace why a specific output emerged. That creates profound challenges for high-stakes industries.
In banking, a flawed AI decision can affect financial systems. In healthcare, it can affect diagnosis and treatment. In aviation, it can affect safety. In law, it can distort justice. And in education, it can reshape how an entire generation learns to think.
This is precisely why voices like Yekini’s matter within the African technology ecosystem. Her field — software quality engineering — exists because technological systems fail. Software testing is fundamentally about distrust. It assumes systems must be examined, stressed, verified, questioned, and challenged before deployment. Which makes her educational argument deeply ironic.
The same institutions that claim to fear AI’s unreliability are often banning it without conducting the very type of structured examination engineers consider essential. In engineering culture, nothing serious is deployed untested.
Banking infrastructure is stress-tested because failure destroys trust. Aviation systems are tested because lives depend on reliability. Critical digital infrastructure is repeatedly examined because confidence without verification is dangerous. Yet many schools now approach AI in the exact opposite way: fear first, understanding later.
From the perspective of a quality engineer, that is not caution. It is methodological inconsistency.
“Tech First” Does Not Mean “Tech Worship”
One of the most misunderstood aspects of the current AI debate is the assumption that technological engagement automatically means technological surrender. It does not. Yekini’s “Tech First” argument is not a call for blind enthusiasm. It is a call for disciplined literacy, captured in a single principle: “Test before you fear.” That principle is profoundly important because the current educational response to AI often resembles panic more than policy.
Some schools prohibit AI tools immediately. Students caught using them face punishment or suspension. Yet beneath these bans lies an uncomfortable reality: students are already using AI anyway.
Quietly. Secretly. At home. Under desks. In hostels. During assignments. Sometimes more effectively than the adults attempting to regulate them. This creates two possible educational futures.
In one school, administrators panic and ban AI outright. Students continue using it underground without ethical guidance, critical supervision, or intellectual discipline. In another school, administrators study the technology alongside teachers. They examine its strengths, weaknesses, risks, and limitations. Students learn not merely how to use AI, but how to interrogate it critically.
One school creates concealment. The other creates literacy. That distinction may define educational inequality over the next decade.
AI Is No Longer Just a Tool — It Is Becoming Infrastructure
One of the biggest mistakes educational institutions make is treating artificial intelligence as though it were merely another classroom application. It is already becoming much larger than that.
Across industries, AI is quietly evolving from convenience technology into infrastructure — systems societies increasingly depend on whether they notice them or not.
The same technologies reshaping education are also reshaping finance, medicine, logistics, architecture, transportation, cybersecurity, and governance. Which means the educational debate is no longer fundamentally about whether students should use ChatGPT. It is about whether institutions understand the world students are entering.
The Bigger Question: Will Africa Shape AI or Merely Consume It?
Buried beneath the classroom debate is a much larger geopolitical question. Who gets to shape the technological future?
For decades, African economies have largely occupied the position of technology consumers rather than technology producers. Most major operating systems, cloud platforms, social networks, AI models, and digital infrastructures are still designed primarily in the United States, China, and parts of Europe.
That imbalance matters because technologies are never politically neutral. They carry assumptions. Cultural frameworks. Economic interests. Linguistic priorities. Embedded biases. This is partly why African voices in quality engineering and digital systems matter so profoundly.
Yekini’s broader mission is not merely about classroom technology. It is tied to a larger continental ambition: positioning Africa not only as a consumer of digital systems, but as a producer of world-class technological infrastructure. And education sits at the center of that struggle.
Because a generation trained only to fear AI may never learn to build AI. A generation taught merely to consume platforms may never shape them. A continent that delays technological literacy risks deepening dependency on systems designed elsewhere.
The consequences are not merely educational. They are economic, political, and civilizational.
The Real Risk Is Not AI — It Is Intellectual Passivity
None of this means artificial intelligence should be accepted uncritically. Bias remains real. Misinformation remains dangerous. Corporate concentration remains alarming. Surveillance concerns are growing. Labour displacement is becoming increasingly plausible.
But banning alone solves none of those problems. If anything, technological illiteracy makes societies more vulnerable to them.
The real educational challenge is not preventing students from touching AI. It is preventing students from surrendering their thinking to it.
That requires a radically different model of education — one less obsessed with memorization and more focused on interrogation.
Students must learn to ask: Is this accurate? What assumptions shaped this answer? What biases exist in this model? What perspectives are absent? What still requires human judgment?
Ironically, the age of AI may demand more human thinking, not less. But only if schools evolve fast enough to teach it.
The Lesson Hidden Inside the Calculator Story
The boy who once argued to bring a calculator into mathematics class was not simply being stubborn. He was standing at the edge of a future the adults around him could not yet fully see. That is often how technological change arrives: first as disruption, then as threat, then as inevitability, and finally as ordinary life.
The lesson for schools is not that every technology is automatically good. Some genuinely deserve caution. Some require regulation. Some can distort learning if introduced carelessly. But caution is not the same thing as fear. And regulation is not the same thing as refusal.
The real failure is not asking difficult questions about artificial intelligence. The real failure is refusing to ask them until after the future has already moved on.
Because every generation eventually discovers the same uncomfortable truth: the future does not wait for permission.