Did you know that in Sub-Saharan Africa, students spend an average of six years in school but leave with only three years’ worth of learning? The learning gap is stark, but artificial intelligence (AI) is rising to the challenge. In Kenya, innovators have created Somanasi (“Learn with me”), an AI-powered chatbot designed to help bridge this divide. By providing real-time, tailored responses to student questions, gamifying lessons, and promoting collaborative learning, this resource helps students better understand and retain the school curriculum, empowering them to make the most of their time in school and narrow the learning deficit.
But AI’s impact goes far beyond classrooms. It’s revolutionizing healthcare delivery, enabling direct cash transfers, and even predicting natural disasters—reshaping lives around the globe. However, with great power comes great responsibility. Since the launch of ChatGPT in late 2022, reported incidents involving AI misuse or harm have surged 20-fold, amplifying risks, particularly in developing countries where gaps in infrastructure, data, skills, and regulations exacerbate vulnerabilities.
How can we harness AI’s potential while mitigating its risks? The World Bank’s latest report, Global Trends in AI Governance: Evolving Country Approaches, provides a roadmap.
The complexities of AI data and impact
AI systems rely on vast amounts of data to function effectively: not just numerical data and text, but also images, audio, video, and sensor inputs. These diverse data types are critical for accurate AI modeling. Yet gaps in datasets, particularly for specific communities and less common languages, can reinforce biases. For example, images of people with darker skin are significantly under-represented in the open-source datasets used to train AI models for skin cancer detection, causing those models to substantially under-perform when diagnosing skin cancer in these populations.
AI’s reliance on enormous datasets also raises concerns about data privacy and security, particularly where sensitive personal information is involved. Sophisticated machine learning models are often described as “black boxes” because it is difficult to understand how the system arrives at a particular output. This opacity makes errors hard to detect and correct, especially in high-stakes sectors like finance or the delivery of essential government services, such as public safety or social assistance.
The environmental impact of AI is also significant. Generating a single AI image consumes as much energy as charging a smartphone, while a single ChatGPT query requires nearly 10 times as much electricity as a Google search. These challenges demand governance frameworks that are not only technically robust but also prioritize fairness, transparency, and environmental sustainability.
Bridging innovation and regulation
AI’s rapid development has created a global consensus: robust AI governance is needed to build trust in AI and ensure it can benefit everyone without causing harm. This idea underpins key frameworks like the UN General Assembly resolution on trustworthy AI and the UN AI Advisory Body’s report on Governing AI for Humanity. Similarly, the World Bank’s World Development Report 2021 emphasizes trust as an integral pillar for the data economy. Our latest report on this topic also highlights the urgent need for adaptive and robust governance frameworks that ensure AI is ethical, transparent, and inclusive.
Key foundations for responsible AI
For countries to benefit from AI, they need the right building blocks:
- Digital Infrastructure: Providing reliable internet, advanced data systems, and computational power.
- Human Capital: Upskilling workers and building AI talent pipelines.
- Local Ecosystems: Fostering innovation through public-private partnerships and supportive policies.
Policymakers often worry that over-regulation of nascent AI ecosystems could stifle innovation. However, robust AI governance and innovation are not a zero-sum game – in fact, governance and regulation create a level playing field, encouraging greater trust in, and adoption of, AI solutions, thus fostering innovation. Without trust, AI cannot deliver on its transformational promise, and the AI divide will only grow.
Tools for policymakers
The World Bank supports countries at all stages of their AI governance journeys. Because the landscape is constantly evolving, no one-size-fits-all approach works. The report outlines four governance tools that countries can tailor to their unique contexts, each with its pros and cons illustrated by country examples:
- Industry self-governance: Voluntary ethical business standards – such as those developed and adopted by companies like Google and Microsoft – can shape practices but lack enforcement mechanisms and carry “ethics-washing” risks.
- Soft law: Non-binding principles and technical standards provide agility but may lack clear rights or responsibilities.
- Regulatory sandboxes: Controlled environments allow testing of innovative regulatory approaches but can be very resource-intensive to run.
- Hard law: Binding frameworks like the EU AI Act or country-level legislation provide consistency and legal certainty but must be adapted to the local context, keeping in mind existing capacity and resources.
Countries must strike a balance between innovation and safeguards, tailoring their approaches to align with their resources, priorities, and societal needs.
Why this matters
AI isn’t just about technology—it’s about people. It’s about creating equitable opportunities, protecting rights, and building trust. The report challenges policymakers to act boldly and collaboratively, ensuring AI becomes a force for good.
Kenya’s story is just one example of how AI, when governed thoughtfully, could help change the future for millions of kids. But to unlock AI’s full potential, countries must address its complexities—biases in datasets, environmental impacts, and ethical concerns—head-on. The stakes are high, but so are the rewards.
Source: blogs.worldbank.org