Four Ways Developing Countries Can Build Safe, Effective AI Policies

Risk-based approaches, inclusive policymaking, and strategic frameworks will allow developing countries in Asia and the Pacific to navigate the challenges of artificial intelligence while ensuring its benefits are shared equitably.

Rapid advancements in artificial intelligence (AI), fueled by ever-increasing computing power and data, are transforming industries worldwide and bringing significant benefits to individuals and organizations.

However, the widespread adoption of AI also introduces serious risks, including human rights violations, disinformation, job displacement, and worsening climate impacts.

Experts widely agree on key principles for defining desired AI governance outcomes: AI systems must be safe, secure, and robust; they should respect human rights and promote fairness and inclusion; and their use must be transparent, with decisions explainable and accountable.

The challenge now is how to implement these principles as AI capabilities rapidly evolve. Jurisdictions face three key questions as they attempt to translate international principles into domestic AI rules.

Should we govern AI with strict legal frameworks or rely on soft law mechanisms?

Binding legal frameworks provide clear and enforceable rules, enabling regulators to penalize non-compliant companies. However, AI’s fast-evolving nature means legislation can quickly become outdated.

This ‘pacing problem’ became evident during the development of the European Union AI Act, whose earlier drafts struggled to anticipate and address the distinct risks posed by general-purpose foundation models, such as those underpinning OpenAI’s ChatGPT.

Currently, only a few jurisdictions have legally binding AI policies. The majority still rely on “softer” governance mechanisms, such as voluntary codes of conduct, guidelines, and standards. These tools can be reviewed and adapted more swiftly than traditional regulations, making them well-suited for governing a dynamic and rapidly evolving technology like AI.

While non-binding policy tools currently dominate the international AI governance landscape, growing risks from rapidly advancing AI capabilities are pushing policymakers towards stricter regulations.

For instance, in the UK, the new Labour government recently announced plans to regulate the most powerful AI models, signaling a shift away from the country’s initial non-binding approach.

Should we govern AI in a sector or use-case specific way, or should we govern it holistically?

Another key challenge is determining the scope of governance frameworks. A sectoral approach focuses on developing rules and guidance bounded by specific sectors or industries.

The UK has adopted this approach in its AI Regulation White Paper, enabling regulators to develop guidance within their respective remits under centralized coordination.

In contrast, a horizontal approach applies uniform regulations across sectors and use cases. The EU AI Act and Canada’s AI and Data Act exemplify such frameworks, imposing restrictions on AI systems regardless of their sectoral applications.

Sector-specific regulation can target the challenges and risks unique to each industry, while a holistic approach ensures consistency and coherence across sectors, reducing complexity and minimizing the regulatory gaps and overlaps that disparate rules create.

Many governments opt for a mixed approach. For example, Singapore’s predominantly horizontal policies, such as the Model AI Governance Framework, are complemented by targeted sector-specific guidance, such as the Monetary Authority of Singapore’s Veritas Initiative, which offers best practices for financial sector companies in areas such as fairness assessment in credit scoring. 

How should we set proportionate measures for different applications of AI?

AI systems vary widely in risk, from low-risk applications such as travel route planning to high-risk ones such as loan approvals or police deployment. A key challenge is setting proportionate measures that avoid overregulating low-risk uses.

A common theme emerging in response to this challenge is a risk-based approach, where the degree of regulatory intervention corresponds to the level of risk an AI system poses: minimal obligations for low-risk AI and rigorous ones for high-risk applications.

This approach is becoming central to several legislative efforts, including the EU AI Act, Brazil’s new AI bill, and Canada’s AI and Data Act. While not all jurisdictions have adopted comprehensive risk-based legislation, many use risk and impact levels to guide targeted regulatory interventions, such as rules on facial recognition systems in several US states and on deepfakes in the People’s Republic of China.
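
To make proportionality concrete, here is a minimal sketch in Python, assuming hypothetical risk tiers modeled loosely on the EU AI Act’s four-level scheme; the tier names and the simplified obligations attached to them are illustrative assumptions, not a rendering of any statute.

```python
from enum import Enum


class RiskTier(Enum):
    """Hypothetical risk tiers, loosely inspired by the EU AI Act's four levels."""
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring: prohibited outright
    HIGH = "high"                  # e.g., credit scoring: strict obligations
    LIMITED = "limited"            # e.g., chatbots: transparency duties
    MINIMAL = "minimal"            # e.g., route planning: no extra obligations


# Illustrative mapping from tier to obligations; these duties are
# simplified examples for this sketch, not a statement of any law.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["conformity assessment", "human oversight", "audit logging"],
    RiskTier.LIMITED: ["disclose AI use to end users"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the obligations that apply to a system in the given tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {obligations_for(tier) or ['none']}")
```

The appeal of such a scheme is that accommodating a new AI use case means classifying it into an existing tier rather than drafting a new rulebook each time.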

Developing countries in Asia and the Pacific face unique challenges when formulating AI policies. The following actions can help them craft effective AI governance strategies:

Define a clear vision for AI. Developing countries need a long-term vision that addresses their specific needs, challenges, and priorities. A national AI strategy can provide a roadmap for the country, articulating the potential benefits of AI while also outlining measures to mitigate risks like job displacement and data privacy concerns.

Developing countries in the region would benefit from broad and inclusive consultations on the strategy involving government, industry, academia, and civil society to align diverse stakeholders around a shared vision.

Strengthen data governance. AI systems rely heavily on data, making data governance a cornerstone of any AI strategy. Countries must establish frameworks to manage data collection, storage, and usage with careful attention to privacy and security.

Open data initiatives, which provide AI developers with access to high-quality datasets, are crucial to driving AI innovation.

Developing countries can look to models like Singapore’s open data initiatives that balance innovation with security. By creating policies that support data sharing while safeguarding sensitive information through encryption and privacy standards, countries can build a trustworthy AI ecosystem.
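
As one illustration of how data sharing and safeguards can coexist, the hypothetical Python sketch below pseudonymizes direct identifiers with a keyed hash before a dataset is released; the field names and key handling are assumptions made for illustration, and a real open data program would layer legal and organizational safeguards on top.

```python
import hashlib
import hmac

# Hypothetical secret key held by the data custodian; in practice this
# would come from a secure key-management system, never source code.
SECRET_KEY = b"replace-with-managed-key"


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records can still
    be linked across releases without exposing the raw identifier.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]


def prepare_for_release(record: dict) -> dict:
    """Pseudonymize direct identifiers; keep analytic fields unchanged."""
    released = dict(record)
    for field in ("national_id", "phone"):  # assumed identifier fields
        if field in released:
            released[field] = pseudonymize(released[field])
    return released


if __name__ == "__main__":
    print(prepare_for_release(
        {"national_id": "S1234567A", "phone": "+6590000000", "loan_amount": 25000}
    ))
```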

Build the AI talent pipeline. A skilled workforce is essential for building and managing AI technologies. Developing countries should focus on reforming education systems to integrate AI and data science at all levels, from primary education to PhD programs.

Continuous upskilling opportunities are also necessary so that the existing workforce can keep pace with fast-evolving AI technologies, while scholarships and incentives for AI-related research help build a strong talent pipeline.

Adopt a multistakeholder approach. AI governance is not solely a technical challenge—it also involves ethical, legal, and social considerations. By involving diverse stakeholders, including ethicists, lawyers, human rights advocates, and citizens, developing countries can create more comprehensive and socially responsible AI policies.

Establishing platforms for regular dialogue between stakeholders will ensure that diverse perspectives are considered. This approach can lead to equitable AI development, where benefits are shared across society, and the potential harms of AI are mitigated.

As global AI governance evolves, international alignment and cooperation will be essential to prevent regulatory fragmentation and ensure that AI’s benefits are equitably shared.

Source: blogs.adb.org
