Challenge details

Overview

Generative AI is transforming how information is created, shared, and consumed. While these systems enhance the speed and diversity of information flow, they also pose risks of misuse, such as disinformation, deepfakes, and cyberattacks. The Global Challenge to Build Trust in the Age of Generative AI aims to tackle these issues by developing policies and technologies to ensure the public can trust the information they consume online. This initiative seeks to build a trustworthy, transparent, and resilient information ecosystem that supports democratic values and societal well-being.

The challenge is organized by a global coalition, including IEEE SA, OECD-GPAI, AI Commons, UNESCO, VDE, PARIS21, the World Bank, and IDB. It focuses on promoting global collaboration to combat the threats generative AI poses to online information integrity and on ensuring that online content is verifiable, accurate, and trustworthy.

The Challenge's Mission

The Global Trust Challenge seeks interdisciplinary solutions that integrate both technology and policy to address the challenges posed by generative AI. Participants are invited to put forward novel, forward-thinking approaches that pair technology development with complementary policies. These solutions should ensure the verification and trustworthiness of AI-generated content, support trustworthy AI deployment, and enhance the resilience of information ecosystems.

Key goals include:

  1. Enhancing Trust: Ensuring that AI-generated content is reliable and verifiable.
  2. Protecting Users: Promoting media literacy and providing tools to identify AI-generated content.
  3. Supporting Governance: Encouraging policy mechanisms for transparency and content flagging.

Unique Features of the Challenge

Unlike traditional competitions, this challenge integrates technology solutions with policy approaches, recognizing that technology alone cannot address the full spectrum of issues. It encourages teams to develop solutions with a focus on long-term viability, adaptability to diverse contexts, and a “do-no-harm” approach, aiming to build trust not just in technology but also in the systems and processes surrounding it. The challenge emphasizes cross-sector collaboration, with teams from different fields working together to create scalable solutions that manage and mitigate the risks of AI misuse.

What We Are Looking For

Submissions should incorporate:

  • Policy Approaches that support the integrity of information in the age of generative AI.
  • Technological Solutions that align with proposed policies, such as mechanisms for transparency, feedback loops, and content verification.
  • Testing and Validation Plans to pilot solutions in real-world settings and demonstrate scalability.

The challenge encourages innovative and forward-thinking solutions, fostering creativity and interdisciplinary collaboration. Teams are expected to offer practical, scalable ideas that can shape the future of digital information integrity.

Key Phases of the Challenge

The Global Challenge is structured into three phases:

  1. Phase 1 – Proposal Submission: Teams propose integrated models combining new policies and technologies, outlining implementation plans, stakeholders, resources, and expected outcomes.
  2. Phase 2 – Prototype Development: Teams design and test prototypes based on their policy and technological solutions. Prototypes are evaluated in real-world settings.
  3. Phase 3 – Pilot and Scale: Successful prototypes are piloted in collaboration with institutional partners. Teams develop strategies for scaling their solutions to maximize impact.

Proposal Submission Requirements

Proposals should cover:

  • Policy Approach Formulation: Innovative policy ideas that support technology solutions to ensure trust in AI-generated content.
  • Technological Solutions: Technologies that support policy goals, including features for transparency, security, and accountability.
  • Testing and Validation Plan: A roadmap for implementing, testing, and scaling the solutions, including stakeholder roles, resource requirements, and evaluation metrics.

Evaluation Criteria

Submissions will be evaluated at each of the three phases:

  1. Phase 1 – Proposal Evaluation: Focus on relevance, feasibility, innovation, and risk management.
  2. Phase 2 – Prototype Evaluation: Usability, scalability, technical innovation, and ethical compliance.
  3. Phase 3 – Pilot and Scale Evaluation: Pilot execution, resilience against threats, and long-term viability.

Judging will prioritize cross-sector collaboration, diversity of expertise, and ethical and safety considerations.

How to Participate

Teams can register for the challenge by completing the registration form on the Global Challenge website. The challenge encourages multi-disciplinary teams and may provide guidance to teams lacking certain expertise by connecting them with experts in policy or technology.

Eligibility: Open to anyone with a policy or technological solution to address the challenges posed by generative AI. Ideally, teams will bring together experts from diverse fields, including digital technology and public policy.

Terms & Conditions: Teams must comply with confidentiality, intellectual property, and conflict-of-interest guidelines. They will retain ownership of intellectual property but are encouraged to share knowledge and methodologies to foster innovation.