Frequently asked questions
SUMMIT 333 is invite-only because the conversations require trust, expertise, and commitment.
We need people who:
Bring genuine expertise from their discipline
Can engage honestly even when it's uncomfortable
Understand the stakes without succumbing to panic
Can bridge between technical precision and philosophical depth
Represent diverse cultural perspectives on human values
Are willing to change their minds based on evidence and argument
If you believe you can contribute meaningfully to answering whether a universal AI safety framework is possible, or remains an open challenge, we want to hear from you.
"We are asking whether humanity can achieve global consensus on existential risk governance before developing technologies that could end or radically transform human civilization. The answer to this question is not yet determined."
KEY STATISTICS:
< 5 YEARS TO AGI (median expert estimate)
0 GLOBAL AI SAFETY FRAMEWORKS currently implemented
200+ SESSIONS over three days exploring every angle
1 CORE QUESTION: Can we find consensus, or must we acknowledge the challenge?
The gap between capability and governance is widening. SUMMIT 333 exists to close it, or to honestly acknowledge that it cannot be closed and chart alternative strategies.
Can humanity achieve consensus on AI safety when we barely agree on human values?
We debate abortion, euthanasia, and capital punishment, revealing fundamental disagreements about the value of life
We haven't solved wealth inequality, yet AI could render entire economic classes obsolete
We can't agree on privacy vs. security, yet AI amplifies this dilemma exponentially
Different cultures view consciousness, dignity, and autonomy fundamentally differently
Yet we have achieved some universal agreements:
Universal Declaration of Human Rights (however imperfectly implemented)
Geneva Conventions on warfare
International maritime and aviation standards
Scientific method as shared framework for truth
The question is: Can we do it again, faster, with higher stakes, for AI?
SUMMIT 333 exists to find out.
Tangible outcomes from three days of intensive collaboration:
Comprehensive Framework Understanding
Not just theoretical models, but practical knowledge from sessions like:
"Global Authority vs Digital Sovereignty: Who Should Govern AI?"
"Sustainable Robotics Lifecycle" (policies on reuse, recycling)
"5G, AI, and the Urban Brain: The City as Compute Grid"
You'll understand what's actually feasible versus purely aspirational.
High-Trust Network
Direct relationships forged through:
Intimate gala setting (200-250 guests only)
Small roundtable discussions on critical topics
Meditation sessions creating contemplative community
Shared meals and informal networking throughout campus
When rapid coordination is needed, you'll have phone numbers and shared context.
Clarity on the Core Question
Three days focused on one question: Can we establish AI values that match human values?
You'll leave with either:
Confidence that a universal framework is achievable (with clear next steps), OR
Clear-eyed understanding of why it remains elusive (with alternative strategies)
Either outcome is valuable. Certainty about the path forward matters more than false consensus.
Mental Models for Crisis
Practice from simulation exercises:
"A Day in the Life, 2050" scenarios
Existential risk debates with opposing positions
"Road to Singularity": brain-computer interface (BCI) scenarios that read from and write to the human brain
Intellectual muscle memory for handling the unthinkable.
Historical Moment
Future historians will study this period intensely. The decisions made at gatherings like SUMMIT 333 will be analyzed for centuries.
Why this moment matters:
First serious attempt to create a universal AI safety framework before AGI arrives
Median expert estimates: less than 5 years to AGI
Currently: Zero global AI safety frameworks implemented
This is the last generation that will make these decisions as fully biological humans
The Day 3 closing debate, broadcast live globally, could become the watershed moment when humanity chose coordination over competition.
→ Legacy-defining participation
Is a Universal AI Safety Framework a Possibility or a Challenge?
Over three days in Abu Dhabi, we will confront this question through rigorous exploration:
Can we identify AI values that align with human values when humanity itself still debates what those values are?
If humans have managed to establish shared principles across diverse cultures, can we replicate this consensus for AI governance?
What happens when Eastern and Western philosophical traditions meet Silicon Valley technical realities?
Can we build shared mental models before the technology outpaces our ability to govern it?
This is not theoretical philosophy. This is practical preparation for our immediate future.