Afternoon Session - Driving Global Alignment

From Evidence to Decisions: Why Dubai Matters Now
Executive Think Tank – Afternoon Session (Part 1) | Health Data Forum
The afternoon session opened with a simple acknowledgement of what the Think Tank itself represented: a deliberately cross-regional conversation, unfolding simultaneously across time zones — "morning for North America," "sunset for India," and late afternoon in the Gulf. The format was already a message: if health data and advanced therapies are to scale responsibly, the dialogue cannot remain trapped in one geography, one regulatory system, or one institutional worldview.
This first segment of the afternoon was built like a three-part "stack":
- A strategic re-anchoring of the problem (George Mathew): why evidence is abundant yet decisions remain hard.
- A technology maturity lens (Rajendra Gupta): five functional layers of AI and what they displace, enable, and reward.
- A trust and compliance lens (Miguel Amador): why "compliance" is not paperwork but a cross-border trust token — and why the post-quantum era forces a new level of seriousness.
Then came a short fishbowl exchange where those lenses met reality: consent, cross-border exchange, diagnostic error, pharmacovigilance, underwriting, unstructured legacy data, and interoperability at scale.
What follows is not a transcript replay. It is the architecture of the conversation — the logic, the tensions, and the practical "north stars" that emerged before the group split into co-creation rooms.
Dr George Mathew: We Don't Have an Evidence Problem. We Have a Translation Problem.
George Mathew's opening remarks were deceptively simple: we have more evidence than ever, but decision-making has never been more complex. If the last decade has taught health systems anything, it is that more data does not automatically create better outcomes. We generate massive clinical datasets, real-world data, predictive models, and precision insights — yet costs rise, inequities persist, and adoption of validated innovations remains slow.
In George's framing, the constraint is not discovery. It is translation:
- how evidence becomes policy,
- how it becomes workflow,
- how it becomes finance,
- how it becomes everyday clinical decisions.
This is where cross-regional learning becomes "mission-critical." The old model — innovation flowing predictably from one region to another — no longer holds. Today, "future leadership" is distributed:
- Asia pushing AI-enabled diagnostics,
- the Middle East accelerating national digital health infrastructure,
- Europe pioneering trusted governance and privacy regulation,
- the US innovating biotech, payment models, and commercialization.
No single region owns the future. The future belongs to ecosystems that learn from each other intentionally — and Dubai, he argued, is positioned to become not merely a consumer of innovation, but a translator, validator, and exporter of trusted models.
George then delivered three "must-change" imperatives:
a) Evidence must travel faster than disease
Publishing is not operationalizing. He called for implementation science, real-world evidence networks, and continuous learning health systems — so that lessons propagate in weeks or months, not decades.
b) Data sovereignty must coexist with data collaboration
Countries want control, privacy, and independence; innovation requires scale, diversity, and shared validation environments. The answer is not unrestricted sharing. The answer is privacy-preserving collaboration: federated learning, secure multi-party computation, trusted governance networks — mechanisms that protect sovereignty while enabling cross-border learning.
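The privacy-preserving mechanisms named here can be made concrete. Below is a minimal sketch of federated averaging, the core idea behind federated learning, in plain Python with no ML framework: each site fits a model on its own data and shares only model parameters, never patient records. The sites, data, and learning rate are all invented for illustration; real deployments add secure aggregation, differential privacy, and authentication on top.

```python
# Minimal federated-averaging sketch: sites exchange model weights, never raw data.
# Illustrative only; hardened systems add secure aggregation, DP noise, auth, etc.

def local_gradient(weights, data):
    """One mean-squared-error gradient for y ~ w0 + w1*x, computed on-site."""
    w0, w1 = weights
    g0 = g1 = 0.0
    for x, y in data:
        err = (w0 + w1 * x) - y
        g0 += err
        g1 += err * x
    n = len(data)
    return (g0 / n, g1 / n)

def federated_round(weights, sites, lr=0.1):
    """Each site takes a local gradient step; only updated weights leave the site."""
    updates = []
    for data in sites:
        g0, g1 = local_gradient(weights, data)
        updates.append((weights[0] - lr * g0, weights[1] - lr * g1))
    # Central aggregator averages the site models (FedAvg-style).
    w0 = sum(u[0] for u in updates) / len(updates)
    w1 = sum(u[1] for u in updates) / len(updates)
    return (w0, w1)

# Two hypothetical hospitals holding (x, y) observations of the same relation y = 2x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0), (4.0, 8.0)]

weights = (0.0, 0.0)
for _ in range(500):
    weights = federated_round(weights, [site_a, site_b])
# weights now approximates (0, 2): the sites learned jointly without pooling data.
```

The design point mirrors the argument above: sovereignty is preserved because raw records never cross a border, while scale is achieved because the aggregated model reflects every site's population.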
c) Design globally from day one
Health innovations are too often built for one reimbursement model, one cultural context, one regulatory regime — and then exported. That fails. Instead, global scalability must become an architectural principle, not an afterthought: culturally adaptable, economically flexible, portable across regulations, and linguistically inclusive.
His closing line sharpened the core theme:
"The future doesn't belong to the most powerful AI. It belongs to the most trusted AI."
Trust becomes the currency of health-data exchange — and leadership will be defined not by who hoards data, but by who collaborates intelligently, protects privacy, and translates evidence into action quickly.
This opening was more than inspiration: it established the Think Tank's intended "product." Not a set of abstract opinions, but transferable patterns — trust architectures, governance approaches, and evidence-to-decision pipelines.
Rajendra Gupta: Five Levels of AI — and Why "AI" Is Not One Thing
Professor Rajendra Pratap Gupta stepped in immediately after Dr George Mathew and did something crucial: he broke "AI" into functional layers. His argument was that most strategy discussions collapse because leaders treat AI as a single object — when, in reality, it is a stack of capabilities with different maturity levels, displacement dynamics, and beneficiaries.
He framed his approach as an original model under publication and asked for discretion in sharing, but still used the session to share the essence of the framework: a five-level AI functional pyramid, with names deliberately chosen to be memorable:
- Talkers — Conversational AI
- Thinkers — Reasoning AI
- Doers — Autonomous AI
- Innovators — AI that innovates
- Leaders — Organizational AI
Each layer, he argued, is not a philosophical category — it implies:
- what is being displaced (tasks, roles, workflows),
- who benefits (cost, outcomes, efficiency, new pathways),
- and how quickly the shift arrives.
Level 1: Talkers (Conversational AI)
This layer replaces 24/7 helplines and routine back-office/front-office work — including some clinical assistance. The immediate beneficiaries: communication and cost. Rajendra's point: we already have this layer.
Level 2: Thinkers (Reasoning AI)
AI reasoning "like a trained expert," supporting clinical decision-making. Beneficiaries: diagnostic accuracy and outcomes. Again, his point was that this is already maturing in practice.
Level 3: Doers (Autonomous AI)
This is where the "golden partnership" between human and machine begins — and where fence-sitting becomes dangerous. Rajendra argued that in primary care and some specialities, autonomous systems can already perform substantial portions of work. Beneficiaries: system efficiency and efficacy — not just incremental gains.
Level 4: Innovators (AI that innovates)
This was the layer he was most excited about, because it changes the very nature of medicine. He framed it as the end of purely symptomatic treatment and the shift toward "8 billion people treated in 8 billion ways." His diabetes example was meant to illustrate a future where genetic testing and personalized pathways become default logic — not exceptions.
He linked this layer explicitly to quantum computing: genomic data at massive scale becomes computationally tractable, and clinical pathways, drugs, devices — even disease definitions — can be reshaped.
Level 5: Leaders (Organizational AI)
Here, AI becomes capable of running organizations. Rajendra suggested this will transform the healthcare system structure, pushing care closer to homes, leaving hospitals focused on surgery, and shifting from "healthcare to health." He painted this as intimidating to some clinicians — but also historically inevitable, because technology does not ask permission.
His closing message landed as a cultural challenge rather than a technology pitch:
- The 19th-century industrial revolution replaced physical labour.
- The 21st-century revolution replaces mental labour.
- You are not competing with colleagues. You are competing with technology.
And then the positive inversion:
"This is not the age of competitors. This is the age of creators."
In the session's overall architecture, Rajendra provided the maturity map: if the Think Tank wants to produce actionable outputs on evidence translation, governance, and ATMP adoption, it must distinguish what kind of AI is being discussed — because the required safeguards, validation patterns, consent models, and regulatory pathways change by level.
Miguel Amador: Compliance Is a Token of Trust — and the Post-Quantum Era Forces a New Deal
Miguel Amador then anchored the conversation in the operational reality of trust.
He introduced himself as Complier's innovation leader and a "regulatory geek" with an engineer's mindset — and his core proposition was blunt:
In healthcare, regulation exists because, without it, each patient would need to personally verify every product, every pill, every software system. That is impossible. Regulation is society's mechanism for scalable trust.
He reframed compliance as not "paperwork," but a universal language that allows trust to travel across borders.
Two kinds of safety are always in play
Miguel highlighted that in healthcare data, "safety" is not only clinical safety; it is also rights safety — privacy, dignity, and protection from harm caused by misleading systems.
He warned of "software harm" that can be fatal not by physical injury, but by misleading people into dangerous behaviour. The line between "consumer choice" and "medical claim" is therefore central — and compliance is the system that defines and enforces that line.
Why compliance scales trust across regions
He contrasted regulatory approaches (FDA versus Europe's stricter posture), but the key insight was strategic: companies often use the strictest recognised pathway to signal global trustworthiness. In cross-border partnerships — Europe ↔ GCC ↔ US — compliance becomes the "passport."
Data sovereignty is not only technical — it is geopolitical
Miguel linked data residency and governance to geopolitics: treaties get challenged; jurisdictions claim rights to access data; and trust is not merely about encryption — it is about who can compel access, under what legal force.
From static compliance to dynamic vigilance
He argued the world is moving from one-time certification toward continuous post-market vigilance. And that shift increases the need for data, because surveillance and adaptation become part of safety.
The elephant in the room: "harvest now, decrypt later"
Miguel's post-quantum argument landed as the session's most urgent warning: data might be secure today, yet already being collected by adversaries for future decryption. He compared it to climate change in the digital world — visible, approaching, yet not taken seriously enough.
He introduced the idea of a "Quantum Geneva Convention" — not necessarily as a fully formed proposal, but as a provocation: we need international norms for how data and cryptographic transitions are handled so that power asymmetries don't become structural domination.
The key takeaway: trust is now time-dependent.
Consent, security, and governance must begin to account for the fact that "secure today" may mean "exposed later."
Miguel's intervention transformed the session from "AI transformation" to trust architecture under adversarial futures — and made a direct bridge to the Think Tank's theme: advanced therapies and real-world evidence cannot scale globally without trust that lasts.
The Fishbowl: Consent 2.0, Diagnostic Error, Pharmacovigilance, Underwriting, and the Data We Already Have
With the three lenses now on the table — translation, maturity, compliance — the session shifted into questions and reflections. This fishbowl mattered because it surfaced real constraints and opportunities.
Osama Elhassan: Consent 2.0 and patient-mediated exchange
Dr Osama Elhassan raised two pragmatic questions:
- What does "consent management 2.0" look like in distributed environments?
- Isn't patient-mediated cross-border exchange the fastest path — focusing on direct patient benefit (e-prescriptions, care during travel/pilgrimage), rather than classic institutional data sharing?
Miguel's response leaned toward a societal framing: consent becomes harder when long-term security cannot be guaranteed. The aim is not perfect safety before deployment, but deploy with guardrails and monitoring — and stop harm fast when discovered.
Eric Sutherland: Governments are sleepwalking into quantum — but there's low-hanging fruit now
Eric strongly agreed on the quantum urgency, citing practical progress he has seen firsthand. Then he pivoted to a high-yield economic argument:
If health systems want funding to build future-ready infrastructure, one immediate opportunity is to reduce diagnostic errors. He cited a figure of ~15% diagnostic error and a large share of health system costs tied to it (as presented in his referenced work during the session). His logic was direct:
Diagnostic error often happens because the right information is not available at the right time and place.
So the quickest ROI comes from better data foundations and interoperability — exactly the "Data First" logic.
He also added a behavioural reality: when you talk about errors, clinicians may revolt, but wasteful re-testing already happens because existing results are inaccessible. Fixing this is both politically delicate and financially powerful.
Dr Sara Rogers: AI can streamline pharmacovigilance, but the data question doesn't disappear
Sara brought a grounded caution: pharmacovigilance already has unresolved challenges even without AI. AI could reduce manual work, but the fundamental question remains: what data goes in, and how is it used to make decisions? In a safety-critical domain, "automation" is not automatically "better."
Keith: Underwriting becomes more surgical — uncertainty becomes measurable
From a coverage/outcomes perspective, Keith Kennerly noted that AI enables a more precise underwriting view. Where banks and payers fear the unknown, better data can turn uncertainty into empirical patterns — potentially enabling more predictable access and coverage decisions.
This was an important bridge to the Think Tank's ATMP focus: advanced therapies create new risks and payment questions, and evidence must credibly connect to access mechanisms.
Hemangi: What about unstructured legacy data? (paper and pen reality)
Hemangi asked the most globally relevant question: what do we do with the vast amount of older, unstructured data — especially in settings where paper and pen are still dominant?
Eric's reply was a clean framing:
- We can't only serve the people "born today."
- We must optimise the past while maximising the future.
In other words: mine and structure what exists, while designing forward data collection to be usable by default.
Alexander David: OMOP/OHDSI as a cross-regional interoperability engine
Alexander introduced an OHDSI/OMOP perspective: the common data model and community expansion across regions. This was a subtle but critical point: cross-regional learning needs not only trust, but also shared representation of data. Without common models, "evidence travel" is slowed by translation costs.
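Alexander's point about shared representation is easy to illustrate. The sketch below maps a hypothetical site-local record into OMOP-CDM-style rows; the table and column names follow the spirit of the OMOP Common Data Model, but the source record and the vocabulary mapping are invented for illustration (real ETLs use the OHDSI standardized vocabularies and stable surrogate keys):

```python
# Illustrative mapping of a site-local record into OMOP-CDM-style rows.
# The source record and concept mapping are invented; real pipelines map
# local codes to OHDSI standardized vocabulary concept IDs.

local_record = {            # hypothetical export from a site-specific system
    "patient": "P-001",
    "sex": "F",
    "birth_year": 1980,
    "dx_text": "type 2 diabetes",
    "dx_date": "2024-03-01",
}

# Site-maintained map: local diagnosis text -> a standard concept ID (illustrative).
CONCEPT_MAP = {"type 2 diabetes": 201826}

def to_omop(rec):
    """Return OMOP-style PERSON and CONDITION_OCCURRENCE rows for one record."""
    person = {
        "person_id": int(rec["patient"].split("-")[1]),  # real ETLs assign stable keys
        "gender_concept_id": 8532 if rec["sex"] == "F" else 8507,  # illustrative IDs
        "year_of_birth": rec["birth_year"],
    }
    condition = {
        "person_id": person["person_id"],
        "condition_concept_id": CONCEPT_MAP[rec["dx_text"]],
        "condition_start_date": rec["dx_date"],
        "condition_source_value": rec["dx_text"],  # original text kept for audit
    }
    return person, condition

person, condition = to_omop(local_record)
```

Once every site expresses its data in the same shape, an analysis written against the common model runs anywhere — which is exactly why a shared representation lowers the "translation cost" of evidence travel.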
Luís Almeida: ATMP readiness and gene therapy centres
Luís added the practical ATMP lens: implementing gene therapy centres and building capability around advanced therapies — precisely the kind of domain where evidence, governance, and trust must lock together.
The Pivot: Two Breakout Rooms, One Shared Challenge
Paul then moved the group into co-creation mode, announcing a consolidation from four rooms into two — to increase critical mass and depth:
- Room A: foundations of trust, data integrity, trusted research environments
- Room B: from evidence to access (including payment/coverage perspectives)
This was the perfect structural mirror of the afternoon's first part:
- Trust architecture on one side,
- Access architecture on the other,
both grounded in the same ATMP/RWE challenge.
The agenda recap at the end of the broadcast (with time-zone mapping) reinforced a core design principle: if cross-regional learning is mission-critical, the programme itself must be time-zone literate and globally legible.
What This First Segment Achieved
Before any co-creation began, the session had already produced a coherent frame:
- Evidence translation is the bottleneck, not evidence generation.
- AI maturity must be specified because each layer creates different levels of displacement, risk, and value.
- Compliance is the operational language of trust — and trust must survive adversarial futures.
- Post-quantum thinking is no longer optional in health data strategy.
- The fastest near-term wins may come from fixing data foundations that reduce waste and diagnostic error — creating "investable savings" for the future.
- For cross-border exchange, patient-mediated use cases can be a pragmatic early pathway — but only if consent, surveillance, and accountability evolve.
This is precisely the "Data First, AI Later" logic in live action: the Think Tank resisted the temptation to jump straight into AI promises and instead mapped the conditions under which AI can be trusted, adopted, and paid for.
In the next part of the afternoon (the breakouts), the real test becomes whether those conditions can be translated into operational artefacts: governance patterns, consent approaches, trust mechanisms, and evidence-to-access pathways that are portable across regions.
Conclusions
The afternoon did not begin with technology hype. It began with the uncomfortable truth that we are drowning in evidence while starving for decisions. It then forced a second truth: "AI" is not one thing — it is a layered system of capabilities that will change workflows, economics, and trust demands at different speeds. And finally, it placed a hard boundary around optimism: trust will not scale by good intentions, and security will not hold by default. If Dubai is to become a translator and exporter of health innovation — not just a consumer — it will be because it learns faster, governs better, and treats compliance as what it truly is: a global token of trust.
