Co-founded project
AI safety research and fieldbuilding organisation co-founded with Jayat Joshi in September 2025, built to advance safe and trusted AI across the APAC region.
At a glance
Why it exists
There's a structural gap in the GCR ecosystem: no well-resourced, India-based AI safety research and fieldbuilding organisation is doing serious, peer-reviewed technical and governance work in the region.
SAFL is the start of closing that gap. It grew out of the four-pathway proposal I wrote for EAIF in 2022, specifically the long-horizon "in-house interdisciplinary research lab" pathway, and is now the institutional vehicle through which we run the India and Global South AI safety programmes.
What SAFL does
Building institutional and governance capacity for advanced AI in India and the wider APAC region.
Cultivating expert talent in AI safety through fellowships, mentoring, and targeted independent assignments.
Advancing trustworthy AI research in collaboration with academic and industry partners.
Convening diverse stakeholders to develop consensus on safe and ethical AI deployment.
Recent work
Summit
Hosted SAFL's Hardware-Rooted Sovereignty Workshop, featuring Prof. Stuart Russell and Eileen Donahoe as guests and a live demo of Lucid Computing's hardware-rooted sovereignty stack. I appeared on stage with India's IT Secretary, S Krishnan.
Panel
Convened leaders from the NUS AI Institute, IIT Madras, the UNSW AI Institute, and UC Berkeley around the Safe & Trusted AI pillar.
Research
Established research partnerships with the top five IITs and IISc to accelerate work in AI for Science, AI for Social Good, and Trustworthy AI. The pipeline also feeds Measuremint's expert network.
Intelligence
Intelligence coverage of 50+ stakeholders across government, academia, and industry, used as a reference by international organisations evaluating India as a talent source and research partner.
Partners
Read more
For the full story, visit the SAFL site or read the short intro deck. To talk about partnerships, fellowships, or convenings, get in touch.