What the background means
The network is the core data structure of our talent graph: researchers, labs, and affiliations as nodes.
Graph-theoretic algorithms (centrality, shortest paths, embeddings) traverse it to surface the right candidates.
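As a rough illustration of that traversal, here is a minimal sketch using networkx; the node names and edge list are hypothetical, not the production graph, and the real system layers embeddings on top of this.

```python
import networkx as nx

# Toy talent graph: researchers, labs, and affiliations as nodes (names are hypothetical).
G = nx.Graph()
G.add_edges_from([
    ("researcher_a", "lab_x"),
    ("researcher_b", "lab_x"),
    ("researcher_b", "lab_y"),
    ("researcher_c", "lab_y"),
    ("researcher_c", "university_z"),
    ("researcher_a", "university_z"),
])

# Centrality surfaces well-connected candidates.
centrality = nx.betweenness_centrality(G)
top_candidates = sorted(centrality, key=centrality.get, reverse=True)

# Shortest paths measure how close a candidate sits to a target lab.
path = nx.shortest_path(G, source="researcher_a", target="lab_y")

print(top_candidates[:3])
print(path)
```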
Transitioning from my current role as Co-CEO at SteadRise (formerly Impact Academy) to focus full-time on GCR fieldbuilding with SAFL, which I co-founded.
Based in New Delhi and Singapore. Recent collaborators include UK AISI, FAR.AI, Apollo Research, GovAI, and Schmidt Sciences.
What I work on
Grant investigation, advisory, and the theory-of-change work behind AISCF, GAISF, and SAFL.
AI safety fieldbuilding, including AISCF, GAISF, AI governance convenings, and advisory across LMICs.
Talent infrastructure at SteadRise: 50K-profile ML talent map, 4,200-candidate funnel, 13 partner labs.
SAFL's India AI Tracker — 50+ stakeholders, used by international orgs as a reference.
Co-founded Secure AI Futures Lab, an org for safe and trusted AI in India + APAC.
Measuremint, Singapore AI Safety Hub, Budhimaan Baccha → RLHF, 80K Hours advising, and more.
About me
Mostly a curious person who wandered into AI safety work and stayed because the problems kept getting more interesting. The day-to-day moves between grantmaking advisory, fieldbuilding programs, talent infrastructure, governance convenings, and the new institutions the space still needs: map the gap, build the thing, calibrate the bar, ship the work, run the ops end-to-end.
When I'm not buried in a BOTEC, drafting a grant brief, or tuning an LLM for the day's task, I'm probably lurking on Hacker News, poking at a side project that won't ship, or out on a walk with my dogs Wolfie and co — I document their adventures over at @thebarkitects_.
Founded and scaled India's first cohort-based AI safety fellowships — AISCF at IIT-Delhi (900+ EOIs, 22 completers, 5 PhD applicants, 1 SERI MATS facilitator) and the earlier Alignment Research Fellowship (600+ applicants across 40 STEM universities, 24 selected). Currently taking AISCF to IIT-Bombay and onward to the rest of the top 5 IITs and IISc.
Investigations, advisory memos, and theory-of-change work that have shaped funder decisions on India AI safety. The four-pathway EA Infra Fund proposal became the strategic foundation for AISCF and SAFL. Ongoing advisory work with international funders evaluating LMIC AI safety opportunities.
Convening government, academia, and industry leaders through flagship summits, panels, and capacity-building programs in India and abroad. Most recently: SAFL × India AI Impact Summit 2026 with a Hardware-Rooted Sovereignty Workshop alongside Prof. Stuart Russell and India's IT Secretary S Krishnan — see events.
Co-founded SAFL with ~$400K via Schmidt Sciences and the AI Safety Tactical Opportunities Fund, plus additional support from the Future of Life Institute. India + APAC focus on safe and trusted AI. Operating the India AI Tracker (50+ stakeholders), partnering with IITs and IISc on AI for Science, Social Good, and Trustworthy AI, and convening cross-sector stakeholders around technical AI safety.
At SteadRise: a 50K-profile ML research talent map, a 4,200-candidate funnel across 13 partner labs (FAR.AI, UK AISI, Apollo Research, and others), and the evaluation infrastructure (LLM-graded binary-signal scoring, JD fingerprinting across 10 skill dimensions) that scales candidate triage. Talent work as the operational arm of the fieldbuilding mission.
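To make the triage mechanics concrete, here is a minimal sketch of binary-signal scoring against a JD fingerprint. The skill-dimension names, the helper functions, and the keyword check standing in for the LLM grader are all hypothetical; the production pipeline calls an actual LLM and fingerprints across 10 dimensions.

```python
from dataclasses import dataclass

# Hypothetical skill dimensions; the real JD fingerprint spans 10 of these.
SKILL_DIMENSIONS = ["ml_research", "interpretability", "evals", "rl", "engineering"]

@dataclass
class Candidate:
    name: str
    profile_text: str

def llm_binary_signal(profile_text: str, dimension: str) -> bool:
    """Stand-in for an LLM grader answering a yes/no question,
    e.g. 'Does this profile show evidence of {dimension}?'.
    A real implementation would make an LLM API call here."""
    return dimension in profile_text  # keyword heuristic, for the sketch only

def fingerprint_jd(jd_text: str) -> set[str]:
    """Reduce a job description to the skill dimensions it asks for."""
    return {dim for dim in SKILL_DIMENSIONS if dim in jd_text}

def triage_score(candidate: Candidate, jd_fingerprint: set[str]) -> float:
    """Fraction of required dimensions where the binary signal fires."""
    if not jd_fingerprint:
        return 0.0
    hits = sum(llm_binary_signal(candidate.profile_text, dim) for dim in jd_fingerprint)
    return hits / len(jd_fingerprint)

jd = fingerprint_jd("Seeking evals and interpretability experience, strong engineering")
print(triage_score(Candidate("A", "interpretability papers, evals work"), jd))
```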
Measuremint — a voice-first AI career agent (ElevenLabs + Claude) testing whether AI-native recruiting can clear high-volume Indian markets of 2K–20K applicants per role. Earlier: Budhimaan Baccha digital literacy → RLHF pivot, Singapore AI Safety Hub exploration, and 80,000 Hours advising.
Upcoming
Jul 6–11, 2026
International Conference on Machine Learning
Seoul, South Korea
Aug 15–21, 2026
International Joint Conference on Artificial Intelligence
Bremen, Germany
Dec 6–12, 2026
Conference on Neural Information Processing Systems
Sydney, Australia

Some of my Collaborators