Programs · Advisory · Convening · since Sept 2022

Field Building

I've been doing AI safety fieldbuilding in India full-time since September 2022 — first as founder of the India AI Safety Initiative (later acquired by SteadRise, formerly Impact Academy), and more recently as co-founder of Secure AI Futures Lab.

One of the most useful things I've learned in that time is what seems not to transfer from the US/UK template to the Indian context — and what does.

What I've learned about Indian AI safety fieldbuilding

In my experience, the default playbook under-delivers in India

From an 8-month x-risk research-interest survey across India's top CS universities (3,000+ miles, 12 cities, 80+ campuses mapped) and from running the IIT Delhi AISCF pilot the following year, four things stand out to me:

01

Career legibility seems to beat philosophical framing

Indian STEM students engage when a program has visible next steps — capstone projects, internships, placement pathways. In my experience, pure intro fellowships without an on-ramp lose people fast.

02

Career risk aversion has been the dominant friction I've encountered

From a 2023 observation: “Indian students at top institutes are too focused on career long-term security … top students simply don't want to be involved [in extracurriculars], at least the very top ones, and want to focus only on academic and job placement.”

03

ML talent doesn't seem to be the bottleneck

“By their 4th year, most students have done at least an ML course, with many doing multiple courses and some research projects.” My read: at the margin, adding more ML bootcamps is closer to capabilities work than to safety work.

04

TIAs tend to outperform reading groups for surfacing top signal

Short, paid, low-friction technical tasks (Targeted Independent Assignments) let strong candidates self-identify in ways an 8-week curriculum doesn't, and this seems especially true for non-EA-connected technical talent.

Programs I've built

Five programs, end-to-end

Each program is paired with the funnel it actually produced, the counterfactual outcomes that followed, and a link where one exists.

India Alignment Research Fellowship

2022–2023 · India AI Safety Initiative

India's first Alignment Research Fellowship

Founded under the India AI Safety Initiative on a $65K EAIF grant. The fellowship attracted 600+ applicants across 40 STEM universities, of whom 24 (the top 4%) were selected. Ten papers from that cohort are now published or under review.

The applicant pool itself was, in my view, the most informative output — a near-complete map of where serious AI safety interest sits in Indian academic institutions, and a useful indication of where the next wave of researchers is likely to come from.

AISCF IIT Delhi

Aug–Nov 2023 · IIT Delhi

AI Safety Careers Fellowship (AISCF)

India's first cohort-based, in-person AI safety careers fellowship. Adapted BlueDot Impact's AGISF curriculum to the Indian institutional context and layered career-development work — capstone projects, placement support, mentor matching — on top of the technical reading. Advised by FAR.AI and SERI MATS; partnered with BlueDot Impact.

900+
EOIs
175+
Full applications
22
Completers (≥75%)

Counterfactual outcomes (within 5 months of fellowship end)

  • 20 of 22 completers committed to a significant alignment research project or study (≥100 hours / ≥3 months).
  • 6 of 14 AISCF participants who applied to GCP Oxford Dec 2023 were accepted.
  • 1 facilitator was selected for SERI MATS, Berkeley.
  • 5 applied to alignment-focused PhDs at top US universities or short-term roles at international labs.
  • Multiple participants declined private-sector tech offers — including a confirmed ~$400K quant-firm offer — to apply to alignment-focused PhDs and research roles instead.

AISCF public retrospective

Global AI Safety Fellowship

SteadRise · Global

GAISF — Global AI Safety Fellowship

A TIA-first, mentorship-led model that surfaces technical and governance talent from regions traditionally under-served by AGISF-style fellowships.

4,200
Warm candidates
13
Partner labs
26 / 12
Offers / Accepted

Partner labs include Anthropic, UK AISI, FAR.AI, and others. Offers closed across 6 partners (including FAR.AI, UK AISI, and GovAI), with 12 accepted placements.

The premise is straightforward: instead of teaching people the field for 8 weeks and hoping they self-select into research, give them a meaningful technical problem upfront and watch which people can already do the work. My read is that this produces a better signal-to-cost ratio and better generalisation to non-EA-connected talent than the standard playbook.

globalaisafetyfellowship.com

EA Opportunity Board

Director · Oct 2022 – Feb 2024

EA Opportunity Board

Directed the EA Opportunity Board from October 2022 to February 2024 on a lean operating budget of roughly $1,000–$1,500 / month.

100 → 1,500+
Subscribers
~10 → 30+
Weekly postings
+++
Partner orgs (substantial growth)

The interesting part of running the board was watching where the field's hiring demand actually concentrated week over week, which became a live input into the talent systems described under Talent.

IIT Delhi AI Security Initiative

Co-built with Basil Labib · IIT Delhi + Kairos

IIT Delhi AI Security Initiative (AISI@IITD)

A student-and-researcher initiative at IIT Delhi doing research, education, and mentorship around AI safety and alignment — modelled after HAIST and the Berkeley AI Safety Initiative, but built for the Indian subcontinent talent pool.

  • Six-week AI Safety Fellowship with funding, mentorship, and access to a curated network.
  • Active research on reasoning in language models, fairness benchmarks, corrigibility, and AI regulation.
  • Mentorship + compute for advanced researchers, plus a technical + policy track for AGI-risk mitigation.

~12-person core team. Partners: IIT Delhi, Kairos Project.

iitdaisi.org

Advisory and capacity-building

Supporting other fieldbuilders, grantees, and grantmakers

FAR.AI — India / South Asia talent strategy

Ongoing advisory on where to allocate talent-side investment in the region, what's working in Indian fieldbuilding, and which orgs CG-style funders should be watching.

SERI MATS — Indian-pipeline cross-pollination

AISCF advisor; sharing rosters, surfacing strong Indian MATS candidates, and helping calibrate what an Indian SERI MATS equivalent should look like.

EA career guide for people from LMICs

Co-authored a guide for high-aptitude individuals in LMICs trying to find their way into impactful AI / GCR careers. Read on EA Forum.

80,000 Hours — Career Advisor (by invitation)

AI Alignment & Technical Governance track, Sep 2023–present.

Singapore AI Safety Hub

Founding Member & Strategic Advisor, Dec 2024–present.

Career Planning Workshop, Condor Camp SEA

Taught AI-safety career planning to LMIC fellows in Dec 2024. Slides.

Strategic writing & theory-of-change

The documents behind the programs

For the long-form treatments — including the India-specific funder-approach brief — see Grantmaking & Writing.

Sept 2022, updated 2023

EAIF — AI Safety Field-Building in India

A four-pathway pilot proposal that became the strategic foundation for AISCF and SAFL.

Open the proposal

EAGxIndia 2024

Lessons from AI Safety Field-Building in India

The field talk codifying two years of learning: career legibility, TIA-style assignments, and high-fidelity small cohorts outperform the standard playbook in India.

Slides PDF

Theory of change

IA-TIARA

Goal: reduce the risk of AI misalignment.

  • Inputs: in-person AGISF 101 reading groups across IITs, IISc, CMI, and ISI; a 3-month full-time research programme led by Global Research Leads working alongside Indian researchers.
  • Outputs: technically competent fellows and Indian Alignment Research Leads.
  • Outcomes: joint research sprints, working papers, and MVPs across interpretability, evals, and governance.

Slides

Preliminary proposal to Open Phil, 2024

AISCF Research & Instruction Program

Expands AISCF to IIT Bombay and to IITs / NITs across India, adding a 3-month paid research fellowship led by international Research Leads; FAR.AI, SERI MATS, and BlueDot Impact are among the advising orgs.

AI governance events

Rooms I've built and led

India AI Impact Summit 2026 — New Delhi

Hosted SAFL's Hardware-Rooted Sovereignty Workshop featuring Prof. Stuart Russell. Appeared on stage with India's IT Secretary, S Krishnan.

Trustworthy AI Panel — Singapore

Convened leaders from the NUS AI Institute, IIT Madras, the UNSW AI Institute, and UC Berkeley on the Safe & Trusted AI pillar.

Trustworthy AI Academic Reception — New Delhi

Organised a baithak-style convening near Bharat Mandapam alongside the AI Impact Summit. Brought together academics from IITD, IITM, IIITD, IIITH, UNSW, MBZUAI, and ASU — building convening infrastructure for the Trustworthy AI research community, as a lead-in to SAFL’s first workshop.

Co-founded

Secure AI Futures Lab (SAFL)

The field-building infrastructure that grew out of the EAIF pathways. Backed by $400K from Schmidt Sciences and the AI Safety Tactical Opportunities Fund, with additional support from the Future of Life Institute. Partners with CeRAI (IIT Madras), IIT Madras, the Singaporean MDDI, NUS, UNSW, and FAR.AI.