Grant investigation · advising · strategic writing

Grantmaking & Writing

A grantmaker is two things at once: someone with judgment about which projects should exist, and someone willing to do the operational work to make them happen.

For three-plus years I've worked on both sides — investigating grant opportunities, framing proposals for other founders, advising grantmakers on India and South Asia, and writing the theory-of-change documents that turn instincts into strategy.

The work has concentrated in the AI safety ecosystem, where my read is that the binding constraint is rarely the funding itself. What seems to matter more is identifying the right projects to back, finding the right people to back them, and getting both moving fast enough to matter in a world where transformative-AI timelines may be short.

Approach

How I reason about what to fund

Below are some principles I lean on when thinking about what to fund. Most come from causal inference and decision theory.

Counterfactual reasoning

The question I find most useful to ask about any program: what would have happened anyway, without it? AISCF's strongest outcomes — multiple participants who declined private-sector tech offers, including one with a confirmed ~$400K quant-firm offer, in order to apply to alignment-focused PhDs — were genuinely counterfactual: the next-best path for those students was already excellent, so the program changed trajectories rather than merely accompanying them. I'm wary of ecosystem-building reports that stack up observational outcomes without a model of what would have happened anyway.
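A toy sketch of that arithmetic (the 60% and 10% below are made up for illustration, not AISCF data):

```python
# Toy counterfactual sketch (probabilities are hypothetical, not AISCF data).
# A program's value is the outcome WITH it minus the outcome the
# participant's next-best path would have produced anyway.

p_outcome_with_program = 0.60  # hypothetical: share entering alignment work given the program
p_outcome_anyway = 0.10        # hypothetical: share who would have gotten there regardless

uplift = p_outcome_with_program - p_outcome_anyway
print(f"counterfactual uplift per participant: {uplift:.0%}")
# A report quoting only the 60% "success rate" overstates the program's
# effect by the 10 points that were never at stake.
```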

Probabilistic calibration

I try to bound estimates rather than state them as point claims. The January 2025 line about “fewer than 10 people in the Indian ecosystem on the same page about AI existential safety” was deliberate — a count of people I could name, with the bound kept narrow rather than gestural. AISCF over-admitting 47 against a target of 24 was the same instinct on the other side: my prior was that career-pressure attrition would cost us roughly half, so we recruited for the loss in advance.
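The over-admission arithmetic, as a minimal sketch using the numbers above:

```python
# Over-admission arithmetic behind admitting 47 against a target of 24.
# Source prior: career-pressure attrition would cost roughly half of admits.

target_cohort = 24
expected_attrition = 0.5  # prior: ~half of admits drop out before completion

required_admits = target_cohort / (1 - expected_attrition)
print(f"admits needed: {required_admits:.0f}")  # -> 48, close to the 47 admitted
```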

Expected-value discipline

I find expected-value thinking more useful when it's applied to funnel shape than to topline numbers. GAISF's pipeline reads 4,200 warm candidates → 26 offers across 6 partners → 12 accepted placements — roughly 0.3% of warm candidates land at a partner lab. When I'm sizing recruiting investment, that conversion ratio is what I'm looking at, not the warm-candidate number on its own. The tier framework I've sketched for India grantmaking orders deployment the same way — by marginal leverage, not gross spend: Tier 1 (evaluations capacity, alignment research at top Indian institutions, a regranting function) compounds; Tier 3 (general convenings) mostly doesn't, regardless of dollar size.
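A minimal sketch of the funnel math, using the pipeline numbers quoted above:

```python
# GAISF funnel shape, using the numbers quoted above.
warm_candidates = 4_200
offers = 26        # across 6 partners
placements = 12

stages = [
    ("warm -> offer", offers / warm_candidates),
    ("offer -> placement", placements / offers),
    ("warm -> placement", placements / warm_candidates),
]

for name, rate in stages:
    print(f"{name}: {rate:.2%}")
# warm -> placement comes to ~0.29%, the "roughly 0.3%" above:
# the number that matters when sizing recruiting investment.
```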

External validity

One of the hardest moves in fieldbuilding is figuring out which parts of a model that worked elsewhere actually transfer. The standard EA-group → AGISF → research-career template ports reasonably well into Western universities and, in my experience, badly into Indian ones — different career-risk profile, different gatekeeping structure, different ML-literacy baseline. A lot of what I write down about Indian AIS fieldbuilding is, in effect, an attempt to be honest about which findings have external validity and which don't: AISCF's funnel-to-placement structure transferred to GAISF; AISCF's reading-group format didn't transfer to other IITs without per-campus professorial assurance.

Asymmetric regret

Errors of commission and of omission rarely cost the same. For early-stage AIS fieldbuilding in India, the cost of a public misstep — publicising programming as “AGI” rather than “AGI safety”, casual x-risk framings that invite backlash — spreads across the ecosystem and is hard to undo; the cost of not running a program is mostly recoverable. That asymmetry is why I've argued capital for student-led AIS groups needs structured operational and pedagogical support, not just “start a club” funding. AISCF's 2× over-admit was the same instinct in the other direction: cheaper to over-recruit than to run an undersized cohort.
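A toy expected-regret comparison (every number below is hypothetical, chosen only to show the shape of the asymmetry):

```python
# Toy asymmetric-regret comparison (every number here is hypothetical).
# A commission error (public misstep) is ecosystem-wide and hard to undo;
# an omission error (not running a program yet) is mostly recoverable.

p_public_misstep = 0.15      # hypothetical: chance a rushed launch missteps publicly
cost_public_misstep = 100.0  # hypothetical: ecosystem-wide, largely irreversible
cost_not_running = 10.0      # hypothetical: one delayed cohort, recoverable

expected_regret_commission = p_public_misstep * cost_public_misstep  # 15.0
expected_regret_omission = cost_not_running                          # 10.0

print(f"commission: {expected_regret_commission}, omission: {expected_regret_omission}")
# Even a modest misstep probability can make waiting, or adding structured
# support first, the lower-regret move.
```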

Epistemic deference

There are domains where I have a strong inside view, and domains where I should defer. Before writing the EAIF four-pathway proposal in 2022, I had ~80 student conversations across 13 institutions in 8 states — because the proposal's claims were about on-ground realities, and I didn't trust desk reasoning to substitute for them. The other side of this: the AISCF Research & Instruction Program proposal explicitly built a Governing Council plan around senior backers from AI safety and adjacent grantmaking ecosystems, because that's a layer of judgment I shouldn't be substituting for.

Reasoning transparency

Show the inputs, the assumptions, and the path from one to the other. The EAIF proposal listed all four pathways with rationale for each and ranked them, so a reader could see why I'd start with the IIT-Delhi AGISF pilot rather than the long-horizon J-PAL-style research lab. The piece below on India-specific AI safety grantmaking shows the tier reasoning, not just the tier conclusion. Where I haven't shown the math, that's a fair thing to push back on — and I'd want to show it.

Grant investigation, framing, and advising

Partial list of grantmakers, founders, and fieldbuilders supported

Manifund proposals

I've supported multiple Manifund applicants in the AIS / GCR space with theory-of-change development, scope reasoning, and grant framing.

FAR.AI — India / South Asia talent strategy

Ongoing advisory on where to allocate talent-side investment in the region, what's working in Indian fieldbuilding, and which orgs CG-style funders should be watching.

Advisory on India-focused AI safety commitments

Opinionated briefs for philanthropic networks scoping new India AI safety commitments — mapping where capital has the highest leverage, distinguishing frontier safety work from the broader responsible-AI agenda, and flagging where low-fidelity intermediation tends to lose signal. The most recent is published in anonymised form below as a writing sample.

SERI MATS — Indian-pipeline cross-pollination

Sharing rosters, surfacing strong Indian MATS candidates, and helping calibrate what an Indian SERI-MATS-equivalent should look like.

EA career guide for people from LMICs

Co-authored a guide for high-aptitude individuals in LMICs trying to find their way into impactful AI / GCR careers. Read on EA Forum.

General advisory to fieldbuilders across India / LMICs / SEA

A growing share of my time goes to fieldbuilders in India and adjacent LMICs / SEA designing their own programs — the conversations focus on what to copy from the US/UK template, what to adapt, and what to drop. The Condor Camp SEA Career Planning Workshop (Dec 2024) is the most recent example.

Strategic writing & theory-of-change

In my experience, the most useful grantmaking work isn't always grantmaking — sometimes it's writing the document that lets someone else make a better grant.

Each piece below is paired with the artifact, the audience it was written for, and the link where one exists.

EAIF — AI Safety Field-Building in India

Sept 2022, updated 2023

A four-pathway pilot proposal for longtermist field-building in India:

  1. Incubate high-fidelity, career-focused university groups (HAIST-style) across top Indian institutions, starting with an IIT-Delhi AGISF-based pilot.
  2. Partner with an existing ERI (CERI / CHERI) to stand up an Indian chapter.
  3. Build a global feeder pipeline with direct interview / screening fast-tracks into x-risk orgs.
  4. As a long-horizon bet, stand up an in-house J-PAL-style interdisciplinary x-risk research lab.

Built on conversations with 80+ students across 13 Indian institutions in 8 states, and shaped by feedback from senior fieldbuilders and grantmakers across Rethink Priorities, CEA, FAR.AI, Momentum, and Stanford. It became the strategic foundation for AISCF, GAISF, and SAFL.

Open the proposal

AISCF Research & Instruction Program

Preliminary proposal to Open Phil, 2024

A scale-out proposal for the AISCF model — from the IIT-Delhi pilot to IIT-Bombay in Q1 2024, then pan-India IITs in hybrid mode by Q4 2024, with a 3–4 month paid summer / winter research fellowship for top performers mentored by international Research Leads in sync with India-based researchers.

  • A Research Leads pipeline with 10+ candidates already engaged, sourced via FAR.AI, BlueDot, and SERI MATS networks.
  • Org partnerships engaged, including FAR.AI, BlueDot Impact, SERI MATS, ERA, ARENA, CBAI, HAIST, EffiSciences (ML4Good bootcamp for India), CAIS, AI Safety Camp, Arkose, and CHERI.
  • A Governing Council plan with senior backers from AI safety and adjacent GCR grantmaking ecosystems, plus a planned senior FAR.AI / SERI MATS / RAND member and an Indian IIT dean.
  • An Indian researcher pipeline of 5+ senior faculty at IIT-Bombay and IIT-Delhi across alignment-relevant fields including formal verification of neural nets, NLP, AI school leadership, and algorithmic game theory.
  • The core team at proposal time included myself plus a FAR.AI alignment researcher (then the only India-based full-time alignment researcher), a SERI MATS-affiliated researcher joining full-time, and a top IIT-Delhi student.
“Alignment research needs more parallelization, and this parallelization is principally bottlenecked by high-quality research leads. Two of the best ways to remove this bottleneck are high-quality mentorship and an academic cohort to accelerate the development of research leads.”

EAGxIndia 2024

Lessons from AI Safety Field-Building in India

A field talk codifying two years of learning: the standard “incubate an EA university group → run AGISF → research career follows” template under-delivers in India, while career legibility, TIA-style assignments, and high-fidelity small cohorts outperform.

IA-TIARA — Theory of Change

Goal: Reduce the risk of AI misalignment.
Inputs: In-person AGISF 101 reading groups across IITs, IISc, CMI, ISI; a 3-month full-time research programme led by Global Research Leads in sync with Indian researchers.
Outputs: Technically competent fellows who understand alignment and see it as a career; Indian Alignment Research Leads who can serve as mentors at scale.
Outcomes: Joint research sprints, working papers, and MVPs across interpretability, evals, and governance.

Evidence base: India has one of the largest pools of technically competent ML researchers; the Indian share of AI publications has been rising steadily.

Slides

April 2026

How I think about India-specific AI safety grantmaking

My current thinking on the scope and shape of grantmaking for AI safety and allied causes in India — what to fund, what to avoid, and where I think capital actually compounds. It draws on three-plus years of full-time AI safety field-building in India and on private advisory briefs I've written for philanthropic networks considering India commitments. It's a working framework, not a settled view — I expect to update it as the field evolves and as I learn from grantees, other operators, and the funders deploying capital here.

Why I think India needs dedicated AI safety capital

India is simultaneously a frontier-AI consumer at massive scale, an emerging developer of sovereign models, and a major node in the global AI talent supply chain — yet it has almost no institutional infrastructure for AI safety. Most India-based work on AI governance that I see is focused on responsible AI and digital rights rather than alignment, evaluations, or catastrophic risk. Four gaps stand out to me in particular:

Deployment risk

India's 1.4-billion population is being exposed to frontier AI systems at scale with negligible safety intermediation. Integration into Aadhaar-linked government services creates a concentration of dependency that few democracies face — a single alignment failure could affect hundreds of millions.

Development risk

Multiple sovereign Indian AI efforts are underway, built under competitive pressure and with minimal safety-evaluation infrastructure. India has an on-paper AI Safety Institute but no strong equivalent of UK AISI, US CAISI, or Singapore AISI.

Talent pipeline risk

India produces more AI/ML researchers than almost any country, but vanishingly few work on safety. Top talent leaks to frontier labs and hot AI startups abroad. Without domestic institutions, India remains a net exporter of capability and a net importer of risk.

Governance gap

India's regulatory posture remains 'innovation-first' — desirable for harnessing AI for citizen welfare, but it leaves frontier-risk discourse undeveloped. Existing civil-society work focuses on digital rights, data protection, and fairness; awareness of extreme risks is rare.

My current view is that India doesn't need another AI policy think tank writing reports. What seems most useful to build is a think-and-do body — technical evaluations capacity, red-teaming infrastructure, a domestic talent pipeline for alignment and governance research, and trusted intermediaries who can translate between the global safety community and India's government and tech ecosystem.

Where I think capital has the highest leverage

I think about deployment in three rough tiers.

Tier 1: Highest leverage

India-based AI safety evaluations and red-teaming

A credible organisation that can evaluate frontier models deployed in India, conduct red-teaming for Indian-language and Indian-context vulnerabilities, and over time serve as a trusted government evaluator. Closest analogues: METR, Apollo Research, UK AISI's evaluations work.

Alignment and safety research at Indian institutions

Seeding 3–5 research groups at top Indian institutions (IITs, IISc, IIIT-H) with dedicated funding for technical AI safety — alignment, interpretability, and control. Not 'responsible AI' rebranded. Requires both funding and mentorship linkages to established labs (FAR.AI, CAIS, Anthropic safety team).

Regranting and technical advisory function

A capable intermediary who understands both the global AI safety landscape and India's institutional terrain, identifies high-quality grantees, conducts due diligence, and provides ongoing technical stewardship. In my experience this role is hard to outsource to a generalist philanthropic advisor.

Tier 2: Important supporting infrastructure

Government engagement and policy translation

A small high-credibility team engaging MeitY, NITI Aayog, STEM policy offices, and security bodies on frontier AI risk — building epistemic infrastructure so when officials are ready to engage, there are credible Indian interlocutors and a defensible evidence base.

Talent pipeline and fellowship programmes

Funding 20–30 fellowships per year for Indian researchers to spend 3–6 months at established safety labs, with return commitments to work on safety in India, paired with domestic research placements and an annual India AI safety conference.

Tier 3: Useful but lower leverage

Convenings and ecosystem coordination

Useful as connective tissue but insufficient on their own; India already has plenty of AI convenings. What's missing is technical depth, not networking. Any convening should tie to concrete output — a red-teaming exercise, a policy workshop — rather than general awareness-raising.

Risks I'd watch for

Conflating frontier AI safety with responsible AI

India's existing AI governance ecosystem is, in my read, overwhelmingly focused on fairness, bias, transparency, and digital rights — important work, but not AI safety in the sense of reducing catastrophic or existential risk from frontier systems. My view is that the two require different interventions, and that capital routed primarily into the responsible-AI ecosystem won't close the AI safety gap.

Low-fidelity intermediaries

I'd be cautious about capital routed through intermediaries that can't distinguish high-quality AI safety work from adjacent-but-different work (responsible AI, AI ethics, digital rights, AI for social good) and from low-quality work (generic hackathons, shallow research, networking events) — in my experience that capital is unlikely to compound. Domain expertise and contextual engagement with on-ground Indian networks seem to matter more here than philanthropic generalism.

Talent drain without a return mechanism

Fellowship and training programmes without strong return commitments can become one-way pipelines to Western labs. My current view is that any talent programme should pair with domestic institutional capacity — funded research groups, a credible Indian safety institute, embedded roles in government — that gives researchers a reason to stay or come back.

Capability surface without safety surface

ML talent in India is dense and growing. I'd argue that, at the margin, adding more ML bootcamps without alignment-focused programming is closer to capabilities work than to safety work.

What this is built on

This thinking draws on three-plus years of full-time AI safety field-building in India — running the country's first cohort-based in-person AI safety careers fellowship at IIT-Delhi, building the GAISF recruiting funnel that now reaches 4,200+ candidates across 13 partner labs, co-founding SAFL, and ongoing scoping conversations with Indian government, academia, civil-society think tanks, and global safety orgs. The specific operational recommendations behind this framework draw on private advisory briefs I've written for philanthropic networks considering India commitments — and I expect to keep refining them as the field and the funding landscape evolve.