
Founder bets · advisor seats · talent-infra experiments

Special Projects

Some of the most useful work I've done doesn't fit into any of the other categories: these are projects I started because no one else was going to, and the expected-value math made sense at the time.

The frame I keep coming back to: “Identify the most important projects and organisations that need to exist, and make them happen … incubating impactful projects, headhunting founders, and working with for-profit entities to achieve a specific outcome in the world.” Most of these live there.

What I'm running right now

Five active bets

Each is paired with the role I hold, the artifact it produces, and where it links into the rest of the work.

Measuremint

Founder · Jan 2026 · in pilot

Measuremint — AI-powered talent intelligence platform

A voice-first AI career agent for high-volume markets, currently in pilot in India with 2K–20K applicants per job. It uses ElevenLabs and Claude for 10-minute AI-led candidate interviews, with PostgreSQL for storage.

  • Three-tier evaluation funnel — CV parsing → async voice challenge → full AI interview.
  • Unified candidate pipeline (demo) — multi-source ingestion (Apollo, OpenReview, GitHub) with handle-based deduplication and a three-tier caching layer that cuts external API costs by 90%.
  • Network Engine — a graph DB linking 100K+ STEM profiles across 500+ sources, with RAG-powered search and degree analysis to surface influencers, talent clusters, and research ties.
  • Nexus — a cross-platform aggregation pipeline that monitors public layoff signals across Reddit, GitHub, and HackerNews, assembles deduplicated candidates weighted by reliability, and feeds leads into recruiting workflows.
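The handle-based deduplication in the unified pipeline can be sketched roughly like this. This is a minimal illustration, not Measuremint's actual schema: the Candidate shape, the normalization rule, and the first-source-wins merge policy are all my assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    source: str                     # e.g. "apollo", "github", "openreview"
    handle: str                     # platform username
    fields: dict = field(default_factory=dict)

def normalize(handle: str) -> str:
    """Lowercase and strip punctuation so 'Jane.Doe' and 'janedoe' collide."""
    return "".join(c for c in handle.lower() if c.isalnum())

def dedupe(records: list[Candidate]) -> dict[str, Candidate]:
    """Merge multi-source records keyed on the normalized handle.

    First-seen values win; later sources only fill in missing fields,
    so one external lookup per handle is enough (the caching angle).
    """
    merged: dict[str, Candidate] = {}
    for rec in records:
        key = normalize(rec.handle)
        if key in merged:
            for k, v in rec.fields.items():
                merged[key].fields.setdefault(k, v)
        else:
            merged[key] = Candidate(rec.source, key, dict(rec.fields))
    return merged
```

Keying on a normalized handle rather than raw strings is what lets records from Apollo, OpenReview, and GitHub collapse into one candidate before any paid enrichment call is made.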

The thesis: high-volume hiring in India and similar markets is where AI-native recruiting will land first. The same infrastructure that supports talent-side experiments at SteadRise can be productised for the broader ML / SWE labour market, with safety, fairness, and candidate experience tested at scale before the model reaches frontier-lab volume.

Project writeup
Singapore AI Safety Hub

Founding Member & Strategic Advisor · since Dec 2024

Singapore AI Safety Hub

Helping spin up an AI-safety convening function in Singapore — a venue that bridges the Indian, South-East Asian, and Australasian AI-safety ecosystems with the established US / UK communities.

Singapore is a natural middle node: technically sophisticated, geopolitically distinct, and well-positioned for AI governance work that doesn't fit cleanly in either Washington or Brussels.

This work also feeds SAFL's Trustworthy AI panel and the broader Singaporean Ministry of Digital Development partnership — see SAFL.

Budhimaan Baccha

Founder · since Nov 2019 · pivot in pilot

Budhimaan Baccha — digital literacy → RLHF labour channel

Started as a digital-literacy nonprofit training 53 underprivileged Indian students for back-office employment. The pivot, now in pilot, is to convert the trained network into an early-stage RLHF labour platform that supplies high-quality human feedback for model training while paying contributors above local-market rates.

The angle for safety: a well-vetted, ethically-paid, geographically-distributed RLHF workforce is a piece of infrastructure several frontier labs say they need, but no one has built well at scale.

Project writeup
80,000 Hours career advising

By invitation · since Sept 2023

80,000 Hours — Career Advisor (AI Alignment & Tech Governance)

50+ AIS / AI-governance career calls over the past two years, usually one to two hours per candidate and sometimes more for senior transitions, helping people work through non-trivial decisions about AI-safety careers.

The cumulative effect is a continuous read on what the global ML / AIS talent market is actually optimising for, which feeds back into Talent and Field Building.

Raksha — guardrails for LLM agents

Co-built with Basil Labib · production-ready

Raksha — HRO guardrails for LLM agents

High reliability · checkpointed runs · process-level verification

A platform for deploying, monitoring, and governing LLM agents with structural guardrails — external verifiers, bounded autonomy, dual-control gates, and durable checkpoints. Raksha sits between an orchestrator and the model providers: BYOK, structural caps, every step verified and checkpointed.

Four primitives, one control plane

  • Reliability — structural caps on USD, tokens, tool calls, and wall-clock; every step checkpointed, fork-from-any-step retry.
  • Oversight — a separate vendor's model scores each step on grounding, goal alignment, tool legitimacy, and safety.
  • Dual control — tool calls flagged requires_approval block pending a Slack ping and a UI gate: approve, deny, or fork.
  • Recovery — every step is durable; fork from any prior state and replay with new policy, no full-task re-runs.
Launch app
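The caps-plus-checkpoint-plus-fork loop above can be sketched in miniature. This is an illustrative assumption of the shape, not Raksha's implementation: the names are invented, the step store here is in-memory, and the real system persists checkpoints durably and scores each step with a separate vendor's model.

```python
from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class Budget:
    max_steps: int = 10
    max_tokens: int = 50_000

@dataclass
class Run:
    budget: Budget
    steps: list = field(default_factory=list)   # one checkpoint per step
    tokens_used: int = 0

    def record(self, action: str, tokens: int) -> None:
        """Enforce structural caps before committing, then checkpoint."""
        if len(self.steps) >= self.budget.max_steps:
            raise RuntimeError("step cap exceeded")
        if self.tokens_used + tokens > self.budget.max_tokens:
            raise RuntimeError("token cap exceeded")
        self.tokens_used += tokens
        self.steps.append({"action": action, "tokens": tokens})

    def fork(self, at_step: int, budget: "Budget | None" = None) -> "Run":
        """Fork from any prior checkpoint and replay under a new policy."""
        child = Run(budget or deepcopy(self.budget))
        for step in self.steps[:at_step]:
            child.record(step["action"], step["tokens"])
        return child
```

The design point the sketch tries to capture: because every step is a durable checkpoint, a denied approval or a blown cap never forces a full-task re-run — you fork from the last good state and replay forward.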

Earlier · convening & talent identification

Past work that still shapes the practice

The through-line: the talent-identification practice isn't AI-specific — it's about finding sharp people in places established pipelines don't reach, and matching them into the highest-leverage roles available.

India Atlas Fellowship search (2021–22)

As Senior Recruitment Consultant, managed a 15-person team across data engineering, outreach, and relationship-building. Reached 60K students across 8K schools and sourced 20 finalists, of whom 5 were selected.

J-PAL Health & Well-Being vertical sourcing (2020–21)

Built recruiting ops for research associates and converted 31 interns to researchers, accounting for 35% of annual hiring.

Polish Mathematical Olympiad sponsorship

Sourced elite talent from underrepresented regions across Eastern Europe, South-East Asia, India, and Australasia.

Connections to the rest of the work

These projects don't sit cleanly inside any one tab, but each one feeds the others.

  • Measuremint runs on the same talent-graph infrastructure that powers SteadRise's Talent work.
  • Singapore AI Safety Hub is the regional bridge for SAFL's Field Building programs.
  • Budhimaan Baccha → RLHF platform is the kind of for-profit safety-infra play more grantmakers should be funding under Grantmaking & Writing.
  • 80,000 Hours advising is the steady-state pulse on what the global AIS talent market actually rewards, which keeps the rest of the strategy calibrated.