
Get Up to $100,000 for AI Safety and Science: Foresight Institute Nodes Grant 2026 ($3M Available)

Artificial intelligence is sprinting ahead, reshaping what researchers can do in biology, nanotechnology, forecasting, and security. That speed brings opportunity — and real risk.

JJ Ben-Joseph

The Foresight Institute is funding a network of AI for Safety & Science Nodes in 2026 to bankroll teams that want to push AI-enabled science while keeping the world safer. If your project sits at the intersection of advanced models and measurable public benefit — especially work that helps reduce existential or systemic AI risks — this program might be one of the most useful sources of catalytic funding and community support you’ll find.

This is not a simple stipend. The nodes combine grant awards with community space, programming, and in-house compute so teams can build quickly, test ideas, and show early results. The program distributes roughly $3 million per year across projects, with individual grants typically between $10,000 and $100,000. Larger grants tend to go to initiatives squarely aimed at AI safety; smaller amounts support projects in longevity biotech and molecular nanotechnology. Applications are reviewed monthly until the nodes reach capacity, and the final application window closes on December 31, 2025 — so timing matters.

Below is a practical, candid guide to whether you should apply, how to shape a competitive proposal, and the exact materials reviewers expect. Read this if you want to submit something that actually gets funded — not just something that sounds nice on paper.

At a Glance

  • Program: Foresight Institute AI for Safety & Science Nodes 2026
  • Funding Type: Grant (node-based funding plus resources)
  • Total Program Budget: ~$3,000,000 annually
  • Typical Award Size: $10,000–$100,000 (higher for AI safety work)
  • Deadline: Rolling monthly review; final deadline December 31, 2025
  • Eligible Applicants: Individuals, teams, and organizations (non-profit and for-profit)
  • Geographic Focus: Global, with stated interest in Africa and decentralized nodes
  • Preference: Open-source projects preferred; exceptions allowed with justification
  • Application Form: https://airtable.com/appyVXc5SMPAvIKpP/pagzBRWeiG3HjH6Qn/form

What This Opportunity Offers

Money matters, but context matters more. Foresight Institute is trying to seed a decentralized ecosystem where AI-driven science advances without concentrating power in a few institutions. That means each funded node is more than a check: you get access to shared office and community spaces, curated programming (workshops, speakers, mentorship), and in-house compute resources so early-stage teams can prototype quickly rather than waiting months for cloud credits or institutional approvals.

Financially, grants most often fall between ten and one hundred thousand dollars. If your project is squarely about AI safety — say a self-improving defensive agent, formal verification tooling for model behavior, or scalable red-teaming infrastructure — you stand a better chance of being recommended for higher awards. Projects in longevity biotechnology, molecular nanotechnology, and neuro/BCI research are welcome but often receive smaller grants because they typically require specialized facilities or longer lead times.

Beyond direct funding, winners join a network. Nodes are intended to be geographically and philosophically distributed: hubs where local researchers, engineers, and stakeholders meet, share compute, and form collaborations. For applicants in underfunded regions — the program explicitly signals interest in Africa — a node could mean access to equipment, compute, and mentorship that would otherwise be out of reach.

Finally, the program prizes projects that can produce tangible deliverables within short timelines (1–3 years). That means roadmaps with clear milestones, reproducible artifacts (code, datasets, models), and an orientation toward openness where possible. If your work reduces plausible catastrophic failure modes from advanced AI, the reviewers want to read about it.

Who Should Apply

This grant is intentionally broad: individuals, interdisciplinary teams, non-profits, and companies can apply. But breadth does not mean everything has equal odds.

If you are an AI safety researcher with a concrete plan to remediate a known risk (automated red-teaming pipelines, formal proofs for model constraints, scalable monitoring systems), you are a strong fit. The review panel prioritizes projects that can yield meaningful progress in short AGI timelines and that show clear impact on reducing existential or systemic risks.

If you develop tooling that improves private computation (confidential compute, MPC, secure enclaves) or privacy-preserving ML at scale, you also belong here — especially if your idea could make privacy-friendly AI practical in real deployments.

Teams working on decentralized cooperation between AI systems (protocols for safe negotiation or distributed decision-making) should apply when they can show a testable prototype or simulation plan. Similarly, researchers building AI infrastructure to accelerate scientific discovery — better model-data pipelines, forecasting tools, or epistemic quality controls — are welcome if they tie deliverables to concrete experiments.

Biotech, neurotech, and nanotech projects can apply, but be realistic about timelines and facility needs. Projects that use AI to produce incremental, verifiable results in 12–36 months (e.g., computational design of a molecule plus in vitro validation plan) are more competitive than blue-sky programs that require years of wet-lab work before producing results.

Examples of applicants who should apply:

  • A two-person team building an autonomous red-team agent that finds misalignment scenarios and outputs machine-readable formal tests.
  • A university spinout creating an open-source confidential compute stack tailored to large model deployment in healthcare.
  • A research lab in Nairobi proposing an AI-driven forecasting platform for epidemic preparedness, paired with local data partnerships.
  • A small company developing model-based design tools for nanoscale assembly with a simulation-to-prototype pipeline.

If you’re unsure, apply. The program reviews monthly and encourages early submissions rather than last-minute rushes.

Insider Tips for a Winning Application

  1. Be ruthlessly specific about short-term deliverables. Reviewers want milestones you can hit in 1–3 years. Instead of “advance BCI models,” write “train and validate a simulation-to-real transfer pipeline that maps 100-channel ECoG data to motor intent, validated on a 10-subject dataset, with code and pre-trained models released within 18 months.” Specificity signals feasibility.

  2. Show a concrete compute and cost plan. Nodes provide in-house compute, but reviewers will ask how you’ll use it. Estimate GPU/TPU hours, storage, and expected cloud fallback costs. If you need specialized hardware for biotech simulations, explain your plan to access it. A clear cost model avoids the “mystery budget” penalty.
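A compute cost plan like the one Tip 2 describes can be roughed out in a few lines. The sketch below is illustrative only: the hourly GPU rate, storage price, and contingency percentage are placeholder assumptions, not Foresight Institute figures — swap in real quotes from your provider before putting numbers in a budget.

```python
# Minimal compute-budget sketch. All rates below are ASSUMED placeholder
# values for illustration; replace them with real provider quotes.

GPU_HOURLY_RATE = 2.50       # assumed cloud-fallback price per GPU-hour (USD)
STORAGE_MONTHLY_RATE = 0.02  # assumed price per GB-month (USD)

def compute_budget(gpu_hours: float, storage_gb: float, months: int,
                   contingency: float = 0.15) -> dict:
    """Return a line-item compute estimate with a contingency buffer."""
    gpu_cost = gpu_hours * GPU_HOURLY_RATE
    storage_cost = storage_gb * STORAGE_MONTHLY_RATE * months
    subtotal = gpu_cost + storage_cost
    return {
        "gpu": round(gpu_cost, 2),
        "storage": round(storage_cost, 2),
        "contingency": round(subtotal * contingency, 2),
        "total": round(subtotal * (1 + contingency), 2),
    }

if __name__ == "__main__":
    # e.g. 5,000 GPU-hours of fine-tuning plus 2 TB of data held for 18 months
    for line_item, cost in compute_budget(5000, 2000, 18).items():
        print(f"{line_item:>11}: ${cost:,.2f}")
```

Even a simple model like this answers the reviewer's implicit question ("do they know what this costs?") and makes the "mystery budget" penalty much less likely.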

  3. Match deliverables to evaluation criteria. The program scores projects on impact to reducing existential risk, feasibility under short AGI timelines, and alignment with focus areas. Spell out how each milestone maps to those criteria. For example, if your deliverable is a runtime monitor, state how it demonstrably reduces a class of failure modes and how you’ll evaluate that reduction.

  4. Embrace openness where you can. The program prefers open source. If you must keep components closed (safety, IP, or regulatory reasons), document why and supply a limited-sharing plan (auditor access, redacted releases, or reproducible evaluation code). Reviewers are more forgiving when you explain trade-offs.

  5. Build a balanced team and show prior execution. A project led by a single idea author is riskier than one with engineers, domain experts, and an operations lead. Include short biographies highlighting relevant wins: shipped systems, datasets curated, successful experiments, or prior grants. If collaborators lack full-time commitment, make that clear and provide contingency plans.

  6. Draft a short risk and mitigation section. Identify the top 3 technical or operational risks and how you’ll address them. This is not pessimism — it’s competence. Demonstrating backup strategies makes your proposal feel real.

  7. Localize impact for node proposals. If you plan to host or anchor a node in a specific region (Africa is explicitly mentioned), explain how you’ll build local partnerships, handle recurring costs, and ensure equitable access. Funders want nodes that grow healthy ecosystems, not one-off events.

These tips add up. The best applications read like a startup pitch plus a research plan: clear problem, feasible path to measurable outputs, a team that can execute, and sensible budgeting.

Application Timeline (Work Backwards)

The nodes review applications monthly and will accept submissions until capacity is reached, with a final cutoff on December 31, 2025. Plan for at least 6–8 weeks of preparation.

  • 8 weeks before submission: Draft your project narrative, basic budget, and team bios. Identify letter writers and technical reviewers.
  • 6 weeks before submission: Finalize milestones, compute plan, and data management strategy. Send early drafts to a technical colleague for sanity checks.
  • 4 weeks before submission: Collect letters of support and institutional approvals. For for-profits, prepare a short justification for why grant funding is needed.
  • 2 weeks before submission: Iterate on narrative and budget after feedback. Prepare a concise executive summary and a one-sentence pitch for outreach.
  • 48–72 hours before submission: Final proofread, confirm attachments, and upload early to avoid last-minute errors.

If you plan multiple submission attempts, apply earlier in the month rather than on the last day. Review cycles are monthly, so an earlier submission can yield an earlier decision.

Required Materials

The exact application form will ask for structured entries, but you should prepare these documents ahead of time:

  • Project Narrative (3–6 pages): Significance, technical approach, milestones (with dates), deliverables, and how the work reduces AI risk or advances science.
  • Budget and Justification: Line-item budget for personnel, compute, equipment, travel, and overhead. Explain why grant funding (vs. customer revenue or VC) is appropriate.
  • Team Bios (1–2 page CVs): Focus on relevant experience and concrete accomplishments.
  • Compute Plan: Estimated GPU/TPU hours, storage, software stacks, and contingency plans.
  • Data and Code Plan: How you’ll manage, share, and preserve datasets and code. Include privacy protections where needed.
  • Risk Assessment and Mitigations: Top technical and operational risks and fallback strategies.
  • Letters of Support or Collaboration (if applicable): Confirm access to labs, datasets, or local partnerships.
  • IP and Openness Statement (especially for for-profits): Explain what will be open-sourced and what will remain proprietary, plus reasons.
  • Optional: Prototype demos, short videos, or links to repositories.

Prepare these documents as standalone files so you can attach them quickly to the online form.

What Makes an Application Stand Out

Reviewers are looking for three things: credible impact on pressing AI safety problems, the ability to execute within compressed timelines, and a plan that yields tangible artifacts.

Impact: Projects that clearly reduce plausible catastrophic failure modes — e.g., automated verification tools that prevent model misalignment scenarios — score high. Make the causal chain explicit: here is the failure mode, here is the intervention, here’s how you’ll measure the reduction.

Feasibility: Short timelines matter. A project that promises an 18-month deliverable with a clear timeline and budget beats a two-year conceptual plan. Show prior work or prototypes that prove you can hit the first milestones.

Execution capability: Teams with hands-on experience shipping systems, producing datasets, or publishing reproducible results are preferred. If you’re early stage, include advisors or partners who provide missing skills.

Open practices: Open-source deliverables, reproducible benchmarks, and public datasets make your work more useful to the wider community and more palatable to reviewers.

Local impact and node-building: For node proposals, a concrete plan to recruit, mentor, and sustain a local research community adds weight. Funders want nodes that persist beyond the initial grant.

Common Mistakes to Avoid

  1. Vague milestones. Saying “we will explore X” is a red flag. Instead, provide specific experiments, evaluation metrics, and dates.

  2. No compute plan. Reviewers dislike projects that underestimate resource needs. Give realistic GPU/TPU estimates and show how you’ll use in-house node compute.

  3. Overly broad scope. Trying to cover too many objectives dilutes impact. Focus on a few achievable deliverables.

  4. Missing team capability. Don’t submit without showing you have the technical skills or access to partners who do.

  5. Ignoring openness questions. If you can’t open-source parts, explain why and offer controlled auditability.

  6. Weak budgets. A budget that doesn’t match milestones or omits essential costs (data acquisition, compute, instrument time) suggests poor planning.

Address these directly in your proposal and reviewers will reward clarity.

Frequently Asked Questions

Q: Can international teams apply? A: Yes. The program accepts individuals and organizations globally. The nodes are intentionally decentralized and include interest in regions such as Africa. Check the application form for any location-specific requirements.

Q: Are for-profit companies eligible? A: Yes. For-profits can apply but must justify why grant funding is needed versus private capital. Expect questions about IP, productization, and open-source commitments.

Q: How often are decisions made? A: Applications are reviewed monthly until the nodes reach capacity. Submit early in the cycle to get a faster decision.

Q: What does the program prioritize — safety or science? A: Both. Projects directly addressing AI safety (defense, verification, monitoring) often receive larger awards, but scientific infrastructure and domain applications (biotech, nanotech, neurotech) are eligible and supported with appropriate funding levels.

Q: Is open source required? A: Preferred, but not strictly required. Provide a clear rationale if you keep components closed, and offer mitigations such as audit access or reproducible evaluation scripts.

Q: Can I apply multiple times? A: You can revise and resubmit if your first proposal is unsuccessful, but treat each submission as a new, improved draft. Since reviews are monthly, iterating quickly is feasible.

Q: What kind of support do nodes provide beyond funding? A: Grants are paired with office/community spaces, programming, and in-house compute designed to accelerate prototyping and foster collaboration.

Next Steps / How to Apply

Ready to apply? Don’t wait until the final week. Prepare the documents listed above, refine your milestones, and draft a crisp one-page summary that a non-specialist can understand.

Apply now through the official form: https://airtable.com/appyVXc5SMPAvIKpP/pagzBRWeiG3HjH6Qn/form

Before submitting, recheck the program’s full initiative page at the Foresight Institute for any updated guidelines or FAQs. If you plan to request node-specific resources (on-site space, special hardware), describe those needs clearly in your compute and budget sections.

If you want a quick checklist before you hit submit:

  • One-page executive summary that answers What, Why, How, and When.
  • Clear milestones with dates and measurable deliverables.
  • Realistic budget tied to milestones.
  • Team bios demonstrating relevant experience.
  • Compute and openness plan.
  • Letters of support if you require external facilities or collaborations.

Apply early, write precisely, and make every sentence earn its place. This program rewards clear thinking, concrete outcomes, and teams that can show how their work will make AI-enabled science safer and more accessible. Good luck — and if your proposal lands a node, expect to go from idea to demonstrable results much faster than you thought possible.