Win up to £625,000 in UKRI NERC Ocean Carbon Modelling Grant Funding: A Practical Guide to the 2026 UK–US Biological Ocean Carbon Assessment Call
If you’ve ever tried to explain the ocean carbon cycle to a non-scientist at a dinner party, you know the look: polite smile, mild panic, quick pivot to “So… how about that weather?” Meanwhile, the ocean is doing the heavy lifting—quietly absorbing a huge share of humanity’s carbon dioxide emissions—and we’re still arguing about the fine print of how it stores carbon and for how long.
That fine print matters. A lot. Because the difference between “carbon stays down there” and “carbon pops back up” is the difference between climate projections that hold together and ones that wobble like a wonky table in a seaside café.
This UKRI opportunity (through NERC) is aimed squarely at that fine print. It’s funding a UK–US project to compare and improve the way biological processes controlling ocean carbon storage are represented in multiple global models—specifically, contrasting models where assumptions and parameterisations can lead to very different answers. The premise is simple and slightly brutal: if two respected models disagree about carbon storage, you don’t get to shrug. You have to find out why.
There’s also a pragmatic, collaborative twist: NASA will support the US side, while UKRI/NERC supports the UK component. That means you can build a transatlantic team that’s properly resourced, not held together with goodwill and half-funded postdocs.
And yes, it’s competitive. It’s also the sort of grant that can define a research arc for several years—because improving how models represent ocean biology and carbon storage is foundational work. The kind that ends up cited everywhere, quietly powering the next generation of Earth system projections.
At a Glance: Key Facts for This UKRI NERC Ocean Carbon Storage Grant
| Item | Details |
|---|---|
| Funding type | Research Grant (UKRI/NERC; UK–US collaboration with NASA support for US team) |
| Topic | Biological influence on ocean carbon storage; multi-model comparison and improvement |
| Who can lead | UK-based project lead at a NERC-eligible UK research organisation |
| US participation | Allowed; US researchers must be affiliated with a US institution at submission; NASA provides support |
| Max UK budget (FEC) | Up to £625,000 FEC (UK component) |
| UKRI/NERC pays | 80% of FEC (typical UKRI model) |
| Status | Open |
| Deadline | 14 July 2026, 16:00 (UK time) |
| Mandatory step | Notification of Intent (NoI) required to submit a full application |
| Opportunity page | https://www.ukri.org/opportunity/multiple-model-assessment-of-biological-influence-on-ocean-carbon/ |
| Key contacts | [email protected]; [email protected]; [email protected]; [email protected] |
What This Opportunity Actually Offers (Beyond the Headline Funding)
Let’s talk about what you’re really buying with this call—because it’s not just staff time and compute.
First, the UK component can request up to £625,000 full economic cost (FEC). UKRI/NERC typically pays 80% of FEC, with your institution absorbing the remaining 20% through its standard FEC arrangements. In plain English: you can build a serious UK work programme, but you need to cost it properly and confirm early that your institution is happy covering the unfunded portion.
Second, the call is explicitly about incorporating new representations of key processes regulating ocean carbon storage into contrasting global models. That’s important. This isn’t “run Model A for scenario X and write a paper.” It’s more like: identify biological processes that models handle poorly or inconsistently, upgrade the representation, test across models, and show what changes.
Third, the UK–US structure is a real asset if you use it well. The best multi-model work often depends on access to different modelling ecosystems, different codebases, and different traditions of parameterisation. A transatlantic team can be more than a courtesy co-authorship arrangement; it can be the difference between a narrow technical patch and a genuinely transferable modelling improvement.
Finally, there’s a subtle but valuable benefit: this call forces you to design for comparability and synthesis. If you do it right, you’ll come out with more than a one-off result—you’ll produce methods, benchmarks, and evaluation habits that other modelling groups can adopt. That’s the kind of output that ages well.
Who Should Apply (And Who Should Not Waste Their Time)
This call wants a very specific kind of applicant: someone who can sit comfortably at the table with Earth system modellers, biogeochemists, and data people—and keep the conversation productive.
The non-negotiables
The project lead must be based at a UK research organisation eligible for NERC funding. If you’re not sure whether your organisation is eligible, check internally with your research office early. Do not wait until the week before the deadline and discover a technicality.
You also must submit a Notification of Intent (NoI) to be eligible for a full application. Treat the NoI as a gate, not a formality.
US collaborators are welcome, and NASA will provide support on the US side. US researchers must be affiliated with a US institution at the time you submit. That sounds obvious until you remember how often people move roles in summer.
Who this is ideal for (real-world examples)
If you lead or co-lead work on an ocean biogeochemistry module in a global model and you’ve been itching to fix “that one process” everyone complains about—this is your moment.
If you’re a UK PI who already collaborates with US modelling groups (NASA-linked or otherwise) and you can propose a clean division of labour—UK handles model development and evaluation framework, US contributes model comparison pipelines and observational constraints—this call fits like a glove.
If you’re a mid-career researcher with a strong publication record in ocean carbon cycling but less experience in code-level model development, you can still be competitive—if you build the right team. You’ll need credible modelling horsepower and a plan that doesn’t read like “we will learn the model.”
Who should think twice
If your proposal is mainly observational with only a vague nod to models (“we will inform models”), it’s probably not aligned. Likewise, if you’re only working within one modelling framework and the “multiple model” element is basically a sensitivity test, reviewers are likely to notice.
This call is about comparative assessment and improvement across models. If that phrase makes you tired just reading it, save yourself the stress.
The Science Aim in Plain English: Biological Processes That Decide Whether Carbon Sticks
Ocean carbon storage isn’t only chemistry. Biology acts like the shipping department: packaging carbon, routing it through the water column, and sometimes sending it right back to the surface with a return label.
Depending on your niche, your “key processes” might include things like plankton community structure, particle formation and sinking, remineralisation depth, nutrient limitation, ecosystem dynamics, or food-web interactions that change export efficiency. The call doesn’t prescribe the single correct process—it pushes you to justify which processes matter and then prove you can improve how models treat them.
The multi-model angle matters because a process can look “fine” inside one model simply because the rest of that model compensates for it. Comparing across contrasting models is how you discover whether your shiny new parameterisation is genuinely better, or just better at matching one model’s quirks.
Insider Tips for a Winning Application (The Stuff Reviewers Quietly Reward)
You can have the best science idea in the room and still lose to a proposal that’s clearer, more testable, and better organised. Here are practical ways to stack the odds in your favour.
1) Make the multi-model strategy the main character
Don’t bury the “multiple model assessment” part in Work Package 4 like it’s an afterthought. Put it upfront. Explain why the chosen models are genuinely contrasting (structure, resolution, ecosystem complexity, parameter choices, coupling). Then explain what a successful cross-model improvement looks like.
A strong line is: “We will implement the same process representation across Model A and Model B, then evaluate whether it reduces divergence in carbon storage diagnostics under shared forcing.” That’s testable. Reviewers love testable.
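To make that concrete, here is a minimal Python sketch of the kind of before/after check that sentence implies. Everything in it is hypothetical: the arrays stand in for regridded column carbon storage from two models, and the synthetic numbers exist only so the example runs.

```python
import numpy as np

def divergence(field_a: np.ndarray, field_b: np.ndarray) -> float:
    """RMS difference between two models' diagnostic fields, assumed to be
    regridded onto a common grid and expressed in the same units."""
    return float(np.sqrt(np.nanmean((field_a - field_b) ** 2)))

# Synthetic stand-ins for column carbon storage (mol C m^-2) from Models A
# and B; in a real project these would come from the shared-forcing runs.
rng = np.random.default_rng(0)
shared = rng.normal(10.0, 2.0, size=(180, 360))       # signal both models capture
ctrl_a = shared + rng.normal(0.0, 1.5, shared.shape)  # Model A, control
ctrl_b = shared + rng.normal(0.0, 1.5, shared.shape)  # Model B, control
new_a = shared + rng.normal(0.0, 0.7, shared.shape)   # Model A, new process
new_b = shared + rng.normal(0.0, 0.7, shared.shape)   # Model B, new process

print(f"cross-model divergence, control: {divergence(ctrl_a, ctrl_b):.2f}")
print(f"cross-model divergence, new:     {divergence(new_a, new_b):.2f}")
```

The point is not this particular metric (you might prefer an area-weighted or basin-resolved version) but that "reduces divergence" becomes a number you can report.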
2) Define 2–4 diagnostics that will anchor the project
Too many proposals say “we will evaluate model performance” and stop there. Decide what performance means.
For example: export production, remineralisation profiles, carbon sequestration timescales, air-sea CO₂ flux patterns, oxygen utilisation signatures, nutrient tracer distributions. Pick a manageable set and explain why they diagnose the biology-carbon connection.
If you can tie diagnostics to available datasets and uncertainty bounds, even better.
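For illustration, here is a minimal sketch of one such diagnostic in Python: the export efficiency ratio (export flux at a reference depth divided by net primary production). The fields below are synthetic placeholders; in a real project they would come from model output and be evaluated against observational datasets.

```python
import numpy as np

def export_efficiency(export_flux: np.ndarray, npp: np.ndarray) -> np.ndarray:
    """e-ratio: particulate export at a reference depth divided by net
    primary production, both in the same units (e.g. mol C m^-2 yr^-1).
    Low values mean carbon is recycled near the surface; high values mean
    most fixed carbon sinks out of the euphotic zone."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(npp > 0, export_flux / npp, np.nan)

# Hypothetical annual-mean fields on a 1-degree grid.
rng = np.random.default_rng(1)
npp = rng.uniform(2.0, 20.0, size=(180, 360))      # net primary production
export = npp * rng.uniform(0.05, 0.35, npp.shape)  # export flux at 100 m

print(f"global mean e-ratio: {np.nanmean(export_efficiency(export, npp)):.2f}")
```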
3) Treat “new representation” as software plus science, not vibes
If you’re proposing a new parameterisation or process module, explain the engineering reality: where does the code live, how will you validate, how will you keep versions aligned across models, and what is the plan for documentation?
This doesn’t need to read like a Git manual, but reviewers want confidence that you can build something other people can actually run.
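As one familiar illustration of what a small, documented, testable unit can look like, here is a sketch of the classic Martin curve for particle flux attenuation (Martin et al., 1987). It is illustrative only, not a claim about what this project should implement.

```python
def martin_flux(z: float, flux_ref: float, z_ref: float = 100.0,
                b: float = 0.858) -> float:
    """Particle flux at depth z following the Martin curve,
    F(z) = F(z_ref) * (z / z_ref) ** (-b)   (Martin et al., 1987).

    z        : depth in metres, at or below the reference depth.
    flux_ref : flux at the reference depth (any consistent units).
    z_ref    : reference depth in metres, conventionally 100 m.
    b        : attenuation exponent; 0.858 is the classic open-ocean fit,
               and exactly the kind of number a project like this revisits.
    """
    if z < z_ref:
        raise ValueError("Martin curve applies at or below z_ref")
    return flux_ref * (z / z_ref) ** (-b)

# Example: of 10 units exported at 100 m, roughly 1.4 survive to 1000 m.
print(f"{martin_flux(1000.0, 10.0):.2f}")
```

A function this small can carry a docstring, a unit test, and a version number, which is what "other people can actually run it" means in practice.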
4) Use the UK–US partnership to create genuine reciprocity
Avoid the classic failure mode: “UK does everything, US advises.” Instead, set up interlocking responsibilities. For instance, the UK team leads implementation and evaluation design; the US team leads harmonised forcing/experiment protocol and coordinates model intercomparison workflows; both teams jointly interpret and publish.
Spell out the collaboration rhythm: joint sprints, shared code reviews, co-led papers. The more concrete, the more believable.
5) Build a plan for uncertainty, not just a plan for results
Ocean carbon biology is messy. So own it. Include sensitivity experiments that show how robust the improvement is across plausible parameter ranges. Explain how you’ll separate genuine improvement from tuning.
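A minimal sketch of what such a sweep can look like, reusing the illustrative Martin-curve exponent from the earlier example as the swept parameter (the range here is invented for demonstration):

```python
import numpy as np

def martin_flux(z, flux_ref, z_ref=100.0, b=0.858):
    """Martin-curve particle flux at depth z (see the earlier sketch)."""
    return flux_ref * (z / z_ref) ** (-b)

# Sweep the attenuation exponent across a plausible range and record how
# much of the 100 m export survives to 1000 m. If a headline conclusion
# only holds for one value of b, that is tuning, not improvement.
for b in np.linspace(0.6, 1.1, 6):
    frac = martin_flux(1000.0, 1.0, b=b)
    print(f"b = {b:.2f}: fraction reaching 1000 m = {frac:.3f}")
```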
A proposal that acknowledges uncertainty reads like science. One that pretends the ocean will behave nicely reads like marketing.
6) Show that you can finish on time
Reviewers are allergic to projects that promise the moon in year one and then vaguely “synthesise” in year three. Create a work plan where each year produces something concrete: implemented process, baseline comparison, evaluation results, cross-model synthesis.
You’re not just proving your idea is good—you’re proving it’s schedulable.
7) Plan for outputs people can reuse
If you want extra credit, propose deliverables that others can adopt: benchmark scripts, evaluation notebooks, documented parameter sets, or a small set of standardised experiments. Reusability signals seriousness and multiplies impact.
Application Timeline: A Realistic Plan Working Backward from 14 July 2026
The deadline is 14 July 2026 at 16:00 UK time. If you aim to start writing in late June, you’ll end up submitting a proposal that smells like late June. Instead, work backward like a person who enjoys sleeping.
About 5–6 months out, you should lock the scientific core: which biological processes you’ll target, which models you’ll use, what “better” means, and what the experiment design is. This is also when you confirm your US partners and their NASA support assumptions, because collaboration details take time.
At 3–4 months out, draft the work packages and build the evaluation plan. This is when you should also start costing in earnest. FEC budgets rarely “just work” on the first pass, especially with compute, staff time, and institutional overheads.
At 8–10 weeks out, you want a full ugly draft—yes, ugly. Then you circulate it to colleagues who will actually critique it (not just cheer for it). Make time for revision, because the first round of feedback usually reveals that your “obvious” logic wasn’t obvious to anyone else.
At 4–6 weeks out, finalise partnerships, letters of support (if relevant), and your data management plan, and confirm the Notification of Intent requirement has been handled. Then start compliance checks with your research office so you're not chasing signatures on deadline day.
In the final 10–14 days, stop rewriting the science and focus on clarity, formatting, and consistency: the budget matches the work plan, the roles match the timeline, and the claims match the methods.
Required Materials (And How to Prepare Without Losing a Weekend)
The call page doesn't enumerate every attachment, but for UKRI funding opportunities like this you should expect a full set of proposal components submitted through the UKRI Funding Service. Prepare early, because the slowest part is rarely the writing; it's aligning people and documents.
At a minimum, expect to prepare:
- Notification of Intent (NoI) (mandatory). Draft this as a crisp preview: your models, your target processes, your UK–US team, and the core question.
- Case for Support / project narrative explaining the science aim, methods, work plan, and why multi-model comparison is essential.
- Budget at FEC for the UK component, with justification that maps cleanly to staff roles and work packages.
- Data management considerations, especially if you’re producing model outputs, evaluation code, or harmonised datasets. Think about where data will live, who maintains it, and what you’ll share.
Preparation advice: write your work plan and your budget together, not separately. If a work package has no named staff time, it’s imaginary. If staff time appears with no work package, it’s suspicious.
What Makes an Application Stand Out (How Reviewers Tend to Judge These)
Even when reviewers are excited by the topic, they’re looking for signs that the project will produce credible, transferable improvements—rather than a bespoke tweak that only works in one lab.
Strong applications usually share a few traits:
They are specific about the biological processes being improved, and they explain why those processes are a bottleneck for representing carbon storage. They don’t list ten processes; they pick a small set and go deep.
They design clean comparisons across contrasting models. Not “we’ll run a bunch of simulations,” but “we’ll run the same protocol across models, isolate the process change, and evaluate with agreed diagnostics.”
They show competence in implementation and evaluation, which is code, compute, and scientific judgment combined. Reviewers can tell when a team has never shipped model changes at scale.
They include a believable plan for integration with the US team. Since NASA is supporting the US side, reviewers will expect a mature collaboration structure—not a last-minute add-on.
And finally, standout proposals write like they know what they’re doing. Clear aims, clear tests, clear outputs. No fog.
Common Mistakes to Avoid (And What to Do Instead)
Mistake 1: Vague multi-model claims
Saying “we will compare multiple models” without specifying the models, what makes them different, and what experiments you’ll run is a fast way to lose credibility.
Fix: Name the models (or at least define their classes), explain the contrasts, and describe the shared protocol.
Mistake 2: A process upgrade with no evaluation spine
If you propose a new representation but don’t define how you’ll judge improvement, reviewers will assume you’re just tuning.
Fix: Commit to diagnostics, reference datasets, and success criteria. Even if improvement is partial, show how you’ll measure it.
Mistake 3: A budget that doesn’t match the ambition
Big claims paired with a thin staffing plan read like wishful thinking.
Fix: Make sure staff time reflects the real work: development, testing, documentation, and synthesis. Under-costing is not a virtue; it's a risk.
Mistake 4: Treating the NoI as a last-minute checkbox
Because the NoI is mandatory, mishandling it can disqualify you from the full submission.
Fix: Put the NoI deadline on your calendar the moment you decide to apply. Draft it early and get internal sign-off.
Mistake 5: Forgetting practical collaboration mechanics
UK–US projects fail on logistics as often as science: incompatible timelines, unclear responsibilities, data transfer headaches.
Fix: Write down how you’ll collaborate: meeting cadence, shared repositories, experiment coordination, authorship expectations.
Frequently Asked Questions
1) How much money can the UK team request?
The UK component can request up to £625,000 full economic cost (FEC). UKRI/NERC will fund 80% of the FEC, so your institution typically covers the remainder through its usual FEC mechanism.
2) Do I need US collaborators to apply?
The call is framed as a UK–US project with NASA support on the US side, so a serious application will normally include US partners. If you try to submit without them, you’d need a very convincing justification for how the “UK–US project” aim is met.
3) Can US researchers be co-leads?
The listing specifies that the project lead must be based at a NERC-eligible UK research organisation. US researchers can absolutely be central scientific partners, but leadership and submission responsibility sit with the UK side.
4) What does Notification of Intent mean in practice?
It’s an eligibility requirement. Think of it as your formal “heads-up” to the funder that you plan to submit, along with key details. If you skip it or submit it incorrectly, you may not be allowed to submit the full proposal. Treat it like Step 1 of the application, not pre-application fluff.
5) What does contrasting global models mean?
It means models that differ in meaningful ways—structure, complexity, parameterisations, or coupling—so the comparison reveals something real. Two near-identical configurations don’t teach much. Two models with different ecosystem parameterisations, resolution choices, or biogeochemical modules can expose why carbon storage diverges.
6) Is this more about model development or analysis?
Both, but the call explicitly points to incorporating new representations of key processes, which implies real model development. Analysis matters because you must demonstrate what changed and why it matters for carbon storage.
7) Where do I ask questions about fit or logistics?
For scientific and topic questions, try [email protected]; for data-related angles, [email protected]. For system and application mechanics, [email protected] is your friend. Since NASA is involved on the US side, [email protected] is also listed as a contact.
How to Apply (And What to Do This Week)
Start by confirming two things immediately: (1) your UK organisation is eligible for NERC funding, and (2) you can meet the Notification of Intent requirement. Those are the two simplest ways to get knocked out before the science is even considered.
Next, schedule a working session with your US partners to agree on the core scientific claim: which biological processes you’ll improve, which models you’ll implement in, and what experiments will prove the improvement is real. Write down the division of labour in plain language. If you can’t explain it without diagrams and interpretive dance, reviewers won’t follow it either.
Then build your budget alongside the work plan. Tie people to tasks, tasks to outputs, and outputs to evaluation. The cleaner that chain is, the more confident reviewers feel that you’ll finish what you start.
Finally, give yourself time for an internal review. Not a quick skim—an actual critique from someone who models ocean carbon, someone who reviews grants, and someone who can spot unclear writing from three rooms away.
Get Started: Official Opportunity Link
Ready to apply? Visit the official opportunity page for full instructions and submission details: https://www.ukri.org/opportunity/multiple-model-assessment-of-biological-influence-on-ocean-carbon/
