Opportunity

Keep Your Research Software Alive With Up to £150,000: The 2026 UKRI SSI Research Software Maintenance Fund Round Two Grant

JJ Ben-Joseph
📅 Deadline Feb 25, 2026
🏛️ Source GCRF Opportunities

Research software is the plumbing of modern science. It’s not glamorous. Nobody gives keynote talks about “fixing dependency hell and updating the docs.” But when it breaks, whole labs grind to a halt like a city whose water main just snapped.

That’s why the Software Sustainability Institute (SSI) Research Software Maintenance Fund is such a breath of fresh air: it pays for the unsexy work that keeps research moving. Not a shiny new prototype. Not a “we’ll refactor someday.” Actual maintenance—right now—so the tools your community relies on don’t quietly rot in a GitHub repo while PhD students reinvent them for the third time.

Round two is open, backed by the UKRI Digital Research Infrastructure programme, with grants of up to £150,000 for up to 12 months. There’s £4.8 million in the pot across the fund. If you maintain software that’s used beyond your immediate team—software that underpins methods, data pipelines, analysis, modelling, or community workflows—this is one of the rare opportunities that treats your work like the critical infrastructure it is.

And yes: it’s competitive. But it’s also absolutely worth the effort, because the alternative is the usual “best-effort maintenance” funded by leftover scraps of time and someone’s fading optimism.

Key Details at a Glance

  • Funding type: Grant (software maintenance and sustainability)
  • Funder / programme context: Software Sustainability Institute (SSI), supported by the UKRI Digital Research Infrastructure programme
  • Total funding available: £4.8 million (fund-level)
  • Max award (Round two): Up to £150,000
  • Project length: Up to 12 months
  • Deadline: 25 February 2026, 16:00 (UK time)
  • Status: Open
  • Best for: Teams maintaining widely used research software that needs reliability, security, documentation, and long-term sustainability work
  • Official opportunity page: https://www.ukri.org/opportunity/ssi-research-software-maintenance-fund-round-two/

What This Maintenance Grant Actually Pays For (And Why It Matters)

Let’s translate “maintenance” into human language. This fund is about keeping essential research software usable, trustworthy, and available—especially the kind of software that lots of people depend on but nobody has time to keep healthy.

In practice, a strong maintenance proposal typically centres on work like stabilising releases, reducing technical debt, improving test coverage, modernising build systems, tightening security practices, making the software easier to install, and writing the documentation that turns “a clever tool” into “a tool other people can actually use.”

The biggest hidden win here is that maintenance work creates compounding returns. Fixing installation pain points doesn’t just help one user—it helps every future user. Adding a continuous integration pipeline doesn’t just catch one bug—it prevents dozens. Improving onboarding docs doesn’t just save you time answering emails—it grows your contributor base, which is the closest thing software has to a retirement plan.
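
To make the CI point concrete, here is a minimal sketch of the kind of regression test such a pipeline would run on every change. Everything project-specific in it (the mytool package, the load_table() function, the expected behaviour) is hypothetical; pytest is simply one common choice for Python projects.

```python
# test_regression.py - an illustrative pytest regression test. The package
# name ("mytool") and the load_table() function are hypothetical stand-ins
# for whatever your software actually exposes.
import pytest

mytool = pytest.importorskip("mytool")  # skip cleanly where the package isn't installed


def test_load_table_keeps_missing_values(tmp_path):
    """A bug once reported by a user becomes a permanent guard in CI."""
    csv = tmp_path / "input.csv"
    csv.write_text("id,value\n1,3.5\n2,\n")  # second row has a missing value

    table = mytool.load_table(csv)

    assert len(table) == 2
    assert table[1]["value"] is None  # missing data must not silently become 0
```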

A £150,000 grant over 12 months can realistically cover a focused package of improvements: dedicated developer time, targeted community work, perhaps a part-time research software engineer (RSE), plus the essential “glue tasks” that always get neglected—release management, issue triage, dependency updates, and long-overdue cleanup. In other words: the stuff that makes software survive contact with reality.

If your software is already popular, this fund can help you keep up with demand. If it’s mission-critical in a niche community, it can help you avoid becoming a single point of failure (“Only Priya knows how the parser works, and Priya is graduating in June”). Either way, the goal is the same: keep research software reliable and accessible.

Who Should Apply (And Who Should Probably Sit This One Out)

You’re a strong candidate if you can say, with a straight face and evidence to match, that your software matters beyond your desk.

That usually looks like one (or more) of these realities:

Your software is a dependency in other projects, cited in papers, embedded in analysis pipelines, or used by multiple institutions. It might be a library, a workflow tool, a modelling framework, a domain-specific package, or an enabling piece of infrastructure that makes a whole research method possible. The key is that it’s research software that other people rely on, not a one-off script stapled to a single paper.

You should also apply if the maintenance needs are clear and time-bounded. A 12-month award is perfect for “we need to stabilise, modernise, and make this maintainable,” not “we should rewrite everything from scratch and also build an app store.” Maintenance can be ambitious, but it should be believable.

Real-world examples of good fits:

  • A widely-used R/Python package where dependency updates and failing tests are starting to break user workflows, and you can map out a plan to restore reliability.
  • A research community tool where documentation is outdated, onboarding is painful, and contributor activity is dropping—so you propose improvements that make contribution and use dramatically easier.
  • A mature codebase with high scientific value but a fragile release process; you propose proper versioning, automated testing, packaging improvements, and a defined support model.
  • A tool used by multiple groups where the original developer has moved on; you propose governance, maintainership changes, and sustainability planning so it doesn’t die of neglect.

On the other hand, you should think twice if your “maintenance plan” is really a disguised feature roadmap. This fund is not a candy store for new functionality. If your application reads like “we’ll add lots of new stuff and call it maintenance,” reviewers will notice. Maintenance is about stability, reliability, accessibility, and sustainability—the long game.

What a Realistic £150,000 Maintenance Plan Can Look Like

A good maintenance proposal usually has a satisfying sense of geometry: a messy pile of issues becomes a clear set of work packages with measurable outcomes.

For example, you might outline a 12-month plan that includes: dependency and compatibility updates in the first quarter, automated testing and continuous integration in the second, documentation and onboarding improvements in the third, and governance plus release processes in the fourth. That’s not the only approach, but it shows you understand sequencing: you don’t paint the walls before you fix the roof.
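
If it helps to picture that first quarter, a compatibility audit can start as something as unglamorous as a short script that reports which installed dependencies have fallen below the versions your code is actually tested against. A minimal sketch for a Python project follows; the package names and version floors are purely illustrative.

```python
# dep_audit.py - a rough first-quarter compatibility check. The packages and
# minimum versions listed here are illustrative, not recommendations.
from importlib.metadata import PackageNotFoundError, version

TESTED_FLOORS = {
    "numpy": (1, 24),
    "pandas": (2, 0),
}


def parse(ver: str) -> tuple:
    """Turn '1.26.4' into (1, 26, 4) for a crude but serviceable comparison."""
    return tuple(int(part) for part in ver.split(".") if part.isdigit())


for name, floor in TESTED_FLOORS.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        print(f"{name}: not installed")
        continue
    verdict = "ok" if parse(installed) >= floor else "below tested floor"
    print(f"{name}: {installed} ({verdict}; tested against >= {'.'.join(map(str, floor))})")
```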

It also helps to think in “user pain removed per pound spent.” If a change will save hundreds of researchers hours of frustration—or reduce risk of incorrect results—that’s exactly the kind of value maintenance funding is meant to protect.

Insider Tips for a Winning Application (From the School of Hard-Won Maintenance)

1) Treat maintenance like a research output, not a housekeeping chore

Your first job is to persuade reviewers that maintenance is intellectually serious because it protects scientific reliability. Make the case that breakage, bit-rot, and opaque workflows threaten reproducibility the way a contaminated lab reagent would.

Plain English beats jargon here. Explain what fails today, who it affects, and how your work will prevent future failures.

2) Bring receipts: usage evidence beats enthusiasm

Don’t just claim “widely used.” Show it. Depending on what’s appropriate for your project, that could mean download statistics, citations, GitHub clones, reverse dependencies, known adopter institutions, training workshop attendance, or testimonials from research groups.

If you have a user community, demonstrate it like you’re showing a landlord you can pay rent.

3) Define “done” with measurable outcomes

Maintenance is notorious for becoming a bottomless pit. Your proposal should make reviewers feel safe by stating what completion looks like.

Good examples include: “release process automated,” “test coverage increased from X to Y,” “time-to-install reduced,” “docs rewritten for onboarding,” “supported versions defined,” “security scanning in place,” “governance model adopted,” “bus factor improved by adding maintainers.”
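
As one concrete illustration of a checkable “done,” the sketch below fails a CI job if total test coverage slips under an agreed target. It assumes coverage.py has already collected data (for example by running coverage run -m pytest first); the 80% threshold is an example, not a rule.

```python
# check_coverage.py - a minimal coverage gate for CI, assuming coverage.py
# has already recorded data (e.g. by running "coverage run -m pytest").
# The 80% threshold is illustrative; use whatever target the work plan commits to.
import sys

import coverage

TARGET = 80.0

cov = coverage.Coverage()
cov.load()            # reads the .coverage data file from the test run
total = cov.report()  # prints the per-file table and returns the total %

if total < TARGET:
    print(f"Coverage {total:.1f}% is below the agreed {TARGET:.0f}% target")
    sys.exit(1)

print(f"Coverage {total:.1f}% meets the agreed target")
```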

4) Make your work packages ruthlessly scoped

A 12-month window is generous—until it isn’t. Pick the improvements that change the trajectory of the software, not every nice-to-have. Reviewers will prefer a smaller set of high-impact, fully delivered improvements over a sprawling plan that ends mid-flight.

If there’s a mountain of tech debt, don’t promise to remove it all. Promise to remove the pieces that cause the most outages, user pain, or maintenance burden.

5) Budget like someone who has actually shipped software

Your budget tells a story. If your work is mostly engineering effort, allocate meaningful time for an RSE/developer and include time for code review, testing, release management, and documentation. Maintenance isn’t just writing code—it’s validating it, packaging it, and making it survivable.

Also plan for the time sink nobody admits: issue triage, user support, and coordinating contributions.

6) Talk sustainability without pretending you can predict the next decade

“Sustainability” doesn’t require a crystal ball. It requires choices: governance, maintainer roles, contribution guidelines, release cadence, and a plan for what happens after funding ends.

If the software will remain free and open source, say how you’ll encourage external contributions and reduce dependence on one person. If the software is institutionally supported, explain what that support looks like in practical terms (staff time, hosting, long-term ownership).

7) Write for the smart non-specialist reviewer

Even if reviewers know software, they may not know your domain. Explain what the software enables in research terms, then connect maintenance outcomes to research outcomes. Keep the scientific value front and centre, and make the technical plan readable.

Application Timeline (Working Backward From 25 February 2026)

Treat the 25 February 2026, 16:00 deadline as a hard stop. Not “submit at 15:59 with a shaky Wi‑Fi connection” hard stop. More like “submit two days early and sleep that night” hard stop.

A sensible timeline looks like this:

8–10 weeks before the deadline (mid–late December 2025): Decide scope. Audit your backlog, identify the highest-impact maintenance tasks, and draft 3–5 work packages with clear outcomes. Start gathering evidence of usage and impact—citations, adoption, dependency info, user testimonials.

6–8 weeks before (early January 2026): Write the first full draft of your case for support. At this stage, you want completeness, not perfection. Make sure your narrative answers: What breaks today? What will be improved? Who benefits? How will you measure success?

4–6 weeks before (late January 2026): Get external eyes on it. Ideally, one reviewer who understands the domain and one who doesn’t. If the non-domain person can’t explain the value back to you, rewrite until they can.

2–3 weeks before (early February 2026): Finalise budget, confirm staffing, and tighten the delivery plan. Ensure the scope fits the time and money. Cut anything that’s aspirational fluff.

Final week: Proofread, check every requirement, and submit early enough to handle portal problems, file format issues, and last-minute clarifications.

Required Materials (What You Should Prepare Even If the Page Is Brief)

The official listing is concise, but maintenance grants typically require a familiar set of components. Plan to assemble, at minimum, the following—and tailor them to whatever the official guidance specifies.

  • Project description / case for support: Your narrative explaining the software, its importance, current maintenance risks, and the work plan. This is where you translate technical debt into research risk and user burden.
  • Work plan with milestones: A timeline broken into phases with concrete deliverables. Reviewers should be able to tell what happens in month 2 versus month 10.
  • Budget and justification: A clear explanation of costs (primarily people time) tied to deliverables. If you’re paying for developer time, say what they will do and what “done” means.
  • Evidence of use and community need: Citations, user numbers, adoption statements, dependency graphs, letters/emails of support where appropriate.
  • Team capability statement: Who will do the work, what their roles are, and why they’re qualified to maintain this codebase without breaking it.

If your software touches sensitive data, security, or critical decision-making, be explicit about how maintenance will reduce risk (testing, validation, release discipline, auditability).

What Makes an Application Stand Out to Reviewers

The best applications make reviewers feel three things at once: this software matters, this team can deliver, and this plan will leave the software in a materially better state.

Strong proposals usually have:

A crisp explanation of the software’s role in research. Not just “it’s used in physics,” but “it enables X method, supports Y workflow, and is referenced in Z outputs.” Reviewers should understand, quickly, what breaks if the software degrades.

A realistic, ordered plan. Maintenance is often a tangle of tasks; you need to show you can untangle it. Sequencing matters. So does prioritisation. And so does acknowledging constraints—like needing to maintain backward compatibility or supporting multiple platforms.

Clear impact beyond the grant period. Not “we hope it continues,” but concrete actions that reduce future maintenance load: automation, docs, contributor pathways, governance, and defined support commitments.

In short: reviewers reward applications that treat maintenance as a discipline, not a wish.

Common Mistakes to Avoid (So You Don’t Accidentally Self-Sabotage)

Mistake 1: Calling new features “maintenance” and hoping nobody notices

Reviewers aren’t allergic to improvements, but this fund is about stability and sustainability. If you want new features, frame them as strictly necessary to reliability, accessibility, or compatibility—and be honest about why.

Mistake 2: Writing a vague plan with no measurable finish line

“Improve code quality” is not a deliverable. “Add CI, increase test coverage, implement automated releases, and publish a maintenance policy” is.

Specificity is your friend. Vagueness is how maintenance plans quietly turn into never-ending projects.

Mistake 3: Underestimating documentation and onboarding

Docs are not decorative. They are how your community survives turnover, how new contributors join, and how users avoid mistakes. If your project has confusing setup or outdated docs, ignoring that is like funding a bridge repair but refusing to buy bolts.

Mistake 4: A budget that does not match the work

If your plan is engineering-heavy but your budget doesn’t pay for engineering time, reviewers will wonder who is doing the work—your future self at 2 a.m.? Conversely, if you request substantial funds without a clear task-to-cost logic, it can look sloppy.

Mistake 5: Pretending there are no risks

Every maintenance project has risks: breaking changes, dependency conflicts, limited maintainer time, tricky refactors. Name the risks and explain how you’ll manage them (phased releases, deprecation policies, test suites, staged rollouts). That reads as competence, not pessimism.
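
To show what a deprecation policy looks like in code rather than in prose, here is a minimal sketch of a shim that keeps an old entry point working for one more release while steering users to the new one. The function names and the one-release grace period are hypothetical examples, not requirements of the fund.

```python
# A minimal deprecation shim: the old name keeps working for one release while
# warning users. Both function names are hypothetical examples.
import warnings


def tidy_records(records):
    """The maintained, going-forward entry point."""
    return sorted(records)


def clean_records(records):
    """Deprecated alias kept so existing user scripts don't break overnight."""
    warnings.warn(
        "clean_records() is deprecated and will be removed in the next major "
        "release; use tidy_records() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return tidy_records(records)
```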

Mistake 6: Forgetting the human factor (governance, ownership, bus factor)

If only one person can merge PRs or cut releases, you have a single point of failure. Reviewers know this. Show how the project will become maintainable by a team, not a hero.

Frequently Asked Questions

1) Is this fund for building new research software or maintaining existing tools?

It’s primarily aimed at maintenance and sustainability—keeping key research software reliable and accessible. If you’re proposing brand-new development, you’ll need to justify why it is essential to maintenance outcomes rather than a separate product roadmap.

2) How much funding can we request, and for how long?

Round two offers up to £150,000 for projects lasting up to 12 months. Think of it as a focused intervention: enough time and money to meaningfully improve stability and longevity, not an endless support contract.

3) What counts as research software?

In practical terms, research software is code that enables research: analysis tools, modelling frameworks, pipelines, libraries, community platforms, and similar tools. The strongest fit is software that supports a broader research community, not just a single project team.

4) What if our software is essential but not famous?

Fame is optional; evidence of reliance is not. If your tool is critical in a smaller field, show it through user statements, known deployments, dependency relationships, or outputs that depend on it.

5) Can we include community building, training, or governance work?

Yes—when it directly supports sustainability. Good examples: contributor guides, maintainership structures, release policies, onboarding improvements, and practices that reduce future maintenance burden.

6) What should we do if we are already behind on maintenance?

You’re not alone. The best approach is to prioritise ruthlessly: address the issues that cause failures, block installs, create incorrect results, or make contributions impossible. Write a plan that makes the backlog shrink and stay shrunk.

7) When should we start preparing?

Now. Maintenance proposals look simple until you realise you need evidence, a scoped plan, staffing clarity, and a budget that matches reality. Give yourself at least 6–8 weeks to produce something polished.

8) Where do we find the official guidance and submission instructions?

Use the official UKRI opportunity page (linked below). That’s where you’ll find the definitive eligibility rules, required documents, and the submission route.

How to Apply (And What to Do This Week)

First, read the official call page carefully and treat it as the rulebook. Then do a quick internal “maintenance triage” meeting: list the top failures, bottlenecks, and risks in your software, and rank them by harm to users and research reliability.

Next, sketch a 12-month plan with milestones you can actually hit. If you can’t explain your plan in five minutes to a colleague outside your domain, simplify it until you can. Maintenance funding rewards clarity.

Finally, gather evidence of impact—citations, user metrics, adoption stories—and line up the people who will do the work. If your plan depends on one overstretched maintainer, fix that in the plan itself: define roles, add maintainers, and document decision-making.

Ready to apply? Visit the official opportunity page:
https://www.ukri.org/opportunity/ssi-research-software-maintenance-fund-round-two/