Win a Share of GBP 3 Million for Metascience and Scientometrics Research: UKRI Sandpit on Smarter Research Assessment (April 2026)

JJ Ben-Joseph
📅 Deadline Feb 26, 2026
🏛️ Source UKRI Opportunities

There are grants you apply for with a carefully polished PDF and a prayer. And then there are opportunities where the real application is your brain in the room, live, with other sharp minds, building something from scratch.

This UKRI metascience sandpit is the second kind.

UKRI is convening a four-day, high-intensity “sandpit” (think research bootcamp meets creative lab) focused on scientometrics for research assessment—the metrics, indicators, and evidence that shape how we judge research quality, impact, and value. The goal isn’t to admire indicators from afar. It’s to co-develop projects that create, test, validate, and critique new scientometric measures that could actually stand up in real assessment settings—and be useful for future metascience work.

And yes, there’s serious money behind the seriousness: UKRI expects to fund up to £3 million (at 80% full economic cost) across the projects that emerge from the sandpit.

If you’ve ever looked at a citation metric and thought, “This is… fine, but also deeply weird,” or you’ve sat through an assessment exercise wondering whether anyone in the room had met the messy reality of research, this is your invitation to help design something better. Not perfect. Better. More honest. More defensible. More useful.

One more thing: the sandpit format is not for spectators. If you’re selected, attendance is mandatory across the dates (two days in person, two days remote). You’re not submitting a full proposal now—you’re submitting an expression of interest to earn a seat at the table where proposals are born.

Key Details at a Glance

| Detail | Information |
| --- | --- |
| Opportunity Type | UKRI sandpit (interactive workshop to form teams and co-develop projects) |
| Theme | Metascience: scientometrics for research assessment |
| Funding Available | Up to £3 million total across sandpit-generated projects |
| FEC Rate | 80% full economic cost (typical UKRI model) |
| Application Stage | Expression of interest (EOI) to attend |
| Deadline | 26 February 2026, 16:00 (UK time) |
| Sandpit Dates & Format | In-person: 15–16 April 2026 (UKRI Swindon) • Remote: 21 April and 23 April 2026 |
| Attendance | Mandatory across all four days if selected |
| Funder(s) | UKRI councils including ESRC, EPSRC, MRC, AHRC, BBSRC, NERC, STFC |
| Contact | [email protected] |
| Official Listing | https://www.ukri.org/opportunity/metascience-sandpit-scientometrics-for-research-assessment/ |

What This Opportunity Actually Offers (Beyond the Money)

Let’s start with the obvious: up to £3 million spread across the projects that come out of the sandpit is not pocket change. Even once that total is divided among multiple teams, you’re still looking at the kind of support that can pay for serious methodological work: data access, engineering time, mixed-methods evaluation, stakeholder engagement, and the unglamorous but essential stuff (like validation studies) that makes indicators trustworthy.

But the sandpit offers something more unusual and, for many people, more valuable: structured collision.

In a normal call, you write in isolation, maybe with a couple of collaborators you already know. In a sandpit, UKRI is deliberately creating an environment where metascientists, statisticians, domain researchers, evaluators, bibliometricians, policy-facing people, and “I hate metrics but I care about fairness” skeptics can meet, disagree productively, and build a project that reflects real-world tensions rather than ignoring them.

That matters because research assessment is a bit like urban planning: if you design only for what looks nice on paper, you get traffic jams, weird dead zones, and a lot of angry humans. Indicators behave the same way. They create incentives. They shift behaviour. They can distort the very thing they claim to measure. A sandpit is built for surfacing those second-order effects early—while you can still design around them.

Finally, there’s the meta-benefit: projects coming out of this kind of UKRI convening often leave with clearer pathways to uptake. Not guaranteed adoption, of course. But if your research is meant to influence assessment practice, it helps when the work is born in a setting explicitly aimed at use, critique, and future iteration—not just publication.

What They Mean by Scientometrics and Novel Indicators (Plain English Edition)

“Scientometrics” is the measurement of science—patterns in publications, citations, collaboration networks, funding flows, and other traces research leaves behind.

A “scientometric indicator” is a summarised signal, like citations per paper, field-normalised citation impact, co-authorship network measures, altmetrics, or more complex constructs derived from datasets.
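
If the jargon feels abstract, here is a minimal sketch (Python, invented numbers) of the gap between a raw citations-per-paper average and a simple field-normalised score. The normalisation shown is the generic “divide by the field-and-year average” idea; it is an illustration, not a UKRI-prescribed formula.

```python
# Minimal sketch of two indicators mentioned above, on made-up data.
# Field normalisation here follows the common MNCS-style idea:
# divide each paper's citations by the average for its field and year.

from statistics import mean

# Hypothetical records: (field, year, citation_count)
papers = [
    ("history", 2020, 4), ("history", 2020, 1),
    ("genomics", 2020, 90), ("genomics", 2020, 45),
]

def citations_per_paper(records):
    """Raw average citations: simple, but blind to field differences."""
    return mean(c for _, _, c in records)

def field_normalised_scores(records):
    """Each paper's citations divided by its (field, year) average,
    so 1.0 means 'typical for its field', regardless of field size."""
    groups = {}
    for field, year, c in records:
        groups.setdefault((field, year), []).append(c)
    averages = {key: mean(cs) for key, cs in groups.items()}
    return [c / averages[(field, year)] for field, year, c in records]

print(citations_per_paper(papers))  # 35.0 -- dominated by genomics
print([round(s, 2) for s in field_normalised_scores(papers)])
# [1.6, 0.4, 1.33, 0.67] -- history's best paper now looks strong
```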

The key word in this call is novel—but novelty isn’t the same as “new and shiny.” A truly useful indicator is more like a reliable kitchen scale than a mood ring. It should be:

  • Valid (it measures what it claims to measure),
  • Reliable (it doesn’t wobble wildly for silly reasons; see the sketch after this list),
  • Fair-ish (it doesn’t systematically punish certain fields, career stages, or research types),
  • Hard to game (or at least gaming becomes expensive and obvious),
  • Interpretable (decision-makers can understand what it means and what it does not mean).
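
The “reliable” bullet is the easiest one to check empirically. Here is a minimal sketch, on invented citation counts, of a bootstrap stability check: resample one unit’s papers and see how far a mean-based indicator swings. The data, resample count, and interval method are all illustrative choices, not a prescribed protocol.

```python
# Resample a unit's papers with replacement (a bootstrap) and watch how
# much a mean-based indicator wobbles. Citation data is skewed, so small
# portfolios can swing a lot on a single highly cited paper.

import random

random.seed(0)

# Hypothetical citation counts for one department's papers:
# mostly modest, one blockbuster.
citations = [3, 1, 5, 2, 0, 4, 2, 250, 6, 3]

def bootstrap_means(data, n_resamples=1000):
    means = []
    for _ in range(n_resamples):
        sample = random.choices(data, k=len(data))
        means.append(sum(sample) / len(sample))
    return means

means = sorted(bootstrap_means(citations))
low, high = means[25], means[975]  # rough 95% interval
print(f"point estimate: {sum(citations) / len(citations):.1f}")
print(f"bootstrap 95% interval: {low:.1f} to {high:.1f}")
# The interval is huge relative to the point estimate: this indicator
# is not "reliable" for units this small, whatever its formula says.
```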

And because this is metascience, “critique” is not a footnote. It’s part of the mission. If your project can only survive in ideal conditions, it’s not ready for assessment—assessment is never ideal.

Who Should Apply (And Who Will Thrive in the Sandpit Format)

This opportunity is for people who can handle two truths at once: measurement is necessary, and measurement is dangerous.

If you work in bibliometrics, scientometrics, research evaluation, science policy, responsible metrics, or meta-research, you’re an obvious fit. But UKRI sandpits are often at their best when they include people from the “affected communities” of the system being measured: working scientists, humanities scholars, clinicians, engineers, early-career researchers, interdisciplinary researchers, and research managers who’ve watched metrics shape careers in real time.

You should seriously consider applying if you can bring at least one of these kinds of value to a team:

You can build indicators (quant methods, network science, NLP, data engineering), and you’re willing to test them against uncomfortable edge cases. For example: what does your measure do to fields where books matter more than journal articles? What happens in disciplines with slow citation patterns? What about mission-oriented research where the “outputs” aren’t papers?

You understand research assessment as a socio-technical system. Maybe you’ve studied incentive effects, gaming, Goodhart’s law (“when a measure becomes a target, it stops being a good measure”), or organisational behaviour. You can help a team avoid building a metric that looks accurate but creates perverse incentives.

You work close to decision-making: in a university, funder environment, learned society, or policy space. You know what assessors actually need: clarity, comparability, defensible reasoning, and indicators that support judgement rather than replace it.

You bring qualitative or mixed-methods strength. The best indicator work usually needs ground-truthing: interviews, case studies, expert panels, user testing with assessors, and careful interpretation. If your instinct is “numbers need context,” you belong here.

You’re also a good fit if you’re curious and collaborative. Sandpits reward people who can contribute strongly without clinging to authorship status like it’s oxygen. You’ll build quickly, iterate, scrap ideas, and rebuild. If that sounds energising rather than terrifying, apply.

How the Sandpit Works (So You Can Decide If It Suits You)

A sandpit is not a conference. You don’t show up with a finished paper. You show up with a strong point of view, useful skills, and enough humility to be wrong in public.

Typically, you can expect a rhythm like this: shared framing and problems on day one, exploratory teaming and idea generation, and then progressively tighter project shaping—aims, methods, roles, feasibility, and what success looks like. By the end, teams are usually aiming at a fundable concept with a coherent plan.

This specific sandpit runs across two weeks in four mandatory sessions: two days in-person in Swindon (15–16 April 2026) and two remote days (21 and 23 April 2026). That split matters. It means momentum-building in person, then refinement and decisions online. If you can’t protect those dates, don’t apply half-heartedly—this format punishes divided attention.

Insider Tips for a Winning Expression of Interest (And a Strong Sandpit Presence)

Sandpit selection isn’t only about CV sparkle. It’s about assembling a room that can produce fundable, high-integrity projects. Here’s how to position yourself.

1) Pitch your “tool” and your “why,” not your entire life story

Your EOI should make it effortless to answer two questions: What can you do? and Why do you care about this problem? For instance, “I build field-normalised indicators using open bibliographic data, and I’m interested in where they fail for interdisciplinary work” is stronger than a general statement about loving research excellence.

2) Show you understand the difference between correlation and meaning

People new to assessment often treat an indicator as a verdict. People who’ve lived through assessment know it’s a clue at best. If you can articulate limitations clearly—without becoming anti-metric—you’ll look like someone UKRI can trust around sensitive decisions.

3) Bring an “evaluation plan” mindset from the start

Novel indicators die in two ways: they’re untestable, or they’re only tested on the dataset that makes them look good. Signal that you’d push for validation strategies: comparing against peer review outcomes, checking stability over time, testing across fields, and running sensitivity analyses. Say it plainly. It lands.
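
To make that concrete, here is a minimal sketch of the first of those validation strategies: benchmarking an indicator against peer review outcomes with a rank correlation. It assumes scipy is available, and every number is invented.

```python
# Benchmark an indicator against peer review outcomes using Spearman's
# rank correlation. In practice you'd also split by field and check
# stability over time windows.

from scipy.stats import spearmanr

# Hypothetical paired scores for the same 8 outputs.
indicator_scores = [0.4, 1.2, 0.9, 2.1, 0.3, 1.8, 0.7, 1.1]
peer_review_scores = [2, 3, 3, 4, 1, 4, 2, 3]  # e.g. panel grades 1-4

rho, p_value = spearmanr(indicator_scores, peer_review_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# Interpretation discipline matters as much as the number: a moderate
# rho says the indicator tracks panel judgement on average, not that
# it can substitute for it on any individual case.
```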

4) Offer a bridge to stakeholders

Assessment is a contact sport. If you can help access real users—REF-adjacent communities, research managers, disciplinary panels, funder processes—say so. Even better if you can describe how you’d run user-centred testing without turning it into a box-ticking exercise.

5) Have one brave example of what you would critique

This sandpit explicitly includes “critique.” That’s permission to be intelligent and honest. You might mention, for example, how citation-based indicators can reward popularity over rigour, or how altmetrics can mirror media dynamics more than scholarly value. The trick: critique the tool, not the people using it.

6) Speak to fairness without turning it into vague virtue

“Equity” and “responsibility” are easy words to say and hard things to operationalise. If you can name a concrete risk—field bias, language bias, career-stage bias, team-science undervaluation—and propose a measurable way to detect it, you’ll stand out.
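
Here is one minimal sketch of what “a measurable way to detect it” can look like for field bias: compare each field’s share of the global top quartile. The names and numbers are invented, and a real check would use proper percentile methods and far more data.

```python
# If an indicator is field-fair, each field's outputs should reach the
# global top quartile at roughly similar rates. A skewed split is a
# signal the indicator needs normalisation before assessment use.

from collections import defaultdict

# Hypothetical (field, indicator_score) pairs.
scores = [
    ("history", 0.2), ("history", 0.5), ("history", 0.4), ("history", 0.9),
    ("genomics", 1.4), ("genomics", 2.2), ("genomics", 0.8), ("genomics", 1.9),
]

values = sorted(s for _, s in scores)
threshold = values[int(0.75 * len(values))]  # rough top-quartile cutoff

top_share = defaultdict(lambda: [0, 0])  # field -> [in_top, total]
for field, s in scores:
    top_share[field][1] += 1
    if s >= threshold:
        top_share[field][0] += 1

for field, (in_top, total) in top_share.items():
    print(f"{field}: {in_top}/{total} outputs in global top quartile")
# history: 0/4, genomics: 2/4 -- history is shut out entirely.
```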

7) Be the person teams want to build with

Sandpits are intense. Demonstrate you can collaborate: you’ve worked across disciplines, you can translate jargon, you can write clearly, you can prototype quickly, and you can disagree without it turning personal. Selection panels notice that energy, even on paper.

Application Timeline (Working Backward from 26 February 2026)

The deadline is 26 February 2026 at 16:00. That’s an unforgiving timestamp, and UKRI systems don’t tend to reward last-minute heroics.

Plan to have your EOI essentially written by the end of January. Early February is for tightening: removing jargon, clarifying what you bring, and making sure your experience matches the sandpit goals (development, validation, critique, and future metascience use).

Aim to get at least one external read-through by mid-February—ideally from someone adjacent to your field. If they can’t quickly tell what you contribute, a selector skimming dozens of EOIs won’t either.

In the final week, do your admin checks: confirm the sandpit dates are protected, speak to your line manager if you need approval for travel/time, and prepare a short “if selected, I will attend all sessions” statement in your own words. Submit at least 48 hours early. Not because you’re anxious—because you’re professional.

Required Materials (And How to Prep Without Overthinking It)

UKRI indicates you’ll complete an expression of interest form. EOIs usually ask for some combination of your background, motivation, relevant skills, and what you’d contribute.

Prepare these components so you’re not writing from zero the night before:

  • A crisp bio (short paragraph) that highlights relevant methods, domains, and collaboration experience.
  • A motivation statement that connects your interests to research assessment and indicator integrity (development and critique).
  • Examples of relevant work, kept practical: datasets you’ve worked with, methods you’ve used, evaluation studies you’ve done, tools you’ve built, or policy/assessment contexts you understand.
  • Availability confirmation for all sandpit dates, including the in-person Swindon days and the remote sessions.
  • A short “what I hope comes out of this” note, framed as outcomes (validated indicators, guidance on appropriate use, comparative studies, open resources), not just aspirations.

What Makes Projects Stand Out Once You Are in the Room

Inside the sandpit, good ideas are common. Fundable ideas are rarer. The projects that tend to rise share a few traits.

They start with a real assessment need, not a metric looking for a job. “We can compute X” is weaker than “Assessors struggle with Y; here’s a measurable signal that helps, and here’s what it cannot tell you.”

They treat validation as the main event. Strong teams design studies that challenge their own indicator: cross-field comparisons, robustness checks, replications, benchmarking against human judgement, and sensitivity to data quality problems.

They acknowledge incentives. If your indicator could be gamed, say how you’ll detect gaming or reduce the payoff. If it might shift behaviour, plan to study those behavioural effects rather than acting surprised later.
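
As one concrete example of a gaming check (a sketch on invented numbers, not a validated detector): flag units whose self-citation rate sits far above the cohort norm, then route the flags to human review rather than treating them as verdicts.

```python
# Flag outlier self-citation rates with a crude z-score screen.
# A flag is a prompt for human scrutiny, not evidence of misconduct.

from statistics import mean, stdev

# Hypothetical self-citation rates (self-citations / total citations).
rates = {
    "unit_a": 0.08, "unit_b": 0.11, "unit_c": 0.09,
    "unit_d": 0.10, "unit_e": 0.34, "unit_f": 0.12,
}

mu = mean(rates.values())
sigma = stdev(rates.values())

for unit, rate in rates.items():
    z = (rate - mu) / sigma
    if z > 2:  # crude threshold; real work would model field norms too
        print(f"{unit}: self-citation rate {rate:.2f} (z = {z:.1f}) -- review")
```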

They build interpretability and guidance in from the beginning. The best indicator in the world is useless if decision-makers don’t understand when to trust it. Projects that produce usable guidance—not just a new number—tend to have longer lives.

Finally, they are honest about where metrics should stop and expert judgement should begin. UKRI doesn’t need another scheme that pretends to automate wisdom. They need evidence-based tools that support better decisions.

Common Mistakes to Avoid (So You Do Not Self-Sabotage)

One classic mistake is treating the sandpit like a conference talk: “Here’s my work, please admire it.” That’s not the job. The job is to show you can build with others toward the sandpit’s goals.

Another is arriving as either a metrics zealot or a metrics nihilist. If your vibe is “numbers solve everything,” you’ll miss social reality. If your vibe is “all metrics are evil,” you’ll miss the fact that assessment will use signals regardless—better to shape them than complain afterward.

People also undersell practical constraints. You will be working across four mandatory days, in-person and remote. If your schedule is chaos, it will show.

A quieter mistake: proposing “novel” indicators that are mostly rebranded versions of existing ones, with no validation plan and no explanation of what changes in assessment practice. Novelty without usefulness is just trivia.

Finally, don’t forget that UKRI funding is typically at 80% FEC. If your institution expects the remaining 20% to be covered internally, you’ll want early conversations with your research office once you’re moving toward a project.

Frequently Asked Questions

Is this a grant I apply for directly, or a workshop first?

It’s a workshop first. You apply via an expression of interest to attend the sandpit. The funded projects are expected to emerge from the sandpit process.

Do I have to attend every session?

Yes. UKRI states attendance is mandatory for selected participants across all four days: in-person on 15–16 April 2026 in Swindon and remote on 21 and 23 April 2026.

Can I apply if I am not a scientometrics specialist?

Yes—if you bring something genuinely useful to indicator development, validation, critique, or assessment practice. Mixed-methods researchers, domain experts, research managers, and policy-facing professionals can add real value.

What does 80% full economic cost mean in practice?

In the UKRI model, full economic cost (FEC) is the total cost of the research to your organisation. UKRI commonly pays 80%, and your institution covers the remainder. For example, a project costed at £500,000 FEC would draw £400,000 from UKRI, leaving £100,000 for your organisation to absorb. Your research office will help you model this if you end up on a project.

Is the £3 million for one project?

No. UKRI expects to fund up to £3 million total across the set of projects that come out of the sandpit.

What kind of projects fit this call best?

Projects that develop and rigorously test new or improved scientometric indicators for research assessment, including critical analysis of limitations and appropriate use. Strong projects usually combine technical methods with real-world validation and stakeholder relevance.

Can I contact someone with questions before applying?

Yes. UKRI provides a contact email: [email protected]. Use it for clarification, especially about fit and sandpit logistics.

What if I cannot make the Swindon dates but can do the remote ones?

Then this is not the right round for you. In-person attendance on 15–16 April 2026 is part of the required commitment.

How to Apply (And What to Do This Week)

First, block the sandpit dates in your calendar now: 15–16 April 2026 (in person, Swindon) and 21 & 23 April 2026 (remote). If you can’t protect them, save yourself the stress and don’t apply.

Next, sketch a one-paragraph EOI core: the skills you bring, the assessment problem you care about, and how you think indicators should be tested. Keep it sharp enough that someone skimming can repeat it back to you accurately.

Then, read the official opportunity page carefully and complete the expression of interest form well before the deadline. Give yourself time to revise—good EOIs feel inevitable, not frantic.

Finally, if you’re unsure whether your background fits, email UKRI at [email protected] with a short description of your expertise and interest area. A clear, specific question gets a clearer, more useful answer.

Apply Now and Read the Full Details

Ready to apply? Visit the official UKRI opportunity page here: https://www.ukri.org/opportunity/metascience-sandpit-scientometrics-for-research-assessment/

If you want to help build research assessment indicators that are smarter, fairer, and harder to misuse, this sandpit is one of the rare places where that ambition is not only welcome—it’s the whole point.