
AI and Humanities Research Grants 2026: How to Win a Share of GBP 780,000 and CAD 1,000,000 for Human Centered AI Design


Reviewed by JJ Ben-Joseph
📅 Deadline May 28, 2026
🏛️ Source GCRF Opportunities

Artificial intelligence has a talent for making smart people act a little starry-eyed. New model, new benchmark, new demo that writes poetry (and occasionally invents court cases). But if you’ve spent any time around real-world AI—especially AI that touches hiring, healthcare, education, immigration, policing, social services, or the news—you know the truth: the hard part isn’t making AI “work.” It’s making AI behave in society.

That’s where the humanities stop being the side dish and become the main course. Ethics, history, philosophy, linguistics, cultural studies, media theory, law, and design research aren’t there to sprinkle a little moral seasoning on top of a technical system. They’re the disciplines that ask the questions engineers often don’t have time (or incentives) to ask: Who is this for? Who gets harmed? What values are being baked in? What gets erased? What gets normalized?

This invite-only opportunity—born from a sandpit workshop in Montreal—puts that idea into funding form. It’s not trying to glue “ethics” onto AI at the end like an awkward bumper sticker. The whole point is to put humanities insights and methods at the heart of AI technology design.

And yes, it’s competitive. Only up to five grants will be funded. But if you attended the Montreal sandpit in February 2026, you’re already in the room where the decision-makers expect serious, cross-border, genuinely interdisciplinary ideas. Now you have to turn the energy of that workshop into a proposal that reads like a project the world actually needs—because it is.

At a Glance: Key Facts for the AI Humanities Sandpit Grants (Canada, UK, US)

  • Funding type: Research Grant (invite only)
  • Focus: Integrating humanities methods and insights into AI technology design
  • Who can lead: Project Lead (PL) must have attended the Montreal sandpit (February 2026) and be invited to apply
  • Eligible locations: UK, Canada, US (via eligible research organisations)
  • Total awards: Up to five grants
  • UK funding pot: Up to GBP 780,000 total available for UK-based teams
  • Canada funding pot: Up to CAD 1,000,000 total available for Canada-based teams
  • US participation: US team members can be funded from UK or Canada budgets (as applicable)
  • Deadline: 28 May 2026, 16:00 (local time as stated on the funder page)
  • Latest start: Project must begin before 1 Oct 2026
  • Latest end date: Project must end before 30 Apr 2028
  • Official page: https://www.ukri.org/opportunity/artificial-intelligence-humanities-sandpits-canada-uk-and-us-invite-only/

What This Opportunity Offers (And Why It’s More Than Money)

Let’s talk about what you’re actually being handed here, beyond the headline figures.

First, this is targeted funding. That matters. General AI calls often reward whatever looks shiny in a technical appendix. This one is explicitly asking you to place humanities thinking at the center—meaning your proposal can be brave about questions like interpretability, accountability, meaning, language, culture, narrative, power, and human experience. You’re not apologizing for your discipline; you’re being paid for it.

Second, it’s international by design. The call spans the UK, Canada, and the US, with budgets allocated to UK- and Canada-based teams and flexibility for US collaborators to be supported through those budgets. In practice, that means you can build a team where the methods match the problem: a linguist in Montreal, a responsible AI researcher in the UK, a law-and-technology scholar in the US—whatever configuration makes the research stronger.

Third, the timeline (start before 1 October 2026, end before 30 April 2028) gives you room to do work that isn’t just a quick prototype. That’s important for humanities-centered AI design, which often requires deep user research, careful stakeholder engagement, iterative evaluation, and time to translate insights into technical and governance changes.

Finally, because this opportunity comes from a sandpit, you’re not starting from zero. Sandpits are pressure cookers for collaboration: you’ve already tested your ideas in conversation, negotiated scope, and found intellectual chemistry. The grant becomes the next stage: taking that early spark and building something that can survive contact with reality (and peer review).

Who Should Apply (Eligibility, Plus Real-World Fit Checks)

There are two gates you can’t charm your way around.

You must have been invited to apply. And if you want to be the Project Lead (PL), you must have attended the sandpit workshop in Montreal in February 2026. This is not a “well, my colleague attended” situation. The funder is using the sandpit as the qualifying round.

Next, you must be based at an eligible UK, Canadian, or US research organisation that can receive funding under the relevant rules. In plain English: universities and recognised research organisations are the usual suspects. If you’re in a museum, archive, think tank, or nonprofit research institute, you’ll need to confirm eligibility through your institutional research office.

Now the more interesting question: who is this for intellectually?

This call is a strong fit if you’re building AI systems—or studying them—in ways that benefit from humanities methods like textual analysis, hermeneutics, ethnography, discourse analysis, critical theory, participatory design, historical comparison, or normative ethics. It’s also a fit if you’re already working with technical colleagues and you’re tired of being asked to write “the ethics section” after everything has been decided.

A few examples of teams that tend to thrive in calls like this:

  • A computer science group building language technologies partnering with linguists and sociolinguists to prevent harm to minoritized dialect communities—then testing the system with real speakers in context, not just in benchmarks.
  • A digital health AI project teaming up with philosophers of medicine and disability studies scholars to rethink what “outcomes” should mean, and how models encode assumptions about bodies and normality.
  • A generative AI tool for education designed with historians of pedagogy and media scholars to examine what kinds of authority and citation practices the tool encourages.

If your proposal treats humanities as window dressing, reviewers will smell it a mile away. If, however, your project genuinely needs humanities thinking to function—this is your moment.

The Core Idea: Putting Humanities at the Heart of AI Tech Design

The phrase “humanities insights and methodologies” can sound abstract until you translate it into project choices.

“Insights” means you’re producing knowledge about meaning, values, power, interpretation, history, culture, language, and human behavior—things AI systems bump into constantly.

“Methodologies” means you’re not just writing opinions. You’re using disciplined approaches: close reading of training data and outputs; ethnographic observation of model use; discourse analysis of how AI reshapes institutional language; participatory workshops with affected communities; archival research to trace how older technologies created similar harms; normative frameworks to evaluate trade-offs.

And “at the heart of AI tech design” means these approaches influence what gets built, how it gets evaluated, and what gets shipped—or refused.

In other words: not an ethics appendix. A design engine.

Insider Tips for a Winning Application (The Stuff Reviewers Actually Reward)

Because this is invite-only, you’re competing against people who were also deemed promising in Montreal. Your edge will come from clarity, credibility, and the feeling that your team can execute.

Here are seven practical ways to improve your odds.

1) Write the problem like a human, not a white paper

Start with a specific AI design problem anchored in a real context: a sector, a user group, a decision point, a failure mode. “Bias in AI” is a category. “A triage model that deprioritises nonstandard symptom descriptions in multilingual communities” is a problem.

Then make the stakes legible. Who gets harmed? Who carries the cost? What does “better” look like?

2) Make the humanities contribution unavoidable

A great test: if you removed the humanities components, would the project still basically work? If the answer is yes, you’ve got a problem.

Spell out how humanities methods change the design: new evaluation criteria, alternative data practices, participatory processes, interpretability approaches grounded in meaning, governance models shaped by legal and historical knowledge, and so on.

3) Avoid the false binary: “ethics vs innovation”

Reviewers aren’t looking for a scolding. They’re looking for work that makes AI systems more accurate in the real world, safer, and more aligned with how people live.

Frame humanities as a way to reduce costly failure: reputational damage, legal risk, user abandonment, harmful outcomes, and technical debt caused by ignoring social complexity.

4) Build a team that can actually build

Interdisciplinary proposals fail when they’re a dinner party of famous names with no plan for who does what.

Give each co-investigator a job tied to outputs. Explain the workflow: how insights move from humanities research into technical design decisions, and how technical constraints feed back into research questions.

5) Define outputs that prove design impact

Promise things you can show, not just things you can argue.

Examples: a prototype with documented design decisions; a validated evaluation framework; an audit toolkit; a dataset documentation standard; model cards or system documentation shaped by humanities research; case studies with traceable changes to system behavior; policy guidance tied to evidence.

6) Treat “US partners funded via UK/Canada” as a planning detail, not a footnote

If you have US collaborators, be explicit about how their work will be supported through the UK or Canada budget streams and what that means for deliverables and timing.

Nothing panics reviewers like vagueness around money flows—especially in international projects.

7) Make your timeline feel lived-in

A timeline should reflect reality: ethics approvals, community engagement, iterative design, recruitment challenges, and evaluation.

If your plan reads like “Month 1: hire RA; Month 2: solve ethics; Month 3: publish paper,” you’ll lose trust fast.

Application Timeline: A Realistic Plan Working Backward from 28 May 2026

The deadline is 28 May 2026 at 16:00. For a cross-border, interdisciplinary proposal, the best strategy is to work backward with zero romance about how long approvals take.

Aim to have your project narrative 80% drafted by late March 2026. That gives you time to do the hard part: aligning methods, responsibilities, and evaluation across disciplines without turning your proposal into a patchwork quilt.

By early to mid-April 2026, lock the project design: work packages, milestones, and who owns each deliverable. This is when you should also confirm institutional eligibility and internal sign-off requirements. Many universities need at least 5–10 working days for final approvals, and some need more.

By late April 2026, run a “hostile reader” review. Give the draft to someone outside your project—ideally a smart colleague who isn’t immersed in the sandpit jargon—and ask them to point out what’s unclear, implausible, or underspecified.

In early May 2026, finalize budget narratives and confirm cross-border funding arrangements, especially if US team members are being supported via UK/Canada budgets. Leave at least a week for revisions after the budget is assembled; money has a way of changing the story.

Finally, plan to submit several days before 28 May. Online systems fail. People get sick. A missing attachment can wreck a month of work.

Required Materials: What You’ll Likely Need and How to Prep Without Panic

The official page will specify the exact submission requirements, but invite-only international calls like this typically ask for a core package that covers: your idea, your team, your budget, and your proof you can deliver.

Prepare for items like the following, and start early:

  • A full project description (case for support) explaining the research challenge, why humanities methods are central, what you will build or change in AI design, and how you will evaluate success.
  • A work plan with milestones that respect the project boundaries: start before 1 October 2026 and end before 30 April 2028.
  • A budget and justification aligned to the correct national pot (UK or Canada) and clear explanations of any funding for US-based collaborators through those budgets.
  • CVs or biosketches for key team members that show interdisciplinary competence (not just disciplinary excellence).
  • Letters of support or partner statements, if your project relies on external organisations, data access, community partners, or deployment contexts.

Preparation advice that saves headaches: ask your institutional grants office what they need four weeks before you think they need it. International projects create administrative questions, and you don’t want to discover them at the deadline.

What Makes an Application Stand Out (Review Criteria, Decoded)

Even when a call doesn’t publish a long scoring rubric, you can predict what reviewers look for—especially in a sandpit-derived competition.

A standout application makes four things feel obvious.

First: the problem is real and well-scoped. Not a vague manifesto. Not a world-saving promise. Something you can actually study and change within the project window.

Second: the humanities are structurally central. Your methods shape the design process, the evaluation, and the outputs. You’re not merely commenting on technology; you’re changing how it’s made.

Third: the team is coherent. Reviewers should be able to picture the collaboration: meetings, decision points, handoffs, and how disagreements will be resolved. Interdisciplinary work is messy. Strong proposals admit that—and manage it.

Fourth: the impact is legible. That doesn’t mean hype. It means clear evidence that the project will produce something others can use: frameworks, tools, standards, prototypes, or documented design practices that can travel beyond your team.

If you can make reviewers think, “This group will produce results we’ll still be citing in two years,” you’re in excellent shape.

Common Mistakes to Avoid (And How to Fix Them)

Even brilliant teams trip on predictable banana peels. Here are the big ones.

Mistake 1: Treating humanities as a compliance step

If the humanities portion reads like “we will consider ethics,” reviewers will assume it’s decorative.

Fix: Put humanities methods into the critical path. Show where they change system requirements, data practices, evaluation metrics, or governance choices.

Mistake 2: Writing two proposals glued together

You can feel when the technical team wrote one half and the humanities team wrote the other, and they met for the first time in the PDF.

Fix: Use shared concepts and shared deliverables. Make at least one central work package jointly owned, with integrated methods.

Mistake 3: Overpromising societal impact with no mechanism

“Will improve trust in AI” is not a plan. It’s a wish.

Fix: Define measurable outcomes: documented design changes, user study results, adoption by a partner, publication of standards, or demonstrable reduction in a known failure mode.

Mistake 4: Ignoring the administrative reality of international funding

Cross-border budgets and eligibility rules can sink a proposal that is otherwise excellent.

Fix: Confirm eligibility early, specify which budget funds which activities, and explain how US collaborators are supported via the UK/Canada allocations.

Mistake 5: A timeline that disrespects research with humans

Community engagement, ethics review, and iterative design take time.

Fix: Build in time for approvals, recruitment, participant care, and iteration. Reviewers trust proposals that plan for reality.

Frequently Asked Questions

1) Can I apply if I did not attend the Montreal sandpit in February 2026?

Not as Project Lead. The call requires sandpit attendance in Montreal (February 2026) to act as PL, and it is invite only. If you’re joining as a team member, eligibility will depend on the invited team structure and the funder rules—confirm on the official page and with the lead institution.

2) What does invite only actually mean here?

It means the funder is not accepting open submissions from the general public. Typically, only individuals or teams who received an invitation—often connected to sandpit participation—can submit.

3) How many projects will be funded?

Up to five grants total. That’s selective, so coherence and execution planning matter as much as brilliance.

4) How much funding is available?

For UK-based teams, there is GBP 780,000 total available across the funded grants. For Canada-based teams, there is CAD 1,000,000 total available. Your specific request will depend on the call rules and your project design.

5) Can US-based researchers be funded?

Yes—US-based team members can be funded from the UK or Canada budgets. Plan this carefully and describe it clearly, because budget ambiguity is a common reason proposals get marked down.

6) What are the project dates I need to respect?

Your project must start before 1 October 2026 and end before 30 April 2028. When you design your work plan, keep those boundaries in view so your milestones don’t drift past the end date.

7) What kinds of projects fit best?

Projects where humanities methods change how AI is designed, evaluated, documented, governed, or deployed. Think: language and meaning in NLP systems, cultural context in evaluation, historical and legal analysis shaping governance, participatory design with affected communities, interpretability grounded in human understanding—not just model internals.

8) If I am eligible, what is the smartest next thing to do?

Schedule a short meeting with your collaborators to agree on one sentence: “Our project will change X in AI design by doing Y humanities method, producing Z outputs by April 2028.” If you can’t say it simply, the proposal will struggle.

How to Apply (Next Steps You Can Take This Week)

Start by confirming the two non-negotiables: you were invited and (if you are the Project Lead) you attended the Montreal sandpit in February 2026. Then meet with your research office early—international funding and cross-border budgets raise practical questions that take time to answer.

Next, write a one-page concept note for your team that includes: the AI design problem, the humanities methods that will drive design choices, the technical work that will implement those choices, and what success looks like by 30 April 2028. Use that page to align your collaborators before anyone disappears into their own document.

Finally, build your timeline backward from 28 May 2026, 16:00 and set an internal submission deadline at least a few days earlier. You want your final week to be for polishing, not for panic.

Ready to apply? Visit the official opportunity page for complete eligibility rules, submission instructions, and updates:
https://www.ukri.org/opportunity/artificial-intelligence-humanities-sandpits-canada-uk-and-us-invite-only/