Can Experimentation Lead to Better Outcomes in the Granting World?

By Dan Monafu - 27 May 2020

Dan Monafu argues that experimentation with randomized methods could modernize the traditional granting process, increasing incentives for risk-taking and innovation.

If you’ve ever applied for an institutional grant, scholarship, challenge, or competition, you’ll know precisely what I’m talking about: in most cases, the application consumes the time of multiple employees, comes with a long list of very specific requirements, and doesn’t always allow you to show what your organization is truly capable of doing.

A 2013 study from Australia found that preparing one grant proposal for submission to the National Health and Medical Research Council of Australia took an average of 34 days of work. Despite a historical annual success rate hovering around the 20% to 25% mark, researchers applying to this grant spent an estimated 550 working years preparing the 3,727 proposals submitted that year. Arguably, much of that time was of no immediate benefit to the organization, the researcher, or society, resulting in lost research output.

Much of this burden stems from the need for publicly-funded granting institutions to ensure public dollars are spent equitably. To that end, they design complex selection processes and spend much of their time ensuring the very best projects are chosen.

But do these institutions ever find out whether these complex processes actually resulted in funding the best projects?

I would argue that we don’t know. What we do know is that experimentation offers an intriguing and largely untested way to find out: assemble the applications that showed promise but didn’t excel under the traditional rubric, randomly draw a percentage of this pool to receive funding, and then rigorously test whether their outcomes are better than those of the traditionally picked projects.
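As a thought experiment, the “rigorously test” step could be as simple as comparing measured outcomes between the two groups. Below is a minimal Python sketch using a permutation test; the outcome scores are invented for illustration, and the choice of outcome measure is my own assumption rather than anything prescribed by an existing granting program.

```python
import random

# Hypothetical post-funding outcome scores for each group (illustrative data).
traditional_outcomes = [7.1, 6.8, 8.0, 7.5, 6.9]
ra_outcomes = [7.4, 6.5, 8.2, 7.0, 7.8]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(ra_outcomes) - mean(traditional_outcomes)

# Permutation test: shuffle the group labels many times and count how often
# a difference at least as large as the observed one arises by chance alone.
pooled = traditional_outcomes + ra_outcomes
n_ra = len(ra_outcomes)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n_ra]) - mean(pooled[n_ra:])
    if abs(diff) >= abs(observed_diff):
        extreme += 1

print(f"Observed difference in mean outcomes: {observed_diff:.2f}")
print(f"Two-sided p-value: {extreme / trials:.3f}")
```

With real programs, the comparison would of course need larger samples and an agreed outcome measure, but the logic of the test would be the same.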

My day job involves promoting the use of experimental design methodologies for a national government. This means that I spend a lot of time thinking about the potential advantages of using experimentation methods like randomized controlled trials to ultimately get to better evidence-based decision-making.

I’ve also worked for teams that promote certain public-good behaviors using public funds (increasing physical activity in one job, for instance, or reducing elder abuse in another). Because these are public funds, this type of funding is scarce; there are limits to how much funding exists, and not all groups that apply receive it. Typically, scoring rubrics are created around key criteria, and individuals with different types of expertise (e.g. subject-matter knowledge, program management expertise) score the applications in processes that are often very transparent (i.e. the criteria are publicly listed, as are the weights attached to them and/or the threshold needed to be successful).

The problem is that there are projects just below that threshold (i.e. projects that scored just one or two points under what was needed to pass into the ‘funded’ category) that may show great innovation or contain important localized knowledge. Despite often multi-phase assessment processes and good intentions to find the best projects, we know humans are not particularly rational and are easily influenced by extraneous factors: many of us have heard about the study showing judges are harsher right before lunchtime, when they are hungry and tired.

One solution would be to introduce an element of randomization into the selection process - let’s call it Randomized Allotment (RA). How would it work? A grant manager could decide to allocate a number of funding spots randomly to projects that scored under a designated threshold. One way to do this would be to pick all projects that fell within a particular score range just under the threshold. A second, bolder option would be to include all eligible projects that scored under the threshold in a random draw (I emphasize eligible, as minimum standards should always be maintained). The projects funded through the draw would then be evaluated to see whether they perform as well as, or better than, the projects initially selected through the traditional assessment method.
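For illustration, here is a minimal Python sketch of how such a draw might be implemented, following the bolder option above. All project names, scores, thresholds, and the number of reserved spots are hypothetical placeholders, not parameters drawn from any real granting program.

```python
import random

# Hypothetical scored applications: (project_id, score). Illustrative only.
applications = [("A", 82), ("B", 79), ("C", 74), ("D", 68), ("E", 61), ("F", 55)]

FUNDING_THRESHOLD = 80  # score needed to be funded under the traditional rubric
ELIGIBILITY_FLOOR = 60  # minimum standards that should always be maintained
RANDOM_SPOTS = 2        # funding spots set aside for the random draw

# Projects funded the traditional way: at or above the threshold.
traditionally_funded = [pid for pid, score in applications
                        if score >= FUNDING_THRESHOLD]

# The RA pool (bolder option): every eligible project below the threshold.
ra_pool = [pid for pid, score in applications
           if ELIGIBILITY_FLOOR <= score < FUNDING_THRESHOLD]

# Draw the reserved spots at random from the pool.
ra_funded = random.sample(ra_pool, k=min(RANDOM_SPOTS, len(ra_pool)))

print("Traditionally funded:", traditionally_funded)
print("Funded through the random draw:", ra_funded)
```

The narrower first option would simply tighten the pool to a score band just under the threshold (e.g. keeping only scores where `FUNDING_THRESHOLD - 5 <= score < FUNDING_THRESHOLD`).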

The RA process would offer a number of advantages over more traditional assessment models.

First, it would lessen the disproportionate focus public sector grant managers place on getting the project criteria absolutely right. This pressure translates into a great deal of time and effort spent designing the best possible selection criteria, in an attempt to be certain of making the best funding decision. A move to an RA model would lessen that pressure, freeing up the grant manager’s time to focus on what matters: ensuring that the evaluation of the funded interventions is properly resourced, with a focus on meeting the outcomes sought and demonstrating impact.

Second, the RA model could break a cycle of inequality that builds over time when organizations or individuals perpetually do well in certain application processes. As grant-winning organizations gradually perfect their application craft and professionalize grant writing, they edge out non-traditional players that don’t know how to play the game but may still have excellent projects that would benefit their communities.

To use another well-documented example, this time from the world of sports: inequality between teams is growing in football because the richest clubs are able to spend enormous sums on the best players in the world. This allows them to win competitions, which means they become more popular, build better brands, receive more money from winning competitions and selling their branding rights, and can perpetually afford the best players. Being good creates a virtuous (or, depending on where you sit, vicious) cycle.

An RA model would allow non-traditional organizations to demonstrate that, while their score did not meet the traditional threshold, their interventions may have a unique value proposition. Again, the idea is not to fund projects we all think are bad, as those should not be included in the RA pool. But we should consider whether we are currently disqualifying projects that have a certain spark but don’t stand a chance when scored on all mandatory criteria. Often, this is how innovation happens in a system: by allowing non-traditional players to introduce new elements and by funding them alongside established players.

The above point is related to the third and final possible advantage, which concerns risk aversion. Introducing the RA model might allow public sector organizations to fund non-traditional players while continuing to fulfil their obligations of equity, as long as the randomization is designed properly. Figuring out which new types of interventions work well (by evaluating the results of non-traditional players) would allow public sector organizations to continually fund new types of projects without needing a precedent of past success (a constant refrain we hear in the field) and without the sense that they are misusing public dollars.

Randomization is not new. Greek democracy was based on it, with Athenian citizens drawn into public life through a public lottery system. We still use lotteries when we call people up for jury duty, and some newer firms, such as Canada’s MASSLBP Inc., a Toronto consultancy, use a ‘civic lottery’ program to run processes that randomly select people to serve a public function - in their case, offering a supply of opinions on how best to make public decisions. We also know that individuals enjoy lotteries: behavioral scientists write that participation in lotteries is higher than the odds of being picked would justify.

What we don’t yet know very well is how organizations would act when a lottery is introduced into a granting model. Would they submit more or fewer applications overall? Or would they shift their behavior and submit lower-“quality” applications, focusing on just the bare minimum? And if that happened, would it be such a bad thing, considering the sunk costs of research applications that go nowhere each year? These are research questions worth pursuing.

To my knowledge, the only example of a publicly-funded grant that incorporates randomization is New Zealand’s Health Research Council, which in 2016 started funding its Explorer Grants through a random number generator, following an initial screening for both viability and the potential to be transformative. In 2016, nine grants split equally a pot of NZ$1.35M (the equivalent of about £0.67M).

While I have mainly discussed introducing randomization into public sector funding assessments, there are potentially other applications for such a feature, one obvious candidate being hiring, which follows similar scoring processes. What would it look like if we set aside a number of recruitment spots for individuals who don’t score well on exams or interviews but could bring fresh and diverse talents into our workforces?

The RA model might be a very good gateway intervention, especially since it seems not many people like full randomization, despite the best advice from scientists. A recent study found that 40-50% of individuals thought randomization - where some people receive a certain treatment while others don’t - was deeply wrong. There is a deep sense of injustice we feel as individuals when outcomes are left entirely to chance; we always want to remain in control. This is why the RA model offers, in a sense, the best of both worlds: we’d still use merit-based criteria, but we’d allow a certain percentage of our interventions to be left to chance, hoping that good things might come from our “controlled disruption”.

Without a proper trial to see which intervention works better, we won’t know whether the newly proposed RA model is worth pursuing. We first need some institutions bold enough to try it out, creating that much-needed precedent.

Dan Monafu is a Canadian public servant working at the intersection of policy innovation, civic technology, and critical thinking platforms. He has founded and co-founded a number of initiatives that work to better the community, from the local to the national. Twitter account: @danutfm.

The views represented in this article are the author’s personal opinion as a public policy practitioner and do not represent those of his employer.

Photo by Steve Johnson from Pexels
