Bias in Canadian Research Funding Attribution

We know there's a problem, but what are the causes and solutions?

by Danielle Nadin

Artwork by Jeannie Phan

Image Description: The illustration depicts a lattice of coloured orbs connected by white lines. One woman is at the top of a ladder, hanging a new orb amongst the rest. Two women hold the ladder and look up towards her; three others are gathered around the base of the ladder, carrying coloured orbs.

 

Women in academia are still disadvantaged compared to men when it comes to their number of publications and invitations to give presentations, as well as their ratings on student evaluations and the time they are expected to devote to unrewarded emotional labor. Such metrics have lasting impacts on scientists' careers, as they are used as performance measures to attribute funding, leadership positions, and tenure. When women scientists receive less funding, the consequences can snowball beyond the individual, influencing the trajectory of scientific innovation. While it's clear that women receive less funding than men, the drivers of this discrepancy remain unknown: are women proposing lower-quality research, or is the evaluation process biased? Studies so far have been mostly observational; without the ability to manipulate how applicants are evaluated, it's difficult to conclude what's biasing the outcome of grant competitions. That was until 2014, when the Canadian Institutes of Health Research (CIHR), one of Canada's largest funding bodies, created the perfect environment for a real-world experiment. The agency split their traditional grant program into two, introducing a new program that ranked applicants primarily on leadership, productivity, and contributions to their field during the first stage of the review process, instead of focusing on their research as in other programs.

 

Image Description: Table listing the evaluation criteria used by reviewers to evaluate applicants at the first stage of the CIHR funding review process. Applicants’ scores were based on either their profile as a scientist (left column) or the quality of their research proposal (right column). The left column lists the following criteria: Leadership, Significance of Contributions, Productivity, Vision and Program Direction. The right lists: Quality of the Idea, Importance of the Idea, Approach and Expertise, Experience, and Resources. From: Witteman et al., 2019.

 

With the CIHR's help, first author Dr. Holly Witteman and colleagues tracked over 23,000 applications made between 2011 and 2016. They recorded applicants' self-reported sex (they assumed that applicants' reported binary sex [female-male] corresponded to their gender identity [woman-man] and acknowledge this limitation). Strikingly, men and women succeeded similarly when evaluated based on their research, whereas women were 4% less likely to receive funding when judged on their calibre as a scientist.

Two weeks ago, I attended a Win4Science talk on the study and its implications for addressing binary gender bias in academia, given by Dr. Michael Hendricks, one of Witteman's co-authors on the paper describing these findings. He delved into how this difference in the evaluation of female applicants could stem from reviewers' individual biases, systemic bias built into evaluation criteria, or the way female scientists seek leadership opportunities and describe their work. From a systemic perspective, Hendricks drew an analogy to positive feedback loops: when men are favoured in the funding review process, they gain more data, more publications, and more opportunities, which in turn bolster their subsequent funding applications. Once the system is contaminated by bias, the repercussions of such "head starts" persist over entire careers.

It seems that we don’t always fund the best research, but rather the most systemically advantaged scientists. Providing anti-bias training for reviewers, blinding reviewers, adjusting scores for historical biases, and revamping evaluation criteria are some potential solutions. But, to paraphrase Hendricks, universities are reluctant to implement these solutions in the absence of incentives or penalties.

Take, for example, the Canadian Tri-Council’s Dimensions Pilot Program. The program began this September under the leadership of former Minister of Science, Dr. Kirsty Duncan. It recognizes post-secondary institutions for their commitment to promoting equity, diversity and inclusion (EDI) as laid out in the Dimensions Charter. As part of the program, 17 participating universities will collect and analyse data on their current EDI practices in order to identify barriers to inclusion within their institutions and develop actionable solutions. While Canada’s top five research universities (University of Toronto, University of British Columbia, Université de Montréal, McGill University, University of Alberta) have endorsed the charter, only the University of British Columbia is part of the pilot program. The other endorsers are not held to any particular guidelines; their support therefore remains no more than a formalism for the time being.

On the other hand, the Canada Research Chairs Program (CRCP) has begun to regulate post-secondary institutions' commitment to EDI, and penalize them when they fail to meet the mark. Following critiques of insufficient progress towards increasing the representation of women, persons with disabilities, Indigenous peoples and members of visible minorities, as well as lack of transparency, the CRCP launched the Equity, Diversity and Inclusion Action Plan. They mandated that all universities with five or more Research Chair allocations develop an EDI action plan, provide annual updates on progress, and make these documents publicly available. Failure to abide by these policies would result in suspension of payments. The CRCP also began developing resources to assist post-secondary institutions with this process (e.g., best practices for recruitment and retention, guidelines for evaluating productivity, anti-bias training, and methodology for setting EDI targets). The CRCP's initiative is continuously evolving: they plan to update their Action Plan in the coming months by establishing support for LGBTQ2+ communities and incorporating principles of intersectionality, among other things. The Action Plan—launched in 2017 thanks to the devotion of eight women scientists who filed a human rights complaint against the CRCP in 2003—requires that participating institutions implement their own action plans by December 2019. Along with many others, I'll be waiting with bated breath to see whether these measures prove successful.

While I'm glad Canadian funding agencies seem to be taking these issues seriously, I left Hendricks' lecture thinking about how much more needs to be done to assess and eliminate the impact of racial and ethnic identity, disability, indigeneity, and queer identity on funding attribution. For the CRCP program, the numbers have increased, but still aren't where they should be: as of June 2019, fewer than 5 of 1,836 appointed Canada Research Chairs self-identify as belonging to the LGBTQ2+ community, and less than 1% of Tier 1 Chairs identify as Indigenous people or persons with disabilities. I'm excited to continue these discussions, and hope that, as Hendricks puts it so nicely, we'll break the "institutional inertia" that slows movement from theoretical to action-oriented EDI.
