Maya Sinha discusses how racism becomes embedded in medical algorithms and affects patient care. Historical inequities in Canadian health systems continue to shape racialized communities' access to healthcare. Medical algorithms, despite aiming for efficiency and objectivity, can perpetuate these biases: some score Black individuals lower, leading to unfair placement on transplant lists, while others skew risk assessments for heart conditions, potentially prompting unnecessary interventions. Research by undergraduate students Rachel Shildrop and Gabriella Mashry highlights the harm these biases cause and proposes policy solutions.
My name is Maya Sinha, and you are listening to Office Hours, a science communication podcast. I want you to imagine two patients. Both are in need of a kidney transplant. They're staying at the same hospital, being treated by the same healthcare professionals, and are in very similar physical conditions. But one of them is much higher up on the transplant receiver list than the other one is. The only difference? Their race. Now, this isn't just a hypothetical.
This is something that is happening right now. In hospitals and clinics around Canada and the US, racialized patients are less likely to be flagged for specialty programs and life-saving interventions. And this gap doesn't always exist because doctors are ignoring patients, although human biases certainly can play a role. But aside from that, these gaps are largely influenced by something called medical algorithms. These are computer programs that help professionals make decisions about patient care, and they're based on large amounts of pre-existing data.
My name is Maya Sinha, and in today's episode of Office Hours, we're talking about how racism becomes embedded in medical algorithms, their functions, and the real-world implications of racial bias within this technology. Before we get into the episode, I think we should discuss the history of racism within Canadian health systems. It's important to understand that racism in health systems didn't start with this modern technology. It's rooted in a long and deeply uncomfortable history. Health systems in Canada have systemic racism embedded within them.
For generations, Indigenous people were subjected to residential schools, forced sterilization, segregated hospitals, and many other discriminatory policies. Black communities have also faced persistent inequities, including harmful stereotypes about pain tolerance and physiology that have persisted for decades. These historical inequities are not confined to the past: they continue to shape the relationship that racialized communities have with health systems to this day. The generational impacts of segregation, colonialism, and racism have contributed to higher rates of food insecurity, inadequate living conditions, and poverty, all of which create barriers to accessing health care and worsen health outcomes within these communities.
So, what has happened over recent years is that scientists have started building medical algorithms, technology that is supposed to be efficient and objective, with the goal of helping streamline health care. The problem is that this technology is being built on top of an already unequal system, and as a result, it often ends up perpetuating these inequalities. To learn more about this, I hopped on a call with two undergraduate students here at McMaster. Rachel Shildrop and Gabriella Mashry are both in their fourth year of the Integrative Science program, and they completed a research project studying the relationship between racism and medical algorithms, along with a proposed policy solution to address these issues.
Hi, I'm Gabriella Mashry. I'm a fourth-year Integrative Science student. I'm Rachel. I'm in fourth-year ISI as well. I know I touched on this earlier, but I think we should talk a little bit more about what medical algorithms are and what they're used for. Here is how Rachel describes them. A medical algorithm is something that scores patients on an index, and it's usually used in a situation where there's a priority list.
So for example, for kidneys, there's a kidney transplant list. Getting a new kidney can be a really lengthy process, so to determine who needs the next available kidney most, there's a scoring process, and that scoring process is based on what we call a medical algorithm: a series of questions and factors about an individual and their bodily functions, how their kidneys are functioning today, and what's predicted to happen to them in the next week, two weeks, six months. Then, based on that scoring, they're placed at a certain spot on the wait list.
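The kind of priority index Rachel describes can be sketched in a few lines of code. Everything below, the fields, the weights, the thresholds, is invented purely for illustration; real allocation scores involve many more clinical factors.

```python
# A toy transplant-priority index. All fields and weights are
# hypothetical, chosen only to show the scoring mechanic.
def priority_score(egfr, months_waiting, on_dialysis):
    score = 0.0
    score += max(0, 60 - egfr) * 1.5   # worse kidney function -> more points
    score += months_waiting * 0.5      # time already spent waiting
    if on_dialysis:
        score += 10.0                  # dialysis adds urgency
    return score

# Two hypothetical patients: the sicker one scores higher and
# therefore sits higher on the wait list.
sicker = priority_score(egfr=15, months_waiting=12, on_dialysis=True)
healthier = priority_score(egfr=45, months_waiting=12, on_dialysis=False)
```

The point of the sketch is simply that a patient's place in line is the output of a formula, so any bias baked into the formula's inputs or weights flows directly into who gets treated first.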
So medical algorithms are super common, and they exist in many parts of health care, from automatic intake processes in primary care to even ranking patients on transplant waiting lists. These algorithms work in many different ways. Some are mathematical formulas that help calculate things like patient risk or medication dosage. Recently, the use of AI and machine learning in developing algorithms has become very popular, especially for things like triaging patients based on urgency and helping to prioritize high-risk cases.
These models are trained on large amounts of medical data obtained from electronic health records, although a lot of the time, the data sets used to train the algorithms lack diversity and are quite biased. Here is what Rachel says about this. These algorithms have systemic biases within them towards Black individuals that end up scoring Black individuals lower. So they're placed lower on kidney transplant lists than someone else who might have the same kidney function but is white.
So you're seeing that they're getting negated, or less comprehensive, health care because of their skin colour, because of the way they're being scored by these algorithms, and that's because of the inherent biases that exist within the algorithms. For the kidney one, for example, the algorithm measures glomerular function, which is how well your kidneys can filter blood and other substances in your body. And it was thought that Black individuals have a higher filtration rate, that they can filter more per day, or per whatever the rate is.
Because of that, there's an added value in their scoring to account for this supposed difference in filtration, when really it's not race-based at all, it's individual. So just because someone identifies as Black, they're scored differently on this scale, this value gets added to their score, they score differently on the overall assessment, and then they're placed lower on the kidney transplant list.
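The race adjustment Rachel is describing shows up explicitly in the 2009 CKD-EPI creatinine equation, which multiplied the estimated filtration rate (eGFR) by 1.159 for patients recorded as Black; a 2021 revision removed the coefficient. Here is a minimal sketch: the constants come from the published 2009 equation, while the example patient is invented.

```python
# 2009 CKD-EPI eGFR estimate (mL/min/1.73 m^2) from serum creatinine.
# Constants are from the published 2009 equation; note the race multiplier.
def ckd_epi_2009(scr_mg_dl, age, female, black):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159   # race coefficient: inflates eGFR by ~16%
    return egfr

# Identical labs, identical age and sex; only the race label differs.
white_egfr = ckd_epi_2009(1.8, 55, female=False, black=False)
black_egfr = ckd_epi_2009(1.8, 55, female=False, black=True)
# The Black patient's kidneys are reported as ~16% healthier from the
# same bloodwork, which can delay referral and transplant eligibility.
```

Because a higher eGFR means healthier-looking kidneys, the multiplier pushes Black patients below thresholds for specialist referral and transplant listing even when their underlying labs are identical to a white patient's.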
What Rachel just described is how false stereotypes can lead to unfair scoring within these algorithms, and as a result, to Black individuals being placed lower on transplant waiting lists. Now, this is not the only way that bias can become embedded within medical algorithms. There is actually an article published in Nature back in 2019 that talks about this specifically. It discusses a set of widely used medical algorithms, and I mean algorithms that serve upwards of 200 million people in the United States.
They were found to be less likely to refer Black patients to specialized care programs than white patients who were just as sick. A group of researchers from UC Berkeley looked into why this might be. They found that these algorithms weren't measuring patients' illnesses directly; instead, they assigned risk scores based on how much a patient had spent on health care over the last year. So, for example, someone who spent $3000 on care over the last year would be assigned a higher risk score than someone who spent just $180.
On the surface, this sounds logical: if someone spends more on health care in a year, they're probably more sick. However, this method is deeply flawed. Among patients with the exact same medical conditions, Black patients were found to spend, on average, almost $2000 less on health care than their white counterparts, because Black communities face larger barriers to accessing care and treatment. So, because Black individuals don't spend as much money on health care, the algorithm interprets them as being less sick.
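The failure mode here is a proxy mismatch: the algorithm predicts cost, but clinicians read its output as sickness. A minimal sketch of that dynamic, with all patient numbers invented:

```python
# Two hypothetical patients with identical illness burden but unequal
# access to care, and therefore unequal spending (all numbers invented).
patients = [
    {"id": "A", "conditions": 4, "spend_last_year": 3000},
    {"id": "B", "conditions": 4, "spend_last_year": 1100},
]

# Cost-as-proxy risk score: rank purely by last year's spending.
ranked = sorted(patients, key=lambda p: p["spend_last_year"], reverse=True)

# A and B are equally sick, but B is ranked below A, so B must become
# *sicker* than A before crossing the same referral threshold.
```

Swapping the ranking key from spending to a direct measure of illness (here, `conditions`) would rank the two patients equally, which is the kind of fix the Berkeley researchers explored.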
In turn, these patients have to be much sicker than their white counterparts in order to be flagged for additional support programs. Assessing patient risk based on cost, or on untrue stereotypes, are just two examples of how bias is built into the structure of these algorithms. Other variables these algorithms rely on also fail to account for the systemic racism and bias that exist within health systems. Here is Gabriella talking a little bit more about this and what they found during their research project.
Yeah. So, from our research and what we found, it does negatively affect Black individuals. It either puts them at a lower place on a list, or, for instance, for the heart, we looked into a risk score for heart attacks and stroke, and it showed Black patients at a higher risk of heart attacks and stroke. That led to misconceptions, and it leads people to take action when action is not really needed.
So it can confuse things. For the heart one, the algorithm we looked into actually has a separate equation for white people versus Black people, just because of a historical assumption, which Rachel touched on before, that Black people and white people are physiologically different. But more recent research has found that this is not true, and that other factors influence a person's physiology.
It's not race that influences that; it's things like the environment where you grew up. So we found that it negatively impacts patients, but it's also systemic, right? These are equations and algorithms in the healthcare field, and they're still being used. So it's also about the people who are using them and what training they have, which we talk about later in our policies: how do we push back against that, and against how it negatively impacts patients when they're most vulnerable?
So, what can be done to fix this? We know that these algorithms influence millions of people right now. In Gabriella and Rachel's research, they actually proposed a potential policy plan to help. This plan has three main pillars: training, community health, and funding and engagement. Let's start with training. Here is how Rachel explains it. Okay. So, our first pillar was training. One thing we realized is that a lot of people don't know this still exists, or that it's being used every day.
So making healthcare individuals, doctors, nurses, training staff, educators, aware that these things exist, that they're important to teach in classes, and that people should look at these algorithms with a more critical eye is really important. Implementing training and awareness of these inequities in healthcare needs to happen before people actually start treating patients. So doing this in nursing school, in medical school, even in science undergrad programs: teaching people that these algorithms exist, so that before you actually get in a room with a patient, you understand what they're calculating and what the outcome of a situation could be.
So, that was our first pillar. Rachel just explained that incoming healthcare professionals should be taught about the struggles that racialized communities face within Canadian health systems. This could mean more education about systemic racism and the social determinants of health, starting in science undergrad programs. Incoming professionals should also understand how medical algorithms work and the risks that implicit biases pose within them. The second pillar focuses on community engagement. Here is how Gabriella explains it.
The second pillar was rooted in our research and in what we learned in class: these biases happen because research is not diverse. The people being studied, during clinical trials and things like that, aren't diverse enough, so problems like this don't come through. So our second pillar was rooted in community health, and not just trying to recruit people, but partnering with the organizations that already have trust with these communities, and with these programs, to bring people into research and make sure the research we're doing is diverse enough.
So, emphasizing community health initiatives and public health strategies to reduce that bias, raising awareness that there is racial bias in healthcare, and, with that, addressing structural inequities and the social determinants of health. And again, our second pillar was about making sure that research is diverse, that we account for the inequities, and that the Black community is represented in research. Because even in our other research, we found that all of this comes up not just because the sample isn't diverse, but also because those conducting the research aren't diverse.
So diversity at both levels is really important as well. Gabriella just explained that it is important for government agencies and data developers to partner with community health programs to help better inform health research. These partnerships give researchers a deeper understanding of the struggles of the communities their research aims to serve. Not only will this help scientists design more effective tools, but it also gives individuals in the community a way to have their needs and concerns addressed.
The third pillar covers funding and engagement. Here is how Rachel describes it. And then our last one was funding and engagement, and that ties in with what Gabby said. An algorithm is really only as good as the data you put into it. If you give ChatGPT a sentence, it can only do so much, whereas if you give it a paragraph, it can understand more. The same goes for these algorithms: if the databases are primarily white individuals,
they can only produce statistics and information for white patients. But if you diversify these databases of patients, they'll produce more diverse results as well, and we'll see that there actually aren't the large differences between human physiologies that we think there are. So that's our funding and engagement part: getting more communities involved in research and building trust from the bottom up, to say to them, yes, historically you've been discriminated against,
but to stop this and create positive change, we have to disrupt the system now and add more diverse voices to the training, the algorithms, the people conducting the research, and those giving out treatment, everything. And so we want to make sure that the institutions that already have these communities as stakeholders are funded properly, by the Government of Canada and the Government of Ontario as well, to put money where it needs to go: getting these communities involved in research, diversifying the algorithms, and stopping this bias.
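Rachel's point that "an algorithm is really only as good as the data you put into it" can be shown with a tiny, entirely hypothetical example: a reference value computed from a skewed sample ends up tracking the overrepresented group.

```python
# Hypothetical baseline measurements (units and values invented).
group_a = [100] * 90   # 90 patients from the overrepresented group
group_b = [80] * 10    # 10 patients from the underrepresented group

skewed_mean = sum(group_a + group_b) / 100   # reference from the skewed sample
balanced_mean = (100 + 80) / 2               # reference with equal representation

# A group-B patient at their own healthy baseline (80) sits 18 points
# below the skewed reference and may be flagged as abnormal, even
# though nothing is wrong with them.
```

The skewed reference lands at 98, almost exactly group A's baseline, while a balanced sample would put it at 90. Any "normal range" or risk threshold derived from the skewed data quietly encodes the majority group's physiology as the default.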
Rachel explained that increased funding from the government would help bring their proposal to life. Additional resources would help create more diverse and representative data sets and would also help foster collaboration between community organizations and data developers. This can help reduce bias in medical algorithms and give healthcare professionals the tools to best serve racialized communities. Medical algorithms were designed to be efficient, objective, and to help streamline healthcare. In reality, this technology has been reflecting already existing biases which disproportionately impact communities of colour.
When biased data like this informs clinical decisions, real people feel the consequences, and health disparities between communities grow wider. It is so important that we address these issues head-on. That could mean more education for healthcare providers and data developers. It could mean increased funding from the Canadian government and collaboration with community organizations. These algorithms have great potential to improve health outcomes for all, but only if we confront the biases built into them. The more we learn and the more we talk about these issues, the better equipped we are to recognize harmful patterns and initiate meaningful change.
Thank you so much for listening to today's episode, and a big thank you to Rachel and Gabriella for participating and sharing their project with us. If you'd like to learn more, I've linked some resources in the description. Other than that, thank you so much, and I'll see you next time!