META mHealth: Ethical, Legal and Social Aspects in the Technological Age

Vulnerability, justice and algorithmic fairness

As mHealth technologies become an increasingly integral part of preventative health care, it is crucial to understand how they shape human lives. This project investigates the complex ways in which algorithm-generated knowledge produced by mHealth technologies affects society, communities and individuals from particular social groups.

Many scholars and data experts have noted that algorithms are not ‘neutral’ mathematical formulas but values embedded in code. As such, they can be grounded in dominant normative assumptions about the world and leave room for algorithmic bias. Biased algorithms skew the knowledge produced by mHealth technologies and, as a consequence, disadvantage some users and/or exacerbate their vulnerability in a health care context. In this way, algorithmic bias contributes to social injustice and discrimination against individuals, particularly those from already marginalised social groups.
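One common way to make such bias measurable is a group fairness metric. The sketch below is a hypothetical illustration, not part of this project's methodology: it computes the demographic parity difference, i.e. the gap in favourable-outcome rates between two user groups, on invented synthetic data.

```python
# Hypothetical illustration of one fairness metric: the demographic
# parity difference, i.e. the gap between groups in the rate at which
# a model assigns the favourable outcome. All data below is synthetic.

def positive_rate(predictions):
    """Fraction of users receiving the favourable outcome (1)."""
    return sum(predictions) / len(predictions)

# Synthetic model outputs (1 = favourable outcome) for two user groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # e.g. a majority group
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # e.g. a marginalised group

# A gap near 0 suggests parity; a large gap signals disparate treatment.
gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity difference: {gap:.3f}")
```

In this toy example the favourable-outcome rates are 0.75 and 0.375, so the metric flags a substantial disparity; real audits would use richer metrics and real usage data.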

Through an investigation of algorithms and algorithm-generated knowledge, this project seeks to promote algorithmic fairness. The creation of fair algorithms is crucial for the design of just and inclusive mHealth technologies, which will facilitate good public health outcomes and benefit a diverse cohort of users.

Main investigator: Tereza Hendl