
Unbiased by Design - ‘Towards a Fair Machine Learning’

With the increasing pervasiveness of Artificial Intelligence (AI) and Machine Learning (ML) systems, machines are making decisions that impact our everyday lives. Should we trust these algorithms and their decisions? Are they fair to everyone? Can they be biased? How are race, gender, social class, etc. considered during the decision-making process?

Whilst human bias is acknowledged and very well studied and documented, machine bias is still largely unexplored. Indeed, as ML algorithms are developed by humans, they inevitably inherit human biases, resulting in digitised bias. This may arise because the bias is within the data (how it was collected or sampled, or how it is used to train the ML system), within the ML algorithm (how it is built, how the results are interpreted, etc.), or simply caused by unconscious bias.

In this project you will investigate how current approaches to ML can be improved to remove existing bias, both in the datasets used and in the methodologies and algorithms applied. Can fairness regulators be implemented?
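As a very minimal sketch of what such a "fairness regulator" might measure, one common starting point in the fairness literature is demographic parity: comparing the rate of favourable predictions across demographic groups. The function names, toy data, and group labels below are illustrative assumptions, not part of the project brief.

```python
# Illustrative sketch: quantifying one notion of fairness (demographic parity).
# A large gap in positive-prediction rates between groups is one signal of bias.
# All names and data here are hypothetical examples for discussion only.

def positive_rate(predictions, groups, group):
    """Fraction of members of `group` that received a positive (favourable) prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates across groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: binary predictions for applicants from two groups, "A" and "B".
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.75 - 0.25 = 0.5
```

A value near 0 indicates similar treatment of both groups under this particular metric; other definitions of fairness (e.g. equalised odds) can disagree, which is precisely the kind of tension the project could explore.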

Evaluation of the outputs through case study analysis is critical and will be a central part of the project.

We are happy to consider other variations on the theme!


Skills Required:

A degree in Computing (or equivalent)

Artificial intelligence skills

Algorithm design skills

Experience with experimental design useful

Knowledge of statistics/maths useful


Background Reading:

M.O.R. Prates, P.H.C. Avelar and L.C. Lamb (2020). "Assessing Gender Bias in Machine Translation: A Case Study with Google Translate". Neural Computing and Applications 32, 6363–6381.



Dr. Soraya Kouadri Mostéfaoui

