Machine Learning (ML) is gaining notable momentum, with applications ranging from facial recognition, intelligent gaming, spam detection and travel recommendation to healthcare and finance. Whilst widely adopted, ML models are often regarded as black boxes. Should the black box be trusted to be inclusive? Are the inequalities found in today’s societies digitised and replicated in the black-box model? Is this done implicitly or explicitly?
In this project you will focus on the application of ML in healthcare provision; because human lives are at stake, it is extremely important to ensure that the black-box model can be trusted. You will explore how explainable ML can help reverse inequalities. You will first investigate the usefulness of explainable ML in healthcare provision by exploring the impact of explanation on transparency, interpretability, trust, etc. You will then design and evaluate an inclusive explainable ML system for healthcare provision.
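As a concrete illustration of the kind of model-agnostic explanation technique the project would build on, the sketch below computes permutation importance (closely related to the model-reliance idea in the Fisher et al. reference): each feature is shuffled in turn and the resulting drop in held-out accuracy indicates how much the model relies on it. The dataset here is synthetic and purely illustrative, not a real clinical dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical dataset (illustrative only).
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature on the test set and
# measure the drop in accuracy; larger drops mean the model relies
# more heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Inspecting which features a healthcare model relies on in this way is one starting point for detecting whether a protected attribute (or a proxy for one) is driving predictions.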
Many scenarios and case studies can be envisaged for this project; we are happy to discuss other variations of the topic.
A degree in Computing (or equivalent)
Artificial intelligence skills
Algorithm design and programming skills
Experience with experimental design useful
Knowledge of statistics/maths useful
Fisher, A., Rudin, C. and Dominici, F. “Model Class Reliance: Variable importance measures for any machine learning model class, from the ‘Rashomon’ perspective.” http://arxiv.org/abs/1801.01489 (2018).
Dandl, S., Molnar, C., Binder, M. and Bischl, B. “Multi-Objective Counterfactual Explanations.” In: Bäck, T. et al. (eds) Parallel Problem Solving from Nature – PPSN XVI. PPSN 2020. Lecture Notes in Computer Science, vol 12269. Springer, Cham (2020).