The UK Centre for Data Ethics and Innovation defines deepfakes as “visual and audio content that has been manipulated using advanced software to change how a person, object or environment is presented”. The potential of such images to spread misinformation has led politicians to consider criminalising the distribution of non-consensual deepfake images (Hern 2022). While research on deepfake face detection has been growing, the emergence of deepfake satellite imagery is starting to pose new challenges (Vincent 2021). Deepfake satellite images can mislead intelligence agencies into erroneous strategic decisions, can be weaponised to influence public opinion, especially in times of war and natural disasters, and can compromise scientific research by providing inaccurate data for climate studies and geological exploration.
The aim of this study is therefore to develop an interpretable model for detecting deepfake satellite imagery.
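As a rough illustration of what "interpretable detection" can mean at its simplest, the sketch below trains a logistic regression on toy synthetic image patches in which the fake class carries a planted pixel artifact (a stand-in for the generator fingerprints real detectors look for). Because each pixel gets a single weight, the learned weights can be read directly as a saliency map. This is a minimal assumption-laden toy, not the project's intended method; the patch size, artifact, and training settings are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 8x8 grayscale "patches"; fakes carry a subtle
# additive artifact on the first four pixels (a stand-in for GAN fingerprints).
n, side = 200, 8
X_real = rng.normal(0.0, 1.0, (n, side * side))
X_fake = rng.normal(0.0, 1.0, (n, side * side))
X_fake[:, :4] += 1.5  # planted artifact
X = np.vstack([X_real, X_fake])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic regression by plain gradient descent: an interpretable baseline,
# since each pixel's weight directly measures its influence on the decision.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(fake)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = np.mean(pred == y)
saliency = np.abs(w).reshape(side, side)      # per-pixel importance map
top_pixels = np.argsort(np.abs(w))[::-1][:4]  # most influential pixels
print(f"train accuracy: {acc:.2f}")
print("most influential pixels:", sorted(top_pixels))
```

If the model has learned the artifact rather than noise, the most influential pixels coincide with the planted ones, which is exactly the kind of check an interpretable detector should support; real deepfake detectors would replace the linear model with a deep network and the weight map with attribution methods such as Grad-CAM.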
Objectives
The main objectives of this project are:
Skills Required
Machine learning, explainable AI, image processing, and strong programming skills.
References
Hern, A., (2022) Online safety bill will criminalise ‘downblousing’ and ‘deepfake’ porn. Available at: https://www.theguardian.com/technology/2022/nov/24/online-safety-bill-to-return-to-parliament-next-month (Accessed: 23 November 2023).
Vincent, J., (2021) Deepfake satellite imagery poses a not-so-distant threat, warn geographers. Available at: https://www.theverge.com/2021/4/27/22403741/deepfake-geography-satellite-imagery-ai-generated-fakes-threat (Accessed: 23 November 2023).
Contact
For more information about the project, please contact: