AI for Algorithmic Auditing: Mitigating Bias and Improving Fairness in Big Data Systems

Priya Patel

Department of Computer Science, Indian Institute of Technology Delhi (IITD), India

Mohd Nasim Uddin

Teesside University, England

Keywords: Algorithmic decision-making, AI systems, Bias, Fairness, Auditing methods


Abstract

Algorithmic decision-making systems are increasingly deployed in high-impact domains such as finance, healthcare, and criminal justice. However, these systems can unintentionally discriminate against certain groups because of biases in their training data or models, which has led to calls for greater transparency and for algorithmic auditing to detect and mitigate unfairness. This paper provides an overview of emerging techniques that use AI to audit black-box systems for bias. First, we discuss sources of algorithmic bias and the importance of fairness in AI systems. We then review definitions and metrics of fairness, including group versus individual notions. Next, we survey algorithmic auditing methods that assess system behaviour using only input and output queries, including techniques based on causality, counterfactual reasoning, and adversarial models. We also examine methods to improve system fairness by detecting and mitigating biases in the training pipeline, modifying model parameters directly, or post-processing model outputs. Finally, we outline key challenges and opportunities, including model interpretability, scalability, and the need to incorporate domain expertise into notions of fairness. Overall, this paper synthesises recent advances in using AI for algorithmic auditing and offers insights into translating these methods into practice to build more trustworthy and ethical AI systems.
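As a concrete illustration of the query-based auditing described above, the following sketch estimates a group-fairness metric (the demographic parity gap) for a black-box classifier using only input/output queries. All names here (`audit_demographic_parity`, `biased_model`) are illustrative assumptions for this sketch, not methods from the paper; the toy model and data are fabricated solely to demonstrate the idea.

```python
import random

def audit_demographic_parity(predict, records, group_of):
    """Black-box audit: estimate the demographic parity gap of a binary
    classifier using only input/output queries.

    predict  -- black-box model: record -> 0/1 decision (queried, not inspected)
    group_of -- maps a record to one of two protected-group labels
    Returns the absolute difference in positive-outcome rates between
    the two groups (0 indicates demographic parity).
    """
    rates = {}
    for rec in records:
        g = group_of(rec)
        pos, n = rates.get(g, (0, 0))
        rates[g] = (pos + predict(rec), n + 1)
    (p1, n1), (p2, n2) = rates.values()
    return abs(p1 / n1 - p2 / n2)

# Toy biased model: it applies a stricter score threshold to group "B",
# so group "A" applicants are approved at a higher rate.
def biased_model(rec):
    threshold = 50 if rec["group"] == "A" else 70
    return 1 if rec["score"] >= threshold else 0

random.seed(0)
records = [{"group": random.choice("AB"), "score": random.randint(0, 100)}
           for _ in range(10_000)]
gap = audit_demographic_parity(biased_model, records, lambda r: r["group"])
print(f"demographic parity gap: {gap:.2f}")
```

Because the audit treats `predict` as an opaque function, the same routine applies to any deployed system reachable via an API; more sophisticated auditors (causal or counterfactual) additionally perturb the queries rather than sampling passively.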


Author Biographies

Priya Patel, Department of Computer Science, Indian Institute of Technology Delhi (IITD), India

Mohd Nasim Uddin, Teesside University, England