Bias in Machine Learning Algorithms

Indrani Sharma
Bhimraj Rathodiya

Abstract

Bias in machine learning algorithms has emerged as a critical concern, casting doubt on the perceived objectivity and fairness of these systems. This paper examines the multifaceted landscape of biases inherent in machine learning models, exploring their origins, manifestations, implications, and potential remedies. The investigation begins by identifying the sources of bias that arise at each stage of the machine learning pipeline, including data collection, feature selection, algorithmic design, and human intervention, and shows how biases, whether implicit in historical data or inadvertently introduced, can perpetuate societal inequalities, reinforce stereotypes, and produce discriminatory outcomes. The paper then examines the manifestations of bias in domains where machine learning algorithms wield substantial influence, such as healthcare, criminal justice, finance, and employment, highlighting cases in which biased models lead to unequal treatment, exacerbate societal disparities, and compromise ethical standards. The study further explores the challenges of detecting, measuring, and mitigating bias, surveying the fairness metrics, algorithmic transparency techniques, and debiasing strategies that aim to promote fairness, accountability, and transparency in algorithmic decision-making. Finally, the paper underscores the ethical imperatives of bias mitigation, emphasizing the need for interdisciplinary collaboration, ethical guidelines, and regulatory frameworks, and advocates a holistic approach that combines technical advances with ethical considerations to steer machine learning algorithms toward equitable and socially responsible outcomes.
In conclusion, bias in machine learning algorithms represents a multifaceted challenge, necessitating a concerted effort from researchers, policymakers, and practitioners. Addressing bias requires not only technical innovations but also ethical scrutiny, transparency, and a commitment to promoting fairness and inclusivity in algorithmic systems.
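As a concrete illustration of the fairness metrics and debiasing strategies surveyed above, the sketch below computes two standard group-fairness measures, the demographic parity gap and the equalized odds gap, on synthetic data, and then derives reweighing-style sample weights that make the protected attribute and the label statistically independent in the training set. This is a minimal sketch, not material from the paper: the data, score shift, and decision threshold are placeholder assumptions, and only the metric definitions follow standard usage in the fairness literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a real pipeline: a binary protected attribute,
# labels with group-dependent base rates, and predictions from scores
# that carry a group-dependent shift (all placeholder assumptions).
n = 10_000
group = rng.integers(0, 2, size=n)  # protected attribute, 0 or 1
y_true = (rng.uniform(size=n) < 0.4 + 0.2 * group).astype(int)
y_pred = (rng.uniform(size=n) + 0.15 * group > 0.6).astype(int)

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in true- or false-positive rate."""
    gaps = []
    for label in (0, 1):  # label 1 gives the TPR gap, label 0 the FPR gap
        m = y_true == label
        gaps.append(abs(y_pred[m & (group == 1)].mean()
                        - y_pred[m & (group == 0)].mean()))
    return max(gaps)

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"Equalized odds gap:     {equalized_odds_gap(y_true, y_pred, group):.3f}")

# Reweighing-style preprocessing: weight each (group, label) cell by
# expected mass under independence / observed mass, so a learner trained
# with these instance weights sees group and label as independent.
weights = np.empty(n)
for g in (0, 1):
    for y in (0, 1):
        cell = (group == g) & (y_true == y)
        weights[cell] = (group == g).mean() * (y_true == y).mean() / cell.mean()
```

In practice, these weights would be passed to any learner that accepts per-sample weights (for example, the `sample_weight` argument common to scikit-learn estimators), after which the same gap metrics can be recomputed to check that the disparities have narrowed.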

How to Cite
Sharma, I., & Rathodiya, B. (2019). Bias in Machine Learning Algorithms. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 10(2), 1158–1161. https://doi.org/10.61841/turcomat.v10i2.14387
