Transfer Learning: Leveraging Knowledge Across Domains in AI

Shweta Sharma
Kaushal Sharma Arrawatia
Bhawana Asthana

Abstract

Transfer learning is a key paradigm in artificial intelligence, allowing knowledge gained in one domain to improve learning and performance in another. This paper delves into the fundamental concepts and applications of transfer learning, elucidating its role in reducing reliance on large labeled datasets while accelerating model training. The research covers several mechanisms, including feature extraction, fine-tuning, and domain adaptation, emphasizing their importance in leveraging prior knowledge across disparate domains. The paper examines the complexities of transfer learning, shedding light on its benefits in improving model performance, increasing efficiency, and finding significant applications in fields ranging from computer vision to natural language processing. Furthermore, the paper examines the difficulties associated with transfer learning, including domain shift, potential negative transfer, and the risk of overfitting, as well as ethical concerns regarding biases inherited from source domains. It concludes with a comprehensive review of recent advances, ongoing research trends, and potential ethical implications, resulting in a complete understanding of the role of transfer learning in AI and its promising trajectory for future advancements.
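The abstract names feature extraction, fine-tuning, and domain adaptation as the main transfer-learning mechanisms. As an illustration only, and not code from the paper itself, the sketch below shows the feature-extraction approach using PyTorch and torchvision: a backbone pretrained on ImageNet (the source domain) is frozen, and a new classification head is trained on the target task. NUM_TARGET_CLASSES is a hypothetical placeholder for the size of the target label set.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative sketch only: load a backbone pretrained on ImageNet
# (the source domain) via torchvision's weights API.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Feature extraction: freeze the pretrained backbone so its learned
# representations are reused unchanged on the target domain.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head to match the target task.
# NUM_TARGET_CLASSES is a placeholder, not a value from the paper.
NUM_TARGET_CLASSES = 10
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# Only the new head is optimized. For fine-tuning instead, unfreeze
# some or all backbone layers and use a smaller learning rate so the
# pretrained weights are adapted rather than overwritten.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

Fine-tuning differs from this sketch only in which parameters are left trainable; domain adaptation methods additionally align source and target feature distributions and are not shown here.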


How to Cite
Sharma, S., Arrawatia, K. S., & Asthana, B. (2019). Transfer Learning: Leveraging Knowledge Across Domains in AI. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 10(2), 1167–1170. https://doi.org/10.61841/turcomat.v10i2.14389
