Online Multi-Object Tracking in Videos Based on Features Detected by YOLO
Abstract
With the rapid development of applications that rely on multi-object detection and tracking, significant attention has been directed toward improving the performance of these methods. Recently, Artificial Neural Networks (ANNs) have shown outstanding performance in many applications, and object detection and tracking are no exception. In this paper, we propose a new object tracking method based on descriptors extracted using the convolutional filters of the YOLOv3 neural network. As these features are already computed during the detection phase, the proposed method exploits them to produce efficient and robust descriptors at little extra cost. Compared to state-of-the-art methods, the proposed method produces better predictions with less computation. The evaluation results show that it processes an average of 207.6 frames per second while tracking objects with 67.6% Multi-Object Tracking Accuracy (MOTA) and 89.1% Multi-Object Tracking Precision (MOTP).
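The core idea in the abstract, reusing convolutional activations computed during detection as appearance descriptors for data association, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the average-pooling of the feature map inside each box, the cosine-similarity measure, and the greedy matching scheme are all assumptions, since the abstract does not specify the exact descriptor or association method.

```python
import numpy as np

def extract_descriptor(feature_map, box):
    """Average-pool conv activations inside a detection box (hypothetical
    stand-in for pooling a YOLOv3 feature map), then L2-normalise.

    feature_map: (H, W, C) array; box: (x1, y1, x2, y2) in feature-map coords.
    """
    x1, y1, x2, y2 = box
    desc = feature_map[y1:y2, x1:x2, :].mean(axis=(0, 1))
    return desc / (np.linalg.norm(desc) + 1e-8)

def match_detections(track_descs, det_descs, threshold=0.5):
    """Greedily associate existing tracks with new detections by cosine
    similarity (descriptors are unit-norm, so a dot product suffices)."""
    sims = track_descs @ det_descs.T
    matches, used = [], set()
    # Process tracks in order of their best available similarity.
    for t in np.argsort(-sims.max(axis=1)):
        for d in np.argsort(-sims[t]):
            if d not in used and sims[t, d] >= threshold:
                matches.append((int(t), int(d)))
                used.add(int(d))
                break
    return matches

# Toy example: two well-separated activation regions on one feature map.
fm = np.zeros((8, 8, 4))
fm[0:4, 0:4, 0] = 1.0
fm[4:8, 4:8, 1] = 1.0
d_a = extract_descriptor(fm, (0, 0, 4, 4))
d_b = extract_descriptor(fm, (4, 4, 8, 8))
# Detections arrive in swapped order; matching recovers the identities.
matches = match_detections(np.stack([d_a, d_b]), np.stack([d_b, d_a]))
print(matches)  # → [(0, 1), (1, 0)]
```

A production tracker would typically replace the greedy loop with an optimal assignment (e.g. the Hungarian algorithm) and combine appearance similarity with motion cues, but the sketch captures why reusing detection-phase features keeps the per-frame cost low.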
Article Details
Licensing
TURCOMAT publishes articles under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This license permits any use of the work, provided the original author(s) and source are credited, thereby facilitating the free exchange and use of research for the advancement of knowledge.
Detailed Licensing Terms
Attribution (BY): Users must give appropriate credit, provide a link to the license, and indicate if changes were made. Users may do so in any reasonable manner, but not in any way that suggests the licensor endorses them or their use.
No Additional Restrictions: Users may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.