Application of an Improved Focal Loss in Vehicle Detection

aut.relation.conference: ICAISC: 19th International Conference on Artificial Intelligence and Soft Computing
aut.relation.endpage: 123
aut.relation.startpage: 114
aut.relation.volume: 12415 LNAI
aut.researcher: Kasabov, Nikola
dc.contributor.author: He, X
dc.contributor.author: Yang, J
dc.contributor.author: Kasabov, N
dc.contributor.editor: Rutkowski, L
dc.contributor.editor: Scherer, R
dc.contributor.editor: Korytkowski, M
dc.contributor.editor: Pedrycz, W
dc.contributor.editor: Tadeusiewicz, R
dc.contributor.editor: Zurada, JM
dc.date.accessioned: 2022-02-04T02:49:39Z
dc.date.available: 2022-02-04T02:49:39Z
dc.date.copyright: 2020
dc.date.issued: 2020
dc.description.abstract: Object detection is an important and fundamental task in computer vision. Recently, the emergence of deep neural networks has brought considerable progress in object detection. Deep neural network object detectors can be grouped into two broad categories: two-stage detectors and one-stage detectors. One-stage detectors are faster than two-stage detectors, but they suffer from a severe foreground-background class imbalance during training, which lowers their accuracy. RetinaNet is a one-stage detector with a novel loss function, named Focal Loss, which reduces the effect of class imbalance; as a result, RetinaNet outperforms both two-stage and one-stage detectors in terms of accuracy. The main idea of focal loss is to add a modulating factor to the cross-entropy loss that down-weights the loss of easy examples during training and thus focuses training on the hard examples. However, cross-entropy loss considers only the loss of the ground-truth classes and therefore gains no loss feedback from the false classes, so it does not achieve the best convergence. In this paper, we propose a new loss function named Dual Cross-Entropy Focal Loss, which improves on focal loss. Dual cross-entropy focal loss adds a modulating factor to the dual cross-entropy loss to focus training on the hard samples. Dual cross-entropy loss is an improved variant of cross-entropy loss that gains loss feedback from both the ground-truth classes and the false classes. We replaced the focal loss in RetinaNet with our dual cross-entropy focal loss and performed experiments on a small vehicle dataset. The experimental results show that our new loss function improves vehicle detection performance.
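For illustration, the standard focal loss that the abstract builds on can be sketched in a few lines of Python. The snippet below implements FL(p_t) = -(1 - p_t)^gamma * log(p_t), where p_t is the model's probability for the ground-truth class, and compares it with plain cross-entropy on one easy and one hard example; the function names and the gamma = 2 setting are illustrative choices, and the paper's own dual cross-entropy focal loss formula is not reproduced here.

```python
import math

def cross_entropy(p_t):
    """Plain cross-entropy loss on the ground-truth class: -log(p_t)."""
    return -math.log(p_t)

def focal_loss(p_t, gamma=2.0):
    """Focal loss: cross-entropy rescaled by the modulating factor (1 - p_t)^gamma.
    Easy examples (p_t near 1) get a factor near 0; hard ones keep most of the loss."""
    return ((1.0 - p_t) ** gamma) * cross_entropy(p_t)

# Easy example (p_t = 0.9): the modulating factor (1 - 0.9)^2 = 0.01
# down-weights its loss by a factor of 100 relative to cross-entropy.
easy_ce, easy_fl = cross_entropy(0.9), focal_loss(0.9)

# Hard example (p_t = 0.1): the factor (1 - 0.1)^2 = 0.81
# keeps most of the cross-entropy loss, so training focuses here.
hard_ce, hard_fl = cross_entropy(0.1), focal_loss(0.1)
```

With gamma = 0 the modulating factor is 1 and focal loss reduces to ordinary cross-entropy, which is why it is usually described as a rectified cross-entropy loss.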
dc.identifier.citation: In: Rutkowski L., Scherer R., Korytkowski M., Pedrycz W., Tadeusiewicz R., Zurada J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2020. Lecture Notes in Computer Science, vol 12415. Springer, Cham. https://doi.org/10.1007/978-3-030-61401-0_11
dc.identifier.doi: 10.1007/978-3-030-61401-0_11
dc.identifier.isbn: 9783030614003
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.uri: https://hdl.handle.net/10292/14878
dc.publisher: Springer
dc.relation.uri: https://link.springer.com/chapter/10.1007%2F978-3-030-61401-0_11
dc.rights: An author may self-archive an author-created version of his/her article on his/her own website and/or in his/her institutional repository. He/she may also deposit this version in his/her funder's or funder's designated repository at the funder's request or as a result of a legal obligation, provided it is not made publicly available until 12 months after official publication. He/she may not use the publisher's PDF version, which is posted on www.springerlink.com, for the purpose of self-archiving or deposit. Furthermore, the author may only post his/her version provided acknowledgement is given to the original source of publication and a link is inserted to the published article on Springer's website. The link must be accompanied by the following text: "The final publication is available at www.springerlink.com". (Please also see Publisher's Version and Citation.)
dc.rights.accessrights: OpenAccess
dc.subject: Focal Loss; Class Imbalance; Cross-Entropy Loss; RetinaNet; Vehicle Detection; Object Detection; Deep Neural Network
dc.title: Application of an Improved Focal Loss in Vehicle Detection
dc.type: Conference Contribution
pubs.elements-id: 396002
pubs.organisational-data: /AUT
pubs.organisational-data: /AUT/Faculty of Design & Creative Technologies
pubs.organisational-data: /AUT/PBRF
pubs.organisational-data: /AUT/PBRF/PBRF Design and Creative Technologies
pubs.organisational-data: /AUT/PBRF/PBRF Design and Creative Technologies/PBRF ECMS
Files
Original bundle
Name: Jie Yang et al ICAIS2020:8897.pdf
Size: 383.36 KB
Format: Adobe Portable Document Format
Description: Conference contribution
License bundle
Name: AUT Grant of Licence for Tuwhera Jun 2021.pdf
Size: 360.95 KB
Format: Adobe Portable Document Format