Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/17147
DC Field: Value (Language)
dc.contributor.author: Madjarov, Gjorgji (en_US)
dc.contributor.author: Kocev, Dragi (en_US)
dc.contributor.author: Gjorgjevikj, Dejan (en_US)
dc.contributor.author: Džeroski, Sašo (en_US)
dc.date.accessioned: 2022-03-29T12:24:00Z
dc.date.available: 2022-03-29T12:24:00Z
dc.date.issued: 2012-09
dc.identifier.uri: http://hdl.handle.net/20.500.12188/17147
dc.description.abstract: Multi-label learning has received significant attention in the research community over the past few years: this has resulted in the development of a variety of multi-label learning methods. In this paper, we present an extensive experimental comparison of 12 multi-label learning methods using 16 evaluation measures over 11 benchmark datasets. We selected the competing methods based on their previous usage by the community, the representation of different groups of methods and the variety of basic underlying machine learning methods. Similarly, we selected the evaluation measures to be able to assess the behavior of the methods from a variety of viewpoints. In order to make conclusions independent of the application domain, we use 11 datasets from different domains. Furthermore, we compare the methods by their efficiency in terms of time needed to learn a classifier and time needed to produce a prediction for an unseen example. We analyze the results from the experiments using Friedman and Nemenyi tests for assessing the statistical significance of differences in performance. The results of the analysis show that for multi-label classification the best performing methods overall are random forests of predictive clustering trees (RF-PCT) and hierarchy of multi-label classifiers (HOMER), followed by binary relevance (BR) and classifier chains (CC). Furthermore, RF-PCT exhibited the best performance according to all measures for multi-label ranking. The recommendation from this study is that when new methods for multi-label learning are proposed, they should be compared to RF-PCT and HOMER using multiple evaluation measures. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Elsevier BV (en_US)
dc.relation.ispartof: Pattern Recognition (en_US)
dc.subject: Multi-label ranking (en_US)
dc.subject: Multi-label classification (en_US)
dc.subject: Comparison of multi-label learning methods (en_US)
dc.title: An extensive experimental comparison of methods for multi-label learning (en_US)
dc.type: Journal Article (en_US)
dc.identifier.doi: 10.1016/j.patcog.2012.03.004
dc.identifier.url: https://api.elsevier.com/content/article/PII:S0031320312001203?httpAccept=text/xml
dc.identifier.url: https://api.elsevier.com/content/article/PII:S0031320312001203?httpAccept=text/plain
dc.identifier.volume: 45
dc.identifier.issue: 9
item.grantfulltext: open
item.fulltext: With Fulltext
crisitem.author.dept: Faculty of Computer Science and Engineering
crisitem.author.dept: Faculty of Computer Science and Engineering
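
The abstract above names binary relevance (BR) among the compared methods: BR decomposes a multi-label problem into one independent binary classification problem per label. The Python sketch below is a minimal illustration of a BR-style baseline on synthetic data, using scikit-learn with a random-forest base learner; it is an assumed, simplified setup, not the implementation or the benchmark datasets evaluated in the paper, and it is not RF-PCT.

# Minimal sketch of a binary relevance (BR) baseline with scikit-learn.
# Assumptions: synthetic data and a random-forest base learner chosen for
# illustration only; the paper's own experiments used other tools and data.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import hamming_loss, label_ranking_average_precision_score

# Synthetic multi-label data: 1000 examples, 20 features, 5 labels.
X, Y = make_multilabel_classification(n_samples=1000, n_features=20,
                                      n_classes=5, random_state=0)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3,
                                                    random_state=0)

# Binary relevance: one independent binary classifier per label.
br = MultiOutputClassifier(RandomForestClassifier(n_estimators=100,
                                                  random_state=0))
br.fit(X_train, Y_train)

# Bipartition-based (classification) evaluation.
Y_pred = br.predict(X_test)
print("Hamming loss:", hamming_loss(Y_test, Y_pred))

# Ranking-based evaluation from per-label positive-class probabilities
# (assumes both classes of every label appear in the training split).
scores = np.column_stack([p[:, 1] for p in br.predict_proba(X_test)])
print("Label ranking average precision:",
      label_ranking_average_precision_score(Y_test, scores))

The two printed scores mirror the two groups of measures discussed in the abstract: a bipartition-based classification measure (Hamming loss) and a ranking-based measure (label ranking average precision).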
Appears in Collections:Faculty of Computer Science and Engineering: Journal Articles
Files in This Item:
File: PR_2012.pdf (707.58 kB, Adobe PDF)