Evaluating prediction systems in software project estimation

aut.relation.endpage: 827
aut.relation.issue: 8
aut.relation.startpage: 820
aut.relation.volume: 54
aut.researcher: MacDonell, Stephen Gerard
dc.contributor.author: Shepperd, M
dc.contributor.author: MacDonell, SG
dc.date.accessioned: 2012-06-13T23:10:52Z
dc.date.available: 2012-06-13T23:10:52Z
dc.date.copyright: 2012
dc.date.issued: 2012
dc.description.abstract:
Context: Software engineering has a problem in that when we empirically evaluate competing prediction systems we obtain conflicting results.
Objective: To reduce the inconsistency amongst validation study results and provide a more formal foundation to interpret results, with a particular focus on continuous prediction systems.
Method: A new framework is proposed for evaluating competing prediction systems based upon (1) an unbiased statistic, Standardised Accuracy, (2) testing the result likelihood relative to the baseline technique of random ‘predictions’, that is, guessing, and (3) calculation of effect sizes.
Results: Previously published empirical evaluations of prediction systems are re-examined and the original conclusions shown to be unsafe. Additionally, even the strongest results are shown to have no more than a medium effect size relative to random guessing.
Conclusions: Biased accuracy statistics such as MMRE are deprecated. By contrast, this new empirical validation framework leads to meaningful results. Such steps will assist in performing future meta-analyses and in providing more robust and usable recommendations to practitioners.
dc.identifier.citation: Information and Software Technology, vol. 54(8), pp. 820-827
dc.identifier.doi: 10.1016/j.infsof.2011.12.008
dc.identifier.uri: https://hdl.handle.net/10292/4423
dc.publisher: Elsevier
dc.relation.uri: http://dx.doi.org/10.1016/j.infsof.2011.12.008
dc.rights: Copyright © 2012 Elsevier Ltd. All rights reserved. This is the author’s version of a work that was accepted for publication in (see Citation). Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. The definitive version was published in (see Citation). The original publication is available at (see Publisher's Version).
dc.rights.accessrights: OpenAccess
dc.title: Evaluating prediction systems in software project estimation
dc.type: Journal Article
pubs.elements-id: 113493
pubs.organisational-data: /AUT
pubs.organisational-data: /AUT/Design & Creative Technologies
pubs.organisational-data: /AUT/Design & Creative Technologies/School of Computing & Mathematical Science
pubs.organisational-data: /AUT/PBRF Researchers
pubs.organisational-data: /AUT/PBRF Researchers/Design & Creative Technologies PBRF Researchers
pubs.organisational-data: /AUT/PBRF Researchers/Design & Creative Technologies PBRF Researchers/DCT C & M Computing
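The evaluation framework summarised in the abstract can be sketched in a few lines of code. The Python below is illustrative only: it assumes the paper's definitions of Standardised Accuracy, SA = 1 - MAR / MAR_P0 (where MAR is the mean absolute residual of the prediction system and MAR_P0 its value under random guessing, estimated here by Monte Carlo), and of the effect size as Glass's delta, (MAR_P0 - MAR) / s_P0. The function names, the number of runs, and the sample data are assumptions for demonstration, not taken from the paper.

import numpy as np

def mar(actual, predicted):
    # Mean absolute residual: mean of |y_i - yhat_i| over all cases.
    return np.mean(np.abs(np.asarray(actual, float) - np.asarray(predicted, float)))

def random_guessing_mars(actual, runs=1000, seed=None):
    # Baseline P0: each case is "predicted" by the actual value of a randomly
    # chosen *other* case; repeated to estimate MAR under pure guessing.
    rng = np.random.default_rng(seed)
    y = np.asarray(actual, float)
    n = len(y)
    mars = np.empty(runs)
    for k in range(runs):
        # An offset in [1, n-1] guarantees the donor case differs from the target.
        donors = (np.arange(n) + rng.integers(1, n, size=n)) % n
        mars[k] = np.mean(np.abs(y - y[donors]))
    return mars

def evaluate(actual, predicted, runs=1000, seed=None):
    # Standardised Accuracy (as a proportion) and effect size (Glass's delta)
    # of a prediction system relative to random guessing.
    mar_p = mar(actual, predicted)
    baseline = random_guessing_mars(actual, runs, seed)
    sa = 1.0 - mar_p / baseline.mean()
    delta = (baseline.mean() - mar_p) / baseline.std(ddof=1)
    return sa, delta

# Hypothetical effort data (person-months): actuals and one system's predictions.
actual = [12.0, 8.0, 30.0, 22.0, 15.0, 9.0]
predicted = [10.0, 9.0, 25.0, 20.0, 18.0, 11.0]
sa, delta = evaluate(actual, predicted, seed=1)
print(f"SA = {sa:.1%}, effect size vs. guessing = {delta:.2f}")

On this reading, SA near zero means the system does little better than guessing, and conventional thresholds of roughly 0.2, 0.5 and 0.8 give a small, medium or large interpretation of the effect size.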
Files
Original bundle: Shepperd and MacDonell (2012) I&ST.pdf (278.37 KB, Adobe Portable Document Format)
License bundle: licence.htm (29.98 KB, unknown data format)