dc.contributor.author	Gray, AR
dc.contributor.author	MacDonell, SG
dc.contributor.author	Shepperd, MJ
dc.date.accessioned	2011-10-01T07:17:36Z
dc.date.available	2011-10-01T07:17:36Z
dc.date.copyright	1999-11-04
dc.date.issued	2011-10-01
dc.identifier.citation	Proceedings of the Sixth International Software Metrics Symposium (METRICS'99), p. 216
dc.identifier.isbn	0-7695-0403-5
dc.identifier.uri	http://hdl.handle.net/10292/2180
dc.description.abstract	Estimation of project development effort is most often performed by expert judgment rather than by using an empirically derived model (although such a model may be used by the expert to assist their decision). One question that can be asked about these estimates is how stable they are with respect to characteristics of the development process and product. This stability can be assessed in relation to the degree to which the project has advanced over time, the type of module for which the estimate is being made, and the characteristics of that module. In this paper we examine a set of expert-derived estimates of the effort required to develop a collection of modules from a large health-care system. Statistical tests are used to identify relationships between the type (screen or report) and characteristics of modules and the likelihood of the associated development effort being underestimated, approximately correct, or overestimated. Distinct relationships are found, suggesting that the estimation process examined was not unbiased with respect to such characteristics. This is a potentially useful finding in that it provides an opportunity for estimators to improve their prediction performance.
dc.publisher	IEEE Computer Society
dc.rights	Copyright © 1999 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject	Computer errors
dc.subject	Electrical capacitance tomography
dc.subject	Hip
dc.subject	Information science
dc.subject	Programming
dc.subject	Read only memory
dc.subject	Software measurement
dc.subject	Software metrics
dc.subject	Stability
dc.subject	Testing
dc.title	Factors systematically associated with errors in subjective estimates of software development effort: the stability of expert judgment
dc.type	Conference Contribution
dc.rights.accessrights	OpenAccess
dc.identifier.doi	10.1109/METRIC.1999.809743
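The abstract above describes testing for associations between module type (screen or report) and the category of the estimation error (underestimated, approximately correct, overestimated). The record does not name the specific statistical tests used, so the following is only a minimal sketch of one standard choice for such categorical data, a chi-square test of independence, using a hypothetical contingency table rather than the paper's data.

```python
# Sketch of the kind of analysis the abstract describes: a chi-square
# test of independence between module type and estimation-error category.
# The counts below are invented for illustration; they are NOT from the paper.
from scipy.stats import chi2_contingency

# Hypothetical contingency table:
# rows = module types (screen, report);
# columns = error categories (under, approximately correct, over).
table = [
    [30, 45, 25],  # screen modules
    [18, 20, 42],  # report modules
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value would indicate that the error category is not independent
# of module type, i.e. the estimates are systematically biased with respect
# to that module characteristic.
```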

