I like ROCs: they allow assessing (for example) the quality of a diagnostic test without having to decide (yet) on the threshold between “normal” and “pathologic”. They even boil it down to a single number, the AUC (area under the curve). ROC curves have been known since World War II, when they were developed for radar interpretation, and Swets wrote a pertinent seminal paper in 1973. In 1993 I used them for analysis but, strange in hindsight, did not place them into the paper, showing a combined sensitivity-specificity figure instead. It took me until 2001 to publish ROC curves (for the Pattern-ERG) to detect glaucoma. I have applied ROCs often since, always programming the algorithm myself (initially in Excel, then in Igor Pro, and in the last few years in R). That is, until I found the paper “pROC: an open-source package for R and S+ to analyze and compare ROC curves” by Xavier Robin et al.
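For readers who want to see what “programming the algorithm myself” can amount to, here is a minimal sketch of computing the AUC by hand in base R, using the well-known equivalence between the AUC and the Mann-Whitney U statistic; the function name and the example data are my own inventions for illustration:

```r
# Hand-rolled AUC via the Mann-Whitney equivalence:
# the AUC equals the probability that a randomly chosen case
# scores higher than a randomly chosen control (ties count half).
auc_by_hand <- function(controls, cases) {
  # Compare every case value against every control value
  cmp <- outer(cases, controls, ">") + 0.5 * outer(cases, controls, "==")
  mean(cmp)
}

# Made-up measurement vectors for two groups
set.seed(1)
controls <- rnorm(50, mean = 0)
cases    <- rnorm(50, mean = 1)
auc_by_hand(controls, cases)
```

A perfect discriminator yields 1, a useless one about 0.5; packages like pROC compute the same quantity from the trapezoidal area under the ROC trace.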


They also offer pROC as an R package on CRAN: great, thanks! Among its many advantages are confidence intervals for the AUCs, significance tests between several discriminators, and confidence regions around the traces; my own bootstrapping took much longer than theirs. If you want to follow the bootstrapping progress in the console, set an option like the following:

options(pROCProgress=list(name="text", width=NA, char=".", style=3))
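To make the above concrete, here is a minimal sketch of a pROC session covering the features mentioned in the text: building a ROC curve, its AUC with a confidence interval, and a significance test between two discriminators. It assumes the pROC package is installed; the data and variable names are made up for illustration:

```r
library(pROC)

# Hypothetical data: 0 = normal, 1 = pathologic, plus two candidate tests
set.seed(42)
outcome    <- rep(c(0, 1), each = 60)
test_good  <- outcome + rnorm(120, sd = 1)   # discriminates fairly well
test_noisy <- outcome + rnorm(120, sd = 3)   # discriminates poorly

r1 <- roc(outcome, test_good)    # build the ROC curve
r2 <- roc(outcome, test_noisy)

auc(r1)            # area under the curve
ci.auc(r1)         # confidence interval for the AUC
roc.test(r1, r2)   # compare the two discriminators (DeLong test by default)
```

With the `pROCProgress` option set as above, the bootstrap-based computations print a text progress bar in the console.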