Waldmann, Patrik
- Department of Animal Biosciences, Swedish University of Agricultural Sciences
Research article · 2019 · Peer reviewed · Open access
The large number of markers in genome-wide prediction demands methods with regularization and model comparison based on some hold-out test prediction error measure. In quantitative genetics, it is common practice to calculate the squared Pearson correlation coefficient (r²) as a standardized measure of the predictive accuracy of a model. Based on arguments from bias-variance trade-off theory in statistical learning, we show that shrinkage of the regression coefficients (i.e., QTL effects) reduces the prediction mean squared error (MSE) by introducing model bias compared with the ordinary least squares method. We also show that the LASSO and the adaptive LASSO (ALASSO) can reduce the model bias and prediction MSE by adding model variance. In an application of ridge regression, the LASSO, and the ALASSO to a simulated example with 9,723 SNPs and 3,226 individuals, the LASSO was selected as the best model when r² was used as the measure. However, when model selection was based on the test MSE and the coefficient of determination R², the ALASSO proved to be the best method. Hence, use of r² may lead to selection of the wrong model, and therefore also to nonoptimal ranking of phenotype predictions and genomic breeding values. Instead, we propose use of the test MSE for model selection and R² as a standardized measure of accuracy.
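To make the distinction concrete, the following is a minimal Python sketch, not code from the paper: the simulated genotype matrix, the Lasso penalty (alpha=0.1), the train/test split, and the 0.5 rescaling factor are all illustrative assumptions. It shows the abstract's central point that Pearson r² is invariant to a linear rescaling of the predictions, so it cannot register the bias that shrinkage introduces, whereas the test MSE and R² both penalize that bias directly (R² = 1 − MSE / Var(y_test)).

```python
# Minimal sketch: Pearson r^2 vs. test MSE and R^2 under shrinkage.
# All data and tuning values below are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n, p = 300, 1000                                      # individuals, markers (toy scale)
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # 0/1/2 genotype codes
beta = np.zeros(p)
beta[:20] = rng.normal(0, 0.5, 20)                    # 20 causal QTL effects
y = X @ beta + rng.normal(0, 1.0, n)

X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]

model = Lasso(alpha=0.1).fit(X_tr, y_tr)              # shrinkage biases coefficients
y_hat = model.predict(X_te)

# Pearson r^2 is unchanged by any positive linear rescaling of y_hat,
# so it is blind to the downward bias of the shrunken predictions ...
r, _ = pearsonr(y_te, y_hat)
r_scaled, _ = pearsonr(y_te, 0.5 * y_hat)             # identical correlation
r2_pearson, r2_pearson_scaled = r**2, r_scaled**2

# ... whereas test MSE and the coefficient of determination R^2
# degrade when the predictions are biased or miscalibrated.
mse = mean_squared_error(y_te, y_hat)
R2 = r2_score(y_te, y_hat)
R2_scaled = r2_score(y_te, 0.5 * y_hat)               # drops, unlike Pearson r^2

print(f"Pearson r^2: {r2_pearson:.3f} (rescaled: {r2_pearson_scaled:.3f})")
print(f"test MSE: {mse:.3f}, R^2: {R2:.3f} (rescaled: {R2_scaled:.3f})")
```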
genomic selection; model comparison; accuracy; bias-variance trade-off; coefficient of determination
Frontiers in Genetics
2019, Volume: 10, article number: 899
Publisher: Frontiers Media SA
Genetics
DOI: https://doi.org/10.3389/fgene.2019.00899
https://res.slu.se/id/publ/102189