# If K is small in a K-fold cross-validation, is the bias in the estimate of out-of-sample (test set) accuracy smaller or bigger? If K is small, is the variance in the estimate of out-of-sample (test set) accuracy smaller or bigger? Is K large or small in leave-one-out cross-validation?
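The standard answer: with small K, each model is trained on a smaller fraction of the data, so the estimate of out-of-sample accuracy is *more biased* (typically pessimistic), but its *variance is smaller*; leave-one-out cross-validation is the extreme of *large* K, namely K = n. The mechanics can be sketched in a few lines. The quiz itself is in R, but the toy below uses only standard-library Python, with a trivial mean-predictor standing in for a real model (all names here are illustrative, not from the course):

```python
import random
import statistics

def kfold_splits(n, k, seed=0):
    """Shuffle indices 0..n-1 and cut them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cv_error(y, k):
    """Estimate out-of-sample squared error of a mean-predictor via k-fold CV.

    Each fold is held out once; the "model" (here just the training mean)
    is fit on the other k-1 folds and scored on the held-out fold.
    """
    folds = kfold_splits(len(y), k)
    fold_errors = []
    for fold in folds:
        held_out = set(fold)
        train = [y[i] for i in range(len(y)) if i not in held_out]
        mu = statistics.mean(train)  # fit on k-1 folds
        fold_errors.append(statistics.mean((y[i] - mu) ** 2 for i in fold))
    return statistics.mean(fold_errors)

random.seed(42)
y = [random.gauss(0.0, 1.0) for _ in range(50)]
err_small_k = cv_error(y, 2)       # small K: small training sets -> more bias
err_loocv = cv_error(y, len(y))    # leave-one-out: K = n, the largest K
```

With K = 2 each model sees only half the data, which exaggerates the error of the final model trained on all of it; with K = n every training set is nearly the full sample, so the bias is minimal, at the cost of higher variance and n model fits.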


Practice More Questions From: Quiz 3

## Q:

Load the South Africa Heart Disease Data and create training and test sets with the following code:

```r
library(ElemStatLearn)
data(SAheart)
set.seed(8484)
train = sample(1:dim(SAheart)[1], size = dim(SAheart)[1]/2, replace = F)
trainSA = SAheart[train, ]
testSA = SAheart[-train, ]
```

Then set the seed to 13234 and fit a logistic regression model (`method = "glm"`, be sure to specify `family = "binomial"`) with Coronary Heart Disease (`chd`) as the outcome and age at onset, current alcohol consumption, obesity levels, cumulative tobacco, type-A behavior, and low density lipoprotein cholesterol as predictors. Calculate the misclassification rate for your model using this function and a prediction on the "response" scale:

```r
missClass = function(values, prediction) {
  sum(((prediction > 0.5) * 1) != values) / length(values)
}
```

What is the misclassification rate on the training set? What is the misclassification rate on the test set?
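The `missClass()` helper above simply thresholds predicted probabilities at 0.5 and returns the fraction of mismatches with the true 0/1 outcomes. A minimal Python analogue (the names `miss_class`, `truth`, and `probs` are illustrative, not from the course) makes the logic explicit:

```python
def miss_class(values, prediction):
    """Threshold predicted probabilities at 0.5 and return the
    fraction of predicted labels that disagree with the true 0/1 values."""
    labels = [1 if p > 0.5 else 0 for p in prediction]
    return sum(l != v for l, v in zip(labels, values)) / len(values)

# Toy example: true outcomes and model-predicted probabilities.
truth = [1, 0, 1, 1, 0]
probs = [0.9, 0.2, 0.4, 0.8, 0.6]
rate = miss_class(truth, probs)  # two mistakes out of five -> 0.4
```

Note that the quiz asks for predictions on the "response" scale (i.e., probabilities, not the linear predictor) precisely so that the 0.5 cutoff is meaningful.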
