Load the Alzheimer's disease data using the commands:
library(caret)
library(AppliedPredictiveModeling)
set.seed(3433)
data(AlzheimerDisease)
adData = data.frame(diagnosis,predictors)
inTrain = createDataPartition(adData$diagnosis, p = 3/4)[[1]]
training = adData[inTrain, ]
testing = adData[-inTrain, ]
Create a training data set consisting of only the predictors with variable names beginning with IL and the diagnosis. Build two predictive models, one using the predictors as they are and one using PCA with principal components explaining 80% of the variance in the predictors. Use method="glm" in the train function. What is the accuracy of each method in the test set? Which is more accurate?
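
A minimal sketch of one way to do this, assuming the setup code above has already been run: subset the IL predictors with grep(), fit a plain glm model, then fit a second glm model that pre-processes the predictors with PCA at an 80% variance threshold (set via preProcOptions in trainControl()). The code prints the test-set accuracy of each model so they can be compared.

# Keep diagnosis plus the predictors whose names begin with "IL".
il_cols <- grep("^IL", names(training), value = TRUE)
train_il <- training[, c("diagnosis", il_cols)]
test_il <- testing[, c("diagnosis", il_cols)]

# Model 1: logistic regression on the raw IL predictors.
fit_raw <- train(diagnosis ~ ., data = train_il, method = "glm")
pred_raw <- predict(fit_raw, test_il)
confusionMatrix(pred_raw, test_il$diagnosis)$overall["Accuracy"]

# Model 2: the same model on principal components capturing 80% of the variance.
fit_pca <- train(diagnosis ~ ., data = train_il, method = "glm",
                 preProcess = "pca",
                 trControl = trainControl(preProcOptions = list(thresh = 0.8)))
pred_pca <- predict(fit_pca, test_il)
confusionMatrix(pred_pca, test_il$diagnosis)$overall["Accuracy"]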

Practice More Questions From: Quiz 2

Q:

Load the cement data using the commands:

library(AppliedPredictiveModeling)
data(concrete)
library(caret)
set.seed(1000)
inTrain = createDataPartition(mixtures$CompressiveStrength, p = 3/4)[[1]]
training = mixtures[inTrain, ]
testing = mixtures[-inTrain, ]

Make a plot of the outcome (CompressiveStrength) versus the index of the samples. Color by each of the variables in the data set (you may find the cut2() function in the Hmisc package useful for turning continuous covariates into factors). What do you notice in these plots?
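
A minimal sketch of one such plot, assuming the setup above; cut2() comes from the Hmisc package, and the same call can be repeated with each of the other covariates in place of FlyAsh.

library(Hmisc)
library(ggplot2)

# Index of each training sample, used as the x-axis.
training$index <- seq_len(nrow(training))

# Outcome versus index, coloured by one covariate cut into four groups.
ggplot(training, aes(x = index, y = CompressiveStrength,
                     colour = cut2(FlyAsh, g = 4))) +
  geom_point()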

Q:

Load the cement data using the commands:

library(AppliedPredictiveModeling)
data(concrete)
library(caret)
set.seed(1000)
inTrain = createDataPartition(mixtures$CompressiveStrength, p = 3/4)[[1]]
training = mixtures[inTrain, ]
testing = mixtures[-inTrain, ]

Make a histogram and confirm the SuperPlasticizer variable is skewed. Normally you might use the log transform to try to make the data more symmetric. Why would that be a poor choice for this variable?
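
A minimal sketch, assuming the setup above; note the column is spelled Superplasticizer in the mixtures data frame. The histogram shows a strong right skew with many values at exactly zero, which is why log() is a poor choice: log(0) is -Inf, and even log(x + 1) leaves a large spike of identical values, so the distribution stays asymmetric.

# Histogram of the raw variable to see the skew.
hist(training$Superplasticizer, breaks = 30,
     main = "Histogram of Superplasticizer", xlab = "Superplasticizer")

# Count how many training values are exactly zero; these become -Inf under log().
sum(training$Superplasticizer == 0)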

Q:

Load the Alzheimer's disease data using the commands:

library(caret)
library(AppliedPredictiveModeling)
set.seed(3433)
data(AlzheimerDisease)
adData = data.frame(diagnosis, predictors)
inTrain = createDataPartition(adData$diagnosis, p = 3/4)[[1]]
training = adData[inTrain, ]
testing = adData[-inTrain, ]

Q:

Load the Alzheimer's disease data using the commands:

library(caret)
library(AppliedPredictiveModeling)
set.seed(3433)
data(AlzheimerDisease)
adData = data.frame(diagnosis, predictors)
inTrain = createDataPartition(adData$diagnosis, p = 3/4)[[1]]
training = adData[inTrain, ]
testing = adData[-inTrain, ]

Create a training data set consisting of only the predictors with variable names beginning with IL and the diagnosis. Build two predictive models, one using the predictors as they are and one using PCA with principal components explaining 80% of the variance in the predictors. Use method="glm" in the train function. What is the accuracy of each method in the test set? Which is more accurate?
