Coursera: Machine Learning - Andrew Ng (Week 6) Quiz - Advice for Applying Machine Learning
These solutions are for reference only.
Try to solve the quiz on your own first,
but if you get stuck you can refer to these solutions.
There are different sets of questions,
so read each question carefully before marking your answer.
-----------------------------------------------------------------------------------------
EXPLANATION:
This learning curve shows high error on both the training and test sets, so the algorithm is suffering from high bias.
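As an illustration (not part of the quiz), the high-bias pattern can be reproduced numerically with NumPy: fit a plain linear model to a nonlinear target and compute training and cross validation error for growing training set sizes. All data and function names below are invented for this sketch.

```python
import numpy as np

def fit_linear(X, y):
    # Least-squares fit with a bias column
    Xb = np.c_[np.ones(len(X)), X]
    theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return theta

def half_mse(X, y, theta):
    # The (1/2m) * sum-of-squared-errors cost used in the course
    Xb = np.c_[np.ones(len(X)), X]
    return float(np.mean((Xb @ theta - y) ** 2) / 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=100)  # nonlinear target: a straight line underfits

X_train, y_train = X[:60], y[:60]
X_cv, y_cv = X[60:80], y[60:80]

train_err, cv_err = [], []
for m in range(5, 61, 5):
    theta = fit_linear(X_train[:m], y_train[:m])
    train_err.append(half_mse(X_train[:m], y_train[:m], theta))
    cv_err.append(half_mse(X_cv, y_cv, theta))
# High-bias signature: both curves plateau at a similar, high error as m grows.
```

Plotting `train_err` and `cv_err` against m gives the flat, converging learning curve described above; adding more data would not lower the plateau.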
EXPLANATION:
Try adding polynomial features.
Adding polynomial features will worsen the high variance problem.
Use fewer training examples.
Using fewer training examples will worsen the high variance problem.
Try using a smaller set of features.
The gap in errors between training and test suggests a high variance problem in which the algorithm has overfit the training set. Reducing the feature set will ameliorate the overfitting and help with the variance problem.
Get more training examples.
The gap in errors between training and test suggests a high variance problem in which the algorithm has overfit the training set. Adding more training examples makes it harder for the algorithm to overfit the training set, which helps with the variance problem.
Try evaluating the hypothesis on a cross validation set rather than the test set.
A cross validation set is useful for choosing the optimal non-model parameters like the regularization parameter λ, but the train / test split is sufficient for debugging problems with the algorithm itself.
Try decreasing the regularization parameter λ.
The gap in errors between training and test suggests a high variance problem in which the algorithm has overfit the training set. Decreasing the regularization parameter will increase the overfitting, not decrease it.
Try increasing the regularization parameter λ.
The gap in errors between training and test suggests a high variance problem in which the algorithm has overfit the training set. Increasing the regularization parameter will reduce overfitting and help with the variance problem.
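A minimal NumPy sketch of this effect, using a hand-rolled regularized (ridge) normal equation; the data, polynomial degree, and λ values are made up for illustration. With many polynomial features and few examples, λ = 0 overfits: training error is tiny but cross validation error is much larger, and increasing λ closes that gap.

```python
import numpy as np

def poly(x, degree):
    # Polynomial feature map [x, x^2, ..., x^degree] for a 1-D input
    return np.vander(x, degree + 1, increasing=True)[:, 1:]

def ridge_fit(X, y, lam):
    # Regularized normal equation; the intercept term is not penalized
    Xb = np.c_[np.ones(len(X)), X]
    reg = lam * np.eye(Xb.shape[1])
    reg[0, 0] = 0.0
    return np.linalg.solve(Xb.T @ Xb + reg, Xb.T @ y)

def half_mse(X, y, theta):
    Xb = np.c_[np.ones(len(X)), X]
    return float(np.mean((Xb @ theta - y) ** 2) / 2)

rng = np.random.default_rng(0)
x_tr = rng.uniform(-1, 1, 12)
y_tr = x_tr**2 + rng.normal(0, 0.3, 12)   # quadratic target plus noise
x_cv = rng.uniform(-1, 1, 50)
y_cv = x_cv**2 + rng.normal(0, 0.3, 50)

d = 8  # 8 features for 12 examples: prone to overfitting at lambda = 0
errs = {}
for lam in (0.0, 1.0):
    theta = ridge_fit(poly(x_tr, d), y_tr, lam)
    errs[lam] = (half_mse(poly(x_tr, d), y_tr, theta),
                 half_mse(poly(x_cv, d), y_cv, theta))
# errs[0.0]: low train error, much higher CV error (the variance gap);
# raising lambda trades a little train error for a smoother fit that
# typically generalizes better.
```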
EXPLANATION:
Try increasing the regularization parameter λ.
The poor performance on both the training and test sets suggests a high bias problem. Increasing the regularization parameter will force the hypothesis to fit the data even more poorly, decreasing performance on both the training and test sets.
Try decreasing the regularization parameter λ.
Decreasing the regularization parameter will improve the high bias problem and may improve the performance on the training set.
Try evaluating the hypothesis on a cross validation set rather than the test set.
You should not use the cross validation set to evaluate performance on new examples, since it has already been used to set the regularization parameter; its error will be artificially low and will not give a good estimate of the generalization error.
Use fewer training examples.
Using fewer training examples will make the situation worse: it will not solve the high bias problem, and it might introduce a high variance problem as well.
Try adding polynomial features.
The poor performance on both the training and test sets suggests a high bias problem. Adding more complex features will increase the complexity of the hypothesis, thereby improving the fit to both the train and test data.
Try using a smaller set of features.
The poor performance on both the training and test sets suggests a high bias problem. Using fewer features will decrease the complexity of the hypothesis and will make the bias problem worse.
Try to obtain and use additional features.
The poor performance on both the training and test sets suggests a high bias problem. Using additional features will increase the complexity of the hypothesis, thereby improving the fit to both the train and test data.
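To make the "adding polynomial features fixes high bias" point concrete, here is a small NumPy sketch with invented data (not from the quiz): a straight line underfits a quadratic target, while adding an x² feature fits it well.

```python
import numpy as np

def fit_half_mse(X, y):
    # Fit by least squares (with a bias column) and return the training cost
    Xb = np.c_[np.ones(len(X)), X]
    theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return float(np.mean((Xb @ theta - y) ** 2) / 2)

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 50)
y = x**2 + rng.normal(0, 0.1, 50)  # quadratic target

err_linear = fit_half_mse(x[:, None], y)   # high bias: a line cannot fit a parabola
err_poly = fit_half_mse(np.c_[x, x**2], y) # the added polynomial feature fixes the underfit
# err_poly ends up far below err_linear
```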
EXPLANATION:
Suppose you are training a regularized linear regression model. The recommended way to choose what value of regularization parameter to use is to choose the value of λ which gives the lowest test set error.
You should not use the test set to choose the regularization parameter, as you will then have an artificially low value for test error and it will not give a good estimate of generalization error.
Suppose you are training a regularized linear regression model. The recommended way to choose what value of regularization parameter to use is to choose the value of λ which gives the lowest training set error.
You should not use training error to choose the regularization parameter, as you can always improve training error by using less regularization (a smaller value of λ). But too small a value will not generalize well on the test set.
The performance of a learning algorithm on the training set will typically be better than its performance on the test set.
The learning algorithm finds parameters to minimize training set error, so the performance should be better on the training set than the test set.
Suppose you are training a regularized linear regression model. The recommended way to choose what value of regularization parameter to use is to choose the value of which gives the lowest cross validation error.
The cross validation lets us find the “just right” setting of the regularization parameter given the fixed model parameters learned from the training set.
A typical split of a dataset into training, validation and test sets might be 60% training set, 20% validation set, and 20% test set.
This is a good split of the data, as it dedicates the bulk of the data to finding model parameters in training while leaving enough data for cross validation and estimating generalization error.
Suppose you are training a logistic regression classifier using polynomial features and want to select what degree polynomial (denoted d in the lecture videos) to use. After training the classifier on the entire training set, you decide to use a subset of the training examples as a validation set. This will work just as well as having a validation set that is separate (disjoint) from the training set.
The cross validation set should not be a subset of the training set. The training, cross validation, and test sets should come from the same source, but they must be disjoint.
It is okay to use data from the test set to choose the regularization parameter λ, but not the model parameters (θ).
We should not use test set data to choose any of the parameters (regularization and model parameters)
Suppose you are using linear regression to predict housing prices, and your dataset comes sorted in order of increasing sizes of houses. It is then important to randomly shuffle the dataset before splitting it into training, validation and test sets, so that we don’t have all the smallest houses going into the training set, and all the largest houses going into the test set.
We should shuffle the data before splitting it into training / cross validation / test sets.
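A quick sketch of the shuffle-then-split procedure, assuming NumPy; the 60/20/20 fractions match the split discussed above, and the sorted housing data is synthetic.

```python
import numpy as np

def split_dataset(X, y, seed=0):
    """Shuffle, then split 60/20/20 into train / cross validation / test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    X, y = X[idx], y[idx]
    n_tr = int(0.6 * len(X))
    n_cv = int(0.2 * len(X))
    return ((X[:n_tr], y[:n_tr]),
            (X[n_tr:n_tr + n_cv], y[n_tr:n_tr + n_cv]),
            (X[n_tr + n_cv:], y[n_tr + n_cv:]))

# Dataset sorted by house size: without shuffling, the test set
# would contain only the largest houses.
sizes = np.sort(np.random.default_rng(1).uniform(500, 5000, 100))
prices = 100 * sizes + 5000
train, cv, test = split_dataset(sizes, prices)
```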
EXPLANATION:
A model with more parameters is more prone to overfitting and typically has higher variance.
More model parameters increase the model’s complexity, so it can fit the training data more tightly, increasing the chances of overfitting.
If the training and test errors are about the same, adding more features will not help improve the results.
When the training and test errors are about the same, the model is suffering from high bias. Adding more features will help solve a high bias problem.
If a learning algorithm is suffering from high bias, only adding more training examples may not improve the test error significantly.
To solve a high bias problem, adding more features is useful, but adding more training examples won’t help.
If a learning algorithm is suffering from high variance, adding more training examples is likely to improve the test error.
Adding more training data solves the high variance problem.
When debugging learning algorithms, it is useful to plot a learning curve to understand if there is a high bias or high variance problem.
The shape of a learning curve is a good indicator of bias or variance problems with your learning algorithm.
If a neural network has much lower training error than test error, then adding more layers will help bring the test error down because we can fit the test set better.
With much lower training error than test error, the model has high variance. Adding more layers increases model complexity, making the variance problem worse.
-------------------------------------------------------------------------------------
Variation in the 1st question:
This learning curve shows high error on the test set but comparatively low error on the training set, so the algorithm is suffering from high variance.
--------------------------------------------------------------------------------
Reference: Coursera