Coursera: Machine Learning - Andrew Ng (Week 2) Quiz - Linear Regression with Multiple Variables

These solutions are for reference only.

Try to solve the questions on your own first; if you get stuck somewhere, you can refer to these solutions.

There are different sets of questions, so the variations of particular questions are provided at the end.

Read each question carefully before marking your answer.


--------------------------------------------------------------------

Linear Regression with Multiple Variables

TOTAL POINTS 5






EXPLANATION:

The mean of x1 is 81 and the range is 94 − 69 = 25.

So x1(1) = (89 − 81) / 25 = 0.32.
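
A minimal Python sketch of this mean-normalization arithmetic (the values are the four midterm scores from the dataset used in this question, shown in the variations section below; the variable names are just for illustration):

x1 = [89, 72, 94, 69]                          # midterm scores for the four examples
mean_x1 = sum(x1) / len(x1)                    # 81.0
range_x1 = max(x1) - min(x1)                   # 94 - 69 = 25
print(round((x1[0] - mean_x1) / range_x1, 2))  # training example 1: (89 - 81) / 25 = 0.32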




EXPLANATION: We want gradient descent to converge quickly to the minimum, which is exactly what is happening here, so the current setting of α seems to be a good choice.




EXPLANATION: X has m rows and n + 1 columns (+1 because of the x0 = 1 intercept term). y is an m-vector. θ is an (n+1)-vector.

Therefore, X is 14 × 4, y is 14 × 1, and θ is 4 × 1.
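
A quick NumPy sketch (with random placeholder data) that checks these dimensions and evaluates the normal equation θ = (XᵀX)⁻¹Xᵀy:

import numpy as np

m, n = 14, 3                                              # 14 examples, 3 features before adding x0 = 1

X = np.hstack([np.ones((m, 1)), np.random.rand(m, n)])    # 14 x 4 design matrix
y = np.random.rand(m, 1)                                  # 14 x 1 target vector

theta = np.linalg.pinv(X.T @ X) @ X.T @ y                 # normal equation
print(X.shape, y.shape, theta.shape)                      # (14, 4) (14, 1) (4, 1)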









EXPLANATION: With n = 200000 features, the normal equation requires inverting the 200001 × 200001 matrix XᵀX. Inverting such a large matrix is computationally prohibitive, so gradient descent is the better choice.

Therefore the answer is: Gradient descent, since (XᵀX)⁻¹ will be very slow to compute in the normal equation.
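
For reference, a minimal vectorized batch gradient-descent sketch for linear regression, which avoids inverting XᵀX entirely; the data, learning rate, and iteration count here are placeholder assumptions, not values from the quiz:

import numpy as np

def gradient_descent(X, y, alpha=0.01, iters=1000):
    # Batch gradient descent for linear regression.
    m, n = X.shape
    theta = np.zeros((n, 1))
    for _ in range(iters):
        grad = (X.T @ (X @ theta - y)) / m   # gradient of J(theta)
        theta -= alpha * grad                 # simultaneous update of all theta_j
    return theta

# illustrative use with small random data
X = np.hstack([np.ones((14, 1)), np.random.rand(14, 3)])
y = np.random.rand(14, 1)
theta = gradient_descent(X, y)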







EXPLANATION:
It is necessary to prevent gradient descent from getting stuck in local optima. (false)
    The cost function J(θ) for linear regression has no local optima.

It prevents the matrix XᵀX (used in the normal equation) from being non-invertible (singular/degenerate). (false)
    XᵀX can be singular when features are redundant or when there are too few examples; feature scaling does not solve either problem.

It speeds up gradient descent by making each iteration of gradient descent less expensive to compute. (false)
    The magnitude of the feature values is insignificant in terms of the computational cost of an iteration.

It speeds up gradient descent by making it require fewer iterations to get to a good solution. (true)
    Feature scaling speeds up gradient descent by avoiding many extra iterations that would be required when one or more features take on much larger values than the rest (see the scaling sketch after this list).
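
A minimal sketch, assuming NumPy, of applying mean normalization and range scaling to every column of a feature matrix before running gradient descent (the data is random and purely illustrative):

import numpy as np

def scale_features(X):
    # Mean-normalize each column and divide by its range (max - min).
    mu = X.mean(axis=0)
    rng = X.max(axis=0) - X.min(axis=0)
    return (X - mu) / rng

X = np.random.rand(50, 3) * [1, 1000, 1e6]    # columns with very different magnitudes
X_scaled = scale_features(X)                   # every column now lies roughly in [-0.5, 0.5]
print(X_scaled.min(axis=0), X_scaled.max(axis=0))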



----------------------------------------------------------------------

Variations in question 1:

1. Suppose m=4 students have taken some class, and the class had a midterm exam and a final exam. You have collected a dataset of their scores on the two exams, which is as follows:

Midterm Exam    (Midterm Exam)^2    Final Exam
89              7921                96
72              5184                74
94              8836                87
69              4761                78

You'd like to use polynomial regression to predict a student's final exam score from their midterm exam score. Concretely, suppose you want to fit a model of the form hθ(x) = θ0 + θ1x1 + θ2x2, where x1 is the midterm score and x2 is (midterm score)^2. Further, you plan to use both feature scaling (dividing by the "max-min", or range, of a feature) and mean normalization.

What is the normalized feature x2(4)? (Hint: midterm = 69, final = 78 is training example 4.) Please round off your answer to two decimal places and enter in the text box below.

Answer:

The mean of x2 is 6675.5 and the range is 8836 − 4761 = 4075.

So x2(4) = (4761 − 6675.5) / 4075 = −0.47 (rounded to two decimal places).
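
The same arithmetic can be checked with a short Python sketch (the values are the (midterm)^2 column from the table above):

x2 = [7921, 5184, 8836, 4761]                  # (midterm score)^2 for the four examples
mean_x2 = sum(x2) / len(x2)                    # 6675.5
range_x2 = max(x2) - min(x2)                   # 8836 - 4761 = 4075
print(round((x2[3] - mean_x2) / range_x2, 2))  # training example 4: -0.47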





1. Suppose m=4 students have taken some class, and the class had a midterm exam and a final exam. You have collected a dataset of their scores on the two exams, which is as follows:

Midterm Exam    (Midterm Exam)^2    Final Exam
89              7921                96
72              5184                74
94              8836                87
69              4761                78

You'd like to use polynomial regression to predict a student's final exam score from their midterm exam score. Concretely, suppose you want to fit a model of the form hθ(x) = θ0 + θ1x1 + θ2x2, where x1 is the midterm score and x2 is (midterm score)^2. Further, you plan to use both feature scaling (dividing by the "max-min", or range, of a feature) and mean normalization. What is the normalized feature x1(3)? (Hint: midterm = 89, final = 96 is training example 1.) Please enter your answer in the text box below. If applicable, please provide at least two digits after the decimal place.

Answer:

The mean of x1 is 81 and the range is 94 − 69 = 25.

So x1(3) = (94 − 81) / 25 = 0.52.




Variations in question 2:

2. You run gradient descent for 15 iterations with α = 0.3 and compute J(θ) after each iteration. You find that the value of J(θ) decreases slowly and is still decreasing after 15 iterations. Based on this, which of the following conclusions seems most plausible?

1) Rather than use the current value of α, it'd be more promising to try a larger value of α (say α = 1.0).

2) Rather than use the current value of α, it'd be more promising to try a smaller value of α (say α = 0.1).

3) α = 0.3 is an effective choice of learning rate.

Answer: 1

Since J(θ) is decreasing only slowly, a larger value of α should increase the rate of convergence to the minimum of J(θ).




Q2. You run gradient descent for 15 iterations with α = 0.3 and compute J(θ) after each iteration. You find that the value of J(θ) increases over time. Based on this, which of the following conclusions seems most plausible?

  1. α = 0.3 is an effective choice of learning rate.

  2. Rather than use the current value of α, it'd be more promising to try a larger value of α (say α = 1.0).

  3. Rather than use the current value of α, it'd be more promising to try a smaller value of α (α = 0.1).

Answer: 3

If α is too large, J(θ) may not decrease on every iteration and thus may not converge, which is exactly what an increasing J(θ) indicates; trying a smaller value of α is the most plausible fix.
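
A tiny toy demonstration (my own example, not from the quiz) of how the learning rate affects J(θ), using gradient descent on the one-variable cost J(θ) = θ², whose gradient is 2θ:

def run(alpha, steps=15, theta=10.0):
    # Gradient descent on J(theta) = theta^2; record J(theta) after each iteration.
    history = []
    for _ in range(steps):
        theta -= alpha * 2 * theta
        history.append(theta ** 2)
    return history

print(run(0.01)[-1])   # too-small alpha: J decreases slowly, still far from 0 after 15 steps
print(run(0.3)[-1])    # well-chosen alpha for this toy problem: J is already close to 0
print(run(1.1)[-1])    # too-large alpha: J increases every iteration (gradient descent diverges)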





Variations in question 3:

3. Suppose you have m = 23 training examples with n = 5 features (excluding the additional all-ones feature for the intercept term, which you should add). The normal equation is θ = (XᵀX)⁻¹Xᵀy. For the given values of m and n, what are the dimensions of θ, X, and y in this equation?

  1. X is 23 × 6, y is 23 × 6, θ is 6 × 6

  2. X is 23 × 5, y is 23 × 1, θ is 5 × 1

  3. X is 23 × 6, y is 23 × 1, θ is 6 × 1

  4. X is 23 × 6, y is 23 × 1, θ is 5 × 5

Answer: 3

X is m × (n+1) = 23 × 6, y is m × 1 = 23 × 1, and θ is (n+1) × 1 = 6 × 1.





---------------------------------------------------------------------------------

Reference: Coursera


