Recurrent Neural Networks
Graded Quiz • 30 min • 10 Questions • Week 1

Q:

Suppose your training examples are sentences (sequences of words). Which of the following refers to the j^{th} word in the i^{th} training example?
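For concreteness, a minimal illustration of the course's indexing notation, where x^{(i)<j>} is the j^{th} word of the i^{th} training example (the example sentences and the `word` helper below are illustrative, not from the quiz):

```python
# Training examples are sentences; x^{(i)<j>} is the j-th word of the
# i-th example. Indices are 1-based in the course notation.
sentences = [
    ["the", "cat", "sat"],       # example i = 1
    ["dogs", "bark", "loudly"],  # example i = 2
]

def word(i, j):
    """Return x^{(i)<j>} using 1-based indices i and j."""
    return sentences[i - 1][j - 1]

print(word(2, 3))  # "loudly" — 3rd word of the 2nd example
```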

Q:

Consider this RNN. This specific type of architecture is appropriate when:

Q:

To which of these tasks would you apply a many-to-one RNN architecture? (Check all that apply.)

Q:

Using the training model below, answer the following. True/False: At the t^{th} time step the RNN is estimating P(y^{<t>}).

Q:

You have finished training a language model RNN and are using it to sample random sentences, as follows:
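For context, a minimal sketch of the sampling procedure the question describes: at each step the network outputs a softmax distribution, a word is sampled from it (not argmax'd), and the sample is fed back in as the next input. All weights here are random placeholders, not a trained model, and the tiny vocabulary size is purely illustrative:

```python
import numpy as np

np.random.seed(0)
vocab_size, hidden = 8, 16  # toy sizes, for illustration only
Wax = np.random.randn(hidden, vocab_size) * 0.01
Waa = np.random.randn(hidden, hidden) * 0.01
Wya = np.random.randn(vocab_size, hidden) * 0.01

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

a = np.zeros((hidden, 1))
x = np.zeros((vocab_size, 1))  # first input is the zero vector
sampled = []
for _ in range(5):
    a = np.tanh(Wax @ x + Waa @ a)           # hidden-state update
    p = softmax(Wya @ a).ravel()             # P(y^<t> | earlier samples)
    idx = np.random.choice(vocab_size, p=p)  # sample from the distribution
    sampled.append(idx)
    x = np.zeros((vocab_size, 1))            # feed the sampled word back
    x[idx] = 1                               # as a one-hot input
```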

Q:

True/False: If you are training an RNN model and find that your weights and activations are all taking on the value of NaN (“Not a Number”), then you have an exploding gradient problem.

Q:

Suppose you are training an LSTM. You have a 10,000-word vocabulary, and are using an LSTM with 100-dimensional activations a^{<t>}. What is the dimension of Γ_u at each time step?

Q:

Here are the update equations for the GRU. Alice proposes to simplify the GRU by always removing Γ_u, i.e., setting Γ_u = 0. Betty proposes to simplify the GRU by removing Γ_r, i.e., setting Γ_r = 1 always. Which of these models is more likely to work without vanishing gradient problems even when trained on very long input sequences?
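Since the quiz's equations are not reproduced on this page, here is one common formulation of the full GRU step sketched in numpy (the weights are random placeholders, and the optional gate overrides let you try each proposed simplification):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

hidden, n_x = 4, 3  # toy sizes, for illustration only
rng = np.random.default_rng(1)
Wu, Wr, Wc = (rng.standard_normal((hidden, hidden + n_x)) for _ in range(3))
bu = br = bc = np.zeros((hidden, 1))

def gru_step(c_prev, x_t, gamma_u=None, gamma_r=None):
    """One GRU step; pass gamma_u / gamma_r to test the proposed
    simplifications (e.g. gamma_u=0.0 for Alice, gamma_r=1.0 for Betty)."""
    concat = np.vstack([c_prev, x_t])
    if gamma_u is None:
        gamma_u = sigmoid(Wu @ concat + bu)       # update gate
    if gamma_r is None:
        gamma_r = sigmoid(Wr @ concat + br)       # relevance gate
    c_tilde = np.tanh(Wc @ np.vstack([gamma_r * c_prev, x_t]) + bc)
    return gamma_u * c_tilde + (1 - gamma_u) * c_prev

c = np.ones((hidden, 1))
x = np.zeros((n_x, 1))
# With Gamma_u forced to 0, the memory cell never updates: c^<t> = c^<t-1>.
c_alice = gru_step(c, x, gamma_u=0.0)
```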

Q:

True/False: Using the equations for the GRU and LSTM below, the Update Gate and Forget Gate in the LSTM play a role similar to 1 − Γ_u and Γ_u.

Q:

You have a pet dog whose mood is heavily dependent on the current and past few days’ weather. You’ve collected data for the past 365 days on the weather, which you represent as a sequence x^{<1>}, …, x^{<365>}. You’ve also collected data on your dog’s mood, which you represent as y^{<1>}, …, y^{<365>}. You’d like to build a model to map from x → y. Should you use a unidirectional RNN or a bidirectional RNN for this problem?
