[Audio] Hello all, and welcome to this infographic on Long Short-Term Memory and its variants. The group members are Rohi Bindal, Prasanna Kulkarni, Prajjwal Dewangan and Anshita Singh.
[Audio] Recurrent Neural Networks (RNNs) are a type of neural network designed to handle sequential data. Unlike traditional neural networks, they can process inputs of varying lengths by passing a hidden state from one time step to the next. This hidden state lets the network retain information about previous inputs, which is important for tasks such as predicting the next word in a sentence or forecasting future values in a time series.
[Audio] RNNs are important for processing sequential data precisely because they maintain this memory of previous inputs, which is crucial for tasks such as predicting the next word in a sentence or forecasting future values in a time series.
[Audio] RNNs are useful for tasks such as speech recognition, natural language processing, and time-series analysis. Here is some brief information about these applications.
[Audio] LSTM (Long Short-Term Memory) is a recurrent neural network (RNN) architecture that addresses the vanishing gradient problem of conventional RNNs. The vanishing gradient problem occurs when gradients become very small during backpropagation through time, which can cause the network to forget crucial information from earlier time steps.
[Audio] In the structure of an LSTM, the three main gates are the input gate, the forget gate and the output gate. The input gate determines what information should be stored in the memory cell. The forget gate determines what information should be removed from the memory cell. The output gate determines what information should be output from the memory cell to the rest of the network.
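As a rough illustration of these gates, here is a minimal NumPy sketch of one LSTM cell step. The weight names, dictionary layout and function signature are illustrative assumptions, not part of the original slide.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step.
    W, U, b are dicts keyed by 'i' (input), 'f' (forget), 'o' (output), 'g' (candidate)."""
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])  # input gate: what to store
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])  # forget gate: what to discard
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])  # output gate: what to expose
    g_t = np.tanh(W['g'] @ x_t + U['g'] @ h_prev + b['g'])  # candidate cell contents
    c_t = f_t * c_prev + i_t * g_t                          # updated memory cell
    h_t = o_t * np.tanh(c_t)                                # new hidden state
    return h_t, c_t
```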
[Audio] Here is an example of an LSTM model with one LSTM layer that has 64 units and an input shape of (10, 1); the model expects input sequences of length 10 with one feature at each time step. The output of the LSTM layer is passed to a dense layer with one unit and a sigmoid activation function, which produces a binary output.
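One possible Keras definition matching this description is sketched below, assuming TensorFlow/Keras; the optimizer and loss choices are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.LSTM(64, input_shape=(10, 1)),       # sequences of length 10, one feature per step
    layers.Dense(1, activation='sigmoid'),      # binary output
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```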
[Audio] Variants of LSTM include the Bidirectional LSTM, the Convolutional LSTM, the Depth-Gated LSTM, the Hierarchical LSTM and the Gated Recurrent Unit.
[Audio] The GRU is like a long short-term memory (LSTM) with a forget gate, but it has fewer parameters than the LSTM, as it lacks an output gate. The GRU's performance on certain tasks, such as polyphonic music modeling, speech signal modeling and natural language processing, has been found to be similar to that of the LSTM.
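A comparable GRU model can be built by swapping the LSTM layer for a GRU layer, keeping the rest of the earlier model unchanged. This is a sketch assuming TensorFlow/Keras; the shapes mirror the earlier example.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

gru_model = models.Sequential([
    layers.GRU(64, input_shape=(10, 1)),        # GRU cell: update and reset gates, no output gate
    layers.Dense(1, activation='sigmoid'),
])
gru_model.compile(optimizer='adam', loss='binary_crossentropy')
```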
[Audio] Depth-Gated LSTMs (DGLSTMs) are deep variants of LSTMs that add additional LSTM layers to the network. Each layer has its own set of gates, allowing it to capture more complex dependencies in the input sequence.
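A sketch of such a deeper, stacked LSTM is shown below, assuming TensorFlow/Keras; each layer keeps its own gates, and all but the last return the full sequence so the next layer receives an output per time step. The layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

deep_model = models.Sequential([
    layers.LSTM(64, return_sequences=True, input_shape=(10, 1)),  # layer 1: outputs per time step
    layers.LSTM(32, return_sequences=True),                        # layer 2
    layers.LSTM(16),                                               # layer 3: final hidden state only
    layers.Dense(1, activation='sigmoid'),
])
```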
[Audio] A Hierarchical LSTM (HiLSTM) uses a hierarchical structure to model the input sequence at multiple levels of abstraction. The lower-level LSTMs process individual elements of the sequence, while the higher-level LSTMs process the output of the lower-level LSTMs. This allows the network to capture both fine-grained and high-level information in the input sequence.
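One rough way to express this two-level idea in Keras is sketched below, assuming the input has already been split into 5 chunks of 10 steps each; the chunking scheme and layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

hier_model = models.Sequential([
    # Lower-level LSTM encodes each chunk of the sequence independently.
    layers.TimeDistributed(layers.LSTM(32), input_shape=(5, 10, 1)),
    # Higher-level LSTM processes the sequence of chunk encodings.
    layers.LSTM(16),
    layers.Dense(1, activation='sigmoid'),
])
```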
[Audio] A ConvLSTM processes spatial data in addition to sequential data by incorporating convolutions into the LSTM structure. This is helpful for tasks such as video analysis or weather forecasting.
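A minimal ConvLSTM sketch follows, assuming TensorFlow/Keras; the input here is a video-like tensor of 10 frames of 64x64 single-channel images, and the shapes and filter counts are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

conv_model = models.Sequential([
    layers.ConvLSTM2D(16, kernel_size=(3, 3), padding='same',
                      input_shape=(10, 64, 64, 1)),   # convolutional gates applied to each frame
    layers.Flatten(),
    layers.Dense(1, activation='sigmoid'),
])
```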
[Audio] A BiLSTM processes the input sequence both forward and backward using two different LSTM layers, and at each time step the outputs from the two layers are combined. This allows the network to take both past and future context into account when making predictions.
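A bidirectional variant of the earlier model can be sketched as follows, assuming TensorFlow/Keras; the Bidirectional wrapper runs one LSTM forward and one backward over the sequence and concatenates their outputs.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

bi_model = models.Sequential([
    layers.Bidirectional(layers.LSTM(64), input_shape=(10, 1)),  # forward + backward passes combined
    layers.Dense(1, activation='sigmoid'),
])
```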
[Audio] The advantages of LSTM over traditional RNNs are its handling of long-term dependencies, selective memory, improved gradient flow, support for multiple inputs and outputs, and easy implementation.
[Audio] In conclusion, LSTM (Long Short-Term Memory) and its variants have become an important tool for processing sequential data in a wide range of applications.
[Audio] And finally, these are some of the references that we have used in our infographic.
[Audio] Thank you for being a patient listener. Thank you.