Traditional machine learning sequence models, such as RNNs and LSTMs, can solve sequential-data problems through the use of internal memory states. However, the weights of their neural units are shared across time steps to reduce computational cost, which limits their ability to learn time-varying relationships between model inputs and outputs. In this context, this paper proposes two methods to characterize the dynamics in real-world...
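The weight-sharing limitation mentioned above can be seen in a minimal vanilla RNN forward pass (a generic sketch, not code from the paper; all names and sizes are illustrative): the same parameter matrices are applied at every time step, so the input-to-output mapping itself cannot change over time.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5

# A single set of parameters, reused at every time step.
W_xh = rng.standard_normal((hidden_dim, input_dim)) * 0.1
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b_h = np.zeros(hidden_dim)

x_seq = rng.standard_normal((seq_len, input_dim))
h = np.zeros(hidden_dim)
for t in range(seq_len):
    # The SAME W_xh, W_hh, b_h are applied at each t:
    # the recurrence cannot express a time-varying relationship.
    h = np.tanh(W_xh @ x_seq[t] + W_hh @ h + b_h)

print(h.shape)  # (4,)
```

Only the hidden state `h` evolves over the sequence; the transformation applied to it is fixed, which is the constraint the proposed methods aim to address.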