Deep Learning Fundamentals
Deep learning has become a prominent method for time series forecasting, and applying it well requires some fundamental principles.
In this part, we explain and demonstrate several popular deep learning models. We do not intend to cover all models; instead, we discuss a few widely used principles.
The simplest deep learning model is a fully connected Feedforward Neural Network (FFNN). While an FFNN may work for in-distribution predictions, it is likely to overfit and perform poorly on out-of-distribution predictions. In practice, most deep learning models are considerably more complicated than an FFNN, and many of them use the concept of self-supervised learning to achieve better generalization [1].
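To make the idea concrete, the following is a minimal sketch of an FFNN forward pass in NumPy. The layer sizes, ReLU activation, and random weight initialization are illustrative assumptions, not a prescription from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise ReLU activation
    return np.maximum(0.0, x)

def ffnn_forward(x, weights, biases):
    """Forward pass of a fully connected feedforward network:
    each hidden layer applies an affine map followed by ReLU;
    the final layer is purely affine."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

# Toy network (assumed sizes): 8 inputs -> 16 hidden units -> 1 output
sizes = [8, 16, 1]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(4, 8))  # batch of 4 samples
y = ffnn_forward(x, weights, biases)
print(y.shape)  # (4, 1): one scalar prediction per sample
```

In a forecasting setting, `x` would hold lagged observations and `y` the predicted next value; training (not shown) would fit the weights by gradient descent on a forecasting loss.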
In the following chapters, we present several popular deep learning architectures and the key ideas behind them.
Notations
In this document, we use the following notations.
- Sets, domains, and abstract variables: \(X\), \(Y\);
- Probability distributions: \(P\), \(Q\);
- Probability densities: \(p\), \(q\);
- Slices of an array from index \(i\) to index \(j\): \({}_{i:j}\).
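As a concrete illustration of the slicing notation, here is a short sketch assuming Python's half-open indexing convention (the text does not state whether the endpoint \(j\) is included):

```python
import numpy as np

x = np.arange(10)  # x = [0, 1, ..., 9]

# x_{2:5} under the half-open convention selects indices 2, 3, 4
segment = x[2:5]
print(segment)  # [2 3 4]
```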
1. Liu X, Zhang F, Hou Z, Wang Z, Mian L, Zhang J, et al. Self-supervised learning: Generative or contrastive. 2020. http://arxiv.org/abs/2006.08218.