# Time Series Prediction - an overview (2022)

## Related terms:

• Exchange Rate
• Hidden Markov Models
• Energy Demand
• Time Series
• Foreign Exchange
• Artificial Intelligence
• Neural Network
• Traffic Sign

## Metro traffic flow monitoring and passenger guidance

Hui Liu, ... Ye Li, in Smart Metro Station Systems, 2022

### 2.2.2 Principles of time series prediction

By studying the nonlinear correlation between the data at a given moment and its historical values, a time series prediction method can capture the characteristics of the fluctuation pattern and predict values at future time points [6]. The key technique used in this study is the prediction of traffic flow time series. The original traffic flow fluctuation data are one-dimensional: each time point has a corresponding observed value [7].

To meet the modeling requirements of supervised learning algorithms, the original one-dimensional traffic flow time series must be transformed into multidimensional input feature vectors and output sample labels. At present, scholars usually adopt a "sliding window" strategy, which effectively transforms a one-dimensional traffic flow time series into a two-dimensional machine learning data format [8].

The main steps of this method are as follows [9]:

Step 1: Select a time T, collect the N historical values from time T−N+1 to time T in sequence, and set them as the feature vector. N is the length of the feature vector, that is, the dimension of the input.

Step 2: Set the observed values from T+1 to T+M as the labels to construct the output vector. M is the number of output variables and represents the prediction step size (horizon).

The basic flow chart of data format transformation based on the “sliding window” method is shown in Fig. 2.4:
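The two steps above can be sketched as a small Python function (a minimal illustration; the names `sliding_window`, `n_lags`, and `n_ahead` are ours, not the chapter's):

```python
import numpy as np

def sliding_window(series, n_lags, n_ahead):
    """Turn a 1-D series into supervised pairs: N lagged values -> M future values."""
    X, y = [], []
    for t in range(n_lags, len(series) - n_ahead + 1):
        X.append(series[t - n_lags:t])   # Step 1: feature vector of N historical values
        y.append(series[t:t + n_ahead])  # Step 2: label vector of the next M observations
    return np.array(X), np.array(y)

flow = np.arange(10)                     # toy "traffic flow" series 0..9
X, y = sliding_window(flow, n_lags=3, n_ahead=2)
print(X.shape, y.shape)                  # (6, 3) (6, 2)
print(X[0], y[0])                        # [0 1 2] [3 4]
```

Each row of X is a feature vector of length N = 3, and each row of y holds the M = 2 future observations to be predicted.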


URL:

https://www.sciencedirect.com/science/article/pii/B9780323905886000020

## Forecasting

Elena Mocanu, ... Madeleine Gibescu, in Local Electricity Markets, 2021

### 14.1 Introduction

As prediction developed, different subfields were created. The electrical forecasting problem can be regarded as a nonlinear time series prediction problem depending on many complex factors, since it is required at various aggregation levels and at high resolution [1]. Furthermore, the accuracy of electrical forecasts and the resulting errors are reflected in the performance of the local energy market. In this context, a variety of forecasts are necessary at the national level, at the regional level, or specific to the type of consumer (residential, industrial). Worldwide, residential buildings have one of the highest energy consumption rates: on average they consume around 40% of global primary energy and contribute over 30% of CO2 emissions. Within Europe, residential energy usage grows at an annual rate of 1.5%, higher than the growth rates of industrial and transportation energy consumption [2].

Consequently, the current growth of urbanization and electricity demand introduces new requirements for future power grids and keeps the electricity market under pressure. To satisfy these demands, future power grids will need to predict, learn, schedule, make decisions, and monitor local energy production and consumption. Accordingly, improving the flow of energy requires energy predictions over various time horizons [3,4].

As both the aggregation level and the prediction horizon decrease, fluctuations in the electrical patterns increase. To address these challenging problems, various time series and machine learning (ML) approaches have been proposed in the literature, ranging from heuristic-based approaches to mathematically grounded ones, such as those residing in the realm of ML.

When analyzing the impact on the local energy market, it is imperative not only to predict the electrical pattern but also to consider a wider range of factors. This allows demand and price forecasting to be decomposed, which not only helps identify consumption and generation trends, detect faults, or predict savings, but also enables better decision-making strategies for controlling loads and scheduling them to off-peak times [3,4]. The choice of a high-performance method depends on the special characteristics of electrical patterns.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128200742000071

## Personalized mobility services and AI

George Dimitrakopoulos, ... Iraklis Varlamis, in The Future of Intelligent Transport Systems, 2020

### 20.1 Artificial intelligence in transportation

Artificial intelligence (AI) is the simulation of human intelligence processes by machines; depending on the problem complexity, it aims either to support or to replace human intelligence. The increasing interest of researchers and companies in artificial intelligence solutions brought AI-driven development among the top-10 strategic technology trends for 2019 (Gartner's top-10 strategic technology trends for 2019: https://www.gartner.com/smarterwithgartner/gartner-top-10-strategic-technology-trends-for-2019/). Although AI is not new, and its history begins back in the 1950s, there have been dramatic changes in the last 10 years, mainly due to the rise in computing power and available data. These two components are the main enablers that allow AI algorithms to control smart services and automate complex operations.

Almost all major cloud vendors have invested in AI in order to create a new market of AI solutions as a service, with applications in many domains.

IBM launched Watson (IBM Watson AI platform: https://www.ibm.com/watson) as a service in an attempt to attract new customers and allow third-party companies to develop AI solutions. IBM Watson started as a question-answering system that could understand natural language and respond with facts from a huge knowledge base, and it evolved into a suite of enterprise-ready AI services, applications, and tools. According to IBM, Watson allows companies to accelerate research and discovery, enrich their interactions, anticipate and preempt disruptions, recommend with confidence, scale expertise and learning, and detect liabilities and mitigate risk, consequently freeing employees from repetitive tasks and empowering them to focus on the high-value work that is critical for an enterprise. Question-answering systems that extract facts from company documents, virtual assistants that respond to online customers, chatbots, and smart readers of complex documents (e.g., contracts) are among the tools offered by the Watson suite. In the domain of mobility, IBM Watson has powered with AI technology the autonomous vehicle of Local Motors, Olli, a fully electric car that can be 3D printed and can hold up to 12 people (English, 2016).

Microsoft’s Azure AI (Microsoft Azure AI platform, https://gallery.azure.ai/) offers a suite of algorithms, solution templates, reference architectures, and design patterns that allow companies to develop custom AI solutions to their problems. Several experiments are showcased on its website, including image classification and recognition, outlier detection, and time series prediction, as well as an example that uses regression algorithms to score parking availability in the city of Birmingham, UK, using open data. The domains of application include, among others, retail (sales forecasting, predicting customer churn, and pricing models), manufacturing (predicting equipment maintenance, forecasting energy prices), banking (predicting credit risk and monitoring for online fraud), and healthcare (detecting disease and predicting hospital readmissions).

Salesforce’s solution for smart CRM is called Einstein AI (Salesforce Einstein platform: https://www.salesforce.com/products/einstein/overview/). The platform uses machine learning and AI to support faster decision making for managers, increase the productivity of employees, and deliver personalized recommendations that increase customer satisfaction. With a combination of the AI tools available as a service, companies can build solutions that discover significant patterns and trends in sales data, understand their customers by learning which channels, messages, and content they prefer, and give every employee instant access to smart insights and AI-powered business analytics.

C3 AI develops solutions for the operators of transportation, logistics, and travel companies, comprising IoT analytics and predictive machine learning models. For example, the C3 predictive maintenance solution, part of the C3 AI suite, estimates the risk of failure of various vehicle equipment and recommends maintenance actions that can prolong safe operation. It is, in essence, a supportive tool for fleet owners, with monitoring and prevention capabilities aimed at optimized fleet performance. In addition, C3 AI provides data analytics services for sales and demand data, enabling enhanced demand forecasting and improved customer service. The full list of applications (C3 AI solutions, https://c3.ai/products/c3-applications/) comprises sensor health algorithms, inventory optimization, facility energy management, supply network, and CRM solutions.

Google AI (Google AI platform: https://ai.google) brings together hardware, software, and AI to make devices faster, smarter, and more useful. It offers Cloud AutoML, an online solution that allows developers, researchers, and businesses with limited AI expertise to build their own custom models. Google’s parent company Alphabet runs several transport-relevant projects through a host of subsidiaries called “Other Bets,” such as (1) Waymo, the self-driving car initiative, (2) Sidewalk Labs, the urban innovation organization, and (3) Project Wing, which is developing an autonomous delivery drone service. The combination of cloud computing and IoT edge computing (Google’s IoT edge computing: https://cloud.google.com/iot-edge/) allows complex models to be trained on the cloud and the trained models to be deployed at the edge for faster real-time prediction. Thus, intelligent technologies that prevent and avoid collisions, detect driver distraction and provide alerts, collect and analyze traffic information, and suggest route alternatives can be easily integrated with existing infrastructure and embedded in existing and future vehicles using IoT devices.

Since 2015, Nvidia has invested in hardware acceleration of deep learning architectures for autonomous driving and driver assistance functionality (Oh & Yoon, 2019). The Nvidia Drive AGX open autonomous-vehicle computing platform collects data from sources such as cameras, lidars, ultrasonic sensors, and radars. It then processes the data to build a 360-degree understanding of the surrounding environment in real time, locate the vehicle on the map and within its surroundings, and plan the next movement safely. This high-performance computing platform is energy-efficient and supports the development of safe and highly responsive self-driving models. TuSimple (TuSimple website: https://www.tusimple.com/) is a Chinese startup that uses Nvidia GPUs and the cuDNN CUDA deep neural network library to power its driverless trucks, which promise to autonomously transport products on a depot-to-depot basis.

URL:

https://www.sciencedirect.com/science/article/pii/B9780128182819000206

## Traffic congestion detection: data-based techniques

Fouzi Harrou, ... Ying Sun, in Road Traffic Modeling and Management, 2022

### 5.4.1 Features extraction with PCA

Detecting anomalies based on PCA has gained significant attention in the last decade. This is mainly due to the flexibility and capability of PCA in modeling multivariate data without the need for deep physical knowledge of the process; the only information needed is historical measurements characterizing normal process operation [44].

PCA is possibly the most frequently applied method for dimensionality reduction. Its introduction can be traced back to Pearson (1901) and Hotelling (1933). It has become a popular feature extraction technique through relating process variables. In this framework, PCA can represent a process efficiently in a reduced subspace via PCA scores, which are linear combinations of the original variables. Indeed, PCA has been found useful in various contexts, including outlier detection, denoising [49], data smoothing, time series prediction [50], and process monitoring.

PCA is performed on a single data set containing all the process variables concerned with the problem (i.e., input and output variables). We denote the whole data set by Y for consistency with the following chapters, whereas the notation X is often used in the literature. Let ${\mathbf{Y}}^{b}={\left[{\mathbf{y}}_{1}^{T},\dots ,{\mathbf{y}}_{n}^{T}\right]}^{T}\in {R}^{n×m}$ denote traffic measurements collected from a detector station, containing n observations of m variables. For instance, traffic variables may include flow, speed, occupancy, truck flow, vehicle miles traveled, and vehicle hours traveled. It is worth pointing out that the traffic data are recorded by different detectors in the absence of traffic congestion, which allows us to build a reference PCA model reflecting congestion-free behavior.

Before applying the PCA-based approach, an important preprocessing step is to remove the mean of each variable and scale its variance to unity; in other words, the data should be autoscaled. This is mainly because the process variables collected from different sensors may have different scales: variables with large variance can mask variables with weak variance and thus make their comparison challenging. Autoscaling facilitates comparison and data analysis and bypasses the problem of data with distinct means, standard deviations, and units. Hence each autoscaled variable ${\mathbf{y}}_{j}$ ($j=1,\dots ,m$) is expressed as

(5.16)${\mathbf{y}}_{j}=\frac{{\mathbf{y}}_{j}^{b}-{\mu }_{j}}{{\sigma }_{j}},$

where ${\mathbf{y}}_{j}^{b}$ is the jth column of the input matrix ${\mathbf{Y}}^{\mathbf{b}}$, ${\mu }_{j}$ denotes the sample average of all observations of the variable ${\mathbf{y}}_{j}^{b}$,

(5.17)${\mu }_{j}=\frac{1}{n}\sum _{i=1}^{n}{\mathbf{y}}_{j}^{b}\left(i\right),$

and ${\sigma }_{j}$ refers to its sample standard deviation:

(5.18)${\sigma }_{j}=\sqrt{\frac{1}{n}\sum _{i=1}^{n}{\left({\mathbf{y}}_{j}^{b}\left(i\right)-{\mu }_{j}\right)}^{2}}.$

The autoscaled matrix Y is written as

$\mathbf{Y}={\left(\begin{array}{cccc}\hfill {y}_{1,1}\hfill & \hfill {y}_{1,2}\hfill & \hfill \dots \hfill & \hfill {y}_{1,m}\hfill \\ \hfill {y}_{2,1}\hfill & \hfill {y}_{2,2}\hfill & \hfill \dots \hfill & \hfill {y}_{2,m}\hfill \\ \hfill ⋮\hfill & \hfill ⋮\hfill & \hfill \ddots \hfill & \hfill ⋮\hfill \\ \hfill {y}_{n,1}\hfill & \hfill {y}_{n,2}\hfill & \hfill \dots \hfill & \hfill {y}_{n,m}\hfill \end{array}\right)}_{n×m}.$
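The autoscaling of Eqs. (5.16)–(5.18) amounts to a few lines of NumPy (a sketch with toy numbers; the variable names are illustrative):

```python
import numpy as np

# Toy raw data Y^b: n = 4 observations of m = 2 traffic variables on very different scales.
Yb = np.array([[100., 0.1],
               [120., 0.3],
               [ 80., 0.2],
               [100., 0.4]])

mu = Yb.mean(axis=0)       # Eq. (5.17): sample mean of each column
sigma = Yb.std(axis=0)     # Eq. (5.18): sample standard deviation (1/n convention)
Y = (Yb - mu) / sigma      # Eq. (5.16): autoscaled matrix

print(Y.mean(axis=0))      # ≈ [0. 0.]
print(Y.std(axis=0))       # [1. 1.]
```

After autoscaling, every column has zero mean and unit variance, so no variable masks the others.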

The attempt of the modeling stage is to model nominal traffic evolution based on historical congestion-free (reference) data. These data are characterized by traffic evolution consistently in an acceptable manner without accidents and congestions, and in which only good quality of data has been obtained. The missing data should be excluded or imputed using dedicated procedures.

Essentially, PCA is initially used to reduce the dimensionality of the multivariate correlated data by compressing the original data Y into a lower-dimensional subspace of dimension $l<m$ such that most of the data variability is preserved by a small number of principal components (PCs). The latter are linear combinations of the original variables and orthogonal to each other. PCA can be performed using different procedures, such as singular value decomposition (SVD) and the nonlinear iterative partial least squares (NIPALS) algorithm [51]. In both procedures, the data matrix Y is split into two complementary parts, the approximated matrix $\stackrel{ˆ}{\mathbf{Y}}$ and a residual matrix E (Fig. 5.2):

(5.19)$\mathbf{Y}=\mathbf{T}{\mathbf{W}}^{T}=\sum _{i=1}^{k}{t}_{i}{w}_{i}^{T}+\sum _{i=k+1}^{m}{t}_{i}{w}_{i}^{T}=\stackrel{ˆ}{\mathbf{Y}}+\mathbf{E},$

where $\mathbf{T}\in {R}^{n×m}$ refers to the score matrix containing the PCs. As discussed above, the PCs are uncorrelated variables defined as linear combinations of the original variables; they capture the maximum variability in the original data in descending order. Consequently, in the presence of highly cross-correlated data Y, only a few components, k, are needed to preserve the relevant variability in the original data. $\mathbf{W}\in {R}^{m×m}$ denotes the loading matrix containing the eigenvectors, whose entries are the coefficients of the linear transformation. The first l eigenvectors represent the directions of largest variability and span the new l-dimensional PC subspace.

The loadings of the data matrix Y are often calculated via the eigendecomposition (equivalently, the SVD) of the sample covariance matrix C of the data Y:

(5.20)$\mathbf{C}=\frac{1}{n-1}{\mathbf{Y}}^{T}\mathbf{Y}=W\mathrm{\Lambda }{W}^{T}=\left[\begin{array}{ccc}\hfill {\mathbf{w}}_{\mathbf{1}}\hfill & \hfill \cdots \hfill & \hfill {\mathbf{w}}_{\mathbf{m}}\hfill \\ \hfill \hfill \end{array}\right]\left[\begin{array}{ccc}\hfill {\lambda }_{1}\hfill & \hfill \cdots \hfill & \hfill 0\hfill \\ \hfill ⋮\hfill & \hfill \ddots \hfill & \hfill ⋮\hfill \\ \hfill 0\hfill & \hfill \cdots \hfill & \hfill {\lambda }_{m}\hfill \\ \hfill \hfill \end{array}\right]\left[\begin{array}{c}\hfill {{\mathbf{w}}_{\mathbf{1}}}^{T}\hfill \\ \hfill ⋮\hfill \\ \hfill {{\mathbf{w}}_{\mathbf{m}}}^{T}\hfill \\ \hfill \hfill \end{array}\right]$

with $W{W}^{T}={W}^{T}W={I}_{m}$, where Λ denotes a diagonal matrix whose diagonal terms are the eigenvalues of C, ordered in descending order (${\lambda }_{1}>{\lambda }_{2}>\dots >{\lambda }_{m}$). It is worth pointing out that the eigenvalues ${\lambda }_{i}$ are equal to the variances of the PCs ${\mathbf{t}}_{i}$ (i.e., ${\sigma }_{i}^{2}={\lambda }_{i}$) [52]. Mathematically, the variance of ${\mathbf{t}}_{i}$ is obtained as $Var\left({\mathbf{w}}_{i}^{T}\mathbf{Y}\right)={\mathbf{w}}_{i}^{T}\mathbf{C}{\mathbf{w}}_{i}={\lambda }_{i}$: since ${\mathbf{w}}_{i}$ is the eigenvector for ${\lambda }_{i}$, we have $\mathbf{C}{\mathbf{w}}_{i}={\lambda }_{i}{\mathbf{w}}_{i}$, and hence ${\mathbf{w}}_{i}^{T}\mathbf{C}{\mathbf{w}}_{i}={\mathbf{w}}_{i}^{T}{\lambda }_{i}{\mathbf{w}}_{i}={\lambda }_{i}$. Thus the variance of the ith PC is the ith eigenvalue ${\lambda }_{i}$. Another important property is that the PCs are uncorrelated: for $i\ne j$, $Cov\left({\mathbf{w}}_{i}^{T}\mathbf{Y},{\mathbf{w}}_{j}^{T}\mathbf{Y}\right)={\mathbf{w}}_{i}^{T}\mathbf{C}{\mathbf{w}}_{j}={\lambda }_{j}{\mathbf{w}}_{i}^{T}{\mathbf{w}}_{j}=0$.

The matrices W and Λ can be partitioned as follows:

(5.21)$\mathrm{\Lambda }=\left(\begin{array}{cc}\hfill {\stackrel{ˆ}{\mathrm{\Lambda }}}_{l×l}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill {\stackrel{˜}{\mathrm{\Lambda }}}_{\left(m-l\right)×\left(m-l\right)}\hfill \end{array}\right),$

(5.22)$\mathbf{W}=\left(\begin{array}{cc}\hfill {\stackrel{ˆ}{\mathbf{W}}}_{m×l}\hfill & \hfill {\stackrel{˜}{\mathbf{W}}}_{m×\left(m-l\right)}\hfill \end{array}\right),$

where the partitioned blocks inherit the orthonormality of W:

$\left[\begin{array}{c}\hfill {\stackrel{ˆ}{\mathbf{W}}}_{m×l}^{T}\hfill \\ \hfill {\stackrel{˜}{\mathbf{W}}}_{m×\left(m-l\right)}^{T}\hfill \end{array}\right]\left[\begin{array}{cc}\hfill {\stackrel{ˆ}{\mathbf{W}}}_{m×l}\hfill & \hfill {\stackrel{˜}{\mathbf{W}}}_{m×\left(m-l\right)}\hfill \end{array}\right]=\left[\begin{array}{cc}\hfill {\mathbf{I}}_{l×l}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill {\mathbf{I}}_{\left(m-l\right)×\left(m-l\right)}\hfill \end{array}\right].$

Eq. (5.19) can thus be written as

(5.23)$\mathbf{Y}={\stackrel{ˆ}{\mathbf{T}}}_{l}{\stackrel{ˆ}{\mathbf{W}}}_{l}^{T}+{\stackrel{˜}{\mathbf{T}}}_{m-l}{\stackrel{˜}{\mathbf{W}}}_{m-l}^{T}$

with $\stackrel{ˆ}{\mathbf{Y}}=\mathbf{Y}{\stackrel{ˆ}{\mathbf{C}}}_{l}$ and $\mathbf{E}=\stackrel{˜}{\mathbf{Y}}=\mathbf{Y}{\stackrel{˜}{\mathbf{C}}}_{m-l}$, where ${\stackrel{ˆ}{\mathbf{C}}}_{l}={\stackrel{ˆ}{\mathbf{W}}}_{l}{\stackrel{ˆ}{\mathbf{W}}}_{l}^{T}$ and ${\stackrel{˜}{\mathbf{C}}}_{m-l}={\stackrel{˜}{\mathbf{W}}}_{m-l}{\stackrel{˜}{\mathbf{W}}}_{m-l}^{T}$ are the projection matrices onto the PC and residual subspaces, respectively.

A simple geometrical illustration of the basic PCA framework is presented in Fig. 5.2. When the process variables are cross-correlated, PCA decomposes the measurement space ${\mathbb{R}}^{m}$ into a PC subspace ${S}_{\mathrm{PCs}}\in {\mathbb{R}}^{l}$, where the important variations take place, and a residual subspace ${S}_{\mathrm{res}}\in {\mathbb{R}}^{m-l}$, where outliers and errors appear. The approximated data $\stackrel{ˆ}{\mathbf{Y}}=\mathbf{Y}{\stackrel{ˆ}{\mathbf{C}}}_{l}$ are obtained by projecting the original data onto the PC subspace, and the residuals E are computed by projecting Y onto the residual subspace (i.e., $\mathbf{E}=\mathbf{Y}{\stackrel{˜}{\mathbf{C}}}_{m-l}$).

To geometrically illustrate the principal components, assume for simplicity three cross-correlated variables (Fig. 5.3). Fig. 5.3 indicates that two principal components are sufficient to explain the covariance structure of this three-dimensional data. The first principal component, which captures the maximum variance in the original data, is represented by a line along the direction of largest variance in the data; the second component describes the largest variance omitted by the first component and is orthogonal to the first PC. Thus the data can be summarized in a reduced-dimensional space.
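Assuming standard NumPy, the decomposition of Eqs. (5.19)–(5.20) and the property $Var({\mathbf{t}}_{i})={\lambda }_{i}$ can be verified numerically on synthetic cross-correlated data (a sketch, not the chapter's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 4
# Cross-correlated data: one latent factor shared by all variables, plus noise.
z = rng.standard_normal((n, 1))
Yb = z @ rng.standard_normal((1, m)) + 0.3 * rng.standard_normal((n, m))
Y = (Yb - Yb.mean(0)) / Yb.std(0)     # autoscale first, Eq. (5.16)

C = Y.T @ Y / (n - 1)                 # sample covariance, Eq. (5.20)
lam, W = np.linalg.eigh(C)            # eigh returns eigenvalues in ascending order
lam, W = lam[::-1], W[:, ::-1]        # reorder: lambda_1 > ... > lambda_m

T = Y @ W                             # score matrix: the principal components
pc_var = T.var(axis=0, ddof=1)        # Var(t_i)
print(np.allclose(pc_var, lam))       # True: variance of the i-th PC is lambda_i
```

Because the columns are dominated by one shared factor, the first eigenvalue carries most of the variance, so a single PC already summarizes the data well.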

It is worth pointing out that the residual matrix generated by PCA is an important indicator for sensing traffic congestion. Generally, anomalous traffic detection can be done by evaluating the residuals using multivariate monitoring schemes.
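As an illustration of such residual-based monitoring, one common statistic (our choice here, not specified by the chapter) is the squared prediction error (SPE) of the residual part, compared against a control limit learned from congestion-free data; the 99% empirical quantile below is a simple assumed threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, l = 300, 4, 1
# Synthetic congestion-free data: one shared latent factor plus sensor noise.
z = rng.standard_normal((n, 1))
Yb = z @ np.ones((1, m)) + 0.1 * rng.standard_normal((n, m))
Y = (Yb - Yb.mean(0)) / Yb.std(0)       # autoscale, as in Eq. (5.16)

C = Y.T @ Y / (n - 1)                   # sample covariance, Eq. (5.20)
lam, W = np.linalg.eigh(C)
W_hat = W[:, ::-1][:, :l]               # retain the l leading eigenvectors

def spe(y):
    """Squared prediction error: squared norm of the residual part of y."""
    e = y - y @ W_hat @ W_hat.T         # residual after projection onto the PC subspace
    return np.sum(e**2, axis=-1)

threshold = np.quantile(spe(Y), 0.99)   # empirical 99% control limit (an assumption)

normal = np.ones(m)                     # respects the training correlation structure
anomaly = np.array([3., -3., 3., -3.])  # violates it
print(spe(normal) <= threshold, spe(anomaly) > threshold)  # True True
```

The anomaly has a large component in the residual subspace even though each of its individual readings is plausible, which is exactly what residual-based schemes exploit.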


URL:

https://www.sciencedirect.com/science/article/pii/B9780128234327000100

## Soft computing hybrids for FOREX rate prediction: A comprehensive review

### 1 Introduction

The Foreign Exchange (FOREX) rate is the price of one currency expressed in terms of another. It is the most important price in any country’s economic system and a measure of that country’s economic health. Exchange rates have a tremendous influence on a country’s trade relationships, which in turn affect the common person’s standard of living. A country with a weaker currency finds its exports cheap and its imports expensive in the foreign currency market, which affects its economy. Based on accurate predictions of the FOREX rate, it can revisit its economic policies and change them suitably. Such changes help maintain trade relationships properly, which in turn strengthens the economy. Thus, the prediction of the FOREX rate is paramount and should never be underestimated (Hoag and Hoag, 2002).

A financial time series is a collection of chronologically recorded observations of financial variable(s); for example, the daily FOREX rate of a currency pair is a univariate financial time series. Compared to other time series, financial time series are intrinsically non-stationary and chaotic (Yao and Tan, 2000). A time series is said to be chaotic if and only if it is nonlinear, deterministic, and sensitive to initial conditions (Dhanya and Nagesh Kumar, 2010). Predicting a chaotic time series involves predicting the future behavior of the chaotic system by utilizing its current and past states.

In addition to these, financial time series prediction is a highly complicated task as a financial time series exhibits the following characteristics:

1. Financial time series often behave nearly like a random-walk process, making prediction almost impossible from a theoretical point of view (Hellstrom and Holmstrom, 1998).

2. Financial time series are usually very noisy, i.e., there is a large amount of random (unpredictable) day-to-day variation (Magdon-Ismail et al., 1998).

3. Statistical properties of financial time series differ at different points in time because the underlying process is time-varying (Hellstrom and Holmstrom, 1998).

Time series forecasting involves collecting historical observations of a variable, analyzing them to develop a model that captures the underlying data-generating process, and using that model to predict the future. Whenever a single model fails to capture all the characteristics of a time series, and a set of models in stand-alone mode cannot identify the true data-generating process, it is better to build hybrid models (Terui and Van Dijk, 2002). A hybrid is homogeneous if it comprises only nonlinear models, and heterogeneous if it combines linear and nonlinear models (Taskaya-Temizel and Casey, 2005).

Several researchers have demonstrated that hybrid or ensemble models yield better results than their constituent stand-alone models. Reid (1968) and Bates and Granger (1969) laid the foundation for various hybrid time series models. Bates and Granger (1969) concluded that suitably combining different forecasting models can yield better predictions than the stand-alone models. Similarly, Makridakis et al. (1982) reported that a hybrid or an ensemble of several models is commonly needed to improve forecasting accuracy. Pelikan and DeGroot (1992) and Ginzburg and Horn (1993) reported that combining several artificial neural networks (ANNs) improved time series forecasting accuracy. An excellent comprehensive review of various hybrid prediction models, with an annotated bibliography, can be found in Clemen (1989). Usually, a good hybrid prediction model can:

1. Improve the forecasting performance.

2. Overcome deficiencies of the constituent stand-alone models.

3. Reduce the model uncertainty (Chatfield, 1996).
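As a toy illustration of such a combination, in the spirit of Bates and Granger (1969) the constituent forecasts can be weighted inversely to each model's historical error variance (a sketch; the function name and the numbers are illustrative):

```python
import numpy as np

def combine(forecasts, past_errors):
    """Bates-Granger style combination: weights inversely proportional
    to each constituent model's historical error variance."""
    inv_var = 1.0 / np.array([np.var(e) for e in past_errors])
    w = inv_var / inv_var.sum()
    return w, w @ np.array(forecasts)

# Two stand-alone models: their past one-step errors and next-step forecasts.
errors_a = [0.5, -0.4, 0.6, -0.5]   # noisier model
errors_b = [0.1, -0.1, 0.2, -0.1]   # more accurate model
w, yhat = combine([10.0, 11.0], [errors_a, errors_b])
print(w.round(3), round(yhat, 3))   # most weight goes to the more accurate model
```

The combined forecast lands close to the better model's forecast while still drawing some information from the weaker one, which is the basic mechanism by which hybrids reduce model uncertainty.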

Soft Computing (SC) is a collection of computational techniques from computer science and several engineering disciplines. It exploits the tolerance for imprecision, uncertainty, partial truth, and approximation to achieve tractability, robustness, and low solution cost. The idea of hybridizing two or more machine learning techniques stems from the fact that each of them, in stand-alone mode, has merits and demerits; once hybridized, the demerits are nullified while the merits are amplified (Zadeh, 1994). SC is founded on the observation that the human mind can store and process information that is commonly unclear, ambivalent, and lacking in categorization. It can model and analyze complex systems arising in bioscience, medicine, the humanities, management sciences, and all fields of science and engineering (Galindo, 2008).

Table 1 presents the constituents of SC used in predicting FOREX rate time series, namely Artificial Neural Networks (ANN), Evolutionary Computation (EC), Fuzzy Logic (FL), Support Vector Machines (SVM), and chaos theory, along with their respective merits and demerits. The constituents of SC in general include fuzzy computing, neural computing, evolutionary computing, SVM, decision trees, chaos, probabilistic reasoning, and rough sets. The hybrid systems (or hybrids) include neuro-fuzzy, neuro-genetic, neuro-fuzzy-genetic, fuzzy-neural, and fuzzy-genetic systems, to mention a few.

Table 1. Constituents of soft computing used in hybrids.

| Constituent of soft computing | Basic idea | Merits | Demerits |
| --- | --- | --- | --- |
| Artificial Neural Network (ANN) | Capable of learning patterns from examples using various algorithms mimicking human learning | Suitable for diverse tasks of classification, clustering, forecasting, optimization, and function approximation | The optimal parameter combination of a training algorithm is found by fine-tuning; a lot of training data and time are needed. |
| Evolutionary Computation (EC) | Imitates Darwin's principles of evolution to solve nonlinear, non-convex global optimization problems | Capable of finding a near-global optimal solution of a nonlinear, non-convex function without getting trapped in local minima | Convergence is slow, and convergence to the global optimum is not guaranteed unless improved by a suitable direct search method. |
| Fuzzy Logic (FL) | Fuzzy sets can model the imprecision and ambiguity in the data; FL brings human experiential knowledge into the model via suitable fuzzy mathematics. | Capable of deriving human-understandable fuzzy "if-then" rules; low computational complexity | Often, the selection of a membership function is neither scientific nor unique. |
| Support Vector Machine (SVM) | Finds a hyperplane that separates the d-dimensional data perfectly into two classes. | Training is relatively easy, scales relatively well to high-dimensional data, and yields the global optimal solution. | Need to choose a good kernel function. |
| Chaos Theory | Characterizes a dynamical system by transforming it into its equivalent phase space. | Models underlying deterministic complex behavior in a system. | It is unclear how much data are required to construct the phase-space set; sensitive to initial conditions. |

It is interesting to note that even though some past reviews (Bahrammirzaee, 2010; Cavalcante et al., 2016; Huang et al., 2004; Li and Ma, 2010; Mochon et al., 2008; Yu et al., 2007a) covered the use of intelligent techniques for financial time series prediction, to the best of our knowledge no study has comprehensively reviewed the use of hybrid intelligent techniques, also known as soft computing techniques, exclusively for the problem of FOREX rate prediction. Therefore, this paper attempts to fill that gap in the literature by presenting a comprehensive review of the Soft Computing hybrid forecasting models for FOREX rate prediction that appeared during 1998–2017. This is necessary because in other fields, such as bankruptcy prediction (Ravi Kumar and Ravi, 2007; Verikas et al., 2010) and software engineering (Mohanty et al., 2010), such reviews caught the attention of researchers and proved useful, while the FOREX rate prediction area does not have a single such paper.

The objectives of the current review paper are as follows:

1. To systematically analyze the current state-of-the-art of soft computing models for FOREX rate prediction.

2. To identify gaps in current research efforts toward involving hybrid intelligent forecasting models in FOREX rate prediction, which will hopefully stimulate fruitful research in new, exciting, and hitherto unexplored areas.

The remainder of this paper is organized as follows: various earlier reviews are presented in Section 2. Section 3 presents an overview of the review methodology. Sections 4–8 present ANN-based hybrids, EC-based hybrids, FL-based hybrids, SVM-based hybrids, and chaos-based hybrids, respectively. Section 9 discusses overall observations and gaps found in the literature. Finally, Section 10 presents various conclusions and future directions. Table 2 presents the acronyms, Table 3 the currency codes, and Table 4 the performance metrics used in the paper.

Table 2. Acronyms used in the paper.

| Acronym | Interpretation | Acronym | Interpretation |
|---|---|---|---|
| AAE | Average Absolute Error | ICA | Independent Component Analysis |
| ACO | Ant Colony Optimization | k-NN | k-Nearest Neighbor |
| ANN | Artificial Neural Network | LSSVR | Least-Squares SVR |
| APE | Absolute Percentage Error | Max AE | Maximum Absolute Error |
| AR | Annualized Return | MACD | Moving Average Convergence/Divergence |
| ARMA | Autoregressive Moving Average | MAFE | Mean Absolute Forecast Error |
| ARIMA | Autoregressive Integrated Moving Average | MAPE | Mean Absolute Percentage Error |
| ARMSE | Absolute Root Mean Square Error | MARS | Multivariate Adaptive Regression Splines |
| BFO | Bacterial Foraging Optimization | MD/MDD | Maximum Drawdown |
| BPNN/BPN | Back Propagation Neural Network | MF | Membership Function |
| BVAR | Bayesian Vector Autoregression | MKL | Multiple Kernel Learning |
| CART | Classification and Regression Tree | ML | Machine Learning |
| CC | Correlation Coefficient | MLP | Multi-Layer Perceptron |
| CD | Correct Downtrend | MRE | Mean Relative Error |
| CDC | Conservative Dual-Criteria | MSE | Mean Squared Error |
| CGP | Cartesian Genetic Programming | MTF | Multivariate Transfer Function |
| CP | Correct Uptrend/Correct Prediction | NMSE | Normalized Mean Square Error |
| CSO | Cat Swarm Optimization | NRMSE | Normalized Root Mean Squared Error |
| CTR | Correct Trend Rate | PCA | Principal Component Analysis |
| DBN | Deep Belief Network | PNN | Probabilistic Neural Network |
| DE | Differential Evolution | PSN | Psi Sigma Neural Network |
| DENFIS | Dynamic Evolving Neuro-Fuzzy Inference System | RBF | Radial Basis Function |
| Dstat | Directional Change Statistic | RE | Relative Error |
| DWT | Discrete Wavelet Transform | RLSE | Recursive Least Squares Estimator |
| EC | Evolutionary Computation | RMSE | Root Mean Squared Error |
| ELM | Extreme Learning Machine | RNN | Recurrent Neural Network |
| ES | Exponential Smoothing | RPNN | Ridge Polynomial Neural Network |
| FBLMS | Forward Backward Least Mean Square | RSQ | R-Square |
| EMD | Empirical Mode Decomposition | SC | Soft Computing |
| FL | Fuzzy Logic | SMAPE | Symmetric MAPE |
| GA | Genetic Algorithm | SNR | Signal to Noise Ratio |
| GARCH | Generalized Auto Regression Conditional Heteroskedasticity | SVM | Support Vector Machine |
| GLAR | Generalized Linear Auto-Regression | SVR | Support Vector Regression |
| GMDH | Group Method of Data Handling | TAFE | Total Absolute Forecast Error |
| GMM | Generalized Method of Moments | TE | Total Error |
| GP | Genetic Programming | VAR | Vector Autoregression |
| GRNN | General Regression Neural Network | RW | Random Walk |
| HMM | Hidden Markov Model | PSO | Particle Swarm Optimization |
| IC | Independent Component | NSGA-II | Non-dominated Sorting GA-II |
| QR | Quantile Regression | RF | Random Forest |
| QRRF | Quantile Regression RF | LASSO | Least Absolute Shrinkage Selection Operator |

Table 3. Currency codes used in the paper.

| Code | Currency |
|---|---|
| AUD | Australian Dollar |
| CHF | Swiss Franc |
| CNY | Chinese Yuan |
| DEM | Deutsche Mark |
| EUR | Euro |
| FF | French Franc |
| GBP | British Pound |
| HKD | Hong Kong Dollar |
| INR | Indian Rupee |
| IRR | Iranian Rial |
| JPY | Japanese Yen |
| KRW | Korean Won |
| MOP | Macanese Pataca |
| MXN | Mexican Peso |
| MYR | Malaysian Ringgit |
| NTD | New Taiwan Dollar |
| PHP | Philippine Peso |
| RMB | Yuan Renminbi |
| ROL | Romanian Lei |
| RUB | Russian Ruble |
| SGD | Singapore Dollar |
| USD | United States Dollar |

Table 4. Performance measures used.

- $SSE=\sum_{t=1}^{N}e_t^2$: the sum of squared errors (lower is better).
- $MSE=\frac{SSE}{N}$: the mean of squared errors (lower is better).
- $RMSE=\sqrt{MSE}$: the square root of the mean squared error (lower is better).
- $NMSE=\frac{1}{N}\sum_{t=1}^{N}\frac{e_t^2}{(y_t-\overline{Y})^2}$: the mean of normalized squared errors (lower is better).
- $NRMSE=\sqrt{NMSE}$: the square root of the NMSE (lower is better).
- $MAD=\frac{1}{N}\sum_{t=1}^{N}\left|y_t-\overline{Y}\right|$: the average distance between each data value and the mean (lower is better).
- $MAE=\frac{1}{N}\sum_{t=1}^{N}\left|e_t\right|$: the mean of absolute errors (lower is better).
- $MAPE=\frac{100}{N}\sum_{t=1}^{N}\left|\frac{e_t}{y_t}\right|$: the mean of absolute errors expressed in percentage terms (lower is better).
- $U=\frac{\sqrt{\frac{1}{N}SSE}}{\sqrt{\frac{1}{N}\sum_{t=1}^{N}y_t^2}+\sqrt{\frac{1}{N}\sum_{t=1}^{N}\hat{y}_t^2}}$: Theil's inequality coefficient, which measures the closeness of predictions to actual values; values closer to zero indicate more accurate predictions.
- $Dstat=\frac{100}{N}\sum_{t=1}^{N}a_t$, where $a_t=1$ if $(y_{t+1}-y_t)(\hat{y}_{t+1}-\hat{y}_t)\ge 0$ and $a_t=0$ otherwise: the directional accuracy of the forecast (higher is better).
- $RE=\sum_{t=1}^{N}\left|\frac{e_t}{y_t}\right|$: the ratio between the absolute error and the actual data (lower is better).
- $MRE=\frac{1}{N}RE$: the relative error averaged over the series, a stricter expression of prediction accuracy (lower is better).
- $CTR=\frac{1}{N}\sum_{t=1}^{N}b_t$, where $b_t=1$ if $(y_{t+1}-y_t)(\hat{y}_{t+1}-y_t)\ge 0$ and $b_t=0$ otherwise: the correct trend rate, measuring the prediction effect of the algorithm (higher is better).
- $TE=\sum_{t=1}^{N}\left|e_t\right|$: the total absolute error (lower is better).
- $RSQ=1-\frac{\sum_{t=1}^{N}e_t^2}{\sum_{t=1}^{N}(y_t-\overline{Y})^2}$: measures how close the data are to the fitted regression line.
- $SNR=10\log\left(\frac{N\max(y_t^2)}{SSE}\right)$: the signal-to-noise ratio of the predictions; a higher value indicates more accurate predictions.
- $CC=\frac{N\sum_{t=1}^{N}y_t\hat{y}_t-\sum_{t=1}^{N}y_t\sum_{t=1}^{N}\hat{y}_t}{\sqrt{N\sum_{t=1}^{N}y_t^2-\left(\sum_{t=1}^{N}y_t\right)^2}\sqrt{N\sum_{t=1}^{N}\hat{y}_t^2-\left(\sum_{t=1}^{N}\hat{y}_t\right)^2}}$: the correlation coefficient, which measures whether the predicted series follows the same upward and downward jumps as the actual series. A value near 1 shows that both have the same jumps, while a negative sign indicates the predicted series mirrors the ups and downs of the actual series.

Here $y_t$ is the actual observation at time $t$, $\hat{y}_t$ the predicted value at time $t$, $e_t=y_t-\hat{y}_t$ the error, and $\overline{Y}$ the mean of the actual observations.
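As a concrete illustration of three of the measures in Table 4, the following sketch computes RMSE, MAPE and Dstat for a short series; the toy exchange-rate values are invented for illustration and do not come from the reviewed papers.

```python
import math

def rmse(y, y_hat):
    """Root mean squared error: sqrt of the mean of squared errors."""
    sse = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    return math.sqrt(sse / len(y))

def mape(y, y_hat):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(y) * sum(abs((a - b) / a) for a, b in zip(y, y_hat))

def dstat(y, y_hat):
    """Directional change statistic: percentage of steps where the
    predicted and actual series move in the same direction."""
    n = len(y) - 1
    hits = sum(
        1 for t in range(n)
        if (y[t + 1] - y[t]) * (y_hat[t + 1] - y_hat[t]) >= 0
    )
    return 100.0 * hits / n

actual = [1.10, 1.12, 1.11, 1.15, 1.14]      # toy exchange-rate series
predicted = [1.11, 1.13, 1.12, 1.14, 1.15]
print(round(rmse(actual, predicted), 4))     # magnitude of errors
print(round(mape(actual, predicted), 2))     # errors in percent
print(round(dstat(actual, predicted), 1))    # directional accuracy
```

Note that Dstat can disagree sharply with the error-based measures: a forecast can have tiny RMSE yet consistently miss turning points, which is why directional metrics are reported separately in the FOREX literature.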


URL:

https://www.sciencedirect.com/science/article/pii/S0305054818301436

## Enhancing transportation systems via deep learning: A survey

Yuan Wang, ... Loo Hay Lee, in Transportation Research Part C: Emerging Technologies, 2019

### 6.5 Tips for DL model design

We have reviewed the technology evolution trends for time series prediction tasks and observed that they follow similar patterns. Initially, simple DNN models are applied; afterwards, CNN or LSTM models are used for improvement; finally, hybrid models are proposed and achieve state-of-the-art performance. It is also well recognized that CNN models are particularly effective at processing image data, while LSTM and GRU models are more effective at extracting useful features from sequential data. In certain scenarios, these two types of models can be integrated in an end-to-end network to improve accuracy. Finally, the attention mechanism has been shown to be very effective in many applications (Vaswani et al., 2017) and can be conveniently integrated with existing deep learning models. Unfortunately, we rarely observe the use of the attention mechanism in ITS, and this is a direction worth exploring.

When there is not a sufficient amount of training data, overfitting is rather common in DL models because there are too many parameters to train. One useful strategy to resolve the issue is dropout (Srivastava et al., 2014), which randomly drops certain neurons, and their parameter updates, during the training phase. Another solution is to apply L1 or L2 regularization to the weight parameters of the network.
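As a rough illustration of these two remedies, the following minimal numpy sketch (our own illustration, not code from the surveyed papers) implements inverted dropout and an L2 weight penalty:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate, training):
    """Inverted dropout: during training, randomly zero a fraction
    `rate` of the activations and rescale the survivors so the
    expected activation is unchanged at inference time."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

def l2_penalty(weights, lam):
    """L2 regularization term added to the training loss."""
    return lam * sum(np.sum(w ** 2) for w in weights)

h = np.ones((4, 8))                       # a batch of hidden activations
h_train = dropout(h, rate=0.5, training=True)
h_eval = dropout(h, rate=0.5, training=False)   # unchanged at inference
W = [np.full((8, 8), 0.1)]                # toy weight matrix
print(h_eval.mean())
print(l2_penalty(W, lam=1e-3))            # added to the loss each step
```

In practice the penalty is simply added to the loss before back-propagation (or, equivalently, applied as weight decay in the optimizer), while the dropout mask is resampled on every training batch.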


URL:

https://www.sciencedirect.com/science/article/pii/S0968090X18304108

## Electrical load forecasting models: A critical systematic review

Corentin Kuster, ... Monjur Mourshed, in Sustainable Cities and Society, 2017

### 2.3.4 Support vector machine

Support Vector Machines were first introduced by Vladimir Vapnik in a paper at the COLT 1992 conference (Boser, Guyon, & Vapnik, 1992). Then, in 1995, the soft margin classifier was introduced by Cortes and Vapnik in the paper Support Vector Networks (Cortes & Vapnik, 1995). Originally, SVMs were created to deal with pattern classification problems such as character recognition, face identification and text classification. In 1995, Vapnik extended the SVM to a regression algorithm in his book The Nature of Statistical Learning Theory (Cherkassky, 1997). Over the years, various other applications have appeared in the literature, e.g. the time series prediction problem. The purpose of an SVM is to create an optimal separating hyperplane in a higher-dimensional feature space such that subsequent observations can be classified into separate subsets. In practice, real data are rarely perfectly separable. To obtain a hyperplane at all, one has to relax the requirement that a separating hyperplane perfectly separate every training observation; for that purpose, the soft margin classifier (SVC) was constructed. SVMs are convenient in the case of non-linear boundaries (Auria & Moro, 2008): they allow non-linear decision boundaries by applying an appropriate transformation that makes them linear in a higher-dimensional feature space. Unfortunately, computation in a high-dimensional feature space can be very costly, and SVMs depend heavily on the proper selection of hyper-parameters (Adhikari & Agrawal, 2013). To improve computational efficiency, a solution called the "kernel trick" is used (Adhikari & Agrawal, 2013). Kernels are functions that represent inner products between observations rather than the observations themselves. They thus modify how "similarity" between two observations is calculated in a more flexible way, allowing a non-linear problem to be solved as a linear problem in a higher-dimensional space.
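The kernel trick can be sketched with kernel ridge regression, a close relative of SVR that is simpler to write out in a few lines; the RBF kernel, window length and toy sine series below are our illustrative assumptions, not details from the cited works:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel: inner products in an implicit high-dimensional
    feature space, computed without ever constructing that space."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

# Sliding-window features from a univariate series (window = 4, horizon = 1).
series = np.sin(np.linspace(0, 8 * np.pi, 200))
w = 4
X = np.array([series[i:i + w] for i in range(len(series) - w)])
y = series[w:]

# Kernel ridge regression: a linear model in the kernel-induced space,
# fitted entirely through pairwise kernel evaluations.
lam = 1e-3
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

x_new = series[-w:][None, :]          # window just past the training data
pred = rbf_kernel(x_new, X) @ alpha
print(pred.item())                    # one-step-ahead forecast
```

The point of the sketch is that the non-linear mapping never appears explicitly: both fitting and prediction only ever evaluate kernel values between pairs of observations, exactly as the paragraph describes.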


URL:

https://www.sciencedirect.com/science/article/pii/S2210670717305899

## A review on renewable energy and electricity requirement forecasting models for smart grid and buildings

Tanveer Ahmad, ... Biao Yan, in Sustainable Cities and Society, 2020

### 2.1.4 Electricity requirement forecasting models

Future electricity requirement prediction, also known as energy demand prediction or load prediction, is not a novel idea. The earliest studies on electricity requirement prediction date back to 1965 (Heinemann, Plant, & Nordman, 1966). Reliable electricity requirement prediction is of critical economic importance (Bunn, 2000). In Ref. (Hobbs, 1999), a 1 % decrease in the mean absolute percentage error (MAPE) of the prediction conserved 10,000 MW h of energy, indicating that a precise energy method can save up to $1.6 million in a given fiscal year. Electricity requirement prediction plays a significant part in system operations, energy production, transmission and storage, and accurate prediction is attracting considerable attention in energy planning and management (Khuntia, Rueda, & van der Meijden, 2016). Accordingly, many studies have concentrated on how to increase the accuracy and robustness of energy demand prediction.

#### 2.1.4.1 Machine learning models

Electricity requirement prediction technologies based on ML models are extensively applied in the area of applied energy, such as wind energy and wind speed prediction (Meng, Ge, Yin, & Chen, 2016), load demand and peak demand prediction (Hernandez et al., 2014), building load requirement forecasting (Ahmad, Chen, & Shair, 2018) and cooling load prediction (Ahmad & Chen, 2018b). The heating, ventilation and air conditioning (HVAC) system in the commercial sector accounts for about 40 % of real-time energy expenditure, particularly in subtropical regions, which makes such models fundamental tools for increasing building energy performance, reliability and accuracy (Fan, Xiao, & Wang, 2014). Rapid advancement in ML approaches has made them efficient for univariate time-series prediction, and the basic challenge lies in the choice and selection of prediction techniques (Bontempi, Ben Taieb, & Le Borgne, 2013).

Current studies of load prediction give an overview of recent forecasting classifications and their distribution across different sectors. Deb, Zhang, Yang, Lee, and Shah (2017) analyzed and compared the forecasting accuracy results of prior research (Deb et al., 2017; Zhao & Magoulès, 2012), investigating nine general prediction approaches built on the ML platform. Zhao and Magoulès (2012) reviewed the proposed classifications for forecasting energy load demand, including complex statistical techniques, AI approaches and engineering-based techniques. However, these studies are based on forecasting analysis results drawn from different kinds of databases. The accuracy of these models is presented in Table 9.

Table 9. Short-term electricity forecasting with machine learning models.

| Sr. No. | Models | Advantages | Year | Region | Ref. | Metrics (MAE / CV / MAPE / RMSE / R, as reported) |
|---|---|---|---|---|---|---|
| 1 | Grey wolf optimization, least-squares SVM | A practical technique that can increase prediction performance remarkably | 2019 | Australia | (Yang, Li, & Yang, 2019) | 32.20; 0.555; 0.99 |
| 2 | Autoregressive integrated moving average | The model can capture the various characteristics connected with power load | 2018 | Australia | (Zhang, Wei, Li, Tan, & Zhou, 2018) | 113.6; 1.42 % |
| 3 | CDT, FitcKnn, LRM, and Stepwise-LRM | Facilitates investment management by power companies and commercial and industrial consumers | 2018 | Beijing, China | (Ahmad & Chen, 2018c) | 1.67; 0.03 %; 15.10 % |
| 4 | Machine learning models | Supports consumers in energy planning and management | 2018 | New Taipei City, Taiwan | (Chou & Tran, 2018) | 0.02; 15.65 %; 0.09 %; 0.79 |
| 5 | ANN with nonlinear autoregressive exogenous multivariable inputs, multiple linear regression, AdaBoost | Accuracy is shown to be promising | 2018 | ISO New England | (Ahmad & Chen, 2018a) | 4.99 %; 0.01 % |
| 6 | Chaos–support vector regression | The developed algorithm could also be applied in other forecasting areas | 2019 | | (Xuan et al., 2019) | 2.44 %; 3.89 %; 0.74 |
| 7 | Extreme learning machine | Useful mapping capability; can adequately handle a considerable variety of modeling and mapping difficulties | 2018 | | (Chen, Kloft, Yang, Li, & Li, 2018) | 71.00; 0.92 %; 83.93 % |
| 8 | Machine learning, support vector regression, regression trees | Capable of finding global rather than local minima in the forecasting solution space; higher speed and accuracy | 2017 | University of New South Wales | (Yildiz, Bilbao, & Sproul, 2017) | 1.04 %; 0.99 |
| 9 | Recurrent extreme learning machine | High potential for modeling dynamic systems efficiently | 2016 | Portugal | (Ertugrul, 2016) | 0.02 % |
| 10 | Machine learning algorithms | Better and higher performance | 2018 | Canada | (Saloux & Candanedo, 2018) | 2.9 % – 3.9 % |
| 11 | Nadaraya–Watson kernel density estimator | Captures one of the higher reliability indicator numbers | 2018 | Spain and Portugal | (Monteiro, Ramirez-Rosado, Fernandez-Jimenez, & Ribeiro, 2018) | 5.55 |
| 12 | Machine learning model | Big data can be consolidated in the forecasting approach to increase performance and interpret complicated data-analytic issues | 2016 | United States | (Naimur Rahman, Esmailpour, & Zhao, 2016) | 4.13 % |
| 13 | Supervised machine learning models | Higher speed, higher accuracy, network stability | 2018 | ISO New England | (Ahmad, Chen, Huang et al., 2018) | 1.60 %; 0.98 % |
| 14 | Stepwise regression, nonlinear autoregressive model | Algorithms guarantee precise design and network operation of the different distributed energy system operations | 2019 | ISO New England | (T. Ahmad & Chen, 2019) | 4.53 %; 3.19 %; 402 % |
| 15 | Deep learning, multi-modal | Calculates the energy tendency more accurately with lower errors | 2018 | New York City | (Tong et al., 2018) | 1.72 % |
| 16 | Multiple linear regression | Precise cooling prediction in the building sector | 2018 | Beijing, China | (Ahmad, Chen, Shair, 2018) | 0.73; 1.70 %; 0.78 % |
| 17 | Multivariate adaptive regression spline | Helpful scientific tool for the investigation of real-time power requirement data prediction | 2018 | Queensland, Australia | (Al-Musaylh, Deo, Adamowski, & Li, 2018) | 0.76; 0.995 |
| 18 | Chaos-SVR, WD-SVR, SVR and BP | Several features suitable for various kinds of cooling load in time series | 2017 | | (Xuan, Zhubing, Liequan, Junwei, & Dongmei, 2017) | 0.85 |
| 19 | Artificial fish swarm and gene expression programming | Benefits in mean time consumption and mean number of convergences; higher predictive efficiency and top parallel performance in scale-up and speedup | 2018 | Multiple locations | (Deng, Yuan, Yang, & Zhang, 2018) | 3.69 % |
| 20 | Gaussian process regression | Useful in forecasting abnormal behavior in datasets as well as cooling energy requirement prediction | 2019 | Beijing, China | (Ahmad, Chen, Shair, & Xu, 2019) | 0.19; 2.05 %; 2.59 %; 0.99 |

#### 2.1.4.2 Ensemble-based approaches

Ensemble-based approaches combine a number of individual ensemble methods used for model training. It is observed that ensemble approaches possess strengths over single methods in terms of increased robustness and accuracy. Many ensemble techniques have been proposed for short-term energy prediction (Abdel-Aal, 2005; Brown, Wyatt, & Tino, 2005; Taylor & Buizza, 2002). For instance, in Ref. (De Felice & Yao, 2011), a negative-correlation regularized learning approach was applied to improve the forecastability of the ensemble network.

Ensemble learning techniques, which achieve higher prediction accuracy and efficiency by strategically combining individual learning models, have been used extensively in different research areas including time-series forecasting, regression and pattern classification. Dietterich identified three primary reasons for the success of ensemble techniques: representational, computational and statistical (Dietterich, 2007). Furthermore, the bias-variance decomposition (Geman, Bienenstock, & Doursat, 1992) also demonstrates why ensemble models achieve higher accuracy and efficiency than their non-ensemble counterparts. Among the many ensemble techniques (EMD based AdaBoost-BPNN method for wind speed forecasting, 2014; Hu, Bao, & Xiong, 2014; Qiu, Zhang, Ren, Suganthan, & Amaratunga, 2014; Ren, Suganthan et al., 2016; Wei, 2016), divide and conquer (Radhakrishnan, Kolippakkam, & Mathura, 2007) is a principle frequently applied in time series prediction. The wavelet transform is commonly used in time-series decomposition models: it decomposes the primary time series into several orthonormal subseries viewed in different regions of the time-frequency domain. Benaouda, Murtagh, Starck, and Renaud (2006) used a multiscale wavelet-based nonlinear decomposition approach for energy demand prediction, applying adaptive wavelet ANNs for short-term prediction with a feed-forward network and several hidden layers of neurons. Table 10 shows short-term electricity forecasting with ensemble-based approaches. It can be observed that ensemble-based empirical mode decomposition and deep-learning-based ensemble models are widely used in different kinds of research, including classification and time series analysis.
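The decompose-forecast-recombine idea can be sketched with a one-level Haar transform, the simplest discrete wavelet; the toy load values and the naive persistence forecasts per subseries below are our illustrative stand-ins, not the models used in the cited studies:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: split a series into a low-frequency
    approximation and a high-frequency detail subseries."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar DWT: interleave the reconstructed samples."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

load = np.array([310.0, 305.0, 412.0, 420.0, 395.0, 401.0, 330.0, 328.0])
a, d = haar_dwt(load)

# Forecast each subseries separately (here naive persistence stands in
# for the per-subseries models used in the literature), then invert the
# transform to obtain the recombined forecast.
a_next, d_next = a[-1], d[-1]
recombined = haar_idwt(np.append(a, a_next), np.append(d, d_next))
print(np.allclose(haar_idwt(a, d), load))   # the transform is lossless
```

Because the subseries are orthonormal, each can be modeled at its own frequency scale and the per-subseries forecasts sum back exactly, which is the property the decomposition-based ensembles above exploit.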

Table 10. Short-term electricity forecasting with ensemble-based approaches.

| Sr. No. | Models | Advantages | Year | Region | Ref. | Metrics (MAE / CV / MAPE / RMSE / R, as reported) |
|---|---|---|---|---|---|---|
| 1 | Partial least squares regression, extreme learning machine | The numerical results show the developed approaches can substantially increase prediction accuracy | 2016 | ISO New England | (Li, Goel, & Wang, 2016) | 1.14 % |
| 2 | Ensemble method | Achieves better prediction results than different state-of-the-art standard approaches | 2016 | ISO New England | (Li, Wang, & Goel, 2016) | 0.91 % |
| 3 | Deep learning, ensemble method | Fault reliability and prediction are higher in real-time applications | 2014 | National Aeronautics and Space Administration | (Qiu et al., 2014) | 0.11 %; 27.33 %; 0.16 % |
| 4 | Empirical mode decomposition, deep learning ensemble method | Widely used in different research areas including pattern classification, time series and regression prediction | 2017 | Australia | (Qiu et al., 2017) | 266.58; 3.00 % |
| 5 | Autoregressive integrated moving average | Benefits from several predictive algorithms to obtain reliable results | 2016 | Iran | (Barak & Sadegh, 2016) | 12.59; 15.74 % |
| 6 | Random forests, gradient boosting regression trees | Gradient boosting and random forest trees can be suitable for energy prediction applications and yield actual results | 2015 | Burlington, Concord, Portland, Boston, Bridgeport | (Papadopoulos & Karakatsanis, 2015) | 1.97 %; 270.6 % |
| 7 | Generalizable approach | The ensemble approach is capable of incorporating complicated forecasters | 2015 | California | (Burger & Moura, 2015) | 7.5 % |
| 8 | Ensemble empirical mode decomposition | Great generalization capability, higher training accuracy and speed, and a better balance of error | 2018 | Jiangsu, China | (Li, Tao, Ao, Yang, & Bai, 2018) | 26,765; 5.31 %; 22,3.0 % |
| 9 | Ensemble learning | An accurate and convenient method to forecast household energy usage requirements | 2018 | United States | (Chen, Jiang, Zheng, & Chen, 2018) | 1562 %; 0.16 |
| 10 | Ensemble Kalman filter | The efficiency of the developed algorithms is substantially higher than present state-of-the-art approaches | 2016 | Japan | (Takeda, Tamura, & Sato, 2016) | 1.86 % |
| 11 | Evolutionary algorithms, multi-objective optimization | Results show reliability and higher accuracy of the models used | 2018 | New Zealand | (Peimankar, Weddell, Jalal, & Lapthorn, 2018) | 0.02 %; 0.09 % |
| 12 | AdaBoost ensemble model | Does not require a pre-assumed form of the method; higher nonlinear mapping capability; can solve complex nonlinear problems | 2018 | China | (Xiao, Li, Xie, Liu, & Huang, 2018) | 1.20 %; 0.46 % |
| 13 | Ensemble learning, robust regression | Conceptual advantages of ensemble learning, relying on diversity within different kinds of network datasets | 2018 | France | (Alobaidi, Chebana, & Meguid, 2018) | 11.39 %; 296.34 % |
| 14 | Ensemble forecasting, echo state network | Boosting models are more appropriate for unstable time-series prediction | | China | (Wang, Lv, & Zeng, 2018) | 4.18; 4.69 %; 6.48 % |
| 15 | Ensemble approach | Can be applied immediately to HVAC systems to tackle the time-lag issue | 2018 | Hong Kong | (Wang, Lee, & Yuen, 2018) | 0.86 |
| 16 | Ensemble learning, bagging trees | Can be used for real-time energy networks such as system fault detection and diagnosis | 2018 | Gainesville, Florida | (Wang, Wang, Srinivasan, 2018) | 3.68 %; 1.91 %; 0.89 |

#### 2.1.4.3 Artificial neural networks

A considerable amount of research has been carried out on load demand forecasting, and various methods, such as the autoregressive integrated moving average (ARIMA), SVMs and ANNs, have been introduced to solve such complex problems. Where no single accurate technique has been identified, the combination of energy predictions has been one of the most effective, essential and successful lines of study since its introduction by Bates and Granger (Bates & Granger, 1969) in the late 1960s. However, defining the relevant determinants is a vital and challenging issue for ANN applications. Since no systematic approach is available, various heuristic methods have been developed in the literature, as presented in (Aladag, 2011; Aladag, Egrioglu, Gunay, & Basaran, 2010; Anders & Korn, 1999; Aras & Kocakoç, 2016; Egrioglu, Yolcu, Aladag, & Bas, 2015; Heravi, Osborn, & Birchenhall, 2004; Lachtermacher & Fuller, 1995).

A substantial number of studies have examined building energy forecasting using various computational intelligence techniques. In the field of building load, ANNs are recognized as the common favourite option for forecasting load demand in the buildings sector (Ahmad, Mourshed, & Rezgui, 2017). ANNs were applied to estimate the load demand of a passive solar house in (Kalogirou & Bojic, 2000). A single back-propagation ANN for short-term building energy prediction was applied by Gonzales et al. (González & Zamarreño, 2005). A customary regression-based ANN was applied to predict the cooling energy demand associated with energy usage for three main buildings (Ben-Nakhi & Mahmoud, 2004). Four forecasting models, consisting of the conventional back-propagation ANN, the general regression ANN, the radial basis function network and the SVM, were applied to forecast the one-hour cooling energy demand of a building located in China (Li, Meng, Cai, Yoshino, & Mochida, 2009). Several other studies have concentrated on ANNs for short-term energy prediction (Hippert, Pedreira, & Souza, 2001; Rodrigues, Cardeira, & Calado, 2017). The forecasting results from these studies show that ANN approaches are a comparatively efficient method for predicting short-term energy demand for commercial buildings and homes. Table 11 explicates the advantages of ANNs; it also shows that ANNs improve the stability and accuracy of forecasts with simplicity and higher performance.
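A minimal back-propagation ANN for one-step-ahead load forecasting, in the spirit of the studies above, might look as follows; the synthetic daily-cycle series, window length and network size are our illustrative assumptions, not details from any cited work:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic load-like series with a 24-step daily cycle, turned into
# sliding-window samples (window = 6, one-step-ahead target).
t = np.arange(400)
series = 0.5 + 0.4 * np.sin(2 * np.pi * t / 24)
w = 6
X = np.array([series[i:i + w] for i in range(len(series) - w)])
y = series[w:][:, None]

# One-hidden-layer network trained with plain back-propagation.
W1 = rng.normal(0.0, 0.5, (w, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
mse0 = ((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean()
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                # forward pass
    out = h @ W2 + b2
    err = out - y                           # gradient of 0.5 * MSE w.r.t. out
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)      # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
mse1 = ((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean()

pred = np.tanh(series[-w:] @ W1 + b1) @ W2 + b2
print(f"train MSE {mse0:.4f} -> {mse1:.4f}, next-step forecast {pred.item():.3f}")
```

This is the same sliding-window framing used throughout the surveyed load-forecasting work; practical systems add calendar and weather features to the input window and regularize as discussed earlier.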

Table 11. Short-term electricity forecasting with artificial neural networks.

| Sr. No. | Models | Advantages | Year | Region | Ref. | Metrics (MAE / CV / MAPE / RMSE / R, as reported) |
|---|---|---|---|---|---|---|
| 1 | LASSO, quantile regression neural network, probability density forecasting | Can not only capture the high-dimensional structure of the data in energy demand prediction but also give more accurate results | 2019 | Guangdong province, China | (Yaoyao He, Qin, Wang, Wang, & Wang, 2019) | 0.16 % |
| 2 | Probabilistic load forecasting, neural networks | Very precise predictions, among the top machine learning models | 2019 | ISO New England | (Dimoulkas, Mazidi, & Herre, 2018) | 2.54 %; 26.2 % |
| 3 | Artificial neural network | An effective method to calculate short-term load for commercial and residential buildings | 2018 | Japan | (Yuan, Farnham, Azuma, & Emura, 2018) | 0.99 |
| 4 | Neural networks | Specifies models parsimoniously at a lower computational cost | 2018 | Egypt | (Tealab, 2018) | |
| 5 | Neural-network-based linear ensemble framework | Attempts to address the familiar overfitting issue of the networks | 2018 | Northern Canada | (Wang, Wang, Qu, & Liu, 2018) | −0.49 |
| 6 | Back-propagation (BP) neural network | Improves the stability and accuracy of forecasts; appropriate for short-term forecasting | 2018 | China | (Ye & Kim, 2018) | 659.4 % |
| 7 | Wavelet transform with best basis selection | Decreases the dimensionality of the data without losing relevant information | 2016 | Australia & Spain | (Rana & Koprinska, 2016) | 23.58; 0.26 % |
| 8 | Deep neural network | Models are quite adjustable and can be applied to other time-series forecasting tasks | 2017 | China | (W. He, 2017) | 99.41; 1.34 % |
| 9 | Deep belief networks, restricted Boltzmann machines | Concentrate on the parameters most important for the network output and neglect those with small influence on it | 2016 | Macedonia | (Dedinec, Filiposka, Dedinec, & Kocarev, 2016) | 8.6 % |
| 10 | Time series forecasting | Simplicity, higher accuracy | 2018 | University of Granada, Granada, Andalucía, Spain | (Ruiz, Rueda, Cuéllar, & Pegalajar, 2018) | |
| 11 | Artificial neural network, Bayesian regularisation | An algorithm with adaptive training classifications capable of predicting power consumption | 2016 | International Business Machines Building | (Chae, Horesh, Hwang, & Lee, 2016) | 9.35 % |
| 12 | Artificial neural network, COCO framework | Gives the advantage of shorter training time | 2018 | New England | (Singh & Dwivedi, 2018) | 3.28 % |
| 13 | Forecast neural network, CID-STNN forecasting model | The models have substantial power to handle unconstrained problems | 2018 | West Texas | (Cen & Wang, 2018) | 0.89; 1.21 % |
| 14 | Artificial neural networks | Higher performance and forecasting accuracy | 2018 | Multiple locations | (Ahmad, Chen, Guo, & Wang, 2018) | 21.45 |
| 15 | Artificial intelligence approaches | Ability to generalize, built-in cross-validation and low sensitivity to variable costs | 2017 | France | (Mordjaoui, Haddad, Medoued, & Laouafi, 2017) | 3.26 %; 2604.4 % |
| 16 | Artificial neural networks | Calculates the maximum peak load and minimum off-peak demand with greater efficiency | 2018 | Taiwan | (Hsu, Tung, Yeh, & Lu, 2018) | 1.90 % |
| 17 | Combined forecasting method, BP, ANFIS, diff-SARIMA | Effective at decreasing errors and improving the agreement between forecasted and real-time load consumption | 2016 | | (Yang, Chen, Wang, Li, & Li, 2016) | 139.65; 1.59 % |
| 18 | Artificial neural networks, ensemble neural networks | Higher prediction accuracy and performance, as they can accurately model the highly non-linear correlation | 2017 | New England | (Khwaja, Zhang, Anpalagan, & Venkatesh, 2017) | 1.99 % |
| 19 | Fruit fly optimization model, general regression ANNs | These methods give full play to the benefits of each single approach | 2019 | Langfang, China | (Liang, Niu, & Hong, 2019) | 7.35; 0.80 %; 9.58 % |


URL:

https://www.sciencedirect.com/science/article/pii/S2210670720300391
