Forecasting Models with R

  • Rating: 3.9
  • 5.5 hours on-demand video
  • $12.99

Brief Introduction

Learn the main forecasting models, from basic to expert level, through a practical course with R statistical software.

Description

Full Course Content Last Update 04/2018

Learn forecasting models through a practical course with R statistical software, using S&P 500® Index ETF historical price data. It explores the main concepts from basic to expert level, which can help you achieve better grades, develop your academic career, apply your knowledge at work or carry out your own business forecasting research. All of this while exploring the wisdom of the best academics and practitioners in the field.

Become a Forecasting Models Expert in this Practical Course with R

  • Read S&P 500® Index ETF prices data and perform forecasting models operations by installing related packages and running script code on RStudio IDE.
  • Estimate simple forecasting methods such as arithmetic mean, random walk, seasonal random walk and random walk with drift.
  • Evaluate simple forecasting methods' forecasting accuracy through the scale-dependent mean absolute error and root mean squared error metrics and the scale-independent mean absolute percentage error metric.
  • Approximate simple moving averages and exponential smoothing methods with no trend or seasonal patterns such as Brown simple exponential smoothing method.
  • Estimate exponential smoothing methods with only trend patterns such as Holt linear trend, exponential trend, Gardner additive damped trend and Taylor multiplicative damped trend methods.
  • Approximate exponential smoothing methods with trend and seasonal patterns such as Holt-Winters additive, Holt-Winters multiplicative and Holt-Winters damped methods.
  • Automatically select the exponential smoothing method with the lowest information loss criterion.
  • Assess simple moving average and exponential smoothing methods' forecasting accuracy through the Hyndman-Koehler scale-independent mean absolute scaled error metric.
  • Identify the Box-Jenkins autoregressive integrated moving average model integration order through augmented Dickey-Fuller and Phillips-Perron unit root tests on level and first-differenced trend stationary time series.
  • Recognize autoregressive integrated moving average model autoregressive and moving average orders through autocorrelation and partial autocorrelation functions.
  • Estimate non-seasonal autoregressive integrated moving average models such as random walk, random walk with drift, differentiated first order autoregressive, Brown simple exponential smoothing, simple exponential smoothing with growth, Holt linear trend and Gardner additive damped trend models.
  • Approximate seasonal autoregressive integrated moving average models such as seasonal random walk, seasonal random trend, general seasonal, general first order autoregressive seasonal, seasonally differentiated first order autoregressive and Holt-Winters additive models.
  • Automatically choose the autoregressive integrated moving average model with the lowest information loss criterion.
  • Evaluate autoregressive integrated moving average models' forecasting accuracy through scale-dependent and scale-independent metrics and the Akaike, corrected Akaike and Schwarz Bayesian information loss criteria.
  • Assess whether the residuals (forecasting errors) of the lowest information loss criterion autoregressive integrated moving average model meet the white noise requirement through the Ljung-Box autocorrelation test.

Become a Forecasting Models Expert and Put Your Knowledge in Practice

Learning forecasting models is indispensable for business or financial data science applications in areas such as sales and financial forecasting, inventory optimization, demand and operations planning, and cash flow management. It is also essential for academic careers in data science, applied statistics, operations research, economics, econometrics and quantitative finance. And it’s necessary for business forecasting research.

But as the learning curve can become steep as complexity grows, this course helps by leading you step by step, using S&P 500® Index ETF historical price data for forecast modelling, to achieve greater effectiveness.

Content and Overview

This practical course contains 40 lectures and 6 hours of content. It's designed for all forecasting models knowledge levels; a basic understanding of R statistical software is useful but not required.

At first, you’ll learn how to read S&P 500® Index ETF prices historical data to perform forecasting models operations by installing related packages and running script code on RStudio IDE.
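A minimal R sketch of this first step. The course supplies its own S&P 500® Index ETF data file, so a synthetic monthly price series stands in here to keep the example self-contained; the file name and column names in the commented lines are hypothetical.

```r
# Setup sketch: install and load the forecast package, then build a
# monthly time series object. Synthetic data stands in for the ETF prices.
# install.packages("forecast")   # run once in RStudio if not installed
library(forecast)

set.seed(1)
prices <- cumsum(rnorm(120, mean = 0.5)) + 100   # stand-in for ETF prices
spy <- ts(prices, start = c(2008, 1), frequency = 12)

# With the course data file you would instead read it in, e.g.:
# data <- read.csv("your-data-file.csv")          # hypothetical file name
# spy  <- ts(data$Close, start = c(2008, 1), frequency = 12)

length(spy)       # 120 monthly observations
frequency(spy)    # 12
```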

Then, you’ll define simple forecasting methods such as arithmetic mean, random walk, seasonal random walk and random walk with drift. Next, you’ll evaluate simple methods forecasting accuracy through scale-dependent and scale-independent error metrics. For scale-dependent metrics, you’ll define mean absolute error and root mean squared error. For scale-independent metrics, you’ll define mean absolute percentage error. 
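The four simple benchmark methods and their accuracy metrics can be sketched with the forecast package. The series below is a synthetic stand-in, not the course data; the train/test split is illustrative.

```r
# Simple forecasting methods and accuracy evaluation sketch.
library(forecast)

set.seed(1)
spy <- ts(cumsum(rnorm(120, mean = 0.5)) + 100, frequency = 12)
train <- window(spy, end = c(9, 12))    # first 108 observations
test  <- window(spy, start = c(10, 1))  # last 12 observations

fc_mean   <- meanf(train, h = 12)              # arithmetic mean
fc_naive  <- naive(train, h = 12)              # random walk
fc_snaive <- snaive(train, h = 12)             # seasonal random walk
fc_drift  <- rwf(train, h = 12, drift = TRUE)  # random walk with drift

# accuracy() reports MAE and RMSE (scale-dependent) and MAPE
# (scale-independent), among other metrics, for training and test sets.
accuracy(fc_naive, test)
```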

Next, you’ll define simple moving averages and exponential smoothing methods. For exponential smoothing methods with no trend or seasonal patterns, you’ll define Brown simple exponential smoothing method. For exponential smoothing methods with only trend patterns, you’ll define Holt linear trend, exponential trend, Gardner additive damped trend and Taylor multiplicative damped trend methods. For exponential smoothing methods with trend and seasonal patterns, you’ll define Holt-Winters additive, Holt-Winters multiplicative and Holt-Winters damped methods. After that, you’ll automatically select exponential smoothing method with lowest information loss criteria. Later, you’ll evaluate simple moving average and exponential smoothing methods forecasting accuracy through Hyndman-Koehler scale-independent mean absolute scaled error metric.
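The exponential smoothing family above maps onto forecast package functions roughly as sketched below, again on a synthetic stand-in series; `ets()` performs the automatic selection by minimum information loss criterion.

```r
# Exponential smoothing methods sketch.
library(forecast)

set.seed(1)
spy <- ts(cumsum(rnorm(120, mean = 0.5)) + 100, frequency = 12)

fit_ses  <- ses(spy, h = 12)                        # Brown simple exp. smoothing
fit_holt <- holt(spy, h = 12)                       # Holt linear trend
fit_damp <- holt(spy, h = 12, damped = TRUE)        # Gardner additive damped trend
fit_hw   <- hw(spy, h = 12, seasonal = "additive")  # Holt-Winters additive

# Automatic method selection with the lowest AICc
fit_auto <- ets(spy)

# accuracy() includes MASE, the Hyndman-Koehler scale-independent metric
accuracy(fit_ses)
```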

After that, you'll define Box-Jenkins autoregressive integrated moving average models. Then, you'll identify the model integration order through augmented Dickey-Fuller and Phillips-Perron unit root tests on the level and first-differenced time series. Next, you'll identify the autoregressive and moving average orders through the autocorrelation and partial autocorrelation functions.

For non-seasonal autoregressive integrated moving average models, you'll define random walk, random walk with drift, differentiated first order autoregressive, Brown simple exponential smoothing, simple exponential smoothing with growth, Holt linear trend and Gardner additive damped trend models. For seasonal autoregressive integrated moving average models, you'll define seasonal random walk, seasonal random trend, general seasonal, general first order autoregressive seasonal, seasonally differentiated first order autoregressive and Holt-Winters additive models.

After that, you'll automatically select the autoregressive integrated moving average model with the lowest information loss criterion. Later, you'll evaluate the models' forecasting accuracy and information loss criteria. For information loss criteria, you'll define the Akaike, corrected Akaike and Schwarz Bayesian criteria. Finally, you'll assess whether the residuals (forecasting errors) of the selected model meet the white noise requirement through the Ljung-Box autocorrelation test.
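The ARIMA workflow described above can be sketched end to end with the forecast and tseries packages, on a synthetic stand-in series; the fixed ARIMA(0,1,0)-with-drift specification shown is one example, the random walk with drift.

```r
# Box-Jenkins ARIMA workflow sketch.
library(forecast)
library(tseries)   # adf.test(), pp.test()

set.seed(1)
spy <- ts(cumsum(rnorm(120, mean = 0.5)) + 100, frequency = 12)

# Integration order: unit root tests on level and first-differenced series
adf.test(spy);       pp.test(spy)        # level series
adf.test(diff(spy)); pp.test(diff(spy))  # first-differenced series

# AR and MA orders from the correlograms of the differenced series
Acf(diff(spy))
Pacf(diff(spy))

# A fixed specification: ARIMA(0,1,0) with drift = random walk with drift
fit_rwd <- Arima(spy, order = c(0, 1, 0), include.drift = TRUE)

# Automatic selection with the lowest AICc; AIC/BIC also available
fit_auto <- auto.arima(spy)
AIC(fit_auto); BIC(fit_auto)

# White noise check on the selected model's residuals
Box.test(residuals(fit_auto), lag = 12, type = "Ljung-Box")
```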

Requirements

  • R statistical software is required. Downloading instructions included.
  • RStudio Integrated Development Environment (IDE) is recommended. Downloading instructions included.
  • Practical example data and R script code files provided with the course.
  • Prior basic R software knowledge is useful but not required.

English
Available now

Instructor

Diego Fernandez
Udemy