Kshitij Singh answered on 2025-07-09 10:10:30
% Neural Network Tool Box
% Asked by Bashar Ali on 12 Oct 2012 at 18:14
% Latest activity Edited by Image Analyst on 12 Oct 2012 at 19:30
% Hello everybody
% I am new to neural networks, but I have studied the theory and everything is
% OK. Now I've come to the practical part. I am using the Neural Network
% toolbox in MATLAB, and started using NARX, where x(t) is an Excel file (1 column
% and 3500 rows) and y(t) is also an Excel file (1 column and 3500 rows). I
% import these files, train the system, get the figures, and everything
% is OK. My question is:
% After doing all of the above, how do I use this network to predict the
% value of y for a given x?
I think it is basically as simple as
y = net(x);
except that a NARX net also needs its input and feedback delay states filled, so new data should go through preparets first.
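A minimal sketch (xnew and ynew are assumed new series in the same format as the training data; ynew is only needed to fill the feedback delay states):
Xnew = tonndata(xnew,false,false);
Tnew = tonndata(ynew,false,false);
[xn,xin,ain] = preparets(net,Xnew,{},Tnew); % fill input and feedback delay states
yn = net(xn,xin,ain);                       % open-loop one-step-ahead predictions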
% Here is the resulting script file :
% Solve an Autoregression Problem with External Input with a NARX Neural
% Network Script generated by NTSTOOL
% Created Fri Oct 12 20:48:21 IST 2012
% This script assumes these variables are defined:
% EMA - input time series.
% Close - feedback time series.
close all, clc, plt = 0; % NB: clear all would also erase EMA and Close
>inputSeries = tonndata(EMA,false,false);
>targetSeries = tonndata(Close,false,false);
Not a good name: lower-case close is a MATLAB function (see above).
I'll just use X and T below:
X = EMA'; T = Close'; % 1 x N row vectors (timesteps in columns)
plt = plt+1; figure(plt)
subplot(2,1,1), plot(T), title('Target T')
subplot(2,1,2), plot(X), title('Input X')
% Create a Nonlinear Autoregressive Network with External Input
>inputDelays = 1:2;
Often need 0 for input (but not feedback)
>feedbackDelays = 1:2;
Use xcorr, crosscorr, or a corrected version of nncorr (see below) to obtain the lags at significant peak values.
N = length(T);
ZT = zscore(T);
ZX = zscore(X);
autocorrT = nncorr(ZT,ZT,N-1);
crosscorrXTR = nncorr(ZX,ZT,N-1); % Incorrectly symmetric
crosscorrXTL = nncorr(ZT,ZX,N-1); % Incorrectly symmetric
crosscorrXT = [ crosscorrXTL(1:N-1), crosscorrXTR(N:end) ];
lags = -(N-1):N-1;
Enter code to find POSITIVE lags at significant peaks. (Yes, I could have just used crosscorrXTR, but ... well, you understand.)
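For example, a sketch using an approximate 95% significance level of 1.96/sqrt(N) for the normalized correlations (the threshold choice is my assumption, not from the original):
sigthresh = 1.96/sqrt(N); % approx 95% level for white noise, assumed
siginlags = lags( lags >= 1 & abs(crosscorrXT) >= sigthresh ) % candidate input delays
sigfblags = lags( lags >= 1 & abs(autocorrT) >= sigthresh )   % candidate feedback delays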
>hiddenLayerSize = 10;
Choose an optimal small value ( H <= (N-1)/3 ) by trial and error, e.g.:
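A minimal search sketch (Hmax = 10 candidates and Ntrials = 5 random weight initializations per candidate are my assumptions):
bestR2 = -Inf;
for H = 1:10                                  % Hmax, assumed
    for trial = 1:5                           % Ntrials, assumed
        netH = narxnet(inputDelays,feedbackDelays,H);
        [xs,xi,ai,ts] = preparets(netH,inputSeries,{},targetSeries);
        netH = train(netH,xs,ts,xi,ai);
        ys = netH(xs,xi,ai);
        e  = cell2mat(ts) - cell2mat(ys);
        R2 = 1 - mean(e.^2)/var(cell2mat(ts),1);
        if R2 > bestR2, bestR2 = R2; bestH = H; end
    end
end
bestH, bestR2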
>net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);
% Choose Input and Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
% Customize input parameters at: net.inputs{i}.processParam
% Customize output parameters at: net.outputs{i}.processParam
>net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
>net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};
Unnecessary. These are defaults
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged,
% while easily customizing it for networks with differing numbers of delays,
% with open loop or closed loop feedback modes.
>[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries, {},targetSeries);
% Setup Division of Data for Training, Validation, Testing
% The function DIVIDERAND randomly assigns target values to training,
% validation and test sets during training.
% For a list of all data division functions type: help nndivide
>net.divideFcn = 'dividerand'; % Divide data randomly
NO. This screws up the time series uniform spacing. Use another divide
function. For example:
net.divideFcn = 'divideblock';
% The property DIVIDEMODE set to TIMESTEP means that targets are divided
% into training, validation and test sets according to timesteps.
% For a list of data division modes type: help nntype_data_division_mode
>net.divideMode = 'value'; % Divide up every value
>net.divideParam.trainRatio = 70/100;
>net.divideParam.valRatio = 15/100;
>net.divideParam.testRatio = 15/100;
Unnecessary ...
% Choose a Training Function
% For a list of all training functions type: help nntrain
% Customize training parameters at: net.trainParam
>net.trainFcn = 'trainlm'; % Levenberg-Marquardt
Unnecessary ...
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
% Customize performance parameters at: net.performParam
>net.performFcn = 'mse'; % Mean squared error
Unnecessary...
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
% Customize plot parameters at: net.plotParam
>net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
'ploterrcorr', 'plotinerrcorr'};
Unnecessary ...
% Train the Network
>[net,tr] = train(net,inputs,targets,inputStates,layerStates);
tr = tr % No semicolon to expose results
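Useful fields of tr (the index fields are used further below):
% tr.trainInd, tr.valInd, tr.testInd - timestep indices of the three splits
% tr.best_epoch, tr.best_vperf      - early-stopping summary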
% Test the Network
>outputs = net(inputs,inputStates,layerStates);
>errors = gsubtract(targets,outputs);
>performance = perform(net,targets,outputs)
Normalize to obtain the R^2 statistic:
NMSE = performance/var(T,1)
R2 = 1-NMSE
% Recalculate Training, Validation and Test Performance
>trainTargets = gmultiply(targets,tr.trainMask);
>valTargets = gmultiply(targets,tr.valMask);
>testTargets = gmultiply(targets,tr.testMask);
>trainPerformance = perform(net,trainTargets,outputs)
>valPerformance = perform(net,valTargets,outputs)
>testPerformance = perform(net,testTargets,outputs)
Training MSE should be adjusted to mitigate the bias of evaluating with the same data used for training.
[ I Ntrn ] = size(Xtrn)
[ O Ntrn ] = size(Ttrn)
Ntrneq = Ntrn*O % No. of training equations
ND = length( [ inputDelays, feedbackDelays ] )
Nw = (ND+1)*H + (H+1)*O % No. of estimated weights (assumes I = O = 1)
MSEtrna = Ntrneq*MSEtrn/(Ntrneq-Nw) % (Ntrneq-Nw) decreases with H
NMSEtrna = MSEtrna/var(Ttrn,0)
R2trna = 1-NMSEtrna
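A sketch of where Xtrn, Ttrn, H, and MSEtrn come from (assumes divideblock, so tr.trainInd holds contiguous timesteps):
trnind = tr.trainInd;
Xtrn = cell2mat(inputs(1,trnind)); % external input signal only
Ttrn = cell2mat(targets(trnind));
Ytrn = cell2mat(outputs(trnind));
MSEtrn = mean((Ttrn-Ytrn).^2)
H = hiddenLayerSize;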
% View the Network
>view(net)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotregression(targets,outputs)
%figure, plotresponse(targets,outputs)
%figure, ploterrcorr(errors)
%figure, plotinerrcorr(inputs,errors)
% Closed Loop Network
% Use this network to do multi-step prediction.
% The function CLOSELOOP replaces the feedback input with a direct
% connection from the output layer.
>netc = closeloop(net);
>netc.name = [net.name ' - Closed Loop'];
>view(netc)
>[xc,xic,aic,tc] = preparets(netc,inputSeries,{},targetSeries);
>yc = netc(xc,xic,aic);
>closedLoopPerformance = perform(netc,tc,yc)
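To forecast beyond the known data, rerun netc keeping the final delay states (the 10-step horizon and the recycled xfuture are placeholders, not from the original script):
[yc,xfc,afc] = netc(xc,xic,aic); % capture the final input/layer delay states
xfuture = xc(end-9:end);         % placeholder: substitute genuinely future x(t)
ypred = netc(xfuture,xfc,afc)    % closed-loop multi-step forecast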
% Early Prediction Network
% For some applications it helps to get the prediction a timestep early.
% The original network returns predicted y(t+1) at the same time it is given y(t+1).
ARE YOU SURE? HAVE YOU DEMONSTRATED THIS?
I THOUGHT THIS COULD ONLY HAPPEN IF THE MINIMUM FEEDBACK DELAY IS ZERO.
HOWEVER, MATLAB SHOULD THROW AN ERROR FOR A ZERO FEEDBACK DELAY.
WHEN I HAVE TIME I WILL CHECK.
% For some applications such as decision making, it would help to have predicted
% y(t+1) once y(t) is available, but before the actual y(t+1) occurs.
% The network can be made to return its output a timestep early by removing one delay
% so that its minimal tap delay is now 0 instead of 1. The new network returns the
% same outputs as the original network, but outputs are shifted left one timestep.
>nets = removedelay(net);
>nets.name = [net.name ' - Predict One Step Ahead'];
>view(nets)
>[xs,xis,ais,ts] = preparets(nets,inputSeries,{},targetSeries);
>ys = nets(xs,xis,ais);
>earlyPredictPerformance = perform(nets,ts,ys)
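A quick sanity check (my suggestion): with one delay removed, preparets discards one fewer initial timestep, so ys should be one step longer than the open-loop outputs:
numel(ys) - numel(outputs) % expect 1: outputs shifted one timestep earlier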