% FUNCTION cekf performs the complex-valued EKF algorithm.
%
% Based on the paper by Su Lee Goh, "An augmented Extended Kalman Filter
% algorithm for complex-valued Recurrent Neural Networks", Neural Computation,
% vol. 19, no. 4, 2007.
%
% INPUT:
% input:    input signal, scaled to match the dynamic range of the nonlinearity
% target:   desired response
% beta:     slope of the nonlinearity
% mode:     choice of nonlinearity (1 = logistic, 2 = tanh)
% p:        length of the external input (tap delay) vector
% N:        number of neurons
% KalmanR:  measurement noise covariance (scalar, scales the identity)
% KalmanQ:  process noise covariance (scalar, scales the identity)
% KalmanP:  initial state error covariance (scalar, scales the identity)
% monte:    number of Monte Carlo iterations
%
% OUTPUT:
% y:            output signal
% e:            output error signal
% W:            weight matrix of the adaptive filter
% weightRecord: evolution of the weights of the first neuron
% K:            Kalman gain at the final iteration
% A:            innovation covariance at the final iteration
% OutputVar:    output variance estimate
% P:            state error covariance at the final iteration
%
%
% Complex Valued Nonlinear Adaptive Filtering toolbox for MATLAB
% Supplementary to the book:
%
% "Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models"
% by Danilo P. Mandic and Vanessa Su Lee Goh
%
% (c) Copyright Danilo P. Mandic 2009
% http://www.commsp.ee.ic.ac.uk/~mandic
%
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This program is free software; you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation; either version 2 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You can obtain a copy of the GNU General Public License from
% http://www.gnu.org/copyleft/gpl.html or by writing to
% Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ...........................................
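%
% Example (a minimal usage sketch; the signal and all parameter values below
% are illustrative assumptions, not recommended settings):
%
%   x = 0.5*(randn(1000,1) + 1i*randn(1000,1));           % complex input, scaled for the nonlinearity
%   d = [x(2:end); 0];                                     % one-step-ahead prediction target
%   [y, e, W] = cekf(x, d, 1, 2, 4, 2, 0.01, 1e-4, 1, 1);
%   % i.e. beta = 1, mode = 2 (tanh), p = 4, N = 2,
%   %      KalmanR = 0.01, KalmanQ = 1e-4, KalmanP = 1, monte = 1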
function [y, e, W, weightRecord, K, A, OutputVar, P] = cekf(input, target, beta, mode, p, N, KalmanR, KalmanQ, KalmanP, monte)

pp = length(input);                                         % number of input samples

target = target + KalmanR*(randn(size(target)) + i*randn(size(target)));   % noisy network target vector

s = zeros(p,1) + i*zeros(p,1);                              % external input (tap delay) vector

% Weight matrix and Kalman filter quantities
W = eye(p+N+1,N) + i*eye(p+N+1,N);                          % weight matrix
PI_prev = zeros(N*N, p+N+1) + i*zeros(N*N, p+N+1);          % Jacobian matrix, previous step
PI_curr = zeros(N*N, p+N+1) + i*zeros(N*N, p+N+1);          % Jacobian matrix, current step
K = zeros(N*N, p+N+1) + i*zeros(N*N, p+N+1);                % Kalman gain
P = KalmanP*(eye(p+N+1,p+N+1) + i*eye(p+N+1,p+N+1));        % state error covariance
R = KalmanR*(eye(N*N) + i*eye(N*N));                        % measurement noise covariance
Q = KalmanQ*(eye(p+N+1,p+N+1) + i*eye(p+N+1,p+N+1));        % process noise covariance

weightRecord = zeros(p+N+1,pp) + i*zeros(p+N+1,pp);
P_record = zeros(p+N+1,p+N+1,pp) + i*zeros(p+N+1,p+N+1,pp);
OutputVar = zeros(N*N,pp) + i*zeros(N*N,pp);
E = zeros(pp,1) + i*zeros(pp,1);
MSE = zeros(pp,1) + i*zeros(pp,1);

% Output vectors from the neurons
Y_curr = zeros(pp, N) + i*zeros(pp, N);

U = zeros(p+N+1, 1) + i*zeros(p+N+1, 1);                    % input to the network (external, feedback and bias)
V = zeros(1,N) + i*zeros(1,N);                              % weighted input vector to a neuron
Errors = 0.001*(randn(pp, 1) + i*randn(pp, 1));
Phi = zeros(1, N) + i*zeros(1,N);

for Monte = 1:monte
    for k = 2:pp
        s = [input(k-1); s(1:p-1)];
        U(:) = [s; 1; Y_curr(k-1,:).'];                     % input vector to the network
        V = U.'*W;

        switch (mode)
            case 1
                Y_curr(k,:) = 1./(1+exp(-beta*V));          % logistic nonlinearity (logsig)
            case 2
                Y_curr(k,:) = tanh(beta*V);                 % hyperbolic tangent nonlinearity (tansig)
        end

        Errors(k) = target(k) - Y_curr(k,1);                % error against the noisy observation
        Errors_ori(k) = input(k) - Y_curr(k,1);             % error against the clean input

        % Derivative of the nonlinearity
        switch (mode)
            case 1
                Phi = beta*(1./(1+exp(-beta*V))).*(1 - 1./(1+exp(-beta*V)));
            case 2
                Phi = 1 - (tanh(beta*V)).^2;
        end

        % Stack the N Jacobian matrices into one big PI of size (N*N) x (p+N+1)
        for j = 1:N
            for n = 1:N
                for l = 1:p+1+N
                    temp = 0;
                    for m = 1:N
                        temp = temp + conj(W(1+p+m, j))*PI_prev((m-1)*N+n, l);
                    end
                    wtemp(n,l) = temp;
                    if n == j
                        wtemp(n,l) = wtemp(n,l) + conj(U(l));
                    end
                    PI_curr((j-1)*N+n, l) = conj(Phi(j))*wtemp(n,l);
                end
            end
        end
        PI_prev = PI_curr;

        A = R + conj(PI_curr)*(P+Q)*(PI_curr.');            % innovation covariance (danger of singularity when inverted)
        K = (P+Q)*(PI_curr.')*(A^(-1));                     % Kalman gain
        W = W + K(:,1:N)*Errors(k);                         % weight (state) update
        P = P - K*conj(PI_curr)*(P+Q) + Q;                  % error covariance update

        OutputVar(:,k) = diag(R + conj(PI_curr)*P*(PI_curr.'));
        weightRecord(:,k) = W(:,1);
        P_record(:,:,k) = P;
    end

    E = E + (1/2)*abs(Errors).^2;
    MSE = MSE + 10*log10((1/2)*abs(Errors).^2);
    var_error(Monte) = var(Errors(1:end));
    var_signal(Monte) = var(input(1:end));
    Rp(Monte) = 10*log10(var_signal(Monte)/var_error(Monte));
end

y = Y_curr(:,1);
e = Errors;

errorekf = norm(y(:) - input(:));                           % overall prediction error
fprintf('EKF error = %6.2f\n', errorekf)
fprintf('Rp = %6.2f\n', mean(Rp))

% PLOT RESULTS:
% ============

figure(1)
plot(1:length(target),abs(target),'k+:',1:length(y),abs(y),'r',1:length(input),abs(input),'b');
ylabel('Prediction','fontsize',15)
%axis([0 pp -1.5 1.5]);

figure(2)
plot(1:length(OutputVar),mean(abs(OutputVar)),'m');
ylabel('Output variance','fontsize',15);
%axis([0 pp 1.4 2]);

figure(3)
plot(E)
save Ewind_ekf E

figure(4)
plot(MSE)
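% The accumulated per-sample squared error E is saved to Ewind_ekf.mat above.
% A small post-processing sketch (the dB view below is one illustrative way to
% read the saved curve, not part of the toolbox):
%
%   S = load('Ewind_ekf');                    % struct with field E = accumulated (1/2)|error|^2
%   figure; plot(10*log10(abs(S.E)));         % accumulated learning curve in dB
%   xlabel('sample','fontsize',15); ylabel('Accumulated error (dB)','fontsize',15);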