function [ts, Y, Z, thm, thp] = run_network(ThNN,ti,MaxSimTime,GradFlag,Verbose)
%RUN_NETWORK calculates the firing behavior of a network of theta neurons
%
%Description:
%Calculates the firing behavior of a network of theta neurons as described
%by ThNN. The function can handle recurrency as well as multiple spikes
%per synapse (MIMO). Gradient outputs are calculated if desired.
%
%Syntax:
%RUN_NETWORK(ThNN, ti);
%RUN_NETWORK(..., [MaxSimTime], [GradFlag], [Verbose]);
%[ts, Y, Z] = RUN_NETWORK(...);
%[ts, Y, Z, thm, thp] = RUN_NETWORK(...);
%
%Input Parameters:
%o ThNN: An object of the theta neuron network class
%o ti: A qx2 array that contains the neuron indices (globally indexed to
%   the network) for each spike time (column 1) and the input spike times
%   (column 2). ti may be empty, which in most cases will result in a
%   non-firing network.
%o MaxSimTime: Optional scalar that indicates the maximum time to which the
%   simulation should be run. For cases where Io is positive or recurrency
%   creates an unbounded number of output spikes, MaxSimTime prevents the
%   simulation from running forever. The default value is 1000.
%o GradFlag: Optional boolean flag that indicates whether the gradient
%   matrices Y and Z should be calculated. If the flag is 0, then -1 is
%   returned for Y and Z. The default value is 0.
%o Verbose: Optional flag that indicates whether extra information is
%   displayed on the screen. A value of 0 displays no additional
%   information (this is the default value), while a value of 1 displays
%   all information. Values greater than 1 display partial information.
%   See Verbose for more details.
%
%Output Parameters:
%o ts: A cell array of length NumNeurons containing in each cell an array
%   of length r_j of neuron j's output spike times.
%o Y: A cell array of length NumNeurons containing in each cell an nxr
%   matrix of the effect of each synapse n on output spike r for the
%   neuron indexed by the cell.
%o Z: A cell array of length NumNeurons containing in each cell a qxr
%   matrix of the effect of each input spike q on output spike r for the
%   neuron indexed by the cell.
%o thm: A cell array of length NumNeurons containing in each cell a qx1
%   array of phase values directly before each input spike.
%o thp: A cell array of length NumNeurons containing in each cell a qx1
%   array of phase values directly after each input spike.
%
%Examples:
%>> %Multiple Inputs per Synapse Example
%>> ThNN=theta_neuron_network;
%>> ts = run_network(ThNN, [2 4; 2 18]);
%
%>> %Positive Baseline Current Example
%>> ThNN=theta_neuron_network('Io',0.05);
%>> [ts, Y, Z, thm, thp]=run_network(ThNN, [2 8; 2 50], 99, 1, 1);
%
%>> %Recurrency Example
%>> ThNN=theta_neuron_network('ReferenceTime', -1, 'StructureFormat', ...
%   {'ConnectionMatrix', [0 1;1 0]}, 'Io', [0.05 -0.0005]);
%>> [ts, Y, Z, thm, thp]=run_network(ThNN, [], 500, 1);
%
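%>> %Gradient-Free Example (illustrative sketch: a GradFlag of 0 skips the
%>> %Y and Z calculation, and a MaxSimTime of 200 caps the simulation time)
%>> ThNN=theta_neuron_network;
%>> ts=run_network(ThNN, [2 4; 2 18], 200, 0);
%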
%See also theta_neuron_network, verbose

%Copyright (C) 2008 Sam McKennoch <Samuel.McKennoch@loria.fr>


if nargin<2
    disp('Error in run_network: Not enough input arguments');
    disp(['Needed at least 2 inputs but only got ' num2str(nargin)]);
    return;
end

if (nargin < 3), MaxSimTime=1000; end
if (nargin < 4), GradFlag=0; end
if (nargin < 5), Verbose=0; end

if (Verbose==1)
    disp(' ');
    disp('********** Function "run" for a theta neuron network **********');
end


%Initialize Outputs
NumNeurons=length(ThNN.Neurons);
ts=cell(NumNeurons,1);
Y=cell(NumNeurons,1);
Z=cell(NumNeurons,1);
thm=cell(NumNeurons,1);
thp=cell(NumNeurons,1);
PreviousInputs=cell(NumNeurons,1);
RefTime=ThNN.ReferenceTime;

%Reformat the input spike times from the qx2 ti array into per-neuron cell format
if RefTime>=0
    ts{1}=RefTime;
end
for j=1:size(ti,1)
    ts{ti(j,1)}=[ts{ti(j,1)}, ti(j,2)];
end

%Generate the list of neurons that need to be processed, based on either
%neurons that have no inputs (the input neurons) or neurons with pending spikes
if ThNN.RecursionFlag
    %Generate Input Neurons
    NeuronReady=ThNN.InputNeurons;
    NeuronQueue=[];
    if ~isempty(ti)
        %Inline unique: equivalent to NeuronInputs=unique(ti(:,1));
        Pos=2;
        NeuronInputs=sort(ti(:,1));
        for j=2:length(NeuronInputs)
            if NeuronInputs(j)~=NeuronInputs(j-1)
                NeuronInputs(Pos)=NeuronInputs(j);
                Pos=Pos+1;
            end
        end
        NeuronInputs=NeuronInputs(1:(Pos-1));

        for j=1:length(NeuronInputs)
            NeuronQueue=push_unique(NeuronQueue, ThNN.RelativeOutputNeurons{NeuronInputs(j)}');
        end
    else
        %Get the list of all non-input neurons with positive baseline currents, or
        %neurons with negative baseline currents whose phase is in the spiking region
        for j=1:NumNeurons
            if ThNN.Neurons(j).Io>0 && ~ismember(j,NeuronReady)
                NeuronQueue=push(NeuronQueue, j);
            elseif ThNN.Neurons(j).Io<0 && ~ismember(j,NeuronReady) && ...
                    ThNN.Neurons(j).Phase>ThNN.Neurons(j).PositiveFixedPoint && ...
                    ThNN.Neurons(j).Phase<pi
                NeuronQueue=push(NeuronQueue, j);
            end
        end
    end
else
    NeuronQueue=ThNN.NeuronQueue;
    NeuronReady=NeuronQueue;
end
AddSpike=zeros(NumNeurons,1);
Pos=1;
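%Overview of the processing loop below (descriptive summary): neurons are
%taken from NeuronQueue one at a time. For recursive networks, each processed
%neuron's downstream neurons (RelativeOutputNeurons) are pushed onto the queue
%so that new output spikes propagate, and the loop runs until the queue is
%empty, the recursion settles to a steady state, or spikes exceed MaxSimTime.
%For feedforward networks, the pre-computed NeuronQueue is simply walked in
%order using the Pos index.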

while ~isempty(NeuronQueue)
    CurrentNeuron=NeuronQueue(Pos);
    CurrentInputNeurons=find(ThNN.Weights(:,CurrentNeuron)~=0);
    if (Verbose==1)
        disp(' ');
        disp('*****************************************************');
        disp('*****************************************************');
        disp(['--->Calculating Neuron ',num2str(CurrentNeuron)]);
        disp(['NeuronQueue: ' num2str(NeuronQueue')]);
        disp(['NeuronReady: ' num2str(NeuronReady')]);
        pause(0.01);
    end


    %Determine if the neuron is ready to be processed...
    %Also need to verify that there are no dependencies that would require
    %certain calculations to wait or to be recalculated (recurrency)
    %Inline version of NeuronGo=min(ismember(CurrentInputNeurons,NeuronReady));
    NeuronGo=1;
    %The readiness check is not needed without recursion (the feedforward execution order is pre-determined)
    if ThNN.RecursionFlag
        for j=1:length(CurrentInputNeurons)
            if ~(max(CurrentInputNeurons(j)==NeuronReady))
                NeuronGo=0;
                break;
            end
        end
    end

    %Gather the current neuron's inputs from the per-neuron ts format back
    %into ti format (neuron index in column 1, spike time in column 2)

    %Note: the readiness check is currently bypassed (NeuronGo is computed
    %above but not used); neurons are re-simulated as further input spikes
    %arrive on later passes through the queue
    if 1 %(NeuronGo)
        CurrentInputs=[];
        for k=1:length(CurrentInputNeurons)
            Temp=[CurrentInputNeurons(k)*ones(1, length(ts{CurrentInputNeurons(k)})); ts{CurrentInputNeurons(k)}]';
            CurrentInputs=[CurrentInputs; Temp];
        end
        if ~isempty(CurrentInputs) && ~isempty(find(CurrentInputs(:,2)<0,1))
            disp(['Error in run_network!!! Negative input spike time for neuron ', num2str(CurrentNeuron) ,'!!!']);
            return;
        end

        %For recursive networks, check whether the current inputs are identical to
        %those used on the previous pass; if so, the recursion has settled
        if ThNN.RecursionFlag && ThNN.Neurons(CurrentNeuron).Io<=0 ...
                && size(PreviousInputs{CurrentNeuron},1)==size(CurrentInputs,1) && ~isempty(PreviousInputs{CurrentNeuron}) ...
                && ~isempty(CurrentInputs) && sum((PreviousInputs{CurrentNeuron}(:,2)-CurrentInputs(:,2)).^2) < 1e-10
            if (Verbose==1)
                disp('---> Breaking Simulation as Recursion has Settled to Steady-State');
            end
            break;
        else
            PreviousInputs{CurrentNeuron}=CurrentInputs;
        end

        tsOld=ts{CurrentNeuron};
        [ts{CurrentNeuron}, ThetaTemp, Y{CurrentNeuron}, Z{CurrentNeuron}, thm{CurrentNeuron}, thp{CurrentNeuron}] = run_networked_neuron(ThNN, CurrentInputs, CurrentNeuron, GradFlag, Verbose);
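        %Note on the block below (descriptive summary): a neuron with a positive
        %baseline current Io fires periodically even without new input spikes, so
        %when re-simulating the neuron yields no new output spikes, additional
        %cyclic spikes spaced one period apart are appended, presumably so that
        %downstream neurons continue to receive this input on later passes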
        %Add a spike when Io>0
        if ~isempty(tsOld) && ThNN.Neurons(CurrentNeuron).Io>0 ...
                && ((length(tsOld)==length(ts{CurrentNeuron}) && (min(tsOld==ts{CurrentNeuron})==1)) || (length(tsOld)>length(ts{CurrentNeuron})))
            Period=pi/ThNN.Neurons(CurrentNeuron).Beta;
            %No Change (Add Another Cyclic Spike)
            if size(PreviousInputs{CurrentNeuron},1)==size(CurrentInputs,1) && ~isempty(PreviousInputs{CurrentNeuron}) ...
                    && ~isempty(CurrentInputs) && sum((PreviousInputs{CurrentNeuron}(:,2)-CurrentInputs(:,2)).^2) < 1e-10
                AddSpike(CurrentNeuron)=AddSpike(CurrentNeuron)+1;
            %Only add 1 new cyclic spike, since this is a new situation
            else
                AddSpike(CurrentNeuron)=1;
            end
            if (Verbose==1)
                disp(['---> Adding ' num2str(AddSpike(CurrentNeuron)) ' Cyclic Spike(s)']);
            end
            for j=1:AddSpike(CurrentNeuron)
                ts{CurrentNeuron}(end+1)=ts{CurrentNeuron}(end)+Period;
                if GradFlag
                    Z{CurrentNeuron}(:,end+1)=Z{CurrentNeuron}(:,end);
                    Y{CurrentNeuron}(:,end+1)=Y{CurrentNeuron}(:,end);
                end
            end

        end
        if ~isempty(tsOld) && ThNN.Neurons(CurrentNeuron).Io>0 && (length(tsOld)==length(ts{CurrentNeuron})) && (min(tsOld==ts{CurrentNeuron})==0)
            AddSpike(CurrentNeuron)=0;
        end
        if (Verbose==1)
            disp(['---> Output Spike Times: ' num2str(ts{CurrentNeuron})]);
        end
        if ~isempty(find(ts{CurrentNeuron}>MaxSimTime, 1))
            if (Verbose==1)
                disp('---> Breaking Simulation at MaxSimTime, Most Likely Due to Recursion');
                disp('---> Removing Output Spikes that Exceeded MaxSimTime');
            end
            ts{CurrentNeuron}=ts{CurrentNeuron}(find(ts{CurrentNeuron}<=MaxSimTime));
            if GradFlag
                Z{CurrentNeuron}=Z{CurrentNeuron}(:,1:length(ts{CurrentNeuron}));
                Y{CurrentNeuron}=Y{CurrentNeuron}(:,1:length(ts{CurrentNeuron}));
            end
            break;
        end

        %Update the queue based on the new results (add neurons connected to the
        %output of the current neuron that are not already in the queue)
        if ThNN.RecursionFlag
            NeuronQueue=pop(NeuronQueue);
            NeuronQueue=push_unique(NeuronQueue,ThNN.RelativeOutputNeurons{CurrentNeuron}');
            NeuronReady=push_unique(NeuronReady,CurrentNeuron);
        else
            Pos=Pos+1;
            if Pos>length(NeuronQueue)
                break;
            end
        end
    else
        %Put CurrentNeuron at the bottom of the queue and try again later...
        NeuronQueue=pop_push(NeuronQueue);
        if (Verbose==1)
            disp(['--->Neuron ',num2str(CurrentNeuron), ' is not yet ready. Going to next neuron...']);
        end
    end
end

if (Verbose==1)
    disp('******** End Function "run" for a theta neuron network ********');
    disp(' ');
end

return;
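
%Illustrative usage sketch (added example; variable names are arbitrary and
%the plotting choices are only one possibility): the per-neuron spike times
%returned in ts can be displayed as a simple raster plot.
%>> ThNN = theta_neuron_network;
%>> ts = run_network(ThNN, [2 4; 2 18]);
%>> figure; hold on;
%>> for j = 1:length(ts)
%>>     plot(ts{j}, j*ones(size(ts{j})), 'k.');
%>> end
%>> xlabel('Time'); ylabel('Neuron index');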