Published in: J. Neurosci. Methods (1992) 43: 83-108.
Figure 1
To explore the contributions of the myriad elements in such a network requires simulating details of biophysics, such as membrane ion channels, synaptic transfer functions, and cell morphology. The theory of compartmental modeling has developed over the past 25 years precisely to address this problem at the single-neuron level (Cooley and Dodge, 1966; Rall, 1964, 1967; Pellionisz and Llinas, 1977; Joyner et al., 1978; Carnevale and Lebeda, 1987; Segev et al., 1989). Recently, Hines (1989, 1991) presented a neuron simulation program in which branching structures and various types of membrane conductance are described to an interpreter for translation into difference equations. It is well suited to studying one complex branching neuron or a few simple ones.
However, to explore the contribution of any given microscopic property to a large neural network requires that the detailed biophysical properties be simulated on a much larger scale. The goal of simulating such a network creates a new problem because the description of the network must be specified at many different levels. The problem is how to decide what level of detail is appropriate for a particular network and how to describe and organize this detail in a way that is comprehensible.
To address this problem, I have developed a simulation language, called "NeuronC". It employs the basic approach developed by Hines (1989, 1991) and earlier workers (Cooley and Dodge, 1966; Joyner et al., 1978; Smith et al., 1984) in using a variation of the Crank-Nicolson method of numerical integration. However, NeuronC contains several special features: 1) it can construct and run models as large as 50,000 compartments using a conventional workstation, with run times on the order of hours (proportionately less for smaller models); 2) the model can be constructed hierarchically as a set of "modules", which allows the parameters and connections of a module or of an entire neural circuit to be comprehended and easily altered; 3) it translates a macroscopic stimulus (such as a spot or bar of light) into the microscopic events (such as photons) that neurons actually register.
Before one can comprehend a large-scale computational simulation of a neural circuit with all of its biological detail and assumptions, one requires a full description of the computational methods. This paper describes the computational assumptions, methods and their consequences for a typical large-scale simulation using NeuronC. The METHODS section contains details about the NeuronC language (description of neural elements, corresponding data structures, and numerical computations). The RESULTS section evaluates the language, presenting "higher-level" issues and possible solutions, i.e. how to use the language to construct and run a typical large-scale neural circuit simulation.
Figure 2
Table 1. Features of the NeuronC language

Neural elements:    # of nodes   Usage
  cable                 2        Dendritic cable with electrotonic decay
  sphere                1        Isopotential sphere: cell body
  synapse               2        Chemical synapse between two nodes
  gap junction          2        Series resistance, calibrated by membrane area
  series resistor       2        Series resistance, calibrated in Ohms
  load resistor         1        Parallel resistance to ground
  gnd capacitor         1        Parallel capacitance to ground
  capacitor             2        Series capacitor
  battery               2        Voltage clamp
  rod                   1        Transduction model of rod
  cone                  1        Transduction model of cone
  channel               1        Voltage-sensitive channels in membrane
  voltage buffer        2        Voltage follower (similar to op. amp.)

Descriptor terms:
  length, diameter       Describe stimulus, cables, rods/cones
  rm, ri, rg             Membrane, axial, gap-junction resistance
  reversal potential     Battery in series with synaptic or cable conductance
  resting potential      Initial resting potential at start
  threshold potential    Threshold for control of synaptic conductance
  conductance            Maximum conductance for synapses and photoreceptors
  noise                  Vesicle and channel noise for synapse
  location               (x,y) location of receptor
  bar, spot, sine wave   Type of light stimulus
  intensity              Intensity of light stimulus
  wavelength             Wavelength of light stimulus
  start, duration        Timing of stimulus
  blur                   Stimulus point-spread function diameter
  scatter                Scattered-light function
  pigment                Photosensitive cone or rod pigment

Simulation actions:
  stimulus   Light, voltage-clamp, or current-clamp stimulus
  record     Function which returns voltage at a node
  plot       Make continuous plot of a node during run
  graph      Graph any variables at end of step or run
  modify     Change value of a neural-element parameter
  step       Run for a short time and stop to allow graphing
  run        Run simulation continuously for high speed

Standard programming-language statements/features:
  if, else, for, foreach, while
  block definition {}
  assignment
  define procedure, function
  call procedure, function
  formal parameters in procedure, function
  local variables in procedure, function
  return, break, continue
  formatted print
  arithmetic and logical operators
  increment operators
  numeric expressions
  trig, log, exponential, power, sqrt functions
  predefined constants (pi, e, etc.)
  include file
  edit file
  run a system command
  automatic variable definition, local variables
  dynamically allocated, multidimensional arrays
  read array from file

Figure 3
The compartments and their resistive interconnections are created from the neural circuit description by a cable-segmentation algorithm. This algorithm partitions each cable into multiple compartments of equal size based on a criterion length. Ends of cable segments are partitioned into half-sized compartments so that 2 cable segments join at a full-size common compartment (Hines, 1989; Figure 4). The criterion length for a cable segment partition is calculated individually for each cable as a small fraction (default 10%) of the space constant (lambda) for that cable. This method approximates a continuous cable model reasonably well in terms of amplitude and waveshape (Perkel and Mulloney, 1978). Accuracy can be evaluated by using successively smaller compartments until no further change in amplitude or waveshape results (Perkel et al., 1981).
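As an illustration of the segmentation criterion, the following C fragment computes the number of compartments for one cable from the standard infinite-cable expression for the space constant. It is a minimal sketch with illustrative variable names, not NeuronC internals:

#include <math.h>

/* Number of compartments for one cable segment, using a criterion
   length equal to a fraction (default 0.1) of the cable's space
   constant.  rm [Ohm-cm2], ri [Ohm-cm], dia and len [um]. */
int n_compartments(double len_um, double dia_um, double rm, double ri,
                   double crit_frac)
{
    double dia_cm = dia_um * 1e-4;
    double len_cm = len_um * 1e-4;
    /* infinite-cable space constant: lambda = sqrt(rm * d / (4 * ri)) */
    double lambda = sqrt(rm * dia_cm / (4.0 * ri));      /* cm */
    int n = (int)ceil(len_cm / (crit_frac * lambda));
    return n < 1 ? 1 : n;
}

For example, a 2 um cable with Rm = 5000 Ohm-cm2 and Ri = 100 Ohm-cm has lambda = 500 um, so a 200 um segment partitions into 4 compartments at the default 10% criterion.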
Figure 4
Each compartment is represented by a data structure that specifies the current state of numerical computations simulating the compartment. Compartment state variables include membrane conductance, capacitance, voltage and convergence estimates, and a pointer to a list of connections to other compartments. Each connection is represented by a data structure that contains a conductance value (axial resistance) and pointers to the 2 compartments it connects. A compartment may have as many (low-level) connections as necessary to represent the corresponding branching pattern in the neural circuit.
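A minimal sketch of such data structures in C (field names are assumptions for illustration, not NeuronC's actual source):

struct conn;                     /* forward declaration */

struct connlist {                /* linked list of connections */
    struct conn *c;
    struct connlist *next;
};

struct comp {                    /* one compartment */
    double v;                    /* membrane voltage (V) */
    double g;                    /* total membrane conductance (S) */
    double cap;                  /* membrane capacitance (F) */
    double vest;                 /* latest convergence estimate (V) */
    struct connlist *conns;      /* connections to other compartments */
};

struct conn {                    /* low-level resistive connection */
    double g;                    /* axial conductance (S) */
    struct comp *c1, *c2;        /* the 2 compartments it connects */
};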
Figure 5
Binomial noise (originating in vesicular neurotransmitter release and channel activation) can be specified by defining the number of vesicle release sites or channels. For static simulations, the synaptic time functions are removed and the synapse runs as an instantaneous transfer function. Electrical synapses (gap junctions) are represented in NeuronC as linear conductances, although their conductance may be varied at runtime (see below).
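A binomial deviate of this kind can be generated by direct summation of Bernoulli trials, as in this sketch (adequate for the modest N used here; a hypothetical helper, not NeuronC source; a function of this kind appears as binom(p, N) in the synaptic noise equations of the Appendix):

#include <stdlib.h>

/* Number of successes in n independent trials of probability p:
   e.g. vesicles released from n release sites, or channels open
   out of n channels.  Seed with srand() for repeatable noise. */
int binom(double p, int n)
{
    int i, k = 0;
    for (i = 0; i < n; i++)
        if (rand() / (RAND_MAX + 1.0) < p)
            k++;
    return k;
}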
Figure 6
Channels may be joined to other neural elements at nodes, or specified as a membrane property (conductance density). They allow the simulation of many types of nonlinear effects, including action potentials and regenerative feedback. Several channel types may be specified together for a sphere or cable membrane in order to match published physiological records (e.g. see Fohlmeister et al., 1990; Traub and Llinas, 1979). A new channel type can be added to NeuronC by adding a set of states and transition functions to the source code and recompiling. Channel kinetics may be altered by modifying the number of states or the allowable transitions between states, or by altering the transition functions. The transition functions for a channel may be modified at runtime (overriding the default functions) by writing a function in NeuronC to calculate alpha and beta rate constants as a function of voltage, in the manner of Hodgkin and Huxley (1952).
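As an example of such a rate function, the classic Hodgkin-Huxley (1952) rate constants for the sodium activation gate (m) could be expressed as follows (shown here in C for concreteness; the guard against 0/0 at v = -40 mV is a standard numerical precaution):

#include <math.h>

/* Hodgkin-Huxley (1952) sodium activation rates; v in mV, rates in 1/ms */
double alpha_m(double v)
{
    double u = v + 40.0;
    if (fabs(u) < 1e-6)                /* limit of 0/0 at v = -40 mV */
        return 1.0;
    return 0.1 * u / (1.0 - exp(-u / 10.0));
}

double beta_m(double v)
{
    return 4.0 * exp(-(v + 65.0) / 18.0);
}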
Figure 7
NeuronC can plot voltage, current, or light flux from any point in a neural circuit as a function of time (Figure 7), and in addition can parametrically graph any 2 variables (one as a function of the other). For example, to observe how a neuron's input resistance varies, one can direct NeuronC to record voltage before and after a current step, calculate input resistance at a node in the circuit, and graph the node's input resistance as a function of time. I/V plots may be graphed in a similar fashion using a programmed sequence of voltage or current steps with associated current or voltage records.
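For example, with a current step of amplitude Istep injected at the node at time t0, the plotted quantity would simply be

  Rin(t) = (V(t) - V(t0)) / Istep        for t > t0,

computed and graphed by a few NeuronC statements.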
Figure 8
In addition, an "electrode" may be placed at any node to clamp voltage or current, and simultaneously, current or voltage respectively, can be recorded. Number of recording or stimulating electrodes is not limited.
A light stimulus such as a spot or bar of light is defined in NeuronC by a statement that specifies the optical parameters as well as start time and duration. From this information, contained in a sequence of stimulus statements, NeuronC creates an internal linked list of photoreceptor stimulus events (low-level), which initiate stimulus actions during a simulation run. A stimulus statement normally specifies intensity as an increment above a background. This feature allows several stimuli to be spatially summed, so complex stimuli can be fashioned from a collection of simple stimuli. When a stimulus event is processed during the simulation, the stimulus wavelength defines which photopigment lookup table to use for calculating a light response. Stimuli may be white or monochromatic (wavelength 380-800 nm), and these may be combined for more complex hues.
To simulate optical blur, one can specify a stimulus blurred by a Gaussian point-spread function. In this case, a separate program called "Stim" (which interprets the same syntax as NeuronC) is used to "generate" the stimulus. The effect of light scatter can also be simulated (see Robson and Enroth-Cugell, 1978). The Stim program calculates light intensity individually for each photoreceptor defined in the neural microcircuit by convolving the optical stimulus with the blur function. Additional calculations are done whenever the stimulus intensity changes. The result is saved in a "stimulus file", which contains a list of stimulus events for individual photoreceptors (Figure 9). In order to save computation, the stimulus file may be used for many different simulation runs but need only be calculated once.
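A direct numerical convolution of this kind might look like the following sketch, which evaluates the blurred intensity of a uniform circular spot at one photoreceptor position (the names and grid-summation approach are illustrative assumptions, not the Stim source):

#include <math.h>

/* Intensity at photoreceptor (px,py) for a circular spot of given
   center, radius and intensity, convolved with a Gaussian
   point-spread function of s.d. sigma.  dx is the grid step (um). */
double blurred_intensity(double px, double py,
                         double cx, double cy, double radius,
                         double inten, double sigma, double dx)
{
    double sum = 0.0, norm = 2.0 * M_PI * sigma * sigma;
    double ext = 4.0 * sigma;          /* integrate out to 4 sigma */
    double x, y;
    for (x = px - ext; x <= px + ext; x += dx)
        for (y = py - ext; y <= py + ext; y += dx) {
            double sx = x - cx, sy = y - cy;
            if (sx * sx + sy * sy > radius * radius)
                continue;              /* point outside the spot */
            sum += inten * dx * dx *
                   exp(-((x-px)*(x-px) + (y-py)*(y-py)) /
                        (2.0 * sigma * sigma)) / norm;
        }
    return sum;
}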
Figure 9
To simulate photon noise, quantal absorptions in a photoreceptor during one time step can be made a random function of the instantaneous flux with Poisson statistics. A pseudo-random number sequence generates photon arrival times, and the random sequence is determined by the numeric value of a "seed" which may be changed to produce different random sequences (Press et al., 1988). By specifying the same seed, it is possible to "re-run" a simulation with identical photon noise.
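A reproducible Poisson deviate can be generated with Knuth's multiplication method, sketched here (Press et al., 1988, describe better random-number generators; rand()/srand() are used below only for brevity):

#include <math.h>
#include <stdlib.h>

/* Photons absorbed in one time step, with mean = flux * dt.
   Calling srand(seed) once makes the photon sequence repeatable. */
int poisson_photons(double mean)
{
    double limit = exp(-mean), p = 1.0;
    int k = 0;
    do {
        k++;
        p *= rand() / (RAND_MAX + 1.0);   /* uniform (0,1) deviate */
    } while (p > limit);
    return k - 1;
}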
The "implicit" method of solving simultaneous difference equations ("backward-Euler" method) is included in NeuronC because it has the best mathematical stability (Joyner, et al., 1978; Hines, 1989; Mascagni, 1989). Stability in this context means that any combination of time step and compartment size (also related to coupling between compartments) may be used without generating the spurious oscillations that can occur with "explicit" solution methods (Conte and de Boor, 1980; Mascagni, 1989; Press et al.. 1988). NeuronC achieves stability (with the Crank-Nicholson and "implicit" methods) by evaluating the instantaneous effect of a compartment's voltage change on the voltages of all other compartments resistively connected with it (see below). Two compartments connected by a chemical synapse are not resistively coupled, because a NeuronC chemical synapse normally includes a temporal delay. This delay in coupling increases stability of computation (Hines, 1984).
To solve the equations generated by the implicit integration methods, NeuronC uses a "relaxation" method, a variation of Gauss-Seidel iteration (Conte and de Boor, 1980). In this technique, the voltage in each compartment during one time step is repeatedly calculated as a function of its neighbors' voltages until the newly calculated voltage converges to within a predefined criterion voltage.
Other methods for solving the matrices generated from neural cables, such as Gaussian elimination, may be faster than the "relaxation" method in some cases when the circuit model lacks resistive loops (Hines, 1984; 1989). However, these methods are impractical for retinal networks, which have many resistive feedback loops caused by gap junctions. The relaxation method I have implemented is fast for reasonably small compartment sizes (0.1-0.2 lambda), resistant to instability due to "round-off error", and compatible with any network geometry.
In outline, my version of the Crank-Nicolson algorithm is:
1) For all compartments, calculate:

   Vnew = Vold + dt/c * ((Inew + Iold) / 2) * implf          (1)

   where:  Vnew  = voltage at end of time step
           Vold  = voltage at beginning of time step
           Inew  = total current in compartment at end of time step
                   (a function of Vnew)
           Iold  = total current in compartment at beginning of time step
           implf = implicit factor (function of dt, c, and total
                   compartment conductance)
           dt    = time step
           c     = membrane capacitance of compartment

   References: Cooley and Dodge, 1966; Joyner et al., 1978; Crank and
   Nicolson, 1947; Hines, 1984, 1989.

2) Repeat step 1, recalculating Inew for all compartments, until Vnew converges with a change (from its previous value) less than a preset criterion voltage. The iterations are started by assuming that Inew equals Iold (Euler's method).
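The following C sketch shows one such time step for an unbranched chain of compartments, using the Gauss-Seidel refinement described above (no over-relaxation, a single criterion, and illustrative names; because each compartment's Inew uses the latest neighbor estimates, the fixed point of the iteration is the Crank-Nicolson solution):

#include <math.h>

#define NCOMP 100

/* One time step.  v[] = compartment voltages; gax[i] couples
   compartment i to i+1; gm, cm = membrane conductance and
   capacitance per compartment; vrest = resting battery. */
void timestep(double v[NCOMP], const double gax[NCOMP],
              double gm, double cm, double vrest,
              double dt, double crit)
{
    double iold[NCOMP], vold[NCOMP];
    int i, converged;

    for (i = 0; i < NCOMP; i++) {      /* currents at start of step */
        double icur = gm * (vrest - v[i]);
        if (i > 0)         icur += gax[i-1] * (v[i-1] - v[i]);
        if (i < NCOMP - 1) icur += gax[i]   * (v[i+1] - v[i]);
        iold[i] = icur;
        vold[i] = v[i];
    }
    do {                               /* relax until converged */
        converged = 1;
        for (i = 0; i < NCOMP; i++) {
            double gtot = gm;
            double inew = gm * (vrest - vold[i]);
            if (i > 0) {
                inew += gax[i-1] * (v[i-1] - vold[i]);
                gtot += gax[i-1];
            }
            if (i < NCOMP - 1) {
                inew += gax[i] * (v[i+1] - vold[i]);
                gtot += gax[i];
            }
            /* eq. (1): implf damps the update by the compartment's
               own total conductance over the half time step */
            double implf = 1.0 / (1.0 + gtot * dt / (2.0 * cm));
            double vnew  = vold[i] + dt / cm * 0.5 * (inew + iold[i]) * implf;
            if (fabs(vnew - v[i]) > crit)
                converged = 0;
            v[i] = vnew;   /* Gauss-Seidel: used at once by neighbors */
        }
    } while (!converged);
}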
The number of iterations required for convergence is a function of the number of compartments, their voltage activity, and their closeness of resistive coupling. This function is related to the average "space constant", or "diffusion distance" for a voltage step along a chain of compartments during one time step. In a static model, the "space constant" defines how far a voltage perturbation spreads along neural cable (Rall, 1959), and in a compartmental model the space constant is a function of the closeness of coupling between adjacent compartments. Within one time step, voltage in a dynamic model ideally should spread instantaneously along a chain of compartments as it would in a static model.
However, the iterative compartment calculations of NeuronC are "second order in space" (Mascagni, 1989), which means practically that a voltage perturbation spreads laterally along a chain of compartments only one additional compartment (in each direction along a cable) per iteration. Thus, the number of iterations required (in one time step) to compute the spread of a voltage perturbation is related to the closeness of coupling between compartments. The closeness of coupling between compartments in a dynamic model varies inversely with time step duration. A longer time step increases the influence of a compartment's voltage on its neighbor's voltage by increasing voltage change (time integral of current). Therefore, although a longer time step tends to reduce computation by reducing the number of time steps, a longer time step also tends to increase computation time for NeuronC because more iterations are required to compute voltage spread. These two opposing tendencies define a broad "window" of time step duration (50 - 200 usec) in which computation speed for a typical neural model is constant. As a result, the advantage of a shorter time step is primarily better accuracy when computing fast-changing voltages.
If the successive over-relaxation (SOR) factor is too large, voltage estimates oscillate and convergence suffers: the final voltage for a compartment has been anticipated "too much", and successive changes in the voltage estimate alternate in sign. This oscillation can occur even with the "optimal" over-relaxation factor because compartments converge at different rates. A quiescent compartment requires a smaller over-relaxation factor because its convergence is inherently faster, i.e. intermediate estimates of voltage for one time step are closer to the final estimate. Active compartments (those that receive current from a synaptic input or electrode stimulus) and those that are closely coupled to active neighbors require a larger over-relaxation factor to anticipate current spread during the time step.
To increase the overall convergence rate, a separate SOR factor is generated for each compartment. It is difficult to anticipate SOR factors for individual compartments in a simulation with instantaneously varying synaptic conductances and stimuli. However, since the optimal SOR factor depends to some extent on the (unvarying) degree of coupling between adjacent compartments, it is useful to compute the SOR factor from the compartment's previous convergence rate. This is implemented by counting the number of oscillations of the voltage estimate that occur within a time step and, after convergence, comparing this number with the total number of iterations. If there were any oscillations, a compartment's SOR factor (for the next time step) is decreased, depending on the number of oscillations, or increased if no oscillations occurred. Thus the SOR factor is controlled by a negative feedback loop and is self-adjusting. Typically 10 time steps (1 msec) are required (depending on the amount of change) for the SOR factor to reach an approximate equilibrium.
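A sketch of this feedback adjustment (the constants and proportions are illustrative assumptions, not NeuronC's actual values):

/* Adjust a compartment's SOR factor at the end of a time step, from
   the number of sign-alternating oscillations of its voltage estimate
   (noscil) out of niter total iterations.  During relaxation the
   factor is applied as:  v += sor * (vnew - v). */
double adjust_sor(double sor, int noscil, int niter)
{
    if (noscil > 0) {                  /* overshoot: back off */
        sor -= 0.1 * sor * (double)noscil / (double)niter;
        if (sor < 1.0) sor = 1.0;      /* never below plain Gauss-Seidel */
    } else {                           /* smooth convergence: creep up */
        sor += 0.01;
        if (sor > 1.9) sor = 1.9;      /* stay below divergence at 2 */
    }
    return sor;
}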
To further reduce convergence iterations, calculations for compartments that have already converged are omitted, using a stepwise decreasing voltage-change criterion. The initial criterion is set to 1e-3 V (1 mV). If a compartment's voltage estimate has changed less than the criterion, its recalculation is omitted while the other compartments' estimates are iteratively converged. After all compartments converge to this criterion, the criterion voltage is reduced by a factor of 10 and the compartments are converged again. This process is repeated until all compartments reach the final criterion. Omitting calculations in this way typically increases speed by 50% to 100%, and when a large fraction of the compartments in a model are quiescent, speed can increase by a factor of 5.
For neural models partitioned into compartments with a size of 0.1 cable space constant, typically between 10 and 200 iterations are required (depending on the connectivity and size of the model) to obtain convergence. Criterion values of less than 1e-8 V are needed to ensure stability of the computation, and also to ensure acceptable accuracy over long modeling "run" durations. The criterion voltage does not completely determine the error in the voltage estimate because convergence rates vary, but I have found the typical voltage error to be within 10 times the criterion.
To eliminate smaller compartments, NeuronC compares them to a criterion for minimal compartment size (based on fraction of the space constant). Any compartments smaller than the criterion are "condensed" with a neighbor compartment if their mutual connection is resistive. This reduces unnecessary compartments (in a typical branched retinal ganglion cell or horizontal cell by about 30-50%) but maintains accuracy of electrotonic decay along the dendritic tree (Figure 10). Normalizing compartment size in this way can reduce computation time (when the iterative solution method is used) by a factor of 10 or more, depending on the original closeness of coupling.
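Condensation itself is simple: for two compartments joined purely resistively, their parallel membrane properties add. A fragment using the illustrative structures sketched earlier (the connection-list surgery is elided):

/* Condense small compartment b into its neighbor a. */
void condense(struct comp *a, struct comp *b)
{
    a->g   += b->g;     /* parallel membrane conductances add */
    a->cap += b->cap;   /* membrane capacitances add */
    /* ...then re-point each of b's remaining connections to a, and
       delete the a-b axial connection and compartment b itself... */
}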
Figure 10
The horizontal cell was decomposed into a set of "neural elements": a sphere for the soma, a set of cables for the dendritic arbor, and a set of synaptic input locations. Length and diameter of the cable segments were measured manually from a published drawing (Fisher and Boycott, 1974; see Figure 10). A list of the elements' dimensions, connections, and locations was placed into the "morphology" file. The name of the "experiment" file was given to the NeuronC simulator, which constructed the neuron, ran the "experiment" on the neuron, and plotted temporal or spatial responses (Figure 7; see also Freed et al., 1992).
For a range of assumptions about membrane resistivity (Rm = 5000-50,000 Ohm-cm2), the morphology of the type A horizontal cell produced a relatively flat synaptic "weighting function" at the soma. Type A dendritic branches, being thick (1-5 um), transferred voltages several hundred microns without electrotonic decay. The magnitude of the post-synaptic potential did not transfer to the soma with great efficiency in the model, but this depended on the shunting effect of cell conductance on the synaptic conductance, not primarily on synaptic location within the dendritic arbor (Smith and Sterling, 1991). Post-synaptic potentials recorded at distal synapses tended to be larger than those recorded at proximal synapses, but this effect was due to the higher input (axial) resistance "seen" by distal synapses and had little influence on signal transfer to the soma (see Rall and Rinzel, 1973).
Type B horizontal cell morphology gave a shallow dome shape to the synaptic weighting function at the soma. Type B dendrites are relatively thin (1-2 um) and tend to branch relatively close to the soma. Each branch therefore has fewer sub-branches and receives fewer cone synaptic contacts than in the type A cell. My simulations suggested that electrotonic decay in the type B cell is relatively more important to synaptic signal transfer than in the type A cell (Smith and Sterling, 1991). However, for a wide range of membrane resistivity (Rm = 5000-50,000 Ohm-cm2), the spread of the synaptic "weighting function" in the type B cell remained within one space constant.
Lacking a general way to derive such morphological rules, I derived salient features of the two horizontal cell types from published morphological descriptions and from my preliminary simulations of their function, and incorporated these features into an algorithm. To simulate a "generic" retinal horizontal cell, I simplified its morphology into a symmetrical pattern and specified this with an algorithm (Figure 11).
Figure 11
To arrange the cells of various arrays in correct physical locations, with respect to both cell density and synaptic interconnection, requires some form of "connection rule". This rule may be implied by a symmetrical morphology, e.g. "all dendritic branches look alike, and each branch makes one synaptic contact". Alternatively, the rule may be a "developmental" one in which synaptic contacts are created independently of exact morphology by an algorithm (see DISCUSSION).
For the cone-horizontal cell network, I chose a simple algorithm to create a regular cone array and a correspondingly regular "generic" dendritic tree for the horizontal cell. The connection rule was straightforward: each horizontal cell connects to all cones within a square "dendritic field", a separate horizontal cell dendrite reaching from the soma to each cone (Figure 11). This "connection rule" avoided the issue of incomplete specification of connectivity, at the expense of simplifying morphology.
The neural circuit was defined using "for" loops to make 2-dimensional arrays of cones and 2 types of horizontal cells. Horizontal cell dendritic tips were simulated with cable segments of 0.1 um diameter, which were connected to a soma by 2 um diameter primary dendrites. The complete model required a NeuronC program (text file) of about 300 lines. The model defined a network of some 12,500 nodes, 13,500 compartments, and 10,000 synapses, and gave realistic responses to typical stimuli (Figure 12). A simulation run of 0.1 second required about 35 minutes on an IBM 6000/320 workstation.
Figure 12
Figure 13
Figure 14
Therefore, to investigate the interactions of the individual receptive fields, I selectively "blocked" synaptic connections by modifying "synapse" statements. With all chemical synapses blocked, the effect of gap junctions was to spread electric signals laterally, giving an exponential skirt (Bessel function) to a point stimulus (Figure 14). When the reciprocal chemical synapses were included, the effect of removing gap junctions between cones was to dramatically reduce the size of the cone receptive field center (Figure 15). The effect of removing, in a separate "run", the feedback synapse from each horizontal cell type to cones was to modify a different component of the cone receptive field center and surround.
Figure 15
A third advantage of using predefined neural elements is that compartmental detail is summarized succinctly at a high level. A specification of individual conductances is necessary for a compartmental model, but when the model contains on the order of 10^4 compartments, merely specifying and verifying the details of each compartment can distract from larger-scale issues (e.g., see Hines, 1989; Segev et al., 1989).
For networks where examples of individual neuron morphology are known and at least a general idea of connectivity is also known (number of synapses, "convergence/divergence"; see Sterling et al., 1988), it might seem trivial to build the network. However, a problem exists: given that a neuron of type 1 synaptically connects to each available neuron of type 2, exactly which dendritic branches make the contacts? In the mammalian retina, horizontal cells are thought to receive input from most of the available cone photoreceptors within their dendritic fields (Wässle et al., 1978). However, unless 1) all the individual locations and morphologies for an array of horizontal cells, and their dendrites' positional correspondence to cones, are specified, or 2) horizontal cell morphology is simplified to a "generic" symmetrical dendritic tree in which dendrites always contact cones in predetermined symmetrical patterns, no neural network can be constructed, for lack of actual synaptic positions (Figure 16a). The problem is more difficult if the density of neuron type 2 is not an integral multiple of that of neuron type 1.
Figure 16
A solution is to combine details of morphology and cell density in a "connection rule" that defines for an individual neuron of specific type exactly where and in what quantity it makes synaptic connections with other neurons. For example, a connection rule might extend a dendrite from a neuron of type 1 to form a spine and make a synaptic connection (according to the proximity of 2 neurons) with the nearest dendrite of type 2.
By stating a synaptic connection rule in "developmental" terms of 3-dimensional distance instead of in terms of predetermined symmetrical cell spacing and morphology, actual neuron morphology and cell density data can be used without simplification (Figure 16b). The rule must be specified on a precise morphological basis only to the extent that a synapse's effect on the network is related to the synapse's precise location on a cell's dendritic tree. Because such a rule attempts to bridge a gap in anatomical knowledge with a more general procedure, it seems appropriate to state the rule as an algorithm instead of a purely anatomical pattern.
Algorithm to connect cones to a horizontal cell with "real morphology", based on a simple "connection rule":
Create an array of cone photoreceptor neurons, spaced regularly with a separation defined as "conespace". Create a second neural array of horizontal cells, with appropriate spacing, overlapping the first array. For each cone located within 1 "conespace" of a horizontal cell dendrite, connect the cone axon terminal to the horizontal cell, using a synapse at the point of closest approach of the cone to the horizontal cell. Connect the synapse to the horizontal cell via a thin-necked spine of appropriate length.
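In C-like form, the geometric core of this rule is a closest-approach test between each cone and each idealized dendrite (treated here as a line segment from soma to tip; all names are illustrative assumptions, not NeuronC source):

#include <math.h>

struct pt { double x, y; };

/* Distance from point p to segment a-b; closest point returned in q. */
static double seg_dist(struct pt p, struct pt a, struct pt b, struct pt *q)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    double t = len2 > 0 ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2 : 0;
    if (t < 0) t = 0;
    if (t > 1) t = 1;
    q->x = a.x + t * dx;
    q->y = a.y + t * dy;
    return hypot(p.x - q->x, p.y - q->y);
}

/* Apply the connection rule: one synapse (via a spine) wherever a
   cone lies within one cone spacing of a dendrite. */
void connect_cones(struct pt cone[], int ncones, struct pt soma,
                   struct pt tip[], int ntips, double conespace)
{
    int i, j;
    struct pt q;
    for (i = 0; i < ncones; i++)
        for (j = 0; j < ntips; j++)
            if (seg_dist(cone[i], soma, tip[j], &q) < conespace) {
                /* make_synapse(cone i, dendrite j, at q, via spine) */
            }
}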
Such "high-level" rules are facilitated by a "foreach" loop statement that searches through all of the neural circuit information (in the network constructed so far). It executes a series of statements only for neurons or their subregions that match a specified name pattern (a pattern of node and neural element numbers). This statement specifies a synaptic connection between 2 types (or subregions) of neuron, eliminating other types. The match is further narrowed by statements inside the "foreach" loop that eliminate neurons based on other criteria, for example, those neurons of type 2 beyond a certain distance from a neuron of type 1. In this case, a neighboring neuron that matches all the conditions is a candidate for synaptic contacts.
Further, a rule that defines a cell's morphology as well as its connections with neighboring cells serves as a definition of "anatomical cell type" for the neuron. The effect of modifying the neuron (in morphology or connection) on its function in the circuit can be tested by simply modifying its rule. Finally, a construction rule relates more directly to intuition about the neuron's function in the circuit than would a simple list of structural details.
The GENESIS simulator (Wilson and Bower, 1989) allows multiple-neuron circuits to be defined as hierarchical systems using "elements", uses "script" text files to describe the elements, and uses a similar numerical integration method, but it is not primarily designed as a language in the style of other general programming languages. Compared with GENESIS, NeuronC is relatively specialized because 1) its "neural elements" are compact, predefined, and precompiled, and 2) it has a precompiled light stimulator with optical blur and photoreceptor transduction. However, compared with GENESIS, NeuronC is more general in that it allows neural circuit "connection rules" to be written in the style of a procedural language.
For more information about the NeuronC simulation language, a NeuronC User's Manual is available (110 pp.). Source code can be provided to run on most UNIX systems.
The "run" statement at program end (or a "step" statement) causes NeuronC to simulate time-related neural actions. The simulator program translates from neural elements to their electric circuit equivalents (resistors, capacitors, etc.; see Joyner et al., 1978), and translates from these in turn to numerical approximations using difference equations. Each compartment in the simulation is given an initial starting voltage ("resting potential"), defined by neural element statement parameters or by default. The starting voltage of neurons in a large network may be approximately set by simulating and obtaining the equilibrium voltages of a smaller but similar network. The simulation advances in time (normally in steps of 25 to 100 usec) by numerically integrating state variables (voltage, channel state concentrations, and synaptic neurotransmitter concentration). At each time step, the NeuronC simulator calculates current flows between compartments by tracing the connections of a compartment with its neighbors. Stimuli and recordings proceed separately from, but synchronized with, the neural circuit time integration.
Synapse

Syntax: conn <node> to <node> synapse 'parameters'

optional parameters:  default:     meaning:
  open | close        open         neurotransmitter opens or closes channels
  linear | expon      5 mV         linear, or exponential constant (1/mV)
  vrev                -0.00 V      reversal potential (battery) (volts)
  thresh              -0.05 V      threshold for transfer (volts)
  igain               1            synaptic gain (100 mV -> saturation)
  nfilt1              2            number of presynaptic filters
  timec1              0.2 msec     presynaptic low-pass filter time constant
  nfilt2              1            number of filters after transmitter release
  timec2              0.2 msec     postsynaptic low-pass filter time constant
  kd                  1            saturation point: cond/(cond + kd)
  maxcond             1e-8 S       maximum conductance (siemens)
  chnoise                          defines channel noise
    N                 100          number of channels
    dur               0.0001 sec   duration of channel events
  vesnoise                         defines vesicle noise
    N                 10           number of vesicle release sites
    dur               0.0001 sec   duration of release events

A synapse consists of separable amplitude and time functions which modify the signal passing through them:

1) Presynaptic time delay filter

This filter affects both the time delay and the frequency characteristics of the synaptic signal passed on to other synaptic functions. Its two parameters, the number of stages and the time constant (tau) of each stage, allow delay and frequency characteristics to be controlled separately. Each stage is described by:

  Vout(t+1) = Vout(t) + (Vin(t) - Vout(t)) * 0.63

  where:  Vout = output of delay filter
          Vin  = input to filter
          t    = time in increments of the basic time step

2) Synaptic transfer function

A transfer function converts from millivolts at the presynaptic node to a normalized "transmitter" level.

Linear transfer:

  Trel = (Vpre - Vthresh) * igain

  where:  Trel    = transmitter released to postsynaptic receptor
                    (limited to be non-negative)
          Vpre    = presynaptic voltage (normalized to mV)
          Vthresh = synaptic threshold (normalized to mV)
          igain   = linear transfer gain

Exponential transfer: the transfer function may also be exponential, defined by the equation

  Trel = 0.025 * e^((Vpre - Vthresh) / expon) * igain

  where:  Trel    = transmitter released to postsynaptic receptor
          Vpre    = presynaptic voltage (mV)
          Vthresh = synaptic threshold (mV)
          expon   = constant defining mV per e-fold increase
          igain   = transfer gain (normally set to 1)

  Reference: Belgum and Copenhagen, 1988.

3) Synaptic vesicle noise

Vesicle noise is defined by two parameters: the number of release sites (N), and the average duration of a vesicle release event. The number of vesicles released is a random function (binomial distribution) of the probability of release at each release site and the number of sites. Quantal release is calculated with the following equations:

  p      = Trel / (size * N * k)
  q      = binom(p, N)
  Tnoise = q * size * k

  where:  p      = probability of release
          k      = constant to reduce p
          Trel   = transmitter released
          size   = average size (duration * height) of release event
          q      = number of quantal vesicles released per time step
          N      = number of release sites
          binom  = random binomial distribution function of p, N
          Tnoise = transmitter signal with noise, normalized to 1

  Reference: Korn et al., 1982.

4) Postsynaptic delay function for released transmitter

Identical to the presynaptic delay filter described above:

  Vout2(t+1) = Vout2(t) + (Vin2(t) - Vout2(t)) * 0.63

  where:  Vout2 = output of delay filter
          Vin2  = input to filter
          t     = time in increments of the basic time step

5) Synaptic saturation

Transmitter levels delivered from the second delay function are passed through a Michaelis-Menten saturation function:

  Rbound = Vout2 / (Vout2 + kd)                                  (3)

  where:  Rbound = fraction of receptors bound with transmitter
          Vout2  = averaged released transmitter
          kd     = binding constant for synaptic saturation (default = 1)

6) Synaptic conductance

The postsynaptic conductance is related to the product of the transmitter bound (Rbound) to postsynaptic receptors and the maximum synaptic conductance, depending on the action of the neurotransmitter:

  Neurotransmitter opens channels:   Gpost = Rbound * maxcond
  Neurotransmitter closes channels:  Gpost = (1 - Rbound) * maxcond

  where:  Gpost   = postsynaptic conductance
          Rbound  = fraction of receptors bound, after saturation
          maxcond = maximum postsynaptic conductance (default = 1e-8 S)

The fraction of receptors bound is always less than 1, so the postsynaptic conductance can only approach the level specified by "maxcond".

7) Synaptic channel noise

Channel noise is defined by two parameters: the number of channels (N), and the average duration of a channel opening (dur). The number of open channels is a random function (binomial distribution) of the probability of each channel opening and the number of channels. Quantal channel opening is calculated with the following equations:

  p      = Rbound / (size * N)
  q      = binom(p, N)
  Rnoise = q * size

  where:  p      = probability of opening
          Rbound = receptors bound with transmitter
          size   = average size (duration * height) of channel opening
          q      = number of channels opened per time step
          N      = number of channels
          binom  = random binomial distribution function
          Rnoise = channel-open signal with noise, normalized to 1

8) Synaptic reversal potential

  Ipost = (Vcell - Vrev) * Gpost

  where:  Ipost = postsynaptic current
          Vcell = intracellular voltage
          Vrev  = reversal (battery) voltage for synapse
          Gpost = postsynaptic conductance
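Chaining these functions for one time step, without the two noise stages, gives a compact summary. This C sketch follows the equations above with single-stage filters; it is an illustration of the chain, not the NeuronC implementation:

#include <math.h>

struct syn { double vout1, vout2; };   /* filter states, persist per step */

/* One noise-free pass through the synaptic chain; returns Ipost. */
double synapse_step(struct syn *s, double vpre_mv, double vcell,
                    double thresh_mv, double igain, double kd,
                    double maxcond, double vrev)
{
    double trel, rbound, gpost;

    s->vout1 += (vpre_mv - s->vout1) * 0.63;   /* 1) presynaptic filter */

    trel = (s->vout1 - thresh_mv) * igain;     /* 2) linear transfer */
    if (trel < 0.0) trel = 0.0;                /*    non-negative */

    s->vout2 += (trel - s->vout2) * 0.63;      /* 4) postsynaptic filter */

    rbound = s->vout2 / (s->vout2 + kd);       /* 5) saturation, eq. (3) */

    gpost = rbound * maxcond;                  /* 6) transmitter opens chans */

    return (vcell - vrev) * gpost;             /* 8) postsynaptic current */
}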
Difference equations for rod and cone phototransduction elements.

The NeuronC rod and cone transduction models are based on the enzyme cascade postulated by Pugh and Altman (1988). Each individual rod or cone is allocated a data structure that contains its individual set of state variables, along with a pointer to the appropriate parameter set (see APPENDIX 2). The rod and cone data structures define a set of 7 difference equations to simulate the rising phase of the photocurrent and 4 difference equations to simulate the decay phase and calcium feedback (see Pugh and Lamb, 1990) (Figure 8).

Syntax:  at <node> rod (x,y) 'parameters'
         at <node> cone (x,y) 'parameters'

parameters:   default:    meaning:
  (x,y)                   location of receptor in (x,y) plane
  maxcond     1.4e-9 S    dark conductance of light-modulated channels
  dia         2 um        diameter of receptor used for photon flux
  pigm        1           type 0 = rod; types 1-3 = cone r, g, b
  pathl       25 um       path length for light through receptor
  attf        0.9         light attenuation factor (lens, etc.)
  filt        0           0 = none; 1 = macular pigment
  photnoise   off         Poisson photon noise from light flux

The light absorbed by a receptor is calculated from:

  sod  = function of pigment and wavelength (table lookup)
  od   = sod * path length
  T    = exp10(-od)
  Labs = (1 - T) * mfilt * attf * Qeff

  where:  sod   = specific optical density (extinction coeff. * concentration)
          od    = optical density
          T     = transmitted light
          mfilt = macular filter (if defined)
          attf  = miscellaneous attenuation factor
          Qeff  = quantum efficiency (set at 0.67)
          Labs  = light absorbed (absolute sensitivity)

Difference equations for rod and cone:
  rhod += light_flux;
  gprt += rhod * gprgain;
  gpr2 += gprt * ggain2;
  gpr3 += gpr2 * ggain3;
  pde  += gpr3 * (1.0 - pde) * ggain4;
  ca2   = cax * cax;
  gcyc  = 1.0 - (ca2 / (ca2 + kdgcyc));
  cycg -= pde * cycg / (cycg + 1.0) * pdegain;
  cycg += gcyc * gcygain;
  cond += cycg * cycg * (totcond - cond) / totcond * condgain;
  ca   += cond * gca;
  cax  += ca * cxgain;

  rhod *= dec1;
  gprt *= dec2;
  gpr2 *= dec3;
  gpr3 *= dec4;
  pde   = (pde - pdebase) * pdedec + pdebase;
  ca   -= capump * ca / (ca + 1.0);
  cond *= dec5;
  cax  *= dec6;
Baylor, D.A., Nunn, B.J., and Schnapf, J.L. (1984) The photocurrent, noise and spectral sensitivity of rods of the monkey Macaca fascicularis. J. Physiol. 357: 575-607.
Baylor, D.A., Nunn, B.J., and Schnapf, J.L. (1987) Spectral sensitivity of cones of the monkey Macaca fascicularis. J. Physiol. 390: 145-160.
Belgum, J.H. and Copenhagen, D.R. (1988) Synaptic transfer of rod signals to horizontal and bipolar cells in the retina of the toad (Bufo marinus). J. Physiol. 396: 225-245.
Carnevale, N.T. and Lebeda, F.J. (1987) Numerical analysis of electrotonus in multicompartmental neuron models. J. Neurosci. Meth. 19: 69-87.
Cohen, E. and Sterling, P. (1990) Microcircuitry related to the receptive field center of the on-beta ganglion cell. J. Neurophysiol. 65: 352-359.
Cooley, J.W. and Dodge, F.A. (1966) Digital computer solutions for excitation and propagation of the nerve impulse. Biophys. J. 6: 587.
Conte, S.D. and de Boor, C. (1980) Elementary numerical analysis. McGraw- Hill, New York.
Crank, J. and Nicolson, P. (1947) A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type. Proc. Camb. Phil. Soc. 43: 50-67.
Edelman, G.M., Gall, W.E., and Cowan, W.M. (1987) editors, Synaptic Function. New York: Wiley.
Fisher, S.K., and Boycott, B.B. (1974) Synaptic connexions made by horizontal cells within the outer plexiform layer of the retina of the cat and the rabbit. Proc. Roy. Soc. Lond. B, 186: 317-331.
Foerster, M.H., van de Grind, W.A., and Grüsser, O.-J. (1977) The response of cat horizontal cells to flicker stimuli of different area, intensity and frequency. Exp. Brain Res. 29: 367-385.
Fohlmeister, J.F., Coleman, P.A., and Miller, R.F. (1990) Modeling the repetitive firing of retinal ganglion cells. Brain Res. 510: 343-345.
Freed, M.A., and Sterling, P. (1988) The On-alpha ganglion cell of the cat retina and its presynaptic cell types. J. Neurosci. 8: 2303-2320.
Freed, M.A., Sterling, P., and Smith, R.G. (1992) Computational model of the On-alpha ganglion cell receptive field based on bipolar cell circuitry. Proc. Nat. Acad. Sci. 89: 236-240.
Hines, M. (1984) Efficient computation of branched nerve equations. Int. J. Biomed. Comput. 15: 69-76.
Hines, M. (1989) A program for simulation of nerve equations with branching connections. Int J. Biomed. Comput. 24: 55-68.
Hines, M. (1991) NEURON: a program for simulation of nerve equations. In: Analysis and Modeling of Neural Systems, II. Kluwer (in press).
Hodgkin, A.L. and Huxley, A.F. (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117: 500-544.
Joyner, R.W., Westerfield, M., Moore, J.W., and Stockbridge, N. (1978) A Numerical Method to model excitable cells. Biophys. J. 22: 155-170.
Kernighan, B.W., and Pike, R. (1984) The UNIX programming environment. Englewood Cliffs, NJ: Prentice-Hall.
Kolb, H. (1974) The connections between horizontal cells and photoreceptors in the retina of the cat: electron microscopy of Golgi preparations. J. Comp. Neurol. 155: 1-14.
Kolb, H. (1977) The organization of the outer plexiform layer in the retina of the cat: electron microscopic observations. J. Neurocytol. 6: 131-153.
Korn, H., Mallet, A., Triller, A., and Faber D.S. (1982) Transmission at a central synapse, II. Quantal description of release with a physical correlate for binomial n. J. Neurophysiol. 48: 679-707.
Mascagni, M.V. (1989) Numerical methods for neuronal modeling. Chapter 13 in: Methods in Neuronal Modeling, edited by C. Koch and I. Segev. Cambridge, MA: MIT Press.
McGuire, B.A., Stevens, J.K., and Sterling, P. (1984) Microcircuitry of bipolar cells in cat retina. J. Neurosci. 4: 2920-2938.
McGuire, B.A., Stevens, J.K., and Sterling, P. (1986) Microcircuitry of beta ganglion cells in cat retina. J. Neurosci. 6: 907-918.
Miller, R.F., Coleman, P.A., and Jayram, S. (1990) How ganglion cells summate epsps in the receptive field center: physiology, biophysics and computer simulations. Soc. Neurosci. Abstr. 16: 1217.
Nelson, R. (1977) Cat cones have rod input: a comparison of the response properties of cones and horizontal cell bodies in the retina of the cat. J. Comp. Neurol. 172: 109-136.
Pellionisz, A., and Llinas, R. (1977) A computer model of cerebellar Purkinje cells. Neurosci. 2: 37-48.
Perkel, D.H. and Mulloney, B. (1978) Electrotonic properties of neurons: the steady-state compartmental model. J. Neurophysiol. 41: 621-639.
Perkel, D.H., Mulloney, B., and Budelli, R.W. (1981) Quantitative methods for predicting neuronal behavior. Neurosci. 6: 823-837.
Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T. (1988) Chapter 7, "Random numbers", and Chapter 17, "Partial differential equations" in: Numerical Recipes in C: The Art of Scientific Computing. New York: Cambridge University Press.
Pugh, E.N. and Altman, J. (1988) A role for calcium in adaptation. Nature 334: 16-17.
Pugh, E.N. and Lamb, T.D. (1990) Cyclic GMP and calcium: the internal messengers of excitation and adaptation in vertebrate photoreceptors. Vision Res. 30: 1923-1948.
Rall, W. (1959) Branching dendritic trees and motoneuron membrane resistivity. Exp. Neurol. 1: 491-527.
Rall, W. (1962) Theory of physiological properties of dendrites. Ann. N. Y. Acad. Sci. 96: 1071-1092.
Rall, W. (1964) Theoretical significance of dendritic tree for input-output relation. In: Neural Theory and Modeling, edited by R.F. Reiss. Stanford: Stanford University Press, pp 73-97.
Rall, W. (1967) Distinguishing theoretical synaptic potentials computed for different soma-dendritic distributions of synaptic input. J. Neurophysiol. 30: 1138-1168.
Rall, W., and Rinzel, J. (1973) Branch input resistance and steady attenuation for input to one branch of a dendritic neuron model. Biophys. J. 13: 648-688.
Redman, S.J. (1973) The attenuation of passively propagating dendritic potentials in a motoneurone cable model. J. Physiol. 234: 637-664.
Robson, J.G., and Enroth-Cugell, C.E. (1978) Light distribution in the cat's retinal image. Vision Res. 18: 159-173.
Schnapf, J.L., Nunn, B.J., Meister, M. and Baylor, D.A. (1990) Visual transduction in cones of the monkey Macaca fascicularis. J. Physiol. 427: 681-713.
Segev, I., Fleshman, J.W., and Burke, R.E. (1989) Compartmental models of complex neurons. Chapter 3 in: Methods in Neuronal Modeling, edited by C. Koch and I. Segev. Cambridge, MA: MIT Press.
Smith, R.G., Freed, M.A., Sterling, P., Hines, M.L., and Moore, J.W. (1984) Microcircuitry of the On-Alpha (Y) cell contributes to velocity tuning. Soc. Neurosci. Abstr. 10: 326.
Smith, R.G., Freed, M.A., and Sterling, P. (1986) Microcircuitry of the dark- adapted cat retina: functional architecture of the rod-cone network. J. Neurosci. 6: 3505-3517.
Smith, R.G., and Sterling, P. (1990) Cone receptive field in cat retina computed from microcircuitry. Visual Neurosci. 5: 453-461.
Smith, R.G., and Sterling, P. (1991) Functional consequences of morphology in types A and B horizontal cells in cat retina. Soc. Neurosci. Abstr. 17: 1012.
Steinberg, R. (1969) Rod and cone contributions to S-potentials from the cat retina. Vision Res. 9: 1319-1329.
Sterling, P., Freed, M., and Smith, R.G. (1988) Architecture of rod and cone circuits to the on-beta ganglion cell. J. Neurosci. 8: 623-642.
Traub, R.D., and Llinas, R. (1979) Hippocampal pyramidal cells: significance of dendritic ionic conductances for neuronal function and epileptogenesis. J. Neurophysiol. 42: 476-496.
Vaney, D.I. (1991) Biocytin and neurobiotin reveal novel connections between retinal neurons. Invest. Ophthalmol. Vis. Sci. 32: 1131.
Wa"ssle, H., Boycott, B.B., and Peichl, L. (1978) Receptor contacts of horizontal cells in the retina of the domestic cat. Proc. R. Soc. B 203: 247- 267.
Werblin, F. (1974) Control of retinal sensitivity. II. Lateral interactions at the outer plexiform layer. J. Gen. Physiol. 63: 62-87.
Wilson, M.A., and Bower, J.M. (1989) The simulation of large-scale neural networks. Chapter 9 in: Methods in Neuronal Modeling, edited by C. Koch and I. Segev. Cambridge, MA: MIT Press.
Figure 2) Conceptual modules used in constructing neural circuit models. At bottom, "neural elements" may be connected together to make whole neurons, at the second level. At the third level, a "connection pattern" is an interaction between an individual neuron of one type and one or more neurons of a second type. At the top level, arrays of neurons are constructed from individual neurons and neural connection patterns.
Figure 3) Schematic diagram of high-level circuit description from cone to horizontal cell (Hz), showing 7 nodes interconnected with neural elements in bold. Nodes are "points of junction" and are used to define anatomical connection patterns in the circuit. A node can be connected to any number of other nodes, and this feature allows the description of loops for feedback. Neural elements such as cables and chemical or electrical synapses connect 2 nodes. Other elements, such as rod or cone transduction mechanisms and isopotential spheres, connect to a single node. Hz, horizontal cell.
Figure 4) Translation of cable segments into compartments. Two cable segments connect at Node 2, with Node 1 and Node 3 at the cable segment ends. Each cable segment is translated into appropriate low-level compartments and resistive connections. Compartments at the end of a cable are 1/2 standard size, so that when 2 cables connect, their ends form one standard-size compartment at the node common to both.
Figure 5) Components used to simulate chemical synapses. Presynaptic voltage signal passes through a temporal filter, static transfer function with variable gain and threshold, a second temporal filter, and a saturation function which controls the conductance of the output channel. Finally, a voltage source (Vrev) in series with the conductance applies current to the post-synaptic cell.
Figure 6) Sequential state model used to describe macroscopic channel currents, similar to that used by Armstrong and Matteson, 1984. Model represents the average distribution of a population of individual channel proteins. Channel is described as a series of states and allowable transitions between states. Rate constants for each transition are calculated as a function of voltage. Each state is assigned a fractional "concentration", and the sum of all states' concentrations is always 1. The conductance of each state is either 1 or 0, therefore total conductance is calculated as the sum of concentrations of the open states.
Figure 7) Time and space plot of a voltage-clamped cable with closed ends. Cable is 2 um diameter, with 40 segments of 20 um length and 41 nodes; Rm=5000 Ohm-cm2, Ri=100 Ohm-cm. Spatial plots drawn every 100 usec, starting at time=100 usec. Voltage-clamp stimulus applied at time=0 for 500 microseconds, then cable voltage allowed to decay passively. Note that dV/dx approaches 0 at ends of cable.
Figure 8) Response of cone to a series of light flashes while voltage clamped to -30 mV. Lowest intensity was 500 photons/um2. Flash intensity was increased by a factor of 2 between flashes. Strength of the flash response depends on the clamp voltage because the light-dependent conductance of the cone is ohmic. Note that the amplitude of the downward "undershoot" saturates in tandem with the amplitude of the rising phase (see APPENDIX 3), approximating the responses of monkey cones (Schnapf et al., 1990).
Figure 9) Explanation of macroscopic -> microscopic stimulus generation. A "spot" definition causes the stimulus generator program to create a spot of appropriate diameter and intensity in its internal stimulus array. A "blur" definition causes this stimulus to be convolved with a Gaussian point-spread function of appropriate dimensions. The stimulus generator creates a list of intensity changes for each photoreceptor, and places the resulting event list in a "stimulus file". At runtime, the neural simulator reads the stimulus file, creates an internal list of stimulus actions, and converts each action into a microscopic stimulus for an individual photoreceptor, using wavelength lookup tables and an optional Poisson photon noise function.
Figure 10) Model of type A horizontal cell of cat retina, derived from Fisher and Boycott, 1974. A single synapse from each cone pedicle is assumed to contact the horizontal cell through a 0.1 um dia. spine. A) Horizontal cell morphology as specified to the program. B) Translation of morphology into low-level compartments, without compartment condensation; 223 compartments. A compartment is displayed as a sphere that would possess the equivalent of the compartment's membrane surface area. Each branch point requires a node, and each node requires 1 compartment to represent it, so thick dendrites are broken into "small" compartments (very closely coupled together). C) Partial condensation; 198 compartments. Criterion for condensing a compartment with its neighbor was set to 0.1 of standard compartment size (normally 0.1 lambda; criterion in this case 0.01 lambda). Each compartment is displayed at the mean location of the nodes it represents. D) Greater condensation; criterion set at 0.3 of standard compartment size; 104 compartments. Simulation D ran approximately 10 times faster than B, yet the synaptic potentials at the soma differed between the two by less than 5%.
Figure 11) Symmetric morphology of horizontal cell used in model of cone- horizontal cell network, shown in radial view. Each cone photoreceptor makes a synaptic contact with each horizontal cell that extends beneath it. All of a horizontal cell's dendrites are equal in electrotonic length, so there is no spatial weighting of cone inputs.
Figure 12) Output of 30x30 model during simulation to map cone receptive field. Simultaneous traces from 10 cones and 2 horizontal cells are plotted vs. time. Model requires about 50 msec to reach equilibrium. Stimulus at 50 msec, network response sampled at 50 and 110 msec. Stimulus is small blurred spot, offset from center of array. Response of cones near edge of array is depolarized.
Figure 13) Solid line, cone receptive field mapped by 30x30 model. Dotted line, difference-of-Gaussians from linear summation model of cone receptive field (Smith and Sterling, 1990). Gap junction strength and transfer functions of feedforward and feedback synapses have been tuned to give approximate match.
Figure 14) Solid line, cone receptive field with optical blur alone; Gaussian of 11 um radius. Dotted line, cone receptive field with gap junctions alone, no blur or chemical synapses. Gap junctions give exponential-type receptive field (discretely-sampled Bessel function) with wide skirt. Gap junction conductance was already tuned to match linear-summation model as in Figure 13.
Figure 15) Solid line, complete cone receptive field. Dotted line, cone receptive field computed with chemical synaptic feedback but without gap junctions. Effect of gap junctions is to widen cone receptive field center and surround.
Figure 16) A) Problem that exists when synaptically connecting two types of neuron. Each neuron of type 2 might conceivably connect to any of several points on type 1, but no rule is specified, so no network can be constructed. B) Rule (R) applied for each neuron of type 2 computes exactly where on type 1 to make a synaptic contact. Rule does not require specifying in advance any particular cell morphology or spacing.