
What is the difference between spike-triggered averaging and reverse correlation?

I'm interested in the difference between spike-triggered averaging and reverse correlation.

In some papers (e.g., Schwartz et al., 2006) I see the term 'spike-triggered averaging'. In others (e.g., Ringach & Shapley, 2004) I see the term 'reverse correlation'. According to Wikipedia, they are the same:

Spike-triggered averaging is also commonly referred to as "reverse correlation" or "white-noise analysis"

I was wondering though: Is there a subtle difference between spike-triggered averaging and reverse correlation?

References

  • Schwartz, Odelia, et al. (2006). "Spike-triggered neural characterization." Journal of Vision 6.4.
  • Ringach, Dario, and Robert Shapley (2004). "Reverse correlation in neurophysiology." Cognitive Science 28.2: 147-166.

There's the naïve version of spike-triggered averaging, and the sophisticated version. Both of them are consistent estimators for a linear-nonlinear system under certain conditions (Paninski, 2003). If your stimulus is $x_i$ and your spike count in a small bin is $y_i$, the naïve version is $$\mathrm{STA} = \frac{1}{N} \sum_i x_i y_i$$ The sophisticated version is equivalent to linear regression, where a (pseudo-)inverse of the stimulus covariance is premultiplied to the naïve version. The naïve version converges more slowly in general.
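A minimal sketch of both estimators, using simulated Gaussian white-noise stimuli and an assumed exponential output nonlinearity (all names and parameters here are illustrative, not from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a linear-nonlinear (LN) neuron driven by Gaussian white noise.
T, D = 50_000, 20                        # time bins, stimulus dimensions
X = rng.normal(size=(T, D))              # white-noise stimulus, one row per bin
true_filter = np.exp(-np.arange(D) / 4.0)
rate = np.exp(0.5 * (X @ true_filter) - 1.0)  # assumed exponential nonlinearity
y = rng.poisson(rate)                    # spike counts per bin

# Naive STA: spike-count-weighted average of the stimulus.
sta_naive = (X * y[:, None]).sum(axis=0) / y.sum()

# Sophisticated version: premultiply by the (pseudo-)inverse of the stimulus
# covariance -- equivalent to linear regression of spikes on the stimulus.
cov = np.cov(X, rowvar=False)
sta_whitened = np.linalg.pinv(cov) @ sta_naive

# For white noise the stimulus covariance is ~identity, so both estimates
# point in the direction of the true filter.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(sta_naive, true_filter) > 0.98)
print(cosine(sta_whitened, true_filter) > 0.98)
```

For non-white (correlated) stimuli, only the whitened version remains a consistent estimator of the filter, which is the practical reason it converges faster in general.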

In short, both are trying to estimate the same thing, will converge to the same thing, and are sometimes called the same thing. However, the terms can also refer to different procedures, so read the methods section of each paper before deciding which is which.

  • Paninski, L. (2003). Convergence properties of three spike-triggered analysis techniques. Network: Computation in Neural Systems, 14, 437-464.
  • Dayan, P. and Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Massachusetts Institute of Technology Press. http://amzn.to/1nGUdII

Spike-triggered averaging is a specific type, or you could say a subset, of reverse-correlation methods, alongside covariance and probabilistic analyses. Other examples include differential reverse correlation, Poisson spike trains, nonlinear reverse correlation, and motion reverse correlation.


Difference Between ROE and RNOA

In finance, equity is the interest or claim of shareholders on the assets of a company after all its liabilities are settled. Shareholders' equity, or stockholders' equity, is the interest in the company's assets that is divided among all holders of common stock.

When a business is established, the funds that investors put up as capital cause it to incur liabilities. To arrive at shareholders' equity, all liabilities are deducted from the company's assets; the remainder is the shareholders' equity, or interest, in the business.

There are two methods of measuring the return on shareholders' equity. One is Return on Equity (ROE), which measures the return earned on a company's common stock. It shows how skillfully a company manages its funds to produce maximum interest and growth.
To arrive at the equity figure for a company's ROE, all assets, both long-term (equipment and capital) and current (receivables and cash), are added up. Its long-term liabilities (debts that do not have to be paid within the year) and current liabilities (accounts payable and employees' salaries) are also added up. The total liabilities are then subtracted from the total assets.

Return on Net Operating Assets (RNOA), on the other hand, measures a company's ability to create profit from each piece of equity. It calculates the amount that a company earns for each dollar it invests. A company's net income before tax (profit before tax) is divided by its total assets to arrive at its RNOA. It is also known as a profitability or productivity ratio, giving owners an idea of how well their company is doing relative to their goals, their competitors, and the industry as a whole.

Computing the RNOA includes assets financed by liabilities, which is not very useful for investors but is a good measure of the profitability and performance of a company's different divisions. It is a good internal management ratio and is most suitable for companies with large capitalization. While a company's liabilities are not deducted in computing the RNOA, they are deducted in the computation of ROE. The dividends paid to preferred shareholders are also subtracted from net income.

1. ROE is Return on Equity, while RNOA is Return on Net Operating Assets.
2. The formula for ROE is net income after taxes divided by shareholders' equity, while the formula for RNOA is net income before taxes divided by total assets.
3. The computation of ROE deducts all liabilities and preferred dividends; the computation of RNOA does not.
4. ROE is computed after taxes, while RNOA is computed before taxes.
5. While RNOA is a good internal management ratio, ROE is a good gauge for investors of how well their funds are being used to generate profit.
6. ROE is a good tool for comparing a company with others in the same industry; RNOA is not as good for this.
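With made-up balance-sheet figures (purely hypothetical, chosen only to illustrate the formulas described above), the two ratios can be computed as follows:

```python
# All figures are hypothetical, chosen only to illustrate the formulas above.
assets = 1_000_000.00              # total assets (long-term + current)
liabilities = 400_000.00           # total liabilities (long-term + current)
preferred_dividends = 10_000.00
net_income_after_tax = 90_000.00
net_income_before_tax = 120_000.00

# ROE: after-tax income, net of preferred dividends, over shareholders'
# equity (assets minus liabilities).
shareholders_equity = assets - liabilities
roe = (net_income_after_tax - preferred_dividends) / shareholders_equity

# RNOA (as described above): pre-tax income over total assets,
# with no deduction of liabilities.
rnoa = net_income_before_tax / assets

print(f"ROE  = {roe:.1%}")   # 13.3%
print(f"RNOA = {rnoa:.1%}")  # 12.0%
```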


Moderation (statistics)

In statistics and regression analysis, moderation occurs when the relationship between two variables depends on a third variable. The third variable is referred to as the moderator variable, or simply the moderator. [1] The effect of a moderating variable is characterized statistically as an interaction; [1] that is, a categorical (e.g., sex, ethnicity, class) or quantitative (e.g., level of reward) variable that affects the direction and/or strength of the relation between dependent and independent variables. Specifically, within a correlational analysis framework, a moderator is a third variable that affects the zero-order correlation between two other variables, or the value of the slope of the dependent variable on the independent variable. In analysis of variance (ANOVA) terms, a basic moderator effect can be represented as an interaction between a focal independent variable and a factor that specifies the appropriate conditions for its operation. [2]
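A minimal sketch of how moderation is detected in regression (simulated data; the variable names and effect sizes are illustrative): the moderator enters the model through a product (interaction) term, and a nonzero interaction coefficient indicates that the slope of y on x depends on the moderator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
x = rng.normal(size=n)               # focal independent variable
m = rng.integers(0, 2, size=n)       # binary moderator (e.g., group membership)

# Simulated outcome: the slope of y on x is 1.0 when m = 0 and 2.5 when m = 1.
y = 1.0 * x + 1.5 * (m * x) + 0.5 * m + rng.normal(scale=0.5, size=n)

# Regression with an interaction term: y ~ 1 + x + m + x*m.
X = np.column_stack([np.ones(n), x, m, x * m])
b0, b_x, b_m, b_xm = np.linalg.lstsq(X, y, rcond=None)[0]

# b_xm estimates the moderation effect: how much the slope changes with m.
print(round(b_xm, 1))  # ≈ 1.5
```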


Abstract

Emotions and affective feelings are influenced by one's internal state of bodily arousal via interoception. Autism Spectrum Conditions (ASC) are associated with difficulties in recognising others' emotions, and in regulating own emotions. We tested the hypothesis that, in people with ASC, such affective differences may arise from abnormalities in interoceptive processing. We demonstrated that individuals with ASC have reduced interoceptive accuracy (quantified using heartbeat detection tests) and exaggerated interoceptive sensibility (subjective sensitivity to internal sensations on self-report questionnaires), reflecting an impaired ability to objectively detect bodily signals alongside an over-inflated subjective perception of bodily sensations. The divergence of these two interoceptive axes can be computed as a trait prediction error. This error correlated with deficits in emotion sensitivity and occurrence of anxiety symptoms. Our results indicate an origin of emotion deficits and affective symptoms in ASC at the interface between body and mind, specifically in expectancy-driven interpretation of interoceptive information.


Discussion

This study demonstrates that social judgments of dominance and trustworthiness from spoken utterances are driven by robust mental prototypes of pitch contours, using a code that is identical across sender and observer gender, and that prosodic mental representations such as these can be uncovered with a technique combining state-of-the-art pitch manipulations and psychophysical reverse correlation.
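The reverse-correlation logic behind this technique can be sketched as follows. This is a toy simulation with a simulated listener and made-up contour parameters, not the study's actual stimuli or analysis pipeline: random pitch contours are applied to an utterance, and averaging the contours that elicited one judgment minus those that elicited the other recovers the listener's internal template.

```python
import numpy as np

rng = np.random.default_rng(2)

# On each trial a random pitch contour (cents, one value per time segment)
# is applied to a flat-pitch utterance; the listener judges "dominant" or not.
n_trials, n_segments = 5_000, 6
contours = rng.normal(scale=100.0, size=(n_trials, n_segments))

# Simulated listener: prefers a falling contour (an assumed internal template).
template = np.linspace(1.0, -1.0, n_segments)
judged_dominant = (contours @ template
                   + rng.normal(scale=200.0, size=n_trials)) > 0

# First-order kernel: mean chosen contour minus mean rejected contour.
kernel = contours[judged_dominant].mean(axis=0) \
         - contours[~judged_dominant].mean(axis=0)
kernel /= np.linalg.norm(kernel)

print(np.corrcoef(kernel, template)[0, 1] > 0.9)  # recovers the template shape
```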

The mental representations found here for dominant prosody, which combine lower mean pitch with a decreasing dynamical pattern, are consistent with previous research showing that people’s judgments of dominance can be affected by average pitch and pitch variability (19, 20). Similarly, trustworthy prosodic prototypes, which combine a moderate increase of mean pitch with an upward dynamical pattern, are consistent with findings that high pitch, as well as, e.g., slow articulation rate and smiling voice, increase trusting behaviors toward the speaker (ref. 21 but see ref. 8). Beyond mean pitch, the temporal dynamics of the patterns found here were also consistent with previous associations found between general pitch variations and personality or attitudinal impressions, e.g., falling pitch in assertive utterances (22) or rising pitch in affiliatory infant-directed speech (23). However, the present results show that mental representations for a speaker’s dominance or trustworthiness can and should be described in much finer temporal terms than a general rising or falling pitch variation. First, our participants were in striking agreement on shapes sampled at <100 ms (Fig. S3) and the fine details of these shapes, while they generalized to a variety of utterances and (in the case of dominance) even to other two-syllable words, varied depending on the morphology of the words (Fig. S4). Second, participants gave significantly better evaluations of, e.g., dominance for falling-pitch profiles that were prototype-specific, rather than obtained by inverting the profile of the other construct (Fig. 3).

The fact that both male and female participants relied on the same dynamic pitch prototypes to perceive dominance and trustworthiness in speech is in striking contrast to previous findings of gender effects on vocal dominance judgments (6, 8), and, more generally, the sexual dimorphic features of the human voice (24). Our paradigm, in which pitch variations are generated algorithmically based on otherwise flat-pitch utterances, is able to control for incident variations of male and female prosody that may have obfuscated this processing similarity in previous studies. This finding, which provides behavioral evidence of a unique code for intonation, is consistent with a recent study suggesting that intonation processing is rooted at early processing stages in the auditory cortex (25). Gender symmetry, and more generally independence from a speaker’s physical characteristics, seems a very desirable property of a code governing social trait attribution: For instance, judgments of voice attractiveness, which increases via averaging, are also highly similar across gender (26). By focusing on temporal variations in addition to static pitch level, the prosodic code uncovered here appears to be a particularly robust strategy, enabling listeners to discriminate, e.g., dominant from submissive males, even at a similarly low pitch.

While our study shows that both dimensions have distinct prosodic prototypes that are robust within and across participants, the extent to which prototypes inferred on a given word explained the responses on other words differed, with better explanatory power for dominance than trustworthiness. First, it is possible that the trustworthiness prototype, because it appears to be more finely dynamic and tuned to the temporal morphology of the original two-syllable word, is more discriminating of any acoustic–phonetic deviations from this pattern than the smoother dominance kernel. Second, it is also possible that the position of a given exemplar with regard to the prototype is exploited more conservatively in the case of trustworthiness than dominance. In particular, the analysis of response probabilities as a function of the mean pitch change in experiment 2 (Exp. 2; Fig. S4C) shows a more strongly nonlinear relationship in the case of trustworthiness, suggesting that there is such a thing as being "too trustworthy." This pattern is consistent with a recent series of neuroimaging results showing nonlinear amygdala responses to both highly trustworthy and highly untrustworthy faces relative to neutral (27, 28), as well as, behaviorally, more negative face evaluations the more they deviate from a learned central tendency for trustworthiness, but less so for dominance (29).

Given the simple and repetitive nature of the judgment tasks, it appears important to consider whether some degree of participant learning or demand may be involved in the present results. First, one should note that, in Exp. 1, the same intonation pattern was never presented twice. On the contrary, we presented several thousand different, random intonation patterns across the experiment, in such a way that the experimenters did not a priori favor one shape over another. In Exp. 2, prototypes inferred from Exp. 1 were repeated, but also interleaved with random variations. Therefore, it is unlikely that participants were able to discover, then respond differentially to, one particular pitch pattern as the experiment unfolded (see also Fig. S5). While this does not exclude the possibility that participants set themselves an arbitrary response criterion from the onset of the experiment, this criterion can in no way be guided by conditions decided in advance by the experimenter. Second, because dominance and trustworthiness tasks were conducted on independent groups of participants, the opposite (although nonsymmetric) patterns found for the two constructs cannot be attributed to transfer effects from one task to the other (30). The question remains, however, whether the prototypes evoked in explicit tasks such as the ones described here are consciously accessible to the participants and whether they are similar to those prototypes used in computations in which the corresponding traits are involved, but not directly assessed (see, e.g., ref. 31).

These findings, and the associated technique, bring the power of reverse-correlation methods to the vast domain of speech prosody and thus open avenues of research in communicative behavior and social cognition. First, while these results were derived by using single-word utterances, they initiate a research program to explore how they would scale up to multiword utterances and, more generally, how expressive intonation interacts with aspects of a sentence such as its length, syntax, and semantics. Analyses of infant utterances at the end of the single-word period (32) suggest that prosodic profiles are stretched, rather than repeated, over successive words. Whether such production patterns are reflected in listeners’ mental representations can be tested with our technique by using multiword utterances manipulated with single-word filters that are either repeated or scaled to the duration of the excerpts. Another related question concerns how social intonation codes interact with the position of focus words or with conjoint syntactic intonation, both of which are also conveyed with pitch. For instance, English speakers required to maintain focus on certain words may eliminate emotional f0 distinctions at these locations (33). These interactions can be studied with our technique by using reverse correlation on baseline sentences which, contrary to the flat-pitch stimuli used here, already feature prosodic variations or focus markers.

Second, although arguably most important, suprasegmental pitch variations are not the only constitutive elements of expressive prosody, which also affects an utterance’s amplitude envelope, speech rate, rhythm, and voice quality (34). By applying not only random pitch changes on each temporal segment, but also loudness, rate, and timbre changes (35), our paradigm can be extended to reveal listeners’ mental representations of social prosody along these other auditory characteristics and, more generally, probe contour processing in the human auditory system for other dimensions than pitch, such as loudness and timbre (36). Similarly, while judgements of dominance and trustworthiness may be of prime importance in the context of encounters with strangers, in intragroup interactions with familiar others, e.g., in parent–infant dyads, it may be more important to evaluate states, such as the other’s emotions (e.g., being happy, angry, or sad) or attitudes (e.g., being critical, impressed, or ironic). Our method can be applied to all of these categories.

By measuring how any given individual’s or population’s mental representations may differ from the generic code, data-driven paradigms have been especially important in studying individual or cultural differences in face (13, 16) or lexical processing (37). By providing a similar paradigm to map mental representations in the vast domain of speech prosody, the present technique opens avenues to explore, e.g., dysprosody and social-cognitive deficits in autism spectrum disorder (38), schizophrenia (39), or congenital amusia (40), as well as cultural differences in social and affective prosody (41).

Finally, once derived experimentally with our paradigm, pitch prototypes can be reapplied to novel recordings as social makeup so as to modulate how they are socially processed, while preserving their nonprosodic characteristics such as speaker identity. This process provides a principled and effective way to manipulate personality impressions from arbitrary spoken utterances and could form the foundation of future audio algorithms for social signal processing and human–computer interaction (42).


Passion, grit and mindset in young adults: Exploring the relationship and gender differences

The main aim of the study was to explore the associations between passion, grit and mindset in a group of young Icelandic adults. The sample consisted of 146 participants. The eight-item Passion Scale was used to assess passion, and the Grit-S scale was used to assess grit. Mindset was measured with the eight-item Theories of Intelligence Scale (TIS).

The results show a significant difference between females and males in the passion factor only, in favor of males. In addition, the results indicated significant correlations between all factors for the group as a whole: passion and grit, r = .435; passion and mindset, r = .260; grit and mindset, r = .274. The results for each gender separately indicate the same pattern for the females: significant correlations between all the factors (passion–grit, r = .382; passion–mindset, r = .299; grit–mindset, r = .356). For the males the pattern was different: there were significant correlations for passion–grit, r = .500, and passion–mindset, r = .260, but no significant correlation for grit–mindset, r = .215. The results indicate gender differences in the associations between passion, grit and mindset.


WHAT DO EMPATHS HAVE TO DO WITH THIS?

There are multiple opinions on the connection between empaths and co-dependency. One of them talks of the great need empaths generally feel to understand people around them and even provide support as far as they can.

The catch here is that empaths typically attract people who either have a victim narrative or an attention-seeking narrative (the narcissist). What compounds the problem further is that empaths usually have poor boundaries themselves, which means they can get sucked into such relationship patterns rather easily.

Another view on this connection is that co-dependency can often be masked under the label of “empath”. How? Well, you may genuinely be an empath, with a high quotient of empathy and sensitivity, but you may not be aware of how this plays out in your relationships. So you might stay in the same relationships, feeling stuck, feeling unhappy, unconsciously playing out the co-dependent side of your own nature.


Positive Correlation

When two related variables move in the same direction, their relationship is positive. This correlation is measured by the coefficient of correlation (r). When r is greater than 0, the correlation is positive; when r is +1.0, it is a perfect positive correlation. Examples of positive correlations occur in most people's daily lives. For instance, the more money spent on advertising, the more customers buy from the company. Because this relationship is often difficult to measure precisely, its coefficient of correlation would likely be less than +1.0. A stronger correlation exists between the hours an employee works and the size of that employee's paycheck.

Correlation is suitable when analyzing the relationship between significant, quantifiable data.
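As a sketch of the hours-versus-paycheck example above, with made-up numbers, the coefficient can be computed directly:

```python
import numpy as np

# Hypothetical data: hours worked and the resulting paycheck (dollars).
# Noise (overtime rates, bonuses) keeps the coefficient just below +1.0.
hours = np.array([10, 20, 25, 30, 40, 45])
pay = np.array([150, 310, 360, 470, 590, 700])

r = np.corrcoef(hours, pay)[0, 1]
print(0.9 < r < 1.0)  # strong, but not perfect, positive correlation
```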


Understanding Negative Correlation

When two variables are correlated, the relative changes in their values appear to be linked. This pattern may be the result of the same underlying cause or could be pure coincidence. It is thus important to recognize the adage, "correlation does not imply causation." Nevertheless, correlation is an important statistical tool used to measure the strength of a relationship between two or more variables.

This measure is expressed numerically by the correlation coefficient, sometimes denoted by 'r' or the Greek letter rho (ρ). The values assigned to the correlation coefficient range from -1.0 to 1.0. A "perfect" positive correlation of +1.0 would mean that two variables move exactly in lockstep with one another—so if variable A increases by two, so does variable B. A "perfect" negative correlation of -1.0, by contrast, would indicate that the two variables move in opposite directions with equal magnitude—if A increases by two, B decreases by two.

In reality, very few factors are perfectly correlated either way, and the correlation coefficient will fall somewhere within the negative-one-to-one range. Note that a correlation of zero suggests that there is no relationship between two variables and their movements are completely unrelated or random to one another.

Negative correlations occur naturally in many contexts. For instance, as snowfall increases, fewer drivers appear on the road; as a cow gets older, her milk production drops; as you exercise more, you tend to lose weight; and more cats in a neighborhood is associated with fewer mice. Negative correlations also appear in the world of economics and finance.
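The same computation, with made-up numbers for the exercise-and-weight example, plus a perfectly negative pair:

```python
import numpy as np

# Hypothetical data: weekly exercise hours vs. body weight (kg).
exercise = np.array([0, 2, 4, 6, 8, 10])
weight = np.array([92, 88, 85, 83, 80, 76])

r = np.corrcoef(exercise, weight)[0, 1]
print(-1.0 <= r < -0.9)  # strong negative correlation

# A "perfect" negative correlation: every +2 in a is exactly -2 in b.
a = np.array([1.0, 3.0, 5.0, 7.0])
b = np.array([10.0, 8.0, 6.0, 4.0])
print(np.corrcoef(a, b)[0, 1])  # ≈ -1.0
```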


Results

Reverse correlation deviates from true sensory weights

In a typical reverse correlation experiment, subjects observe a sequence of noisy sensory stimuli and try to detect the presence of a target or categorize a stimulus 3,27,28,29,30 (Fig. 1). The stimuli could be a random dot kinematogram 26,27 , oriented gratings or bars 4,31 , or any other sensory inputs that randomly vary within or across trials along one or more stimulus attributes. The reverse correlation analysis calculates the relationship between subjects’ choice and stimulus fluctuations by averaging over the stimuli that precede a particular choice. For two-alternative decision tasks, the analysis yields two kernels, one for each choice. Because of symmetry of the two choices, the kernels tend to be mirror images of each other 27,32 . Therefore, it is customary to subtract the two kernels and report the result (Fig. 1b):

$$K(t) = E[s(t)\,|\,\mathrm{choice\ 1}] - E[s(t)\,|\,\mathrm{choice\ 2}]$$

where E[s(t)|choice 1] indicates the trial average of the stimulus at time t conditional on choice 1, s(t) is the stimulus drawn from a stochastic function with symmetric noise (e.g., Gaussian), and K(t) is the magnitude of the psychophysical kernel at time t.

Psychophysical kernels are guaranteed to match the sensory filters when decisions are made by applying a static nonlinearity 6,7 , for example, comparison to a decision criterion, as suggested by SDT 8 . However, recent advances suggest that SDT offers an incomplete characterization of the decision-making process. In particular, many perceptual decisions depend on integration of sensory information toward a decision bound 13,14,15,16,28,33,34 , the decision bound can vary based on speed–accuracy tradeoff 17,18 , the integration is influenced by urgency 35,36,37 and prior signals 14,33,38,39 , and experimentally measured RTs consist of a combination of decision time and non-decision time 24,25 .

A simple and commonly used class of decision-making models that takes these intricacies into account and provides a quantitative explanation of behavior in perceptual tasks is the drift diffusion model (DDM) 13,14,15 and its extensions 16,18,19,20,22,40 . In DDM, weighted sensory evidence is integrated over time until the integrated evidence (the decision variable, DV) reaches either an upper (positive) or a lower (negative) bound (Fig. 2), where each bound corresponds to one of the choices. We begin our exploration with the most basic model but will focus on more complex implementations later in the paper.

The drift diffusion model (DDM) captures the core computations for perceptual decisions made by integration of sensory information over time. We use variants of this model and more sophisticated extensions to explore how the decision-making mechanism influences psychophysical kernels. In DDMs, a weighting function, w(t), is applied to the sensory inputs to generate the momentary evidence, which is integrated over time to form the decision variable (DV). The DV fluctuates over time due to changes in the sensory stimulus and neural noise for stimulus representation and integration. As soon as the DV reaches one of the two decision bounds (+B for choice 1 and −B for choice 2), the integration terminates and a choice is made (decision time). However, reporting the choice happens after a temporal gap due to sensory and motor delays (non-decision time). Experimenters know about the choice after this gap and can measure only the reaction time (the sum of decision and non-decision times) but not the decision time

Neither the integration process nor the boundedness of the integration per se causes a systematic deviation of psychophysical kernels from true sensory weights. We define true sensory weights as the weights applied to the sensory stimulus to create the momentary evidence that will be accumulated over time for making a decision. In Methods, we provide the mathematical proof that in a simple DDM where decision bound and noise are constant over time and behavioral responses are generated as soon as the DV reaches one of the bounds (non-decision time = 0), psychophysical kernels are proportional to the sensory weights:

$$K(t) = \frac{\sigma_s^2}{B}\,w(t)$$

where w(t) is the time-dependent weight, $\sigma_s^2$ is the variance of stimulus fluctuations, and B is the height of the decision bound. Similar results can be obtained for unbounded DDMs (Eq. 14). Figure 3 shows simulations that confirm our proofs. Reverse correlation for an unbounded integration process with constant or sinusoidally varying weights recovers the true weighting function (Fig. 3a–c, Supplementary Fig. 1). Similarly, it yields the true weights for a bounded DDM (Fig. 3d–e, h), regardless of the decision-bound height.
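The unbounded case can be reproduced in a few lines. This is a toy simulation with illustrative parameters, not the paper's code: with flat weights, the stimulus-aligned kernel comes out flat, matching the true weighting function up to a scale factor.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unbounded integrator with stationary weights w(t) = 1 (cf. Fig. 3a-c).
# Choice 1 is reported when the final decision variable is positive.
n_trials, n_steps = 20_000, 100
sigma_s = 1.0
s = rng.normal(scale=sigma_s, size=(n_trials, n_steps))  # stimulus fluctuations
choice1 = s.sum(axis=1) > 0                              # integrate to the end

# Stimulus-aligned psychophysical kernel: E[s(t)|choice 1] - E[s(t)|choice 2].
kernel = s[choice1].mean(axis=0) - s[~choice1].mean(axis=0)

# The kernel is flat (each frame contributes equally), so its variation
# across time is only estimation noise.
print(kernel.std() / kernel.mean() < 0.15)

# For this Gaussian case the flat level is 2*sigma_s*sqrt(2/pi)/sqrt(n_steps).
expected = 2 * sigma_s * np.sqrt(2 / np.pi) / np.sqrt(n_steps)
print(abs(kernel.mean() - expected) < 0.01)
```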

Psychophysical kernels deviate from sensory weights in DDM because of incomplete knowledge about decision time. (a–c) Integration of evidence per se does not preclude accurate recovery of sensory weights. For an unbounded DDM that integrates momentary evidence as long as sensory inputs are available, the kernel matches the true sensory weights. In this simulation, the weight is stationary and fixed at 1, but similarly matching results are obtained for any sensory weight (c; Supplementary Fig. 1). Distortion quantifies root-mean-square error between the psychophysical kernel and the true sensory weights (Eq. 17). (d–h) The decision bound does not preclude accurate recovery of sensory weights. In a bounded DDM without non-decision time, RTs are identical to decision time (d). Model simulations for RT tasks result in stimulus-aligned kernels that match sensory weights (e) and response-aligned kernels that rise monotonically (f), as expected for termination with bound crossing. However, stimulus-aligned kernels in fixed-duration tasks show a monotonic decrease because later stimuli are less likely to influence the choice (g). This deviation from true sensory weights is caused by early commitment to a choice and becomes smaller as the decision bound rises (h). (i–m) Variability of non-decision time makes reaction time an unreliable estimate of decision time, causing systematic deviations between psychophysical kernels and true sensory weights. After including non-decision time in the bounded DDM, stimulus-aligned kernels in RT tasks show a monotonic decrease because the stimuli that immediately precede the choice do not contribute to it (j). Response-aligned kernels show a peak, whose time depends on the distribution of non-decision times (k). Kernels for fixed-duration tasks are not affected by non-decision time (l) but still show the decline caused by bound crossing, similar to g. Deviation of stimulus-aligned kernels in the RT task increases with variability of non-decision time (m). Standard deviation of non-decision time is assumed to be 1/3 of its mean in these simulations. All kernels are normalized according to Eq. 2 or Eq. 14 to allow direct comparison with the true sensory weights (see Methods).

Although the proportionality in Eq. 2 may suggest that psychophysical kernels can be successfully used to recover spatiotemporal dynamics of sensory weights, critical limitations prevent that in practice, as we explain below. The most common limitation is the experimenter’s lack of knowledge about decision time, which is caused by asynchrony between the time that the DV reaches a decision bound (bound-crossing time or decision time) and the subject’s report of the decision (when the choice becomes known to the experimenter). Such asynchronies stem from two sources: delays in neural circuitry and experimental design.

In many experiments, subjects are exposed to the stimulus for a duration determined by the experimenter and can report their choice only after a Go cue. In these “fixed-duration” designs, the exact decision time and its trial-to-trial variability are unknown to the experimenter, and decision times are likely to be prior to the Go cue 27,41 . Because stimuli presented after the bound-crossing time do not contribute to the choice (or contribute less), including that period in the calculation of psychophysical kernels leads to a progressive underestimation of sensory weights 26,27,30 , causing a systematic deviation from Eq. 2 (Fig. 3g–h), compatible with past studies 42 . The diminishing kernel (Fig. 3g) correctly characterizes the effective reduction of the influence of the sensory stimulus on choice. However, note that from an experimenter’s perspective, the shape of the kernel is inadequate to tell whether the reduced influence of the stimulus is caused by a change in sensory weights, by early termination of the decision during stimulus viewing, by a combination of both, or by another mechanism in the decision-making process (see below). Such a mechanistic understanding could be achieved only if the experimental design is enriched and a model-based approach is adopted. Although there are successful examples of achieving such goals 26,27 , fixed-duration tasks impose significant limitations on experimenters’ ability to determine the beginning and end of the decision-making process (cf. ref. 41 ), which would be necessary for separating sensory and decision-making mechanisms that shape psychophysical kernels.

Experimental designs in which subjects respond as soon as they make their decision (RT tasks; Fig. 3e) enable measurement of decision times and can be used to address the problem. However, RT tasks come with their own challenges. Sensory and motor delays are among them (Fig. 3i). Although the presence of such delays is widely appreciated, their effect on psychophysical kernels is unexplored. These delays effectively create a temporal gap between bound crossing and the report of the decision, making stimuli immediately before the report inconsequential for the decision. Figure 3j shows that non-decision times pull down the psychophysical kernel. These systematic reductions can cause the illusion of nonstationarity for stationary sensory weights (Fig. 3j, m) or distort the dynamics of time-varying weights (Supplementary Fig. 2).

What makes the psychophysical reverse correlation especially vulnerable to non-decision times is the variable nature of the sensory and motor delays 43,44 . A fixed non-decision time would cause a readily detectable signature (Supplementary Fig. 3) and is easy to correct for by excluding the final portion of the stimulus in each trial that corresponds to the non-decision time. Similarly, if the non-decision time was variable but we could know the exact delay on each trial, we could easily discard the corresponding period at the end of the stimulus to correct for the artificial dynamics caused by the non-decision time. In practice, however, the non-decision time is not a fixed number 25 . Further, the variability of non-decision time is often of the same order of magnitude as the decision time 20,34,45 , making it challenging to thoroughly scrub away the effect of non-decision time just by trimming the stimuli. A more efficient solution is to embrace the distortion caused by the non-decision time, develop an explicit model of both sensory and decision-making mechanisms, and compare the model predictions with experimentally derived kernels (see the next section).

The fixed-duration design is not affected by the non-decision time, if there is a long enough delay between the stimulus and Go cue or if the stimulus duration is long enough to exceed the tail of the reaction time distribution in an equivalent RT task design (Fig. 3l–m). However, as mentioned above, lack of knowledge about the beginning and end of the integration process in fixed-duration tasks impedes mechanistic studies of kernel dynamics.

So far, we have focused on psychophysical kernels aligned to the stimulus onset. In an RT task, the stimulus-viewing duration varies from trial to trial and we can choose to align the kernel to subjects’ responses. Such an alignment is informative both about the termination mechanism of the decision-making process and about the distribution of non-decision times. When the decision-making process stops by reaching a decision bound, the kernel is guaranteed to show a steep rise close to the decision time (Fig. 3f) because stopping is conditional on a stimulus fluctuation that takes the DV beyond the bound. This rise of the kernel does not indicate an increase of sensory weights immediately before the decision. Further, the magnitude of this rise is not always fixed and depends on the decision bound and distribution of non-decision times (see below; Supplementary Fig. 3). In the presence of a variable non-decision time (Fig. 3k), response-aligned kernels peak and then drop down to zero before the response. The drop happens because the non-decision time causes later fluctuations in the stimulus not to bear on the choice. The difference between the peak of the kernel and the reaction time is dependent on the mean and standard deviation of the non-decision time distribution, as well as its higher moments (Supplementary Fig. 3). Since it is known that the distribution of non-decision times can be quite diverse 46 , the shape of the response-aligned psychophysical kernels can provide an important clue about the distribution of non-decision times, as well as a verification of model-based attempts to recover it 46 . Overall, psychophysical kernels aligned to the response are influenced by sensory weights, the termination criterion of the decision, and the non-decision time.
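A toy simulation that aligns stimulus fluctuations to the moment of the report reproduces the peak-then-drop shape described above. All parameters below are hypothetical, and only trials long enough to fill the analysis window are used.

```python
import numpy as np

rng = np.random.default_rng(1)

def response_aligned_kernel(n_trials=20000, n_steps=200, bound=5.0,
                            ndt_mean=15, ndt_sd=5, window=40):
    """Average the last `window` stimulus frames before the report,
    signed by choice, in a bounded DDM (illustrative parameters)."""
    ker = np.zeros(window)
    n_used = 0
    for _ in range(n_trials):
        stim = rng.normal(0.0, 1.0, n_steps)
        dv = np.cumsum(stim)
        crossed = np.nonzero(np.abs(dv) >= bound)[0]
        if crossed.size == 0:
            continue                          # no decision within the trial
        t_dec = crossed[0]
        choice = 1.0 if dv[t_dec] > 0 else -1.0
        ndt = max(0, int(round(rng.normal(ndt_mean, ndt_sd))))
        rt = t_dec + ndt                      # report time
        if rt >= n_steps or rt < window:
            continue                          # trial too short for the window
        ker += choice * stim[rt - window + 1:rt + 1]
        n_used += 1
    return ker / max(n_used, 1)

ker = response_aligned_kernel()
```

In this sketch the kernel peaks roughly one mean non-decision time before the report (the bound-crossing fluctuation, smeared by the variable delay) and falls toward zero at the report itself, since the last frames almost never influence the choice.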

Experimental measurements confirm model predictions

The results of the previous section suggest that psychophysical kernels reflect a mixture of sensory and decision-making processes. By embracing this complexity, one can leverage psychophysical kernels to gain insight about both processes. The key is to develop explicit models and compare model predictions against experimentally derived kernels. Below, we highlight two experiments designed to achieve this goal.

The first experiment is an RT version of the direction discrimination task 20,35 . On each trial, subjects viewed a random dot stimulus and made a saccadic eye movement to one of the two targets as soon as they were ready to report their choice (Fig. 4a). Consistent with previous studies, accuracy improved and RTs decreased monotonically with motion strength (Fig. 4b–c) 20,35 . We quantified moment-to-moment fluctuations of motion in each trial by calculating the motion energy 27,47,48 (see Methods; Supplementary Fig. 4). Figure 4d shows the average and standard deviation of motion energies across all 0% coherence trials and four single-trial examples. As expected, single-trial motion energies departed from 0 with a short latency 47 and then fluctuated between positive and negative values, which corresponded to the two motion directions discriminated by subjects. Across all 0% coherence trials, these fluctuations canceled each other out, resulting in a zero mean, but the standard deviation remained large, indicating short bouts of varying motion strengths in either direction throughout the trial. The stochastic nature of the stimulus and the known effect of motion energy on the choice 27,48,49 provided an excellent opportunity to quantify how stimulus dynamics shaped the behavior.

Psychophysical kernels in the direction discrimination task match predictions of a bounded DDM with non-decision time. a RT task design. Subjects initiated each trial by fixating on a central fixation point. Two targets appeared after a short delay, followed by the random dot stimulus. When ready, subjects indicated their perceived motion direction with a saccadic eye movement to a choice target. The net motion strength (coherence) varied from trial to trial, but also fluctuated within trials due to the stochastic nature of the stimulus. b, c Choice accuracy increased and RTs decreased with motion strength. Data points are averages across 13 subjects. Accuracy for 0% motion coherence is 0.5 by design and therefore not shown. Gray lines are fits of a bounded DDM with non-decision time. Error bars denote s.e.m. across subjects. d Motion energy of example 0% coherence trials (dotted lines), and the average (solid black line) and standard deviation (shading) of motion energy across all 0% coherence trials. Positive and negative motion energies indicate the two opposite motion directions in the task. e, f The bounded DDM predicts psychophysical kernels (gray lines), which accurately match the dynamics of subjects’ kernels (red lines). Because the model sensory weights are stationary, kernel dynamics in the model are caused by the decision-making process and non-decision times. Kernels are calculated for 0% coherence trials. Shading indicates s.e.m. across subjects. All kernels are shown up to the minimum of the median RTs across subjects

Experimentally derived kernels for 0% coherence trials (Fig. 4e–f, red lines) showed a clear nonstationarity with remarkable resemblance to the kernels expected from a DDM with non-decision time and stationary sensory weights (Fig. 3j–k; the delayed rise of the psychophysical kernel in Fig. 4e is inherent to the motion energy calculation, as shown in Fig. 4d). We quantitatively tested the hypothesis that kernel dynamics reflect bound crossing and non-decision time by fitting the DDM to subjects’ choices and RTs and generating a model prediction for the psychophysical kernels. Consistent with past studies, the distribution of RTs and choices across trials provided adequate constraints for estimating all model parameters 20,35 , evidenced by the quantitative match between subjects’ accuracy and RTs with model fits (data points vs. solid gray lines in Fig. 4b–c; R² = 0.97 ± 0.01 for accuracy and 0.98 ± 0.01 for RTs, mean ± s.e.m. across subjects). After estimating the model parameters, we used them to predict the shape of the psychophysical kernel for the 0% coherence motion energies used in the experiment. These predicted kernels (Fig. 4e–f, solid gray lines) closely matched the experimentally derived ones (R² = 0.57), establishing that the dynamics of the kernels were both qualitatively and quantitatively compatible with stationary sensory weights and a decision-making process based on bounded accumulation of evidence.

In a second experiment, we focused on a more complex sensory decision that required combining multiple spatial features over time (Fig. 5a). Subjects categorized faces based on their similarity to two prototypes. Each face was designed to have only three informative features (eye, nose, and mouth) (Fig. 5b). On each trial, the mean strengths (percent morph) of these three features were similar and randomly chosen from a fixed set spanning the morph line between the two prototypes. However, the three features fluctuated independently along their respective morph lines every 106.7 ms (Fig. 5c). All other parts of the faces remained fixed halfway between the two prototypes and, therefore, were uninformative. Further, each frame of the face stimulus was quickly masked to prevent conscious perception of fluctuations in eyes, nose, and mouth. Subjects reported the identity of the face (closer to prototype 1 or 2) with a saccade to one of the two targets, as soon as they were ready. The key difference with the direction discrimination task was that instead of one stimulus attribute that fluctuated over time (motion energy), there were three attributes that fluctuated independently. The three informative features could support the same or different choices in each stimulus frame and across frames. This task provided a richer setting to test how humans combine multiple spatial features to make a decision.

Psychophysical reverse correlation in a face discrimination task with multiple informative features reveals relative weighting of features and kernel dynamics similar to the direction discrimination task. a Task design. Subjects viewed a sequence of faces interleaved with masks and reported whether the face identity matched one of two prototypes. They reported their choice with a saccadic eye movement to one of the two targets, as soon as ready. b Using a custom algorithm, we designed intermediate morph images between the two prototype faces such that only three facial features (eyes, nose, and mouth) could be informative. These features were morphed independently from one prototype (+100% morph) to another (−100% morph), enabling us to create stimuli in which different features could be biased toward different identities. All regions outside the three informative features were set to halfway between the prototypes and were uninformative. c The three informative features underwent subliminal fluctuations within each trial (updated with 106.7-ms interval). The mean morph levels of the three features were similar but varied across trials. Fluctuations of the three features were independent (Gaussian distribution with standard deviations set to 20% morph level). d, e Choice accuracy increased and RTs decreased with stimulus strength. Data points are averages across nine subjects. Error bars are s.e.m. across subjects. Gray lines are model fits. f The DDM used to fit subjects’ choices and RTs extends the model in Fig. 2 by assuming different sensitivity for the three informative features. Momentary evidence is a weighted average of three features where the weights correspond to the sensitivity parameters. The momentary evidence is integrated toward a decision bound. g Psychophysical kernels estimated from the model (gray lines) match subjects’ kernels for the three features. Shaded areas are s.e.m. across subjects

Consistent with the simpler direction discrimination task, as the average morph level of the three features approached one of the prototypes, choices became both more accurate and faster (Fig. 5d–e). The psychophysical kernels of the three features (Fig. 5g) had rich dynamics. First, the eye kernels had larger amplitude than the mouth and nose kernels, suggesting that choices were more strongly influenced by fluctuations in the eye region 50,51 . Second, the stimulus-aligned kernels dropped gradually over time, and the saccade-aligned kernels showed a characteristic peak a few hundred milliseconds prior to the choice. A multi-feature integration process with stationary weights for the eyes, nose, and mouth regions could quantitatively explain our results. For each stimulus frame, the model calculated a weighted sum of the three features to estimate the momentary sensory evidence and then integrated this momentary evidence over time in a bounded diffusion model (Fig. 5f, see Methods). Fitting the model to the choice and RT distributions provided a quantitative match for both (Fig. 5d–e; R² = 0.998 ± 0.001 for accuracy and 0.98 ± 0.01 for RTs) and the resulting parameters led to kernels that well matched the dynamics of experimentally observed kernels for the three features (R² = 0.74).

Testing for temporal dynamics of sensory weights

Our exploration of the model and fits to experimental data in the previous sections focused largely on cases in which sensory weights were static and the dynamics of the psychophysical kernel were solely due to the decision-making process. However, as discussed earlier, changes of sensory weights could also be a major factor in shaping psychophysical kernels (Fig. 3c, Supplementary Fig. 1 and 2). In theory, a model-based approach to understanding kernel dynamics should be able to distinguish changes of sensory weights from decision-making processes because of their distinct effects on the choice and RT distributions. To test this prediction, we simulated a direction discrimination experiment in which decisions were made by accumulation of weighted sensory evidence toward a bound in the presence of non-decision time and various dynamics of sensory weights (Supplementary Fig. 5). Then, we used the simulated choice and RT distributions to fit an extended DDM that allowed temporal dynamics of sensory weights. The model recovered the weight dynamics and accurately predicted psychophysical kernels of the simulated experiments in each case (Supplementary Fig. 5). A few thousand trials, similar to those available in our experimental datasets, were adequate to achieve accurate fits and predictions. Therefore, there does not seem to be critical limitations in the ability of a model-based approach to detect sensory weight dynamics, when such dynamics are present.
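One simple way to let sensory weights vary over time, in the spirit of the linear and quadratic terms used in the extended DDM (the exact functional forms of Eqs. 23 and 25 appear in the Methods and may differ from this sketch), is:

```python
import numpy as np

def sensory_weights(t, w0=1.0, beta1=0.0, beta2=0.0):
    """Hypothetical weight profile with linear and quadratic dynamics.
    beta1 = beta2 = 0 recovers static weights."""
    return w0 * (1.0 + beta1 * t + beta2 * t**2)

t = np.linspace(0.0, 1.0, 50)               # time within a trial (a.u.)
static = sensory_weights(t)                 # flat profile
decaying = sensory_weights(t, beta1=-0.8)   # linearly declining weights
```

In a simulated trial, the momentary evidence at time t would simply be multiplied by this weight before being accumulated, so fitting beta1 and beta2 tests whether weight dynamics are needed to explain the data.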

Having established the model’s ability to recover weight dynamics, we extended the DDMs used in the previous section to explore dynamics of sensory weights for human subjects. The extended models included linear and quadratic terms to capture a wide variety of temporal dynamics (Eqs. 23 and 25). The results did not support substantial temporal dynamics of sensory weights in either task (12 out of 13 subjects of the direction discrimination task and all subjects of the face discrimination task showed static weights). Overall, the addition of temporal dynamics to the weight function did not significantly improve the fits or the match between model and experimental psychophysical kernels (for direction discrimination, Eq. 23: β1 = −3.0 ± 1.6 across subjects, p = 0.10, median −0.65; β2 = −2.1 ± 2.3, p = 0.36, median 0.17; for face discrimination, Eq. 25: β1 = −0.19 ± 0.10, p = 0.09, median 0.10; β2 = 0.055 ± 0.028, p = 0.08, median 0.028). Because similar models could accurately recover weight dynamics in the simulated data, we do not think that our observation about the experimental data is caused by a low power for detection of weight dynamics or a fundamental bias to attribute changes of psychophysical kernels to the decision-making process.

Speed–accuracy tradeoff, bias, and more complex decision models

Although a simple DDM for accumulation of evidence captures several key aspects of behavior in sensory decisions, it is only an abstraction for the more complex computations implemented by the decision-making circuitry. More complex and nuanced models are required both to explain details of behavior and to create biologically plausible models of integration in a network of neurons. We use this section to explore a non-exhaustive list of key parameters commonly used in various implementations of evidence integration models. For clarity, we simulate models without non-decision time to isolate the effects of these model parameters from those of non-decision time.

First, we focus on how changes of decision bound influence the shape of psychophysical kernels. The effect is best demonstrated by Eq. 2 for a simple DDM, which shows the kernel is inversely proportional to bound height. This dependence is expected because a lower decision bound boosts the effect of stimulus fluctuations on choice and vice versa. As a result, if subjects increase the decision bound to improve their accuracy 14,17,52 , psychophysical kernels will shrink (Supplementary Fig. 6a–b). Similarly, urgency signals, which push the integration process toward the decision bound 37,53 , influence the kernels. Urgency is effectively a reduction of decision bound over time (Fig. 6a) and leads to inflation of the psychophysical kernel (Fig. 6b). The scaling of kernels with bound height can be largely corrected by estimating the decision bound from behavior and multiplying the kernels by it, as we did for the results in the previous sections.
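Urgency can be implemented as a bound that collapses over time. The hyperbolic form below is one convenient parameterization (an assumption for illustration, not necessarily the form used in the paper) in which tau_half is the time at which the bound has lost half of its initial height:

```python
def collapsing_bound(t, b0=30.0, tau_half=0.5):
    """Hyperbolically collapsing decision bound; tau_half is the time at
    which the bound has dropped by 50% (hypothetical parameterization)."""
    return b0 * tau_half / (tau_half + t)
```

Since Eq. 2 makes the kernel inversely proportional to the bound, a bound that collapses in this way inflates the kernel over time, consistent with Fig. 6b.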

Psychophysical kernels are susceptible to changes of decision bound, input correlation, mutual inhibition, integration time constant, and limited dynamic range. The figure shows extensions of DDM and systematic deviations that additional realism to the model can cause in psychophysical kernels. Conventions are similar to Fig. 3, except that we focus only on RT tasks. Also, to isolate the effects of different model parameters from the effect of non-decision time, we use zero non-decision time in these simulations. a–c Collapsing decision bound (urgency signal) inflates the psychophysical kernel over time. The rate of bound collapse is defined by τ1/2—the time it takes to have a 50% drop in bound height. d–f Extending DDM to a competition between two bounded accumulators reveals that input correlation of the accumulators has only modest effects on psychophysical kernels, causing an initial overshoot followed by an undershoot compared to true sensory weights. g–i The presence of a lower reflective bound in the accumulators causes an opposite distortion: an initial undershoot followed by a later overshoot. j–l Balancing the effect of mutual inhibition by making the integrators leaky causes the model to behave like a DDM, eliminating the effects of both the inhibition and leak on the psychophysical kernels (black curves in k). Any imbalance between leak and inhibition, however, causes systematic deviations in the kernels from the true sensory weights (brown, red, and blue curves in k). See Supplementary Fig. 8 for more examples

The proportionality constant in Eq. 2 also points to another important conclusion: changes of stimulus variance, if present, systematically distort psychophysical kernels. Larger stimulus noise inflates the kernel and vice versa (Supplementary Fig. 6c–e). This contrasts with the effects of internal (neural) noise in the representation of sensory stimuli or the DV. We show in Methods that in a bounded DDM, internal noise does not have a systematic effect on psychophysical kernels of RT tasks (but compare to the unbounded DDM).

The presence of choice bias in the decision-making process is another factor that can cause distortions in psychophysical kernels. Two competing hypotheses have been suggested for implementation of bias in the accumulation to bound models. One hypothesis is a static change in the starting point of the accumulation process (or an equivalent static change in decision bounds) 14,15,33,54 , which would cause an initial inflation in the psychophysical kernels without a lasting effect (Supplementary Fig. 7a–c). A second hypothesis is a dynamic bias signal that pushes the DV toward one of the decision bounds and away from the other 38 . This dynamic bias signal can be approximated by a change in the drift rate of DDM, which would cause a DC offset in the psychophysical kernels (Supplementary Fig. 7d–f).
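The two implementations of bias can be contrasted in a toy DDM simulation (all parameters arbitrary; this is a sketch, not the authors' model code). A starting-point offset and a drift offset both shift choices toward the favored bound, even though they leave different signatures in the kernels:

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_choice(n_trials=5000, n_steps=400, bound=5.0, start=0.0, drift=0.0):
    """Fraction of upper-bound choices (among decided trials) in a bounded
    DDM with a starting-point offset or a drift offset (toy parameters)."""
    noise = rng.normal(0.0, 1.0, (n_trials, n_steps))
    dv = start + np.cumsum(drift + noise, axis=1)
    up, lo = dv >= bound, dv <= -bound
    # first crossing time of each bound; n_steps if never crossed
    t_up = np.where(up.any(axis=1), up.argmax(axis=1), n_steps)
    t_lo = np.where(lo.any(axis=1), lo.argmax(axis=1), n_steps)
    decided = (t_up < n_steps) | (t_lo < n_steps)
    return ((t_up < t_lo) & decided).sum() / max(decided.sum(), 1)

p_start_bias = ddm_choice(start=2.5)   # bias via the starting point
p_drift_bias = ddm_choice(drift=0.1)   # bias via a dynamic (drift) signal
```

Both calls yield choice probabilities well above 0.5 for the upper bound, so choice data alone cannot distinguish the two mechanisms; the kernel dynamics (initial inflation vs. DC offset) can.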

Electrophysiological recordings from motor-planning regions of the primate brain suggest that integration of sensory evidence is best explained with an array of accumulators, rather than a single integration process 37,40,55,56,57 . A class of models that matches this observation better than the simple DDM is competing integrators—one for each choice—that accumulate evidence toward a bound 16,19,20,23,40,57 . Our mathematical proof for DDM does not exactly apply to these models. However, many of these models can be formulated as extensions of the DDM with new parameters added to provide more flexible dynamics 23 . The following parameters are worth special attention: input correlation, lower reflective bound, mutual inhibition, and leak (see Supplementary Notes for more detailed explanations).

A DDM is mathematically equivalent to two integrators that receive perfectly anti-correlated inputs (correlation = −1) and, consequently, are anti-correlated with each other 20,23 . However, perfect anti-correlation in neural responses is not expected because even when signal correlations are negative, noise correlations tend to be close to zero or slightly positive 58,59 . Figure 6d–f shows that the shape of the psychophysical kernel is only minimally affected by a wide range of input correlations. Sizeable distortions arise only when the input correlation approaches 0, in which case the kernel is initially inflated but later drops below the true sensory weight (Fig. 6e and Supplementary Fig. 8a).
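The equivalence for perfectly anti-correlated inputs is easy to verify numerically. The sketch below builds two Gaussian input streams with a chosen correlation (a standard construction, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def correlated_inputs(n, rho):
    """Two zero-mean, unit-variance noise streams with correlation rho."""
    e1 = rng.normal(size=n)
    e2 = rho * e1 + np.sqrt(max(0.0, 1.0 - rho**2)) * rng.normal(size=n)
    return e1, e2

e1, e2 = correlated_inputs(10000, rho=-1.0)
x1, x2 = np.cumsum(e1), np.cumsum(e2)
# with rho = -1 the two accumulators are exact mirror images (x1 + x2 = 0),
# so the pair carries the same information as the single DDM variable x1 - x2
```

For intermediate correlations (e.g., rho near 0), the two accumulators carry partially independent information, which is what produces the modest kernel distortions shown in Fig. 6e.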

A frequent feature of biologically plausible implementations of the integration process is a lower reflective bound that limits the lowest possible DV 19,20,21,22,40 . Such reflective bounds are inspired by the observation that the spike count of neurons is limited from below and cannot become negative. Reflective bounds cause the psychophysical kernel to begin lower than the true sensory weight but exceed it later (Fig. 6g–i and Supplementary Fig. 8b). However, these distortions are small when the reflective bounds are far enough from the starting point of the integrators.

Several models incorporate mutual inhibition either through direct interactions between the integrators 19,40 or indirectly through intermediate inhibitory units 22,60 . Mutual inhibition is often combined with decay (leak) in the integration process (Fig. 6j) to create richer dynamics and curtail the effects of inhibition 19,23 . The balance between leak and mutual inhibition defines whether the model implements bistable point attractor dynamics or line attractor dynamics 23 . This balance also determines the kernel dynamics (Fig. 6j–l and Supplementary Fig. 8c). When mutual inhibition dominates (leak/inhibition ratio < 1), psychophysical kernels show an early amplification but later converge on the true sensory weights. When leak and inhibition balance each other out, the model acts similarly to a line attractor and the psychophysical kernels resemble those of a DDM. Finally, when leak dominates, the integrators lose information and psychophysical kernels systematically underestimate the sensory weights, especially for earlier sensory evidence in the trial.
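The role of the leak/inhibition balance can be seen in a noise-free Euler simulation of two mutually inhibiting accumulators (toy parameters, not a fit to data). When leak equals inhibition, the difference of the accumulators integrates the input difference perfectly, like a DDM; when leak dominates, early evidence decays away:

```python
import numpy as np

def race(e1, e2, leak, inhib, dt=0.01):
    """Noise-free Euler simulation of two mutually inhibiting leaky
    accumulators; returns the final states (illustrative sketch)."""
    x1 = x2 = 0.0
    for a, b in zip(e1, e2):
        dx1 = (a - leak * x1 - inhib * x2) * dt
        dx2 = (b - leak * x2 - inhib * x1) * dt
        x1, x2 = x1 + dx1, x2 + dx2
    return x1, x2

e1, e2 = np.ones(1000), -np.ones(1000)      # constant opposing evidence
x1, x2 = race(e1, e2, leak=2.0, inhib=2.0)  # balanced: perfect integration
balanced_diff = x1 - x2                     # = 2 * dt * n_steps = 20
x1, x2 = race(e1, e2, leak=4.0, inhib=2.0)  # leak-dominated: information loss
leaky_diff = x1 - x2                        # saturates near 1
```

The difference variable obeys d(x1 − x2)/dt = (e1 − e2) − (leak − inhib)(x1 − x2), so the balanced case has no effective leak, while a leak-dominated model forgets early evidence, which is why its kernels underestimate early sensory weights.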

Interestingly, and perhaps fortuitously, applying these more complex model variations to our experimental data resulted in model parameters that closely resembled linear integration of evidence, which is why the DDMs in the last section performed so well. Because of this parameterization, these more sophisticated models would produce predictions similar to the simple DDM about the dynamics of the psychophysical kernel. However, we note that this observation may not generalize to other experiments and should, therefore, be tested for new behavioral paradigms on a case-by-case basis.

As explained above, different parameters of decision-making models have different and even opposing effects on the expected shape of psychophysical kernels. As a result, a mixture of these features can, in principle, generate a variety of kernel dynamics, depending on their exact parameters. To illustrate this point, we consider models with two competing integrators that have different levels of mutual inhibition, leak, collapsing bounds, and sensory and motor delays (Fig. 7a). For static sensory weights over time, this class of models can generate monotonically decreasing kernels (Fig. 7b), monotonically increasing kernels (Fig. 7c), or kernels that exactly match the true sensory weights (Fig. 7d), depending on the model parameterization. To understand this diversity, consider, for example, the opposing effects of collapsing bounds (urgency) and non-decision time on the kernels. The gradual reduction of the kernel due to non-decision time can cancel out the increase of the kernel due to urgency. Alternatively, one of the two effects may overpower the other one. Complementary to the examples in Fig. 7, one can also imagine parameterizations that would result in a flat psychophysical kernel in the presence of nonstationary true sensory weights. The presence of mutual inhibition and leak further complicates the relationship of sensory weights and psychophysical kernels and expands the space of possible dynamics for the kernels.

A decision-making model that has a mixture of parameters with opposing effects on psychophysical kernels can create a diversity of kernel dynamics for static sensory weights. a A model composed of two competing integrators that allows different ratios of leak and inhibition, collapsing decision bounds, and non-decision times. The model also has input correlation >−1 and reflective lower bounds, but they are fixed for simplicity. b When bound collapse is small and non-decision times are long, the kernel drops monotonically over time. c When bound collapse is large and non-decision times are short, the kernel rises monotonically. d When these opposing factors balance each other, the kernel becomes flat


How do I interpret a statistically significant Spearman correlation?

It is important to realize that statistical significance does not indicate the strength of Spearman's correlation. In fact, the significance test of the Spearman correlation does not provide any information about the strength of the relationship. Thus, achieving a value of p = 0.001, for example, does not mean that the relationship is stronger than if you achieved a value of p = 0.04. This is because the significance test only asks whether you can reject the null hypothesis of no association. If you set α = 0.05, a statistically significant Spearman rank-order correlation means that a correlation at least as strong as the one you observed (your ρ coefficient) would occur in less than 5% of samples if the null hypothesis were true.
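To make the distinction concrete: ρ itself is the effect size, and it is simply the Pearson correlation of the ranks. A minimal no-ties implementation (a hypothetical helper, not taken from any particular statistics package):

```python
def spearman_rho(x, y):
    """Spearman's rho for samples without ties: the Pearson
    correlation of the rank-transformed data."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# any strictly monotonic relationship gives rho = 1, however nonlinear
rho = spearman_rho([1, 2, 3, 4, 5], [2, 8, 9, 100, 101])
```

A tiny sample can yield ρ = 1 with a large p-value, and a huge sample can make a weak ρ "significant", so report ρ as the strength of the relationship and p only as evidence against the null.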
