
Is the concept of absolute threshold supported by psychophysics and MEG analysis? (2018)


ABSTRACT

Absolute threshold is understood as the minimum level of energy required for a stimulus to be perceived. The concept suggests that there might be a specific value above which a stimulus is always perceived by the subject. Psychophysics experiments show that no such value guaranteeing 100% certainty of perception exists; instead, stimulus perception emerges gradually. In this report, psychophysics and MEG analyses are used to demonstrate that stimulus perception occurs gradually rather than following an absolute, all-or-none threshold.


INTRODUCTION

In neuroscience, the absolute threshold is the minimal energy required for a stimulus to be perceived. There are three different methods to measure the threshold: the adjustment method, the adaptive method and the constant stimulus method.

Contrary to what might be expected, the threshold is not a clear line above which every stimulus is perceived. In fact, the transition from 0% to 100% probability of stimulus recognition is gradual and tends to follow a sigmoid shape when plotted against stimulus intensity, as shown in figure 1. The threshold is usually set at 50% probability of stimulus recognition.

Figure 1. Probability of stimulus recognition as a function of stimulus intensity (sigmoid psychometric function).
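As a minimal sketch of the sigmoid relationship described above (a logistic form and arbitrary parameters are assumed here, since the report does not specify the curve), the 50% threshold can be read directly off the function in MATLAB:

    % Illustrative logistic psychometric function (parameters assumed)
    intensity = linspace(0, 1, 200);       % hypothetical stimulus intensities
    thresh = 0.5; slope = 15;              % assumed threshold and steepness
    pRecog = 1 ./ (1 + exp(-slope * (intensity - thresh)));
    plot(intensity, pRecog);
    xlabel('Stimulus intensity'); ylabel('P(recognition)');
    hold on; plot(thresh, 0.5, 'ro');      % conventional 50% threshold point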

The brain is constantly active and therefore noisy, so measuring the response to a stimulus relies on the sensitivity with which the noise-only and noise + stimulus conditions can be distinguished, and this sensitivity varies across participants. The greater the difference between the noise and noise + stimulus distributions, the lower the threshold value.

Another factor that influences the threshold value is the criterion participants use to decide how confident they must be before asserting that a stimulus is present. Depending on the criterion adopted, participants can be classified as liberal, conservative or neutral. Liberal observers, who respond affirmatively more often, tend to have more hits but also more false alarms (low threshold values), whereas conservative observers, who respond affirmatively less often, tend to have fewer false alarms but miss the stimulus more often (high threshold values).

Taking these factors into consideration, a useful tool for estimating how reliable a given threshold is, is the receiver operating characteristic (ROC) curve. The ROC curve plots the hit rate against the false alarm rate for every criterion position along the noise vs noise + stimulus distributions and uses the area under the resulting curve as a reliability measure. As figure 2 shows, the larger this area, the more reliable the threshold. In broad terms, ROC is a technique that provides the language and the graphic notation to analyse decision making in the presence of uncertainty (Heeger, 2003).

Figure 2.
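A minimal sketch of this construction, assuming equal-variance Gaussian noise and noise + stimulus distributions (an assumption made here for illustration only; the d′ value is arbitrary):

    % Equal-variance Gaussian sketch: noise ~ N(0,1), noise+stimulus ~ N(d',1)
    dprime = 1.5;                            % hypothetical separation (d')
    crit = linspace(-4, 6, 500);             % sweep the decision criterion
    hitRate = 1 - normcdf(crit, dprime, 1);  % area of noise+stimulus above criterion
    faRate  = 1 - normcdf(crit, 0, 1);       % area of noise above criterion
    plot(faRate, hitRate);
    xlabel('False alarm rate'); ylabel('Hit rate');
    auc = trapz(fliplr(faRate), fliplr(hitRate));  % area under the ROC curve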

In terms of the variability of neuronal responses, ROC analysis can be used as a tool to determine whether a given signal reflects noise or a stimulus (Lemon & Smith, 2006). ROC analysis is used to separate noise from noise + stimulus instead of the standard d′ calculation because it does not assume that the two distributions are Gaussian with equal variance (Lemon & Smith, 2006).

In the experiments that follow, the concept of an absolute threshold was put to the test using psychophysics and MEG techniques.

METHODS

Psychophysics

The experiment was performed on a single participant. Visual stimuli consisting of an X and a background with varying levels of intensity were presented on a computer screen. The participant was asked to press 'x' or the spacebar on the keyboard according to whether the X in the stimulus was noticeable ('x') or not (spacebar). Each stimulus was presented for 0.05 seconds, and the strength of the stimulus was manipulated by altering the number of squares in the background (the more squares, the higher the noise level).

Two different experiments were performed with these parameters. The first was an adaptive-method experiment following the Levitt and Wetherill guidelines. Four consecutive correct responses were required to step up to a harder (noisier) stimulus, and a single error was enough to step down to an easier (less noisy) stimulus. Because the probability of a correct response (Pc) at the convergence point of the Levitt and Wetherill method relates to the number of correct responses (Nc) needed to step up as Pc = 0.5^(1/Nc) (Zwislocki & Relkin, 2001), the proportion of correct responses in this experiment should converge on 0.5^(1/4) ≈ 0.84, i.e. 84%. The experiment was set to start from a hard (noisy) level and then step down to easier levels.

Once the data had been collected, the threshold was calculated as the arithmetic mean of the last 14 reversal points, the trials at which the direction of the stimulus change reverses.
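A minimal sketch of this 1-up-4-down staircase and the reversal-based threshold estimate (the observer model, step size and trial count below are assumptions for illustration, not the report's actual settings):

    % Hypothetical observer whose P(correct) falls as the noise level rises
    pCorrect = @(lvl) 1 ./ (1 + exp(8 * (lvl - 0.3)));  % assumed psychometric curve
    lvl = 0.8; step = 0.05;                % start at a hard (noisy) level
    streak = 0; lastDir = 0;
    levels = []; reversals = [];
    for t = 1:200
        levels(end+1) = lvl;               % track the staircase path
        if rand < pCorrect(lvl)            % simulated correct response
            streak = streak + 1;
            if streak < 4, continue, end   % need 4 in a row before stepping
            dir = +1; streak = 0;          % 4 correct: step up (harder, noisier)
        else
            dir = -1; streak = 0;          % any error: step down (easier)
        end
        if lastDir ~= 0 && dir ~= lastDir  % direction change = reversal point
            reversals(end+1) = lvl;
        end
        lastDir = dir;
        lvl = max(0, lvl + dir * step);
    end
    % threshold: arithmetic mean of the last 14 reversals, as in the report
    threshold = mean(reversals(max(1, end-13):end));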

The second experiment followed the constant stimulus method. For this experiment, 5 different conditions were used: one at the threshold level obtained with the adaptive method, and the others at this same level multiplied by 2, 1.5, 0.75 and 0.5 respectively. The conditions were set at these levels so that they would span near-perfect performance, chance-level performance and performance in between. 50 trials were performed per condition, and each condition included both X-present and X-absent subconditions, giving a total of 500 trials (50 trials × 5 conditions × 2 subconditions).
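A minimal sketch of how such a trial list could be built (the shuffling is an illustrative assumption; the threshold value is the one reported in the Results):

    thr = 0.25;                                  % staircase threshold (from the Results)
    conds = thr * [2 1.5 1 0.75 0.5];            % the 5 intensity conditions
    [intensity, present] = meshgrid(conds, [1 0]);       % X present (1) or absent (0)
    trials = repmat([intensity(:), present(:)], 50, 1);  % 50 repeats -> 500 trials
    trials = trials(randperm(size(trials, 1)), :);       % randomised presentation order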

The constant stimulus experiment was performed 3 times under different instructions. The first run carried no specific instruction: the participant simply had to indicate when the X was recognised, and no warning appeared after a miss or a false alarm. In the second run, the participant was instructed to press 'x' whenever unsure about whether the X had been perceived, and a warning appeared every time an X was missed. In the third run, the participant was instructed to press the spacebar whenever unsure about seeing an X, and a warning appeared every time 'x' was pressed in the absence of an X on the screen. These instructions were designed to elicit neutral, liberal and conservative responses, respectively. Liberal responses were expected to lead to a lower threshold, conservative responses to a higher threshold, and neutral responses to fall in between.

With the data gathered, the threshold for the constant stimulus experiment was determined by fitting a Weibull function to the plotted data. The function had two free parameters, c and m: c corresponds to the stimulus intensity threshold and m to the slope (the rate of improvement from condition to condition).
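A minimal sketch of such a fit, assuming the common two-parameter Weibull psychometric form scaled between chance (0.5) and 1 (the report does not state its exact parameterisation, and the data values below are illustrative):

    % x: condition intensities; y: observed proportion correct (values assumed)
    x = 0.33 * [0.5 0.75 1 1.5 2];
    y = [0.55 0.68 0.80 0.93 0.98];
    weib = @(p, x) 0.5 + 0.5 * (1 - exp(-(x ./ p(1)).^p(2)));  % p = [c m]
    sse  = @(p) sum((weib(p, x) - y).^2);       % least-squares fit criterion
    pFit = fminsearch(sse, [0.33 2]);           % pFit(1) = threshold c, pFit(2) = slope m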

To calculate the difference between the noise and noise + stimulus distributions (d′), the data gathered from the constant stimulus experiment under liberal instructions was used. d′ was calculated by subtracting the z-score of the false alarm rate from the z-score of the hit rate. The criterion was calculated by dividing the d′ value by two.
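This calculation can be checked in MATLAB (norminv is the inverse normal CDF; the hit and false alarm rates are those reported in Table 1 below):

    hitRate = 0.62; faRate = 0.10;                 % liberal condition, from Table 1
    dprime = norminv(hitRate) - norminv(faRate);   % Z(hit) - Z(FA) = 1.587
    criterion = dprime / 2;                        % = 0.794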

MEG

The data for this experiment were obtained from the study 'Parametric variation of gamma frequency and power with luminance contrast: A comparative study of human MEG and monkey LFP and spike responses' by Hadjipapas et al. The participant was asked to look at a screen presenting grating stimuli of varying contrast (20%, 36%, 48%, 66% and 96%) while inside a MEG scanner. Each condition was presented 100 times, giving a total of 500 trials. On every trial, the orientation (0° to 90°) and the phase (8 phases) of the grating changed. After each block of 50 trials there was a 10 s break. To ensure the participant kept their gaze fixed, the fixation target occasionally changed colour, which the participant had to report by pressing a button. The stimuli were presented in the lower right visual quadrant, so the response was expected in the left hemisphere. In total there were 278 channels, and the data were sampled at 1200 Hz.

The FieldTrip MATLAB toolbox was used to analyse the data. As the sampling rate was quite high, the FieldTrip function ft_resampledata was used to downsample the data to 600 Hz. The data were aligned to the onset of the stimulus, and the channel of interest was selected based on how clearly the 'dip' in neuronal activity could be seen.
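The downsampling step maps onto a short FieldTrip call (the variable name data below is a placeholder for the preprocessed data structure):

    cfg = [];
    cfg.resamplefs = 600;                   % target sampling rate (Hz)
    data600 = ft_resampledata(cfg, data);   % 'data' is the FieldTrip raw data structure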

RESULTS

Psychophysics

Staircase measurements

The adaptive (staircase) method produced the plot shown in figure 3. Stimulus intensity is plotted against trial number: every incorrect response increases the stimulus intensity, and every run of 4 correct responses decreases it. The circles mark reversals, the points at which the direction of the intensity change swaps. After 20 reversals, the threshold was calculated from the last 14 reversals and was found to be 0.25.

Figure 3.

Constant stimuli measurements

The constant stimuli measurement produced the plot shown in figure 4, where the experimental results are plotted together with the expected results calculated from the staircase experiment. The threshold in this experiment was 0.33.

Figure 4.

The plot in figure 5 compares the 3 instruction conditions of the constant stimuli experiment. Contrary to expectation, the liberal condition showed the lowest correct response rate whereas the unbiased condition showed the highest. A possible reason is that, when asked to change behaviour, the participant could not maintain a consistent response pattern, leading to a lower hit rate.

Figure 5.

Signal Detection Theory analysis

Table 1 below shows how the d′ value and the criterion were calculated. The values were obtained from the liberal condition.

Response    Stimulus +     Stimulus -                 Mean
YES         0.62 (hit)     0.10 (false alarm)         0.315
NO          0.38 (miss)    0.90 (correct rejection)   0.64
Total       1.00           1.00

d′ = Z(hit rate) − Z(FA rate) = 1.587
Criterion = d′/2 = 0.794

Table 1.

Figure 6 shows how the different measures change according to the condition used. The hit rate plot confirms that the liberal condition leads to higher hit rates and the conservative condition to lower hit rates; the correct rejection plot, as expected, shows the opposite pattern. The d′ plot shows that the unbiased condition produced the largest difference between the noise and noise + stimulus distributions. The criterion plot shows that the liberal condition had the highest criterion value, which is also expected here, as the liberal condition covers the largest area of the noise and noise + stimulus functions.

Figure 6.

MEG

Time domain analysis

Figure 7 shows the time-locked responses to the different contrast conditions. As the contrast increases (from condition 1 to 5), the neural response also increases (blue to yellow). This increase is reflected in the 'dip' (between the dotted lines), which becomes more prominent as the contrast increases.

Figure 7.

Figure 8 shows how the chosen channel responds to the different contrast conditions: the intensity of the neural response grows with the intensity of the stimulus.

Figure 8.

Frequency domain analysis

Figure 9 shows the time-frequency responses to the different contrasts, i.e. how power at different frequencies varies over time. As the contrast level increases, the power in the 30-60 Hz range also increases.

Figure 9.
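A time-frequency representation of this kind can be obtained in FieldTrip along these lines (a wavelet decomposition and the time/frequency windows are assumptions here; the report does not state which method was used):

    cfg = [];
    cfg.method = 'wavelet';              % Morlet wavelet transform
    cfg.output = 'pow';                  % return power
    cfg.foi    = 10:2:80;                % frequencies of interest (Hz)
    cfg.toi    = -0.5:0.01:1.5;          % assumed window around stimulus onset (s)
    tfr = ft_freqanalysis(cfg, data600); % data600: the downsampled data from the Methods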

The graph below, figure 10, shows power against frequency for the different contrast conditions. The highest contrast produces the highest activity, whereas the lowest contrast produces almost no change.

Figure 10.

Signal Detection Theory and ROC analysis

In figure 11, sensitivity was calculated for different distributions of neuronal responses. For the 'stimulus absent' condition, data carrying a lot of noise were used. This can be seen in plot 1, where the intensity is randomly distributed across time and frequency. This randomness carries over to the noise vs noise + stimulus plot, where the two peaks almost overlap, producing a ROC curve whose AUC lies on the 50% line (hits and false alarms each occur half the time).

For the 'stimulus present' condition, the plot with the clearest intensity pattern (plot 5) was chosen. This plot led to a noise and noise + stimulus plot with a d′ value of 2.7 and an almost perfect ROC curve, with an AUC of 0.98.

The hit and false alarm rates were calculated as the areas to the right of the criterion line (the blue line in the noise vs noise + stimulus plot): the hit rate is the area under the red curve (noise + stimulus) and the false alarm rate is the area under the black curve (noise).

Figure 11.
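Under an equal-variance Gaussian assumption (which the ROC approach itself does not require), the reported AUC can be sanity-checked from d′ via AUC = Φ(d′/√2):

    dprime = 2.7;                        % d' reported for condition 5
    auc = normcdf(dprime / sqrt(2));     % ~0.97, consistent with the reported 0.98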

A plot displaying the ROC curves of the 5 conditions above appears below on the right (figure 12). The AUC is greatly improved for condition 5, whereas condition 1 lies just under the 50% line. On the left, ROC sensitivity is plotted against the contrast conditions: the lowest contrast at which the observer recognises the stimulus above chance level is 36%.

Figure 12.

DISCUSSION

The analysis of the data gathered shows that the idea of an absolute threshold, above which all stimuli are perceived, does not hold. Perception of stimuli emerges gradually and, as the psychophysics and MEG results show, stimulus strength has a non-linear effect on perception.

Neurons in the brain are always active, whether a stimulus is present or not, so differentiating stimulus from no-stimulus perception through neural signalling is hampered by the uncertainty over whether a given signal represents a stimulus or just noise. ROC analysis can be useful for differentiating these two states, but its results should always be interpreted with caution.

Not every stimulus from the outside world provokes a brain response significant enough to lead to perception. Our senses are not as reliable as one might expect, and the perception of stimuli is influenced by more than the stimulus itself.

This study involved a single participant, so the results cannot easily be generalised to a whole population. Variables such as stress, tiredness or attention were not taken into consideration and might have affected the results. To make the experiment more accurate, more participants could be included and such variables should be taken into account.

REFERENCES

Heeger, D. (2003). Signal Detection Theory. Retrieved from

http://www.cns.nyu.edu/~david/handouts/sdt/sdt.html

Lemon, C. H., & Smith, D. V. (2006). Influence of response variability on the coding

performance of central gustatory neurons. J Neurosci, 26(28), 7433-7443.

doi:10.1523/JNEUROSCI.0106-06.2006

Zwislocki, J. J., & Relkin, E. M. (2001). On a psychophysical transformed-rule up and down

method converging on a 75% level of correct responses. Proc Natl Acad Sci U S A,

98(8), 4811-4814. doi:10.1073/pnas.081082598
