Does expectation change the sensory signal?

Expecting to see something makes us more likely to report seeing it. Imagine that I show you an ambiguous green/blue color. Just before I show it to you, I tell you that you’re about to see green. And, no surprise here, you’re more likely to say that you saw green.


The fascinating question is why. In particular, there are two very different possibilities for what expecting green did to influence your report:
Option 1: Expectation changed the sensory signal itself
Option 2: Expectation changed the decision processes but left the sensory signal unaltered

Several lines of evidence suggest that expectation alters the sensory signal (Option 1). First, popular computational theories, such as predictive coding, propose that top-down signals affect how the forthcoming stimulus is actually processed, thus altering the signal itself. Second, neuroimaging studies have repeatedly found that expectations affect activity in early sensory areas such as V1. Such results seem to fit best with Option 1 since areas such as V1 are assumed to reflect the sensory signal. The problem is that all of this evidence is very indirect: computational theories can be wrong, and activity in sensory areas does not have to reflect only the feedforward signal.

Therefore, distinguishing between Options 1 and 2 would be most convincing if based on behavioral data. If we can find behavioral evidence that the sensory signal is processed differently in the presence of expectations, then that would be solid evidence for Option 1. If we can’t, then we should prefer Option 2.

This is exactly the problem we tackled in our recent article in Scientific Reports (Bang & Rahnev, 2017).

Full disclosure: we went into this project fully expecting to provide the killer evidence for Option 1. Instead, we ended up with (what seems to us to be) strong evidence for Option 2.

Here’s briefly what we did. We asked subjects to discriminate between left- and right-tilted Gabor patches. Before or after the presentation of the Gabor patch, we provided a cue about the likely identity of the stimulus. We reasoned that a cue coming before the stimulus (pre cue) would induce an expectation that could change the sensory signal. On the other hand, a cue presented long after the stimulus (post cue) can’t change the sensory signal and must act by affecting the decision. So we only needed to compare how the sensory information was used in the presence of pre vs. post cues: any difference we could find would necessarily stem from a sensory-signal change induced by the pre cue but not the post cue.

So, was there a difference in the use of sensory information between pre and post cues? Not a trace.

We ran several types of sensitive analyses, including temporal reverse correlation (to check for a change in how signals at different time points influence the decision) and feature-based reverse correlation (to check for a change in how different stimulus orientations influence the decision). All of our results were perfectly explained by a simple model in which both the pre and the post cue affected only the decision criterion and left the sensory signal unaltered.
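To make the criterion-only account concrete, here is a minimal simulation of the kind of model we have in mind: an equal-variance signal detection observer whose sensory signal is identical on cued and neutral trials, and whose decision criterion simply shifts toward the cued alternative. This is a sketch, not the analysis code from the paper, and all parameter values are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Equal-variance SDT sketch of the criterion-shift account (Option 2).
# The sensory signal is identical on neutral and cued trials; only the
# decision criterion moves. All parameter values are made up.
d_prime = 1.0                                      # sensitivity for left vs. right tilt
criteria = {"neutral": 0.0, "cue 'right'": -0.5}   # the cue lowers the criterion for "right"

n_trials = 100_000
stimulus = rng.choice([-1, 1], size=n_trials)      # -1 = left tilt, +1 = right tilt
evidence = stimulus * d_prime / 2 + rng.standard_normal(n_trials)  # same signal in both conditions

for label, c in criteria.items():
    respond_right = evidence > c
    p_right = respond_right.mean()
    hit = respond_right[stimulus == 1].mean()      # P("right" | right tilt)
    fa = respond_right[stimulus == -1].mean()      # P("right" | left tilt)
    d_hat = norm.ppf(hit) - norm.ppf(fa)           # recovered sensitivity
    print(f"{label:12s}  P(respond right) = {p_right:.2f}, estimated d' = {d_hat:.2f}")
```

The cue changes how often the simulated observer reports “right,” but the recovered d′ stays put: that is the signature of a purely decision-level effect.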

Of course, this is a single study and we need more evidence before completely rejecting the idea that expectations alter the sensory signal (Option 1). But right now, I’d put my money on expectations leaving the sensory signal unchanged and simply altering our decision processes (Option 2).

In other words, if our results generalize, it means that you’re not actually seeing the ambiguous color as green; you’re just deciding that it must be green.


Is your task really 2AFC?

Probably not.

You probably know that 2AFC stands for “two-alternative forced choice.” However, you may not know that this means 2AFC tasks involve the presentation of two stimuli on each trial!

Let’s say that you want your participants to distinguish between faces and houses. On each trial, you present an image and ask participants to decide if it’s a face or a house. This is a fantastic design, just not a 2AFC task. Rather, it’s a discrimination task, though some psychophysicists would also call it a detection task. The same goes if your subjects are judging the direction of motion in a random dot kinematogram, deciding whether an item is old or new, or, more generally, judging a single stimulus per trial.

Example of tasks that are and aren’t 2AFC.

So, what are 2AFC tasks? There are two variants: spatial 2AFC and temporal 2AFC (also called two-interval forced choice, or 2IFC). In the spatial 2AFC, on each trial you’d present a face and a house in different locations (usually left and right of fixation). In 2IFC tasks, on each trial you’d present first the face and then the house (or vice versa) in the same spatial location. In both cases, participants’ task is to determine which of the two stimuli was the face.

Despite my best efforts, I haven’t been able to find out how the confusion about what counts as a 2AFC task emerged. According to Wikipedia, Fechner developed the 2AFC task back in the 19th century. Confusing 1-stimulus discrimination tasks with 2AFC seems to have already been the rule by the middle of the 20th century (and perhaps much earlier). It appears to be the rule now, at least outside of the vision science world.

What is so special about 2AFC tasks? They are supposed to be ‘unbiased.’ In other words, subjects are often assumed not to have much of a bias between the left/right locations or the first/second intervals. Such lack of bias is great for measuring performance and generally makes distributional assumptions (like assuming Gaussians in SDT) unnecessary. The problem is that this assumption has been shown to be empirically false (Yeshurun, Carrasco, & Maloney, 2008). Yeshurun and colleagues even recommend against using 2IFC tasks altogether.
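To see why the no-bias assumption matters, here is a quick simulation (under standard equal-variance Gaussian assumptions, with made-up parameter values) of a 2AFC observer who decides by comparing the two intervals. When the observer is unbiased, proportion correct cleanly reflects sensitivity; an interval bias of the kind Yeshurun and colleagues document pulls proportion correct down even though sensitivity is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 2AFC observer using the difference rule: choose interval 1 if x1 - x2 > c.
# c = 0 is the textbook unbiased observer; c != 0 is an interval bias.
# Both d' and the bias value are illustrative, not taken from any dataset.
d_prime = 1.0
n_trials = 200_000

face_first = rng.choice([0, 1], size=n_trials)  # which interval contains the target
# Difference of the two internal responses: mean +/- d', standard deviation sqrt(2).
diff = (2 * face_first - 1) * d_prime + np.sqrt(2) * rng.standard_normal(n_trials)

for label, c in [("unbiased (c = 0)", 0.0), ("interval bias (c = 1)", 1.0)]:
    choose_first = diff > c
    p_correct = (choose_first == (face_first == 1)).mean()
    print(f"{label:22s}  proportion correct = {p_correct:.3f}")
```

With these illustrative numbers, proportion correct drops from about 0.76 to about 0.71 purely because of the bias, so taking percent correct at face value would make the biased observer look less sensitive than they really are.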

A note on comparing 1-stimulus discrimination and 2AFC tasks: because 2AFC tasks present two stimuli on each trial, participants have more information to work with and are thus expected to perform better. In fact, their signal-to-noise ratio (d’) should be exactly √2 times higher in 2AFC tasks. Empirically, however, it has been found to be sometimes substantially higher and sometimes substantially lower than that. Math is prettier than actual human performance!
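For anyone who prefers to see the √2 fall out of a simulation rather than the algebra, here is a minimal sketch under standard equal-variance Gaussian SDT assumptions: comparing the two samples doubles the mean separation while only inflating the noise SD by √2, so the effective d′ grows by √2. The specific d′ value and trial count below are, of course, just illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
d_prime = 1.0            # single-stimulus sensitivity (illustrative value)
n_trials = 200_000

# 1-stimulus discrimination: one sample per trial, unbiased criterion.
stim = rng.choice([0, 1], size=n_trials)         # 0 = house, 1 = face
x = stim * d_prime + rng.standard_normal(n_trials)
say_face = x > d_prime / 2
d_single = norm.ppf(say_face[stim == 1].mean()) - norm.ppf(say_face[stim == 0].mean())

# 2AFC: both stimuli on every trial; pick the interval with the larger sample.
face_first = rng.choice([0, 1], size=n_trials)   # 1 = face shown first
x1 = face_first * d_prime + rng.standard_normal(n_trials)
x2 = (1 - face_first) * d_prime + rng.standard_normal(n_trials)
say_first = x1 > x2
d_2afc = norm.ppf(say_first[face_first == 1].mean()) - norm.ppf(say_first[face_first == 0].mean())

print(f"1-stimulus d' = {d_single:.2f}")
print(f"2AFC d'       = {d_2afc:.2f}   (sqrt(2) * {d_single:.2f} = {np.sqrt(2) * d_single:.2f})")
```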

In the end, perhaps it’s not a bad thing if you’ve used a simple discrimination task in your experiment. Just don’t call it 2AFC.