The drift diffusion model (DDM) is one of the most well-known and widespread models of perceptual decision making. I also happen to think that it is wrong. The reasons for this opinion are many and varied, but I’ll keep those for another post. However, while the theoretical arguments for or against DDM are important, it is arguably even more important whether it “works”. Practically, there is a reason why DDM is so popular: it provides very good fits to behavioral data. The problem is that the model may fit well simply because it is overly flexible, and some researchers have made that argument before (Jones & Dzhafarov, 2014). The question of flexibility is also a tough one: all models are at least somewhat flexible and it’s hard to say when a model is “too” flexible. Again, I have opinions on this issue, but others have equally strong opinions in the opposite direction, and the flexibility of the model is not what this post is about.
Of course, the issues of whether a model is a good approximation of reality and whether it is too flexible are not specific to DDM; the same issues come up with virtually any model in cognitive science. So, what would be truly convincing evidence that a model is good? There is a pretty standard answer and it’s called predictions.
So, that leads us to the central question of this post: Has DDM made any new predictions?
I think that I’m reasonably aware of the literature on DDM and have read all major reviews from the last decade. There is a lot of talk about what DDM “explains” (read “fits”) but I haven’t seen a single case where DDM has made a new prediction that has subsequently been confirmed empirically. The timeline is of course critical here: historically, DDM has been modified a few times to fit several effects that had already been known in the literature, such as error trials being slower than correct trials in some conditions but faster in others. So, the question is whether DDM predicted any effects that were not known when it was developed. I would assume that a case like that would be well known, which makes me think that there is no such case.
But perhaps I just missed something. So, if I did, please let me know in the comments. I’ll update this blog post with a discussion of any predictions that anyone mentions in the comments (or via email, Twitter, etc).
Finally, should DDM be expected to make new predictions? Well, I think so. DDM is not a simple model. It postulates a specific structure for the mechanism underlying perceptual decision making and it has both choice and RT as outputs. If it indeed captures the process of perceptual decision making well, then it is hard to imagine that it wouldn’t make a prediction about an effect that we hadn’t already discovered.
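To make concrete what those outputs look like, here is a minimal simulation sketch (my own illustration, not taken from any particular paper, with arbitrary parameter values): a single accumulator starts at zero and drifts toward one of two symmetric boundaries, producing both a choice and an RT on every trial.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials, dt=0.001, noise=1.0, seed=0):
    """Simulate first-passage times of a constant-drift diffusion process.

    Evidence starts at 0 and accumulates with mean rate `drift` plus
    Gaussian noise until it hits +boundary (choice +1) or -boundary
    (choice -1). Returns choices and reaction times in seconds.
    """
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            # Euler step of the diffusion: deterministic drift + scaled noise
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = 1.0 if x >= boundary else -1.0
        rts[i] = t
    return choices, rts

choices, rts = simulate_ddm(drift=1.0, boundary=1.0, n_trials=500)
print("accuracy:", (choices == 1).mean(), "mean RT:", rts.mean())
```

This is the bare-bones version without across-trial variability in drift, starting point, or non-decision time; the point is only that the model jointly generates a full choice and RT distribution, which is exactly why it should be rich enough to yield testable predictions.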
So, has the drift diffusion model made any new predictions?
UPDATES
Based on discussions on Twitter, here are several predictions that have been mentioned. Examples 1-2 from the section “Successful predictions?” and Examples 2-4 from the section “Failed predictions” come from this Twitter thread by @PeterKvam.
Successful predictions?
- “The distribution of evidence sampled by decision makers following a diffusion strategy should not match the true distribution of evidence – it should be more extreme” (Kvam et al., 2021). My reaction: yes, but this prediction doesn’t seem specific to DDM or other accumulation-to-bound models.
- “Absorbing choice boundaries imply that once a choice is made, subsequent evidence is ignored.” Some evidence that this is correct (Peixoto et al., 2021). However, many popular models (e.g., 2DSD, Pleskac & Busemeyer, 2010) propose that subsequent evidence could be considered and provide suggestive evidence for it. In the end, I don’t think that what happens after a choice is made can either falsify or support DDM.
Note: many people suggested effects where DDM was used to express a cognitive prediction not directly related to the model itself. I am not including such examples here because they are not predictions of DDM: if such a prediction fails, it has no implications for DDM, only for the cognitive effect in question.
Failed predictions
- DDM predicts that placing more emphasis on accuracy should lead to monotonically increasing or inverted-U shaped functions for RT skewness and the ratio SD(RT)/mean(RT). Recently, we’ve shown that both of these predictions are false and the functions are instead U-shaped (Rafiei & Rahnev, 2021). One criticism that we received repeatedly: DDM only predicts this under the selective influence assumption (namely, that manipulations of speed-accuracy tradeoff and of difficulty should only affect the boundary and the drift rate, respectively). If you don’t buy this assumption, then this is not really a “prediction of DDM”. However, throwing away the selective influence assumption robs the model of its ability to predict (or even explain) many cognitive effects.
- “A choice can only be triggered after sampling a piece of evidence for the to-be-chosen option”. Difficult to evaluate in perception tasks but incorrect in purchasing decisions (Busemeyer & Rapoport, 1988).
- “Choice should be determined by the balance of evidence and not by the support for any individual option”. In other words, the magnitude of support / total evidence for different options should not affect choice as long as balance is maintained. Doesn’t seem correct (Steverson et al., 2019).
- “Evidence represented by decision makers should not be autocorrelated across time if the underlying information itself isn’t autocorrelated.” Some evidence exists that this is incorrect (Kelly et al., 2001).
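For what it’s worth, predictions like the RT-shape ones above can be probed by simulating the model directly. A rough sketch (my own, assuming the simplest constant-drift DDM with no across-trial variability, so only a caricature of any fitted model): sweep the boundary as a stand-in for speed-versus-accuracy emphasis and track RT skewness and the SD(RT)/mean(RT) ratio.

```python
import numpy as np

def ddm_rts(drift, boundary, n_trials, dt=0.001, noise=1.0, seed=1):
    # First-passage times of a constant-drift diffusion between +/- boundary
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t
    return rts

# Sweep the boundary as a proxy for increasing accuracy emphasis
for a in (0.5, 1.0, 1.5):
    rts = ddm_rts(drift=1.0, boundary=a, n_trials=300)
    cv = rts.std() / rts.mean()
    skew = ((rts - rts.mean()) ** 3).mean() / rts.std() ** 3
    print(f"boundary={a}: SD/mean={cv:.2f}, skewness={skew:.2f}")
```

The shape of these curves is what the selective influence debate turns on; with drift variability, starting-point variability, or non-decision time added, the predicted functions can change, which is exactly why critics argue the stripped-down version above is not “the” prediction of DDM.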
Interim verdict
DDM clearly can make predictions. However, its “successful” predictions are either not specific to the model or not predictions at all. I will update this verdict as I receive more examples.