How to Carve Intuition at Its Joints: Lessons from the Generality Problem

Edison Yi
Apr 10, 2021

Abstract:

‘Is intuition reliable?’ I argue that this is not the right question to ask. Belief-forming processes such as appealing to intuition can be typed more finely or more coarsely. Appealing to intuition might be a very unreliable process, while appealing to epistemic intuitions might be somewhat more reliable. In this essay, I argue that belief-forming processes involving intuitions should be typed by the algorithm involved, where algorithms are step-by-step procedures mapping inputs to outputs. I further argue that attributions of reliability need to be restricted to the relevant input and output types. Applying these lessons to intuitions, I argue that since intuitions are most likely subserved by many distinct algorithms, reliability should be attributed to individual intuitive algorithms, not to intuition simpliciter. Further, the same algorithm might be more reliable for some kinds of inputs and outputs than for others. Reliability, then, is a property of an algorithm relative to input and output types. Consequently, arguments that rely on attributing a single degree of reliability to intuition simpliciter are fallacious.

Introduction

The topic of whether intuition is reliable has attracted a lot of discussion in philosophy.[1] While appeals to intuition seem ubiquitous in modern philosophy,[2] it seems that such appeals would be unjustified if appealing to intuition is an unreliable belief-forming process.[3] If it turns out that when we appeal to intuitions, we frequently form beliefs that are false, then clearly we would have good reason to stop forming beliefs by appealing to intuitions.

However, we face a generality problem. Suppose that, in the fake barn scenario, I form the belief that Henry doesn’t know that there is a red barn in front of him by appealing to my intuition that Henry lacks knowledge in this case. Is this belief-forming process reliable? Since reliability is a tendency, it must be attributed to process types, not single tokens.[4] But this instance of appealing to intuition is a token of many different process types, some of which might be reliable while others might not be. The problem is that it is not clear which process type we should be concerned with. Is appealing to intuition the relevant process type whose reliability we should care about? Or is appealing to epistemological intuitions the relevant type? Perhaps the type that matters is appealing to epistemological intuitions about red barn cases, with the resulting beliefs formed on Fridays at 21:27?

The question of how to determine the relevant process type for a given token is the generality problem (Feldman 1985). A solution to the generality problem must avoid picking a process type at a level of generality that is too broad or too narrow. The higher the level of generality at which a type sits, the more tokens it contains. Types that are too broad, or at levels of generality that are too high, are ill-suited for epistemic considerations because they fail to recognize epistemically relevant differences between member tokens. For example, even if intuition were unreliable in general, it might still be possible to draw a tenable distinction between expert intuitions and layman intuitions. If expert intuitions are reliable but layman intuitions are not, then my belief might still be justified because it is based on an expert intuition, despite the unreliability of intuition in general. In this case, intuition in general is at too broad a level of generality. On the other hand, types that are too specific, or at levels of generality that are too low, might in extreme cases be constituted by single tokens, and the reliability of such processes just collapses into the truth of the resulting beliefs. However, since beliefs can be true but unjustified and also false but justified, typing processes so narrowly results in a poor correlation between reliability and justification.

The problem of identifying the right level of generality is not limited to intuitions; it is a concern whenever we make reference to the reliability of a process. In this essay, I will argue that the right level of generality is the algorithmic level, where the relevant token processes are further constrained by the relevant input and output types. In other words, the relevant belief-forming process type for a token process is the set of all token processes that take the same relevant input type, use the same algorithm, and produce the same relevant output type. In section 1, I present arguments in favor of identifying the relevant belief-forming process type by identifying the algorithm involved in the causal production of the belief. In section 2, I argue that algorithms aren’t all there is to the target of reliability attributions: reliability attributions must be sensitive to input and output types. But I remain open to the possibility that other factors might be relevant for typing too. In section 3, I apply the lessons from the first two sections and argue that attributions of reliability or unreliability to intuitions must correspondingly be made at the level of the algorithmic, input, and output types involved. We have good reason to believe that what we call ‘intuition’ is actually subserved by multiple distinct algorithms. There are also plausible, epistemically relevant input and output distinctions that we can draw. Consequently, claims that intuitions are reliable or unreliable simpliciter[5] are prone to overgeneralization, and arguments that implicitly rely on the reliability of intuitions being homogeneous are likely to be fallacious (Bealer, 1992).

1. Process Identification with Algorithms

A number of recently proposed solutions to the generality problem involve defining the belief-forming process partially in terms of the algorithm through which the belief has been formed (Beebe, 2004; Kampa, 2018; Lyons, 2019). They claim that two beliefs have the same relevant process type only if the narrowest algorithmic types that were causally involved in producing or sustaining the two beliefs are identical. Following Lyons (2019), I will take an algorithm to be a step-by-step procedure for computing a mapping from certain inputs to certain outputs, where each step further consists in an input-output mapping. To borrow an example from Lyons, multiplication is a function that maps certain inputs (factors) to certain outputs (products). You and I might both try to multiply 5 and 7 but employ very different procedures to achieve the same goal. I may look up a multiplication table to find the correct product of 5 × 7, while you may add 5 to zero 7 times. These distinct ways of doing the calculation constitute distinct algorithms. When forming a belief, the brain employs a complex procedure taking us from whatever inputs we are given to the belief as the output. This is the algorithm we are concerned with.
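To make the idea concrete, here is a minimal Python sketch of two distinct algorithms computing the same multiplication mapping, one by table lookup and one by repeated addition. The code is my illustration, not Lyons’s, and the function names are invented.

```python
# Two distinct algorithms computing the same input-output mapping (multiplication).
# Illustrative sketch only; the names are invented for this example.

# Algorithm 1: look the product up in a precomputed multiplication table.
TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply_by_lookup(a: int, b: int) -> int:
    return TABLE[(a, b)]

# Algorithm 2: add `a` to zero, `b` times.
def multiply_by_addition(a: int, b: int) -> int:
    total = 0
    for _ in range(b):
        total += a
    return total

# Same mapping, different step-by-step procedures:
assert multiply_by_lookup(5, 7) == multiply_by_addition(5, 7) == 35
```

Both procedures realize the same function, but they are distinct algorithms, and on the view under discussion it is the procedure, not the function, that fixes the process type.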

A benefit of conditioning the sameness of processes on the sameness of algorithms is that it narrows down the number of possible process types that could be considered relevant, avoiding types that are too broad. For example, it rules out process types such as ‘beliefs formed by George’ as relevant types for conferring justification on beliefs, since that type clearly has constituent tokens involving different algorithms: George might form beliefs by looking as well as by guessing, and looking and guessing presumably involve different cognitive algorithms.

For algorithmic types to help with the generality problem, there needs to be a principled way of typing algorithms that doesn’t itself run into the generality problem. A plausible way to do so is by relying on cognitive science. Lyons (2019) is explicit about this reliance, writing that ‘the algorithmic typing will be entirely farmed out to cognitive psychology’ (p. 469).

Two points need to be clarified on behalf of Lyons. Firstly, there is little reason to outsource algorithmic typing only to cognitive psychologists, as other branches of cognitive science will have relevant things to say about how to type algorithms.[6] Models from computational neuroscience, for example, are especially relevant for typing algorithms.

Secondly, algorithmic typing isn’t outsourced to current cognitive scientific theories, as there are surely inaccuracies and gaps in knowledge in our current theories, which we don’t want reflected in our epistemology. Instead, we must rely on ideal or complete cognitive scientific theories, similar to how some philosophers appeal to an ideal or future physics to draw the distinction between physical and non-physical entities.

Outsourcing to science offers a principled method of typing processes at the right level of generality. It simultaneously avoids some overly narrow and some overly broad types. Laws that refer to overly narrow types, such as ‘using perception or inference at 9:38pm on a Tuesday’, are not generalisable enough to be admitted by ideal cognitive science; overly broad types might contain tokens that use distinct algorithms that figure in ideal cognitive scientific theories.

In addition, attention to empirical evidence allows us to make epistemically relevant distinctions between belief-forming processes that we could not have made from the armchair. For instance, it is difficult to imagine that, from the armchair, we would have distinguished the reliability of our ability to visually recognize face-like stimuli from our ability to recognize non-face-like stimuli, but empirical evidence shows that they involve distinct processes and can diverge in reliability (Peterson and Rhodes, 2003; Farah, 2004).

In summary: an algorithmic type T is the relevant type for an algorithmic token t if and only if T is the narrowest algorithmic type containing t that is recognized by an ideal or complete cognitive science. Two process tokens p and q are of the same relevant process type P only if the same algorithmic type T is involved in both p and q.

2. Reliability Attributions Must Be Sensitive to Input and Output Types

Despite the benefits of typing processes by algorithm, the algorithm alone isn’t what we attribute reliability to. It can only be part of the story, as carving processes by algorithm can still result in process types that are too broad, as we will see below. Beebe (2004) and Lyons (2019) both recognized this point and supplemented their process definitions with other factors. I will go down a different path here, putting the emphasis on input and output types. I hope to show that input and output types play a significant role in our judgements of reliability and justification, and that reliabilist theories that neglect these roles are implausible.

The type of input that goes into an algorithm matters for judgements of the reliability of the process and the justificational status of the resulting belief. Consider an expert bird watcher who is asked to identify the species of a bird in an image. The bird watcher is presented with two images of birds, one blurry and one clear. Even if she uses the same cognitive algorithm to arrive at a belief about the bird’s species, her belief formed from the blurry image will be less justified than her belief formed from the clear image. In cases where the image is extremely blurry, no belief will be justified and the only rationally permissible attitude will be suspension of belief, even though an identical algorithm may be used as with the clear image. If this is right, then a notion of reliability that correlates with justification needs to attribute degrees of reliability in a way that is sensitive to the type of inputs involved, in addition to the algorithm used.

Taking the input type into account in judgements of reliability is not an ad hoc amendment added solely to enhance reliability’s correlation with justification. Our ordinary reliability evaluations are sensitive to input types, and it is natural for a theory that makes reference to reliability to capture this sensitivity. Consider the expert bird watcher again. Suppose she is well-versed in South American birds but at a loss when asked to recognize European bird species. The same algorithm is involved ‘under the hood’ when she processes both types of images: when she sees a European bird, the algorithm chooses the South American species that the bird is most similar to, and she forms the belief that the bird belongs to that species. She is therefore prone to making many mistakes when forming beliefs about the identity of European bird species, but not South American ones. A natural description of her is: she is reliable when it comes to South American birds, but not European birds. So we often attribute reliability not to an algorithmic type simpliciter, but to an algorithmic type relative to an input type. Reliability is not a property of algorithms but a property of algorithmic types relative to input types.
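As a toy illustration of input-relative reliability, consider the following Python sketch. It is mine, not a model from cognitive science: the species names, feature values, and numbers are all invented. A single nearest-match procedure is held fixed, and its reliability is measured separately for two input types.

```python
# Toy sketch of input-relative reliability. One fixed algorithm: match the
# observed bird to the closest *South American* species it knows.
# Species names and feature values are invented for illustration.

KNOWN_SOUTH_AMERICAN = {
    "hoatzin": (0.9, 0.2),
    "toco toucan": (0.3, 0.8),
}

def classify(features):
    """The single algorithm: pick the nearest known South American species."""
    def dist(species):
        known = KNOWN_SOUTH_AMERICAN[species]
        return sum((x - y) ** 2 for x, y in zip(features, known))
    return min(KNOWN_SOUTH_AMERICAN, key=dist)

def reliability(samples):
    """Proportion of correct answers over (features, true_species) pairs."""
    correct = sum(classify(f) == true for f, true in samples)
    return correct / len(samples)

# South American inputs: the true species is in the algorithm's repertoire.
south_american = [((0.88, 0.22), "hoatzin"), ((0.32, 0.79), "toco toucan")]
# European inputs: the true species never is, so every answer is wrong.
european = [((0.5, 0.5), "european robin"), ((0.1, 0.9), "white stork")]

print(reliability(south_american))  # 1.0: reliable for this input type
print(reliability(european))        # 0.0: unreliable for this input type
```

The procedure is identical in both runs; only the input type changes, and the reliability figure changes with it.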

The output type is also relevant for reliability judgements, even though it isn’t part of the causal process (the output is just the belief itself). It might be instructive to abstract away from humans here and look at how we attribute reliability to tools. Suppose a novice programmer has just created a computer algorithm that can do multiplication reliably unless the correct product is over 1000; if the product is over 1000, the algorithm always gives an output of 1000. The algorithm can take any number of factors as inputs, and the individual inputs can be any number (e.g. it will give a correct answer for 1000000 × 0.000001 × 2 × π). If you were asked whether the algorithm is reliable, a natural answer is: ‘Well, it is reliable for outputs that are less than or equal to 1000.’ Similarly, knowing the limitations of the algorithm, you are justified in believing that the output is correct for outputs less than or equal to 1000, but unjustified in believing so otherwise.
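Here is a minimal Python reconstruction of the novice programmer’s algorithm as described above (the function name is mine):

```python
import math

def capped_multiply(*factors):
    """Multiply any number of factors, but never output more than 1000."""
    product = 1
    for f in factors:
        product *= f
    return min(product, 1000)

# Correct whenever the true product is at most 1000 ...
print(capped_multiply(1000000, 0.000001, 2, math.pi))  # ~6.2832, correct
# ... but pinned at 1000 otherwise:
print(capped_multiply(50, 30))  # 1000, though the true product is 1500
```

Holding this algorithm fixed, the natural reliability attribution is output-relative: any output below 1000 is guaranteed correct, while an output of exactly 1000 may be a capped error.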

Such non-human examples are useful because it is clear in these cases that we are holding the algorithm constant, but human examples are abundant too. Suppose there is a doctor whose cognitive algorithm for diagnosing COVID-19 is overly sensitive. Consequently, if he believes that you don’t have the virus, he is always correct; but if he believes that you have the virus, it is a false positive a significant proportion of the time. You would naturally say that the doctor is very reliable for negative diagnoses, since these are always accurate, but not so reliable for positive diagnoses. If you know that he is overly sensitive, you are justified in believing the doctor’s diagnosis when it is negative, but less justified in believing him when the diagnosis is positive. These cases give us reason to think that the targets of ordinary reliability attributions are often not just the algorithmic type relative to an input type, but also the algorithmic type relative to an output type. Further, to capture the sensitivity of our justificational evaluations to output types, an epistemically relevant reliability judgement needs to be output-relative.
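The output-relative pattern can be made explicit with a small Python sketch. The diagnostic records below are invented for illustration; the point is only that accuracy is computed conditional on the output type produced, not on the algorithm as a whole.

```python
# Toy sketch of output-relative reliability (records invented for illustration).
# Each record pairs the doctor's diagnosis with whether the patient
# actually has the virus.
records = [
    ("positive", True), ("positive", False), ("positive", True),
    ("positive", False), ("negative", False), ("negative", False),
    ("negative", False), ("negative", False),
]

def reliability_for_output(records, diagnosis):
    """Accuracy of the doctor's verdicts, restricted to one output type."""
    relevant = [(d, sick) for d, sick in records if d == diagnosis]
    correct = sum((d == "positive") == sick for d, sick in relevant)
    return correct / len(relevant)

print(reliability_for_output(records, "negative"))  # 1.0: negatives always right
print(reliability_for_output(records, "positive"))  # 0.5: many false positives
```

One and the same diagnostic algorithm generates all the records; the reliability figures differ only because they are relativized to the output type.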

Just as we want to outsource algorithmic typing to science, we also want to outsource input and output typing to science, for a principled, non-arbitrary way of typing that delivers types at the right level of generality. A natural view is that, for an input or output type to be relevant to the evaluation of reliability, it must figure in some ideal or complete theory in cognitive science. Some cognitive scientific laws will likely make reference to input types while others will refer to output types. Consider the overly sensitive doctor. Supposing a single algorithm drives his diagnoses, how would a cognitive scientist describe this phenomenon? The cognitive scientist would say something like: ‘the algorithm is biased toward giving a positive diagnosis’. In other words, the description would be couched in terms of the output type. The description will likely make no reference to any input types, because the doctor’s sensitivity isn’t specific to any well-defined input type. But suppose another doctor’s cognitive algorithm leads him to give positive diagnoses only to patients who cough. A description of this doctor’s algorithm will now naturally make reference to an input type, namely the symptom of coughing.

I don’t wish to claim that no factor other than the algorithmic, input, and output types picked out by cognitive science is relevant for reliability evaluation, nor am I proposing that the algorithm plus the input and output types constitute a full account of what we attribute reliability to.[7] For the purposes of this essay, I only wish to claim that, for a given belief-forming process, its relevant process type contains only process tokens that share the same algorithmic, input, and output types.

Let’s see how this works in a concrete case. When we say that Sam has a reliably formed belief that the animal in front of him is a penguin, we are attributing reliability to the algorithm that gave rise to his belief, relative to the relevant type of visual stimuli and the relevant type of beliefs. The relevant algorithm might be a Convolutional-Neural-Network-like algorithm[8] that is domain-specifically activated for animate object recognition[9]. Perhaps the relevant input type will be visual stimuli that are of high enough resolution, in good lighting conditions, etc. The relevant output type might be beliefs about the categorization of animate objects at the basic level.[10] What exactly the relevant types are will ultimately depend on the details of the ideal cognitive scientific theories.

3. Why We Mustn’t Attribute Reliability to Intuition Simpliciter

An implication of my view is that, in evaluating whether intuition is reliable, it is misleading to talk about the reliability of intuition monolithically. Yet philosophers frequently make reference to the epistemic status of intuition, full stop. Nado (2014) writes:

‘There is a general — though admittedly not universal — tendency to write as though the intuitive judgements invoked by philosophers stand or fall together, and that their doing so will be a consequence of the reliability or lack thereof of some unified mental capacity called “intuition”.’ (p. 16)

Examples of works that have to some extent treated intuitions monolithically include Pust (2000), Goldman and Pust (1998), Korman (2005), Sosa (2006), and Audi (2008), among others.

An upshot of this monolithic treatment is that philosophers often talk about the reliability of intuition simpliciter, failing to distinguish between the reliability of different types of intuitions, even though, as I will argue, distinct cognitive algorithms likely underlie different types of intuitions. They also fail to distinguish between the reliability of intuition for different types of questions, cases, and answers. Following my earlier defense of the importance of attributing reliability in a way that is sensitive to algorithmic, input, and output types, I argue that these distinctions are crucial.

Evidence from psychology suggests that intuiting likely involves multiple distinct algorithms. Models of intuition in the psychology literature are often explicitly multi-process. Gore and Sadler-Smith (2011) proposed three domain-general mechanisms of intuiting, which can be deployed to solve novel problems or old problems in novel ways. In addition to these domain-general mechanisms, they describe domain-specific ones, identifying four domains of intuition: problem-solving (expert intuition), creativity (insights/eureka moments), moral judgements, and social judgements (the ability to attribute mental states to others), each with its own information-processing mechanism and distinct neural correlates. Similarly, Glöckner and Witteman (2010) argue that intuition is not a homogeneous concept but a label for distinct cognitive mechanisms.

Of course, we can’t be sure that ideal cognitive science will say that intuition is composed of different algorithms, but we have very good reason to expect that it will. Domain-specific processing is widely posited in the psychological literature, and we know from double dissociation evidence that the cognitive capacities folk psychology posits are almost invariably subserved by distinct algorithms[11]. Even within a single domain, such as mathematics, a double dissociation between impairments in multiplication and subtraction has been found (Dehaene and Cohen, 1997). Given the prevalence of domain-specific processing in the brain, it is extremely plausible that the intuitive capacity is likewise constituted by a multiplicity of domain-specific algorithms. Given the arguments for typing processes by algorithm from the previous sections, reliability attributions should be directed at individual types of intuitive algorithms, rather than at all of them as a whole.

If one wants to contest the claim that intuitions consist of distinct algorithmic types, one must do so on empirical grounds, appealing to evidence from cognitive science. As Talbot (2009) put it, ‘a correct understanding of how intuitions work can only be gained empirically and only by doing psychology, not philosophy’. Now, there are many theories that type intuitions non-empirically. Notably, some argue that intuitions are a unified kind in virtue of a shared phenomenology (Pollock, 1974; Plantinga, 1993). This might very well be true. However, the crucial question is whether the unified kind that intuitions supposedly constitute is at the level of generality relevant for the purposes of reliability evaluation. As Schukraft (2016) argues, cases of synesthesia demonstrate that the same phenomenology need not always accompany the same processes. There might be someone who has no evidence regarding the result of a coin flip but has a hunch that the coin will land heads, and her hunch might be accompanied by exactly the same kind of phenomenology that accompanies philosophical intuitions. Surely, we want to deny that her hunch has the same evidential status as the intuitions we take to ground philosophical theories.

Even if a single algorithm were causally responsible for all philosophically relevant intuitions, intuition might still be less reliable for some types of inputs and outputs than for others. Let’s start with inputs. Inputs to a belief-generating algorithm need not be evidence. Descriptions of cases could, for example, serve as inputs, and they very frequently do serve as inputs to intuitions in philosophy, since thought experiments are heavily employed in philosophical research. Research from psychology and experimental philosophy suggests that people’s moral intuitions about cases can be influenced by morally irrelevant factors, such as whether options are phrased in terms of ‘saving’ or ‘killing’ even though the options are actually identical (Tversky & Kahneman, 1982), and whether a case concerns the moral permissibility of the actions of the participants themselves or of a third party (Nadelhoffer and Feltz, 2008). Plausibly, if these findings hold up, then intuition is less reliable for case descriptions that make people vulnerable to such effects (Sinnott-Armstrong, 2008). There are of course disputes about whether these findings genuinely concern the kind of intuitions that philosophers use. The point is not that these are the genuine input types that attributions of reliability to intuitions should distinguish; rather, they serve as examples of what such types could look like. The input types that are actually relevant for reliability are empirical facts about which we currently know little.

For output types, the natural way to carve them is by domain. It is possible that intuition is less reliable for moral beliefs than for linguistic beliefs. But of course, the output types that figure in ideal cognitive science may be very different. It may distinguish between types of intuitive beliefs based on factors such as the degree to which they are subject to framing effects (perhaps only moral beliefs, but not mathematical beliefs, suffer from framing effects), or the amount of interpersonal variation regarding the beliefs. Weinberg and colleagues (2001) argue that epistemic intuitions vary with, among other things, culture and socioeconomic group. If these intuitions are incompatible, and the same algorithm is involved[12], then the algorithm would seem to be unreliable for this domain. By analogy, suppose there is a room. People with perfectly normal visual systems enter the room and report different colors: one reports blue, another red. Clearly, we have reason to doubt the reliability of color perception for beliefs about this room’s color, even though color perception might be reliable generally. Intuitions may likewise be unreliable for belief types about which there is much interpersonal disagreement but more reliable for other belief types.

One implication of my view is that arguments that seek to attack or support the role intuitions play in philosophy need to be carefully localized to the relevant algorithmic, input, and output types. Failing to do so may lead to overgeneralization or, at worst, fallacious arguments. An example is Mizrahi (2013), who argues that intuitions are unreliable since the inference from its seeming to me that P to P’s being true is unreliable. To argue for this, Mizrahi points to cases where such inferences go wrong. For example, one might infer that since it seems to most students in the class that (a) is the correct answer to a multiple-choice question, the correct answer is (a). But (a) is actually incorrect: students confused (a) with (b) because they are similarly worded. Even granting that appeals to intuition must take the form of inferences from intellectual seemings and that the students’ seeming is of the same phenomenal kind as philosophers’, Mizrahi still hasn’t shown what he intended to show. Inferences from seemings might just be unreliable for multiple-choice questions where the choices are similarly worded. In addition, even if the phenomenal experience is the same for students and philosophers, the algorithms by which they arrived at the seemings, and therefore at the ultimate beliefs, are likely to be vastly different.

An example where an attempt to support the use of intuitions goes too far can be found in Bealer (1992), who argues that objections to the evidential role of intuitions themselves rely on intuitions as evidence, so that if intuitions are unreliable, these objections are self-defeating. If the conclusion is taken to be that all objections which rely on intuitions to undermine the use of intuitions as evidence are self-defeating, the argument is downright fallacious. Even granting the premise that objections to the use of intuitions must rely on intuitions themselves, the argument only shows that universal rejections of the use of intuitions are misguided. There is nothing self-defeating about someone who uses a limited type of intuition as evidence to argue that all other types of intuitions are unreliable.

There is, however, still a place for talk of the reliability of intuition simpliciter. Sometimes it is sensible to talk about the reliability of intuition as a simplifying generalization, the same kind of simplifying generalization scientists make when they treat all gases as the same for certain calculations. This might be appropriate when we are ignorant of the distinct algorithms subserving intuitions and of the finer distinctions we ought to make between different inputs and outputs, or when the difference these distinctions make is small for the discussion at hand. My goal is not to change how people talk about the reliability of intuitions from now on. I only wish to point out that when people talk about the reliability of intuition simpliciter, they are simplifying and generalizing; the underlying facts about the reliability of intuitions are a lot messier and are likely couched in terms of algorithmic, input, and output types.

Conclusion

This essay has two main takeaways. Firstly, we have good reason to think that reliability should be attributed to an algorithm relative to input and output types. Secondly, since reliability should be attributed to individual algorithms that subserve intuitions relative to input and output types, there is no reliability of intuitions simpliciter.

In response to empirical challenges showing that certain factors cause intuitions to go awry, Sosa (2007) writes that ‘the upshot is that we have to be careful in how we use intuition, not that intuition is useless’ (p. 105). This essay suggests some precise ways in which we must be careful in using intuition: we should pay attention to the specific algorithmic, input, and output types involved in an appeal to intuition.

That our belief-forming algorithms are better for some input or output types than others is just what we should expect. These algorithms are meant to serve biological and social functions, and they should not be expected to be reliable for problems they were not meant to solve. A heart being bad at pumping oil doesn’t mean that it is defective. Likewise, our capacity for intuition being unreliable for artificial, difficult moral questions doesn’t mean that it is bad at its job when judging ordinary, mundane cases. On the flip side, intuition being good at mundane cases doesn’t give us license to trust its judgements about fringe, weird philosophical thought experiments.

Of course, intuition is only one area where the first takeaway applies; a similar moral can be drawn for perception. But what is obvious for perception is far from obvious for intuition. Few people assume that all types of perception have the same level of reliability, or that visual perception being generally reliable means that beliefs about relative sizes in normal settings are as reliably formed as those formed by looking into an Ames room. It is easy to make these mistakes when it comes to intuitions, and so we must be cautious.

Bibliography

1. Audi, Robert (2008). Intuition, Inference, and Rational Disagreement in Ethics. Ethical Theory and Moral Practice 11 (5):475–492.

2. Bealer, George (1992). The incoherence of empiricism. Aristotelian Society Supplementary Volume 66 (1):99–138.

3. Bealer, George (1998). Intuition and the Autonomy of Philosophy. In Michael DePaul & William Ramsey (eds.), Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry. Rowman & Littlefield. pp. 201–240.

4. Beebe, James (2004). The Generality Problem, Statistical Relevance and the Tri-Level Hypothesis. Noûs 38 (1):177–195.

5. Bishop, Michael A. (2010). Why the generality problem is everybody’s problem. Philosophical Studies 151 (2):285–298.

6. Cappelen, Herman (2012). Philosophy Without Intuitions. Oxford University Press UK.

7. Caramazza, A., & Shelton, J. R. (1998). Domain-specific knowledge systems in the brain: The animate-inanimate distinction. Journal of cognitive neuroscience, 10(1), 1–34.

8. Comesaña, Juan (2006). A Well-Founded Solution to the Generality Problem. Philosophical Studies 129 (1):27–47.

9. Dehaene, S., & Cohen, L. (1997). Cerebral pathways for calculation: Double dissociation between rote verbal and quantitative knowledge of arithmetic. Cortex, 33(2), 219–250.

10. Deutsch, Max (2015). The Myth of the Intuitive. The MIT Press.

11. Farah, M. J. (2004). Visual agnosia. MIT press.

12. Feldman, R. (1985). Reliability and Justification. The Monist, 68: 159–74

13. Gabrieli, J. D., Fleischman, D. A., Keane, M. M., Reminger, S. L., & Morrell, F. (1995). Double dissociation between memory systems underlying explicit and implicit memory in the human brain. Psychological Science, 6(2), 76–82.

14. Glisky, E. L., Polster, M. R., & Routhieaux, B. C. (1995). Double dissociation between item and source memory. Neuropsychology, 9(2), 229.

15. Glöckner, A., & Witteman, C. (2010). Beyond dual-process models: A categorisation of processes underlying intuitive judgement and decision making. Thinking & Reasoning, 16(1), 1–25.

16. Goldman, Alvin I. & Pust, Joel (1998). Philosophical Theory and Intuitional Evidence. In Michael Depaul & William Ramsey (eds.), Rethinking Intuition: The Psychology of Intuition and its Role in Philosophical Inquiry. Rowman & Littlefield.

17. Gore, J., & Sadler-Smith, E. (2011). Unpacking intuition: A process and outcome framework. Review of General Psychology, 15(4), 304–316.

18. Hajibayova, L. (2013). Basic-level categories: A review. Journal of Information Science, 39(5), 676–687.

19. Johnson, Michael & Nado, Jennifer Ellen (2014). Moderate Intuitionism. In Anthony Robert Booth & Darrell P. Rowbottom (eds.), Intuitions. Oxford University Press.

20. Kahneman, Daniel; Slovic, Paul & Tversky, Amos (eds.) (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press.

21. Kampa, Samuel (2018). A new statistical solution to the generality problem. Episteme 15 (2):228–244.

22. Korman, Dan (2005). Law necessitarianism and the importance of being intuitive. Philosophical Quarterly 55 (221):649–657.

23. Kuzovkin, I., Vicente, R., Petton, M., Lachaux, J. P., Baciu, M., Kahane, P., … & Aru, J. (2018). Activations of deep convolutional neural networks are aligned with gamma band activity of human visual cortex. Communications biology, 1(1), 1–12.

24. Lomber, S. G., & Malhotra, S. (2008). Double dissociation of ‘what’ and ‘where’ processing in auditory cortex. Nature neuroscience, 11(5), 609.

25. Lycan, William G. (1996). Bealer on the possibility of philosophical knowledge. Philosophical Studies 81 (2–3):143–150.

26. Lyons, Jack C. (2019). Algorithm and Parameters: Solving the Generality Problem for Reliabilism. Philosophical Review 128 (4):463–509.

27. Malone, D. R., Morris, H. H., Kay, M. C., & Levin, H. S. (1982). Prosopagnosia: a double dissociation between the recognition of familiar and unfamiliar faces. Journal of Neurology, Neurosurgery & Psychiatry, 45(9), 820–822.

28. Marr, David (1982). Vision. W. H. Freeman.

29. Mizrahi, Moti (2013). More Intuition Mongering. The Reasoner 7 (1):5–6.

30. Nadelhoffer, Thomas & Feltz, Adam (2008). The Actor–Observer Bias and Moral Intuitions: Adding Fuel to Sinnott-Armstrong’s Fire. Neuroethics 1 (2):133–144.

31. Nado, Jennifer (2014). Why Intuition? Philosophy and Phenomenological Research 89 (1):15–41.

32. Nguyen, A. P., Spetch, M. L., Crowder, N. A., Winship, I. R., Hurd, P. L., & Wylie, D. R. (2004). A dissociation of motion and spatial-pattern vision in the avian telencephalon: implications for the evolution of “visual streams”. Journal of Neuroscience, 24(21), 4962–4970.

33. Peterson, Mary A. & Rhodes, Gillian (eds.) (2003). Perception of Faces, Objects, and Scenes: Analytic and Holistic Processes. Oxford University Press.

34. Plantinga, Alvin (1993). Warrant and Proper Function. Oxford University Press.

35. Pollock, John (1974). Knowledge and Justification. Princeton University Press.

36. Pust, Joel (2000). Intuitions as Evidence. Routledge.

37. Schukraft, Jason (2016). Carving Intuition at its Joints. Metaphilosophy 47 (3):326–352.

38. Sinnott-Armstrong, W. (2008). “Framing Moral Intuitions,” in W. Sinnott-Armstrong (ed.), Moral Psychology, Volume 2. Cambridge, MA: MIT Press, pp. 47–75.

39. Sosa, David (2006). Scepticism about intuition. Philosophy 81 (4):633–648.

40. Sosa, Ernest (1998). Minimal Intuition. In Michael DePaul & William Ramsey (eds.), Rethinking Intuition. Rowman & Littlefield. pp. 257–269.

41. Talbot, Brian (2009). Psychology and the Use of Intuitions in Philosophy. Studia Philosophica Estonica 2 (2):157–176.

42. Weinberg, Jonathan M.; Crowley, Stephen; Gonnerman, Chad; Vandewalker, Ian & Swain, Stacey (2012). Intuition & calibration. Essays in Philosophy 13 (1):15.

43. Weinberg, Jonathan M.; Nichols, Shaun & Stich, Stephen (2001). Normativity and epistemic intuitions. Philosophical Topics 29 (1–2):429–460.

44. Winters, B. D., Forwood, S. E., Cowell, R. A., Saksida, L. M., & Bussey, T. J. (2004). Double dissociation between the effects of peri-postrhinal cortex and hippocampal lesions on tests of object recognition and spatial memory: heterogeneity of function within the temporal lobe. Journal of Neuroscience, 24(26), 5901–5908.

[1] See, among others, Bealer (1998); Mizrahi (2013); Sosa (1998); Weinberg et al (2012); Johnson & Nado (2014).

[2] But see Cappelen (2012) and Deutsch (2015).

[3] This is, of course, true if reliabilism about justification is true. But even if reliabilism about justification is false, it is very plausible that unreliability of intuitions would at least present a serious problem for those who wish to claim that beliefs formed by appealing to intuitions are justified.

[4] But see Comesaña (2006).

[5] For example, Mizrahi (2013) writes that ‘intellectual intuition is an unreliable belief-forming-process’; Lycan (1996) writes that ‘I think philosophical intuition is and always will be laughably unreliable.’

[6] Though not all theories from cognitive science are relevant. We should be wary of defining processes by purely implementational level characteristics rather than more abstract algorithmic properties. As Beebe (2004) pointed out, process types are multiply realizable, but implementational level properties are often not. So, at the very least, a theory is relevant to us only if it is on the algorithmic level of explanation (Marr 1982).

[7] A full account might need to give room to external conditions that could not plausibly figure in cognitive scientific laws, such as whether I am using an accurate thermometer to tell the temperature or an inaccurate one.

[8] see Kuzovkin et al (2018) for evidence that the human visual system employs a CNN-like algorithm.

[9] see Caramazza & Shelton (1998) for evidence that animate and inanimate object recognition involve distinct mechanisms.

[10] see Hajibayova (2013) for a review of the literature on basic level categories.

[11] We have evidence of double dissociation in memory (Glisky et al., 1995; Gabrieli et al., 1995; Winters et al., 2004), visual processing (Malone et al., 1982; Nguyen et al., 2004), auditory processing (Lomber & Malhotra, 2008), etc.

[12] It is actually quite likely that different algorithms are involved across cultural groups. As Weinberg and colleagues noted, there are systematic differences in cognitive style between the East Asian subjects and the American subjects that they tested.


Edison Yi

This blog contains a collection of satires, notes, and essays on philosophy, economics, etc. I’m a master’s student in Philosophy at Oxford.