Mechanization of Thought Processes and the Language of Thought Hypothesis


The Computational Theory of Mind (CTM) comprises two hypotheses. The former is the hypothesis that mental processes are causal processes; the latter is the hypothesis that propositional attitudes are relations between thinkers and mental representations. Formal procedures can be mechanized, and thus implemented as causal processes in a physical system such as the brain.

It is among the origins of work in artificial intelligence, and though there has since been much debate about whether the digital computer is the best model for the brain (see below), many researchers still presume linguistic representation to be a central component of thought.

Second, CTM offers an account of how a physical object (in particular, the brain) can produce rational thought and behavior. The answer is that it can do so by implementing rational processes as causal processes. This answer provides a response to what some philosophers (most famously Descartes) have believed: that no merely physical thing could produce rational thought and behavior. It therefore stands as a major development in the philosophy of mind.

Theories of Meaning

Explaining rationality in purely physical terms is one task for a naturalized theory of mind. In addition, CTM lends itself to a physicalist account of intentionality. There are two general strategies here. Internalist accounts explain meaning without making mention of any objects or features external to the subject. For example, conceptual role theories (see, for instance, Loar) explain the meaning of a mental representation in terms of the relations it bears to other representations in the system.

Externalist accounts explicitly tie the meaning of mental representations to the environment of the thinker. For example, causal theories (see, for instance, Dretske) explain meaning in terms of causal regularities between environmental features and mental representations.

A well-known difficulty for such theories is the disjunction problem. For example, on a dark evening, someone might easily mistake a cow for a horse; in other words, a cow might cause the tokening of a mental representation that means horse. But if, as causal theories have it, the meaning of a representation is determined by the object or objects that cause it, then the meaning of such a representation is not horse, but rather horse or cow, since the type of representation is sometimes caused by horses and sometimes caused by cows.

Fodor's response is that the errant causal connection depends on the veridical one. That is, if the representation were not caused by horses, then it would not sometimes be caused by cows; but if it were not caused by cows, it would still be caused by horses. This dependence is asymmetric, and the asymmetry is what makes the representation mean horse rather than horse or cow. As all of the above examples explain meaning in physical terms, the coupling of a successful CTM with a successful version of any of them would yield an entirely physical account of two of the most important general features of the mind: rationality and intentionality.

Arguments for LOTH

LOTH, then, is the claim that mental representations possess combinatorial syntax and compositional semantics—that is, that mental representations are sentences in a mental language.

This section describes four central arguments for LOTH. Fodor argued that LOTH was presupposed by all plausible psychological models. Fodor and Pylyshyn argue that thinking has the properties of productivity, systematicity, and inferential coherence, and that the best explanation for such properties is a linguistically structured representational system.

In short, the argument was that the only game in town for explaining rational behavior presupposed internal representations with a linguistic structure. The development of connectionist networks—computational systems that do not presuppose representations with a linguistic format—therefore poses a serious challenge to this argument.

In the 1980s, the idea that intelligent behavior could be explained by appeal to connectionist networks grew in popularity, and Fodor and Pylyshyn argued on empirical grounds that such an explanation could not work, and thus that even though linguistic computation was no longer the only game in town, it was still the only plausible explanation of rational behavior.

Their argument rested on claiming that thought is productive, systematic, and inferentially coherent.

Productivity

Productivity is the property a system of representations has if it is capable, in principle, of producing an infinite number of distinct representations.

For example, sentential logic typically allows an infinite number of sentence letters (A, B, C, ...), and thus the system is productive. By contrast, a system with a fixed, finite stock of representations that cannot be combined into new ones is not productive.

Productivity can be achieved in systems with a finite number of atomic representations, so long as those representations may be combined to form compound representations, with no limit on the length of the compounds. That is, productivity can be achieved with finite means by employing both combinatorial syntax and compositional semantics (see the sketch below). Fodor and Pylyshyn argue that mental representation is productive, and that the best explanation for its being so is that it is couched in a system possessing combinatorial syntax and compositional semantics.
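
To make the finite-means point concrete, here is a minimal Python sketch (purely illustrative; the atoms and the single connective are stand-ins invented for the example, not anything from the text): a finite atomic vocabulary plus one combinatorial rule already generates a set of well-formed expressions that grows without bound.

```python
from itertools import product

# Illustrative sketch: a finite stock of atomic symbols plus one
# combinatorial rule ("and") yields ever more compound representations,
# with no upper bound on their length.
ATOMS = ["A", "B", "C"]  # finite atomic vocabulary

def expressions(depth):
    """All well-formed expressions built with at most `depth` rounds of combination."""
    if depth == 0:
        return set(ATOMS)
    smaller = expressions(depth - 1)
    compounds = {f"({p} and {q})" for p, q in product(smaller, smaller)}
    return smaller | compounds

for d in range(3):
    print(d, len(expressions(d)))  # 3, 12, 147: growth without bound
```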

They first claim that natural languages are productive. For example, English possesses only a finite number of words, but because there is no upper bound on the length of sentences, there is no upper bound on the number of unique sentences that can be formed.

More specifically, they argue that the capacity for sentence construction of a competent speaker is productive—that is, competent speakers are able to create an infinite number of unique sentences.

Of course, this is a claim about what is possible in principle. No individual speaker will ever construct more than a finite number of unique sentences. Nevertheless, Fodor and Pylyshyn argue that this limitation is a result of having finite resources, such as time. The argument proceeds by noting that, just as competent speakers of a language can compose an infinite number of unique sentences, they can also understand an infinite number of unique sentences.

Fodor and Pylyshyn write that "there are indefinitely many propositions which the system can encode. However, this unbounded expressive power must presumably be achieved by finite means. The way to do this is to treat the system of representations as consisting of expressions belonging to a generated set."

More precisely, the correspondence between a representation and the proposition it expresses is, in arbitrarily many cases, built up recursively out of correspondences between parts of the expression and parts of the proposition.

But, of course, this strategy can only operate when an unbounded number of the expressions are non-atomic. So linguistic and mental representations must constitute [systems possessing combinatorial syntax and compositional semantics]. But since humans are finite creatures, they cannot possess an infinite number of unique atomic mental representations.

Thus, they must possess a system that allows for construction of an infinite number of thoughts given only finite atomic parts. The only systems that can do that are systems that possess combinatorial syntax and compositional semantics. Thus, the system of mental representation must possess those features.

Systematicity

Systematicity is the property a representational system has when the ability of the system to express certain propositions is intrinsically related to the ability the system has to express certain other propositions (where the ability to express a proposition is just the ability to token a representation whose content is that proposition).

For example, sentential logic is systematic with respect to the propositions "Bill is boring and Fred is funny" and "Fred is funny and Bill is boring," as it can express the former if and only if it can also express the latter. As with the argument from productivity, Fodor and Pylyshyn argue that thought is largely systematic, and that the best explanation for its being so is that mental representation possesses a combinatorial syntax and compositional semantics.

The argument rests on the claim that the only thing that can account for two propositions being systematically related within a representational system is that the expressions of those propositions within the system are compound representations having the same overall structure and the same components, differing only in the arrangement of the parts within the structure, and whose content is determined by that structure, those parts, and that arrangement.

That is, they are both conjunctions, they have the same components, they only differ in the arrangement of the components within the structure, and the content of each is determined by their structure, their parts, and the arrangement of the parts within the structure. But, the argument continues, any representational system that possesses multiple compound representations that are capable of having the same constituent parts and whose content is determined by their structure, parts and arrangement of parts within the structure is a system with combinatorial syntax and compositional semantics.

Hence, systematicity guarantees linguistically structured representations. Fodor and Pylyshyn argue that, if thought is largely systematic, then it must be linguistically structured.

They argue that for the most part it is, pointing out that anyone who can entertain the proposition that John loves Mary can also entertain the proposition that Mary loves John. What explains this is that the underlying representations are compound, have the same parts, and have contents that are determined by the parts and the arrangement of the parts within the structure (see the sketch below).
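
A toy illustration may help. The following Python sketch (with a deliberately tiny, invented grammar, nothing from the text) shows why, in a combinatorial system, the capacity to token one arrangement of parts automatically brings with it the capacity to token the others.

```python
# Illustrative sketch of systematicity: in a combinatorial system, any
# subject-verb-object arrangement of the available parts is tokenable,
# so the capacity to token "John loves Mary" brings with it the capacity
# to token "Mary loves John".
NAMES = {"John", "Mary"}
VERBS = {"loves"}

def can_token(sentence):
    """The system can token any subject-verb-object arrangement of its parts."""
    words = sentence.split()
    return (len(words) == 3 and words[0] in NAMES
            and words[1] in VERBS and words[2] in NAMES)

assert can_token("John loves Mary") and can_token("Mary loves John")
```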

But then what underlies the ability to entertain those propositions is a representational system that is linguistically structured. See Johnson for an argument that language, and probably thought as well, is not systematic.

Inferential Coherence

A system is inferentially coherent with respect to a certain kind of logical inference if, given that it can draw one or more specific inferences that are instances of that kind, it can draw any specific inferences that are of that kind.

Consider, for example, the inference from A to B, where A is a logical conjunction and B is its first conjunct. A system that can draw the inference from A to B is a system that is able to infer the first conjunct from a conjunction with two conjuncts, in at least one instance. A system may or may not be able to do the same given other instances of the same kind of inference. It may not, for example, be able to infer "Bill is boring" from "Bill is boring and Fred is funny." If it can infer the first conjunct from a logical conjunction regardless of the content of the proposition, then it is inferentially coherent with respect to that kind of inference.

As with productivity and systematicity, Fodor and Pylyshyn point to inferential coherence as a feature of thought that is best explained on the hypothesis that mental representation is linguistically structured. The argument here is that what best explains inferential coherence with respect to a particular kind of inference is that the syntactic structure of the representations involved mirrors the semantic structure of the propositions represented.

For example, if all logical conjunctions are represented by syntactic conjunctions, and if the system is able to separate the first conjunct from such representations, then it will be able to infer, for example, "Emily is in Scranton" from "Emily is in Scranton and Judy is in New York," and it will also be able to infer "Bill is boring" from "Bill is boring and Fred is funny," and so on for any logical conjunction. Thus it will be inferentially coherent with respect to that kind of inference (see the sketch below).
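
The point can be made concrete with a short Python sketch (illustrative only; the tuple encoding of conjunctions is an assumption of the example): an inference rule stated over syntactic form alone applies to every conjunction alike, whatever the conjuncts are about.

```python
# Illustrative sketch of inferential coherence: conjunction elimination
# is defined over syntactic shape, so a system that applies it once can
# apply it to ANY conjunction, regardless of what the conjuncts mean.
def first_conjunct(representation):
    """From ('AND', p, q) infer p, whatever the contents of p and q."""
    op, p, q = representation
    assert op == "AND", "rule applies only to conjunctions"
    return p

print(first_conjunct(("AND", "Emily is in Scranton", "Judy is in New York")))
print(first_conjunct(("AND", "Bill is boring", "Fred is funny")))
```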

If the syntactic structure of all the representations matches the logical structure of the propositions represented, and if the system has general rules for processing those representations, then it will be inferentially coherent with respect to any of the kinds of inferences it can perform.

Representations whose syntactic structure mirrors the logical structure of the propositions they represent, however, are representations with combinatorial syntax and compositional semantics; they are linguistically structured representations. Thus, if thought is inferentially coherent, then mental representation is linguistically structured. Any example of inferential coherence is best explained by appeal to linguistically structured representations.

Hence, inferential coherence in human thought is best explained by appeal to linguistically structured representations.

Objections to LOTH

This section describes five objections to LOTH. The first is the problem of individuating the symbols of the language of thought, which if unsolvable would prove fatal for LOTH, at least insofar as LOTH is to be a component of a fully naturalized theory of mind, or insofar as it is to provide a framework within which psychological generalizations ranging across individuals may be made.

The second is the problem of explaining context-dependent properties of thought, which should not exist if thinking is a computational process. The third is the objection that contemporary cognitive science shows that some thinking takes place in mental images, which do not have a linguistic structure, so LOTH cannot be the whole story about rational thought.

The fourth is the objection that systematicity, productivity, and inferential coherence may be accounted for in representational systems that do not employ linguistic formats (such as maps), so the arguments from those features do not prove LOTH. The fifth is the argument that connectionist networks, computational systems that do not employ linguistic representation, provide a more biologically realistic model of the human brain than do classical digital computers. The last part briefly raises the question of whether the mind is best viewed as an analog or digital machine.

Individuating Symbols

An important and difficult problem concerning LOTH is the individuation of the primitive symbols of the language of thought, the atomic mental representations. There are three possibilities for doing so: by their meanings, by their syntax (conceived of as brain states), or by their computational roles. Schneider (a) argues that none of these proposals is consistent with the roles that symbols are supposed to play within LOTH.

In particular, an appeal to meaning in order to individuate symbols would not reduce intentionality to purely physical terms, and would thus stand opposed to a fully naturalized philosophy of mind. An appeal to syntax conceived of as brain states would amount to a type-identity theory for mental representation, and would thus be prone to difficulties faced by a general type-identity theory of mental states.

And an appeal to computational role would render impossible an explanation of how concepts can be shared by individuals, since no two individuals will employ symbols that have identical computational roles. A failure to explain how concepts may be shared, moreover, would render impossible the stating of true psychological generalizations ranging across individuals. See Schneider (b) for a proposed solution to this problem.

Context-Dependent Properties of Thought

Fodor himself has argued that CTM is limited in scope. In his view, even were the theory to be completed, it would not offer an entire picture of the nature of thought (see Fodor). His primary argument for this conclusion is that computation is sensitive only to the syntax of the representations involved, so if thinking is computation it should be sensitive only to the syntax of mental representations, but quite often this appears not to be so.

More specifically, the syntax of a representation is context-independent, but thoughts often have properties that are context-dependent: the same thought may prompt different thoughts in different contexts. If thinking were computation, there would seem to be no explanation of this, since the computations are not sensitive to those contexts. Thus, according to Fodor, there is much cognition that cannot be understood on a computational model.

Mental Images

Throughout the 1970s, investigators designed a series of experiments concerned with mental imagery.

The general conclusion many drew was that mental imagery involves a kind of mental representation that is not linguistically structured. More specifically, it was believed that the parts of mental images correspond to the spatial features of their content, whereas the parts of linguistic representations correspond to logical features of their content (see Kosslyn). In one well-known experiment, Kosslyn et al. had subjects memorize a map of a fictional island marked with several named locations.

They then asked the subjects to imagine this map in their mind and to focus on a particular location. They asked the subjects (i) to say whether another given named location was on the map, and if so, (ii) to follow an imagined black dot as it traveled the shortest distance from the location on which they were focused to the named location. The result was that as the distance between the original location and the named location increased, so did the time it took subjects to respond.

It is important to note here that while the experiments involved invoke mental images as those images a subject can examine introspectively, the debate is best understood as being about non-introspectible mental representations. Since LOTH is a hypothesis about non-introspectible cognitive processing, any purported challenges to the hypothesis would likewise need to be about such processing. Thus if the above conclusion is correct, then it at least limits the scope of LOTH.

The computer metaphor goes naturally with descriptional representations, but it is not at all clear how it can work when the representations are nondescriptional. Tye argues that on a proper understanding of the thesis that mental images have spatial properties, it does not straightforwardly undermine the claim that mental representation has a linguistic structure.

When viewed this way, scientific theories advanced within the LOTH framework are not, strictly speaking, committed to preserving the folk taxonomy of the mental states in any very exact way. Notions like belief, desire, hope, and fear need not be preserved as they stand. On the contrary, there is every reason to believe that scientific counterparts of these notions will carve the mental space somewhat differently. For instance, it has been noted that the folk notion of belief harbors many distinctions.

For example, it has both a dispositional and an occurrent sense.

In the occurrent sense, it seems to mean something like consciously entertaining and accepting a thought (proposition) as true. There is quite a bit of literature and controversy on the dispositional sense (compare the belief I actively entertain when I recall that there was a big surprise party for my 24th birthday with the same belief as it sits unrecalled in memory).

There is furthermore the issue of degree of belief: beliefs can apparently be held with greater or lesser confidence. It is unlikely that there will be one single construct of scientific psychology that will exactly correspond to the folk notion of belief in all these ways.

For LOTH to vindicate folk psychology it is sufficient that a scientific psychology with a LOT architecture come up with scientifically grounded psychological states that are recognizably like the propositional attitudes of folk psychology, and that play more or less similar roles in psychological explanations. LOTH is primarily a hypothesis about propositional attitudes; as such, it may or may not be applicable to other aspects of mental life. Officially, it is silent about the nature of some mental phenomena such as experience, qualia, sensory processes, mental images, visual and auditory imagination, sensory memory, perceptual pattern-recognition capacities, dreaming, hallucinating, etc.

To be sure, many LOT theorists hold views about these aspects of mental life that sometimes make it seem that they are also to be explained by something similar to LOTH. Indeed, many contemporary psychological models treat perceptual input systems in just these terms. But it is to be kept in mind that a system may employ representations and be computational without necessarily satisfying either or both of the clauses in B above in any full-fledged way.

For a useful discussion of varieties of computational processes and their classification, see Piccinini. Whether sensory or perceptual processes are to be treated within the framework of full-blown LOTH is again an open empirical question.

It might be that the answer to this question is affirmative. So LOTH is not committed to there being a single representational system realized in the brain, nor is it committed to the claim that all mental representations are complex or language-like, nor would it be falsified if it turns out that most aspects of mental life other than the ones involving propositional attitudes don't require a LOT.

Similarly, there is strong evidence that the mind also exploits an image-like representational medium for certain kinds of mental tasks. But it is committed to the claim that propositional thought and thinking cannot be successfully accounted for in its entirety in purely imagistic terms. It claims that a combinatorial sentential syntax is necessary for propositional attitudes and a purely imagistic medium is not adequate for capturing that. The adequacy of an imagistic system seems to turn on the nature of syntax at the sentential level.

There have been attempts to combine discursive and imagistic representational elements at the lexical level. There may even be a well-defined sense in which pictures can be combined to produce structurally complex pictures, as in the image-combination schemes of British Empiricism. But what is absolutely essential for LOTH, and what Fodor insists on, is the claim that there is no adequate way in which a purely image-like system can capture what is involved in making judgments, i.e., in taking attitudes toward propositions. This seems to require a discursive syntactic approach at the sentential level.

The general problem here is the inadequacy of pictures or image-like representations to express propositions. I can judge that the blue box is on top of the red one without judging that the red box is under the blue one. I can judge that Mary kisses John without judging that John kisses Mary, and so on for indefinitely many such cases.

It is hard to see how images or pictures can do that without using any syntactic structure or discursive elements, to say nothing of expressing logically complex judgments such as negations or conditionals. As we will see below, B2 turns out to provide the foundations for one of the most important arguments for LOTH. It is not clear, however, how an equivalent of B2 could be provided for images or pictures in order to accommodate operations defined over them, even if something like an equivalent of B1 could be given.

On the other hand, there are truly promising attempts to integrate discursive symbolic theorem-proving with reasoning with image-like symbols. They achieve impressive efficiency in theorem-proving or in any deductive process defined over the expressions of such an integrated system. Such attempts, if they prove to be generalizable to psychological theorizing, are by no means threats to LOTH; on the contrary, such systems have every feature needed to make them a species of LOT system.

Nativism and LOTH

Historically, Fodor defended LOTH together with a radical nativism about concepts. As a result, the connection between LOTH and an implausibly strong version of conceptual nativism looked very much internal.

This historical coincidence has led some people to think that LOTH is essentially committed to a very strong form of nativism, so strong in fact that it seems to make a reductio of itself (see, for instance, P.). The gist of Fodor's argument was that since learning concepts is a form of hypothesis formation and confirmation, it requires a system of mental representations in which the formation and confirmation of hypotheses are to be carried out; but then there is a non-trivial sense in which one already has (albeit potentially) the resources to express the extension of the concepts to be learned.

For example, to learn the concept green or triangular, one must frame the hypothesis that something falls under it just in case it is green or triangular. But the inductive evaluation of that hypothesis itself requires, inter alia, bringing the property green or triangular before the mind as such. Quite generally, you can't represent anything as such and such unless you already have the concept such and such.

If concept learning is as HF (hypothesis formation and confirmation) understands it, there can be no such thing. The crux of the issue seems to be that learning concepts is a rational process: the learner acquires the concept on the basis of evidence. This evidence base needs to be represented and rationally tied to the target concept. This target concept needs also to be expressed in terms of representations one already possesses.

Fodor thinks that any model of concept learning understood in this sense will have to be a form of hypothesis formation and confirmation. But not every form of concept acquisition is learning. There are non-rational ways of acquiring concepts whose explanation need not be at the cognitive level (for example, brute-causal processes such as maturation). If concepts cannot be learned, then they are either innate or non-rationally acquired.

Whereas early Fodor used to think that concepts must therefore be innate (maybe he thought that non-learning forms of concept acquisition are limited to sensory or certain classes of perceptual concepts), he now thinks that they may be acquired, but that the explanation of this is not the business of cognitive psychology. Whatever one may think of the merits of Fodor's arguments for concept nativism or of his recent anti-learning stance, it should be emphasized that LOTH per se has very little to do with it.

LOTH is not committed to such a strong version of nativism, especially about concepts. It also need not be committed to any anti-learning stance about concepts. Some degree of nativism may nevertheless be required; but this much is to be expected, especially in the light of recent empirical findings and trends, and it does not constitute a reductio.

It is an open empirical question how much nativism is true about concepts, and LOTH should be so taken as to be capable of accommodating whatever turns out to be true in this matter. LOTH, therefore, when properly conceived, is independent of any specific proposal about conceptual nativism.

Naturalism and LOTH

One of the most attractive features of LOTH is that it is a central component of an ongoing research program in philosophy of psychology to naturalize the mind, that is, to give a theoretical framework in which the mind could naturally be seen as part of the physical world without postulating irreducibly psychic entities, events, processes or properties.

Fodor, historically the most important defender of LOTH, once identified the major mysteries in philosophy of mind thus: How could anything material have conscious states? How could anything material have semantical properties?

How could anything material be rational? This much (RTM together with C) can, in principle, be granted by an intentional realist who might nevertheless reject LOTH. Indeed, there are plenty of theorists who accept RTM in some suitable form and also happily accept C in many cases but reject LOTH, either by explicitly rejecting B or simply by remaining neutral about it.

Among the prominent philosophers who choose the former option are Searle, Stalnaker, Lewis, and Barwise and Perry. How, then, is the addition of B supposed to help? Let us first try to see in a bit more detail what the problem is supposed to be in the first place, to which B is proposed as a solution.

Let us start by reflecting on thinking and see what it is about thinking that makes it a mystery in Fodor's list. This will give rise to one of the most powerful albeit still nondemonstrative arguments for LOTH.

Minimally, thinking involves causally connected sequences of intentional states. But, surely, thinking is more: there could be a causally connected series of intentional states that makes no sense at all. Thinking, therefore, is causally proceeding from states to states in a way that makes semantic sense, i.e., in a way that tends to preserve some semantic property of the states. In the ideal case, this property would be the truth value of the states. But in most cases, any interesting intentional or epistemic property would do (warrant, say, or degree of confirmation). The intuitive idea, however, should be clear. Thinking is not proceeding from thoughts to thoughts in arbitrary fashion: if this were not so, there would be little point in thinking—thinking couldn't serve any useful purpose.

Call this general phenomenon, then, the semantic coherence of causally connected thought processes.

This puzzle is the problem of thinking, and thus, in Fodor's version, the problem of the mechanization of rationality. LOTH is offered as a solution to it. How does LOTH propose to solve this problem and bring us one big step closer to the naturalization of the mind?

Computation

The two most important achievements of the 20th century that are at the foundations of LOTH, as well as of most modern Artificial Intelligence (AI) research and most of the so-called information processing approaches to cognition, are (i) the developments in modern symbolic formal logic, and (ii) Alan Turing's idea of a Turing Machine and Turing computability.

It is putting these two ideas together that gives LOTH its enormous explanatory power within a naturalistic framework. Modern logic showed that most of deductive reasoning can be formalized, i.e., characterized in terms of purely syntactic or formal operations on symbols. And Turing showed, roughly, that if a process has a formally specifiable character, then it can be mechanized. So we can appreciate the implications of (i) and (ii) for the philosophy of psychology in this way: if thought processes can be given a formal characterization, they can be realized mechanically. Thus, given the commitment to naturalism, the hypothesis that the brain is a kind of computer trafficking in representations in virtue of their syntactic properties is the basic idea of LOTH and the AI vision of cognition.

Computers are environments in which symbols are manipulated in virtue of their formal features, while their semantic properties are thereby preserved; hence the semantic coherence of symbolic processes. Slightly paraphrasing Haugeland: if you take care of the syntax of a representational system, its semantics will take care of itself. This is in virtue of the mimicry or mirroring relation between the semantic and formal properties of symbols. As Dennett once put it in describing LOTH, we can view the thinking brain as a syntactically driven engine preserving the semantic properties of its processes, i.e., as a syntactic engine driving a semantic one (see the sketch below).
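
Here is a small Python sketch of the slogan (purely illustrative; the tuple encoding of conditionals is an assumption of the example): an inference rule that inspects only syntactic shape nonetheless never leads from true premises to a false conclusion, under any interpretation of the atoms.

```python
from itertools import product

# Illustrative sketch: a purely formal rule preserves truth under every
# interpretation, so "taking care of the syntax" takes care of the semantics.
def modus_ponens(premises):
    """Purely formal rule: from p and ('IF', p, q), derive q."""
    derived = set(premises)
    for a, b in product(premises, premises):
        if isinstance(b, tuple) and b[0] == "IF" and b[1] == a:
            derived.add(b[2])
    return derived

def true_in(sentence, valuation):
    if isinstance(sentence, tuple):  # ('IF', p, q)
        return (not true_in(sentence[1], valuation)) or true_in(sentence[2], valuation)
    return valuation[sentence]

premises = {"p", ("IF", "p", "q")}
# The syntax-driven derivation never leads from truths to a falsehood:
for vals in product([True, False], repeat=2):
    v = dict(zip(["p", "q"], vals))
    if all(true_in(s, v) for s in premises):
        assert all(true_in(s, v) for s in modus_ponens(premises))
```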

What is so nice about this picture is that if LOTH is true we have a naturalistically adequate causal treatment of thinking that respects the semantic properties of the thoughts involved. Whether or not LOTH actually turns out to be empirically true in the details or in its entire vision of rational thinking, this picture of a syntactic engine driving a semantic one can at least be taken to be an important philosophical demonstration of how Descartes' challenge can be met.

Descartes was completely puzzled by just this rational character and semantic coherence of thought processes, so much so that he failed even to imagine a possible mechanistic explication of it. He thus was forced to appeal to Divine creation. A related puzzle concerns the symbols themselves: how can any physical states be about things? How can they mean anything? This is Brentano's challenge to a naturalist. Brentano's bafflement was with the intentionality of the human mind, its apparently mysterious power to represent things, events, properties in the world.

He thought that nothing physical can have this property: for Brentano, intentionality was the mark of the mental, and the mental could not be physical. This problem of intentionality is the second problem or mystery in Fodor's list quoted above. I said that LOTH officially offers only a partial solution to it, and perhaps proposes a framework within which the remainder of the solution can be couched and elaborated in a naturalistically acceptable way. Again, B1 attributes a compositional semantics to the syntactically complex symbols belonging to one's LOT that are, as per C, realized by the physical properties of a thinking system.

According to LOTH, the semantic content of propositional attitudes is inherited from the semantic content of the mental symbols. So Brentano's question for a LOT theorist becomes: in virtue of what do the symbols of one's LOT mean what they do? There are two levels or stages at which this question can be raised and answered: (1) the semantics of the atomic symbols, and (2) the semantics of the complex symbols built out of them. There have been at least two major lines LOT theorists have taken regarding these questions. The one that is least committal might perhaps be usefully described as the official position regarding LOTH's treatment of intentionality.

Most LOT theorists seem to have taken this line. The official line doesn't propose any theory about the first stage, but simply assumes that the first question can be answered in a naturalistically acceptable way; for the second stage, it appeals to the combinatorial syntax of LOT, which determines the semantics of complex symbols from the semantics of their constituents.

This procedure is familiar from a Tarski-style definition of the truth conditions of sentences. The truth-values of complex sentences in propositional logic are completely determined by the truth-values of the atomic sentences they contain, together with the rules fixed by the truth-tables of the connectives occurring in the complex sentences.

This process is similar but more complex in first-order languages, and even more so for natural languages; in fact, we don't have a completely working compositional semantics for the latter at the moment. So, if we have a semantic interpretation of atomic symbols (if we have symbols whose reference and extension are fixed at the first stage by whatever naturalistic mechanism turns out to govern it), then the combinatorial syntax will take over and effectively determine the semantic interpretation (truth-conditions) of the complex sentences they are constituents of (see the sketch below).
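
A minimal Python sketch of such a compositional valuation (illustrative only; the tuple encoding is an assumption of the example) makes the division of labor vivid: the valuation of the atoms is simply given, and the recursion over syntax does the rest.

```python
# Illustrative sketch of a Tarski-style compositional valuation: the
# truth-value of a complex sentence is fixed recursively by the values
# of its atomic parts plus the truth-tables of the connectives.
def value(sentence, valuation):
    """Sentences: atomic strings, or tuples ('NOT', p), ('AND', p, q), ('OR', p, q)."""
    if isinstance(sentence, str):  # atomic: interpretation fixed at "stage one"
        return valuation[sentence]
    op = sentence[0]
    if op == "NOT":
        return not value(sentence[1], valuation)
    if op == "AND":
        return value(sentence[1], valuation) and value(sentence[2], valuation)
    if op == "OR":
        return value(sentence[1], valuation) or value(sentence[2], valuation)
    raise ValueError(f"unknown connective: {op}")

# Once the atoms are interpreted, the combinatorial syntax "takes over":
v = {"A": True, "B": False}
print(value(("AND", "A", ("NOT", "B")), v))  # True
```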

So officially LOTH would only contribute to a complete naturalization project if there is a naturalistic story at the atomic level. Early Fodor (a), for instance, envisaged a science of psychology which, among other things, would reasonably set for itself the goal of discovering the combinatorial syntactic principles of LOT and the computational rules governing its operations, without worrying much about semantic matters, especially about how to fix the semantics of atomic symbols (he probably thought that this was not a job for LOTH).

Similarly, Field is very explicit about the combinatorial rules for assigning truth-conditions to the sentences of the internal code. In fact, Field's major argument for LOTH is that, given a naturalistic causal theory of reference for atomic symbols, about which he is optimistic (Field), it is the only naturalistic theory that has a chance of solving Brentano's puzzle.

For the moment, this is not much more than a hope, but, according to the LOT theorist, it is a well-founded hope based on a number of theoretical and empirical assumptions and data. Furthermore, it is a framework defining a naturalistic research program in which there have been promising successes. But some have gone beyond it and explored the ways in which the resources of LOTH can be exploited even in answering the first question, (1), about the semantics of atomic symbols.

Now, there is a weak version of an answer to (1) on the part of LOTH, and a strong version. On the weak version, LOTH may be untendentiously viewed as inevitably providing some of the resources for the ultimate naturalistic theory of the meaning of atomic symbols. The basic idea is that whatever naturalistic theory turns out to be true of atomic expressions, computation as conceived by LOTH will be part of it. For instance, it may be that, as with nomic covariation theories of meaning (Fodor a; Dretske), the meaning of an atomic predicate consists in its potential to get tokened in the presence of (or, in causal response to) something that instantiates the property the predicate is said to express.

Insofar as computation is naturalistically understood in the way LOTH proposes, a complete answer to the first question about the semantics of atomic symbols may plausibly involve an explicatory appeal to computation within a system of symbols. This is the weak version because it doesn't see LOTH as proposing a complete solution to the first question (1) above, but only as helping it. A strong version would have it that LOTH provides a complete naturalistic solution to both questions: the meanings of symbols, atomic as well as complex, are fixed by the computational roles they play within the system. We can then take these roles to determine the semantic identity of concepts; LOTH then comes as a naturalistic rescuer for conceptual role semantics.

It is not clear whether anyone holds this strong version of LOTH in this rather naive form. But certainly some people have elaborated the basic idea in quite subtle ways, for which see Cummins, but also Block and Field. Even in the best hands, however, the proposal turns out to be very problematic and full of difficulties nobody seems to know how to straighten out. In fact, some of the most ardent critics of taking LOTH as incorporating a functional role semantics turn out to be some of the most ardent defenders of LOTH understood in the weak, non-committal sense we have explored above—see Fodor. Haugeland, Searle, and Putnam quite explicitly take LOTH to involve a program for providing a complete semantic account of mental symbols, which they then attack accordingly.

Others have proposed combining an internal conceptual role factor with an external referential or causal factor; the result is sometimes known as two-factor theories. If this turns out to be the right way to naturalize intentionality, then, given what is said above about the potential resources of LOTH in contributing to both factors, it is easy to see why many theorists who worry about naturalizing intentionality are attracted to LOTH. As indicated previously, LOTH is almost completely silent about consciousness and the problem of qualia, the third mystery in Fodor's list in the quote above.

But the naturalist's hope is that this problem too will be solved, if not by LOTH, then by something else. If it turns out that qualia cannot be naturalized, this would by no means show that LOTH is false or defective in some way. In fact, there are people who seem to think that LOTH may well turn out to be true even though qualia can perhaps not be naturalized. Finally, it should be emphasized that LOTH has no particular commitment to every symbolic activity's being conscious.

Conscious thoughts and thinking may be the tip of a computational iceberg. Nevertheless, there are ways in which LOTH can be helpful for an account of state consciousness that seeks to explain a thought's being conscious in terms of a higher-order thought which is about the first-order thought. So, to the extent to which thought and thinking are conscious, to that extent LOTH can perhaps be viewed as providing some of the necessary resources for a naturalistic account of state consciousness (for elaboration see Rosenthal and Lycan).

To take stock: first, we have noted that if LOTH is true then all the essential features of the common sense conception of propositional attitudes will be explicated in a naturalistic framework which is likely to be co-opted by scientific cognitive psychology, thus vindicating folk psychology.

Second, we have discussed that, if true, LOTH would solve one of the mysteries about thinking minds: how is rationality mechanically possible? We have also seen a third argument: that LOTH would partially contribute to the project of naturalizing intentionality by offering an account of how the semantic properties of whole attitudes are fixed on the basis of their atomic constituents. But there have been many other arguments for LOTH.

In this section, I will describe only those arguments that have been historically more influential and controversial. The first is Fodor's argument from the practice of cognitive science. It was basically this: the best psychological models available treat mental processes as operations defined over internal representations. More specifically, he analyzed the basic form of the information processing models developed to account for three types of cognitive phenomena. He rightly pointed out that all these psychological models treated mental processes as computational processes defined over representations.

Then he drew what seems to be the obvious conclusion: there must be a system of internal representations, a language of thought, over which these processes are defined. The productivity argument was not always stated explicitly in this early work, but all the elements are surely there. Indeed, adults who speak a natural language are capable of understanding sentences they have never heard uttered before. I bet that there are sentences you have never heard before which you would nevertheless have no difficulty in understanding; and since any such sentence is arbitrary, there are infinitely many such sentences I can in principle utter and you can in principle understand.

So there are in principle infinitely many thoughts you are capable of entertaining. This is sometimes expressed by saying that we have an unbounded competence in entertaining different thoughts, even though we have a bounded performance. But this unbounded capacity is to be achieved by finite means.

For instance, storing an infinite number of representations in our heads is out of the question: we are finite beings. If human cognitive capacities (capacities to entertain an unbounded number of thoughts, or to have attitudes towards an unbounded number of propositions) are productive in this sense, how is this to be explained on the basis of finitary resources? The explanation LOTH offers is straightforward: complex representations are generated recursively out of a finite stock of atomic symbols. Indeed, recursion is the only known way to produce an infinite number of symbols from a finite base (see the sketch below).
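
As a toy illustration (the embedding operator and the example sentence are invented for the sketch, not drawn from the text), one recursive rule over a finite base already yields a distinct representation for each of infinitely many propositions:

```python
# Illustrative sketch of finite means: a single recursive rule over a
# finite base generates a distinct representation for each of infinitely
# many propositions, e.g. by embedding "believes that".
BASE = "it is raining"

def embed(n):
    """The n-fold embedding: 'John believes that ... it is raining'."""
    return BASE if n == 0 else f"John believes that {embed(n - 1)}"

print(embed(2))  # John believes that John believes that it is raining
```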

In fact, given LOTH, productivity of thought as a competence mechanism seems to be guaranteed. The argument from systematicity starts from the observation that the ability to entertain certain thoughts is intrinsically connected to the ability to entertain certain others, thoughts that are related in a certain way. But in what way? There is a certain initial difficulty in answering such questions. I think, partly because of this, Fodor, and Fodor and Pylyshyn, who are the original defenders of this kind of argument, first argue for the systematicity of language production and understanding: native speakers' linguistic capacities come in clusters. For instance, we don't find speakers who know how to express in their native language the fact that John loves the girl but not the fact that the girl loves John.

This is apparently so, moreover, for expressions of any n-place relation. Fodor and Pylyshyn bring out the force of this psychological fact by comparing learning languages the way we actually do with learning a language by memorizing a huge phrase book.

In other words, the phrase book model of learning a language allows arbitrarily punctate linguistic capabilities. In contrast, a speaker's knowledge of her native language is not punctate; it is systematic. Accordingly, we do not find, as a matter of nomological necessity, native speakers whose linguistic capacities are punctate. Now, how is this empirical truth (in fact, a law-like generalization) to be explained?

Obviously if this is a general nomological fact, then learning one's native language cannot be modeled on the phrase book model. What is the alternative? The alternative is well known. Native speakers master the grammar and vocabulary of their language.

But this is just to say that sentences are not atomic, but have syntactic constituent structure. If you have a vocabulary, the grammar tells you how to combine systematically the words into sentences. Hence, in this way, if you know how to construct a particular sentence out of certain words, you automatically know how to construct many others.

This is the orthodox explanation of linguistic systematicity. From here, according to Fodor and Pylyshyn, establishing the systematicity of thought as a nomological fact is one step away. If it is a law that the ability to understand a sentence is systematically connected to the ability to understand many others, then it is similarly a law that the ability to think a thought is systematically connected to the ability to think many others.

Since, according to RTM, to think a certain thought is just to token a representation in the head that expresses the relevant proposition, the ability to token certain representations is systematically connected to the ability to token certain others. But then, this fact needs an adequate explanation too.

The classical explanation LOTH offers is to postulate a system of representations with combinatorial syntax, exactly as in the case of the explanation of linguistic systematicity.