L&PS Logic and Philosophy of Science Vol. IX, No. 1, 2011, pp. 469-476

Analytic Pragmatism, Artificial Intelligence and Religious Beliefs

Raffaela Giovagnoli
Pontificia Università Lateranense (Roma)
raffa.giovagnoli@tiscali.it

1. Pragmatic Bootstrapping and AI
2. Material Inferences, AI and Religious Beliefs
3. Conclusion

ABSTRACT. Some formal aspects of human reasoning, as Brandom shows in the second chapter of Between Saying and Doing (Oxford University Press, Oxford, 2008), can be elaborated by a Turing Machine (TM), namely by a machine that simulates human reasoning. But what can't be elaborated either by Artificial Intelligence (AI) or by the Analytic Pragmatism (AP) proposed by Brandom is the content of beliefs. I'll consider the original case of religious beliefs. I explain the phenomenon of bootstrapping in the pragmatic context, which shows how new practices and abilities, characterized by a new vocabulary, emerge from basic practices described by a metavocabulary. The elaboration of certain aspects of practices and abilities by a Turing Machine is an example of pragmatic bootstrapping. I clarify why human beliefs (in our case, religious beliefs) can't be completely elaborated either by AI or by AP through material inferences embedded in conditionals, as they have peculiar contents.

1. Pragmatic Bootstrapping and AI

Let's begin with the phenomenon of bootstrapping in Brandom's analytic pragmatism (Brandom 2008, p. 11):

(…) pragmatic metavocabularies exist that differ significantly in their expressive power from the vocabularies for the deployment of which they specify sufficient practices-or-abilities. I will call that phenomenon "pragmatic expressive bootstrapping".

The Author 2011. Published by L&PS - Logic and Philosophy of Science, http://www2.units.it/~episteme, ISSN: 1826-1043

A first example of bootstrapping is exemplified by the abilities of transducing automata to elaborate primitive practices-or-abilities into more complex ones. To give a brief idea, we can distinguish between single-state transducing automata (SSTA), finite-state transducing automata (FSTA) and push-down automata (PDA), so as to show some idealizations about pragmatically mediated syntactic relations and pragmatically mediated semantic relations. SSTA generalize the primitive reading-and-writing abilities, i.e. discriminating stimuli of any kind on the input side and differentially responding in any way on the output side. This model is similar to behaviorism, which provides a VP-sufficient vocabulary to explain some basic abilities such as riding a bike or toeing the party line. FSTA are more flexible because, besides responding differentially to stimuli by producing performances from their responsive repertoire, they can respond differentially by changing state. This process is an advance from behaviorism to functionalism in the philosophy of mind, corresponding to the move from a single-state to a multi-state model. PDA are a kind of automata (for instance a TM) that elaborate information according to implemented rules, and so they seem to simulate humans' semantic abilities. Let's refer to the following diagram (Brandom 2008, p. 40):
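The move from single-state to multi-state transduction can be illustrated with a minimal sketch. The class names, stimuli and responses below are my own toy examples, not Brandom's formalism: the point is only that an SSTA's output depends solely on the current stimulus, while an FSTA's output also depends on its internal state.

```python
def ssta(stimulus):
    """Single-state transducing automaton: a fixed stimulus-response pairing,
    in the spirit of the behaviorist model."""
    responses = {"red": "stop", "green": "go"}
    return responses[stimulus]

class FSTA:
    """Finite-state transducing automaton: responses can also change state,
    so the same stimulus may elicit different performances over time."""
    def __init__(self):
        self.state = "fresh"

    def respond(self, stimulus):
        if stimulus == "shock" and self.state == "fresh":
            self.state = "wary"     # differential response by changing state
            return "approach"
        if stimulus == "shock" and self.state == "wary":
            return "withdraw"       # same stimulus, different response
        return "ignore"

a = FSTA()
print(ssta("red"))         # stop
print(a.respond("shock"))  # approach
print(a.respond("shock"))  # withdraw
```

The second call to `respond` returns a different performance for the same stimulus: this is the minimal sense in which the multi-state model outruns pure stimulus-response behaviorism.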
In this case we have three vocabularies: V1 emerges from basic practices [(P1) that give rise to new practices (P2)], V2 characterizes V1, i.e. it is a syntactic or semantic metavocabulary, and V3 specifies what the system is doing according to certain rules.

The impossibility of artificially elaborating the content of beliefs is evident in the case of religious beliefs. Following the diagram presented above, we can describe the aspects of religious practices that could be elaborated by a TM. This is the mechanical, rule-following process that characterizes rituals belonging to certain religious practices possessing a certain vocabulary. In this case we have three vocabularies: V1 emerges from basic practices (the performance of rituals), V2 characterizes V1, i.e. it is a syntactic or semantic metavocabulary (it describes what we are doing in the performance of certain rituals), and V3 specifies what the system is doing according to certain rules (it specifies the rules that govern the performance of rituals). Obviously, the result is that what we can elaborate is a procedure that does not grasp the content of religious beliefs: this is because such content refers to a first-person ontology implicit in individual beliefs.
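A toy sketch of my own (not from Brandom) may make the point vivid: a rule-following procedure can reproduce the outward, V1-level performance of a ritual from V3-level rules, yet nothing in the procedure touches the first-person content of the believer's state. The ritual steps and rule table below are invented for illustration.

```python
# V3-level rules governing the performance: each step determines the next.
ritual_rules = {
    "enter": "bow",
    "bow": "kneel",
    "kneel": "recite",
    "recite": "exit",
}

def perform_ritual(start="enter"):
    """Mechanically follow the rules from the starting step.
    The output is a V1-level performance; no belief content is involved."""
    step, performance = start, []
    while step != "exit":
        performance.append(step)
        step = ritual_rules[step]
    performance.append("exit")
    return performance

print(perform_ritual())
# ['enter', 'bow', 'kneel', 'recite', 'exit']
```

The procedure is PP-sufficient for the performance, but inspecting it reveals nothing about what the ritual means to the person performing it: exactly the gap the argument points to.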
2. Material Inferences, AI and Religious Beliefs

The second point of my argumentation concerns the impossibility of the logical elaboration of the content of religious beliefs. The practices that can be elaborated by a TM are sufficient, i.e. PP-sufficient, to deploy a particular vocabulary (in our case the vocabulary that characterizes a certain religious ritual). Now we can ask: are there any practical abilities that are universally PV-necessary? According to the PV-necessity thesis, there are two abilities that any system that can deploy an autonomous vocabulary must have: the ability to respond differentially to some sentence-tokenings as expressing claims the system is disposed to assert, and the ability to respond differentially to moves relating one set of such sentence-tokenings to another as inferences the system is disposed to endorse. These abilities are PP-sufficient for the purpose of algorithmic elaboration, as the following diagram shows (Brandom 2008, p. 44). What is important is that if we want to sort inferences into good and bad, we must focus on conditionals, which are PP-necessary to deploy an autonomous vocabulary. What is the relationship between these abilities? By hypothesis, the system has the ability to respond differentially to the inference from p (premise) to q (conclusion) by accepting or rejecting it. It also must have the
ability to produce tokenings of p and q in the form of asserting (Brandom 2008, pp. 45-46):

Saying that if something is copper then it conducts electricity is a new way of doing, by saying, what one was doing before in endorsing the material inference from "that is copper" to "that conducts electricity". Conditionals make explicit something that otherwise was implicit in the practical sorting of non-logical inferences into good and bad. Where before one could only in practice treat inferences as good or bad, after the algorithmic introduction of conditionals one can endorse or reject the inference by explicitly saying something, by asserting or denying the corresponding conditionals. What the conditional says explicitly is what one endorsed by doing what one did.

The following diagram shows the algorithmic elaboration of conditionals (Brandom 2008, p. 44):
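The step from implicit endorsement to explicit assertion can be sketched in a few lines. This is my own toy encoding, not Brandom's: endorsed material inferences are stored as premise-conclusion pairs, and introducing conditionals lets the system say explicitly, by asserting or denying "if p then q", what it was previously only doing in practice.

```python
# Implicit practical sorting: the inferences this system is disposed to endorse.
endorsed = {
    ("that is copper", "that conducts electricity"),
    ("Vic is a dog", "Vic is a mammal"),
}

def good_inference(premise, conclusion):
    """Practical sorting: does the system treat this inference as good?"""
    return (premise, conclusion) in endorsed

def make_explicit(premise, conclusion):
    """The conditional says explicitly what endorsing (or rejecting)
    the inference did only implicitly."""
    verdict = "assert" if good_inference(premise, conclusion) else "deny"
    return f'{verdict}: "if {premise}, then {conclusion}"'

print(make_explicit("that is copper", "that conducts electricity"))
# assert: "if that is copper, then that conducts electricity"
print(make_explicit("that is copper", "that is green"))
# deny: "if that is copper, then that is green"
```

Note what the sketch leaves out, which is the paper's point: the set of endorsed pairs is simply given, whereas for us the goodness of a material inference rests on the contents of the concepts involved.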
Conditionals are the paradigm of logical vocabulary, to remain in the spirit of Frege's Begriffsschrift. The meaning-use analysis provides an account of conditionals that specifies the genus of which logical vocabulary is a species. Three characteristics are ascribed to this genus: (1) being deployed by practices-or-abilities that are algorithmically elaborated from (2) practices-or-abilities that are PV-necessary for every autonomous vocabulary (and hence every vocabulary whatsoever) and that (3) suffice to specify explicitly those PV-necessary practices-or-abilities. Any vocabulary meeting these conditions is called by Brandom "universal LX-vocabulary".

What are the results of AP? Apart from considering the results for the so-called logicist dilemma (Giovagnoli 2009), I want to highlight two characteristics Brandom ascribes to his own account: semantic transparency and analytical efficacy. A further step is therefore to explain why analytic pragmatism is semantically transparent and analytically efficacious. Semantic transparency is due to the fact that we do not need, for example, to use notions such as definability, translatability, reducibility, supervenience or the like, because there is no interest in the claim that culinary vocabulary supervenes, for instance, on chemical vocabulary if it turns out that we mean it does so only if we can help ourselves to the vocabulary of home economics as an auxiliary in securing that relation. The problem is: how is the contrast between semantic form and content to be drawn so as to underwrite criteria of demarcation for logical vocabulary? Even Frege's notion of substitution seems not to fulfill this requirement, as it does not provide but presupposes a criterion of demarcation of logical vocabulary.
According to Brandom, Frege makes the notion of formality promiscuous, because we can pick any vocabulary we like to privilege substitutionally: an inference is good, and a claim true, in virtue of its theological or geological form just in case it is good or true and remains so under all substitutions of non-theological for non-theological vocabulary, or non-geological for non-geological vocabulary. For Brandom, sense-dependence in Frege's terms implies that theological and geological formality will not just depend upon, but will express, an important aspect of the content of theological and geological concepts.

The second criterion, analytical efficacy, means that logic must help in the process of establishing semantic relations between vocabularies; we have, according to Brandom, a much more powerful glue available to stick together and articulate what is expressed by favored base vocabularies, be they
phenomenological, secondary-quality or observational (a criticism of Russell and Whitehead's Principia). Semantic transparency is thus secured by the fact that practices sufficient to deploy logical vocabulary can be algorithmically elaborated from practices necessary to deploy any autonomous vocabulary. The notion of algorithmic elaboration gives a definite sense to the claim that the one set of abilities is in principle sufficient for the other: anyone who can use any base vocabulary already knows how to do everything needed to deploy any universal LX-vocabulary. For analytical efficacy we focus on the fact that logic has an expressive task: to show how to say in a different vocabulary what can already be said using the target vocabulary. But logic is PV-necessary, i.e. logical vocabulary must make it possible to say something one could not say without it. According to Brandom, Frege's notion of substitution presupposes a criterion of demarcation of logical vocabulary, so that logic loses its semantic transparency; in this case he refers to geological vocabulary and theological vocabulary in the same way. If an autonomous vocabulary is a set of good sentences derived from incompatibility relations with other sets of sentences, what is the contribution of logic in a realm that is beyond our capacity of perception? Is our true nature logical in virtue of the fact that conditionals are the genus of our expressive rationality? Could it be rather that we are communicative beings, so that in Frege's sense our nature is to express thoughts (even false thoughts) through assertions, questions and negations of assertions, and to perform judgments?

3. Conclusion

AP implies that inferential practices are necessary to deploy every vocabulary we use in our ordinary life. Could we elaborate religious practices and vocabulary from a logical point of view, using inferential processes as proposed by Brandom?
In this case we ought to follow conditionals governed by material inference, such as "If Vic is a dog then Vic is a mammal" or "If this ball is red then it is not green". The validity of a material inference is given by the correct use of concepts such as "dog" and "mammal", not just by the use of the logical form "if … then …". An example of a conditional applied to religious practice is "If you are a good Christian then you ought to go to Mass". It entails a material inference embedded in a social norm, like the inferential pattern "If I am a bank employee I ought to wear a necktie" (because "Bank employees are obliged [required] to wear neckties" is a social norm). If we want to consider what we really do in social and discursive practices, we'd better consider the different dimensions of judgment. Moreover, the real challenge for AI is to approximate our real nature, namely our first-person ontology. Brandom's enterprise goes in the direction of a fruitful dialogue between AI, logic and philosophy. Nevertheless, I think that the cognitive sense of human beliefs needs a sort of consideration at the level of thoughts, which according to Frege belong to a "third realm" (though they are "graspable"). Thoughts exist, but they are not graspable by means of material inferences in the sense Brandom proposes. For he seems to imply that material inferences become devices to grasp true thoughts; but he does not provide a plausible description of the semantic content expressed in linguistic expressions.

References

Brandom, R. (1994), Making It Explicit, Cambridge, MA: Harvard University Press.
Brandom, R. (2008), Between Saying and Doing, Oxford: Oxford University Press.
Frege, G. (1918-19), "Negation", in M. Beaney (ed.) (1997), The Frege Reader, Oxford: Blackwell.
Giovagnoli, R. (2004), Razionalità espressiva. Scorekeeping: inferenzialismo, pratiche sociali e autonomia, Milano: Mimesis.
Giovagnoli, R. (2005), "Intenzionalità e spazio sociale delle ragioni", Epistemologia, XXVIII, pp. 75-92.
Giovagnoli, R. (2007), Autonomy. A Matter of Content, Firenze: Firenze University Press.
Giovagnoli, R. (2008), "Osservazioni sul concetto di pratica autonoma discorsiva in Robert Brandom", Etica & Politica/Ethics & Politics, IX, 1, pp. 223-235.
Giovagnoli, R. (ed.) (2009), Prelinguistic Practices, Social Ontology and Semantics, Etica & Politica/Ethics & Politics, vol. XI, n. 1.
Giovagnoli, R. (2010), "On Brandom's Logical Functionalism", The Reasoner, vol. 4, n. 3, www.thereasoner.org.
Giovagnoli, R. (2010), "Analytic Pragmatism and Religious Beliefs", The Reasoner, vol. 4, n. 6.
Giovagnoli, R. (2011), "Computational Ontology and Deontology", The Reasoner, vol. 5, n. 7.