Recently, Reason Revolution host Justin Clark sat down with author J. R. Becker to discuss the disagreements between Daniel Dennett and Sam Harris concerning free will. As a complement to their conversation, I want to discuss free will from the standpoint of the meaning of concepts, to ascertain what the difference between Dennett and Harris amounts to and to shed some light on why this debate is happening in the first place.
Clark begins the conversation by outlining three ways of thinking about free will:
- Determinism, which understands free will to be an illusion because every action and event has prior causes, and these causes had prior causes, all the way back to the big bang;
- Libertarian Free Will, a position Clark says is promulgated by Christian theology and existential philosophy, which understands free will to be total: we are condemned to be free;
- Compatibilism, a sort of middle ground, which recognizes the determinist truth that all actions and events have prior causes but does not understand this to be a defeater for free will.
This basic framework is accurate enough. It is interesting to me that Clark outright rejects libertarian free will, because the reasons one would accept it are very similar to those one would use to be a compatibilist. Although I am not sure how accurate it is to say existentialists are talking about free will when they talk about choice (rather than the political term “freedom”), the appeal of the libertarian position is that it begins with our everyday use of the word “choice” and radically grounds the meaning of life in how we choose to be response-able for it, how we choose and live our values. This isn’t unlike the compatibilist position. To be a compatibilist is essentially to use everyday language, rather than the scientific conception of reality, to think about the meaning of concepts. The difference between the two positions is essentially that the compatibilist is more accommodating (or perhaps more knowledgeable of different frames of reference) and therefore leaves the language of science to speak in its contexts and the language of existentialism to speak in its contexts. Let’s think more about this movement between everyday use and scientific use. Is it always more reasonable to begin from the scientific conception of reality?
Beginning at the End
I want to tell you a story about a famous philosopher from the 20th century, the two schools of thought he created, and how, although the latter school supersedes the former, the former is still alive and well. This story is not your usual story, because the point of the story is not so much what it refers to outside of itself but rather is the story itself; the very language it uses is as much the “author” of the text as I am.
We are linguistic beings, which means we both communicate through language and, at the same time, create the language by which we communicate. This isn’t such a strange fact today. For instance, iPhones did not fall from the heavens. We understand that any new iPhone will be manufactured by humans and that it will, most likely, alter the ways we communicate and relate to each other. What is less readily conceivable is applying this recognition to our most basic and natural communication tool: language. The language we use about things, situations, emotions, and the like gives meaning to and partially determines our behavior toward them. A very clear example of this is how the word “thinking” has been displaced by the word “processing,” and how the rise of science has changed the metaphors we use to reflect and think about ourselves. Today, our brains are computers, and the hardware of neurobiology creates the software of consciousness. Is anything today so illuminating as this metaphor, and so radically different from the historical view of the spiritual soul, disconnected from all things physical, trapped within the prison of fallen, finite things? Not only has the metaphor informed the kinds of questions we now ask about what it is to be human, but it has altered our situation as humans, from the technology we create to capture, manipulate, and transcend our human capabilities to how we relate to each other. Accordingly, forms of language are, in some ways, forms of reality. If you question nothing else in this article, please question this statement: live with it, by it, for it, against it, without it, because of it. Just don’t forget it. Language causes and solves our problems. It is to language we must turn to understand the origins of our problems and the way to their solutions.
As we enter this story, let us not forget that the concepts we use and our forms of language belong to contexts, and these contexts are composed of specific problems, objects, and logics. Within these contexts, we either use language to extend our concepts to include more experiences, situations, and phenomena (as when religious people call a tragedy part of “the will of God”), or we use concepts to disrupt the very logic of the language we use and the contexts in which our language makes sense (as when we use irony or hyperbole, or when Sam Harris says, “Free will is an illusion”). The great advantage we have over animals, as a result of our ability to use language, is that we can project possible futures, using concepts as extensions of realities. We can confer motives on things and predict their actions. We can ascribe cause and effect to the world and therefore project possible situations in which we must act. Grounding all this, however, is the fact that our primary tools for acting are not simply instinctual; they are social. This is of course not to say that language acquisition is not instinctual,[i] but rather that our instincts have given us tools that far exceed the limitations of mere instinct, just as our thumbs give us abilities that far exceed their mere movement.
Finally, I want to offer one more tool before you proceed to this story. Kenneth Burke points out in Permanence and Change: An Anatomy of Purpose that the ways in which we are trained to think and act in specific situations may make us blind to what is relevant and important in situations where our training does not apply. He calls this unfortunate fact “trained incapacity,” which he defines as “that state of affairs whereby one’s very abilities can function as blindness.”[ii] Many secular humanists, unfortunately, fall into trained incapacity when they critique religion, especially when critiquing the notion of salvation as “escape.” Burke describes the problem with this criticism well: “Whereas it [the motive to “escape” reality] applies to all men, there was an attempt to restrict its application to some men….While apparently defining a trait of the person referred to, the term hardly did more than convey the attitude of the person making the reference.” Burke frames the problem of incapacity as a problem of “faulty means-selection,” a “comparison between outstanding and outstanding,” that is, a comparison of relevant details between different situations (what stands out in one situation and what stands out in another). When we reason about the world, we reason by means of language. As a result, how we select what is relevant in certain situations, and how these relevant things connect with relevant things in other situations, is a question of our means of selection: in other words, of what concepts we use to talk about the things we are trying to talk about.
My claim, at the outset, is that Harris has a trained incapacity, and that this is a consequence of his scientific training. What Harris thinks is relevant in conversations about free will is the cause-and-effect continuum, and he therefore calls all talk about free will senseless (this is what it means to say “free will is an illusion”). Dennett, by contrast, examines how “free will” makes certain concepts like “responsibility,” “control,” “choice,” and “agency” relevant. Now, it is important to affirm that Harris’s scientific perspective is a legitimate enterprise and to acknowledge that when we think scientifically, we must extend the logic of science as a means of selection for understanding the world. However, we must not fall into the trap of thinking that science is the only way to make sense of every situation and concept. Does knowing what chemicals are released in the brain in situations of “love” fully answer the beloved’s question, “Why do you love me?” Telling my wife we are in love solely because of our biology would be offensive to the language of love. Likewise, we must consider the extent to which Harris’s analysis is offensive to the social and linguistic understanding of free will.
Science and the Meaning of Concepts
From the early 1920s to the late 1930s, a group of philosophers, mathematicians, and scientists formed an influential club now known as the Vienna Circle. Among the figures associated with it were members like Rudolf Carnap and Kurt Gödel and visitors and interlocutors like W. V. O. Quine, A. J. Ayer, Frank Ramsey, and Karl Popper. Looming over them all was Ludwig Wittgenstein, an esoteric and eccentric philosopher obsessed with language, who never formally joined the Circle but whose work its members studied closely. The purpose of the group was to make philosophy into a science, to bring a precision to the language of philosophy that would make its claims testable. Using logic, mathematics, and empiricism, the Circle mounted a devastating critique of philosophical metaphysics. Perhaps the greatest and most obscure representative document of this critique is Wittgenstein’s Tractatus Logico-Philosophicus. Here, he laid the groundwork for the principle of verification: the meaning of a concept is its referent in reality. The principle of verification holds that a concept is true, or has cognitive meaning, to the extent that it represents an object or state of affairs in reality; in other words, a concept is meaningful if it can be verified. This principle was a watershed for the logical positivist movement, or “positivism.”
Many things followed from this principle. For instance, it can be definitively claimed that religious language is contentless, senseless. The term “God” represents nothing in reality, and certainly is not derived from a state of affairs, and therefore it is meaningless. The principle seems to give truth claims of science a more robust framework. Concepts like “free will,” “soul,” and “ego,” can be thrown out without a thought, shown to be nonsense and without content. If a concept cannot be verified, it cannot have meaning.
Yet the principle is not without issues. One obvious problem is that it does not verify itself. It is a mere tautology.[iii] How do we know that a concept has meaning only to the extent that it represents something in reality? Well, because that’s how the Vienna Circle defined “truth” and “meaning.” The Vienna Circle’s concept of truth does not adequately account for the many different uses the concept has. Another problem is that it does not distinguish between statements that are descriptive (reports) and statements that are normative (evaluations and imperatives). Are all normative statements nonsense? To say that something is “hot” or “cold” is to describe your world. We can verify whether something is hot or cold by our senses or by agreeing on what hot or cold means on a thermometer. But to say that something is “good” or “bad” is normative: one could say it’s good to be a Democrat and bad to be a Republican, or vice versa. What sense does this have from the positivist perspective? Where can I point to and identify the “good” of Democrats or the “bad” of Republicans, unless I already assume the nature of this goodness? This is the is-ought problem, rearticulated. We will return to this later.
After Wittgenstein wrote the Tractatus, he believed he had solved all the problems of philosophy. These problems were either confusions of language, claiming content for concepts where none could be found in reality, or they were caused by railing against the limits of language. The limits of language are, indeed, the limits of philosophy. Famously, the final proposition of this influential work states, “Whereof one cannot speak, thereof one must be silent.” Silence is the best we can do with questions about the ultimate things, those things which ground our languages, which form the connections between is and ought.
Wittgenstein’s retirement from philosophy was brief. He soon realized the positivist conception of language did not adequately account for the complex ways in which language is used and still has meaning. Consider metaphor, poetry, body language, allegory, and the like. These uses of language clearly say something, and for language to say something is for it to “make sense,” to “have meaning.” Whether or not words refer to things or states of affairs is not the whole question of meaning or truth, Wittgenstein realized. Language acquisition and use play, perhaps, an even larger role than reference. Consider when a mother points to a ball and says “ball” to her toddler. How is the toddler to know that it isn’t supposed to follow a line from the elbow, or that the mother isn’t talking about the color of the ball, or its shape, or even the space the ball fills by its existence? The toddler comes to know what “ball” means by interacting with the ball, by learning how “ball” is used in the contexts in which it is appropriate to talk about “ball.” This is the central insight of Wittgenstein’s later work, his rebuke to positivism: the meaning of a concept is its use in a context. Meaning is a function of context.
Do We Agree on the Facts, or Are We Just Playing a Semantic Game?
Let’s return to the topic of free will, but in a different light: the determinist position appears to be derived from positivism, whereas Dennett’s Wittgensteinian and pragmatist influences show in his position, for it matters to Dennett that our reflection on concepts begins from the accurate use of those concepts. We can call anything truth: but does the arbitrary changing of definitions mean anything? This question brings to mind the work of James K. A. Smith, presently a popular theologian in ultra-conservative Calvinist circles, who relies on stale arguments and linguistic sleights of hand. For example, he defines “liturgy” as anything that shapes our desires. So it appears “deep” when he makes the claim that basically everything is liturgy: from the ways in which we shop at malls to our daily after-work routines. One implication of calling desire-shaping phenomena “liturgy” is to suggest that we’re all “religious” at the core. And, indeed, this is assumed in the very conception of the matter. This is an extremely boring and underhanded way of saying something without saying it: Smith is an expert at employing the “deepity.”[iv] But it’s a telling example of how the words we use can affect our perceptions of our objects of study. Why not just substitute “stimuli” for “liturgy”? Well, for one reason, Smith would be out of a job. Additionally, there would be no implication, in any given instance when we use “liturgy,” that forming habits fulfills a religious need. Smith’s trailblazing conclusion, that desire-shaping practices are ultimately about “worship,” would not be assumed at the outset. Smith’s method of argumentation is one way to have your conclusions made for you: the very words we use shape our intuitions as linguistic beings.
What does it mean to ask if we agree on “the facts?” Consider that you’re having a discussion with James K. A. Smith on desire-shaping practices. What sense does it make to describe the things that draw our attention and shape our desires as “stimuli,” and not “liturgy?” Are we disagreeing about facts, here? Is it all “just semantics?”
The fact is that our words shape and, in some ways, determine what we see in the world, giving rise to disparate forms of thinking about what “the world” is. If we use “liturgy” to talk about desire-shaping practices, the inferences the concept itself compels us to make imply that when we perform acts which shape our desires (that is, when we do anything), we are indeed performing acts of “worship,” and the places in which we perform these acts of “worship” are our “holy” sites. This is what I mean when I say that the conclusions are already contained within the very assumptions from which we begin any analysis. For Smith, just as for Harris, the facts are given as a starting point. Consider what would follow if we began our analysis of desire-shaping practices from the mechanistic conception of the universe. Unlike Smith, Harris would say that we do not shape our desires (“liturgies”) by “performing acts of worship” in “holy sites,” but by being influenced by the “conditioners” in our “environments.” Nothing like “worship” or “holy sites” is insinuated by the words “conditioners” and “environments.”
As such, what we call “facts” is determined more by our points of reference and our forms of analysis than by what we find in the world. Our very use of the concept of “fact” delivers objects in the world which are essentially different from the objects we find when we think of things as projections of our emotions, as symbols of what the future will bring, or as “miracles.” Both Smith and Harris can agree on the “facts,” to the extent that they can analyze the same situations, but what these facts are named, whether “liturgy” or “stimuli,” is just as important in shaping what the facts mean as the objects and situations under investigation.
So What is at Stake?
The free will debate is simply a good representation of what occurs in every discussion where science attempts to analyze concepts derived from everyday use without paying attention to the inferences we make by those concepts: concepts like “mind,” “thinking,” “belief,” and “morality.” This debate is also a good example of the difference between positivist and ordinary language philosophers. But let us look at another aspect of this debate, moving beyond the analysis of the concepts put in play, and consider the consequences that follow from these concepts.
The debate between Harris and Dennett boils down, in some ways, to the question B. F. Skinner raised half a century ago. When we are trying to understand the reasons for actions, do we look at the intentions of the person from our everyday use of concepts and within a normative framework of moral responsibility, or do we look at the conditioners of action, the mechanics of the universe that make some actions more likely than others and put in place mechanisms that will influence better outcomes? This is the crux of the free will debate between Dennett and Harris. And to the extent that we side with Dennett, we are looking for ways to innovate our normative schemes, to extend some concepts and retract others when it comes to our language about free will, responsibility, and justice. And when we agree with Harris, we are looking at the physical mechanisms of the world in order to manipulate and shape them to improve society.
Going back to the is-ought problem as introduced earlier, we can say that both the descriptive and normative frameworks are different for Dennett and Harris. For Dennett, the descriptive side of his analysis involves looking at the everyday situations in which it makes sense to use “free will” and then to outline the inferences we make in those situations, the consequences of using this concept. For Harris, the descriptive side involves data about the mechanisms of reality. What we count as descriptions, or the “is” of reality, informs, then, the “oughts” that follow. For Dennett, to rid us of the concept of “free will” is to rid us of the kinds of practical, social relations in which we participate when, in the everyday world, we use this concept. That’s why Dennett wants to talk about the moral aspect of free will. For Harris, to lose the concept of free will is to lose nothing, because both morality and free will are about the mechanisms of reality, and just as our moral intuitions are facts that pertain to the operations of these basic mechanisms of reality, so too is the illusion of free will. We have Dennett representing Wittgenstein’s later position and Harris representing his earlier philosophy.
The difference between Dennett and Harris lies not only in the frameworks from which they analyze the problem of free will, but in the consequences that follow from their methods of analysis. To accept both projects as legitimate, which I think we should, would mean working both as linguistic innovators and as social revolutionaries. We should be attentive to the ways in which language shapes thought but also be open to using the tools of science to move beyond mere argumentation and hermeneutical innovation to improve society. The public clash between two legitimate ideas generally revolves around the fallacy that these ideas must be integrated in some theoretically general way for both to be legitimate, or else one must give way to the other. What is more likely true is that Harris and Dennett operate at different levels of analysis, and it is a fallacy to think different levels of analysis must be reconciled in general ways. Rather, they must be married in the life and action of individuals, and to the extent that one level is more useful for some people in some situations than for others in other situations, that level of analysis will be more significant and appropriate. We must move beyond the rationalist fallacy. Applied to a different example, this fallacy would have us believe that to use 1+1=2 we must understand the nature of addition and how 1+1=2 can both be grounded in quantum physics and explain why my wife is angry at me for not walking our dog this morning. The rationalist fallacy bewitches us into thinking we must have a theory of everything to have a perfect language. Yet, we know, different levels of analysis are true in different ways, for different projects, and for different people.
Ending at the Beginning
The difference between Harris and Dennett amounts to this: while Harris is unwittingly reducing other vocabularies to his scientific vocabulary and thereby displaying a trained incapacity, Dennett wants to keep both vocabularies for creating different contexts, exploring different kinds of experiences, and communicating different ways of existing. The contexts in which free will makes sense are not delusionary forms of existence, as Harris would have us think. As Kenneth Burke puts it, “To explain one’s conduct by the vocabulary of motives current among one’s group is about as self-deceptive as giving the area of a field in the accepted terms of measurement.”[v] Put another way, “Motives are shorthands for situations.”[vi] When we consider a breach of contract, what is relevant in these situations is not, as Harris would have it, a consideration of the cause-and-effect universe and every single way in which our actions and decisions have prior causes. Rather, what is important for Dennett’s form of free will is that the person has “chosen” to breach the contract, based on the concepts we use in contractual situations. When we say a person made a “choice,” we are saying the possible future outlined in the contract in which “breach” makes sense has been actualized: we are not stating a description of neurobiology or physics. We are using concepts, just as scientists use concepts to both create and describe the world, to make sense of and act in the world where “contractual relation” is our current situation. Against Harris’s referentialism, Dennett’s free will reaffirms Wittgenstein and Burke: the meaning of our words has to do with relevance, with what they make relevant, and not with reference.
We’ve talked so much about language at this point. Let us throw out, as the straw that breaks the camel’s back, a simple point of logic that neither Harris nor Becker seems to acknowledge. In the podcast, when Becker brought up the Libet experiments to ground his claim that our choices are predetermined, he did not also acknowledge, as Kenneth Burke does, that “the discovery of a law under simple conditions is not per se evidence that the law operates similarly under highly complex conditions.” This is a lesson we should have learned from the history of science, when the simple Newtonian vision of the universe was displaced[vii] by the Einsteinian vision.
Our ending is where we began, with the recognition that we are linguistic beings and that the way in which we use words matters. And so, in the spirit of the later Wittgenstein, we end with the American who arrived, at approximately the same time, at the later Wittgenstein’s conclusions, the same rebuttal of Wittgenstein’s own early philosophy.
“We discern situational pattern by means of the particular vocabulary of the cultural group into which we are born. Our minds, as linguistic products, are composed of concepts (verbally molded) which select certain relationships as meaningful. Other groups may select other relations as meaningful. These relationships are not realities, they are interpretations of reality—hence different frameworks of interpretation will lead to different conclusions about what reality is.”[viii]
[i] Indeed, this is absolutely the case, as Steven Pinker argues in The Language Instinct.
[ii] Kenneth Burke, Permanence and Change, 7.
[iii] I call this a “tautology” rather than an “axiom” to point out a basic insight of the later Wittgenstein. Our definitional statements that are supposedly “self-evident” are actually the boundaries of our conceptual schemes, our language games. They show the logic of our basic conceptual framework: this is what “definition” means in a functional sense.
[iv] “Deepity” is from an amusing chapter in Dennett’s book Intuition Pumps and Other Tools for Thinking.
[v] Burke, 21.
[vi] Ibid., 29.
[vii] I say “displaced” and not “replaced” because Newtonian physics still works at everyday scales and speeds, but we need Einstein’s theory of relativity at very high speeds and in strong gravitational fields. I heard Lawrence Krauss make this point.
[viii] Burke, 35.