Almost All Ideas are Wrong: The Jordan Peterson and Slavoj Zizek Debate

“Almost all ideas are wrong,” Peterson remarked in his opening critique of Marx’s Communist Manifesto. Indeed, it is far easier to speak of what things aren’t than what they are. Yesterday, April 19, 2019, brought us the Jordan Peterson and Slavoj Zizek debate. This meeting of very different minds was unimaginable back in 2015, and unthinkable as recently as last year. On the face of it, these two thinkers, despite their common psychoanalytic influence, could not be further apart.

Jordan Peterson, an overtly scientific thinker who uses Christian stories and ideas to help explain deep psychological truths, is a pragmatic, almost pastoral thinker. His driving motivation is to tell his audience that the most important and worthwhile pursuit in life is not the attainment of happiness, the mere accretion of wealth, or an unbridled wielding of power, but the pursuit of meaning. We can only live meaningful lives, Peterson tells us, if we take on responsibility, fix what is immediately fixable, aim at a higher goal, and shoulder the heavy burden of our own existences. Speaking the truth and acting responsibly are the only ways to develop a depth of spirit that can withstand tragedy when it strikes, without making us resentful. This is the core of Peterson’s message as I understand it.

Slavoj Zizek, on the other hand, set against the pragmatic, plainspoken language of the sciences, is a bona fide philosopher, touting the ideas of obscure, very difficult thinkers like Lacan and Hegel. Instead of Peterson’s focused study of the individual through the lens of psychology and myth, Zizek employs techniques from psychoanalysis and critical theory to discern the role ideology plays in shaping the individual. Without over-extending this comparison, I simply want to say it is long past due for Peterson to speak with a philosopher and thinker of Zizek’s caliber and influence. And, as we will see, their differences are dwarfed by what they hold in common.

Revisiting the Matt Dillahunty and Jordan Peterson Discussion

Recently I wrote on how I was disappointed by the Matt Dillahunty and Jordan Peterson dialogue produced by Pangburn Philosophy. Although I remain fundamentally disappointed by it, a few things have been clarified for me by Matt Dillahunty’s reflections on the discussion.

The thing that made the discussion so interesting was that Matt Dillahunty was not interested in debating or strawmanning Peterson. His goal, and I take him at his word, was to have a good conversation, be open and honest, seek clarification, and see where they agree and disagree. He wasn’t even the slightest bit disappointed in the dialogue, thinking he succeeded on many fronts. Maybe so. I just want to clarify a few open questions Dillahunty has concerning Peterson’s positions. Although it is quite odd that Dillahunty did so little research on Peterson before the discussion (in this recent video he seems unaware even of Peterson’s decades-long work as a clinician), the interchange seemed to have happened in good faith, and I have faith that this conversation can now move forward.

Language Use, the True, and the Real

One issue Dillahunty has with Peterson is that he thinks people who no longer believe in God but still find religious language useful need to say they’re using religious language idiosyncratically, because they’re not talking about the God people believe in, but about the human condition and the kinds of Gods people invent to cope with it. On the face of it, this point appears to be about simply being clear. In Peterson’s view, however, it is actually indicative of Dillahunty’s primarily Enlightenment, rather than Darwinian, influences.[1] For Peterson, you can’t be a post-Enlightenment rationalist thinker and a Darwinian at the same time, because what the latter explicitly conceptualizes the former ignores; that is, you can structure your world according to different presuppositions, and different systems of thought have different purposes. Furthermore, from his Darwinism, Peterson concludes that what is “real” subjectively and what is “real” objectively, though they may be distinguished for analytical purposes, cannot ultimately be separated in reality. They have amorphous and porous borders, and this point seems lost on the post-Enlightenment thinkers.

Peterson thinks American pragmatists figured this out. The pragmatic concept of truth articulates the meaning of truth as that which works. As a result, the only kind of knowledge we can have about our environment is knowledge that is sufficient: knowledge that allows us to survive. To abstract ideas from survival value and assume that facts as they pertain to belief about morality, the world, and ourselves exist in and of themselves, separate from how they serve or diminish life, is suspect for Peterson. The assumption of post-Enlightenment thinkers is that the knowledge gained by this reduction doesn’t diminish the possibility for genuine human flourishing. Peterson says, “I think it’s dangerous to consider truth independent of its effect upon us.”[2]

This brings us to the question of the real and the true. Peterson takes what he calls a Darwinian position on the question of the real. The real is that which is consistent and endures across time. This is why Peterson is so fixated on religious myths. Dominance and competence hierarchies are some of the oldest evolutionary structures: over 300 million years old, older than trees. The patterns that constitute the competence hierarchy are the ground from which ethics derives. What religious myth does is distill the grammar of competence hierarchies. To know the meaning of religious belief, therefore, is to understand the millennia-long solution to the problem of suffering and chaos, and this, Peterson believes, grounds our ethics.[3]

The question of what is real is connected to the question of the true because what is true is what is real, and what is real serves life. This is Peterson’s basic Darwinian position. Some things are true for only one thing; some things are true for ten things; some are true for thousands of things. And the truth which is most pervasive and most enduring is the most true. The true and the real are connected in the notion of that which serves life, so, in Peterson’s estimation, when we try to reduce the truth to mere facts we leave out the very thing that connects truth to reality. It’s not correspondence, and it’s not coherence. It’s life.

Are True Atheists Murderers?

One idea that got online atheist communities in an uproar is a comment Peterson made about nobody being a true atheist. Dillahunty seemed to have taken great offense at this, and perhaps rightfully so, for he certainly doesn’t believe in a supernatural being, and he can ground morality in self-interest, of all things. Why do we need a god to be good?

The problem is that Peterson isn’t actually taking the typical Christian apologist position on this issue. He’s rather concerned about the consequences of what would happen if the mythical substrate of our culture were lost.[4] For Peterson, the person who lives after this event is the true atheist. People in the west who call themselves “atheists” do not in fact live after this event, for atheists of the west still live within the metaphysical substrate established by the Christian myth. Atheists of the west today are different, for instance, from atheists in Athens. Lack of belief is where their commonalities begin and end, for atheists before the west, lacking the Christian mythical substructure, did not have a belief in the inherent dignity of individuals, the value of self-interest, natural law (which grounded the first human rights language), and the like. Although somebody like Socrates could have argued for natural law, so that the philosophers of Athens might seem in effect to have taken a modern stance on morality, they still believed that the ordering of nature, with its natural inequality, made women and slaves naturally inferior to the citizens who could participate in the polity.[5]

Another way to conceptualize Peterson’s idea is the way Joseph Campbell did in the popular Myths To Live By. In chapter four, “The Separation of East and West,” he begins:

“It is not easy for Westerners to realize that the ideas recently developed in the West of the individual, his self-hood, his rights, and his freedom, have no meaning whatsoever in the Orient. They had no meaning for primitive man. They would have meant nothing to the peoples of the early Mesopotamian, Egyptian, Chinese, or Indian civilizations. They are, in fact, repugnant to the ideals, the aims and orders of life, of most of the peoples of this earth. And yet—and here is my second point—they are the truly great ‘new thing’ that we do indeed represent to the world and that constitutes our Occidental revelation of a properly human spiritual ideal, true to the highest potentiality of our species.”[6]

He goes on to trace the history of cultures, showing that archaic civilizations operated according to a belief in a great cosmic law which left no room for the individual, and where one’s birth determined who one is, what one is to be, and what one can think. Indeed, Campbell strikingly points out that the “Sanskrit verb ‘to be’ is sati…and refers to the character of the devout Hindu wife immolating herself on her deceased husband’s funeral pyre.”

But the west (what he calls the “occident”) is different from the orient, and it is because of the myths it told. The God who judged an entire world for their sins and sent a flood to destroy them as a consequence implies that humans are not just cogs in a predestined universal machine. Especially in the Old Testament, as we see in Job,

“the focus of concern is the individual, who is born but once, lives but once, and is distinct in his willing, his thinking, and his doing from every other; in the whole great Orient of India, Tibet, China, Korea, and Japan the living entity is [rather] understood to be an immaterial transmigrant that puts on bodies and puts them off. You are not your body. You are not your ego. You are to think of these as delusory.”[7]

So what does this have to do with atheism in the west and, particularly, Dillahunty’s argument that from self-interest he can establish a moral system that isn’t contingent on religion? Well, rationality is a recent invention, and Peterson thinks our concepts are abstractions from the myths we’ve told for millennia. This is why, for instance, the west is individualistic and democratic, tending to understand justice in terms of liberty, whereas the east is susceptible to collectivism and communism, tending to understand justice in terms of social expectations. Our very sense that self-interest is a viable candidate for moral belief in the first place is an outgrowth of the Christian myth.

This leads us back to the previous section: as Peterson said in the discussion, it is difficult to draw a bright line between what is real and what is useful. When you strip subjectivity from the world at the beginning of your analysis of the human condition or the world, Peterson thinks you create two possible pathologies, totalitarianism and nihilism, neither of which fundamentally values life, because both have separated vitality from mechanism, breath from logic.

The strange thing about Dillahunty’s reflections is that he’s actually much closer to Peterson than it appears in Pangburn’s video. As I have written, Peterson thinks religion has evolved by Darwinian mechanisms; religious myths provide for us the grammar of stories; and, because they rely on competence hierarchies, these stories set the background evolutionary setting to which we’ve adapted as a species, and the conceptual ground from which our concepts of the individual derived. There is nothing supernaturalist about this position; in fact, it’s a denial of special revelation, miracles, and divine inspiration altogether, or at least, if these concepts are employed at all, they’re stripped of their traditional content. I would like to see Dillahunty and Peterson discuss these issues more fully, and I think for this to happen we have to get beyond, as I’ve said, the full-stop question of the existence of God. With or without God, how does religion affect our modern landscape? With or without God, what does the language of myth provide that, say, pure hard logic can’t (if anything at all)? I’m hopeful the conversation might turn more interesting on these points, given that it appears both Dillahunty and Peterson had a good-faith dialogue last time. Next time we might be in for something special.



[1] See Peterson’s discussion on this difference in “04 – Religion, Myth, Science, Truth.”

[2] Ibid.

[3] See much more in “Why Tell the Truth: On the Curious Notions of Jordan B. Peterson.”

[4] See much more in the article above. The logic of “mythical substrate” is basically that our ideas and rationalities derive from our behaviors which are abstracted into myths which are further abstracted into concepts. The loss of the mythical substrate is essentially the loss of the behaviors that give rise to it.

[5] See Inventing the Individual: The Origins of Western Liberalism for a much fuller picture of what the claim that the west was founded on both Jerusalem and Athens (i.e., Christianity) means. Note that this is not a normative judgment, entailing that now all our values must revert back to some Christian theology to be grounded. It’s simply a description of history, and the acceptance of value derived from Christian thought doesn’t entail the acceptance of Christianity to be intelligible today.

[6] Joseph Campbell, Myths To Live By, 61.

[7] Ibid., 69.


My Disappointment with the Matt Dillahunty and Jordan Peterson Discussion

Since writing this article, Matt Dillahunty has released his reflections on the discussion. I’ve revisited the dialogue here in light of his comments.

I recently listened to the Pangburn Philosophy–sponsored discussion between Matt Dillahunty and Jordan Peterson and was extremely disappointed by it. The discussion represented something that has become commonplace in the secular movement when prominent thinkers attempt to discuss religion: there is a full stop at the question of the existence of God. This is unbelievably stifling and, frankly, uninteresting, for (at least a few) reasons I will outline below. After a brief interchange with Dillahunty himself about this, I am still rather unsatisfied by his responses to my questions. He welcomed an email from me, and I will update you all when I hear his response.

As a precursor to my exposition below, I just want to give a brief description of my history with religion and religious people, specifically Christianity and Christians, to show that my ideas are not, indeed, foreign either to the study of this religion or to these religious people themselves. Dillahunty had charged that I sounded like a person who has never talked with a fundamentalist or Evangelical Christian. In fact the truth is the opposite: these are the people I have known my whole life, and many friends of mine still live within both traditions. I grew up in a small town of 2,000 people in northwestern Indiana: a rural, mostly farmland community where 90% of the population was conservative, Christian, and Republican. I still attend a church there sometimes, although I live near Indianapolis now, and consider myself a secular humanist. I also attended a small, private Christian university (Anderson University in Indiana) to study philosophy and theology (although they cut their philosophy program my fourth year there and I dropped out). I attend seminary courses at the Christian Theological Seminary in Indianapolis in my free time and anticipate enrolling in their MTS program in the coming months. I like to, as Christopher Hitchens used to say, keep two sets of books. Though I’m a secular humanist, I am fascinated by belief in God and have a deep desire to understand it.

This is where the recent discussion comes in. It seems like the secular humanist movement really needs to get beyond the question of whether God exists, mainly because this question assumes it understands what religious people mean when they talk about the “existence” of God. I just want to briefly suggest here how difficult it is to understand what is meant by the “existence of God,” or the meaning of faith by referring to the ideas of a few prominent theologians.

The theologian Rudolf Bultmann wrote on the difference between talking about God and talking from the existential reality of God, effectively claiming that the person of faith can never talk about God (positing God as an object outside herself to be comprehended), but that for religious people God is something like the “Wholly Other” that exceeds all language and thought. Consequently, for him faith means “the abandonment of man’s own security and the readiness to find security only in the unseen beyond, in God.” This is a far cry from the notion that religious people have some kind of rational grounding for believing in God, or that the average religious person strives for one. The language Bultmann uses suggests an entirely different grammar from the logic of rationality.

Similarly, Paul Tillich defines faith as “ultimate concern.” As JBH comments, “While faith may certainly involve rationality and emotion, for Tillich it transcends them both without destroying either, thereby overcoming the gap between subjectivity and objectivity.” Continuing, for Tillich, “God functions as the most fundamental symbol for ultimate concern. Regardless of whether one accepts or rejects ‘God,’ the symbol of God is always affirmed insofar as God is a type of shorthand for what concerns humanity ultimately.” Here again, we find a robust definition of faith and belief which goes beyond the understanding that belief is merely the acceptance of a proposition without evidence. It is an open question, given Tillich’s understanding, whether faith can be obtained through reason, or whether faith itself provides a logic of its own for interpreting the world and its events.

Indeed, Friedrich Schleiermacher, the father of modern liberal theology, writes in his speeches to religion’s “cultured despisers” that faith is different from physics, ethics, and art. This Christian thinker understands religious doctrines and dogmas as contemplations of a feeling of ultimate dependence on the universe. Schleiermacher recognizes that this exposition of religious language, as an expression of a certain feeling, puts it in a distinct discourse: “Religion, however loudly it may demand back all those well abused conceptions, leaves your physics untouched, and please God, also your psychology.” He goes on, in this light, to describe the uses of religious terms. A “miracle” is “simply the religious name for an event.” A “revelation” is every “original and new communication of the Universe to man.” I take this to mean that when language gives perspective to life, then it is revelatory language. He also makes a distinction between true belief and false belief: “Not every person has religion who believes in a sacred writing, but only the man who has a lively and immediate understanding of it, and who, therefore, so far as he himself is concerned, could most easily do without it.” Although Schleiermacher calls “God” and “immortality” ideas as opposed to feelings, he points to “God” as a unifying concept “in whom alone the particular thing is one and all.” “Is not God the highest, the only unity?” “And if you see the world as a Whole, a Universe, can you do it otherwise than in God?”

With this kind of talk, we secular humanists are certainly standing on a strange continent. Yet we should not turn around, now, and give over thinking to cliches about what “God” or “faith” or “religion” must mean; rather, we should explore the jungles of religious thought in hopes of finding what is worthwhile and intelligible, for in either case we learn about the common humanity that connects us all, whether secular or religious.

With a few questions, let’s further free our minds from the prejudices derived from overly simplistic understandings of religious belief and think for a second about what it would mean for religious people to understand God as a being like other beings. It would mean that fundamentalists themselves would say that we can get closer to God depending on where we stand on the earth, that we could see God if we had better qualities of perception, that we could hear God if our auditory system was more powerful. But this isn’t what even fundamentalists claim. They’ll say God is everywhere. And we have to take that seriously. God isn’t a being like other beings (see the debates surrounding the analogia entis).

You might ask why we should listen to the major thinkers of theology when we can ask everyday believers what their belief means. This is an important question and bears more attention than it has received. It is a question the philosopher of religion D. Z. Phillips took up in The Concept of Prayer. Just because someone knows how to paint, it doesn’t follow that they have anything to say about art theory. Just because a religious person prays, it doesn’t follow that they have some kind of robust understanding of prayer or can articulate it with symbols other than those passed on to them. Daniel Dennett makes this wonderful distinction between having competence in a game and comprehending the game (many pragmatist philosophers of language do as well, such as Robert Brandom in Making It Explicit). I can be competent at playing guitar, for instance, but it doesn’t follow that I comprehend what I’m doing when I play guitar: that I know what the chord names are, or that I know how to place musical symbols on a scale and write a song with notation. In the same way, not all religious people comprehend the meaning of their beliefs, although they are competent actors within the rituals and systems of discourse in their communities. So a discussion with those who are both competent religious actors and comprehend religion’s history is paramount for understanding it. This, I think, is the import of Peterson’s point that Sam Harris doesn’t reference Eliade (virtually the founder of religious studies) once in his works.

Another point that D. Z. Phillips made over and over in his career is that distinct discourses (or “language games”) can infect each other, and this infection can either undermine discourses or revolutionize them. The undermining process occurs when the logic of one discourse (say science) is used to interpret the surface grammar[1] of another discourse (say religion), so that even religious believers begin to use scientific logic to think about their beliefs, despite this logic being foreign to their beliefs. So the problem with being a competent actor who does not also comprehend the discourse she participates in is that she is susceptible to this undermining. It creates cognitive dissonance. I think this happens a lot to religious people. And examples of this undermining can be seen when faith is reduced to the shallow understanding of belief (the acceptance of propositions without evidence), when God is reduced to a being (existing somewhere), and religious practices are reduced to their social benefits.

The secular humanist movement would be better off, especially in its relation to religious people and its understanding of religion and religious belief, if it sidestepped the question of the existence of God and asked what it means to say that God exists and what it means to believe or have faith in God. It seems to me that this change of emphasis must be granted purely out of the principles of charity and skepticism: the principle of charity, because to arrive at a position about religion and religious belief we have to engage with the best religious thinkers, who do ask these questions; and the principle of skepticism, because we have to be skeptical of our own assumptions and ideas about what religion and religious belief are.

As we have seen, the father of modern liberal theology, Friedrich Schleiermacher, wrote on the relation between religion and the sciences and arts. And I think his answers still have pertinence today. Is faith a feeling of ultimate dependence? Is “miracle” the religious word for any event, such that the more religious you are, the more miracles you see? Do religious beliefs, in fact, have nothing to do with ethics and physics, as he claims? These are open questions, I think, and they can’t be answered just by taking a relatively new branch of Christianity (fundamentalist Southern Baptists, for instance) at its word, as Dillahunty seems to do with a small sample of a small movement. A certain sect’s view of theology isn’t necessarily the majority Christian view, nor is it the most traditionally representative. For instance, the Americas house only about a third of the world’s Christians, and at least half of the world’s Christians are Catholic. Why not engage with the thoughts of Catholic thinkers like Karl Rahner or Thomas Aquinas?

The theologian Paul Tillich defined faith as “ultimate concern”: a disposition toward reality as a whole shaped by what concerns us ultimately (for instance, perhaps, that being is good despite suffering). Another important theologian said that beliefs are the “thoughts of faith.” From here we can begin to see how the question “what do you believe?” is a little misleading and unhelpful for those of us who want to understand religion. The beliefs of religious people seem to be expressions of a disposition toward life as a whole, and aren’t themselves what is worthy of worship (the Reformers, for instance, distinguished between the letter of the Bible and the Spirit of the Word). Let’s therefore draw a distinction between faith and belief: belief is an expression of faith and does not ground it. Our questions should be directed toward the lived reality and experiences indicative of faith rather than toward the propositions of belief. Wittgenstein once said that the concept “God” is something like the concept “object,” in that it is a basic concept for a way of conceiving the basic things in reality. I think it would be fascinating to explore the ways in which the word “God” is similar to “object,” for in answering that we might actually articulate an authentic abstraction of religious belief and, perhaps, distill the meaning of faith.

Why fixate on the question of the existence of God when even in theological circles it is a cliche that people do not come to faith through rational argument and, in philosophical theology, there is a distinction made between the God of the philosophers (something like the first mover, the idea greater than that which can be conceived, etc.) and the God of religion (who is worthy of worship, the God of love and hope and freedom, etc.)? Why argue against a God not worth believing in, even by religious standards (and quite likely nobody believes in), and not try to articulate the God who religious people put their faith in? It seems like the major thinkers in the secular humanist movement have done next to no homework on the variety of religious experiences and the different conceptions of religious belief and ritual (as these have been explored extensively in religious studies), and the secular humanist movement suffers for it. If indeed it is possible that the grammar of religious language differs from the logic of rationality, it seems absurd to dismiss it out of hand as not worthy of discussion or serious thought. It seems we have a long way to go before we can actually mount a criticism of religion, because we have yet to understand it. And I’m not advocating here for a distinction between the facts of religion and the values of religion, for us to see the social or psychological benefits or ill effects of religious belief, but an investigation into the phenomenology of religious experiences, and the kinds of experiences and the kinds of thinking that religious belief expresses.

I hope this makes some sense and that I have presented my question sufficiently enough (though of course not comprehensively) so that where I’m coming from might be at least basically understood. Is my concern here unfounded? Does the secular humanist movement have no more work to do in the realm of understanding religion, and the only work before it is to deny and refute it at every turn? Might there be a possibility for building bridges, to recognize the possibility that our common humanity might allow for different dispositions toward the world, and that understanding these differences might allow us all to work together better?



[1]  Some Wittgensteinians draw a distinction between “surface” and “depth” grammar. The surface grammar is the way the grammar of a statement appears to a person. So the surface grammar of “God is in heaven” appears for many nonreligious people as the same as the depth grammar of “Mom is in the kitchen.” Depth grammar is the intended logic that underlies a statement and motivates inferences and conclusions from that statement. So the depth grammar of “Mom is in the kitchen” could be something like “Dinner will be ready soon” or “Mom is not in the living room, basement, upstairs, etc.” The question I am raising here is something like: The surface grammar of the statement “God is in heaven” misleads us to think religious people are making an empirical claim when the depth grammar might mean something like “Come what may, existence is good.”

The Free Will Debate: Martin Gardner and the Mysterians

As we continue our series of articles and podcasts on the subject of free will, one particular viewpoint keeps tapping the back of my mind, like a reliable friend who is there to remind you of your lapses. What if we’re approaching the free will discussion incorrectly altogether? What if the problem of free will can’t be solved, or at least not yet? What if we don’t have the requisite knowledge to definitively answer the free will problem?

These questions were brilliantly elucidated by the grandfather of the skeptic movement himself, author Martin Gardner. Mathematician, master debunker of the paranormal, and self-proclaimed “philosophical scrivener,” Gardner outlined his views on the free will problem in an essay entitled “The Mystery of Free Will.” He argues that “the free will problem cannot be solved because we do not know exactly how to put the question.”[i] The complexities involved in establishing a proper investigation of free will (a fuller picture of human consciousness, physics, and social systems) currently preclude us from answering the free will question with any confidence. As he puts it, “Our attempt to capture the essence of that freedom either slides off into determinism, another name for destiny, or it tumbles over to the side of pure caprice. Neither definition gives us what we desperately want free will to mean.”[ii]

So, what does Gardner mean by free will? He describes the problem as “another name for self-awareness or consciousness. I cannot conceive of having one without the other.”[iii] In other words, Gardner believes that free will is predicated on the presumption that human beings have some level of self-awareness or consciousness. Now, while this is descriptive of what Gardner thinks we have, he thinks we’re currently incapable of “distinguishing free will from determinism and haphazardry.”[iv] Determinism’s reductionism places free will in the ash heap of philosophical history, relegating the problem to nothing more than an illusion that we must accept. Conversely, indeterminism “becomes equally delusory, a choice made by some obscure randomizer in the brain which functions like the flip of a coin.” Neither option leaves a ponderer fully satisfied that the problem has been solved; it is best to leave free will as an open-ended mystery — “a mystery bound up, how we do not know, with the transcendent mystery of time.”[v]

With this answer, Gardner belongs to a small but influential cadre of philosophers described as the “Mysterians,” thinkers whose refusal to settle the questions of free will, mind, and consciousness is itself a mark of their shrewdness. Gardner shared this view with physicist Roger Penrose, and they both believed that “there are deep mysteries about the brain that neurobiologists are nowhere close to solving.”[vi] Other “Mysterians” on the problem of free will are philosophers Thomas Nagel, Colin McGinn, and Jerry Fodor, as well as linguist and social theorist Noam Chomsky. They follow the simple but effective adage that Ludwig Wittgenstein penned in his Tractatus Logico-Philosophicus: “Whereof one cannot speak, thereof one must be silent.”[vii]

Wittgenstein appears not to be the only German-language philosopher that Gardner consulted when coming to his conclusion on free will. For that, we turn to the Prussian enlightenment genius, Immanuel Kant. Like Kant, Gardner believed that “the best we can do (we who are not gods) is, Kant wrote, comprehend its [free will’s] incomprehensibility.”[viii] According to Kant, the empirical, rational investigation of reality rested on a logical assumption of causal determinism, but the intangible (or noumenal) aspects of human freedom (what he attributed to a soul) belonged to a “transcendent, timeless realm” where humans are “truly free.” These two contradictory forces, “empirical determinism” and “noumenal freedom,” seem impossible to reconcile.[ix]

Kant specifically addressed this issue in his work, Religion within the Limits of Reason Alone:

Here we understand perfectly well what freedom is, practically (when it is a question of duty), whereas we cannot without contradiction even think of wishing to understand theoretically the causality of freedom (or its nature).[x]

Gardner admits (as a proper skeptic) that he doesn’t necessarily buy into some of Kant’s metaphysical claims, but the general point is the same. We feel we have free will, but that feeling is at odds with what we know about the mechanics of the universe. This is an apparent contradiction that cannot be dissolved by mere sophistry, leaving Gardner most comfortable with admitting he doesn’t have a solution.

As someone who identifies as a compatibilist and has spoken of its merits, I am equally enthralled with the mysterian position. Gardner and others are not afraid to say, “I don’t know,” which is both intellectually honest and philosophically astute. Perhaps there are mysteries about consciousness, mind, and time that we have yet to fully comprehend, and until we have the requisite knowledge about these conceptions, we are ill-equipped to solve the problem of free will. Humility is the beginning of the path to wisdom, and in that regard, Gardner had it in spades.



[i] Martin Gardner, The Night Is Large: Collected Essays, 1938-1995 (New York: Macmillan/St. Martin’s Press, 1995), 427.

[ii] Ibid., 428.

[iii] Ibid., 427.

[iv] Ibid.

[v] Ibid., 428.

[vi] Ibid., xix. In a future essay, I will explore how neuroscientist Michael Gazzaniga aptly attempts to assuage Gardner and Penrose’s fears by demonstrating a pragmatic approach to free will that is grounded in neuroscience.

[vii] Ludwig Wittgenstein, Tractatus Logico-Philosophicus (New York: Harcourt, Brace & Company, Inc., 1922), 189, accessed February 5, 2018, Google Books.

[viii] Gardner, The Night is Large, 428.

[ix] Ibid., 440.

[x] Kant, as quoted in Gardner, 440.


The Architecture of Language: On the Free Will Debate

Recently, Reason Revolution host Justin Clark sat down with author J. R. Becker to discuss the disagreements between Daniel Dennett and Sam Harris concerning free will. As a complement to their conversation, I want to discuss free will from the standpoint of the meaning of concepts, to ascertain what the difference between Dennett and Harris amounts to and to shed some light on why this debate is happening in the first place.

Clark begins the conversation by outlining three ways of thinking about free will:

  1. Determinism, which understands free will to be an illusion because every action and event has prior causes, and these causes had prior causes, all the way back to the big bang;
  2. Libertarian Free Will, a position, Clark states, promulgated by Christian theology and existential philosophy, understanding free will to be total, that we are condemned to be free;
  3. Compatibilism, a sort-of middle ground, recognizes the truth of determinism that all actions and events have prior causes but does not understand this to be a defeater for free will.

This basic framework is accurate enough. It is interesting to me that Clark outright rejects libertarian free will, as the reasons one would accept it are very similar to those one would use to be a compatibilist. Although I am not sure how accurate it is to say existentialists are talking about free will when they talk about choice (rather than the political term “freedom”), the appeal of the libertarian position is that it begins with our everyday use of the word “choice” and radically grounds the meaning of life in how we choose to be response-able for it, how we choose and live our values. This isn’t unlike the compatibilist position. To be a compatibilist is essentially to use everyday language, rather than the scientific conception of reality, to think about the meaning of concepts. The difference between these two positions is essentially that the compatibilist is more accommodating (or perhaps more knowledgeable of different frames of reference) and therefore leaves the language of science to speak within its contexts and the language of existentialism to speak within its contexts. Let’s think more about this movement between everyday use and scientific use. Is it always more reasonable to begin from the scientific conception of reality?

Beginning at the End

I want to tell you a story about a famous philosopher from the 20th century, the two schools of thought he created, and how, although the latter school supersedes the former, the former is still alive and well. This story is not your usual story, because the point of the story is not so much what it refers to outside of itself but rather is the story itself; the very language it uses is as much the “author” of the text as I am.

We are linguistic beings, which means we both communicate through language and, at the same time, create the language by which we communicate. This isn’t so much a strange fact today. For instance, iPhones did not fall from the heavens. We understand that any new iPhone will be manufactured by humans and that it will, most likely, alter the ways we communicate and relate to each other in some way. What is less readily conceivable is applying this recognition to our most basic and natural communication tool: language. The language we use about things, situations, emotions, and the like gives meaning to and partially determines our behavior toward them. A very clear example of this is how the word “thinking” has been displaced by the word “processing,” and how the rise of science has changed the metaphors we use to reflect and think about ourselves. Today, our brains are computers, and the hardware of neurobiology creates the software of consciousness. Is anything today so illuminating as this metaphor, and so radically different from the historical view of the spiritual soul, disconnected from all things physical, trapped within the prison of fallen, finite things? Not only has the metaphor informed the kinds of questions we now ask about what it is to be human, but it has altered our situation as humans, from the technology we create to capture, manipulate, and transcend our human capabilities to how we relate to each other. Accordingly, forms of language are, in some ways, forms of reality. If you question nothing else in this article, please question this statement: live with it, by it, for it, against it, without it, because of it. Just don’t forget it. Language causes and solves our problems. It is to language we must turn to understand the origins of our problems and the way to their solutions.

As we enter this story, let us not forget that the concepts we use and our forms of language belong to contexts, and these contexts are composed of specific problems, objects, and logics. Within these contexts, we either use language to extend our concepts to include more experiences, situations, and phenomena (as when religious people call a tragedy part of “the will of God”), or we use concepts to disrupt the very logic of the language we use and the contexts in which our language makes sense (as when we use irony or hyperbole, or when Sam Harris says, “Free will is an illusion”). The great advantage we have over animals, as a result of our ability to use language, is that we can project possible futures, using concepts as extensions of realities. We can confer motives on things and predict their actions. We can ascribe cause and effect to the world and therefore project possible situations in which we must act. Grounding all this, however, is the fact that our primary tools for acting are not simply instinctual but social. This is of course not to say that language acquisition is not instinctual,[i] but rather that our instincts have given us tools that far exceed the limitations of mere instinct, just as our thumbs give us abilities that far exceed their mere movement.

Finally, I want to offer one more tool as you proceed through this story. Kenneth Burke points out in Permanence and Change: An Anatomy of Purpose that the ways in which we are trained to think and act in specific situations may make us blind to what is relevant and important in situations where our training does not apply. He calls this unfortunate fact “trained incapacity,” which, specifically, he defines as “that state of affairs whereby one’s very abilities can function as blindness.”[ii] Many secular humanists, unfortunately, fall into trained incapacity when they critique religion, especially when critiquing the notion of salvation as “escape.” Burke describes the problem with this criticism of religion well: “Whereas it [the motive to “escape” reality] applies to all men, there was an attempt to restrict its application to some men….While apparently defining a trait of the person referred to, the term hardly did more than convey the attitude of the person making the reference.” Burke wants to frame the problem of incapacity as a problem of “faulty means-selection,” which is a “comparison between outstanding and outstanding,” a comparison of relevant details between different situations (what stands out in one situation and what stands out in another). When we reason about the world, we reason by the means of language. As a result, how we select what is relevant in certain situations, and how these relevant things connect with other relevant things in other situations, is a question of our means of selection, or, in other words, of what concepts we use to talk about the things we are trying to talk about.

My claim, at the outset, is that Harris has a trained incapacity, and that this is a consequence of his scientific training. As a result, Harris takes the cause and effect continuum to be what is relevant in conversations about free will and therefore calls all talk about free will senseless (this is what it means to say “free will is an illusion”). Dennett, by contrast, examines how “free will” makes certain concepts like “responsibility,” “control,” “choice,” and “agency” relevant. Now, it is important to affirm that Harris’s scientific perspective is a legitimate enterprise and acknowledge that when we think scientifically, we must extend the logic of science as a means of selection for understanding the world. However, we must not fall into the trap of thinking that science is the only way to make sense of every situation and concept. Does knowing what chemicals are released in the brain in situations of “love” fully answer the beloved’s question, “Why do you love me?” Telling my wife we are in love solely because of our biology would be offensive to the language of love. Likewise, we must consider the extent to which Harris’s analysis is offensive to the social and linguistic understanding of free will.

Science and the Meaning of Concepts

From the early 1920s to the late 1930s, a group of philosophers, mathematicians, and scientists met in an influential club now known as the Vienna Circle. Alongside members such as Rudolf Carnap and Kurt Gödel, the Circle attracted visitors and interlocutors like W. V. O. Quine, A. J. Ayer, Frank Ramsey, and Karl Popper; hovering over it all was Ludwig Wittgenstein, an esoteric and eccentric philosopher obsessed with language, whose work the group studied closely even though he was never formally a member. The purpose of the group was to bring to the language of philosophy a precision that would turn it into a science. Using logic, mathematics, and empiricism, the Circle mounted a devastating critique of philosophical metaphysics. Perhaps the greatest and most obscure representative document of this critique is Wittgenstein’s Tractatus Logico-Philosophicus. Here, he laid the groundwork for the principle of verification: a concept is true, or has cognitive meaning, to the extent that it represents an object or state of affairs in reality; in other words, a concept is meaningful if it can be verified. This principle was a watershed for the logical positivist movement, or “positivism.”

Many things followed from this principle. For instance, it can be definitively claimed that religious language is contentless, senseless. The term “God” represents nothing in reality, and certainly is not derived from a state of affairs, and therefore it is meaningless. The principle seems to give truth claims of science a more robust framework. Concepts like “free will,” “soul,” and “ego,” can be thrown out without a thought, shown to be nonsense and without content. If a concept cannot be verified, it cannot have meaning.

Yet the principle is not without issues. One obvious problem is that it does not verify itself. It is a mere tautology.[iii] How do we know that a concept has meaning only to the extent that it represents something in reality? Well, because that’s how the Vienna Circle defined “truth” and “meaning.” The Vienna Circle’s concept of truth does not adequately account for the many different uses the concept has. Another problem is that it does not distinguish between statements that are descriptive (reports) and statements that are normative (imperative statements). Are all imperative statements nonsense? To say that something is “hot” or “cold” is to describe your world. We can verify whether something is hot or cold by our senses or by agreeing on what hot or cold means on a thermometer. But to say that something is “good” or “bad” is normative: one could say it’s good to be a Democrat and bad to be a Republican, or vice versa. What sense does this have from the positivist perspective? Where can I point to and identify the “good” of Democrats or “bad” of Republicans, unless I already assume the nature of this goodness? This is the is-ought problem, rearticulated. We will return to this later.

After Wittgenstein wrote the Tractatus, he believed he had solved all the problems of philosophy. These problems were either confusions of language, claiming content for concepts where none could be found in reality, or they were caused by railing against the limits of language. The limits of language are, indeed, the limits of philosophy. Famously, the final proposition of this influential work states, “Whereof one cannot speak, thereof one must be silent.” Silence is the best we can do with questions about the ultimate things, those things which ground our languages, which form the connections between the is and the ought.

Wittgenstein’s retirement from philosophy was brief. He soon realized the positivist conception of language did not adequately account for the complex ways in which language is used and still has meaning. Consider metaphor, poetry, body language, allegory, and the like. These uses of language clearly say something, and for language to say something is for it to “make sense,” to “have meaning.” Whether or not words refer to things or states of affairs is not the whole question of meaning or truth, Wittgenstein realized. Language acquisition and use play, perhaps, an even larger role than reference. Consider when a mother points to a ball and says, “ball” to her toddler. How is the toddler to know that when the mother points to the ball the toddler isn’t supposed to follow a line from the elbow, or that the mother isn’t talking about the ball but about the color of the ball, or the shape, or even the space the ball fills by its existence? The toddler comes to know what “ball” means by interacting with the ball, by learning how “ball” is used in the contexts in which it is appropriate to talk about “ball.” This is the central insight of Wittgenstein’s later work, his rebuke to positivism: the meaning of a concept is its use in a context; meaning is a function of context.

Do We Agree on the Facts, or Are We Just Playing a Semantic Game?

Let’s return to the topic of free will but in a different light: the determinist position appears to be derivative of positivism, whereas Dennett’s Wittgensteinian and pragmatist influences show in his position, for it matters to Dennett that our reflection on concepts begins on the basis of accurate use of these concepts. We can call anything truth: but does the arbitrary changing of definitions mean anything? This question brings to mind the work of James K. A. Smith, presently a popular theologian in ultra-conservative Calvinist circles, who relies on stale arguments and linguistic sleights of hand. For example, he defines “liturgy” as anything that shapes our desires. So it appears “deep” when he makes the claim that basically everything is liturgy: from the ways in which we shop at malls to our daily after-work routines. One implication of calling desire-shaping phenomena “liturgy” is to suggest that we’re all “religious” at the core. And, indeed, this is assumed in the very conception of the matter. This is an extremely boring and underhanded way of saying something without saying something: Smith is an expert at employing the “deepity.”[iv] But it’s a telling example of how the words we use can affect our perceptions of our objects of study. Why not just replace “liturgy” with “stimuli”? Well, for one reason, Smith would be out of a job. Additionally, there would be no implication, in any given instance when we use “liturgy,” that forming habits fulfills a religious need. Smith’s trailblazing conclusion, that desire-shaping practices are ultimately about “worship,” would not be assumed at the outset. Smith’s method of argumentation is one way to have your conclusions made for you: the very words we use shape our intuitions as linguistic beings.

What does it mean to ask if we agree on “the facts?” Consider that you’re having a discussion with James K. A. Smith on desire-shaping practices. What sense does it make to describe the things that draw our attention and shape our desires as “stimuli,” and not “liturgy?” Are we disagreeing about facts, here? Is it all “just semantics?”

The fact is that our words shape and, in some ways, determine what we see in the world, giving rise to disparate forms of thinking about what is “the world.” If we use “liturgy” to talk about desire-shaping practices, the inferences the concept itself compels us to make imply that when we conduct acts which shape our desires (that is, when we do anything), we are indeed performing acts of “worship,” and the places in which we perform these acts of “worship” are our “holy” sites. This is what I mean when I say that the conclusions are already contained within the very assumptions from which we begin any analysis. For Smith, just as for Harris, the facts are given as a starting point. Consider what would follow if we began our analysis of desire-shaping practices from the mechanistic conception of the universe. Unlike Smith, Harris would say that we do not shape our desires (“liturgies”) by “performing acts of worship” in “holy sites,” but by being influenced by the “conditioners” in our “environments.” Nothing like “worship” or “holy sites” is insinuated by the use of the words “conditioners” and “environments.”

As such, our use of a word like “facts” is determined more by our points of reference and our forms of analysis than by what we find in the world. Our very use of the concept of “fact” delivers objects in the world which are essentially different from the objects in the world we find when we think of things as projections from our emotions, as symbols of what the future will bring, or as “miracles.” Both Smith and Harris can agree on the “facts,” to the extent that they can analyze the same situations, but what these facts are named, whether “liturgy” or “stimuli,” is just as important in shaping what the facts mean as the objects and situations under investigation.

So What is at Stake?

The free will debate is simply a good representation of what occurs in every discussion where science attempts to analyze concepts derived from everyday use without paying attention to the inferences we make by these concepts: concepts like “mind,” “thinking,” “belief,” and “morality.” This debate is also a good example of the difference between positivist and ordinary language philosophers. But let us take a look at another aspect of this debate, moving beyond the analysis of the concepts put in play, and consider the consequences that follow from these concepts.

The debate between Harris and Dennett boils down, in some ways, to the question B. F. Skinner raised half a century ago. When we are trying to understand the reasons for actions, do we look at the intentions of the person from our everyday use of concepts and within a normative framework of moral responsibility, or do we look at the conditioners of action, the mechanics of the universe that make some actions more likely than others and put in place mechanisms that will influence better outcomes? This is the crux of the free will debate between Dennett and Harris. And to the extent that we side with Dennett, we are looking for ways to innovate our normative schemes, to extend some concepts and retract others when it comes to our language about free will, responsibility, and justice. And when we agree with Harris, we are looking at the physical mechanisms of the world in order to manipulate and shape them to improve society.

Going back to the is-ought problem as introduced earlier, we can say that both the descriptive and normative frameworks are different for Dennett and Harris. For Dennett, the descriptive side of his analysis involves looking at the everyday situations in which it makes sense to use “free will” and then to outline the inferences we make in those situations, the consequences of using this concept. For Harris, the descriptive side involves data about the mechanisms of reality. What we count as descriptions, or the “is” of reality, informs, then, the “oughts” that follow. For Dennett, to rid us of the concept of “free will” is to rid us of the kinds of practical, social relations in which we participate when, in the everyday world, we use this concept. That’s why Dennett wants to talk about the moral aspect of free will. For Harris, to lose the concept of free will is to lose nothing, because both morality and free will are about the mechanisms of reality, and just as our moral intuitions are facts that pertain to the operations of these basic mechanisms of reality, so too is the illusion of free will. We have Dennett representing Wittgenstein’s later position and Harris representing his earlier philosophy.

The difference between Dennett and Harris is not only in the frameworks from which they analyze the problem of free will, but in the consequences that follow from their methods of analysis. To accept both projects as legitimate, which I think we should, would mean that we should work both to be linguistic innovators and also social revolutionaries. We should be attentive to the ways in which language shapes thought but also be open to using the tools of science to move beyond mere argumentation and hermeneutical innovation to improve society. The public clash between two legitimate ideas generally revolves around the fallacy that these ideas must be integrated in some theoretically general way for them both to be legitimate, or else one must give way to another. What is more likely true is that Harris and Dennett have different levels of analysis, and that it is a fallacy to think different levels of analysis must be reconciled in general ways. Rather, they must be married in the life and action of individuals, and to the extent that one level is more useful for some people in some situations than it is for others in other situations, then one level of analysis will be more significant and appropriate. We must move beyond the rationalist fallacy. Employed in a different example, this fallacy would have us believe that to use 1+1=2 we must understand the nature of addition and how 1+1=2 can both be grounded in quantum physics and explain why my wife is angry at me for not walking my dog this morning. The rationalist fallacy bewitches us by making us think we have to have a theory of everything to have a perfect language. Yet, we know, different levels of analysis are true in different ways, for different projects, and for different people.

Ending at the Beginning

The difference between Harris and Dennett amounts to this: while Harris is unwittingly reducing other vocabularies to his scientific vocabulary and thereby displaying a trained incapacity, Dennett wants to keep both vocabularies for creating different contexts, exploring different kinds of experiences, and communicating different ways of existing. The contexts in which free will makes sense are not forms of existence that are delusionary, as Harris would have us think. As Kenneth Burke puts it, “To explain one’s conduct by the vocabulary of motives current among one’s group is about as self-deceptive as giving the area of a field in the accepted terms of measurement.”[v] Put another way, “Motives are shorthands for situations.”[vi] When we consider a breach of contract, what is relevant, in these situations, is not as Harris would have it: a consideration of the cause and effect universe and every single way in which our actions and decisions have prior causes. Rather, what is important for Dennett’s form of free will is that the person has “chosen” to breach the contract, based on the concepts we use in contractual situations. When we say a person made a “choice,” we are saying the possible future outlined in the contract in which “breach” makes sense has been actualized: we are not stating a description of neurobiology or physics. We are using concepts, just as scientists use concepts to both create and describe the world, to make sense and act in the world where “contractual relation” is our current situation. Against Harris’s referentialism, Dennett’s free will reaffirms Wittgenstein and Burke: the meaning of our words has to do with relevance, with what they make relevant, and not with reference.

We’ve talked so much about language at this point. Let us just throw out, as the straw that breaks the camel’s back, a simple point of logic which neither Harris nor Becker seems to acknowledge. In the podcast, when Becker brought up the Libet Experiments to ground his claim that our choices are predetermined, he did not also acknowledge, as Kenneth Burke does, that “The discovery of a law under simple conditions is not per se evidence that the law operates similarly under highly complex conditions.” This is a fact we should have learned from the history of science, when the simple Newtonian vision of the universe was displaced[vii] by the Einsteinian vision.

Our ending is where we began, with the recognition that we are linguistic beings, and that the way in which we use words matters. Also, in the spirit of the later Wittgenstein, we end with the American, Kenneth Burke, who arrived, at approximately the same time as the later Wittgenstein, at the same conclusions: the rebuttal of Wittgenstein’s own early philosophy.

“We discern situational pattern by means of the particular vocabulary of the cultural group into which we are born. Our minds, as linguistic products, are composed of concepts (verbally molded) which select certain relationships as meaningful. Other groups may select other relations as meaningful. These relationships are not realities, they are interpretations of reality—hence different frameworks of interpretation will lead to different conclusions about what reality is.”[viii]



Photo Credit: Jef Safi

[i] Indeed, this is absolutely the case, as Steven Pinker argues in The Language Instinct.

[ii] Kenneth Burke, Permanence and Change, 7.

[iii] I refer to this as “tautology” rather than “axiom” to point out a basic insight of the later Wittgenstein. Our definitional statements that are supposedly “self-evident” are actually the boundaries of our conceptual schemes, our language games. They show the logic of our basic conceptual framework: this is what “definition” means in a functional sense.

[iv] “Deepity” is from an amusing chapter in Dennett’s book Intuition Pumps and Other Tools for Thinking.

[v] Burke, 21.

[vi] Ibid., 29.

[vii] I say displaced and not “replaced” because Newtonian physics still works when we are measuring short distances, but we need Einstein’s theory of relativity to measure distances between planets. I heard Lawrence Krauss make this point.

[viii] Burke, 35.


Harris’s Moral Landscape: A Scientific Utilitarianism

The history of moral thought is varied. Morality has traditionally been the province of philosophers and theologians, whose theories often extrapolate general concepts without empirical evidence, but recent trends in both science and philosophy favor another approach, one steeped in empirical observation and scientific study to define and defend moral principles. Garnering controversy and praise for its fresh discussion of morality, The Moral Landscape by neuroscientist Sam Harris represents such an approach. For Harris, moral relativism (the belief that moral goods are not objective) does not effectively create a just and ethical society.[i] Additionally, he rejects moral (usually religious) absolutism, which defines moral goods under strict, dictatorial guidelines.

As an alternative to moral relativism and absolutism, Harris introduces the idea of a moral landscape, where moral situations and concepts are on a continuum of approval or disapproval based on scientific studies of neurological and social data. His benchmark for what constitutes a moral good is the “well being of conscious creatures.”[ii] This argument is a new approach to the classical study of utilitarianism, founded in the late eighteenth and early nineteenth centuries by philosophers Jeremy Bentham and John Stuart Mill. Bentham and Mill’s social philosophy used the idea of “the greatest good for the greatest number” as the standard by which to make moral judgments. Harris’s moral landscape is a modern, more empirically grounded version of this time-honored philosophical tradition, but focuses more on the situational aspects of moral judgment. Thus, Harris’s moral landscape provides us with a new incarnation of utilitarianism based on scientific, as well as philosophical, foundations.

Utilitarianism: The Classical Approach

Before understanding the nature of Harris’s thought, a survey of classical utilitarianism must be conducted. Utilitarianism, as a social and political theory, argues that moral decisions should be made by considering the greatest amount of happiness for the greatest number of people possible. The founder of this theory was political philosopher Jeremy Bentham, who outlined his concepts in a treatise entitled An Introduction to the Principles of Morals and Legislation. Bentham argues, “nature has placed mankind under the governance of two sovereign masters, ‘pain’ and ‘pleasure.’ It is for them alone to point out what we ought to do, as well as to determine what we shall do.”[iii] Pain and pleasure, generally understood as functionally meaning “favorable” and “unfavorable,” self-evidently show the most appropriate actions for humanity, according to Bentham. Since we are subjected to pleasure and pain, “the ‘principle of utility’ recognizes this subjection, and assumes it for the foundation”[iv] of an ethical and moral system. In Bentham’s view, the principle of utility is the guiding precept governing moral action, both for government and for individuals, that expands pleasure or diminishes pain for the greatest number of people possible.

Bentham arrives at this conclusion through what is called the “hedonistic calculus.” The hedonistic calculus aggregates the intensity, duration, certainty, remoteness, fecundity (the likelihood of producing further sensations of the same kind), and purity of the pleasures or pains arising in interactions between social individuals, in order to establish the greatest utility possible in any given situation.[v] These criteria, applied like an algorithm to each moral situation individually, deliver the best possible moral outcome. This is generally called “act utilitarianism”: moral decisions are made individually and situationally, but collectively expand the moral benevolence of a society. Bentham’s theory argues powerfully for the equality of humanity as well as for the unification of laws and moral customs under a principle of utility. Yet his approach is hard to implement in the real world, because it supplies no unifying, general axioms to guide society toward actions of the greatest utility, and because running the hedonistic calculus for every situation that requires action is impractically time-consuming. This is where John Stuart Mill, utilitarianism’s other great architect, comes in to pick up the task.
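Because Bentham’s criteria are described as being applied “like an algorithm,” they can be sketched mechanically. The following is an illustrative toy only: Bentham lists the criteria but prescribes no numeric scale, so the 0–10 ratings and the simple summation here are assumptions, not his method.

```python
# Illustrative sketch of Bentham's hedonistic calculus.
# The numeric scale (0-10 per criterion) and equal weighting
# are assumptions for illustration; Bentham gives no formula.

BENTHAM_CRITERIA = [
    "intensity", "duration", "certainty",
    "remoteness", "fecundity", "purity",
]

def hedonic_score(pleasure, pain):
    """Net utility of one action for one affected person.

    `pleasure` and `pain` map criteria to 0-10 ratings;
    unmentioned criteria default to zero.
    """
    total_pleasure = sum(pleasure.get(c, 0) for c in BENTHAM_CRITERIA)
    total_pain = sum(pain.get(c, 0) for c in BENTHAM_CRITERIA)
    return total_pleasure - total_pain

def act_utility(effects):
    """Act utilitarianism: sum the net score over everyone affected."""
    return sum(hedonic_score(pl, pa) for pl, pa in effects)

def best_action(actions):
    """Pick the action whose total utility is greatest."""
    return max(actions, key=lambda name: act_utility(actions[name]))
```

Even this toy version makes the practical objection visible: six ratings per person, per candidate action, must be gathered before any single decision can be made.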

Mill agrees with Bentham on the principle of utility, but he expands upon the concept with his own version, the “Greatest Happiness Principle.”[vi] The principle posits that “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure.”[vii] For Mill, all utilitarian moral evaluation and action rests on this principle. Responding to critics who argued that pleasure is only of the body, Mill counters that some intellectual goals, when achieved, are more pleasurable than bodily desires, and that such higher pleasures must take primacy over the base, bodily pleasures of humankind.[viii] Thus, Mill’s utilitarian theory argues that broad rules must be created in accordance with the Greatest Happiness Principle in order to effectively implement a standard of morality for as many people as possible.[ix] This is known as “rule utilitarianism,” which seeks the greatest general happiness through broad, unifying guidelines that all members of a society use. But what are those rules?

In attempting to formulate such guidelines, Mill argues, “the ultimate sanction, therefore, of all morality…[is] the conscientious feelings of mankind.”[x] Humanity’s initial moral guidelines stem from subjective value judgments that then evolve into broader social commitments to ethical ideals like happiness. In an interesting turn, Mill dissents from Bentham and argues for something revolutionary within the utilitarian framework, something that will have a clear influence on Harris’s thinking: human morality is equivalent to states of mind. As such, the sanctions on moral behavior exist “always in the mind itself…this which is restraining me [from immoral action], and which is called my conscience, is only feeling in my own mind.”[xi] Mill’s dedication to the human mind anticipates the development of the neurological sciences and their relationship to human behavior, something Harris has openly defended. While these properties are of the mind, Mill argues that they are not innate and must be “a natural outgrowth…brought by cultivation to a high degree of development.”[xii] Another key axiom for Mill is that rules of conduct in society be created by “those who are qualified by knowledge of both ‘moral attributes and consequences,’” and that their judgment “must be admitted as final.”[xiii] Mill thinks that individuals, or groups of people, so qualified should survey the possibilities of action given by current circumstances and apply the Greatest Happiness Principle to them in order to determine general rules of conduct. Because the expansion of education naturally fosters intellectual growth and shared moral guidelines, utilitarianism can be applied to society through general rules of conduct. This is something Harris, presumably, would agree with.

Together, Bentham and Mill created a social philosophy that the philosopher Leonard Peikoff described as “knowing skepticism”: while these thinkers do not fully produce objective rules of conduct, the subjective value-states of humankind lead to the creation of larger rules by which society functions.[xiv] In introducing this skepticism, Mill and Bentham orchestrated a social philosophy with practical value, especially in its uniform rules of conduct based on collectively understood value judgments. Sam Harris’s “moral landscape” seeks to revamp rule utilitarianism, using neuroscience to explain social conduct and the nature of human happiness in a more scientific, objective way.

Harris’s Moral Landscape

As a trained neuroscientist, Sam Harris uses the tools of science to address our long-standing moral and ethical dilemmas. “Human well-being entirely depends on events in the world and on states of the human brain…. Differences of opinion will remain—but opinions will be increasingly constrained by facts.”[xv] Harris is putting forth a more actionable way of approaching ethics: traditional, potentially subjective modes of moral and ethical thought give way to discussions of quantifiable rules of conduct that can be measured within the constructs of science and reason. To this end, Harris posits the moral landscape as “a space of real and potential outcomes whose peaks correspond to the heights of potential well-being and whose valleys represent the deepest possible suffering.”[xvi] These moral peaks and valleys correspond directly to states of the brain, and under this scheme various cultural, ethnic, religious, and social customs are represented as features of the landscape. As Harris puts it, “Culture becomes a mechanism for further social, emotional, and moral development. There is simply no doubt that the human brain is the nexus of these influences.”[xvii]
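Harris offers the landscape as a metaphor, not a formula, but the picture of peaks and valleys can be rendered as a toy optimization surface. Everything below is invented for illustration: the well-being function, its coordinates, and the hill-climbing search are assumptions, not anything Harris proposes.

```python
# Toy rendering of Harris's "moral landscape" metaphor.
# The surface and its coordinates are invented for illustration;
# Harris proposes no actual formula for well-being.
import math

def well_being(x, y):
    """A made-up landscape with two peaks of unequal height:
    distinct ways of life can both count as genuine high points,
    yet the peaks remain objectively comparable."""
    peak1 = 1.0 * math.exp(-((x - 1) ** 2 + (y - 1) ** 2))
    peak2 = 0.8 * math.exp(-((x + 1) ** 2 + (y + 1) ** 2))
    return peak1 + peak2

def hill_climb(x, y, step=0.1, iters=500):
    """Local improvement: nudge a society toward higher well-being
    by repeatedly taking the best neighboring position."""
    for _ in range(iters):
        best = (x, y)
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                if well_being(x + dx, y + dy) > well_being(*best):
                    best = (x + dx, y + dy)
        if best == (x, y):   # no neighbor improves: at a local peak
            break
        x, y = best
    return x, y
```

Starting near either peak climbs to that peak, which loosely mirrors Harris’s point that the landscape admits multiple, objectively rankable peaks rather than a single absolute best.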

In trying to develop better modes of moral behavior, Harris posits that general well-being, much like the principle of utility for Bentham and Mill, is the benchmark for what constitutes a moral judgment, action, or outcome.[xviii] Yet he disagrees with them about the importance of subjectivity in the moral decision-making process. Harris argues that “there must be facts regarding human and animal well-being about which we can also be ignorant or mistaken. In both cases, science—and rational thought generally—is the tool we can use to uncover these facts.”[xix] Humanity’s evolutionary shift toward rationality and reciprocity has paved the way for moral and ethical concepts that increase the well-being of most parties within a society.[xx] This insistence on rationality, brain states, human thought, and general well-being creates the framework that makes Harris’s views consistent with Mill’s rule utilitarianism, even though Harris takes objective moral truths to be far more accessible than Mill did.

In explaining the nature of brain chemistry and its relation to human morality, Harris cites a study involving psychopaths and sociopaths, two psychological categories of people who, on average, make immoral or amoral decisions at the expense of others’ well-being. Harris explains that “the first neuroimaging experiment done on psychopaths found that, when compared to nonpsychopathic criminals and noncriminal controls, they exhibit significantly less activity in regions of the brain that generally respond to emotional stimuli.”[xxi] This correlation suggests that in the future, as neuroscience progresses toward an even fuller picture of the brain, society may be able to establish social norms based on such empirical data. Harris’s explanation of evil lends support to Mill’s view that social norms, and reliance on people of experience, could be used to create a utilitarianism with real social weight.

Harris’s moral landscape shares another quality of rule utilitarianism: studies of human belief show that facts and values are becoming intertwined. To understand this further, Harris elaborates on the nature of biases in human thought processes; he argues that bias “is not merely a source of error; it is a reliable [italics in original] pattern of error. Every bias, therefore, reveals something about the structure of the human mind.”[xxii] The problems associated with biases serve as a counterpoint to the prevailing moral precepts of a given society. Since sound arguments are built by stripping bias from propositions, the facts thus determined come to be believed; a sound fact “inspires belief.”[xxiii] Morality, in some instances, consists of inspired beliefs built on the past elimination of biases and the establishment of sound facts. Our shared grasp of sound facts, in turn, allows us to implement a form of rule utilitarianism that applies across a wide variety of societies.


Sam Harris has argued that human flourishing is directly correlated with a sound understanding of the fundamental facts of human well-being, particularly freedom, security, and equality. In the conclusion to his book, he argues that, while there may never be a completely implemented form of universal morals, humanity “must admit that some interests are more defensible than others. Indeed, some interests are so compelling that they need no defense at all.”[xxiv] This brief passage on competing interests in society is one of the most powerful implicit defenses of utilitarian thinking: some interests will take precedence over others for the sake of the greatest well-being in a society, and utilitarianism gives us a way of navigating those competing social interests. What makes Harris’s moral landscape important to the evolution of ethics is that it offers a method, one rooted in empirical evidence and philosophical consistency. It offers an attainable, institutional form of human morality that is a secular alternative to the contradictions inherent in theological ethics and moral relativism. Rule utilitarianism, from Mill’s classical form to Harris’s moral landscape, shows a systematic approach to the expansion of positive human values that, through science and philosophical inquiry, will only further evolve.




[i] A moral good is any decision or consequence that is morally good; “the moral good” is the general term for such decisions and consequences.

[ii] Harris, 2010, p. 11.

[iii] Curtis, 1962, p. 117.

[iv] Ibid.

[v] As cited in Curtis, 1962, p. 120.

[vi] Mill, 2002, p. 239.

[vii] Ibid.

[viii] Ibid., 240-241.

[ix] Ibid. 241.

[x] Ibid., 262-263.

[xi] Ibid.

[xii] Ibid., 264.

[xiii] Ibid., 243.

[xiv] Peikoff, 2012, p. 59.

[xv] Harris, 2010, pp. 2-3.

[xvi] Ibid., 7.

[xvii] Ibid., 9.

[xviii] Ibid., 55.

[xix] Ibid., 31.

[xx] Ibid.

[xxi] Ibid., 97.

[xxii] Ibid., 132.

[xxiii] Ibid., 133.

[xxiv] Ibid., 190-191.



Curtis, M. (1962). The great political theories, volume two. New York: Harper Perennial.

Harris, S. (2010). The moral landscape. New York: Free Press.

Mill, J. S. (2002). The basic writings of John Stuart Mill. New York: The Modern Library.

Peikoff, L. (2012). The DIM hypothesis. New York: New American Library.


What is Atheism by Tylor Lovins

What is Atheism?

With the continued development of secularism, the term “atheist” is becoming more common. More and more people are talking about “atheism,” but what is it, exactly? A tension exists between the method this kind of question brings to bear in its search for an answer, and the reality—that there are people who are atheists—it attempts to explore.

The method assumes atheism is something like a religion. It is interesting the extent to which this method pervades even the secular community, yielding a conclusion uncontested by nearly anyone: atheism is lack of belief in God or gods. We are told atheism is about belief, just like a religion.

Atheism, empirically speaking, signifies the status that a certain belief holds in the lives of certain people. This, so far, is rather banal. Although many confusions follow this method, such as when theists ask atheists for reasons for their atheism, it is perhaps conceivable that atheism is an option, like a commodity, in the marketplace of ideas. This assumption has yet to be supported, yet it is seemingly believed by all. Atheists argue for atheism like Christians argue for Christianity. Is this a case of mistaken identity? Is atheism something like Christianity?

Let us explore this question, not from the assumption that atheism is a belief that atheists have, but that atheism exists because there are atheists. Let us not assume an equivocation of function between atheism and religion and simply pose the question: why are there atheists?

It is no doubt true that some atheists were once theists. Disenchanted of belief in God or gods by experiences of tragedy, power struggles in religious institutions, perceived disparities between scientific and religious claims, and the like, some forms of atheism are reactions to religious institutions and beliefs. This seems to be where the concept of atheism originated: as the status of a person who refused the beliefs of the larger society. Atheism, in this sense, is disbelief, a refusal to believe based on reason, intuition, or emotion. Many in the ranks of the atheists would identify with this kind of atheism. This is atheism as anti-theism. These atheists would give reasons for unbelief, and atheism, here, might be accounted something like a one-eyed religion, in that it develops a totalizing system of beliefs about God or gods, nature, and humankind, without the rituals and community that religion associates with such ideas.

Another kind of atheism has emerged in the modern world, one in which religion was never received as a candidate for belief in the first place. In this sense, atheists are not those who refuse religious beliefs and institutions, but those who never considered them meaningful options. It is not that these atheists have acquired disbelief; it is more accurate to say that the concept of God or gods holds no meaning for them. It bears no weight on their day-to-day lives. The world is thought about and lived in without God or gods. This kind of atheism resembles religion in no conceivable way. Atheism, here, is not a status of belief, because it does not occur to the secular atheist to refuse God or gods: what would it mean to refuse? There are no questions, here, of the existence of God or gods, for it is unclear what such “existence” would entail. A product of a world handed down by science and secularism, atheism in this sense indicates the meaninglessness of religious belief.

As briefly outlined above, there are generally two reasons why there are atheists: disenchantment and secularism. The common definition and understanding of atheism presupposes the first kind of atheist, the anti-theist, as the torchbearer for atheism. This is an oversight. A new kind of atheism has emerged as a result of secularism, one for which religious traditions do not make sense in the first place. The secular atheist lives to promote science, humanism, and secularism, among other things; that is to say, lives to promote and develop positive options for living in a world where religion doesn’t make sense. Anti-theists, on the other hand, while they may promote positive options, also focus on diminishing the status of religious beliefs: they actively promote the refusal of religion.

As a result of secular atheist influence, atheism may in the future be understood not for its nonreligious point of view but for its secular humanist viewpoint. Whether or not one population of atheists eventually gives way to the other, it appears that secular atheists are here to stay, and with them the nature of atheism itself has changed: it is no longer a mere refusal of what came before, but an openness to what is to come.




Rationalism as a Humanism: Grounding the Secular by Tylor Lovins

Rationalism as a Humanism: Grounding the Secular

What is the defining quality of the secular movement, if there is a center at all? Merriam-Webster defines secularism as “indifference to or rejection or exclusion of religion and religious considerations.” This aspect is self-evident to everyone in the movement. Many prominent secularists have at one point or another declared war on religion, typically by reducing all religious traditions to their fundamentalist, literalist manifestations. Motivated by the theory that religion was a primitive form of science, the mystifying beliefs of divine inspiration, holy-book-inerrancy, and divine-human relations have been shown for what they truly are: linguistic and ritualistic artifacts of a world now left behind by the progress of science.

The movement of secularism isn’t itself contained within this definition of secularism, however. The definition for humanism, which stands today as a largely non-negotiable feature for many in the secular movement, describes the contexture more precisely: “a doctrine, attitude, or way of life centered on human interests or values; especially: a philosophy that usually rejects supernaturalism and stresses an individual’s dignity and worth and capacity for self-realization through reason.” Reason and science, coupled with anti-supernaturalism and displacing religion, appear to be the primary drivers of secularism. This warrants some critical reflection. Although reason can be understood as an intellectual endeavor that utilizes principles of logic, it’s not self-evident whose reason, and which rationality, should undergird the secularist movement. The de facto rationality motivating the secularist movement at present is rationalism.

The rationalist tradition, for our purposes, can be understood as the tradition of thought that makes truth the outcome of an equation: it proceeds from premises to conclusions that are warranted by logic. This is, in Aristotle’s term, “dialectic.” More broadly, a compelling yet underdeveloped strain of rationalism, the one that creates the framework for secularism, subsumes empiricism: here the premises of thought do not rely entirely on abstract, a priori conditions but take into account scientific findings and experiential knowledge. Another strain has developed, unfortunately, which deduces that our motivated action is grounded by the rationalist equation. Let’s call this “naive rationalism.” The naive rationalist asserts that we are basically rational animals and that, with our handy reason, we are guided by rationalist equations. The yield of these equations is truth in the realm of thought and the good in the realm of action. Proposed as the successor to religious traditions that make claims based on authority, the rationalist tradition appears poised to further the cause of humanism and the advancement of knowledge by the force of reason, in a way that is historically unrivaled.

This ambiguity in the rationalist tradition should be interrogated. For centralizing the naive rationalist tradition in the secularism project devalues the fundamental, constitutive role valence frameworks play in any kind of rationality in the first place. Reasons, as modern philosophy and psychology have shown, do not originate from value-neutral systems, but rather are products of systems of value. The point can be made more explicitly: this rationalist tradition favors facts and reason as the highest goods, virtually diminishing the explicit roles of fitness, creativity, virtue, and meaning in the scheme of human motivations. Secularism could benefit from reintroducing these roles back into the pantheon of humanism.

Situating Rationalism

What I am suggesting is not entirely novel, but it remains sufficiently foreign to many projects sympathetic to secularism that it bears repeating and amplifying here. I am not, after all, calling for a devaluation of reason. Reason is a grand achievement of humankind, and it rightfully remains the symbol not only of progress but of a future world without mass manipulation of populations by appeal to fantastical claims. I simply want to bring reason back from the clouds of the Enlightenment to the real world, where values, emotions, and unconscious biological mechanisms propel us to action and thought.

In an episode of The Sopranos, Tony’s therapist explains that rage is the psyche’s way of creating a massive distraction: it enables one not to account for potentially punishing or threatening stimuli (whether in memory or experience), but rather to displace them, shutting one’s eyes to these stimuli as meaningful or real. The picture of rage here is like the child who hides her head under the blankets after seeing a scene from a horror film. The way we use arguments to reduce others’ positions to ludicrous strawmen is precisely a type of security blanket, in linguistic form. Let’s remove this blanket and confront the ambiguity in the function of rational beliefs that emerges when we ground them in the creaturely realm. Our beliefs themselves, whether true or not (in the sense that they adequately take into account our place in the world in the present), may be what obstructs us from ascertaining truth in the future. Truth, in this way, returns to the motivational level, and does not remain in the realm of articulate conscious thought. Our knowledge of the present may not be true enough to enable us to thrive or acquire truth in the future. Whether reason itself is (1) a method for finding truth or (2) a claim about the authority of an assertion is a tension for many in the secular movement. Just look at the anti-religious memes and rhetoric flourishing in online secular communities to see how widely reason has been misunderstood as a position or claim rather than a method.

Truth as motivational, as operating in the realm of meaning, is important when the secularism project encounters religious thought, and especially as it invokes science. Humanism’s anti-supernaturalist bent is understandable and significant. With Bacon’s critique of Aristotle’s final cause, the method of science was significantly brought into focus and under these conditions prospered without religious conceptions of the world. We don’t need to know the metaphysical constitution or nature of a thing to determine its efficient or material causes. That there may have been a being that created the material world does not weigh in on the question of why the sky is blue or how bacteria cause disease, or even, now, where humans came from. With Bacon, the weight of supernaturalism no longer grounded science, and it could finally fly freely toward the light of truth.

This is not where the story ends, however. Science appears positioned as Icarus. Important modern figures of secularism and champions of science like Sam Harris and Richard Dawkins have taken their cue from James Frazer’s The Golden Bough, claiming that religion is a primitive form of science and that, with the progress of science, it will be left behind. Although Frazer rightly locates the basis of myth and religion in psychology, his view was unfortunately colored by a naive rationalism. Frazer, like many others even today, does not account for the importance religion has for the inward life or for the psychological mechanisms that motivate religion in the first place. Seen as an institution that delivers a guide to right action and right thinking based on authority, religion becomes cosmology + ethics, undermined by its supernaturalism.

One reason the rationalism of science fails to give an adequate account of religion is that the tradition of rationality itself hasn’t taken into account the creature that uses rationality, but has instead reduced this creature to something like a more-or-less competent logic-guided robot. This oversight is a significant one. The public and communal nature of the scientific enterprise, meshed with the philosophical underpinnings of secularism’s rationalism and empiricism, makes for a formidable force not unlike Christendom’s mix of magisteria and religion in the life-world of medieval Europe. Still, the potential has yet to be unlocked. At this point in history, especially in the post-industrial, Christian-inspired nations of Europe and North America, secularism is like the potential energy of two tectonic plates: some seismic activity in the last two or three centuries, but overdue for a massive earthquake.

Motivation and Articulation

The religious wars that gave impetus to a non-religiously grounded framework for truth and political institutions birthed our modern secularism in more and less obvious ways. As deism rose to prominence during this time, true religious beliefs were no longer associated with the authority of church institutions, which had enforced the status of these truths by political force. Rather, truth became an inward reality, an “inner light.”[1] The public became private, the communal individualized. The stakes of this reformation, owing much to the ideas of the Reformers who ignited growing ideas of nationalism and equality already in place, couldn’t be much higher at the time. The political leaders who were endowed with authority by the Church weren’t just making sure, as in our day, the beliefs of one person didn’t intrude on the liberty of another, but were charged with the task of safeguarding the souls of their people.

As human history moved to favor the death of ideas over the death of people, the importance of symbols and narratives as the spaces where truth showed itself were lost within the development of rationalism. The separation of church and state has reversed the roles of what fundamentally grounds us. This is easily seen in populations of both religious and secular stripes, with people in both groups claiming that the minimal requirement a valid belief must meet to be legitimate (or, at least, not disallowable) is that it won’t infringe on the liberty of others. With rationalism sectioning individuals into types and tokens, our beliefs have become hyper-individualist, and what motivates us on the pre-conceptual level has been lost as a category for thinking, in the demand to typify everything for the calculus of our secular rationality.

For the kinetic energy of secularism to support life rather than diminish it, it will have to capture not only the minds of the masses but also their hearts, and not just in the equivocal, ambiguous way of assuming and sublating the good, or motivational truth, under the method of rationality. The disparity between the proselytizers of religion and the advocates of secularism might just be measured by the forms made available to religious people in symbols and rituals, forms that have not yet found a functionally equivalent home in secular movements. These forms enable the appearance of content framed as statements of belief, which illuminate, inspire, and unify the mind and heart. The reasons are somewhat obvious, for those with eyes to see. Image processing and pattern recognition, as forms of thinking that are innate and unconscious, are more primary to and pervasive in consciousness than articulate thought.[2] That the myths of religion are saturated with images and narratives is, as a result, no accident. Stories grab us on a pre-conceptual level and even appear to ground our conceptual frameworks in the first place. Daniel Kahneman’s Thinking, Fast and Slow depicts this secondary role of articulate thought in consciousness even more acutely: our “fast” system, what in common parlance we call “intuition,” the pattern recognition mechanism mentioned above, “makes” choices for us on most occasions. It is only when something unexpected or unknown is encountered that our secondary, “slow” system becomes operative: articulate thought.
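The division of labor Kahneman describes resembles a cache sitting in front of a slow computation: the familiar is handled instantly, and costly deliberation runs only on a miss. The sketch below is a loose computational analogy of my own, not a model from Kahneman’s book.

```python
# Loose analogy for Kahneman's two systems: fast lookup for the
# familiar, slow deliberation only on the unexpected. The class
# and its design are illustrative assumptions, not Kahneman's.

class TwoSystemMind:
    def __init__(self, deliberate):
        self.intuitions = {}          # "System 1": learned patterns
        self.deliberate = deliberate  # "System 2": costly reasoning
        self.slow_calls = 0           # how often System 2 engaged

    def respond(self, situation):
        if situation in self.intuitions:   # familiar: fast path
            return self.intuitions[situation]
        self.slow_calls += 1               # novel: engage System 2
        answer = self.deliberate(situation)
        self.intuitions[situation] = answer  # novelty becomes habit
        return answer
```

Once a situation has been handled, later encounters never invoke the slow system, which mirrors the cycle described below of mediation settling back into immediacy.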

If the strictly rationalist perspective of the human were true,[3] whereby the givenness of thought were provided completely in the mediation of sense data from the world, through the eyes, to the vessel of our minds, waiting to be formed by our concepts, then the world would, in a significant way, be value-neutral to our biological systems: there would be no primitive reaction of fight, flight, or freeze, but an immediate compulsion of reason, and articulate thought would be more pervasive than non-linguistic thought. This is, in fact, not what we find, and it does not account for everyday experience.

A now prevailing theory of perception supports this valence-laden notion of the world. Scientists formerly believed that when we look out at the world and perceive its “givenness,” the objects with the most salience attract our attention. The consensus is moving in a different direction: we are, rather, attracted to valence, the most meaningful aspects of our perceptual field. And, on a more general level of analysis, when we don’t know what’s going on, when we find ourselves in situations that are new or unexpected, our amygdala goes to work and to some degree produces the fight/flight/freeze response.[4] This is true not only for situations in the world, when we encounter strangers, animals, natural disasters, or darkness in a foreign place, but also for situations in the mind, when we encounter new ideas and beliefs.

To be fair, the disparity I am outlining, between truth as fact and truth as valence, isn’t irreconcilable. The difference rests merely on two images of humankind conceived in “natural” or “normal” states of affairs. The naive rationalism that grounds some strains of secularism would have us believe it is natural for humans to encounter the world in a value-neutral way, although the methods of science itself, and its empiricism, contradict this claim. Religion, on the other hand, insofar as it encourages literalist interpretations of its mythical symbols, would have us think the world is populated by gods and demons, and that it is natural for humans to encounter a world for or against them. These claims are literally false, but perhaps metaphorically true. The issues arising from naive rationalism on one hand and religious fundamentalism on the other are not inherent to the secular enterprise itself, but are simply artifacts of the pre-Darwinian philosophy of Descartes. It is my belief that becoming more Darwinian will galvanize secularism toward a more synthetic and all-encompassing view of ethics, politics, and especially religion.

Religion and Rationalism

If we take Kahneman’s research and conclusions seriously, rationality appears to be a mechanism motivated by the negation of itself. We can put it conceptually this way, using Hegel as our guide, contrasting the understanding with conceptual thinking: (1) the understanding is an immediate (meaning unmediated) interaction with the environment, underlying most of our thinking most of the time; (2) dialectic, or conceptual thinking, is a mediated form of the immediate, and its goal is to synthesize the mediated with immediate experience so as to adapt the understanding and return to the world, forgive the religious image, as a new creation, better fit to overcome whatever obstacles stand in one’s way. Rationality, as the conceptual aspect of thinking, arises when we encounter a problem or an unknown in our environment, when our unmediated understanding, our immediate experience of the world, becomes questionable. When the issue appears, we mediate the world so that we don’t have to die to learn, but can predict, contradict, examine, and evaluate new courses of action to map onto our environments. Our mental life returns to immediacy until a new problem or novelty is encountered again.

This cycle of immediacy and mediation seems to account for a significant difference between rationalism and religion. And I think rationalism could gain from learning about this difference.

A piece of a Darwinian understanding of religion will reside in this framework, I believe, one that does not limit religion to either a scheme of morality only or a cosmology only, or simply both together in varying intensities. Wittgenstein once wrote that “God” is a term like “object,” and with it, you get an entire conception of the world. The first commandment given to the Jews, that they should have no other god before God, can now be interpreted in the way the Father of Modern Theology, Friedrich Schleiermacher, once spoke of miracles: “Miracle is simply the religious name for event. Every event, even the most natural and usual, becomes a miracle, as soon as the religious view of it can be the dominant…. The more religious you are, the more miracle would you see everywhere.”[5] Religion makes a move that rationalism doesn’t necessitate but could, and should, incorporate.[6] The moment of mediation, for religion, is not a moment to figure something out about the objective world, whether that be the causal relations of objects or the laws of nature, and to the extent that these are figured out by religious people, it’s an accidental and not an essential feature of the religious disposition. The moment of mediation is undertaken to correct disposition: mediation is a form of meditation, a reception or correction of behavioral patterns. Immediacy becomes transformed into miracle the very moment God is sought in all things. Consider the words of Deanna A. Thompson, explicating the centrality of faith for the Christian life in light of Martin Luther’s theology:

“…having faith means that your whole life is redirected toward ‘trusting [God] with your whole heart’ and looking to God ‘for all good, grace, and favor,’ honoring God through the orientation of your inner life.”

Rationalism, on the other hand, utilizes mediation in a fundamentally different way, and this is what separates the objectivity of rationalism from the existentiality of religion. The point of mediation for rationality is to understand the causal connections and physical makeup of the world. Yet it doesn’t end there. Mediation becomes saturated with facts, more than the religious disposition ever strives to attain, and in such a way sets the mediated move of reason as the primary driver of thought, rather than a certain disposition toward the world as it relates to oneself immediately.

This is a significant difference. It doesn’t mean that religion only operates within the realm of value and rationalism in the realm of truth, but it does indicate a different kind of navigation of the world as it presents itself to human beings, as creatures who not only think and plan but also suffer and love. The platitudes, deriving from metaphors, narratives, and images, used to communicate religion by religious people themselves, inspire a depth of life for many that appears simply, at least at this point in history, inaccessible by other existing avenues. Taken seriously, with a more fully Darwinian conception of religion we may acquire a wisdom and appreciation for not just life itself but the lived experience of life that has been hidden in the cliches of the sages of the past. The fact that so many religious people use platitudes or canonical beliefs, grounded in metaphor and imagery, to communicate deep inward experiences tells us conclusively that these inward experiences need forms to carry them to the public eye, and these forms are patterned and universal. It seems otherwise a miracle, for instance, that the myths of the world have global structures and archetypes, which, when abstracted from any individual myth, fit within a universal framework common to all myths. To go further, an experience that I can’t mediate to myself doesn’t have meaning, and the way I mediate these to myself is the same way they’re mediated to the communities I find myself in: by language and images. There is some sense in which, as a result, the meaning and shape of experiences arise within communal constraints and traditions. And these constraints and traditions, undergirded by patterns of categories seemingly inherited, testify to something all too human.

Rationalism as a Humanism

Rudolf Otto introduced the notion of “awe” as central to the encounter with the divine, as the most salient characteristic of a religious experience. And we might say this “awe” is essential to the propensity to live by inward disposition and motivation rather than external manipulation and control. Joseph Campbell asks in Myths to Live By “what the proper source of awe might be”[7] for us who no longer live in a world of gods and demons. What are the sources and symbols of mystery and inspiration that evoke “the impulse to imitative identification?”[8] He traces these sources in history as beginning with animals and their mystical agency, then to the vegetable world where death changes into life, and then to the cosmos and the seven moving cosmic lights that affected the ordering of societies. He finds that in our time the individual stands as the source: “as a Thou, one’s neighbor; not as ‘I’ might wish him to be, or may imagine that I know and relate to him, but in himself, thus come, as a being of mystery and wonder.”[9] Every human is a new beginning, a singularity in the history of humankind, and to diminish this novelty is a kind of blasphemy.

Like Nietzsche, Campbell finds the first explication of the human as a source of awe in the Greek tragedies, already in the period of Homer. From the two classically recognized tragic emotions as indicated by Aristotle, pity and terror, we discover a conceptual framework in which to turn the traditionally religious movements into a humanist project. Campbell uses James Joyce’s exposition to spell these out: “Pity is the feeling that arrests the mind in the presence of whatsoever is grave and constant in human sufferings and unites it with the human sufferer. Terror is the feeling that arrests the mind in the presence of whatsoever is grave and constant in human sufferings and unites it with the secret cause.”[10]

In tragedy, we are compelled to relate to the individual by the shared grave and constant reality between us, and we are inspired by the secret source of this grave and constant which unites us. In our case, it is death which is the grave and constant specter that haunts us, and it is life which is the secret source of death, but also of things greater than these: family, creativity, and meaning. In this recognition, we may return to the Father of Modern Theology but without God: life is received as a gift, that which we share with all our brothers and sisters, which we did not ask for and could not acquire by our own actions, but which, by the happenstance of evolutionary history, we are gifted immeasurably.

For rationalism to motivate secularism properly, it must catch up with the times, and not deliver to us an image of humanity dreamed by the ghost in the machine of Descartes, or in the tabula rasa nothingness of Locke’s children. Being clear about the nature of the creatures who use rationality is one thing. We must also understand the motivations of these creatures. Reducing, disregarding, or criticizing religious beliefs by a way of thinking foreign to them, without first taking genuine steps toward understanding them on their own terms, doesn’t seem to be the most reasonable response to a phenomenon that has enamored most people for most of history. Rationalism, itself, is a tradition, a human tradition. It is imperative that secularism recapture the human element at the heart of rationalism. The best secularism, in my estimation, is the one that takes into account and integrates the best of all human thought, no matter where it may be found. What images of the human we use in this process will be crucial, for it is our metaphors that “mediate between our procedural wisdom and our explicit knowledge; they constitute the imagistic declarative point of transition between the act and the word.”[11]

The West celebrated the God incarnate for millennia. It’s time we celebrate the fact that life became human, and that now, with the gift of consciousness, we may understand, revere, defend, and serve it. We need not pray that God bless us, for life has. Nor should we pray for God to return, for life is here. No more prayers for miracles of God, for the secret source that connects us all, life, demands of us that we act. The only question is whether we will become worthy of this demand. “The old imagery now carried a new song–of the unique, the unprecedented and induplicable human sufferer; yet equally a sense of the ‘grave and constant’ in our human suffering, as well as a holy intimation of the ungainsayable ‘secret cause,’ without which the rite would have lacked its depth dimension and healing force.”[12]



[1] See Christopher Hill’s wonderful book where he tracks this in England from 1400-1580 in The World Turned Upside Down.

[2] This is Freud’s insight and it has turned out to be true in an interesting way: our “fast system” heuristics are such that we have systematically predictable errors that we make in our thinking.

[3] I find this especially in the Objectivist ethic, but this idea has advocates from Rene Descartes and Immanuel Kant as well as Ayn Rand.

[4] Jordan B. Peterson, Maps of Meaning.

[5] Friedrich Schleiermacher, On Religion: Speeches to Its Cultured Despisers.

[6] And already does to some extent. Listen to lectures and presentations by Carl Sagan or Neil deGrasse Tyson, and you’ll hear a very similar view.

[7] Joseph Campbell, Myths to Live By, 58.

[8] Ibid.

[9] Ibid.

[10] Ibid., 59.

[11] Peterson, 94. We should note, here, a prime example of our danger. The fact that the trolley problem has been posed as a moral problem, in the sense that it awakens our intuitions enough to perceive it as a moral problem in the first place, is disconcerting, as it assumes the moral choice can be perfectly moral while making life expendable.

[12] Campbell, 59.


The Promise of Secular Humanism: Towards a Better Way of Life

In my previous essay, I explored the implications of life without gods and the supernatural. Acknowledging that the abandonment of traditional religion requires a complementary philosophical system, I will present secular humanism as a rigorous and applicable framework for human flourishing. This brief overview will not be exhaustive; it will present an outline for this methodology and present concise arguments in its defense. In sum, a life based on the application of one’s reason, ethical individualism, and democratic participation can facilitate a life of joy, freedom, and achievement.

The Humanist Epistemology

A secular humanist’s epistemology (theory of knowledge) is built upon three essential components: reason, methodological naturalism, and skepticism. First, reason is the foundational pillar upon which the other components build. Reason is the capacity of human beings to create abstract thoughts and/or conclusions based on the concretes of reality. It is the emergent faculty of our brains that allows us to conceptualize and systematize the world. The humanist believes that reason, or our ability to perceive and then conceive, is purely natural and without the need for “faith” or “revealed wisdom.”

Philosopher Harry Binswanger has delivered a series of lectures emphasizing this point, basing his conclusions on the principles of an Objectivist epistemology. In Binswanger’s estimation, perception (taking in information via the senses) is the “given” in our understanding of the world, in that it requires mere physical processes. Abstraction and conceptualization, which turn our perceptions into knowledge, are processes that require discrimination and systemization of the “raw material” of perception. This is where reason comes in. Nearly anyone can perceive a quasi-spherical red object or a vibrational difference in the atmosphere with their senses; it requires reason for the concretizing and systemizing process of conceptualization to understand that it is an apple or a song.

Faith bypasses the entire process of knowledge by appealing to “revealed” truths that one accepts without the steps of perception, concretization, and abstraction. It treats knowledge as a top-down proposition, akin to Plato’s “forms” or Kant’s “pure reason.” This is a completely inverted understanding of epistemology. As Aristotle, Locke, and others have rightly noted, knowledge is a bottom-up process, requiring ever more complicated levels of thought to arrive at our conclusions. Therefore, it is essential within a humanist understanding to properly acknowledge the importance of perception and reason to epistemological questions.

Second, it is important to base our epistemology on a solid foundation, which in this case is methodological naturalism (MN). An astute summation of methodological naturalism comes to us from RationalWiki:

Methodological naturalism is the label for the required assumption of philosophical naturalism when working with the scientific method. Methodological naturalists limit their scientific research to the study of natural causes, because any attempts to define causal relationships with the supernatural are never fruitful, and result in the creation of scientific “dead ends” and God of the gaps-type hypotheses. To avoid these traps scientists assume that all causes are empirical and naturalistic; which means they can be measured, quantified and studied methodically.

MN does not rule out the possibility of the supernatural, but rather recognizes the complicated and often problematic investigations of the supernatural. This view is contrasted with philosophical naturalism (PN), which holds that the natural world is all there is and no supernatural exists. While some humanists hold the position of PN, it is more philosophically and intellectually honest to accept MN.

Having said all that, it is important to note that MN does not ignore supernatural claims altogether. When a faith healer says he can cure cancer or a psychic claims to know intimate details of your life, these are specific, testable claims that can be refuted by the scientific method. Even more broadly, when a religion makes specific claims about the natural world (God created the world in six days, God stopped the Sun in the sky, Jesus rose from the dead), these can also be debunked by scientific investigations. What MN cannot do is refute God or supernaturalism altogether, seeing as these concepts are too broad and amorphous to be falsified; falsifiability is a key component of the scientific method. Therefore, Humanism’s dedication to MN, and its lack of confidence in supernaturalism and gods, is based on the simple logic of Occam’s Razor. If a phenomenon can be explained by natural means, it is unnecessary to attribute it to supernatural means. Additionally, if a phenomenon once attributed to the supernatural is proven to be real, it is then added to what is natural.

Finally, a humanist epistemology benefits from a healthy dose of skepticism. For this perspective, we turn to the master of skepticism himself, the Scottish philosopher David Hume. In his A Treatise of Human Nature, Hume explains the fallibility of the human mind:

The essence and composition of external bodies are so obscure, that we must necessarily, in our reasonings, or rather conjectures concerning them, involve ourselves in contradictions and absurdities. But as the perceptions of the mind are perfectly known, and I have us’d all imaginable caution in forming conclusions concerning them, I have always hop’d to keep clear of those contradictions, which have attended every other system.

In other words, perceptions are not knowledge. They can distort and contradict what is actually going on in the real world. This is why the process of reason is indispensable to our lives. Reason allows us to peel back the layers of “contradictions and absurdities” and come to a more accurate conceptualization of reality. As I noted in my previous essay, humans are emotional and messy, often led astray by our biases and misperceptions. Skepticism guides our thinking away from our initial perceptions and requires us to investigate deeper to best approximate our understanding of the world.

The Personal Level: Ethical Individualism

Moving from epistemology to ethics, a predominant theological and philosophical worldview focuses on the collective nature of human beings. In more fundamentalist strains, it can be a complete negation of a person’s thoughts, desires, and talents. For example, the ideologies of Islamism (the politicization of certain sects of Islam), fundamentalist evangelical Christianity, and orthodox Marxism require that the individual be subservient to the cause, or the “ideal” of the faith. In a secular lens, this type of view can be summarized by the 19th-century philosopher Auguste Comte, who coined the term “altruism”: “The individual must subordinate himself to an Existence outside himself in order to find in it the source of his stability.”

This view wholly distorts our human nature. While some scholars quibble over the nature of group-level selection (see Haidt), the foundational level of selection concerns the individual. Human beings, much like our primate ancestors and scores of other beings before us, evolved based on mostly individual changes which then added up over time. As Robert Sapolsky noted in his recent masterwork, Behave: The Biology of Humans at Our Best and Worst:

Animals don’t behave for the good of the species. They behave to maximize the number of copies of their genes passed into the next generation. . . . Individual selection fares better than group selection in explaining basic behaviors.

This has profound ethical implications. While it would be unwise for us to directly extrapolate a system of ethics from biology, it is helpful to understand these conclusions and their relation to us as social creatures. Humans are inherently social; we desire communication and connection. However, that does not mean we should seek to achieve these connections through collectivistic means.

Building on that, my personal view of humanism is built on the guiding principle of individual rights. As John D. Rockefeller, Jr. once said, “I believe in the supreme worth of the individual and in his right to life, liberty and the pursuit of happiness.” This notion is bigger than biology. It is also built on the Enlightenment principle of “self-proprietorship,” beautifully outlined by the English Leveller Richard Overton (as quoted by intellectual historian and philosopher George H. Smith):

To every individual in nature is given an individual property by nature not to be invaded or usurped by any. For every one, as he is himself, so he has a self-propriety, else could he not be himself; and of this no second may presume to deprive any of without manifest violation and affront to the very principles of nature and of the rules of equity and justice between man and man.

In essence, your life belongs to you, to do with it as you see fit, so long as you do not violate the rights of another. This is a bedrock ideal within the Enlightenment political tradition and one that continues to expand the rights of all people.

In Overton’s time, individual rights were attributed to a sovereign God of nature (similar to Jefferson and the founders’ notion of “Nature’s God”). While this tradition has historically been built upon that premise, it is equally valid to base these rights upon the virtue of being a thinking, sentient being with the capacity for reason. Philosopher Corliss Lamont described this concept’s classical roots and its modern application:

It is the Humanist view that if the individual pursues activities that are healthy, socially useful, and in accordance with reason, pleasure will generally accompany them; and happiness, the supreme good, will be the eventual result. This ethical doctrine goes all the way back to Aristotle and is called eudaemonism (Greek for happiness). It contrasts with hedonism, which holds that pleasure alone is intrinsically good, by putting primary emphasis on the sorts of activities that a person chooses; at the same time it assigns an important and pervasive role to pleasure. “Pleasure,” as Aristotle said, “perfects the activities,” yet remains secondary. The Humanist ethics, then, “recognizes that the intentional objects of human striving are, in point of fact, not pleasures, but pleasurable things. And by identifying the good with voluntary activities and preferred objects, which are publicly observable, it facilitates discovery, measurement and production of the good.”

Therefore, that which is in accordance with the overall flourishing of the individual, within the context of their own life and their relation to others, undergirds a humanist conception of rights. Supernaturalism and/or god(s) no longer remain necessary.

As mentioned above, a person’s relation to others must also be taken into account. Individualism does not imply a short-sighted selfishness. Rather, it represents a committed recognition of the dignity of each person as well as the need for social cohesion for the flourishing of our species. Lamont, again, elucidates this point perfectly:

Humanism, then, follows the golden mean by recognizing that both self-interest and altruism have their proper place and can be combined in a harmonious pattern. People who try to serve humanity must permit humanity to serve them in turn. Their own welfare is as much a part of the welfare of humankind as that of anyone else.

Our individualism must be grounded on an ethical promise to advance our own interests while seeking to advance the interests of society as a whole. Even though the Devil will be in the details (pun intended), it is the ethical project of humanism that protects individual rights while advancing all of humanity forward.

The Societal Level: The Moral Instinct and the Moral Framework

In the last section, I mentioned the devilish details of the individual’s ethical relation to others, generally known as morality. In my view, our morality breaks down into two major components: the moral instinct and the moral framework. Our moral instincts are the product of natural selection; we are driven by “passing on lots of copies of one’s genes” through “maximizing reproduction.” Base emotions like fear, hunger, dominance, and justice, among others, evolved over millennia so our genes could be passed on from generation to generation. This has not only made us successful biologically; it has made us successful morally. As such, actions which originally evolved to help direct kin began to help non-kin, especially once we developed our social systems.

Here’s a story to illustrate this point. In his book, Life Driven Purpose, Dan Barker recalls a story about saving a baby from being harmed at an airport. He was waiting to board the plane when he noticed that a woman had placed her infant “on top of a luggage cart, about three or four feet off the ground, and the father must have stepped away for a moment.” Out of the corner of his eye, Barker saw the carrier starting to fall to the ground, “made a quick stride to the left,” and his “finger tips caught the edge of the carrier as it was rolling towards the floor.” The mother quickly assisted him in leveling the carrier and thanked him for his action. Now, why would he do something so moral without much intellectual consideration? Barker explains:

We are animals, after all. We come prepackaged with an array of instincts inherited from our ancestors who were able to survive long enough to allow their genes–or closely related genes–to be passed to the next generation because they had those tendencies. An individual who does not care about falling babies is less likely to have his or her genes copied into the future.

The moral instinct compels us to carry out many actions without any logical considerations; we just act in accordance with our human nature. Acknowledging this aspect of who we are goes a long way to improving our ethical systems in the future.

Complementing the moral instinct is the moral framework, what we commonly call “ethics,” or a system of conceived principles that advance flourishing and limit suffering, not just in humans but in the ever-growing moral universe. One way to conceptualize the moral framework is philosopher Peter Singer’s “expanding circle.” Based on an earlier concept from historian W. E. H. Lecky, Singer’s expanding circle hinges on moral agents rationally defending their actions without prizing their own status over anyone else’s. In other words, it’s a more elaborate variation on the golden rule, but with a twist: make moral decisions among others as you would have others make moral decisions among your kin. The circle expands, as the metaphor goes, as we socially evolve to include more than just other individual humans. In time, it will include in-group members, out-group members, communities, states, countries, the entire human race, other mammals, all sentient beings, and eventually the entire spectrum of life. Using the moral framework will challenge our culturally-ingrained notions of moral behavior, as its “principles are not laws written up in heaven. Nor are they absolute truths about the universe, known by intuition. The principles of ethics come from our own nature as social, reasoning beings.”

Using the benchmark of advancing flourishing and limiting suffering, there are ways in which behaviors can actually be assessed as moral and immoral. As neuroscientist Sam Harris argues in The Moral Landscape, “there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within reach of the maturing sciences of mind.” While Harris is right about the importance of science in answering moral questions, we must also use ethics when discussing moral values. Both work hand in hand, with science being the investigatory component and ethics being the evaluative component. And for good reason: unbridled science (eugenics, atomic weapons) and unbridled utopianism (totalitarian philosophies such as Fascism and Marxism) can lead to immoral actions; it is only through what biologist E. O. Wilson called “consilience,” or a unification of knowledge, that we can make the best moral decisions. In all, the moral instinct and the moral framework serve as two sides of the same ethical coin. The instinctual and conceptual both have a say in how we advance our lives and the lives of others.

The Political Level: Rights as Paramount, Science and Ethics Guide Policy

Finally, the political sphere, which combines individual and social concerns, becomes the normative framework for ensuring the flourishing of each component listed above. Democracy, the most successful and beneficial form of government, is predicated on the protection and/or fulfillment of rights through the “freely given consent of the governed.” These rights can be broken down into two categories: negative and positive. Negative rights are rights that the government cannot take away from you (freedom of speech, freedom of religion, freedom of association, etc.) while positive rights are those that are granted by the government, such as a right to food, clothing, shelter, medical care, and a living wage or pension system. The best encapsulation of both types of rights comes from President Franklin Roosevelt, in his “Four Freedoms Speech,” delivered in front of Congress in 1941. The “four freedoms” are freedom of speech, freedom of worship, freedom from want, and freedom from fear. The first two are negative rights while the latter two are positive rights. Our modern democratic tradition hinges on these ideals, which fit nicely into a humanist framework.

Humanist scholars such as John Dewey, Sidney Hook, and Paul Kurtz all stress the importance of a healthy democratic society based on the bedrock of political rights. Dewey, in his essay, “On Democracy,” wrote of the necessity of negative rights:

While the idea is not always, not often enough, expressed in words, the basic freedom is that of freedom of mind and of whatever degree of freedom of action and experience is necessary to produce freedom of intelligence. The modes of freedom guaranteed in the Bill of Rights are all of this nature: Freedom of belief and conscience, of expression of opinion, of assembly for discussion and conference, of the press as an organ of communication. They are guaranteed because without them individuals are not free to develop and society is deprived of what they might contribute.

Negative rights ensure that individuals are free to follow the dictates of their own conscience and intelligence to fulfill the needs of themselves and others. To implement these values, a democracy requires a strong separation of church and state and a free press, so that all citizens can implement the values they hold dear without violating the negative liberties of others.

On the other hand, Hook writes of the “positive requirements of a democracy” in his essay, “Democracy as a Way of Life.” Among the various requirements, the most important to this discussion is Hook’s notion of “economic democracy.” He explains:

By economic democracy is meant the power of the community, organized as producers and consumers, to determine the basic question of the objectives of economic development. Such economic democracy presupposes some form of social planning, but whether the economy is to be organized in a single unit or several and whether it is to be highly centralized or not are experimental questions. There are two generic criteria to decide such questions. One is the extent to which a specific form of economic organization makes possible an abundance of goods and services for the greatest number, without which formal political democracy is necessarily limited in its functions, if not actually endangered. The other is the extent to which a specific form of economic organization preserves and strengthens the conditions of the democratic process already mentioned.

Like Dewey, he’s leaving options open to the citizens of democratic societies, such as whether to be more capitalist and less socialist or vice versa. In doing so, Hook defends the principle of positive rights in the same fashion that Roosevelt did: to advance human flourishing.

Lastly, we come to Paul Kurtz and his thoughts on democracy from his book, In Defense of Secular Humanism. Kurtz reaffirms the considerations made by Dewey and Hook but also emphasizes the value of discourse and participation to a functioning democracy. “. . . a political democracy,” Kurtz writes, “can be effective only if its citizens are interested in the affairs of government and participate in it by way of constant discussion, letter writing, free association, and publication. In absence of such interest, democracy will become inoperative; an informed electorate is the best guarantee of its survival.” Each of these views on democracy requires citizens to use reason, from protecting their liberties and organizing their economies to discussions among others and petitioning the government for a “redress of grievances.” None of these things happen by virtue of a god or how many prayers a person can say. Rather, democracy is a human-centered, action-oriented enterprise that protects rights, builds economies, facilitates discussions, and encourages achievements.

With that in mind, a functioning democratic society relies on both science and ethics to inform our public policy. With such contentious issues as abortion, the death penalty, law enforcement overreach, sex education, vaccines, and stem cell research, it is essential that we apply our best thinking to these social problems. With only science as a guide, a government falls prey to overbureaucratization and malfeasance, and at worst, enacts policies which violate individual rights (eugenics, forced sterilization, genocide). This is why an ethical component, based on the application of reason as well as the guidepost of human flourishing, should always play a core role in shaping policy. It will not always provide us with easy answers, but it is far better than leaving our democracy to the whims of crackpots, religious fanatics, and overzealous central planners.

Conclusion: Humanity’s Future

Like so many ages before us, our age falls prey to barbarism, mysticism, hero worship, tribalism, superstition, and flat-out nonsense. To avoid these trends, we need a philosophy of life that prizes reason over faith, knowledge over ignorance, freedom over tyranny, and most importantly, humans over dogmas. Secular humanism is exactly that kind of philosophy. It is a way of life that puts human beings at the center of their own destiny, no longer chained to the whims of fundamentalist religion or totalitarianism. Its openness to new ideas and diversity of thought allows for a more enlightened religion, one that is compatible with humanism’s core principles. For those who have left gods behind, it provides the framework to live a moral and fulfilling life. The beauty of humanism is that it isn’t much of an “ism” at all; its essential values allow a multiplicity of worldviews to coexist, in something akin to Robert Nozick’s notion of a “utopia of utopias.” By leaving society free, open, and dedicated to human flourishing, all people can live among one another with more peace, prosperity, and progress.

Isaac Asimov said it best when he declared that, “Humanists recognize that it is only when people feel free to think for themselves, using reason as their guide, that they are best capable of developing values that succeed in satisfying human needs and serving human interests.” This is the apotheosis of humanism. Despite our flaws and failures, humanity has achieved so much in its time. We have conquered the heavens and the earth, built civilizations, eradicated diseases, ameliorated poverty and suffering, expanded freedom and opportunity, and created art and literature that will last for ages. All of this occurred because we valued our lives and dedicated ourselves to improving them. Every minute we waste speculating about the afterlife limits the value of our lives right now. We are young in the vast chasm of the universe, grasping for glimpses of truth and wisdom. We have so much to learn, which requires us to leave behind the shadows of our past and walk into the light of the future with an open mind, an open hand, and an open heart. Humanism gives us the path; we just have to take the first step.


After the Exit by Justin Clark

After the Exit: Reflections on Losing Religion

What do we lose when we leave religion? I was asked to respond to this question by a friend and, to be honest, it’s not easily answered. For us atheists, it’s obvious to mention all the terrible things we abandoned when we left religion: a fundamentalist dedication to barbaric texts and practices; the racism, homophobia, and misogyny of its most literalist believers; and superstitions hindering scientific and moral progress. All of these are good reasons to leave religion on the “ash heap of history.” Nevertheless, many still yearn for something “transcendent,” something to turn to when times are tough. There is also a longing for community that keeps droves within the fold. Both of these latter components are much harder to lose.

One of the biggest insights I’ve gained over the last few months, especially after reading the work of Jonathan Haidt and Emile Durkheim, is that religion is more than the sum of its beliefs. Sure, abandoning the supernatural and all of its problematic baggage is an important first step toward a better world, but beliefs are not the only thing we lose. As mentioned earlier, countless people stay within religion for its community, the songs, or the emotional connection they have with their church. Religion is a system of life, not a mere reflection of it. In the case of Christianity, it is a religion with over 2,000 years of traditions, beliefs, and cultural contextualizations. When someone spends their entire life committed to a system so totalizing, it is often jarring when they leave. I spoke to and read of former believers who felt an intense sadness when they lost their faith. It was as if a part of them died when they left it behind. This isn’t without reason.

Jonathan Haidt, in his excellent book, The Righteous Mind, devotes an entire chapter to the social character and benevolence of religion. Using his background in evolutionary psychology, Haidt illustrates that religion is not a “parasite” or “virus,” as many contemporary secular scholars believe, but a product of group selection that benefitted early humans. “If the gods evolve (culturally) to condemn selfish and divisive behaviors, they can then be used to promote cooperation and trust within the group,” Haidt notes. Human group dynamics see this play out routinely, especially in the United States. In America, the religious tend to be more social, more cooperative, and more charitable than their secular counterparts. Citing the work of Robert Putnam and David Campbell, Haidt also hits on something profoundly relevant to the socializing character of religion: specific beliefs matter far less than the charitable, community-oriented practices. Haidt concluded:

The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people.

Haidt’s insights are even more compelling for me since they come from a fellow atheist. He doesn’t dismiss some of the problematic beliefs and practices of religion, but he gives credit where credit is due. This completely reshaped how I viewed religion. Until I read Haidt, I obsessed over specific beliefs and traditions I saw as irrational and harmful, and I assumed the world would improve if religion went away altogether. Now, I think abandoning the social utility of religion, without a secular alternative, seems like an impossible task.

A reading of Durkheim also reinforces Haidt’s findings. Emile Durkheim, a French sociologist of the nineteenth and early twentieth centuries, astutely explained the communal aspect of religion. As such, he focused less on a religion’s specific beliefs and more on its social constitution. “A religion,” wrote Durkheim, “is a unified system of beliefs and practices relative to sacred things, that is to say, things set apart and forbidden – beliefs and practices which unite a single moral community, called a ‘church,’ and all those who adhere to them.” This framework turns religious beliefs away from being ends-in-themselves and into means of communal binding. In this respect, the beliefs themselves are less ontological and more normative. Durkheim emphasizes this point in another passage: “Thus, among the cosmic forces, only those are accorded divinity which have a collective interest. In other words, it is inter-social factors which have given birth to the religious sentiment.” Losing organized religion unravels social orders and obligations, and a secular alternative must therefore satisfy both the ontological and normative aspects of human social flourishing.

Alongside the social benefits of religion, individuals also seek experiences that tie them to something bigger than themselves, which is a key component of group selection in evolution. While individual selection is the primary driver of natural selection, group selection plays an important, complementary role. Haidt further elucidates this point by stressing the importance of religion as a binding moral agent that facilitated group-level selection. “Gods and religions,” writes Haidt, “are group-level adaptations for producing cohesiveness and trust. Like maypoles and beehives, they are created by the members of the group, and they then organize the activity of the group.” Again, this takes religion off the ontological pedestal many atheists place it on and into the pragmatic, normative plane of human existence.

But this is the group; what about individual religious experiences? From Paul’s road to Damascus and Muhammad’s revelations from the angel Gibreel to Aldous Huxley’s mescaline-fueled “perennial philosophy,” personal religious experiences abound in human history. Yet, one of their drawbacks, at least in a discussion of losing religion, is that these experiences are necessarily first-person and not easily investigated by the scientific method. However, the growing field of neuroscience is helping us understand the nature of religious experiences from a naturalistic perspective. Dr. Michael Persinger’s research with his well-known “God Helmet” has provided initial findings on the connection between brain function and religious experiences. By stimulating the temporal lobe via electric pulsations, nearly 80% of his subjects reported having what they described as religious experiences. Furthermore, Dr. Andrew Newberg’s research suggests some of our religious or transcendent experiences derive from multi-layered neural processes. No “God Helmet” needed.

While neuroscience shows a causal link between brain states and personal religious experiences, losing religion wouldn’t necessarily end these experiences. As Newberg rightly points out:

. . . the brain has two primary functions that can be considered from either a biological or evolutionary perspective. These two functions are self-maintenance and self-transcendence. The brain performs both of these functions throughout our lives. It turns out that religion also performs these two same functions. So, from the brain’s perspective, religion is a wonderful tool because religion helps the brain perform its primary functions. Unless the human brain undergoes some fundamental change in its function, religion and God will be here for a very long time.

Since our lives are intimately connected to how our brains function, experiences deemed “transcendent” or “religious” occur whether or not the beliefs of a religion are demonstrably true. William James said it best when he stated, “religion doesn’t work because it’s true; it’s true because it works.” Thus, losing organized religion will likely never negate the individual experience of the “transcendent” or the group dynamics resulting from natural selection.

So, what do we lose when we lose religion? In short, we lose some of the supernatural and mystical beliefs that crumble under the light of reason, but we will not lose the experiential or communal desires inherent in the human condition. These two components cannot be replaced by science and reason alone; we desire more than what we can test and independently verify. While we appeal to reason and evidence, we are also complicated, messy, and frequently irrational; this is what makes us human. The goal of an examined life is to try to mitigate the irrational and harmful while encouraging the reasonable and beneficial. In this regard, the experiential and communal aspects of religion will never be lost; they will simply take on a new form, as they have in the past. In the developed world, organized religion is taking on new forms or finding itself irrelevant. The fastest-growing religious demographic in the US is “none,” which isn’t necessarily atheist but isn’t explicitly religious either. The loss of our traditionally religious life doesn’t spell the end of the numinous altogether. Rather, it represents the gain of an intellectually vibrant and diverse culture that isn’t afraid to be different.



Featured image “Exit” by Stuart Cunningham, used under Creative Commons.