As we continue our series of articles and podcasts on the subject of free will, one particular viewpoint keeps tapping the back of my mind, like a reliable friend who is there to remind me of my lapses. What if we’re approaching the free will debate incorrectly altogether? What if the problem of free will can’t be solved, or at least not yet, because we don’t have the requisite knowledge to answer it definitively?

These questions were brilliantly elucidated by the grandfather of the skeptic movement himself, author Martin Gardner. Mathematician, master debunker of the paranormal, and self-proclaimed “philosophical scrivener,” Gardner outlined his views on the free will problem in an essay entitled “The Mystery of Free Will.” He argues that “the free will problem cannot be solved because we do not know exactly how to put the question.”[i] The complexities involved in properly investigating free will (we would need a fuller picture of human consciousness, physics, and social systems) currently preclude us from answering the question with any confidence. As he puts it, “Our attempt to capture the essence of that freedom either slides off into determinism, another name for destiny, or it tumbles over to the side of pure caprice. Neither definition gives us what we desperately want free will to mean.”[ii]

So, what does Gardner mean by free will? He describes the problem as “another name for self-awareness or consciousness. I cannot conceive of having one without the other.”[iii] In other words, Gardner believes that free will is predicated on the presumption that human beings have some level of self-awareness or consciousness. Yet while this describes what Gardner thinks we have, he also thinks we are currently incapable of “distinguishing free will from determinism and haphazardry.”[iv] Determinism’s reductionism places free will in the ash heap of philosophical history, relegating the problem to nothing more than an illusion that we must accept. Conversely, indeterminism “becomes equally delusory, a choice made by some obscure randomizer in the brain which functions like the flip of a coin.” Neither option leaves the ponderer fully satisfied that the problem has been solved; for Gardner, it is best to leave free will as an open-ended mystery — “a mystery bound up, how we do not know, with the transcendent mystery of time.”[v]

With this answer, Gardner belongs to a small but influential cadre of philosophers known as the “Mysterians,” thinkers who deliberately leave their views of free will, mind, and consciousness unsettled. Gardner shared this stance with physicist Roger Penrose; both believed that “there are deep mysteries about the brain that neurobiologists are nowhere close to solving.”[vi] Other “Mysterians” on the problem of free will are philosophers Thomas Nagel, Colin McGinn, and Jerry Fodor, as well as linguist and social theorist Noam Chomsky. They follow the simple but effective adage that Ludwig Wittgenstein penned in his Tractatus Logico-Philosophicus: “Whereof one cannot speak, thereof one must be silent.”[vii]

Wittgenstein appears not to be the only German-language philosopher Gardner consulted in coming to his conclusion on free will. For that, we turn to the Prussian Enlightenment genius Immanuel Kant. Like Kant, Gardner believed that “the best we can do (we who are not gods) is, Kant wrote, comprehend its [free will’s] incomprehensibility.”[viii] According to Kant, the empirical, rational investigation of reality rested on a logical assumption of causal determinism, but the intangible (or noumenal) aspects of human freedom (what he attributed to a soul) belonged to a “transcendent, timeless realm” where humans are “truly free.” These two contradictory forces, “empirical determinism” and “noumenal freedom,” seem impossible to reconcile.[ix]

Kant specifically addressed this issue in his work, Religion within the Limits of Reason Alone:

Here we understand perfectly well what freedom is, practically (when it is a question of duty), whereas we cannot without contradiction even think of wishing to understand theoretically the causality of freedom (or its nature).[x]

Gardner admits (as a proper skeptic) that he doesn’t necessarily buy into some of Kant’s metaphysical claims, but the general point is the same. We feel we have free will, yet that feeling is at odds with what we know about the mechanics of the universe. This is an apparent contradiction that mere sophistry cannot solve, leaving Gardner most comfortable with admitting he doesn’t have a solution.

As someone who identifies as a compatibilist and has spoken of its merits, I am equally enthralled with the Mysterian position. Gardner and others are not afraid to say, “I don’t know,” which is both intellectually honest and philosophically astute. Perhaps there are mysteries about consciousness, mind, and time that we have yet to fully comprehend, and until we have the requisite knowledge of these concepts, we are ill-equipped to solve the problem of free will. Humility is the beginning of the path to wisdom, and in that regard, Gardner had it in spades.

[i] Martin Gardner, The Night Is Large: Collected Essays, 1938-1995 (New York: Macmillan/St. Martin’s Press, 1995), 427.

[ii] Ibid., 428.

[iii] Ibid., 427.

[iv] Ibid.

[v] Ibid., 428.

[vi] Ibid., xix. In a future essay, I will explore how neuroscientist Michael Gazzaniga aptly attempts to assuage Gardner and Penrose’s fears by demonstrating a pragmatic approach to free will that is grounded in neuroscience.

[vii] Ludwig Wittgenstein, Tractatus Logico-Philosophicus (New York: Harcourt, Brace & Company, Inc., 1922), 189, accessed February 5, 2018, Google Books.

[viii] Gardner, The Night Is Large, 428.

[ix] Ibid., 440.

[x] Kant, as quoted in Gardner, The Night Is Large, 440.

The Architecture of Language: On the Free Will Debate

Recently, Reason Revolution host Justin Clark sat down with author J. R. Becker to discuss the disagreements between Daniel Dennett and Sam Harris concerning free will. As a complement to their conversation, I want to discuss free will from the standpoint of the meaning of concepts, to ascertain what the difference between Dennett and Harris amounts to and to shed some light on why this debate is happening in the first place.

Clark begins the conversation by outlining three ways of thinking about free will:

  1. Determinism, which understands free will to be an illusion because every action and event has prior causes, and these causes had prior causes, all the way back to the big bang;
  2. Libertarian Free Will, a position that, Clark states, is promulgated by Christian theology and existentialist philosophy, and which understands free will to be total: we are condemned to be free;
  3. Compatibilism, a sort-of middle ground, recognizes the truth of determinism that all actions and events have prior causes but does not understand this to be a defeater for free will.

This basic framework is accurate enough. It is interesting to me that Clark outright rejects libertarian free will, since the reasons one would accept it are very similar to those one would use to become a compatibilist. Although I am not sure how accurate it is to say existentialists are talking about free will when they talk about choice (rather than the political term “freedom”), the appeal of the libertarian position is that it begins with our everyday use of the word “choice” to radically ground the meaning of life in how we choose to be response-able for it, how we choose and live our values. This isn’t unlike the compatibilist position. To be a compatibilist is essentially to use everyday language, rather than the scientific conception of reality, to think about the meaning of concepts. The difference between the two positions is essentially that the compatibilist is more accommodating (or perhaps more knowledgeable of different frames of reference) and therefore leaves the language of science to speak in its contexts and the language of existentialism to speak within its contexts. Let’s think more about this movement between everyday use and scientific use. Is it always more reasonable to begin from the scientific conception of reality?

Beginning at the End

I want to tell you a story about a famous philosopher from the 20th century, the two schools of thought he created, and how, although the latter school supersedes the former, the former is still alive and well. This story is not your usual story, because the point of the story is not so much what it refers to outside of itself but rather is the story itself; the very language it uses is as much the “author” of the text as I am.

We are linguistic beings, which means we both communicate through language and, at the same time, create the language by which we communicate. This isn’t such a strange fact today. For instance, iPhones did not fall from the heavens. We understand that any new iPhone will be manufactured by humans and that it will, most likely, alter the ways we communicate and relate to each other. What is less readily conceivable is applying this recognition to our most basic and natural communication tool: language. The language we use about things, situations, emotions, and the like gives meaning to and partially determines our behavior toward them. A very clear example of this is how the word “thinking” has been displaced by the word “processing,” and how the rise of science has changed the metaphors we use to reflect and think about ourselves. Today, our brains are computers, and the hardware of neurobiology creates the software of consciousness. Is anything today so illuminating as this metaphor, and so radically different from the historical view of the spiritual soul, disconnected from all things physical, trapped within the prison of fallen, finite things? Not only has the metaphor informed the kinds of questions we now ask about what it is to be human, but it has also altered our situation as humans, from the technology we create to capture, manipulate, and transcend our human capabilities to how we relate to each other. Accordingly, forms of language are, in some ways, forms of reality. If you question nothing else in this article, please question this statement: live with it, by it, for it, against it, without it, because of it. Just don’t forget it. Language causes and solves our problems. It is to language we must turn to understand the origins of our problems and the way to their solutions.

As we enter this story, let us not forget that the concepts we use and our forms of language belong to contexts, and these contexts are composed of specific problems, objects, and logics. Within these contexts, we either use language to extend our concepts to include more experiences, situations, and phenomena (as when religious people call a tragedy part of “the will of God”), or we use concepts to disrupt the very logic of the language we use and the contexts in which our language makes sense (as when we use irony or hyperbole, or when Sam Harris says, “Free will is an illusion”). The great advantage we have over animals, as a result of our ability to use language, is that we can project possible futures, using concepts as extensions of realities. We can confer motives to things and predict their actions. We can ascribe cause and effect to the world and therefore project possible situations in which we must act. Grounding all this, however, is the fact that our primary tools for acting are not simply instinctual; they are social. This is of course not to say that language acquisition is not instinctual,[i] but rather that our instincts have given us tools that far exceed the limitations of mere instinct, just as our thumbs give us abilities that far exceed their mere movement.

Finally, I want to offer one more tool as you proceed through this story. Kenneth Burke points out in Permanence and Change: An Anatomy of Purpose that the ways in which we are trained to think and act in specific situations may make us blind to what is relevant and important in situations where our training does not apply. He calls this unfortunate fact “trained incapacity,” which he defines as “that state of affairs whereby one’s very abilities can function as blindness.”[ii] Many secular humanists, unfortunately, fall into trained incapacity when they critique religion, especially when critiquing the notion of salvation as “escape.” Burke describes the problem with this criticism of religion well: “Whereas it [the motive to “escape” reality] applies to all men, there was an attempt to restrict its application to some men….While apparently defining a trait of the person referred to, the term hardly did more than convey the attitude of the person making the reference.” Burke wants to frame the problem of incapacity as a problem of “faulty means-selection,” a “comparison between outstanding and outstanding”: a comparison of relevant details between different situations (what stands out in one situation and what stands out in another). When we reason about the world, we reason by means of language. As a result, how we select what is relevant in certain situations, and how these relevant things connect with other relevant things in other situations, is a question of our means of selection: in other words, of what concepts we use to talk about the things we are trying to talk about.

My claim, at the outset, is that Harris has a trained incapacity, and that it is a consequence of his scientific training. Harris thinks that what is relevant in conversations about free will is the cause-and-effect continuum, and he therefore calls all talk about free will senseless (this is what it means to say “free will is an illusion”). Dennett, by contrast, examines how “free will” makes certain concepts like “responsibility,” “control,” “choice,” and “agency” relevant. Now, it is important to affirm that Harris’s scientific perspective is a legitimate enterprise and to acknowledge that when we think scientifically, we must extend the logic of science as a means of selection for understanding the world. However, we must not fall into the trap of thinking that science is the only way to make sense of every situation and concept. Does knowing what chemicals are released in the brain in situations of “love” fully answer the beloved’s question, “Why do you love me?” Telling my wife we are in love solely because of our biology would be offensive to the language of love. Likewise, we must consider the extent to which Harris’s analysis is offensive to the social and linguistic understanding of free will.

Science and the Meaning of Concepts

From the early 1920s to the late 1930s, a group of philosophers, mathematicians, and scientists formed an influential club now known as the Vienna Circle, whose members and interlocutors included Rudolf Carnap, Kurt Gödel, W. V. O. Quine, A. J. Ayer, Frank Ramsey, and Karl Popper. Its presiding influence, though never a formal member, was Ludwig Wittgenstein, an esoteric and eccentric philosopher obsessed with language. The purpose of the group was to make philosophy into a science, to bring to the language of philosophy a precision worthy of that name. Using logic, mathematics, and empiricism, the Circle mounted a devastating critique of philosophical metaphysics. Perhaps the greatest and most obscure representative document of this critique is Wittgenstein’s Tractatus Logico-Philosophicus. Here, he laid the groundwork for the principle of verification: a concept is true, or has cognitive meaning, to the extent that it represents an object or state of affairs in reality; in other words, a concept is meaningful if it can be verified. This principle was a watershed for the logical positivist movement, or “positivism.”

Many things followed from this principle. For instance, it can be definitively claimed that religious language is contentless, senseless. The term “God” represents nothing in reality, and certainly is not derived from a state of affairs, and therefore it is meaningless. The principle seems to give the truth claims of science a more robust framework. Concepts like “free will,” “soul,” and “ego” can be thrown out without a thought, shown to be nonsense and without content. If a concept cannot be verified, it cannot have meaning.

Yet the principle is not without issues. One obvious problem is that it does not verify itself. It is a mere tautology.[iii] How do we know that a concept has meaning only to the extent that it represents something in reality? Well, because that’s how the Vienna Circle defined “truth” and “meaning.” The Vienna Circle’s concept of truth does not adequately account for the many different uses the concept has. Another problem is that it does not distinguish between statements that are descriptive (reports) and statements that are normative (imperative statements). Are all imperative statements nonsense? To say that something is “hot” or “cold” is to describe your world. We can verify whether something is hot or cold by our senses or by agreeing on what hot or cold means on a thermometer. But to say that something is “good” or “bad” is normative: one could say it’s good to be a Democrat and bad to be a Republican, or vice versa. What sense does this have from the positivist perspective? Where can I point to and identify the “good” of Democrats or “bad” of Republicans, unless I already assume the nature of this goodness? This is the is-ought problem, rearticulated. We will return to this later.

After Wittgenstein wrote the Tractatus, he believed he had solved all the problems of philosophy. These problems were either confusions of language, claiming content for concepts where none could be found in reality, or the result of railing against the limits of language. The limits of language are, indeed, the limits of philosophy. Famously, the final proposition of this influential work states, “Whereof one cannot speak, thereof one must be silent.” Silence is the best we can do with questions about the ultimate things, those things which ground our languages, which form the connections between is and ought.

Wittgenstein’s retirement from philosophy was brief. He soon realized the positivist conception of language did not adequately account for the complex ways in which language is used and still has meaning. Consider metaphor, poetry, body language, allegory, and the like. These uses of language clearly say something, and for language to say something is for it to “make sense,” to “have meaning.” Whether or not words refer to things or states of affairs is not the whole question of meaning or truth, Wittgenstein realized. Language acquisition and use play, perhaps, an even larger role than reference. Consider when a mother points to a ball and says “ball” to her toddler. How is the toddler to know that when the mother points to the ball the toddler isn’t supposed to follow a line from the elbow, or that the mother isn’t talking about the ball but about the color of the ball, or the shape, or even the space the ball fills by its existence? The toddler comes to know what “ball” means by interacting with the ball, by learning how “ball” is used in the contexts in which it is appropriate to talk about “ball.” This is the central insight of Wittgenstein’s later work, his rebuke to positivism: the meaning of a concept is its use in a context. Meaning is a function of context.

Do We Agree on the Facts, or Are We Just Playing a Semantic Game?

Let’s return to the topic of free will, but in a different light: the determinist position appears to derive from positivism, whereas Dennett’s Wittgensteinian and pragmatist influences show in his position, for it matters to Dennett that our reflection on concepts begins from the accurate use of those concepts. We can call anything “truth,” but does arbitrarily changing definitions mean anything? This question brings to mind the work of James K. A. Smith, presently a popular theologian in ultra-conservative Calvinist circles, who relies on stale arguments and linguistic sleights of hand. For example, he defines “liturgy” as anything that shapes our desires. So it appears “deep” when he makes the claim that basically everything is liturgy: from the ways in which we shop at malls to our daily after-work routines. One implication of calling desire-shaping phenomena “liturgy” is to suggest that we’re all “religious” at the core. And, indeed, this is assumed in the very conception of the matter. This is an extremely boring and underhanded way of saying something without saying something: Smith is an expert at employing the “deepity.”[iv] But it’s a telling example of how the words we use can affect our perceptions of our objects of study. Why not just substitute “liturgy” with “stimuli”? Well, for one reason, Smith would be out of a job. Additionally, there would be no implication, in any given instance when we use “liturgy,” that forming habits fulfills a religious need. Smith’s trailblazing conclusion, that desire-shaping practices are ultimately about “worship,” would not be assumed at the outset. Smith’s method of argumentation is one way to have your conclusions made for you: the very words we use shape our intuitions as linguistic beings.

What does it mean to ask if we agree on “the facts?” Consider that you’re having a discussion with James K. A. Smith on desire-shaping practices. What sense does it make to describe the things that draw our attention and shape our desires as “stimuli,” and not “liturgy?” Are we disagreeing about facts, here? Is it all “just semantics?”

The fact is that our words shape and, in some ways, determine what we see in the world, giving rise to disparate forms of thinking about what “the world” is. If we use “liturgy” to talk about desire-shaping practices, the inferences we are compelled to make by the use of the concept itself imply that when we perform acts which shape our desires (that is, when we do anything), we are indeed performing acts of “worship,” and the places in which we perform these acts of “worship” are our “holy” sites. This is what I mean when I say that the conclusions are already contained within the very assumptions from which we begin any analysis. For Smith, just as for Harris, the facts are given as a starting point. Consider what would follow if we began our analysis of desire-shaping practices from the mechanistic conception of the universe. Unlike Smith, Harris would say that we do not shape our desires (“liturgies”) by “performing acts of worship” in “holy sites,” but by being influenced by the “conditioners” in our “environments.” Nothing like “worship” or “holy sites” is insinuated by the use of the words “conditioners” and “environments.”

As such, what we count as “facts” is determined more by our points of reference, our forms of analysis, than by what we find in the world. Our very use of the concept of “fact” delivers objects in the world which are essentially different from the objects we find when we think of things as projections of our emotions, as symbols of what the future will bring, or as “miracles.” Both Smith and Harris can agree on the “facts,” to the extent that they can analyze the same situations, but what these facts are named, whether “liturgy” or “stimuli,” is just as important in shaping what the facts mean as the objects and situations under investigation.

So What is at Stake?

The free will debate is simply a good representation of what occurs in every discussion where science attempts to analyze concepts derived from everyday use without paying attention to the inferences we make with those concepts: concepts like “mind,” “thinking,” “belief,” and “morality.” This debate is also a good example of the difference between positivist and ordinary language philosophers. But let us take a look at another aspect of this debate, moving beyond the analysis of the concepts put in play, and consider the consequences that follow from these concepts.

The debate between Harris and Dennett boils down, in some ways, to the question B. F. Skinner raised half a century ago. When we are trying to understand the reasons for actions, do we look at the intentions of the person from our everyday use of concepts and within a normative framework of moral responsibility, or do we look at the conditioners of action, the mechanics of the universe that make some actions more likely than others and put in place mechanisms that will influence better outcomes? This is the crux of the free will debate between Dennett and Harris. And to the extent that we side with Dennett, we are looking for ways to innovate our normative schemes, to extend some concepts and retract others when it comes to our language about free will, responsibility, and justice. And when we agree with Harris, we are looking at the physical mechanisms of the world in order to manipulate and shape them to improve society.

Going back to the is-ought problem as introduced earlier, we can say that both the descriptive and normative frameworks are different for Dennett and Harris. For Dennett, the descriptive side of his analysis involves looking at the everyday situations in which it makes sense to use “free will” and then to outline the inferences we make in those situations, the consequences of using this concept. For Harris, the descriptive side involves data about the mechanisms of reality. What we count as descriptions, or the “is” of reality, informs, then, the “oughts” that follow. For Dennett, to rid us of the concept of “free will” is to rid us of the kinds of practical, social relations in which we participate when, in the everyday world, we use this concept. That’s why Dennett wants to talk about the moral aspect of free will. For Harris, to lose the concept of free will is to lose nothing, because both morality and free will are about the mechanisms of reality, and just as our moral intuitions are facts that pertain to the operations of these basic mechanisms of reality, so too is the illusion of free will. We have Dennett representing Wittgenstein’s later position and Harris representing his earlier philosophy.

The difference between Dennett and Harris is not only in the frameworks from which they analyze the problem of free will, but in the consequences that follow from their methods of analysis. To accept both projects as legitimate, which I think we should, means working to be both linguistic innovators and social revolutionaries. We should be attentive to the ways in which language shapes thought but also be open to using the tools of science to move beyond mere argumentation and hermeneutical innovation to improve society. The public clash between two legitimate ideas generally revolves around the fallacy that these ideas must be integrated in some theoretically general way for both to be legitimate, or else one must give way to the other. What is more likely true is that Harris and Dennett operate at different levels of analysis, and that it is a fallacy to think different levels of analysis must be reconciled in general ways. Rather, they must be married in the life and action of individuals, and to the extent that one level is more useful for some people in some situations than it is for others in other situations, that level of analysis will be the more significant and appropriate. We must move beyond the rationalist fallacy. Applied to a different example, this fallacy would have us believe that to use 1+1=2 we must understand the nature of addition and how 1+1=2 can both be grounded in quantum physics and explain why my wife is angry at me for not walking my dog this morning. The rationalist fallacy bewitches us by making us think we must have a theory of everything to have a perfect language. Yet, we know, different levels of analysis are true in different ways, for different projects, and for different people.

Ending at the Beginning

The difference between Harris and Dennett amounts to this: while Harris is unwittingly reducing other vocabularies to his scientific vocabulary and thereby displaying a trained incapacity, Dennett wants to keep both vocabularies for creating different contexts, exploring different kinds of experiences, and communicating different ways of existing. The contexts in which free will makes sense are not delusional forms of existence, as Harris would have us think. As Kenneth Burke puts it, “To explain one’s conduct by the vocabulary of motives current among one’s group is about as self-deceptive as giving the area of a field in the accepted terms of measurement.”[v] Put another way, “Motives are shorthands for situations.”[vi] When we consider a breach of contract, what is relevant in these situations is not, as Harris would have it, a consideration of the cause-and-effect universe and every single way in which our actions and decisions have prior causes. Rather, what is important for Dennett’s form of free will is that the person has “chosen” to breach the contract, based on the concepts we use in contractual situations. When we say a person made a “choice,” we are saying the possible future outlined in the contract in which “breach” makes sense has been actualized: we are not stating a description of neurobiology or physics. We are using concepts, just as scientists use concepts to both create and describe the world, to make sense of and act in the world where “contractual relation” is our current situation. Against Harris’s referentialism, Dennett’s free will reaffirms Wittgenstein and Burke: the meaning of our words has to do with relevance, with what a word makes relevant, and not with reference.

We’ve talked a great deal about language at this point. Let us just throw out, as the straw that breaks the camel’s back, a simple point of logic which neither Harris nor Becker seems to acknowledge. In the podcast, when Becker brought up the Libet experiments to ground his claim that our choices are predetermined, he did not also acknowledge, as Kenneth Burke does, that “The discovery of a law under simple conditions is not per se evidence that the law operates similarly under highly complex conditions.” This is a fact we should have learned from the history of science, when the simple Newtonian vision of the universe was displaced[vii] by the Einsteinian vision.

Our ending is where we began, with the recognition that we are linguistic beings, and that the way in which we use words matters. Also, in the spirit of the later Wittgenstein, we end with the American, Kenneth Burke, who arrived at roughly the same time at the same conclusions with which the later Wittgenstein rebutted his own early philosophy.

“We discern situational pattern by means of the particular vocabulary of the cultural group into which we are born. Our minds, as linguistic products, are composed of concepts (verbally molded) which select certain relationships as meaningful. Other groups may select other relations as meaningful. These relationships are not realities, they are interpretations of reality—hence different frameworks of interpretation will lead to different conclusions about what reality is.”[viii]

 


 

Photo Credit: Jef Safi

[i] Indeed, this is absolutely the case, as Steven Pinker argues in The Language Instinct.

[ii] Kenneth Burke, Permanence and Change, 7.

[iii] I refer to this as a “tautology” rather than an “axiom” to point out a basic insight of the later Wittgenstein. Our definitional statements that are supposedly “self-evident” are actually the boundaries of our conceptual schemes, our language games. They show the logic of our basic conceptual framework: this is what “definition” means in a functional sense.

[iv] “Deepity” is from an amusing chapter in Dennett’s book Intuition Pumps and Other Tools for Thinking.

[v] Burke, 21.

[vi] Ibid., 29.

[vii] I say “displaced” and not “replaced” because Newtonian physics still works at everyday scales and speeds, but we need Einstein’s theory of relativity at very high speeds and in strong gravitational fields. I heard Lawrence Krauss make this point.

[viii] Burke, 35.

 

Harris's Moral Landscape

The history of moral thought is varied. Traditionally it has been the province of philosophers and theologians, whose theories often extrapolate general concepts without empirical evidence, but recent trends in both science and philosophy favor another approach to morality, one steeped in empirical observation and scientific study to define and defend moral principles. Garnering controversy and praise for its fresh discussion of morality, The Moral Landscape by neuroscientist Sam Harris represents such an approach. For Harris, moral relativism (the belief that moral goods are not objective) does not effectively create a just and ethical society.[i] Additionally, he rejects moral (usually religious) absolutism, which defines moral goods under strict, dictatorial guidelines.

As an alternative to moral relativism and absolutism, Harris introduces the idea of a moral landscape, where moral situations and concepts lie on a continuum of approval or disapproval based on scientific studies of neurological and social data. His benchmark for what constitutes a moral good is the “well-being of conscious creatures.”[ii] This argument is a new approach to classical utilitarianism, founded in the late eighteenth and nineteenth centuries by philosophers Jeremy Bentham and John Stuart Mill. Bentham and Mill’s social philosophy used the idea of “the greatest good for the greatest number” as the standard by which to make moral judgments. Harris’s moral landscape is a modern, more empirically grounded version of this time-honored philosophical tradition, but it focuses more on the situational aspects of moral judgment. Thus, Harris’s moral landscape provides us with a new incarnation of utilitarianism based on scientific, as well as philosophical, foundations.

Utilitarianism: The Classical Approach

Before we can understand the nature of Harris’s thought, we must survey classical utilitarianism. Utilitarianism, as a social and political theory, argues that moral decisions should be made by considering the greatest happiness for the greatest number of people. The founder of this theory was political philosopher Jeremy Bentham, who outlined his concepts in An Introduction to the Principles of Morals and Legislation. Bentham argues that “nature has placed mankind under the governance of two sovereign masters, ‘pain’ and ‘pleasure.’ It is for them alone to point out what we ought to do, as well as to determine what we shall do.”[iii] Pain and pleasure, generally understood as functionally meaning “unfavorable” and “favorable,” self-evidently show the most appropriate actions for humanity, according to Bentham. Since we are subjected to pleasure and pain, “the ‘principle of utility’ recognizes this subjection, and assumes it for the foundation”[iv] of an ethical and moral system. In Bentham’s view, the principle of utility is the guiding precept governing moral action, both for government and for individuals, that expands pleasure or diminishes pain for the greatest number of people possible.

Bentham arrives at this conclusion through what is called “hedonistic calculus.” Hedonistic calculus aggregates the principles of intensity, duration, certainty, remoteness, fecundity (the likelihood that a pleasure or pain will be followed by more of the same), and purity of the pleasures or pains involved in interactions between social individuals, to establish the greatest utility possible in any given situation.[v] These criteria, applied like an algorithm to each moral situation individually, deliver the best possible moral outcome. This is generally called “act utilitarianism”: moral actions are made individually and situationally, but collectively they expand the moral benevolence of a society. Bentham’s theory powerfully argues for the equality of humanity as well as for the unification of laws and moral customs under a principle of utility. Yet his approach is hard to implement in the real world because there are no unifying, general axioms that might guide society towards actions of the greatest utility, and because it takes too much time to run the hedonistic calculus in every situation that requires an action. This is where John Stuart Mill, the co-founder of utilitarianism, comes in to pick up the task.
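
Since the passage above describes the calculus as algorithm-like, a small sketch may make it concrete. This is only an illustration under my own assumptions: the criterion names come from the essay, but the numeric scale, the equal weighting, and the example actions are invented, not Bentham’s own procedure.

```python
# A minimal, illustrative sketch of Bentham's hedonistic calculus.
# The criteria follow the list above; the [-10, 10] scale and the
# equal weighting of the criteria are assumptions made for illustration.

CRITERIA = ["intensity", "duration", "certainty",
            "remoteness", "fecundity", "purity"]

def hedonic_score(scores):
    """Sum an action's pleasure-minus-pain scores across all criteria.

    `scores` maps each criterion to a number in [-10, 10]: positive
    for pleasure produced, negative for pain.
    """
    return sum(scores[c] for c in CRITERIA)

def best_action(options):
    """Act utilitarianism in miniature: pick the highest-scoring option."""
    return max(options, key=lambda name: hedonic_score(options[name]))

# Two hypothetical actions, scored criterion by criterion.
options = {
    "keep_promise":  {"intensity": 3, "duration": 5, "certainty": 8,
                      "remoteness": 2, "fecundity": 6, "purity": 7},
    "break_promise": {"intensity": 6, "duration": 2, "certainty": 4,
                      "remoteness": 1, "fecundity": -3, "purity": -5},
}
print(best_action(options))  # -> keep_promise
```

The sketch also dramatizes Bentham’s practical problem noted above: the whole table must be rebuilt and re-scored for every new situation, which is precisely the burden Mill’s rule utilitarianism tries to lift.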

Mill agrees with Bentham on the principle of utility, but he expands upon this concept with his own version of it, the “Greatest Happiness Principle.”[vi] The principle posits that “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure.”[vii] Therefore, all utilitarian moral evaluation and action is based upon this principle for Mill. In responding to critics who argued that pleasure is only of the body, Mill counters that some intellectual goals, when achieved, are more pleasurable than bodily desires, and that these higher pleasures must have some form of primacy over the base, bodily pleasures of humankind.[viii] Thus, Mill’s utilitarian theory argues that broad rules must be created in accordance with the Greatest Happiness Principle in order to effectively implement a standard of morality for as many people as possible.[ix] This is known as “rule utilitarianism,” which argues for creating the greatest general happiness through broad, unifying guidelines that all members of a society use. But what are those rules?

In attempting to formulate some guidelines, Mill argues that “the ultimate sanction, therefore, of all morality…[is] the conscientious feelings of mankind.”[x] Humanity’s initial moral guidelines stem from subjective value judgments that then evolve into broader social commitments to ethical ideals like happiness. In an interesting turn, Mill dissents from Bentham and argues for something revolutionary within the utilitarian framework, something that will have a clear influence on Harris’s thinking: human morality is equivalent to states of mind. As such, the sanctions on moral behavior exist “always in the mind itself…this which is restraining me [from immoral action], and which is called my conscience, is only feeling in my own mind.”[xi] Mill’s attention to the human mind anticipates the development of the neurological sciences and their relationship to human behavior, something Harris has openly defended. While these properties are of the mind, Mill argues that they are not innate and must be “a natural outgrowth…brought by cultivation to a high degree of development.”[xii] Another key axiom for Mill is that rules for conduct in society be created by “those who are qualified by knowledge of both ‘moral attributes and consequences,’” and that their judgment “must be admitted as final.”[xiii] Mill thinks somebody, or some group, should survey the possibilities of action given current circumstances and apply the Greatest Happiness Principle to determine general rules of conduct. Given humanity’s natural propensity for intellectual growth, and the spread of moral guidelines through the expansion of education, utilitarianism can be applied to society through general rules of conduct. This is something Harris, presumably, would agree with.

Both Bentham and Mill created a social philosophy that philosopher Leonard Peikoff described as “knowing skepticism,” meaning that while their methods do not fully produce objective rules of conduct, the subjective value-states of humankind lead to the creation of larger rules by which society functions.[xiv] In introducing this skepticism, Mill and Bentham orchestrated a social philosophy with practical value, especially with the introduction of uniform rules of conduct based on collectively understood value judgments. Sam Harris’s “moral landscape” seeks to revamp rule utilitarianism, using neuroscience to explain social conduct and the nature of human happiness in a more scientific, objective way.

Harris’s Moral Landscape

As a trained neuroscientist, Sam Harris uses the tools of science to address our long-standing moral and ethical dilemmas. “Human well-being entirely depends on events in the world and on states of the human brain…. Differences of opinion will remain—but opinions will be increasingly constrained by facts.”[xv] Harris is putting forth a more actionable way of approaching ethics: instead of relying on traditional and potentially subjective modes of moral and ethical thought, we shift to discussing quantifiable rules of conduct that can be measured within the constructs of science and reason. To this end, Harris posits the moral landscape as “a space of real and potential outcomes whose peaks correspond to the heights of potential well-being and whose valleys represent the deepest possible suffering.”[xvi] These moral peaks and valleys correspond to states of the brain, and under this scheme, various cultural, ethnic, religious, and social customs are represented as features of the landscape. As Harris puts it, “Culture becomes a mechanism for further social, emotional, and moral development. There is simply no doubt that the human brain is the nexus of these influences.”[xvii]
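
Because Harris’s metaphor is explicitly geometric, it can be caricatured in a few lines of code. Everything here is my own toy construction, not anything from The Moral Landscape: a society’s state is reduced to two invented variables, and well-being is a surface over them with two distinct summits, echoing the plural “peaks” of Harris’s definition.

```python
import math

def well_being(freedom, security):
    """A made-up well-being surface over two toy state variables in [0, 1].

    Two Gaussian bumps give the landscape two separate peaks: quite
    different social arrangements can sit at comparable moral heights.
    """
    peak_a = math.exp(-((freedom - 0.3) ** 2 + (security - 0.7) ** 2) / 0.02)
    peak_b = math.exp(-((freedom - 0.8) ** 2 + (security - 0.4) ** 2) / 0.02)
    return peak_a + peak_b

# Sampling the surface: high values near (0.3, 0.7) and (0.8, 0.4),
# and a valley between the two peaks.
for f, s in [(0.3, 0.7), (0.8, 0.4), (0.55, 0.55)]:
    print((f, s), round(well_being(f, s), 3))
```

Nothing in the function privileges one peak over the other; on Harris’s picture, the role of science is to map the surface, and moral progress is movement uphill.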

In trying to develop better modes of moral behavior, Harris posits that general well-being, much like the utility principle for Bentham and Mill, is the benchmark for what constitutes a moral judgment, action, or outcome.[xviii] Yet he disagrees with them about the importance of subjectivity in the moral decision-making process. Harris argues that “there must be facts regarding human and animal well-being about which we can also be ignorant or mistaken. In both cases, science—and rational thought generally—is the tool we can use to uncover these facts.”[xix] Humanity’s evolutionary shift towards rationality and reciprocity has paved the way for moral and ethical concepts that increase the well-being of most parties within a society.[xx] The insistence on rationality, brain states, human thought, and general well-being creates a moral framework that makes Harris’s views consistent with Mill’s rule utilitarianism, even though Harris believes objective moral truths are more accessible than Mill did.

In explaining the nature of brain chemistry and its relation to human morality, Harris cites a study involving psychopaths and sociopaths, two psychological categories of people who, on average, make immoral or amoral decisions at the expense of others’ well-being. Harris explains that “the first neuroimaging experiment done on psychopaths found that, when compared to nonpsychopathic criminals and noncriminal controls, they exhibit significantly less activity in regions of the brain that generally respond to emotional stimuli.”[xxi] This correlation suggests that in the future, as neuroscience progresses toward an even fuller picture of the brain, society may be able to establish social norms based on such empirical data. Harris’s explanation of evil supports Mill’s view that social norms, together with reliance on people of experience, could be used to create a utilitarianism that has real social weight.

Harris’s moral landscape shares another quality of rule utilitarianism: studies of human belief show that facts and values are intertwined. To understand this further, Harris elaborates on the nature of biases in human thought; he argues that bias “is not merely a source of error; it is a reliable [italics in original] pattern of error. Every bias, therefore, reveals something about the structure of the human mind.”[xxii] The problems associated with biases serve as a counterpoint to the prevailing moral precepts of a given society. Since sound arguments are created by winnowing bias out of propositions, facts, once established, come to be believed; a sound fact “inspires belief.”[xxiii] Morality, in some instances, can consist of beliefs inspired by the past elimination of biases and the establishment of sound facts. Logically, our understanding of sound facts allows us to implement a form of rule utilitarianism that applies to a wide variety of societies.

Conclusion

Sam Harris has argued that human flourishing is directly correlated with a sound understanding of the fundamental facts of human well-being, particularly freedom, security, and equality. In the conclusion to his book, he argues that, while there may never be a completely implemented form of universal morals, humanity “must admit that some interests are more defensible than others. Indeed, some interests are so compelling that they need no defense at all.”[xxiv] This brief passage on the nature of competing interests in society is one of the most powerful, implicit defenses of utilitarian thinking: some interests will take precedence over others for the greatest amount of well-being in a society, and utilitarianism gives us a way of navigating competing social interests. What makes Harris’s moral landscape important to the evolution of ethics is that it offers a method, one rooted in empirical evidence and philosophical consistency. It offers an attainable, institutional form of human morality that is a secular alternative to the all-pervasive contradictions inherent in theological ethics and moral relativism. Rule utilitarianism, from Mill’s classical form to Harris’s moral landscape, shows a systematic approach to the expansion of positive human values that, through science and philosophical inquiry, will only further evolve.

 

 


 

[i] A “moral good” is any moral decision or consequence that is morally good; the term covers both actions and outcomes.

[ii] Harris, 2010, p. 11.

[iii] Curtis, 1962, p. 117.

[iv] Ibid.

[v] As cited in Curtis, 1962, p. 120.

[vi] Mill, 2002, p. 239.

[vii] Ibid.

[viii] Ibid., 240-241.

[ix] Ibid., 241.

[x] Ibid., 262-263.

[xi] Ibid.

[xii] Ibid., 264.

[xiii] Ibid., 243.

[xiv] Peikoff, 2012, p. 59.

[xv] Harris, 2010, pp. 2-3.

[xvi] Ibid., 7.

[xvii] Ibid., 9.

[xviii] Ibid., 55.

[xix] Ibid., 31.

[xx] Ibid.

[xxi] Ibid., 97.

[xxii] Ibid., 132.

[xxiii] Ibid., 133.

[xxiv] Ibid., 190-191.

 

References

Curtis, M. (1962). The great political theories, volume two. New York: Harper Perennial.

Harris, S. (2010). The moral landscape. New York: Free Press.

Mill, J. S. (2002). The basic writings of John Stuart Mill. New York: The Modern Library.

Peikoff, L. (2012). The DIM hypothesis. New York: New American Library.

 

What Is Atheism? by Tylor Lovins

With the continued development of secularism, the term “atheist” is becoming more common. More and more people are talking about “atheism,” but what is it, exactly? A tension exists between the method this kind of question brings to bear in its search for an answer and the reality it attempts to explore: that there are people who are atheists.

The method assumes atheism is something like a religion. It is striking how far this method pervades even the secular community, yielding a conclusion that almost no one contests: atheism is a lack of belief in God or gods. We are told atheism is about belief, just like a religion.

Atheism, empirically speaking, signifies the status that a certain belief holds in the lives of certain people. This, so far, is rather banal. Although many confusions follow from this method, such as when theists ask atheists for reasons for their atheism, it is perhaps conceivable that atheism is an option, like a commodity, in the marketplace of ideas. This assumption has yet to be supported, yet it is seemingly believed by all. Atheists argue for atheism as Christians argue for Christianity. Is this a case of mistaken identity? Is atheism something like Christianity?

Let us explore this question, not from the assumption that atheism is a belief that atheists have, but from the fact that atheism exists because there are atheists. Let us not assume an equivalence of function between atheism and religion, and simply pose the question: why are there atheists?

It is no doubt true that some atheists were once theists. Disenchanted with belief in God or gods by experiences of tragedy, power struggles in religious institutions, perceived disparities between scientific and religious claims, and the like, some atheists are reacting to religious institutions and beliefs. This seems to be where the concept of atheism originated: as the status of a person who refused the beliefs of the larger society. Atheism, in this sense, is disbelief: a refusal to believe based on reason, intuition, or emotion. There are many in the ranks of the atheists who would identify with this kind of atheism. This is atheism as anti-theism. These atheists would give reasons for unbelief, and atheism, here, might be accounted something like a one-eyed religion, in that it develops a totalizing system of beliefs about God or gods, nature, and humankind, without the rituals and community associated with these ideas in religion.

Another kind of atheism has emerged in the modern world, one in which religion was never received as a candidate for belief in the first place. In this sense, atheists aren’t those who refuse religious beliefs and institutions, but those who never considered them as meaningful options. It’s not that these atheists have acquired disbelief; it’s more accurate to say that the concept of God or gods holds no meaning for them. It bears no weight on their day-to-day lives. The world is thought about and lived in without God or gods. This kind of atheism resembles religion in no conceivable way. Atheism, here, isn’t a status of belief, because it doesn’t occur to the secular atheist to refuse God or gods: what would it mean to refuse? There are no questions, here, of the existence of God or gods, for it is unclear what such “existence” would entail. A product of a world handed down by science and secularism, atheism in this sense indicates the meaninglessness of religious belief.

As briefly outlined above, there are generally two reasons why there are atheists: disenchantment and secularism. The common definition and understanding of atheism presupposes the first kind of atheist, the anti-theist, as the torchbearer for atheism. This is an oversight. A new kind of atheism has emerged as a result of secularism, one for which religious traditions do not make sense in the first place. The secular atheist lives to promote science, humanism, secularism, and the like; that is to say, lives to promote and develop positive options for living in a world where religion doesn’t make sense. Anti-theists, on the other hand, while they may promote positive options, also focus on diminishing the status of religious beliefs: they actively promote the refusal of religion.

As a result of secular atheist influence, atheism may in the future be understood not for its nonreligious point of view but for its secular humanist one. Whether or not one population of atheists eventually gives way to the other, secular atheists appear to be here to stay, and with them, the nature of atheism itself has changed: it is no longer a mere refusal of what came before, but an openness to what is to come.

 


 

Photo by Vlad Tchompalov on Unsplash

Rationalism as a Humanism: Grounding the Secular by Tylor Lovins

What is the defining quality of the secular movement, if there is a center at all? Merriam-Webster defines secularism as “indifference to or rejection or exclusion of religion and religious considerations.” This aspect is self-evident to everyone in the movement. Many prominent secularists have at one point or another declared war on religion, typically by reducing all religious traditions to their fundamentalist, literalist manifestations. Motivated by the theory that religion was a primitive form of science, they have exposed the mystifying beliefs of divine inspiration, holy-book inerrancy, and divine-human relations for what, on this view, they truly are: linguistic and ritualistic artifacts of a world now left behind by the progress of science.

The movement of secularism isn’t itself contained within this definition of secularism, however. The definition of humanism, which stands today as a largely non-negotiable feature for many in the secular movement, describes the contexture more precisely: “a doctrine, attitude, or way of life centered on human interests or values; especially: a philosophy that usually rejects supernaturalism and stresses an individual’s dignity and worth and capacity for self-realization through reason.” Reason and science, coupled with anti-supernaturalism and the displacement of religion, appear to be the primary drivers of secularism. This warrants some critical reflection. Although reason can be understood as an intellectual endeavor that utilizes principles of logic, it’s not self-evident whose reason, and which rationality, should undergird the secularist movement. The de facto rationality motivating the secularist movement at present is rationalism.

The rationalist tradition, for our purposes, can be understood as the tradition of thought that makes truth the outcome of an equation: it proceeds from premises to conclusions that are warranted by logic. This is, in Aristotle’s term, “dialectic.” More broadly, a compelling yet underdeveloped strain of rationalism, the strain that creates the framework for secularism, subsumes empiricism. Here, the premises of thought do not rely entirely on abstract, a priori conditions but take into account scientific findings and experiential knowledge. Another strain has developed, unfortunately, which deduces that our motivated action is grounded in the rationalist equation. Let’s call this “naive rationalism.” The naive rationalist asserts that we are basically rational animals and that, with our handy reason, we are guided by rationalist equations. The yield of these equations is the truth in the realm of thought and the good in the realm of action. Proposed as the successor to religious traditions that make claims based on authority, the rationalist tradition appears poised to further the cause of humanism and the advancement of knowledge by the force of reason, in a way that is historically unrivaled and unparalleled.

This ambiguity in the rationalist tradition should be interrogated, for centralizing the naive rationalist tradition in the secularism project devalues the fundamental, constitutive role that valence frameworks play in any kind of rationality in the first place. Reasons, as modern philosophy and psychology have shown, do not originate from value-neutral systems, but rather are products of systems of value. The point can be made more explicitly: this rationalist tradition favors facts and reason as the highest goods, virtually diminishing the explicit roles of fitness, creativity, virtue, and meaning in the scheme of human motivations. Secularism could benefit from reintroducing these roles into the pantheon of humanism.

Situating Rationalism

What I am suggesting is not entirely novel, but it remains sufficiently foreign to many projects sympathetic to secularism that it bears repeating and amplifying here. I am not, after all, calling for a devaluation of reason. Reason is a grand achievement of humankind, and it rightfully remains the symbol not only of progress but of a future world without mass population manipulation by appeal to fantastical claims. I simply want to bring reason back from the clouds of the Enlightenment to the real world, where values, emotions, and unconscious biological mechanisms propel us to action and thought.

In an episode of The Sopranos, Tony’s therapist explains that rage is the psyche’s way of creating a massive distraction, enabling one not to account for potentially punishing or threatening stimuli (whether in memory or experience) but to displace them, shutting one’s eyes to these stimuli as meaningful or real. The picture of rage here is like the child who hides her head under the blankets after seeing a scene from a horror film. The way in which we use arguments to reduce others’ positions to ludicrous strawmen is precisely a type of security blanket, in linguistic form. Let’s remove this blanket and confront the ambiguity in the function of rational beliefs that emerges when we ground them in the creaturely realm. Our beliefs themselves, whether true or not (in the sense that they adequately take into account our place in the world in the present), may be what obstructs us from ascertaining truth in the future. Truth, in this way, returns to the motivational level, and doesn’t remain in the realm of articulate conscious thought. Our knowledge of the present may not be true enough to enable us to thrive or acquire truth in the future. Whether reason itself is (1) a method for finding truth or (2) a claim about the authority of an assertion is a tension for many in the secular movement. Just take a look at all the anti-religious memes and rhetoric flourishing in online secular communities to see how much reason has been misunderstood as a position or claim and not as a method.

Truth as motivational, as operating in the realm of meaning, is important when the secularism project encounters religious thought, and especially as it invokes science. Humanism’s anti-supernaturalist bent is understandable and significant. With Bacon’s critique of Aristotle’s final cause, the method of science was brought into sharper focus and, under these conditions, prospered without religious conceptions of the world. We don’t need to know the metaphysical constitution or nature of a thing to determine its efficient or material causes. That there may have been a being that created the material world does not weigh in on the question of why the sky is blue or how bacteria cause disease, or even, now, where humans came from. With Bacon, the weight of supernaturalism no longer grounded science, and it could finally fly freely toward the light of truth.

This is not where the story ends, however. Science appears positioned as Icarus. Important modern figures of secularism and champions of science like Sam Harris and Richard Dawkins have taken their cue from James Frazer’s The Golden Bough, claiming religion is a primitive form of science that the progress of science will leave behind. Although Frazer rightly locates the basis of myth and religion in psychology, his view was unfortunately colored by a naive rationalism. Frazer, like many even today, did not account for the importance religion has for the inward life or for the psychological mechanisms that motivate religion in the first place. Seen as an institution that delivers a guide to right action and right thinking based on authority, religion becomes cosmology plus ethics, undermined by its supernaturalism.

One reason the rationalism of science fails to give an adequate account of religion is that the tradition of rationality itself hasn’t taken into account the creature that uses rationality, but has instead reduced this creature to something like a more-or-less competent logic-guided robot. This oversight is a significant one. The public and communal nature of the scientific enterprise, meshed with the philosophical underpinnings of secularism’s rationalism and empiricism, makes for a formidable force not unlike Christendom’s mix of magisteria and religion in the life-world of medieval Europe. Still, the potential has yet to be unlocked. At this point in history, especially in the post-industrial, Christian-inspired nations of Europe and North America, secularism is like the potential energy of two tectonic plates: it has produced some seismic activity over the last two or three centuries but is overdue for a massive earthquake.

Motivation and Articulation

The religious wars that gave impetus to a non-religiously grounded framework for truth and political institutions birthed our modern secularism in more and less obvious ways. As deism rose to prominence during this time, true religious beliefs were no longer associated with the authority of church institutions, which had enforced the status of these truths by political force. Rather, truth became an inward reality, an “inner light.”[1] The public became private, the communal individualized. The stakes of this reformation, which owed much to the Reformers and to the growing ideas of nationalism and equality already in place, could hardly have been higher at the time. The political leaders endowed with authority by the Church weren’t just making sure, as in our day, that the beliefs of one person didn’t intrude on the liberty of another; they were charged with the task of safeguarding the souls of their people.

As human history moved to favor the death of ideas over the death of people, the importance of symbols and narratives as the spaces where truth shows itself was lost in the development of rationalism. The separation of church and state has reversed the roles of what fundamentally grounds us. This is easily seen in populations of both religious and secular stripes, with people in both groups claiming that the minimal requirement a belief must meet to be legitimate (or at least not disallowable) is that it not infringe on the liberty of others. With rationalism sectioning individuals into types and tokens, our beliefs have become hyper-individualist, and what motivates us on the pre-conceptual level has been lost as a category for thinking, sacrificed to the demand to typify everything for the calculus of our secular rationality.

For the kinetic energy of secularism to support life rather than diminish it, it will have to capture not only the minds of the masses but also their hearts, and not merely in the equivocal, ambiguous way of assuming and sublating the good, or motivational truth, under the method of rationality. The disparity between the proselytizers of religion and the advocates of secularism might just be measured by the forms made available to religious people in symbols and rituals, forms that haven’t found a functionally equivalent home in secular movements. These forms enable the appearance of content framed as statements of belief, which illuminate, inspire, and unify the mind and heart. And the reasons are somewhat obvious, for those with eyes to see. Image processing and pattern recognition, as forms of thinking that are innate and unconscious, are more primary to and pervasive in consciousness than articulate thought.[2] That the myths of religion are saturated by images and narratives is, as a result, no accident. Stories grab us on a pre-conceptual level and even appear to ground our conceptual frameworks in the first place. Daniel Kahneman’s Thinking, Fast and Slow depicts this secondary role of articulate thought in consciousness even more acutely: our “fast” system, what in common parlance we call “intuition,” the pattern recognition mechanism I mentioned before, “makes” choices for us on most occasions. It is only when something unexpected or unknown is encountered that our secondary, “slow” system, articulate thought, becomes operative.

If the strictly rationalist perspective of the human were true,[3] whereby the givenness of thought were provided completely in the mediation of sense data from the world, through the eyes, to the vessel of our minds, waiting to be formed by our concepts, then the world would, in a significant way, be value-neutral to our biological systems: there would be no primitive reaction of fight, flight, or freeze, but an immediate compulsion of reason, and articulate thought would be more pervasive than non-linguistic thought. This is not, in fact, what we find, and it does not account for everyday experience.

A now-prevailing theory of perception supports this valence-laden notion of the world. Scientists formerly believed that when we look out at the world and perceive the “givenness” of it, the objects with the most salience attract our attention. The consensus is moving in a different direction: we are, rather, attracted to valence, the most meaningful aspects of our perceptual field. And, on a more general level of analysis, when we don’t know what’s going on, when we find ourselves in situations that are new or unexpected, our amygdala goes to work and to some degree produces the fight/flight/freeze response.[4] This is true not only for situations in the world, when we encounter strangers, animals, natural disasters, or darkness in a foreign place, but also for situations in the mind, when we encounter new ideas and beliefs.

To be fair, the disparity I am outlining, between truth as fact and truth as valence, isn’t irreconcilable. The difference rests merely on two images of humankind conceived in “natural” or “normal” states of affairs. The naive rationalism that grounds some strains of secularism would have us believe it is natural for humans to encounter the world in a value-neutral way, although the methods of science itself, and its empiricism, contradict this claim. On the other hand, religion, insofar as it encourages literalist interpretations of its mythical symbols, would have us think the world is populated by gods and demons, and that it is natural for humans to encounter a world for or against them. These claims are literally false, but perhaps metaphorically true. The issues arising from naive rationalism on one hand and religious fundamentalism on the other are not inherent to the secular enterprise itself, but are artifacts of the pre-Darwinian philosophy of Descartes. It is my belief that becoming more Darwinian will galvanize secularism toward a more synthetic and all-encompassing view of ethics, politics, and especially religion.

Religion and Rationalism

If we take Kahneman’s research and conclusions seriously, rationality appears to be a mechanism motivated by the negation of itself. We can put it conceptually this way, using Hegel as our guide and contrasting the understanding with conceptual thinking: (1) the understanding is an immediate (meaning unmediated) interaction with the environment, underlying most of our thinking most of the time; (2) dialectic, or conceptual thinking, is a mediated form of the immediate, and its goal is to synthesize the mediated with immediate experience so as to adapt the understanding and return to the world, forgive the religious image, as a new creation, better fit to overcome whatever obstacles stand in one’s way. Rationality, as the conceptual aspect of thinking, arises when we encounter a problem or an unknown in our environment, when our unmediated understanding, our immediate experience of the world, becomes questionable. When the issue appears, we mediate the world so that we don’t have to die to learn, but can predict, contradict, examine, and evaluate new courses of action to map onto our environments. Our mental life returns to immediacy until a new problem or novelty is encountered again.

This cycle of immediacy and mediation seems to account for a significant difference between rationalism and religion. And I think rationalism could gain from learning about this difference.

A piece of a Darwinian understanding of religion will reside in this framework, I believe, one that does not limit religion to a scheme of morality, or a cosmology, or simply both together in varying intensities. Wittgenstein once wrote that “God” is a term like “object,” and with it, you get an entire conception of the world. The first commandment given to the Jews, that they should have no other god before God, can now be interpreted in the way the Father of Modern Theology, Friedrich Schleiermacher, once spoke of miracles: “Miracle is simply the religious name for event. Every event, even the most natural and usual, becomes a miracle, as soon as the religious view of it can be the dominant….The more religious you are, the more miracle would you see everywhere.”[5] Religion makes a move that rationalism doesn’t necessitate but could, and should, incorporate.[6] The moment of mediation, for religion, is not a moment to figure something out about the objective world, whether that be the causal relations of objects or the laws of nature; to the extent that these are figured out by religious people, it’s an accidental and not an essential feature of the religious disposition. The moment of mediation is undertaken to correct disposition: mediation is a form of meditation, a reception or correction of behavioral patterns. Immediacy becomes transformed into miracle the very moment God is sought in all things. Consider the words of Deanna A. Thompson, explicating the centrality of faith for the Christian life in light of Martin Luther’s theology:

“…having faith means that your whole life is redirected toward ‘trusting [God] with your whole heart’ and looking to God ‘for all good, grace, and favor,’ honoring God through the orientation of your inner life.”

Rationalism, on the other hand, utilizes mediation in a fundamentally different way, and this is what separates the objectivity of rationalism from the existentiality of religion. The point of mediation for rationality is to understand the causal connections and physical makeup of the world. Yet it doesn’t end there. Mediation becomes saturated with facts, more so than the religious disposition strives to attain, and in such a way sets the mediated move of reason as the primary driver of thought, rather than a certain disposition toward the world as it relates to oneself immediately.

This is a significant difference. It doesn’t mean that religion operates only within the realm of value and rationalism only in the realm of truth, but it does indicate a different kind of navigation of the world as it presents itself to human beings, as creatures who not only think and plan but also suffer and love. The platitudes, derived from metaphors, narratives, and images, that religious people themselves use to communicate religion inspire a depth of life for many that appears, at least at this point in history, simply inaccessible by other existing avenues. Taken seriously, a more fully Darwinian conception of religion may give us a wisdom and appreciation not just for life itself but for the lived experience of life that has been hidden in the cliches of the sages of the past. The fact that so many religious people use platitudes or canonical beliefs, grounded in metaphor and imagery, to communicate deep inward experiences tells us that these inward experiences need forms to carry them to the public eye, and that these forms are patterned and universal. It seems otherwise a miracle, for instance, that the myths of the world share global structures and archetypes, which, when abstracted from any individual myth, fit within a universal framework common to all myths. To go further, an experience that I can’t mediate to myself doesn’t have meaning, and the way I mediate these experiences to myself is the same way they’re mediated to the communities I find myself in: by language and images. There is some sense in which, as a result, the meaning and shape of experiences arise within communal constraints and traditions. And these constraints and traditions, undergirded by patterns of categories seemingly inherited, testify to something all too human.

Rationalism as a Humanism

Rudolf Otto introduced the notion of “awe” as central to the encounter with the divine, as the most salient characteristic of a religious experience. And we might say this “awe” is essential to the propensity to live by inward disposition and motivation rather than external manipulation and control. Joseph Campbell asks in Myths to Live By what “the proper source of awe might be”[7] for those of us who no longer live in a world of gods and demons. What are the sources and symbols of mystery and inspiration that evoke “the impulse to imitative identification”?[8] He traces these sources through history: beginning with animals and their mystical agency, then the vegetable world where death changes into life, and then the cosmos and the seven moving cosmic lights that affected the ordering of societies. In our time, he finds, the individual stands as the source: “as a Thou, one’s neighbor; not as ‘I’ might wish him to be, or may imagine that I know and relate to him, but in himself, thus come, as a being of mystery and wonder.”[9] Every human is a new beginning, a singularity in the history of humankind, and to diminish this novelty is a kind of blasphemy.

Like Nietzsche, Campbell finds the first explication of the human as a source of awe in the Greek tragedies. From the two tragic emotions classically identified by Aristotle, pity and terror, we discover a conceptual framework in which to turn the traditionally religious movements into a humanist project. Campbell uses James Joyce’s exposition to spell these out: “Pity is the feeling that arrests the mind in the presence of whatsoever is grave and constant in human sufferings and unites it with the human sufferer. Terror is the feeling that arrests the mind in the presence of whatsoever is grave and constant in human sufferings and unites it with the secret cause.”[10]

In tragedy, we are compelled to relate to the individual by the shared grave and constant reality between us, and we are inspired by the secret source of this grave and constant reality, which unites us. In our case, it is death which is the grave and constant specter that haunts us, and it is life which is the secret source of death, but also of things greater than these: family, creativity, and meaning. In this recognition, we may return to the Father of Modern Theology, but without God: life is received as a gift, shared with all our brothers and sisters, something we did not ask for and could not have acquired by our own actions, but with which, by the happenstance of evolutionary history, we are gifted immeasurably.

For rationalism to motivate secularism properly, it must catch up with the times, and not deliver to us an image of humanity dreamed by Descartes’ ghost in the machine or by the tabula rasa nothingness of Locke’s children. Being clear about the nature of the creatures who use rationality is one thing; we must also understand the motivations of these creatures. Reducing, disregarding, or criticizing religious beliefs by a way of thinking foreign to them, without first taking genuine steps toward understanding them on their own terms, does not seem the most reasonable response to a phenomenon that has enamored most people for most of history. Rationalism, itself, is a tradition, a human tradition. It is imperative that secularism recapture the human element at the heart of rationalism. The best secularism, in my estimation, is the one that takes into account and integrates the best of all human thought, no matter where it may be found. What images of the human we use in this process will be crucial, for it is our metaphors that “mediate between our procedural wisdom and our explicit knowledge; they constitute the imagistic declarative point of transition between the act and the word.”[11]

The West celebrated the God incarnate for millennia. It’s time we celebrate the fact that life became human, and that now, with the gift of consciousness, we may understand, revere, defend, and serve it. We need not pray that God bless us, for life has. Nor should we pray for God to return, for life is here. No more prayers for miracles of God, for the secret source that connects us all, life, demands of us that we act. The only question is whether we will become worthy of this demand. “The old imagery now carried a new song–of the unique, the unprecedented and induplicable human sufferer; yet equally a sense of the ‘grave and constant’ in our human suffering, as well as a holy intimation of the ungainsayable ‘secret cause,’ without which the rite would have lacked its depth dimension and healing force.”[12]

Photo by Johannes Plenio on Unsplash


[1] See Christopher Hill’s wonderful book The World Turned Upside Down, where he tracks this development in England.

[2] This is Freud’s insight, and it has turned out to be true in an interesting way: our “fast system” heuristics are such that we make systematically predictable errors in our thinking.

[3] I find this especially in the Objectivist ethic, but the idea has advocates from René Descartes and Immanuel Kant to Ayn Rand.

[4] Jordan B. Peterson, Maps of Meaning.

[5] Friedrich Schleiermacher, On Religion: Speeches to Its Cultured Despisers.

[6] And already does to some extent. Listen to lectures and presentations by Carl Sagan or Neil deGrasse Tyson, and you’ll hear a very similar view.

[7] Joseph Campbell, Myths to Live By, 58.

[8] Ibid.

[9] Ibid.

[10] Ibid., 59.

[11] Peterson, 94. We should note, here, a prime example of our danger. The fact that the trolley problem has been posed as a moral problem, in the sense that it awakens our intuitions enough to perceive it as a moral problem in the first place, is disconcerting, as it assumes the moral choice can be perfectly moral while making life expendable.

[12] Campbell, 59.


In my previous essay, I explored the implications of life without gods and the supernatural. Acknowledging that the abandonment of traditional religion requires a complementary philosophical system, I will present secular humanism as a rigorous and applicable framework for human flourishing. This brief overview will not be exhaustive; it will outline the methodology and offer concise arguments in its defense. In sum, a life based on the application of one’s reason, ethical individualism, and democratic participation can facilitate a life of joy, freedom, and achievement.

The Humanist Epistemology

A secular humanist’s epistemology (theory of knowledge) is built upon three essential components: reason, methodological naturalism, and skepticism. First, reason is the foundational pillar from which the other components work. Reason is the capacity of human beings to create abstract thoughts and conclusions based on the concretes of reality. It is the emergent faculty of our brains that allows us to conceptualize and systematize the world. The humanist believes that reason, or our ability to perceive and then conceive, is purely natural and without need for “faith” or “revealed wisdom.”

Philosopher Harry Binswanger has delivered a series of lectures emphasizing this point, basing his conclusions on the principles of an Objectivist epistemology. In Binswanger’s estimation, perception (taking in information via the senses) is the “given” in our understanding of the world, in that it requires only physical processes. Abstraction and conceptualization, which turn our perceptions into knowledge, are processes that require discrimination and systemization of the “raw material” of perception. This is where reason comes in. Nearly anyone can perceive a quasi-spherical red object or a vibrational difference in the atmosphere with their senses; it requires reason, through the concretizing and systemizing process of conceptualization, to understand that it is an apple or a song.

Faith bypasses the entire process of knowledge by appealing to “revealed” truths that one accepts without the steps of perception, concretization, and abstraction. It treats knowledge as a top-down proposition, akin to Plato’s “forms” or Kant’s “pure reason.” This is a completely inverted understanding of epistemology. As Aristotle, Locke, and others have rightly noted, knowledge is a bottom-up process, requiring ever more complicated levels of thought to arrive at our conclusions. Therefore, it is essential within a humanist understanding to properly acknowledge the importance of perception and reason to epistemological questions.

Second, it is important to base our perception on a solid foundation, which in this case is methodological naturalism (MN). An astute summation of methodological naturalism comes to us from RationalWiki:

Methodological naturalism is the label for the required assumption of philosophical naturalism when working with the scientific method. Methodological naturalists limit their scientific research to the study of natural causes, because any attempts to define causal relationships with the supernatural are never fruitful, and result in the creation of scientific “dead ends” and God of the gaps-type hypotheses. To avoid these traps scientists assume that all causes are empirical and naturalistic; which means they can be measured, quantified and studied methodically.

MN does not rule out the possibility of the supernatural, but rather recognizes the complicated and often problematic investigations of the supernatural. This view is contrasted with philosophical naturalism (PN), which holds that the natural world is all there is and no supernatural exists. While some humanists hold the position of PN, it is more philosophically and intellectually honest to accept MN.

Having said all that, it is important to note that MN does not ignore supernatural claims altogether. When a faith healer says he can cure cancer or a psychic claims to know intimate details of your life, these are specific, testable claims that can be refuted by the scientific method. Even more broadly, when a religion makes specific claims about the natural world (God created the world in six days, God stopped the Sun in the sky, Jesus rose from the dead), these too can be debunked by scientific investigation. What MN cannot do is refute God or supernaturalism altogether, seeing as these concepts are too broad and amorphous to be falsified, falsifiability being a key component of the scientific method. Therefore, Humanism’s dedication to MN, and its lack of confidence in supernaturalism and gods, is based on the simple logic of Occam’s Razor. If a phenomenon can be explained by natural means, it is unnecessary to attribute it to supernatural means. Additionally, if a phenomenon once attributed to the supernatural is demonstrated to be real, it is simply added to what is natural.

Finally, a humanist epistemology benefits from a healthy dose of skepticism. For this perspective, we turn to the master of skepticism himself, the Scottish philosopher David Hume. In A Treatise of Human Nature, Hume explains the fallibility of the human mind:

The essence and composition of external bodies are so obscure, that we must necessarily, in our reasonings, or rather conjectures concerning them, involve ourselves in contradictions and absurdities. But as the perceptions of the mind are perfectly known, and I have us’d all imaginable caution in forming conclusions concerning them, I have always hop’d to keep clear of those contradictions, which have attended every other system.

In other words, perceptions are not knowledge. They can be twisted and contradicted from what is actually going on in the real world. This is why the process of reason is indispensable to our lives. Reason allows us to peel back the layers of “contradictions and absurdities” and come to a more accurate conceptualization of reality. As I noted in my previous essay, humans are emotional and messy, often led astray by our biases and misperceptions. Skepticism guides our thinking away from our initial perceptions and requires us to investigate deeper to best approximate our understanding of the world.

The Personal Level: Ethical Individualism

Moving from epistemology to ethics, a predominant theological and philosophical worldview focuses on the collective nature of human beings. In its more fundamentalist strains, it can amount to a complete negation of a person’s thoughts, desires, and talents. For example, the ideologies of Islamism (the politicization of certain sects of Islam), fundamentalist evangelical Christianity, and orthodox Marxism require that the individual be subservient to the cause, or the “ideal,” of the faith. In a secular lens, this type of view can be summarized by the nineteenth-century philosopher and coiner of the term “altruism,” Auguste Comte: “The individual must subordinate himself to an Existence outside himself in order to find in it the source of his stability.”

This view wholly distorts our human nature. While some scholars quibble over the nature of group-level selection (see Haidt), the foundational level of selection concerns the individual. Human beings, much like our primate ancestors and scores of other beings before us, evolved mostly through individual-level changes that accumulated over time. As Robert Sapolsky noted in his recent masterwork, Behave: The Biology of Humans at Our Best and Worst:

Animals don’t behave for the good of the species. They behave to maximize the number of copies of their genes passed into the next generation. . . . Individual selection fares better than group selection in explaining basic behaviors.

This has profound ethical implications. While it would be unwise for us to directly extrapolate a system of ethics from biology, it is helpful to understand these conclusions and their relation to us as social creatures. Humans are inherently social; we desire communication and connection. However, that does not mean we should seek to achieve these connections through collectivistic means.

Building off of that, my personal view of humanism is built on the guiding principle of individual rights. As John D. Rockefeller, Jr. once said, “I believe in the supreme worth of the individual and in his right to life, liberty and the pursuit of happiness.” This notion is bigger than biology. It is also built on the Enlightenment principle of “self-proprietorship,” beautifully outlined by the English Leveller Richard Overton (as quoted by intellectual historian and philosopher George H. Smith):

To every individual in nature is given an individual property by nature not to be invaded or usurped by any. For every one, as he is himself, so he has a self-propriety, else could he not be himself; and of this no second may presume to deprive any of without manifest violation and affront to the very principles of nature and of the rules of equity and justice between man and man.

In essence, your life belongs to you, to do with it as you see fit, so long as you do not violate the rights of another. This is a bedrock ideal within the Enlightenment political tradition and one that continues to expand the rights of all people.

In Overton’s time, thinkers attributed individual rights to a sovereign God of nature (similar to Jefferson and the founders’ notion of “Nature’s God”). While this tradition has historically been built upon that premise, it is equally valid to base these rights upon the virtue of being a thinking, sentient being with the capacity for reason. Philosopher Corliss Lamont described this concept’s classical roots and its modern application:

It is the Humanist view that if the individual pursues activities that are healthy, socially useful, and in accordance with reason, pleasure will generally accompany them; and happiness, the supreme good, will be the eventual result. This ethical doctrine goes all the way back to Aristotle and is called eudaemonism (Greek for happiness). It contrasts with hedonism, which holds that pleasure alone is intrinsically good, by putting primary emphasis on the sorts of activities that a person chooses; at the same time it assigns an important and pervasive role to pleasure. “Pleasure,” as Aristotle said, “perfects the activities,” yet remains secondary. The Humanist ethics, then, “recognizes that the intentional objects of human striving are, in point of fact, not pleasures, but pleasurable things. And by identifying the good with voluntary activities and preferred objects, which are publicly observable, it facilitates discovery, measurement and production of the good.”

Therefore, that which is in accordance with the overall flourishing of the individual, within the context of their own life and their relation to others, undergirds a humanist conception of rights. Supernaturalism and god(s) are no longer necessary.

As mentioned above, a person’s relation to others must also be taken into account. Individualism does not imply a short-sighted selfishness. Rather, it represents a committed recognition of the dignity of each person as well as of the need for social cohesion for the flourishing of our species. Lamont, again, elucidates this point perfectly:

Humanism, then, follows the golden mean by recognizing that both self-interest and altruism have their proper place and can be combined in a harmonious pattern. People who try to serve humanity must permit humanity to serve them in turn. Their own welfare is as much a part of the welfare of humankind as that of anyone else.

Our individualism must be grounded in an ethical promise to advance our own interests while seeking to advance the interests of society as a whole. Even though the Devil will be in the details (pun intended), it is the ethical project of humanism that protects individual rights while advancing all of humanity.

The Societal Level: The Moral Instinct and the Moral Framework

In the last section, I mentioned the devilish details of the individual’s ethical relation to others, generally known as morality. In my view, our morality breaks down into two major components: the moral instinct and the moral framework. Our moral instincts are the product of natural selection; we are driven by “passing on lots of copies of one’s genes” through “maximizing reproduction.” Base emotions like fear, hunger, dominance, and justice, among others, evolved over millennia so our genes could be passed on from generation to generation. This has not only made us successful biologically; it has made us successful morally. As such, actions which originally evolved to help direct kin began to help non-kin, especially once we developed our social systems.

Here’s a story to illustrate this point. In his book, Life Driven Purpose, Dan Barker recalls a story about saving a baby from being harmed at an airport. He was waiting to board the plane when he noticed that a woman had placed her infant “on top of a luggage cart, about three or four feet off the ground, and the father must have stepped away for a moment.” Out of the corner of his eye, Barker saw the carrier starting to fall to the ground, “made a quick stride to the left,” and his “finger tips caught the edge of the carrier as it was rolling towards the floor.” The mother quickly assisted him in leveling the carrier and thanked him for his action. Now, why would he do something so moral without much intellectual consideration? Barker explains:

We are animals, after all. We come prepackaged with an array of instincts inherited from our ancestors who were able to survive long enough to allow their genes–or closely related genes–to be passed to the next generation because they had those tendencies. An individual who does not care about falling babies is less likely to have his or her genes copied into the future.

The moral instinct compels us to carry out many actions without any logical considerations; we just act in accordance with our human nature. Acknowledging this aspect of who we are goes a long way to improving our ethical systems in the future.

Complementing the moral instinct is the moral framework, what we commonly call “ethics,” or a system of conceived principles that advance flourishing and limit suffering, not just in humans but in the ever-growing moral universe. One way to conceptualize the moral framework is philosopher Peter Singer’s “expanding circle.” Based on an earlier concept from historian W. E. H. Lecky, Singer’s expanding circle hinges on moral agents rationally defending their actions without prizing their own status over anyone else’s. In other words, it’s a more elaborate variation on the golden rule, but with a twist: make moral decisions among others as you would have others make moral decisions among your kin. The circle expands, as the metaphor goes, as we socially evolve to include more than just other individual humans. In time, it will include in-group members, out-group members, communities, states, countries, the entire human race, other mammals, all sentient beings, and eventually the entire spectrum of life. Using the moral framework will challenge our culturally ingrained notions of moral behavior, as its “principles are not laws written up in heaven. Nor are they absolute truths about the universe, known by intuition. The principles of ethics come from our own nature as social, reasoning beings.”

Using the benchmark of advancing flourishing and limiting suffering, there are ways in which behaviors can actually be assessed as moral or immoral. As neuroscientist Sam Harris argues in The Moral Landscape, “there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within reach of the maturing sciences of mind.” While Harris is right about the importance of science in answering moral questions, we must also use ethics when discussing moral values. Both work hand in hand, with science being the investigatory component and ethics the evaluative component, and for good reason: unbridled science (eugenics, atomic weapons) and unbridled utopianism (totalitarian philosophies such as Fascism and Marxism) can lead to immoral actions. It is only through what biologist E. O. Wilson called “consilience,” or a unification of knowledge, that we can make the best moral decisions. In all, the moral instinct and the moral framework serve as two sides of the same ethical coin. The instinctual and the conceptual both have a say in how we advance our lives and the lives of others.

The Political Level: Rights as Paramount, Science and Ethics Guide Policy

Finally, the political sphere, which combines individual and social concerns, becomes the normative framework for ensuring the flourishing of each component listed above. Democracy, the most successful and beneficial form of government, is predicated on the protection and/or fulfillment of rights through the “freely given consent of the governed.” These rights can be broken down into two categories: negative and positive. Negative rights are rights that the government cannot take away from you (freedom of speech, freedom of religion, freedom of association, etc.), while positive rights are those granted by the government, such as a right to food, clothing, shelter, medical care, and a living wage or pension system. The best encapsulation of both types of rights comes from President Franklin Roosevelt in his “Four Freedoms” speech, delivered before Congress in 1941. The “four freedoms” are freedom of speech, freedom of worship, freedom from want, and freedom from fear. The first two are negative rights while the latter two are positive rights. Our modern democratic tradition hinges on these ideals, which fit nicely into a humanist framework.

Humanist scholars such as John Dewey, Sidney Hook, and Paul Kurtz all stress the importance of a healthy democratic society based on the bedrock of political rights. Dewey, in his essay, “On Democracy,” wrote of the necessity of negative rights:

While the idea is not always, not often enough, expressed in words, the basic freedom is that of freedom of mind and of whatever degree of freedom of action and experience is necessary to produce freedom of intelligence. The modes of freedom guaranteed in the Bill of Rights are all of this nature: Freedom of belief and conscience, of expression of opinion, of assembly for discussion and conference, of the press as an organ of communication. They are guaranteed because without them individuals are not free to develop and society is deprived of what they might contribute.

Negative rights ensure that individuals are free to follow the dictates of their own conscience and intelligence to fulfill the needs of themselves and others. To secure these values, a democracy requires a strong separation of church and state and a free press, so that all citizens can pursue the values they hold dear without violating the negative liberties of others.

On the other hand, Hook writes of the “positive requirements of a democracy” in his essay “Democracy as a Way of Life.” Among the various requirements, the most important for this discussion is Hook’s notion of “economic democracy.” He explains:

By economic democracy is meant the power of the community, organized as producers and consumers, to determine the basic question of the objectives of economic development. Such economic democracy presupposes some form of social planning, but whether the economy is to be organized in a single unit or several and whether it is to be highly centralized or not are experimental questions. There are two generic criteria to decide such questions. One is the extent to which a specific form of economic organization makes possible an abundance of goods and services for the greatest number, without which formal political democracy is necessarily limited in its functions, if not actually endangered. The other is the extent to which a specific form of economic organization preserves and strengthens the conditions of the democratic process already mentioned.

Like Dewey, he’s leaving options open to the citizens of democratic societies, such as whether to be more capitalist and less socialist or vice versa. In doing so, Hook defends the principle of positive rights in the same fashion that Roosevelt did: to advance human flourishing.

Lastly, we come to Paul Kurtz and his thoughts on democracy from his book In Defense of Secular Humanism. Kurtz reaffirms the considerations made by Dewey and Hook but also emphasizes the value of discourse and participation to a functioning democracy. “. . . a political democracy,” Kurtz writes, “can be effective only if its citizens are interested in the affairs of government and participate in it by way of constant discussion, letter writing, free association, and publication. In absence of such interest, democracy will become inoperative; an informed electorate is the best guarantee of its survival.” Each of these views on democracy requires citizens to use reason, from protecting their liberties and organizing their economies to engaging in discussion and petitioning the government for a “redress of grievances.” None of these things happen by virtue of a god or by how many prayers a person can say. Rather, democracy is a human-centered, action-oriented enterprise that protects rights, builds economies, facilitates discussions, and encourages achievements.

With that in mind, a functioning democratic society relies on both science and ethics to inform our public policy. With such contentious issues as abortion, the death penalty, law enforcement overreach, sex education, vaccines, and stem cell research, it is essential that we apply our best thinking to these social problems. With only science as a guide, a government falls prey to over-bureaucratization and malfeasance, and at worst enacts policies which violate individual rights (eugenics, forced sterilization, genocide). This is why an ethical component, based on the application of reason as well as the guidepost of human flourishing, should always play a core role in shaping policy. It will not always provide us with easy answers, but it is far better than leaving our democracy to the whims of crackpots, religious fanatics, and overzealous central planners.

Conclusion: Humanity’s Future

Like so many ages before us, our age falls prey to barbarism, mysticism, hero worship, tribalism, superstition, and flat-out nonsense. To avoid these trends, we need a philosophy of life that prizes reason over faith, knowledge over ignorance, freedom over tyranny, and most importantly, humans over dogmas. Secular humanism is exactly that kind of philosophy. It is a way of life that puts human beings at the center of their own destiny, no longer chained to the whims of fundamentalist religion or totalitarianism. Its openness to new ideas and diversity of thought allow for a more enlightened religion, one that is compatible with humanism’s core principles. For those who have left gods behind, it provides the framework to live a moral and fulfilling life. The beauty of humanism is that it isn’t much of an “ism” at all; its essential values allow for a multiplicity of worldviews to coexist, in something akin to Robert Nozick’s notion of a “utopia of utopias.” By leaving society free, open, and dedicated to human flourishing, all people can live among one another with more peace, prosperity, and progress.

Isaac Asimov said it best when he declared that, “Humanists recognize that it is only when people feel free to think for themselves, using reason as their guide, that they are best capable of developing values that succeed in satisfying human needs and serving human interests.” This is the apotheosis of humanism. Despite our flaws and failures, humanity has achieved so much in its time. We have conquered the heavens and the earth, built civilizations, eradicated diseases, ameliorated poverty and suffering, expanded freedom and opportunity, and created art and literature that will last for ages. All of this occurred because we valued our lives and dedicated ourselves to improving them. Every minute we waste speculating about the afterlife limits the value of our lives right now. We are young in the vast chasm of the universe, grasping for glimpses of truth and wisdom. We have so much to learn, which requires us to leave behind the shadows of our past and walk into the light of the future with an open mind, an open hand, and an open heart. Humanism gives us the path; we just have to take the first step.


After the Exit by Justin Clark

What do we lose when we leave religion? I was asked to respond to this question by a friend and, to be honest, it’s not easily answered. For us atheists, it’s easy to list the terrible things we abandoned when we left religion: a fundamentalist dedication to barbaric texts and practices; the racism, homophobia, and misogyny of its most literalist believers; and superstitions hindering scientific and moral progress. All of these are good reasons to leave religion on the “ash heap of history.” Nevertheless, many still yearn for something “transcendent,” something to confide in when times are tough. There is also a longing for community that keeps droves within the fold. Both of these latter components are much harder to lose.

One of the biggest insights I’ve gained over the last few months, especially after reading the work of Jonathan Haidt and Emile Durkheim, is that religion is more than the sum of its beliefs. Sure, abandoning the supernatural and all of its problematic baggage is an important first step toward a better world, but beliefs are not the only thing we lose. As mentioned earlier, countless people stay within religion for its community, the songs, or the emotional connection they have with their church. Religion is a system of life, not a mere reflection of it. In the case of Christianity, it is a religion with over 2,000 years of traditions, beliefs, and cultural contextualizations. When someone spends their entire life committed to a system so totalizing, it is often jarring when they leave. I spoke to and read of former believers who felt an intense sadness when they lost their faith. It was as if a part of them died when they left it behind. This isn’t without reason.

Jonathan Haidt, in his excellent book, The Righteous Mind, devotes an entire chapter to the social character and benevolence of religion. Using his background in evolutionary psychology, Haidt illustrates that religion is not a “parasite” or “virus,” as many contemporary secular scholars believe, but a product of group selection that benefitted early humans. “If the gods evolve (culturally) to condemn selfish and divisive behaviors, they can then be used to promote cooperation and trust within the group,” Haidt notes. Human group dynamics see this play out routinely, especially in the United States. In America, the religious tend to be more social, more cooperative, and more charitable than their secular counterparts. Citing the work of Robert Putnam and David Campbell, Haidt also hits on something profoundly relevant to the socializing character of religion: specific beliefs matter far less than the charitable, community-oriented practices. Haidt concluded:

The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people.

Haidt’s insights are even more compelling for me since they come from a fellow atheist. He doesn’t dismiss some of the problematic beliefs and practices of religion, but he gives credit where credit is due. This completely reshaped how I viewed religion. Until Haidt, I obsessed over specific beliefs and traditions which I saw as irrational and harmful, and I assumed the world would improve if religion went away altogether. Now I think abandoning the social utility of religion, without a secular alternative, would be an impossible task.

A reading of Durkheim also reinforces Haidt’s findings. Emile Durkheim, a French sociologist of the nineteenth and early twentieth centuries, astutely explained the communal aspect of religion. As such, he focused less on a religion’s specific beliefs and more on its social constitution. “A religion,” wrote Durkheim, “is a unified system of beliefs and practices relative to sacred things, that is to say, things set apart and forbidden – beliefs and practices which unite a single moral community, called a ‘church,’ and all those who adhere to them.” This framework turns religious beliefs away from being ends-in-themselves and into means of communal binding. In this respect, the beliefs themselves are less ontological and more normative. Durkheim emphasizes this point in another passage: “Thus, among the cosmic forces, only those are accorded divinity which have a collective interest. In other words, it is inter-social factors which have given birth to the religious sentiment.” Losing organized religion unravels social orders and obligations, and a secular alternative must therefore satisfy both the ontological and normative aspects of human social flourishing.

Alongside the social benefits of religion, individuals also seek experiences that tie them to something bigger than themselves, which is a key component of group selection in evolution. While individual selection is the primary driver of natural selection, group selection plays an important, complementary role. Haidt further elucidates this point by stressing the importance of religion as a binding moral agent that facilitated group-level selection. “Gods and religions,” writes Haidt, “are group-level adaptations for producing cohesiveness and trust. Like maypoles and beehives, they are created by the members of the group, and they then organize the activity of the group.” Again, this takes religion off the ontological pedestal many atheists place it on and onto the pragmatic, normative plane of human existence.

But this is the group; what about individual religious experiences? From Paul’s road to Damascus and Muhammad’s revelations from the angel Gibreel to Aldous Huxley’s mescaline-fueled “perennial philosophy,” personal religious experiences abound in human history. Yet one of their drawbacks, at least in a discussion of losing religion, is that these experiences are “necessarily first person” and not easily examined with the scientific method. However, the growing field of neuroscience is helping us understand the nature of religious experiences from a naturalistic perspective. Dr. Michael Persinger’s research with his well-known “God Helmet” has provided initial findings on the connection between brain function and religious experiences. When Persinger stimulated the temporal lobe via electric pulsations, nearly 80% of his subjects reported what they described as religious experiences. Furthermore, Dr. Andrew Newberg’s research suggests some of our religious or transcendent experiences derive from multi-layered neural processes. No “God Helmet” needed.

While neuroscience shows a causal link between brain states and personal religious experiences, losing religion wouldn’t necessarily end these experiences. As Newberg rightly points out:

. . . the brain has two primary functions that can be considered from either a biological or evolutionary perspective. These two functions are self-maintenance and self-transcendence. The brain performs both of these functions throughout our lives. It turns out that religion also performs these two same functions. So, from the brain’s perspective, religion is a wonderful tool because religion helps the brain perform its primary functions. Unless the human brain undergoes some fundamental change in its function, religion and God will be here for a very long time.

Since our lives are intimately connected to how our brains function, experiences deemed “transcendent” or “religious” occur whether or not the beliefs of a religion are demonstrably true. William James said it best when he stated, “religion doesn’t work because it’s true; it’s true because it works.” Thus, losing organized religion will likely never negate the individual experience of the “transcendent” or the group dynamics resulting from natural selection.

So, what do we lose when we lose religion? In short, we lose some of the supernatural and mystical beliefs that crumble under the light of reason, but we will not lose the experiential or communal desires inherent in the human condition. These two components cannot be replaced by science and reason alone; we desire more than what we can test and independently verify. While we appeal to reason and evidence, we are also complicated, messy, and often irrational; this is what makes us human. The goal of an examined life is to try to mitigate the irrational and harmful while encouraging the reasonable and beneficial. In this regard, the experiential and communal aspects of religion will never be lost; they will simply take on a new form, as they have in the past. In the developed world, organized religion is taking on new forms or finding itself irrelevant. The fastest-growing religious demographic in the US is the “nones,” who are not necessarily atheist but not explicitly religious either. The loss of our traditionally religious life doesn’t spell the end of the numinous altogether. Rather, it represents the gain of an intellectually vibrant and diverse culture that isn’t afraid to be different.


Featured image “Exit” by Stuart Cunningham, used under Creative Commons.

How the rise of identity politics indicates the decline of religion by Tylor Lovins

See my follow-up article here: A Brief Overview of Identity Politics: A Liberal Struggles for Perspective.


“…to be a citizen has come to mean something else, it means to be an outsider….the relation itself [between people] is on its last legs inasmuch as they do not essentially relate to each other in the relation, but the relation itself has become a problem in which the parties like rivals in a game watch each other instead of relating to each other, and count, as it is said, each other’s verbal avowals of relation as a substitute for resolute mutual giving in the relation.”

Soren Kierkegaard, Two Ages: The Age of Revolution and the Present Age. A Literary Review (1846).


The rise of the worst kind of identity politics,[1] motivated by group-think, is not a shocking development, given the confluence of Marxist ideology in the social sciences; the pervasiveness of postmodern philosophy in everything from film to literature, from religion to the concepts of truth and the good; and, finally, the apparent powerlessness of the populace to effect change against known imminent crises like global warming, overpopulation, income inequality, and the like. Most in the electorate feel impotent, considering there seems no route to rouse career politicians to vote on anything that doesn’t, in the end, contribute to the lining of their suits or the thickening of party lines. It is the youngest groups that receive the largest blow. So social change must be manufactured. On the most general level of analysis, doesn’t it make intuitive sense that social change can be achieved by sheer numbers, and that the outcomes we desire must be taken from the present order, and cannot be given by it?

What may be more surprising to some readers is that this development, though seemingly benevolent in its intentions,[2] ultimately reflects the direction of history set in motion around the time of the Greeks: nihilism. As an equalizing force, flattening all idiosyncrasies into a simple, sanitized ideological order, nihilism is the characteristic movement of thought underlying our age. The list of prophets proclaiming this coming order reaches back to Kierkegaard and Nietzsche, though they are by no means the only two. Both attempted to overcome nihilism by appealing to the individual. They recovered this last fortress by reaching backwards in history and deep into the inner workings of language, for the fossils of this concept date to vestigial conceptions of the world as understood by religion.[3]

The nihilism manifested in our secular age finds the Savior not in a God-man incarnate, which within Christianity signified the importance of the individual, but in group power and identification. By replacing the near-infinite complexity of individual personhood with one or two group-based traits, identity politics, in its most extreme forms, aims at both the loss of individual liberty to group directives and the annihilation of individual identity in group belonging. This is a problem. As the religiously affiliated vanish, it is no accident that group power fills the void religion leaves behind, for the power of suffering is still evident to all. And it is no accident that violent protests at universities against free speech, and the no-platforming of scientists and conservative speakers, have become commonplace, for the social sciences have told us we can change anything when we work together. The question remains whether by sheer willpower we can change the realities science reveals through its methods. Although some institutions of higher education are stepping up to the challenges these recent developments pose, others have capitulated. It appears even Google has deferred to the ideological order when challenged with scientific viewpoints.[4] Why? Listen to any of the multitude of protests conducted by so-called “Social Justice Warriors,”[5] filmed and uploaded to YouTube, typically by the protesters themselves.[6] You’ll find a harrowing reality, where no evidence is given for assertions, virtue-signalling is the only virtue, and logic and reason are received from opposing parties as weapons of violence. In fact, speech itself is understood as violence.[7] This is a strange new world, yet hardly brave. We should be wary of the attempts of identity politics to place our value as persons in the attainment of group traits, in the assertion that mere belonging to a group bestows epistemic or moral superiority. We should be wary, that is to say, of any wisdom we haven’t earned.[8]

The observant viewer might suspect postmodernists are playing an old game, and I think this suspicion is mostly correct. As nihilism flattens the dimensions of selfhood, identity politics has made us forget our history while dooming us to repeat it. We must not forget the power of ideology that ruled the centuries before the Enlightenment, during which religious violence ravaged Europe, and we must not take for granted the miraculous gift of rationality that followed. The rise of scientific rationality displaced the more primitive strains of religious logic as the common language in which disparate systems of belief may come together to debate, change, and compromise.[9] All the same, the gift is never guaranteed. Postmodernists may mean well, but if they cannot dialogue with those who oppose them, they simply replay this scene from history, except in reverse. This time it is the abstract language of science that has written the creeds, and the social sciences that play the role of Inquisitors. The language which emerged to save us from the tribalisms of the past has created a new tribe, and this ostensibly universal rationality has materialized a new kind of terror. This is the problem that secularism, a world without religion, poses. When we forget religion, will we lose our souls? When you watch the videos, you’ll look upon a pseudo-congregation of activists chanting, wailing, gnashing their teeth. They’re like a priestly class exorcising the world of evil.[10] But these priests are of a different order, for they haven’t read their Bibles. They don’t understand that just because they believe in God doesn’t mean they’re not demons.[11]

On the Worlds of Science and Religion

 

There is a distinction to be made between the domain in which science works and articulates the world, where abstract thinking has its efficacy, and the domain in which religion[13] works and articulates the world, where mythology has its efficacy. Jordan B. Peterson draws the distinction this way: science resides in the world of objects, where things occupying space and time, distinct from one another, establish the domain of the world; objects occupying space and time are the constitutive reality. Religion operates in the world as a domain of action, the realm of being (not objects), where the most fundamental reality is suffering. The two have different logics, different conceptions of reality, and different ways of interacting with the phenomena they encounter. The problems of science, establishing cause-and-effect relationships, are not the problems of religion, where the question of perennial importance is what we do with suffering.[14] Whereas the theories of science tell us how we’ve gotten here, the culmination of religious teaching seems to be something like this: being (or existence) can be declared good despite suffering. Religious beliefs tell us what we might do to navigate the chaos of the unknown when it manifests itself as suffering or disillusionment. The world of science gives us data; the world of religion gives us meaning. These two categorically distinct ways of living and viewing the world, the scientific and the religious, exist at this point in history in an enigmatic union.

The political ideologies of the social sciences are about the closest thing we have to religion without religion, because they do offer some account of how to navigate the world as a forum for action in which suffering is a fundamental reality. Nevertheless, these accounts are altogether insufficient, because they lack a theory of good and evil that reckons with the complexity of individuality. They declare that evil is a social phenomenon, simply the result of propaganda. Change the propaganda and change the world: the mind of the individual is a vessel waiting to be filled.[15] Evil, for political ideologies, is manifest as the opposition, as the opposing group.[16] This appears to explain why postmodernists have an antipathy toward discussing ideas with people they disagree with. You might hear things like, “If they can’t recognize that is racist, I can’t help them.” Evil, both in its origin and in its manifestation, is taken to be entirely a social phenomenon.

What do we lose if we lose religion? We lose one of its fundamental insights: evil doesn’t derive from the public realm, it is only manifested there, and the line between good and evil runs down the middle of every soul. In Christianity this is the teaching of original sin. We’re not entirely rational. This insight is why religion settled the question of whether establishing the perfect state order would bring about the good life for everyone: it ultimately deferred the question to the individual. This is the victory of grace over law in the New Testament. A perfect world order won’t heal the blind man; no love or hope or law will, but faith will.

And yet another insight dissipates. Kierkegaard prophesied that our present age is one of “leveling,” in which the disparities between things and people are not resolved within their relations to each other, and personal, intimate relations are replaced by relations of abstraction. Everything is held as it appears, in abstraction: this is how one is expected to relate to the world and to others. We no longer relate to each other as persons, but as white or black, male or female, Jew or Gentile, oppressor or oppressed. The hero of the religious age is nearly extinct: the one who, by an inner peace and satisfaction before God, has gained knowledge of himself or herself, attempts to rule over carnal desires and passions rather than over others, and, with all mustered vitality, carries the truths discovered in the personal struggle to overcome suffering into the events of the world. The hero of today, in attaining the social aims the monstrous “public”[12] sets before him or her, becomes so educated, so consistent in abstracting, that he or she is flattened to the level of the crowd in complete, brazen equality. To be a hero today is to remain completely within the definitions of a particular group, to have the same history, the same sufferings, the same enemies, and the same thoughts.

Another insight of religion that disappears under the leveling of nihilism is the idea that the constitution of the self is not entirely social but at least partly subjective, and that there are things constituting the self that are not retrievable in public and may never be brought to the gradations of abstraction. The religious insight instructs us that leading a rich inward life requires taking on the sufferings you have experienced and declaring victory by the way you live. Nobody can achieve this victory for you. And if the battle against inner demons isn’t fought, history has shown that we project those demons onto the outside world, onto others.

Religion tells us evil dwells in the self. It gives us the diametrical separation between the public and the private sphere, and in so doing creates an infinitely complex notion of the individual. It tells us there are experiences and choices nobody can touch, that nobody can experience or decide upon, except the individual. This is part of the import of religious expressions such as “hearing God,” “feeling the love of God,” “knowing the will of God,” and the like. For thinkers like Kierkegaard, the movement of faith is entirely individualistic. Secularism, grounded as it is in empiricism and atheism, forgets this distinction, though it need not. The residue of religion is rotting in the carcass of culture, and its remnants, ruined as they might appear, still provide some sustenance to our values for the time being. Empiricism and atheism have themselves historically been grounded in religious values (such as the immutable value of the self, free will, and moral demands on the self).[17] Only in the void religion leaves behind, which grows by the day, can secularism be possessed by something like the political ideology I am discussing here. And it is characteristic of our age, having forgotten the religious distinction between self and society, to argue that feelings are as valid and as public as rational arguments. The mere voicing that one feels oppressed has displaced the requirement to provide evidence.

Who’s in Charge?

An old expression that both the philosopher Martin Heidegger and the psychologist Carl Jung used helps us understand the loss of religion more precisely: we don’t think thoughts; rather, thoughts have us, they occur to us. The average person has as much power over what kind of thoughts occur to them as they have to summon dreams and determine what happens in them. Thoughts are “historical and linguistic inevitabilities.”[18] This is a terrifying thought. From Freud onward, it has become clear that pictures and images are something like a precursor to abstract thought. Before humankind could objectify its emotional experiences, it had to project these emotions onto the world. Thus it discovered gods. For much of history, gods abstractly symbolized the emotions and values of cultures. The historian of religions Mircea Eliade points out that, as disparate societies met and integrated with one another, a battle of the gods would appear in their mythology over a few decades. This, of course, on an abstract level, is a merging of values between two societies: something like democratic dialogue before we had the concept of democratic dialogue. So it would happen that the victorious deity would be not one god from one culture but a combination of gods from both cultures.

As may be clear from the rise and fall of communism in the Soviet Union, and from the failure of propaganda to change the basic desires of the people involved in the revolution and the leadership that governed it, it is not self-evident that, given the chance, our good intentions to diminish suffering in the world won’t lead to an innocent and accidental opening of Pandora’s box. As Jung pointed out, humans are more than rational creatures; our minds might be more accurately construed as a dim candle of reason surrounded by whirlwinds of collective unconscious motivation, perpetually under threat of eradication by primal forces it can neither articulate nor control. Instead of losing religion to the ether of thoughtlessness by equating religion with fundamentalism (a form of nihilism itself), it might be in our best interest first to understand it, and to explore whether it has chained up or transformed indomitable beasts not unleashed in the world since the chaos that gave rise to culture.

Religion has given us images, especially in the concept of God,[19] to reconcile our unconscious motivations with our tragically limited abstract understanding of ourselves, others, and the world. Secularism does not appear to possess a functional equivalent to the religious concept of God, and this may spell our doom if we fail to understand the import of the religious concept in the first place. We may be blindly walking into battle with omnipotent dragons, armed with swords of straw. What if religion saves us from ourselves? What if the hundreds of thousands of years humanity survived by telling religious stories are in fact the Darwinian solution to the problem of reconciling the collective unconscious with the conscious mind, and the solution to the problem of suffering?[20]

I often hear from people who think it is immoral to have children, that humankind is like a cancer on the world. Jordan B. Peterson reminds us that we had better be careful which metaphors we use when talking about ourselves and the world, because it is not obvious whether we control them or they control us. If we lose religion, we lose the symbolic grounding for our understandings of ourselves, and the conceptions we have inherited from religious traditions will float in the air, without the unifying power of mythical symbols and narrative to unite them with our experiences of suffering. Ironically, as we are seeing now, the movements of religion will appear again, but in a much less sophisticated form. Instead of projecting the unspeakable phenomena of suffering from the collective unconscious onto the gods, we will project them onto other people. To borrow a line from the television series Fargo: just because dragons aren’t on the map anymore doesn’t mean they don’t exist.

We have seen the power of the collective unconscious in the ideological possession that has become common among the most irreligious section of the population: young people.[21] “We’ve done away with stories of hell, so we had to make one on Earth.”[22] The philosopher Hannah Arendt even suggested that our capacity to do evil is limited only to the extent that we think, the ability which makes us individuals. From times long before antiquity, thinking was the meaning and consequence of the divine spark that created the individual in a strike of lightning.[23] Controversially, when Arendt coined the phrase “banality of evil,” reporting on the trial of Adolf Eichmann and his use of clichés, bureaucratic language, and stock phrases in his own defense, it was this inability to think which gave rise to the banality: the effect of leveling, the movement of nihilism. “Could the activity of thinking as such, the habit of examining whatever happens to come to pass or to attract attention, regardless of results and specific content, could this activity be among the conditions that make men abstain from evil-doing?”[24] Eichmann’s identity had been swallowed up by propaganda; he had become a mere member of a group. We are children of history, and we are not so mature as to have outgrown the collective memories and powers that gave rise to the dark period of WWII.

The sea of secularism hasn’t yet swallowed the world. We still have a somewhat functional concept of God, though its functionality seems to diminish by the day. Jung pointed out that the concept signified the process of individuation, the process by which individuality is formed. This idea is worth thinking about, even if we think about nothing else relating to religion. We may have all the abstract and technological prowess in the universe, but if we lack soul, we will lose the spark of divinity, and perhaps ourselves. Religious conceptions just might be the key to resolving the disparities between groups and individuals while safeguarding the distinction between the two.

 


Videos of Protests and Protesters

With Jordan B. Peterson:

https://www.youtube.com/watch?v=ZP3mSamRbYA

At Evergreen State College:

https://www.youtube.com/watch?v=bO1agIlLlhg

https://www.youtube.com/watch?v=LTnDpoQLNaY

Middlebury College:

https://www.youtube.com/watch?v=a6EASuhefeI

UC Berkeley:

https://www.youtube.com/watch?v=unnrYfCe4bM

https://www.youtube.com/watch?v=Kku5MX7SFZ8

Endnotes

[1] I want to be clear that I am not talking about feminism in general, or the Black Lives Matter movement in general, or even progressive initiatives in general. I consider myself an adherent of classical liberalism in many ways. I am in fact on the left, and I am pointing out a blind spot to people with whom I work and agree on many issues. I am speaking here of a very specific movement that claims the same ends as these just causes: the end of misogyny, racism, sexism, homophobia, inequality, and the like. The movement I am critiquing takes the form of the blind power of herd mentality and the renunciation of reason as the grounds for the general improvement of unjust conditions. Two very specific motives undergird this movement: (1) instead of eliminating inequality by removing the obstacles to success that people encounter because of their sex, gender, or race, it intends to place obstacles in front of the “privileged” and, in an ironic bait-and-switch, to privilege historically disadvantaged groups; and (2) instead of aiming at equality of opportunity, it aims at equality of outcome.

Please see the YouTube links above to witness the kind of groups I am speaking of here.

And see these recent criticisms of identity politics for more perspective:

https://www.nytimes.com/2017/08/12/opinion/sunday/identity-politics-white-men.html

https://www.wsj.com/articles/the-poison-of-identity-politics-1502661521

https://www.vox.com/2017/8/15/16089286/identity-politics-liberalism-republicans-democrats-trump-clinton

And one I highly recommend by Michael Shermer:

https://www.scientificamerican.com/article/the-unfortunate-fallout-of-campus-postmodernism/

 

[2] Jean-Jacques Rousseau distinguished between “good” and “virtuous” in his Reveries of the Solitary Walker, and the distinction is worth noting here: he thinks humankind exists in a natural state of goodness, which means that we initially have no desire to harm one another. To be good is to do nothing to anyone, to remain in this “natural state.” But a virtuous person must earn virtue, for it implies a conscious goodwill toward others, which is only possible once social relations, language, rationality, and morality have developed. I wonder whether the many who fold to the “good intentions” of the postmodernists have considered that benevolent intentions often are not aligned with benevolent outcomes, and that it takes a bit of calculation and compromise, a bit of other-orientation, to conceive of a way of relating to others which isn’t constituted by a master/slave relationship, where one group is always the tyrant and the other the slave.

 

[3] By “religion,” I will be referring to the fundamental conception of the world that is “religious,” and I take as my starting point that each religious tradition is a response to this (namely, the world as constituted by the reality of suffering). My intellectual leanings are with the Christian tradition, however, and you will see the specifically Christian contributions painted in the broad strokes of “religion.” I use the word “religion” instead of “Christianity” because I want to refer to the conception of the world that is specifically religious, albeit instantiated in this article as Christian insights.

 

[4] http://gizmodo.com/exclusive-heres-the-full-10-page-anti-diversity-screed-1797564320

 

[5] To avoid unnecessary animus, I refer to this specific group as “postmodernists” from here on out.

 

[6] See more videos at the end of this article.

 

[7] See Jonathan Haidt’s wonderful analysis: https://www.theatlantic.com/education/archive/2017/07/why-its-a-bad-idea-to-tell-students-words-are-violence/533970/?utm_content=bufferb0bba&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

[8] “What is my wisdom, if even the fools can dictate to me? What is my freedom, if all creatures, even the botched and the impotent, are my masters? What is my life, if I am but to bow, to agree and to obey?” (Ayn Rand, Anthem)

 

[9] See Jeffrey Stout’s wonderful book The Flight from Authority.

 

[10] https://www.youtube.com/watch?v=r5_Pv0A-xjE

 

[11] This is the point of James 2:19: “You believe that there is one God. Good! Even the demons believe that—and shudder.” (NIV). Believing in a single good that solves the problem of suffering (for instance, the elimination of economic classes) is secular society’s attempt to establish something like a functional equivalent to the concept of God. But it is not functionally equivalent, because it misses another key insight of religion: sometimes you do everything right and you suffer anyway. Suffering is a basic element of life. Just because we aim at the good does not mean we won’t bring Hell on Earth in our attempts to attain it.

 

[12] “…the public is a monstrous nonentity….Only when there is no strong communal life to give substance to the concretion [of individuality] will the press create the abstraction ‘the public,’ made up of unsubstantial individuals who are never united or never can be united in the simultaneity of any situation or organization and yet are claimed to be a whole.” Kierkegaard, “On the Present Age.”

 

[13] By religion I don’t merely mean what is referred to as “organized religion” in today’s parlance. I am referring to the totality of the religious sphere: the myths, the experiences of the divine, and the social organizations.

[14] See Karen Armstrong’s A History of God, and, of course, Jordan B. Peterson’s work.

 

[15] I refer you here to The Communist Manifesto by Karl Marx and Friedrich Engels.

 

[16] In The Road to Wigan Pier, George Orwell implored socialists to organize themselves under the label of the “oppressed” against the “oppressors.”

 

[17] A glance at the history of ideas bears this out, and because suffering is what essentially establishes subjectivity (we hear this from the popular psychoanalysts today), it is no surprise that religion posits the notion of the individual. Most notably, see Larry Siedentop’s Inventing the Individual: The Origins of Western Liberalism, or the more recent work of Nick Spencer in The Evolution of the West: How Christianity Has Shaped Our Values.

 

[18] Martin Heidegger’s Poetry, Language, Thought.

 

[19] I’m using “the concept of God” here to also mean the concepts of “the sacred,” “the holy,” “the transcendent,” and/or “the divine.”

 

[20] “How is it that complex and admirable ancient civilizations could have developed and flourished, initially, if they were predicated upon nonsense? (If a culture survives, and grows, does that not indicate in some profound way that the ideas it is based upon are valid? If myths are mere superstitious proto-theories, why did they work? Why were they remembered?….)

Is it not more likely that we just do not know how it could be that traditional notions are right, given their appearance of extreme irrationality?

Is it not likely that this indicates modern philosophical ignorance, rather than ancestral philosophical error?

We have made the great mistake of assuming that the ‘world of spirit’ described by those who preceded us was the modern ‘world of matter,’ primitively conceptualized.”

Jordan B. Peterson, Maps of Meaning, 8.

 

[21] http://www.pewforum.org/religious-landscape-study/age-distribution/

 

[22] Jordan Peterson in an online lecture.

 

[23] Carl Jung, “A Study in the Process of Individuation,” 1950.

 

[24] Hannah Arendt, The Life of the Mind, 1978.

The Special Comment: What We Lose, and Gain, From Leaving Religion

What do we lose when we leave religion? I have been asked to respond to this question by a friend, and, to be honest, it is not easily answered. For us atheists, the obvious move is to list all the terrible things we abandoned when leaving religion: the dedication to barbaric texts and practices; the racism, homophobia, and misogyny of its most fundamentalist believers; the superstitions that hinder scientific and moral progress. All of these are good reasons to leave religion on the “ash heap of history.” Nevertheless, many still yearn for something bigger than us, something to turn to when times are tough. There is still a longing for the “transcendent,” alongside the need for community, that keeps droves within the fold.

We atheists are often criticized for our lack of a totalizing vision for humanity. “It’s just a negative position; you don’t believe in anything,” we’re often told. In this essay, I hope to dispel this notion and to offer a countervailing yet meaningful alternative to the broadly termed “religious” way of life. Atheists often fixate on what we don’t believe; I’m here to tell you what I and many others do believe. I also hope to show how a secular life can replace much of what people miss when they lose their religion.

For starters, atheism is merely a position on the question, “Do you believe in God?” Those of us who answer “no,” or anything other than “yes,” are atheists. However, for many who came out of religion or experienced a modicum of religious life, that position alone isn’t enough to fulfill something inside them that is experiential and not merely rational.

One of the biggest insights I’ve gained over the last few months, especially after reading the work of Jonathan Haidt and others, is that religion is more than the sum of its beliefs. Sure, abandoning the supernatural and all its problematic baggage is a great first step in creating a more humane world, but it is not the only thing we must reconsider when we lose religion. As mentioned earlier, countless people stay within religion for its community, the songs, or the emotional connection they have with their church. Religion is a system of life, not a mere reflection of it. In the case of Christianity, it is a religious practice with over 2,000 years of traditions, beliefs, and cultural contextualization. When someone spends their entire life committed to a system that totalizing, leaving is often jarring. I have spoken to and read of former believers who felt an intense sadness when they lost their faith. It was as if a part of them died when they left it behind.

I don’t know what that feels like. I grew up in a nonreligious home with largely nonreligious parents, not necessarily because they were atheists but because religion didn’t matter to them all that much. I can count on one hand the number of times I have been to a church for a religious service. I studied the major religions and tried to adopt one I found intellectually satisfying; when none of them was, I became an atheist. Atheism was exactly the kind of position that suited my life; I was a rationally minded critical thinker who neither missed nor yearned for religious experiences. My position was always an intellectual one, not an emotional or experiential one. Because of that, I always discounted these aspects of religion. Now, having read about group psychology and the importance of religion in non-rational terms, I’m starting to understand what we really leave behind when we lose religion.

I have never felt “God,” but I have felt the power of music. I have loved music my whole life. There’s something beautifully tribal about the way music makes us move, cry, and ultimately feel part of something bigger than ourselves. I especially love film music, music designed to make you feel something. It has always moved me how specific chords or motifs play off one another to elicit a response from the viewer. I’ve long believed that a movie is only as good as its soundtrack. I think religion works the same way: the way a church invites you, the hymns move you, and the sermons encourage you. It’s just like music.

This is some of what we lose when we lose religion. Numinous and transcendent experiences aren’t easily replaced by a commitment to reason and critical thinking. While those attributes are essential for living a successful and fulfilling life, there is much more to account for. This, I think, is the void that Secular Humanism fills.

Secular Humanism is a philosophical tradition as old as religion, with elements tracing back to antiquity. In essence, it comes down to three component parts: reason as the means of knowledge, ethics as the way to live among others, and experience as the goal of life.

In Secular Humanism, one leaves behind the superstitious and the mystical and embraces the reasonable and the evidential. When this component of religion is lost, the possibilities for human achievement and flourishing are boundless; they are no longer shackled by the dogmas of the past.

As for how we relate to one another, morality is firmly rooted in philosophical investigation (ethics) and a growing understanding of our nature (biology, psychology). The interplay of nature and nurture provides the framework by which we advance our individual and societal interests. It will not be easy, and it has not been made easier by religion’s near stranglehold on this conversation. For many, the only way to be moral is through religion. Secular Humanism, by contrast, provides a reasonable and palatable alternative to religion as the sole arbiter of ethics.

Finally, the experience of life, from the numinous to the communal, thrives in a Secular Humanist framework. A sense of achievement, fellowship, and transcendence exists within the real world; there is no need to rely on religion. The arts, nature, and social interaction become more fulfilling when left to open exploration. Yes, humanists generally reject the afterlife and accept the finitude of their lives, but that acceptance encourages them to live well and to treat others equally well. As the humanist philosopher Corliss Lamont once wrote, “Humanism encourages men to face life buoyantly and bravely, relying upon their own freedom and reason to fashion a noble destiny in a future that is open.”

In short, atheism is only the beginning of a person’s journey out of religion. There are countless philosophies and viewpoints to consider when one leaves one’s faith. This shouldn’t be a lament, however, but a celebration of one’s capacity for achievement and fulfillment in this life, the only one we are guaranteed to have. There is so much to gain when one sets religion aside. We can build better families, better communities, and better societies. We can dedicate ourselves to improving the lives of others through scientific discovery, intellectual achievement, and interpersonal connection. We can develop an ethics that views individual rights, not collective, irrational whims, as the pinnacle of political organization. And this can all be done while we enjoy great art and contemplate the meaning of our lives and our place within the cosmos.

While we lose a great deal when losing religion, we gain so much more in freedom, truth, beauty, and wisdom. As Penn Jillette said, “For someone who loves freedom and loves people, I don’t think we should hope for God at all.”