In the second article in our “Humanist Histories” series, we unpack the life of the Hoosier freethinker W. H. LaMaster. His freethought newspaper, the Iconoclast, became a staple of Indianapolis intellectual life through the 1880s, and he continued writing newspaper columns until his death in 1908. LaMaster advocated religious skepticism and scientific advancement, and he was a staunch opponent of temperance legislation. Alongside notable freethinkers like Ambrose Bierce, Clemens Vonnegut, and Robert Ingersoll, LaMaster helps us understand the rich religious diversity of the Midwest during the late nineteenth century.


Listing of W. H. LaMaster and his family, 1850 Census. Ancestry Library.

William Hammon LaMaster was born on February 14, 1841, in Shelbyville, Indiana, to Benjamin and Elizabeth LaMaster. His early life is mostly unknown to us, but we do know from the US Census that he lived for a time on the family farm in Missouri. LaMaster went on to serve in the Union Army with the 89th Indiana Infantry and the 146th Indiana Infantry during the Civil War. After the war, he returned home to Shelbyville (and later Liberty), passed the bar exam, and began his law practice. As early as 1868, he was beginning to make a splash in Republican Party circles. As the Daily Ohio Statesman reported, LaMaster was a “rising young lawyer of that city [Shelbyville, Indiana], a gentleman and a scholar, and hitherto was the main hub in the Republican Party in that county. He was in the war, and bears honorable scars.” In 1868, he advertised his law practice in the Connersville Examiner, describing his credentials as “Attorney at Law, and Deputy Common Pleas Prosecutor. Will practice in the Courts of Union and Fayette Counties.”

Connersville Examiner, February 10, 1869. Newspaper Archive.

Also in 1868, LaMaster began writing a regular newspaper column for the Connersville Examiner called “Liberty Items,” in which he shared his thoughts on local happenings in Liberty Township, Union County, Indiana. In his personal life, he had married Harriet Reed on December 26, 1866, with the usual proceedings of a “Minister of Gospel,” as described on their marriage record. LaMaster’s iconoclastic views had not yet bubbled to the surface, at least with regard to his nuptials.

Terre Haute Weekly Gazette, May 1, 1879. Hoosier State Chronicles.

From here, LaMaster’s story is unclear until the late 1870s, by which time his religious skepticism was in full force. By May 1879, his public life as a freethinker had emerged with a lecture entitled “The God of the Bible,” delivered at Terre Haute’s Dowling Hall. The Terre Haute Weekly Gazette observed, “From the way he states his subject something of an idea of his manner of treating it may be learned.” Unfortunately, research has yet to uncover the text of this lecture. An advertisement published in an 1884 issue of the Index suggests that it might have been akin to the well-known agnostic Robert Ingersoll’s critical lecture, Some Mistakes of Moses.

Index, October 2, 1884. Google Books.

Later that year, LaMaster published an investigative piece in the Indianapolis People critical of spiritualism and spirit mediums. LaMaster wrote:

Being a skeptic, so far as spiritualism is concerned in any form, whether manifested through ignorant mediums or otherwise, I must say that I saw nothing on my late experience among spirits in Terre Haute to convince me of the truth of modern spiritualism.

LaMaster’s exposé criticized local mediums Anna Stewart, Laura Morgan, and the ever-popular Dr. Allen Pence, concluding rather jokingly that “in the future I shall try very hard to steer clear of the ‘loving and affectionate’ embraces, or even the touch, of such familiar creatures as ghosts.”

Indianapolis People, May 31, 1879. Newspaper Archive.

When LaMaster was not debunking spiritualism in Terre Haute, he was trying to debunk another popular notion of the period: temperance. The movement, which called for curtailing or eliminating alcohol consumption, gained steam during the late nineteenth century. LaMaster viewed the ideology as he did most creeds—as an overzealous dogma used to control people’s lives. He did not mince words when he wrote in the Indianapolis People that the first temperance lecturer was the Devil, who “taught a very remote grandmother of ours the art of using, in a very temperate manner, a certain kind of ‘fruit,’ to her ‘mental’ advantage, before any wicked distiller ever thought of solving the difficult problem, how to convert its juice into intoxicating beverages.” It is important, however, to clarify LaMaster’s personal view: while he supported any individual or personal effort to be temperate with drink, he opposed using laws to move people in that direction, a distinction the Indianapolis News made sure to print.

Indianapolis News, June 16, 1879. Hoosier State Chronicles.

In the summer of 1879, LaMaster gave an anti-temperance lecture at Indianapolis’s Grand Opera House, where he criticized the “intemperance of temperance orators and temperance people.” He gave another anti-temperance lecture in Lebanon, Indiana, in November, where a correspondent to the Indianapolis Journal of Freedom and Right criticized LaMaster’s “shot gun principle” of oratory. The critic concluded, “I would advise him to quit lecturing as it is certainly not his fort [sic].” Nevertheless, LaMaster continued to criticize temperance reforms and reformers in the press, taking particular issue with the 1895 Nicholson Law, which “provided that all persons applying for a license had to specifically describe the room in which he, she or they desired to sell liquors along with the exact location of the same.” LaMaster believed the law was not “in the interest of temperance” but was rather “a measure to increase liquor drinking and drunkenness in our state.”

“What Agnosticism Is?,” in the Improvement Era, December, 1898. Google Books.

While temperance was one of LaMaster’s political hobby horses, his dedication to freethought and secularism was his main contribution to the growing diversity of Indiana’s religious thought during the late nineteenth century. In an 1898 article for the Improvement Era, “What Agnosticism Is?,” LaMaster outlined his own view regarding theological matters. He wrote:

Agnosticism as an applied theory or doctrine may therefore be said to be one which neither asserts nor denies the existence of the infinite, the absolute. Or, it may be defined as a “theory of the unknowable which assumes its most definite form in the denial of the possibility of any knowledge of God.” And so the agnostic may be said to be one who does not claim or profess to know of the existence of a supreme being called God.

Biologist Thomas Henry Huxley. Known as “Darwin’s Bulldog,” Huxley was an early champion of evolutionary theory and coined the term “agnosticism.” Getty Images.

Regarding agnosticism, LaMaster’s view mirrored that of the biologist Thomas Henry Huxley (who coined the term) as well as that of the other titan of Midwestern freethought, Robert G. Ingersoll. At the same time, LaMaster’s agnosticism undergirded his poor estimation of Christianity, which he believed rested on a shoddy foundation of “faith.” He declared:

To state the proposition more tersely we will say that while Christianity is willing to rest on “faith” alone in arriving at any one or more objective religious truths, agnosticism demands something more—it demands evidence of the highest character before accepting as very truth any kind of a religious belief or dogma. Hence we find Christianity standing for a bare and empty faith and agnosticism for the strongest and the most indisputable of testimony. And so it must be admitted that as between the Christian and the agnostic there is an impassable gulf.

For LaMaster, the use of reason, in conjunction with evidence, provided a person with the clearest picture of the world and their place within it.

Seymour Times, August 20, 1881. Newspaper Archive.

LaMaster promulgated his ideas in the Iconoclast, a newspaper he planned in the fall of 1881 and began publishing in 1882. First published in Noblesville, the paper later moved its printing operations to Indianapolis. As the Seymour Times reported, “Mr. LaMaster is a bold and fearless writer, [and] infidelity right in our own midst even in its most unsavory forms to the tastes of Christians may be expected to be advocated by him.” LaMaster published his own essays as well as works from the “world renowned orator and noble defender of free thought and mental liberty, Col. R. G. Ingersoll.” During his time in the capital city, LaMaster undertook his most enduring publishing effort, at least in regard to historical scholarship: a series of answers that Ingersoll had given to four Indianapolis clergy on matters concerning the historical accuracy of Jesus’s life, the beginnings of the universe, and pertinent moral questions. LaMaster subsequently printed Ingersoll’s Answers to Indianapolis Clergy in pamphlet form in 1893. Another notable freethought newspaper, the Truth Seeker, reprinted the essays in 1896.

Ingersoll’s answers to Indianapolis Clergy, as published by W. H. LaMaster, 1893. Indiana State University.

In the introduction to the 1893 version, LaMaster further explained his worldview and the impetus for publishing Ingersoll’s answers. He wrote:

It is for the good and well-being of the whole people that a natural religion should take the place of a supernatural one. With the imaginary or idealistic, progressive thought can have nothing to do, since it is the real, and not the ideal, that men and women should crave to find. The world is in need of a religion of humanity—one of philosophy and good deeds—and not one of creeds.

A lithograph of Robert Ingersoll, Iconoclast, March 10, 1883. Indiana State Library.

The idea of a “religion of humanity” recalls the proto-humanistic philosophy of Auguste Comte, who argued for a natural religion based on altruistic impulses and mutual affection among individuals, without the need for supernaturalism. Alongside these letters, LaMaster also published an essay, likely prepared for the International Congress of Freethinkers in Chicago, entitled “The Genesis of Life.” In it, he argued for a naturalistic explanation of life on earth, noting that “whilst there may be no particular source of life in the universe, there is always to be found a general or universal one from which it may emanate and become an active, moving, and expressive energy in organic nature.”

Mind & Matter, April 22, 1882. IAPSOP.

His years publishing the Iconoclast were difficult, especially in a city like Indianapolis, whose freethought community was “without organization,” according to the Index. “With the Iconoclast,” wrote B. F. Underwood in the same paper, “existence is yet a struggle, as it necessarily is with all young liberal journals.” Despite the success of Ingersoll’s Answers to Indianapolis Clergy, the Iconoclast ceased publication in 1886.

Over the next 20 years, LaMaster continued writing and publishing a variety of essays and pamphlets, in both journals and newspapers. In 1896, he published “The Growth and Magnitude of the Sidereal Heavens” in Popular Astronomy, in which he speculated on the existence of extraterrestrial life. “Let us then, in our magnanimity,” declared LaMaster, “rise above the compass of our human selfishness and allow our minds to be inspired with the thought that there are other worlds than ours in the starry vaults of heaven, which are the abode of even more sentient beings than ourselves.” These ideas would be echoed nearly a century later by astronomer and science communicator Carl Sagan in his television series Cosmos.

“How Do We Think,” Improvement Era, June, 1898. Internet Archive.

In another piece, “How Do We Think?,” LaMaster speculated on the interaction of language and the human mind, and on whether language is necessary for human thought. He mused:

If it be true, then, that mind is one of the endowments of matter, even in its organized forms, and one of its functions is that of thinking, it cannot be denied that it will think independently of words actually spoken or disguised . . . . Words themselves presuppose some kind of thought; in fact, words are the natural and legitimate offspring of thought.

Again, LaMaster was extremely prescient on this point. The hypothesis that thought precedes language and that our brains are hard-wired for language has been buttressed by cognitive scientists like Noam Chomsky and Steven Pinker. Despite his training as an attorney, LaMaster was evidently a man whose interest in ideas, particularly the sciences, was remarkably well-rounded for the nineteenth century.

Indianapolis News, February 26, 1895. Hoosier State Chronicles.

Throughout the 1880s and 1890s, he continued writing newspaper columns, including pieces for the Indianapolis News. In one article, from February 26, 1895, he wrote about the enduring legacy of the American revolutionary and freethinker Thomas Paine. In one of his final columns, written for the August 16, 1907 issue of the Indianapolis Star, LaMaster shared his thoughts on the human soul:

The soul per se, unlike other forms of matter, can have neither growth nor decay. It having therefore its own eternal place and fixity in the universe, it can be neither born nor can it die. And whatever then may be its form or shape it possesses potential being, and one, too, of the highest order.

This nascent spiritualism should not be taken to mean that he had changed his mind. LaMaster believed that the “soul” was likely an emergent property of humanity’s natural place in the universe. In other words, he viewed the “soul” as a manifestation of our unique personality that only develops within our physical bodies. It doesn’t exist outside of us, but comes from within us.

Indianapolis News, July 31, 1908.

In 1906, he and his family moved to Westphalia, in Knox County, Indiana, away from the hustle of Indianapolis; there he continued his intellectual pursuits until the end. LaMaster died on July 28, 1908, at the age of 67. His obituary in the Indianapolis News described him as a “frequent contributor to the Indianapolis News and other Indianapolis newspapers” and a “vigorous writer.” In that last remark, it was certainly correct. In his lifetime, LaMaster wrote for numerous newspapers and journals and produced pamphlets on a wide range of topics. His newspaper, the Iconoclast, helped cement a growing freethought community in Indianapolis, and his speculations on science are still noteworthy today. In this regard, LaMaster was a classic nineteenth-century “polymath.” In his explorations and religious unorthodoxy, LaMaster contributed much to our understanding of freethought in the Midwest during the late nineteenth century.

W. H. LaMaster’s death certificate, 1908. Ancestry Library.


American humanism has always benefited from its trailblazers, the radicals whose revolutionary ideas moved the progress of freedom, equality, and justice forward. One almost without peer was Emma Goldman, the anarchist philosopher and public intellectual, whose unique perspective on atheism constantly challenged the status quo. Goldman, a Lithuanian immigrant to the United States, toiled in the sweatshops of upstate New York before coming to political consciousness after the Haymarket affair, a bombing and riot that left police officers and civilians dead and for which labor activists were scapegoated. This event pushed her out of her first marriage and into New York City, where she met fellow anarchist Alexander Berkman and fell in love. She used her newfound freedom to study the ideas of anarchism, socialism, and atheism, which influenced all of her later activism and writing.

Portrait of Emma Goldman, circa 1911, Library of Congress.

Authorities followed Goldman her entire life. They attempted to charge her with involvement in the attempted assassination of Carnegie Steel manager Henry Clay Frick, which had been carried out by Berkman in response to the bloodshed at Homestead. Though she was involved in the plot, she was never charged, for lack of evidence. In 1901, she was wrongly arrested for alleged involvement in the assassination of President William McKinley; but, as in the Frick case, she was later released. In 1917, after she spoke out against conscription during World War I, the government convicted her of conspiring to obstruct the draft, and in 1919 it deported her from the United States. She lived in multiple countries during her exile before her death in 1940.

During her many years of activism, Goldman wrote for a variety of publications, including Mother Earth, a magazine she founded in 1906. Her writing championed free speech and expression, free love and open relationships, anarchism, the rights of labor, education, birth control, and criticisms of religion. This essay will explore Goldman’s ideas about atheism and how they fit into her larger ideological framework. As her writings will show, three core themes permeate Goldman’s work: strong advocacy for individual freedom, rejection of Christianity, and the defense of atheism. In all, Emma Goldman’s radical atheism was rooted in her love of humanity, and while the term didn’t exist then, that made her a deeply committed humanist.

The front cover of Mother Earth magazine, February 1916. This volume contains Goldman’s essay, “The Philosophy of Atheism.” Google Books.

Women as “Victims of Morality”

In 1913, Goldman published a lecture entitled “Victims of Morality,” in which she argued that religious puritanism had, like a disease, infected the moral compass of America, with significant consequences manifesting particularly in the lives of women. “Through the medium of religion they have paralyzed the mind of the people, just as morality has enslaved the spirit. In other words, religion and morality are a much better whip to keep people in submission than even the club and the gun,” Goldman wrote.[1] She was speaking in reference to Anthony Comstock, the overzealous social reformer who used his position as special agent of the U.S. Post Office Department to enforce strict laws against the transfer of purportedly “obscene” literature via the mail. In fact, the “Comstock Act,” which prohibited the mailing of obscene literature, is named after him.

Pamphlet, “Victims of Morality and The Failure of Christianity,” 1913. Google Books.

Goldman believed that Comstock’s style of Victorian puritanism violated the rights of women. “It is Morality,” said Goldman, “which condemns woman to the position of a celibate, a prostitute, or a reckless, incessant breeder of hapless children.”[2] Now, why would she capitalize “Morality”? Was she speaking in reference to a specific kind of morality? In the context of this article, her capital-M Morality referred to “Property Morality,” her view that the capitalistic United States was beholden to property. “Woe to anyone that dares to question the sanctity of property, or sins against it,” she declared.[3] In this passage, we see Goldman’s critique of morality as part of a greater critique of capitalism itself. To her, capitalism and its slavish devotion to property created conditions under which those oppressed by its machinations barely understood their own servitude. In this milieu, religion (specifically Christianity) and Victorian moralism were major contributors to false consciousness. In turn, Goldman concluded that “until the workers lose respect for the instrument of their material enslavement, they need hope for no relief.”

As indicated above, this condition wreaked havoc on the rights of women. For Goldman, the celibate is created by the morality of marriage, the prostitute by the morality of property and money, and the mother by the morality of socially sanctioned reproduction. All of these moralities amount to the same consequence: the lives of women are preordained by social roles, at the expense of their liberty and freedom. Goldman’s solution was for women to throw off the social bonds of “Morality” and embrace a moral individualism consonant with their own desires and needs. “Woman is awakening, she is throwing off the nightmare of Morality; she will no longer be bound,” Goldman wrote. “Her love is sanction enough for her.”[4] She believed that if people lived their lives without regard for gratifying the oppressive structures of church and state, they would live full lives of meaning and purpose.

Christianity and the Denial of Life

To further her critique of society’s “Morality,” she published another pamphlet lambasting its fundamental support structure: Christianity. In “The Failure of Christianity,” also published in 1913, Goldman saw herself as the rightful heir of such notable German iconoclasts as Friedrich Nietzsche and Max Stirner. Goldman declared that they “hurled blow upon blow against the portals of Christianity, because they saw in it a pernicious slave morality, the denial of life, and the destroyer of all the elements that make for strength and character.”[5] Nietzsche’s concept of “slave morality” cast Christianity as a system that enslaved by making humility, obedience, and charity into virtues, in contrast to the master moralities that prized pride, power, and nobility. Goldman agreed. As she wrote in a further passage, “I believe, with them, that Christianity is most admirably adapted to the training of slaves, to the perpetuation of a slave society; in short, to the very conditions confronting us today.”[6] Christianity, in Goldman’s eyes, ripped away our human potential by stripping us of our strength, courage, and agency.

Portrait of Friedrich Nietzsche, circa 1910-1915. His notion of “Master and Slave Moralities” influenced Goldman’s view of Christianity. Library of Congress.

She was also unforgiving toward Christ as a teacher; she saw his religion as “the embodiment of submission, of inertia, of the denial of life; hence responsible for the things done in their name.” She differentiated the concept of “Jesus Christ” into three distinct categories: the theological, the ethical, and the poetic. The theological Christ is the one presented by the Bible, a divine-human figure, with all the miracles and supernaturalism. The ethical Christ, like the one depicted in the Jefferson Bible, is stripped of supernaturalism and miracles to focus on his ethical teachings. Finally, the poetic Christ treats the story of Christ as a metaphor for life, a story that helps a person understand their place in the world. In her view, the theological Christ was refuted long ago by such luminaries as Thomas Paine, Ernest Renan, David Friedrich Strauss, and Ferdinand Christian Baur (whom she spells “Bauer”). Her main contention, which she saw as more important to the culture of her time, concerned the influence of the ethical and poetic Christs: “the ethical and poetic Christ-myth,” Goldman argued, “has so thoroughly saturated our lives, that even some of the most advanced minds make it difficult to emancipate themselves from its yoke.”[7]

Goldman’s frustration was less with the fundamentalists of Christianity (who would be refuted over time by scientific and theological inquiry) than with the liberal wing, whose dedication to the myth led to widespread ethical contradictions. They couldn’t see how the metaphor of life the poetical Christ represented had made them slaves to social and political ideologies requiring subservience and intellectual sacrifice. For instance, Christians who decried slavery lacked self-awareness of their own religion, for while it taught them ethical responsibility, it also taught them “slavish acquiescence in the will of others” and encouraged “the complete disregard of character and self-reliance, and [was] therefore destructive of liberty and well-being.”[8] Thus, well-meaning Christians actually propelled and sustained the slave trade for centuries, despite the ethical call to “love thy neighbor.” In order for a society to truly achieve progress, it must reject Christianity in any form. It is a religion which prizes the allure of heaven over the concerns of the here and now. It teaches that to be “poor in spirit” is to be virtuous, that those who toil on this earth need not be bothered with their current status or the political state of the world in which they find themselves. The rich will suffer in hell while the poor live in heaven. And most of all, it reinforces subjugation as a virtue.

This is Goldman’s central problem with Christianity; like “Morality’s” assault on women’s rights, Christianity’s insistence on meekness becomes “the whip, which capitalism and governments have used to force man into dependency, into his slave position.”[9] Furthermore, Goldman observed, “Righteousness grows out of liberty, of social and economic opportunity, and equality. But how can the meek, the poor in spirit, ever establish such a state of affairs?” In order for society to truly promote and preserve individual rights, freedom, and equality, the institutions of social cohesion (the state, market capitalism, organized Christianity) must crumble before the working classes.

Goldman’s Audacious Atheism

Alongside her continued appraisals of religion, Emma Goldman also articulated an alternative in the February 1916 issue of her magazine, Mother Earth. Called “The Philosophy of Atheism,” this short essay has become her best-known writing on the subject (and was recently included in Christopher Hitchens’s edited omnibus, The Portable Atheist). It’s striking how prescient she was in this essay, laying out ideas that have become common themes in our modern discourse on atheism. For example, she writes early in the piece that “the God idea is growing more impersonal and nebulous in proportion as the human mind is learning to understand natural phenomena and in the degree that science progressively correlates human and social events.”[10]

Emma Goldman, “The Philosophy of Atheism,” Mother Earth Magazine, February, 1916. Google Books.

Today, this critique is frequently deployed against “God of the Gaps”-style arguments for theism, which use current gaps in knowledge to posit the existence of God. Astrophysicist Neil deGrasse Tyson echoed Goldman when he said that “God is an ever-receding pocket of scientific ignorance that’s getting smaller and smaller and smaller as time moves on – so just be ready for that to happen, if that’s how you want to come at the problem.”[11] While their views are separated by nearly a century, it’s remarkable how parallel they are; this reinforces my view that American freethought goes back much farther than we often think.

Another clear influence on her own atheism was the anarchist philosopher Mikhail Bakunin, whose own work God and the State she quotes at length in “The Philosophy of Atheism.” Bakunin argued that gods were the product of “the prejudiced fancy of men who had not attained the full development and full possession of their faculties,” which led to the “abdication of human reason and justice” and “necessarily ends in the enslavement of mankind, both in theory and in practice.”[12] If this sounds familiar to you, it should, because Goldman also viewed religion as slavery and wrote about it at length in the aforementioned “Failure of Christianity.” In accepting Bakunin’s thesis, Goldman declared that “In proportion as man learns to realize himself and mold his own destiny theism becomes superfluous. How far man will be able to find his relation to his fellows will depend entirely upon how much he can outgrow his dependence upon God.”[13]

Mikhail Bakunin, photographed by Felix Nadar. His book, “God and the State,” heavily influenced Goldman’s views on atheism. New York Public Library Digital Collections.

She presaged one more well-known intellectual with her critique of what she called “theistic tolerance.” Goldman noted that as religious belief wanes in the public square, denominations of all stripes will “combine variegated religious philosophies and conflicting theistic theories into one denominational trust” in a “frantic effort to establish a common ground to rescue the modern mass from the ‘pernicious’ influence of atheistic ideas.” Therefore, “It is characteristic of theistic ‘tolerance’ that no one really cares what the people believe in, just so they believe or pretend to believe.”[14] With this analysis, she anticipated the philosopher Daniel Dennett’s concept of “belief in belief,” from his 2006 work, Breaking the Spell. In the chapter of the same name, Dennett argues that many view belief in a god or gods as essentially valuable to society, regardless of whether the god(s) exist or religious doctrines are empirically true. Like Goldman (and me), Dennett is firmly convinced that as societies forge ever more robust secular systems of justice and social harmony, there will no longer be any need for this “belief in belief.”[15] Now, Dennett wouldn’t go along with Goldman’s anarchism, but he would definitely sign on to her diagnosis. This makes her a pretty damn good prognosticator of some of mainstream atheism’s most prevalent ideas.

After clearing away religions under the lash of her pen, Goldman spends the rest of this essay articulating her view of atheism. She begins with an excellent definition:

The philosophy of Atheism represents a concept of life without any metaphysical Beyond or Divine Regulator. It is the concept of an actual, real world with its liberating, expanding and beautifying possibilities, as against an unreal world, which, with its spirits, oracles, and mean contentment has kept humanity in helpless degradation.[16]

Her definition reaffirms her commitment to the real world, not the promise of heaven or the fear of hell. In fact, she even says as much in a further passage:

The philosophy of Atheism has its roots in the earth in this life . . . . Man must break his fetters which have chained him to the gates of heaven and hell, so that he can begin to fashion out of his reawakened and illumined consciousness a new world upon earth.

Atheism allows a person to fully embrace their humanity for the betterment of themselves and the world they live in. When one is dedicated to processes of self and scientific discovery, religious notions can be easily pushed aside.

Atheism’s Moral Affirmation of Humanity

Finally, Goldman turns to moral questions. One of the oldest and most common questions unbelievers get is, “How can you be good without God?” First, she dismisses the idea of Christian morality outright, as it “has always been a vile product, imbued partly with self righteousness, partly with hypocrisy.”[17] Goldman never thought much of the traditionally Christian notion of fixed moral states set by a god; to her, such notions don’t reflect what morality is really about, which is creating a framework of human interaction based on shared norms of freedom, flourishing, and facts. In all times, she declared, the freethinkers were the ones who fought for these principles:

They knew that justice, truth, and fidelity are not conditioned in heaven, but that they are related to and interwoven with the tremendous changes going on in the social and material life of the human race; not fixed and eternal, but fluctuating, even as life itself.[18]

This could be interpreted as moral relativism, but that wasn’t Goldman’s intent. She actually believed in some moral universals such as freedom, choice, and empathy. She just couldn’t stomach a morality disconnected from real-world human needs that predicated its universals on unknowable gods and their indecipherable whims.

Atheism gives humanity agency in a way that theism doesn’t; it compels us to show up for the tasks of life, to make the hard choices, to benefit from our successes, and to learn from our failures. In a sense, it allows us to be fully human. As she writes at the end of her essay, “Atheism in its negation of gods is at the same time the strongest affirmation of man, and through man, the eternal yea to life, purpose, and beauty.”[19]

Nevertheless, it is a radical position: atheism is the eternal “Yes” to humankind.[20] Paradoxically, we often try to make our view more palatable by obscuring it, as when we tell an acquaintance that we’re “not religious” instead of explicitly atheist. While this softer framing is understandable around those who might not grasp our intentions, it is far better to lay our wares on the counter in the slim hope that some passerby might delight in our goods. This is exactly what Emma Goldman did with her writings on atheism. Raw, rancorous, and always controversial, Goldman’s iconoclasm reads nearly as modern as anything by O’Hair or Hitchens. It’s this boldness, a desire to own one’s radicalism, that electrifies her writing. This disregard for pleasant spectacle in the service of radical truth reaffirms Goldman’s rightful place in the pantheon of American humanism.

Emma Goldman, likely before her deportation from the United States, 1919. Library of Congress.

[1] Emma Goldman, Victims of Morality and the Failure of Christianity (New York: Mother Earth Publishing Association, 1913), 2, Google Books.

[2] Ibid., 3.

[3] Ibid.

[4] Ibid., 6.

[5] Ibid., 7.

[6] Ibid., 8.

[7] Ibid.

[8] Ibid., 9.

[9] Ibid., 11.

[10] Emma Goldman, “The Philosophy of Atheism,” Mother Earth Vol. 10, No. 12 (February 1916): 410, accessed June 27, 2017, Google Books.

[11] The Science Network, The Moon, the Tides and Why Neil deGrasse Tyson Is Colbert’s God, interview with Neil deGrasse Tyson, directed by Roger Bingham (2011, The Science Network), online video.

[12] Mikhail Bakunin, in Goldman, “The Philosophy of Atheism,” 410.

[13] Goldman, “The Philosophy of Atheism,” 410.

[14] Ibid., 412.

[15] Daniel Dennett, “The Folly of Pretense,” The Guardian, July 16, 2009, accessed March 15, 2018, Guardian Online.

[16] Goldman, “The Philosophy of Atheism,” 414.

[17] Ibid., 415.

[18] Ibid., 415.

[19] Ibid., 416.

[20] By contrast, Karl Barth, one of the most influential theologians of the 20th century, declared in 1918 that God speaks an eternal “No!” to man.


I originally intended to share my thoughts on a podcast, but found it more in keeping with the future of the project to share them in an essay. Reason Revolution began as a podcast devoted to the intersection of secular humanism, politics, and culture. It’s now evolved into a website and an ever-growing social media community. I am very proud of the work that I, along with my co-founder and collaborator Tylor Lovins, have done in this space. Yet, my life has changed a lot since I started this project. In my professional life, I’m working on a chapter for a book of Midwestern intellectual history, starting a YouTube series about Indiana history and its relevance to current issues, and improving my status within the public history community. I love my career and look forward to growing my profile with each passing year.

As for my personal life, it’s had its ups and downs. My health hasn’t been perfect this last year or so. Years and years of poor habits are catching up with me and trying to change them has been more difficult than I imagined. My wife, Kalie, and I are also thinking about the long-term, planning a vacation (we haven’t taken one in years; even our honeymoon was over a weekend), thinking about future job opportunities, and even buying a home. But, our lives are not without struggles. I’ve spent so many years in school and working on atheist-related side ventures that it has put a strain on our relationship. I’m committed to improving this situation so I can live a happier, more fulfilling life with my partner and do the things we’ve always talked about but have never done.

With all this in mind, Tylor and I decided to end the weekly podcast. It has been a joy and a privilege to work on this part of our project, but the time it takes to put together a weekly show is just too much of a burden for me right now. But don’t worry: the show won’t go away entirely. I may do an episode once a month or every couple of months, which will either be an “Ask Me Anything,” an interview, or a recorded presentation from my summer speaking engagements. As such, we’re moving away from SoundCloud and iTunes, and all past and future episodes will be archived on the Reason Revolution YouTube channel.

Our social media presence is also changing. We will be ending our Twitter account to focus exclusively on our Facebook page and our newly created Instagram page. The atheism community on Twitter is an incredibly toxic place right now, especially after the Lawrence Krauss revelations, so it’s not a platform presently equipped for, or interested in, the kind of discourse we aim to have in this space. Facebook and Instagram, while not perfect, are better suited for posting engaging and thought-provoking memes and advertising new content from the website. This improved strategy will hopefully result in stronger audience engagement and satisfaction with our content.

So, if the podcast is going away, what will be our primary focus? For that, we’re taking the lead from great organizations like Vox, Quillette, and Areo Magazine and focusing on long-form essays related to the topics of atheism, secular humanism, politics, and culture. Writing has always been my first love, and with the weekly podcast out of the way we’ll be writing more essays tackling these subjects. Our long-term goal is to publish an essay a week, but until we hit that mark, it may be 2-3 a month. I am currently working on three essays: a deep-dive on the “radical atheism” of anarchist philosopher Emma Goldman, a new addition to our free will series covering Dan Barker’s newest book, Free Will Explained, and a book review of Steven Pinker’s newest masterwork, Enlightenment Now. Tylor is working on a bunch of different projects, but his latest focus is a secular humanist guide to the work of clinical psychologist Jordan Peterson. In this line-up, we’re showing our shared love for Canadian psychologists who write about big-picture social and political questions, as well as other topics.

Alongside these essays, I’m bringing back the “Special Comment,” which is more off-the-cuff and topical than our long-form content. These essays will bring Reason Revolution back to its political and news-oriented roots, something missing from recent months. We’re also working on a series of essays called “Humanist Histories,” which will explore the deep legacy of freethought and secular humanism in the United States and around the world. Some of these essays will come from previous history publications I’ve worked on while others will be brand new. Our first essay about Indianapolis freethinker and newspaper publisher W. H. LaMaster will be released to readers very soon.

I’m very excited about the future of Reason Revolution and I hope that our audience will be, too. We felt that this change was for the better, not just for my time but also for the content. Since there’s a glut of atheism-related podcasts and short-form blogs, we thought the time was right to distinguish ourselves by trying something a little different. For those of you who have been with us from the beginning, we thank you and encourage you to stick around. For those of you who are new to Reason Revolution, welcome to our community and we hope you’ll like what we do. As I said from the very beginning, “dedicating ourselves to the success and further implementation of reason in our broader culture” is our goal with this project. With this new approach, I think we will do just that.

Justin Clark,

Founder, Reason Revolution

As we continue our series of articles and podcasts on the subject of free will, one particular viewpoint keeps tapping the back of my mind, like a reliable friend who is there to remind you of your lapses. What if we’re approaching the free will discussion incorrectly altogether? What if the problem of free will can’t be solved, or at least not yet? What if we don’t have the requisite knowledge to definitively answer the free will problem?

These questions were brilliantly elucidated by the grandfather of the skeptic movement himself, author Martin Gardner. Mathematician, master debunker of the paranormal, and self-proclaimed “philosophical scrivener,” Gardner outlined his views on the free will problem in an essay entitled “The Mystery of Free Will.” He argues that “the free will problem cannot be solved because we do not know exactly how to put the question.”[i] The complexities involved in establishing a proper investigation of free will (a fuller picture of human consciousness, physics, and social systems) currently preclude us from answering the free will question with any confidence. As he puts it, “Our attempt to capture the essence of that freedom either slides off into determinism, another name for destiny, or it tumbles over to the side of pure caprice. Neither definition gives us what we desperately want free will to mean.”[ii]

So, what does Gardner mean by free will? He describes free will as “another name for self-awareness or consciousness. I cannot conceive of having one without the other.”[iii] In other words, Gardner believes that free will is predicated on the presumption that human beings have some level of self-awareness or consciousness. Now, while this is descriptive of what Gardner thinks we have, he thinks we’re currently incapable of “distinguishing free will from determinism and haphazardry.”[iv] Determinism’s reductionism places free will in the ash heap of philosophical history, relegating the problem to nothing more than an illusion that we must accept. Conversely, indeterminism “becomes equally delusory, a choice made by some obscure randomizer in the brain which functions like the flip of a coin.” Neither option leaves a ponderer fully satisfied that the problem has been solved; it is best to leave free will as an open-ended mystery — “a mystery bound up, how we do not know, with the transcendent mystery of time.”[v]

With this answer, Gardner belongs to a small but influential cadre of philosophers described as the “Mysterians,” thinkers who hold that free will, mind, and consciousness pose problems we may never fully unravel. Gardner shared this view with physicist Roger Penrose, and they both believed that “there are deep mysteries about the brain that neurobiologists are nowhere close to solving.”[vi] Other “Mysterians” on the problem of free will are philosophers Thomas Nagel, Colin McGinn, and Jerry Fodor, as well as linguist and social theorist Noam Chomsky. They follow the simple but effective adage that Ludwig Wittgenstein penned in his Tractatus Logico-Philosophicus: “Whereof one cannot speak, thereof one must be silent.”[vii]

Wittgenstein appears not to be the only German-language philosopher that Gardner consulted when coming to his conclusion on free will. For that, we turn to the Prussian Enlightenment genius, Immanuel Kant. Like Kant, Gardner believed that “the best we can do (we who are not gods) is, Kant wrote, comprehend its [free will’s] incomprehensibility.”[viii] According to Kant, the empirical, rational investigation of reality rested on a logical assumption of causal determinism, but the intangible (or noumenal) aspects of human freedom (what he attributed to a soul) belonged to a “transcendent, timeless realm” where humans are “truly free.” These two contradictory forces, “empirical determinism” and “noumenal freedom,” seem impossible to reconcile.[ix]

Kant specifically addressed this issue in his work, Religion within the Limits of Reason Alone:

Here we understand perfectly well what freedom is, practically (when it is a question of duty), whereas we cannot without contradiction even think of wishing to understand theoretically the causality of freedom (or its nature).[x]

Gardner admits (as a proper skeptic) that he doesn’t necessarily buy into some of Kant’s metaphysical claims, but the general point is the same. We feel we have free will, but that feeling is at odds with what we know about the mechanics of the universe. This is an apparent contradiction that cannot be solved by mere sophistry, leaving Gardner most comfortable admitting he doesn’t have a solution.

As someone who identifies as a compatibilist and has spoken of its merits, I am equally enthralled with the mysterian position. Gardner and others are not afraid to say, “I don’t know,” which is both intellectually honest and philosophically astute. Perhaps there are mysteries about consciousness, mind, and time that we have yet to fully comprehend, and until we have the requisite knowledge about these conceptions, we are ill-equipped to solve the problem of free will. Humility is the beginning of the path to wisdom, and in that regard, Gardner had it in spades.



[i] Martin Gardner, The Night Is Large: Collected Essays, 1938-1995 (New York: Macmillan/St. Martin’s Press, 1995), 427.

[ii] Ibid., 428.

[iii] Ibid., 427.

[iv] Ibid.

[v] Ibid., 428.

[vi] Ibid., xix. In a future essay, I will explore how neuroscientist Michael Gazzaniga aptly attempts to assuage Gardner and Penrose’s fears by demonstrating a pragmatic approach to free will that is grounded in neuroscience.

[vii] Ludwig Wittgenstein, Tractatus Logico-Philosophicus (New York: Harcourt, Brace & Company, Inc., 1922), 189, accessed February 5, 2018, Google Books.

[viii] Gardner, The Night is Large, 428.

[ix] Ibid., 440.

[x] Kant, as quoted in Gardner, 440.


Harris's Moral Landscape

The history of moral thought is long and varied. Though traditionally associated with either philosophers or theologians, whose theories often extrapolate general concepts without empirical evidence, recent trends in both science and philosophy favor another approach to morality, one steeped in empirical observation and scientific study to define and defend moral principles. Garnering both controversy and praise for its fresh discussion of morality, The Moral Landscape by neuroscientist Sam Harris represents such an approach. For Harris, moral relativism (the belief that moral goods are not objective) does not effectively create a just and ethical society.[i] Additionally, he rejects moral (usually religious) absolutism, which defines moral goods under strict, dictatorial guidelines.

As an alternative to moral relativism and absolutism, Harris introduces the idea of a moral landscape, where moral situations and concepts are on a continuum of approval or disapproval based on scientific studies of neurological and social data. His benchmark for what constitutes a moral good is the “well being of conscious creatures.”[ii] This argument is a new approach to the classical study of utilitarianism, founded in the nineteenth century by philosophers Jeremy Bentham and John Stuart Mill. Bentham and Mill’s social philosophy used the idea of “the greatest good for the greatest number” as the standard by which to make moral judgments. Harris’s moral landscape is a modern, more empirically grounded version of this time-honored philosophical tradition, but focuses more on the situational aspects of moral judgment. Thus, Harris’s moral landscape provides us with a new incarnation of utilitarianism based on scientific, as well as philosophical, foundations.

Utilitarianism: The Classical Approach

Before examining the nature of Harris’s thought, we must survey classical utilitarianism. Utilitarianism, as a social and political theory, argues that moral decisions should be made by considering the greatest amount of happiness for the greatest number of people possible. The founder of this theory was political philosopher Jeremy Bentham, who outlined his concepts in “An Introduction to the Principles of Morals and Legislation.” Bentham argues, “nature has placed mankind under the governance of two sovereign masters, ‘pain’ and ‘pleasure.’ It is for them alone to point out what we ought to do, as well as to determine what we shall do.”[iii] Pain and pleasure, generally understood as functionally meaning “favorable” and “unfavorable,” self-evidently show the most appropriate actions for humanity, according to Bentham. Since we are subjected to pleasure and pain, “the ‘principle of utility’ recognizes this subjection, and assumes it for the foundation”[iv] of an ethical and moral system. In Bentham’s view, the principle of utility is the guiding precept governing moral action, both for government and for individuals, that expands pleasure or diminishes pain for the greatest number of people possible.

Bentham arrives at this conclusion through what is called the theory of “hedonistic calculus.” Hedonistic calculus aggregates the principles of intensity, duration, certainty, remoteness, fecundity (the likelihood of producing further pleasures), and purity of the pleasures or pains at stake in interactions between social individuals to establish the greatest utility possible in any given situation.[v] These criteria, which are applied like an algorithm to each moral situation individually, deliver the best possible moral outcome. This is generally called “act utilitarianism”: moral actions are made individually and situationally, but collectively expand the moral benevolence of a society. Bentham’s theory powerfully argues for the equality of humanity as well as for the unification of laws and moral customs under a principle of utility. Yet, his approach is hard to implement in the real world because there are no unifying, general axioms that might guide society toward actions of the greatest utility, and applying hedonistic calculus to every situation that requires an action simply takes too much time. This is where John Stuart Mill, the co-founder of utilitarianism, comes in to pick up the task.
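Purely as an illustration, the mechanics of such a calculus can be sketched in a few lines of Python. Everything below (the numeric scales, the aggregation formula, the example criteria names) is an invented toy for this essay; Bentham never specified numbers or a formula:

```python
from dataclasses import dataclass

@dataclass
class Pleasure:
    """One anticipated pleasure (a pain would score negatively)."""
    intensity: float   # how strong the feeling is
    duration: float    # how long it lasts
    certainty: float   # probability it occurs, 0..1
    nearness: float    # 1.0 = immediate, approaching 0 for remote
    fecundity: float   # chance it produces further pleasures, 0..1
    purity: float      # chance it is NOT followed by pains, 0..1

def hedonic_score(p: Pleasure) -> float:
    # One of many possible aggregations: expected magnitude,
    # discounted by remoteness, adjusted for knock-on effects.
    base = p.intensity * p.duration * p.certainty * p.nearness
    return base * (1 + p.fecundity) * p.purity

def choose(actions):
    """Act utilitarianism in miniature: given a dict mapping action
    names to lists of anticipated Pleasures, pick the action whose
    summed hedonic score is highest."""
    return max(actions, key=lambda a: sum(hedonic_score(p) for p in actions[a]))
```

Running `choose` on a handful of candidate actions simply returns whichever one scores highest, which captures the case-by-case character of act utilitarianism, as well as why performing such a calculation for every decision is impractical.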

Mill agrees with Bentham on the principle of utility, but he expands upon it with his own version, the “Greatest Happiness Principle.”[vi] The principle posits that “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure.”[vii] For Mill, all utilitarian moral evaluation and action is based upon this principle. Responding to critics who argued that pleasure is only of the body, Mill counters by asserting that some intellectual goals, when achieved, are more pleasurable than bodily desires and must therefore take some form of primacy over humankind’s baser, bodily pleasures.[viii] Thus, Mill’s utilitarian theory argues that broad rules must be created in accordance with the Greatest Happiness Principle in order to effectively implement a standard of morality for as many people as possible.[ix] This is known as “rule utilitarianism,” which argues for creating the greatest amount of happiness through broad, unifying guidelines that all members of a society use. But what are those rules?

In attempting to create some guidelines, Mill argues, “the ultimate sanction, therefore, of all morality…[is] the conscientious feelings of mankind.”[x] Humanity’s initial moral guidelines stem from subjective value judgments that then evolve into broader social commitments to ethical ideals like happiness. In an interesting turn, Mill dissents from Bentham and argues for something revolutionary within the utilitarian framework, something that will have a clear influence on Harris’s thinking: human morality is equivalent to states of mind. As such, the sanctions on moral behavior exist “always in the mind itself…this which is restraining me [from immoral action], and which is called my conscience, is only feeling in my own mind.”[xi] Mill’s dedication to the human mind anticipates the development of the neurological sciences and their relationship to human behavior, something Harris has openly defended. While these properties are of the mind, Mill argues that they are not innate and must be “a natural outgrowth…brought by cultivation to a high degree of development.”[xii] Another key axiom for Mill is that rules for conduct in society be created by “those who are qualified by knowledge of both ‘moral attributes and consequences,’” and that their judgment “must be admitted as final.”[xiii] Mill thinks somebody, or groups of people, should be weighing the possibilities for action given current circumstances and running the Greatest Happiness Principle through an algorithm to determine general rules of conduct. Due to the natural propensity for intellectual growth and the spread of moral guidelines through the expansion of education, utilitarianism can be applied to society through general rules of conduct. This is something Harris, presumably, would agree with.

Both Bentham and Mill created a social philosophy that philosopher Leonard Peikoff described as “knowing skepticism,” meaning that while these theories do not fully produce objective rules of conduct, the subjective value-states of humankind lead to the creation of larger rules by which society functions.[xiv] In introducing this skepticism, Mill and Bentham orchestrated a social philosophy with practical value, especially through the introduction of uniform rules of conduct based on collectively understood value judgments. Sam Harris’s “moral landscape” seeks to revamp rule utilitarianism, using neuroscience to explain social conduct and the nature of human happiness in a more scientific, objective way.

Harris’s Moral Landscape

As a trained neuroscientist, Sam Harris uses the tools of science to answer our long-standing moral and ethical dilemmas. “Human well-being entirely depends on events in the world and on states of the human brain…. Differences of opinion will remain—but opinions will be increasingly constrained by facts.”[xv] Harris puts forth a more actionable way of approaching ethics: traditional and potentially subjective modes of moral and ethical thought shift into discussions of quantifiable rules of conduct that can be measured within the constructs of science and reason. To this end, Harris posits the moral landscape as “a space of real and potential outcomes whose peaks correspond to the heights of potential well-being and whose valleys represent the deepest possible suffering.”[xvi] These moral peaks and valleys are directly proportional to levels of brain states. And under this scheme, various cultural, ethnic, religious, and social customs are represented as features of the landscape. As Harris puts it, “Culture becomes a mechanism for further social, emotional, and moral development. There is simply no doubt that the human brain is the nexus of these influences.”[xvii]
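To make the landscape metaphor concrete, consider a toy sketch. The well-being function and its two-dimensional “space of possible societies” below are entirely invented for illustration; Harris offers the landscape as a metaphor, not a formula. What the sketch preserves is the landscape’s key feature: it can have multiple peaks, meaning more than one way for a society to do well.

```python
import itertools

def well_being(x: float, y: float) -> float:
    """Hypothetical well-being over a toy 2-D space of societies
    (the axes are arbitrary, made-up policy dimensions). Two humps
    of comparable height model two distinct 'good' configurations."""
    return max(10 - (x - 2) ** 2 - (y - 2) ** 2,
               9 - (x - 7) ** 2 - (y - 7) ** 2)

def local_peaks():
    """Grid points whose well-being strictly exceeds all four
    neighbors; each is a local 'peak' of the moral landscape."""
    peaks = []
    for x, y in itertools.product(range(1, 9), repeat=2):
        here = well_being(x, y)
        if all(here > well_being(x + dx, y + dy)
               for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]):
            peaks.append((x, y))
    return peaks
```

A grid search like this is only a stand-in for the empirical work Harris envisions, but it shows why “many peaks” is compatible with his moral realism: several quite different maxima can coexist without any of them being arbitrary.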

In trying to develop better modes of moral behavior, Harris posits that general well-being, much like the utility principle for Bentham and Mill, is the benchmark for what constitutes a moral judgment, action, or outcome.[xviii] Yet, he disagrees with them about the importance of subjectivity in the moral decision-making process. Harris argues that, “there must be facts regarding human and animal well-being about which we can also be ignorant or mistaken. In both cases, science—and rational thought generally—is the tool we can use to uncover these facts.”[xix] Humanity’s evolutionary shift towards rationality and reciprocity has paved the way for moral and ethical concepts that increase the well-being of most parties within a society.[xx] The insistence on rationality, brain states, human thought, and general well-being creates the necessary moral framework that makes Harris’s views consistent with Mill’s rule utilitarianism, even though Harris believes that objective moral truths are easier to grasp than Mill did.

In explaining the nature of brain chemistry and its relation to human morality, Harris cites a study involving psychopaths and sociopaths. These two psychological categories of people, on average, make immoral or amoral decisions at the expense of others’ well-being. Harris explains that, “the first neuroimaging experiment done on psychopaths found that, when compared to nonpsychopathic criminals and noncriminal controls, they exhibit significantly less activity in regions of the brain that generally respond to emotional stimuli.”[xxi] This correlation suggests that in the future, as the nature of neuroscience progresses to create an even fuller picture of the brain, society may be able to establish social norms based on such empirical data. Harris’s explanation of evil lends itself to Mill’s view that the importance of social norms and reliance on people of experience could be used to create a utilitarianism that has real social weight.

Harris’s moral landscape shares another quality with rule utilitarianism: studies on human belief show that facts and values are intertwined. To understand this further, Harris elaborates on the nature of biases in human thought processes; he argues that bias “is not merely a source of error; it is a reliable [italics in original] pattern of error. Every bias, therefore, reveals something about the structure of the human mind.”[xxii] The problems associated with biases serve as a counterpoint to the prevailing moral precepts of a given society. Since logical arguments are built by stripping bias from a sound proposition, when facts are thus determined, they become believed; a sound fact “inspires belief.”[xxiii] Morality, in some instances, consists of inspired beliefs based on the past elimination of biases and the creation of sound facts. Logically, our understanding of sound facts allows us to implement a form of rule utilitarianism that applies to a wide variety of societies.


Sam Harris has argued that human flourishing is directly correlated with a sound understanding of the fundamental facts of human well-being, particularly freedom, security, and equality. In the conclusion to his book, he argues that, while there may never be a completely implemented form of universal morals, humanity “must admit that some interests are more defensible than others. Indeed, some interests are so compelling that they need no defense at all.”[xxiv] This brief passage on the nature of competing interests in society is one of the most powerful, implicit defenses of utilitarian thinking: some interests will take precedence over others for the greatest amount of well-being in a society, and utilitarianism gives us a way of navigating competing social interests. What makes Harris’s moral landscape important to the evolution of ethics is that it offers a method, one rooted in empirical evidence and philosophical consistency. It offers an attainable, institutional form of human morality that is a secular alternative to the all-pervasive contradictions inherent in theological ethics and moral relativism. Rule utilitarianism, from Mill’s classical form to Harris’s moral landscape, shows a systematic approach to the expansion of positive human values that, through science and philosophical inquiry, will only further evolve.




[i] A moral good is any moral decision or consequence that has the characteristic of being “moral.” So the moral good is a general term for any decision or consequence that is morally good.

[ii] Harris, 2010, p. 11

[iii] Curtis, 1962, p. 117.

[iv] Ibid.

[v] As cited in Curtis, 1962, p. 120.

[vi] Mill, 2002, p. 239.

[vii] Ibid.

[viii] Ibid., 240-241.

[ix] Ibid. 241.

[x] Ibid., 262-263.

[xi] Ibid.

[xii] Ibid., 264.

[xiii] Ibid., 243.

[xiv] Peikoff, 2012, p. 59.

[xv] Harris, 2010, pp. 2-3.

[xvi] Ibid., 7.

[xvii] Ibid., 9.

[xviii] Ibid., 55.

[xix] Ibid., 31.

[xx] Ibid.

[xxi] Ibid., 97.

[xxii] Ibid., 132.

[xxiii] Ibid., 133.

[xxiv] Ibid., 190-191.



Curtis, M. (1962). The great political theories, volume two. New York: Harper Perennial.

Harris, S. (2010). The moral landscape. New York: Free Press.

Mill, J. S. (2002). The basic writings of John Stuart Mill. New York: The Modern Library.

Peikoff, L. (2012). The DIM hypothesis. New York: New American Library.


In my previous essay, I explored the implications of life without gods and the supernatural. Acknowledging that the abandonment of traditional religion requires a complementary philosophical system, I will present secular humanism as a rigorous and applicable framework for human flourishing. This brief overview will not be exhaustive; it will outline the methodology and offer concise arguments in its defense. In sum, a life based on the application of one’s reason, ethical individualism, and democratic participation can facilitate a life of joy, freedom, and achievement.

The Humanist Epistemology

A secular humanist’s epistemology (theory of knowledge) is built upon three essential components: reason, methodological naturalism, and skepticism. First, reason is the foundational pillar from which the other components work. Reason is the capacity of human beings to create abstract thoughts and conclusions based on the concretes of reality. It is the emergent faculty of our brains that allows us to conceptualize and systematize the world. The humanist believes that reason, or our ability to perceive and then conceive, is purely natural and without need of “faith” or “revealed wisdom.”

Philosopher Harry Binswanger has delivered a series of lectures emphasizing this point, basing his conclusions on the principles of an Objectivist epistemology. In Binswanger’s estimation, perception (taking in information via the senses) is the “given” in our understanding of the world, in that it requires mere physical processes. Abstraction and conceptualization, which turn our perceptions into knowledge, are processes that require discrimination and systemization of the “raw material” of perception. This is where reason comes in. Nearly anyone can perceive a quasi-spherical red object or a vibrational difference in the atmosphere with their senses; it requires reason for the concretizing and systemizing process of conceptualization to understand that it is an apple or a song.

Faith bypasses the entire process of knowledge by appealing to “revealed” truths that one accepts without the steps of perception, concretization, and abstraction. It treats knowledge as a top-down proposition, akin to Plato’s “forms” or Kant’s “pure reason.” This is a completely inverted understanding of epistemology. As Aristotle, Locke, and others have rightly noted, knowledge is a bottom-up process, requiring ever more complicated levels of thought to arrive at our conclusions. Therefore, it is essential within a humanist understanding to properly acknowledge the importance of perception and reason to epistemological questions.

Second, it is important to base our perception on a solid foundation, which in this case is methodological naturalism (MN). An astute summation of methodological naturalism comes to us from the RationalWiki:

Methodological naturalism is the label for the required assumption of philosophical naturalism when working with the scientific method. Methodological naturalists limit their scientific research to the study of natural causes, because any attempts to define causal relationships with the supernatural are never fruitful, and result in the creation of scientific “dead ends” and God of the gaps-type hypotheses. To avoid these traps scientists assume that all causes are empirical and naturalistic; which means they can be measured, quantified and studied methodically.

MN does not rule out the possibility of the supernatural, but rather recognizes the complicated and often problematic investigations of the supernatural. This view is contrasted with philosophical naturalism (PN), which holds that the natural world is all there is and no supernatural exists. While some humanists hold the position of PN, it is more philosophically and intellectually honest to accept MN.

Having said all that, it is important to note that MN does not ignore supernatural claims altogether. When a faith healer says he can cure cancer or a psychic claims to know intimate details of your life, these are specific, testable claims that can be refuted by the scientific method. Even more broadly, when a religion makes specific claims about the natural world (God created the world in six days, God stopped the Sun in the sky, Jesus rose from the dead), these can also be debunked by scientific investigations. What MN cannot do is refute God or supernaturalism altogether, since these concepts are too broad and amorphous to be falsified, and falsifiability is a key component of the scientific method. Therefore, Humanism’s dedication to MN, and its lack of confidence in supernaturalism and gods, is based on the simple logic of Occam’s Razor. If a phenomenon can be explained by natural means, it is unnecessary to attribute it to supernatural means. Additionally, if a phenomenon once attributed to the supernatural is proven to be real, it is simply added to what is natural.

Finally, a humanist epistemology benefits from a healthy dose of skepticism. For this perspective, we turn to the master of skepticism himself, the Scottish philosopher David Hume. In A Treatise of Human Nature, Hume explains the fallibility of the human mind:

The essence and composition of external bodies are so obscure, that we must necessarily, in our reasonings, or rather conjectures concerning them, involve ourselves in contradictions and absurdities. But as the perceptions of the mind are perfectly known, and I have us’d all imaginable caution in forming conclusions concerning them, I have always hop’d to keep clear of those contradictions, which have attended every other system.

In other words, perceptions are not knowledge. They can distort, and even contradict, what is actually going on in the real world. This is why the process of reason is indispensable to our lives. Reason allows us to peel back the layers of “contradictions and absurdities” and come to a more accurate conceptualization of reality. As I noted in my previous essay, humans are emotional and messy, often led astray by our biases and misperceptions. Skepticism guides our thinking away from our initial perceptions and requires us to investigate deeper to best approximate our understanding of the world.

The Personal Level: Ethical Individualism

Moving from epistemology to ethics, a predominant theological and philosophical worldview focuses on the collective nature of human beings. In its more fundamentalist strains, this can mean a complete negation of a person’s thoughts, desires, and talents. For example, the ideologies of Islamism (the politicization of certain sects of Islam), fundamentalist evangelical Christianity, and orthodox Marxism require that the individual be subservient to the cause, or the “ideal” of the faith. Through a secular lens, this type of view can be summarized by the 19th century philosopher, and coiner of the term “altruism,” Auguste Comte: “The individual must subordinate himself to an Existence outside himself in order to find in it the source of his stability.”

This view wholly distorts our human nature. While some scholars quibble over the nature of group-level selection (see Haidt), the foundational level of selection concerns the individual. Human beings, much like our primate ancestors and scores of other beings before us, evolved through mostly individual-level changes that added up over time. As Robert Sapolsky noted in his recent masterwork, Behave: The Biology of Humans at Our Best and Worst:

Animals don’t behave for the good of the species. They behave to maximize the number of copies of their genes passed into the next generation. . . . Individual selection fares better than group selection in explaining basic behaviors.

This has profound ethical implications. While it would be unwise for us to directly extrapolate a system of ethics from biology, it is helpful to understand these conclusions and their relation to us as social creatures. Humans are inherently social; we desire communication and connection. However, that does not mean we should seek to achieve these connections through collectivistic means.

Building on that, my personal view of humanism rests on the guiding principle of individual rights. As John D. Rockefeller, Jr. once said, “I believe in the supreme worth of the individual and in his right to life, liberty and the pursuit of happiness.” This notion is bigger than biology. It is also built on the Enlightenment principle of “self-proprietorship,” beautifully outlined by the English Leveller Richard Overton (as quoted by intellectual historian and philosopher George H. Smith):

To every individual in nature is given an individual property by nature not to be invaded or usurped by any. For every one, as he is himself, so he has a self-propriety, else could he not be himself; and of this no second may presume to deprive any of without manifest violation and affront to the very principles of nature and of the rules of equity and justice between man and man.

In essence, your life belongs to you, to do with as you see fit, so long as you do not violate the rights of another. This is a bedrock ideal within the Enlightenment political tradition and one that continues to expand the rights of all people.

In Overton’s time, individual rights were attributed to a sovereign God of nature (similar to Jefferson and the founders’ notion of “Nature’s God”). While this tradition has historically been built upon that premise, it is equally valid to base these rights upon the virtue of being a thinking, sentient being with the capacity for reason. Philosopher Corliss Lamont described this concept’s classical roots and its modern application:

It is the Humanist view that if the individual pursues activities that are healthy, socially useful, and in accordance with reason, pleasure will generally accompany them; and happiness, the supreme good, will be the eventual result. This ethical doctrine goes all the way back to Aristotle and is called eudaemonism (Greek for happiness). It contrasts with hedonism, which holds that pleasure alone is intrinsically good, by putting primary emphasis on the sorts of activities that a person chooses; at the same time it assigns an important and pervasive role to pleasure. “Pleasure,” as Aristotle said, “perfects the activities,” yet remains secondary. The Humanist ethics, then, “recognizes that the intentional objects of human striving are, in point of fact, not pleasures, but pleasurable things. And by identifying the good with voluntary activities and preferred objects, which are publicly observable, it facilitates discovery, measurement and production of the good.”

Therefore, that which is in accordance with the overall flourishing of the individual, within the context of their own life and their relation to others, undergirds a humanist conception of rights. Supernaturalism and/or god(s) no longer remain necessary.

As mentioned above, a person’s relation to others must also be taken into account. Individualism does not imply a short-sighted selfishness. Rather, it represents a committed recognition of the dignity of each person as well as the need for social cohesion for the flourishing of our species. Lamont, again, elucidates this point perfectly:

Humanism, then, follows the golden mean by recognizing that both self-interest and altruism have their proper place and can be combined in a harmonious pattern. People who try to serve humanity must permit humanity to serve them in turn. Their own welfare is as much a part of the welfare of humankind as that of anyone else.

Our individualism must be grounded in an ethical promise to advance our own interests while seeking to advance the interests of society as a whole. Even though the Devil will be in the details (pun intended), it is the ethical project of humanism that protects individual rights while advancing all of humanity.

The Societal Level: The Moral Instinct and the Moral Framework

In the last section, I mentioned the devilish details of the individual’s ethical relation to others, generally known as morality. In my view, our morality breaks down into two major components: the moral instinct and the moral framework. Our moral instincts are the product of natural selection; we are driven by “passing on lots of copies of one’s genes” through “maximizing reproduction.” Base emotions like fear, hunger, dominance, and justice, among others, evolved over millennia so our genes could be passed on from generation to generation. This has not only made us successful biologically; it has made us successful morally. As such, actions which originally evolved to help direct kin began to help non-kin, especially once we developed our social systems.

Here’s a story to illustrate this point. In his book, Life Driven Purpose, Dan Barker recalls a story about saving a baby from being harmed at an airport. He was waiting to board the plane when he noticed that a woman had placed her infant “on top of a luggage cart, about three or four feet off the ground, and the father must have stepped away for a moment.” Out of the corner of his eye, Barker saw the carrier starting to fall to the ground, “made a quick stride to the left,” and his “finger tips caught the edge of the carrier as it was rolling towards the floor.” The mother quickly assisted him in leveling the carrier and thanked him for his action. Now, why would he do something so moral without much intellectual consideration? Barker explains:

We are animals, after all. We come prepackaged with an array of instincts inherited from our ancestors who were able to survive long enough to allow their genes–or closely related genes–to be passed to the next generation because they had those tendencies. An individual who does not care about falling babies is less likely to have his or her genes copied into the future.

The moral instinct compels us to carry out many actions without any logical considerations; we just act in accordance with our human nature. Acknowledging this aspect of who we are goes a long way to improving our ethical systems in the future.

Complementing the moral instinct is the moral framework, what we commonly call “ethics,” or a system of conceived principles that advance flourishing and limit suffering, not just in humans but in the ever-growing moral universe. One way to conceptualize the moral framework is philosopher Peter Singer’s “expanding circle.” Based on an earlier concept from historian W. E. H. Lecky, Singer’s expanding circle hinges on moral agents rationally defending their actions without prizing their own status over anyone else’s. In other words, it’s a more elaborate variation on the golden rule, but with a twist: make moral decisions among others as you would have others make moral decisions among your kin. The circle expands, as the metaphor goes, as we socially evolve to include more than just other individual humans. In time, it will include in-group members, out-group members, communities, states, countries, the entire human race, other mammals, all sentient beings, and eventually the entire spectrum of life. Using the moral framework will challenge our culturally ingrained notions of moral behavior, as its “principles are not laws written up in heaven. Nor are they absolute truths about the universe, known by intuition. The principles of ethics come from our own nature as social, reasoning beings.”

Using the benchmark of advancing flourishing and limiting suffering, there are ways in which behaviors can actually be assessed as moral or immoral. As neuroscientist Sam Harris argues in The Moral Landscape, “there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within reach of the maturing sciences of mind.” While Harris is right about the importance of science in answering moral questions, we must also use ethics when discussing moral values. Both work hand in hand, with science being the investigatory component and ethics being the evaluative component. This division exists for a reason: unbridled science (eugenics, atomic weapons) and unbridled utopianism (totalitarian philosophies such as Fascism and Marxism) can lead to immoral actions; it is only through what biologist E. O. Wilson called “consilience,” or a unification of knowledge, that we can make the best moral decisions. In all, the moral instinct and the moral framework serve as two sides of the same ethical coin. The instinctual and conceptual both have a say in how we advance our lives and the lives of others.

The Political Level: Rights as Paramount, Science and Ethics Guide Policy

Finally, the political sphere, which combines individual and social concerns, becomes the normative framework for ensuring the flourishing of each component listed above. Democracy, the most successful and beneficial form of government, is predicated on the protection and/or fulfillment of rights through the “freely given consent of the governed.” These rights can be broken down into two categories: negative and positive. Negative rights are rights that the government cannot take away from you (freedom of speech, freedom of religion, freedom of association, etc.) while positive rights are those that are granted by the government, such as a right to food, clothing, shelter, medical care, and a living wage or pension system. The best encapsulation of both types of rights comes from President Franklin Roosevelt, in his “Four Freedoms Speech,” delivered in front of Congress in 1941. The “four freedoms” are freedom of speech, freedom of worship, freedom from want, and freedom from fear. The first two are negative rights while the latter two are positive rights. Our modern democratic tradition hinges on these ideals, which fit nicely into a humanist framework.

Humanist scholars such as John Dewey, Sidney Hook, and Paul Kurtz all stress the importance of a healthy democratic society based on the bedrock of political rights. Dewey, in his essay, “On Democracy,” wrote of the necessity of negative rights:

While the idea is not always, not often enough, expressed in words, the basic freedom is that of freedom of mind and of whatever degree of freedom of action and experience is necessary to produce freedom of intelligence. The modes of freedom guaranteed in the Bill of Rights are all of this nature: Freedom of belief and conscience, of expression of opinion, of assembly for discussion and conference, of the press as an organ of communication. They are guaranteed because without them individuals are not free to develop and society is deprived of what they might contribute.

Negative rights ensure that individuals are free to follow the dictates of their own conscience and intelligence to fulfill the needs of themselves and others. To implement these values, a democracy requires a strong separation of church and state and a free press, so that all citizens can implement the values they hold dear without violating the negative liberties of others.

On the other hand, Hook outlines the “positive requirements of a democracy” in his essay, “Democracy as a Way of Life.” Among the various requirements, the most important to this discussion is Hook’s notion of “economic democracy.” He explains:

By economic democracy is meant the power of the community, organized as producers and consumers, to determine the basic question of the objectives of economic development. Such economic democracy presupposes some form of social planning, but whether the economy is to be organized in a single unit or several and whether it is to be highly centralized or not are experimental questions. There are two generic criteria to decide such questions. One is the extent to which a specific form of economic organization makes possible an abundance of goods and services for the greatest number, without which formal political democracy is necessarily limited in its functions, if not actually endangered. The other is the extent to which a specific form of economic organization preserves and strengthens the conditions of the democratic process already mentioned.

Like Dewey, he’s leaving options open to the citizens of democratic societies, such as whether to be more capitalist and less socialist or vice versa. In doing so, Hook defends the principle of positive rights in the same fashion that Roosevelt did: to advance human flourishing.

Lastly, we come to Paul Kurtz and his thoughts on democracy from his book, In Defense of Secular Humanism. Kurtz reaffirms the considerations made by Dewey and Hook but also emphasizes the value of discourse and participation to a functioning democracy. “. . . a political democracy,” Kurtz writes, “can be effective only if its citizens are interested in the affairs of government and participate in it by way of constant discussion, letter writing, free association, and publication. In absence of such interest, democracy will become inoperative; an informed electorate is the best guarantee of its survival.” Each of these views on democracy requires citizens to use reason, from protecting their liberties and organizing their economies to discussing ideas with others and petitioning the government for a “redress of grievances.” None of these things happen by virtue of a god or how many prayers a person can say. Rather, democracy is a human-centered, action-oriented enterprise that protects rights, builds economies, facilitates discussions, and encourages achievements.

With that in mind, a functioning democratic society relies on both science and ethics to inform our public policy. With such contentious issues as abortion, the death penalty, law enforcement overreach, sex education, vaccines, and stem cell research, it is essential that we apply our best thinking to these social problems. With only science as a guide, a government falls prey to overbureaucratization and malfeasance, and at worst, enacts policies which violate individual rights (eugenics, forced sterilization, genocide). This is why an ethical component, based on the application of reason as well as the guidepost of human flourishing, should always play a core role in shaping policy. It will not always provide us with easy answers, but it is far better than leaving our democracy to the whims of crackpots, religious fanatics, and overzealous central planners.

Conclusion: Humanity’s Future

Like so many ages before us, our age falls prey to barbarism, mysticism, hero worship, tribalism, superstition, and flat-out nonsense. To avoid these trends, we need a philosophy of life that prizes reason over faith, knowledge over ignorance, freedom over tyranny, and most importantly, humans over dogmas. Secular humanism is exactly that kind of philosophy. It is a way of life that puts human beings at the center of their own destiny, no longer chained to the whims of fundamentalist religion or totalitarianism. Its openness to new ideas and diversity of thought allow for a more enlightened religion, one that is compatible with humanism’s core principles. If you have left gods behind, it gives you the framework to live a moral and fulfilling life. The beauty of humanism is that it isn’t much of an “ism” at all; its essential values allow for a multiplicity of worldviews to coexist, in something akin to Robert Nozick’s notion of a “utopia of utopias.” By leaving society free, open, and dedicated to human flourishing, all people can live among one another with more peace, prosperity, and progress.

Isaac Asimov said it best when he declared that, “Humanists recognize that it is only when people feel free to think for themselves, using reason as their guide, that they are best capable of developing values that succeed in satisfying human needs and serving human interests.” This is the apotheosis of humanism. Despite our flaws and failures, humanity has achieved so much in its time. We have conquered the heavens and the earth, built civilizations, eradicated diseases, ameliorated poverty and suffering, expanded freedom and opportunity, and created art and literature that will last for ages. All of this occurred because we valued our lives and dedicated ourselves to improving them. Every minute we waste speculating about the afterlife limits the value of our lives right now. We are young in the vast chasm of the universe, grasping for glimpses of truth and wisdom. We have so much to learn, which requires us to leave behind the shadows of our past and walk into the light of the future with an open mind, an open hand, and an open heart. Humanism gives us the path; we just have to take the first step.


We Must Continue the Dream

Heinrich “Henry” Becker arrived in the United States in 1849, stepping off the passenger ship Hermann and claiming a new life for himself in Baltimore, Maryland. His family had lived in Prussia all their lives, but they embarked on a new path for themselves in America. His father, Friedrich Becker, brought his wife, Elizabeth, and their five children (including Heinrich) to the United States. Friedrich worked as a tailor most of his life in Baltimore but frequently made trips back to Germany. Heinrich, by contrast, likely had odd jobs before settling in Ohio as an employee of an oil mill. He became a naturalized citizen in 1854 and lived the next six decades in Dayton, Ohio. He died in 1912.

His daughter, Catharine, was born in 1864. She married Harvey Geyer in 1891 and lived in Dayton until around 1899, when she and her family moved to Peru, Indiana. It was here that Paul Richard Geyer was born. He later married Nira Amos and fathered approximately four children, among them my grandfather, Henry William “Butch” Guyer, born in 1938. Heinrich Becker, the Prussian immigrant and oil mill worker, is my Great-Great-Great Grandfather. His father, Friedrich, is my Great-Great-Great-Great Grandfather. I’m a proud descendant of German immigrants.

This is something I’ve reflected on a lot over the last few days. This week saw one of the most egregious decisions ever made by an American president: the rescinding of DACA, or “Deferred Action for Childhood Arrivals.” This policy, started in 2012 by then-President Barack Obama, ensured that children of undocumented immigrants could stay in the country under a temporary permit. If they came to the US before they were 16, were in high school or had completed high school, and had no criminal record, they could stay here under the DACA program. Over 800,000 people have utilized DACA to stay in the US. The policy grew loosely out of a Congressional proposal called the DREAM Act, which would have been a permanent version of DACA that couldn’t be manipulated by executive overreach. Despite broad public support (69% in the latest PRRI poll), the program’s rescission under the Trump Administration throws everything into uncertainty.

The Obama administration created DACA as a stop-gap measure after Congress failed to pass the DREAM Act in 2010. Some on the right, including the official line from the White House and the Justice Department, argued that DACA couldn’t withstand constitutional scrutiny. The question is not as settled as they make it seem. The Supreme Court recently issued a split ruling (before Scalia’s replacement) on a similar program, but the justices’ differences were mostly based on procedural matters. As Drexel University law professor Anil Kalhan noted in an interview with Quartz, “The issue of constitutionality has never been resolved.” The article further debunks much of Attorney General Jeff Sessions’ comments on rescinding DACA, specifically who qualifies and how the program actually works.

Now, we can set aside the legal issues here, which are complicated and unsettled, but we certainly need to discuss the moral nature of the Trump administration’s decision. It leaves the lives of 800,000 people in an even worse state of limbo than before, causing unnecessary uncertainty about employment, schooling, and eventual paths to citizenship. These young people, who came here as children and know no other home, could be deported to a land to which they have only a tangential connection. It could split up families, dissolve communities, and hurt our economy. As former Microsoft head and philanthropist Bill Gates wrote on Facebook:

DREAMers represent the best instincts of this country and the tradition that the great experiment of the United States is made better by people from other places coming here to dedicate their talents and commitment to continuing to move our country forward.

Corporate leaders, especially from Silicon Valley, strongly criticized the president’s decision this week. In fact, CEOs from across the corporate spectrum sent a letter to President Trump and Congressional leaders urging the passage of the DREAM Act and the continuation of DACA.

The strongest criticism of Trump’s actions came from his predecessor, Barack Obama. On Tuesday, the former president published an essay on his Facebook page that unpacked the real reason for this decision:

Let’s be clear: the action taken today isn’t required legally. It’s a political decision, and a moral question. Whatever concerns or complaints Americans may have about immigration in general, we shouldn’t threaten the future of this group of young people who are here through no fault of their own, who pose no threat, who are not taking away anything from the rest of us.

He’s right. President Trump made this decision to appeal to the xenophobic, and frankly anti-immigrant, wing of his dwindling political base. This was never about the DREAMers; it was about reversing a policy that made our country more diverse simply to placate a minority of extreme conservatives whose views clashed with the majority of Americans.

As a counter to this horrendous view of America, Obama outlined a better path in the closing of his remarks:

What makes us American is not a question of what we look like, or where our names come from, or the way we pray. What makes us American is our fidelity to a set of ideals – that all of us are created equal; that all of us deserve the chance to make of our lives what we will; that all of us share an obligation to stand up, speak out, and secure our most cherished values for the next generation. That’s how America has traveled this far. That’s how, if we keep at it, we will ultimately reach that more perfect union.

With that in mind, I strongly encourage you to reach out to your Senators and Congresspeople. Tell them to pass the DREAM Act once and for all, so that these people can stay here, work hard, get ahead, and become the Americans they deserve to be. I’ll even give you an easy way to do it. Text “RESIST” to 50409. Give them your name, zip code, and a short message letting them know you support DACA and the DREAMers.

Finally, I’ll leave you with a story. Last year, my former hometown of Kokomo, Indiana was hit by a tornado. It touched down near a local Starbucks, leveling it to the ground. Fortunately, no one was hurt, and that was in no small measure thanks to the manager on duty, Angel Ramos. He rushed everyone to the bathroom and saved them from the building’s collapse. His valiant efforts made him a local hero; they call him the “Starbucks Angel.” He was even commended for his actions by Starbucks CEO Howard Schultz. He recently married and has a new job in construction, “helping rebuild Kokomo.”

Angel is also a DREAMer. He came here from Mexico with his family when he was nine years old. He became a DACA recipient four years ago, and in that time, he has been able to build a great life for himself here in the US. However, with the rescinding of DACA, he faces uncertainty again. This is something I’m not sure the Trump administration understands. Every time they make a policy move like this, they seem to disregard the very human toll it takes. And all of it comes from playing petty partisan politics with people who can’t easily fight back.

It’s characteristic of a bully, someone who thinks they’re strong when they’re actually backed into a corner. In a presidency mired in scandal, comfort with white supremacy, and organizational disarray, this “policy move” is another distraction from the very problems this President has. His lashing out turns into a real hardship for people like Angel, his younger sister, and the 800,000 people helped by the DACA program. As Ramos said in a recent interview, “We’re just trying to come here for a better life. So it’s frustrating… just to see everything kind of start going backwards in a very intolerant and prejudice[d] way.”

He’s right. This isn’t good policy or good politics; it’s just prejudice. The hope is that the DREAM Act will get passed and DACA will be extended, but for now, it’s up to all of us to defend the DREAMers. People come to America for opportunity, freedom, and the chance to build a better life. It’s what brought Angel and his family here and what brought my Great-Great-Great Grandfather Heinrich and his family here. Citizenship is not based simply on where you’re from; it’s about who you are and what you do when you’re here. We’re all Americans united under the principles of life, liberty, and the pursuit of happiness. Extending the blessings of liberty to all people strengthens our country, not weakens it. To ensure the promise of our nation, we must stand by the DREAMers and their pursuit of their dream.

After the Exit by Justin Clark

What do we lose when we leave religion? I was asked to respond to this question by a friend and, to be honest, it’s not easily answered. For us atheists, it’s easy to list all the terrible things we abandoned when we left religion: a fundamentalist dedication to barbaric texts and practices; the racism, homophobia, and misogyny of its most literalist believers; and superstitions hindering scientific and moral progress. All of these are good reasons to leave religion on the “ash heap of history.” Nevertheless, many still yearn for something “transcendent,” something to confide in when times are tough. There is also a longing for community that keeps droves within the fold. Both of these latter components are much harder to lose.

One of the biggest insights I’ve gained over the last few months, especially after reading the work of Jonathan Haidt and Emile Durkheim, is that religion is more than the sum of its beliefs. Sure, abandoning the supernatural and all of its problematic baggage is an important first step towards a better world, but beliefs are not the only thing we lose. As mentioned earlier, countless people stay within religion for its community, the songs, or the emotional connection they have with their church. Religion is a system of life, not a mere reflection of it. In the case of Christianity, it is a religion with over 2,000 years of traditions, beliefs, and cultural contextualizations. When someone spends their entire life committed to a system so totalizing, it is often jarring when they leave. I spoke to and read of former believers who felt an intense sadness when they lost their faith. It was as if a part of them died when they left it behind. This isn’t without reason.

Jonathan Haidt, in his excellent book, The Righteous Mind, devotes an entire chapter to the social character and benevolence of religion. Using his background in evolutionary psychology, Haidt illustrates that religion is not a “parasite” or “virus,” as many contemporary secular scholars believe, but a product of group selection that benefitted early humans. “If the gods evolve (culturally) to condemn selfish and divisive behaviors, they can then be used to promote cooperation and trust within the group,” Haidt notes. Human group dynamics see this play out routinely, especially in the United States. In America, the religious tend to be more social, more cooperative, and more charitable than their secular counterparts. Citing the work of Robert Putnam and David Campbell, Haidt also hits on something profoundly relevant to the socializing character of religion: specific beliefs matter far less than the charitable, community-oriented practices. Haidt concluded:

The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people.

Haidt's insights are even more compelling for me since they come from a fellow atheist. He doesn't dismiss some of the problematic beliefs and practices of religion, but he gives credit where credit is due. This completely reshaped how I viewed religion. Until Haidt, I obsessed over specific beliefs and traditions which I saw as irrational and harmful, and I assumed the world would improve if religion went away altogether. Now I think that abandoning the social utility of religion, without a secular alternative, is an impossible task.

A reading of Durkheim also reinforces Haidt's findings. Emile Durkheim, a French sociologist of the nineteenth and early twentieth centuries, astutely explained the communal aspect of religion. As such, he focused less on a religion's specific beliefs and more on its social constitution. "A religion," wrote Durkheim, "is a unified system of beliefs and practices relative to sacred things, that is to say, things set apart and forbidden – beliefs and practices which unite a single moral community, called a 'church,' and all those who adhere to them." This framework turns religious beliefs away from being ends-in-themselves and into means of communal binding. In this respect, the beliefs themselves are less ontological and more normative. Durkheim emphasizes this point in another passage: "Thus, among the cosmic forces, only those are accorded divinity which have a collective interest. In other words, it is inter-social factors which have given birth to the religious sentiment." Losing organized religion unravels social orders and obligations; a secular alternative must therefore satisfy both the ontological and normative aspects of human social flourishing.

Alongside the social benefits of religion, individuals also seek experiences that tie them to something bigger than themselves, which is a key component of group selection in evolution. While individual selection is the primary driver of natural selection, group selection plays an important, complementary role. Haidt further elucidates this point by stressing the importance of religion as a binding moral agent that facilitated group-level selection. "Gods and religions," writes Haidt, "are group-level adaptations for producing cohesiveness and trust. Like maypoles and beehives, they are created by the members of the group, and then they organize the activity of the group." Again, this takes religion off the ontological pedestal many atheists place it on and into the pragmatic, normative plane of human existence.

But this is the group; what about individual religious experiences? From Paul's road to Damascus and Muhammad's revelations from the angel Gibreel to Aldous Huxley's mescaline-fueled "perennial philosophy," personal religious experiences abound in human history. Yet one of their drawbacks, at least in a discussion of losing religion, is that these experiences are "necessarily first person" and not easily examined by the scientific method. However, the growing field of neuroscience is helping us understand the nature of religious experiences from a naturalistic perspective. Dr. Michael Persinger's research, and his well-known "God Helmet," have provided initial findings on the connection between brain function and religious experiences. By stimulating the temporal lobe with electrical pulses, he found that nearly 80% of his subjects reported what they described as religious experiences. Furthermore, Dr. Andrew Newberg's research suggests some of our religious or transcendent experiences derive from multi-layered neural processes. No "God Helmet" needed.

While neuroscience suggests a causal link between brain states and personal religious experiences, losing religion wouldn't necessarily end these experiences. As Newberg rightly points out:

. . . the brain has two primary functions that can be considered from either a biological or evolutionary perspective. These two functions are self-maintenance and self-transcendence. The brain performs both of these functions throughout our lives. It turns out that religion also performs these two same functions. So, from the brain’s perspective, religion is a wonderful tool because religion helps the brain perform its primary functions. Unless the human brain undergoes some fundamental change in its function, religion and God will be here for a very long time.

Since our lives are intimately connected to how our brains function, experiences deemed “transcendent” or “religious” occur whether or not the beliefs of a religion are demonstrably true. William James said it best when he stated, “religion doesn’t work because it’s true; it’s true because it works.” Thus, losing organized religion will likely never negate the individual experience of the “transcendent” or the group dynamics resulting from natural selection.

So, what do we lose when we lose religion? In short, we lose some of the supernatural and mystical beliefs that crumble under the light of reason, but we will not lose the experiential or communal desires inherent in the human condition. These two components cannot be replaced by science and reason alone; we desire more than what we can test and independently verify. While we appeal to reason and evidence, we are also complicated, messy, and constantly irrational; this is what makes us human. The goal of an examined life is to try to mitigate the irrational and harmful while encouraging the reasonable and beneficial. In this regard, the experiential and communal aspects of religion will never be lost; they will simply take on a new form, as they have in the past. In the developed world, organized religion is taking on new forms or finding itself irrelevant. The fastest-growing religious demographic in the US is "none," which isn't necessarily atheist but isn't explicitly religious either. The loss of our traditionally religious life doesn't spell the end of the numinous altogether. Rather, it represents the gain of an intellectually vibrant and diverse culture that isn't afraid to be different.



Featured image “Exit” by Stuart Cunningham, used under Creative Commons.

We Need to End the War on Pot

In 1936, a church-funded film called Tell Your Children was released in theaters. Originally produced by George Hirliman as a propaganda film, Tell Your Children displayed youths gone wild under the influence of marijuana. However, it is best known to the world under its later title, Reefer Madness. A more salacious version, released just years later, cemented its place as one of the most ill-conceived, yet undeniably fascinating pieces of film. In both versions, young people have their lives ruined by the "dangerous" effects of marijuana, with violence, promiscuity, and death as the results of their inhalations. This type of presentation is known as "voodoo pharmacology," the idea that any drug, no matter how benign, could cause "uncontrollable urge[s] of craving and compulsion."

Popular culture has maintained this illogical and misguided view of marijuana use, so much so that public leaders continue to rail against it. Our current attorney general, Jeff Sessions, began attempting to undo Obama-era drug reforms as soon as he came into office. Last May, he sent a letter to congressional leadership urging them not to impede Justice Department prosecutions of marijuana offenses, even in states where medicinal or recreational marijuana is currently legal. In it, Sessions asserted that marijuana is "linked to an increased risk of psychiatric disorders such as psychosis," which the Guardian's Jamie Peck noted as "sound[ing] a lot like 'reefer madness' to me." I agree. While the research on medicinal marijuana is still not fully conclusive, most research suggests that it isn't any more harmful to a person than tobacco, and certainly less harmful than alcohol.

With that in mind, why have we continued a national policy of marijuana prohibition that led to 8.2 million arrests between 2001 and 2010, with African-Americans 3.73 times more likely than whites to be arrested? As the ACLU noted, marijuana accounted for 52% of all drug-related arrests during this period, and 88% of them were for mere possession. These aren't the Pablo Escobar-style drug lords we're talking about; these are millions of people who were arrested for simply possessing a little pot. Furthermore, the racial bias is ridiculous. During the same decade, African-Americans aged 18-25 used marijuana less than whites but still faced disproportionately higher arrest rates. This is on top of a historically racist and inhumane drug war that has destroyed millions of lives and countless communities.

As with many things, you can tie this nonsense back to Richard Nixon. In 1970, Nixon signed the Controlled Substances Act into law, reclassifying marijuana as a Schedule 1 drug. To give you a sense of how fucked up that is, Schedule 1 puts marijuana on par with heroin — a drug that caused nearly 13,000 overdose deaths in 2015. It also classifies marijuana as having "no currently accepted medical uses" and a "lack of accepted safety for medical use." This is definitely not the case. According to a report from the National Academies of Sciences, an analysis of 10,000 studies concluded that marijuana strongly "helps chronic pain in adults," "lessens chemotherapy-induced nausea and vomiting," and "relieves some symptoms of multiple sclerosis." It also moderately helps with "sleep problems caused by obstructive sleep apnea syndrome, fibromyalgia, chronic pain, and multiple sclerosis" and "doesn't increase risk of cancers." Now, I'm not claiming it's a wonder-drug like many cannabis supporters do, but I am following the best credible evidence we have. Based on this alone, marijuana shouldn't be classified as a Schedule 1 drug under the Controlled Substances Act.

This is only the science and the law. Let's talk about the politics of all this. As with marriage equality, the public has rapidly changed its view of marijuana legalization over the decades. In 1979, only 27% of Americans supported legalization. Today, that number is 61%, according to a recent CBS News poll. As for its supposed relationship to crime, only 23% of Americans think it's related to violent crime. As for its supposed danger to consumers, 53% of Americans think that alcohol is worse than marijuana, with only 7% believing the inverse. What do these statistics say about the changing culture of pot? For starters, many more Americans have tried marijuana than in previous generations. According to this same poll, 50% of Americans have tried marijuana, as opposed to only 34% in 1997. Americans are simply getting more and more comfortable with pot; they're learning that it isn't the boogeyman drug that politicians like Jeff Sessions paint it as.

States are also getting wise to this conclusion. Eight states and the District of Columbia have legalized marijuana for recreational use, and 30 permit medicinal uses. However, Sessions' Justice Department isn't dedicated to federalism on this matter. Despite these state-level reforms, the DEA will continue to prosecute people under the federal Controlled Substances Act. So much for limited government. Until Sessions resigns, or a new administration is elected, it appears that marijuana policy at the Justice Department will not follow the science or public opinion.

That doesn't mean that Congress can't do something about it. Last Tuesday, Senator Cory Booker (D-N.J.) introduced the Marijuana Justice Act, a bill that "would amend the Controlled Substance Act to eliminate marijuana's status as a Schedule 1 drug — a move that would decriminalize marijuana at the federal level." Booker's bill also "incentivize[s] states to legalize marijuana if their current laws have a 'disproportionate arrest rate' on minority or low-income individuals." Building on President Obama's previous reforms, the Marijuana Justice Act would retroactively apply to individuals charged with marijuana-related offenses, allowing for the commutation of sentences and the expunging of records for those already released. Booker spoke of his bill in a public statement:

Descheduling marijuana and applying that change retroactively to people currently serving time for marijuana offenses is a necessary step in correcting this unjust system. States have so far led the way in reforming our criminal justice system and it’s about time the federal government catches up and begins to assert leadership.

Booker’s bill is definitely a step in the right direction, but he needs cosponsors as well as broad, bipartisan support. Despite our era of political gridlock and intense partisanship, this is an issue that Democrats and Republicans can get behind. Democrats like it because it will help those disproportionately harmed by terrible drug policy, particularly the poor and people of color. Republicans, especially libertarian-style ones, can get behind expanding personal freedom and cutting wasteful government spending on enforcement. As the recent CBS News poll indicates, “majorities of Republicans (63 percent), Democrats (76 percent), and independents (72 percent) oppose the federal government trying to stop marijuana use in these states.” This would be a prime piece of legislation for bipartisan cooperation as well as reasonable public policy.

Besides the political, legal, and scientific reasons for decriminalizing or legalizing marijuana, there’s also the moral component. I strongly believe in the philosophy of “self-proprietorship,” the Enlightenment principle that you own your life and your body. Jacob Sullum, writer for Reason magazine, brilliantly elucidated this concept and its relation to drugs:

People have a right to control their bodies, to control what goes into their bodies, to control their minds, ultimately, because that’s what you’re talking about. If you’re talking about psychoactive drugs, you’re talking about controlling the contents of your mind, what goes on inside your brain. That’s a pretty basic right, you would think.

Your life should belong to you and you should be able to do as you wish, so long as you're not violating the rights of others. If we're a country that prizes liberty above all else, this should be a foundational component of that liberty. Alas, pious politicians, overzealous cops, and moralizing nanny-staters have marched, en masse, to stop people from living their lives as they see fit. Legalizing or decriminalizing marijuana at the federal level would do a great deal to stop them in their tracks, while increasing the liberty, safety, and happiness of our citizens and their communities. Contact your senators and representatives and tell them you want a bipartisan push for decriminalization, if not outright legalization, of marijuana at the federal level. Prohibition taught us that when you unjustly criminalize something, you inevitably create real criminals. Let's not go down that road again. Let's end the war on pot.

The Special Comment: What We Lose, and Gain, From Leaving Religion

What do we lose when we leave religion? I have been asked to respond to this question by a friend and, to be honest, it's not easily answered. For us atheists, it's easy to list all the terrible things we abandoned when leaving religion: the dedication to barbaric texts and practices; the racism, homophobia, and misogyny of its most fundamentalist believers; the superstitions that hinder scientific and moral progress. All of these are good reasons to leave religion on the "ash heap of history." Nevertheless, many still yearn for something bigger than us, something to confide in when times are tough. There is still a longing for the "transcendent," alongside the need for community, that keeps droves within the fold.

Atheists are often criticized for our lack of a totalizing vision for humanity. "It's just a negative position; you don't believe in anything," we're often told. In this essay, I hope to dispel this notion and to offer a countervailing yet meaningful way of life to the broadly termed "religious." Atheists often fixate on what we don't believe; I'm here to tell you what I and many others do believe. I also hope to show how a secular life readily replaces much of what people miss when they lose their religion.

For starters, atheism is merely an answer to the question, "Do you have a belief in God?" For those of us who say "no," or give any answer other than "yes," that makes us atheists. However, for many who came out of religion or experienced a modicum of religious life, that's not enough to fulfill something inside them that is experiential and not merely rational.

One of the biggest insights I've gained over the last few months, especially after reading the work of Jonathan Haidt and others, is that religion is more than the sum of its beliefs. Sure, abandoning the supernatural and all its problematic baggage is a great first step in creating a more humane world, but it is not the only thing we must reconsider when we lose religion. As mentioned earlier, countless people stay within religion for its community, the songs, or the emotional connection they have with their church. Religion is a system of life, not a mere reflection of it. In the case of Christianity, it is a religious practice with over 2,000 years of traditions, beliefs, and cultural contextualization. When someone spends their entire life committed to a system so totalizing, it is often jarring when they leave. I have spoken to and read of former believers who felt an intense sadness when they lost their faith. It was as if a part of them died when they left it behind.

I don't know what that feels like. I grew up in a nonreligious home with largely nonreligious parents — not necessarily because they were atheists, but because religion didn't matter to them all that much. I can count on one hand the number of times I have been to a church for a religious service. I studied the major religions in search of one I found intellectually satisfying, and when none of them fit the bill, I became an atheist. Atheism was exactly the kind of position that suited my life; I was a rationally minded, critical thinker who neither missed nor yearned for religious experiences. My position was always an intellectual one, not an emotional or experiential one. Because of that, I always discounted these aspects of religion. Now, having read about group psychology and the importance of religion in non-rational terms, I'm starting to understand what we really leave behind when we lose religion.

I have never felt "God," but I have felt the power of music. I have loved music my whole life. There's something beautifully tribal about the way that music makes us move, cry, and ultimately feel a part of something bigger than ourselves. I especially love film music; I love music that is designed to make you feel something. It has always moved me how specific chords or motifs play off one another to elicit a response from the viewer. I've long believed that a movie is only as good as its soundtrack. I think religion works the same way. The way a church invites you, the hymns move you, and the sermons encourage you. It's just like music.

This is some of what we lose when we lose religion. The numinous and transcendent experiences aren't easily replaced by a commitment to reason and critical thinking. While those attributes are essential for living a successful and fulfilling life, there's so much more we have to account for. As such, I think that Secular Humanism fills this void.

Secular Humanism is a philosophical tradition as old as religion, with elements tracing back to antiquity. In essence, it comes down to three component parts: reason as the means of knowledge, ethics as the way to live among others, and experience as the goal of life.

In secular humanism, one leaves behind the superstitious and mystical and embraces the reasonable and evidential. When this component of religion is lost, the possibilities of human achievement and flourishing are boundless; they are no longer shackled by the dogmas of the past.

As for humanity’s relationship to each other, morality is firmly rooted in philosophical investigations (ethics) and a growing understanding of our nature (biology, psychology). The interplay of nature and nurture provides us with the framework by which we advance our individual and societal interests. It will not be easy, but it has not been made easier by religion’s near stranglehold on this conversation. For many, the only way to be moral is through religion. Secular Humanism, by contrast, provides a reasonable and palatable alternative to religion as the sole arbiter of ethics.

Finally, the experience of life, from the numinous to the communal, thrives in a Secular Humanist framework. A sense of achievement, fellowship, and transcendence exists within the real world; there’s no need to rely on religion. The arts, nature, and social interaction become more fulfilling when left to open exploration. Yes, humanists generally reject the afterlife and accept the finitude of their lives, but that encourages them to live well and to treat others equally well. As the humanist philosopher Corliss Lamont once wrote, “Humanism encourages men to face life buoyantly and bravely, relying upon their own freedom and reason to fashion a noble destiny in a future that is open.”

In short, atheism is only the beginning of a person's journey when they lose their religion. There are countless philosophies and viewpoints to consider when one leaves their faith. However, this shouldn't be a lament but a celebration of one's capacity for achievement and fulfillment in this life, the only one we are guaranteed to have. There is so much to gain when one casts religion to the wayside. We can build better families, better communities, and better societies. We can dedicate ourselves to improving the lives of others through scientific discovery, intellectual achievement, and interpersonal connection. We can develop an ethics that views individual rights, not collective, irrational whims, as the pinnacle of political organization. And this can all be done while we enjoy great art and contemplate the meaning of our lives and our place within the cosmos.

While we lose a great deal when losing religion, we gain so much more in freedom, truth, beauty, and wisdom. As Penn Jillette said, “For someone who loves freedom and loves people, I don’t think we should hope for God at all.”