Tuesday, August 9, 2016

Universal Ethical System

Morality is a set of culturally and emotionally derived principles regarding what people consider to be good vs. bad.  Scientists believe humans evolved to be moral as a means of improving the survival potential of our peer groups.  This biological drive to be moral, coupled with the evolving needs and perspectives of cultures and societies, produced a wide variety of ideas regarding how people ought to behave.  Ethics, as opposed to morals, seeks to universalize principles of what constitutes good and bad, using approaches based on reason.  Unfortunately, philosophers don’t agree on which approach to ethics is truly universal.  According to philpapers.org, most philosophers are divided among three approaches:

  • Deontology (26% of Philosophers): Rightness or wrongness is based on adherence to moral laws or duties.  Deontologists care more about the intention of an action than about its consequences.  Example: Murdering innocent people is wrong because murdering innocent people is inherently wrong.
  • Consequentialism (24% of Philosophers): Rightness or wrongness is based on the consequences of an action, regardless of the intentions or character of the actor.  Example: Murdering innocent people is wrong because it leads to undesirable consequences such as fear, pain, and grief.
  • Virtue Ethics (18% of Philosophers): Rightness or wrongness is based on the character of an individual.  Certain traits are deemed “good” and others “bad”.  Actions are not morally right because of an outcome or adherence to a duty.  Instead, they are good if a person is acting as a result of a virtue. Example: Murdering innocent people is wrong because it requires the absence of certain virtues such as compassion and fairness.

Each of these approaches has different manifestations, each with its own advantages and disadvantages.  In short, deontology and virtue ethics are far more practical and natural for us humans, but they lack a solid justification for why certain duties or virtues are good.  Consequentialism has a solid justification for why actions are good or bad (i.e. their consequences); however, it can lead to rather uncomfortable scenarios in which actions that appear immoral are considered good because they lead to good outcomes, such as murdering someone to harvest their organs and save the lives of several other people.

I am a consequentialist because deontology and virtue ethics cannot be justified without considering how certain duties or virtues impact the world.  Nothing is “good just because”.  Those who believe otherwise generally hold that duties and virtues are “good because they’re natural” or “good because of God or a higher power”.  The former is based on the naturalistic fallacy and can be dismissed outright, given that not everything “natural” is universally considered good and not everything “un-natural” is universally considered bad (e.g. AIDS is natural, but computers are not).  The latter should be dismissed because there is neither reliable evidence nor a compelling argument that supports the existence of any god or anything supernatural.  This would be why 78% of philosophers (the world’s argument experts) are either atheists or agnostics, and only 26% claim to be non-naturalists (i.e. believe in supernatural forces).  It is also why 75% are scientific realists, meaning they believe judgments should be based on current scientific models of how the universe works.  As it happens, there are no widely held scientific models of the universe that include the supernatural or any gods.

Despite being a consequentialist, I believe that adherence to virtues and duties can be useful in achieving preferred consequences.  For example, if the desired consequence were a world that maximizes human happiness, then a world inhabited by people who are empathetic and who abide by their obligations would serve that aim.  Given that no one can predict the future, it is difficult for individuals to always make ethical decisions based on consequences.  Often it is easier to adhere to duties and virtues that generally lead to positive outcomes.  Thus, an action is ethically right if it abides by useful duties or virtues when its consequences are ambiguous.  Otherwise, when consequences are unambiguous, they should drive the action, given that they are the foundation of what is considered ethical.  One aspect of this distinction is worth noting: the goodness of a person can only be judged by how they decide to act, not by the consequences of their actions.  People make mistakes, and not all consequences can be foreseen.  An optimally ethical person could only ever be expected to make the best decision given the information at hand.  Anyone who could always act in ways that lead to an ultimately desired outcome would be either omniscient or lucky.

In general, ethics is about doing good and not doing bad.  Even when goodness and badness are based on consequences, which are concrete, observable, and tied to the physical universe, one still has to justify why those consequences are actually good or bad.  As already stated, nothing is “good just because”, and there is no supreme arbiter of good and evil.  The universe does not care whether or not one person murders another.  Yet we as individuals know goodness exists because we know what it is like to feel good or to feel bad.  We know that some things make us happy, and those are good.  We also know that some things make us sad, and those are bad.  Thus, on an individual basis, we know that good and bad exist.  They are our positive and negative reactions to our life circumstances.  However, even on an individual basis, it is difficult to know what is ultimately good or bad.  Suffering sometimes leads to positive outcomes, and happiness sometimes leads to negative ones.  To understand the ultimate goodness or badness experienced by an individual, one has to take into account the positive and negative reactions to circumstances experienced over the span of their life.  In my view, the ideal way to assess the degree of goodness or badness of a person’s life would be to query them throughout their life regarding their satisfaction with their past, present, and likely future circumstances.  The more moments of satisfaction relative to dissatisfaction, the better the life.

Goodness and badness exist on a spectrum.  For example, most would agree that having one’s finger accidentally cut off is bad, but having one’s leg cut off would be worse.  Likewise, finding a $5 bill on the street is good, but finding a $100 bill would be better.  Thus, the goodness or badness of a life isn’t based merely on the percentage of positive or negative moments experienced over a lifetime; it is the average degree of those positive and negative moments.  For simplicity’s sake, we can use a scale from -100 to 100 to rate the degree of life satisfaction or dissatisfaction experienced in any given moment.  A good life would be one in which the average of all moments is above 0, and the best life would be one with an average close to 100.  To the individual, any event that brings the average down is bad, and any event that brings the average up is good.  However, an event that takes someone from 70 to 50 would be better than an event that takes someone from 10 to -10, despite the event being worth -20 in each case.  This is because the end score is what ultimately matters.  Accordingly, in terms of priority, we should first seek events that keep us above 0, and then events that increase the average.  Sure, -5 is better than -50, but -5 still amounts to an overall bad life.
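To make the arithmetic concrete, here is a minimal sketch of this scoring scheme in Python.  Every number and name in it is a made-up illustration, not data; it simply shows how averaging moment-by-moment scores works and why the same -20 drop matters more near zero.

    # Illustrative sketch of the -100..100 life-satisfaction averaging described above.
    # All numbers are hypothetical examples.

    def life_score(moments):
        """Average satisfaction across sampled moments, each rated -100 to 100."""
        return sum(moments) / len(moments)

    steady_good = [70, 60, 80, 50]      # average 65.0: a good life
    mostly_bad = [10, -20, -5, -10]     # average -6.25: an overall bad life

    print(life_score(steady_good))      # 65.0
    print(life_score(mostly_bad))       # -6.25

    # The same -20 event matters more near zero: 70 -> 50 is still a good
    # moment, while 10 -> -10 crosses into a bad one.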

While it is important to understand goodness and badness from the perspective of an individual, ethical principles generally apply to groups of individuals.  Unfortunately, it is not always possible to do what is in everyone’s best interest all of the time, considering that what improves the lives of a few people sometimes comes at the expense of others.  Slavery is one example of this, but so are factory farming, Ponzi schemes, and patriarchal societies.  Therefore, any ethical system that seeks to do what is good and avoid doing what is bad for all group members will require compromise.  To do good, according to our understanding of what is good for individuals, the ethical system needs to prohibit actions that bring members below 0 and promote actions that increase the average score.  In other words, first do no harm, then maximize happiness.  Given that some individuals are more capable of achieving these aims when given appropriate incentives and resources, it is better to allow them to achieve a better life than others who are less capable.  Thus, while the ultimate goal may be limiting bad lives and maximizing the wellbeing of group members, inequality is useful as long as it helps to achieve this goal.  On the other end of the spectrum, those who have a negative impact on the wellbeing of the group have to be handled quite differently.  Their negative impact would need to be minimized, reversed, or eliminated.  Yes, eliminated.  If someone cannot exist without pushing others below 0, except by going below 0 themselves, then their existence is bad for the group.

It may seem that “do no harm” and “eliminate those with a negative impact” are mutually exclusive, but they are not.  I call it the “lesser of two evils” principle.  If faced with one action that would leave 100 people below 0 and another that would leave 10 people below 0, the latter is the ethical choice.  However, this assumes there is no third choice in which no one would be below 0.  If that third choice existed, the first two would be considered unethical.  So, is it permissible to kill one person to harvest their organs and save the lives of others?  If this scenario existed in a vacuum, then yes, it would be ethical.  However, in the real world, there would be other effects from the murder beyond ending one life and saving others.  A society that allowed innocent people to be killed and harvested for organs would be one in which the population would fear for their lives and in which being a murderer is treated as virtuous.  I would venture to guess that being that type of murderer would entail the loss of personality traits that benefit society, such as empathy and kindness.
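The decision rule implied by these last two paragraphs can be sketched as a simple lexicographic comparison: first minimize how many people end up below 0, then maximize the average score.  The actions, populations, and scores below are invented purely for illustration.

    # Sketch of the "lesser of two evils" comparison, assuming each action maps to
    # the satisfaction scores (-100..100) its affected population would end up with.

    def harm(scores):
        """Number of people left below 0."""
        return sum(1 for s in scores if s < 0)

    def choose(actions):
        """Prefer the fewest people below 0; break ties by the highest average."""
        return min(actions, key=lambda a: (harm(actions[a]),
                                           -sum(actions[a]) / len(actions[a])))

    actions = {
        "A": [-10] * 100 + [20] * 50,   # 100 people pushed below 0
        "B": [-10] * 10 + [20] * 140,   # 10 people pushed below 0
        "C": [5] * 150,                 # no one below 0
    }
    print(choose(actions))  # "C"; with only A and B available, "B" is the lesser evil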

Since goodness and badness are grounded in life experiences, the greater the number of individuals with good lives, the more goodness there is in the universe.  Thus, any person who seeks to be “good” in a universal sense ought to consider the life experiences of all entities capable of having a good or a bad life.  Often, the scope of our ethics tends to be rather narrow, in that we care about the humans who are part of the groups with which we most identify.  We may care about the welfare of some animals, such as our pets, but we don’t consider the kind of life our chicken nuggets once had when they were living, feeling animals.  However, given that many animals can experience a good or bad life, they ought to be included in the population under consideration, as should humans who are not part of our immediate peer groups.  The largest of all possible groups, which should thus be given the greatest consideration, is the future generations of humans and other feeling entities.  There may be 7 billion humans now, but imagine how many lives could be experienced in the next 7 billion years if humans and other animals continued to exist.  The universe may end at some point, but it may be possible for us to continue to thrive up until that moment.  If we were to put the heaviest weight on the welfare of future generations, then our primary imperative is to ensure that they will be allowed to exist and to experience good lives.  Therefore, to be “good” in a universal sense requires adherence to these three directives, prioritized in the order in which they are presented:

  1. Act in ways that will enhance the likelihood that humans and other feeling entities will continue to exist indefinitely.
  2. Never act in ways that cause others to experience a bad life unless it is the only way to ensure that more do not experience a bad life.
  3. Act in ways that maximize the positive life experience of as many other humans and feeling entities as possible.

Friday, January 29, 2016

Evolution of the Religious Mind: Finding Patterns

Pattern recognition is not unique to Homo sapiens. In fact, many animals are adept at perceiving cause-and-effect relationships in their environment. Taking advantage of such patterns to optimize survival and reproductive success is the key to many species’ longevity. In humans, with our capacity to comprehend complex systems and to retain them in our individual and collective memories, both the advantages and disadvantages of our ancient pattern-recognition “software” become magnified exponentially.

“Patternicity” is a term coined by Michael Shermer (editor of Skeptic Magazine) to explain our innate pattern-seeking nature. He argues that evolution has primed our brains to see patterns where none exist. In the paper “The Evolution of Superstitious and Superstition-like Behaviour” in the Proceedings of the Royal Society B, Harvard University biologist Kevin R. Foster and University of Helsinki biologist Hanna Kokko tested his theory through complex evolutionary modeling. They concluded:

“The inability of individuals—human or otherwise—to assign causal probabilities to all sets of events that occur around them will often force them to lump causal associations with non-causal ones. From here, the evolutionary rationale for superstition is clear: natural selection will favour strategies that make many incorrect causal associations in order to establish those that are essential for survival and reproduction.”

For example, if our ancient ancestor heard a rustling in a nearby bush, it was safer to assume it was a dangerous predator than to assume it was something more benign. Simply put, it is sometimes advantageous to perceive things that aren’t there.
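The evolutionary logic here is an expected-cost asymmetry, which a few lines of back-of-the-envelope Python can illustrate. The probability and cost figures below are invented for the example; the conclusion depends only on a false alarm being far cheaper than a missed predator.

    # Toy expected-cost comparison for the rustling-bush example.
    p_predator = 0.01      # fraction of rustles that really are a predator
    cost_flee = 1          # energy wasted fleeing a harmless rustle
    cost_eaten = 1000      # cost of ignoring a real predator

    always_flee = cost_flee                # pay a small cost every time
    never_flee = p_predator * cost_eaten   # pay a huge cost occasionally
    print(always_flee, never_flee)         # 1 vs 10.0

    # With these made-up numbers, the jumpy strategy is ten times cheaper on
    # average, even though it is "wrong" about 99% of the time.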

Superstition in Animals
As previously mentioned, patternicity is not unique to humans, and neither is superstition. In a now-famous experiment, the renowned behavioral psychologist B. F. Skinner was able to elicit behaviors in pigeons akin to those performed by humans in religious rituals. In many of Skinner’s other experiments, he trained animals to perform various behaviors by rewarding them with food whenever they performed correctly. In this experiment, he had no specific behaviors to teach the pigeons. Instead, he simply gave them food at regular intervals (e.g. every 15 seconds). The effect: the pigeons tended to repeat whatever they happened to be doing just before the food last appeared[1]. They would nod their heads, flap their wings, turn counterclockwise, and perform other actions over and over again until they were fed again (a rough simulation of this setup is sketched below). As Skinner put it:

“The experiment might be said to demonstrate a sort of superstition. The bird behaves as if there were a causal relation between its behavior and the presentation of food, although such a relation is lacking. There are many analogies in human behavior. Rituals for changing one’s luck at cards are good examples. A few accidental connections between a ritual and favorable consequences suffice to set up and maintain the behavior in spite of many unreinforced instances.” [2]

Is this so different from believing that every time you pray, something good happens? To me, the only major difference is that, in humans, these sorts of “accidental connections” lead to very complex systems of belief, reinforced and adapted through years of cultural evolution.
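For readers who like to tinker, here is a rough simulation of that fixed-interval setup. It is not Skinner’s model, just an assumed learning rule: food arrives on a timer regardless of behavior, and whatever the bird happened to be doing at that moment becomes a little more likely.

    # Rough simulation of adventitious ("superstitious") reinforcement.
    import random

    actions = ["peck", "head_bob", "wing_flap", "turn_left"]
    weights = {a: 1.0 for a in actions}

    def pick():
        """Choose an action with probability proportional to its weight."""
        r = random.uniform(0, sum(weights.values()))
        for a, w in weights.items():
            r -= w
            if r <= 0:
                return a
        return actions[-1]

    for second in range(1, 601):      # ten simulated minutes
        current = pick()              # the bird is always doing something
        if second % 15 == 0:          # food drops every 15 s, behavior-blind
            weights[current] += 1.0   # accidental "reinforcement"

    print(weights)  # one arbitrary action usually ends up dominating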

The Evolution of Persistent Beliefs
So, let’s say that at one point, a long time ago, ancestors of modern humans really needed it to rain so that their crops would grow. Given that they believed the rain was controlled by some sort of invisible human-like agent, they figured they would need to appease this agent in order for him to turn on the faucet. They decided to dance for this agent and, to ensure there was no miscommunication, to chant “please make it rain” over and over again. After several attempts, it rained. Thousands of years later, human descendants of these people still dance to make it rain.

Now, let’s say a visiting trader from a faraway land noticed these dances usually didn’t lead to rain. He decided to document the success rate and shared the negative results with the local populace. Despite the evidence, no one would be convinced that rain dancing didn’t work. Why? Well, one evolutionary explanation is that of Cultural Cognition[3]. Recall from my last post that humans evolved to possess pro-group behaviors. One of these behaviors is the tendency to view the world through the lens of one’s culture, and to see all challenges to the beliefs of one’s culture as a threat. Thus, as an innate response to these threats, people entrench their minds against even the most rational ideas and definitive evidence. In evolutionary terms, this show of solidarity strengthens the group, and thus the likelihood that its members will survive to pass on their genes. In the modern context, this leads to climate change denial, creationism, faith-healing fatalities, and other unfortunate beliefs and behaviors.

Like other animals, humans are prone to perceiving patterns that do not exist, and we alter our behaviors in quite strange and irrational ways due to our beliefs in the imaginary. For pigeons, this tendency may lead to bobbing heads and flapping wings. For humans, our beliefs become solidified by our innate desire to conform to our peer groups. Unfortunately, this leads many to stay silent when rational voices are most needed: when human lives are at stake.


Michael Shermer on Patternicity

Great Article on Cognitive Dissonance and Cultural Cognition

Andy Thomson Discusses Ideas from his Book “Why we believe in god(s)”

TIME Article on “The Evolution of Faith”

Friday, May 15, 2015

The Psychology of Faith

Theists are in love with the idea of faith. They often use the word as though it were a magical and indescribable force binding them to their religious beliefs. However, as with all other aspects of the human experience, it is not beyond description or scientific investigation. Upon review, faith is an inherent drive that is ultimately necessary for our emotional wellbeing. At the same time, despite its positive attributes, it can be quite dangerous if taken too far.

What is Faith?

According to Hebrews 11:1, “Faith is the substance of things hoped for, the evidence of things not seen.” Dictionary.com describes it as “belief that is not based on proof.”[1] I prefer my own definition: “a belief that is motivated by positive emotional outcomes, yet is founded on little to no evidence.” From the theistic perspective, faith is a compulsion to believe in God, rooted in the human soul. However, as I will explain in future posts, attributes commonly associated with the human soul, such as this “compulsion”, can be readily explained by psychology and neuroscience. Thus, the word “emotional” should sufficiently accommodate the theistic experience and place it on equal ground with other non-religious forms of faith.


Dispositional Optimism

Dispositional optimism is a term coined by psychologists Charles Carver and Michael Scheier meaning “the global expectation that good things will be plentiful in the future and bad things scarce.”[2] This is a form of faith because most healthy people have an emotional preference for positive outcomes over negative ones, yet there is insufficient evidence to predict all future events. Optimism is strongly associated with greater psychological and physical health. For example, it has been linked to increased life satisfaction, an improved ability to cope with adversity, better health habits, quicker recovery from heart surgery, and greater success in sports and work.[3]

Positive Illusions

Just because optimism leads to healthier, happier people does not mean it is founded in reality. Faith, after all, is belief in something for which there is little to no evidence. Psychology has coined the term “positive illusion” to describe unfounded, yet psychologically healthy, beliefs.[4] In general, most people harbor these three positive illusions:[5]

  • That they are unusually capable and virtuous
  • That they have more control over events than they do
  • That they are optimistic, believing misfortune unlikely and good outcomes likely

There are many benefits to maintaining these unrealistic beliefs about ourselves. For instance, they enhance our self-esteem and outlook on life, they motivate others to have greater confidence in us, and they inspire persistence when dealing with difficult problems.[6] Despite these positive effects, unshakable positive illusions can lead to many negative outcomes as well. We can all think of someone we know who is far too confident for their own good, and who makes poor life decisions based on unrealistic expectations of the future. As Proverbs 16:18 puts it, “Pride goes before destruction, a haughty spirit before a fall.”

The “Secret” and Gambling Addiction

The Secret is a bestselling book by Rhonda Byrne which claims that the universe is capable of bending to your will as long as you know how to communicate with it. The book suggests that you must visualize your successes and be thankful to the universe for all current and future successes.[7] My favorite example from the book (which I had the misfortune of listening to) was a visualization technique for those who wish to be financially successful: put an extra zero to the right of your income when doing your bills, so it appears as though you have more money than you actually possess. To me, these kinds of “techniques” are less about communicating with the universe and more about tricking yourself into having an optimistic outlook. This, of course, may produce the same positive effects derived from optimism and other positive illusions described above. However, “The Secret” DVD claims, “Thoughts are sending out that magnetic signal that is drawing the parallel back to you. It always works; it works every time, with every person,” which is quite a bold claim.[8] Consider pathological gamblers. Psychological studies show they tend to believe they are in control of the outcomes of the games they play, and they are very confident in their success.[9] One can imagine no better practitioners of “The Secret” than gambling addicts, who likely visualize their success on a daily basis and sincerely believe “this time will be different.” However, 20% to 30% of pathological gamblers have declared bankruptcy, compared to 4.2% of low-risk and non-gamblers.[10] In other words, “The Secret” does not work.


Faith is ultimately a good thing. We need it in order to be healthy, happy, and successful. However, it is often illusory, and can lead to making poor decisions due to overconfidence. Thus, despite theists’ conception of faith as knowledge of ultimate truth, it is merely indicative of psychological preferences which may or may not be grounded in reality.


Great article by Malcolm Gladwell on the effects of overconfidence

Great article about optimism and positive illusions

Great critique of “The Secret” by Skeptic Magazine