Tuesday, March 12, 2019

My Ethical Philosophy

Morality, as I define it, is a set of culturally and emotionally derived principles regarding what people consider to be good vs. bad.  Scientists believe humans evolved to be moral as a means of improving the survival potential of our peer groups.  This biological drive to be moral, coupled with the evolving needs and perspectives of cultures and societies, produced a wide variety of ideas regarding how people ought to behave.  While many of these moral systems share common elements and produce desirable outcomes, many of their notions remain relevant only to those who originally devised them and are not universal.  The study of ethics seeks to rectify this problem by using reason and universal principles to understand what is good and bad for everyone in every context.  Unfortunately, philosophers don’t agree on which approach to ethics is truly universal.  According to philpapers.org, most philosophers are divided among three different approaches:

Deontology (26% of philosophers): Rightness or wrongness is based on adherence to moral laws or duties.  Deontologists care more about the intention behind an action than about its consequences.  (Example: Murdering innocent people is wrong because murdering innocent people is inherently wrong.)
Consequentialism (24% of philosophers): Rightness or wrongness is based on the consequences of an action, regardless of the intentions or character of the actor.  (Example: Murdering innocent people is wrong because it leads to undesirable consequences such as fear, pain, and grief.)
Virtue Ethics (18% of philosophers): Rightness or wrongness is based on the character of an individual.  Certain traits are deemed “good” and others “bad”.  Actions are not morally right because of an outcome or adherence to a duty.  Instead, actions are good if they flow from a virtue.  (Example: Murdering innocent people is wrong because it requires the absence of certain virtues such as compassion and fairness.)

I am a consequentialist because deontology and virtue ethics cannot be justified without considering how certain duties or virtues impact the world.  Nothing is “good just because”.  Most who hold those views believe duties/virtues are “good because they’re natural” or “good because of God or a higher power”.  The former is based on the naturalistic fallacy and can be dismissed outright, given that not everything “natural” is universally considered to be good and not everything “unnatural” is universally considered to be bad (e.g. AIDS is natural, but computers are not).  The latter should be dismissed because there is neither reliable evidence nor a compelling argument that supports the existence of any god or anything supernatural.  This would be why 78% of philosophers (the world’s argument experts) are either atheists or agnostics, and only 26% claim to be non-naturalists (i.e. believe in supernatural forces).  It is also why 75% are scientific realists, meaning they believe our best scientific models describe how the universe actually works.  As it happens, there are no widely held scientific models of the universe that include the supernatural or any gods.

Now, the 18% of philosophers who are virtue ethicists and the 26% who are deontologists might still beg to differ.  A virtue ethicist may suggest that ethics is imprecise and that consequentialism is too difficult to apply to every situation.  Some don’t believe it is even worthwhile to have universal ethical theories, since no theory can account for every possible scenario.  Instead, they consider their approach to be truer to how people actually make ethical decisions.  Yet, not all “virtues” espoused by the majority of our society are truly virtuous.  For example, males are taught that repressing emotions is a virtue while females are taught that submissiveness is a virtue, each to the detriment of society and of the individuals who seek to adhere to these virtues.  How can these virtues be argued for or against without pointing to how they impact the world, as I just did?  Even the idea that virtues are more useful raises the question “useful for what?”  The “what” being the desired consequence.  A virtue ethicist may still argue that consequentialism requires people to act in ways that aren’t in their own best interest in order to do what is best for others.  Yet, if everyone in the world were completely selfless, each person would lose their best advocate for their own wellbeing.  Even if everyone else was focused on making an individual happy, how could they know how to do so unless the person was self-aware and selfish enough to communicate their desires?  A world of selfless people would have less happiness than one with some selfishness.  Thus, consequentialism does not necessarily require people to always or even frequently act against their own best interest.

Deontologists, on the other hand, may argue that they are not championing arbitrary rules that exist in a vacuum with no logic behind them.  For example, Kant thought there was one universal principle from which all rules derive: “Act only on that maxim through which you can at the same time will that it should become a universal law.”  Yet, a universal law willed by a sadist, masochist, suicide cult member, or Ayn Randian Objectivist would be very different from one espoused by a social worker, environmentalist, or altruist.  Different minds have different ideas about what is good, and a different tolerance for what is bad.  In addition, the problem with these laws and duties is that they are meant to apply in every kind of situation regardless of context.  Yet, sometimes even lying, murdering, and stealing are the only way to save one’s life or the life of another.  One would have to qualify each rule with every sort of context in which it would not apply.  The question, though, is WHY would those rules need to be broken in those contexts?  The answer: because sometimes adhering to a rule or duty would lead to a negative consequence.  Thus, the consequence is what ultimately matters, not the rule or duty.

In general, ethics is about doing good and not doing bad.  Even when goodness and badness are based on consequences, which are concrete, observable, and tied to the physical universe, one still has to justify why the consequences are actually good or bad.  As already stated, nothing is “good just because”, and there is no supreme arbiter of good and evil.  The universe does not care whether or not one person murders another.  Yet, we know goodness and badness exist because we have experienced them in our lives, and we know others have as well.  We know that some things make us happy, and those are good.  We also know that some things make us sad, and those are bad.  Thus, goodness and badness are the positive and negative reactions of individual feeling entities to their life circumstances.  However, even on an individual basis, it’s difficult to understand what is ultimately good or bad.  Suffering sometimes leads to positive outcomes, and happiness sometimes leads to negative ones.  To understand the ultimate goodness or badness experienced by an individual, one has to take into account the positive and negative reactions to circumstances experienced throughout the span of their life.  The more the moments of satisfaction outweigh the moments of dissatisfaction, the better the life.

Goodness and badness exist on a spectrum.  For example, most would agree that having one’s finger accidentally cut off is bad, but having one’s leg cut off would be worse.  Likewise, finding a $5 bill on the street is good, but finding a $100 bill would be better.  Thus, the goodness or badness of a life isn’t merely based on the percentage of positive or negative moments experienced through a lifetime.  It’s the average degree of those positive and negative moments.  For simplicity’s sake, we can use a scale from -100 to 100 to assess the degree of life satisfaction or dissatisfaction experienced in any given moment.  A good life would be one in which the average of all moments is above 0, but the best life would be one with an average close to 100.  To the individual, any event that brings the average down is bad, and any event that brings the average up is good.  However, an event that takes someone from 70 to 50 would be better than an event that takes someone from 10 to -10, despite the event being worth -20 in each case.  This is because the end score is what ultimately matters.  Accordingly, in terms of priority, we should seek first the events that will keep us above 0, then the events that will increase the average.  Sure, -5 is still better than -50, but -5 is still an overall bad life.
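To make the arithmetic concrete, here is a minimal sketch of this scoring model in Python.  The moment scores, numbers, and function names are illustrative assumptions of mine, not part of the model itself:

    # A toy version of the -100..100 life-satisfaction model.
    # All values are hypothetical, chosen to mirror the examples above.

    def life_score(moments):
        """Overall life score: the average of moment scores in [-100, 100]."""
        return sum(moments) / len(moments)

    def is_good_life(avg):
        """A life is good if its average moment score stays above 0."""
        return avg > 0

    print(life_score([60, -20, 40]))   # ~26.67: above 0, an overall good life

    # An event "worth" -20 hits two different lives:
    after_a = 70 - 20   # average drops from 70 to 50
    after_b = 10 - 20   # average drops from 10 to -10

    # The end score is what matters: 50 is still a good life, -10 is not.
    print(after_a, is_good_life(after_a))   # 50 True
    print(after_b, is_good_life(after_b))   # -10 False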

While it is important to understand goodness and badness from the perspective of an individual, ethical principles generally apply to groups of individuals.  Unfortunately, it’s not always possible to do what is in everyone’s best interest all of the time, considering that sometimes what improves the lives of a few people degrades the lives of others.  Slavery is one example of this, but so are factory farming, Ponzi schemes, and patriarchal societies.  Therefore, any ethical system that seeks to do what is good and avoid doing what is bad for all group members will require compromise.  To do good, according to our understanding of what is good for individuals, the ethical system needs to prohibit actions that bring members below 0 and promote actions that increase the average score.  In other words, first do no harm, then maximize happiness.  Given that some individuals are more capable of achieving these aims when given appropriate incentives and resources, it is better to allow them to achieve a better life than others who are less capable.  Thus, while the ultimate goal may be limiting bad lives and maximizing the wellbeing of group members, inequality is useful as long as it helps to achieve this goal.  On the other end of the spectrum, those who have a negative impact on the wellbeing of the group have to be handled quite differently.  Their negative impact would need to be minimized, reversed, or eliminated.  Yes, eliminated.  If someone cannot remain above 0 without driving others below 0, their existence is bad for the group.

It may seem that “do no harm” and “eliminate those with a negative impact” are mutually exclusive, but they are not.  It’s the “lesser of two evils” principle.  If faced with one action that would leave 100 people below 0 and another that would leave 10 people below 0, the latter is the ethical choice.  However, this assumes there is no third choice wherein no one would be below 0.  If that third choice existed, the first two would be considered unethical.  So, is it permissible to kill one person to harvest their organs and save the lives of others?  If this scenario existed in a vacuum, then yes, it would be ethical.  However, in the real world, there would be other effects from the murder beyond ending one life and saving others.  A society that allowed innocent people to be killed and harvested for organs would be one in which the population feared for their lives and in which being a murderer was treated as virtuous.  I would venture to guess that becoming that type of murderer would entail the loss of personality traits that benefit society, such as empathy and kindness.
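Expressed as simple arithmetic (a toy sketch using only the counts from the hypothetical above):

    # Lesser of two evils: prefer the action that leaves the fewest people
    # below 0.  The counts come from the example in the paragraph above.
    harmed = {"action_1": 100, "action_2": 10}
    print(min(harmed, key=harmed.get))   # action_2
    # If a third action left no one below 0, choosing either of the
    # first two would be unethical.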

Since the basis of goodness and badness is life experience, the greater the number of individuals with good lives, the more goodness there is in the universe.  Thus, any person who seeks to be “good” in a universal sense ought to consider the life experiences of all entities capable of having a good or a bad life.  Often, the scope of our ethics tends to be rather narrow, in that we care about the humans who are a part of the groups with which we most identify.  We may care about the welfare of some animals, such as our pets, but we don’t consider the type of life our chicken nuggets once had when they were living, feeling animals.  However, given that many animals can experience a good or bad life, they ought to be included in the population under consideration, as should other humans who are not a part of our immediate peer groups.  The largest of all possible groups, which should thus be given the greatest consideration, is the future generations of humans and other feeling entities.  There may be 7 billion humans now, but imagine how many lives could be experienced in the next 7 billion years if humans and other animals continued to exist.  The universe may end at some point, but it may be possible for us to continue to thrive up until that moment.  If we put the heaviest weight on the welfare of future generations, then our primary imperative is to ensure that they will exist to experience good lives.  Therefore, to be “good” in a universal sense requires adherence to these three directives, prioritized in the order in which they are presented (a toy sketch of this priority ordering follows the list):

1. Act in ways that will enhance the likelihood that humans and other feeling entities will continue to exist indefinitely.
2. Never act in ways that cause others to experience a bad life unless it is the only way to ensure that more do not experience a bad life.
3. Act in ways that maximize the positive life experience of as many other humans and feeling entities as possible.
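Purely as an illustration, the strict priority among these directives can be written as a lexicographic comparison.  Everything in this sketch (the outcome fields, the candidate actions, and the numbers) is a hypothetical assumption of mine, not something the directives themselves specify:

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        survival_odds: float   # directive 1: odds feeling entities persist
        bad_lives: int         # directive 2: number pushed below 0
        avg_wellbeing: float   # directive 3: average positive experience

    def directive_key(o):
        # Tuples compare lexicographically, so earlier fields dominate;
        # "higher is better" fields are negated so smaller means better.
        return (-o.survival_odds, o.bad_lives, -o.avg_wellbeing)

    actions = {
        "A": Outcome(0.99, 10, 40.0),
        "B": Outcome(0.99, 100, 70.0),  # happier on average, more bad lives
        "C": Outcome(0.50, 0, 90.0),    # gambles with existence itself
    }

    best = min(actions, key=lambda name: directive_key(actions[name]))
    print(best)   # A: survival first, then fewest bad lives, then average

Note that A beats B on the second directive even though B has the higher average, mirroring the “lesser of two evils” example above, and C loses immediately because the first directive dominates everything else.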

Monday, February 4, 2019

The Psychology of Blind Faith

When someone has “blind faith,” they tend to hold onto their beliefs even when there is significant evidence suggesting those beliefs are false. This is different from standard faith, which merely involves believing in the absence of evidence. So what drives people to be so unreasonable? Psychology has some interesting answers.
 
Cognitive Dissonance and Motivated Reasoning
 
When someone is faced with conflicting attitudes, beliefs, or behaviors, they experience a sensation of discomfort referred to as cognitive dissonance.[1] People experiencing cognitive dissonance are motivated to resolve the conflicting notions in a manner that allows them to maintain consistency in their beliefs and attitudes.[2] For example, let’s say a creationist happens to discover my blog and decides to thoroughly examine the resources I’ve compiled on my “Don’t Believe in Evolution?” page. They would be facing some serious evidence that their deeply held beliefs are wrong, yet they would probably find some way of dismissing it all in favor of their prior convictions. Given that their creationist beliefs have been reinforced for years, have become a major aspect of their identity, and are likely shared by many of their family and friends, letting go of them would be a difficult task. Thus, they partake in a behavior psychologists refer to as motivated reasoning, which is the unconscious tendency of individuals to fit their processing of information to conclusions that suit some end or goal.[3] As political scientist Charles Taber put it, “they retrieve thoughts that are consistent with their previous beliefs, and that will lead them to build an argument and challenge what they're hearing.”[4]


Conflicting Brain
During the run-up to the 2004 presidential election, psychologist Drew Westen led a study wherein ardent George Bush and John Kerry supporters were each presented with statements from the candidates clearly showing that they were contradicting themselves.[5] To no one’s surprise, each group saw no fault in their favored candidate yet remained critical of the other.[6] During the experiment, fMRI scans showed that the participants’ reasoning centers were largely inactive, while areas associated with emotions and moral judgments lit up.[7] On top of that, once subjects came to a satisfactory conclusion about their candidate, an area of the brain associated with pleasure became active.[8] This likely reinforced the feeling that their rationalizations were logically sound.
 
What this and other similar studies suggest is that when people partake in motivated reasoning, they aren’t actually reasoning at all. To paraphrase an analogy made by psychologist Jonathan Haidt, ‘people may think they are being scientists, when in reality they are being more like lawyers.’[9] That is, people build a case to support a position they feel to be true rather than weighing the relative merits of differing arguments and supporting evidence.

Extraordinary Evidence Requires Extraordinary Rationalizations
So what happens when people are faced with really compelling evidence suggesting their deeply held beliefs are wrong? Not only do they find ways of discrediting the evidence, but their rationalizations can actually convince them to adhere to their beliefs more fervently.[10] Such was the case in a study by political scientists Brendan Nyhan and Jason Reifler, in which they showed participants quotes from George W. Bush asserting that Saddam Hussein had WMDs, then quotes from the Bush-commissioned Iraq Survey Group report refuting those assertions.[11] Conservative participants not only still believed Saddam Hussein had weapons of mass destruction, they were even more convinced after reading the quotes from the Iraq Survey Group.[12]


The Education Effect
Surely a higher level of education dampens the effects of motivated reasoning, right? Well, not really. Studies show that the higher the education level, the stronger the bias.[13] This would be why, in a 2008 Pew study, only 19% of college-educated Republicans believed humans were causing the earth to warm vs. 31% of non-college-educated Republicans.[14] As political scientist Milton Lodge put it, “People who have a dislike of some policy—for example, abortion—if they're unsophisticated they can just reject it out of hand, but if they're sophisticated, they can go one step further and start coming up with counterarguments.”[15] This confirms a long-held observation of mine: smart people find smart reasons to believe stupid things.
 
Dunning-Kruger Effect
For those who are less educated on a subject, the Dunning-Kruger effect may still lead to a higher level of blind faith and motivated reasoning. The effect is the result of an illusory superiority bias, which leads ignorant and incompetent people to believe they are far more knowledgeable and competent than they actually are.[16] Conversely, those who are more intelligent and competent tend to underestimate their abilities.[17] I’ve seen many examples of this from creationists on the internet. One proclaimed that he knew more about evolution than anyone on the comment board. Yet, when I asked him about the second-best evidence for evolution besides the fossil record (i.e. genetic evidence), he became agitated and belligerent. Go figure.
 

Tu Quoque
I believe the psychological phenomena leading to blind faith are likely experienced by many theists who read my blog. However, some may argue that I am just as guilty of dismissing superior evidence and arguments as theists are. For many, the conviction that both sides of the culture war are equally pigheaded and dismissive leads to what I have dubbed the “head in the sand effect.” That is, they ignore what I have to say because they believe I am far too certain of my convictions to present information accurately. To tell you the truth, people would be right to be skeptical of my motives given my obvious bias. However, there are some big differences between ardent Secular Humanists such as myself and close-minded theists:


  • We seek to believe in only that which is supported by the evidence, not merely what we wish to be true.
  • We are willing to change our minds or at least be skeptical of our views when presented with superior evidence.
  • We are very aware of the human tendency to rationalize, and actively seek to minimize its effects.

Conclusion
In the end, we are all guilty of motivated reasoning. We are human after all, and we have the exact same evolutionarily driven impulses and cognition patterns as the rest of our species. However, there are certainly some who are more apt to give in to these impulses and ignore evidence, while others are more adept at thinking critically about their own motivations.


Resources:
 
Fantastic example of motivated reasoning (Richard Dawkins dowsing experiment)
https://www.youtube.com/watch?v=DdjYGaINLwo&list=FLrT9OWupoCp5Jf8HwhyW-4A

The Tao of Cherry Picking

In my past three posts, I’ve presented evidence that there is nothing written in the Bible that couldn’t have been imagined by a Bronze Age human, that Biblical morality is inconsistent and at times sadistic, and that the God of the Bible is more like Kim Jong Il than Mr. Rogers. Christian and Jewish readers may counter that I was taking passages in the Bible out of context, and that those nasty parts were merely a reflection of the authors’ ancient culture rather than God’s word. God “inspired” the Bible, after all; he didn’t write it. However, the crux of the situation is figuring out which parts of the Bible were truly God-inspired and which were merely remnants of an archaic culture. While theists throughout the ages have been convinced of their ability to discern the will of God, the reality is that they are all following the Tao of Cherry Picking.

The Tao
There are reportedly 41,000 denominations of Christianity in the world today, and you can bet that all of them think the others are reading the Bible incorrectly.[1] In fact, a common slander used by some Christians against other groups is the accusation that they are practicing “Cafeteria Christianity.”[2] As Conservapedia puts it, “This practice is particularly common among liberal denominations, which cite only those passages which show Christ's forgiveness and mercy, but not His justice, in order to deny the existence of Hell and the necessity of faith for redemption… they selectively ignore passages which refute evolution, condemn abortion, make statements about the role of women that refute modern feminist dogmas, and conflict with other liberal views.”[3] However, the reality is that even conservative Christians cherry pick the Bible. In fact, the statement about abortion is one good example, since the Bible isn’t very clear about abortion at all.[4] In addition, while they may embrace the anti-gay message of Leviticus 20:13, they rarely promote the idea of socially exiling couples who have sex during a woman’s period, as decreed in Leviticus 20:18. As one enlightened Christian blogger put it, “everyone cherry-picks the Bible: those who claim to be ‘staunch believers in the Bible’ claiming its inerrancy and infallibility along with those who view it as a historical and all-too-human text.”[5]

Scholarly Interpretation
Some Christians point to Bible scholarship as a means to understand which parts are “inspired” and which are merely notions of flawed humans. However, as logical as some approaches to biblical interpretation may be, they are still subjective. Thus, Bible scholars can come to a variety of conclusions based on their take as to what God wanted the verses to mean. In the 1800s, Bible scholars were used to prop up the pro-slavery arguments of the American South, while other Bible scholars were used to prop up the anti-slavery arguments of the abolitionists.[6] In modern times, some Bible scholars rail against the abomination of gay marriage, while others rail against the evils of homophobia.[7] Bible scholars may belong to any of the 41,000 denominations of Christianity as well. Thus, Catholic scholars inherently disagree with Pentecostal scholars, just as Coptic scholars inherently disagree with Eastern Orthodox ones. Given that scholars are the most knowledgeable regarding the context of Bible passages, it is telling that they arrive at so many different interpretations despite this specialization.



Guided by God
I would argue that both scholars and non-scholarly theists believe in the validity of the Bible passages they feel are true. In fact, this is a legitimate form of Biblical interpretation practiced by some Christians. As philosopher Peter van Inwagen put it, “if you have submitted yourself to God’s will and if you read—say—that God has commanded that the children be punished for the sins of the fathers, your reaction will be along these lines: Yes, that’s what seemed self-evidently true to the Hebrews once, that it was right to punish the children for the sins of the fathers, and that that was therefore what God would have told their ancestors to do; with God’s help, we now know better.”[8] However, the problem with this God-guided method of Biblical interpretation is that it has ultimately led to 41,000 different denominations of Christianity! It has also led to wars such as the Crusades and the multitude of European wars following the onset of Protestantism.[9] In the end, people believe that which conforms to their preconceived ideas of history, science, and morality. For example, studies have shown that Christians tend to believe Jesus would be in favor of the social issues they consider the most important.[10] In other words, the Jesus of conservatives is quite different from the Jesus of liberals.

Perspectives on the Old Testament
Many Christians consider the Old Testament irrelevant to Christianity.[11] Jesus was fond of dismissing many backward rules required by the Pentateuch. In addition, there are a number of passages such as “For Christ is the end of the Law, that everyone who has faith may be justified” (Romans 10:4) and “It seemed good to the Holy Spirit and to us not to burden you with anything beyond the following requirements: You are to abstain from food sacrificed to idols, from blood, from the meat of strangled animals and from sexual immorality” (Acts 15:28-29) which seem to imply that the Old Testament laws no longer apply. Yet, despite these passages and many Christian claims to the contrary, Christians still use books such as Psalms for moral wisdom and Genesis for their understanding of our origins, among other examples.[12] Thus, it seems the Old Testament counts when it’s in Christians’ favor, but not when it promotes things like slavery and rape. Part of the reason for this may be that the New Testament isn’t very clear about how much of the Old Testament to ignore. For example, there are these passages:

  • “For truly, I say to you, till heaven and earth pass away, not an iota, not a dot, will pass from the law until all is accomplished. Whoever then relaxes one of the least of these commandments and teaches men so, shall be called least in the kingdom of heaven; but he who does them and teaches them shall be called great in the kingdom of heaven.” (Matthew 5:18-19)

  • "All scripture is inspired by God and is useful for teaching, for refutation, for correction, and for training in righteousness." (2 Timothy 3:16)

  • "Know this first of all, that there is no prophecy of scripture that is a matter of personal interpretation, for no prophecy ever came through human will; but rather human beings moved by the Holy Spirit spoke under the influence of God." (2 Peter 20-21)

I recently provided a Christian with a link to a list of these and other passages, and she responded, “I looked at the link you provided and the blogger cherry-picked without consideration of the surrounding text. Or maybe [the] website was a deliberate exercise in hypocrisy, cherry-picking which verses support your bias, instead of looking at the entire chapter and the audience and cultural/socio-economic frame of reference in which He was teaching.” However, I doubt she would argue that “God is love” (1 John 4:16) is ever taken out of context.

Conclusion
In the end, what does it matter if you don’t explain the greater context of every seemingly unethical or contradictory Bible passage? The authors of the Bible were supposedly writing on God’s behalf for all of humanity past, present, and future. Why wouldn’t God inspire people more consistently and clearly, so that you wouldn’t need 10 scholars with PhDs to figure out the context of every word? Is it because God works in mysterious ways? Is it because man has free will and makes mistakes? Or is it simply more plausible that beliefs change as cultures change, thus explaining not only the negative but also the positive messages written in this entirely man-inspired book?

Resources

Funny video satirizing Christians’ fervent need for context