My Beliefs, in a Nutshell

Confidence: 8
Importance: 8
Novelty: 5
Status: temporarily okay
Topics: personal
Word count: 1243

This post tries to clarify some of my fundamental philosophical beliefs, which I take for granted in much of my other writing. If anything here sounds utterly wrong to you, there is a good chance either that you and I use a word differently, or that we could eventually converge in understanding given a long enough conversation.

Strong Opinions, Loosely Held

Here are some of my probably-still-current beliefs, in ~order of 'purity':

  • Math can, in principle, describe everything.
  • Bayesian probability theory is the proper way to interpret evidence. All propositions are either true or false, but it’s useful to hold one’s belief in the truth of a proposition as a value between 0 and 1. Further, it’s possible to hold a credence as a probability distribution, where the mean is one’s point credence and the spread shows where one expects that credence could shift given additional computation. One should always be reevaluating and incrementally updating one’s degrees of belief according to new evidence.
  • All disagreement between two rational agents can be resolved given sufficient time to communicate and hash out all their underlying beliefs; see Aumann's agreement theorem. The von Neumann–Morgenstern utility theorem is not perfectly true as a descriptive model, but it is as a normative decision criterion: once you’ve appropriately defined your utility function, you shouldn’t have a ‘risk preference’, but should always just maximize expected utility.
  • Utilitarianism is our best candidate for a theory of morality. In particular, I subscribe to {act, total, hedonic, non-negative, physics-based} utilitarianism as opposed to {rule, average, preference, negative, non-physics-based} utilitarianism. In a nutshell, a moment of desirable conscious experience (i.e. well-being) maps to a positive real number, a moment of undesirable conscious experience (i.e. suffering) maps to a negative real number, and when we are choosing how to act in the world, we should, ideally, act so as to maximize total utility. Basically, to do good and avoid doing bad, one can’t escape cost-benefit analysis. Confusion about the epistemic status of utilitarianism generally stems from confusing the theory with its application: utilitarianism doesn’t say to do anything other than do the math and be self-consistent and specific about what it is that you value. Further implications of utilitarianism:
    • The ethical desirability, or utility, of an action depends on the set of alternative options one has available. To illustrate, performing CPR on a man whose heart has stopped is generally a great thing, but if you refuse to let the better-trained paramedics take over when they show up, your continued CPR now has negative utility.
    • There’s no intrinsic ethical difference between doing a bad thing and failing to do an equally good thing.
    • Supererogation (moral “extra credit”) doesn’t exist.
    • While self-care is justified to maintain your mental health and your instrumental utility, true selfishness is not.
    • Instances of well-being and suffering are morally important, regardless of their place in space or time. Thus, because so many future people with net-positive lives are expected to exist if we don't go extinct, we have an obligation to help them come into existence by reducing our risk of extinction.
  • The long-term value thesis: Most of the value that is to be captured in the world lies in the future. We should primarily be focused on the quality of this future and ensuring that it is not destroyed. The biggest obstacles to this—existential risks—deserve much more of our society’s collective resources and attention.
  • Effective Altruism. It's simply about asking the question, “How can I do the most good, with the resources available to me?”, and then carrying out what you come up with.
  • Evolutionary psychology is an extremely valuable academic discipline; we always want to approach concepts with our own evolutionary psychology in mind. One should be familiar with common cognitive biases.
  • Neoliberalism—the belief that when national and international institutions support the free trade of goods, ideas, and capital, prosperity follows—is basically true in present human society. We need to regulate only where markets systemically fail, and should otherwise do our best to make entrepreneurship and market transactions as easy as possible. I’m a fan of Conscious Capitalism.
  • Georgism—taxing land rather than the fruits of labor—is the basis of economic freedom and social justice. No one has more of an intrinsic right to a natural resource than anyone else, so people can only rent land and natural resources that belong equally to everyone.
  • Risks from omnicidal lone agents (e.g. radical anti-natalists or religious extremists) will eventually exceed risks from centralized control. Politically, I mostly side with geolibertarianism, but we will probably eventually need AI-enabled mass surveillance if we are to survive long-term as a civilization. The primary argument underpinning this is Nick Bostrom’s Vulnerable World Hypothesis.
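
The incremental-updating picture in the Bayesian bullet above can be made concrete with a little arithmetic. This is a minimal sketch with made-up priors and likelihoods, not a claim about any real proposition:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior credence in a proposition after seeing evidence.

    Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start 50/50 on a proposition; observe evidence 4x likelier if it's true.
credence = bayes_update(0.5, 0.8, 0.2)          # credence rises to 0.8
# A second, weaker piece of evidence shifts the credence incrementally.
credence = bayes_update(credence, 0.6, 0.4)
print(round(credence, 3))
```

Each observation moves the credence by an amount proportional to how strongly the evidence discriminates between the proposition being true and false, which is what "incrementally updating according to new evidence" cashes out to.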

Additional Concepts I Love

  • Occam’s Razor: The more complex an explanation is, the more evidence you need just to find it in belief-space.
  • The optimal causes for one to work on are those that are important, solvable, neglected, and a decent personal fit.
  • Extreme ownership is an essential mindset. People can be victims, but victim mindsets are seldom more effective than a mindset of taking total responsibility for oneself, and frankly, the future of conscious life.
  • Have faith in your own neuroplasticity and your ability to eventually derive enjoyment from new productive behaviors. Marginally productive activities like writing can become just as rewarding as video games once one gives the games up.
  • Incentives are a part of life. Unfortunately, most people need skin in the game to be productive and do their part to fulfill everybody’s preferences. Thus, I’m against most forms of welfare beyond a minimal universal basic income, which ideally should be the equitable redistribution of value from a land-value tax. Further, the money behind most forms of welfare could be better spent on existential-risk reduction and improving the quality of life of future people.
  • Alternative institutions like approval voting and a Common Ownership Self-Assessed Tax seem highly underrated.
  • Meditation has been widely studied and proven to be a fantastic tool for the majority of people. Most of us should probably be meditating daily.
  • Cause-neutrality and impartial impact evaluation: One should aspire to do good not merely because it feels good, but because it maximizes the good. While it’s very sad to have one’s family member die from say, brain cancer, this doesn’t mean it’s optimal for one to donate to brain cancer research. Ideally, one should be cause-neutral and donate to where the evidence says the most good can be captured per dollar.
  • Counterfactual and marginal thinking is imperative. All resources have alternative uses, and diminishing marginal returns are real. Counterfactual thinking is key to resolving the abortion debate, and marginal thinking could help more people capture more value by focusing less on perfectionism.
  • Humans are terrible at working with large numbers. We don’t feel 100x worse when 100x more people die in a catastrophe. Thus, it’s important to shut up and multiply, and not base one’s altruistic decisions solely on how one feels.
  • Both wild-animal and factory-farm suffering are extremely unfortunate. It’s our duty to end factory farming quickly and look out for the welfare of all animals.
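
The "shut up and multiply" bullet above is just expected-value arithmetic. A minimal sketch, using invented numbers rather than real charity data:

```python
# Invented numbers for illustration: a guaranteed small benefit versus a
# low-probability, large-scale one.
options = [
    ("save 1 life for certain",        1.00,   1),
    ("10% chance of saving 100 lives", 0.10, 100),
]

# "Shut up and multiply": rank options by expected lives saved,
# not by how vividly each one tugs at the heartstrings.
for name, prob, lives in options:
    print(f"{name}: expected value = {prob * lives:.1f} lives")
```

The certain option feels more compelling, but the multiplication says the second option is worth ten times as much in expectation, which is exactly the kind of result our scope-insensitive intuitions fail to deliver on their own.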


More from Evan Ward