“Reality is the hallucination we all agree on,” as neuroscientist Anil Seth put it. Naturally, discussing the parts we disagree on leads to a painful state of cognitive dissonance - the uncomfortable place where ideas contradict each other. Talking politics is an easy way to get there. Its inflammatory nature reliably spawns a clash of realities that repel each other like magnets of the same pole, with no respect for even the most intimate of relationships - as anyone who has discussed politics at a family gathering knows: Once a sensitive question gets thrown in, the atmosphere becomes charged, trapping the air below it. The conversation heats up to an unbearable degree until it’s only a matter of words before it blows up like an unsupervised pressure cooker.
Oddly enough, people are perfectly willing and able to acknowledge the range of viewpoints when it comes to aesthetic matters and questions about the quality of a piece of music. There is no accounting for taste, they argue - pointing out the futility of such a debate before it happens. This truism, however, is conveniently forgotten once we enter the terrain of political discourse: The terms right and wrong start getting attributed to mere opinions, with a self-righteous disregard for the potential subjectivity from which they emerge. It’s here where people gladly set aside their individualist mantra of to each their own in favor of a more comfortable explanation, citing either misinformation or sheer blindness to reality as the cause of their opponents’ behavior at the ballot box.
It's no partisan affair either: Both sides of the political spectrum frantically insist that there’s nothing subjective about the right way forward - yet their failure to objectively resolve their disagreements suggests otherwise. Ironically, this implication is passionately disregarded by everybody - uniting the parties in at least one aspect. They continuously treat certain subjects as if there were a transcendental truth, accessible to anyone who can see through the others’ propaganda. The media spreads facts with the intent of speeding up the process of enlightenment; and yet, the ideological gap between our worlds only seems to widen every day. Believers default to insults, condescension and political violence in order to make their point, because...well, those are the only options left.
This behavior raises the question of why we are so convinced we are right in the first place - and how we can justify the anger we expel when someone questions our beliefs. Don't we know better? Or do we, and just don't care?
Most likely, both.
As just another animal shaped by the principles of natural selection, we perceive the world not as the collection of matter that it is, but as a set of things worth attaining or avoiding. After all, that’s the only utility in perceiving - navigating this world in a way that allows you to pass on your genes. Its accuracy doesn’t matter as long as it makes you behave in an evolutionarily beneficial manner. This is evident in how our visual system is so attuned to our cognitive fabric that it warps and bends reality according to our mental representations of the things it contains - our motives are so hidden that we ourselves are unaware of many of them. For example: If you’re afraid of spiders, they will appear larger to you than to people who aren't.
Whatever input we receive is deeply intertwined with the emotional reaction an object or phenomenon incites in us, be it because of past associations (the extreme example being PTSD) or the archaic workings of our brain structure itself, known as instincts. These, of course, also make for the same kind of unmoderated reactionary behavior that makes us involuntarily jump up from our seats when we hear a loud and sudden bang. This form of evolutionarily advantageous hardwiring doesn’t even necessitate conscious awareness of the stimuli.
Author Lynne A. Isbell made this case at length in her book “The Fruit, the Tree and the Serpent”, presenting the astonishing implications of what she calls the snake detection circuit. Even cortically blind people exhibit emotional arousal - indicated by markers like skin conductance - when shown images of snakes. While a person with such a sight impairment can’t process any visual signals in the traditional sense, the pictures still find their way into the brain, through pathways from the retina that bypass the visual cortex and connect directly to deeper brain structures. This so-called blindsight preserves one’s ability to successfully navigate through an obstacle course:
So, for example, if you were to hold up a pencil and move it past such a person's blind visual field, that person might say that she did not see anything, but she might also say she had a feeling or a sense that something had moved in front of her. Blindsighted people often make saccades to the location of the target even though they insist that they cannot see anything. Some people with blindsight can also point to the object they cannot see.
Those directly linked neurons also appear responsible for the unconscious registration of emotional images in non-blind people, as in subliminal messaging: You can mimic blindsight by exposing viewers to an image of a snake for mere milliseconds, and their bodies will still exhibit signs of anxiety. Certain stimuli don’t demand any sensory awareness to incite an emotional reaction in us. Even infants show signs of stress when you show them images of spiders - their pupils dilate, indicating an increase in norepinephrine.
Good for us: Such a shortcut to the emotional center of the brain turned out to be of great benefit for humans, since fast reaction times were a vital thing to have when confronting a predator - making everyone who had them evolutionarily fit enough to become our ancestors. Instead of having to waste CPU power every time we need to calculate the potential of a threat, the answer to the question a snake-like image poses got permanently coded into our sensory organs. It’s deep-rooted prejudice - and a testament to the automaticity with which we judge information about this world.
Our senses are the psychological prism through which we learn about this world, and they can’t simply be separated from the rest of our cognition. The eye cannot see what the mind does not know. As it happens, we are as unconscious of most of this knowledge as we are of its influence.
The Value Hierarchy
The human body behaves universally in telling its owner that snake-like patterns ought to be avoided. It’s our evolutionary imperative in full force, initiating an instinctive repulsion that motivates each of us to become attentive towards them. Like the intrinsic appeal of an attractive person or the deterrent effect of a bad smell, these things engage you to react to them, because the message contained within them is important; they are low-level values ingrained into our biology - “principles or standards of behavior”, as the Oxford dictionary defines the term. So standard, in fact, that we wouldn’t highlight them as a preference that could set people apart.
The fact that some of the things we perceive require more immediate attention than others lends itself to the concept of a value hierarchy: Fear, for example, is able to overwrite many other impulses. There’s also acute variability in what we view as most important. Depending on physiological biomarkers, say hunger or a need to sleep, the priority of certain things in your perception of the world changes, because your values have changed: Food looks tastier if you’re hungry, the floor appears comfortable if you’re tired, ugly people resemble more attractive ones when you’re drunk, and so on. With the purpose of making you most attentive towards the information with the highest implications for your existence, biology makes sure that your attention is focused where it belongs - a mental faculty that has been evolving since the dawn of human existence.
Selective attention - your mind’s ability to block out every stimulus that isn’t valuable to achieving your objective - further highlights the impact your mind’s workings have on your perception. A simple but illustrative example is the famous invisible gorilla experiment, in which viewers asked to count basketball passes fail to notice a person in a gorilla suit walking through the scene.
While the goal there is rather trivial, it shows the narrowness with which we perceive the world once we’ve set ourselves a task. In the case of everyday living, this task consists of the set of our evolutionary objectives - the constant undercurrent of primal urges dictating our attention. They orient us towards goals, motivate us to pursue them and make us attentive towards the things we intrinsically value. In other words, the purpose of surviving itself colors your perception.
(We'll come back to the color metaphor later on.)
Therefore, the world will appear similar to all of us, since the range of behavior most conducive to survival won’t differ much at the lower levels of an organism’s functioning. In fact, our most basic values look similar across mammalian creatures: A mouse will flee from a cat, because the mouse instinctively knows it could be killed by the larger animal. While we don’t run the risk of getting killed by a cat, our behavior runs parallel to that of a mouse whenever we encounter a predator with more physical prowess than us. Why? Because it looks threatening. You don’t need to evaluate it - your instincts have already done it for you. The situation triggers a sense of fear and the urge to flee. It’s simple and it works. It’s these instances that make the existence of a “right” way to behave so convincing - because if you value staying alive in the first place, there aren’t many other ways to react to the perception.
Likewise, humans have so much in common that we are defined as the same species - prepackaged with the same set of hard-wired mental representations of things. That would include the fear of heights, the appreciation of sweet foods, and so on, all deriving from the fundamental ethic of “dying bad, living good”. It’s only when moving up the value hierarchy in a Maslowian fashion that we realize something: Sometimes, the question of how to behave cannot be given a clear-cut answer everyone would agree on unanimously, even though the answer might feel just as self-evident to each individual. Why? Because there exists more than one successful strategy for surviving.
Accounting for Taste
Humans can all agree that getting burned by a hot plate is a bad thing. Our perception makes it so. We universally share this value, because it has always been useful for the body to react adversely to burning itself. There are outliers who don’t feel pain, but their rarity speaks for itself. When it comes to moral questions, though, there’s less agreement about the right way to conduct yourself. With morality being nothing but the question of how to behave, the definition of “right” changes according to one’s neurological blueprint - of which many variants can build a successful machine. Take empathy, for example:
As a mechanism to enable human cooperation, empathy is an invaluable tool, causing us to feel bad if we do something to another human that makes them feel pain. Still, we ourselves are where the feeling originates, and we can differ a whole lot in our capacity to share somebody’s suffering. No wonder - intense feelings of empathy aren’t always the most beneficial way to react to a situation. While it’s easy to call such inability a moral deficit, we have to keep in mind that any form of altruism is only a pragmatic tool to allow for social cooperation, the incentive being the creation of a life insurance policy based on the principle of tit-for-tat. Inevitably, the norms of reciprocity will create “cheaters” who exploit the system. Thus, depending on the context, sociopathy might very well be a useful character trait.
This isn’t limited to the readiness of a sociopath to take advantage of other people, but also includes a lack of psychological impact when seeing a comrade suffering, as evolutionary biologist Bret Weinstein points out:
I would imagine, in ancestral circumstances, that there are cases in which effectively one might be of value to one's lineage in a form that couldn't be disrupted by observing suffering. [...] Soldiers on a battlefield, for example, might be very effective if they could tune out the immense amount of suffering around them and do their job in a more or less mechanical fashion.
Sociopathy goes to show how opposing modes of moral perception can exist in the same environment. It’s sensible to think that evolution could have caused neurological diversity, fragmenting our species’ strategies for the optimal way of conducting ourselves. This allows people’s definitions of morality to differ, because their levels of agreeableness - and many other factors, for that matter - differ. It’s personality all over again. We might all ask the same questions life prompts us to ask, and we all listen to our instincts for an answer. Rest assured, these answers will differ for each of us.
Since the right way to behave can differ substantially according to your values, which in turn influence your perception of the world, the preconceived notion of a universal morality gets outed for what it truly is: an illusion. Of course, David Hume figured this out long ago:
Morality is nothing in the abstract Nature of Things, but is entirely relative to the Sentiment or mental Taste of each particular Being; in the same Manner as the Distinctions of sweet and bitter, hot and cold, arise from the particular feeling of each Sense or Organ. Moral Perceptions therefore, ought not to be class’d with the Operations of the Understanding, but with the Tastes or Sentiments.
Much of our perception serves us in a similar way, which makes it easy to be fooled into thinking it is the same for everyone, and should thus act as a template from which to derive universal principles of behavior. In actuality, though, what we perceive as truth is already biased by our values - the biological basis of our moral reasoning. We can expand on Hume’s taste analogy to highlight this point.
Take the sensation of bitterness. It evolved as a mechanism to detect potentially toxic compounds in food - many poisonous alkaloids taste bitter. However, this is more true for some people than for others: Differences in gene expression lead to differing intensities of perceived bitterness, which often translates into a pronounced dislike for broccoli and other veggies. Whether you are such a supertaster or not, both instances are viable evolutionary strategies - there’s no right way to process the information. But there is a right way to react to the sensation: Don't eat it. Recognition of bitterness still exists for almost everyone, yet in some people, the volume at which this information is perceived is turned up. Which gets us to the crux of the matter:
It’s easy to be fooled into believing that our preference or distaste for broccoli stems from a like or dislike of bitterness. What we often forget to consider, however, is the possibility that another person might experience the bitterness much more intensely. In other words: Our opinion that bitter equals bad doesn’t have to differ - only our perception of how bitter, and thus how bad, it actually is. Analogously, the biological basis of your personality will influence your moral perception. We all have some concept of what is immoral - but what triggers it and how intense these feelings will be is unique to your neurological structure, although general trends apply inter-individually.
Poignantly, this will go unnoticed in day-to-day life. Except when we have to agree on what we'll have for dinner.
Let’s recap: Our perception isn't there to tell us how things are, but what to do. Since it only serves to derive ought statements from the world around us, our biologically hardcoded values influence the very way we perceive in the first place. They are the instincts that transform information into behavioral imperatives. How to behave is thus a question our perception answers for us - and that perception, in turn, is modulated by the uniqueness of our brains. Politics is the same question, only on a societal scale - and therefore the ideal breeding ground for ideological conflicts. While many of these matters are moral questions, certain sensibilities extend to other emotional domains - like fear.
Again, it starts with a commonality: We all value feeling safe. Yet the threshold at which people feel their sense of security jeopardized varies from person to person. Someone with a higher threat sensitivity will react more strongly to the prospect of potential danger. They’re exhibiting what psychologists call a negativity bias, which leads them to interpret information differently. Incidentally, this is one of the effective differences in brain function that incline some people to vote more conservatively on certain issues. They feel a greater threat coming from things they do not know - which would include people coming from beyond the border. It’s easy to think of this as a bug, not a feature - but mind you, just because you don’t share this sentiment doesn’t make it irrational. Limiting immigration to limit the perception of threat might seem irrational to a person who doesn’t experience said fear, but it certainly isn’t to the person perceiving it as an all too real danger.
Imagine yourself advocating for a more liberal position: You’d pull up some statistics showing that any potential increase in crime is minuscule and outweighed by the humanitarian and economic advantages that the presence of immigrants brings. You’d do so under the assumption that the additional information will sway your opponent's view. This strategy seems sensible: Showing that the risk is very low might do the trick. But how low, exactly, is low enough? No one would be anxious when confronted with a situation that poses a zero percent risk to one’s own safety. But who is to decide the threshold that needs to be surpassed to warrant a negative emotion? Is it one percent? Thirty percent? Fifty? Your emotions will tell you, because they are the only thing that can. And as it happens, they may differ substantially from those of the person sitting across the aisle.
Any economic incentive you bring up can be similarly disarmed: I might gain more anxiety from the increase in perceived risk than pleasure from a better economy. It’s an argument that only works after your initial, value-driven judgement has been made. It’s like with bitter taste: At which point is something too bitter to be viable food? That depends on how many receptors for bitterness you have. You might not experience it as I do, and tell me all the rational arguments for eating broccoli: The sulforaphane, the omega-3 fatty acids, the importance of fibre for your digestive system. But all these arguments only carry weight in the framework of a mind whose taste experience isn't marked by the intense urge to get it out of your mouth.
It might not be possible to swap taste receptors so that you could comprehend my position. I could, however, spray some Bitrex® (the most bitter substance known to man) onto the vegetable, so you can get a taste of how I perceive the situation. Suddenly, all of your arguments will be insufficient to convince yourself to eat it. They don’t mean anything anymore, as they have stopped being compatible with your value of avoiding something that tastes that bitter. For all intents and purposes, this would give you a taste of my reality. After all, increasing the signal or turning up the sensitivity of the receiver will have a similar result.
This leads us to another interesting difference in perception and the subsequent influence it has on your behavior: Disgust. Higher disgust sensitivity will also make you vote more conservatively. We can exploit this neurological disposition to show how almost anyone's allegedly rational judgement can be skewed left or right with some tuning. Jonathan Haidt discusses the poignant implications in his excellent book “The Righteous Mind”:
Alex Jordan, a grad student at Stanford, came up with the idea of asking people to make moral judgments while he secretly tripped their disgust alarms. He stood at a pedestrian intersection on the Stanford campus and asked passersby to fill out a short survey. It asked people to make judgments about four controversial issues, such as marriage between first cousins, or a film studio’s decision to release a documentary with a director who had tricked some people into being interviewed. Alex stood right next to a trash can he had emptied. Before he recruited each subject, he put a new plastic liner into the metal can. Before half of the people walked up (and before they could see him), he sprayed the fart spray twice into the bag, which “perfumed” the whole intersection for a few minutes. Before other recruitments, he left the empty bag unsprayed.
Sure enough, people made harsher judgments when they were breathing in foul air. Other researchers have found the same effect by asking subjects to fill out questionnaires after drinking bitter versus sweet drinks. As my UVA colleague Jerry Clore puts it, we use “affect as information.” When we’re trying to decide what we think about something, we look inward, at how we’re feeling. If I’m feeling good, I must like it, and if I’m feeling anything unpleasant, that must mean I don’t like it.
In a political debate, we consult facts to try to prove the other person wrong, or we question his or her ability to think coherently. Yet a fact alone isn’t an argument - an argument always coexists with a motive. Since our motives come from our values, we often end up in the frustrating confines of a zero-sum game: We can pull information out of a pool of facts all day, but in the end, our conclusion is isolated from our rationality, with our value-based motives bridging the impossible is/ought gap. In fact, that is all an animal does - interpreting information in order to derive evolutionarily successful behaviors from it. We can’t use reason to make another person arrive at the same conclusion, because the same information will incite different emotions in different people - and thus will be prioritized differently.
Ultimately, we are arguing about the right way to perceive the world. Like a neural network trying to detect objects in digital images, we evolved a sense that lets us sort behaviors and situations into categories of good and bad. The vaguer the image gets, the more our presets come to light. At that point, we will use the minuscule amount of information at our disposal to make a judgement, while both sides find evidence to make their case. The controversial topics that regularly pop up in politics are the ambiguous ones, where nuances in brain function lead people to value information differently - an ideological dead-end.
Or look at it this way. Imagine each bit of information was a piece of a jigsaw puzzle: Your biases will make you put the pieces together in a way that fits with your preconceived notions of how the world works. You can create many images in the end, but only one will seem reasonable to you - even though the pieces have such flexible edges that they could be arranged in multiple, logically coherent ways. How can this be? Well, the pieces are multidimensional, one of the dimensions being the emotional impression they create on your consciousness. Thus, only one arrangement would make sense to you.
It’s hard to comprehend how stark this difference in moral perception can be. We assume that the world looks similar to all of us, because in most situations, our biological similarities uphold this illusion. We derive the same oughts from it. Yet once we move beyond base-level instincts - once we get to ambiguous matters - the conceptual gap gets illuminated. To see how stark this effect can be, we can look at another sense that fools us into believing the world looks the same to all of us.
Enter the dress.
What makes the dress so special? After all, there are countless examples of pictures conveying visual ambiguity - the Rabbit-duck illusion, the Spinning Dancer, My Wife and My Mother-in-Law. However, most optical illusions only require you to take a closer look at the image, or some angular head adjustment, in order to reveal how your senses fool you. This isn’t the case with the dress. Its ambiguity is non-apparent. Whatever color composition you perceive, it tends to stay that way. Even if someone disagrees with you, changing your perspective is so difficult that the mention of any color combination besides the one you see will strike you as nonsensical. Chances are, you’d sooner assume that the one person disagreeing with you has an optical defect than accept that billions of people see it the way they do.
The fact that the picture looks so unsuspicious tells us something else: Situations like this could occur every day without us noticing. Normally, we would never have found out that we perceive it differently - only if we needed to make a decision about it would it come to light. Then we communicate about what we see, but by describing the colors it generates in our minds, we find ourselves in a quarrel. “I don’t like black and blue,” one party might argue. “Me neither. But why does it matter if we’re talking about a white and gold dress?”
We are in disbelief that someone could see something different from us. We don’t view it as an opinion, because “color” strikes us as a non-opinionated description. It seems objectively true - and that is how we argue for it. The experience of color represents something real - as real as it can be for our limited senses. Why would it be any different when it comes to the perception of morality?
That isn’t to say that people think it’s objectively true that their moral opinion is “correct”. The point is, they might as well. Take the issue of abortion, for example. Is allowing it the preservation of a woman’s autonomy to do with her body as she wishes? Or is it the legal murder of an unborn person? To me, the answer is clear as day. Since there exist people who express the opposite belief, I have to take a leap of faith that their conviction is of similar intensity; that they cannot help but view abortion as something other than an immoral act against the sanctity of life.
Another contentious one: Everybody has an understanding of fairness - but depending on your wiring, your preferred policies to ensure it could differ substantially. Like taxing the rich differently than the poor. A person with libertarian values might argue: Why should rich people be punished for being clever and hard-working? To that, someone left-leaning could ask: Why should poor people be further punished for being either not clever, or unlucky? Then again, we all have to play the cards we’re dealt - life is unfair from the get-go. So why should someone be compensated for being born poor, but not for being born unattractive? People can answer these questions, but it’s not their rationality that allows them to.
Just like how the colors we see are tied to a difference in brain activity, so is our stance on many political issues. The picture of the dress is nuanced enough to elicit all the individual differences in our brains. And so it is with most of the things we are discussing right now. They are morally ambiguous matters, since the perspective you take on them depends on those small differences in how your brain interprets information - your moral perception.
Color and morality do not exist outside the confines of our minds - yet we can mostly live together peacefully because the illusion works so well. Similarly, evolution tuned us to abide by some guidelines naturally, because they work so well. But once we enter the terrain of more nuanced information, the actual gap between our perceptions catches us off guard. Just like someone voting for a party diametrically opposed to our own values does: We could never understand how a person could be so stupid, blind, or plain ignorant as to decide that way - attributes that would all apply if they saw the world the way we do. We try to apply somebody else’s moral reasoning to our own way of perceiving this world, and find that it doesn’t compute.
Your beliefs - the set of your values - are a critical organ of your psychological orienting system; an integral part of the cognitive machinery that makes your viscerally colored perception function properly. Somebody attacking these beliefs attacks the validity of your perception. Considering that perception is what your behavior is predicated upon, any such attack poses a threat to your ability to realize your survival strategy. Such a person is effectively telling you that what you see isn’t real, thereby denying the legitimacy of your experience - the very system that allows you to operate according to your genes’ interests. This is akin to someone disregarding your pain by saying bitterness does not exist - it’s certainly real enough for you to want to spit it out. How much more real does it need to be?
While taste would normally not be considered a belief, it has a similar function: shaping your reality. Beliefs have much deeper roots than our knowledge and experiences. Their biological basis colors our perception in the first place. In other words, our opinions are already there, and our perception makes us discover them. Looked at that way, they’re also just as vital - which is why we’ll want to defend them. And since they differ substantially between people for non-apparent reasons, we struggle to agree on the right one - because there is none.
Thus, we’re wired to argue about which belief about reality serves our survival best - not about objective reality. After all, there’s nothing to argue about in the latter, as it doesn’t contain a motive - only facts. Our survival, on the other hand, is shorthand for our genes’ survival - and their unique imprint can be found in the psychology shaping our worldview; in other words, in how we tend to interpret these facts. We will want to defend it, as it’s the only thing that can tell us what is right and what isn’t. We depend on it in order to orient ourselves in this world. As such, maintaining a sense of reality constitutes a vital interest, and like with any other sense, you will react aggressively if someone tries to deprive you of it.
In his comprehensive book on the necrophilous aspects of human psychology, “The Anatomy of Human Destructiveness”, Erich Fromm presents a brilliant awareness of this condition:
A third specifically human condition of existence contributes to a further increase of human defensive aggressiveness compared with animal aggressiveness. Man, like the animal, defends himself against threat to his vital interests. But the range of man’s vital interests is much wider than that of the animal. Man must survive not only physically but also psychically. He needs to maintain a certain psychic equilibrium lest he lose the capacity to function; for man everything necessary for the maintenance of his psychic equilibrium is of the same vital interest as that which serves his physical equilibrium.
First of all, man has a vital interest in retaining his frame of orientation. His capacity to act depends on it, and in the last analysis, his sense of identity. If others threaten him with ideas that question his own frame of orientation, he will react to these ideas as to a vital threat. He may rationalize this reaction in many ways. He will say that the new ideas are inherently “immoral,” “uncivilized,” “crazy,” or whatever else he can think of to express his repugnance, but this antagonism is in fact aroused because “he” feels threatened.
Consciousness bridges the is-ought gap by coloring our perception with emotion. It rationalizes what you see, because otherwise you wouldn’t be able to make sense of reality. The whole purpose of sensing the world around you is to manipulate it according to your goals - things around you get filtered and highlighted for a reason. We’re blind to most of the electromagnetic waves around us, because only the ones reflected by the physical world are useful to us. Similarly, our values act as the modulating force of our perception, furthering our agendas. We don’t see the world as it is, but as the product of what our narrow, goal-driven attention wants us to see - and if somebody doesn’t, it strikes us as an incomprehensible blindness to reality.
Meanwhile, we’re blissfully unaware of the subjectivity of reality emerging from our own heavily biased set of beliefs. If a person’s view of the world seems too dissonant with our own experience, that’s enough for us to at least passively discredit them. If you can’t trust your own eyes, why trust somebody else’s? Yours certainly can’t be working too badly - otherwise you would have noticed already, right?
Consequently, we’re predestined to clash on ideological fronts the more we try to carve out the details of the ideal society. Once we’ve reached a certain level of maturity as a civilization, the problems left to solve are the ones most likely to polarize us. Controversial topics are the moral equivalent of a visually ambiguous picture: one that looks different to half the population, and that, no matter how long you stare at it, won’t let you switch perspectives. There is no single best strategy for the complex questions modern society asks of us, just as there is no right way to see a color.
Of course, not everyone with a political preference is the kind of militant party representative that gets satirized so often. Yet I think it’s fair to say that everyone has an idea of what ought to be and, if the opportunity presents itself, will fight for it. As Joan Robinson said, “Ideology is like breath - you never smell your own.” But breathe you must.
The question remains, though, whether the awareness that biology determines the way you see the world will make it more peaceful or more hostile. On the one hand, it offers an explanation for why humans behave differently. On the other, it allows you to redefine “human” more narrowly, to encompass only what you consider the right way to view certain topics. Which sounds horrible - if it weren’t for the fact that we do it all the time:
From an objective standpoint, a brutal criminal is no less human. Yet our definition of “human” carries a certain expectation about their values. I’m pretty sure most people have an idea of what no true human would ever do. We compare brutal criminals to reviled creatures, like snakes and vermin. Enough people would wish the death penalty upon them, because the criminals’ values represent a perceived danger to their own safety. Which gets us into exactly the kind of dangerous territory that makes politics such a poignant topic: it takes only one belief system effective enough to make everyone who doesn’t share it seem non-human, and therefore undeserving of the treatment we’d wish for ourselves.
That isn’t to say we’d be better off if everybody had the same political stance. We might actually profit from ideological diversity. If a range of values is represented in your average social group, it could benefit the group as a whole - a political analog to the sentinel hypothesis: people have different chronotypes, meaning they prefer to go to bed at different times - some are early birds, some are night owls. This genetic variability ensures that someone is always keeping watch while the others sleep. Analogously, if everyone looked left, we’d be oblivious to the blind spot on our right. Perhaps that’s why, whenever you enter a new group of people, there always seem to be the same kinds of people in it.
I’ll be honest with you: I can’t help but find people who view the dress the same way I do inherently more sympathetic. People who like the music I like strike me as more reasonable. And once people tell me they like licorice, they have to make a very strong case for themselves to make me forget about it. Yet I know that we have enough in common to still profit from working together. So, since I know that people’s brains differ enough that they’re likely to see something else, I have but one choice if I want to keep the peace at Christmas dinner: avoid talking about it.