Sunday, July 28, 2024

Constructing Our World (What is Real? 5)


These posts make more sense when read in order.

Please click here for the first article in this series to enter the rabbit hole.

 

Our brains don’t receive anything like photographs or videos from our eyes. Our visual receptors—the rods and cones—collect minimal information. A cone is triggered depending on the photon’s wavelength—which we interpret as color—and by the number of photons—which indicates brightness. The overall pattern of receptors activated can reveal the position of edges and shading. The information is sparse and basic. It’s up to the brain to sort out what elements are parts of which objects. This can be difficult since most objects are partly hidden and must also be recognized when viewed from various angles.
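To make that sparseness concrete, here's a toy sketch of how little a single cone actually reports: one activation level driven by wavelength and photon count. The peak wavelengths below are roughly those of human S, M, and L cones, but the Gaussian sensitivity curves and all the numbers are simplifying assumptions, not real photoreceptor data.

```python
import math

# Toy model: each cone type has a Gaussian sensitivity curve peaking at a
# different wavelength (peaks roughly follow human S/M/L cones; the curve
# shapes and widths here are simplified assumptions, not measured data).
CONE_PEAKS_NM = {"S": 420, "M": 534, "L": 564}

def cone_response(wavelength_nm, n_photons, peak_nm, width_nm=60.0):
    """Wavelength sets how strongly a photon drives this cone;
    photon count sets overall brightness."""
    sensitivity = math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))
    return n_photons * sensitivity

def retina_signal(wavelength_nm, n_photons):
    """The brain only ever receives this sparse triple, never an image."""
    return {name: round(cone_response(wavelength_nm, n_photons, peak), 2)
            for name, peak in CONE_PEAKS_NM.items()}

# A 564 nm stimulus drives the L cones hardest; the brain reads the
# ratio between the three numbers as color, their size as brightness.
print(retina_signal(564, 100))
```

Everything else—edges, objects, depth, meaning—has to be built up from streams of triples like this one.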

We also need to determine what an object is when it has unique characteristics that are unlike anything we’ve ever seen, such as a teapot shaped like Lewis Carroll’s frumious Bandersnatch. Most of us wouldn’t recognize a frumious Bandersnatch if we saw one, but we could still tell it was also a teapot. Oddly, we do this by breaking the input down into its many elements.

First, what you see in the left half of your visual field is sent through neurons to your brain’s right hemisphere, while the right half of what you see goes to your left hemisphere. So the two halves of your visual field are processed separately and later woven together.

Vision is important to primates, yet we receive very little information from our eyes. For a part of our visual field about the size of the full moon, there are only about 40 neurons coming from the retina, and they can’t send much information—essentially just increasing or decreasing their firing rates. They connect to around 12,000 neurons in the first level of the cortex, and their signals are passed on to many more, which reveals how little emphasis our brains put on actual sight and how much on processing.

The information arriving from our eyes is broken up and distributed in pieces. At the lower levels of processing, everything is compartmentalized: each feature is processed separately, passing along more than 300 circuits to more than thirty sensory centers, each specializing in a single characteristic, such as lines, curves, sizes, colors, and textures. There are groups of neurons dedicated to more specific things, like detecting edges oriented in a particular direction, while other groups deal with edges of a different orientation. There’s even an area that deals with living things and another with non-living objects.[1]

Each circuit passes its processed information up a progression of more advanced levels. At the lower levels of processing, neurons are very specific—for example, one responds only to visual data from 30 degrees to the left of our center, while another is sensitive only to sound from that same spot. Higher levels are more abstract and cover larger portions of the field of view. They bring information together, along with memories and knowledge, to see concepts and the overview—the forest, as it were.

The early stage also involves eliminating redundancies. So if you see a straight line, you don’t need to process the entire line—you just need to note the points at each end and the line’s angle. This is faster, more efficient, and requires less brain power. Colors and shading can also be summarized.
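As a rough illustration of that kind of compression, a straight edge sampled at many points can be summarized by just its endpoints and angle. This hypothetical encoder is not a model of real neurons—just a sketch of the redundancy-elimination idea:

```python
import math

# Sketch of the redundancy idea: a straight edge can be stored as its two
# endpoints plus an angle instead of every point along it. A hypothetical
# encoder for illustration, not a model of actual visual neurons.
def compress_line(points):
    """If all points lie on one straight segment, keep only a summary."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    for (x, y) in points[1:-1]:
        # Cross-product test: zero means the point is collinear.
        if (x1 - x0) * (y - y0) != (y1 - y0) * (x - x0):
            return points          # not straight: keep the raw data
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return {"start": (x0, y0), "end": (x1, y1), "angle_deg": round(angle, 1)}

edge = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]   # 5 samples of one edge
print(compress_line(edge))  # -> {'start': (0, 0), 'end': (4, 4), 'angle_deg': 45.0}
```

Five samples collapse into three numbers’ worth of summary; a thousand samples of the same edge would collapse just as far.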

The image on the left shows the inside of the brain's left hemisphere, while the right shows the outside. Visual processing takes place at the back of the brain before going to the front and entering our awareness. Simona Buetti and Alejandro Lleras, University of Illinois at Urbana-Champaign.

From our retinas, our optic nerves go to Area V1 of the visual cortex, which is about the size of a postage stamp and sits all the way at the back of our brains. It analyzes contrasts and picks out the borders of the various parts of the scene. Predictions come down from Area V2 of what to expect. If the signal differs, the error is passed up to Area V2, which analyzes positions and spatial orientations while combining data from each of our eyes to produce depth perception. This area also starts identifying colors. Again, Area V3 passes down expectations and Area V2 passes up errors. This continues for each level, with predictions coming down and errors going up.[2] Area V3 focuses on colors and shapes. Area V4 adds details and assembles the scene. Area V5 adds motion and controls eye movements to follow those motions. From Area V1 there are two main streams: the “where” spatial stream goes up toward the top of the brain, while the “what” object stream passes along the lower side. One study suggests the reason for this split is that it improves our ability to predict what is about to happen visually.[3] Of course, it’s tremendously more complicated than this, with all sorts of signals flying back and forth between the 300 specialized areas, but you get the basic idea.
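The predictions-down, errors-up exchange can be sketched in a few lines. This toy loop is in the spirit of Rao and Ballard’s predictive coding model,[2] but drastically simplified—one number per level and a made-up learning rate, purely for illustration:

```python
# Minimal sketch of the predictions-down / errors-up loop: the higher
# area sends down a prediction, the lower area passes up only the error,
# and the prediction is nudged toward reality. Heavily simplified.
def predictive_loop(sensory_inputs, learning_rate=0.5):
    prediction = 0.0                      # higher area's current guess
    errors = []
    for actual in sensory_inputs:
        error = actual - prediction       # only the mismatch travels up
        errors.append(round(error, 3))
        prediction += learning_rate * error   # feedback revises the guess
    return errors

# A steady signal: errors shrink toward zero as the prediction locks on.
print(predictive_loop([10, 10, 10, 10, 10]))  # -> [10.0, 5.0, 2.5, 1.25, 0.625]
```

With a steady input the errors shrink toward zero, so almost nothing needs to travel up the hierarchy—only surprises do.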

A lot happens to the information in the lower and middle levels of processing. One thing I’d like to note is that V4 highlights angles and sharp curves, while essentially ignoring flat edges and shallow curves.[4] This greatly reduces processing, but has a curious consequence.

British researchers studied individuals as they watched a video of a magician who threw a ball up in the air a few times, catching it with the same hand. On his final throw he only pretended to throw it. Two-thirds of the audience insisted they saw the ball rise and vanish in mid-air, but their eyes actually remained looking at the magician’s hand on the last throw and hadn’t moved up to where the ball was supposed to have disappeared. This indicates their eyes weren’t fooled by the trick—their mind was.[5]

This is because the V4 area ignores straight lines, registering only the beginning and the end. With the middle ignored, the brain fills it in using imagination. It’s one of those shortcuts our brains take to compress data. Magicians take advantage of this: when they want you to watch their hand, they move it in a curved line; when they don’t want you to see it, they move it in a straight line.

At the mid and higher levels of processing the different attributes are pulled together from each hemisphere into the visual image we perceive. You’d think that attributes like color and shape would be processed together, but they are completely separate until they are combined at the end of the process in what is called “neural gluing”.

Our vision starts with basics and builds up from there. A tumor in the occipital lobe can cause a person to see simple anomalies, such as flashes of light, while a temporal lobe tumor, which is further along in perceptual processing, can cause elaborate hallucinations.

All of that processing is done below our level of awareness. It’s not until the assembled perceptions reach the forebrain that the scene finally enters our consciousness. When you visualize a scene in your mind’s eye, parts of this process obviously take place without any stimuli from your eyes, yet imagining an explosion uses the very same neuronal pathways as when you actually see an explosion. Likewise, remembering an explosion also uses those same pathways, which is why experiences and imagination can interfere with and alter earlier memories.

And remember, all of this processing happens within fractions of a second and goes on continuously, since the incoming signals don’t stop as long as your senses are functioning. The signals from your eyes, ears, and other senses are a constant flow, and our brains have to pick out the changes from one moment to the next. It’s a tremendous task that requires our brains to take shortcuts to get the job done.

As a side note, an experiment showed that when someone has their eyes closed and is receiving no visual input, most of the signals flow from the higher visual levels to the lower ones—these would be the predictions—but when the person is given psychedelics and starts hallucinating, the flow reverses, going from the lower levels to the higher ones and interfering with the predictions.

Interestingly, studies are appearing that indicate schizophrenia is associated with a disruption between the bottom-up signals of sensory input and the top-down signals of executive control.[6]

Because sight is compartmentalized, damage to the system can cause some strange problems. There are people who can only see one object at a time and those who can’t see any objects—only the parts. There’s a woman who lost her ability to see movement and now experiences life as a series of still images. Another can only see objects in motion, just as frogs are blind to motionless insects. There’s a person who, when shown a picture of an octopus, thought it was a spider and that a pretzel was a snake. There’s change blindness, where someone doesn’t notice changes, even when you swap one photograph for another.

A girl with otherwise normal eyesight was found to see upside-down and backwards, so that when reaching for a cup on her left, she’d reach to her right. The same thing happened to a Spanish Civil War soldier, who—after being shot in the head—had the reversal happen, not only to his sight, but to his hearing and sense of touch as well. One man, after suffering a stroke, began seeing objects 30% smaller than they actually are—although, remember, we normally see things we focus on as being larger than they are. For another man it looked like the right half of people’s faces were melting. To another, faces appear distorted like “demons”, but not photographs of the same faces.

Because our brains fill in much of what we see with what we expect to see, we sometimes won’t see something that is right in front of us unless we happen to look directly at it, or someone points it out to us. This happens more often as we get older, probably partly because our brains have more experience at filling in the blanks, and partly because processing slows with age, forcing our visual systems to cut more corners.

It’s not just vision that’s compartmentalized during processing. Hearing and taste sensations are dealt with the same way. With sounds, signals from your eardrums are separated into timbre, pitch, loudness, reverberation, tone duration, location, and timing. Each is processed separately and then assembled into what we hear. But there are problems our brains have to overcome. Sounds are often ambiguous, incomplete, and the source’s location and identity may be unclear, so again our brains make educated guesses to fill in the gaps.[7]
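As a toy illustration of pulling separate attributes out of one signal, here’s a sketch that extracts just two of those channels—loudness and pitch—from a synthesized tone. Real auditory processing is vastly more complex; the zero-crossing pitch estimate and RMS loudness below are simplifying assumptions chosen for clarity:

```python
import math

# Toy decomposition of a sound into separate attribute channels, echoing
# how the brain processes each feature independently before reassembly.
SAMPLE_RATE = 8000

def make_tone(freq_hz, amplitude, seconds=0.5):
    """Synthesize a pure sine tone as a list of samples."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def loudness(samples):
    """Root-mean-square level: the 'loudness' channel."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def pitch(samples):
    """Positive-going zero crossings per second: a crude 'pitch' channel."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings / (len(samples) / SAMPLE_RATE)

tone = make_tone(440, 0.8)          # concert A at moderate level
print(round(pitch(tone)), round(loudness(tone), 2))
```

Each function looks at the very same samples yet reports a different attribute—one signal, many parallel summaries.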

Some say that all of our senses are constructed. You wouldn’t think this could apply to touch. It seems pretty straightforward: you reach out and touch something, so you know it’s there in the real world. But that’s not the case. This becomes very apparent when something happens to our brain’s body map, as with the Third Hand illusion. The most obvious cases are where amputees feel phantom limbs and phantom pains. Not only can they experience very real, excruciating pains in their missing appendage, some can reach out and feel the touch of a coffee cup with their missing hands. When one neuroscientist moved the cup one amputee felt he was holding, the man suddenly cried out in pain, explaining, “It felt like you ripped the cup out of my fingers.”[8]

When you’re missing a hand it allows your brain to freely alter your body map. This can move your perceived hand to other parts of your body, such as just above the stump of your arm. Brushing that spot can make it seem like both arm and hand are being touched. Sometimes the brain will add another hand. One man had one on his shoulder and another on his face. That’s not too surprising since on the cortex, the touch area of the hand is between those of the arm and face. When part of the brain stops receiving signals, it takes over processing from nearby parts of the brain.

Cognitive scientist Donald Hoffman of the University of California, Irvine wrote in his book Visual Intelligence, “We don’t normally think of ourselves as constructing objects of touch. We think instead that we feel those keys, that lipstick, that wallet, not by construction but just as they are. But we’re fooled again by our constructive prowess. It’s only because we’re so fast and so effective at constructing objects of touch that it feels to us that we don’t construct them at all.”[9]

When a man known as Mr. S. suffered carbon monoxide poisoning, he lost most of his ability to construct his vision. He then had severe difficulty recognizing anything. When shown a photograph of a young woman, he said he thought it was a woman because she didn’t have hair on her arms. When asked where her eyes were, he pointed to her breasts.

Making the blind see

We take our ability to see for granted. It just seems natural, but it’s an ability that takes a lot of training. Like learning languages, it’s much easier to learn how to see when you’re young.

When a person blind from birth is given sight, you’d expect them to excitedly run around pointing out things, but this doesn’t happen. Instead they see different levels of unrelated brightness and colors, which they aren’t even sure are coming from their eyes. Even though they know what objects feel like, they have trouble recognizing what things are. They see bits and parts of things, but can’t put these together to see the objects themselves. The effects of lighting and shadows also need to be learned. Perspective and distance cues that to us indicate depth are particularly difficult to learn. And transferring what they know from touch to sight can be like learning a new language. Children can adjust and become confident using sight in a few days. For older adults it can take years, and they might never completely adjust to it.[10]

In the case of the vertical and horizontal kittens, two studies—one at Stanford University and one at the University of Cambridge— each raised a set of kittens in an environment that had no horizontal lines, and another set in an area without vertical lines. When placed in a normal environment, the vertical kittens were unable to see anything horizontal, such as the seat of a chair, while the horizontal kittens couldn’t see anything vertical, like the legs of chairs. The first set never jumped up onto a chair and the second kept bumping into the chair’s legs. The parts of their brains for seeing those things either never developed or weren’t activated.[11]

The same thing can happen with astigmatism, where a lens defect causes distorted vision. Infants with it who go untreated will have it permanently, while it can be optically corrected when it arises in adults. The critical age is from two to four years of age.

Other things aren’t so age-sensitive. Neurobiologist Susan Barry was cross-eyed from a very young age, so she lacked depth perception in spite of efforts and operations to correct this. To her the world always looked flat. That is, until she was in her forties, when, after six weeks of therapy, her car’s steering wheel suddenly floated out in front of her one day. Gradually her stereovision improved.

In an interview with New Scientist she recalled, “It was an incredibly joyful experience, a whole new world. I had the hardest time listening to my students because I was fascinated by the way their hands looked while gesturing. Leaves on trees, house plants, door knobs! Everything looked so beautiful. It was hard to describe to people: they looked at me like I was nuts.”[12]

Something similar happened to neuroscientist Bruce Bridgeman, who was nearly stereoblind until a 3-D movie gave him depth perception. After he put on the glasses, the characters leapt off the screen as soon as the film started, and he has had stereo vision ever since, describing it as a “whole new dimension of sight.”[13]

It’s estimated that between five and ten percent of the population lack depth perception to various degrees and from various causes, such as being unable to focus their eyes on a single point or from being blind in one eye.

Scientists have found that dressmakers are way above average in estimating distances, probably through active use of their depth perception, while they suspect that it’s helpful for artists to be stereoblind, better enabling them to discard depth in order to transfer stereo perceptions to a flat canvas or page. Some artists train themselves to see things as being flat, while others close one eye to flatten a scene.[14]

Researchers at the Massachusetts Institute of Technology fooled an AI program, getting it to see a model of a turtle as a rifle in those pictures with red borders. Those with black borders were identified as something of a similar class, such as revolver or holster. Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok.

The difficulties of learning vision become very apparent when trying to teach Artificial Intelligence (AI) how to interpret images. AIs don’t decipher images in the same way we do, with some very basic differences. One examined pictures of a turtle and said it was a rifle. They sometimes pick up aspects of an image that are hidden from us. An example of this is when shown two pictures of a cat that look identical to us, but to the AI one looked like a cat and the other a dog.

Adding to the difficulty is that AIs are essentially black boxes—we can’t see inside so we don’t know what they’re doing, how they’re doing it, or what features they’re noticing. We have to run experiments on them, just as we would with an infant or a monkey, to try to figure out what they’re seeing.

Our visual system is so complex, and there’s so much we still don’t understand, that there’s currently no way to model a program on it. From what I’ve seen, current efforts to improve AI perception rely on having AIs teach themselves using a variety of huge data sets of images. Unfortunately, that still gives us no insight into how AI perception works.

We're behind and we can't catch up

Perception takes time to process. Some bits can be processed in as little as a tenth of a second, but complex scenes take longer. With erotic or highly emotional scenes, recognition can take perhaps five seconds or more. There’s also a detectable delay as information is passed from one hemisphere to the other.[15] These delays put us out of sync with the world, yet we don’t constantly have double vision or hear echoes. Most of the studies I’ve seen in recent years support the hypothesis that our brains fix this by making predictions of what’s happening before the data from our senses is processed.

There’s a man who, after an illness, found he was seeing the world out of sync. To him, people were suddenly talking before their lips moved. Apparently part of his system for correcting this stopped working.[16]

When you look around, your brain isn’t processing an image of the world; it’s processing a constant stream of data. One shortcut it seems to use to deal with this deluge is to construct a view of what you’re seeing based on memories, expectations, and knowledge of your current situation.[17] This prediction is what you see while the real data is being processed.[18] When that’s done, it compares the data with its predictions, ignoring everything that isn’t changing and using the differences to update the predictions so they are a closer match to reality. The adjustments are passed back down the line as feedback.

Not only does this bring perceptions in sync with reality, it reduces processing time, because once a perception is constructed, everything that doesn’t change is cast aside. In addition, it makes your vision faster and clearer when you look at something you’re already familiar with.[19] Conversely, it’s slower for unfamiliar objects, which may be why an Australian study found that drivers are less likely to see less-common vehicles, such as motorcycles and buses, even when they’re warned to watch out for them.[20]
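The compare-and-update cycle might be caricatured like this, with a scene reduced to a handful of labeled features. The feature names are invented for illustration; the point is only that unchanged features cost nothing:

```python
# Hedged sketch of the update cycle: the brain keeps a predicted scene,
# diffs incoming data against it, discards what hasn't changed, and folds
# only the differences back into the prediction as feedback.
def update_scene(prediction, sensed):
    """Return just the differences, and bring the prediction up to date."""
    changes = {k: v for k, v in sensed.items() if prediction.get(k) != v}
    prediction.update(changes)          # feedback: prediction now matches
    return changes                      # only the differences get processed

scene = {"sky": "blue", "road": "clear", "light": "green"}
frame = {"sky": "blue", "road": "clear", "light": "red"}

print(update_scene(scene, frame))   # -> {'light': 'red'}
```

Only the traffic light needs any work at all; the sky and the road ride along for free, which is the whole economy of the shortcut.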

© John Richard Stephens, 2024.

I find that interesting since for many years I wondered how my grandfather could not have seen a cement truck coming towards him before he pulled out in front of it. He survived, but his neck was permanently turned toward his left side. Similarly I once didn’t see a man on a bicycle until I saw him pass behind me in the rearview mirror, even though I looked right at him before crossing the street. And I heard of someone else who didn’t see the mail truck they crashed into. We can look at things and not see them, especially if they’re unexpected.

When you look to the side to check for oncoming traffic, you’re mainly looking for cars or pickups. If you look quickly enough, you won’t be able to see movement so the scene will look more like a static picture and you’ll lose a key clue that something is approaching, which would further reduce your chance of seeing an unexpected cement truck or bicycle.

Obviously this predictive system isn’t perfect. There’s an illusion that retroactively makes you see a flash that wasn’t there, while a similar illusion can make you forget a flash that you did see.[21] There’s another that can make you see a dot change colors before it actually does.[22]

If you and your friends go someplace where there’s total darkness and have each of them slowly wave their hand in front of their own face, about half of them will see a shadowy shape moving. Since there’s no light, it’s impossible for them to actually see their hand. What they see is their brain’s prediction of their hand based on their proprioceptive sense of their hand’s location and movement.[23] Proprioception is what keeps you apprised of the position of your body parts using receptors in your skin, muscles, and joints.

They see the prediction, not their hand.

According to Gerrit Maus, a research psychologist at the University of California—Berkeley, “What we perceive doesn’t necessarily have that much to do with the real world, but it is what we need to know to interact with the real world.”[24]

 

If you like this, please subscribe below to receive an email the next time I post something wondrous. It's free.

 

Click here for the next article in this series:

Perception is Not Continuous



[1] Anonymous, “Brain Innately Separates Living And Non-living Objects For Processing”, ScienceDaily, August 14, 2009, http://www.sciencedaily.com/releases/2009/08/090813142430.htm, citing Bradford Z. Mahon, Stefano Anzellotti, Jens Schwarzbach, Massimiliano Zampini, Alfonso Caramazza, “Category-Specific Organization in the Human Brain Does Not Require Visual Experience”, Neuron, 2009; https://doi.org/10.1016/j.neuron.2009.07.012.

[2] Rajesh Rao and Dana Ballard, “Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects”, Nature Neuroscience, January 1999, pp. 79–87, https://www.nature.com/articles/nn0199_79, https://doi.org/10.1038/4580.

[3] Anil Ananthaswamy, “Self-Taught AI Shows Similarities to How the Brain Works”, Quanta Magazine, August 11, 2022, https://www.quantamagazine.org/self-taught-ai-shows-similarities-to-how-the-brain-works-20220811/.

[4] Anonymous, “JPEG for the Mind: How the Brain Compresses Visual Information”, ScienceDaily, February 11, 2011, http://www.sciencedaily.com/releases/2011/02/110210164155.htm, citing Eric T. Carlson, Russell J. Rasquinha, Kechen Zhang, and Charles E. Connor, “A Sparse Object Coding Scheme in Area V4”, Current Biology; https://doi.org/10.1016/j.cub.2011.01.013.

[5] Stephen L. Macknik and Susana Martinez-Conde, “And Yet It Doesn’t Move”, Scientific American Mind, vol. 26, no. 3, April 9, 2015, and as “How the Brain Can Be Fooled into Perceiving Movement”, https://www.scientificamerican.com/article/how-the-brain-can-be-fooled-into-perceiving-movement/.

[6] Elsevier. “Brain connectivity is disrupted in schizophrenia”, ScienceDaily, October 17, 2023, www.sciencedaily.com/releases/2023/10/231017123403.htm, citing Alexander Holmes, Priscila T. Levi, Yu-Chi Chen, Sidhant Chopra, Kevin M. Aquino, James C. Pang, and Alex Fornito, “Disruptions of Hierarchical Cortical Organization in Early Psychosis and Schizophrenia”, Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 2023, https://doi.org/10.1016/j.bpsc.2023.08.008.

[7] Daniel Levin, “Music special: The illusion of music”, New Scientist, no. 2644, February 23, 2008,

https://www.newscientist.com/article/mg19726441-500-music-special-the-illusion-of-music/.

And University of Wisconsin-Madison. “Banking on predictability, the mind increases efficiency”, ScienceDaily, November 22, 2010,

https://www.sciencedaily.com/releases/2010/11/101122152040.htm, citing Christian E. Stilp, Timothy T. Rogers and Keith R. Kluender, “Rapid efficient coding of correlated complex acoustic properties”, PNAS, November 22, 2010, https://doi.org/10.1073/pnas.1009020107.

[8] James Shreeve, “Touching the Phantom”, Discover Magazine, June 1993, pp. 34-42, https://www.discovermagazine.com/mind/touching-the-phantom, May 31, 1993.

[9] Donald D. Hoffman, Visual Intelligence, New York: W. W. Norton & Co., 1998, p. 180.

[10] Kayt Sukel, “Give the gift of sight – and insight will follow” (interview with Pawan Sinha), New Scientist, no. 3037, September 5, 2015, pp. 28-29, and as “Curing blind children reveals how the brain makes sense of sight”, September 2, 2015, https://www.newscientist.com/article/mg22730370-200-curing-blind-children-reveals-how-the-brain-makes-sense-of-sight/.

And Pawan Sinha, “Once Blind and Now They See”, Scientific American, vol. 309, no. 1, July 2013, pp. 48-55, and as “Blind Kids Gain Vision Late in Childhood While Giving a Lesson in Brain Science”, https://www.scientificamerican.com/article/blind-kids-gain-vision-late-childhood-while-giving-lesson-in-brain-science/, https://doi.org/10.1038/scientificamerican0713-48.

Also, University of Washington. “Man with restored sight provides new insight into how vision develops.” ScienceDaily, April 15, 2015, www.sciencedaily.com/releases/2015/04/150415140504.htm, citing E. Huber, J.M. Webster, A.A. Brewer, D.I.A. MacLeod, B.A. Wandell, G.M. Boynton, A.R. Wade, I. Fine, “A Lack of Experience-Dependent Plasticity After More Than a Decade of Recovered Sight”, Psychological Science, 2015; 26 (4): 393, https://doi.org/10.1177/0956797614563957.

[11] Helmut V.B. Hirsch and D.N. Spinelli, “Visual Experience Modifies Distribution of Horizontally and Vertically Oriented Receptive Fields in Cats”, Science, vol. 168, no. 3933, May 1970, pp. 869-871, https://doi.org/10.1126/science.168.3933.869.

And Colin Blakemore and Grahame F. Cooper, “Development of the Brain depends on the Visual Environment”, Nature, vol. 228, no. 5270, October 1970, pp. 477-478, https://doi.org/10.1038/228477a0.

[12] Helen Thomson, “Suddenly I see”, (interview with Susan Barry), New Scientist, June 6, 2009, no. 2711, p. 49, and as “How I learned to see in 3D”, June 3, 2009, http://www.newscientist.com/article/mg20227112.900-how-i-learned-to-see-in-3d.html, citing Susan Barry, Fixing My Gaze, Basic Books, 2009.

And “An Interview with ‘Stereo Sue’ ”, Review of Optometry, July 1, 2009, https://www.reviewofoptometry.com/article/an-interview-with-stereo-sue.

[13] Morgen Peck, “How a movie changed one man’s vision forever”, BBC Future, July 19, 2012, https://www.bbc.com/future/article/20120719-awoken-from-a-2d-world.

[14] University of California – Berkeley, “Dressmakers found to have needle-sharp 3-D vision”, ScienceDaily, June 14, 2017, www.sciencedaily.com/releases/2017/06/170614133734.htm, citing Adrien Chopin, Dennis M. Levi, and Daphné Bavelier, “Dressmakers show enhanced stereoscopic vision”, Scientific Reports, vol. 7, article 3435, 2017, http://dx.doi.org/10.1038/s41598-017-03425-1.

And Denise Grady, “The Vision Thing: Mainly in the Brain”, Discover Magazine, June 1993, pp. 56-66, https://www.discovermagazine.com/mind/the-vision-thing-mainly-in-the-brain, May 31, 1993.

[15] Michael S. Gazzaniga, “One Brain—Two Minds?” in Irving L. Janis, ed., Current Trends in Psychology, Los Altos, CA: William Kaufman, 1977, pp. 7-13.

[16] Helen Thomson, “Man hears people speak before seeing lips move”, New Scientist, no. 2924, July 6, 2013, p. 11, and the longer version “Mindscapes: First man to hear people before they speak”, July 4, 2013, https://www.newscientist.com/article/dn23813-mindscapes-first-man-to-hear-people-before-they-speak/, citing Cortex, https://doi.org/10.1016/j.cortex.2013.03.006.

[17] Bielefeld University, “How the brain leads us to believe we have sharp vision”, ScienceDaily, October 17, 2014, www.sciencedaily.com/releases/2014/10/141017101339.htm, citing Arvid Herwig and Werner X. Schneider, “Predicting object features across saccades: Evidence from object recognition and visual search”, Journal of Experimental Psychology: General, 2014; 143 (5): 1903, https://doi.org/10.1037/a0036781.

[18] Rensselaer Polytechnic Institute, “Crystal (Eye) Ball: Visual System Equipped With ‘Future Seeing Powers’ ”, ScienceDaily, May 16, 2008, http://www.sciencedaily.com/releases/2008/05/080515145356.htm, citing Mark Changizi, Cognitive Science, May-June, 2008.

And Benedict Carey, “Anticipating the Future to ‘See’ the Present”, The New York Times, June 10, 2008, https://www.nytimes.com/2008/06/10/health/research/10mind.html.

[19] Radboud University Nijmegen, “Expectations lead to less but more efficient processing in the human brain”, ScienceDaily, July 26, 2012, https://www.sciencedaily.com/releases/2012/07/120726094506.htm, citing Peter Kok, Janneke F.M. Jehee, Floris P. de Lange, “Less Is More: Expectation Sharpens Representations in the Primary Visual Cortex”, Neuron, 2012; 75 (2): 265, https://doi.org/10.1016/j.neuron.2012.04.034.

And Anonymous, “Expectations Speed Up Conscious Perception”, ScienceDaily, February 7, 2011, http://www.sciencedaily.com/releases/2011/02/110203081445.htm, citing L. Melloni, C.M. Schwiedrzik, N. Muller, E. Rodriguez, and W. Singer, “Expectations Change the Signatures and Timing of Electrophysiological Correlates of Perceptual Awareness”, Journal of Neuroscience, 2011; 31 (4): 1386, https://doi.org/10.1523/jneurosci.4570-10.2011.

[20] Springer Science+Business Media. “ ‘Element of surprise’ explains why motorcycles are greater traffic hazard than cars”, ScienceDaily, January 27, 2014, https://www.sciencedaily.com/releases/2014/01/140127101057.htm, citing Vanessa Beanland, Michael G. Lenné, Geoffrey Underwood, “Safety in numbers: Target prevalence affects the detection of vehicles during simulated driving”, Attention, Perception, & Psychophysics, 2014, https://doi.org/10.3758/s13414-013-0603-1.

[21] California Institute of Technology, “Time-traveling illusion tricks the brain: How the brain retroactively makes sense of rapid auditory and visual sensory stimulation”, ScienceDaily, October 9, 2018, https://www.sciencedaily.com/releases/2018/10/181009113612.htm, citing Noelle R.B. Stiles, Monica Li, Carmel A. Levitan, Yukiyasu Kamitani, and Shinsuke Shimojo, “What you saw is what you will hear: Two new illusions with audiovisual postdictive effects”, PLOS One, 2018; 13 (10): e0204217, https://doi.org/10.1371/journal.pone.0204217.

[22] Jan Westerhoff, “What are you?”, New Scientist, no. 2905, February 23, 2013, pp. 34-37, and as “Only you: The one and only you”, www.newscientist.com/article/mg21729052.300-the-self-the-one-and-only-you.html.

[23] University of Rochester, “Seeing in the dark: Most people can see their body’s movement in the absence of light”, ScienceDaily, October 31, 2013, http://www.sciencedaily.com/releases/2013/10/131031090431.htm, citing K.C. Dieter, B. Hu, D.C. Knill, R. Blake, and D. Tadin, “Kinesthesis Can Make an Invisible Hand Visible”, Psychological Science, 2013, https://doi.org/10.1177/0956797613497968.

[24] University of California – Berkeley, “Hit a 90 mph baseball? Scientists pinpoint how we see it coming”, ScienceDaily, May 8, 2013, http://www.sciencedaily.com/releases/2013/05/130508123017.htm, citing Gerrit W. Maus, Jason Fischer, and David Whitney, “Motion-Dependent Representation of Space in Area MT,” Neuron, 2013; 78 (3): 554, https://doi.org/10.1016/j.neuron.2013.03.010.

Creating Our Perceptions (What is Real? 4)

These posts make more sense when read in order.

Please click here for the first article in this series to enter the rabbit hole.

 

Depth perception

Much of our perception is created by our brains. Each of our retinas sees the world as two-dimensional—since we have a flat layer of receptor cells—and, interestingly enough, our brains originally process it that way. Using visual cues and information from the other eye, the brain gradually builds a three-dimensional view of the world a layer at a time. Depth is not something your eyes can see. It’s constructed in your brain.

The top illusion by Kanizsa and D. Varin shows how visual cues make us see a rectangle floating in front of the four targets, while the addition of caps in the upper right image destroys the illusion. The rectangle no longer floats above the semicircles. This illusion works because experience tells us that most objects are opaque, like the imaginary rectangle, and that parts of objects are often blocked from view, as the targets appear to be.

The four lower drawings illustrate how binocular vision converts the two dimensions each eye sees into three dimensions, even though the images themselves are actually flat. Look at either the top or bottom pair and cross your eyes until the two dots become one and you see three boxes on each line. The upper middle image projects back, while the lower one extends towards you. For these images reality is two dimensional, yet we see the illusion of depth. These images were redesigned by John Richard Stephens and are based on older versions.
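The cross-eyed trick works because each eye sees the boxes shifted by a slightly different amount, and your brain reads the size of that shift as depth. For readers who like to tinker, here is a minimal Python sketch of that triangulation; the camera-style numbers (focal length in pixels, disparity in pixels) are invented for the example, with only the 6.5 cm eye spacing being roughly human:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Triangulation: depth is inversely proportional to the shift
    # (disparity) of a point between the left and right images.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Eyes are roughly 6.5 cm apart; the other numbers are illustrative.
z = depth_from_disparity(focal_px=800, baseline_m=0.065, disparity_px=26)
print(round(z, 2))  # 2.0 -> the point is about two meters away
```

Notice that as the disparity shrinks toward zero, the estimated depth grows without bound, which is one reason binocular depth judgment gets poor at a distance.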

Binocular vision is not the only way you perceive depth, as the two sets of illusions I've included show. When looking with one eye, you can also move your head up and down or left and right to discern which objects are closer and which are farther: closer objects shift more across your view and block the view of farther ones.

We see our entire visual field as three dimensional, but this is an illusion. If you close one of your eyes and look at your nose, you'll see that it and part of your face block part of your view in that eye. By closing that eye and opening the other, you can see how much of your peripheral vision is blocked. Since we rely largely on both eyes to see depth, even with both eyes open we can only see those blocked areas in two dimensions—yet everything looks 3D to us. It's thought this is because our brains' predictions seamlessly add dimension to those areas. Of course, you can gain full depth perception for those areas by turning your head to bring them into view of both eyes.

Our hearing is also three dimensional, indicating which direction and how far away the source of a sound is. Using a number of clues, our brains piece this information together to construct our hearing’s dimensionality.

Colors only exist in your mind

Unlike depth, which actually exists but has to be recreated by our brains, colors are an illusion that doesn't exist in the real world outside of our brains. Color is an interpretation, just as we interpret tastes as bitter or sweet, or the vibrations we hear as discordant or harmonious. Colors are a sensation that, as I mentioned earlier, we don't see as constant. Our brains vary them with the environment, as in the Checker-Shadow Illusion.

In the Checker-Shadow Illusion, checkerboard squares A and B are identical shades of gray, as can be seen on the right where they have been extracted and placed on a more neutral background. The illusion of lighting and a shadow tricks our brains. Edward H. Adelson, Massachusetts Institute of Technology, © 1995.

Another example is that our cone receptors become less sensitive to colors as we age and the lenses of our eyes yellow, but we don’t notice these changes because our brains make adjustments for it.[1]

Our perception of colors also changes with the seasons. Researchers think this keeps our color perceptions consistent in spite of changes in the environment.[2] In addition, there are illusions that can make a single color appear to be two different colors.

Colors are in a continuum, blending from one to the next along a line extending from ultraviolet to infrared, but these are just electromagnetic waves—like microwaves and FM radio waves, only of a different frequency. What we see when we perceive a color is a wave of light vibrating at a specific frequency. There is no color in the wave itself. Increase or decrease the frequency and you'll perceive a different color, but except for that change, the waves are identical.

Your retina’s three types of color receptor cells, which are called cones because of their shape, signal their reception of photons within a particular range of wavelengths, which we call green, blue, and red, although red cones primarily respond in the yellow range. The next level of cells takes the information from these cones and uses it to derive red and brightness, but colors aren't coded by single cells; rather, they're coded by patterns of cells. Our brains take this information and blend it together to create the range of colors we see.
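Since the wave itself carries no color, any mapping from wavelength to a display color is a convention. For the curious, here is a toy Python sketch using one common piecewise approximation; the breakpoints are arbitrary conventions for screens, not properties of the light:

```python
def wavelength_to_rgb(nm):
    # Map a visible wavelength (roughly 380-780 nm) to an RGB triple.
    # The numbers are a display convention, not physics.
    if 380 <= nm < 440:
        r, g, b = -(nm - 440) / 60, 0.0, 1.0
    elif 440 <= nm < 490:
        r, g, b = 0.0, (nm - 440) / 50, 1.0
    elif 490 <= nm < 510:
        r, g, b = 0.0, 1.0, -(nm - 510) / 20
    elif 510 <= nm < 580:
        r, g, b = (nm - 510) / 70, 1.0, 0.0
    elif 580 <= nm < 645:
        r, g, b = 1.0, -(nm - 645) / 65, 0.0
    elif 645 <= nm <= 780:
        r, g, b = 1.0, 0.0, 0.0
    else:
        r, g, b = 0.0, 0.0, 0.0   # outside the visible band: no label at all
    return round(r, 2), round(g, 2), round(b, 2)

print(wavelength_to_rgb(650))  # (1.0, 0.0, 0.0) -> what we agree to call "red"
```

Different approximations use different breakpoints, which is rather the point: the labels are negotiable, the wavelengths are not.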

Three of the many variations of Benham’s disks.

There are even several optical illusions that can make you see colors that aren't there, such as Benham's Top or Disk. These disks used to be sold as tops for children. They are usually half black and half white, where the white side has twelve or more black concentric arcs of different lengths. When the disk or top is spun, the arcs become circles of various colors, depending on the speed. Reversing the spin's direction changes the colors. What we see are color illusions; the disk itself is entirely black and white. You can see the colors and video them, but you can't photograph them.


Evolutionary biologist Richard Dawkins wrote, “I used to think false colour images were a kind of cheat. I wanted to know what the scene ‘really’ looked like. I now realize that everything I think I see, even the colours of my own garden through the window, are ‘false’ in the same sense: arbitrary conventions used, in this case by my brain, as convenient labels for wavelengths of light.”[3]

Since colors are created in our brains, scientists can turn them on or off by applying magnetic fields to the lower center of the brain towards the back of the head. They can also use this technique to make us see unusual colors.[4] Some people don't even need magnetic fields; they can hallucinate colors at will.[5]

Over and under the rainbow

A double rainbow in Alma, Michigan. Tom (adjusted).

By looking at a rainbow you can see the complete spectrum of the pure colors that we’re able to see—from red along the outer edge to violet on the inner one, sometimes with a couple of bands of repeated colors on the lower edge (a supernumerary bow). Blended colors—such as purple, pink, brown, and olive—are a mix of two or more pure colors, while black, white, and gray are considered non-colors. None of those are in a rainbow. Neither are wavelengths we can’t see, such as ultraviolet, infrared, radio waves, microwaves, etcetera.

Most Americans see five or six bands of color in rainbows, but that's largely cultural. We say there are seven, but that's because Isaac Newton added a couple to reach that number, which he considered mystical. You'll commonly see seven bands in artwork of rainbows because that's how many the artists think rainbows are supposed to have. Some cultures see different ones, with several not recognizing blue as a color. Rainbows are actually a spectrum of uncountable colors.

Rainbows are usually opposite the sun because the sunlight has to go past you and reflect back at a specific angle from the droplets of water in the air, which act as prisms, splitting the wavelengths. The height of the rainbow is determined by how high the sun is in the sky—the lower the sun, the higher the rainbow. Red rainbows are seen at sunrise and sunset when longer wavelengths of light are dominant.
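That "specific angle" can actually be computed from Snell's law. Here is a minimal Python sketch, assuming a spherical droplet and a single internal reflection: scan the incidence angles, track each ray's total deviation, and the bow appears where the deviation bottoms out, about 42 degrees from the point opposite the sun.

```python
import math

def rainbow_angle(n):
    # Follow sunlight into a spherical droplet: refract in, reflect once
    # off the back, refract out. The total deviation has a minimum, and
    # light piles up there, which is where the bow appears.
    min_deviation = 360.0
    for step in range(1, 9000):                # incidence angles 0.01..89.99 degrees
        i = math.radians(step / 100)
        r = math.asin(math.sin(i) / n)         # Snell's law at the surface
        deviation = math.degrees(2 * i - 4 * r) + 180
        min_deviation = min(min_deviation, deviation)
    return 180 - min_deviation                 # angle from the point opposite the sun

# Red light bends less in water (n is about 1.331) than violet (about 1.344),
# so red exits near 42 degrees and violet a couple of degrees lower,
# putting red on the outer edge of the primary bow.
```

A second internal reflection inside the droplet produces a larger, reversed deviation, which is what makes the fainter secondary bow with its colors in the opposite order.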

Various sun halos and arcs, with sun dogs off to each side of the sun as seen from the South Pole. Lieutenant Commander Heather Moe, NOAA Corps (adjusted).

You can get rainbows or bits of them under unusual circumstances, such as circling the sun, on wispy cirrus clouds, or as sun dogs near the sun. These are produced by reflection and refraction from ice crystals high in the sky. You can also get moonbows at night. Usually they are white, since we can't see colors in dim light, but when the moon is particularly bright we can see the colors, and even double moonbows.

Here in Hawai‘i—I live on Maui—we often get double rainbows. The second rainbow is a bit fainter and well above the first. And the colors are reversed, with red at the bottom and violet on top. You can also see rainbows when you’re looking down at clouds, sprinklers, or mist from waterfalls, but generally the sun has to be at your back.

While rainbows look real, they’re just reflections—a trick of the sunlight and water or ice—which is why you can’t go over the rainbow, or under it, or get to the end of one, where all the gold is supposedly hidden.

The Falling Tree

© John Richard Stephens, 2024.

Colors and much of our vision are creations of our minds, and the same is true of our other senses. There's the oft-quoted question, "If a tree falls in a forest and no one is there to hear it, does it make a sound?" This philosophical question was raised as part of the subjectivist versus objectivist debate. We'll get to the subjectivist argument shortly. Here I want to take the objectivist view.

The answer to the question depends on your definition of sound. If sound is the pressure waves the tree makes that pass through the air, then it does make a sound. But that's not what we experience as sound, which is the sensation created by your brain when vibrations move the tiny hairs inside your inner ears—more specifically, in your cochlea. The hairs turn the pressure frequencies into electrical signals, which your brain interprets as sound. If that's your definition, then the falling tree doesn't make a sound without a listener, whether it's Alice or a stink bug.
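For readers who like to tinker, the cochlea's trick of turning one pressure wave into many frequency-specific signals can be imitated with a toy Fourier projection in Python. This is a simplified sketch, not a model of real hearing:

```python
import math

def tone_strength(samples, rate, freq):
    # Project the signal onto a sine and cosine at one frequency and
    # measure the amplitude there, a bit like one hair cell responding
    # to its own narrow band of vibration.
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * k / rate) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * k / rate) for k, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n

# One second of a pure 440 Hz "pressure wave" sampled 8,000 times.
rate = 8000
wave = [math.sin(2 * math.pi * 440 * k / rate) for k in range(rate)]

print(round(tone_strength(wave, rate, 440), 3))  # 1.0 -> the 440 Hz tone is present
print(round(tone_strength(wave, rate, 880), 3))  # 0.0 -> nothing an octave higher
```

The wave itself is just numbers; "an A note" only exists once something, hair cell or formula, picks that frequency out.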

Hearing is rather like a radio picking up a radio signal. Like the pressure waves, the radio waves are there whether a radio is tuned to them or not. Is that sound? Perhaps, but you won’t be able to hear it without the radio. The radio tunes into a tiny band of electromagnetic waves and converts them into mechanical pressure waves, much as our brain then converts these pressure waves into what we hear.

If your definition requires the perception of sound, then a tree could fall, creating pressure waves that strike the eardrums of a deaf person, and there would be no sound. But as long as a person or some forest creature hears it, they will perceive a sound. Sound, by this definition, is a sensation, an interpretation in the brain. The falling tree exists. The sound might or might not.

Your perception of sight, sound, touch, taste, and smell are creations of your mind and they don’t exist outside of your mind except in completely different forms. They are neurophysiological processes that your brain uses to make you aware of something in the environment. It’s part of how your brain creates your perception of the world around you.

While our senses aren't very accurate, that might not be such a bad thing. With all their flaws, they are efficient. There's just no way our brains could handle the torrent of information constantly flooding our senses, so we have to take shortcuts.

Our brains predict, enhance, and alter our perceptions in order to make them more efficient and useful to us. Most of the time we never notice, because we generally have no way to compare our perceptions to reality. Human evolution has traded accuracy for efficiency, and for the most part the trade serves us well. While our perceptions aren't completely accurate, they're accurate enough for you to survive...at least, most of the time.

We rely on our senses and we’re not used to thinking of them as being wrong. Some aircraft crashes are caused by pilots who are convinced they are flying level, when actually they’re diving. When their instruments tell them something different from their senses, they can mistakenly think the instrument has malfunctioned and they’ll fly an aircraft in excellent shape right into a mountain, the ocean, or the ground, especially at night or in fog. Such accidents are categorized as “Controlled Flight into Terrain”. This is apparently what happened to John F. Kennedy Jr., Stevie Ray Vaughan, Buddy Holly, and probably Kobe Bryant.

Similar illusions can cause driving accidents. Fog can make you drive faster than you think you are going, as can the night in certain circumstances. The flashing lights of emergency vehicles can seem farther away than they are, causing you to crash into them. Also, if you follow someone at night who has only one working taillight, or if you focus on just one of the lights, you'll think it's going somewhere it isn't, causing you to crash into something or drive into a ditch. This was a big problem for truck convoys driving at night during the air raid blackouts of World War II.

One night I was following two vehicles that were kicking up clouds of dust on a dirt road. The driver of the truck in front of me was trying to closely follow the front vehicle by watching its taillights alone, but I held back far enough to see the road. It was only a minute or so before I saw the truck drop towards the right and flip over onto its roof, as it rolled down the side of a hill. The driver was okay, but his truck wasn’t. Sometimes you can’t trust your senses.

 

If you like this, please subscribe below to receive an email the next time I post something wondrous. It's free.

 

Click here for the next article in this series:

Constructing Our World



[1] Public Library of Science, "Brain, not eye mechanisms keep color vision constant across lifespan", ScienceDaily, May 8, 2013, http://www.sciencedaily.com/releases/2013/05/130508172135.htm, citing Sophie Wuerger, "Colour Constancy Across the Life Span: Evidence for Compensatory Mechanisms", PLoS ONE, 2013; 8 (5): e63921, https://doi.org/10.1371/journal.pone.0063921.

[2] Carl Engelking, “Your Color Perception Changes With the Seasons”, Discover Magazine, August 18, 2015, https://www.discovermagazine.com/mind/your-color-perception-changes-with-the-seasons.

[3] Richard Dawkins, Unweaving the Rainbow, New York: Houghton Mifflin Co., 1998.

[4] Donald D. Hoffman, Visual Intelligence, New York: W. W. Norton & Co., 1998, pp. 108-10.

[5] University of Hull, “Some people can hallucinate colors at will”, ScienceDaily, November 30, 2011, http://www.sciencedaily.com/releases/2011/11/111130100224.htm, citing William J. McGeown, Annalena Venneri, Irving Kirsch, Luca Nocetti, Kathrine Roberts, Lisa Foan, and Giuliana Mazzoni, “Suggested visual hallucination without hypnosis enhances activity in visual areas of the brain,” Consciousness and Cognition, November 26, 2011, https://doi.org/10.1016/j.concog.2011.10.015.
