These posts make more sense when read in order.
Please click here for the first article in this series to enter the rabbit hole.
Our brains don’t receive anything like photographs or videos from our eyes. Our visual receptors—the rods and cones—collect minimal information. A cone is triggered depending on the photon’s wavelength—which we interpret as color—and by the number of photons—which indicates brightness. The overall pattern of receptors activated can reveal the position of edges and shading. The information is sparse and basic. It’s up to the brain to sort out what elements are parts of which objects. This can be difficult since most objects are partly hidden and must also be recognized when viewed from various angles.
We also need to determine what an object is when it has unique characteristics that are unlike anything we’ve ever seen, such as a teapot shaped like Lewis Carroll’s frumious Bandersnatch. Most of us wouldn’t recognize a frumious Bandersnatch if we saw one, but we could still tell it was also a teapot. Oddly, we do this by breaking the input down into its many elements.
First, the left half of what you see is sent through neurons to your brain’s right hemisphere, while the right half of your visual field goes to your left hemisphere. So the two halves of your visual field are processed separately and later woven together.
Vision is important to primates, yet we receive very little information from our eyes. For a part of our visual field about the size of the full moon, there are only about 40 neurons coming from the retina, and they can’t send much information—essentially just increasing or decreasing their firing rates. They connect to around 12,000 neurons in the first level of the cortex, and their signals are passed on to many more, which reveals how little emphasis our brains put on actual sight and how much on processing.
The information arriving from our eyes is broken up and distributed in pieces. At the lower levels of processing, everything is compartmentalized: each attribute is processed separately, passing along more than 300 circuits to more than 30 sensory centers that each specialize in a single characteristic, such as lines, curves, sizes, colors, or textures. There are groups of neurons dedicated to more specific things, like detecting edges oriented in a particular direction, while other groups deal with edges of a different orientation. There’s even an area that deals with living things and another with non-living objects.[1]
Each circuit passes its processed information up a progression of more advanced levels. At the lower levels of processing, neurons are very specific: for example, one responds only to visual data from 30 degrees to the left of our center of vision, while another is sensitive only to sound from that same spot. Higher levels are more abstract and cover larger portions of the field of view. They bring information together, along with memories and knowledge, to see concepts and the overview—the forest, as it were.
The early stages also eliminate redundancies. If you see a straight line, you don’t need to process the entire line—you just need to note the points at each end and the line’s angle. This is faster and more efficient, and it requires less brain power. Colors and shading can also be summarized.
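To make that concrete, here’s a minimal sketch in Python (my own analogy, not a model of actual neurons) of how summarizing a straight line by its endpoints throws away redundant data:

```python
# A toy illustration of redundancy elimination: a straight run of a
# hundred pixels collapses to two endpoints and an angle.
import math

def compress_line(points):
    """Collapse a run of collinear points to its endpoints and angle."""
    start, end = points[0], points[-1]
    angle = math.degrees(math.atan2(end[1] - start[1], end[0] - start[0]))
    return {"start": start, "end": end, "angle_deg": round(angle, 1)}

line = [(x, x) for x in range(100)]   # 100 pixels along a diagonal
print(compress_line(line))            # {'start': (0, 0), 'end': (99, 99), 'angle_deg': 45.0}
```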
The image on the left shows the inside of the brain's left hemisphere, while the right shows the outside. Visual processing takes place at the back of the brain before going to the front and entering our awareness. Simona Buetti and Alejandro Lleras, University of Illinois at Urbana-Champaign.
From our retinas, our optic nerves go to Area V1 of the visual cortex, which is about the size of a postage stamp and sits all the way at the back of our brains. It analyzes contrasts and picks out the borders of the various parts of the scene. Predictions of what to expect come down from Area V2. If the signal is different, the error is passed up to Area V2, which analyzes positions and spatial orientations, while combining data from each of our eyes to produce depth perception. This area also starts identifying colors. Again, Area V3 passes down expectations and Area V2 passes up errors. This continues for each level, with predictions coming down and errors going up.[2] Area V3 focuses on colors and shapes. Area V4 adds details and assembles the scene. Area V5 adds motion and controls eye movements to follow those motions. From Area V1 there are two main streams: the “where” spatial stream goes up toward the top of the brain, while the “what” object stream passes along the lower side. One study suggests the reason for this split is that it improves our ability to predict what is about to happen next visually.[3] Of course, it’s tremendously more complicated than this, with all sorts of stuff flying back and forth between the 300 specialized areas, but you get the basic idea.
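Here’s a toy sketch of that predictions-down, errors-up ladder (loosely in the spirit of the predictive coding model in note [2]; the structure, names, and update rule are my own simplifications):

```python
# Each level predicts the activity of the level below it; only the
# prediction errors are passed upward, and errors refine the predictions.
def predictive_step(levels, sensory_input, learning_rate=0.1):
    signal = sensory_input
    for level in levels:                              # from V1 upward
        error = signal - level["prediction"]          # what the prediction missed
        level["prediction"] += learning_rate * error  # feedback refines it
        signal = error                                # only the error moves up
    return levels

levels = [{"prediction": 0.0} for _ in range(3)]      # say, V1, V2, V3
for brightness in [1.0, 1.0, 1.0, 0.2]:               # steady input, then a change
    predictive_step(levels, brightness)
# With repeated identical input the errors shrink toward zero; the sudden
# change to 0.2 produces a large error that climbs the hierarchy.
```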
A lot happens to the information in the lower and middle levels of processing. One thing I’d like to note is that V4 highlights angles and sharp curves, while essentially ignoring flat edges and shallow curves.[4] This greatly reduces processing, but has a curious consequence.
British researchers studied individuals as they watched a video of a magician who threw a ball up in the air a few times, catching it with the same hand. On his final throw he only pretended to throw it. Two-thirds of the audience insisted they saw the ball rise and vanish in mid-air, but their eyes actually remained looking at the magician’s hand on the last throw and hadn’t moved up to where the ball was supposed to have disappeared. This indicates their eyes weren’t fooled by the trick—their mind was.[5]
This is because the V4 area ignores straight lines, focusing on the beginning and the end. By ignoring the middle, their brains filled it in using imagination. It’s one of those shortcuts our brains take to compress data. Magicians take advantage of this. When they want you to watch their hand, they move it in a curved line. When they don’t want you to see it, they move it in a straight line.
At the mid and higher levels of processing the different attributes are pulled together from each hemisphere into the visual image we perceive. You’d think that attributes like color and shape would be processed together, but they are completely separate until they are combined at the end of the process in what is called “neural gluing”.
Our vision starts with basics and builds up from there. A tumor in the occipital lobe can cause a person to see simple anomalies, such as flashes of light, while a temporal lobe tumor, which is further along in perceptual processing, can cause elaborate hallucinations.
All of that processing is done below our level of awareness. It’s not until the assembled perceptions reach the forebrain that the scene finally enters our consciousness. When you visualize a scene in your mind’s eye, parts of this process obviously take place without any stimuli from your eyes, yet imagining an explosion uses the very same neuronal pathways as when you actually see an explosion. Likewise, remembering an explosion also uses those same pathways, which is why experiences and imagination can interfere with and alter earlier memories.

And remember, all of this processing is happening within fractions of a second and is going on continuously, since the incoming signals don’t stop as long as your senses are functioning. The impulse signals from your eyes, ears, and other senses are a constant flow, and our brains have to pick out the changes from one moment to the next. It’s a tremendous task that requires our brains to take shortcuts to get the job done.
As a side note, an experiment showed that when someone has their eyes closed and is receiving no visual input, most of the signals flow from the higher visual levels to the lower ones—these would be the predictions—but when the person is given psychedelics and starts hallucinating, the flow reverses, going from the lower levels to the higher ones and interfering with the predictions.
Interestingly, studies are appearing that indicate schizophrenia is associated with a disruption between the bottom-up signals of sensory input and the top-down signals of executive control.[6]
Because sight is compartmentalized, damage to the system can cause some strange problems. There are people who can only see one object at a time and those who can’t see any objects—only the parts. There’s a woman who lost her ability to see movement and now experiences life as a series of still images. Another can only see objects in motion, just as frogs are blind to motionless insects. There’s a person who, when shown a picture of an octopus, thought it was a spider and that a pretzel was a snake. There’s change blindness, where someone doesn’t notice changes, even when you swap one photograph for another.
A girl with otherwise normal eyesight was found to see upside-down and backwards, so that when reaching for a cup on her left, she’d reach to her right. The same thing happened to a Spanish Civil War soldier, who—after being shot in the head—had the reversal happen not only to his sight, but to his hearing and sense of touch as well. One man, after suffering a stroke, began seeing objects 30% smaller than they actually are—although, remember, we normally see things we focus on as being larger than they are. For another man, it looked like the right half of people’s faces was melting. To another, faces appeared distorted like “demons”, though photographs of the same faces did not.
Because our brains fill in much of what we see with what we expect to see, we sometimes won’t see something that is right in front of us unless we happen to look directly at it, or someone points it out to us. This happens more and more as you get older, probably partly because your brain has more experience at filling in the blanks, and partly because you slow down with age, which forces your visual processing to cut more corners.
It’s not just vision that’s compartmentalized during processing. Hearing and taste sensations are dealt with the same way. With sounds, signals from your eardrums are separated into timbre, pitch, loudness, reverberation, tone duration, location, and timing. Each is processed separately and then assembled into what we hear. But there are problems our brains have to overcome. Sounds are often ambiguous, incomplete, and the source’s location and identity may be unclear, so again our brains make educated guesses to fill in the gaps.[7]
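As a rough illustration of those separate channels (my own simplification, covering just two of the attributes listed), here’s how pitch and loudness can each be computed independently from the same raw sound wave:

```python
# Pitch and loudness extracted separately from one waveform, the way
# parallel channels each handle a single attribute.
import numpy as np

def analyze(wave, sample_rate):
    loudness = np.sqrt(np.mean(wave ** 2))              # RMS amplitude
    spectrum = np.abs(np.fft.rfft(wave))                # frequency content
    freqs = np.fft.rfftfreq(len(wave), 1 / sample_rate)
    pitch = freqs[np.argmax(spectrum)]                  # dominant frequency
    return {"pitch_hz": float(pitch), "loudness": float(loudness)}

rate = 44100
t = np.arange(rate) / rate                              # one second of audio
wave = 0.5 * np.sin(2 * np.pi * 440 * t)                # a 440 Hz tone (A4)
print(analyze(wave, rate))                              # pitch_hz ~440, loudness ~0.35
```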
Some say that all of our senses are constructed. You wouldn’t think this could apply to touch. It seems pretty straightforward: you just reach out and touch something, so you know it’s there in the real world. But that’s not the case. This is very apparent when something happens to our brain’s body map, as with the Third Hand illusion. The most obvious cases are amputees who feel phantom limbs and phantom pains. Not only can they experience very real, excruciating pain in their missing appendage, some can reach out and feel the touch of a coffee cup with their missing hands. When one neuroscientist moved the cup an amputee felt he was holding, the man suddenly cried out in pain, explaining, “It felt like you ripped the cup out of my fingers.”[8]
When you’re missing a hand, your brain is free to alter your body map. This can move your perceived hand to other parts of your body, such as just above the stump of your arm. Brushing that spot can make it seem like both arm and hand are being touched. Sometimes the brain will add another hand. One man had one on his shoulder and another on his face. That’s not too surprising, since on the cortex the touch area of the hand sits between those of the arm and face. When part of the brain stops receiving signals, it takes over processing from nearby parts of the brain.
Cognitive scientist Donald Hoffman of the University of California, Irvine, wrote in his book Visual Intelligence, “We don’t normally think of ourselves as constructing objects of touch. We think instead that we feel those keys, that lipstick, that wallet, not by construction but just as they are. But we’re fooled again by our constructive prowess. It’s only because we’re so fast and so effective at constructing objects of touch that it feels to us that we don’t construct them at all.”[9]
When a man known as Mr. S. suffered carbon monoxide poisoning, he lost most of his ability to construct his vision. He then had severe difficulty recognizing anything. When shown a photograph of a young woman, he said he thought it was a woman because she didn’t have hair on her arms. When asked where her eyes were, he pointed to her breasts.
Making the blind see
We take our ability to see for granted. It just seems natural, but it’s an ability that takes a lot of training. Like learning languages, it’s much easier to learn how to see when you’re young.
When a person blind from birth is given sight, you’d expect them to excitedly run around pointing out things, but this doesn’t happen. Instead, they see unrelated patches of brightness and color, which they aren’t even sure are coming from their eyes. Even though they know what objects feel like, they have trouble recognizing what things are. They see bits and parts of things but can’t put these together to see the objects themselves. The effects of lighting and shadows also need to be learned. Perspective and distance cues that to us indicate depth are particularly difficult to learn. And transferring what they know from touch to sight can be like learning a new language. Children can adjust and become confident using sight in a few days. For older adults it can take years, and they might never completely adjust.[10]
In the case of the vertical and horizontal kittens, two studies—one at Stanford University and one at the University of Cambridge—each raised a set of kittens in an environment that had no horizontal lines, and another set in an area without vertical lines. When placed in a normal environment, the vertical kittens were unable to see anything horizontal, such as the seat of a chair, while the horizontal kittens couldn’t see anything vertical, like the legs of chairs. The first set never jumped up onto a chair and the second kept bumping into the chair’s legs. The parts of their brains for seeing those things either never developed or weren’t activated.[11]
The same thing can happen with astigmatism, where a lens defect causes distorted vision. Infants whose astigmatism goes untreated will have the distortion permanently, while it can be optically corrected when it arises in adults. The critical period is from two to four years of age.
Other things aren’t so age sensitive. Neurobiologist Susan Barry was cross-eyed from a very young age, so she lacked depth perception in spite of efforts and operations to correct this. To her the world always looked flat. That is, until she was in her forties, when, after six weeks of therapy, her car’s steering wheel suddenly floated out in front of her one day. Gradually her stereovision improved.
In an interview with New Scientist she recalled, “It was an incredibly joyful experience, a whole new world. I had the hardest time listening to my students because I was fascinated by the way their hands looked while gesturing. Leaves on trees, house plants, door knobs! Everything looked so beautiful. It was hard to describe to people: they looked at me like I was nuts.”[12]
Something similar happened to neuroscientist Bruce Bridgeman, who was nearly stereoblind. His depth perception arrived with a 3-D movie: after he put on the glasses, the characters leapt off the screen as soon as the film started, and he has had stereo vision ever since, describing it as a “whole new dimension of sight.”[13]
It’s estimated that between five and ten percent of the population lack depth perception to various degrees and from various causes, such as being unable to aim both eyes at the same point or being blind in one eye.
Scientists have found that dressmakers are well above average at estimating distances, probably through active use of their depth perception. They also suspect that being stereoblind can actually help artists, making it easier to discard depth when transferring a scene to a flat canvas or page. Some artists train themselves to see things as flat, while others close one eye to flatten a scene.[14]
Researchers at the Massachusetts Institute of Technology fooled an AI program, getting it to see a model of a turtle as a rifle in the pictures with red borders. Those with black borders were identified as something of a similar class, such as a revolver or holster. Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok.
The difficulties of learning vision become very apparent when trying to teach Artificial Intelligence (AI) how to interpret images. AIs don’t decipher images the way we do, and some of the differences are very basic. One examined pictures of a turtle and said it was a rifle. They sometimes pick up aspects of an image that are hidden from us, as when two pictures of a cat looked identical to us, but to the AI one looked like a cat and the other a dog.
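For a sense of how such fooling can work, here’s a minimal sketch of one well-known technique, the fast gradient sign method; to be clear, the source doesn’t say this is the method the MIT team used, and the classifier and image below are hypothetical stand-ins:

```python
# Fast gradient sign method (FGSM): nudge every pixel slightly in the
# direction that most increases the classifier's error, producing an
# image that looks unchanged to us but can be mislabeled by the AI.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()                                    # gradient of error w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()  # tiny step per pixel
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: `classifier` and `turtle_image` are stand-ins.
# adv = fgsm_perturb(classifier, turtle_image, torch.tensor([turtle_class]))
# classifier(adv) may now answer "rifle" even though adv looks like a turtle.
```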
Adding to the difficulty is that AIs are essentially black boxes—we can’t see inside so we don’t know what they’re doing, how they’re doing it, or what features they’re noticing. We have to run experiments on them, just as we would with an infant or a monkey, to try to figure out what they’re seeing.
Our visual system is so complex, and there’s so much that we still don’t understand, that there’s currently no way to model a program on it. From what I’ve seen, current efforts to improve AI perception rely on having the AI teach itself from a variety of huge data sets of images. Unfortunately, that still gives us no insight into how AI perception works.
We're behind and we can't catch up
Perception takes time to process. For simple features it can be as fast as a tenth of a second, but complex scenes take longer. With erotic or highly emotional scenes, recognition can take perhaps five seconds or more. There’s also a detectable delay as information is passed from one hemisphere to the other.[15] These delays put us out of sync with the world, yet we don’t constantly have double vision or hear echoes. Most of the studies I’ve seen in recent years support the hypothesis that our brains appear to fix this by making predictions of what’s happening before the data from our senses is processed.
There’s a man who, after an illness, found he was seeing the world out of sync. To him, people were suddenly talking before their lips moved. Apparently part of his system for correcting this stopped working.[16]
When you look around, your brain isn’t processing an image of the world; it’s processing a constant stream of data. One shortcut it seems to use to deal with this deluge is to construct a view of what you’re seeing based on memories, expectations, and knowledge of your current situation.[17] This prediction is what you see while the real data is being processed.[18] When that’s done, it compares the data with its predictions, ignoring everything that isn’t changing and using the differences to update the predictions so they are a closer match to reality.[19] The adjustments are passed back down the line as feedback.
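A toy version of that compare-and-update loop might look like this (my own illustration, not a model of actual cortex): keep a predicted frame, skip the pixels that haven’t changed, and correct the prediction only where the error matters.

```python
# Only the differences between prediction and reality trigger updates;
# everything that isn't changing is ignored.
import numpy as np

def update_prediction(predicted, actual, threshold=0.05):
    error = actual - predicted
    changed = np.abs(error) > threshold       # pixels worth attending to
    predicted[changed] += error[changed]      # feedback corrects the prediction
    return predicted, int(changed.sum())

prediction = np.zeros((4, 4))                 # what the brain expects to see
frame = np.zeros((4, 4))
frame[1, 2] = 1.0                             # one pixel actually changes
prediction, n_updates = update_prediction(prediction, frame)
print(n_updates)                              # 1 -- only the change is processed
```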
© John Richard Stephens, 2024.
I find that interesting since for many years I wondered how my grandfather could not have seen a cement truck coming towards him before he pulled out in front of it. He survived, but his neck was permanently turned toward his left side. Similarly I once didn’t see a man on a bicycle until I saw him pass behind me in the rearview mirror, even though I looked right at him before crossing the street. And I heard of someone else who didn’t see the mail truck they crashed into. We can look at things and not see them, especially if they’re unexpected.
When you look to the side to check for oncoming traffic, you’re mainly looking for cars or pickups. If you glance too quickly, you won’t be able to detect movement, so the scene will look more like a static picture and you’ll lose a key clue that something is approaching, further reducing your chance of seeing an unexpected cement truck or bicycle.[20]
Obviously this predictive system isn’t perfect. There’s an illusion that retroactively makes you see a flash that wasn’t there, while a similar illusion can make you forget a flash that you did see.[21] There’s another that can make you see a dot change colors before it actually does.[22]
If you and your friends go someplace where there’s total darkness and have each of them slowly wave their hand in front of their own face, about half of them will see a shadowy shape moving. Since there’s no light, it’s impossible for them to actually see their hand. What they see is their brain’s prediction of their hand based on their proprioceptive sense of their hand’s location and movement.[23] Proprioception is what keeps you apprised of the position of your body parts using receptors in your skin, muscles, and joints.
They see the prediction, not their hand.

According to Gerrit Maus, a research psychologist at the University of California, Berkeley, “What we perceive doesn’t necessarily have that much to do with the real world, but it is what we need to know to interact with the real world.”[24]
Click here for the next article in this series.
[1] Anonymous, “Brain Innately Separates Living And Non-living Objects For Processing”, ScienceDaily, August 14, 2009, http://www.sciencedaily.com/releases/2009/08/090813142430.htm, citing Bradford Z. Mahon, Stefano Anzellotti, Jens Schwarzbach, Massimiliano Zampini, Alfonso Caramazza, “Category-Specific Organization in the Human Brain Does Not Require Visual Experience”, Neuron, 2009; https://doi.org/10.1016/j.neuron.2009.07.012.
[2] Rajesh Rao and Dana Ballard, “Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects”, Nature Neuroscience, January 1999, pp. 79–87, https://www.nature.com/articles/nn0199_79, https://doi.org/10.1038/4580.
[3] Anil Ananthaswamy, “Self-Taught AI Shows Similarities to How the Brain Works”, Quanta Magazine, August 11, 2022, https://www.quantamagazine.org/self-taught-ai-shows-similarities-to-how-the-brain-works-20220811/.
[4] Anonymous, “JPEG for the Mind: How the Brain Compresses Visual Information”, ScienceDaily, February 11, 2011, http://www.sciencedaily.com/releases/2011/02/110210164155.htm, citing Eric T. Carlson, Russell J. Rasquinha, Kechen Zhang, and Charles E. Connor, “A Sparse Object Coding Scheme in Area V4”, Current Biology; https://doi.org/10.1016/j.cub.2011.01.013.
[5] Stephen L. Macknik and Susana Martinez-Conde, “And Yet It Doesn’t Move”, Scientific American Mind, vol. 26, no. 3, April 9, 2015, and as “How the Brain Can Be Fooled into Perceiving Movement”, https://www.scientificamerican.com/article/how-the-brain-can-be-fooled-into-perceiving-movement/.
[6] Elsevier. “Brain connectivity is disrupted in schizophrenia”, ScienceDaily, October 17, 2023, www.sciencedaily.com/releases/2023/10/231017123403.htm, citing Alexander Holmes, Priscila T. Levi, Yu-Chi Chen, Sidhant Chopra, Kevin M. Aquino, James C. Pang, and Alex Fornito, “Disruptions of Hierarchical Cortical Organization in Early Psychosis and Schizophrenia”, Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 2023, https://doi.org/10.1016/j.bpsc.2023.08.008.
[7] Daniel Levin, “Music special: The illusion of music”, New Scientist, no. 2644, February 23, 2008. And University of Wisconsin-Madison, “Banking on predictability, the mind increases efficiency”, ScienceDaily, November 22, 2010, https://www.sciencedaily.com/releases/2010/11/101122152040.htm, citing Christian E. Stilp, Timothy T. Rogers, and Keith R. Kluender, “Rapid efficient coding of correlated complex acoustic properties”, PNAS, November 22, 2010, https://doi.org/10.1073/pnas.1009020107.
[8] James Shreeve, “Touching the Phantom”, Discover Magazine, June 1993, pp. 34-42, https://www.discovermagazine.com/mind/touching-the-phantom, May 31, 1993.
[9] Donald D. Hoffman, Visual Intelligence, New York: W. W. Norton & Co., 1998, p. 180.
[10] Kayt Sukel, “Give the gift of sight – and insight will follow” (interview with Pawan Sinha), New Scientist, no. 3037, September 5, 2015, pp. 28-29, and as “Curing blind children reveals how the brain makes sense of sight”, September 2, 2015, https://www.newscientist.com/article/mg22730370-200-curing-blind-children-reveals-how-the-brain-makes-sense-of-sight/.
And Pawan Sinha, “Once Blind and Now They See”, Scientific American, vol. 309, no. 1, July 2013, pp. 48-55, and as “Blind Kids Gain Vision Late in Childhood While Giving a Lesson in Brain Science”, https://www.scientificamerican.com/article/blind-kids-gain-vision-late-childhood-while-giving-lesson-in-brain-science/, https://doi.org/10.1038/scientificamerican0713-48.
Also, University of Washington. “Man with restored sight provides new insight into how vision develops.” ScienceDaily, April 15, 2015, www.sciencedaily.com/releases/2015/04/150415140504.htm, citing E. Huber, J.M. Webster, A.A. Brewer, D.I.A. MacLeod, B.A. Wandell, G.M. Boynton, A.R. Wade, I. Fine, “A Lack of Experience-Dependent Plasticity After More Than a Decade of Recovered Sight”, Psychological Science, 2015; 26 (4): 393, https://doi.org/10.1177/0956797614563957.
[11] Helmut V.B. Hirsch and D.N. Spinelli, “Visual Experience Modifies Distribution of Horizontally and Vertically Oriented Receptive Fields in Cats”, Science, vol. 168, no. 3933, May 1970, pp. 869-871, https://doi.org/10.1126/science.168.3933.869.
And Colin Blakemore and Grahame F. Cooper, “Development of the Brain depends on the Visual Environment”, Nature, vol. 228, no. 5270, October 1970, pp. 477-478, https://doi.org/10.1038/228477a0.
[12] Helen Thomson, “Suddenly I see”, (interview with Susan Barry), New Scientist, June 6, 2009, no. 2711, p. 49, and as “How I learned to see in 3D”, June 3, 2009, http://www.newscientist.com/article/mg20227112.900-how-i-learned-to-see-in-3d.html, citing Susan Barry, Fixing My Gaze, Basic Books, 2009.
And “An Interview with ‘Stereo Sue’ ”, Review of Optometry, July 1, 2009, https://www.reviewofoptometry.com/article/an-interview-with-stereo-sue.
[13] Morgen Peck, “How a movie changed one man’s vision forever”, BBC Future, July 19, 2012, https://www.bbc.com/future/article/20120719-awoken-from-a-2d-world.
[14] University of California – Berkeley, “Dressmakers found to have needle-sharp 3-D vision”, ScienceDaily, June 14, 2017, www.sciencedaily.com/releases/2017/06/170614133734.htm, citing Adrien Chopin, Dennis M. Levi, and Daphné Bavelier, “Dressmakers show enhanced stereoscopic vision”, Scientific Reports, vol. 7, article 3435, 2017, http://dx.doi.org/10.1038/s41598-017-03425-1.
And Denise Grady, “The Vision Thing: Mainly in the Brain”, Discover Magazine, June 1993, pp. 56-66, https://www.discovermagazine.com/mind/the-vision-thing-mainly-in-the-brain, May 31, 1993.
[15] Michael S. Gazzaniga, “One Brain—Two Minds?” in Irving L. Janis, ed., Current Trends in Psychology, Los Altos, CA: William Kaufman, 1977, pp. 7-13.
[16] Helen Thomson, “Man hears people speak before seeing lips move”, New Scientist, no. 2924, July 6, 2013, p. 11, and the longer version “Mindscapes: First man to hear people before they speak”, July 4, 2013, https://www.newscientist.com/article/dn23813-mindscapes-first-man-to-hear-people-before-they-speak/, citing Cortex, https://doi.org/10.1016/j.cortex.2013.03.006.
[17] Bielefeld University, “How the brain leads us to believe we have sharp vision”, ScienceDaily, October 17, 2014, www.sciencedaily.com/releases/2014/10/141017101339.htm, citing Arvid Herwig and Werner X. Schneider, “Predicting object features across saccades: Evidence from object recognition and visual search”, Journal of Experimental Psychology: General, 2014; 143 (5): 1903, https://doi.org/10.1037/a0036781.
[18] Rensselaer Polytechnic Institute, “Crystal (Eye) Ball: Visual System Equipped With ‘Future Seeing Powers’ ”, ScienceDaily, May 16, 2008, http://www.sciencedaily.com/releases/2008/05/080515145356.htm, citing Mark Changizi, Cognitive Science, May-June, 2008.
And Benedict Carey, “Anticipating the Future to ‘See’ the Present”, The New York Times, June 10, 2008, https://www.nytimes.com/2008/06/10/health/research/10mind.html.
[19] Radboud University Nijmegen, “Expectations lead to less but more efficient processing in the human brain”, ScienceDaily, July 26, 2012, https://www.sciencedaily.com/releases/2012/07/120726094506.htm, citing Peter Kok, Janneke F.M. Jehee, Floris P. de Lange, “Less Is More: Expectation Sharpens Representations in the Primary Visual Cortex”, Neuron, 2012; 75 (2): 265, https://doi.org/10.1016/j.neuron.2012.04.034.
And Anonymous, “Expectations Speed Up Conscious Perception”, ScienceDaily, February 7, 2011, http://www.sciencedaily.com/releases/2011/02/110203081445.htm, citing L. Melloni, C.M. Schwiedrzik, N. Muller, E. Rodriguez, and W. Singer, “Expectations Change the Signatures and Timing of Electrophysiological Correlates of Perceptual Awareness”, Journal of Neuroscience, 2011; 31 (4): 1386, https://doi.org/10.1523/jneurosci.4570-10.2011.
[20] Springer Science+Business Media. “ ‘Element of surprise’ explains why motorcycles are greater traffic hazard than cars”, ScienceDaily, January 27, 2014, https://www.sciencedaily.com/releases/2014/01/140127101057.htm, citing Vanessa Beanland, Michael G. Lenné, Geoffrey Underwood, “Safety in numbers: Target prevalence affects the detection of vehicles during simulated driving”, Attention, Perception, & Psychophysics, 2014, https://doi.org/10.3758/s13414-013-0603-1.
[21] California Institute of Technology, “Time-traveling illusion tricks the brain: How the brain retroactively makes sense of rapid auditory and visual sensory stimulation”, ScienceDaily, October 9, 2018, https://www.sciencedaily.com/releases/2018/10/181009113612.htm, citing Noelle R.B. Stiles, Monica Li, Carmel A. Levitan, Yukiyasu Kamitani, and Shinsuke Shimojo, “What you saw is what you will hear: Two new illusions with audiovisual postdictive effects”, PLOS One, 2018; 13 (10): e0204217, https://doi.org/10.1371/journal.pone.0204217.
[22] Jan Westerhoff, “What are you?”, New Scientist, no. 2905, February 23, 2013, pp. 34-37, and as “Only you: The one and only you”, www.newscientist.com/article/mg21729052.300-the-self-the-one-and-only-you.html.
[23] University of Rochester, “Seeing in the dark: Most people can see their body’s movement in the absence of light”, ScienceDaily, October 31, 2013, http://www.sciencedaily.com/releases/2013/10/131031090431.htm, citing K.C. Dieter, B. Hu, D.C. Knill, R. Blake, and D. Tadin, “Kinesthesis Can Make an Invisible Hand Visible”, Psychological Science, 2013, https://doi.org/10.1177/0956797613497968.
[24] University of California – Berkeley, “Hit a 90 mph baseball? Scientists pinpoint how we see it coming”, ScienceDaily, May 8, 2013, http://www.sciencedaily.com/releases/2013/05/130508123017.htm, citing Gerrit W. Maus, Jason Fischer, and David Whitney, “Motion-Dependent Representation of Space in Area MT,” Neuron, 2013; 78 (3): 554, https://doi.org/10.1016/j.neuron.2013.03.010.