Sources – Reality is Not Real

We thank the following experts for their input and critical reading:

  • Prof. Michael Herzog

EPFL, Switzerland

  • Dr. Nadine Dijkstra

University College London, United Kingdom

The Gap Between Reality and You

– Vision is perhaps our main source of information about the world – but in reality we don’t see that much. Only a thumbnail-sized area of your visual field is in high resolution, while the rest is out of focus. If it doesn’t feel like this, it’s because it is made up by your brain – using a pretty neat trick. 

#Tim Vernimmen. Our Eyes Are Always Darting Around, So How Come Our Vision Isn’t Blurry? Smithsonian Magazine. 2019

https://www.smithsonianmag.com/science-nature/our-eyes-are-always-darting-around-s-not-how-we-see-world-180972414

Quote: “Beginning with the basics: The only things we can ever hope to see are those that send or reflect light toward our eyes, where it might end up hitting the retina, a layer of nervous tissue that covers the back two-thirds of the inner eyeball. There, the complex image of whatever we are looking at is first translated into activity of individual light-sensitive photoreceptor cells. This pattern is then transmitted to a variety of neurons in the retina that specifically respond to certain colors, shapes, orientations, movements or contrasts. The signals they produce are sent up to the brain through the optic nerve, where they are interpreted and put back together in a progression of specialized areas in the visual cortex.

Yet to transmit all the information that reaches our retina at the resolution we are used to would require an optic nerve with roughly the diameter of an elephant’s trunk. Since that would be rather unwieldy, only one tiny area of the retina—called the fovea—provides this kind of resolution. So in order to grant all the interesting features of our environment their moment in the foveal spotlight, we move our eyes around—a lot—in darts that scientists call saccades. (French for “jerks,” the word was coined in 1879 by French ophthalmologist Émile Javal.) Saccades are guided by what we are paying attention to, even though we are often blissfully unaware of them.”


#Mahanama B, Jayawardana Y, Rengarajan S, Jayawardena G, Chukoskie L, Snider J and Jayarathna S (2022) Eye Movement and Pupil Measures: A Review. Front. Comput. Sci. 3:733531. doi: 10.3389/fcomp.2021.733531
https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2021.733531/full

Quote: “The existence of the fovea, a specialized high-acuity region of the central retina approximately 1–2 mm in diameter (Dodge, 1903), provides exceptionally detailed input in a small region of the visual field (Koster, 1895), approximately the size of a quarter held at arm’s length (Pumphrey, 1948; Hejtmancik et al., 2017). The role of gaze-orienting movements are to direct the fovea toward objects of interest. Our subjective perception of a stable world with uniform clarity is a marvel resulting from our visual and oculomotor systems working together seamlessly, allowing us to engage with a complex and dynamic environment.”

#Kolb H, Nelson RF, Ahnelt PK, et al. The Architecture of the Human Fovea. 2020 Feb 7 [Updated 2020 May 20]. 

https://www.ncbi.nlm.nih.gov/books/NBK554706/
Quote: “The area called the macula by ophthalmologists is a circular area around the foveal center of approximately 5.5 mm diameter (Figure 2B) The macula lutea with the yellow pigmentation extends across the fovea into the parafoveal region and a little beyond. This area is about 2.5 mm in diameter (Figure 2B). The actual fovea is about 1.5 mm in diameter and the central fovea consists of a foveal pit (umbo) that is a mere 0.15 mm across (Figure 2B). This foveal pit is almost devoid of all layers of the retina beneath the cone photoreceptors. On the edges of the foveal pit the foveal slope is still mainly devoid of other layers but some cell bodies of retinal interneurons, bipolar and horizontal cells and even some amacrine cell processes are becoming evident. By the 0.35 mm diameter circular area the first ganglion cell bodies, the retinal neurons sending signals to the brain, are beginning to appear. All the central fovea that measures 0.5 mm across is avascular (FAZ).”

– Each second your eyes make 3 to 4 sudden jerky movements called saccades, each lasting about 50 milliseconds, jumping focus from one point to another. They scan your environment to collect different sharp images that your brain then edits together. During a saccade your brain shuts down your vision so you don’t see a wild motion blur. This means that each day, for around 2 hours, you are completely blind. If you could actually see what your eyes see, it would look something like this: brrrr 

Our eyes have four basic types of movement suited to different purposes: saccades, smooth pursuit movements, vergence movements, and vestibulo-ocular movements. The small, ballistic movements are called saccades, and they guide our fovea to selected regions of the visual field. They can have small amplitudes, as when our eyes jump from one line to the next while reading, or larger amplitudes, as when we gaze across a room. They can be voluntary, but they also occur even while we fixate.

Another note on visual perception: the visual system and its processing are extremely complicated. There are various correction mechanisms to tell apart self-movement (movement of the eyes) from external movement. We cannot account for all of it here. 

#Mahanama B, Jayawardana Y, Rengarajan S, Jayawardena G, Chukoskie L, Snider J and Jayarathna S (2022) Eye Movement and Pupil Measures: A Review. Front. Comput. Sci. 3:733531. doi: 10.3389/fcomp.2021.733531

https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2021.733531/full
Quote: “Eye movement information can be interpreted as a sequence of fixations and saccades. A fixation is a period where our visual gaze remains at a particular location. A saccade, on the other hand, is a rapid eye movement between two consecutive fixations. Typical humans perform 3–5 saccades per second, but this rate varies with current perceptual and cognitive demands (Fischer and Weber, 1993). Fixations and saccades are the primary means of interacting with and perceiving the visual world. During a fixation, our visual perceptual processes unfold. Saccades guide our fovea to selected regions of the visual field. We are effectively blind during saccades (Burr et al., 1994), which allows our gaze to remain relatively stable during saccadic reorientation. Saccadic eye movements are brief, and have a reliable amplitude-velocity relationship (see Figure 2) known as the main sequence (Bahill et al., 1975; Termsarasab et al., 2015). It shows that saccade velocity and saccade amplitude follow a linear relationship, up to 15°–20°. This relationship, however, varies with age and also in certain disorders (Choi et al., 2014; Reppert et al., 2015).”

#Purves D, Augustine GJ, Fitzpatrick D, et al., editors. Neuroscience. 2nd edition. Sunderland (MA): Sinauer Associates; 2001. Types of Eye Movements and Their Functions.
https://www.ncbi.nlm.nih.gov/books/NBK10991/

Quote: “Figure 20.4 The metrics of a saccadic eye movement. The red line indicates the position of a fixation target and the blue line the position of the fovea. When the target moves suddenly to the right, there is a delay of about 200 ms before the eye begins to move to the new target position. (After Fuchs, 1967.)”

Saccades generally last a few tens of milliseconds; the duration depends on the amplitude of the saccade. We take 50 ms as an average of the values we came across in the literature. 

#John Enderle, 12 – PHYSIOLOGICAL MODELING. Introduction to Biomedical Engineering (Second Edition), Academic Press, 2005.

https://www.sciencedirect.com/topics/engineering/saccade-amplitude

Quote: “Saccadic eye movements are conjugate and ballistic, with a typical duration of 30–100 ms and a latency of 100–300 ms.” 

#Pundlik et al. From small to large, all saccades follow the same timeline. Journal of Vision. 2015.

https://jov.arvojournals.org/article.aspx?articleid=2433111

Quote: “Purpose: The well-known main sequence of saccadic eye movements can have a large variability; the peak velocity of saccades with the same amplitude can be dramatically different. We explored an alternative approach to describing the saccade characteristics by looking into the in-flight shift timeline of saccades. Methods: We gathered 929 natural saccades made by two subjects with their heads restrained as they watched videos. The saccade duration and amplitude ranged from 13 to 95 ms (mean ± std. = 43ms ± 14.5) and 0.54° to 28.7° (mean ± std. = 9.27° ± 4.5), respectively.”

If we assume that we make 3 saccades per second during the 16 hours that we are awake, it adds up to a little over two hours. 

(16 × 60 × 60 × 3 × 50 ms) / (1000 × 60 × 60) = 2.4 hr 
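The same back-of-the-envelope estimate as a short script, using the assumptions stated above (3 saccades per second, 50 ms each, 16 waking hours):

```python
# Daily "saccadic blindness" estimate, with the assumed figures from the text.
SACCADES_PER_SECOND = 3
SACCADE_DURATION_S = 0.050   # 50 ms average saccade duration
HOURS_AWAKE = 16

seconds_awake = HOURS_AWAKE * 60 * 60
blind_seconds = seconds_awake * SACCADES_PER_SECOND * SACCADE_DURATION_S
blind_hours = blind_seconds / 3600

print(f"Blind for about {blind_hours:.1f} hours per day")  # about 2.4 hours
```

Note how sensitive the result is to the assumptions: at 4 saccades per second the same arithmetic gives 3.2 hours, which is why the script says "around 2 hours" rather than a precise figure.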

#Binda and Morrone. Vision During Saccadic Eye Movements. Annual Review of Vision Science. 2018. 

https://www.annualreviews.org/content/journals/10.1146/annurev-vision-091517-034317
Quote: “Vision is always clear and stable, despite continual saccadic eye movements that reposition our gaze and generate spurious retinal motion. The brain actively anticipates the consequence of a future saccade to efficiently compensate for the shift in gaze and to prevent motion perception. Selective suppression is directed toward the motion mechanisms to suppress the otherwise compelling motion early in processing, while stability across saccades is guaranteed by the integration of post- and presaccadic information across a slanted spatiotemporal field. However, these changes take place at the cost of precision of visual localization, which becomes poor, and impact multiple dimensions, such as numerosity and multisensory perception. The internal and predictive signals that orchestrate these profound changes of perisaccadic visual mechanisms can be efficiently measured by pupillometry, which provides a new tool to dissociate active vision from conscious perception.”

– Instead your brain fills this time with its best guesses of what happened during the blackness.

#Kandel ER, Koester JD, Mack SH, Siegelbaum SA. eds. Principles of Neural Science, 6e. McGraw Hill; 2021. 

https://neurology.mhmedical.com/content.aspx?bookid=3024&sectionid=254330742

Quote: “Although the visual system produces vivid representations of our visual world, as described in preceding chapters, a visual image is not like an instantaneous photographic record but is dynamically constructed pathways from the eyes. When we look at a painting, for example, we explore it with a series of quick eye movements (saccades) that redirect the fovea to different objects of interest in the visual field. The brain must take into account these eye movements in the course of producing an interpretable visual image from the light stimuli in the retina. As each saccade brings a new object onto the fovea, the image of the entire visual world shifts on the fovea. These shifts occur several times per second, such that after several minutes the record of movement is a jumble (Figure 25–1). With such constant movement, visual images should resemble an amateur video in which the image jerks around because the camera operator is not skilled at holding the camera steady. In fact, however, our vision is so stable that we are ordinarily unaware of the visual effects of saccades. This is so because the brain makes continual adjustments to the images falling on the retina after each saccade. A simple laboratory experiment, shown in Figure 25–2, illustrates the biological challenge to the brain.”

The image above shows an eye-tracking example in which the eyes fixate between saccades. These fast movements should produce a blurry motion percept, yet we don’t experience one; the image we perceive is quite stable. There are two main mechanisms by which the brain handles this problem: corollary discharge, which is essentially a copy of the eye-movement command, and visual masking. Before a saccade is executed, the brain already knows where the eyes will land, and using this vector together with the initial fixation it can compute a guess. This is also partly why you see motion in certain static optical illusions, and why the motion percept disappears when you fixate on a certain point in the image. 

#Cavanaugh J, Berman RA, Joiner WM, Wurtz RH. Saccadic Corollary Discharge Underlies Stable Visual Perception. J Neurosci. 2016 

https://pubmed.ncbi.nlm.nih.gov/26740647/
Quote: “Much is known about the image processing underlying perception, but little is known about the source of the vectors connecting successive retinal images. One possibility proposed by philosophers and scientists from Descartes to Helmholtz (Grüsser, 1995) is that signals within the brain provide the information needed to monitor ongoing movements. This internal information has come to be known as corollary discharge (CD) or efference copy (Sperry, 1950; Von Holst and Mittelstaedt, 1950). Each time a saccade occurs, a CD copy of the actual saccade vector driving the eye is sent to other brain regions related to visual perception to inform them of the impending saccade (Fig. 1D, right). Recently, a CD for saccades has been identified in the Rhesus monkey, an animal with visual brain anatomy and function remarkably similar to that in humans (Orban et al., 2004). This CD copy of the actual saccade vector travels in a circuit (Fig. 1D, left) from superior colliculus (SC) to the medial dorsal (MD) region of thalamus, and then to the frontal eye field (FEF) in frontal cortex (Sommer and Wurtz, 2002, 2004a, 2008). A role for this CD in controlling movement has been established by showing that disruption of the CD circuit degrades a monkey’s ability to guide rapid sequences of saccades when visual input is not fast enough to guide them (Hallett and Lightstone, 1976; Sommer and Wurtz, 2004b). The relationship of the CD to motor control is compelling enough that several commentaries have concluded that the CD is probably used for motor control (Bays and Husain, 2007) or for the selection of saccade targets (Zirnsak and Moore, 2014). So far no direct evidence has been presented that CD contributes to perception (for review, see Higgins and Rayner, 2015).”

Quote: “(Figure Caption:) A possible solution for the problems that saccades present for stable visual perception. A, Saccades (lines) and fixations (dots) from a human subject viewing a fragment of the painting by Seurat, “A Sunday Afternoon on the Island of La Grande Jatte”. The white arrows represent three hypothetical saccade vectors. B, The foveal images at the end of each of the three saccade vectors. C, Reconstruction of the visual scene using just perception of the saccade vectors and the retinal image. D, A corollary discharge that could provide the saccade vectors. Arrows on the right indicate a CD vector to cortex that represents a copy of the movement vector generating the saccade. The circuit on the left outlines an identified CD in the monkey brain from saccade-related neurons in SC through a thalamic relay in the MD to FEF. We hypothesize that the CD informs frontal cortex how to arrange successive retinal inputs into a stable visual perception.

#Kandel ER, Koester JD, Mack SH, Siegelbaum SA. eds. Principles of Neural Science, 6e. McGraw Hill; 2021. https://neurology.mhmedical.com/content.aspx?bookid=3024&sectionid=254330742

Quote: “Finally, there is a second potential disruption of vision produced by saccades: a blur as the saccade sweeps the visual scene across the retina. The blur is not seen, however, because neuronal activity in a number of visual areas is suppressed around the time of every saccade. This so-called saccadic suppression was first seen in the superior colliculus  and has subsequently been seen in the thalamus and areas of visual cortex beyond primary visual cortex.”

– As the spoon hits the ceramic, light reflects off it and hits your eyes after 1.3 nanoseconds. The ceramic vibrates and creates a shockwave of air molecules that travels to your ear in 1.2 milliseconds. 

Let’s say the distance between your eye and the spoon is about 40 cm. The speed of light is 3 × 10^10 cm/s, so it would take 40 cm / (3 × 10^10 cm/s) ≈ 1.3 nanoseconds.

The speed of sound in air at room temperature is 346 m/s, which means it would take
0.4 m / 346 m/s = 1.16 ms ≈ 1.2 milliseconds
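Both travel times, computed from the figures above (40 cm distance, c = 3 × 10^8 m/s, speed of sound 346 m/s):

```python
# Travel times for light and sound over the spoon-to-observer distance.
DISTANCE_M = 0.40
SPEED_OF_LIGHT = 3e8      # m/s
SPEED_OF_SOUND = 346      # m/s, in air at room temperature

light_ns = DISTANCE_M / SPEED_OF_LIGHT * 1e9   # nanoseconds, ~1.3
sound_ms = DISTANCE_M / SPEED_OF_SOUND * 1e3   # milliseconds, ~1.16
```

The sound is almost a million times slower than the light over the same distance, yet both lags are far below anything we could notice directly.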

– Heat is picked up by fibres in your fingers that send a signal to your brain in 50 milliseconds.

Skin has multiple receptors responding to different temperatures, so calculating this number precisely would not be straightforward. Touching the cup could stimulate mechanoreceptors, thermal receptors, or even pain receptors, depending on the temperature, and they can all have different conduction velocities. We have made a very simple calculation here, assuming a neural transmission speed of 20 m/s and a finger-to-brain distance of one meter. It serves only the purposes of the story. 

#Kandel ER, Koester JD, Mack SH, Siegelbaum SA. eds. Principles of Neural Science, 6e. McGraw Hill; 2021. 

https://neurology.mhmedical.com/content.aspx?bookid=3024&sectionid=254330205

Quote: “Thermal nociceptors are activated by extremes in temperature, typically greater than 45°C (115°F) or less than 5°C (41°F). They include the peripheral endings of small diameter, thinly myelinated Aδ axons that conduct action potentials at speeds of 5 to 30 m/s and unmyelinated C-fiber axons that conduct at speeds less than 1.0 m/s (Figure 20–1A). Mechanical nociceptors are activated optimally by intense pressure applied to the skin; they too are the endings of thinly myelinated Aδ axons. Polymodal nociceptors can be activated by high-intensity mechanical, chemical, or thermal (both hot and cold) stimuli. This class of nociceptors consists predominantly of unmyelinated C fibers (Figure 20–1A).”

#Abraira VE, Ginty DD. The sensory neurons of touch. Neuron. 2013
https://pubmed.ncbi.nlm.nih.gov/23972592/

– Three very different inputs, all processed in your brain at different times. You don’t experience them separately but as one smooth, simultaneous and connected moment.

Multisensory integration refers to the neural integration of sensory information arising from different sensory modalities. We assign these signals to the same event, even though they come through different channels and arrive at different times.

Consider a more extreme example where this difference becomes visible: lightning and thunder. Since the sound and the light have to travel a much larger distance than from a coffee cup to us, we experience the rumble and the streak of light as two distinct events. We perceive them at different times, but we still assign them to the same event because of our past experience.
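A quick calculation shows why the lightning case breaks apart while the coffee cup does not: at kilometer scales the sound lag grows to seconds, far beyond what the brain re-synchronizes.

```python
# Lag between seeing a flash and hearing its sound, as a function of distance.
SPEED_OF_LIGHT = 3e8   # m/s
SPEED_OF_SOUND = 346   # m/s, in air at room temperature

def sound_lag_seconds(distance_m: float) -> float:
    """Delay of the sound's arrival relative to the light's arrival."""
    return distance_m / SPEED_OF_SOUND - distance_m / SPEED_OF_LIGHT

# Lightning 1 km away: the thunder arrives roughly 2.9 s after the flash.
```

For the coffee cup at 0.4 m the same function gives about a millisecond, which the brain happily fuses into one event.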

The brain has to assign causality to events to make sense of the environment. A great deal of neural processing underlies our cohesive experience of the environment amid the cacophony of light, sound waves, and the constant heat and pressure differences at our fingertips.  


We experience the sight and the sound of the spoon hitting the cup as synchronous. Not only do they reach the brain at different speeds, they are also processed in the brain at different speeds. One of the brain’s tricks is temporal recalibration: altering our sense of time to synchronize our joint perception of sound and vision. The neural mechanisms by which the brain manages this calibration are beyond the scope of this video, but we have left a few papers below for interested viewers. 

#Keetels M, Vroomen J. Perception of Synchrony between the Senses. In: Murray MM, Wallace MT, editors. The Neural Bases of Multisensory Processes. Boca Raton (FL): CRC Press/Taylor & Francis; 2012. 

https://www.ncbi.nlm.nih.gov/books/NBK92837/#ch9_sec11

Quote: “The perception of time and, in particular, synchrony between the senses is not straightforward because there is no dedicated sense organ that registers time in an absolute scale. Moreover, to perceive synchrony, the brain has to deal with differences in physical (outside the body) and neural (inside the body) transmission times. Sounds, for example, travel through air much slower than visual information does (i.e., 300,000,000 m/s for vision vs. 330 m/s for audition), whereas no physical transmission time through air is involved for tactile stimulation as it is presented directly at the body surface. The neural processing time also differs between the senses, and it is typically slower for visual than it is for auditory stimuli (approximately 50 vs. 10 ms, respectively), whereas for touch, the brain may have to take into account where the stimulation originated from as the traveling time from the toes to the brain is longer than from the nose (the typical conduction velocity is 55 m/s, which results in a ∼30 ms difference between toe and nose when this distance is 1.60 m; Macefield et al. 1989). Because of these differences, one might expect that for audiovisual events, only those occurring at the so-called “horizon of simultaneity” (Poppel 1985; Poppel et al. 1990)—a distance of approximately 10 to 15 m from the observer—will result in the approximate synchronous arrival of auditory and visual information at the primary sensory cortices. Sounds will arrive before visual stimuli if the audiovisual event is within 15 m from the observer, whereas vision will arrive before sounds for events farther away. Although surprisingly, despite these naturally occurring lags, observers perceive intersensory synchrony for most multisensory events in the external world, and not only for those at 15 m.”


#Noppeney U. Perceptual Inference, Learning, and Attention in a Multisensory World. Annu Rev Neurosci. 2021 

https://pubmed.ncbi.nlm.nih.gov/33882258/

#Deroy O, Faivre N, Lunghi C, Spence C, Aller M, Noppeney U. The Complex Interplay Between Multisensory Integration and Perceptual Awareness. Multisens Res. 2016
https://pmc.ncbi.nlm.nih.gov/articles/PMC5082728/


#Lennert, T., Samiee, S. & Baillet, S. Coupled oscillations enable rapid temporal recalibration to audiovisual asynchrony. Commun Biol 4, 559 (2021).
https://doi.org/10.1038/s42003-021-02087-0

#Freeman ED et al. Sight and sound out of synch: fragmentation and renormalisation of audiovisual integration and subjective timing. Cortex. 2013 

https://pmc.ncbi.nlm.nih.gov/articles/PMC3878386


– Your brain takes a moment to process and then invents a reality, a present moment that is not real. What feels like “now” is in fact a selectively edited version of the past. You only consciously experience the world 0.3 to 0.5 seconds after things happen.

A certain amount of time is required for sensory signals to be integrated, so consciousness is not instantaneous either. It would also make little sense for us to respond consciously to things at the instant they occur: our reaction times could not keep up, and a stimulus of very short duration may not be relevant enough for our attention. Duration also gives the brain more context for figuring out what is going on; for example, your response to an event might differ depending on the preceding event. Conversely, under some circumstances we have to act quickly and cannot afford the time that conscious processing requires. 

Perceptions are not accurate copies of the world. We abstract reality to a degree that is manageable and relevant to us. Most sensory information is never consciously processed. The brain builds an internal representation of external events, not to replicate them but to extract the information most relevant to us. 

The brain first analyzes several features of external objects or events, provided by our sensory modalities. When we hold an object, its shape, size, movement, and texture are processed simultaneously in different brain regions, and the conclusions are put together into a conscious experience. Although that sounds simple, how sensation is integrated into conscious experience, and how conscious experience emerges and is later stored as memory, are still major open research questions. 

How long this takes is also an old question, one whose modern discussion began with the experiments of Benjamin Libet. He famously showed that there is an unconscious buildup of electrical activity in the brain before we become aware of a stimulus and decide to act. For a skin stimulus, he measured a delay of about half a second, which came to be known as Libet’s delay. There are still different interpretations of those experiments, and needless to say, many other experiments have followed up on them.

#Pockett S. On subjective back-referral and how long it takes to become conscious of a stimulus: a reinterpretation of Libet’s data. Conscious Cogn. 2002 

https://pubmed.ncbi.nlm.nih.gov/12191934

#Herzog et al. All in Good Time: Long-Lasting Postdictive Effects Reveal Discrete Perception. Trends Cogn Sci. 2020

https://www.cell.com/action/showPdf?pii=S1364-6613%2820%2930170-4

Quote: “A two-stage model reconciles the debate on event-time versus brain-time [16,81]. Brain-time advocates propose that the time at which an element is consciously perceived is the time at which it is detected. Event-time advocates propose that the brain tries to compensate for differences in processing times. For example, we usually perceive visual and auditory inputs as synchronized, even though auditory information is detected much faster than visual information [53]. Many temporal illusions, such as the flash-lag effect, are interpreted as evidence for event-time [40]. However, there is also clear evidence in favor of brain-time [16,53]. Most of this debate focusses on perceived differences shorter than 100 ms. Long-lasting postdictive effects offer an entirely new interpretation because they show that conscious percepts occur much later than 100 ms. Instead of the brain compensating for different processing speeds, the subjective time of events is the product of complex unconscious processing during which the brain tries to come up with the best interpretation of what has happened in the last window. See [16] for converging evidence that the brain prioritizes meaningful grouping over event- or brain-time.”

#Herzog et al. All in Good Time: Long-Lasting Postdictive Effects Reveal Discrete Perception. Trends Cogn Sci. 2020

https://www.cell.com/action/showPdf?pii=S1364-6613%2820%2930170-4
Quote: “It may be argued that 400-ms unconscious processing windows prior to conscious access are unreasonable because we can react to incoming stimuli before 400 ms. For example, humans can detect an animal in an image faster than 400 ms [82]. However, conscious percepts are not required to perform actions, as evident in reflexes, eye movements, and many types of sports. The percept may occur after the action has been performed and the action may even be part of the conscious percept. As mentioned, we propose that a conscious percept contains entire event structures.”

Having said that, there is no agreed-upon neural model of consciousness, and we also do not know whether it is discrete or continuous. Many other parameters influence the time it takes for a stimulus to reach conscious perception, such as stimulus strength, saliency, expectation, or the physiological constraints of the sensory modality itself. 

#Melloni et al. Expectations Change the Signatures and Timing of Electrophysiological Correlates of Perceptual Awareness. 2011.

https://www.jneurosci.org/content/jneuro/31/4/1386.full.pdf

Quote: “Conscious perception is not solely determined by stimulus saliency. Strong stimuli can remain unnoticed if attention is deployed elsewhere as shown in the attentional blink or change blindness paradigms, and weak sensory stimuli can be readily perceived if they are attended to (Carrasco et al., 2004). Consequently, perceptual awareness (PA) is proposed to depend on two factors (Dehaene et al., 2006): the intensity of sensory stimulation and top-down attention, which enhances sensory processing. However, evidence suggests that attention may not be the only top-down factor that determines perception. Everyday experience indicates that recognition is greatly facilitated if one knows what to expect. In laboratory settings, when subjects are confronted with fragmented black-and-white images of an object, they may fail in perceiving the object. However, once the object has been identified, it pops out and will henceforth be recognized immediately (the Eureka effect) (Dolan et al., 1997; Ahissar and Hochstein, 2004).”

Quote: “Previous experience allows the brain to predict what comes next. How these expectations affect conscious experience is poorly understood. In particular, it is unknown whether and when expectations interact with sensory evidence in granting access to conscious perception, and how this is reflected electrophysiologically. Here, we parametrically manipulate sensory evidence and expectations while measuring event-related potentials in human subjects to assess the time course of evoked responses that correlate with subjective visibility, the properties of the stimuli, and/or perceptual expectations. We found that expectations lower the threshold of conscious perception and reduce the latency of neuronal signatures differentiating seen and unseen stimuli. Without expectations, this differentiation occurs 300 ms and with expectations 200 ms after stimulus in occipitoparietal sensors.”

#Hogendoorn. Perception in real-time: predicting the present, reconstructing the past. Trends in Cognitive Sciences. 2021. 

https://psychologicalsciences.unimelb.edu.au/__data/assets/pdf_file/0006/4028505/Hogendoorn-TiCS-2022.pdf

Quote: “There are three key areas where the intuitive view that our perception mirrors the outside world at any given instant falls short. The first is that the transmission and processing of information in the nervous system takes time. During this time, events in the environment continue to unfold, such that sensory information becomes outdated while in transit. In the case of visual motion, for example, a moving object continues moving while sensory information about its position flows through the nervous system. These delays are substantial: it takes several dozen milliseconds for information from the retina to reach visual cortex [4,5], and at least ~120 ms before it is possible to use visual information to initiate voluntary actions [6,7]. In ball sports such as tennis, cricket, or baseball, such delays would correspond to mislocalising the ball several meters behind its true position (Figure 1B).

Even a relatively slowly moving object, such as a passing cyclist, would be perceived up to half a meter behind its true position. Humans are nevertheless able to play ball sports and navigate through traffic, and laboratory experiments confirm that humans are remarkably accurate at interacting with dynamic environments, achieving approximately zero lag for even fast-moving objects [8]. So how do perceptual mechanisms compensate for their own delays?”

– In pro table tennis, balls whoosh around at 25 meters per second, which is pretty fast, so let’s slow down time. Light passes from the ball to your eye in nanoseconds and is converted into electrical impulses that reach your brain, to be processed after 100 milliseconds.

The fastest recorded speeds are over 30 meters per second, so we went with a somewhat lower estimate, since we couldn’t find other data on it. An average amateur player would be far slower, probably around 10 meters per second. 

#Fastest table tennis hit (male)

https://www.guinnessworldrecords.com/world-records/426620-fastest-table-tennis-hit-male

Quote: “The fastest table tennis hit by a male is 116 kilometres per hour (72.08 miles per hour) and was achieved by Łukasz Budner (Poland) in Częstochowa, Poland, on 4 June 2016.”

#Greg Letts. Maximum Speed of a Ping Pong Ball. 02/13/20

https://www.liveabout.com/ping-pong-ball-maximum-speed-3974874

Quote: “Officially, New Zealander Lark Brandt holds the record for the fastest recorded smash at 69.9 miles per hour which he hit at the inaugural World Fastest Smash Competition in 2003. Brandt said his technique was key—a combination of timing and strength paired with a loose wrist and a flat smash. The second place winner’s speed was 66.5 kph, a smash with a 38mm ball that was dropped vertically to the player smashing it. The speed was recorded using sports speed radar on a 38mm ball as it has a greater density than the 40mm ball, so it can be picked up by the radar gun.
Given that the world’s fastest smash is 70 mph, it’s safe to say the speed of a ball hit by the average ping pong player is much slower with an average speed of about 25 mph. Given the length of the table, even 50 mph is incredibly fast which is why players stand so far back.”

#Texier et al. On the size of sports fields. 2014 New J. Phys.
https://iopscience.iop.org/article/10.1088/1367-2630/16/3/033039/meta

The processing time of visual information depends on several parameters, such as contrast and the complexity of the scene, so we are only giving a ballpark number. It also depends on what we want to do with that information and how hard it is to make sense of it. If, for example, you are asked to answer a question about the scene, this requires longer “processing” and is more complex than just “seeing”. There is also some parallel processing going on, so stimuli arriving simultaneously may be processed at different times.

Meanwhile, the ball travels 2.5 meters through the air, the length of the table. If your brain showed you the past, where the ball was 100 milliseconds ago, the ball would hit you before you could react. So instead your brain takes its location, speed and direction and calculates where the ball should be by the time the information reaches you. And then it creates a fictional version of it. This is what you see, in your fake present: a fake ball that is somewhere else.
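The arithmetic behind these numbers can be sketched in a few lines of Python. The 25 m/s ball speed and ~100 ms processing delay are the ballpark figures used in this section, and the linear position-plus-speed-times-delay extrapolation is a simplification of the prediction described by the sources:

```python
# How far a moving object travels while visual information is in transit,
# and where a simple linear extrapolation would place it.
# Speeds and delay are the ballpark figures used in this section.

def lag_distance(speed_m_per_s: float, delay_s: float) -> float:
    """Distance an object moves during the neural processing delay."""
    return speed_m_per_s * delay_s

def extrapolated_position(position_m: float, speed_m_per_s: float, delay_s: float) -> float:
    """Linear prediction: assume constant speed over the delay."""
    return position_m + lag_distance(speed_m_per_s, delay_s)

DELAY = 0.100  # ~100 ms visual processing delay

print(lag_distance(25.0, DELAY))  # pro table tennis ball: 2.5 m, the table length
print(lag_distance(5.0, DELAY))   # passing cyclist at ~5 m/s: 0.5 m behind its true position
print(extrapolated_position(0.0, 25.0, DELAY))  # predicted ball position: 2.5 m ahead
```

The 0.5 m cyclist figure matches the "half a meter behind its true position" example in the Hogendoorn quote above.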

#Blom et al. Predictions drive neural representations of visual events ahead of incoming sensory information. PNAS. 2020. 

https://www.pnas.org/doi/full/10.1073/pnas.1917777117

Quote: “For predictive mechanisms to compensate for neural transmission delays, they must be able to produce sensory-like representations in the absence of sensory input. However, this is not sufficient; they must further be able to do so at a shorter latency than the actual afferent sensory input. Only then can neural activity get ahead of transmission delays. This is precisely what we observed: Predictive activation of the representation coding for the stimulus in the position ahead of the final stimulus is evident in the EEG roughly 70 ms to 90 ms earlier than it would have been had it been evoked by sensory input alone (Fig. 2B; the strong dark red activation cluster around training time 90 ms to 150 ms and test time 150 ms occurs above the dashed line denoting the usual time course of sensory information coding for the same position). By preactivating the anticipated future position of the object, the visual system would be able to represent the object’s position more quickly than would be possible on the basis of afferent sensory information.”

The following is a simple explanation of the publication above by one of its authors: 

#Hogendoorn. What you’re seeing right now is the past, so your brain is predicting the present. 2020. 

https://theconversation.com/what-youre-seeing-right-now-is-the-past-so-your-brain-is-predicting-the-present-131913

Quote: “This is precisely what we observed in our brain recordings. When a moving object suddenly disappeared (for example, by moving clockwise in a circle and disappearing at the 12 o’clock position), our recordings showed that for a while, our participants’ brains acted exactly as if the object was still there and still moving, in the 1 o’clock position.

In other words, the brain was “seeing” the object based on where it expected the object to be, rather than based on real information from the eyes. This pattern of brain activity faded only once information from the eyes arrived at the brain to tell it the object had actually disappeared.

We also investigated what happens when an object changes direction rather than disappearing. As before, we reasoned that the brain would not know about the change in direction until it received that information from the eyes. It should therefore overshoot again, extrapolating the object beyond the point at which it changed direction. When the brain then discovers where the object actually went, it would have to catch up.”

#Chang CJ, Jazayeri M. Integration of speed and time for estimating time to contact. Proc Natl Acad Sci U S A. 2018 

https://pubmed.ncbi.nlm.nih.gov/29507200/
Quote: “Most current theories posit that TTC estimation results from computations that rely on kinematic information (4–17). Specifically, it is assumed that the brain uses information about distance, speed, and acceleration to determine when an object reaches a designated target point.”

#Koevoet D, Sahakian A, Chota S. How the brain stays in sync with the real world. Elife. 2023
https://pmc.ncbi.nlm.nih.gov/articles/PMC9851611/
Quote: “In professional baseball the batter has to hit a ball that can be travelling as fast as 170 kilometers per hour. Part of the challenge is that the batter only has access to outdated information: it takes the brain about 80–100 milliseconds to process visual information, during which time the baseball will have moved about 4.5 meters closer to the batter (Allison et al., 1994; Thorpe et al., 1996). This should make it virtually impossible to consistently hit the baseball, but the batters in Major League Baseball manage to do so about 90% of the time. How is this possible?

Strikingly, Johnson et al. discovered that the brain represented the moving object at location different to where one would expect it to be (i.e., not at the location from 80ms ago). Instead, the internal representation of the moving object was aligned to its actual current location so that the brain was able to track moving objects in real time. The visual system must therefore be able to correct the position by at least 80 milliseconds worth of movement, indicating that the brain can effectively compensate for temporal processing delays by predicting (or extrapolating) where a moving object will be located in the future.

The work of Johnson et al. confirms that motion prediction of around 80–100 milliseconds can almost completely compensate for the lag between events in the real world and their internal representation in the brain. As such, humans are able to react to incredibly fast events – if they are predictable, like a baseball thrown at a batter. Neural delays need to be accounted for in all types of information processing within the brain, including the planning and execution of movements. A deeper understanding of such compensatory processes will ultimately help us to understand how the human brain can cope with a fast world, while the speed of its internal signaling is limited. The evidence here seems to suggest that we overcome these neural delays during motion perception by living in our brain’s prediction of the present.”

So before the ball even touches your opponent’s bat, your brain starts predicting where it will likely be after they hit it, based on the other player’s posture and your table tennis experience. But since it can’t be sure its prediction is correct, it prepares multiple different responses. Maybe the ball will be here, or here, or even here. To be ready for all of these scenarios, your brain sends preprogrammed orders to the muscles you need to jump left, right or up, telling them to be ready for any of them at a moment’s notice.

#O’Shea H, Moran A. Does Motor Simulation Theory Explain the Cognitive Mechanisms Underlying Motor Imagery? A Critical Review. Front Hum Neurosci. 2017
https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2017.00072/full
Quote: “Motor simulation theory (MST; hereafter, simulation theory; Jeannerod, 1994, 2001, 2006a) offers a seminal explanation for how various action-related cognitive states such as “motor imagery” (MI; the mental rehearsal of actions without engaging in the movements involved; Moran et al., 2012), action intention (the translation of a desired movement into behavior; Haggard, 2005) and observation, are related to actual motor execution (ME) states. The cornerstone of MST is the idea that cognitive motor states activate motor systems in the brain that are similar to those triggered during actual action (Jeannerod, 2001, 2004, 2006a). Further, these motor systems can be rehearsed off-line via a putative simulation mechanism which allows the mind to anticipate action viability and potential action outcomes (Jeannerod, 2001). Similar neural activation during motor cognition and ME is assumed to occur because both states share motor representations in the mind – the idea that actions are internally (or mentally) generated according to a specific goal and in the absence of external environmental cues (i.e., the theory of action representation, see Jeannerod, 1994, 2004, 2006b; Pearson and Kosslyn, 2015). Specifically, MI and ME “are both assigned to the same motor representation vehicle” (Jeannerod, 1994, p. 190) with the representation being the “covert counterpart of any goal-directed action, executed or not” (Jeannerod, 2006a, p. 165). This correspondence between ‘simulated’ and executed action led to the ‘functional equivalence’ hypothesis (Jeannerod, 1994, 2001, 2006a), which maintains that “motor imagery … should involve, in the subject’s motor brain, neural mechanisms similar to those operating during the real action” (Jeannerod, 2001, pp. S103–S104).”

#Gallivan JP, Bowman NA, Chapman CS, Wolpert DM, Flanagan JR. The sequential encoding of competing action goals involves dynamic restructuring of motor plans in working memory. J Neurophysiol. 2016
https://pmc.ncbi.nlm.nih.gov/articles/PMC4946594/
Quote: “Recent neurophysiological (Baumann et al. 2009; Cisek 2007; Cisek and Kalaska 2005, 2010; Klaes et al. 2011) and behavioral (Chapman et al. 2010a, 2014; Ghez et al. 1997; Stewart et al. 2013, 2014; Tipper et al. 1998; Wood et al. 2011) studies have provided strong evidence supporting the notion that in situations in which multiple potential movement goals are presented simultaneously, the brain specifies, in parallel, multiple motor plans for these competing options before deciding on one of them. Such motor encoding of competing action goals could facilitate the incorporation of movement-related costs and constraints into decisions related to action selection and may enable more rapid responding once the target is selected (Christopoulos et al. 2015; Cisek 2006; Cisek and Pastor-Bernier 2014; Cos et al. 2011, 2012, 2014; Gallivan et al. 2015).”

#Jeannerod M. Neural simulation of action: a unifying mechanism for motor cognition. Neuroimage. 2001
https://pubmed.ncbi.nlm.nih.gov/11373140/

#Yokoi A, Diedrichsen J. Neural Organization of Hierarchical Motor Sequence Representations in the Human Neocortex. Neuron. 2019
https://pubmed.ncbi.nlm.nih.gov/31345643/


#Zabicki A, de Haas B, Zentgraf K, Stark R, Munzert J, Krüger B. Imagined and Executed Actions in the Human Motor System: Testing Neural Similarity Between Execution and Imagery of Actions with a Multivariate Approach. Cereb Cortex. 2017
https://pubmed.ncbi.nlm.nih.gov/27600847/


#Svoboda and Li. Neural mechanisms of movement planning: motor cortex and beyond. Current Opinion in Neurobiology. 2018.
https://www.sciencedirect.com/science/article/abs/pii/S0959438817302283?via%3Dihub

– Before the signal from your foot touching the ground has even reached the brain, the brain has already sent the order to your foot to take the next step – and it has already calculated the muscle patterns for the next two. 

Even though we make it seem quite simple, walking is a pretty complicated movement. It involves the integration of the visual, vestibular, and somatosensory systems to coordinate the limbs and trunk for safe locomotion. You have to balance a constantly changing center of mass throughout the movement. Many neural structures, from the cortex to the spinal cord, work together to make you walk.

So to lighten the burden of controlling all of this in real time, neural circuits automate it to a certain degree. In other animals, pattern generators take care of rhythmic activities like swimming, flying, chewing or breathing. Demonstrating this type of circuitry in humans is more difficult, since the relevant animal experiments are generally invasive. So most of what we know comes from inference: we share common problems with other mammals, such as coordinating a few limbs in a meaningful and energy-efficient way, and we appear to implement similar neural solutions to those problems. These pattern-generating circuits can produce the alternating movement even in the absence of sensory feedback. That does not mean, however, that sensory feedback plays no role during locomotion: it is essential for adapting ongoing activity by changing the intensity and duration of muscle activity. You can think of it this way: the pattern generators provide a rough template of the walking activity, which is then adapted to the reality of the surrounding world by sensory input.

#Grillner S, Kozlov A. The CPGs for Limbed Locomotion-Facts and Fiction. Int J Mol Sci. 2021 

https://pubmed.ncbi.nlm.nih.gov/34070932

Quote: “The graceful locomotor movements that we observe as a cheetah runs after a prey or a ballet dancer performs utilize most parts of the nervous system, from the spinal cord to the cortex and cerebellum. The very core of the locomotor system is, however, composed of spinal circuits under the control of locomotor command centers in the brainstem [1]. The general pattern of motor coordination during walking is ancient and similar in rodents, cats and humans and also in birds, which suggests that it had evolved already in reptiles and possibly earlier [2]. In the spinal cord, there is a locomotor network of neurons coordinating the pattern of muscle activation in each step cycle, a central pattern generator network (CPG). This network is also subject to a profound sensory control from receptors/afferents activated by the limb movements [3,4,5,6]. During a step cycle, the limb goes through four different phases, (1) the support phase, (2) the lift off phase, (3) the forward flexion during the swing phase and (4) finally, a touch down phase, which is the most demanding part, when the limb is extended to make contact with the ground with appropriate speed in relation to ground. The pattern of muscle activation is more complex than a mere alternation between flexors and extensors, and ensures that the lift off and touch down phases are adequately controlled. This complex pattern remains after a complete denervation of the afferents from the limb, and it is thus coordinated by the CPG itself [7,8]. Under normal conditions all phases are subject to modulation due to sensory input arising from the limb as it is subject to a variety of perturbations as, for example, we walk on a slope or experience obstacles. The sensory elements that influence the CPG are of critical importance for normal locomotion and have been dealt with extensively elsewhere [1,6,9]. 
Suffice it to mention that the extensor Golgi tendon organs activated during the support phase and receptors signaling the hip position can affect the transition from stance to swing and thus has an impact on the CPG.”

#Kiehn O. Decoding the organization of spinal circuits that control locomotion. Nat Rev Neurosci. 2016 

https://pmc.ncbi.nlm.nih.gov/articles/PMC4844028

Quote: “Although locomotion might seem effortless, it is a complex motor behaviour that involves the concerted activation of a large number of limb and body muscles. The planning and initiation of locomotion take place in supraspinal areas, including the cortex1, the basal ganglia 2–4, the midbrain 5,6 and the hindbrain 7–9, but the precise timings and patterns of locomotor movements in vertebrates are generated by activity in neuron assemblies that are located in the spinal cord itself 10,11 (FIG. 1). These neurons receive activating inputs from the brain and are able to produce the rhythms and patterns of locomotion that are conveyed to motor neurons and then to the axial and limb muscles, as first shown by Thomas Graham Brown more than 100 years ago in the cat12 and later confirmed in all vertebrates13. Additional layers of regulation come from the cerebellum, modulatory signals 9,14–16 and sensory feedback 17,18.”

#Sten Grillner and Abdeljabbar El Manira. Current Principles of Motor Control, with Special Reference to Vertebrate Locomotion. Physiological Reviews. 2020.

https://journals.physiology.org/doi/full/10.1152/physrev.00015.2019
Quote: “Whether this hypothesis would hold was tested by determining if removal of sensory feedback could switch the complex pattern of muscle activity into a simple alternating program between all extensors and flexors (234, 235). After both deafferentation (section of all dorsal roots; FIGURE 12B) and in curarized animals (no movement-related sensory feedback), the complex pattern of activation of different motorneuron pools persisted (134, 234, 235). These data provided a firm proof that proprioceptive reflexes are not a prerequisite for the generation of the complex locomotor program. Thus central neural networks (locomotor CPGs) alone can generate the delicate timing pattern that will sequentially start and terminate the activity of individual muscles at the correct phase of the step cycle (210). However, the motor pattern can at times become more variable in the absence of sensory feedback.”

#Clark DJ. Automaticity of walking: functional significance, mechanisms, measurement and rehabilitation strategies. Front. Hum. Neurosci. 2015

https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2015.00246/full

Quote: “The “central pattern generator” circuits of the spinal cord are perhaps the most well-known locomotor circuits supporting automaticity. Evidence from animals and humans reveals that non-patterned electrical input to the lumbar spinal cord can elicit flexion/extension movements of the limbs that are similar to walking, even in the absence of input from the brain (Grillner, 1981). For instance, Dimitrijevic and colleagues used epidural stimulation of the posterior spinal cord to elicit locomotor-like limb movements in adults with complete spinal cord injury. This finding complements earlier research that demonstrated the ability of decerebrate cats to perform basic stepping movements (Sherrington, 1910; Brown, 1911). Spinal pattern generating circuits may already be operational at birth, as they have been proposed to be responsible for coordinated kicking movements in human infants, as well as the “step reflex” that occurs when infants are stood upright with body weight supported (Forssberg, 1985). With maturation and practice, these circuits become more complex in order to facilitate coordinated adult locomotion (Ivanenko et al., 2004; Clark et al., 2010; Dominici et al., 2011). At the next level of the neuraxis are brainstem circuits of locomotor control. Electrical stimulation of isolated brainstem regions has been shown to evoke walking-like behaviors. The two key regions that have been identified are the mesencephalic locomotor region (MLR) and subthalamic locomotor region (SLR). The MLR has been observed in all vertebrate species tested to date, including lamprey, salamander, stingray, rat, guinea-pig, rabbit, cat, and monkey (Le Ray et al., 2011; Ryczko and Dubuc, 2013). It provides excitatory input to the spinal cord that serves to initiate, scale, and sustain the descending command for walking (Le Ray et al., 2011; Ryczko and Dubuc, 2013). 
The SLR is considered to be closely related to the MLR and has been found in a number of vertebrates including rats and cats (Kasicki et al., 1991; Narita et al., 2002). It may have particular relevance for scaling locomotor output, such as when inducing changes in speed and cadence (e.g., walking vs. running) (Narita et al., 2002). In addition to brainstem locomotor regions, a cerebellar locomotor region has been reported in cats (Mori et al., 1998). Furthermore, studies in humans with cerebellar damage have shown the important role of the cerebellum in the control and coordination of balance and walking (Morton and Bastian, 2004). Among the notable findings with cerebellar damage are ataxic gait, impaired motor learning, and compromised ability to make predictive gait and balance modifications (Horak and Diener, 1994; Morton and Bastian, 2004, 2006). Finally, descending excitatory drive from cerebral motor pathways is considered crucial to facilitating the brainstem and spinal circuits of automaticity in humans (Yang and Gorassini, 2006). Emerging evidence from studies using electroencephalography and transcranial magnetic stimulation further suggest a direct involvement of motor cortex in driving muscle activation, even during undemanding steady state walking (Petersen et al., 2001, 2012). Accordingly, some aspects of automaticity of walking may reside in cerebral circuits. Cumulatively, the CNS circuits discussed here comprise the neurophysiological architecture that allows for automaticity of walking without the need for continuous attentional monitoring and executive control.”

#Klarner T, Zehr EP. Sherlock Holmes and the curious case of the human locomotor central pattern generator. J Neurophysiol. 2018 

https://pmc.ncbi.nlm.nih.gov/articles/PMC6093959

Quote: “Central pattern generators (CPGs) for walking are neuronal networks that produce rhythmic activation of muscles that control the limbs. There is a wealth of data to support the existence of spinal locomotor CPGs in other animals but very little direct evidence for CPGs in humans. In reduced animal models, direct recordings can be taken, giving indisputable evidence for the structure and function of CPGs in generating rhythmic movements. In humans, the experimental techniques needed to definitively confirm parallel observations are invasive and thus not feasible or ethical to perform. Therefore, we must rely on indirect evidence and inference—the process of logical deduction—to assess the contributions of CPGs in rhythmic human movements. The exact locations of the CPG networks, how many there may be, and how they are coordinated remains beyond the scope that our methodologies can reveal. Thus, in this review, we use the term CPG as an umbrella term encompassing one or many distributed CPG networks.”

#Reimann H, Fettrow T, Thompson ED and Jeka JJ (2018) Neural Control of Balance During Walking. Front. Physiol.

https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2018.01271/full#h5
Quote: “Neural control of balance has been studied extensively in standing, using a variety of techniques with quiet unperturbed stance as well as sensory and mechanical perturbations (Peterka, 2002). Despite the vast knowledge gained regarding balance control during standing, such findings do not necessarily translate to balance control during walking. The main reason is the gait cycle. While responses to disturbances during standing follow a short-medium-long latency response pattern over 50–200 ms involving a proximal-to-distal pattern (or vice versa) of muscular activation (Horak and Nashner, 1986), responses to disturbances during walking can occur anytime over the much longer (≈600 ms) gait cycle of steady state walking. Critically, body configuration changes dramatically over the gait cycle (e.g., double vs. single stance), necessitating vastly different mechanisms to maintain upright balance at different points of the cycle.”

– As your foot catches the peel, the gyroscope inside your ears notices a sudden change in your position in space. It sends this information to your brain stem and spinal cord, the “things must happen quickly” section of your body. They immediately trigger emergency recovery patterns and send orders to different muscle groups.

#Ryu, H.X., Kuo, A.D. An optimality principle for locomotor central pattern generators. Sci Rep 11, 13140 (2021). 

https://doi.org/10.1038/s41598-021-91714-1

Quote: “A combination of two types of neural circuitry appears responsible for the basic locomotory motor pattern. One type is the central pattern generator (CPG; Fig. 1A), which generates pre-programmed, rhythmically timed, motor commands1–3. The other is the reflex circuit, which produces motor patterns triggered by sensory feedback (Fig. 1C). Although they normally work together, each is also capable of independent action. The intrinsic CPG rhythm patterns can be sustained with no sensory feedback and only a tonic, descending input, as demonstrated by observations of fictive locomotion4,5. Reflex loops alone also appear capable of controlling locomotion1 ,particularly with a hierarchy of loops integrating multiple sensory modalities for complex behaviors such as stepping and standing control6,7. We refer to the independent extremes as pure feedforward control and pure feedback control. Of course, within the intact animal, both types of circuitry work together for normal locomotion (Fig. 1B)8. However, this cooperation also presents a dilemma, of how authority should optimally be shared between the two9.”

– Within 200 milliseconds, preprogrammed sequences activate to catch your fall. Your arms shoot out, your other leg stiffens to support your weight, your core muscles contract to stabilise you. 100 milliseconds later, when you become aware that you are tripping, your body is already recovering. You are only just now catching up. 
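This timeline can be laid out as a quick sanity check. The ~700 ms fall duration, ~200 ms response onset, and ~226–292 ms arm movement time are the ballpark figures from the DeGoede et al. source cited below; the ~300 ms awareness figure is the narration's estimate (100 ms after the response begins):

```python
# Rough timeline of a trip-and-recover, in milliseconds,
# using the ballpark figures from DeGoede et al. and the narration.

FALL_DURATION_MS = 700    # balance disturbance to ground contact
RESPONSE_ONSET_MS = 200   # preprogrammed recovery patterns activate
ARM_MOVEMENT_MS = 292     # upper bound of measured arm movement time
AWARENESS_MS = 300        # ~100 ms after recovery has already begun

hands_in_place_ms = RESPONSE_ONSET_MS + ARM_MOVEMENT_MS
margin_ms = FALL_DURATION_MS - hands_in_place_ms

print(hands_in_place_ms)                 # 492: hands positioned before impact
print(margin_ms)                         # 208: time to spare before hitting the ground
print(AWARENESS_MS > RESPONSE_ONSET_MS)  # True: awareness lags the recovery response
```

Even in the worst case, the hands are in place with roughly 200 ms to spare, while conscious awareness arrives only after the recovery is already under way.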

#DeGoede KM, Ashton-Miller JA, Liao JM, Alexander NB. How quickly can healthy adults move their hands to intercept an approaching object? Age and gender effects. J Gerontol A Biol Sci Med Sci. 2001 

https://pubmed.ncbi.nlm.nih.gov/11524453

Quote: “A fall to the ground has been observed to take approximately 700 milliseconds from the initial balance disturbance (1). Simple reaction times have been shown to be approximately 200 milliseconds in healthy adults (10,11), leaving a residual 500 milliseconds MT to move the arms into position for impact. The MT reported here, which ranged from 226 to 292 milliseconds, suggest that healthy adults should have sufficient time to deploy the hands properly to arrest a fall to the floor. However, the speed–accuracy trade-off will mean that tasks requiring greater accuracy, such as grabbing a support rail to arrest a fall, may take longer. In a recent study, elderly women arrested 21% of noninjurious falls in such a manner (12).”

Are YOU just a Prediction of your Brain? 

– Why do you feel about the world the way you do? Your sense of hunger, your energy level and especially your emotions are not just objective reactions to the state you are in, but predictions: your brain’s prediction of what you will need soon or need to be ready for. You are probably used to eating or going to bed at roughly the same time – and as that time approaches, your brain releases hormones to prepare you. A self-fulfilling prophecy. You get hungry or tired because your brain assumes this is the time when it is needed. This is the most striking thing about your emotions.

Before we explain further, it is important to note that this is still a very active area of research and we do not have definitive answers to even seemingly basic questions, like what an emotion is. There are multiple theories of emotion, and the overall topic is too complex to fairly explain all of them here. These theories differ on even the most basic things, like the definition of an emotion, how emotions differ from related concepts like mood, reward, and motivation, or how various phenomena such as facial movements, physiological changes, and feelings should be understood in relation to emotions.

There is also fierce debate over how different researchers interpret the research results. On top of that, the use of the same vocabulary with different meanings makes a deeper understanding of the literature very difficult, especially for researchers outside the field. We therefore went with the explanation that made the most sense to us among those we encountered. The nuances and details are beyond our scope, however, and we are not proficient enough to take sides with any school of thought. 

The history is long and the nuances are beyond our scope, but we can mainly talk about three classes of emotion theories. For the longest time, it was believed that emotions are hardwired reactions to external events and therefore universal. It was assumed that there are a few of them, like joy, sadness, disgust, fear and anger, that manifest similarly in everyone, so researchers in this field studied brains to find the corresponding circuitries. This is the classical view, and it traces back all the way to Darwin. There are still proponents of this view, though in different formulations. Later research, however, painted a different, more interactive picture in which emotions result from people’s interpretations and explanations of their circumstances. This class of theories is collected under appraisal theories, and according to them we do not just respond to our environment like a reflex, but interpret it, and these interpretations have something to do with emotion. There is also a third class, which followed the appraisal theories: constructionist theories, which reject essentialism, i.e. the idea that there are brain regions dedicated exclusively to emotions. It is this last group, especially the theory of constructed emotion, that we describe in this part.

According to this theory, emotions are predictions rather than mere reactions to external events. Predictions allow us to respond much faster, and they are less costly than reacting to everything in real time. So rather than thinking of emotions as labels like happy or sad, this theory understands them as constructed from three basic resources: internal sensations from our body, sensory information from the outside world, and mental representations of past experiences. The brain proactively predicts which emotion is most appropriate for the situation, and we start to experience that emotion.

#Barrett LF, Simmons WK. Interoceptive predictions in the brain. Nat Rev Neurosci. 2015.
https://pmc.ncbi.nlm.nih.gov/articles/PMC4731102/
Quote: “Understanding the brain as issuing and sculpting interoceptive inferences genuinely inverts the traditional functional hierarchy, such that agranular (limbic) cortices are no longer assigned the function of reacting to stimulation from the world, but are instead anticipating it. Rather than interoceptive perceptions being solely the representation of afferent sensory input from the body, they can be thought of as inferences about the sensory consequences of homeostatic budgeting that are implemented as upcoming visceromotor commands; these inferences are constrained by error signals that result from the failure of previous predictions to accurately account for incoming interoceptive sensations127. Prediction errors have the capacity to feed back up the active inference hierarchy to sculpt future visceromotor outputs and predictions. In this way, representations that are built from previous experience drive neural activity and are modulated or constrained by actual sensory input from the internal milieu of the body. In the most general terms, interoceptive perceptions – that is, what is experienced – derive from the brain’s best guess about the causes of events in the body, with incoming sensory inputs keeping those guesses in check. According to the EPIC model, not only has your past viscerosensory experience reached forward to create your present experience, but how your body feels now will again project forward to influence what you will feel in the future. It is an elegantly orchestrated self-fulfilling prophecy, embodied within the architecture of the nervous system.”

#Barrett LF. The theory of constructed emotion: an active inference account of interoception and categorization. Soc Cogn Affect Neurosci. 2017
https://pmc.ncbi.nlm.nih.gov/articles/PMC5390700/#nsw154-B218
Quote: “An increasingly popular hypothesis is that the brain’s simulations function as Bayesian filters for incoming sensory input, driving action and constructing perception and other psychological phenomena, including emotion. Simulations are thought to function as prediction signals (also known as ‘top-down’ or ‘feedback’ signals, and more recently as ‘forward’ models) that continuously anticipate events in the sensory environment.10 This hypothesis is variously called predictive coding, active inference, or belief propagation (e.g. Rao and Ballard, 1999; Friston, 2010; Seth et al., 2012; Clark, 2013a,b; Hohwy, 2013; Seth, 2013; Barrett and Simmons, 2015; Chanes and Barrett, 2016; Deneve and Jardri, 2016).11 Without an internal model, the brain cannot transform flashes of light into sights, chemicals into smells and variable air pressure into music. You’d be experientially blind (Barrett, 2017). Thus, simulations are a vital ingredient to guide action and construct perceptions in the present.12 They are embodied, whole brain representations that anticipate (i) upcoming sensory events both inside the body and out as well as (ii) the best action to deal with the impending sensory events. Their consequence for allostasis is made available in consciousness as affect (Barrett, 2017).”

#Barrett LF. Solving the emotion paradox: categorization and the experience of emotion. Pers Soc Psychol Rev. 2006
https://pubmed.ncbi.nlm.nih.gov/16430327/
Quote: “At a sensory level, people have a continuous stream of homeostatic feedback from the body that delivers affective information about their current relation to the world. It is not a specific interoceptive readout of autonomic activity or anything so precise. Rather, it is a core affective state that gives rise to feelings of displeasure (or pleasure) and activation (or deactivation) that results from many sources, including ongoing automatic evaluations or primary appraisals of the world. The way that people conceptualize their affective state will depend on the knowledge about emotion that they bring to bear when categorizing it. Knowledge about emotion is context dependent, represented by sensory, motor, and somatovisceral information, and driven by emotion language. A person might experience his or her core affective state as a particular sort of sadness, anger, or nervousness, depending on the conceptual knowledge that he or she brings to bear in that situation. Categorizing core affect in this way is functional. It changes core affect into a meaningful experience, allowing people to make inferences about what caused the state, and how to deal with the situation. Emotion categorizations also allow people to efficiently communicate their experiences of core affect to others. Categorizing core affect into the experience an emotion can proceed with more or less skill. It is a skill that we bring to bear when representing and communicating about our own internal states, as well as the internal states of others, including nonhuman animals.”

Interested viewers can refer to the following review papers and materials.

The following is a simple summary of the theories of emotion:

#Amanda Taintor. Exploring the Theories of Emotion. ASCCC Open Educational Resources Initiative. Retrieved January 2025. https://socialsci.libretexts.org/Courses/Northeast_Wisconsin_Technical_College/Infant_and_Toddler_Care_and_Development_(NWTC)/17%3A_Emotional_Development/17.02%3A_Theories_of_Emotion

There are also alternative understandings of emotion that still include proactive prediction:

#Mark Solms. What is Emotion? 2022.
https://www.therapyroute.com/article/what-is-emotion-by-m-solms
Quote: “Emotion is actually a sensory modality, akin to vision, hearing, smell and so on. It is surprising how few people recognise this. If you could subtract all the classical sensory modalities from consciousness, there would still be something left – this something is emotion. The most fundamental difference between emotion and the other sensory modalities is that they register states of the external world – of objects – whereas emotions register the state of the internal world – the subject. Your emotions register the state of you. Emotions may be triggered by external events, but they do not register the events themselves, they register your reaction to them. That is why the same event may be exhilarating to one person and terrifying to another.

The defining feature of the basic emotions is that they are inborn responses to situations of universal biological significance. They are, in a sense, inherited memories of how to respond in such situations, crucial for survival and reproductive success. Those of our ancestors who did not possess the genetic sequences that pre-programme these responses therefore tended not to survive and reproduce – which is why we do not resemble them.”
