ScienceDaily (Oct. 7, 2012) — UCLA researchers have for the first time measured the activity of a brain region known to be involved in learning, memory and Alzheimer’s disease during sleep. They discovered that this part of the brain behaves as if it’s remembering something, even under anesthesia, a finding that counters conventional theories about memory consolidation during sleep.
University of California, Irvine | July 31, 2012
UC Irvine scientists have discovered intriguing differences in the brains and mental processes of an extraordinary group of people who can effortlessly recall every moment of their lives since about age 10.
The phenomenon of highly superior autobiographical memory – first documented in 2006 by UCI neurobiologist James McGaugh and colleagues in a woman identified as “AJ” – has been profiled on CBS’s “60 Minutes” and in hundreds of other media outlets. But a new paper in the July issue of the peer-reviewed journal Neurobiology of Learning & Memory offers the first scientific findings about nearly a dozen people with this uncanny ability.
All had variations in nine structures of their brains compared to those of control subjects, including more robust white matter linking the middle and front parts. Most of the differences were in areas known to be linked to autobiographical memory, “so we’re getting a descriptive, coherent story of what’s going on,” said lead author Aurora LePort, a doctoral candidate at UCI’s Center for the Neurobiology of Learning & Memory.
Surprisingly, the people with stellar autobiographical memory did not score higher on routine laboratory memory tests or when asked to use rote memory aids. Yet when it came to public or private events that occurred after age 10½, “they were remarkably better at recalling the details of their lives,” said McGaugh, senior author on the new work.
“These are not memory experts across the board. They’re 180 degrees different from the usual memory champions who can memorize pi to a large degree or other long strings of numbers,” LePort noted. “It makes the project that much more interesting; it really shows we are homing in on a specific form of memory.”
ScienceDaily (May 24, 2012) — Experiencing strong emotions synchronizes brain activity across individuals, a research team at Aalto University and Turku PET Centre in Finland has revealed.
Experiencing strong emotions synchronizes brain activity across individuals. (Credit: Image courtesy of Aalto University)
Human emotions are highly contagious. Seeing others’ emotional expressions, such as smiles, often triggers the corresponding emotional response in the observer. Such synchronization of emotional states across individuals may support social interaction: when all group members share a common emotional state, their brains and bodies process the environment in a similar fashion.
Researchers at Aalto University and Turku PET Centre have now found that feeling strong emotions makes different individuals’ brain activity literally synchronous.
The results revealed that feeling strong unpleasant emotions in particular synchronized the brain’s emotion-processing networks in the frontal and midline regions. In contrast, experiencing highly arousing events synchronized activity in the networks supporting vision, attention and the sense of touch.
“Sharing others’ emotional states provides the observers with a somatosensory and neural framework that facilitates understanding others’ intentions and actions and allows them to ‘tune in’ or ‘sync’ with them. Such automatic tuning facilitates social interaction and group processes,” says Adjunct Professor Lauri Nummenmaa from Aalto University, Finland.
“The results have major implications for current neural models of human emotions and group behavior. They also deepen our understanding of mental disorders involving abnormal socioemotional processing,” Nummenmaa says.
Participants’ brain activity was measured with functional magnetic resonance imaging while they were viewing short pleasant, neutral and unpleasant movies.
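The article does not spell out the study’s analysis pipeline, but a standard way to quantify this kind of across-subject synchronization is intersubject correlation (ISC): correlate a brain region’s fMRI time series between every pair of viewers and average the result. The sketch below is a generic illustration of that idea, not the researchers’ actual method.

```python
# Illustrative intersubject-correlation (ISC) sketch. The study's exact
# analysis is not described in the article; this is a generic version of
# the technique, written with only the standard library.
import math
from itertools import combinations

def pearson(x, y):
    """Plain Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def intersubject_correlation(series_by_subject):
    """Mean pairwise correlation of one region's time series across subjects.
    A high value means the movie drove similar activity in every brain."""
    pairs = list(combinations(series_by_subject, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)
```

If all viewers' signals rise and fall together, the ISC approaches 1; uncorrelated or opposing signals pull it toward zero or below.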
Source: Science Daily
ScienceDaily (July 5, 2012) — Sensory substitution devices (SSDs) use sound or touch to help the visually impaired perceive the visual scene surrounding them. The ideal SSD would assist not only in sensing the environment but also in performing daily activities based on this input, such as accurately reaching for a coffee cup or shaking a friend’s hand. In a new study, scientists trained blindfolded sighted participants to perform fast and accurate movements using a new SSD, called EyeMusic. Their results are published in the July issue of Restorative Neurology and Neuroscience.
Left: An illustration of the EyeMusic SSD, showing a user with a camera mounted on the glasses, and scalp headphones, hearing musical notes that create a mental image of the visual scene in front of him. He is reaching for the red apple in a pile of green ones. Top right: close-up of the glasses-mounted camera and headphones; bottom right: hand-held camera pointed at the object of interest. (Credit: Maxim Dupliy, Amir Amedi and Shelly Levy-Tzedek)
The EyeMusic, developed by a team of researchers at the Hebrew University of Jerusalem, employs pleasant musical tones and scales to help the visually impaired “see” using music. This non-invasive SSD converts images into a combination of musical notes, or “soundscapes.”
The device was developed by the senior author Prof. Amir Amedi and his team at the Edmond and Lily Safra Center for Brain Sciences (ELSC) and the Institute for Medical Research Israel-Canada at the Hebrew University. The EyeMusic scans an image and represents pixels at high vertical locations as high-pitched musical notes and low vertical locations as low-pitched notes according to a musical scale that will sound pleasant in many possible combinations. The image is scanned continuously, from left to right, and an auditory cue is used to mark the start of the scan. The horizontal location of a pixel is indicated by the timing of the musical notes relative to the cue (the later it is sounded after the cue, the farther it is to the right), and the brightness is encoded by the loudness of the sound.
The EyeMusic’s algorithm uses a different musical instrument for each of five colors: white (vocals), blue (trumpet), red (reggae organ), green (synthesized reed) and yellow (violin); black is represented by silence. Prof. Amedi mentions that “The notes played span five octaves and were carefully chosen by musicians to create a pleasant experience for the users.” Sample sound recordings are available at http://brain.huji.ac.il/em/.
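The mapping described above can be sketched in code. Note that this is a minimal illustration of the encoding scheme as the article describes it, not the actual EyeMusic implementation: the pentatonic scale, the column interval, and the simplified color input are all assumptions made for the example.

```python
# Hypothetical sketch of an EyeMusic-style image-to-soundscape encoding.
# Assumptions (not from the article): a pentatonic scale, a fixed
# per-column time step, and pre-classified color names per pixel.

PENTATONIC_SEMITONES = [0, 2, 4, 7, 9]   # illustrative "pleasant" scale steps
INSTRUMENT_FOR_COLOR = {                  # the article's color-instrument map
    "white": "vocals", "blue": "trumpet", "red": "reggae organ",
    "green": "synthesized reed", "yellow": "violin",
}

def pitch_for_row(row, n_rows, base_midi=36, octaves=5):
    """Higher pixels -> higher-pitched notes, spanning five octaves."""
    notes_per_octave = len(PENTATONIC_SEMITONES)
    total_notes = octaves * notes_per_octave
    # row 0 is the top of the image, so invert: top row -> highest note
    idx = int((n_rows - 1 - row) / n_rows * total_notes)
    octave, step = divmod(idx, notes_per_octave)
    return base_midi + 12 * octave + PENTATONIC_SEMITONES[step]

def encode_image(pixels, colors, column_interval=0.05):
    """pixels: 2D list of brightness values in [0, 1]; colors: matching
    2D list of color names. Returns a list of
    (onset_seconds, midi_pitch, loudness, instrument) events.
    The scan runs left to right after an auditory cue: a later onset
    means farther right; brightness sets loudness; black is silence."""
    events = []
    n_rows = len(pixels)
    for col in range(len(pixels[0])):
        onset = col * column_interval          # timing encodes horizontal position
        for row in range(n_rows):
            brightness = pixels[row][col]
            if brightness == 0:                # black pixel -> silence
                continue
            events.append((onset,
                           pitch_for_row(row, n_rows),
                           brightness,         # loudness encodes brightness
                           INSTRUMENT_FOR_COLOR.get(colors[row][col], "vocals")))
    return events
```

Feeding a tiny two-by-two image through `encode_image` yields one timed note per non-black pixel, with the top row sounding at a higher pitch than the bottom row and the right column sounding later than the left.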
“We demonstrated in this study that the EyeMusic, which employs pleasant musical scales to convey visual information, can be used after a short training period (in some cases, less than half an hour) to guide movements, similar to movements guided visually,” explain lead investigators Drs. Shelly Levy-Tzedek, an ELSC researcher at the Faculty of Medicine, Hebrew University, Jerusalem, and Prof. Amir Amedi. “The level of accuracy reached in our study indicates that performing daily tasks with an SSD is feasible, and indicates a potential for rehabilitative use.”
The study tested the ability of 18 blindfolded sighted individuals to perform movements guided by the EyeMusic, and compared those movements to those performed with visual guidance. At first, the blindfolded participants underwent a short familiarization session, where they learned to identify the location of a single object (a white square) or of two adjacent objects (a white and a blue square).
In the test sessions, participants used a stylus on a digitizing tablet to point to a white square located either in the north, the south, the east or the west. In one block of trials they were blindfolded (SSD block), and in the other block (VIS block) the arm was placed under an opaque cover, so they could see the screen but did not have direct visual feedback from the hand. The endpoint location of their hand was marked by a blue square. In the SSD block, they received feedback via the EyeMusic. In the VIS block, the feedback was visual.
“Participants were able to use auditory information to create a relatively precise spatial representation,” notes Dr. Levy-Tzedek.
The study lends support to the hypothesis that representation of space in the brain may not be dependent on the modality with which the spatial information is received, and that very little training is required to create a representation of space without vision, using sounds to guide fast and accurate movements. “SSDs may have great potential to provide detailed spatial information for the visually impaired, allowing them to interact with their external environment and successfully make movements based on this information, but further research is now required to evaluate the use of our device in the blind,” concludes Dr. Levy-Tzedek. These results demonstrate the potential application of the EyeMusic in performing everyday tasks — from accurately reaching for the red (but not the green!) apples in the produce aisle, to, perhaps one day, playing a Kinect/Xbox game.
Source: Science Daily
It has always been assumed that the first thing that happens is that we have the experience of an emotion, and then and only then do we start reacting to the situation physiologically. But over a hundred years ago, William James, the father of American psychology, and Carl Lange, a Danish psychologist, separately introduced the idea that we have it all backwards: First, they said, we have physiological responses to a situation, and only then do we use those responses to formulate an experience of emotion. This is called the James-Lange theory.
Walter Cannon and Philip Bard came up with a variation on the James-Lange idea in 1929: They suggested that there are neural paths from our senses that go in two directions. One goes to the cortex, where we have a subjective experience, and one goes to the hypothalamus, where the physiological processes begin. In other words, the experience of an emotion and the physiological responses occur together. This is (as you might expect by now) called the Cannon-Bard theory.
In 1937, James Papez noted that the physiological side of emotion is not just a matter of the hypothalamus, but is a complex network of neural pathways — the Papez circuit. In 1949, Paul MacLean completed and corrected Papez’s ideas, and called the larger complex the limbic system, which is what we call it today. It included the hypothalamus, the hippocampus, and the amygdala, and is tightly connected with the cingulate gyrus, the ventral tegmental area of the brain stem, the septum, and the prefrontal gyrus.
Paul MacLean is also the founder of the triune brain theory. He suggested that there is a certain evolutionary quality to the structure of the brain. Reptiles, he said, function entirely in terms of instinct, and their brains are little more than what we call the brain stem in people. He called it the archipallium or reptilian brain, and it includes the medulla, the cerebellum, the pons, and the olfactory bulbs. Above this is the paleopallium, or old mammalian brain. This is the limbic system and the portions of the brain we call the old cortex. Of course, this adds emotions to the reptilian picture, and allows for simple learning. And on top of the paleopallium is the neopallium (aka new mammalian or rational brain, or neocortex). This is where more advanced activities occur, including awareness. MacLean adds that, in human beings, these three “brains” don’t always behave cooperatively, which leads to some of the unique problems we have!