Until now, it has been assumed that people with conditions such as ADHD, Tourette syndrome, obsessive-compulsive disorder and schizophrenia, who characteristically report symptoms of "brain clutter," may suffer from anomalies in the brain's prefrontal cortex.
Damage to this brain region is often associated with failure to focus on relevant things, loss of inhibitions, impulsivity and various kinds of inappropriate behaviour.
So far, exactly what makes the prefrontal cortex so essential to these aspects of behaviour has remained elusive, hampering attempts to develop tools for diagnosing and treating these patients.
But new research by Julio Martinez-Trujillo, a professor in McGill University's Department of Physiology and Canada Research Chair in Visual Neuroscience, has brought new hope to these patients.
He believes the key to the "brain clutter" and impulsivity shown by individuals with dysfunctional prefrontal cortices lies in a malfunction of a specific type of brain cell. Martinez-Trujillo and his team have identified neurons in the dorsolateral sub-region of the primate prefrontal cortex that selectively filter important from unimportant visual information.
The key to the normal functioning of these "filter neurons" is their ability, in the presence of visual clutter, to selectively and strongly inhibit the unimportant information, giving the rest of the brain access to what is relevant.
"Contrary to common belief, the brain has a limited processing capacity. It can only effectively process about one per cent of the visual information that it takes in," Martinez-Trujillo said. "This means that the neurons responsible for perceiving objects and programming actions must constantly compete with one another to access the important information.
"What we found when we looked at the behaviour of the neurons in the prefrontal cortex was that an animal's ability to successfully accomplish a single action in the presence of visual clutter was dictated by how well these units suppressed distracting information."
These results could be highly relevant for identifying the causes and improving the diagnosis and treatments of a wide range of mental disorders including ADHD and schizophrenia.
The research was conducted by Therese Lennert, a PhD student who holds a Vanier Scholarship, and it was funded by the Canada Research Chair program, Canadian Institutes of Health Research (CIHR), EJLB Foundation, and Natural Sciences and Engineering Research Council of Canada (NSERC).
Tuesday, April 26, 2011
Saturday, April 16, 2011
Dyslexia Typeface project wins Smart Prize
AMSTERDAM - The winner of the smart future minds award will be announced on 28 April.
During the smart urban stage event in Amsterdam, twelve innovative projects are competing for the prize.
All the projects focus on future life in the city. One of the contenders is Dyslexia by Christian de Boer.
He has developed a new typeface with which people with dyslexia make fewer reading errors.
The 'Dyslexia' project
Christian de Boer is a graphic designer and has dyslexia himself. People with dyslexia think in images. Because the letterforms of Western script look so much alike, dyslexic readers easily confuse one letter with another.
This leads to learning problems at school as soon as children start learning to read. Dyslexia is far less common in China than in the West, because Chinese script is partly based on icons. Because the characters differ more from one another, there is less confusion. The typeface 'Dyslexie' was created on this basis: it makes the differences between letters more pronounced, so that dyslexic readers can recognise them more easily.
'I wanted to create a typeface that is easier to read for people with dyslexia. They often see letters float and perceive them as 3D objects. My idea was to mentally tie the letters down to the ground,' is how Christian de Boer explains his project.
Why Does Brain Development Diverge from Normal in Autism Spectrum Disorders?
Rett syndrome, a neurodevelopmental disorder on the autism spectrum, is marked by relatively normal development in infancy followed by a loss of cognitive, social and language skills starting at 12 to 18 months of age.
It is increasingly seen as a disorder of synapses, the connections between neurons that together form brain circuits. What hasn't been clear is why children start out developing normally, only to become progressively disadvantaged.
New research from Children's Hospital Boston, published in the April 14 issue of Neuron, helps unravel what's going on.
The researchers, led by Chinfei Chen, MD, PhD, of Children's F.M. Kirby Neurobiology Center, studied synapse development in mice with a mutation in the MeCP2 gene, the same gene linked to human Rett syndrome.
They found strong evidence that the loss of functioning MeCP2 prevents synapses and circuits from maturing and refining in response to cues from the environment -- just at the time when babies' brains should be maximally receptive to these cues.
Chen believes her findings may have implications not just for Rett syndrome, but for other autism spectrum disorders. "Many ASDs manifest between 1 and 2 years of age, a period when kids are interacting more with the outside world," says Chen.
"The brain of an autistic child looks normal, but there's a subtle difference in connections that has to do with how they process experiences. If you could diagnose early enough, there might be a way to alter the course of the disease by modifying experience, such as through intense one-to-one therapy."
Chen and colleagues focused on a synaptic circuit in the brain's visual system that is relatively easy to study, known as the retinogeniculate synapse.
It connects the cells receiving input from the eye to the lateral geniculate nucleus, an important relay station in the brain's thalamus. Visual input from the outside world, during a specific "critical period," is crucial for its normal development.
The team tested the functioning of the circuit by stimulating the optic tract and measuring electrical responses in the thalamus to see how the neurons were connected, and how strong the connections were.
In MeCP2-mutant mice, these recordings indicated that the visual circuit formed normally at first, and that during the second week of life, weaker connections were pruned away and others strengthened, just as they should be.
But after day 21 of life -- after mice open their eyes and when the visual circuitry should be further pruned and strengthened based on visual experience -- it became abnormal. The number of inputs and connections actually increased, while the strength of the synapses decreased.
This pattern was similar to that seen when normal mice were kept in the dark after day 21, depriving them of visual stimulation. Together, the findings suggest that MeCP2 is critically important to our ability to refine synaptic circuits based on sensory experience, says Chen. Without MeCP2, the circuit fails to incorporate this experience.
"During this last phase of development, you need sensory input to lock down and stabilize the connections," Chen explains. "But the circuit is not getting the right signal to stabilise, and continues to look around for the right connections."
Technique for Letting Brain Talk to Computers Now Tunes into Speech
The act of mind reading is something usually reserved for science-fiction movies, but researchers in America have, for the first time, used a technique usually associated with identifying epilepsy to show that a computer can listen to our thoughts.
In a new study, scientists from Washington University demonstrated that humans can control a cursor on a computer screen using words spoken out loud or in their head, a finding with huge potential applications for patients who have lost their speech through brain injury or for disabled patients with limited movement.
By directly connecting the patient's brain to a computer, the researchers showed that the computer could be controlled with up to 90% accuracy even when no prior training was given.
Patients with a temporary surgical implant have used regions of the brain that control speech to "talk" to a computer for the first time, manipulating a cursor on a computer screen simply by saying or thinking of a particular sound.
"There are many directions we could take this, including development of technology to restore communication for patients who have lost speech due to brain injury or damage to their vocal cords or airway," says author Eric C. Leuthardt, MD, of Washington University School of Medicine in St. Louis.
Scientists have typically programmed the temporary implants, known as brain-computer interfaces, to detect activity in the brain's motor networks, which control muscle movements.
"That makes sense when you're trying to use these devices to restore lost mobility -- the user can potentially engage the implant to move a robotic arm through the same brain areas he or she once used to move an arm disabled by injury," says Leuthardt, assistant professor of neurosurgery, of biomedical engineering and of neurobiology, "But that has the potential to be inefficient for restoration of a loss of communication."
Patients might be able to learn to think about moving their arms in a particular way to say hello via a computer speaker, Leuthardt explains. But it would be much easier if they could say hello by using the same brain areas they once engaged to use their own voices.
Read more of the article here
The research appears April 7 in the Journal of Neural Engineering. The journal contains many free articles that help scientists, clinicians and engineers understand, replace, repair and enhance the nervous system.
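The control scheme described above, in which a small set of spoken or imagined sounds drives a cursor, can be sketched in the abstract. The sketch below is an invented toy mapping in Python, not the decoding pipeline used in the study; in the real system the "decoded sound" comes from signals recorded by the implant.

    # Toy sketch of the control logic only: a decoded sound class nudges a
    # cursor. In the actual study the sound is decoded from brain signals;
    # here the decoder's output is simply assumed.
    CURSOR_MOVES = {
        "oo": (0, 1),    # hypothetical mapping of sounds to cursor steps
        "ee": (0, -1),
        "ah": (-1, 0),
        "eh": (1, 0),
    }

    def update_cursor(position, decoded_sound):
        """Move the cursor one step according to the decoded sound."""
        dx, dy = CURSOR_MOVES.get(decoded_sound, (0, 0))
        return position[0] + dx, position[1] + dy

    pos = (0, 0)
    for sound in ["oo", "oo", "eh"]:   # pretend these came from the decoder
        pos = update_cursor(pos, sound)
    print(pos)  # -> (1, 2)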
Labels:
Brain Function,
language skills,
learning,
listening,
Speech
Weak Evidence for Word-Order Universals
About 6,000 languages are spoken today worldwide. How this wealth of expression developed, however, largely remains a mystery.
A group of researchers at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, has now found that word order in languages from different language families evolves differently.
The finding contradicts the common understanding that word order develops in accordance with a set of universal rules applicable to all languages.
Researchers have concluded that languages do not primarily follow innate rules of language processing in the brain. Rather, sentence structure is determined by the historical context in which a language develops.
Linguists want to understand how languages have become so diverse and what constraints language evolution is subject to. To this end, they search for recurring patterns in language structure.
In spite of the enormous variety of sounds and sentence structure patterns, linguistic chaos actually stays within certain limits: individual language patterns repeat themselves. For example, in some languages the verb is placed at the beginning of the sentence, while in others it is placed in the middle or at the end. The formation of words in a given language also follows certain principles.
Michael Dunn and Stephen Levinson of the Max Planck Institute for Psycholinguistics have analysed 301 languages from four major language families: Austronesian, Indo-European, Bantu and Uto-Aztecan.
The researchers focused on the order of the different sentence parts, such as "object-verb," "preposition-noun," "genitive-noun" or "relative clause-noun," and whether their position in the sentence influenced the other parts of the sentence.
In this way, the researchers wanted to find out whether the position of the verb has other syntactic consequences: if the verb precedes the object for example ("The player kicks the ball"), is the preposition simultaneously placed before the noun ("into the goal")? Such a pattern is observed in many languages, but is it an inevitable feature of how languages develop?
"Our study shows that different processes occur in different language families," says Michael Dunn. "The evolution of language does not follow one universal set of rules." For example, the "verb-object" pattern influences the "preposition-noun" pattern in the Austronesian and Indo-European languages, but not in the same way, and not in the other two language families. The researchers never found the same pattern in word-order across all language families.
Parents' hesitations help toddlers learn new words
A team of cognitive scientists has good news for parents who are worried that they are setting a bad example for their children when they say "um" and "uh." A study conducted at the University of Rochester's Baby Lab shows that toddlers actually use their parents' stumbles and hesitations (technically referred to as disfluencies) to help them learn language more efficiently.
For instance, say you're walking through the zoo with your two-year-old and you are trying to teach him animal names. You point to the rhinoceros and say, "Look at the, uh, uh, rhinoceros." It turns out that as you are fumbling for the correct word, you are also sending your child a signal that you are about to teach him something new, so he should pay attention, according to the researchers.
Young kids have a lot of information to process while they listen to an adult speak, including many words that they have never heard before. If a child's brain waits until a new word is spoken and then tries to figure out what it means after the fact, it becomes a much more difficult task and the child is apt to miss what comes next, says Richard Aslin, a professor of brain and cognitive sciences at the University of Rochester and one of the study's authors.
"The more predictions a listener can make about what is being communicated, the more efficiently the listener can understand it," Aslin said.
The study, which was conducted by Celeste Kidd, a graduate student at the University of Rochester, Katherine White, a former postdoctoral fellow at Rochester who is now at the University of Waterloo, and Aslin, was published online April 14 in the journal Developmental Science.
Read more here
Sunday, April 10, 2011
Brain Training using MRI Scans: See yourself Think
As humans face increasing distractions in their personal and professional lives, University of British Columbia researchers have discovered that people can gain greater control over their thoughts with real-time brain feedback.
The study is the world's first investigation of how real-time functional Magnetic Resonance Imaging (fMRI) feedback from the brain region responsible for higher-order thoughts, including introspection, affects our ability to control these thoughts. The researchers find that real-time brain feedback significantly improves people's ability to control their thoughts and effectively 'train their brains.'
"Just like athletes in training benefit from a coach's guidance, feedback from our brain can help us to be more aware of our thoughts," says co-author Prof. Kalina Christoff, UBC Dept. of Psychology. "Our findings suggest that the ability to control our thinking improves when we know how the corresponding area in our brain is behaving."
People control thoughts better when they see their brain activity
Thursday, April 7, 2011
Google speech recognition software on Android
If you've tried speech-recognition software in the past, you may be skeptical of Android's capabilities. Older speech software required you to talk in a stilted manner, and it was so prone to error that it was usually easier just to give up and type.
Today's top-of-the-line systems—like software made by Dragon—don't ask you to talk funny, but they tend to be slow and use up a lot of your computer's power when deciphering your words. Google's system, on the other hand, offloads its processing to the Internet cloud.
Everything you say to Android goes back to Google's data centres, where powerful servers apply statistical modeling to determine what you're saying. The process is fast, can be done from anywhere, and is uncannily accurate.
You can speak normally, though if you want punctuation in your email you have to say "period" and "comma". You can speak for as long as you like, and you can use the biggest words you can think of. It even works if you've got a regional accent.
How does Android's speech system work so well? The magic of data. Speech recognition is one of a handful of Google's artificial intelligence programs—the others are language translation and image search—that get their power by analysing impossibly huge troves of information.
For the speech system, the data comes from a huge number of voice recordings. If you've used Android's speech recognition system, Google Voice's e-mail transcription service, Goog411 (a now-defunct information service), or some other Google speech-related service, there's a good chance that the company has your voice somewhere on its servers, and it is only because Google has your voice, and millions of others, that it can recognise speech so accurately.
Read more here: Google speech recognition software for your cellphone actually works. - By Farhad Manjoo - Slate Magazine
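For readers who want to experiment with the same send-audio-to-the-cloud idea, here is a minimal Python sketch. It uses the third-party SpeechRecognition package and Google's free web recogniser rather than the Android service the article describes, so treat it purely as an illustration of the pattern.

    # Minimal sketch of cloud-offloaded speech recognition (not the Android
    # implementation described above). Requires the third-party packages
    # SpeechRecognition and PyAudio.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        print("Say something...")
        audio = recognizer.listen(source)      # audio is captured locally

    try:
        # The recording is sent to Google's web speech API; the heavy
        # statistical modelling happens on Google's servers, not here.
        text = recognizer.recognize_google(audio)
        print("You said:", text)
    except sr.UnknownValueError:
        print("Speech was not understood")
    except sr.RequestError as err:
        print("Could not reach the recognition service:", err)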
Dyslexia: MRI Scan 'predicts when dyslexic will read'
The brain disorder dyslexia makes it difficult for even very bright children to learn how to read and can be a lifelong source of frustration.
But according to research published in the Proceedings of the National Academy of Sciences (PNAS), brain scans can predict the improvement of teenagers' reading skills with up to 90 per cent accuracy.
"This study takes an important step toward realising the potential benefits of combining neuroscience and education research by showing how brain scanning measures are sensitive to individual differences that predict educationally relevant outcomes," said Bruce McCandliss, one of the lead authors of the study and a professor at Vanderbilt University.
The research found brain scan results to be significantly more accurate in predicting how well a dyslexic child ultimately reads than standardised reading tests or the child's behaviour.
"This approach opens up a new vantage point on the question of how children with dyslexia differ from one another in ways that translate into meaningful differences two to three years down the line," Prof McCandliss said.
He said the research raises the prospect of a future test that could help match dyslexic students with the most effective treatments.
"Such insights may be crucial for new educational research on how to best meet the individual needs of struggling readers," he said.
The research was primarily conducted by experts at the Stanford University School of Medicine, with help from researchers at the Massachusetts Institute of Technology, the University of Jyvaskyla in Finland, and the University of York in the United Kingdom.
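As a rough illustration of how this kind of predictive claim is usually evaluated, the sketch below cross-validates a simple classifier that predicts later reading improvement from scan-derived features. Everything in it is a stand-in: the features, labels and model are invented and are not those used in the PNAS study.

    # Hypothetical sketch: cross-validating a classifier that predicts reading
    # improvement from brain-scan features. The data here are random stand-ins,
    # not measurements from the study.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_children, n_features = 25, 10            # e.g. activation or white-matter measures
    X = rng.normal(size=(n_children, n_features))
    y = rng.integers(0, 2, size=n_children)    # 1 = reading improved, 0 = it did not

    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"Mean cross-validated accuracy: {scores.mean():.2f}")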
Wednesday, April 6, 2011
Language and Your Brain - Infographics
For centuries, researchers have studied the brain to find exactly where mechanisms for producing and interpreting language reside. Theories abound on how humans acquire new languages and how our developing brains learn to process languages. We take a look at the mysteries of language and the brain in the infographic below.
Click on the picture to see the whole infographic on the VOXY blog
Labels:
brain,
Comprehension,
language,
Recognition,
Speech
Tuesday, April 5, 2011
Social Anxiety: Misreading faces
Children suffering from extreme social anxiety are trapped in a nightmare of misinterpreted facial expressions: They confuse angry faces with sad ones, a new study shows.
"If you misread facial expressions, you're in social trouble, no matter what other social skills you have," says Emory psychologist Steve Nowicki, a clinical researcher who developed the tests used in the study.
"It can make life very difficult, because other people's faces are like a prism through which we look at the world."
It's easy to assume that a socially anxious child would be especially sensitive to anger. "It turns out that they never learn to pick up on anger and often make the error of seeing it as sadness," Nowicki says.
"It sets up a very problematic interaction." Some socially anxious children long to interact with others, he says, and may try to comfort someone they think is sad, but who is actually angry.
"They want to help, because they're good kids," Nowicki says. "I've seen these kids trying to make a friend, and keep trying, but they keep getting rebuffed and are never aware of the reason why."
The study was co-authored by Amy Walker, a former undergraduate student at Emory, now at Yeshiva University, and will be published in the Journal of Genetic Psychology.
It is unclear whether misreading facial expressions is linked to the cause of the anxiety or merely contributes to it.
By identifying the patterns of errors in nonverbal communication, Nowicki hopes to create better diagnostic tools and interventions for those affected with behavioural disorders.
Misreading faces tied to child social anxiety
"If you misread facial expressions, you're in social trouble, no matter what other social skills you have," says Emory psychologist Steve Nowicki, a clinical researcher who developed the tests used in the study.
"It can make life very difficult, because other people's faces are like a prism through which we look at the world."
It's easy to assume that a socially anxious child would be especially sensitive to anger. "It turns out that they never learn to pick up on anger and often make the error of seeing it as sadness," Nowicki says.
"It sets up a very problematic interaction." Some socially anxious children long to interact with others, he says, and may try to comfort someone they think is sad, but who is actually angry.
"They want to help, because they're good kids," Nowicki says. "I've seen these kids trying to make a friend, and keep trying, but they keep getting rebuffed and are never aware of the reason why."
The study was co-authored by Amy Walker, a former undergraduate student at Emory, now at Yeshiva University, and will be published in the Journal of Genetic Psychology.
It is unclear whether misreading the facial expression is linked to the cause of the anxiety, or merely contributing to it.
By identifying the patterns of errors in nonverbal communication, Nowicki hopes to create better diagnostic tools and interventions for those affected with behavioural disorders.
Misreading faces tied to child social anxiety
Monday, April 4, 2011
New Brain Structure Explains Willful Blindness
The article in today's NYT by Nancy Koehn, titled “Why Red Flags Can Go Unnoticed,” was chiefly concerned with the effects of willful blindness in humans. It did not answer the primary question: why do people ignore clear warnings of impending problems?
Bruce Nappi, in his new novel LIARS!, provides a profound explanation: two human species coexist on earth today, and one of them is not able to broadly understand or apply logical reasoning. And no, it's not males and females!
The new discovery came when he first determined what creates consciousness in the human brain. Step one was recognising a new physiological brain model that revises Sigmund Freud's id, ego and super-ego structure.
The second was sorting out what makes humans different from animals. In fact, contrary to common belief, that difference does not occur at the Homo sapiens level but further back down the evolutionary tree. Differences in awareness between humans and animals are labelled A2 and A1 respectively, but the characteristics listed for humans (A2) raised a big problem: they didn't describe all known human abilities.
He categorised the additional abilities with a new label A3. The implication was both amazing and unsettling! Both A3 and A2 had human traits, but they were as distinct as A2 (humans) and A1 (animals). The solution required that each be considered a different species - amazing for sure.
However, if the discovery was true, it would have huge ramifications for human social structures. He tested the theory against more and more of the great social questions. The new A3 model produced so many logical answers that he is convinced he has stumbled onto a profound discovery.
New Brain Structure Explains Willful Blindness In Humans And Why Red Flags Go Unnoticed
Saturday, April 2, 2011
Dyslexia Through the looking glass
Human beings understand words reflected in a mirror without thinking about it, just as they do words written normally, at least for a few instants. Researchers from the Basque Centre on Cognition, Brain and Language (Spain) have shown this in a study that could also help to increase our understanding of the phenomenon of dyslexia.
Most people can read texts reflected in a mirror slowly and with some effort, but a team of scientists from the Basque Centre on Cognition, Brain and Language (BCBL) has shown for the first time that we can mentally turn these images around and understand them automatically and unconsciously, at least for a few instants.
"At a very early processing stage, between 150 and 250 milliseconds, the visual system completely rotates the words reflected in the mirror and recognises them," says Jon Andoni Duñabeitia, lead author of the study, "although the brain then immediately detects that this is not the correct order and 'remembers' that it should not process them in this way."
To carry out this study, which has been published in the journal NeuroImage, the researchers used electrodes to monitor the brain activity of 27 participants while carrying out two experiments in front of a computer screen.
In the first, the participants were shown words with some of the letters and other information rotated, for 50 milliseconds (an imperceptible flash that is nonetheless processed by the brain); in the second, the entire word was presented as in a mirror (for example HTUOM instead of MOUTH).
The results of the electroencephalogram showed in both cases that, at between 150 and 250 milliseconds, the brain's response upon seeing the words reflected in the mirror was the same as when they are read normally.
Better understanding of dyslexia
"These results open a new avenue for studying the effects of involuntary rotation of letters and words in individuals with reading difficulties (dyslexia) and writing problems (dysgrafia)," Duñabeitia explains.
The researcher gives reassurance to parents who worry when their children reverse their letters when they start to write: "This is the direct result of the mirror rotation property of the visual system." In fact, it is common for children to start to write this way until they learn the "established" forms at school.
"Now we know that rotating letters is not a problem that is exclusive to some dyslexics, since everybody often does this in a natural and unconscious way, but what we need to understand is why people who can read normally can inhibit this, while others with difficulties in reading and writing cannot, confusing 'b' for 'd', for example," explains Duñabeitia.
The scientific community has yet to discover how reading, a skill that is learnt relatively late in human development, can inhibit mental rotation in a mirror, a visual capacity that is common to many animals.
"A tiger is a tiger on the right side and the left side, but a word read in the mirror loses its meaning -- although now we know that it is not as incomprehensible for our visual system as we thought, because it is capable of processing it as if it were correct," the researcher concludes.
Labels:
brain,
Cognition,
dysgraphia,
Dyslexia,
language skills,
school