Author: Dawson, G.; Webb, S.; Schellenberg, G.D.; Dager, S.; Friedman, S.; Aylward, E.; Richards, T.
Title: Defining the broader phenotype of autism: genetic, brain, and behavioral perspectives
Type: Journal Article
Year: 2002
Publication: Development and Psychopathology; Abbreviated Journal: Dev Psychopathol
Volume: 14; Issue: 3; Pages: 581-611
Keywords: Autistic Disorder/*complications/*genetics; Brain/*abnormalities; Child; Child Behavior Disorders/*etiology; Evoked Potentials/physiology; Humans; Language Disorders/etiology; Magnetic Resonance Imaging; Perceptual Disorders/etiology; Phenotype; Phonetics; Speech Perception; Temporal Lobe/abnormalities; Twin Studies as Topic
Abstract: Achieving progress in understanding the cause, nature, and treatment of autism requires an integration of concepts, approaches, and empirical findings from genetic, cognitive neuroscience, animal, and clinical studies. The need for such integration has been a fundamental tenet of the discipline of developmental psychopathology from its inception. It is likely that the discovery of autism susceptibility genes will depend on the development of dimensional measures of broader phenotype autism traits. It is argued that knowledge of the cognitive neuroscience of social and language behavior will provide a useful framework for defining such measures. In this article, the current state of knowledge of the cognitive neuroscience of social and language impairments in autism is reviewed. Following from this, six candidate broader phenotype autism traits are proposed: (a) face processing, including structural encoding of facial features and face movements, such as eye gaze; (b) social affiliation or sensitivity to social reward, pertaining to the social motivational impairments found in autism; (c) motor imitation ability, particularly imitation of body actions; (d) memory, specifically those aspects of memory mediated by the medial temporal lobe-prefrontal circuits; (e) executive function, especially planning and flexibility; and (f) language ability, particularly those aspects of language that overlap with specific language impairment, namely, phonological processing.
Call Number: Serial 1118
 

 
Author: Holle, H.; Obleser, J.; Rueschemeyer, S.-A.; Gunter, T.C.
Title: Integration of iconic gestures and speech in left superior temporal areas boosts speech comprehension under adverse listening conditions
Type: Journal Article
Year: 2010
Publication: NeuroImage; Abbreviated Journal: Neuroimage
Volume: 49; Issue: 1; Pages: 875-884
Keywords: Acoustic Stimulation; Adult; Brain Mapping; Comprehension/*physiology; Environment; Female; Functional Laterality/physiology; *Gestures; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Photic Stimulation; Speech/*physiology; Speech Perception/*physiology; Temporal Lobe/*physiology; Young Adult
Abstract: Iconic gestures are spontaneous hand movements that illustrate certain contents of speech and, as such, are an important part of face-to-face communication. This experiment targets the brain bases of how iconic gestures and speech are integrated during comprehension. Areas of integration were identified on the basis of two classic properties of multimodal integration, bimodal enhancement and inverse effectiveness (i.e., greater enhancement for unimodally least effective stimuli). Participants underwent fMRI while being presented with videos of gesture-supported sentences as well as their unimodal components, which allowed us to identify areas showing bimodal enhancement. Additionally, we manipulated the signal-to-noise ratio of speech (either moderate or good) to probe for integration areas exhibiting the inverse effectiveness property. Bimodal enhancement was found at the posterior end of the superior temporal sulcus and adjacent superior temporal gyrus (pSTS/STG) in both hemispheres, indicating that the integration of iconic gestures and speech takes place in these areas. Furthermore, we found that the left pSTS/STG specifically showed a pattern of inverse effectiveness, i.e., the neural enhancement for bimodal stimulation was greater under adverse listening conditions. This indicates that activity in this area is boosted when an iconic gesture accompanies an utterance that is otherwise difficult to comprehend. The neural response paralleled the observed behavioral data. The present data extend results from previous gesture-speech integration studies in showing that pSTS/STG plays a key role in the facilitation of speech comprehension through simultaneous gestural input.
Call Number: Serial 502
 

 
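Note: the two integration criteria named in the Holle et al. abstract above (bimodal enhancement and inverse effectiveness) reduce to a simple computation over condition means. The Python sketch below makes that concrete; all response values are invented placeholders for illustration, not data from the study.

# Illustrative sketch of the two multimodal-integration criteria named in
# the Holle et al. abstract. Response values are invented placeholders.

def bimodal_enhancement(bimodal, unimodal_a, unimodal_b):
    # Enhancement relative to the stronger unimodal response
    # (the "max criterion" commonly used in multisensory research).
    return bimodal - max(unimodal_a, unimodal_b)

# Hypothetical mean responses in pSTS/STG (arbitrary units):
# (speech-only, gesture-only, speech+gesture)
conditions = {
    "good_SNR": (1.0, 0.4, 1.6),
    "moderate_SNR": (0.6, 0.4, 1.7),
}

enhancement = {name: bimodal_enhancement(bi, a, b)
               for name, (a, b, bi) in conditions.items()}
print(enhancement)  # {'good_SNR': 0.6, 'moderate_SNR': 1.1}

# Inverse effectiveness: enhancement is greater when the unimodal speech
# signal is degraded (moderate SNR) than when it is clear (good SNR).
assert enhancement["moderate_SNR"] > enhancement["good_SNR"]
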
Author: Jacola, L.M.; Byars, A.W.; Hickey, F.; Vannest, J.; Holland, S.K.; Schapiro, M.B.
Title: Functional magnetic resonance imaging of story listening in adolescents and young adults with Down syndrome: evidence for atypical neurodevelopment
Type: Journal Article
Year: 2014
Publication: Journal of Intellectual Disability Research: JIDR; Abbreviated Journal: J Intellect Disabil Res
Volume: 58; Issue: 10; Pages: 892-902
Keywords: Adolescent; Adult; Brain Mapping; Cerebral Cortex/*physiopathology; Down Syndrome/*physiopathology; Female; Humans; Magnetic Resonance Imaging; Male; Speech Perception/*physiology; Young Adult; Down syndrome; functional magnetic resonance imaging; intellectual disability; receptive language
Abstract: BACKGROUND: Previous studies have documented differences in neural activation during language processing in individuals with Down syndrome (DS) in comparison with typically developing individuals matched for chronological age. This study used functional magnetic resonance imaging (fMRI) to compare activation during language processing in young adults with DS to typically developing comparison groups matched for chronological age or mental age. We hypothesised that the pattern of neural activation in the DS cohort would differ when compared with both typically developing cohorts. METHOD: Eleven persons with DS (mean chronological age = 18.3; developmental age range = 4-6 years) and two groups of typically developing individuals matched for chronological (n = 13; mean age = 18.3 years) and developmental (mental) age (n = 12; chronological age range = 4-6 years) completed fMRI scanning during a passive story listening paradigm. Random effects group comparisons were conducted on individual maps of the contrast between activation (story listening) and rest (tone presentation) conditions. RESULTS: Robust activation was seen in typically developing groups in regions associated with processing auditory information, including bilateral superior and middle temporal lobe gyri. In contrast, the DS cohort demonstrated atypical spatial distribution of activation in midline frontal and posterior cingulate regions when compared with both typically developing control groups. Random effects group analyses documented reduced magnitude of activation in the DS cohort when compared with both control groups. CONCLUSIONS: Activation in the DS group differed significantly in magnitude and spatial extent when compared with chronological and mental age-matched typically developing control groups during a story listening task. Results provide additional support for an atypical pattern of functional organisation for language processing in this population.
Call Number: Serial 1089
 

 
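Note: the random-effects group comparison described in the Jacola et al. abstract amounts, in its simplest form, to a voxelwise two-sample test on per-subject contrast maps (story listening minus rest). A minimal sketch with synthetic data follows; it is an assumption-laden stand-in, since real fMRI pipelines add registration, smoothing, masking, and multiple-comparison correction.

# Minimal sketch of a voxelwise random-effects group comparison, in the
# spirit of the analysis described above. Data are synthetic, not fMRI.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 1000

# One contrast map (story listening minus rest) per participant.
ds_maps = rng.normal(0.2, 1.0, size=(11, n_voxels))   # DS cohort, n = 11
ctl_maps = rng.normal(0.5, 1.0, size=(13, n_voxels))  # age-matched controls, n = 13

# Two-sample t-test at every voxel; subjects are the random effect.
t, p = stats.ttest_ind(ctl_maps, ds_maps, axis=0)

# Count voxels where controls activate more strongly (uncorrected p < .001).
print(int(np.sum((p < 0.001) & (t > 0))))
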
Author: Kotze, H.F.; Moller, A.T.
Title: Effect of auditory subliminal stimulation on GSR
Type: Journal Article
Year: 1990
Publication: Psychological Reports; Abbreviated Journal: Psychol Rep
Volume: 67; Issue: 3 Pt 1; Pages: 931-934
Keywords: Adult; *Arousal; Attention; Female; Galvanic Skin Response; Humans; Male; *Speech Perception; *Subliminal Stimulation
Abstract: The present study was designed to investigate the possible effect of auditory subliminal stimulation on GSR. Thirty-eight undergraduate students were exposed subliminally to emotional words while GSR was monitored. The results confirmed the hypothesis that auditory subliminal stimulation would effect a significant increase in GSR.
Call Number: Serial 1373
 

 
Author: Mattys, S.L.; Pleydell-Pearce, C.W.; Melhorn, J.F.; Whitecross, S.E.
Title: Detecting silent pauses in speech: a new tool for measuring on-line lexical and semantic processing
Type: Journal Article
Year: 2005
Publication: Psychological Science; Abbreviated Journal: Psychol Sci
Volume: 16; Issue: 12; Pages: 958-964
Keywords: Humans; *Semantics; *Signal Detection, Psychological; *Speech Perception; Speech Production Measurement; *Vocabulary
Abstract: In this study, we introduce pause detection (PD) as a new tool for studying the on-line integration of lexical and semantic information during speech comprehension. When listeners were asked to detect 200-ms pauses inserted into the last words of spoken sentences, their detection latencies were influenced by the lexical-semantic information provided by the sentences. Listeners took longer to detect a pause when it was inserted within a word that had multiple potential endings, rather than a unique ending, in the context of the sentence. An event-related potential (ERP) variant of the PD procedure revealed brain correlates of pauses as early as 101 to 125 ms following pause onset and patterns of lexical-semantic integration that mirrored those obtained with PD within 160 ms of pause onset. Thus, both the behavioral and the electrophysiological responses to pauses suggest that lexical and semantic processes are highly interactive and that their integration occurs rapidly during speech comprehension.
Call Number: Serial 1968
 

 
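Note: the stimulus manipulation behind the pause detection paradigm in the Mattys et al. abstract (splicing a 200-ms silence into a spoken word) is easy to reproduce on a waveform. A sketch under assumed parameters; the sample rate, insertion point, and the stand-in signal are all illustrative, not the study's materials.

# Splice a fixed-length silence into a signal, as in the pause detection
# (PD) paradigm described above. Parameters here are assumptions.
import numpy as np

def insert_pause(signal, sample_rate, at_sec, pause_ms=200):
    # Return a copy of `signal` with `pause_ms` of silence inserted
    # `at_sec` seconds after signal onset.
    cut = int(round(at_sec * sample_rate))
    silence = np.zeros(int(round(sample_rate * pause_ms / 1000.0)),
                       dtype=signal.dtype)
    return np.concatenate([signal[:cut], silence, signal[cut:]])

sr = 44100
t = np.arange(sr) / sr                                    # 1 s of samples
speech = np.sin(2 * np.pi * 440 * t).astype(np.float32)  # tone standing in for speech

stimulus = insert_pause(speech, sr, at_sec=0.7)
print(len(stimulus) - len(speech))  # 8820 samples = 200 ms at 44.1 kHz
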
Author: Merritt, D.D.; Liles, B.Z.
Title: Story grammar ability in children with and without language disorder: story generation, story retelling, and story comprehension
Type: Journal Article
Year: 1987
Publication: Journal of Speech and Hearing Research; Abbreviated Journal: J Speech Hear Res
Keywords: Attention; Child; Child Language; Fantasy; Female; Humans; Language Development Disorders/*diagnosis; *Language Tests; Male; *Semantics; *Speech Perception; *Speech Production Measurement; Verbal Behavior; Vocabulary
Volume: 30; Issue: 4; Pages: 539-552
Abstract: Twenty language-impaired and unimpaired children ages 9:0 to 11:4 participated in three story tasks. The children generated three original stories, retold two adventure stories, and then answered two sets of comprehension questions after each retelling. Stein and Glenn's (1979) story grammar rules were adapted and used to analyze the narratives. The generated and retold stories produced by the language-disordered children contained fewer complete story episodes, a lower mean number of main and subordinate clauses per complete episode, and a lower frequency of use of story grammar components than those of the control group. The story hierarchies produced by the two groups were nevertheless highly similar in both story generation and story retelling. The groups also did not differ in their understanding of the factual details of the retold stories, but did differ significantly in their comprehension of the relationships linking the critical parts of the stories together. The results are discussed relative to cognitive organizational deficits of language-impaired children.
Call Number: Serial 1131
 

 
Author: O'Doherty, K.; Troseth, G.L.; Shimpi, P.M.; Goldenberg, E.; Akhtar, N.; Saylor, M.M.
Title: Third-party social interaction and word learning from video
Type: Journal Article
Year: 2011
Publication: Child Development; Abbreviated Journal: Child Dev
Volume: 82; Issue: 3; Pages: 902-915
Keywords: Attention; Child, Preschool; Comprehension; Cues; Female; Humans; Imitative Behavior; *Interpersonal Relations; *Language Development; Male; *Social Environment; *Speech Perception; Television; *Verbal Learning; *Video Recording
Abstract: In previous studies, very young children have learned words while “overhearing” a conversation, yet they have had trouble learning words from a person on video. In Study 1, 64 toddlers (mean age = 29.8 months) viewed an object-labeling demonstration in one of four conditions. In two conditions, the speaker (present or on video) directly addressed the child; in the other two, the speaker addressed another adult who was present or appeared with her on video. Study 2 involved two follow-up conditions with 32 toddlers (mean age = 30.4 months). Across the two studies, the results indicated that toddlers learned words best when participating in or observing a reciprocal social interaction with a speaker who was present or on video.
Call Number: Serial 1969
 

 
Author: Obermeier, C.; Dolk, T.; Gunter, T.C.
Title: The benefit of gestures during communication: evidence from hearing and hearing-impaired individuals
Type: Journal Article
Year: 2012
Publication: Cortex: a Journal Devoted to the Study of the Nervous System and Behavior; Abbreviated Journal: Cortex
Volume: 48; Issue: 7; Pages: 857-870
Keywords: Adult; Brain/physiology; Communication; Comprehension/physiology; Evoked Potentials/physiology; Female; Gestures; Hearing Loss/physiopathology; Hearing Tests; Humans; Language; Male; Persons With Hearing Impairments; Speech/physiology; Speech Perception/physiology
Abstract: There is no doubt that gestures are communicative and can be integrated online with speech. Little is known, however, about the nature of this process, for example, its automaticity and how our own communicative abilities and also our environment influence the integration of gesture and speech. In two event-related potential (ERP) experiments, the effects of gestures during speech comprehension were explored. In both experiments, participants performed a shallow task, thereby avoiding explicit gesture-speech integration. In the first experiment, participants with normal hearing viewed videos in which a gesturing actress uttered sentences which were either embedded in multi-speaker babble noise or not. The sentences contained a homonym which was disambiguated by the information in a gesture, which was presented asynchronously to speech (1000 msec earlier). Downstream, the sentence contained a target word that was either related to the dominant or subordinate meaning of the homonym and was used to indicate the success of the disambiguation. Both the homonym and the target word position showed clear ERP evidence of gesture-speech integration and disambiguation only under babble noise. Thus, during noise, gestures were taken into account as an important communicative cue. In Experiment 2, the same asynchronous stimuli were presented to a group of hearing-impaired students and age-matched controls. Only the hearing-impaired individuals showed significant speech-gesture integration and successful disambiguation at the target word. The age-matched controls did not show any effect. Thus, individuals who chronically experience suboptimal communicative situations in daily life automatically take gestures into account. The data from both experiments indicate that gestures are beneficial in countering difficult communication conditions independent of whether the difficulties are due to external (babble noise) or internal (hearing impairment) factors.
Call Number: Serial 503
 

 
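Note: embedding sentences in multi-speaker babble, as in the Obermeier et al. experiment above (and the Holle et al. study earlier in this list), comes down to scaling the noise to a target signal-to-noise ratio before mixing. A minimal sketch follows; the signals and the chosen SNR are placeholders, not the study's actual materials.

# Mix speech with babble noise at a target SNR (in dB). Placeholder
# signals; a real experiment would use recorded sentences and babble.
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    # Scale `babble` so that speech power / babble power equals the
    # requested SNR, then add it to the speech.
    p_speech = np.mean(speech ** 2)
    p_babble = np.mean(babble ** 2)
    target_p = p_speech / (10.0 ** (snr_db / 10.0))
    return speech + babble * np.sqrt(target_p / p_babble)

rng = np.random.default_rng(1)
speech = rng.normal(size=48000)  # placeholder for a spoken sentence
babble = rng.normal(size=48000)  # placeholder for multi-speaker babble

mixture = mix_at_snr(speech, babble, snr_db=0.0)  # equal speech/noise power
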
Author: Paulesu, E.; Harrison, J.; Baron-Cohen, S.; Watson, J.D.; Goldstein, L.; Heather, J.; Frackowiak, R.S.; Frith, C.D.
Title: The physiology of coloured hearing. A PET activation study of colour-word synaesthesia
Type: Journal Article
Year: 1995
Publication: Brain: a Journal of Neurology; Abbreviated Journal: Brain
Volume: 118 (Pt 3); Pages: 661-676
Keywords: Adult; Auditory Perception/physiology; Cerebral Cortex/*physiopathology/radionuclide imaging; *Cerebrovascular Circulation; Color Perception/*physiology; Female; Functional Laterality; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Middle Aged; Models, Neurological; Somatosensory Cortex/physiopathology/radionuclide imaging; Speech Perception/*physiology; Temporal Lobe/physiopathology/radionuclide imaging; *Tomography, Emission-Computed; Writing
Abstract: In a small proportion of the normal population, stimulation in one modality can lead to perceptual experience in another, a phenomenon known as synaesthesia. In the most common form of synaesthesia, hearing a word can result in the experience of colour. We have used the technique of PET, which detects brain activity as changes of regional cerebral blood flow (rCBF), to study the physiology of colour-word synaesthesia in a group of six synaesthete women. During rCBF measurements synaesthetes and six controls were blindfolded and were presented with spoken words or pure tones. Auditory word, but not tone, stimulation triggered synaesthesia in synaesthetes. In both groups word stimulation compared with tone stimulation activated the classical language areas of the perisylvian regions. In synaesthetes, a number of additional visual associative areas, including the posterior inferior temporal cortex and the parieto-occipital junctions, were activated. The former has been implicated in the integration of colour with shape and in verbal tasks which require attention to visual features of objects to which words refer. Synaesthetes also showed activations in the right prefrontal cortex, insula and superior temporal gyrus. By contrast, no significant activity was detected in relatively lower visual areas, including areas V1, V2 and V4. These results suggest that colour-word synaesthesia may result from the activity of brain areas concerned with language and visual feature integration. In the case of colour-word synaesthesia, conscious visual experience appears to occur without activation of the primary visual cortex.
Call Number: Serial 553
 

 
Author: Schulze, K.; Koelsch, S.
Title: Working memory for speech and music
Type: Journal Article
Year: 2012
Publication: Annals of the New York Academy of Sciences; Abbreviated Journal: Ann N Y Acad Sci
Volume: 1252; Pages: 229-236
Keywords: Auditory Perception/physiology; Feedback, Sensory/physiology; Humans; Learning/physiology; Memory, Long-Term/physiology; Memory, Short-Term/physiology; Models, Neurological; Models, Psychological; Music/psychology; Neuroimaging; Neuronal Plasticity/physiology; Neurosciences; Speech/physiology; Speech Perception/physiology
Abstract: The present paper reviews behavioral and neuroimaging findings on similarities and differences between verbal and tonal working memory (WM), the influence of musical training, and the effect of strategy use on WM for tones. Whereas several studies demonstrate an overlap of core structures (Broca's area, premotor cortex, inferior parietal lobule), preliminary findings are discussed that, if confirmed, imply the existence of a tonal and a phonological loop in musicians. This conclusion is based on the findings of partly differing neural networks underlying verbal and tonal WM in musicians, suggesting that functional plasticity has been induced by musical training. We further propose a strong link between production and auditory WM: data indicate that both verbal and tonal auditory WM are based on the knowledge of how to produce the to-be-remembered sounds and, therefore, that sensorimotor representations are involved in the temporary maintenance of auditory information in WM.
Call Number: Serial 478