COMING SOON – EARLY STAGE RESEARCH POSITIONS (PHD POSITIONS)
We will shortly be recruiting Early Stage Researchers for 13 projects (DComm1-13), listed below (11 of which are PhD positions):
Position DComm1: Demonstratives across languages
Host: UIB (Professor Pedro Guijarro-Fuentes. Key collaborators: Professor Kenny Coventry, UEA; Professor Holger Diessel, FSU).
This project will consider spatial demonstratives across languages using a combination of linguistic and experimental approaches (Coventry et al., 2008, 2014). Although spatial demonstratives occur in all languages (Dixon, 2003; Diessel, 1999), demonstrative systems differ across languages in how they map onto space. This project will systematically test demonstrative use across languages in order to chart for the first time whether demonstrative systems are governed by the same basic (universal) perceptual distinctions (Coventry et al., 2014; Diessel, 2014). It will also examine the relationship between spatial and temporal uses of demonstratives, thus testing whether space and time are symmetrically or asymmetrically related in language.
Position DComm2: Deictic language and deictic gestures in developmental deficits
Host: NTNU (Professor Mila Vulchanova. Key collaborators: Dr Andrew Bayliss/Dr Martin Doherty, UEA; Professor Holger Diessel, FSU).
This project focuses on deictic communication in neurodevelopmental disorders. It is well known that there are pragmatic deficits in Autistic Spectrum Disorders. For example, some pragmatic language problems persist even in the presence of preserved structural language at the high end of the autistic spectrum (Tager-Flusberg et al., 2005). It is also well known that there are deficits in visual attention in neurodevelopmental disorders, with different disorders showing varying deficit profiles (Riby et al., 2008). This project will employ a range of experimental (EEG, eye tracking) and linguistic methods to provide the first detailed examination of (possible) deficits in deictic gesture and deictic language in autism in comparison with other developmental deficits (e.g. Williams Syndrome) (Landau & Hoffman, 2012).
Position DComm3: Deictic communication in development
Host: UEA (Dr Martin Doherty/Dr Andrew Bayliss. Key collaborators: Professor Holger Diessel, FSU; Professor Angelo Cangelosi, UOP).
This project will examine spatial demonstratives in typical development, and the relationship between demonstratives, gesturing and theory of mind (Doherty, 2009), cross-sectionally. Eye tracking will be used with infants and toddlers during simple play, employing tasks that encourage children to identify objects referred to with demonstratives and test the extent to which they rely on joint attention cues (e.g. checking the gaze of the speaker) (Carpenter et al., 1998; Doherty et al., 2009). Other methods will include measurement of reaction times when speakers do or do not share the perspective of a conspecific (with older children). This project will be the first to examine the relationship between demonstratives, perspective/theory of mind, and joint attention using a range of controlled and ecologically valid metacognitive tasks.
Position DComm4: Investigating deictic communication in stroke patients with visual neglect
Host: UEA (Dr Stephanie Rossit/Professor Kenny Coventry. Key collaborators: Headway, UK; Associate Professor Mikkel Wallentin, AU).
This project will investigate deictic communication in stroke for the first time. Two thirds of stroke patients exhibit visual neglect (Stone et al., 1993). This project will examine and document deictic communication deficits in these patients, and the relationship between deictic communication and visual neglect. Following screening for the presence of visual neglect, apraxia and aphasia (via the Norfolk and Norwich University Hospital), patients will perform a range of deictic communication tasks, including the ‘memory game task’ used successfully to elicit demonstratives under controlled conditions (Coventry et al., 2008, 2014). Voxel-based lesion-symptom analysis (Rossit et al., 2011) will be used to identify which brain regions, when damaged, are associated with such deficits. In turn, a rehabilitation trial will establish whether deictic communication enhancement may improve visual neglect.
Position DComm5: Spatial deixis in (diachronic) language development
Host: FSU (Professor Holger Diessel. Key collaborators: Professor Pedro Guijarro-Fuentes, UIB; Dr Andrew Bayliss/Dr Martin Doherty, UEA; Professor Mila Vulchanova/Dr Valentin Vulchanov, NTNU; Ordnance Survey).
This project will explore the role of spatial deictics in the diachronic development of grammar and will create a typological database of spatial deictics and their diachronic development. It has been established that deictic communication provides a frequent starting point for the development of a wide range of grammatical markers (e.g. definite articles, relative pronouns, copulas) and discourse markers (e.g. hesitation signals). Building on hypotheses presented in Diessel (2006), the project will employ a combination of qualitative analysis of data from historical texts, etymological dictionaries, and historical grammars, and the quantitative analysis of data from diachronic corpora and a typological database to be created in the project. These methods will show whether the communicative function of demonstratives to establish joint attention motivates their frequent development into grammatical markers.
Position DComm6: The neural correlates of spatial demonstratives
Host: AU (Associate Professor Mikkel Wallentin. Key collaborators: Professor Kenny Coventry, UEA).
This project will employ functional Magnetic Resonance Imaging (fMRI) to elucidate the mapping between spatial demonstratives and non-linguistic brain regions involved in the perception of space; specifically peripersonal (near) versus extrapersonal (far) space (Kemmerer, 1999; Làdavas, 2002; Longo & Lourenco, 2006). Spatial demonstratives will be used as auditory stimuli, presented both within a larger linguistic context (e.g. in stories) and in isolation in a standard forced choice response paradigm. These paradigms will allow for investigation of the differences between demonstratives in different contextual and attentional settings. It will be established whether there is a close neurological mapping between perceptual space (Kemmerer, 1999; Làdavas, 2002; Longo & Lourenco, 2006; Lane et al., 2013) and demonstratives.
Position DComm7: Deictic communication in sign languages
Host: CNR (Dr Olga Capirci/Dr Cristina Caselli. Key collaborators: QUALISYS; Professor Holger Diessel, FSU).
The ESR will focus on deictic communication in sign languages – languages in which action is privileged, but also provides key grammatical functions. Drawing on previous work on Sign Languages (SL) and Spoken-Vocal Languages (VL), and on new evidence on the development of deictic gestures and words for demonstrative versus person reference, the project will explore typological, modality-specific features affecting deixis and anaphora in SL. Deictic-anaphoric reference is produced in SL via complex manual and non-manual units, which exhibit highly iconic features and are marked by specific eye-gaze patterns that distinguish them from standard signs. Signed data will also be analysed with reference to co-speech deictic gestures and to other deictic devices (e.g. demonstratives), in order to analyse bimodal bilingual (spoken/signed language) patterns in both adults and children.
Together the seven projects in WP1 will afford the first comprehensive picture of deictic communication using complementary methods from complementary disciplines. ESRs will benefit from training in all the relevant techniques, typically isolated within single disciplines, combined with a programme of transferable skills that will prepare them for future employment.
Position DComm8: Developmental robotics architecture for the co-development of demonstratives and gestures in human-robot cooperation
Host: UOP (Professor Angelo Cangelosi/Dr Anthony Morse. Key collaborators: Dr Martin Doherty/Dr Andrew Bayliss, UEA; Telerobot; IIT).
This project will target the important issue of the robot’s understanding of function words, including demonstratives, going beyond the current state of the art in robots’ learning and understanding of words naming objects and actions. The interaction between robots and humans, as in cooperation on joint object manipulation, requires the robot’s understanding of sentences such as “Pass me that ball” and “Put that yellow block there.” The cognitive architecture will extend the “ERA” architecture developed by Morse et al. (2010), used in HRI experiments on word learning for object and action names, with the addition of a recurrent module for the processing of action sequences and gestures. The extended architecture will support deictic communication, focusing on demonstratives. This work aims to be developmentally inspired, taking what is known about the developmental trajectory of gesture and language to inform robot architecture and learning. The extended cognitive architecture will provide the first cognitive robotic model for the understanding of function words and their grounding in sensorimotor strategies and social interaction, and its application to human-robot interaction/cooperation across a range of settings (e.g. robot companions for older adults).
Position DComm9: From single words to compositional language via gestures: Applications in robot language learning
Host: UOP (Professor Angelo Cangelosi/Dr Anthony Morse. Key collaborators: Dr Olga Capirci/Dr Cristina Caselli, CNR; Telerobot; IIT).
Studies on sign language and language-gesture development have shown that children go through a gesture-word combination stage before they make the full transition to two-word and longer sentences (Capirci et al., 1996). This project will follow the language and action integration roadmap proposed in Cangelosi et al. (2010) on the co-development and sharing of compositional representations common to language and action (Pastra & Aloimonos, 2011) for extended language learning systems in human-robot cooperation tasks. The neural control architecture will integrate both action and language learning through integrated sensorimotor/linguistic areas, using interaction scenarios requiring compositional and recursive actions, and the learning of both words and gestures to describe action-object combinations related to task execution. Analyses of the neural controller’s shared action/language layers will shed light on the nature of shared, compositional representations bootstrapping both motor and linguistic capabilities.
Position DComm10: Deictic communication and mobile phones
Host: WWU (Dr Christian Kray. Key collaborators: Professor Christoph Hölscher/Professor Martin Raubal, ETH; 52°North).
This project will design, implement, and evaluate techniques to enable deictic communication between non-collocated human communication partners via mobile phones. The qualities of technologies produced will be compared to collocated deictic communication that is not technology-mediated. In order to achieve the objectives, a participatory design approach will be adopted employing design interaction elicitation techniques with potential users. The findings will be implemented using rapid prototyping techniques and agile software engineering methods, and will subsequently be evaluated through lab-based, controlled user studies as well as field studies. Among the outputs will be the production of open-source software, which will enable others to realize technology-mediated deictic communication across a range of platforms.
Position DComm11: Improved motion capture methodology and tools in linguistic research
Host: Qualisys (Mr Fredrik Muller. Key collaborators: Dr Olga Capirci/Dr Cristina Caselli, CNR; Dr Stephanie Rossit/Professor Kenny Coventry, UEA; Professor Mila Vulchanova, NTNU).
The goal of this project will be to focus on motion tracking technology development and integration with respect to communication. Many of the above projects tap how language and gesture are coordinated in deictic communication across populations. This project will develop motion-tracking integration for communication settings. This will involve the optimization of marker sets for motion tracking, the coordination of these sets with speech streams, and the associated algorithmic improvements required in the motion capture tool chain. The ESR will work closely across other projects, developing the technology needed for novel data collection and analysis methods. There will also be a strong training component, with the ESR learning how to train others, adapting training to the specific needs of researchers from different disciplines and approaches.
Position DComm12: iCub robot hand redesign for gestural and deictic interaction
Host: Telerobot (Mr Francesco Becchi. Key collaborators: Professor Angelo Cangelosi/Dr Anthony Morse, UOP; IIT; Dr Olga Capirci/Dr Cristina Caselli, CNR; Associate Professor Mikkel Wallentin, AU).
In order for robots to use deictic communication effectively, along with a full range of gestures, it is necessary to design a robotic hand that can produce the full gestural vocabulary required. To that end, this project will redesign the iCub robot hand, working closely with the other sites using the iCub (UOP) and informed by findings on gestural vocabulary investigated elsewhere (e.g. CNR). In order to achieve these goals, the ESR will work across sites, integrating requirements into the design of the new iCub hand. Critically, the new hand will afford greater potential for naturalistic human-robot interaction – a goal that will support transferability to other technologies and situations.
Position DComm13: Deictic communication in architectural and urban design
Host: ETH (Professor Christoph Hölscher/Dr Martin Brösamle. Key collaborators: KCAP; Dr Christian Kray, WWU).
The spatial arrangements in architectural design tasks are too detailed and nuanced to be communicated reliably by verbal descriptions alone. Thus, dialogue among designers and with clients relies on deictic gestures and verbal references to objects such as plans, 3D models and ad-hoc sketches. This project will characterize and classify the deictic references occurring at different stages of the design process. The project will also distinguish between novice and senior designers with respect to their deictic reference strategies and their ability to adjust such behaviours to different communication partners.
How to apply
Applicants should first check that they are eligible under the EU Commission guidelines, summarised here.
Applicants should complete the application form found here, together with the application documents required for each site. Applicants should address how they meet the selection criteria and supply supporting documentation as required.
Enquiries about individual projects should be addressed to the Principal Investigators for each project site. General enquiries can be made here.
FAQs about applying for ESR positions can be found here.