
We organize a seminar series that involves some of the most important researchers active in the areas relevant to social robotics. This year's speakers will cover different forms of embodiment (virtual agents, humanoid robots, etc.), their use in a wide spectrum of application domains (detection of developmental problems in children, tutoring, conflict resolution, customer-seller transactions, etc.), and the interplay between robotics and some of the most important psychological phenomena (personality, emotions, attitudes, intentions, etc.).



The seminars will be held at two venues (both visible in the map above):

  • Institute of Neuroscience and Psychology – 58, Hillhead Street (Glasgow)
  • School of Computing Science – 17, Lilybank Gardens (Glasgow)

The best way to reach both venues is to arrive at Hillhead Subway Station and walk from there (a 10-minute walk in both cases). Please see the program below to identify the exact venue for the seminar you attend.


October 9th, 2015 at 15.30
University of Glasgow (School of Psychology) – 58, Hillhead Street (Glasgow)
A Dimensional Representation of Emotion
James Russell (Boston College)

Dividing emotions into categories has proved unwieldy. Dimensions provide an alternative. Self-reported emotional experience, emotions seen in others, and the semantics of emotion lexicon consistently show two dominant dimensions: valence and activation.

James A. Russell is professor of psychology at Boston College. He completed his doctoral work in psychology at UCLA in 1975 and took a position at the University of British Columbia, later moving to Boston College in 2000. He has published over one hundred scientific papers, all on some aspect of emotion. He developed a circumplex model of affect, now thought of as representing Core Affect. His current work focuses on an approach to emotion called psychological construction, especially on ways to integrate this approach with other research programs such as appraisal theory and social construction.

October 16th, 2015 at 15.30
University of Glasgow (School of Computing Science) – 17, Lilybank Gardens (Glasgow)
How do I Look in This? Embodiment and Social Robotics
Ruth Aylett (Heriot Watt University)

Robots have been produced with a wild variety of embodiments, from plastic-skinned dinosaurs to human lookalikes, via any number of different machine-like robots. Why is embodiment important? What do we know about the impact of embodiment on the human interaction partners of a social robot? How naturalistic should we try to be? Can one robot have multiple embodiments? How do we engineer expressive behaviour across embodiments? I will discuss some of these issues in relation to work in the field.

Ruth Aylett has been a Professor of Computer Science at Heriot-Watt University in Edinburgh since 2004, having moved there from the Centre for Virtual Environments at the University of Salford. She researches intelligent embodied agents, both graphical and robotic, and also works in affective computing, interactive digital narrative and human-robot interaction. She was a partner in the EU LIREC project investigating long-lived interaction with robot companions, especially in the workplace. She is currently working in the EU EMOTE project towards an empathic robot tutor. She has more than 250 publications in journals, peer-reviewed conferences and book chapters and was a founder of the international conference Intelligent Virtual Agents.

October 23rd, 2015 at 15.30
University of Glasgow (School of Psychology) – 58, Hillhead Street (Glasgow)
Interacting with Socio-Emotional Virtual Agent
Catherine Pelachaud (Telecom ParisTech)

In this talk I will present our current work toward endowing virtual agents with socio-emotional capabilities. I will start by describing an interactive system in which an agent dialogues with human users in an emotionally colored manner. Through its behaviors, the agent can sustain a conversation as well as show various attitudes and levels of engagement. I will describe the methods, based on corpus analysis, user-centered design, and motion capture, that we are using to enrich its repertoire of multimodal behaviors. These behaviors can be displayed with different qualities and intensities to simulate various communicative intentions and emotional states.

Catherine Pelachaud is a Director of Research at CNRS in the laboratory LTCI, TELECOM ParisTech. Her research interests include embodied conversational agents, nonverbal communication (face, gaze, and gesture), expressive behaviors and socio-emotional agents. With her research team, she has been developing GRETA, an interactive virtual agent platform that can display emotional and communicative behaviors. She has been, and is still, involved in European projects related to believable embodied conversational agents, emotion and social behaviors. She is an associate editor of several journals, among them IEEE Transactions on Affective Computing, ACM Transactions on Interactive Intelligent Systems and the Journal on Multimodal User Interfaces. She has co-edited several books on virtual agents and emotion-oriented systems. She has participated in the organisation of international conferences such as IVA, ACII and AAMAS (virtual agent track).

November 6th, 2015 at 15.30
University of Glasgow (School of Computing Science) – 17, Lilybank Gardens (Glasgow)
Delighting the User With Speech Synthesis
Matthew Aylett (University of Edinburgh and Cereproc)

We all know there is something special about speech. Our voices are not just a means of communicating; although they are superb at communicating, they also give a deep impression of who we are. They can betray our upbringing, our emotional state, our state of health. They can be used to persuade and convince, to calm and to excite. Speech synthesis technology offers a means to engage the user, to personify an interface, to add delight to human-computer interaction. In this talk I will present speech synthesis work that supports social interaction through the use of emotion, personalisation and audio design; relate this technology to example applications in dialogue systems, eyes-free data aggregation and audio interfaces; and discuss the challenges the technology faces for a pervasive, eyes-free future.

Dr Matthew Aylett, Chief Science Officer at CereProc, has over 15 years' experience in commercial speech synthesis and speech synthesis research. He is a founder of CereProc, which offers unique emotional and characterful synthesis solutions, and he has recently been awarded a Royal Society Industrial Fellowship to explore the role of speech synthesis in the perception of character in artificial agents.

November 20th, 2015 at 15.30
University of Glasgow (School of Computing Science) – 17, Lilybank Gardens (Glasgow)
Computational Modeling and Personal Robotics for Extracting Social Signatures
Mohamed Chetouani (University Pierre et Marie Curie)

Social signal processing is an emerging research domain with rich and open fundamental and applied challenges. In this talk, I'll focus on the development of social signal processing techniques for real applications in the field of psychopathology. I'll give an overview of recent research and investigation methods that allow neuroscience, psychology and developmental science to move from isolated-individual paradigms to interactive contexts by jointly analyzing the behaviors and social signals of partners. Starting from the concept of interpersonal synchrony, I'll show how to address the complex problem of evaluating children with pervasive developmental disorders. These techniques are also demonstrated in the context of human-robot interaction, through a new way of using robots in autism (moving from assistive devices to clinical investigation tools). I will finish by closing the loop between behaviors and physiological states, presenting new results on hormones (oxytocin, cortisol) and behaviors (turn-taking, proxemics) during early parent-infant interactions.

Mohamed Chetouani is the head of the IMI2S (Interaction, Multimodal Integration and Social Signal) research group at the Institute for Intelligent Systems and Robotics (CNRS UMR 7222), University Pierre and Marie Curie-Paris 6. He received the M.S. degree in Robotics and Intelligent Systems from UPMC, Paris, in 2001, and the PhD degree in Speech Signal Processing from the same university in 2004. In 2005, he was an invited Visiting Research Fellow at the Department of Computer Science and Mathematics of the University of Stirling (UK). Prof. Chetouani was also an invited researcher at the Signal Processing Group of Escola Universitaria Politecnica de Mataro, Barcelona (Spain). He is currently a Full Professor in Signal Processing, Pattern Recognition and Machine Learning at UPMC. His research activities, performed at the Institute for Intelligent Systems and Robotics, cover the areas of social signal processing and personal robotics through non-linear signal processing, feature extraction, pattern classification and machine learning. He is also the co-chairman of the French Working Group on Human-Robots/Systems Interaction (GDR Robotique CNRS) and a Deputy Coordinator of the Topic Group on Natural Interaction with Social Robots (euRobotics). He is the Deputy Director of the Laboratory of Excellence SMART Human/Machine/Human Interactions in the Digital Society.

December 4th, 2015 at 15.30
University of Glasgow (School of Psychology) – 58, Hillhead Street (Glasgow)
Stacy Marsella (Institute for Creative Technologies and University of Southern California)

To be Announced