Invited Speakers

Notice: due to IROS 2022 time restrictions (the workshop has two hours less than originally scheduled), all speakers are now Keynote Speakers, since they will have the same amount of time for their talks.

 

Keynote Speakers

 

Alessandra Sciutti


Web:  https://www.iit.it/it/people-details/-/people/alessandra-sciutti  

Google Scholar: https://scholar.google.com/citations?user=YdbEJn8AAAAJ&hl=es&oi=ao 

Short Bio: Alessandra Sciutti is a Tenure Track Researcher and head of the CONTACT (COgNiTive Architecture for Collaborative Technologies) Unit of the Italian Institute of Technology (IIT). She received her B.S. and M.S. degrees in Bioengineering and her Ph.D. in Humanoid Technologies from the University of Genova in 2010. After two research periods in the USA and Japan, in 2018 she was awarded the ERC Starting Grant wHiSPER (www.whisperproject.eu), focused on the investigation of joint perception between humans and robots. She has published more than 80 papers and abstracts in international journals and conferences and participated in the coordination of the CODEFROR European IRSES project (https://www.codefror.eu/). She is currently Associate Editor for several journals, among which the International Journal of Social Robotics, the IEEE Transactions on Cognitive and Developmental Systems, and Cognitive Systems Research. The scientific aim of her research is to investigate the sensory and motor mechanisms underlying mutual understanding in human-human and human-robot interaction. For more details on her research, as well as the full list of publications, please check the CONTACT Unit website or her Google Scholar profile.

 

Title of talk: “Can a robot be intelligent without being social?” (In-person talk)

Abstract:
Establishing mutual understanding between robots and humans is necessary for effective collaboration. To achieve that, robots need to be endowed with the ability to model how humans perceive the world and their partners. This knowledge should also drive the way robots plan their actions, so that they behave in a way that is intuitively predictable from a human perspective. Action and perception skills then have to be integrated within a cognitive architecture, including components such as memory, motivation, and internal drives, necessary to allow the robot to grow and learn over longer periods of time and interaction. The role of different types of AI in the building of such an architecture is a central matter of discussion in cognitive robotics. Taking human intelligence as inspiration - and its inherently social nature - this talk will present some examples of the use of AI in the development of a cognitive robot.

 

Yiannis Demiris


Web: www.imperial.ac.uk/people/y.demiris

Google Scholar: https://scholar.google.com/citations?user=B2o5i-AAAAAJ&hl=es&oi=ao 

Short Bio: Yiannis Demiris is a Professor in Human-Centred Robotics at Imperial College London, where he holds a Royal Academy of Engineering Chair in Emerging Technologies (Personal Assistive Robotics). He established the Personal Robotics Laboratory at Imperial in 2001. He holds a PhD in Intelligent Robotics and a BSc (Honours) in Artificial Intelligence and Computer Science, both from the University of Edinburgh. Prof. Demiris' research interests include Artificial Intelligence, Machine Learning, and Intelligent Robotics, particularly intelligent perception, multi-scale user modelling, and adaptive cognitive control architectures, with the aim of determining how intelligent robots can generate personalised assistance to humans to improve their physical, cognitive, and social well-being.

 

Title of talk: AI for Personal Assistive Robotics (Online)

Abstract: Assistive robots hold great potential for empowering people to achieve their intended tasks, for example during activities of daily living such as dressing, mobility, feeding, and object handovers. Their potential can be maximised by ensuring that the interfaces used to interact with robots remain adaptable to the users and the interaction context. Artificial Intelligence can assist by transitioning interfaces from presenting data to offering affordances, using, for example, planning and augmented reality visualisations. In this talk, I will outline research in our Personal Robotics Laboratory at Imperial College London (imperial.ac.uk/personal-robotics) towards the development of algorithms that enable learning during human-robot interaction, for both robots and humans; I will demonstrate their application in activities of daily living, and illustrate how computer vision, learning, and augmented reality can enable adaptive user interfaces for interacting with assistive robots in a safe, efficient, and trustworthy manner.

 

Cynthia Matuszek


Web: https://www.csee.umbc.edu/~cmat/

Google Scholar: https://scholar.google.com/citations?user=C3NuO-AAAAAJ&hl=es&oi=ao 

Short Bio: Cynthia Matuszek is an associate professor of computer science and electrical engineering at the University of Maryland, Baltimore County, and the director of UMBC’s Interactive Robotics and Language lab. Her research is focused on how robots can learn grounded language from interactions with non-specialists, which includes work not only in robotics, but also in human-robot interaction, natural language, and machine learning, informed by a background in common-sense reasoning and classical artificial intelligence. Dr. Matuszek has published in machine learning, artificial intelligence, robotics, and human-robot interaction venues.

 

Title of talk: Natural Language Grounding and HRI in VR (Online)

Abstract: As robots move from labs and factories into human-centric spaces, it becomes progressively harder to predetermine the environments and interactions they will need to be able to handle. Letting robots learn from end users via natural language is an intuitive, versatile approach to handling dynamic environments and novel situations robustly, while grounded language acquisition is concerned with learning to understand language in the context of the physical world. In this presentation, I will give an overview of our work on learning the grounded semantics of natural language describing an agent's environment, and will describe work on applying those models in a sim-to-real language learning environment.

 

Matthias Scheutz


Web: https://hrilab.tufts.edu/people/matthias.php

Google Scholar: https://scholar.google.com/citations?user=5yT3GScAAAAJ&hl=es&oi=ao 

Short Bio: Matthias Scheutz is a full professor in computer science at Tufts University and director of the Human-Robot Interaction Laboratory. He has over 400 publications in artificial intelligence, natural language understanding, robotics, and human-robot interaction, with current research focusing on complex ethical robots with instruction-based learning capabilities in open worlds.

 

Title of talk: DIARC: The Utility of a Cognitive Robotic Architecture for AI-based HRI (Online)

Abstract: As more advanced AI techniques are explored in every area of robotics, integrating them in a functional and systematic manner is becoming an increasingly difficult challenge. Cognitive architectures have from the beginning attempted to present a unified view of cognition and are thus a promising framework for integrating diverse cognitive functions, in particular capabilities needed for natural human-robot interaction. In this presentation, we will argue that a distributed cognitive robotic architecture like the "Distributed Integrated Affect Reflection Cognition" (DIARC) architecture is an ideal platform for developing and evaluating advanced algorithms for natural HRI. We will provide a brief overview of DIARC with focus on capabilities needed for task-based dialogue interactions and demonstrate these capabilities in a variety of human-robot interaction settings.


Dorsa Sadigh

Web: https://dorsa.fyi/

Google Scholar: https://scholar.google.com/citations?user=ZaJEZpYAAAAJ&hl=es&oi=ao 

Short Bio: Dorsa Sadigh is an assistant professor in Computer Science and Electrical Engineering at Stanford University. Her research interests lie at the intersection of robotics, learning, and control theory. Specifically, she is interested in developing algorithms for safe and adaptive human-robot and human-AI interaction. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017, and her bachelor’s degree in EECS from UC Berkeley in 2012. She has been awarded the Sloan Fellowship, NSF CAREER Award, ONR Young Investigator Award, AFOSR Young Investigator Award, DARPA Young Faculty Award, Okawa Foundation Fellowship, MIT TR35, and the IEEE RAS Early Academic Career Award.

 

 

Title of talk: Learning from Non-Traditional Sources of Data (Online)

Abstract: Imitation learning has traditionally focused on learning a policy or a reward function from expert demonstrations. However, in practice, in many robotics applications we have limited access to expert demonstrations. Today, I will talk about a set of techniques to address some of the challenges of learning from non-traditional sources of data, i.e., suboptimal demonstrations, rankings, play data, and physical corrections. I will first talk about our confidence-aware imitation learning approach that simultaneously estimates a confidence measure over demonstrations and the policy parameters. I will then talk about extending this approach to learn a confidence measure over the expertise of different demonstrators in an unsupervised manner. Following up, I will discuss how we can learn more expressive models, such as a multimodal reward function, when learning from a mixture of ranking data. Finally, I will talk about our recent efforts in learning from other non-traditional sources of data in interactive domains. Specifically, we show how predicting latent affordances can be beneficial when learning from undirected play data in interactive domains, and how we can learn from a sequence of interactions through physical corrections.

 

Yutaka Nakamura


Web: https://www.riken.jp/en/research/labs/r-ih/guard_robo_proj/behav_learn/index.html 

Google Scholar:

Short Bio: Dr. Yutaka Nakamura is a team leader in the Guardian Robot Project, RIKEN, Japan. He received his Ph.D. from the Graduate School of Information Science, Nara Institute of Science and Technology, in 2004 and worked as a postdoctoral researcher until 2006 on reinforcement learning for bipedal walking. From 2006 to 2020, he worked at Osaka University, studying motion generation for robots with complicated bodies and learning mechanisms for communication robots. His goal is to develop robots that learn, through their own experiences, how to behave in everyday spaces where they coexist with humans. His research interests include reinforcement learning, motion planning, and generative models for human-robot interaction.

 

Title of talk: Modeling a dyadic interaction using a deep neural network (In-person talk)

Abstract: Communication robots are expected to be systems that accompany humans and enrich people's lives, since they can potentially interact with humans in a natural way. To realize this, it is important for robots to communicate smoothly with humans, but, unlike humans, robots find it difficult to communicate at a good tempo. Humans interact with others in a bidirectional manner using multiple modalities, including non-verbal communication. One person may give responses such as nodding while the other is speaking, and overlap may occur in turn-taking. Such full-duplex, communication-like behavior is different from a question-answering dialogue, in which a response is given after recognition is completed. In this presentation, we will introduce our approaches to modeling behavior during dialogue, using models in which the actions of all participants are considered simultaneously. Some preliminary results of applying deep neural network models to a dyadic interaction captured by a 360-degree panoramic camera will be presented.


Luca Iocchi


Web: www.diag.uniroma1.it/iocchi

Google Scholar: https://scholar.google.com/citations?user=KHhijC0AAAAJ&hl=es&oi=sra 

Short Bio: Prof. Luca Iocchi is a Full Professor at Sapienza University of Rome, Italy, teaching in the Master in Artificial Intelligence and Robotics, and is currently Coordinator of the PhD Program in Engineering in Computer Science. His main research interests include cognitive robotics, task planning, multi-robot coordination, robot perception, robot learning, human-robot interaction, and social robotics. He is the author of more than 180 refereed papers (h-index 45 [Google Scholar]) in journals and conferences in artificial intelligence and robotics. He is currently Associate Editor of the Artificial Intelligence Journal and has organized several scientific events. He has been principal investigator of several international, EU, national, and industrial projects in artificial intelligence and robotics.

He is currently Vice-President of the RoboCup Federation (member of the Board of Trustees since 2013) and has contributed to benchmarking domestic service robots through scientific competitions within RoboCup@Home and the European Robotics League Service Robots (ERL-SR), of which he has been a member of the Organizing Committees since their origin. He has organized several international scientific robot competitions, as well as student competitions focusing on service robots and human-robot interaction (including ERL-SR, the European RoboCupJunior Championship, and the RoboCup@Home Education Challenges). He has also supervised the development of several teams participating in robot competitions.

 

Title of talk: Cognitive Social Robots (Online)

Abstract: More and more robotic applications are considering human-in-the-loop models, not only for fulfilling the specific application tasks, but also to increase user acceptability and trustworthiness of robotics and artificial intelligence technologies. In this context, robots should exhibit both cognitive capabilities and social abilities, in order to achieve the task goals while properly interacting with people in the environment. In this talk, we will discuss the design and implementation of cognitive social robots from a knowledge representation perspective, illustrating problems and possible solutions for an effective integration of cognitive functionalities, such as planning, reasoning, and learning, with social abilities, such as human-robot multi-modal interaction, dialogue management, etc.

 
