Videos of the Workshop

Videos of the IROS2022 Workshop: Artificial Intelligence for Social Robots Interacting with Humans in the Real World [intellect4hri]

Link to the YouTube playlist with all videos: https://www.youtube.com/watch?v=LKGHN5-4ltE&list=PLrCF8p1hSjOPGoHZNHfphQUzDmVSdhti6

Note: All talks are in English. Apologies for the occasional cuts caused by the main organizer's breathing. She has a respiratory condition that does not interfere with her work, but the face mask (worn due to COVID restrictions) sometimes made it difficult for her to breathe.

Below, each video is listed individually for those who only want to watch a specific talk.

1) Title of talk: Introduction & description of the intellect4HRI workshop and of the project that made it possible, AI4HRI.

Speaker: Ely Repiso Polo (Web: https://elyrepiso.wordpress.com/; Google Scholar: https://scholar.google.com/citations?...)

Video: https://youtu.be/LKGHN5-4ltE

Abstract: Nowadays, with the advances in artificial intelligence (AI) and robotics, there is a growing move to use AI-enabled robots in human-robot interaction to perform a wide range of diverse tasks, combining the best abilities of humans and robots to achieve success. We have therefore created this workshop to bring together a community of researchers interested in making human-robot social interaction more competent with the help of artificial intelligence. We hope to form new collaborations that improve research in this field by combining the best of our abilities, as we have done in the AI4HRI project. We also want to exchange the most recent work in the field, including that of several invited speakers and the poster session, while promoting gender equality and demographic diversity. Finally, this workshop was made possible by the AI4HRI project, which aims to build an open-source architecture for human-robot interaction management. The architecture will provide a knowledge and ontology management system associated with reasoning and learning abilities. The system will be designed to handle an overall interaction, in a task-oriented way, from planning to execution (and recovery if needed). To reach this final objective, we have joined the three AI technologies of the three partners: (1) autonomous decision-making from LAAS/CNRS of Toulouse (France), (2) knowledge management and knowledge reasoning for human-robot interaction from Bremen University (Germany), and (3) learning of social interactions from Kyoto University (Japan).
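As a rough illustration of the task-oriented handling described above (planning and execution driven by a knowledge base, with recovery when a step fails), the minimal Python sketch below shows the kind of control loop such an architecture might implement. All class and function names are hypothetical and do not come from the AI4HRI code base.

from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    # Toy stand-in for the knowledge/ontology management component.
    facts: dict = field(default_factory=dict)

    def update(self, key, value):
        self.facts[key] = value

def plan(goal, kb):
    # A real planner would reason over the ontology; here we return two fixed steps.
    return ["approach_" + goal, "handover_" + goal]

def execute(step, kb):
    # Pretend execution succeeds only while the human is still present.
    return kb.facts.get("human_present", True)

def run_interaction(goal, kb, max_retries=2):
    for attempt in range(max_retries + 1):
        steps = plan(goal, kb)
        if all(execute(s, kb) for s in steps):
            return True                       # task completed
        kb.update("human_present", True)      # recovery: re-engage the human, then replan
    return False

kb = KnowledgeBase({"human_present": True})
print(run_interaction("cup", kb))             # True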

Partners and workshop organizers.

LAAS/CNRS of Toulouse:

-Dr. Aurelie CLODIC (PI of the project) [Web: https://homepages.laas.fr/aclodic/drupal/content/home; Google Scholar: https://scholar.google.com/citations?user=CP9q-Y4AAAAJ&hl=es&oi=ao]

-Dr. Rachid ALAMI [Web: https://homepages.laas.fr/rachid/; Google Scholar: https://scholar.google.com/citations?user=lda-c1UAAAAJ&hl=es&oi=ao]

-Dr. Ely REPISO POLO (Postdoc of the project) [Web: https://elyrepiso.wordpress.com/; Google Scholar: https://scholar.google.com/citations?user=jO_K-WgAAAAJ&hl=es]

-Dr. Guillaume SARTHOU [Web: https://www.laas.fr/public/fr/annuaire?userid=25801; Google Scholar: https://scholar.google.com/citations?user=pbcmYj0AAAAJ&hl=es&oi=ao]

Kyoto University:

-Prof. Takayuki KANDA (PI of the project) [Web: https://www.robot.soc.i.kyoto-u.ac.jp/~kanda/; Google Scholar: https://scholar.google.com/citations?user=BL9EACgAAAAJ&hl=es&oi=ao]

-Dr. Dražen BRŠČIĆ [Web: https://drazenb.github.io/; Google Scholar: https://scholar.google.com/citations?user=XvMpFZQAAAAJ&hl=es&oi=ao]

-Dr. Malcolm DOERING (Postdoc of the project) [Web: https://malcolmdoering.wordpress.com/; Google Scholar: https://scholar.google.com/citations?user=jgt0yGoAAAAJ&hl=es&oi=ao]

Bremen University:

-Prof. Michael BEETZ (PI of the project) [Web: https://ai.uni-bremen.de/team/michael_beetz; Google Scholar: https://scholar.google.com/citations?user=mINzfREAAAAJ&hl=es&oi=ao]

-Mona ABDEL-KEREAM (PhD student of the project) [Web: https://ai.uni-bremen.de/team/mona_abdel-keream; Semantic Scholar: https://www.semanticscholar.org/author/Mona-Abdel-Keream/1416705883]

 

2) Title of talk: Natural Language Grounding and HRI in VR (Online)

Speaker: Cynthia Matuszek (Web: https://www.csee.umbc.edu/~cmat/; Google Scholar: https://scholar.google.com/citations?...)

Video: https://youtu.be/rLCAjgO9gn4

Abstract: As robots move from labs and factories into human-centric spaces, it becomes progressively harder to predetermine the environments and interactions they will need to be able to handle. Letting robots learn from end users via natural language is an intuitive, versatile approach to handling dynamic environments and novel situations robustly, while grounded language acquisition is concerned with learning to understand language in the context of the physical world. In this presentation, I will give an overview of our work on learning the grounded semantics of natural language describing an agent's environment, and will describe work on applying those models in a sim-to-real language learning environment.

 

3) Title of talk: DIARC: The Utility of a Cognitive Robotic Architecture for AI-based HRI (Online)

Speaker: Matthias Scheutz (Web: https://hrilab.tufts.edu/people/matthias.php; Google Scholar: https://scholar.google.com/citations?user=5yT3GScAAAAJ&hl=es&oi=ao)

Video: We will include a summary of this talk instead, because the speaker prefers not to have a video.

Abstract: As more advanced AI techniques are explored in every area of robotics, integrating them in a functional and systematic manner is becoming an increasingly difficult challenge. Cognitive architectures have from the beginning attempted to present a unified view of cognition and are thus a promising framework for integrating diverse cognitive functions, in particular capabilities needed for natural human-robot interaction. In this presentation, we will argue that a distributed cognitive robotic architecture like the "Distributed Integrated Affect Reflection Cognition" (DIARC) architecture is an ideal platform for developing and evaluating advanced algorithms for natural HRI. We will provide a brief overview of DIARC with focus on capabilities needed for task-based dialogue interactions and demonstrate these capabilities in a variety of human-robot interaction settings.

Summary: TODO

 

4) Title of talk: Learning from Non-Traditional Sources of Data

Speaker: Dorsa Sadigh (Web: https://dorsa.fyi/; Google Scholar: https://scholar.google.com/citations?...)

Video: https://youtu.be/BAYP-izd21c

Abstract: Imitation learning has traditionally been focused on learning a policy or a reward function from expert demonstrations. However, in practice, in many robotics applications we have limited access to expert demonstrations. Today, I will talk about a set of techniques that address some of the challenges of learning from non-traditional sources of data, i.e., suboptimal demonstrations, rankings, play data, and physical corrections. I will first talk about our confidence-aware imitation learning approach that simultaneously estimates a confidence measure over demonstrations and the policy parameters. I will then talk about extending this approach to learn a confidence measure over the expertise of different demonstrators in an unsupervised manner. Following up, I will discuss how we can learn more expressive models, such as a multimodal reward function, when learning from a mixture of ranking data. Finally, I will talk about our recent efforts in learning from other non-traditional sources of data in interactive domains. Specifically, we show how predicting latent affordances can substantially help when learning from undirected play data in interactive domains, and how we can learn from a sequence of interactions through physical corrections.
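For readers unfamiliar with the confidence-aware idea mentioned above, the toy Python/NumPy sketch below weights each demonstration's contribution to a behavior-cloning update by a per-demonstration confidence score. The data, the linear policy, and the fixed confidence values are all illustrative assumptions, not the speaker's actual method.

import numpy as np

rng = np.random.default_rng(0)

w_true = np.array([0.5, -0.2, 0.1, 0.3])       # "expert" linear policy
states = rng.normal(size=(3, 50, 4))           # 3 demonstrations, 50 steps, 4-dim states
noise = np.array([0.0, 0.2, 1.0])              # the third demonstrator is very noisy
actions = np.stack([s @ w_true + rng.normal(scale=n, size=50)
                    for s, n in zip(states, noise)])

confidence = np.array([0.9, 0.7, 0.1])         # assumed per-demonstration confidence
theta = np.zeros(4)                            # policy parameters to learn
lr = 0.05

for _ in range(200):
    grad = np.zeros(4)
    for s, a, c in zip(states, actions, confidence):
        err = s @ theta - a                    # prediction error on this demonstration
        grad += c * (s.T @ err) / len(s)       # confidence-weighted gradient
    theta -= lr * grad / confidence.sum()

print("learned weights:", np.round(theta, 2))  # close to w_true despite the noisy demo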

 

5) Panel Discussion Title: How can we envision the future of AI for HRI, including human-robot interaction among multiple humans/robots and for long- and short-term interaction?

Speakers: Cynthia Matuszek & Dorsa Sadigh

Video: https://youtu.be/Ev5BOs1tJgQ

Questions discussed:

1. Multiple different interactions (also related to short-term general and long-term customized interactions): How can robots achieve a sufficiently general level of behavior to interact with many different people, while at the same time being able to provide a "customized" interaction adapted to the particularities of each person?

2. More natural collaborative robot behaviors using learning or other methods: How can we make robots perform more human-like collaborative behaviors using learning or other methods? (These behaviors can include physical and communicative interactions) 

3. One question closely related to machine learning: Which areas of robotics do you think machine learning could have the most impact on in the future? And what are the biggest limitations of machine learning in those areas of robotics that still have to be solved? (Example of a possible limitation: deep learning does not extrapolate/generalize well outside the training data distribution, and it currently requires a lot of training data.)

4. [Controversial/science fiction question.]

Answer Option 1: Do you think the AI used for robots can evolve in a way that makes robots more capable and able to take over complex jobs in the real world that require social interaction? Will this lead us to a utopia, or will it cause many problems as many people lose their jobs? Will we also need Asimov's laws or something similar? In addition, how do you envision that this evolution could happen?

Answer Option 2: Alternatively, if you prefer a more realistic framing: what are the most advanced capabilities that these robots will achieve thanks to the evolution of AI? In addition, how do you envision that this evolution could happen?

 

6) Title of talk: “Can a robot be intelligent without being social?”

Speaker: Alessandra Sciutti (Web: https://www.iit.it/it/people-details/...; Google Scholar: https://scholar.google.com/citations?...)

Video: https://youtu.be/FSVMUh-gqrs

Abstract: Establishing mutual understanding between robots and humans is necessary for effective collaboration. To achieve that, robots need to be endowed with the ability to model how humans perceive the world and their partners. This knowledge should also drive the way robots plan their actions, in order to behave in a way intuitively predictable from a human perspective. Action and perception skills have then to be integrated within a cognitive architecture, including components such as memory, motivation and internal drives, necessary to allow the robot to grow and learn over longer periods of time and interaction. The role of different types of AI in the building of such an architecture is a central matter of discussion in cognitive robotics. Taking human intelligence as inspiration - and its inherently social nature - this talk will present some examples of the use of AI in the development of a cognitive robot.

 

7) Title of talk: Modeling a dyadic interaction using a deep neural network

Speaker: Yutaka Nakamura (Web: https://www.riken.jp/en/research/labs...)

Video: https://youtu.be/lwgrGsq3cRM

Abstract: Communication robots are expected to be systems that accompany humans and enrich people's lives, since they can potentially interact with humans in a natural way. To realize this, it seems important for robots to communicate smoothly with humans, but, unlike humans, robots find it difficult to communicate at a good tempo. Humans interact with others in a bidirectional manner using multiple modalities, including non-verbal communication. One person may give responses such as nodding while the other is speaking, and overlap may occur in turn-taking. Such full-duplex communication-like behavior is different from a question-answering dialogue, in which a response is given only after recognition is completed. In this presentation, we will introduce our approaches to modeling behavior during dialogue, using models in which the actions of all participants are considered simultaneously. Some preliminary results of applying deep neural network models to a dyadic interaction captured by a 360-degree panoramic camera will be presented.
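As a sketch of the "all participants considered simultaneously" idea described above, the following hypothetical PyTorch snippet feeds the per-frame features of both interaction partners into a single recurrent model that predicts a per-frame response (e.g., the probability of producing a nod). The architecture, feature dimensions, and names are assumptions chosen for illustration, not the model used in the talk.

import torch
import torch.nn as nn

class DyadicResponseModel(nn.Module):
    # Predicts the robot's next response (e.g., nod probability) from the
    # concatenated multimodal features of both interaction partners.
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2 * feat_dim, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, feats_a, feats_b):
        x = torch.cat([feats_a, feats_b], dim=-1)   # treat both participants jointly
        h, _ = self.rnn(x)
        return torch.sigmoid(self.head(h))          # per-frame response probability

model = DyadicResponseModel()
a = torch.randn(1, 100, 32)   # 100 frames of features for participant A
b = torch.randn(1, 100, 32)   # 100 frames of features for participant B
print(model(a, b).shape)      # torch.Size([1, 100, 1])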

 

8) Panel Discussion Title: How can we envision the future of AI for HRI, including human-robot interaction among multiple humans/robots and for long- and short-term interaction? + Closing remarks for both panel discussions.

Speakers: Yutaka Nakamura, Alessandra Sciutti & Yiannis Demiris.

Video: https://youtu.be/DMEUAMIeH6I

Questions discussed:

1. Multiple different interactions (also related to short-term general and long-term customized interactions): How can robots achieve a sufficiently general level of behavior to interact with many different people, while at the same time being able to provide a "customized" interaction adapted to the particularities of each person?

2. More natural collaborative robot behaviors using learning or other methods: How can we make robots perform more human-like collaborative behaviors using learning or other methods? (These behaviors can include physical and communicative interactions) 

3. One question closely related to machine learning: Which areas of robotics do you think machine learning could have the most impact on in the future? And what are the biggest limitations of machine learning in those areas of robotics that still have to be solved? (Example of a possible limitation: deep learning does not extrapolate/generalize well outside the training data distribution, and it currently requires a lot of training data.)

4. [Controversial/science fiction question.]

Answer Option 1: Do you think the AI used for robots can evolve in a way that makes robots more capable and able to take over complex jobs in the real world that require social interaction? Will this lead us to a utopia, or will it cause many problems as many people lose their jobs? Will we also need Asimov's laws or something similar? In addition, how do you envision that this evolution could happen?

Answer Option 2: Alternatively, if you prefer a more realistic framing: what are the most advanced capabilities that these robots will achieve thanks to the evolution of AI? In addition, how do you envision that this evolution could happen?

 

9) Title of talk: AI for Personal Assistive Robotics (Online)

Speaker: Yiannis Demiris (Web: www.imperial.ac.uk/people/y.demiris; Google Scholar: https://scholar.google.com/citations?user=B2o5i-AAAAAJ&hl=es&oi=ao)

Video: We will include a summary of this talk instead, because the speaker prefers not to have a video.

Abstract: Assistive robots hold great potential for empowering people to achieve their intended tasks, for example during activities of daily living, such as dressing, mobility, feeding and object handovers. Their potential can be maximised by ensuring that the interfaces that are used to interact with robots remain adaptable to the users and the interaction context. Artificial Intelligence can assist by transitioning interfaces from presenting data to offering affordances, using for example planning, and augmented reality visualisations. In this talk, I will outline research in our Personal Robotics Laboratory at Imperial College London (imperial.ac.uk/personal-robotics) towards the development of algorithms that enable learning during human-robot interaction, both for robots as well as for humans; I will demonstrate their application in activities of daily living, and illustrate how computer vision, learning, and augmented reality can enable adaptive user interfaces for interacting with assistive robots in a safe, efficient, and trustworthy manner.

Summary: TODO

 

10) Title of talk: Cognitive Social Robots

Speaker: Luca Iocchi (Web: www.diag.uniroma1.it/iocchi; Google Scholar: https://scholar.google.com/citations?...)

Video: https://youtu.be/j_ei1b3BXPs

Abstract: More and more robotic applications are considering human-in-the-loop models, not only to fulfill the specific application tasks, but also to increase user acceptability and the trustworthiness of robotics and artificial intelligence technologies. In this context, robots should exhibit cognitive capabilities and social abilities at the same time, in order to achieve the task goals while properly interacting with the people in the environment. In this talk, we will discuss the design and implementation of cognitive social robots from a knowledge representation perspective, illustrating problems and possible solutions for an effective integration of cognitive functionalities, such as planning, reasoning, and learning, with social abilities, such as human-robot multi-modal interaction, dialogue management, etc.

 

11) The videos and papers of the posters presented at the workshop are included in the tab: Accepted Posters.
