Accepted Posters


Notice: This workshop has no formal proceedings. The content of the papers can therefore also be published at other conferences or in journals without constituting self-plagiarism.

---------------

We received a large number of submissions, and 12 papers have been selected for presentation in line with the themes of this workshop.


These posters will be presented in two ways:


1. In the poster session: a three-minute talk, plus an additional minute for questions.


2. During the coffee breaks: an interactive poster session at the authors' computers ("interactive" PowerPoint).

At least one author must be present during at least one of the coffee breaks for this interactive session.

Note: This format ensures that every work gets both an interactive poster session and a presentation that can be heard by all attendees.

------------------------------------------



1) Title: Planning Interactions as an Event Handling Solution for Successful and Balanced Human-Robot Collaboration.

Authors: Silvia Izquierdo-Bardiola, Carlos Rizzo and Guillem Alenyà.

Abstract: Dealing with the stochastic nature of human behaviour in Human-Robot Collaboration (HRC) remains a well-known challenge that needs to be tackled. Automated task planning techniques have been implemented in order to share the workload between the agents, but these still lack the adaptability necessary for real-world applications. In this paper, we extend our previous work [1], which presented an improved task planning framework integrating an agent model that anticipates and avoids failures in HRC by reallocating the actions in the plan based on the agents’ states. This work introduces the integration of interaction actions into the planning framework, in order to deal with situations where the issue reflected by a change in an agent’s state might be better handled with an interaction between the agents than by an action reallocation. Preliminary evaluation shows promising results of how this framework can help to increase the success of HRC plans, as well as the balance in workload distribution between the agents, which constitutes a key element in a collaboration.

Link to paper: (Paper) Planning Interactions as an Event Handling Solution for Successful and Balanced Human-Robot Collaboration

Link to video: (Video) Planning Interactions as an Event Handling Solution for Successful and Balanced Human-Robot Collaboration


2) Title: Towards Imitation Learning of Human Interactive Motion

Authors: Jiang Yongqiang, Malcolm Doering and Takayuki Kanda.

Abstract: Current imitation learning methods for designing robot motion are limited because they rely on human input of environment information (e.g. objects indicated by deictic gestures), which is time-consuming when designing robot motions for real-world scenarios. To solve this problem, we propose a novel method for finding the references of pointing gestures by using heat map information extracted from motion and speech clustering of human-human interaction data. To develop this method, we set up an array of skeleton-sensing depth sensors to record human motion during natural interaction in an in-lab camera shop scenario. The results show that our method can find the positions of the objects being pointed to (cameras in the camera shop). Eventually, we aim to design a robot system that automatically learns to imitate all socially appropriate interactive motions (gestures, body postures, etc.).

Link to paper: (Paper) Towards Imitation Learning of Human Interactive Motion

Link to video: (Video) Towards Imitation Learning of Human Interactive Motion

Notice: This paper is one of the four papers directly related to the AI4HRI project.


3) Title: OG-SGG: Ontology-Guided Scene Graph Generation for Semantic Scene Representation in Robotic Applications (Extended Abstract).

Authors: Fernando Amodeo, Fernando Caballero, Natalia Díaz-Rodríguez and Luis Merino.

Abstract (from the motivation section): This work surveys existing research on the automatic generation of such scene graphs ([8], [6], [10], [9]) and investigates their application to telepresence robots. The main goal of this work is thus to find a way to reuse and repurpose existing scene graph generation models and datasets for specific robotic applications, applying additional techniques that take into account existing domain knowledge of the application, so that the performance of a machine learning model can be improved within the reduced scope of a given problem and ontology. This is precisely what the proposed OG-SGG methodology sets out to do.

Link to paper: (Paper) OG-SGG: Ontology-Guided Scene Graph Generation for Semantic Scene Representation in Robotic Applications (Extended Abstract).

Link to video: (Video) OG-SGG: Ontology-Guided Scene Graph Generation for Semantic Scene Representation in Robotic Applications (Extended Abstract).


4) Title: Humor used by security guard robots to eliminate malicious nuisances.

Authors: Yuto Ushijima, Takayuki Kanda and Satoru Satake.

Abstract (from the introduction section): However, humor is very difficult for robots to use. For example, given several humorous and non-humorous dialogues in response to a low-moral situation, a human being can select a dialogue that is likely to work to some extent, though not perfectly. The dialogues that should be selected here are those that are comfortable when spoken in that context, that make a good impression on the other person, and that the other person is convinced will eliminate the annoyance. However, it is difficult to construct an algorithm that recognizes natural language under such complex and abstract conditions on a computer. What information would be needed to recognize that the dialogues are in context? What kinds of dialogues make a good impression? In addition to understanding the context and making a good impression, the dialogues would have to satisfy additional, less explicit conditions in order to convince the other person of them. The task of this study is to construct a mechanism that enables the selection of appropriate dialogues according to such contexts and situations.

Link to paper: (Paper) Humor used by security guard robots to eliminate malicious nuisances.

Link to video: (Video) Humor used by security guard robots to eliminate malicious nuisances.


5) Title: IMHuS: Intelligent Multi-Human Simulator.

Authors: Olivier Hauterville, Camino Fernández, Phani Teja Singamaneni, Anthony Favier, Vicente Matellán, and Rachid Alami.

Abstract: Simulation is a basic tool for testing robots’ behavior, and it must include humans when dealing with social robotics. The goal of Human-Robot Interaction simulation is not only to test different techniques but also to provide the replicability needed to study which metrics are best suited to measure social interaction between robots and humans. IMHuS offers a system in which humans can be choreographed to create high-level social behaviors, such as taking an elevator, in scenarios defined to place a robot and measure its performance. In addition, the system can be easily modified to define new actions and metrics so that developers can create new scenarios to test and measure social HRI.

Link to paper: (Paper) IMHuS: Intelligent Multi-Human Simulator

Link to video: (Video) IMHuS: Intelligent Multi-Human Simulator

This video is a PowerPoint presentation with audio. If you open it and start the slide show, you will see the slides and hear the recorded audio.


6) Title: Analysis of Robot Errors in Social Imitation Learning.

Authors: Joshua Ravishankar, Malcolm Doering and Takayuki Kanda.

Abstract: Data-driven imitation learning is a method that leverages human-human interaction data to effectively generate robot behaviors for human-robot interaction. However, interaction errors can occur that cause interaction breakdowns. Furthermore, these interaction errors do not occur in human-human interactions, and thus the behavior generation model is left with no behaviors to imitate in order to recover effectively. Toward the end of building a robust error handling pipeline to facilitate interaction recovery, in this work we analyze error types in social imitation learning for human-robot interaction (HRI). We focus on two specific robot behavior generation systems: one with data abstraction and one without. We group frequently occurring interaction errors from these systems into categories and summarize the resulting interaction patterns. Many of these errors lead to reduced interaction quality and sometimes to frustration and/or confusion in humans. Finally, we conclude that the existence of such errors necessitates an autonomous error detection and online interaction recovery method.

Link to paper: (Paper) Analysis of Robot Errors in Social Imitation Learning

Link to video: (Video) Analysis of Robot Errors in Social Imitation Learning

Notice: This paper is one of the four papers directly related to the AI4HRI project.


7) Title: Why is My Social Robot so Slow? How a Conversational Listener can Revolutionize Turn-Taking.

Authors: Matthew P. Aylett, Andrea Carmantini and David A. Braude.

Abstract: Current machine dialog systems are predominantly implemented using a sequential, utterance-based, two-party, speak-wait/speak-wait approach. Human-human dialog is 1) not sequential, with overlap, interruption and back channels; 2) processes utterances before they are complete; and 3) is often multi-party. The current approach is stifling innovation in social robots, where long delays (often several seconds) are the norm for dialog response time, leading to stilted and unnatural dialog flow. In this paper, by referencing a lightweight word-spotting speech recognition system - Chatty SDK - we present a practical engineering strategy for developing what we term a conversational listener, which would allow systems to mimic natural human turn-taking in dialogue.

Link to paper: (Paper) Why is My Social Robot so Slow? How a Conversational Listener can Revolutionize Turn-Taking

Link to video: (Video) Why is My Social Robot so Slow? How a Conversational Listener can Revolutionize Turn-Taking.


8) Title: Imagine a human! UAivatar - Simulation Framework for Human-Robot Interaction.

Authors: Mona Abdel-Keream and Michael Beetz.

Abstract: Designing reliable, effective, and safe robotic systems for Human-Robot Interaction (HRI) still remains a major challenge due to the non-symmetrical nature of HRI and the absence of a comprehensive human model. For a meaningful interaction, it is suggested that both participants must have knowledge of one another. To achieve this, an adequate representation of the other’s beliefs, intentions, goals, actions, etc. is necessary. In this paper we introduce the UAivatar simulation framework, which allows for the simulation of various HRI scenarios. It includes cognitive control for a human character and a knowledge-based human model that is integrated into the AI software and control architecture of the operating robot. Our framework therefore extends the robot’s inner world model with rich knowledge about humans, allowing the robot to ’imagine’ and simulate humans performing everyday activities.

Link to paper: (Paper) Imagine a human! UAivatar - Simulation Framework for Human-Robot Interaction.

Link to video: (Video) Imagine a human! UAivatar - Simulation Framework for Human-Robot Interaction

Notice: This paper is one of the four papers directly related to the AI4HRI project.


9) Title: Do robots know enough about their collaborative and adaptive events? Rethinking OCRA - An ontology for Collaborative Robotics and Adaptation.

Authors: Alberto Olivares-Alarcos, Sergi Foix, Stefano Borgo and Guillem Alenyà.

Abstract: In the near future, robots shall collaborate with humans, overcoming uncertainty and safety constraints during the execution of industrial robotic tasks. Hence, reliable collaborative robots must be capable of reasoning about their collaboration’s specifications (e.g. safety), as well as about the adaptation of their plans due to unexpected situations. A common approach to reasoning is to represent the domain knowledge using logic-based formalisms, such as ontologies. In this article, we revisit OCRA, an Ontology for Collaborative Robotics and Adaptation, which was built around two main notions: collaboration and plan adaptation. OCRA supports trustworthy human-robot collaboration, since robots can model and reason about their collaborations and plan adaptations in collaborative robotic scenarios. However, the ontology can be improved: a more thorough discussion of the concept of an adaptation’s trigger can help to understand adaptations. Hence, we pose a new research question to extend OCRA and propose a definition for the adaptation trigger.

Link to paper: (Paper) Do robots know enough about their collaborative and adaptive events? Rethinking OCRA - An ontology for Collaborative Robotics and Adaptation

Link to video: (Video) Do robots know enough about their collaborative and adaptive events? Rethinking OCRA - An ontology for Collaborative Robotics and Adaptation.


10) Title: In which context are we interacting? A Context Reasoner for interactive and social robots.

Authors: Adrien Vigné, Guillaume Sarthou, Ely Repiso, and Aurélie Clodic.

Abstract: To effectively interact with others, one has to understand the context in which the interaction takes place. This understanding allows us to know how to act, how to interpret others’ actions, and thus how to react. In this paper, we first present a basic formalism of the notion of context through the use of an ontology, in order to integrate new pieces of information into a robot’s knowledge base in a coherent way. From there, we present a Context Reasoner integrated into the robot’s knowledge base system, allowing it to identify not only the context in which it is interacting but also the current context of the surrounding humans. The effectiveness of this Context Reasoner and its underlying representation of context is demonstrated in a simulated but dynamic situation where a robot observes an interaction between two humans.

This work is an initial step to endow robots with the ability to understand the nature of interactions. We think that this basis will help to develop higher-level decisional processes able to adapt the robot’s behaviors depending on the nature of the interaction.

Link to paper: (Paper) In which context are we interacting? A Context Reasoner for interactive and social robots.

Link to video: (Video) In which context are we interacting? A Context Reasoner for interactive and social robots.

Notice: This paper is one of the four papers directly related to the AI4HRI project.


11) Title: DDPEN: Trajectory Optimisation With Sub Goal Generation Model.

Authors: Aleksander Gamayunov, Aleksey Postnikov, Gonzalo Ferrer.

Abstract: Differential dynamic programming (DDP) is a widely used and powerful trajectory optimization technique; however, due to its internal structure, it is not exempt from local minima. In this paper, we present Differential Dynamic Programming with Escape Network (DDPEN) - a novel approach to avoiding DDP local minima by utilising an additional term in the optimization criteria that points in the direction the robot should move in order to escape a local minimum.
To produce the aforementioned directions, we propose to utilise a deep model that takes as input the map of the environment, in the form of a costmap, together with the desired goal position. The model produces possible future directions that lead towards the goal while avoiding local minima, and it can run in real-time conditions. The model is trained on a synthetic dataset, and the overall system is evaluated in the Gazebo simulator.
In this work we show that our proposed method allows the trajectory optimization algorithm to avoid local minima and successfully execute a trajectory 278 m long with various convex and non-convex obstacles.

Link to paper: (Paper) DDPEN: Trajectory Optimisation With Sub Goal Generation Model

Link to video: (Video) DDPEN: Trajectory Optimisation With Sub Goal Generation Model


12) Title: A controlled human-robot interaction experimental setup for physiological data acquisition.

Authors: Mathias Rihet, Aurélie Clodic, Guillaume Sarthou, Sridath Tula, and Raphaëlle N. Roy.

Abstract: Physiological measurements are promising tools for performing an online evaluation of human-robot interaction. In this study, a controlled human-robot interaction experimental setup for physiological data acquisition is presented, focusing on the elicitation of two cognitive states: cognitive effort and automation surprise. Using various physiological sensors along with subjective and behavioral measures, this setup allowed us to collect data from 16 subjects over 2 sessions. Subjective and behavioral data confirm the induction of cognitive effort but not yet of automation surprise. Physiological data processing is currently underway. Several challenges concerning this implementation are discussed, including the elicitation of the target cognitive states, the synchronization of all the devices, and the need for repeated measures.

Link to paper: (Paper) A controlled human-robot interaction experimental setup for physiological data acquisition.

Link to video: (Video) A controlled human-robot interaction experimental setup for physiological data acquisition.

