Communicating unknown objects to robots through pointing gestures
(2014) In: Advances in Autonomous Robotics Systems: 15th Annual Conference Towards Autonomous Robotic Systems (TAROS 2014). Lecture Notes in Computer Science 8717, p. 209-220.
- Abstract
- Delegating tasks from a human to a robot requires an efficient and easy-to-use communication pipeline between them, especially when inexperienced users are involved. This work presents a robotic system that bridges this communication gap by exploiting 3D sensing for gesture recognition and real-time object segmentation. We visually extract an unknown object indicated by a human through a pointing gesture, thereby communicating the object of interest to the robot so that it can perform a certain task. The robot uses RGB-D sensors to observe the human and find the 3D point indicated by the pointing gesture. This point is used to initialize a fast, fixation-based object segmentation algorithm that infers the outline of the whole object. A series of experiments with different objects and pointing gestures shows that the recognition of the gesture, the extraction of the pointing direction in 3D, and the object segmentation all perform robustly. The discussed system can provide the first step towards more complex tasks, such as object recognition, grasping, or learning by demonstration, with obvious value in both industrial and domestic settings.
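The abstract outlines a two-step pipeline: estimate the 3D point that the pointing gesture indicates, then use that point to seed a segmentation of the object around it. As a concrete illustration, the hypothetical Python sketch below casts a ray along the forearm, intersects it with a supporting plane, and grows a segment outward from the nearest cloud point. This is a minimal sketch under assumed conventions, not the authors' implementation: the elbow/hand joint positions, the table-plane model, and the naive radius-based region growing stand in for the paper's fixation-based segmentation algorithm, whose details are not given in this record.

```python
import numpy as np

def pointing_ray(elbow, hand):
    # Direction of the forearm, normalized; the ray starts at the hand.
    direction = hand - elbow
    return hand, direction / np.linalg.norm(direction)

def intersect_plane(origin, direction, plane_point, plane_normal):
    # Parametric ray/plane intersection; returns None if the ray is
    # (near) parallel to the plane or points away from it.
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-6:
        return None
    t = float(np.dot(plane_point - origin, plane_normal)) / denom
    return origin + t * direction if t > 0 else None

def grow_segment(cloud, seed, radius=0.02):
    # Naive seeded region growing over an (N, 3) point cloud: absorb all
    # points within `radius` of any point already in the segment.
    remaining = cloud
    frontier = [seed]
    segment = [seed]
    while frontier and len(remaining):
        center = frontier.pop()
        dists = np.linalg.norm(remaining - center, axis=1)
        near = remaining[dists < radius]
        remaining = remaining[dists >= radius]
        frontier.extend(near)
        segment.extend(near)
    return np.asarray(segment)

# Toy usage: point along the forearm onto a table plane 0.75 m high,
# then grow a segment from the cloud point closest to the hit.
elbow = np.array([0.00, 0.00, 1.40])
hand  = np.array([0.20, 0.00, 1.20])
origin, direction = pointing_ray(elbow, hand)
hit = intersect_plane(origin, direction,
                      plane_point=np.array([0.0, 0.0, 0.75]),
                      plane_normal=np.array([0.0, 0.0, 1.0]))
if hit is not None:
    cloud = np.random.rand(5000, 3)          # stand-in for real RGB-D data
    seed = cloud[np.argmin(np.linalg.norm(cloud - hit, axis=1))]
    print("indicated point:", hit, "segment size:", len(grow_segment(cloud, seed)))
```

In the paper's setting, `cloud` would come from the robot's RGB-D sensors and the plane from a tabletop fit (e.g. via RANSAC); a real fixation-based segmenter would also exploit color and depth discontinuities rather than a fixed Euclidean radius.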
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/11047c4b-0779-4219-8483-126cbf308063
- author
- Großmann, Bjarne ; Pedersen, Mikkel Rath ; Klonovs, Juris ; Herzog, Dennis ; Nalpantidis, Lazaros and Krüger, Volker
- publishing date
- 2014-01-01
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- keywords
- Autonomous Mobile Robots, Human-Robot Interaction (HRI), Object Extraction, Pointing Gestures
- host publication
- Advances in Autonomous Robotics Systems : 15th Annual Conference, TAROS 2014, Birmingham, UK, September 1-3, 2014. Proceedings
- series title
- Lecture Notes in Computer Science
- volume
- 8717
- pages
- 209-220 (12 pages)
- publisher
- Springer
- conference name
- 15th Annual Conference Towards Autonomous Robotic Systems (TAROS), 2014
- conference location
- Birmingham, United Kingdom
- conference dates
- 2014-09-01 - 2014-09-03
- external identifiers
- scopus:84906733575
- ISSN
- 1611-3349 (electronic)
- 0302-9743 (print)
- ISBN
- 978-3-319-10401-0 (eBook)
- 978-3-319-10400-3 (print)
- DOI
- 10.1007/978-3-319-10401-0_19
- language
- English
- LU publication?
- no
- id
- 11047c4b-0779-4219-8483-126cbf308063
- date added to LUP
- 2019-05-16 21:28:21
- date last changed
- 2024-06-11 11:56:48
@inproceedings{11047c4b-0779-4219-8483-126cbf308063,
  abstract  = {{Delegating tasks from a human to a robot requires an efficient and easy-to-use communication pipeline between them, especially when inexperienced users are involved. This work presents a robotic system that bridges this communication gap by exploiting 3D sensing for gesture recognition and real-time object segmentation. We visually extract an unknown object indicated by a human through a pointing gesture, thereby communicating the object of interest to the robot so that it can perform a certain task. The robot uses RGB-D sensors to observe the human and find the 3D point indicated by the pointing gesture. This point is used to initialize a fast, fixation-based object segmentation algorithm that infers the outline of the whole object. A series of experiments with different objects and pointing gestures shows that the recognition of the gesture, the extraction of the pointing direction in 3D, and the object segmentation all perform robustly. The discussed system can provide the first step towards more complex tasks, such as object recognition, grasping, or learning by demonstration, with obvious value in both industrial and domestic settings.}},
  author    = {{Großmann, Bjarne and Pedersen, Mikkel Rath and Klonovs, Juris and Herzog, Dennis and Nalpantidis, Lazaros and Krüger, Volker}},
  booktitle = {{Advances in Autonomous Robotics Systems : 15th Annual Conference, TAROS 2014, Birmingham, UK, September 1-3, 2014. Proceedings}},
  isbn      = {{978-3-319-10401-0}},
  issn      = {{1611-3349}},
  keywords  = {{Autonomous Mobile Robots; Human-Robot Interaction (HRI); Object Extraction; Pointing Gestures}},
  language  = {{eng}},
  month     = {{01}},
  pages     = {{209--220}},
  publisher = {{Springer}},
  series    = {{Lecture Notes in Computer Science}},
  title     = {{Communicating unknown objects to robots through pointing gestures}},
  url       = {{http://dx.doi.org/10.1007/978-3-319-10401-0_19}},
  doi       = {{10.1007/978-3-319-10401-0_19}},
  volume    = {{8717}},
  year      = {{2014}},
}