Lund University Publications

Feedback Control and Sensor Fusion of Vision and Force

Olsson, Tomas (2004). In Research Reports TFRT-3235
Abstract
This thesis deals with feedback control using two different sensor types: force sensors and cameras. In many robotics tasks, compliance is required in order to avoid damage to the workpiece. Force and vision are the most useful sensing capabilities for a robot system operating in an unknown or uncalibrated environment. An overview of vision-based estimation, control, and combined vision/force control is given. Two control algorithms based on a hybrid force/vision structure are presented, using image-based and position-based visual servoing, respectively. The image-based technique is suitable in situations with simple contact geometry in uncalibrated environments, and in situations where the positioning task is naturally specified relative to some visible structure in image space. The position-based technique can handle more complex motions, which require more information about the environment. For a process with linear dynamics in task space, an edge-based visual estimation technique can be used to design an observer with linear error dynamics, which avoids the high computational complexity of the Extended Kalman Filter. In a dynamic visual tracking system, many different parameterizations of the state space exist. If the actuator and measurement coordinate systems are rigidly attached, a dual quaternion parameterization can be used to express linear constraints on the estimated motion, which increases robustness when the intrinsic camera parameters are time-varying. For real-time control applications in general, it is important to minimize the input-output latency, which may otherwise compromise the performance of the control system; in vision-based control systems this latency is dominated by the image processing. A method for multiple cameras, which aims at maximizing the achieved measurement accuracy within a fixed computation time, is presented.
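The hybrid force/vision structure mentioned in the abstract can be illustrated with a diagonal selection matrix that routes each task-space direction either to the force loop or to the vision loop. The following is a minimal sketch under simple assumptions (proportional feedback, per-axis decoupling); the function name, gains, and signal conventions are hypothetical and are not taken from the thesis itself.

```python
import numpy as np

def hybrid_control(f_meas, f_ref, x_img, x_img_ref, select,
                   kf=0.5, kv=1.0):
    """Task-space velocity command combining force and vision feedback.

    f_meas, f_ref    : measured / desired contact force, per axis
    x_img, x_img_ref : measured / desired image-space feature position
    select           : 0/1 per axis; 1 = force-controlled, 0 = vision-controlled
    kf, kv           : illustrative proportional gains
    """
    S = np.diag(select)                    # selection matrix
    I = np.eye(len(select))
    v_force = kf * (f_ref - f_meas)        # force-error feedback
    v_vision = kv * (x_img_ref - x_img)    # image-space-error feedback
    # Each direction is driven by exactly one of the two sensor loops.
    return S @ v_force + (I - S) @ v_vision
```

For example, with `select = [1, 0, 0]` the first axis regulates contact force while the remaining axes servo on the image-space error, which matches the idea of maintaining compliant contact in one direction while positioning visually in the others.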
author: Olsson, Tomas
supervisor:
organization:
publishing date: 2004
type: Thesis
publication status: published
subject:
keywords: sensor based robot control, visual tracking, visual servoing, resource optimization, Kalman filter
in: Research Reports TFRT-3235
pages: 121 pages
publisher: Department of Automatic Control, Lund Institute of Technology (LTH)
ISSN: 0280-5316
language: English
LU publication?: yes
id: 20a3c036-1d6a-4a88-8d10-a91746c82838 (old id 1044053)
date added to LUP: 2016-04-01 16:55:06
date last changed: 2018-11-21 20:45:13
@misc{20a3c036-1d6a-4a88-8d10-a91746c82838,
  abstract     = {{This thesis deals with feedback control using two different sensor types: force sensors and cameras. In many robotics tasks, compliance is required in order to avoid damage to the workpiece. Force and vision are the most useful sensing capabilities for a robot system operating in an unknown or uncalibrated environment. An overview of vision-based estimation, control, and combined vision/force control is given. Two control algorithms based on a hybrid force/vision structure are presented, using image-based and position-based visual servoing, respectively. The image-based technique is suitable in situations with simple contact geometry in uncalibrated environments, and in situations where the positioning task is naturally specified relative to some visible structure in image space. The position-based technique can handle more complex motions, which require more information about the environment. For a process with linear dynamics in task space, an edge-based visual estimation technique can be used to design an observer with linear error dynamics, which avoids the high computational complexity of the Extended Kalman Filter. In a dynamic visual tracking system, many different parameterizations of the state space exist. If the actuator and measurement coordinate systems are rigidly attached, a dual quaternion parameterization can be used to express linear constraints on the estimated motion, which increases robustness when the intrinsic camera parameters are time-varying. For real-time control applications in general, it is important to minimize the input-output latency, which may otherwise compromise the performance of the control system; in vision-based control systems this latency is dominated by the image processing. A method for multiple cameras, which aims at maximizing the achieved measurement accuracy within a fixed computation time, is presented.}},
  author       = {{Olsson, Tomas}},
  issn         = {{0280-5316}},
  keywords     = {{sensor based robot control; Visual tracking; visual servoing; resource optimization; Kalman filter}},
  language     = {{eng}},
  note         = {{Licentiate Thesis}},
  publisher    = {{Department of Automatic Control, Lund Institute of Technology (LTH)}},
  series       = {{Research Reports TFRT-3235}},
  title        = {{Feedback Control and Sensor Fusion of Vision and Force}},
  url          = {{https://lup.lub.lu.se/search/files/4818851/8840410.pdf}},
  year         = {{2004}},
}