
Lund University Publications


Learned Trajectory Embedding for Subspace Clustering

Lochman, Yaroslava; Olsson, Carl and Zach, Christopher (2024). 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 19092–19102.
Abstract

Clustering multiple motions from observed point trajectories is a fundamental task in understanding dynamic scenes. Most motion models require multiple tracks to estimate their parameters, hence identifying clusters when multiple motions are observed is a very challenging task. This is even aggravated for high-dimensional motion models. The starting point of our work is that this high-dimensionality of motion model can actually be leveraged to our advantage as sufficiently long trajectories identify the underlying motion uniquely in practice. Consequently, we propose to learn a mapping from trajectories to embedding vectors that represent the generating motion. The obtained trajectory embeddings are useful for clustering multiple observed motions, but are also trained to contain sufficient information to recover the parameters of the underlying motion by utilizing a geometric loss. We therefore are able to use only weak supervision from given motion segmentation to train this mapping. The entire algorithm consisting of trajectory embedding, clustering and motion parameter estimation is highly efficient. We conduct experiments on the Hopkins155, Hopkins12, and KT3DMoSeg datasets and show state-of-the-art performance of our proposed method for trajectory-based motion segmentation on full sequences and its competitiveness on the occluded sequences. Project page: https://ylochman.github.io/trajectory-embedding.
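The pipeline the abstract describes (trajectories → embedding vectors → clustering) can be sketched roughly as below. This is an illustrative stand-in, not the authors' implementation: the projection `W` plays the role of the learned embedding network, and plain k-means stands in for the paper's clustering step; all names and dimensions are hypothetical.

```python
import numpy as np

def embed_trajectories(X, W):
    """Map flattened point tracks X of shape (N, 2F) -- F frames, (x, y) per
    frame -- to unit-norm embedding vectors via a projection W of shape (2F, d).
    In the paper this mapping is a trained network; W is a placeholder."""
    Z = X @ W
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def kmeans(Z, k, iters=50, seed=0):
    """Plain k-means on the embeddings: trajectories generated by the same
    motion should land close together and end up in the same cluster."""
    rng = np.random.default_rng(seed)
    C = Z[rng.choice(len(Z), size=k, replace=False)]  # init centers from data
    for _ in range(iters):
        d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (N, k) squared distances
        labels = d2.argmin(axis=1)
        # recompute each center; keep the old one if its cluster went empty
        C = np.stack([Z[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return labels
```

After clustering, the paper additionally recovers the motion parameters from the embeddings via a geometric loss; that step is model-specific and omitted here.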

author
Lochman, Yaroslava; Olsson, Carl; Zach, Christopher
organization
publishing date
2024
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
keywords
motion segmentation, subspace clustering, trajectory clustering
host publication
Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
series title
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
pages
19092–19102 (11 pages)
publisher
IEEE Computer Society
conference name
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
conference location
Seattle, United States
conference dates
2024-06-16 - 2024-06-22
external identifiers
  • scopus:85207248293
ISSN
1063-6919
ISBN
9798350353006
DOI
10.1109/CVPR52733.2024.01806
language
English
LU publication?
yes
id
c83d271c-a560-42ff-9602-04efafda2be6
date added to LUP
2025-01-02 15:29:36
date last changed
2025-04-04 14:13:21
@inproceedings{c83d271c-a560-42ff-9602-04efafda2be6,
  abstract     = {{Clustering multiple motions from observed point trajectories is a fundamental task in understanding dynamic scenes. Most motion models require multiple tracks to estimate their parameters, hence identifying clusters when multiple motions are observed is a very challenging task. This is even aggravated for high-dimensional motion models. The starting point of our work is that this high-dimensionality of motion model can actually be leveraged to our advantage as sufficiently long trajectories identify the underlying motion uniquely in practice. Consequently, we propose to learn a mapping from trajectories to embedding vectors that represent the generating motion. The obtained trajectory embeddings are useful for clustering multiple observed motions, but are also trained to contain sufficient information to recover the parameters of the underlying motion by utilizing a geometric loss. We therefore are able to use only weak supervision from given motion segmentation to train this mapping. The entire algorithm consisting of trajectory embedding, clustering and motion parameter estimation is highly efficient. We conduct experiments on the Hopkins155, Hopkins12, and KT3DMoSeg datasets and show state-of-the-art performance of our proposed method for trajectory-based motion segmentation on full sequences and its competitiveness on the occluded sequences. Project page: https://ylochman.github.io/trajectory-embedding.}},
  author       = {{Lochman, Yaroslava and Olsson, Carl and Zach, Christopher}},
  booktitle    = {{Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024}},
  isbn         = {{9798350353006}},
  issn         = {{1063-6919}},
  keywords     = {{motion segmentation; subspace clustering; trajectory clustering}},
  language     = {{eng}},
  pages        = {{19092--19102}},
  publisher    = {{IEEE Computer Society}},
  series       = {{Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition}},
  title        = {{Learned Trajectory Embedding for Subspace Clustering}},
  url          = {{http://dx.doi.org/10.1109/CVPR52733.2024.01806}},
  doi          = {{10.1109/CVPR52733.2024.01806}},
  year         = {{2024}},
}
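The entry above can be used from LaTeX in the usual way; the file name `refs.bib` and the biblatex setup below are illustrative, not part of this record:

```latex
\documentclass{article}
\usepackage[backend=biber]{biblatex}
\addbibresource{refs.bib} % file containing the @inproceedings entry above
\begin{document}
Motion segmentation via learned trajectory
embeddings~\autocite{c83d271c-a560-42ff-9602-04efafda2be6}.
\printbibliography
\end{document}
```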