
Lund University Publications


Semantic and Articulated Pedestrian Sensing Onboard a Moving Vehicle

Priisalu, Maria (2023)
Abstract
3D reconstruction from video gathered onboard a vehicle is difficult because of the vehicle's large forward motion. Even object detection and human sensing models perform significantly worse on onboard video than on standard benchmarks: objects tend to appear farther from the camera than in standard object detection benchmarks, image quality is often degraded by motion blur, and occlusions are frequent. This has led to the popularisation of traffic-specific benchmarks. Recently, Light Detection And Ranging (LiDAR) sensors have become popular for estimating depth directly, without the need for 3D reconstruction. However, LiDAR-based methods still lag behind image-based methods in articulated human detection at a distance. We hypothesize that benchmarks targeting articulated human sensing from LiDAR data could spur research on human sensing and prediction in traffic, and could lead to improved traffic safety for pedestrians.
author: Priisalu, Maria
organization:
publishing date: 2023-09
type: Book/Report
publication status: published
subject:
keywords: pedestrian detection, autonomous vehicles
pages: 53 pages
publisher: arXiv.org
DOI: 10.48550/arXiv.2309.06313
project: Semantic Mapping and Visual Navigation for Smart Robots; Modelling Pedestrians in Autonomous Vehicle Testing
language: English
LU publication?: yes
id: 6ebb84e3-b766-413f-8fc9-f0c58e6755f8
date added to LUP: 2023-10-09 14:21:53
date last changed: 2023-11-08 10:48:54
@techreport{6ebb84e3-b766-413f-8fc9-f0c58e6755f8,
  abstract     = {{It is difficult to perform 3D reconstruction from on-vehicle gathered video due to the large forward motion of the vehicle. Even object detection and human sensing models perform significantly worse on onboard videos when compared to standard benchmarks because objects often appear far away from the camera compared to the standard object detection benchmarks, image quality is often decreased by motion blur and occlusions occur often. This has led to the popularisation of traffic data-specific benchmarks. Recently Light Detection And Ranging (LiDAR) sensors have become popular to directly estimate depths without the need to perform 3D reconstructions. However, LiDAR-based methods still lack in articulated human detection at a distance when compared to image-based methods. We hypothesize that benchmarks targeted at articulated human sensing from LiDAR data could bring about increased research in human sensing and prediction in traffic and could lead to improved traffic safety for pedestrians.}},
  author       = {{Priisalu, Maria}},
  institution  = {{arXiv.org}},
  keywords     = {{pedestrian detection; autonomous vehicles}},
  language     = {{eng}},
  month        = {{09}},
  title        = {{Semantic and Articulated Pedestrian Sensing Onboard a Moving Vehicle}},
  url          = {{http://dx.doi.org/10.48550/arXiv.2309.06313}},
  doi          = {{10.48550/arXiv.2309.06313}},
  year         = {{2023}},
}
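The BibTeX record above uses LUP's double-braced value style (`key = {{value}}`). As a minimal illustration of working with such a record, the sketch below extracts a few fields with a regular expression and builds the DOI resolver URL; the regex assumes the double-braced style shown here and is not a general BibTeX parser.

```python
import re

# Abridged copy of the LUP record above (double-braced values).
bibtex = r"""@techreport{6ebb84e3-b766-413f-8fc9-f0c58e6755f8,
  author       = {{Priisalu, Maria}},
  title        = {{Semantic and Articulated Pedestrian Sensing Onboard a Moving Vehicle}},
  doi          = {{10.48550/arXiv.2309.06313}},
  year         = {{2023}},
}"""

# Extract `key = {{value}}` pairs; assumes the double-braced style
# used in this record, not general BibTeX syntax.
fields = dict(re.findall(r"(\w+)\s*=\s*\{\{(.*?)\}\}", bibtex, re.S))

print(fields["title"])
print("https://doi.org/" + fields["doi"])  # standard DOI resolver URL
```

Any DOI can be resolved by prefixing it with `https://doi.org/`, which is why the record's `url` field is simply the DOI behind a resolver.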