Monocular Vision Based Crowdsourced 3D Traffic Sign Positioning With Unknown Camera Intrinsics and Distortion Coefficients

Autonomous vehicles and driver assistance systems utilize maps of 3D semantic landmarks for improved decision making. However, scaling the mapping process, as well as regularly updating such maps, comes at a high cost. Crowdsourced mapping of these landmarks, such as traffic sign positions, provides an appealing alternative. State-of-the-art approaches to crowdsourced mapping use ground-truth camera parameters, which may not always be known or may change over time. In this work, we demonstrate an approach to computing 3D traffic sign positions without knowing the camera focal lengths, principal point, or distortion coefficients a priori. We validate the proposed approach on a public dataset of traffic signs in KITTI. Using only a monocular color camera and GPS, we achieve an average single-journey relative positioning accuracy of 0.26 m and an absolute positioning accuracy of 1.38 m.
