Visual ground segmentation by radar supervision

Reina, Giulio
2014-01-01

Abstract

Imaging sensors are increasingly used in autonomous vehicle applications for scene understanding. This paper presents a method that combines radar and monocular vision for ground modelling and scene segmentation by a mobile robot operating in outdoor environments. The proposed system features two main phases: a radar-supervised training phase and a visual classification phase. The training stage relies on radar measurements to drive the selection of ground patches in the camera images and to learn the visual appearance of the ground online. In the classification stage, the visual model of the ground can be used to perform high-level tasks such as image segmentation and terrain classification, as well as to resolve radar ambiguities. This method offers two main advantages: (a) the visual classifier is trained in a self-supervised fashion over the portion of the environment where the radar and camera fields of view overlap, which avoids time-consuming manual labelling and enables on-line implementation; (b) the ground model can be continuously updated during the operation of the vehicle, making the system suitable for long-range and long-duration applications. This paper details the algorithms and presents experimental tests conducted in the field using an unmanned vehicle.
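The abstract does not specify the appearance model or classifier, but the two-phase idea can be illustrated with a minimal sketch: radar returns already classified as ground and projected into the image (camera-radar calibration is assumed) supervise an online Gaussian model of patch colour statistics, which is then used to label the remaining patches of each frame. The feature choice, the single-Gaussian model, the Mahalanobis threshold and all function names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the radar-supervised visual ground model described
# in the abstract. Phase 1: radar-labelled image patches train an online
# appearance model. Phase 2: the model segments the rest of the image.
import numpy as np


class OnlineGroundModel:
    """Running Gaussian model of ground-patch appearance features."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))  # accumulated outer products (Welford)

    def update(self, feature):
        """Incrementally update mean and covariance with one ground patch."""
        self.n += 1
        delta = feature - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, feature - self.mean)

    def mahalanobis(self, feature):
        """Distance of a patch feature from the learned ground appearance."""
        cov = self.m2 / max(self.n - 1, 1) + 1e-6 * np.eye(len(self.mean))
        diff = feature - self.mean
        return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))


def patch_features(image, top_left, size=16):
    """Mean and standard deviation of each colour channel over a square patch."""
    r, c = top_left
    patch = image[r:r + size, c:c + size].reshape(-1, image.shape[2])
    return np.concatenate([patch.mean(axis=0), patch.std(axis=0)])


def train_from_radar(model, image, radar_ground_pixels, size=16):
    """Phase 1: radar hits classified as ground and projected into the image
    (assumed already available) supervise the visual model."""
    for (r, c) in radar_ground_pixels:
        model.update(patch_features(image, (r, c), size))


def segment_image(model, image, size=16, threshold=3.0):
    """Phase 2: label every patch of the image as ground / non-ground."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for r in range(0, h - size, size):
        for c in range(0, w - size, size):
            if model.mahalanobis(patch_features(image, (r, c), size)) < threshold:
                mask[r:r + size, c:c + size] = True
    return mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 255, size=(240, 320, 3)).astype(float)
    model = OnlineGroundModel(dim=6)
    # Pixel coordinates of radar returns labelled as ground (made up here).
    train_from_radar(model, frame, [(200, 40), (200, 120), (210, 200)])
    ground_mask = segment_image(model, frame)
    print("ground fraction:", ground_mask.mean())
```

The incremental (Welford-style) update keeps the model cheap enough to refresh continuously as new radar-labelled patches arrive, which is consistent with the on-line, long-duration operation claimed in the abstract.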

Use this identifier to cite or link to this record: https://hdl.handle.net/11587/374037