The full proceedings are available as a single PDF file. Warning: the file is over 200 MB.
We are running a poll to select the best papers of the conference. It is available as a Google Forms document.
The deadline is September 15th, 2017.
________________________________________________________________________________
10:40 - 11:00
People Tracking and Re-Identification by Face Recognition for RGB-D Camera Networks
Kenji Koide1, Emanuele Menegatti2, Marco Carraro2, Matteo Munaro2, and Jun Miura1 (
1Department of Computer Science and Engineering at the Toyohashi University of
Technology, Japan, 2Department of Information Engineering at the University of Padua, Italy )
This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they temporarily leave the camera's field of view. For robust people re-identification, the system exploits the combination of a deep neural network-based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online-learned faces with predefined face images and names to know people's whereabouts, thus allowing rich human-system interaction. Through experiments, we validate the re-identification and predefined people identification capabilities of the system and show an example of its integration with a mobile robot. The overall system is built as a Robot Operating System (ROS) module, which simplifies integration with the many existing robotic systems and algorithms that use this middleware. The code of this work has been released as open source in order to provide a baseline for future publications in this field.
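For readers unfamiliar with this kind of pipeline, a minimal sketch of a Bayesian identity update over deep face embeddings is given below. It is illustrative only, not the authors' implementation; the Gaussian likelihood, descriptor dimensionality, and noise scale are assumptions.

    import numpy as np

    def update_identity_posterior(prior, embedding, identity_means, sigma=0.5):
        """Bayesian update of the identity distribution given one face embedding.

        prior          -- (K,) prior probability of each known identity
        embedding      -- (D,) deep face descriptor of the current detection
        identity_means -- (K, D) mean descriptor learned online for each identity
        sigma          -- assumed isotropic descriptor noise (hypothetical value)
        """
        # Gaussian likelihood of the observed embedding under each identity
        sq_dist = np.sum((identity_means - embedding) ** 2, axis=1)
        likelihood = np.exp(-0.5 * sq_dist / sigma ** 2)
        posterior = prior * likelihood
        return posterior / posterior.sum()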
________________________________________________________________________________
11:00 - 11:20
Constrained-covisibility marginalization for efficient on-board stereo SLAM
Matías A. Nitsche1, Gastón I. Castro1, Taihú Pire2, Thomas Fischer1, Pablo De Cristóforis1 (
1ICC, Computer Science Research Institute (CONICET-UBA), Argentina, 2CIFASIS, French
Argentine International Center for Information and Systems Sciences (CONICET-UNR),
Argentina )
When targeting embedded applications such as on-board visual localization for small
Unmanned Air Vehicles (UAV), available hardware generally becomes a limiting factor. For this
reason, the usual strategy is to rely on pure motion integration and/or to restrict the size of the map, i.e. performing visual odometry. Moreover, if monocular vision is employed to avoid the additional computational cost of stereo vision, the problem of unknown scale has to be dealt with. In this work we discuss how the cost of the tracking task can be reduced without limiting the size of the global map. To do so, we rely heavily on the notion of covisibility, which allows choosing a fixed and optimal set of points to be tracked. Moreover, this work delves into the concept of parallel tracking and mapping and presents some finer parallelization opportunities. Finally, we show how these strategies improve the computational times of a stereo visual SLAM framework called S-PTAM running on-board an embedded computer at close to camera frame rates and with negligible loss of precision.
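As an illustration of covisibility-based point selection (a generic sketch, not the S-PTAM code), one could rank candidate map points by how many keyframes covisible with the current keyframe observe them, and hand a fixed budget of the best-ranked points to the tracker:

    def select_tracked_points(current_kf_points, observations, budget=300):
        """Pick a fixed budget of map points ranked by covisibility.

        current_kf_points -- set of point ids seen in the current keyframe
        observations      -- dict: point id -> set of keyframe ids observing it
        budget            -- fixed number of points handed to the tracker
        """
        # keyframes sharing observations with the current keyframe (covisible set)
        covisible_kfs = set()
        for pid in current_kf_points:
            covisible_kfs |= observations.get(pid, set())
        # score every map point by how many covisible keyframes observe it
        scores = {pid: len(kfs & covisible_kfs) for pid, kfs in observations.items()}
        ranked = sorted(scores, key=scores.get, reverse=True)
        return ranked[:budget]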
________________________________________________________________________________
11:20 - 11:40
Deconvolutional Networks for Point-Cloud Vehicle Detection and Tracking in Driving Scenarios
Víctor Vaquero, Ivan del Pino, Francesc Moreno-Noguer, Joan Solá, Alberto Sanfeliu, and
Juan Andrade-Cetto (Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona,
Spain.)
Vehicle detection and tracking is a core ingredient for developing autonomous driving
applications in urban scenarios. Recent image-based Deep Learning (DL) techniques are
obtaining breakthrough results in these perception tasks. However, DL research has not yet
advanced much towards processing 3D point clouds from lidar range-finders. These sensors
are very common in autonomous vehicles since, despite not providing as semantically rich
information as images, their performance is more robust under harsh weather conditions than
vision sensors. In this paper we present a full vehicle detection and tracking system that works
with 3D lidar information only. Our detection step uses a Convolutional Neural Network (CNN)
that receives as input a featured representation of the 3D information provided by a Velodyne
HDL-64 sensor and returns a per-point classification of whether it belongs to a vehicle or not.
The classified point cloud is then geometrically processed to generate observations
for a multi-object tracking system implemented via a number of Multi-Hypothesis
Extended Kalman Filters (MH-EKF) that estimate the position and velocity of the
surrounding vehicles. The system is thoroughly evaluated on the KITTI tracking
dataset, and we show the performance boost provided by our CNN-based vehicle
detector over a standard geometric approach. Our lidar-based approach uses only about 4% of the data needed by an image-based detector while achieving similarly competitive results.
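The tracking stage described above can be approximated, for illustration, by a constant-velocity Kalman filter fed with vehicle positions extracted from the classified point cloud. The sketch below is a single-hypothesis simplification with assumed noise values, not the authors' MH-EKF.

    import numpy as np

    def cv_kalman_step(x, P, z, dt, q=1.0, r=0.5):
        """One predict/update cycle of a constant-velocity Kalman filter.

        x  -- state [px, py, vx, vy]; P -- 4x4 covariance
        z  -- observed vehicle position [px, py] from the classified point cloud
        dt -- time step; q, r -- assumed process/measurement noise scales
        """
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)
        Q = q * np.eye(4)
        R = r * np.eye(2)
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new observation
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x, P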
________________________________________________________________________________
11:40 - 12:00
Deep Detection of People and their Mobility Aids for a Hospital Robot
Andres Vasquez, Marina Kollmitz, Andreas Eitel, and Wolfram Burgard (Faculty of Computer
Science, University of Freiburg, Freiburg, Germany)
Robots operating in populated environments encounter many different types of people, some of whom might require particularly cautious interaction because of physical impairments or advanced age. Robots therefore need to recognize such demands to provide appropriate assistance, guidance or other forms of support. In this
paper, we propose a depth-based perception pipeline that estimates the position
and velocity of people in the environment and categorizes them according to the
mobility aids they use: pedestrian, person in wheelchair, person in a wheelchair with a
person pushing them, person with crutches and person using a walker. We present
a fast region proposal method that feeds a Region-based Convolutional Network
(Fast R-CNN [1]). With this, we speed up the object detection process by a factor of
seven compared to a dense sliding window approach. We furthermore propose a
probabilistic position, velocity and class estimator to smooth the CNN’s detections and
account for occlusions and misclassifications. In addition, we introduce a new hospital
dataset with over 17,000 annotated RGB-D images. Extensive experiments confirm
that our pipeline successfully keeps track of people and their mobility aids, even in
challenging situations with multiple people from different categories and frequent
occlusions.
________________________________________________________________________________
13:30 - 13:50
A Probabilistic Framework for Global Localization with Segmented Planes
Jan Wietrzykowski and Piotr Skrzypczyński (Institute of Control and Information Engineering,
Poznań University of Technology, Poznań, Poland)
This paper proposes a novel approach to global localization using high-level features. The
new probabilistic framework makes it possible to incorporate uncertain localization cues into a
probability distribution that describes the likelihood of the current robot pose. We use multiple
triplets of planes segmented from RGB-D data to generate this probability distribution and to
find the robot pose with respect to a global map of planar segments. The algorithm can
be used for global localization with a known map or for closing loops with RGB-D
data. The approach is validated in experiments using the publicly available NYUv2
RGB-D dataset and our new dataset prepared for testing localization on plane-rich
scenes.
________________________________________________________________________________
13:50 - 14:10
Consistency of Feature-based Random-set Monte-Carlo Localization
Manuel Stübler, Stephan Reuter, and Klaus Dietmayer (driveU / Institute of Measurement,
Control, and Microtechnology, Ulm University, Ulm, Germany)
Self-localization is one of the most critical components in robotics and automated driving, so it is essential to have some kind of self-assessment for the resulting pose estimate. To this end, this paper introduces a new online approach to check the consistency of
feature-based random-set Monte-Carlo Localization (MCL). The basic idea is to detect
inconsistencies of the assumed measurement process in a stochastic manner in
order to infer validity of the localization result. This concept is closely linked to the
Normalized Innovation Squared (NIS) in Kalman filtering techniques. The problem of
checking the consistency online, in absence of ground-truth data, is formulated using
confidence intervals for the estimated measurement model parameters. In contrast to the
single-object Kalman filter, multi-object filters not only consider the spatial uncertainty
of sensor measurements, but also the clutter rate and missed detections. Thus,
all those measurement model parameters need to be observed and checked for
consistency. The proposed concept is applied to a random-set formulation of MCL
that is formally derived in the present contribution. The evaluation is done using
real-world data from a test vehicle in a scenario that covers public urban and rural
roads.
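For reference, the single-object NIS test that the abstract relates this work to can be written in a few lines. The sketch below (using scipy) checks one innovation against a chi-square confidence interval; it is only the classical building block, not the multi-object consistency check proposed in the paper.

    import numpy as np
    from scipy.stats import chi2

    def nis_consistent(innovation, S, alpha=0.05):
        """Check one measurement's Normalized Innovation Squared against a
        chi-square confidence interval (classical single-object test)."""
        nis = float(innovation.T @ np.linalg.inv(S) @ innovation)
        dof = innovation.shape[0]
        lo, hi = chi2.ppf([alpha / 2, 1 - alpha / 2], df=dof)
        return lo <= nis <= hi, nis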
________________________________________________________________________________
14:10 - 14:30
Factor descent optimization for sparsification in graph SLAM
Joan Vallvé, Joan Solà, and Juan Andrade-Cetto (Institut de Robòtica i Informàtica Industrial,
CSIC-UPC, Barcelona, Spain.)
In the context of graph-based simultaneous localization and mapping, node pruning consists
in removing a subset of nodes from the graph, while keeping the graph’s information content
as close as possible to the original. One often tackles this problem locally by isolating the
Markov blanket sub-graph of a node, marginalizing this node and sparsifying the dense
result. It means computing an approximation with a new set of factors. For a given
approximation topology, the factors’ mean and covariance that best approximate the
original distribution can be obtained through minimization of the Kullback-Leibler
divergence. For simple topologies such as Chow-Liu trees, there is a closed form
for the optimal solution. However, a tree is oftentimes too sparse to explain some
graphs. More complex topologies require nonlinear iterative optimization. In the
present paper we propose Factor Descent, a new iterative optimization method to
sparsify the dense result of node marginalization, which works by iterating factor by
factor. We also provide a thorough comparison of our approach with state-of-the-art methods on real-world datasets with regard to the obtained solution and convergence rates.
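For context, the Kullback-Leibler divergence between two Gaussian approximations that such sparsification methods minimize has a well-known closed form; the snippet below is a generic numpy implementation for reference, not the Factor Descent algorithm itself.

    import numpy as np

    def kl_gaussians(mu0, S0, mu1, S1):
        """KL( N(mu0, S0) || N(mu1, S1) ) for multivariate Gaussians (closed form)."""
        d = mu0.shape[0]
        S1_inv = np.linalg.inv(S1)
        diff = mu1 - mu0
        term_trace = np.trace(S1_inv @ S0)
        term_mahal = diff @ S1_inv @ diff
        term_logdet = np.log(np.linalg.det(S1) / np.linalg.det(S0))
        return 0.5 * (term_trace + term_mahal - d + term_logdet)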
________________________________________________________________________________
14:30 - 14:50
Probabilistic Modeling of Gas Diffusion with Partial Differential Equations for Multi-Robot
Exploration and Gas Source Localization
Thomas Wiedemann1, Christoph Manss1, Dmitriy Shutin1, Achim J. Lilienthal2, Valentina
Karolj1, Alberto Viseras1 ( 1Institute of Communications and Navigation of the German
Aerospace Center (DLR), Wessling, Germany, 2Mobile Robotics and Olfaction Lab, Örebro
University, Örebro, Sweden )
Employing automated robots for sampling gas distributions and for localizing gas sources is
beneficial since it avoids hazards for a human operator. This paper addresses the problem of
exploring a gas diffusion process using a multi-agent system consisting of several mobile
sensing robots. The diffusion process is modeled using a partial differential equation (PDE). It
is assumed that the diffusion process is driven by only a few spatial sources at unknown
locations with unknown intensity. The goal of the multi-robot exploration is thus to
identify source parameters, in particular, their number, locations and magnitudes.
Therefore, this paper develops a probabilistic approach to PDE identification under a sparsity constraint using factor graphs and a message passing algorithm. Moreover, the message passing scheme permits an efficient distributed implementation of the
algorithm. This brings significant advantages with respect to scalability, computational
complexity and robustness of the proposed exploration algorithm. Based on the
derived probabilistic model, an exploration strategy to guide the mobile agents in real
time to more informative sampling locations is proposed. Hardware-in-the-loop
experiments with real mobile robots show that the proposed exploration approach
accelerates the identification of the source parameters and outperforms systematic
sampling.
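To make the modeling assumption concrete, an isotropic diffusion PDE with a sparse source term can be integrated on a grid with a few lines of explicit finite differences. The snippet below is a toy forward simulation with assumed parameter values and periodic boundaries, unrelated to the paper's factor-graph identification.

    import numpy as np

    def diffuse(c, sources, D=0.1, dt=0.1, steps=100):
        """Explicit finite-difference integration of dc/dt = D * laplacian(c) + s
        on a 2D grid (grid spacing dx = 1, periodic boundaries via np.roll).
        Stability requires dt <= dx**2 / (4 * D); D and dt here are toy values."""
        for _ in range(steps):
            lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
                   np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c)
            c = c + dt * (D * lap + sources)   # sources: same-shape array of emission rates
        return c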
________________________________________________________________________________
Poster 1
A Low-Cost Mobile Infrastructure for Compact Aerial Robots Under Supervision
Marc Lieser, Henning Tjaden, Robert Brylka, Lasse Löffler, and Ulrich Schwanecke
(Computer Vision & Mixed Reality Group of RheinMain University of Applied Sciences,
Wiesbaden, Germany)
The availability of affordable Micro Aerial Vehicles (MAVs) opens up a whole new field of civil
applications. We present an Infrastructure for Compact Aerial Robots Under Supervision
(ICARUS) that realizes a scalable low-cost testbed for research in the area of MAVs starting at
about $100. It combines hardware and software for tracking and computer-based
control of multiple quadrotors. In combination with lightweight miniature off-the-shelf quadrotors, our system provides a testbed that can be used virtually anywhere without the need for elaborate safety measures. We give an overview of
the entire system, provide some implementation details as well as an evaluation
and depict different applications based on our infrastructure such as an Unmanned
Ground Vehicle (UGV) which in cooperation with a MAV can be utilized in Search
and Rescue (SAR) operations and a multi-user interaction scenario with several
MAVs.
________________________________________________________________________________
Poster 2
Outdoor Obstacle Avoidance based on Hybrid Visual Stereo SLAM for an Autonomous
Quadrotor MAV
Radouane Ait-Jellal and Andreas Zell (Computer Science Department, University of Tübingen,
Germany)
We address the problem of on-line volumetric map creation of unknown
environments and planning of safe trajectories. The sensor used for this purpose is a stereo
camera. Our system is designed to work in GPS-denied areas. We design a keyframe-based
hybrid SLAM algorithm which combines feature-based stereo SLAM and direct stereo
SLAM. We use it to grow the map while keeping track of the camera pose in the map.
The SLAM system builds a sparse map. For the path planning we build a dense
volumetric map by computing dense stereo matching at keyframes and inserting the
point clouds into an OctoMap. The computed disparity maps are reused in the direct tracking refinement step of our hybrid SLAM. Safe trajectories are then estimated
using the RRT algorithm in the SE(3) state space. In our experiments, we show that
we can map large environments with hundreds of keyframes. We also conducted
autonomous outdoor flights using a quadcopter to validate our approach for obstacle
avoidance.
________________________________________________________________________________
Poster 3
Kinematic Model based Visual Odometry for Differential Drive Vehicles
Julian Jordan and Andreas Zell (Computer Science Department, University of Tübingen,
Germany)
This work presents KMVO, a ground plane based visual odometry that utilizes the
vehicle’s kinematic model to improve accuracy and robustness. Instead of solving a
generic image alignment problem, the motion parameters of a differential drive vehicle
can be directly estimated from RGB-D image data. In addition, a method for outlier
rejection is presented that can deal with large percentages of outliers. The system is
designed to run in real time on a single thread of a mobile CPU. The results of the
proposed method are compared to other publicly available visual odometry and
SLAM methods on a set of nine real world image sequences of different indoor
environments.
________________________________________________________________________________
Poster 4
Local Contextual Trajectory Estimation with Demonstration for Assisting Mobile Robot
Teleoperation
Ming Gao and J. Marius Zöllner (Technical Cognitive System (TKS), FZI Research Center for
Information Technology, Karlsruhe, Germany)
We focus on assisting mobile robot teleoperation in a task-appropriate way, where we model
the user intention as an action primitive to perform a contextual task, e.g. doorway crossing
and object inspection, and provide motion assistance according to the task recognition. This
paper contributes to formulating motion assistance in a data-driven manner. With the motion
clusters obtained in our previous report [1], we apply a fast online Gaussian Mixture
Regression (GMR) approach to the most probable motion cluster classified during operation, in order to estimate the local trajectory that the human user intends to follow in the short term for executing the task associated with the recognized context. To regulate
the estimation accuracy, we compute the Mahalanobis distance of each estimated
trajectory way point. By thresholding the distance, we can achieve the trajectory
estimation within a predefined tolerance bound regarding the regression outliers.
The experimental results from both qualitative and quantitative tests on real data confirm the effectiveness and real-time performance of the proposed approach.
________________________________________________________________________________
Poster 5
Fast Autonomous Landing on a Moving Target at MBZIRC
Marius Beul, Sebastian Houben, Matthias Nieuwenhuisen, and Sven Behnke (Autonomous
Intelligent Systems Group, Computer Science VI, University of Bonn, Germany)
The ability to identify, follow, approach, and intercept a non-stationary target is a desirable
capability of autonomous micro aerial vehicles (MAV) and puts high demands on reliable
target perception, fast trajectory planning, and stable control. We present a fully autonomous
MAV that lands on a planar platform mounted on a ground vehicle, relying only on onboard
sensing and computing. We evaluate our system in simulation as well as with real robot
experiments. Its resilience was demonstrated at the Mohamed Bin Zayed International
Robotics Challenge (MBZIRC) where it worked under competition conditions. Our team
NimbRo ranked third in the MBZIRC Challenge 1 and – in combination with two other tasks –
won the MBZIRC Grand Challenge.
________________________________________________________________________________
Poster 6
Towards Autonomous Landing on a Moving Vessel through Fiducial Markers
Riccardo Polvara, Sanjay Sharma, Jian Wan, Andrew Manning, and Robert Sutton
(Autonomous Marine System Research Group, School of Engineering, Plymouth University,
Plymouth, United Kingdom)
This paper proposes an autonomous landing method for unmanned aerial vehicles (UAVs), aiming to address those situations in which the landing pad is the deck of a ship. Fiducial markers are used to obtain the six-degrees-of-freedom (DOF) relative pose of the UAV with respect to the landing pad. In order to compensate for interruptions of the video stream, an extended Kalman filter (EKF) is used to estimate the ship's current position with reference to its last known one, using only the odometry and the inertial data. Due to the difficulty of testing the proposed algorithm in the real world, synthetic simulations have been performed on a robotic test-bed comprising the AR Drone 2.0 and the Husky A200. The results show that the EKF performs well enough to provide accurate information for directing the UAV into the proximity of the other vehicle such that the marker becomes visible again. Because only inertial measurements are used in the data fusion process, this solution can also be adopted in indoor navigation scenarios, where a global positioning system is not available.
________________________________________________________________________________
Poster 7
Collaborative Object Picking and Delivery with a Team of Micro Aerial Vehicles at MBZIRC
Matthias Nieuwenhuisen, Marius Beul, Radu Alexandru Rosu, Jan Quenzel, Dmytro
Pavlichenko, Sebastian Houben, and Sven Behnke (Autonomous Intelligent Systems Group,
Computer Science VI, University of Bonn, Germany)
Picking and transporting objects in an outdoor environment with multiple lightweight MAVs is a
demanding task. The main challenges are sudden changes of flight dynamics due to altered
center of mass and weight, varying lighting conditions for visual perception, and coordination
of the MAVs over unreliable wireless connections. At the Mohamed Bin Zayed International
Robotics Challenge (MBZIRC) teams competed in a Treasure Hunt where three MAVs had to
collaboratively pick colored disks and drop them into a designated box. The little preparation and test time available on-site required robust algorithms and easily maintainable systems to successfully achieve the challenge objectives. We describe our multi-robot system employed at MBZIRC, including a lightweight gripper, a vision system robust against illumination and color changes, and a control architecture that allows multiple robots to be operated safely. With our system, we—as part of the larger team NimbRo of ground and
flying robots—won the Grand Challenge and achieved a third place in the Treasure
Hunt.
________________________________________________________________________________
Poster 8
A Predictive Online Path Planning and Optimization Approach for Cooperative Mobile Service
Robot Navigation in Industrial Applications
Felipe Garcia Lopez, Jannik Abbenseth, Christian Henkel, and Stefan Dörr (Fraunhofer
Institute for Manufacturing Engineering and Automation IPA, Department for Robot and
Assistive Systems, Stuttgart, Germany)
In this paper we address the problem of online trajectory optimization and cooperative
collision avoidance when multiple mobile service robots are operating in close proximity to
each other. Using cooperative trajectory optimization to obtain smooth transitions in multi-agent path crossing scenarios addresses the demand for more flexibility and efficiency in
industrial autonomous guided vehicle (AGV) systems. We introduce a general approach for
online trajectory optimization in dynamic environments. It involves an elastic-band based
method for time-dependent obstacle avoidance combined with a model predictive
trajectory planner that takes into account the robot’s kinematic and kinodynamic
constraints. We augment that planning approach to cope with shared trajectories of other agents and perform a potential-field-based cooperative trajectory
optimization. Performance and practical feasibility of the proposed approach are
demonstrated in simulation as well as in real world experiments carried out on a
representative set of path crossing scenarios with two industrial mobile service
robots.
________________________________________________________________________________
Poster 10
Managing a Fleet of Autonomous Mobile Robots (AMR) using Cloud Robotics Platform
Aniruddha Singhal, Prasun Pallav, Nishant Kejriwal, Soumyadeep Choudhury, Swagat Kumar,
and Rajesh Sinha (TCS Research, Tata Consultancy Services, New Delhi, India)
In this paper, we provide details of managing a fleet of autonomous mobile robots (AMR)
using the Rapyuta Cloud Robotics Platform. While the robots are themselves completely autonomous in their motion and obstacle avoidance capabilities, the target destination for each robot is provided by a global planner, which itself may receive goals from an Enterprise Resource Planning (ERP) system. The global planner and the ground vehicles (robots) constitute a multi-agent system (MAS) whose members communicate with each other over a wireless network. The complexities involved, and the corresponding benefits of implementing such a cloud-based system, are explained by comparison with two other implementations based on the standard distributed computing and communication framework of the Robot Operating System (ROS). The working of the complete system is demonstrated through a real-world experiment with physical robots in a laboratory setting. Through these implementations, the limitations of the current cloud framework are identified and critical suggestions are made for its improvement, which, in turn, form the future direction for this work.
________________________________________________________________________________
Poster 11
A Fully Automatic Hand-Eye Calibration System
Morris Antonello, Andrea Gobbi, Stefano Michieletto, Stefano Ghidoni, and Emanuele
Menegatti (Intelligent Autonomous Systems Laboratory (IAS-Lab), Department of Information
Engineering (DEI), University of Padova, Padova, Italy)
Retrieving the 3D coordinates of an object in the robot workspace is a fundamental capability for industrial and service applications. This can be achieved by means of a camera mounted on the robot end-effector only if the hand-eye transformation is known. The standard calibration process requires viewing a calibration pattern, e.g. a checkerboard, from several different perspectives. This work extends the standard approach by performing calibration pattern localization and hand-eye calibration in a fully automatic way. A two-phase procedure has been developed and tested in both simulated and real scenarios, demonstrating that the automatic calibration reaches the same performance level as a standard procedure, while avoiding any human intervention. As a final contribution, the source code for an automatic and robust calibration is released.
________________________________________________________________________________
Poster 12
Robot navigation balancing safety and time to goal in dynamic environments
María-Teresa Lorente, and Luis Montano (Instituto de Investigación en Ingeniería de Aragón,
University of Zaragoza, Zaragoza, Spain)
This work addresses a technique for robot motion planning and navigation in dynamic
environments. First, a model to represent the future evolution of the moving obstacles in the
environment is defined in a robocentric reference, which maps the obstacle motion on the
control space, the velocity-time space, of the robot. Second, a planning technique working on
that model for navigation and maneuvering in this scenario is performed. The planned motion commands balance safety and time-to-goal criteria, applying at every control sampling time a velocity command that satisfies the criteria of the planned strategy. Unlike other classic reactive strategies, the novel method allows planning and executing safe motions according to the future evolution of the moving objects in the robot's field of view. The robot
maneuvers among the obstacles using the concepts of time to collision and time to
escape from the perceived moving objects. The technique is evaluated in randomly
generated simulated scenarios, based on metrics defined using safety and time to goal
criteria.
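As a small illustration of the time-to-collision concept used for maneuvering (not the paper's velocity-time space model), the earliest collision time between a robot and a moving obstacle, both assumed to keep constant velocities, can be computed as follows:

    import numpy as np

    def time_to_collision(p_rob, v_rob, p_obs, v_obs, radius):
        """Earliest time at which robot and obstacle get closer than `radius`,
        assuming both keep constant velocity; returns None if they never do."""
        dp = np.asarray(p_obs) - np.asarray(p_rob)   # relative position
        dv = np.asarray(v_obs) - np.asarray(v_rob)   # relative velocity
        a = dv @ dv
        b = 2 * dp @ dv
        c = dp @ dp - radius ** 2
        if a == 0:                                    # no relative motion
            return 0.0 if c <= 0 else None
        disc = b * b - 4 * a * c
        if disc < 0:
            return None
        t = (-b - np.sqrt(disc)) / (2 * a)            # earliest root
        return t if t >= 0 else (0.0 if c <= 0 else None)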
________________________________________________________________________________
Poster 13
Introduction and Initial Exploration to an Automatic Tennis Ball Collecting Machine
Jialiang Zhao, Hongbin Ma, Jiahui Shi, and Yunxuan Liu (School of Automation, Beijing
Institute of Technology, Beijing, China)
This paper describes a preliminary development of an automatic tennis ball collecting
machine. The framework adopts a roller to collect tennis balls when moving. Tennis balls are
detected and tracked automatically by computer vision while hardware acceleration is used to
improve efficiency. For navigation, fuzzy control and 6 ultrasonic sensors are used to avoid
obstacles when approaching the targets. A web-based monitoring and teleoperation system is
added for human-robot interaction.
________________________________________________________________________________
Poster 14
Review of Integrated Vehicle Dynamics Control Architectures
Moad Kissai, Bruno Monsuez, and Adriana Tapus (ENSTA ParisTech, Department of
Computer and System Engineering, Palaiseau, France)
Most chassis systems are developed by automotive suppliers to improve a specific aspect of vehicle performance. Drivers, and consequently vehicle manufacturers, are more concerned with the overall behaviour of the vehicle. Many coordination architectures have
been proposed in the literature in order to integrate different chassis systems in
a single vehicle. In this paper, these architectures are compared and discussed.
Two major classes are proposed: Downstream and Upstream Coordination. The
purpose of this classification is to help car manufacturers and suppliers standardize
an Integrated Vehicle Dynamics Control architecture for faster and more flexible
designs.
________________________________________________________________________________
Poster 15
Mobile Robot for Retail Surveying and Inventory Using Visual and Textual Analysis of
monocular pictures based on Deep Learning
Marina Paolanti, Mirco Sturari, Adriano Mancini, Primo Zingaretti, and Emanuele Frontoni
(Dipartimento di Ingegneria dell’Informazione (DII), Universitá Politecnica delle Marche,
Ancona, Italy)
This paper describes a novel system for automating data collection and surveying in a retail store using mobile robots. The manpower cost for surveying and monitoring the shelves in retail stores is high, because of which these activities are not repeated frequently, causing reduced customer satisfaction and loss of revenue. Further, the accuracy of the collected data may be improved by avoiding human-related factors. We use a mobile robot platform with on-board cameras to monitor the shelves autonomously (based on indoor UWB localization and planning). The robot is designed to facilitate automatic detection of Shelf Out of Stock (SOOS) situations. The paper's contribution is an approach to estimating the overall stock assortment from pictures, based on both visual and textual clues. Using visual and textual features extracted from two trained Convolutional Neural Networks (CNNs), the type of product is identified by a machine learning classifier. The approach was applied and tested on a newly collected dataset, and several machine learning algorithms are compared. The experiments yield high accuracy, demonstrating the effectiveness and suitability of the proposed approach, also in comparison with existing state-of-the-art SOOS solutions.
________________________________________________________________________________
9:00 - 9:20
Path tracking of a four-wheel steering mobile robot: A robust off-road parallel steering strategy
Mathieu Deremetz, Roland Lenain, Adrian Couvent, and Christophe Cariou (Irstea,
Technologies and Information Support System Research, Aubière, France), Benoit Thuilot (
Clermont Université, Université Blaise Pascal, Institut Pascal, France, and CNRS, UMR 6602,
Institut Pascal, Aubière, France )
In this paper, the problem associated with accurate control of a four-wheel steering mobile
robot following a path, while keeping different desired absolute orientations and ensuring
different desired lateral deviations, is addressed thanks to a backstepping control strategy. In
particular, the control of each steering angle is investigated through a new parallel steering
approach based on an extended kinematic model of a bicycle-model robot assuming that the
two front steering angles are equal and likewise for the two rear ones. Two control laws are
then proposed to ensure a suitable path following according to orientation and position
conditions. In order to balance the lateral effects, notably the sideslip angles, an observer has been used to estimate the sliding. This estimation permits feeding the proposed control laws appropriately, enabling accurate path tracking and orientation keeping along the trajectory. This new point of view makes it possible to achieve difficult manoeuvres in narrow environments, such as parallel parking or sharp turns. Previous approaches have focused on the control of four-wheel steering mobile robots with respect to the trajectory, but do not combine path following with independent heading angle control and handling of slippery conditions.
________________________________________________________________________________
9:20 - 9:40
Multimodal Visual-Inertial Odometry for navigation in cold and low contrast environment
Axel Beauvisage and Nabil Aouf (Centre for Electronic Warfare Information and Cyber,
Defence Academy of the United Kingdom, Cranfield University, Shrivenham, United Kingdom)
Multispectral setups have a great potential to tackle problems such as collision avoidance or
pedestrian detection. However, multispectral odometry is nowadays less precise than most
stereo visible setups, so there is a need to improve such systems. With this work, we
investigate the fusion of inertial data with visual information to improve the performance of
multispectral setups. Inertial data are included in the process model of an extended Kalman
filter to estimate the pose of a vehicle. However, inertial measurement units drift rapidly when integrated over time. Visual odometry is used to correct the predicted pose
and reduce this drift. IMU data are also used to provide a pose estimation when images
are too noisy to compute a motion (e.g. motion blur or low contrast). Therefore,
we present a new multispectral (visible/LWIR) navigation system able to cope with
fast motions and to operate in cold environments, where the contrast of infrared
images is reduced. We demonstrate the robustness of the setup on a series of
semi-urban datasets acquired from a car. An average error between the estimated trajectory and the GPS track of less than 3% of the distance traveled is achieved on all datasets.
________________________________________________________________________________
9:40 - 10:00
Acting Thoughts: Towards a Mobile Robotic Service Assistant for Users with Limited
Communication Skills
F. Burget, L.D.J. Fiederer, D. Kuhner, M. Völker, J. Aldinger, R.T. Schirrmeister, C. Do, J.
Boedecker, B. Nebel, T. Ball, and W. Burgard (Department of Computer Science and Faculty
of Medicine, University of Freiburg, Germany)
As autonomous service robots become more affordable and thus available also for the general
public, there is a growing need for user-friendly interfaces to control the robotic
system. Currently available control modalities typically expect users to be able to
express their desire through either touch, speech or gesture commands. While this
requirement is fulfilled for the majority of users, paralyzed users may not be able to
use such systems. In this paper, we present a novel framework that allows these
users to interact with a robotic service assistant in a closed-loop fashion, using
only thoughts. The brain-computer interface (BCI) system is composed of several
interacting components, i.e., non-invasive neuronal signal recording and decoding,
high-level task planning, motion and manipulation planning as well as environment
perception. In various experiments, we demonstrate its applicability and robustness in real
world scenarios, considering fetch-and-carry tasks and tasks involving human-robot
interaction. As our results demonstrate, our system is capable of adapting to frequent
changes in the environment and reliably completing given tasks within a reasonable
amount of time. Combined with high-level planning and autonomous robotic systems,
interesting new perspectives open up for non-invasive BCI-based human-robot
interactions.
________________________________________________________________________________
10:00 - 10:20
Predicting Travel Time from Path Characteristics for Wheeled Robot Navigation
Peter Regier, Marcell Missura, and Maren Bennewitz (Humanoid Robots Lab, University of
Bonn, Bonn, Germany)
Modern approaches to mobile robot navigation typically employ a two-tiered system where
first a geometric path is computed in a potentially obstacle-laden environment, and then a
reactive motion controller with obstacle-avoidance capabilities is used to follow this path to the
goal. However, when multiple path candidates are present, the shortest path is not always the
best choice as it may lead through narrow gaps and it may be in general hard to follow due to
a lack of smoothness. The assessment of an estimated completion time is a much stronger
selection criterion, but due to the lack of a dynamic model in the path computation phase, the completion time is typically not known a priori. We introduce a novel
approach to estimate the completion time of a path based on simple, readily available
features such as the length, the smoothness, and the clearance of the path. To
this end, we apply non-linear regression and train an estimator with data gained
from the simulation of the actual path execution with a controller that is based on
the well-known Dynamic Window Approach. As we show in the experiments, our
method is able to realistically estimate the completion time for 2D grid paths using the learned predictor and clearly outperforms a prediction based only on path length.
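A minimal sketch of the regression setup, assuming the three features named in the abstract (length, smoothness, clearance) and toy training values, is shown below; a random forest regressor stands in here for the unspecified non-linear regressor used by the authors.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # X: one row per training path -> [length, smoothness, clearance]
    # y: completion time measured by simulating the controller on that path
    X = np.array([[12.3, 0.8, 0.5],
                  [ 4.1, 0.2, 1.2],
                  [20.7, 1.5, 0.3]])          # toy numbers, not the paper's data
    y = np.array([18.5, 4.9, 35.2])

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    est_time = model.predict([[10.0, 0.6, 0.7]])[0]   # predicted completion time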
________________________________________________________________________________
10:50 - 11:10
Segmentation of Depth Images into Objects Based on Local and Global Convexity
Robert Cupec, Damir Filko, and Emmanuel K. Nyarko (Faculty of Electrical Engineering, Computer Science
and Information Technologies Osijek, J. J. Strossmayer University of Osijek, Croatia)
An approach for object detection in depth images based on local and global convexity is
presented. The approach consists of three steps: image segmentation into planar patches,
greedy planar patch aggregation based on local convexity and segment grouping based on
global convexity. The proposed approach improves upon existing similar methods, which use
convexity as a cue for object detection, by detecting convex objects represented by
multiple spatially separated image regions as well as hollow convex objects. The
presented method is experimentally evaluated using a publicly available benchmark
dataset and compared to two state-of-the-art approaches. The experimental analysis demonstrates the improvement achieved by high-level segment grouping based on global convexity.
________________________________________________________________________________
11:10 - 11:30
Semantical Occupancy Grid Mapping Framework
Timo Korthals, Julian Exner, Thomas Schöpping, and Marc Hesse (Bielefeld University,
Cluster of Excellence Cognitive Interaction Technologies, Cognitronics & Sensor Systems,
Bielefeld, Germany)
In recent decades, mapping has been increasingly investigated and applied in unmanned terrain, aerial, sea, and underwater vehicles. Among the various mapping techniques used to build an inner representation of the environment, one of the best-known remains occupancy grid mapping. It has been applied to all domains in a 2D/3D fashion for localization, mapping, navigation, and safe path traversal. Until now, mostly active range-measuring sensors such as LiDAR or SONAR have been exploited to build those maps. With this work, the authors want to overcome these barriers by presenting an occupancy mapping framework offering a generic sensor interface. The interface handles occupancy grids as inverse sensor models, which may represent knowledge on different semantical decision levels and therefore build up a semantic grid map stack. The framework offers buffered memory management for efficient storing and shifting, and further services for accessing the 2D map stack in different cell-wise pre-fused and topometric ways. Within the framework, two novel techniques operating specifically on occupancy grids are presented: First, a novel odds-based interpolation filter is introduced, which scales grid maps in a Bayesian way. Second, a Supercell Extracted via Variance-Driven Sampling (SEVDS) algorithm is presented which abstracts the semantical occupancy grid stack to a topometric map. While this work focuses on the framework's introduction, it is extended by an evaluation of SEVDS against state-of-the-art superpixel approaches to prove its applicability.
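For readers new to the inverse-sensor-model interface described above, the standard Bayesian log-odds fusion of one such model into a grid layer looks roughly as follows (a generic sketch, not the framework's API):

    import numpy as np

    def fuse_layer(log_odds_map, inverse_sensor_model):
        """Fuse one inverse sensor model (given as per-cell occupancy
        probabilities) into a log-odds grid layer: the standard Bayesian
        occupancy-grid update."""
        p = np.clip(inverse_sensor_model, 1e-3, 1 - 1e-3)   # avoid log(0)
        log_odds_map += np.log(p / (1 - p))
        return log_odds_map

    def to_probability(log_odds_map):
        """Convert a log-odds layer back to occupancy probabilities."""
        return 1.0 / (1.0 + np.exp(-log_odds_map))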
________________________________________________________________________________
11:30 - 11:50
Lidar-based Urban Road Detection by Histograms of Normalized Inverse Depths and Line
Scanning
Shuo Gu, Yigong Zhang, Jian Yang, and Hui Kong (School of Computer Science and
Engineering, Nanjing University of Science and Technology, Nanjing city, Jiangsu, China)
In this paper, we propose to fuse the geometric information of a 3D Lidar and a monocular
camera to detect the urban road region ahead of an autonomous vehicle. Our method takes
advantage of both the high definition of 3D Lidar data and the continuity of road in image
representation. First, we obtain an efficient representation of Lidar data, an organized 2D
inverse depth map, by projecting the spatially unorganized 3D Lidar points onto the camera’s
image plane. The approximate road regions can be quickly estimated by extracting vertical
and horizontal histograms of the normalized inverse depths. To accurately find the road area,
a row and column scanning strategy is applied in the approximate road region. We have
carried out experiments on the public KITTI-Road benchmark, and achieve one of the best performances among Lidar-based road detection methods that do not involve a learning procedure.
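A rough sketch of the first step, projecting camera-frame Lidar points onto the image plane and accumulating row and column histograms of normalized inverse depth, is given below; the pinhole projection and thresholds are assumptions for illustration.

    import numpy as np

    def inverse_depth_image(points_cam, K, width, height):
        """Project 3D Lidar points (already in the camera frame) onto the image
        plane and keep the normalized inverse depth per pixel, plus the row and
        column histograms used to bound the approximate road region."""
        pts = points_cam[points_cam[:, 2] > 0.5]            # keep points ahead of the camera
        uvw = (K @ pts.T).T                                  # pinhole projection
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        inv_depth = 1.0 / pts[ok, 2]
        img = np.zeros((height, width))
        img[v[ok], u[ok]] = inv_depth / inv_depth.max()      # normalized inverse depth
        return img, img.sum(axis=1), img.sum(axis=0)         # image, row hist, column hist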
________________________________________________________________________________
11:50 - 12:10
Extrinsic 6DoF Calibration of 3D LiDAR and Radar
Juraj Peršić, Ivan Marković, and Ivan Petrović (University of Zagreb Faculty of Electrical
Engineering and Computing, Zagreb, Croatia)
Environment perception is a key component of any autonomous system and is often based on
a heterogeneous set of sensors and fusion thereof, for which extrinsic sensor calibration plays a fundamental role. In this paper, we tackle the problem of 3D LiDAR–radar calibration, which is
challenging due to low accuracy and sparse informativeness of the radar measurements. We
propose a complementary calibration target design suitable for both sensors, thus enabling a
simple, yet reliable calibration procedure. The calibration method is composed of
correspondence registration and a two-step optimization. The first step, reprojection-error-based optimization, provides an initial estimate of the calibration parameters, while the second step, field-of-view optimization, uses additional information from the radar cross section measurements and the nominal field of view to refine the parameters. In the end, experimental results validate the proposed method and demonstrate how the two steps combined provide an improved estimate of the extrinsic calibration parameters.
________________________________________________________________________________
14:40 - 15:00
Effective Target Aware Visual Navigation for UAVs
Ciro Potena, Daniele Nardi, and Alberto Pretto (Department of Computer, Control, and
Management Engineering “Antonio Ruberti”, Sapienza University of Rome, Italy.)
In this paper we propose an effective vision-based navigation method that allows a multirotor
vehicle to simultaneously reach a desired goal pose in the environment while constantly facing
a target object or landmark. Standard techniques such as Position-Based Visual Servoing
(PBVS) and Image-Based Visual Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast maneuvers) do not make it possible to constantly maintain the line of sight with a target of interest. Instead, we compute the optimal trajectory by solving a non-linear optimization
problem that minimizes the target reprojection error while meeting the UAV’s dynamic
constraints. The desired trajectory is then tracked by means of a real-time Non-linear Model
Predictive Controller (NMPC): this implicitly allows the multirotor to satisfy both of the required constraints. We successfully evaluate the proposed approach in many real and simulated experiments, making an exhaustive comparison with a standard approach.
________________________________________________________________________________
15:00 - 15:20
From Monocular SLAM to Autonomous Drone Exploration
Lukas von Stumberg1, Vladyslav Usenko1, Jakob Engel2, Jörg Stückler3, and Daniel Cremers1
( 1Computer Vision Group, Computer Science Institute 9, Technische Universität
München, Garching, Germany, 2Oculus Research, Redmond, USA 3Computer Vision
Group, Visual Computing Institute, RWTH Aachen University, Aachen, Germany )
Micro aerial vehicles (MAVs) are strongly limited in their payload and power capacity. In order
to implement autonomous navigation, algorithms are therefore desirable that use sensory equipment that is as small, lightweight, and power-efficient as possible. In this paper,
we propose a method for autonomous MAV navigation and exploration using a low-cost
consumer-grade quadrocopter equipped with a monocular camera. Our vision-based
navigation system builds on LSD-SLAM which estimates the MAV trajectory and a semi-dense
reconstruction of the environment in real-time. Since LSD-SLAM only determines depth at
high gradient pixels, texture-less areas are not directly observed so that previous exploration
methods that assume dense map information cannot directly be applied. We propose
an obstacle mapping and exploration approach that takes the properties of our
semi-dense monocular SLAM system into account. In experiments, we demonstrate our
vision-based autonomous navigation and exploration system with a Parrot Bebop
MAV.
________________________________________________________________________________
15:20 - 15:40
Coherent swarming of unmanned micro aerial vehicles with minimum computational and
communication requirements
Daniel Brandtner and Martin Saska (Department of Cybernetics, Faculty of Electrical
Engineering, Czech Technical University in Prague, Czech Republic)
An algorithm designed for stabilization and control of large groups of micro aerial vehicles (MAVs) - multirotor helicopters - without any explicit communication is proposed in this paper. The presented algorithm enables a swarm of MAVs to maintain its coherence and perform a compact motion in complex environments while avoiding obstacles, with only very limited computational and sensory requirements. The method is very robust to incomplete sensory information, enables fully distributed deployment, and is highly scalable. Increasing the number of MAVs even improves the required coherence behaviour. Numerous simulations in
different environments were conducted to verify the algorithm, show its potential, and explore
its various configurations.
________________________________________________________________________________
15:40 - 16:00
Online Switch of Communication Modalities for Efficient Multirobot Exploration
Francesco Amigoni, Jacopo Banfi, Alessandro Longoni, and Matteo Luperto (Dipartimento di
Elettronica, Informazione and Bioingegneria, Politecnico di Milano, Milano, Italy)
Exploration of unknown environments with multirobot systems subject to communication
constraints is a task involved in several applications, like search and rescue and monitoring.
The approaches proposed so far to address this problem are usually based on
the use of a single communication modality, for instance, either multi-hop (MH) or
rendezvous (RV), during the whole mission. However, it has been conjectured that online
switching between communication modalities could be beneficial. In this work, we
empirically investigate this hypothesis by presenting an exploring multirobot system that, as an original contribution, can switch between communication modalities during an exploration mission.
________________________________________________________________________________
Poster 16
Robotic Platform for Deep Change Detection for Rail Safety and Security
Mirco Sturari, Marina Paolanti, Emanuele Frontoni, Adriano Mancini, and Primo Zingaretti
(Dipartimento di Ingegneria dell’Informazione (DII), Universitá Politecnica delle Marche,
Ancona, Italy)
Felix is the first robot to measure switch dimension parameters and railway geometry parameters together with detecting potentially dangerous situations. It increases the quality and the consistency of the measurements and offers the chance to increase the safety of operators and, ultimately, of final users. This paper presents a novel approach mixing visual and point cloud information for effective change detection in railway safety and security applications. The main contributions are the proposed data platform and the combination of point-based measurements (switch dimensions) and surrounding change detection (dangerous trees or abnormal railway trawlers) based on multi-view cameras and a linear laser, processed by a classification pipeline. Results, coming from a real rail scenario using the Felix platform, show the feasibility of the approach and its fast surveying capabilities, with strong implications for safety and security.
________________________________________________________________________________
Poster 17
City-scale continuous visual localization
Manuel Lopez-Antequera1,2, Nicolai Petkov1 and Javier Gonzalez-Jimenez2 ( 1MAPIR-UMA
group, University of Málaga, Instituto de Investigación Biomédica de Málaga, Málaga, Spain,
2Johann Bernoulli Institute of Mathematics and Computing Science, University of Groningen,
The Netherlands )
Visual or image-based self-localization refers to the recovery of a camera’s position and
orientation in the world based on the images it records. In this paper, we deal with the problem
of self-localization using a sequence of images. This application is of interest in settings where
GPS-based systems are unavailable or imprecise, such as indoors or in dense cities. Unlike
typical approaches, we do not restrict the problem to that of sequence-to-sequence or
sequence-to-graph localization. Instead, the image sequences are localized in an
image database consisting of images taken at known locations, but with no explicit
ordering. We build upon the Gaussian Process Particle Filter framework, proposing two
improvements that enable localization when using databases covering large areas: 1) an approximation to Gaussian Process regression is applied, allowing execution on large databases; and 2) we introduce appearance-based particle sampling as a way to combat particle deprivation and bad initialization of the particle filter. Extensive experimental
validation is performed using two new datasets which are made available as part of this
publication.
________________________________________________________________________________
Poster 18
Mobile robotics in arable lands: current state and future trends
Luis Emmi and Pablo Gonzalez-de-Santos (Centre for Automation and Robotics (UPM-CSIC),
Madrid, Spain)
This paper presents a summary of the current state of mobile robotics oriented to perform
precision agricultural tasks on arable lands. Two types of robot configurations are identified and some relevant examples are mentioned. In addition, the paper identifies the trend of robotics in agriculture, the current limitations, and the next steps, as understood by the authors, for reducing the gap toward broader inclusion of robotics in everyday agricultural tasks.
________________________________________________________________________________
Poster 19
Human Robot Motion: A shared effort approach
Grimaldo Silva and Thierry Fraichard (Univ. Grenoble Alpes, Inria, CNRS, Grenoble, France)
This paper is about Human Robot Motion (HRM), i.e. the study of how a robot should move
among humans. This problem has often been solved by considering persons as moving
obstacles, predicting their future trajectories and avoiding these trajectories. In contrast with
such an approach, recent works have shown the benefits of robots that can move and avoid collisions in a manner similar to persons, which we call human-like motion. One such benefit is
that human-like motion was shown to reduce the planning effort for all persons in the
environment, given that they tend to solve collision avoidance problems in similar
ways. The effort required for avoiding a collision, however, is not shared equally
between agents as it varies depending on factors such as visibility and crossing
order. Thus, this work tackles HRM using the notion of motion effort and how it
should be shared between the robot and the person in order to avoid collisions. To that end, our approach uses Reinforcement Learning to learn a robot behavior that enables it to mutually solve the collision avoidance problem in our simulated trials.
________________________________________________________________________________
Poster 20
On Multi-robot Search for a Stationary Object
Miroslav Kulich1, Libor Přeučil1, and Juan José Miranda Bront2 ( 1Czech Institute of
Informatics, Robotics and Cybernetics, Czech Technical University in Prague, Czech
Republic, 2Departamento de Computación, Facultad de Ciencias Exactas y Naturales,
Universidad de Buenos Aires, Argentina )
Two variants of multi-robot search for a stationary object in an a priori known environment represented by a graph are studied in this paper. The first one is a generalization of the Traveling Deliveryman Problem in which more than one deliveryman is allowed to be used in a solution. Similarly, the second variant is a generalization of the Graph Search Problem. A novel heuristic suitable for both problems is proposed, which is furthermore integrated into a cluster-first, route-second approach. A set of computational experiments was conducted over benchmark instances derived from the TSPLIB library. The results obtained show that even the standalone heuristic significantly outperforms the standard solution based on k-means clustering in both quality of results and computational time. The integrated approach furthermore improves the solutions found by the standalone heuristic by up to 15%, at the expense of higher computational complexity.
________________________________________________________________________________
Poster 21
Semantic Visual SLAM in Populated Environments
L. Riazuelo, L. Montano and J. M. M. Montiel (Robotics, Perception and Real-Time Group,
Aragón Institute of Engineering Research (I3A), Universidad de Zaragoza, Zaragoza, Spain)
We propose a visual SLAM (Simultaneous Localization And Mapping) system able to perform
robustly in populated environments. The image stream from a moving RGB-D camera is the
only input to the system. The map, computed in real time, is composed of two layers: 1) The
unpopulated geometrical layer, which describes the geometry of the bare scene as an
occupancy grid where pieces of information corresponding to people have been removed; 2)
A semantic human activity layer, which describes the trajectory of each person with respect to
the unpopulated map, labelling an area as ”traversable” or ”occupied”. Our proposal is to
embed a real-time human tracker into the system. The purpose is twofold. First, to mask out of
the rigid SLAM pipeline the image regions occupied by people, which boosts the
robustness, the relocation, the accuracy and the reusability of the geometrical map in
populated scenes. Secondly, to estimate the full trajectory of each detected person
with respect to the scene map, irrespective of the location of the moving camera
when the person was imaged. The proposal is tested with two popular visual SLAM
systems, C2TAM and ORBSLAM2, proving its generality. The experiments process a benchmark of RGB-D sequences from a camera onboard a mobile robot. They prove the robustness, accuracy and reuse capabilities of the two-layer map for populated scenes.
________________________________________________________________________________
Poster 22
Mapping likelihood of encountering humans: application to path planning in crowded
environment
Fabrice Jumel1,2, Jacques Saraydaryan1,2, and Olivier Simonin1,3 ( 1Laboratoire
CITI-Inria, Chroma team, Villeurbanne, France, 2Université de Lyon, CPE Lyon,
Villeurbanne, France, 3Université de Lyon, INSA Lyon, Inria, Villeurbanne, France )
An important challenge for autonomous robots is to navigate efficiently and safely in
human-populated environments. This requires that the robots perceive human motions and
take human flows into account to plan and navigate. In this context we address the
problem of modeling human flows from the robots' perception by defining a grid of
human motion likelihood over the environment, called the flow grid. We define the
computation of this grid as counting-based mapping. We then define a path planner
that takes into account the risk of encountering humans moving in the opposite direction.
We first evaluate the approach in simulation by considering different navigation
tasks in a crowded environment. For this purpose, we compare three A*-based
path planning models using different levels of information about human presence.
Simulations involving 200 moving persons and 4 collaborative robots allow us to evaluate
simultaneously the flow mapping and the efficiency of the related path planning. Finally,
we test the model with a real robot that maps human displacements in its
environment.
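As a rough illustration of the two ingredients above, the sketch below maintains a
counting-based flow grid and runs a grid A* whose edge cost penalizes cells where the recorded
mean human flow opposes the robot's direction of travel. The grid size, the penalty weight
alpha, and all function names are assumptions made for this example, not taken from the paper.

# Minimal sketch (not the authors' implementation) of a counting-based "flow grid"
# and an A* planner whose edge cost penalizes cells where the recorded human flow
# opposes the robot's direction of travel.
import heapq
import numpy as np

H, W = 20, 20
flow_counts = np.zeros((H, W, 2))   # accumulated human motion vectors per cell
visit_counts = np.zeros((H, W))     # number of observations per cell

def record_human_motion(cell, direction):
    """Counting-based update: accumulate the observed unit motion vector in this cell."""
    flow_counts[cell] += direction / (np.linalg.norm(direction) + 1e-9)
    visit_counts[cell] += 1

def encounter_risk(cell, robot_dir):
    """Risk of meeting humans head-on: high when the mean flow opposes robot_dir."""
    if visit_counts[cell] == 0:
        return 0.0
    mean_flow = flow_counts[cell] / visit_counts[cell]
    return max(0.0, -np.dot(mean_flow, robot_dir))   # only opposing flow counts

def astar(start, goal, alpha=5.0):
    open_set = [(0.0, start)]
    g = {start: 0.0}
    parent = {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        for d in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if not (0 <= nxt[0] < H and 0 <= nxt[1] < W):
                continue
            cost = 1.0 + alpha * encounter_risk(nxt, np.array(d, dtype=float))
            if g[cur] + cost < g.get(nxt, np.inf):
                g[nxt] = g[cur] + cost
                parent[nxt] = cur
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])   # admissible heuristic
                heapq.heappush(open_set, (g[nxt] + h, nxt))
    return None

record_human_motion((5, 5), np.array([-1.0, 0.0]))   # humans seen moving toward lower row index
print(astar((0, 0), (10, 10)))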
________________________________________________________________________________
Poster 23
Autonomous Landing On A Moving Car With Unmanned Aerial Vehicle
Tomas Baca, Peter Stepan, and Martin Saska (Faculty of Electrical Engineering Czech
Technical University in Prague, Czech Republic)
This paper presents an implementation of a system that is able to autonomously find, follow
and land on a car moving at 15 km/h. Our solution consists of two parts: image processing
for fast onboard detection of the landing platform, and an MPC tracker for trajectory
planning and control. The approach is fully autonomous, using only the onboard
computer and onboard sensors with differential GPS. Besides the description of the
solution, we also present experimental results obtained at the MBZIRC 2017 international
competition.
________________________________________________________________________________
Poster 24
Improving Sonar Image Patch Matching via Deep Learning
Matias Valdenegro-Toro (Ocean Systems Laboratory, School of Engineering & Physical
Sciences, Heriot-Watt University, Edinburgh, UK)
Matching sonar images with high accuracy has been a problem for a long time, as sonar
images are inherently hard to model due to reflections, noise and viewpoint dependence.
Autonomous Underwater Vehicles require good sonar image matching capabilities for tasks
such as tracking, simultaneous localization and mapping (SLAM) and some cases of object
detection/recognition. We propose the use of Convolutional Neural Networks (CNN) to learn a
matching function that can be trained from labeled sonar data, after pre-processing to
generate matching and non-matching pairs. On a dataset of 39K training pairs, we obtain an
Area Under the ROC Curve (AUC) of 0.91 for a CNN that outputs a binary classification
matching decision, and an AUC of 0.89 for another CNN that outputs a matching score. In
comparison, classical keypoint matching methods such as SIFT, SURF, ORB and AKAZE obtain
AUCs of 0.61 to 0.68. Alternative learning methods obtain similar results, with a Random
Forest classifier obtaining an AUC of 0.79 and a Support Vector Machine an AUC of
0.66.
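To make the setup concrete, the sketch below shows a small two-branch CNN (PyTorch) that takes
a pair of sonar patches and outputs a binary match logit. The architecture, the 96x96 grayscale
patch size and the loss are illustrative assumptions; they are not the networks evaluated in
the paper.

# Minimal two-branch (shared-weight) CNN for patch-pair matching, as a hedged sketch.
import torch
import torch.nn as nn

class PatchBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                      # -> 32 x 4 x 4 features
        )

    def forward(self, x):
        return self.features(x).flatten(1)

class SonarMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = PatchBranch()                        # shared weights for both patches
        self.head = nn.Sequential(
            nn.Linear(2 * 32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),                              # logit: match / no match
        )

    def forward(self, a, b):
        return self.head(torch.cat([self.branch(a), self.branch(b)], dim=1))

model = SonarMatcher()
a = torch.randn(8, 1, 96, 96)                              # batch of 8 patch pairs
b = torch.randn(8, 1, 96, 96)
logits = model(a, b)
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.randint(0, 2, (8,)).float())
print(logits.shape, loss.item())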
________________________________________________________________________________
Poster 25
Risk and Comfort Management for Multi-Vehicle Navigation using a Flexible and Robust
Cascade Control Architecture
Charles Philippe1,2, Lounis Adouane1, Benoît Thuilot1, Antonios Tsourdos2, and Hyo-Sang
Shin2 ( 1Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France, 2Cranfield
University, Cranfield, United Kingdom )
This paper presents a new cascade control architecture for addressing the problem of
autonomous vehicle trajectory tracking under risk and comfort constraints. The integration of
these constraints is split between an inner and an outer loop. The former is a robust
controller dedicated to stabilizing the car dynamics, while the latter uses a nonlinear Model
Predictive Control (MPC) scheme to control the car trajectory. The proposed structure takes
into account several important aspects, such as robustness considerations and disturbance
rejection (inner loop), as well as control signal and state constraints, tracking error
monitoring and tracking error prediction (outer loop). The proposed design has been validated
in simulation, mainly in comparison with common kinematic trajectory controllers.
________________________________________________________________________________
Poster 26
Reactive Dubins Traveling Salesman Problem for Replanning of Information Gathering by
UAVs
Robert Pěnička1, Martin Saska1, Christophe Reymann2, and Simon Lacroix2 ( 1Faculty of
Electrical Engineering Czech Technical University in Prague, Czech Republic, 2LAAS/CNRS,
Toulouse, France )
We introduce a novel online replanning method for robotic information gathering by
Unmanned Aerial Vehicles (UAVs) called the Reactive Dubins Traveling Salesman Problem
(RDTSP). The considered task is the following: a set of target locations is to be visited by
the robot. From an initial information gathering plan, obtained as an offline solution of
either the Dubins Traveling Salesman Problem (DTSP) or the Coverage Path Planning (CPP)
problem, the proposed RDTSP ensures robust information gathering at each given target
location by replanning over possibly missed target locations. Furthermore, a simple
decision-making procedure is part of the proposed RDTSP to determine which target locations
are marked as missed and to determine the appropriate time instant at which the repair plan
is inserted into the initial path. The proposed replanning method is based on the Variable
Neighborhood Search metaheuristic, which ensures that all possibly missed target locations
are visited by minimizing the length of the repair plan and by utilizing the preplanned
offline solution of the particular information gathering task. The novel method is
evaluated in a realistic outdoor robotic information gathering experiment with a UAV
for both the Dubins Traveling Salesman Problem and the Coverage Path Planning
scenarios.
________________________________________________________________________________
Poster 27
Autonomous Task Execution within NAO Robot Scouting Mission Framework
Anja Babić, Nikola Jagodin, and Zdenko Kovačić (Faculty of Electrical Engineering and
Computing, University of Zagreb, Zagreb, Croatia)
A polygon for movement and task execution by the humanoid robot NAO was defined in the
form of a 2D map. A series of tasks was designed for the robot to accomplish during its
scouting mission. A localization algorithm using markers and the robot’s camera was
developed, as well as algorithms for navigation, path planning, and robot motion. A GUI for
mission definition, supervision, and control, along with manual robot tele-operation, was
developed. A finite automaton was defined which enables switching between autonomous,
semi-autonomous, and manual operation modes. The developed modules were integrated
with modules for audio and visual perception and the complete system was tested on a
chosen scouting mission.
________________________________________________________________________________
Poster 28
Scalable Multirotor UAV Trajectory Planning using Mixed Integer Linear Programming
Jorik De Waen1, Hoang Tung Dinh2, Mario Henrique Cruz Torres2, and Tom Holvoet2
( 1KU Leuven, Leuven, Belgium, 2imec-DistriNet, KU Leuven, Leuven, Belgium )
Trajectory planning using Mixed Integer Linear Programming (MILP) is a powerful approach
because vehicle dynamics and other constraints can be taken into account. However, it is
currently severely limited by poor scalability. This paper presents a new approach which
improves scalability with respect to the number of obstacles and the distance between the
start and goal positions. While previous approaches hit computational limits when the problem
contains tens of obstacles, our approach can handle tens of thousands of polygonal
obstacles on a typical consumer computer. This performance is achieved
by dividing the problem into many smaller MILP subproblems using two sets of
heuristics. Each subproblem models a small part of the trajectory. The subproblems
are solved in sequence, gradually building the desired trajectory. The first set of
heuristics generates each subproblem in a way that minimizes its difficulty while
preserving stability. The second set of heuristics selects a limited number of obstacles to
be modeled in each subproblem while preserving consistency. To demonstrate
that this approach can scale enough to be useful in real, complex environments,
it has been tested on maps of two cities with trajectories spanning several
kilometers.
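The sketch below illustrates the decomposition idea in simplified form: the start-to-goal
problem is split into a chain of short segments, each segment's subproblem considers only the
obstacles near that segment, and the segments are solved in order so that each one starts
where the previous piece ended. The per-segment MILP solve is left as a placeholder (here it
simply returns a straight line), and all names, the segment length and the obstacle-selection
radius are assumptions made for this example, not taken from the paper.

# Illustrative decomposition: sequential subproblems with local obstacle selection.
import numpy as np

def select_nearby_obstacles(obstacles, segment_start, segment_goal, radius):
    """Heuristic: keep only obstacles within `radius` of the segment's midpoint."""
    mid = 0.5 * (segment_start + segment_goal)
    return [o for o in obstacles if np.linalg.norm(o["center"] - mid) <= radius]

def solve_local_milp(start, local_goal, local_obstacles):
    """Placeholder for the per-segment MILP solve; here it just returns a straight line."""
    return np.linspace(start, local_goal, num=5)

def plan_trajectory(start, goal, obstacles, segment_length=50.0, radius=80.0):
    trajectory = [np.asarray(start, dtype=float)]
    current = trajectory[0]
    goal = np.asarray(goal, dtype=float)
    while np.linalg.norm(goal - current) > 1e-6:
        step = min(segment_length, np.linalg.norm(goal - current))
        direction = (goal - current) / np.linalg.norm(goal - current)
        local_goal = current + step * direction
        local_obs = select_nearby_obstacles(obstacles, current, local_goal, radius)
        piece = solve_local_milp(current, local_goal, local_obs)
        trajectory.extend(piece[1:])          # drop the duplicated first point
        current = piece[-1]
    return np.array(trajectory)

obstacles = [{"center": np.array([120.0, 40.0]), "radius": 10.0}]
print(plan_trajectory([0.0, 0.0], [300.0, 100.0], obstacles).shape)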
________________________________________________________________________________
Poster 29
Robust Submap-Based Probabilistic Inconsistency Detection for Multi-Robot Mapping
Yufeng Yue1, Danwei Wang1, P.G.C.N. Senarathne2, and Chule Yang1 ( 1School of Electrical
and Electronic Engineering, Nanyang Technological University, Singapore, 2ST
Engineering-NTU Corporate Laboratory, Nanyang Technological University, Singapore )
The primary goal of employing multiple robots in active mapping tasks is to generate a
globally consistent map efficiently. However, detecting inconsistencies in the generated
global map is still an open problem. In this paper, a novel multi-level approach is
introduced to measure full 3D map inconsistency, in which submap-based tests are
performed at both the single-robot and the multi-robot level. The submap-based conformance
test is performed by modeling the histogram of the misalignment error metric as
a truncated Gaussian distribution. In addition, the detected inconsistency is further
validated through a 3D map registration process. The accuracy of the proposed
method is evaluated using submaps from challenging indoor and outdoor environments,
which illustrates its usefulness and robustness for multi-robot mapping
tasks.
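As a rough illustration of the conformance-test idea, the sketch below fits a truncated
Gaussian to a reference set of misalignment errors (by moment matching on an assumed support,
which is an approximation) and flags a candidate submap pair as potentially inconsistent when
its mean misalignment error falls in the upper tail of that distribution. The support bounds,
the 95th-percentile threshold and the function names are assumptions made for this example.

# Illustrative conformance check with a truncated Gaussian reference model.
import numpy as np
from scipy import stats

def fit_truncated_gaussian(errors, e_max):
    """Moment-matching approximation of a Gaussian truncated to [0, e_max]."""
    loc, scale = float(np.mean(errors)), float(np.std(errors))
    a, b = (0.0 - loc) / scale, (e_max - loc) / scale      # bounds in standard units
    return stats.truncnorm(a, b, loc=loc, scale=scale)

def is_inconsistent(candidate_errors, reference_dist, quantile=0.95):
    """Flag the candidate if its mean error lies in the upper tail of the reference."""
    return float(np.mean(candidate_errors)) > reference_dist.ppf(quantile)

rng = np.random.default_rng(0)
reference_errors = np.abs(rng.normal(0.05, 0.02, size=500))   # well-aligned submap pairs
dist = fit_truncated_gaussian(reference_errors, e_max=0.5)
print(is_inconsistent(np.abs(rng.normal(0.20, 0.02, size=50)), dist))   # likely True
print(is_inconsistent(np.abs(rng.normal(0.05, 0.02, size=50)), dist))   # likely False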
________________________________________________________________________________
Poster 30
Semantic Monte-Carlo Localization in Changing Environments using RGB-D Cameras
Marian Himstedt and Erik Maehle (Institut für Technische Informatik - Universität zu Lübeck,
Germany)
Localization with respect to a prior map is a fundamental requirement for mobile robots.
The commonly used adaptive Monte Carlo localization (AMCL) can be found on most mobile
robots, ranging from small cleaning robots to large AGVs. While achieving accurate
pose estimates in static environments, this algorithm tends to fail in the presence of
significant changes. Recently published extensions and alternatives to AMCL observe the
environment over longer periods while building complex spatio-temporal models. Our
approach, in contrast, utilizes object recognition and prior semantic maps to enable
robust localization. It exploits the fact that putative changes in the environment can
be predicted based on prior semantic knowledge. Our system is experimentally
evaluated in a warehouse environment subject to frequent changes, which
underlines the relevance of our algorithm for challenging industrial
applications.
________________________________________________________________________________
Poster 31
Evaluation of the Force-Current Relationship in a 3-Finger Underactuated Gripper
Simone Giannico, Nicola Castaman and Stefano Ghidoni (Intelligent Autonomous Systems
Laboratory Department of Information Engineering, University of Padova, Italy)
This paper provides a detailed analytic evaluation of the force-current relationship for a real
underactuated gripper, whose geometry presents some differences with respect to the cases
usually considered in the literature. Unlike other approaches, the proposed model can work in
two directions: calculating the current needed to exert a given force, and calculating the
force applied by the gripper when a known current is applied to the motor. Calculating
the current as a function of the forces is not a trivial task; however, this is possible in the
proposed solution thanks to the use of a single input parameter describing the set of forces
applied by the gripper. The proposed approach was tested in real experiments, demonstrating
that the model is capable of providing very good estimates under several working
conditions.
________________________________________________________________________________
Poster 32
COACHES: An assistance Multi-Robot System in public areas
L. Jeanpierre, A.-I. Mouaddib, L. Iocchi, M.T. Lazaro, A. Pennisi, H. Sahli, E. Erdem, E.
Demirel, and V. Patoglu ()
In this paper, we present a robust system of self-directed autonomous robots evolving in
complex public spaces and interacting with people. The system integrates high-level skills:
environment modeling using knowledge-based modeling and reasoning, scene understanding with
robust image and video analysis, distributed autonomous decision-making using Markov decision
processes and Petri-Net planning, short-term interaction with humans, and robust and safe
navigation in crowded spaces. The system has been deployed in a variety of public
environments, such as a shopping mall, a congress center and a laboratory, to assist people
and visitors. The results are very satisfying, showing the effectiveness of the system and
going beyond a simple proof of concept.
________________________________________________________________________________
Poster 33
A QR-code Localization System for Mobile Robots: Application to Smart Wheelchairs
Luca Cavanini1, Gionata Cimini2, Francesco Ferracuti1, Alessandro Freddi1, Gianluca Ippoliti1,
Andrea Monteriù1 and Federica Verdini1 (1Dipartimento di Ingegneria dell’Informazione,
Università Politecnica delle Marche, Ancona, Italy, 2ODYS S.r.l., Milano, Italy)
A Smart Wheelchair (SW) is an electric powered wheelchair, equipped with sensors and
computational capabilities, with the general aim of both enhancing independence and
improving the perceived quality of life of the impaired people using it. SWs belong to the
class of semi-autonomous mobile robots, designed to carry the user from one location to
another of his/her choice. For such systems, localization is of utmost concern, since the GPS
signal is not available indoors and alternative sensor sets are required. This paper proposes
a low-cost artificial landmark-based localization system for mobile robots operating indoors.
It is based on Quick Response (QR) codes, which contain the absolute position of the
landmark: a vision system recognizes the codes, estimates the relative position of the robot
(i.e., displacement and orientation) w.r.t. the codes and, finally, calculates the absolute
position of the robot by exploiting the information contained in the codes. The system
has been experimentally validated for self-localization of a smart wheelchair, and the
experimental results confirm that navigation is possible with a high QR-code density, while
low-density conditions still allow resetting the cumulative odometry
error.
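For illustration, the sketch below shows one way to carry out the pose computation described
above: the QR code encodes its absolute 2D pose in the map, the vision system provides the
code's pose in the robot frame, and the robot's absolute pose follows from SE(2) composition.
The frame conventions, the function names and the example numbers are assumptions made for
this illustration.

# Minimal SE(2) pose composition: T_map_robot = T_map_qr * inv(T_robot_qr).
import numpy as np

def se2_matrix(x, y, theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def robot_pose_from_qr(qr_pose_in_map, qr_pose_in_robot):
    """Absolute robot pose from the code's map pose and its measured pose in the robot frame."""
    T_map_qr = se2_matrix(*qr_pose_in_map)
    T_robot_qr = se2_matrix(*qr_pose_in_robot)
    T_map_robot = T_map_qr @ np.linalg.inv(T_robot_qr)
    x, y = T_map_robot[0, 2], T_map_robot[1, 2]
    theta = np.arctan2(T_map_robot[1, 0], T_map_robot[0, 0])
    return x, y, theta

# QR code at (4.0 m, 2.0 m) facing 90 deg in the map; observed 1.5 m ahead of the robot,
# rotated 30 deg in the robot frame.
print(robot_pose_from_qr((4.0, 2.0, np.pi / 2), (1.5, 0.0, np.pi / 6)))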
________________________________________________________________________________
Poster 34
An Efficient Backtracking-based Approach to Turn-constrained Path Planning for Aerial Mobile
Robots
Hrishikesh Sharma, Tom Sebastian, and Balamuralidhar P. (Embedded Systems and Robotics
research Lab, Tata Consultancy Services Ltd., India)
Many important classes of civilian applications of Unmanned Aerial Vehicles, such as remote
monitoring of long linear infrastructures (e.g., power grids and gas pipelines), entail the
use of fixed-wing vehicles. Such vehicles are constrained by restricted angular movement.
Similarly, mobile robots such as car-like robots or tractor-trailer robots are subject to the
same kind of constraint. The algorithms known so far require substantial preprocessing to
handle the turn constraint. In this paper, we introduce a novel algorithm for
turn-angle-constrained path planning. The proposed algorithm uses a greedy backtracking
strategy to satisfy the constraint, which minimizes the amount of backtracking involved. By
further constructing an efficient depth-first brute-force algorithm for path planning and
comparing against its performance, we observe an improvement in convergence performance by
a factor of at least 10. Further, compared to the recent LIAN suite of path-planning
algorithms, our algorithm exhibits a much reduced discretization offset/error with respect to
the shortest path length. We believe that this algorithm will form a useful stepping stone
towards better path planning algorithms for specific mobile robots such as
UAVs.
________________________________________________________________________________
Poster 35
Data Collection Planning with Dubins Airplane Model and Limited Travel Budget
Petr Váňa, Jan Faigl, Jakub Sláma, and Robert Pěnička (Faculty of Electrical Engineering,
Czech Technical University in Prague, Czech Republic)
In this paper, we address the data collection planning problem for a fixed-wing unmanned
aerial vehicle (UAV) with a limited travel budget. We formulate the problem as a variant of
the Orienteering Problem (OP) in which the Dubins airplane model is utilized to
extend the problem to three-dimensional space and curvature-constrained
vehicles. The proposed Dubins Airplane Orienteering Problem (DA-OP) aims to find
the most rewarding data collection trajectory visiting a subset of the given target
locations without exceeding the limited travel budget. Contrary to the
original OP formulation, the proposed DA-OP combines the combinatorial
part of determining a subset of the targets to be visited, together with the
sequence in which to visit them, with the continuous optimization challenge of
finding the shortest trajectory for the Dubins airplane vehicle. The problem is
addressed by sampling possible approach angles to the targets, and a solution
is found by the Randomized Variable Neighborhood Search (RVNS) method. The
feasibility of the proposed solution is demonstrated by an empirical evaluation on
OP benchmark instances modified to scenarios with varying altitudes of the
targets.
________________________________________________________________________________
Poster 36
Motion planning for long reach manipulation in aerial robotic systems with two arms
A. Caballero1, M. Bejar2, A. Rodriguez-Castaño1, and A. Ollero1 ( 1University of Seville,
Seville, Spain, 2University Pablo de Olavide, Seville, Spain )
In this paper, an aerial robotic system with two arms for long-reach manipulation (ARS-LRM)
while flying is presented. The system consists of a multirotor with a long bar extension that
incorporates a lightweight dual arm at the tip. This configuration allows aerial manipulation
tasks while considerably increasing the safety distance between the rotors and the manipulated
objects. The objective of this work is the development of planning strategies to move the
ARS-LRM system in both navigation and manipulation tasks. For this purpose, a simulation
environment to evaluate the algorithms under consideration is required. Consequently, the
ARS-LRM dynamics have been modeled with specific methodologies for multi-body
systems. Then, a distributed control scheme that makes use of nonlinear control strategies
based on model inversion has been derived to complete the testbed. The motion planning
problem is addressed by considering the aerial platform and the dual arm jointly in order to
achieve wider and safer operating conditions. The planner is based on an
RRT*-based algorithm that optimizes energy and time performance in cluttered
environments for both navigation and manipulation tasks. This motion planning
strategy has been tested in a realistic industrial scenario given by a riveting task. The
satisfactory results of the simulations are presented as a first validation of the proposed
approach.
________________________________________________________________________________
Poster 37
Multi-robot human scene observation based on hybrid metric-topological mapping
Laetitia Matignon1,2, Stephane d’Alu1,3, and Olivier Simonin1,3 ( 1CITI Lab. Inria,
Chroma team, INSA Lyon, Villeurbanne, France, 2Université de Lyon 1, LIRIS, CNRS,
UMR5205 Villeurbanne, France, 3INSA Lyon, Université de Lyon, Villeurbanne, France )
This paper presents a hybrid metric-topological mapping approach for multi-robot observation
of a human scene. The scene is defined as a set of body joints. Mobile robots have to
cooperate to find a position around the scene that maximizes the number of observed joints.
It is assumed that the robots can communicate but have no map of the environment. The map is
updated cooperatively by exchanging only high-level data, thereby reducing the communication
payload. The mapping is also carried out incrementally in order to explore promising
areas of the environment while keeping the state-space complexity reasonable. We
propose an online distributed heuristic search combined with this hybrid mapping, and we
show the efficiency of the approach on a fleet of three real robots, in particular its
ability to quickly explore and find the team position maximizing the joint observation
quality.
________________________________________________________________________________
Poster 38
TIGRE: Topological Graph based Robotic Exploration
L. Fermin-Leon, J. Neira, and J. A. Castellanos (Instituto de Investigación en Ingeniería de
Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain)
In this work we address the problem of autonomous robotic exploration and map building. We
propose TIGRE, a method that segments the current state of the explored map to
incrementally obtain a topological graph. We then use a graph traversal algorithm as a
high-level exploration policy. We consider that an optimal traversal of the topological
graph must visit every edge at least once (instead of the usual goal of visiting every
node at least once). This allows closing all potential loops, which is known to greatly
improve both the map quality and the accuracy of the robot position. We build upon the
classical Tarry's algorithm for maze solving. Our TIGRE algorithm incrementally builds
a topological graph and traverses every edge at least once and at most twice. In
this work we compare the path produced by TIGRE with frontier-based exploration
algorithms and with algorithms that compute a path on the full graph and either visit every
edge once (Eulerian path) or every node once (Hamiltonian path). The comparison is done in
terms of reconstruction quality, error evolution and area coverage. TIGRE
exhibits performance similar to the optimal traversal of every edge, with only a
30% increase in path length with respect to the shortest path. We also obtain an average
reconstruction error that is 50% lower, as well as a lower bound on the error
evolution.
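For reference, the sketch below is a compact implementation of Tarry's traversal rule on a
known, connected undirected graph: never cross an edge twice in the same direction, and leave
a node by its first-entry edge only when no other untraversed edge remains. The resulting
closed walk crosses every edge exactly once in each direction, i.e. at most twice. The graph
representation and names are assumptions for this example; TIGRE itself applies the rule to a
topological graph built incrementally during exploration.

# Tarry-style walk on a connected undirected graph (dict: node -> list of neighbours).
def tarry_walk(graph, start):
    """Closed walk traversing every edge exactly once in each direction."""
    used = set()            # directed edges already traversed
    entry = {}              # edge by which each node was first entered
    walk = [start]
    u = start
    while True:
        # outgoing edges not yet used in this direction, postponing the first-entry edge
        candidates = [v for v in graph[u] if (u, v) not in used and v != entry.get(u)]
        if not candidates and entry.get(u) is not None and (u, entry[u]) not in used:
            candidates = [entry[u]]      # only remaining option: leave by the entry edge
        if not candidates:
            break                        # all edges at u exhausted; for a connected graph u == start
        v = candidates[0]
        used.add((u, v))
        if v not in entry and v != start:
            entry[v] = u                 # record the first-entry edge of v
        walk.append(v)
        u = v
    return walk

rooms = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(tarry_walk(rooms, "A"))            # e.g. ['A', 'B', 'C', 'A', 'C', 'B', 'A']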
________________________________________________________________________________
Poster 39
Collision Avoidance for Safe Structure Inspection with Multirotor UAV
F. Azevedo, A. Oliveira, A. Dias, J. Almeida, M. Moreira, T. Santos, A. Ferreira, A. Martins, and
E. Silva (INESC Technology and Science, ISEP - School of Engineering, Porto, Portugal)
Multirotor UAVs are being integrated into a wide range of application scenarios due to their
3D maneuverability, versatility and reasonable sensor payload. One such scenario is the
inspection of structures where human intervention is difficult or unsafe and where the UAV
can improve the collected data. At the same time, these missions introduce challenges due to
low-altitude flight and to manual operation without line of sight. In order to overcome these
issues, this paper presents a LiDAR-based real-time collision avoidance algorithm, denoted
Escape Elliptical Search Point, which can be integrated into both autonomous and manned modes
of operation. The algorithm was validated in a simulation environment developed in Gazebo and
also in a mixed environment composed of a real robot in an outdoor scenario with a simulated
obstacle and LiDAR.
________________________________________________________________________________
Poster 40
Gas Source Localization Strategies for Teleoperated Mobile Robots. An Experimental
Analysis
Andres Gongora, Javier Monroy, and Javier Gonzalez-Jimenez (Machine Perception and
Intelligent Robotics (MAPIR) research group, University of Malaga, Spain)
Gas source localization (GSL) is one of the most important and direct applications of a
gas-sensitive mobile robot, and consists in searching for one or multiple volatile emission
sources with a mobile robot that has enhanced sensing capabilities (e.g., olfaction, wind flow
measurement). This work addresses GSL by employing a teleoperated mobile robot, and focuses on
which search strategy is the most suitable for this teleoperated approach. Four different
search strategies, namely chemotaxis, anemotaxis, gas mapping and visual-aided search, are
analyzed and evaluated according to a set of proposed indicators (e.g., accuracy, efficiency,
success rate) to determine the most suitable one for a human-teleoperated mobile robot.
Experimental validation is carried out with a large dataset composed of over 150 trials in
which volunteer operators had to locate a gas leak in a virtual environment under
various realistic environmental conditions (i.e., different wind flow patterns and
gas source locations). We report different findings, from which we highlight that,
against intuition, visual-aided search is not always the best strategy; its performance
depends on the environmental conditions and on the operator's ability to understand how the
gas distributes.
________________________________________________________________________________
Poster 41
On the use of Unmanned Aerial Vehicles for Autonomous Object Modeling
Michael C. Welle, Ludvig Ericson, Rares Ambrus, and Patric Jensfelt (Centre for
Autonomous System at KTH Royal Institute of Technology, Stockholm, Sweden)
In this paper we present an end-to-end object modeling pipeline for an unmanned aerial
vehicle (UAV). We contribute a UAV system which is able to autonomously plan a path,
navigate, and acquire views of an object in the environment from which a model is built. The
UAV performs collision checking of the path and navigates only to those areas deemed safe.
The acquired data is sent to a registration system which segments out the object of interest
and fuses the data. We also show a qualitative comparison of our results with previous
work.
________________________________________________________________________________
Poster 42
Multi range Real-time depth inference from a monocular stabilized footage using a Fully
Convolutional Neural Network
Clément Pinard1,2, Laure Chevalley1, Antoine Manzanera2, David Filliat2 ( 1Parrot,
Paris, France, 2U2IS, ENSTA ParisTech, Université Paris-Saclay, Palaiseau, France )
We propose a neural network architecture for depth map inference from monocular stabilized
videos, with application to UAV videos in rigid scenes. Training is based on a novel
synthetic dataset for navigation that mimics aerial footage from a gimbal-stabilized
monocular camera in rigid scenes. Based on this network, we propose a multi-range
architecture for unconstrained UAV flight, leveraging flight data from sensors to produce
accurate depth maps for uncluttered outdoor environments. We evaluate our algorithm on both
synthetic scenes and real UAV flight data. Quantitative results are given for synthetic
scenes with slightly noisy orientation and show that our multi-range architecture
improves depth inference. Along with this article is a video that presents our results more
thoroughly.
________________________________________________________________________________
Poster 43
Synthesized semantic views for mobile robot localization
Johannes Pöschmann, Peer Neubert, Stefan Schubert, and Peter Protzel (Chemnitz
University of Technology, Germany)
Localizing a mobile robot in a given map is a crucial task for autonomy. We present an
approach to localize a robot equipped with a camera in a known 2D or 3D geometrical map
that is augmented with semantic information (e.g., a floor plan with semantic labels). The
approach uses semantic information to mediate between the visual information from the
camera and the geometrical information in the map. Moreover, semantic information is robust
to appearance changes such as varying lighting conditions. Instead of relying solely on
salient semantic landmarks (i.e., “things” like doors) we also exploit “stuff”-like semantic
classes such as wall and floor. The presented localization approach builds upon the idea of
computing a semantic segmentation of an incoming camera image using a Convolutional Neural
Network and subsequently matching it to semantic views synthesized from a map.
We give details of the algorithmic approach: how to semantically segment
images, how to synthesize views from the semantic 2D or 3D map, how to match images
from both sources, and how to integrate the result into Monte Carlo localization. Further, we
provide a set of proof-of-concept experiments and evaluate the influence of the
selected set of semantic classes. To work towards the use of hand-drawn sketches
as input maps, we also evaluate the robustness of the presented approach to map
distortions.
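The sketch below illustrates the matching step in a highly simplified form: for each particle
pose, a semantic view is synthesized from the map and the particle weight is set to the
per-pixel label agreement with the segmented camera image. The renderer is only a placeholder,
and the agreement measure, class set and names are assumptions made for this example.

# Simplified particle reweighting by semantic label agreement (illustrative only).
import numpy as np

N_CLASSES = 4   # e.g. wall, floor, door, other

def render_semantic_view(semantic_map, particle_pose, shape=(60, 80)):
    """Placeholder: synthesize the semantic label image seen from `particle_pose`."""
    rng = np.random.default_rng(abs(hash(particle_pose)) % (2**32))
    return rng.integers(0, N_CLASSES, size=shape)

def particle_weight(observed_labels, semantic_map, particle_pose):
    synthesized = render_semantic_view(semantic_map, particle_pose, observed_labels.shape)
    agreement = np.mean(observed_labels == synthesized)     # fraction of matching pixels
    return agreement + 1e-6                                  # avoid zero weights

def reweight_particles(particles, observed_labels, semantic_map):
    weights = np.array([particle_weight(observed_labels, semantic_map, p) for p in particles])
    return weights / weights.sum()

particles = [(1.0, 2.0, 0.1), (1.2, 2.1, 0.0), (5.0, 7.0, 3.0)]   # (x, y, yaw) hypotheses
observed = np.random.default_rng(0).integers(0, N_CLASSES, size=(60, 80))
print(reweight_particles(particles, observed, semantic_map=None))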
________________________________________________________________________________
Poster 44
Adaptive Sampling-based View Planning under Time Constraints
Lars Kunze1, Mohan Sridharan2, Christos Dimitrakakis3, and Jeremy Wyatt4 ( 1Oxford
Robotics Institute, Dept. of Engineering Science, University of Oxford, United Kingdom,
2Department of Electrical and Computer Engineering, The University of Auckland, New
Zealand, 3Computer Science and Engineering, Chalmers University of Technology, Göteborg,
Sweden, 4Intelligent Robotics Lab, School of Computer Science, University of Birmingham,
United Kingdom )
Planning for object search requires the generation and sequencing of views in a continuous
space. These plans need to consider the effect of overlapping views and a limit imposed
on the time taken to compute and execute the plans. We formulate the problem
of view planning in the presence of overlapping views and time constraints as an
Orienteering Problem with history-dependent rewards. We consider two variants of
this problem: in variant (I) only the plan execution time is constrained, whereas in
variant (II) both planning and execution time are constrained. We abstract away the
unreliability of perception and present a sampling-based view planner that simultaneously
selects a set of views and a route through them, and that incorporates a prior over object
locations. We show that our approach outperforms state-of-the-art methods for the
orienteering problem by evaluating all algorithms in four environments that vary in size and
complexity.
________________________________________________________________________________
Poster 45
“Look At This One” - Detection sharing between modality-independent classifiers for robotic
discovery of people
Joris Guerry1,2, Bertrand Le Saux1, and David Filliat2 (1ONERA The French Aerospace Lab,
Palaiseau, France, 2U2IS, ENSTA ParisTech, Inria FLOWERS team, Université Paris-Saclay,
Palaiseau, France)
With the advent of low-cost RGBD sensors, many solutions have been proposed for the extraction
and fusion of colour and depth information. In this paper, we propose new approaches for
fusing these multimodal sources for people detection. We are especially concerned with a
scenario in which a robot evolves in a changing environment. (i) We extend the Faster RCNN
framework proposed by Girshick et al. [1] to this use case; (ii) we significantly improve
people detection performance on the InOutDoor RGBD People dataset [2] and the RGBD People
dataset [3]; (iii) we show that these fusion approaches efficiently handle sensor defects such
as the complete loss of a modality. Furthermore, (iv) we propose a new dataset for people
detection in difficult conditions: ONERA.ROOM.
________________________________________________________________________________
14:00 - 14:20
A bio-inspired celestial compass applied to an ant-inspired robot for autonomous navigation
Julien Dupeyroux, Julien Diperi, Marc Boyron, Stéphane Viollet, and Julien Serres
(Aix-Marseille University, CNRS, ISM, Inst Movement Sci, Marseille, France)
Common compass sensors used in outdoor environments are highly disturbed by
unpredictable magnetic fields. This paper proposes to take inspiration from insect
navigational strategies to design a celestial compass based on the linear polarization of
ultraviolet (UV) skylight. This bio-inspired compass uses only two pixels to determine the
solar meridian direction angle. It consists of two UV-light photosensors topped with linear
polarizers arranged orthogonally to each other, as observed in the insects' Dorsal Rim Area.
The compass is embedded on our ant-inspired hexapod walking robot called Hexabot. The
performance of the celestial compass under various weather and UV conditions has been
investigated. Once embedded on the robot, the sensor was first used to compensate for random
yaw disturbances. We then used the compass to keep Hexabot's heading direction constant in a
straight-line walking task over flat terrain while the robot was perturbed in yaw by its own
walking behaviour. Experiments under various meteorological conditions yielded steady-state
heading direction errors from 0.3° (clear sky) to 1.9° (overcast sky). These results suggest
that the precision and reliability of this new optical compass make it suitable for
autonomous field robotics navigation
tasks.
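As a simplified illustration of the two-pixel principle (not the sensor's actual processing
pipeline): partially linearly polarized skylight with degree of polarization d and angle of
polarization ψ produces intensities proportional to 1 + d·cos 2ψ and 1 − d·cos 2ψ behind two
orthogonal linear polarizers, so their normalized difference recovers cos 2ψ and hence ψ up to
the inherent ambiguity. The sketch below assumes d is known, which is a simplification.

# Angle-of-polarization estimate from two orthogonal polarized photosensors (Malus' law).
import numpy as np

def simulate_readings(psi, d=0.6, i0=1.0):
    """Intensities behind polarizers aligned with and orthogonal to the reference axis."""
    i_parallel = 0.5 * i0 * (1.0 + d * np.cos(2.0 * psi))
    i_orthogonal = 0.5 * i0 * (1.0 - d * np.cos(2.0 * psi))
    return i_parallel, i_orthogonal

def estimate_psi(i_parallel, i_orthogonal, d=0.6):
    ratio = (i_parallel - i_orthogonal) / (i_parallel + i_orthogonal)   # equals d*cos(2*psi)
    ratio = np.clip(ratio / d, -1.0, 1.0)
    return 0.5 * np.arccos(ratio)          # in [0, pi/2]; the quadrant remains ambiguous

true_psi = np.deg2rad(25.0)
print(np.rad2deg(estimate_psi(*simulate_readings(true_psi))))   # ~25.0 degrees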
________________________________________________________________________________
14:20 - 14:40
Controllers Design for Differential Drive Mobile Robots based on Extended Kinematic
Modeling
Julio C. Montesdeoca Contreras1,2, D. Herrera2, J.M. Toibero2, and R. Carelli2 (1Universidad
Politécnica Salesiana, Cuenca, Ecuador, 2Instituto de Automatica, CONICET / Universidad
Nacional de San Juan, San Juan, Argentina)
This paper presents simulation results for controller designs for a differential-drive mobile
robot (DDMR) using a novel modeling method, which is based on the inclusion of the sideways
velocity in the kinematic model in order to obtain a holonomic-like model. Next, the
non-holonomic constraint is introduced under the assumption that the sideways slip is
measurable. The controller design considers a variable position for the point of interest and
also takes into account the robot's constrained inputs. The obtained inverse kinematics
controller is differentiable, time-invariant and naturally incorporates the measurable
sideways slip. Moreover, the proposed controller can be used both for trajectory tracking and
for path following by setting appropriate desired values at the planning stage. Lyapunov
theory is used to prove the stability of the control system. The simulator includes a robot
dynamics module that supports physics engines, and the simulation results show high
performance for both
tasks.
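As a simple illustration of the extended kinematic model mentioned above, the sketch below adds
a measurable lateral slip velocity v_s to the standard unicycle model and integrates it with
forward Euler. This is the common textbook form of such an extension, stated here as an
assumption rather than taken from the paper.

# Extended differential-drive kinematics with a measurable lateral slip term v_s.
import numpy as np

def extended_kinematics_step(state, v, omega, v_s, dt):
    """state = (x, y, theta); v: forward speed, omega: yaw rate, v_s: lateral slip speed."""
    x, y, theta = state
    x += (v * np.cos(theta) - v_s * np.sin(theta)) * dt
    y += (v * np.sin(theta) + v_s * np.cos(theta)) * dt
    theta += omega * dt
    return np.array([x, y, theta])

state = np.array([0.0, 0.0, 0.0])
for _ in range(100):                                  # 1 s of motion at 100 Hz
    state = extended_kinematics_step(state, v=0.5, omega=0.2, v_s=0.05, dt=0.01)
print(state)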
________________________________________________________________________________
14:40 - 15:00
On Solution of the Dubins Touring Problem
Jan Faigl, Petr Váňa, Martin Saska, Tomáš Báča, and Vojtěch Spurný (Faculty
of Electrical Engineering, Czech Technical University in Prague, Czech Republic)
The Dubins traveling salesman problem (DTSP) combines the combinatorial optimization of
the optimal sequence of waypoints to visit the required target locations with the continuous
optimization of the optimal headings at the waypoints. Existing decoupled approaches to the
DTSP are based on solving the sequencing part independently as a Euclidean TSP and then
finding the optimal headings of the waypoints in the sequence. In this work, we focus on
determining the optimal headings for a given sequence of waypoints and formulate this problem
as the Dubins touring problem (DTP). The DTP can be solved by uniform sampling of possible
headings; however, we propose a new informed sampling strategy to find approximate solutions
of the DTP. Based on the presented results, the proposed algorithm quickly converges to a
high-quality solution, which is less than 0.1% from the optimum. Moreover, the proposed
approach also improves the solution of the DTSP, and its feasibility has been experimentally
verified in a real
deployment.
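For context, the uniform-sampling baseline mentioned above can be written as a dynamic program
over a layered graph of (waypoint, sampled heading) pairs, as in the sketch below. The helper
dubins_length is only a crude stand-in (Euclidean distance plus a turning penalty) and should
be replaced by a real Dubins path length computation; all names and the sample count are
assumptions for this example. The paper's contribution is an informed sampling strategy that
refines this uniform baseline.

# Uniform heading sampling and dynamic programming for a fixed waypoint sequence.
import numpy as np

def dubins_length(q0, q1, turning_radius):
    """Crude stand-in for the shortest Dubins path length between (x, y, heading) configs:
    Euclidean distance plus a penalty for heading changes. Replace with a Dubins library."""
    dx, dy = q1[0] - q0[0], q1[1] - q0[1]
    straight = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx)
    turn = abs(np.arctan2(np.sin(q0[2] - bearing), np.cos(q0[2] - bearing))) \
         + abs(np.arctan2(np.sin(q1[2] - bearing), np.cos(q1[2] - bearing)))
    return straight + turning_radius * turn

def dtp_uniform_sampling(waypoints, k=16, turning_radius=1.0):
    headings = np.linspace(0.0, 2.0 * np.pi, k, endpoint=False)
    n = len(waypoints)
    cost = np.zeros((n, k))                 # best cost to reach waypoint i with heading j
    pred = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        for j, hj in enumerate(headings):
            q1 = (*waypoints[i], hj)
            lengths = [cost[i - 1, p] + dubins_length((*waypoints[i - 1], hp), q1, turning_radius)
                       for p, hp in enumerate(headings)]
            pred[i, j] = int(np.argmin(lengths))
            cost[i, j] = lengths[pred[i, j]]
    best = [int(np.argmin(cost[n - 1]))]    # backtrack the best heading sequence
    for i in range(n - 1, 0, -1):
        best.append(int(pred[i, best[-1]]))
    return [headings[j] for j in reversed(best)], float(np.min(cost[n - 1]))

waypoints = [(0.0, 0.0), (5.0, 1.0), (8.0, 6.0), (2.0, 7.0)]
headings, total = dtp_uniform_sampling(waypoints, k=16)
print([round(np.degrees(h), 1) for h in headings], round(total, 2))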
________________________________________________________________________________
15:00 - 15:20
Algorithms for limited-buffer shortest path problems in communication-restricted environments
Alessandro Riva, Jacopo Banfi, Arlind Rufi, and Francesco Amigoni (Dipartimento di
Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy)
In several applications, a robot moving from a start to a goal location is required to gather
data along its path (e.g., a video feed in a monitoring scenario). The robot may have at its
disposal only a limited amount of memory to store the collected data, in order to contain
costs or to avoid sensitive data falling into the hands of an attacker. This creates the
need to periodically deliver the data to a Base Station (BS) through a deployed
communication infrastructure that, in general, is not available everywhere. In this
paper, we study this scenario by considering a variant of the shortest path problem
(which we prove to be NP-hard) in which the robot acquires information along its path,
stores it in a limited memory buffer, and ensures that no information is lost by
periodically communicating data to the BS. We present and evaluate an optimal
pseudo-polynomial time algorithm, an efficient feasibility test, and a polynomial time
heuristic algorithm.
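To make the setting concrete, the sketch below implements one plausible reading of the problem
as a state-augmented shortest-path search (an illustration, not the authors' pseudo-polynomial
algorithm): the search state is the pair (vertex, buffer fill level), traversing an edge adds
its data volume to the buffer, exceeding the capacity is infeasible, and reaching a vertex
covered by the communication infrastructure flushes the buffer to the base station. The graph
encoding and names are assumptions for this example.

# Dijkstra over (vertex, buffer level) states with buffer flushes at communication nodes.
import heapq

def limited_buffer_shortest_path(edges, comm_nodes, start, goal, capacity):
    """edges: dict u -> list of (v, travel_cost, data_volume)."""
    dist = {(start, 0): 0.0}
    queue = [(0.0, start, 0)]                            # (cost, vertex, buffer level)
    while queue:
        cost, u, buf = heapq.heappop(queue)
        if dist.get((u, buf), float("inf")) < cost:
            continue
        if u == goal:
            return cost
        for v, travel_cost, data in edges.get(u, []):
            new_buf = buf + data
            if new_buf > capacity:
                continue                                  # data would be lost, move infeasible
            if v in comm_nodes:
                new_buf = 0                               # deliver buffered data to the BS
            state = (v, new_buf)
            if cost + travel_cost < dist.get(state, float("inf")):
                dist[state] = cost + travel_cost
                heapq.heappush(queue, (cost + travel_cost, v, new_buf))
    return None

edges = {"s": [("a", 1.0, 3), ("b", 4.0, 1)], "a": [("c", 1.0, 3)],
         "b": [("c", 1.0, 1)], "c": [("g", 1.0, 2)]}
print(limited_buffer_shortest_path(edges, comm_nodes={"b"}, start="s", goal="g", capacity=4))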