We begin by developing the physical means to make large-scale localisation achievable and affordable. This takes the form of a stand-alone, rugged sensor payload - incorporating a number of sensing modalities - that can be deployed in either a mapping or a localisation role. We then present a new technique for localisation in a prior map using an information-theoretic framework. The core idea is to build a dense retrospective sensor history, which is then matched statistically within a prior map. By leveraging the persistent structure in the environment in this way, we show that it is possible to stay localised over the course of many months and kilometres. The developed system relies on orthogonally oriented ranging sensors to infer both velocity and pose. However, operating in a complex, dynamic setting (like a town centre) can often induce velocity errors, distorting our sensor history and resulting in localisation failure. The insight into dealing with this failure is to learn from sensor context - we learn a place-dependent sensor model and show that doing so is vital to prevent such failures.
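As a rough illustration of the place-dependent sensor model idea, one minimal sketch is to bin the map into cells and learn per-cell velocity-error statistics from survey data, then use those statistics to weight live estimates. The grid discretisation and all names below are assumptions for this sketch, not the thesis implementation:

```python
# Minimal sketch of a place-dependent sensor model: per-cell velocity-error
# statistics learned from survey data, later used to weight live estimates.
from collections import defaultdict
import numpy as np

CELL = 10.0  # cell size in metres (assumed)

def cell_of(xy):
    """Map a 2D position to a discrete grid cell."""
    return (int(xy[0] // CELL), int(xy[1] // CELL))

def learn_place_model(positions, velocity_errors):
    """positions: (N,2) survey positions; velocity_errors: (N,) residuals
    between scan-matched and reference velocities. Returns per-cell variance."""
    residuals = defaultdict(list)
    for p, e in zip(positions, velocity_errors):
        residuals[cell_of(p)].append(e)
    return {c: np.var(r) for c, r in residuals.items() if len(r) > 5}

def measurement_variance(model, position, default=0.05):
    """Look up the learned variance for the current place, falling back to a
    global default where no training data was seen."""
    return model.get(cell_of(position), default)
```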
We demonstrate the viability of using 2D LIDAR data as the sole means for accurate, robust, long-term road-vehicle localization within a prior map in a complex, dynamic real-world setting. We utilize a dual-LIDAR system - one laser oriented horizontally, in order to infer vehicle linear and rotational velocity, and one declined to capture a dense view of the surrounds - that allows us to estimate both velocity and position within a prior map. We show how probabilistically modelling the noisy local velocity estimates from the horizontal laser feed, fusing these estimates with data from the declined LIDAR to form a dense 3D swathe, and matching this swathe statistically within a map allows for robust, long-term position estimation. We accommodate estimation errors induced by passing vehicles, pedestrians, ground strike, and so on by learning a position-dependent sensor model - that is, a sensor model that varies spatially - and show that learning such a model for LIDAR data allows us to deal gracefully with the complexities of real-world data. We validate the concept over more than 9 kilometres of driven distance in and around the town of Woodstock, Oxfordshire.
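The statistical swathe-in-map matching step might look like the following minimal sketch, in which candidate poses are scored by the mutual information between swathe reflectance and the reflectance of the nearest prior-map points. The grid search, histogram sizes, use of a KD-tree, and all names are illustrative assumptions rather than the paper's implementation:

```python
# Sketch: score candidate SE(2) poses by reflectance mutual information.
import numpy as np
from scipy.spatial import cKDTree

def mutual_information(a, b, bins=16):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def score_pose(swathe_xy, swathe_refl, map_tree, map_refl, dx, dy, dtheta):
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    pts = swathe_xy @ R.T + np.array([dx, dy])
    _, idx = map_tree.query(pts)               # nearest prior-map point
    return mutual_information(swathe_refl, map_refl[idx])

def localise(swathe_xy, swathe_refl, map_xy, map_refl, search):
    """Exhaustive search over a small pose window around the prediction;
    `search` is an iterable of (dx, dy, dtheta) candidates."""
    tree = cKDTree(map_xy)
    return max(search, key=lambda p: score_pose(
        swathe_xy, swathe_refl, tree, map_refl, *p))
```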
In this paper we describe and demonstrate a method for precisely localizing a road vehicle using a single push-broom 2D laser scanner while leveraging a prior 3D survey. In contrast to conventional scan matching, our laser is oriented downwards, thus causing continual ground strike. Our method exploits this to produce a small 3D swathe of laser data which can be matched statistically within the 3D survey. This swathe generation is predicated upon time-varying estimates of vehicle velocity. While in theory this data could be obtained from vehicle speedometers, in reality these instruments are biased, and so we also provide a way to estimate this bias from survey data. We show that our low-cost system consistently outperforms a high-caliber integrated DGPS/IMU system over 26 km of driven path around a test site.
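One way to picture the bias-estimation idea is a simple regression from raw speedometer readings onto speeds recovered from the survey trajectory. The linear bias model below is an assumption made for illustration; the paper estimates the bias from survey data, not necessarily in this form:

```python
# Sketch: fit survey_speed ~ a * raw_speed + b by least squares.
import numpy as np

def estimate_speedometer_bias(raw_speed, survey_speed):
    """Returns the multiplicative scale a and additive offset b."""
    raw_speed = np.asarray(raw_speed, dtype=float)
    A = np.column_stack([raw_speed, np.ones_like(raw_speed)])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(survey_speed), rcond=None)
    return a, b

def correct(raw_speed, a, b):
    """Apply the learned correction to live speedometer readings."""
    return a * np.asarray(raw_speed) + b
```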
We present a novel way to learn sampling distributions for sampling-based motion planners by making use of expert data. We learn an estimate (in a non-parametric setting) of sample densities around semantic regions of interest, and incorporate these learned distributions into a sampling-based planner to produce natural plans. Our motivation is that certain aspects of the workspace have a local influence on planning strategies, which is dependent both on where, and what, they are. In the event that learning the density estimate of the training data is impractical in the original feature space, we utilize a non-linear dimensionality-reduction technique and perform density estimation on a lower-dimensional embedding. Samples are then lifted from this embedded density into the original feature space, producing samples that still well approximate the original distribution. A goal of this work is to learn how various features in the environment influence the behavior of experts - for example, how pedestrian crossings, traffic signals and so on affect drivers. We show that learning sampling distributions from expert trajectory data around these semantic regions leads to more natural paths that are measurably closer to those of an expert. We demonstrate the feasibility of the technique in various scenarios for a virtual car-like robotic vehicle and a simple manipulator, contrasting the differences in planned trajectories of the semantically-biased distributions with conventional techniques.
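A compact sketch of the learned-sampler pipeline follows: estimate the density of expert configurations in a low-dimensional embedding, draw samples there, and lift them back by nearest-neighbour pre-image. PCA stands in here purely for brevity where the paper uses a non-linear reduction, and all names, parameters, and the jitter-based lifting are illustrative assumptions:

```python
# Sketch: density estimation in an embedding, with lifted samples.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity, NearestNeighbors

class LearnedSampler:
    def __init__(self, expert_configs, n_components=2, bandwidth=0.2):
        self.configs = np.asarray(expert_configs, dtype=float)
        self.pca = PCA(n_components=n_components).fit(self.configs)
        self.embedded = self.pca.transform(self.configs)
        self.kde = KernelDensity(bandwidth=bandwidth).fit(self.embedded)
        self.nn = NearestNeighbors(n_neighbors=1).fit(self.embedded)

    def sample(self, n=1, jitter=0.05):
        z = self.kde.sample(n)                # draw in the embedding
        _, idx = self.nn.kneighbors(z)        # nearest expert sample
        lifted = self.configs[idx[:, 0]]      # pre-image in the full space
        return lifted + jitter * np.random.randn(*lifted.shape)
```

Samples from `sample()` can then replace a fraction of the uniform draws in a sampling-based planner's proposal step.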
This paper presents a novel way to bias the sampling domain of stochastic planners by learning from example plans. We learn a generative model of a planner as a function of proximity to labeled objects in the workspace. Our motivation is that certain objects in the workspace have a local influence on planning strategies, which is dependent not only on where they are but also on what they are. We introduce the concept of a Semantic Field - a region of space in which configuration sampling is modelled as a multinomial distribution described by an underlying Dirichlet distribution. We show how the field can be trained using example expert plans, pruned according to information content and inserted into a regular RRT to produce efficient plans. We go on to show that our formulation can be extended to bias the planner into producing sequences of samples which mimic the training data.
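Since the abstract describes configuration sampling as a multinomial with a Dirichlet prior, a minimal sketch of a Semantic Field might count where expert plans pass through spatial bins near a labelled object and draw from the posterior predictive. The bin layout, prior strength, and mixing strategy below are illustrative assumptions:

```python
# Sketch: a Dirichlet-multinomial field over spatial bins near an object.
import numpy as np

class SemanticField:
    def __init__(self, n_bins, alpha=1.0):
        self.counts = np.zeros(n_bins)
        self.alpha = alpha                  # symmetric Dirichlet prior

    def observe(self, bin_index):
        """Record one expert configuration falling in this bin."""
        self.counts[bin_index] += 1

    def sample_bin(self, rng):
        """Posterior-predictive draw from the Dirichlet-multinomial."""
        probs = (self.counts + self.alpha) / (
            self.counts.sum() + self.alpha * len(self.counts))
        return rng.choice(len(self.counts), p=probs)

# Inside an RRT, the proposal might mix this biased draw with a uniform one:
#   bin = field.sample_bin(rng) if rng.random() < 0.5 else rng.integers(n_bins)
```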
This presentation introduces a novel way to bias the sampling domain of stochastic planners by learning from example plans. The fundamentals of the stochastic planner of choice (Rapidly-exploring Random Trees) are discussed, along with a method to bias the proposal distribution of the planner using expert data.
In this paper we present a large dataset intended for use in mobile robotics research. Gathered from a robot driving several kilometers through a park and campus, it contains a five degree-of-freedom dead-reckoned trajectory, laser range/reflectance data and 20 Hz stereoscopic and omnidirectional imagery. All data is carefully timestamped and all data logs are in human-readable form, with the images in standard formats. We provide a set of tools to access the data and detailed tagging and segmentations to facilitate its use.
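Because every stream is timestamped, a typical first task when working with such a dataset is associating records across streams. A hedged sketch of nearest-timestamp matching follows; the log layout and names are assumptions for illustration and are not the dataset's provided tools:

```python
# Sketch: for each stamp in stream A, find the nearest stamp in stream B.
import numpy as np

def associate(times_a, times_b, max_dt=0.05):
    """Returns, per stamp in times_a, the index of the nearest stamp in
    times_b, or -1 if none lies within max_dt seconds. Assumes times_b
    is sorted ascending."""
    times_a = np.asarray(times_a, dtype=float)
    times_b = np.asarray(times_b, dtype=float)
    idx = np.clip(np.searchsorted(times_b, times_a), 1, len(times_b) - 1)
    left, right = times_b[idx - 1], times_b[idx]
    nearest = np.where(np.abs(times_a - left) < np.abs(times_a - right),
                       idx - 1, idx)
    dt = np.abs(times_b[nearest] - times_a)
    return np.where(dt <= max_dt, nearest, -1)
```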
eRobot is a second-generation inspection robot, fusing the power of modern embedded computing technology with the diversity of low-cost robotics. Although primarily designed for the inspection of marine vessels in dry dock, eRobot makes use of a highly modular design in order to fulfill a variety of inspection tasks.