Tracked Platform

From irlab

Outdoor Robotics

The aim is to take the existing tracked robot and turn it into a research platform for a wide variety of projects within the University. At a high level, the platform could be used to test and demonstrate the execution of mapping, investigation and monitoring tasks, and to study long-term autonomy (learning to adapt to the environment, the tasks and the changing characteristics of the robot). An example task might be to patrol around the outside of the Computer Science building, taking regular pictures of each door.

Rustam has an existing tracked platform with a velocity controller for each track, plus an additional set of sensors that can be fitted to it. An initial step would be to take the indoor algorithms used on Dora and modify the tracked base to accept the same control messages (using the Player interfaces). At the same time, the Gazebo simulation could be adapted to incorporate a model of the robot and the campus, allowing higher-level control algorithms to be developed and tested in simulation.
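If the tracked base is to accept the same velocity-style control messages as Dora, the main translation is from a commanded forward and angular velocity to individual track speeds. A minimal sketch of that mapping, treating the tracks as a differential drive (the function name and `track_separation` parameter are illustrative, not from the existing controller):

```python
def twist_to_track_speeds(v, omega, track_separation):
    """Map a commanded forward velocity v (m/s) and angular velocity
    omega (rad/s) onto left/right track speeds, modelling the tracked
    base as a differential drive with the given track separation (m)."""
    left = v - omega * track_separation / 2.0
    right = v + omega * track_separation / 2.0
    return left, right
```

The per-track speeds would then be handed to the existing velocity controller for each track; in practice track slip means the achieved motion will deviate from this ideal model.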

Building up the platform

There are a number of development steps needed on the platform; these could hopefully be made into a set of student projects.

  • Identify the equivalent control levels to match Dora (initially)
  • Add a velocity controller
  • Add odometry (feedback from an IMU and possibly encoders on the drive of each track)
  • Add a GPS receiver and use it to improve the location / velocity state estimation
  • Install an on-board laptop to run control algorithms
  • Incorporate a feedback controller to follow a trajectory (lines / splines)
  • Add tele-operation options (wifi / radio link to an additional laptop, including an e-stop function)
  • Tele-operation of velocity control
  • Install laser scanner
  • Use scanner data for SLAM and obstacle avoidance
  • Remote operation with ‘go to <location>’ commands
  • Run as plug in replacement for Dora
  • Add a pan-tilt camera and link the camera data back to the laptop
  • Develop a simulation for the platform and outdoor environment (using Gazebo?)
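The odometry step above amounts to dead reckoning from the per-track speeds. A minimal sketch under the same differential-drive assumption (function and parameter names are illustrative); because tracks slip far more than wheels, this estimate would only be a rough input to be fused with the IMU and GPS:

```python
import math

def integrate_odometry(x, y, theta, v_left, v_right, track_separation, dt):
    """One dead-reckoning step: update pose (x, y in metres, theta in
    radians) from measured left/right track speeds over a timestep dt,
    using the differential-drive model."""
    v = (v_left + v_right) / 2.0                  # forward speed
    omega = (v_right - v_left) / track_separation  # turn rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```

Called at each control tick, this gives the pose estimate that the GPS and laser-based corrections in the later steps would refine.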

Potential additions

  • Visual servoing (go-to point in image)
  • Additional sensors for obstacle detection
  • Use of on-board image processing (e.g. follow-me behaviour)
  • Richer controller for shaping the tracks (self-levelling, changing eye height, stair climbing, maximising contact to improve traction)
  • Detecting / modelling track contact (for climbing over obstacles)
  • Specific behaviour for moving in groups / crowds
  • A horn / speaker
  • Collision detection bumpers
  • Integrate a Kinect
  • Detection of roads / paths / steps / buildings / doorways from visual images
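The visual servoing addition (go-to point in image) can be sketched as a simple proportional law that steers the robot toward a target's horizontal position in the camera image. All names and gains here are hypothetical and would need tuning on the real platform:

```python
def visual_servo_command(target_x_px, image_width_px, gain=1.0, max_omega=0.5):
    """Proportional visual servoing: return an angular velocity command
    (rad/s) that turns toward a target at horizontal pixel position
    target_x_px, assuming positive omega turns toward targets left of
    the image centre. Output is clamped to +/- max_omega."""
    # Normalised horizontal error in [-1, 1]: 0 when target is centred
    error = (image_width_px / 2.0 - target_x_px) / (image_width_px / 2.0)
    omega = gain * error
    return max(-max_omega, min(max_omega, omega))
```

The resulting omega would feed the same velocity interface as the other controllers, with the forward speed set separately (e.g. constant, or reduced as the target nears for a follow-me behaviour).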

Data gathering runs

  • Loop around the campus, checking on accuracy of self-localisation
  • Gathering visual landmarks
  • Building maps (2D + height) of the immediate area
  • Examination of sensor performance - are there key areas of failure (glass doors, wire fences)?
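Checking the accuracy of self-localisation on a campus loop needs a simple error metric between the robot's estimated positions and surveyed ground-truth points. A minimal sketch (names and pairing scheme are assumptions, not an existing tool):

```python
import math

def localisation_error(estimated, surveyed):
    """Root-mean-square position error (metres) between paired lists of
    estimated and surveyed (x, y) positions, e.g. checkpoints logged on
    a loop around the campus."""
    if len(estimated) != len(surveyed):
        raise ValueError("pose lists must be the same length")
    squared = [(ex - sx) ** 2 + (ey - sy) ** 2
               for (ex, ey), (sx, sy) in zip(estimated, surveyed)]
    return math.sqrt(sum(squared) / len(squared))
```

Logging this per-run would show whether error grows with distance travelled and whether it spikes near the problem areas noted above, such as glass doors or wire fences.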