Odometry can be achieved for wheeled robots by counting the revolutions of each wheel.
The same does not hold for legged robots, because of their unpredictable, intermittent
contact interactions with the ground.
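To make the wheeled baseline concrete, here is a minimal dead-reckoning sketch for a differential-drive robot; the encoder resolution, wheel radius, and wheel separation are illustrative values, not parameters from this project.

```python
import math

def wheel_odometry(x, y, theta, ticks_l, ticks_r,
                   ticks_per_rev=360, wheel_radius=0.05, wheel_base=0.3):
    """Update a 2-D pose estimate from left/right encoder tick counts.

    All geometry parameters are hypothetical example values.
    """
    # Arc length rolled by each wheel since the last update
    d_l = 2 * math.pi * wheel_radius * ticks_l / ticks_per_rev
    d_r = 2 * math.pi * wheel_radius * ticks_r / ticks_per_rev
    d = (d_l + d_r) / 2.0               # forward displacement
    dtheta = (d_r - d_l) / wheel_base   # heading change
    # Integrate along the mean heading over the interval
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta
```

On firm ground this accumulates pose with bounded per-step error; the legged case breaks the assumption that wheel (or leg) motion maps directly to body displacement, which is what motivates the vision-based approach below.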
The goal of this project is to study the feasibility of vision-based odometry for a highly mobile
legged robot navigating unstructured environments.
- Which classes of computer vision algorithms yield the best odometry accuracy?
- Can vision be augmented with other sensor types (e.g., accelerometers) to achieve full-body
pose estimation? How do these approaches compare with those used by animals?
- What types of walking gaits best suit the vision algorithms?
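One common way to combine vision with an accelerometer is a complementary filter: high-rate inertial dead reckoning corrected by lower-rate, drift-free vision fixes. The 1-D sketch below is illustrative only; the class name, gain, and update structure are assumptions, not part of this project's design.

```python
class ComplementaryFuser:
    """1-D complementary filter sketch (hypothetical names and gains):
    accelerometer integration predicts at high rate and drifts;
    vision position fixes pull the estimate back at low rate."""

    def __init__(self, alpha=0.95):
        self.alpha = alpha  # weight given to the inertial prediction
        self.pos = 0.0
        self.vel = 0.0

    def predict(self, accel, dt):
        # Integrate acceleration -> velocity -> position (drifts quadratically)
        self.vel += accel * dt
        self.pos += self.vel * dt

    def correct(self, vision_pos):
        # Blend the drifting inertial estimate toward the vision measurement
        self.pos = self.alpha * self.pos + (1.0 - self.alpha) * vision_pos
```

In practice the inertial prediction can run at the IMU rate (hundreds of Hz) while `correct` runs only when a vision estimate arrives, which is one reason the choice of gait matters: gaits that keep the camera steadier give the vision channel more usable corrections.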