Background
Drones and small unmanned aircraft are one of the fastest-growing fields today. We have therefore decided to use them in our project, combined with an on-board camera and computer vision, to build an autonomous drone that navigates on its own in an open space.
Our Goal
Our goal is to build a system that autonomously navigates the drone inside a building, while presenting the drone's (X, Y, Z) coordinates inside the building, relative to the takeoff spot.
More about the project
The system will be based on feature extraction from the drone's built-in camera feed. Predefined features will be placed in the area we wish to cover.
After feature extraction, geometric methods will be used to determine the drone's relative position (X, Y, Z). We will then use those coordinates, along with the drone's yaw, pitch, and roll angles, to direct the drone to its final destination.
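The steering step described above can be sketched as follows. `yawToTarget` is a hypothetical helper (not from the project code): given an estimated position and a target waypoint in the same floor-plane frame, it computes the yaw angle the drone should face so that flying forward moves it toward the target.

```cpp
#include <cmath>

// Hypothetical helper (not from the project code): given the drone's
// estimated position (x, y) and a target waypoint (tx, ty) in the same
// floor-plane frame, compute the yaw angle the drone should face so that
// flying forward moves it toward the target.
double yawToTarget(double x, double y, double tx, double ty) {
    return std::atan2(ty - y, tx - x);  // radians, measured from the +X axis
}
```

The difference between this desired yaw and the yaw reported by the drone's sensors would then drive the turn command.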
The destination points and the features will be predefined.
The geometric methods we will use include 3D pose-estimation methods, such as solutions to the Perspective-n-Point (PnP) problem, among other known techniques.
The platform we use for this system is ROS (Robot Operating System), a framework widely used in the robotics field.
Our code is mostly written in C++, using the OpenCV library.
The drone we are using is the Parrot AR.Drone 2.0, which has an on-board HD camera.
A comparison between our geometric method of estimating distance and the PnP algorithm:
The data presented was taken from a test in which the drone was standing still in front of a chessboard.
As seen in the graph, the PnP method is unstable and gives inaccurate results, while our method gives steady results.
For this reason, our estimation of the drone's position in space was based on our method alone.
Copyright © 2014-2015 Soliman Nasser, Hilal Diab and Ibrahim Jubran. A 3D Photography Project, directed by and submitted to Prof. Hagit Hel-Or.