Vision-Based SLAM Algorithm for Quadcopter

Authors

  • B Anbarasu, Arjun S, Kaushik G Raman, Manikanta K, Stanislaus John

Abstract

This paper presents a Vision-Based SLAM (Simultaneous Localization and Mapping) algorithm that efficiently builds a 3D map of the environment in real time and estimates the pose of the agent in real-world settings. The algorithm uses images acquired from cameras and other image sensors. Visual SLAM can use simple cameras (wide-angle and spherical cameras), compound-eye cameras (stereo and multi-camera rigs), and RGB-D cameras (depth and ToF cameras). The algorithm processes these images to build a map and localize the vehicle continuously, and it also allows the vehicle to map previously unknown environments. The resulting map data is used for tasks such as path planning and obstacle avoidance. Intensive testing of four different feature extractors (SIFT, SURF, Shi-Tomasi, and ORB) was undertaken to determine which is best suited for indoor/outdoor environments. The results were verified and led to the adoption of ORB-SLAM, because ORB is a good alternative to SIFT and SURF in terms of computational cost, matching performance, accuracy, and fast keypoint detection.
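The following is a minimal sketch, not the authors' implementation, of the kind of ORB keypoint detection and matching the abstract refers to, using OpenCV in Python. The image filenames ("frame1.png", "frame2.png") and parameter values are illustrative assumptions; in a visual SLAM pipeline the resulting correspondences would feed the pose-estimation and mapping stages.

```python
# Minimal sketch (assumed example, not the paper's code): ORB keypoint
# detection and matching between two consecutive camera frames.
# Assumes two grayscale test images, "frame1.png" and "frame2.png",
# are present in the working directory.
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# ORB produces binary descriptors that are cheap to compute and match
# compared with SIFT/SURF, which motivates its use for real-time SLAM.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matcher suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Keep only the strongest matches for downstream pose estimation.
good = matches[:100]
print(f"keypoints: {len(kp1)} / {len(kp2)}, matches kept: {len(good)}")

vis = cv2.drawMatches(img1, kp1, img2, kp2, good, None,
                      flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv2.imwrite("orb_matches.png", vis)
```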

Published

2021-06-03

How to Cite

B Anbarasu, Arjun S, Kaushik G Raman, Manikanta K, Stanislaus John. (2021). Vision-Based SLAM Algorithm for Quadcopter. International Journal of Modern Agriculture, 10(2), 4160-4164. Retrieved from http://www.modern-journals.com/index.php/ijma/article/view/1306

Section

Articles