Abstract
This paper presents the application of computer vision and machine learning to autonomous approach, landing, and taxiing for an air vehicle. Interest in developing unmanned aircraft systems (UAS) has grown considerably in recent years. We present a system and method that uses pattern recognition to aid the landing of a UAS and to enhance the landing of a human-crewed air vehicle. Auto-landing systems based on the Instrument Landing System (ILS) have proven their value over decades of service; such systems work in conjunction with a radio altimeter and ILS, MLS, or GNSS guidance. Closer to the runway, under both VFR and IFR, pilots are expected to rely on visual references for landing. Modern systems such as head-up displays (HUD) or combined vision systems (CVS) allow a trained pilot to fly the aircraft manually using guidance cues from the flight guidance system. Regardless of the type of landing and the instruments used, pilots are typically expected to have the runway threshold markings, aiming point, displaced threshold arrows, and touchdown zone markings/lights in sight before reaching the Minimum Descent Altitude (MDA). Imaging sensors are standard equipment on both crewed and uncrewed aerial vehicles and are widely used during the landing maneuver. In our method, a dataset of visual objects extracted from satellite images is used to train a pattern recognition model. The trained system then identifies and locates important visual references in imaging-sensor data and can assist in landing and taxiing.