Nsocial Govtech Challenge 10: How to Automatically Log Plane Movements in the Airport?


Nsocial enriched.immersive.experimental.

How To Automatically Log Plane Movements In The Airport?


We will use the airport’s existing cameras together with our software. The cameras are our resources for capturing data. Using AI, Machine Learning and Deep Learning, we will focus on image tracking and recognition, together with the cameras’ focal lengths, to automate the tracking of plane movements and procedures from the visual material the airport’s available cameras already provide. In this project we will use Object Detection / Recognition, Pattern Recognition, Image Segmentation and Triangulation, with the triangulation step supported by AI, Machine Learning and Deep Learning.

We will first recognize the objects in the camera feeds and segment them. The camera focal lengths give us the source of the measurement data. We will then use triangulation to measure distances and speeds. All of these steps will be supported by AI, Machine Learning and Deep Learning. We will record and process the data using:

• Object Detection / Recognition
• Pattern Recognition
• Image Segmentation
• Triangulation
• AI, Machine Learning and Deep Learning

Finally, we will visualize the data and show it on dashboards.
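As a minimal sketch of how these steps could be wired together, here is the pipeline in Python. Every function name below is a hypothetical placeholder standing in for a real component, not an existing API:

# Minimal pipeline sketch; all functions are hypothetical placeholders.

def detect_objects(frame):
    # Object detection / recognition: return labeled bounding boxes.
    return [{"label": "aircraft", "box": (120, 80, 640, 360), "score": 0.97}]

def segment_objects(frame, detections):
    # Image segmentation: return a pixel mask per detected object.
    return [{"label": d["label"], "mask": None} for d in detections]

def triangulate(detections_cam1, detections_cam2):
    # Triangulation: estimate distances from two camera views.
    return [{"label": "aircraft", "distance_m": 412.0}]

def log_movement(track):
    # Record the movement event for the dashboards.
    print(f"{track['label']} at {track['distance_m']:.1f} m")

# One iteration: two synchronized camera frames in, one logged movement out.
frame_cam1 = frame_cam2 = None  # stand-ins for real camera frames
d1 = detect_objects(frame_cam1)
d2 = detect_objects(frame_cam2)
segment_objects(frame_cam1, d1)
for track in triangulate(d1, d2):
    log_movement(track)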


Object Detection / Recognition

Object detection is a computer vision technique that allows us to identify and locate objects in an image or video. With this kind of identification and localization, object detection can be used to count the objects in a scene and determine and track their precise locations, all while accurately labeling them. Specifically, object detection draws bounding boxes around the detected objects, which lets us locate where those objects are in a given scene and how they move through it. Whereas image recognition only assigns a label to an entire image, object detection draws a box around each object, for example each aircraft, and labels the box “aircraft”; the model predicts both where each object is and which label applies. In that way, object detection provides more information about an image than recognition alone. Object detection is closely linked to similar computer vision techniques such as image recognition and image segmentation, in that it helps us understand and analyze scenes in images or video.
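To illustrate the bounding-box output described above, here is a minimal detection sketch in Python. The choice of a pretrained Faster R-CNN from torchvision and the input file name are assumptions for illustration; the document does not name a specific detector:

# Detect aircraft in a single camera frame with a pretrained Faster R-CNN.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("apron_camera_frame.jpg").convert("RGB")  # hypothetical frame
with torch.no_grad():
    out = model([to_tensor(image)])[0]

# Each detection is a bounding box, a COCO class label and a confidence score.
for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
    if score > 0.8 and label.item() == 5:  # COCO class 5 is "airplane"
        x1, y1, x2, y2 = box.tolist()
        print(f"aircraft at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), score {score:.2f}")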



Object detection can be broken down into machine learning-based and deep learning-based approaches. In machine learning-based approaches, computer vision techniques are used to look at various features of an image, such as the color histogram or edges, to identify groups of pixels that may belong to an object. These features are then fed into a regression model that predicts the location of the object along with its label. Deep learning-based approaches employ convolutional neural networks (CNNs) to perform end-to-end object detection, in which features do not need to be defined and extracted by hand. After object detection, we will implement image segmentation.

Image segmentation: our technology can differentiate between settlements, water and woods in satellite images, locate tumors in 3D tomograms, or find cars in real-time images in order to estimate their velocity.
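As one possible illustration of segmentation after detection, here is a sketch using a pretrained Mask R-CNN from torchvision, a CNN that returns a pixel mask for each detected object. The model choice and file name are assumptions, not the project’s actual stack:

# Segment detected objects with a pretrained Mask R-CNN.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("runway_frame.jpg").convert("RGB")  # hypothetical frame
with torch.no_grad():
    out = model([to_tensor(image)])[0]

# out["masks"] holds one soft mask per detection; thresholding it yields the
# exact set of pixels belonging to each object, not just its bounding box.
for mask, score in zip(out["masks"], out["scores"]):
    if score > 0.8:
        pixels = (mask[0] > 0.5).sum().item()
        print(f"segmented object covering {pixels} pixels")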


After object detection and image segmentation, the next step will be pattern recognition and pattern classification for the runways. Pattern recognition: we develop algorithms to find target objects in images. This could range from finding airport runways and planes in satellite images to finding a 3 cm bolt on that runway.
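As a small, classical pattern-recognition sketch for this kind of search, template matching with OpenCV can locate a known small object in a runway image. The file names and threshold below are illustrative assumptions:

# Find a known small object (e.g., a bolt) in a runway image by
# normalized cross-correlation template matching.
import cv2

runway = cv2.imread("runway.jpg", cv2.IMREAD_GRAYSCALE)
bolt = cv2.imread("bolt_template.jpg", cv2.IMREAD_GRAYSCALE)

# Slide the template over the image and score every position.
result = cv2.matchTemplate(runway, bolt, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:  # correlation threshold; tune per camera and lighting
    h, w = bolt.shape
    x, y = max_loc
    print(f"possible bolt at ({x}, {y})-({x + w}, {y + h}), score {max_val:.2f}")
else:
    print("no match above threshold")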

Pattern classification: our technology can monitor people and detect the moment at which a person watching an advertisement gets bored or feels joy, or decide whether or not a bolt on a runway threatens the safety of a flight. Any material that should not be present on an airport taxiway, ramp, runway or airfield is classified as Foreign Object Debris (FOD) and must be removed to increase the safety of operations and prevent aircraft damage.

Passenger and flight safety is our main priority. A foreign object debris detection system tracks even the smallest objects in any weather condition at an airport. It provides a comprehensive set of surveillance and analysis tools for complete management of the runways.


Summary of Triangulation

We will use triangle similarity and the cameras to identify the distance of a defined object. Triangle similarity works like this:

• Imagine an object with a known width W.
• We place this object at a distance D from our camera.
• We capture a photo of the object with our camera and measure its apparent width in the image as P pixels.
• This allows us to derive the perceived focal length F of our camera:

F = (P x D) / W


For example, we place a standard 8.5 x 11 inch sheet in front of the camera, held horizontally, so the known width is W = 11 inches. Imagine it is placed at D = 24 inches and a photo is captured. When we measure the width of the sheet in the photo, its perceived width is P = 248 pixels.

Our focal length F then follows from the formula: F = (248 pixels x 24 inches) / 11 inches ≈ 541.1.

As we then move the object closer to or farther from the camera, we can apply triangle similarity in reverse to determine the distance of the object from the camera: D' = (W x F) / P


Again, to make this more concrete, let's say we move the camera 3 feet (36 inches) from the sheet and take a photo of the same object. Through automatic image processing we determine that the perceived width of the object is now 170 pixels. Plugging this into the equation, we get: D' = (11 inches x 541.1) / 170 pixels ≈ 35 inches, within an inch of the true distance of 36 inches (3 feet).
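The worked example above translates directly into a few lines of Python:

# Triangle similarity: calibrate the focal length once, then estimate distance.

def focal_length(perceived_width_px, known_distance, known_width):
    # F = (P x D) / W
    return (perceived_width_px * known_distance) / known_width

def distance_to_object(known_width, focal_len, perceived_width_px):
    # D' = (W x F) / P
    return (known_width * focal_len) / perceived_width_px

F = focal_length(248, 24, 11)       # about 541.1
D = distance_to_object(11, F, 170)  # about 35.0 inches
print(f"F = {F:.1f}, D = {D:.1f} inches")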

Later, we will feed the measurements from several cameras looking at the same point into a single formula and obtain the real distance. For example, with measurements from two cameras we minimize the reprojection error:

\min_{X} \sum_{j=1,2} \left( u_j - \frac{P_j^{1\top}\tilde{X}}{P_j^{3\top}\tilde{X}} \right)^2 + \left( v_j - \frac{P_j^{2\top}\tilde{X}}{P_j^{3\top}\tilde{X}} \right)^2

Here j is the index of each camera, (u_j, v_j) are the pixel coordinates of the point as seen by camera j, \tilde{X} is the homogeneous representation of the 3D point X, and P_j^{i\top} is the i-th row of camera j's projection matrix. The first estimate of the solution, X_0, is obtained through linear triangulation and is then refined to minimize this cost function.
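As a hedged sketch of the linear triangulation step, OpenCV's cv2.triangulatePoints does exactly this: given each camera's 3x4 projection matrix and the matched pixel coordinates of the same point, it returns the homogeneous 3D point. All numeric values below are illustrative placeholders, not real calibration data:

# Linear (DLT) triangulation of one point seen by two calibrated cameras.
import cv2
import numpy as np

# 3x4 projection matrices of the two cameras (placeholder values).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Matched pixel coordinates (u_j, v_j) of the same point, as 2xN arrays.
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[300.0], [240.0]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                   # back to 3D coordinates
print(f"triangulated point: {X}")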


Thank you. If you have any questions, please contact us!

Lithuania: Vilnius TechPark Antakalnio St. 17, Vilnius, LT-10312, Lithuania

USA: Middletown, 600 N Broad Street Suite 5 #922, Delaware, USA

Turkey: INEO Innovation Center, Kadir Has Üniversitesi, Kadir Has Cd., 34083 Cıbali/İstanbul

+370 617 672 03 , +1 4158942552, +90 850 346 7074

www.nsocialtr.com hello@nsocialtr.com

