An Investigation into the Photogrammetric Process
Christopher F. Ryan, Falmouth Marine School, UK
Abstract
An investigation into the photogrammetric process was carried out, with comments on accuracy and ease of use. A model of a boat, CNC milled to an accuracy of 0.02 mm, was measured using Rhinophoto 3.0. The point cloud created by the software was compared with the original model sent to the CNC milling machine; the standard deviation (0.34284 mm), mean (0.33378 mm) and median (0.28196 mm) distances of the points from the original model were measured. The point cloud obtained was found to be of moderate to good accuracy and moderate ease of use. A number of potential improvements and recommendations were made.
Introduction
Reverse engineering designs from existing vessels has been common practice for many years. In the past it was essential to be able to produce a lines plan from an existing vessel, as lines were often lofted directly from scale models in the workshop. In more recent times, with the widespread use of CAD (computer aided design) software, a need for easier, faster and more accurate methods has arisen. The scope of this article is to investigate the photogrammetric process and comment on its accuracy and ease of use.

There were two main methods used to take lines from a boat hull before photogrammetry and 3D scanners existed. These were the string triangulation method (referred to simply as the triangulation method by Gardner [1], but called the string triangulation method here to avoid confusion later on) and what will be referred to as the grid method. The methods are as follows.

String triangulation method: The boat is divided into ten stations and these are marked on the hull with tape. Points to be measured at each station are marked on the tape. A line parallel to the vessel is marked on the floor a known distance from the longitudinal centreline of the boat, and the stations are marked on this line (each of these points will be referred to as point B for its corresponding station). A point is marked on a stick (point A). The stick is leant against the vessel at each station, a piece of string is stretched between point A and a point on the hull, and the string is then measured and the measurement recorded. This is repeated for each point on the hull surface. The distances from point B to each of the points marked on the hull are measured in the same way. Other measurements taken at each station are: sheer height, height of keel from the workshop floor, rabbet height, half beam, horizontal distance of point A from the longitudinal centreline of the boat and vertical height of point A from the workshop floor. (Fig 1)
The stem profile shape can be measured using the same method, measuring distances from two datum points as long as their positions are known relative to the rest of the vessel. The stern rake, rocker and half widths of a suitable number of sections of the stern are also recorded. From this data it is possible to draw each of the stations of the vessel and the stem and stern shapes, either in a CAD program or by hand, and to reconstruct a lines plan from these. [1]
Fig 1 – Two variations of the string triangulation method; diagram II shows how a stick leant against the side of the hull can provide better accuracy when measuring hulls with tumblehome.
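The geometry behind the string triangulation method reduces to a two-circle intersection in each station plane. The following Python sketch illustrates this; it assumes points A and B lie in the station plane with known (half-breadth, height) coordinates, and all numbers in the example are hypothetical.

```python
import math

def circle_intersections(c0, r0, c1, r1):
    """Intersect two circles in the station plane (2D).

    c0, c1 are the (half-breadth, height) positions of points A and B;
    r0, r1 are the measured string lengths.  Returns both candidate points.
    """
    (x0, y0), (x1, y1) = c0, c1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        raise ValueError("circles do not intersect - check the measurements")
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(max(r0**2 - a**2, 0.0))
    xm = x0 + a * (x1 - x0) / d   # foot of the perpendicular from the hull point
    ym = y0 + a * (y1 - y0) / d
    return [(xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d),
            (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d)]

# Hypothetical numbers: point A on the stick 1.20 m off the centreline and
# 1.50 m above the floor, point B on the floor 2.00 m off the centreline;
# string lengths of 0.95 m and 1.90 m respectively.
candidates = circle_intersections((1.20, 1.50), 0.95, (2.00, 0.0), 1.90)
hull_point = max(candidates, key=lambda p: p[1])  # one way to pick the physical solution
print(hull_point)
```

Of the two mathematical intersections, the one that falls on the hull side (here simply the higher of the two) is kept.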
Grid method: The grid method uses a vertical and a horizontal arm marked at suitable intervals depending on the size of the boat being measured. The apparatus is positioned at each station and it is checked that the horizontal arm is perpendicular to the centreline and that both arms are level and square to each other. Measurements are taken at each marked interval on both the 'x' and 'y' axes (Fig 2). These measurements can be put into a spreadsheet and then imported into a CAD program. The 'z' axis value is the distance of the station from the forward perpendicular (station zero). The stem and transom shapes can be measured in the same way as described above, or the grid method can be used to measure the stem profile. [1]

Fig 2 – Grid method
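Turning the grid readings into points that a CAD package can import is largely a bookkeeping exercise. The sketch below is one possible layout; the station spacing, column order and file name are assumptions for illustration only.

```python
# Minimal sketch: convert grid-method readings into 3D points for a CAD
# package.  Each reading is (station number, half-breadth, height); the
# longitudinal ('z') coordinate is station number x station spacing.
STATION_SPACING = 0.75  # metres between stations (hypothetical value)

readings = [
    (0, 0.00, 0.90),   # station, half-breadth, height - example values only
    (1, 0.42, 0.35),
    (1, 0.55, 0.60),
    (2, 0.78, 0.30),
]

points = [(x, y, station * STATION_SPACING) for station, x, y in readings]

# Write a simple CSV that most CAD packages can import as a point set
with open("hull_points.csv", "w") as f:
    for x, y, z in points:
        f.write(f"{x:.3f},{y:.3f},{z:.3f}\n")
```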
Photogrammetry
Photogrammetry is a method of measuring a 3D shape using photographs of the object taken from different angles and locations. Many theories and mathematical equations are used in the process of photogrammetry, although the concept is based on perspective geometry and triangulation. One of the main applications of photogrammetry to date has been aerial photogrammetry for topographical purposes; however, as the accuracy has improved it has been used to model objects in the naval architecture and aerospace industries, as well as being explored for use in machine vision.

"Photogrammetry uses the basic principle of Triangulation, whereby intersecting lines in space are used to compute the location of a point in all three dimensions. However, in order to triangulate a set of points one must also know the camera position and aiming angles (together called the orientation) for all the pictures in the set." [2]

When a 2-D image of a 3-D object is produced, the depth is lost. This is due to the image formation process, in which the image is a 2-D projection of a 3-D scene. [3] The 3-D object co-ordinate location of a corresponding point on the image must lie somewhere along the associated line of sight, i.e. if a line were drawn from a point on the object being photographed through the principal point of the camera, it would also pass through the representation of the same point on the image (or the point on the image sensor which will convert the light rays into the image). This is the collinearity condition. [8] From one image alone it is not possible to know the 3-D position of this point. When more than one image of this point is available, however, its 3-D location lies at the intersection of these lines of sight. [3] An example of this is shown in Fig 3.
Fig 3 – Showing the 3D intersection (M) of two homologous image points. [3]

This process is called intersection. However, before it is possible to perform intersection, the position and pose of the camera (together called the exterior orientation), the properties of the camera and lens (interior orientation), and which image points correspond to the same position on the object (provided by the use of coded targets) must be known. [3]

Interior orientation
The camera geometry can be thought of as a pinhole camera (also called a central perspective projection). [4] In reality the geometry is more complex due to lens distortions. The interior orientation of a camera consists of the location of the principal point with respect to the optical axis (Fig 4), the focal length and all of the distortion parameters. [4]
Fig 4 – Camera model showing the difference between theoretical and real principal point and optical axis.
Lens distortions: Distortions are part of a group of lens aberrations [5] which cause a degradation of the image produced by a camera. Lens distortions affect the geometry of the image, whereas the other optical aberrations affect the image quality. Wilson [3] discusses a large number of lens aberrations; however, it is beyond the scope of this article to cover these in detail. The primary lens distortion of interest to photogrammetry is radial distortion. [4] Radial distortion affects the position of an image point, relative to the central axis of the camera, with respect to the ideal image point, i.e. it is the difference between the position of an image point and where that point would be located in a geometrically ideal camera. In simpler terms, it causes the image to bulge out from the centre (barrel distortion) or be pinched in towards the centre (pincushion distortion) (Fig 5). [4]

Fig 5 – Barrel distortion (left) and pincushion distortion (right) [4]
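To make the interior orientation parameters concrete, the sketch below projects a point through an ideal pinhole model and then applies a simple two-coefficient radial distortion of the kind just described. The focal length, principal point and distortion coefficients are illustrative assumptions, not calibration values for the camera used in this study.

```python
def project_with_radial_distortion(point_cam, f, pp, k1, k2):
    """Project a 3D point (camera coordinates, z along the optical axis)
    through a pinhole model, then apply radial distortion of the form
    r_distorted = r_ideal * (1 + k1*r^2 + k2*r^4) about the principal point."""
    x, y, z = point_cam
    xn, yn = x / z, y / z                # ideal (undistorted) image coordinates
    r2 = xn ** 2 + yn ** 2
    scale = 1 + k1 * r2 + k2 * r2 ** 2   # k1 < 0 gives barrel, k1 > 0 pincushion
    u = f * xn * scale + pp[0]
    v = f * yn * scale + pp[1]
    return u, v

# Hypothetical interior orientation: focal length expressed in pixels and a
# principal point slightly off the sensor centre.
print(project_with_radial_distortion((0.10, -0.05, 2.0),
                                      f=4800.0, pp=(2600.0, 1730.0),
                                      k1=-0.05, k2=0.002))
```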
"Interior orientation establishes the bundle of rays (collinearity straight lines). Exterior orientation establishes the position and orientation of the bundle of rays with respect to the object space coordinate system." [6]

The image distortion also varies with focal length and object plane distance. This can cause some problems, as a 3D object of the type that is of interest here is not planar, i.e. it exists in three dimensions, not two; in effect this means the camera lens will distort the image of the object unevenly, as some parts of the object are further from the camera than others. [7] Brown [7] referred to this as variations of distortion throughout the photographic field. This is an important feature of lens distortions that must be taken into account when removing distortions from an image.

Exterior orientation
To compute the 3D positions of coded targets, the orientation of the bundles of rays (collinear lines) must be known. This is defined by six parameters: the x, y and z co-ordinates, which define the spatial location of the camera within the Cartesian co-ordinate system, and the pitch, roll and yaw (rotation around the x, y and z axes), which define the attitude of the camera. Together these parameters are called the pose or orientation. [2] The scale factor also needs to be taken into account so that all images are scaled to the same scale factor. [4]
This process is called resection. [4] If a film camera is used, two scale factors (x and y) must be used to account for uneven distortion of the film during processing and environmental degradation. [4] The equations used to compute the exterior orientation make use of the collinearity, coplanarity and coangularity conditions. [8]

Intersection
"In the case of theodolites, two angles are measured to generate a line from each theodolite. In the case of photogrammetry, it is the two-dimensional (x, y) location of the target on the image that is measured to produce this line. By taking pictures from at least two different locations and measuring the same target in each picture a "line of sight" is developed from each camera location to the target. If the camera location and aiming direction are known (we describe how this is done in Resection), the lines can be mathematically intersected to produce the XYZ coordinates of each targeted point." [2]

Coded Targets
Coded targets allow the software to locate homologous image points (points that correspond to the same object coordinates) with much greater accuracy than using natural features of the object such as edge intersections. This also eliminates the need for manual selection of the points whose object co-ordinates are to be computed, speeding up the process and reducing its cost. Circular targets are used because they give improved accuracy in determining the centroid location compared with rectangular targets. [9]
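As a numerical illustration of the intersection step, the sketch below triangulates a point from two lines of sight whose camera centres and directions are already known (i.e. after resection). The least-squares answer used here is the midpoint of the shortest segment joining the two rays; the camera positions and directions in the example are made up.

```python
import numpy as np

def intersect_rays(c1, d1, c2, d2):
    """Return the 3D point closest to two lines of sight.

    c1, c2 : camera centres; d1, d2 : direction vectors of the rays.
    When the rays do not meet exactly, the midpoint of the shortest
    segment joining them is returned (the usual least-squares answer).
    """
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    # Solve for the ray parameters t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2.0

# Hypothetical example: two camera stations looking at the same target.
print(intersect_rays([0.0, 0.0, 0.0], [0.6, 0.0, 0.8],
                     [1.0, 0.0, 0.0], [-0.6, 0.0, 0.8]))
# -> approximately [0.5, 0.0, 0.667]
```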
Equipment
The only equipment required to perform photogrammetry is a digital camera (5 megapixels or above is recommended) [3], a piece of photogrammetry software, some coded targets produced by the software and printed out, and a computer with sufficient processing power to run the chosen photogrammetry software. Herein lies the main benefit of photogrammetry over laser-based and hand measurement techniques: the equipment needed is normally cheap and readily available, and the majority of people already own it. In this case the equipment used was:
Canon EOS 7D camera with a 24 mm lens
Canon i865 printer
Double-sided sticky tape
Scalpel (to cut out targets)
Latex gloves (to prevent smudging of the targets)
Method
A 100:4 scale model of a Cornish lugger was CNC milled in aluminium to a tolerance of 20 microns (0.02 mm). Rhinophoto was used to create suitably sized coded and uncoded targets, and these were printed on 160 g/m2 card. A calibration grid was created using Rhinophoto, printed on A4 paper and taped to a flat piece of glass. Four photos of the calibration grid were taken, with a 90 degree rotation around the camera axis between photos, and the camera was calibrated with Rhinophoto. The targets were positioned on the model boat with uncoded target strips running around all edges, along an approximate waterline and so as to create a number of longitudinal sections. The coded targets were positioned in the gaps created by the grid of uncoded target strips. A set of 46 photos was taken of the model. Rhinophoto was used to find the 2D target locations, and any unread targets were added manually. The 3D positions were then computed using Rhinophoto. The point cloud created from the coded and uncoded targets was compared with the CAD/CAM model used to create the model boat hull.
Results
The results were obtained by comparing the original model with the point cloud produced by Rhinophoto (Fig 6).
Standard deviation = 0.34284 mm
Mean distance between the points and the original surface = 0.33378 mm
Median distance between the points and the original surface = 0.28196 mm
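For reference, statistics of this kind can be reproduced from an array of point-to-surface distances with a few lines of code; the deviation values below are placeholders, not the measured data.

```python
import numpy as np

# Placeholder per-point deviations (mm) between the photogrammetry point
# cloud and the original CAD surface; in practice these would come from a
# closest-point query against the milled model's surface.
deviations = np.array([0.12, 0.28, 0.31, 0.45, 0.07, 0.52, 0.33])

print(f"standard deviation = {deviations.std(ddof=1):.5f} mm")
print(f"mean distance      = {deviations.mean():.5f} mm")
print(f"median distance    = {np.median(deviations):.5f} mm")
```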
Fig 6 – The point set deviation in Rhino. Point colours range from red (large deviation) to blue (on the surface).
Discussion
It can be seen that a large number of the points are inaccurate; however, the general boat shape was obtained. Several observations can be made regarding accuracy:
- The centres of the uncoded targets used to mark the edges of the model were not on the edge of the model but a few millimetres inside it. A line could be offset from these points by the distance that they sat inside the edge to produce a curve of the true edge.
- A large number of points were not successfully modelled, especially around the keel and at either end of the boat.
- The thickness of the targets means the computed points are not in their true positions.
- Orienting the point cloud correctly against the CAM model was difficult, and any misalignment would have produced inaccurate results.
Some problems were encountered when calibrating the camera and also when taking the photos. These were:
- It was difficult to fix the calibration grid to a completely flat surface. This may have resulted in inaccurate calibration and could account for some of the inaccuracies obtained.
- Taking photographs in flat light, i.e. outside on an overcast day, was found to be best. When flash was used indoors it produced glare from the aluminium hull and resulted in the program being unable to locate the coded and uncoded targets properly.
- Some areas, such as the bow and stern, were not modelled properly, so it would have been better to take more photographs of these and add them in later. Scale reference bars would make the point cloud easier to scale and orient; they would also mean that extra detail photos could be used to add correctly scaled points at the bow and stern.
- It was found to be difficult working with small targets; the process would be much quicker and easier on a large object.
Conclusion
Photogrammetry is a useful method of producing accurate 3D models of complex geometry. It is far easier than the older hand measurement methods and much quicker once the software has been mastered. The drawbacks of the technique are that it does not work particularly well with reflective surfaces, it is very dependent on lighting conditions, it takes a large amount of time to manually add coded and uncoded points if the lighting conditions in the photographs were less than ideal, and it takes a few hours of experimentation and testing to really get to grips with the software. Its advantages are that it is cheap, all of the equipment needed is readily available, it is by far the most accurate low-cost measuring method, it is less confusing to perform in practice than the hand measurement methods, and it can be performed in very tight spaces and on objects of nearly any size. The accuracy over a model of this size was acceptable for most applications; however, if greater accuracy is needed photogrammetry is not recommended. By far the most accurate methods of measuring complex geometry are laser scanning and CMM machines, but these incur enormous purchase and operating costs and are therefore out of reach of most small and medium sized boat yards and design consultancies. It is for this reason that I would recommend photogrammetry for naval architecture applications, and to any individual or business on a low budget, as the most practical, affordable and accurate way of measuring boats.
Recommendations
I have a number of recommendations and improvements to make.
- I would use arrow targets to define edges rather than uncoded target strips; however, some inaccuracy is still incurred with arrow targets, as the tip position is calculated at the distance it would be if the target were lying on a flat surface.
- Scale reference bars would be used to define the actual scale of the model; this would make it possible to add extra scenes to the point cloud if, for example, the bow and stern were not modelled properly the first time around.
- If a reflective surface were being measured, a non-reflective coating should be applied and the flash should be switched off. Flat lighting would be best in this situation, as previously described.
- Calibration would be performed with a piece of sticky-backed plastic that could be fixed to a piece of glass without bulging in the centre.
- The target thickness command in Rhinophoto could be used to improve the accuracy of the point locations.
- I would recommend taking photos along an ellipse that runs the length of the object and is larger than it. This ensures good differences between the angles of the photographs, and fewer photographs will be rejected by the software, as sketched below.
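As a simple planning aid for the last recommendation, camera stations can be generated around an ellipse that encloses the object; the object dimensions, clearance and number of stations below are arbitrary examples.

```python
import math

def camera_stations(obj_length, obj_width, clearance, n_stations, height):
    """Camera positions spaced around an ellipse enclosing the object."""
    a = obj_length / 2 + clearance   # semi-axis along the object's length
    b = obj_width / 2 + clearance    # semi-axis across the object
    return [(a * math.cos(2 * math.pi * i / n_stations),
             b * math.sin(2 * math.pi * i / n_stations),
             height)
            for i in range(n_stations)]

# Hypothetical: a model 0.8 m long and 0.2 m wide, 0.5 m clearance, 24 photos
# taken from 0.4 m above the baseline.
for x, y, z in camera_stations(0.8, 0.2, 0.5, 24, 0.4):
    print(f"{x:+.2f}, {y:+.2f}, {z:+.2f}")
```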
Acknowledgements
I would like to acknowledge Dylan from Longshores Precision Instruments Ltd, who provided me with the CNC milled object at a student rate and at very short notice. I would also like to acknowledge Phillipe Bouchot from Qualup SAS for support with the Rhinophoto software.
Nomenclature
CAD – Computer aided design
CAM – Computer aided manufacture
PP – Principal point
References:
1. Gardner J. 1996. Wooden Boats to Build and Use. Mystic Seaport Museum, Mystic, Connecticut. 261pp.
2. Geodetic Systems Inc. 1996. The Basics of Photogrammetry. [online] Available at: http://www.geodetic.com/Whatis.htm. Accessed 25/03/2011.
3. Pollefeys M. 2002. Visual 3D Modeling from Images. [online] Available at: http://www.cs.unc.edu/~marc/tutorial/tutorial02.html. Accessed 02/04/2011.
4. Ghosh S. 2005. Fundamentals of Computational Photogrammetry. Concept Publishing Company, New Delhi, India. 249pp.
5. Jedlička J, Potůčková M. Date unknown. Correction of Radial Distortion in Digital Images. Charles University in Prague, Faculty of Science. [online] Available at: http://dsp.vscht.cz/konference_matlab/MATLAB07/prispevky/jedlicka_potuckova/jedlicka_potuckova.pdf. Accessed 02/04/2011.
6. Ackermann S, Menna F, Scamardella A, Troisi S. Date unknown. Digital Photogrammetry for High Precision 3D Measurements in Shipbuilding Field. University of Naples, Department of Applied Sciences. [online] Available at: http://dsa.uniparthenope.it/dsa_web/LinkClick.aspx?fileticket=nVdk_a99YOE%3D&tabid=171&mid=811&language=it-IT. Accessed 28/03/2011.
7. Brown DC. 1971. Close Range Camera Calibration. Photogrammetric Engineering 37: 855-866.
8. Grussenmeyer P, Al Khalil O. 2002. Solutions to Exterior Orientation in Photogrammetry: A Review. The Photogrammetric Record, an International Journal of Photogrammetry 17: 615-634.
9. Otepka JO, Fraser CS. 2004. Accuracy Enhancement of Vision Metrology Through Automatic Target Plane Determination. Proceedings of the ISPRS Congress 2004, Volume XXXV, Part B: 873-879.