Imaging
Systems
Keith Agnello
Contents

Chapter 1: Image Fundamentals
    Histogram
    Resolution
    Tonal Resolution
    Spectral Resolution
    File Size/Format
    L*a*b*

Chapter 2: Image Input
    Imaging Pipeline
    Sensors
    Color Filter Array
    Scanners

Chapter 3: Manipulation
    Sharpening
    Median Filters

Chapter 4: Image Output
    Dots, Lines & Pixels
    Printers
    Displays

Chapter 5: Special Topics
    LiDAR

Credits
Chapter One
Image Fundamentals
Histogram
The histogram is the fundamental tool for the digital photographer. Before an image is even analyzed on screen in post processing, the histogram on the camera's display can reveal information about the exposure, so the photographer can quickly compensate. It is drawn as a graph: the vertical axis represents the number of pixels, out of the total defined by the sensor's spatial resolution, that hold a given value. The horizontal axis represents the possible pixel values, determined by the sensor's bit depth. For an 8-bit sensor, the pixel values on the horizontal axis range from 0 to 255, left to right.
When post processing an image, the histogram can be adjusted in one area or globally. The pixel values will be adjusted accordingly to affect the brightness, exposure, and contrast of an image. Adjustments to the Histogram can be simplified to Sliding & Stretching.
In digital photography it is recommended to "Expose to the Right." In other words, expose for the highlights on the right side of the histogram graph. In analog capture, a photographer would typically expose for the shadows and adjust the highlights when printing in the darkroom.
Clipping is a term used to describe lost shadow or highlight detail. It is identifiable when the histogram’s peaks are at the very edges of the graph.
Sliding adds or subtracts pixel values in the histogram. Any time brightness is adjusted, sliding is the manual equivalent. Stretching multiplies or divides the pixel values, directly affecting the image contrast, or the difference between black & white.
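Sliding and stretching can be sketched in a few lines of Python. This is an illustrative sketch, not an excerpt from any editing software; the function names, the clamp helper, and the pivot value of 128 are assumptions for the example.

```python
def clamp(v):
    # Keep values in the 8-bit range 0-255.
    return max(0, min(255, v))

def slide(pixels, offset):
    # Sliding adds (or subtracts) a constant: a brightness adjustment.
    return [clamp(p + offset) for p in pixels]

def stretch(pixels, factor, pivot=128):
    # Stretching multiplies values about a pivot: a contrast adjustment.
    return [clamp(round(pivot + (p - pivot) * factor)) for p in pixels]

pixels = [0, 64, 128, 192, 255]
print(slide(pixels, 50))      # everything shifts right (brighter), clipping at 255
print(stretch(pixels, 1.5))   # values spread away from the pivot (more contrast)
```

Note how sliding pushes values off the end of the range (clipping), while stretching drives the extremes toward 0 and 255, just as the histogram shapes in this chapter show.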
[Figures: a clipped histogram; the original capture compared with +50 brightness and +50 contrast adjustments.]
Resolution
A photographer's biggest concern with image quality in a digital photograph revolves around resolution. There are four types of resolution that can apply.

Spatial Resolution is measured in relation to the size of the digital sensor: the x and y dimensions of the image define the number of pixels it contains. Spatial resolution is especially important when reproducing text as opposed to images, as text requires higher spatial resolution.

Tonal Resolution refers to the number of gray levels the image can represent. It defines the depth of the image's dynamic range, depending on the bit depth. Tonal resolution is pivotal in the reproduction of images.
Spectral Resolution is all about color. Typically, when calculating an image's file size, you multiply by the number of color channels the image contains. RGB (red, green, blue), for example, is an additive color space with three channels.

Temporal Resolution is directly related to time; it is used with digital video regarding frame rate and does not apply to still photographic images.

When increasing or decreasing image size, spatial resolution needs to be adjusted. This can be accomplished by pixel elimination, replication, or interpolation, which uses known pixel data to estimate values at unknown points.
Interpolation
Nearest Neighbor is the most simplified interpolation algorithm. It works by copying the known pixel value into the surrounding new pixels, essentially making a bigger pixel. Analyzing the picture at right, you can see a much harsher reproduction: the bigger pixel, though successfully increasing the size of the image, looks more pixelated.
Bilinear Interpolation is an algorithm that considers the 2x2 area surrounding a pixel. It takes the average of these 4 surrounding pixels to find its interpolated value. Bilinear is substantially better quality than nearest neighbor with regard to smooth lines and reduced pixelation. However, it is slightly slower than nearest neighbor.
Bicubic Interpolation accomplishes essentially the same thing, only it uses the data of the neighboring 4x4 area, calculating an interpolated value from 16 pixels. In theory, it is a better reproduction than bilinear, but this is subjective. It is also slightly slower. Understanding both methods and selecting based on image content is ideal.
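The difference between copying pixels and averaging them can be shown in a small sketch. This is a simplified illustration (whole-number scale factors only, no edge handling); the function names and sample values are assumptions, not from the text.

```python
def nearest_neighbor_upscale(img, factor):
    # Each source pixel becomes a factor x factor block of identical pixels,
    # i.e. "a bigger pixel."
    out = []
    for row in img:
        scaled = []
        for p in row:
            scaled.extend([p] * factor)
        for _ in range(factor):
            out.append(scaled[:])
    return out

def lerp(a, b, t):
    # Linear interpolation between two known values, 0 <= t <= 1.
    return a + (b - a) * t

def bilinear_sample(img, x, y):
    # Weighted average of the 2x2 neighborhood around the point (x, y).
    x0, y0 = int(x), int(y)
    tx, ty = x - x0, y - y0
    top = lerp(img[y0][x0], img[y0][x0 + 1], tx)
    bottom = lerp(img[y0 + 1][x0], img[y0 + 1][x0 + 1], tx)
    return lerp(top, bottom, ty)

img = [[10, 20],
       [30, 40]]
print(nearest_neighbor_upscale(img, 2))   # blocks of repeated values
print(bilinear_sample(img, 0.5, 0.5))     # midpoint average of all four: 25.0
```

Nearest neighbor preserves hard blocks (hence the pixelated look), while the bilinear sample produces an in-between value, which is what smooths edges in the resized image.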
Tonal Resolution
Bit depth is what defines the tonal resolution of an image. The greater the bit depth, the greater the number of gray or brightness levels. The number of levels is 2 raised to the number of bits. For example, an image with a bit depth of 3 can reproduce 8 brightness levels: 2^3 = 2 x 2 x 2 = 8.

2 bit image = 2^2 = 4 levels
3 bit image = 2^3 = 8 levels
4 bit image = 2^4 = 16 levels
5 bit image = 2^5 = 32 levels
6 bit image = 2^6 = 64 levels
7 bit image = 2^7 = 128 levels
8 bit image = 2^8 = 256 levels*

In reference to the image at right, once the bit depth gets to around 6 or 7 bits, the tonal reproduction is acceptable for skin tones and color. This is why most printers do not even print up to 8 bits. The correct bit depth depends on image content and the quality of tonal reproduction desired.

*An 8 bit image has 256 levels, but because 0 counts as a level itself, the pixel values range from 0 to 255.
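The doubling of levels with each added bit can be verified with a short loop; this snippet just restates the table above in code.

```python
# Number of distinct brightness levels for each bit depth.
for bits in range(2, 9):
    levels = 2 ** bits
    print(f"{bits} bit image = 2^{bits} = {levels} levels (values 0-{levels - 1})")
```

Running this reproduces the list in the text, ending with 8 bits giving 256 levels whose values run 0 through 255.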
[Figures: the same image reproduced at bit depths from 2 bit (4 levels) through 8 bit (256 levels).]
Spectral Resolution
Color can be reproduced using two different color schemes. RGB (red, green, blue) is an additive color scheme. It is used for screen display reproduction: monitors, projectors, etc. When maximum, equal amounts of Red, Green, and Blue are present, the resulting hue is white. Equal amounts of RGB at lower values make all shades of gray. Combining two primaries yields a secondary; for example, Red + Green = Yellow.
CMYK (Cyan, Magenta, Yellow, Black) is a subtractive color scheme. This is used extensively in the field of print media. K, or Black, is easier to print than white, so this scheme is just as effective as RGB, only used in a different context. As with RGB, when maximum, equal amounts of Cyan, Magenta, and Yellow are present, the resulting hue is black.
Green + Blue = Cyan; Blue + Red = Magenta.

[Figure, Color Schemes: the separated Red, Green, and Blue channels alongside the Cyan, Magenta, and Yellow channels of the same image.]
The additive color scheme, since it is intended for projection or reproduction on a monitor, has its dark tones included in all renditions of RGB.

CMYK is intended for a secondary print medium; as a subtractive color scheme, it needs a K (Black) channel to provide proper density in the image. Notice how in each of the three color channels, the density is reduced consistently.
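The additive/subtractive relationship can be demonstrated numerically. This is a simplified sketch using 8-bit values; it converts RGB to its subtractive CMY complement and ignores the K channel, which real printing adds for density. The function names are illustrative.

```python
def rgb_to_cmy(r, g, b):
    # Subtractive complements of 8-bit additive values: ink absorbs what light emits.
    return (255 - r, 255 - g, 255 - b)

def add_rgb(c1, c2):
    # Additive mixing: combining light, clamped to the 8-bit maximum.
    return tuple(min(255, a + b) for a, b in zip(c1, c2))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(add_rgb(red, green))         # yellow: (255, 255, 0)
print(add_rgb(green, blue))        # cyan: (0, 255, 255)
print(rgb_to_cmy(255, 255, 255))   # white needs no ink: (0, 0, 0)
```

Maximum RGB gives white on screen, while its CMY complement is zero ink coverage on paper, which is exactly the mirror relationship the two schemes describe.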
Calculating File Size
In the digital age, storage capacity is exponentially increasing. Understanding storage units and their relationship to your own image file sizes is invaluable. All digital data is represented in bits and bytes:

There are 8 bits in one byte,
1024 bytes in one kilobyte (KB),
1024 kilobytes in one megabyte (MB),
1024 megabytes in one gigabyte (GB),
1024 gigabytes in one terabyte (TB).

From that point on, the same multiplier of 1024 applies as the unit names continue: after terabyte come petabyte, exabyte, and zettabyte, which are too large to be really applicable for today's storage standards.

Not to be confused with megabytes (MB), megapixels (MP) is a numerical value of how many pixels a sensor carries. Mega, meaning million, applies to both terms. It is just another way of expressing spatial resolution, so an 8 megapixel camera has 8 million pixels total.
To calculate file size, simply multiply all of the following variables:

Spatial Resolution x Tonal Resolution x Spectral Resolution x Temporal Resolution

Example: You are photographing with a full frame DSLR (digital single lens reflex), a 21.1 MP camera with a spatial resolution of 5,616 by 3,744 pixels and a tonal resolution of 8 bits per pixel, in RGB. What is the file size?

(5,616 x 3,744 pixels) x 8 bits/pixel x 3 channels
= 21,026,304 pixels x 8 bits/pixel x 3 channels
= 504,631,296 bits
= 63,078,912 bytes, or about 60.2 MB
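The same arithmetic is easy to check in code; this reproduces the worked example above, with the bits-to-megabytes conversion using 8 bits per byte and 1024-based units.

```python
width, height = 5616, 3744   # spatial resolution (the 21.1 MP full-frame example)
bit_depth = 8                # tonal resolution: bits per pixel per channel
channels = 3                 # spectral resolution: RGB

bits = width * height * bit_depth * channels
total_bytes = bits // 8                  # 8 bits per byte
megabytes = total_bytes / 1024 / 1024    # bytes -> KB -> MB
print(bits)                  # 504,631,296 bits
print(round(megabytes, 1))   # about 60.2 MB (uncompressed)
```

Note this is the uncompressed size; the file formats discussed next (raw, TIFF, JPEG) will store the same pixel data in less space.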
File Formats
File formats can be considered the container in which the file data is stored after capture. File formats are categorized as either lossless or lossy storage. Lossless storage preserves all the original image data, so that when the image is opened later in post processing, all the original data counts are present. Lossy compression eliminates redundant information to significantly reduce file size. Though beneficial for saving disk space, you lose information from your file that can never be recovered: every time you save over a lossy file, it discards small amounts of information it deems redundant.
Examples of file formats:

Raw (CR2, NEF, IIQ, ORF, etc.): Raw is not an acronym; a raw file is an image file processed internally in the camera, offering the greatest amount of lossless information. Because the processing is done internally, there are numerous different raw file formats, one per camera manufacturer.

DNG, or Digital Negative, was introduced by Adobe in 2004 as an open source raw format to eliminate proprietary issues. It is slightly compressed but is lossless.

TIFF, or Tagged Image File Format, is used for both digital illustration and digital photographs. TIFF is lossless, yet still compressible: it can use the LZW algorithm, which allows lossless compression. It is the ideal file format for image output.

JPEG, or Joint Photographic Experts Group, is a lossy format; however, that does not mean it has no value to photographers. For a photographer concerned with file space, JPEG is the way to go. Additionally, nearly everything we see on the web is a JPEG, so shooting in JPEG avoids file conversion.

JPEG 2000 was an upgrade to the JPEG file type made in the year 2000. It has the option of lossy or lossless compression. It is a great alternative, granted the application being used can support it.
L*a*b*
RGB and CMYK are the two most commonly used color modes. However, there are alternatives, like the CIELAB or L*a*b* color mode. L*a*b* color was created to replicate human visual perception. With L*a*b* color, the density is its own channel, the L* value (0 to 100). For color, the spectrum is based on the human perceptual primary colors: red, green, blue, and yellow. Reds and greens are represented as the a* value (−∞ to +∞), which lies along a horizontal axis when represented visually. Blues and yellows are represented as the b* value (−∞ to +∞), which lies along a perpendicular horizontal axis. The luminance, or L*, value lies along a vertical axis. The advantage of the L*a*b* color mode is a larger gamut than RGB or CMYK. Gamut is the range of color that is reproducible; it applies not only to color modes but also to printers. Additionally, L*a*b* color allows a user to alter exposure and density separately while keeping the color intact.
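The separation of lightness from color can be seen by converting an RGB value to L*a*b*. This is a sketch of the standard sRGB to XYZ (D65 white point) to CIELAB pipeline; the matrix coefficients and constants are the commonly published ones, and the function name is illustrative.

```python
def srgb_to_lab(r, g, b):
    def linearize(c):
        # Undo the sRGB transfer curve to get linear light.
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # sRGB-to-XYZ matrix (D65 white point).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):
        # CIE nonlinearity: a cube root with a linear toe for tiny values.
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    L = 116 * fy - 16          # lightness, 0 to 100, independent of hue
    a_star = 500 * (fx - fy)   # green (negative) to red (positive)
    b_star = 200 * (fy - fz)   # blue (negative) to yellow (positive)
    return L, a_star, b_star

print(srgb_to_lab(255, 255, 255))   # near (100, 0, 0): white, no color cast
```

Neutral inputs land at a* = b* = 0 with only L* varying, which is why an L*-only adjustment can change density without shifting color.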
Luminance
With RGB, a contrast adjustment layer in Adobe Photoshop affects all three channels globally. With an 8-bit color image, that is 256 levels per channel, and the result is a noticeable hue shift. With the L*a*b* color mode, the contrast adjustment can be isolated to the Luminance, or L*, channel. This achieves the same contrast adjustment while simultaneously preserving the color.

[Figure: the original file in RGB and in L*a*b*, each with a +75 contrast adjustment.]
Chapter Two Image Input
Imaging Pipeline
Left to right, top to bottom:

Scene/Light Source → Optics → Camera/ADC → Raw File → Raw Processor → Neutral Balance → CFA Interpolation → Gamma Correction → Exposure (White Point) → Contrast (Black Point) → Sharpening → RIP/Print
Sensors
A digital camera's sensor is where the imaging pipeline begins. It is equivalent to the film plane of an analog camera, except instead of acetate with an emulsion, the sensor is made of silicon. To simplify, a digital sensor makes an image by acting as a photovoltaic cell. Each pixel is a container that measures how much light it receives, and using an ADC (analog-to-digital converter), it converts the light levels to voltage levels. The pixels of a digital sensor are the individual PV cells that convert light to voltage. To visualize, think of a roofless square room where every space on the floor is taken up by a bucket. When it rains, how do you transfer the water out of the buckets? Instead of rain, think light. Digital sensors vary by the way they transfer this energy.

CCD (Charge-Coupled Device) sensors have been the standard for digital photography since its origination. There are two types of CCD sensors, Interline Transfer and Frame Transfer, which vary by how the voltage data is transferred and then amplified.

CMOS (Complementary Metal Oxide Semiconductor) sensors are a newer breakthrough in sensor technology. They are significantly less expensive than CCDs to produce because the technology is implemented on one chip instead of many. They also consume less power than a CCD. There are two types, Active Pixel and Passive Pixel.
Interline Transfer CCD
Interline Transfer CCD sensors are used primarily for camcorders. Using the Bayer Pattern filter, each column of pixels has a transfer shift register adjacent to it. Essentially, think of it as a slide: each pixel in the column shifts over to its register, and the slide carries the charges down and out of the sensor.

Frame Transfer CCD
Frame Transfer CCD sensors are used primarily for still image capture. The pixels shift into the bottom row incrementally, and this function repeats until all the data is processed. Frame Transfer CCDs still use the Bayer Pattern filter; they differ only in how they transfer the data.
Passive Pixel CMOS
The passive pixel CMOS sensor has a photocell for each pixel, arranged just like any other digital sensor. Passive pixel photocells take in light and convert it to a voltage reading. Once the light is converted into an electrical charge, that signal is carried off of the chip and then amplified. These photocells are typically very small, and because of this they generate a noticeable amount of noise, which needs to be corrected in later processes.

Active Pixel CMOS
The active pixel CMOS sensor varies from the passive pixel CMOS by having an individual amplifier on each photocell. The amplification is done on site, per pixel, instead of transferring the data off of the chip and amplifying it afterwards. The result is less noise generated in capture. However, because of a smaller fill factor, the sensor performs poorly in low light conditions. Fill factor takes into consideration how much light is blocked by the amplifier.
Foveon X3 sensors are unique among digital sensors. The Foveon closely emulates the construction of color film, in which three layers of color sensitive emulsion are stacked on top of each other. Foveon sensors are a type of CMOS sensor with the characteristics of a film base: instead of layers of emulsion, the sensor has three layers of photocells stacked on top of each other, each layer uniformly sensitive to Red, Green, or Blue. This provides greater resolution per area of the image.
[Figure, Sensors: layouts of the Interline Transfer CCD, Frame Transfer CCD, Passive Pixel CMOS, and Active Pixel CMOS.]
CFA
A CFA, or Color Filter Array, is the pattern of filtered photocells on a digital sensor that creates a color image when interpolated. The pixels are arranged based on what spectrum of light they are sensitive to. There are many different types of CFAs, all of which output a color image. Different Color Filter Arrays exist to create a sensor capable of the greatest possible spectral resolution, and each one is patented and protected as intellectual property.

While there are many different types of Color Filter Arrays, the majority of sensors use the Bayer Pattern filter. The Bayer Pattern, invented by Dr. Bryce E. Bayer of Eastman Kodak, assigns each pixel sensitivity to only one band of light: Red, Green, or Blue. In addition, it arranges the pixels in a pattern with twice as many green pixels as red or blue, because the human eye is most sensitive to green light.

Of course, there are many variations of Color Filter Arrays. CYYM and CYGM play with the subtractive color scheme for sensitivity, and RGBW adds a neutral pixel sensitive to white light, which deals strictly with density levels for better tonal reproduction. To simplify, the CFA provides the mosaic of color data that the camera then de-mosaics, making sense of it all for a final preview image.
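The Bayer layout can be expressed as a simple coordinate rule. This sketch assumes the common RGGB variant of the pattern (other orderings exist); the function name is illustrative.

```python
from collections import Counter

def bayer_color(row, col):
    # RGGB Bayer pattern: even rows alternate R,G; odd rows alternate G,B.
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Count the filter colors in a 4x4 tile of the sensor.
counts = Counter(bayer_color(r, c) for r in range(4) for c in range(4))
print(counts)   # green is sampled twice as often as red or blue
```

Counting any even-sized tile confirms the 2:1:1 green-to-red-to-blue ratio that matches the eye's sensitivity to green; demosaicing then interpolates the two missing colors at every site.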
Scanners
Scanners are the secondary form of image input, converting a pre-existing image into digital format. Thus they are just as important to understand as the camera itself. There are many different types of scanners, each with its own specific application.

Flat-Bed Scanners are among the most commonly used scanners. They work using CCD technology: since a flat-bed scanner scans an image one line of pixels at a time, it mirrors how a CCD sensor transfers data one line at a time. Flat-bed scanners vary in quality and scanning resolution. Low quality is used for text and documents; middle and higher end scanners have enough resolution to scan photos, negatives, and transparency film.

Drum Scanners are the professional standard for reproducing proper color and density. They are used typically for film and transparencies, but larger format drum scanners can also scan prints. Instead of CCD technology, drum scanners use PMT (Photomultiplier Tube) technology: photodetectors in vacuum-sealed tubes that are highly sensitive to the visible, ultraviolet, and near infrared spectrums. The material is mounted on a drum that spins, scanning one line of pixels at a time. High end drum scanners are also used in scientific research for numerous applications in the field.
Chapter Three Image Processing
Sharpening
Sharpening is a post processing technique that should be applied in some form to all digital photographs. Sharpening works as an application of pixel group processing. Images can be broken down as containing high and low frequencies: high frequencies being lines and details, and low frequencies being gradients and what's within the lines. By blurring the whole image and making a mask, you make the high frequencies more apparent when you subtract the mask from the image and add what remains back to the original. Sharpening can really make a digital image stand out when it is done well, but it cannot make an out of focus image sharp.

Sharpening uses convolutions, mathematical operations that take the values of surrounding pixels to adjust the value of the center pixel. A convolution kernel is typically a 3x3 or 5x5 grid of numerical values that represents a grid of pixels. The outer values affect the center pixel, and the kernel shifts so that every pixel of the image in turn becomes the center pixel. Kernels with negative numbers on the outside are sharpening filters, and kernels with positive numbers on the outside are blurring filters.

Deciphering these convolution kernels can be intimidating. To simplify, think of each grid square as a house in a neighborhood: the closer the outer values are to the center value, the more similar, or blurred, the result; the more the outer values differ from the center, the sharper it is.
More blurred:
1  1  1
1  2  1
1  1  1

Less blurred:
1  1  1
1  4  1
1  1  1

More sharp:
 1  -2   1
-2   5  -2
 1  -2   1

Less sharp:
 0  -1   0
-1   5  -1
 0  -1   0
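Kernels like these can be applied with a short pure-Python convolution. This is an illustrative sketch: edge pixels are left untouched, the normalization by the kernel sum is one common convention, and the sample image values are made up for the example. Note also that sharpened values can overshoot the 0-255 range and would be clamped in practice.

```python
def convolve3x3(img, kernel):
    # Apply a 3x3 kernel to every interior pixel; edges are left unchanged here.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    norm = sum(sum(r) for r in kernel) or 1   # divide by kernel sum to keep brightness
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += img[y + ky - 1][x + kx - 1] * kernel[ky][kx]
            out[y][x] = acc // norm
    return out

blur = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]         # positive surround: blurring
sharpen = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]  # negative surround: sharpening

img = [[10, 10, 10],
       [10, 100, 10],
       [10, 10, 10]]
print(convolve3x3(img, blur)[1][1])      # the bright spot is averaged down
print(convolve3x3(img, sharpen)[1][1])   # the bright spot is exaggerated
```

The blur kernel pulls the bright center toward its neighbors, while the sharpening kernel pushes it further away from them, which is exactly the "similar vs. different neighborhood" intuition described above.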
Unsharp Masking
The Unsharp Mask is one of the most effective and popular sharpening masks. It works by increasing the contrast along edges. An unsharp mask is not an edge detector; rather, it locates pixels that represent a high frequency, a value that differentiates itself from the common pixel values, or low frequency, around it. The Unsharp Mask is quite flexible, having three control parameters: amount, radius, and threshold.

The amount is just what it sounds like: the degree of how much you sharpen. Sharpening makes the lighter pixels lighter and the darker pixels darker; amount sets this adjustment.

The radius determines the reach of what will be sharpened. A low radius means only pixels closest to the high frequencies will be sharpened; a higher radius affects a wider area.

The threshold is the level of contrast that must be present between tones for sharpening to be applied. A higher threshold means only higher contrast areas will be sharpened; a lower threshold extends sharpening to lower contrast areas.

[Figure: the original; the blurred image in monotone; the highlighted edges to be applied to the original; and the result with no sharpening versus sharpening.]
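The blur-subtract-add recipe behind unsharp masking can be shown in one dimension. This is a simplified sketch, not Photoshop's implementation: the blur is a 3-tap average, amount and threshold behave as described above, and the sample values are illustrative.

```python
def unsharp_1d(pixels, amount=1.0, threshold=0):
    # Blur with a simple 3-tap average, then add the difference (the "detail")
    # back to the original, scaled by amount.
    out = pixels[:]
    for i in range(1, len(pixels) - 1):
        blurred = (pixels[i - 1] + pixels[i] + pixels[i + 1]) / 3
        detail = pixels[i] - blurred          # the high-frequency component
        if abs(detail) > threshold:           # threshold gates low-contrast areas
            out[i] = max(0, min(255, round(pixels[i] + amount * detail)))
    return out

edge = [50, 50, 50, 200, 200, 200]
print(unsharp_1d(edge, amount=1.0))   # the step gains a dark and a light overshoot
```

The flat runs are untouched (their detail is zero), while the pixels on either side of the step are pushed darker and lighter, which is the increased edge contrast an unsharp mask produces.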
Median Filter
Noise is a type of artifact, or undesired defect, in a digital image. Noise can appear in an image in a number of different ways; the most common cause is a high ISO, or sensor sensitivity, at capture. Unlike film grain in analog processes, digital noise is in no way desirable. Signal to Noise Ratio is a measure of how much noise is present in the image relative to the signal strength: the higher the signal to noise ratio, the better the image in terms of quality.

Aliasing, another type of artifact in digital imagery, appears as jagged edges on diagonal lines. This is more apparent in lower sample rate images; higher sample rates provide smaller pixels, which hide this form of artifacting.

To reduce noise you need to blur the image, and the median filter is one tool to accomplish this in Adobe Photoshop. As you can see in the example images, the noise does disappear with the median filter. However, the end result is an image that looks painterly: the median filter removes noise by blurring both the low and high frequencies of an image, which makes the image unrealistic by distorting all of the edges. Whether or not the filter is helpful is up to the discretion and control of the photographer in the digital darkroom, the computer.
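Why the median removes isolated noise spikes is easy to see in one dimension. This is a minimal sketch using the standard library's `statistics.median`; the radius parameter and sample values are illustrative.

```python
import statistics

def median_filter_1d(pixels, radius=1):
    # Replace each pixel with the median of its neighborhood;
    # an isolated spike can never be the middle value, so it vanishes.
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        out.append(statistics.median(pixels[lo:hi]))
    return out

noisy = [10, 10, 255, 10, 10]   # one hot "noise" pixel
print(median_filter_1d(noisy))  # the spike is replaced by its neighbors' value
```

Unlike an averaging blur, the median does not smear the spike into its neighbors; the trade-off, as noted above, is that genuine fine detail can be discarded the same way.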
Gamma Correction
Have you ever encountered an image that had completely different density values in output than in input? This is because human visual perception and the computer are not exactly on the same page in terms of image reproduction.

Gamma correction adjusts the input voltage to compensate for this problem. Gamma encoding is required to compensate for human vision and to maximize the use of tonal data in an image. Gamma is a measurement that reflects the relationship of input to output; it also describes the relationship between voltage input and brightness level on your monitor.

By correcting for gamma, you are adjusting the voltage your computer receives with input. The appropriate amount of voltage will illuminate the brightness levels accordingly, and it is particular to every monitor. There is an inexhaustible amount of material on gamma equations, which is better covered in other reference materials. What is important for the photographer to know is how gamma can improve one's images depending on the output components applied to an image.

[Figure: the original with no gamma correction (1.0), alongside gamma corrected versions at 0.70 and 1.30.]
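The power-law relationship can be sketched numerically. One common convention, assumed here, maps a normalized value through an exponent of 1/gamma, so gamma above 1 lightens the midtones and gamma below 1 darkens them; the exact convention varies between references.

```python
def apply_gamma(value, gamma):
    # Normalize an 8-bit value to 0-1, apply the power curve, rescale to 8-bit.
    return round(255 * (value / 255) ** (1 / gamma))

for v in (0, 64, 128, 192, 255):
    print(v, apply_gamma(v, 0.7), apply_gamma(v, 1.3))
```

Note that black (0) and white (255) are unchanged at any gamma; only the distribution of the midtones shifts, which is why gamma is a tonal correction rather than an exposure change.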
Chapter Four Image Output
Dots, Lines, & Pixels
Dots, lines, and pixels are three terms that are commonly used interchangeably to describe spatial resolution. Using the three terms interchangeably is incorrect, and it is important to differentiate these similar but very different terms.
To understand how dots, lines and pixels interact with each other, the following equation can be applied.
Pixels and lines are used to describe the spatial resolution of a monitor. PPI, or Pixels per Inch, is the number of pixels per inch of the image file while it is still on a screen of some form. The PPI is another way to define your spatial resolution. In addition to PPI, LPI, or Lines per Inch, is another term that describes spatial resolution. The value of LPI will be significantly smaller than PPI, as a line refers to a full line of pixels. The two terms will have different values, but they describe the same resolution.
For example: a photograph is to be halftoned and printed on a laser printer rated at 600 DPI. Since the image was taken with an 8 bit camera, 256 gray levels have to be printed for every pixel. How many lines per inch can be printed on this laser printer?
Dots refer solely to printing. DPI, or Dots per Inch, refers to the number of dots of ink, whether pigments or dyes, in a square inch of a print. In a sense it does define spatial resolution, but only for print media.
PPI/LPI = DPI / √(number of gray levels)

600 DPI / √256 = 600 / 16 = 37.5 LPI

This formula is frequently called the rule of 16. As most cameras are 8-bit, you are typically working with 256 levels, so you can usually divide or multiply by 16 (the square root of 256) to convert between dots and pixels. It doesn't always apply, but it is typically the solution to most conversion issues.
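The rule of 16 is a one-line calculation; this sketch assumes the relationship LPI = DPI / √(gray levels), with 256 levels as the 8-bit default.

```python
import math

def printable_lpi(dpi, gray_levels=256):
    # Each halftone cell needs sqrt(levels) x sqrt(levels) printer dots,
    # so the lines of cells per inch is dpi divided by that cell width.
    return dpi / math.sqrt(gray_levels)

print(printable_lpi(600))    # 600 / 16 = 37.5 LPI
print(printable_lpi(1200))   # doubling the DPI doubles the LPI: 75.0
```

The same function also shows the trade-off in the other direction: holding DPI fixed, demanding more gray levels per halftone cell lowers the achievable LPI.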
Halftoning
Halftoning is the common practice of representing a continuous tone image on a print medium. Halftoning uses dots of various sizes to fool the human eye into perceiving continuous tone: by placing dots of various sizes and tones together, the eye fills in the minute white space and a full continuous tone image is apparent. Halftoning is used extensively in printing presses and newspapers. It reduces the amount of ink and detail the printer has to produce, and makes large scale printing economically feasible.

[Figure: a halftone reproduction compared with the continuous tone original.]
Printers
Johannes Gutenberg revolutionized the world with the invention of the printing press in the mid 15th century. Since then, printing processes have evolved exponentially. There is still a separation between analog and digital print processes, and in some practices a combination of both, all of which will be explained further.

Inkjet Printers are a digital print process that projects either pigments or dyes onto paper. Pigments are dried particles of organic or inorganic color that rest on top of the paper. Dyes are liquid soluble colors that are absorbed into the paper upon contact. There are two main types of inkjet printers, Thermal and Piezo-Electric, defined by how they project ink out of the cartridge.

Thermal Inkjet Printers use heat to project ink out of a cartridge. When heat is applied, the ink expands, projecting itself out of the cartridge through the printer's nozzle heads, which spray ink onto the paper.

Piezo-Electric Inkjet Printers use an electric current to make piezoelectric crystals expand. This expansion projects the ink out of the cartridge through the printer nozzle and onto the page.
Dye-Sublimation Printers, or Dye Sub for short, are a special kind of printer that uses heat to transfer dye onto paper or plastic. The advantage of a dye-sublimation printer is that it produces a continuous tone print. Each color of ink is stored on an individual polyester ribbon and applied one color at a time. Rollers move the print medium into direct contact with the particular color ribbon, usually covering the width of the paper. A print head applies varying amounts of heat using small electrical elements; the amount of heat applied affects the amount of ink transferred. The heat changes the state of the ink to a gas, which diffuses into the medium and then eventually solidifies. Dye Sub printers are most commonly used for printing identification cards or other content intended for an alternative print medium. They are less intended for professional photographic prints, as the diffusing dye is not the sharpest output process.
Lambda Printers are the ultimate hybrid of digital and analog processes. They are a continuous tone process and are renowned for their higher d-max (black point) values, producing some of the best photographic reproductions available.

A digital file is imported into a computer, and the Lambda printer software splits the image into three channels. The data for each channel is sent to a Red, Green, or Blue laser, which sketches those density values onto light sensitive paper. The paper, which now holds a latent photographic image, is fed into a chemical processor which develops, washes, fixes, and dries the resulting print.

Although the reproduction value is unmatched, Lambda processors and darkrooms are expensive and require a good amount of maintenance. This inconvenience is making Lambda printers more of a rarity in the world of desktop printing.

Laser Printers have become much more commonplace in printing, especially in an office setting. However, they are still quite capable of producing color photographs. What differentiates laser printers is what's under the hood of the machine.

Laser printers work using an electrostatic rolling drum that is positively charged. A laser etches what is to be printed, leaving a negative charge where the laser inscribed. The drum is next coated in a toner of one particular color. The toner sticks to the negatively charged imprint and nothing else, like glitter to glue on a piece of paper. Next, paper is pressed to the drum; because the paper and drum move together, the inscribed image is replicated perfectly onto the paper.

The final stage is the fuser, two heated rollers that bond the toner to the paper. This is why office laser prints are hot when they come out of the printer.
Printers
HP Indigo is a digital press that is the gem of digital offset printing. The latest model, the HP Indigo 7600, boasts an impressive printing capability of over 2,500 certified substrates, including dark, transparent, and metallic media. Indigo introduced an electro-ink, from which its name derives. This additional color added to the ink set gives Indigo the widest gamut of all digital printers. It can output 120 color prints a minute, which is impressive for a sheet fed printer. The electro-ink is a toner based ink: a pigment based powder with a polymer incorporated. When heat is applied, the polymer melts and bonds the pigment to the paper. Indigo digital presses are the top of the line in digital printing. They are extremely expensive, so they are not for the casual printer of photos but rather for professional press operations. In essence, the Indigo is an extremely high quality laser printer.
Rotogravure is a type of intaglio printing process. It is an analog process that yields the finest print reproduction, and it has been an industry standard for newspapers and magazines. It is done with massive zinc cylinders that are copper plated. The copper is etched with the content, reversed. The gravure cylinders are dipped into an ink reservoir while spinning, which fills the recessed areas of the roller. Before paper is introduced, a blade scrapes off the excess ink. Then large rolls of paper are fed through rollers, making contact with the gravure cylinder. Finally, heat is applied to dry the ink completely.
Displays
CRT (Cathode Ray Tube) is one of the earliest display technologies. It works as a vacuum tube containing one or more electron guns that fire at Red, Green, and Blue phosphors on a fluorescent screen to produce images. CRTs have been surpassed by newer monitor technologies which are lighter, less bulky, and cost less to manufacture; they are now more nostalgic than practical.

LCD (Liquid Crystal Display) monitors are not only more energy efficient, they are also safer to dispose of than their CRT counterparts. The technology works using liquid crystals sandwiched between two planes of polarizing filters. The liquid crystals, first discovered in 1888, are illuminated by a light shined behind them to produce images. LCDs are used in small, battery powered devices because they are so energy efficient.

LED (Light Emitting Diode) displays are another extremely energy efficient display technology. Small light emitting diodes in RGB combine to create a full RGB image. LEDs are not just for monitors; they are used in traffic signals, aviation, and automotive lighting. They are low in power consumption, and brighter ones are more expensive.

Plasma is a type of flat panel display typically seen only in monitors 30 inches or larger. The technology utilizes small pixel cells containing electrically charged ionized gases, essentially tiny fluorescent lamps. Plasma has a wide color gamut, and its power consumption varies by content.
Chapter Five
Special Topics
LiDAR
LiDAR, or Light Detection and Ranging, is a remote sensing technology that uses laser light to record topographical data and display it as a 3D visual reproduction. LiDAR has numerous real-world applications, all of which let us visualize what we know is there but is difficult to see with the naked eye. It is similar to radar, which emits radio waves and records the time it takes for those waves to bounce back to the source; LiDAR replaces radio waves with light waves for a much faster and more exact measurement. A LiDAR system is typically composed of a laser transmitter and receiver, one or more GPS (Global Positioning System) units, and an IMU (Inertial Measurement Unit). The emitted laser is split with a beam splitter so the scanner has multiple points to sample from.
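The ranging principle described above can be sketched in a few lines: distance is half the round-trip travel time multiplied by the speed of light, since each pulse travels out and back. The function name and example timing below are illustrative, not from any particular LiDAR system.

```python
# Time-of-flight ranging, the measurement at the heart of LiDAR:
# the receiver times how long a pulse takes to return, and distance
# is half the round trip at the speed of light.

C = 299_792_458  # speed of light in m/s (vacuum; air is only ~0.03% slower)

def range_from_echo(round_trip_seconds: float) -> float:
    """Return the target distance in meters for one laser pulse."""
    return C * round_trip_seconds / 2

# A pulse that returns after 2 microseconds hit a target ~300 m away.
print(round(range_from_echo(2e-6), 1))  # 299.8
```

Note how short these intervals are: centimeter-level accuracy requires timing the echo to well under a nanosecond, which is why LiDAR electronics are far more demanding than the optics alone suggest.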
LiDAR can be thought of as a large-scale line scan camera, which scans one line of pixels at a time to eventually build up an entire image. Each pixel is instead a sample point, recorded when the infrared laser makes contact with the land and bounces back to the receiver. Just as more pixels give a camera better spatial resolution, more sample points give LiDAR better spatial resolution in the resulting image. LiDAR is extremely effective at representing natural and man-made topographical features. The scanning rate, or pulse rate, of the laser is typically measured in kHz, that is, thousands of cycles per second; a 200 kHz scanning rate is 200,000 laser pulses per second. Because the laser operates in the infrared, the light is invisible to the human eye, which also allows LiDAR to be used at night.
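The pulse-rate arithmetic above is straightforward: a rate quoted in kHz converts directly into pulses per second, and the number of sample points in a pass is just rate times time. A small sketch (the function names are illustrative):

```python
# Pulse-rate arithmetic for a scanning LiDAR: kHz -> pulses/second,
# and total sample points over a scan of a given duration.

def pulses_per_second(rate_khz: float) -> int:
    """Convert a scanning rate in kHz to laser pulses per second."""
    return int(rate_khz * 1_000)

def samples_collected(rate_khz: float, seconds: float) -> int:
    """Total sample points emitted over a scan of the given duration."""
    return int(pulses_per_second(rate_khz) * seconds)

print(pulses_per_second(200))      # 200000 pulses per second at 200 kHz
print(samples_collected(200, 60))  # 12000000 points in a one-minute pass
```

Twelve million points from a single minute of scanning is why LiDAR surveys are measured in billions of points, and why point-cloud file sizes become a practical concern.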
LiDAR
There are four types of LiDAR. High-altitude LiDAR is typically flown on a fixed-wing aircraft at about 400 to 2,500 ft above the ground. This is not very high for this type of aircraft, but the greater the altitude, the less accurate the data: since the laser must hit the target and bounce back, reducing the in-between distance gives the best results. High-altitude accuracies range from 9.25 to 18.5 cm vertically and 20 cm to 1 meter horizontally, and as the technology improves, so will these figures. Low-altitude LiDAR flies even closer to the ground, at about 50 to 800 meters, and is implemented on a helicopter, which can hover near the target. Low-altitude LiDAR is best for a detailed reproduction of a specific area: while high-altitude LiDAR covers a much greater area, low altitude provides substantially more detail, at 20 to 100 sample points per square meter. Mobile LiDAR has the advantage of flexibility. The mobile unit is a smaller, compact system with only one GPS (aerial LiDAR systems typically have two), and it can be mounted on any mobile land or sea vehicle. The scanner can also rotate continuously through a full 360 degrees rather than oscillating back and forth like its aerial counterparts. Typical sample-point densities are 1,000 to 4,000 points per meter.
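The density figures quoted above translate directly into point counts for a survey of a given size. A small sketch using the ranges from this chapter (note the units differ by platform: airborne densities are per square meter of ground, mobile densities per meter of travel):

```python
# Point-count estimates from the density ranges quoted in the text.
# Low-altitude density is per square meter; mobile density is per
# meter of vehicle travel, so the two are not directly comparable.

densities = {
    "low-altitude": (20, 100),     # points per square meter
    "mobile": (1_000, 4_000),      # points per meter of travel
}

def point_range(platform: str, size: float) -> tuple:
    """Return the (min, max) sample-point count for a survey of
    `size` square meters (airborne) or meters of travel (mobile)."""
    lo, hi = densities[platform]
    return lo * size, hi * size

# A 50-square-meter patch surveyed from low altitude:
print(point_range("low-altitude", 50))  # (1000, 5000)
```

Even at the low end, a modest patch yields thousands of samples, which is why low-altitude and mobile scans resolve features that high-altitude passes smooth over.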
Terrestrial LiDAR is essentially a mobile LiDAR system stationed in one place on a survey tripod. It does not require a GPS or an IMU, though it can still include a GPS. Terrestrial LiDAR is used for surveying a very specific area of interest, like a road or bridge; because it is stationary, it can make the most detailed reproductions of any LiDAR system.
Applications
LiDAR is a key remote sensing technology that allows topographical data to be recorded more efficiently and accurately. It also provides a visual reference when analyzing any square inch of the planet; Google Earth and similar software programs would not be possible without LiDAR technology. For independent research, especially in the field of environmental science, LiDAR is invaluable. GIS (Geographic Information Systems) rely on LiDAR, along with other secondary data, to reproduce a realistic digital representation of an area. With this information, a fast, accurate report can easily be produced to show, for example, who is topographically most affected by a natural disaster.
Credits
L*a*b* Processing http://www.forummalujemy.pl/forum/viewtopic.php?f=12&t=200&start=40
Gamma Correction http://www.scantips.com/lights/gamma2.html
http://www.booksmartstudio.com/color_tutorial/colortheory4.html
Exposure http://www.backyardshots.com/index.php/articles/view/14/0.html
Image Pipeline
Contrast http://jishytheobserver.wordpress.com/2012/04/14/contrast/
Scene/Light Source http://popcultureaddictlifeguide.blogspot.com/2013/05/seeing-red-over-sun-kissedsunburned-skin.html
Lens http://www.123rf.com/photo_11881342_camera-photo-lens-vector.html
Camera/ADC http://www.wwbw.com/AKG-Perception-120-USB-Condenser-Microphone-485820-i1529783.wwbw
Raw File http://www.digitalcameraworld.com/2012/09/25/raw-tuesday-shooting-raw-files-the-right-way/2/
Raw Processor http://graphicssoft.about.com/od/photoshop/tp/newpscs3.htm
Neutral Balance http://xritephoto.com/ph_product_overview.aspx?id=1257
CFA Interpolation http://en.wikipedia.org/wiki/Bayer_filter
Sharpening http://www.nortonstones.com/
RIP/Print http://www.epson.com/cgi-bin/Store/jsp/Product.do?sku=C11CA19201
Sensors http://www.toshiba-teli.co.jp/en/products/industrial/glossary.htm
http://www.amtimaging.com/www11/english/thard_e.html
http://www.foveon.com/article.php?a=67
http://www.sony.net/SonyInfo/News/Press/200806/08-069E/
http://www.mac-forums.com/blog/choosing-a-digital-camera-2/
CFA Bayer http://en.wikipedia.org/wiki/File:Bayer_pattern.svg
http://www.proimagingsupplies.com/Hasselblad-Imacon-Flextight-X1-Scanner-p3448.html
RGBE http://en.wikipedia.org/wiki/File:RGBE_filter.svg
http://www.itinfopoint.com/post/13/scanner/
CYYM http://en.wikipedia.org/wiki/File:CYYM_pattern.svg
LiDAR
CYGM http://en.wikipedia.org/wiki/File:CYGM_pattern.svg
RGBW http://en.wikipedia.org/wiki/File:RGBW_Bayer.svg
RGBW1 http://en.wikipedia.org/wiki/File:RGBW_number_1.svg
RGBW2 http://en.wikipedia.org/wiki/File:RGBW_number_2.svg
RGBW3 http://en.wikipedia.org/wiki/File:RGBW_number_3.svg
Scanners http://www.overstock.com/guides/how-to-use-aflatbed-scanner
http://www.rakuten.com/th/48-bit-flatbed-scanner.html
Printers http://www.theimagecollective.org/inkjet-technologies/
http://news.softpedia.com/news/When-Digital-Becomes-Analog-43505.shtml
http://proceedings.esri.com/library/userconf%20/proc01/professional/papers/pap214/p214.htm
http://facility.unavco.org/software/idv/IDV_datasource_point_cloud.html
http://www.directionsmag.com/pressreleases/bluesky-purchases-the-worlds-first-fully-integrated-lidar-thermal-and-image/265395
http://www.harmseninc.com/what-is-lidar.aspx
http://www.satimagingcorp.com/svc/gismapping.html
http://googleblog.blogspot.com/2012/01/googleearth-62-its-beautiful-world.html
References
Unsharp Masking http://content.photojojo.com/tutorials/photoshop-sharpening/
Gamma Correction http://www.colormatters.com/the-power-of-gamma
General: Basic Photographic Materials and Processes, Third Edition. Nanette Salvaggio. Dr. Leslie Stroebel and Dr. Richard Zakia, Editors.
http://www.pixelperfect.com.au/lambda-printer/
http://www.thecomputercoach.net/html/printers.html
http://instituteofprintingtechnology.com/
This textbook was written under the direction and guidance of Nitin Sampat as part of the Imaging Systems Minor, Rochester Institute of Technology, 2013-2014.