
imaging systems

Christian Whitworth

Created through the Imaging Systems Course, part of the Imaging Systems Minor (IMSM 301), Rochester Institute of Technology, 2013
Dimensions: 17.78 x 22.86 cm
Paper: Laserjet Freesheet, white, 148 gsm
Fonts: Arial, Times New Roman
Font sizes: 42 pt, 23 pt, 14 pt, 10 pt, 8 pt



table of contents

Fundamentals
Input
Image Processing
Output
Image Stability and Deterioration



fundamentals



resolution

tonal resolution

The tonal resolution of an image is characterized by the number of gray levels, or shades of gray, in the image. Determined by the image's bit depth per pixel, the number of gray levels can be calculated with the equation: number of gray levels = 2^(number of bits per pixel). The sample image has been adjusted to represent a series of different bit depths using the Posterize tool in Photoshop CC. The monochrome bars to the left of the image provide a more comprehensive view of how a sequence of gray levels (from black to white) is reproduced at different bit depths. The histograms to the left of the gray bars map the gray levels of an image: the x-axis represents the levels, while the y-axis represents the frequency of each level. The histogram is set to show a bit depth of 8 bits/pixel by default, although different bit depths are noticeable through varying visual representations of levels. The bottom-most version of the image has been posterized at 2 bits/pixel. There is no smooth gradient between the different tones of the image, which is particularly noticeable in the reproduction of the sidewalk.

8 bits/pixel: 256 gray levels

5 bits/pixel: 32 gray levels

4 bits/pixel: 16 gray levels

3 bits/pixel: 8 gray levels

The second level represents a reproduction at 3 bits/pixel, or 8 gray levels. There is a smoother gradient in this reproduction, although it is still not an acceptable translation to do the image justice in a print. At 5 bits/pixel, the gradient becomes significantly smoother.

2 bits/pixel: 4 gray levels




Most would find this to be an acceptable reproduction. In fact, many magazines and print publications print their images at 5 bits/pixel, given the decent tonal quality and the ability to save ink. At 8 bits/pixel, or 256 gray levels, the image reaches a tonal quality optimal for photographic reproduction. Because the human eye perceives a photograph well at 8 bits/pixel, printing at a higher tonal resolution is unnecessary. At this resolution, the gradient and the histogram appear continuous because of the 256 gray levels present.
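As an illustrative sketch rather than part of the original booklet, the gray-level calculation and the posterization described above can be approximated in Python with the Pillow library; the file name sidewalk.jpg is a placeholder.

```python
from PIL import Image

def posterize(path, bits_per_pixel):
    levels = 2 ** bits_per_pixel         # number of gray levels, e.g. 2**8 = 256
    step = 256 // levels                 # width of each quantization bin
    img = Image.open(path).convert("L")  # single-channel grayscale
    # Map every 8-bit value to the bottom of its bin, leaving `levels` distinct tones.
    return img.point(lambda v: (v // step) * step)

for bits in (2, 3, 5, 8):
    posterize("sidewalk.jpg", bits).save(f"sidewalk_{bits}bit.jpg")
```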

spatial resolution Spatial resolution is measured by the number of pixels present in an image and is determined by the number of photocells in the image sensor. Also applicable to spatial resolution is the process of interpolation and the different resizing methods possible for an image. Each method of interpolation takes different aspects of the image into account, resulting in different qualities of compression or enlargement. The original image, seen on the opposite page, has been enlarged using three different methods of interpolation: nearest neighbor, bilinear, and bicubic. The three sub-sampled images present these methods. The top image was resized using nearest neighbor interpolation, which is most useful for reproducing hard edges, such as the edges of lines in text, and is also the fastest of the three methods. Since this process is carried out by taking the value of the nearest pixel and replacing the value of the new pixel to match, it is the crudest method of interpolation. This is evident in the amount of artifacting in the sub-sampled image.

Bilinear interpolation is slower than nearest neighbor, but results in a much higher quality reproduction of the original image than nearest neighbor. This method of sampling is based on the values of columns followed by rows of pixels. There is far less artifacting in the image, resulting in higher quality print output. Bicubic interpolation is the slowest process of the three presented here, but as a result, allows for the highest quality of reproduction. Suitable for situations when a smooth gradient is desired, such as portraits, bicubic interpolation results in higher preservation of fine details. The process works by sampling 16 surrounding pixels in its calculations, providing smoother gradients, noticeable in the subsampled image at the bottom of this page.
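A minimal sketch of the three resizing methods, again using Pillow (an illustrative addition; the file name and the 4x enlargement factor are assumed):

```python
from PIL import Image

img = Image.open("original.jpg")                # placeholder file name
target = (img.width * 4, img.height * 4)        # 4x enlargement

nearest  = img.resize(target, Image.NEAREST)    # fastest; copies the closest pixel
bilinear = img.resize(target, Image.BILINEAR)   # averages a 2 x 2 neighborhood
bicubic  = img.resize(target, Image.BICUBIC)    # weighs a 4 x 4 (16 pixel) neighborhood
```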

Nearest neighbor interpolation is the fastest, but crudest, resizing method.

spectral resolution Spectral resolution refers to the three monochromatic images created through the red, blue, and green channels that make up a single color image. Each image is created through the amount of light that the camera’s sensor receives, each corresponding to a certain wavelength within the visible light spectrum. The red channel reads wavelengths between 620 and 740 nanometers, the green channel 520-570 nm, and the blue channel 450-495 nm.

Bilinear interpolation is a compromise between the speed and quality of nearest neighbor and bicubic interpolation.

Bicubic interpolation is the highest quality, but slowest resizing method.




Original Image

Red Channel

Green Channel

Blue Channel
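A minimal sketch of separating a color image into the three channel images shown above, using Pillow (an illustrative addition; the file name is a placeholder):

```python
from PIL import Image

rgb = Image.open("original.jpg").convert("RGB")
red, green, blue = rgb.split()   # three monochromatic channel images
red.save("red_channel.jpg")
green.save("green_channel.jpg")
blue.save("blue_channel.jpg")
```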


temporal resolution Temporal resolution refers to the measurement of a file with respect to time. Temporal resolution is most often associated with television and the motion picture industries. The persistence of human vision corresponds to roughly 16 frames per second; anything less than 16 frames per second will result in visible drag, rendering the viewing experience less desirable.

file size calculation The size of a file is measured by combining the measurements of spatial, tonal, spectral, and temporal resolution: File Size = (x × y) × z × λ × T, where (x × y) is the spatial resolution, z is the tonal resolution, λ is the spectral resolution, and T is the temporal resolution.
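The formula can be sketched in a few lines of Python (an illustrative addition; the pixel dimensions and channel count below are assumed example values):

```python
def file_size_bits(x, y, z, spectral, t=1):
    # (x*y) spatial * z tonal * lambda spectral * T temporal, all in bits
    return (x * y) * z * spectral * t

# Assumed example: a 3000 x 2000 pixel, 8 bit/channel, 3 channel still image (T = 1).
print(file_size_bits(3000, 2000, 8, 3))   # 144,000,000 bits
```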

byte conversion It is also important to remember the conversion among standard byte values to assist in tonal resolution measurements. The chart shown below is intended to visually simplify this conversion.
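Since the chart is reproduced as an image, the standard conversions it illustrates can also be sketched in code (binary prefixes assumed; the starting value carries over from the previous sketch):

```python
BITS_PER_BYTE = 8
BYTES_PER_KB = 1024
BYTES_PER_MB = 1024 ** 2

bits = 144_000_000                       # file size from the previous sketch
size_bytes = bits / BITS_PER_BYTE
print(size_bytes / BYTES_PER_MB, "MB")   # about 17.2 MB
```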





image input



sensors

CCD

The photosites of a charge-coupled device sensor (CCD sensor) work much like buckets that collect rain water and then deposit the water to be measured. The buckets correspond to the photosites, while the rain water corresponds to the electric charge converted from the incoming light. When each photosite is filled and measured, its charge can be converted into an output signal to be read as an image. The two types of CCD sensors are interline transfer and full frame transfer. Interline transfer CCD sensors are often used in camcorders; transfer shift registers along each column prevent individual photosites from blooming, or spilling an overloaded level of light input. Full frame transfer CCDs are used in still photography, where each row of photosites is transferred incrementally and then measured to be read as an image. In market terms, CCD sensors are of higher quality and less subject to noise, but as a result are more expensive to produce than CMOS sensors. CCD sensors are often used in point and shoot digital cameras, although some brands, such as Leica, use the sensor in some of their DSLR models.


The collection and measuring of rain water in buckets is similar to the process in which a CCD sensor collects and measures light.


CMOS In complementary metal oxide semiconductor sensors (CMOS sensors), electric circuitry next to each photosite converts light energy into voltage, which is then processed, measured, and converted into a digital signal. There are also two types of CMOS sensors: passive pixel and active pixel. Passive pixel CMOS sensors are lower in cost than CCD sensors, are not prone to blooming, and use less power than active pixel CMOS sensors. However, they lack the ability to present a high dynamic range and have high fixed-pattern noise as a result. Active pixel CMOS sensors have similar advantages to passive pixel CMOS sensors, but have a higher signal to noise ratio. Each photosite has its own amplifier, making measurement significantly faster. Active pixel CMOS sensors are usually used in cell phone cameras, web cameras, and some DSLRs.

Foveon X3 Foveon sensors are a type of CMOS sensor used in digital cameras that uses an array of photosites, each consisting of three vertically stacked photodiodes. Each photodiode reacts to a different wavelength of light, based on how deeply each wavelength penetrates the silicon layers. This method of light collection is similar to the way traditional film reacts to light.



CFA interpolation CFA interpolation, also known as demosaicing, is an algorithm used to reconstruct a full color image from the incomplete color samples received as output from an image sensor overlaid with a color filter array (CFA). Values of missing pixels are calculated from surrounding pixels using different forms of interpolation. This process also attempts to avoid color aliasing, reduce noise, and preserve image resolution.

Process of mean interpolation

Bayer filter pattern

RGBW filter pattern

Striped filter pattern

The most common color filter array is the Bayer pattern, which is made up of alternating rows of red/green filters and blue/green filters. There are twice as many green filters as red or blue filters because the human eye is most sensitive to green wavelengths. Other CFA patterns, not as common as the Bayer pattern, include the striped filter pattern, the CMYK filter pattern, and the RGBW pattern, as can be seen to the right.
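A minimal sketch of a Bayer CFA (an illustrative addition, assuming an RGGB layout): the mask records which single channel each photosite "sees", and applying it to a full-color array simulates the incomplete samples that demosaicing must later reconstruct.

```python
import numpy as np

def bayer_mask(height, width):
    # Channel index per photosite: 0 = red, 1 = green, 2 = blue (RGGB layout assumed).
    mask = np.empty((height, width), dtype=np.intp)
    mask[0::2, 0::2] = 0   # red   on even rows, even columns
    mask[0::2, 1::2] = 1   # green on even rows, odd columns
    mask[1::2, 0::2] = 1   # green on odd rows,  even columns
    mask[1::2, 1::2] = 2   # blue  on odd rows,  odd columns
    return mask

def mosaic(rgb):
    # rgb: (H, W, 3) array; each photosite keeps only the channel its filter passes.
    h, w, _ = rgb.shape
    rows = np.arange(h)[:, None]   # broadcast row indices
    cols = np.arange(w)[None, :]   # broadcast column indices
    return rgb[rows, cols, bayer_mask(h, w)]
```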

RGBW filter pattern

CMYK filter pattern


Diagram: incoming light passes through the filter array onto the sensor array, producing the resulting pattern.




image processing



image processing pipeline


Light enters camera lens

Analog to Digital Converter

Pixel Defect Correction

"RAW" Sensor Data

Open in RAW Processor

CFA Interpolation

Neutral Balance

Colorimetric Transform (RGB to XYZ)

Gamma Correction

Lens Correction

Exposure Correction

Tone Correction

Artistic Choices

Crop / Scale

Store as TIFF, JPEG, or DNG file (8/16-bit encoding)
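Several of these pipeline steps can be exercised from Python with the rawpy library, which wraps LibRaw (an illustrative addition, not part of the original booklet; the library is assumed to be installed and the camera file name is a placeholder):

```python
import rawpy
import imageio.v3 as iio

with rawpy.imread("photo.dng") as raw:   # "RAW" sensor data
    rgb = raw.postprocess(               # LibRaw performs CFA interpolation, the
        use_camera_wb=True,              # colorimetric transform, and gamma correction;
        output_bps=16,                   # camera metadata supplies the neutral balance,
    )                                    # and the result uses 16-bit encoding
iio.imwrite("photo.tiff", rgb)           # store as TIFF
```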


Original image. No sharpening applied.

Blurred image.

neutral balance

Neutral balance, often referred to as white balance, deals with adjusting the colors of an image to allow for color correction, with the ultimate goal of making neutrals appear neutral. Color balancing is often performed on an RGB image using a 3x3 matrix that scales the relative luminances in the image so that an object believed to be neutral actually appears neutral.
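A minimal sketch of that 3x3 scaling matrix (an illustrative addition; the gray-patch values are assumed for the example):

```python
import numpy as np

# Measured RGB of an object known to be neutral (values assumed for illustration).
gray_patch = np.array([200.0, 180.0, 220.0])
gains = gray_patch.mean() / gray_patch   # per-channel gains that equalize the patch
wb_matrix = np.diag(gains)               # the 3x3 scaling matrix described above

def white_balance(rgb):
    # rgb: (H, W, 3) float array; apply the matrix to every pixel's RGB vector.
    return rgb @ wb_matrix.T
```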

exposure correction

Brightness and contrast should be adjusted according to the overall exposure of the image. Levels can be adjusted to change the exposure and to avoid clipping in the highlights and shadows of the image.

colorimetric transform

Often, matrices are applied to the RGB image to convert it to CIE XYZ values, a color space analogous to the LMS cone responses of the human eye. The CIE system defines Y as a luminance value, while Z roughly corresponds to blue stimulation and X is a mix of cone response curves chosen to be non-negative.
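A minimal sketch of one such matrix (an illustrative addition): the standard linear-sRGB (D65) to CIE XYZ transform, assuming the input is already linear, i.e. gamma has been removed.

```python
import numpy as np

RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],   # this row produces Y, the luminance value
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(linear_rgb):
    # linear_rgb: (H, W, 3) array of linear sRGB values in [0, 1]; one matrix multiply per pixel.
    return linear_rgb @ RGB_TO_XYZ.T
```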

tone correction Using the levels adjustment, the user can set the maximum, median, and minimum density values of the image, from 0 to 255, remapping the tones of the image as needed. Curves is another method of stretching or compressing the tones of an image.

sharpening

The unsharp mask method of sharpening works by creating a blurred digital copy of the original image, which is then subtracted from the values of the original. This leaves only the high frequency areas (edges, lines, details). This high frequency layer is then added back to the original image to make the details more pronounced, consequently sharpening the image. A visual representation of this process can be seen in the accompanying images, and a code sketch follows the lens correction entry below.

Process of Unsharp Masking: Once the blurred image is subtracted from the original, only the high frequency areas remain. To receive the final sharpened image, the high frequency image must be added back to the original.

gamma correction

Gamma correction is a nonlinear operation used to code or decode luminance or tristimulus values in order to correct the overall brightness of an image, often for computer monitors.

lens correction

A common problem in many images is lens distortion, which occurs when light rays are bent by the curved glass used in photographic lenses, making areas of the image appear bent or out of proportion. Lens correction software in post processing analyzes the image and adjusts it according to the type of lens that was used to create the image.
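Returning to the unsharp mask procedure described under sharpening above, here is a minimal sketch using Pillow (an illustrative addition; the file name and blur radius are assumed):

```python
from PIL import Image, ImageChops, ImageFilter

img = Image.open("original.jpg").convert("L")              # placeholder file name
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))   # blurred digital copy
high_freq = ImageChops.subtract(img, blurred)              # keep only edges and details
sharpened = ImageChops.add(img, high_freq)                 # add the detail layer back
# Pillow's built-in ImageFilter.UnsharpMask performs the same operation in one step.
```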




image output



additive color system The primary colors of the additive color system are red, green, and blue; the secondary colors are cyan, magenta, and yellow. When all three of the primary colors are added together in equal parts, white light is produced. Displays such as computer monitors, LEDs, phone displays, and projector systems are all based on a combination of red, green, and blue lights to make up an image.
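A minimal sketch of additive mixing with 8-bit RGB values (an illustrative addition): equal parts of the primaries sum to white, and pairs produce the secondaries.

```python
red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def add_light(*colors):
    # Sum each channel, clipping at 255, the maximum intensity of an 8-bit display.
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

print(add_light(red, green, blue))   # (255, 255, 255): all three primaries give white
print(add_light(red, green))         # (255, 255, 0):   red + green give yellow, a secondary
```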

Original Image

Yellow

Magenta

Cyan


subtractive color system The primary colors of the subtractive color system are cyan, magenta, and yellow, while the secondary colors are red, blue, and green. When all three of the primary colors are added together in equal portions, black is produced. This system is associated with the pigments and dyes used in printing. Often, black is included in the printing equation along with cyan, magenta, and yellow, to provide a true black to the print.
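A companion sketch of subtractive mixing (an illustrative addition, using a simplified ideal-filter model): each ink blocks its complement, so only the light passed by every ink survives.

```python
white = (255, 255, 255)
cyan, magenta, yellow = (0, 255, 255), (255, 0, 255), (255, 255, 0)

def print_inks(*inks):
    # Keep, per channel, only the light that every ink lets through.
    return tuple(min(ink[i] for ink in (white, *inks)) for i in range(3))

print(print_inks(cyan, magenta, yellow))   # (0, 0, 0):   all three primaries give black
print(print_inks(cyan, yellow))            # (0, 255, 0): cyan + yellow give green
```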

Original Image

Green

Blue

Red


printer technologies

inkjet

Two main types of inkjet printers are piezoelectric and thermal inkjet. Piezoelectric technology uses a crystal to eject ink onto the paper; the type of voltage (positive, negative, or none) controls the shape, volume, and speed of the ink droplets. Thermal inkjet printers use bubbles formed by heat to eject ink onto the paper. Drop on demand inkjet printers currently dominate the printing industry. These printers produce halftoned prints at a typically quick pace, but lack in some areas of quality. The water fastness of some dyes becomes a problem, and many inkjet prints are not considered the best method of image archiving.

dyes and pigments

Either dyes or pigments are used in printing processes. Pigments, which are powder based, are transferred to the paper using a vehicle; they are insoluble in the vehicle, which allows them to remain suspended. Dyes are either liquid based or soluble in their vehicle carrier. Both have advantages and disadvantages suited to varied uses. Dyes have good light fastness but poor water fastness, while pigments have both good water and light fastness. Dyes also have a wider gamut and higher color brilliance than pigments.

laser Laser printers are often found in office applications and provide better light and water fastness than inkjet printing. Laser printers allow for higher quality reproductions of text and line art, but not of continuous tone images. A laser printer works by using a laser to etch an image onto a drum, which, with the use of applied ink, then transfers the image to paper.

dye-sublimation Dye-sublimation printers provide the highest quality reproduction of photographs because of their ability to print in continuous tone, and the manner in which the paper absorbs the dye. Dye-sublimation works by using heat to transfer dye to the substrate.

halftoning

Halftoning is the process of converting a continuous tone image into a series of CMYK dots that replicate the original image. The higher the number of dots per pixel, or dots per inch, the more continuous the image will appear to the human eye. Halftoning also trades a lower spatial resolution for a higher tonal resolution.
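A minimal sketch of one halftoning approach (an illustrative addition): ordered dithering of a single grayscale channel with a 4x4 threshold matrix, which turns each pixel into a binary dot.

```python
import numpy as np

# 4x4 ordered-dither (Bayer) threshold matrix, normalized to [0, 1).
THRESHOLDS = np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]) / 16.0

def halftone(gray):
    # gray: (H, W) array of tones in [0, 1]; tile the matrix over the image and
    # turn each pixel into a binary dot (1 = white, 0 = ink).
    h, w = gray.shape
    tiled = np.tile(THRESHOLDS, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)
```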

PPI PPI (Pixels per inch) is the same value as LPI (lines per inch), and applies to monitors or other display formats. The higher the PPI value, the finer the detail in the image. However, when the PPI value of the image decreases, the gradation between pixels becomes less continuous, and the image appears pixelated.

DPI DPI (dots per inch) means different things depending on the device. In a CCD scanner, monitor, digital camera, or continuous tone printer, it essentially translates to PPI. In binary printers, however, such as laser and inkjet printers, it refers to the individual binary dots laid down per inch.

PPI / DPI conversion When converting between PPI and DPI, a standard rule of 16 is used, applicable to images with 256 gray levels. The equation for conversion is PPI (or LPI) = DPI / √(number of dots per pixel). If the image is assumed to have 256 gray levels, so that each halftone cell is a 16 × 16 grid of dots, the equation simplifies to PPI = DPI / 16. There is no need to convert DPI to PPI when printing only line art, since the tone is not continuous and no halftoning will be applied.
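The rule of 16 in code form (an illustrative addition; the printer resolutions are assumed example values):

```python
import math

def dpi_to_ppi(dpi, gray_levels=256):
    # A gray_levels-step halftone cell needs sqrt(gray_levels) dots per side (16 for 256).
    return dpi / math.sqrt(gray_levels)

print(dpi_to_ppi(2400))   # 150.0: a 2400 DPI binary printer yields about 150 PPI/LPI
print(dpi_to_ppi(1440))   # 90.0
```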


1 inch section at 72 ppi

1 inch section at 200 ppi

1 inch section at 300 ppi

1 inch section without halftoning

1 inch section halftoned at an 8 pixel radius

1 inch section halftoned at a 16 pixel radius



image stability and deterioration



types of image loss There are generally three types of changes studied in image permanence research: the fading of the colorants that make up an image, the migration or bleeding of those colorants, and the yellowing of the paper on which the colorants are printed. Fading is usually seen in two ways: overall fading, or desaturation and change in hue. With overall fading, the image tends to lighten and desaturate at a constant rate, while hue shifting tends to happen when one color fades at a faster rate than the others, creating a color cast in the image. Small shifts in color are much more noticeable than overall lightening and fading of an image.

print deterioration forces Heat amplifies chemically driven decay forces, applicable to all prints, no matter what type or location. Heat in the context of print deterioration is considered an environmental force that can accelerate chemical and physical reactions. It is often recommended that prints be stored in a cool area to slow this heat-driven deterioration.

High or low relative humidity (RH) can prove very dangerous to the stability of both digital and traditional prints, causing problems such as sticking together, mold growth, and cracking or bleeding. Image flow occurs within inkjet prints when a high RH causes the inks to bleed; the effects are a loss in sharpness and detail, as well as changes in image density.

Fading of color due to light exposure is also very common. The susceptibility of a print to light, as with all other forces, is highly dependent upon the printing process used to make the image, as well as the type of light that the image is exposed to. The brighter the light source, the faster the image will tend to fade.

Pollutants in the atmosphere are also a force in print deterioration, and have been accelerated in modern society by increased industrialization. Sulfur dioxides and the oxides of nitrogen are common pollutants that add to deterioration, but the dominant force is ozone. Some modern prints, including inkjet prints, are more susceptible to pollutant gases. Even the smallest level of pollutant exposure can cause a print to fade over time, so careful consideration for the protection of prints is key.

Effects of light exposure and coincidental deterioration (figure captions): the original image; a print in which the effects of light exposure are evident but which has not yet reached a state where it can no longer be considered acceptable; and a print in which long term exposure to light has rendered it unacceptable, the paper has yellowed, and delamination is possible.


normal use conditions Critical to testing and research are assumptions about the conditions that prints will be subjected to during actual, real life use. This means that a substantial amount of information and testing must be gathered to provide an accurate depiction of how consumers will use the print materials.

experimental conditions and duration

Example of print deterioration through humidity testing circumstances.

The experimental conditions for testing (light, temperature, and humidity levels as well as pollutant concentrations, air flows, etc.) are usually based on data from previously published image stability research.

converting test results After experiments and research have been conducted, and fading and yellowing endpoints have been determined, a general formula can be applied to convert experimental test results to image-life predictions. The formula is as follows:

image life = (experimental conditions × experimental duration) / (normal use conditions)

Example of print deterioration through air pollution testing circumstances.

An example of using this equation in a real instance of image life calculation is as follows: the test exposure used high-intensity fluorescent lights at 50,000 lux for 24 hours a day, and it took 21 days to reach the first endpoint. It is assumed that a consumer print will actually be displayed in a well lit office setting

under fluorescent lights with an average light intensity of 500 lux for 12 hours a day. Therefore, image life is equal to 50,000 lux times 24 hours/day times 21 days, all divided by 500 lux times 12 hours per day. Image life is then equal to 4,200 days, or about 11.5 years. The same process can be applied to other forces such as heat, humidity, and pollution, allowing the consumer to better predict the possible lifetime of a print under a given condition.
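The worked example above, sketched in code (an illustrative addition; the function name and argument order are of course not part of the original text):

```python
def image_life_days(test_lux, test_hours_per_day, test_days, use_lux, use_hours_per_day):
    test_exposure = test_lux * test_hours_per_day * test_days   # lux-hours to reach the endpoint
    daily_use = use_lux * use_hours_per_day                     # lux-hours per day on display
    return test_exposure / daily_use

days = image_life_days(50_000, 24, 21, 500, 12)
print(days, "days, or about", round(days / 365, 1), "years")    # 4200.0 days, about 11.5 years
```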

importance of image preservation By being able to detect the forces of image deterioration, analyze aspects of actual deterioration, and assess ways of preventing further deterioration, it is possible to maintain high levels of stability and ensure the existence of photographic prints for years to come. Different methods of storage and treatment are available for various types of paper, ink, and other circumstances. Further information and advice regarding image stability and deterioration, and ways of preventing damage or storing images, can be found at the Image Permanence Institute (IPI) at RIT (www.imagepermanenceinstitute.org) and at IPI's Digital Print Preservation Portal, also known as DP3 (www.dp3project.org).



resources

Nitin Sampat: Fundamentals of Imaging; Image Input; Image Processing; Image Output; Dots, Pixels, and Lines
Wikipedia: Demosaicing; Foveon X3; Color Filter Array; Colorimetric Transform; Lens Distortion; Pigments
Nanette Salvaggio: Basic Photographic Materials and Processes
Image Permanence Institute: A Consumer Guide to Understanding Permanence Testing




