
imaging systems
written by Amanda Webster



note from the author

This book was created in Fall 2013 as part of the Imaging Systems Minor at Rochester Institute of Technology in Rochester, New York. Special thanks to Professor Nitin Sampat and the Imaging Systems class for collaboration and guidance during the production of this book.



contents

imaging fundamentals
    tonal resolution
    spectral resolution
    spatial resolution

image input
    color filter arrays
    sensors

image processing
    image processing pipeline
    additive color
    subtractive color

image output
    halftoning
    DPI, PPI, LPI
    printer technologies
    monitor/display

jpeg 2000
    jpeg 2000 v. PNG

resources




imaging fundamentals

In this chapter, we will go over tonal resolution, spectral resolution, and spatial resolution. To estimate the size of an image, you multiply all three of these values together. This is especially important when deciding how large your images should be when storage space is limited.
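As a rough illustration, an uncompressed file size can be estimated by multiplying the spatial resolution (pixel count), the spectral resolution (number of channels), and the tonal resolution (bits per channel). A minimal sketch in Python; the 3000 x 2000 pixel dimensions are just example values:

```python
# Rough uncompressed size: spatial x spectral x tonal resolution
width, height = 3000, 2000      # spatial resolution (example values)
channels = 3                    # spectral resolution (R, G, B)
bits_per_channel = 8            # tonal resolution

total_bits = width * height * channels * bits_per_channel
print(total_bits / 8 / 1024 / 1024, "MB uncompressed")  # about 17.2 MB
```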


tonal resolution

Tonal resolution describes the number of gray levels within an image. Every RGB image has three channels, and each channel has a bit depth that tells us how many gray levels it can represent.

how do you calculate the levels in an image?

2^x = number of gray levels, where x = number of bits per pixel
Example: 2^5 = 32 gray levels/channel



8 bit (256 levels of gray)

Most digital cameras produce images with 256 gray levels. At this bit depth, the image exhibits complete tonal blending.

5 bit (32 levels of gray)

As you can see, there is very little posterization in the image at 32 levels; however, banding is noticeable in the grayscale gradient.

4 bit (16 levels of gray)

With 16 gray levels, we begin to see less tonal blending and more banding in the photograph.

3 bit (8 levels of gray)

At 3 bits, banding is clearly visible, and tonal blending is further reduced in the image.

2 bit (4 levels of gray)

A 2 bit image with 4 gray levels will show posterization and no tonal blending.



spectral resolution

Spectral resolution refers to the three monochromatic images that represent the red, green, and blue channels of an image. How each channel appears depends on the amount of light read by the camera’s sensor. This light information corresponds to specific wavelengths within the visible spectrum, which runs from about 400 to 700 nanometers. The red channel covers roughly 620 to 740 nanometers, the green channel roughly 520 to 570 nanometers, and the blue channel roughly 450 to 495 nanometers. Read together, these three channels form one full color image. The images to the right depict a full color image and its three monochromatic channel images.
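To make the idea concrete, here is a minimal sketch that splits an image into its three monochromatic channel images using NumPy and Pillow; the file name photo.jpg is just a placeholder:

```python
import numpy as np
from PIL import Image

# Load an RGB image as an H x W x 3 array
rgb = np.asarray(Image.open("photo.jpg").convert("RGB"))

# Each channel is a monochromatic (grayscale) image of the same size
red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]
Image.fromarray(red).save("red_channel.png")
Image.fromarray(green).save("green_channel.png")
Image.fromarray(blue).save("blue_channel.png")
```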

[Figure: the red, green, and blue channel images, shown beside a visible spectrum scale from 700 nm down to 400 nm — red 650 nm, orange 590 nm, yellow 570 nm, green 510 nm, blue 475 nm, indigo 445 nm, violet 400 nm]




spatial resolution

Spatial resolution deals with the image’s x and y dimensions. In other words, whenever an image is resized, its spatial resolution is manipulated. In this chapter, we will look at three methods of resizing images: nearest neighbor, bilinear, and bicubic interpolation.

nearest neighbor interpolation

This process produces the most noticeable artifacts and pixelation around hard edges. However, it is the fastest of the three methods when resizing an image.

bilinear interpolation

Bilinear interpolation does not produce as many noticeable artifacts as nearest neighbor. It takes longer than nearest neighbor when resizing your images.

bicubic interpolation

Bicubic interpolation is the most accurate of the three methods when resizing your images. As you can see, there is little to no visible artifacting, and hard edges are maintained.
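A minimal sketch of the three resizing methods using the Pillow library; photo.jpg and the 2x upsample are placeholder values (on recent Pillow versions the same constants are also available under Image.Resampling):

```python
from PIL import Image

img = Image.open("photo.jpg")
new_size = (img.width * 2, img.height * 2)       # e.g. upsample by 2x

nearest = img.resize(new_size, Image.NEAREST)    # fastest, most pixelation
bilinear = img.resize(new_size, Image.BILINEAR)  # slower, fewer artifacts
bicubic = img.resize(new_size, Image.BICUBIC)    # slowest, smoothest result
```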





image input

Image input refers to what happens during image capture. In this chapter, we will cover how light is read in digital photography and common color filter array patterns.


color filter array

Unlike analog systems, which record images on film, digital cameras record images with sensors. A colored filter sits over each pixel on the light sensitive sensor. Because the human eye is roughly twice as sensitive to green, more green filters are present than red or blue. The arrangement of colored filters over the pixels is called a color filter array (CFA). CFA interpolation, or demosaicing, reconstructs a full color image from these incomplete color samples. This process creates three separate images of the same size; typically each image represents red, green, or blue.

There are many different filter arrays, but here we will focus on four common ones: the bayer pattern, CYYM, RGBW, and CYGM. The bayer pattern is the most common color filter array found in digital cameras. This pattern uses the additive system, with twice as many green filters as red or blue. The CYYM filter array uses twice as many yellow filters as magenta and cyan; it is used in a few Kodak cameras. Notice that this filter, along with the CYGM filter, uses subtractive colors rather than additive. The RGBW filter adds white (unfiltered) elements over some pixels, allowing all wavelengths of visible light to pass through. The CYGM filter uses equal parts cyan, yellow, green, and magenta.
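As an illustration of how a bayer (RGGB) pattern samples only one color per pixel, here is a minimal sketch in NumPy; the input is assumed to be an H x W x 3 RGB array with even dimensions:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through an RGGB bayer pattern.

    Returns a single-channel mosaic where each pixel keeps only the
    value of the color filter that sits above it.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red filters
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green filters
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green filters (twice as many green)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue filters
    return mosaic
```

Demosaicing would then interpolate the two missing colors at every pixel from the surrounding samples.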

An example of how the bayer pattern reads light


RGBW pattern

Bayer pattern

CYYM pattern

CYGM pattern




sensors

ccd sensors

CCD, or charge-coupled device, sensors capture information using a “global shutter”. This means that the entire image is captured at once and processed afterward. CCD sensors are more expensive to produce and less susceptible to noise.

Interline transfer CCD sensors are used in camcorders and have registers along each line of pixels to prevent blooming if a neighboring pixel becomes overexposed.

Full frame CCD sensors transfer each row of pixels sequentially and ultimately read them out as an image. Most full frame transfer CCD sensors do not suffer from blooming as lower end CCDs do.

cmos sensors

CMOS, or complementary metal-oxide-semiconductor, sensors capture and read each line of pixels sequentially. This method of capturing information is called a “rolling shutter”.

Passive pixel CMOS sensors are less expensive than CCD sensors, use less power than active pixel CMOS sensors, and are less susceptible to blooming. Blooming occurs when a pixel becomes oversaturated with light. Passive pixel CMOS sensors do, however, have low dynamic range and often suffer from fixed pattern noise due to the arrangement of the amplifiers and pixels on the chip.

Active pixel CMOS sensors (APS) are most often found in cell phone cameras, web cameras, and some DSLRs. In addition to the advantages they share with passive pixel sensors, active pixel sensors demonstrate less noise in images and a high level of integration.



foveon x3 sensors

Foveon x3 sensors are a type of CMOS sensor and are similar to analog systems in the way they read light. Foveon sensors use an array of pixels made of vertically stacked photodiodes. Because the sensor is made of silicon, and different wavelengths penetrate silicon to different depths, each photodiode responds to a separate band of wavelengths. The data produced provides information in the red, green, and blue channels. Color artifacting is not seen in foveon sensors because demosaicing is not needed; however, color accuracy suffers, because the silicon layers allow more cross-contamination between channels.





image processing

There are many steps involved in image processing. While many take place during input, there are other steps that can occur during post production and often vary depending on artistic preferences. In this chapter, we will illustrate each of these steps in the image processing pipeline as well as additive and subtractive color modes.


image processing pipeline

1 Light enters the lens In order to capture an image, light must enter through the lens. This is the first step in image processing, as an image must be present before anything can be processed.

2 Analog to Digital Conversion When you record an image with a digital camera, the sensor simply relays voltage. An analog to digital converter is necessary to convert these voltages to numeric values so that a computer can read them. These numeric values are then divided up into groups of red, green, and blue.

3 RAW File Format The image information recorded during capture is then stored in a RAW file format. Depending on your camera, you may see a .CR2, .CRW, .NEF, or .PEF file extension.

4 Gamma Correction Gamma correction is automatically applied once you have captured an image. An inverse curve is applied to the image so that the brightness levels appear correct to the human eye (a simple sketch of this step appears after the pipeline).

5 CFA Interpolation As mentioned in the previous chapter, CFA interpolation reconstructs a full color image from incomplete color samples.

6 Neutral Balance Neutral balance is also applied in camera. Neutral balance corrects color casts introduced during capture.

7 Raw Processor A raw processor such as Capture One, Adobe Camera Raw, or Lightroom allows your image to be read and edited on a computer.

8 Artistic Choices Once the image is opened and ready to edit, artistic preferences regarding exposure, contrast, and sharpening may be applied. The images to the right (Exposure, Contrast, Sharpening) illustrate these choices and manipulations.

9 Storage Finally, choose your file format, export, and save to a storage device. Your choice of file format may depend on the storage space available and on the final output, such as print or web.
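A minimal sketch of the gamma correction step (step 4); the gamma value of 2.2 is a common assumption rather than something specified in this book:

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Apply an inverse-power curve to linear sensor values in [0, 1]."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)
```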



additive color

The additive color space refers to the mixing of light of different colors. The additive primaries are red, green, and blue. When mixed in pairs, these primaries create cyan, magenta, and yellow, and when all three are combined, white is produced. Because this space is based on light, additive color is most commonly seen in computer monitors and television screens. To the right, additive colors are mixed to produce cyan, magenta, and yellow. The color channels have been overlaid slightly offset so that the two additive colors and their product are all visible in each image.



[Figure: additive mixing — green and blue overlap to make cyan, red and blue make magenta, and red and green make yellow]



subtractive color

While the additive color space refers to the mixing of light, the subtractive color model mixes dyes, inks, pigments, or natural colorants to produce a wide variety of colors. The subtractive primaries are cyan, magenta, and yellow. We see subtractive color most often in printing. The images to the right illustrate the subtractive colors and the additive colors they create when mixed. Keep in mind that black ink is almost always included alongside the subtractive colors when printing.
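A minimal sketch of the relationship between the two models: with normalized values, each subtractive primary is the complement of an additive primary (black ink, the K in CMYK, is handled separately in real printing workflows):

```python
def rgb_to_cmy(r, g, b):
    """Convert normalized RGB (0-1) values to their subtractive complements."""
    return 1.0 - r, 1.0 - g, 1.0 - b

# Pure red -> C=0, M=1, Y=1: red is printed with magenta and yellow ink, no cyan
print(rgb_to_cmy(1.0, 0.0, 0.0))
```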



[Figure: subtractive mixing — cyan and magenta overlap to make blue, cyan and yellow make green, and magenta and yellow make red]



image output

The final component in an imaging system is image output. In this chapter, we will look at processes that occur during output and at the different types of output devices your images can end up on, such as printers and displays.


halftoning

Halftoning occurs during the printing process of image output. The halftoning process allows printers to recreate continuous images with binary elements, or small black dots. These dots are laid down in a specific arrangement and size to recreate photographs when printing. At a normal viewing distance, the human eye will view halftone images as continuous tone images; though the small dots can be viewed through a microscope or an instrument like a loupe.

Notice the dots in both the grayscale and color halftoned images to the right. To print a color image, CMYK dots are arranged and overlap each other to make up the tones and colors of the original photograph. When printing a grayscale image, only black dots are laid down in specific arrangements to reproduce the continuous tones.
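Digital halftoning can be approximated with ordered dithering against a threshold matrix; a minimal sketch, assuming a grayscale image supplied as a float array in [0, 1]:

```python
import numpy as np

# 4x4 Bayer threshold matrix, normalized to (0, 1)
BAYER_4X4 = (np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) + 0.5) / 16.0

def halftone(gray):
    """Turn a grayscale image in [0, 1] into a binary (black/white) image."""
    h, w = gray.shape
    # Tile the threshold matrix over the whole image, then threshold each pixel
    thresholds = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > thresholds).astype(np.uint8)  # 1 = white, 0 = black dot
```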



dpi, lpi, ppi

The terms DPI and PPI are often used interchangeably, which is incorrect, and we often need to convert between the two. To do this, the rule of 16 is used: LPI (or PPI) = DPI / sqrt(number of gray levels). When the number of gray levels is unknown, assume 256, so the divisor is sqrt(256) = 16. The rule of 16 only applies to continuous tone images reproduced with halftoning.

DPI, or dots per inch, refers to the number of dots that make up the printed image. DPI describes binary devices such as halftone printers, inkjets, and laser printers and is most often associated with line art and text.
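A small worked example of the rule of 16; the 1200 dpi printer resolution is just an assumed value:

```python
import math

dpi = 1200          # printer dot resolution (assumed example)
gray_levels = 256   # levels each halftone cell must reproduce

lpi = dpi / math.sqrt(gray_levels)   # rule of 16: sqrt(256) = 16
print(lpi)                           # 75.0 lines per inch
```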

1 inch section at 72 dpi

1 inch section at 300 dpi

PPI, or pixels per inch, refers to the resolution of a continuous tone image on devices such as scanners, computer monitors, and digital cameras. As the PPI increases, the better an image looks on these displays; as PPI decreases, an image appears pixelated. Most images are prepared at 300 ppi, but many monitors display only around 72 ppi.

LPI, or lines per inch, is closely related to PPI but describes output resolution. The lines refer to the halftone screens used during high end printing. Each screen is angled and arranged specifically to avoid moiré patterns. A moiré pattern appears when repeating patterns, such as halftone dots from a printer or scanner, interact with one another; it can be prevented by angling the screens differently. An example of a moiré pattern is shown to the right.



printers

inkjet printers

There are two types of inkjet printers: drop on demand and continuous tone inkjet.

Drop on demand inkjet printers are currently the most popular inkjets on the market. They implement halftoning and use a more compact technology, but their printing process is slower than other printing processes. There are two types of drop on demand technologies: piezoelectric and thermal inkjets. Piezoelectric inkjets use charged crystals to eject ink; the electric voltage controls the shape, volume, and speed of the ink drops. Thermal inkjets use heated bubbles, which expand and eject ink onto the paper.

Continuous tone inkjet printers are extremely expensive and most often used for large prints. Unlike drop on demand inkjets, these printers do not use the halftoning process.

dye sublimation printers

Dye sublimation printers are best for photographic output. These printers do not implement halftoning. Dye sublimation printers are much slower than most printing technologies, and their output sizes are often restricted.

laser jet printers

Laser printers are what we most often see in homes and offices. The cost per print is much lower than with other printer technologies. These printers are best for line art and text based documents.

Dye sublimation

Laser jet


displays

Because monitors are continuous tone devices, they display colors through the use of pixels instead of dots. Monitors and displays must be calibrated with a colorimeter in order to display colors properly.

light emitting diode displays

Light emitting diode (LED) displays are generally used for outdoor signage, billboards, and television displays. These displays use an array of red, green, and blue diodes that together form a full color pixel.

liquid crystal displays

Liquid crystal displays (LCD) are flat panel displays that use the light modulating properties of liquid crystals, which do not emit light directly. LCD screens are used most often in computer monitors, televisions, and phones.

cathode ray tube displays

Cathode ray tube (CRT) displays use an electron gun and a fluorescent screen to display images. CRTs are vacuum tubes; they were long used in televisions but are being phased out by LCD and LED technologies.




JPEG 2000

In this chapter, we will look at the benefits of JPEG 2000 and its applications. JPEG 2000 is a file format you may utilize when compressing images after post processing.


what is it?

JPEG 2000 was created by the Joint Photographic Experts Group committee as an improvement on the standard JPEG file format. The underlying difference between standard JPEG and JPEG 2000 is the compression algorithm; as image makers, however, we are mostly concerned with the compression options. When saving a file as JPEG 2000, the user is prompted to choose between high and low compression rates, both of which this file format supports. As you can see in the bottom image to the right, you can also decide whether you want lossy or lossless compression. This decision will most likely be based on the amount of storage space available.

Regarding image quality, JPEG 2000 is an improvement on the standard JPEG format: there is noticeably less artifacting. JPEG 2000 files are also more efficient in post production. JPEG 2000 images are easier to scale and edit, can handle 256 channels, support a large range of bit rates, and are easier to use in frame based video editing and storage.

applications

JPEG 2000 shows its best results on images with low contrast and is often used for military surveillance, medical imagery, and consumer products such as digital cameras and 3G mobile phones. When you need to store many image files with a limited amount of storage available, consider the JPEG 2000 format because of its noticeable improvements over standard JPEG. The biggest downside to using JPEG 2000 is that not all software programs outside of Adobe support this file type.
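As an illustration, here is a minimal sketch of saving a JPEG 2000 file with Pillow; this assumes your Pillow build includes OpenJPEG support, and the file names and compression ratio are placeholder values:

```python
from PIL import Image

img = Image.open("photo.tif")

# Lossless JPEG 2000 (Pillow uses the reversible transform by default)
img.save("photo_lossless.jp2")

# Lossy JPEG 2000 at roughly 20:1 compression
img.save("photo_lossy.jp2",
         irreversible=True,        # lossy (irreversible) wavelet transform
         quality_mode="rates",
         quality_layers=[20])
```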


Standard JPEG 603 KB

JPEG 2000 lossless 13.8 MB


jpeg 2000 v png

A popular application for JPEG 2000 is the web. Many image sharing sites and web hosts require a compressed image in order to make a post, which is why users may consider JPEG 2000. A PNG file, which stands for Portable Network Graphics, is a raster graphics file format that supports lossless compression, just like JPEG 2000. PNG is recommended over other file types for web use because of its lossless compression. PNG was created to replace the GIF, or Graphics Interchange Format, for use on the Internet, not for professional photographic printing. Because PNG was created specifically for the web, it does not support non-RGB color spaces like CMYK.

As you can see, there is not much difference in image quality between PNG and JPEG 2000. Both are valid file types for the web; however, because JPEG 2000 is not widely supported at this point, most users are continuing to compress their raw files as PNG.

PNG

JPEG 2000
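A quick way to compare the two formats on your own images is to save the same file both ways and compare sizes; a minimal sketch (again assuming OpenJPEG support in Pillow, with photo.tif as a placeholder input):

```python
import os
from PIL import Image

img = Image.open("photo.tif")
img.save("compressed.png")   # lossless PNG
img.save("compressed.jp2")   # lossless JPEG 2000

for path in ("compressed.png", "compressed.jp2"):
    print(path, os.path.getsize(path) / 1024, "KB")
```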





resources

http://www.dpreview.com/glossary/resources/images/123di_cfa.gif
http://www.hatiandskoll.com/wp-content/uploads/2013/02/PhotographicFilm.png
http://martinmoorephotography.files.wordpress.com/2012/01/k-r-cmos-sensor.jpg
http://www.uwgb.edu/heuerc/2d/rgb-cymk.gif
http://upload.wikimedia.org/wikipedia/commons/1/1f/Moire-(quadrat)-1.png
http://www.learnthenet.com/assets/images/dpi.png
http://www.sudcamp.com/wp-content/uploads/2012/08/ccd.jpg
http://www.hometheaterhifi.com/volume_11_1/images/sigma-sd-10-camera-foveon-vs-film-diagram.jpg
http://pho.to/media/images/digital/digital-sensors.jpg
http://blog.sigmaphoto.com/wp-content/uploads/2011/01/X3_Illustration.jpg
http://thumbnails.visually.netdna-cdn.com/global-spectrum_517aa50859104_w1500.gif
http://upload.wikimedia.org/wikipedia/commons/thumb/c/c0/CMOS_Image_Sensor_Mechanism_Illustration.svg/370px-CMOS_Image_Sensor_Mechanism_Illustration.svg.png
http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/images/interlineccd.jpg
http://micro.magnet.fsu.edu/primer/digitalimaging/concepts/images/fullframeccd.jpg
http://content.answcdn.com/main/content/img/CDE/DYETHERM.GIF
http://www.xennia.com/xennia/uploads/cij-dod-diagram.gif
http://www.iconhot.com/icon/png/file-icons-vs-2/256/jpg-2.png
http://iconbug.com/data/60/507/e534d267d49c34a2b352311aa57449de.png
http://icons.iconarchive.com/icons/mat-u/camill/256/Filetype-png-icon.png
http://www.cerbere.org/dain/blog/data/images/psd.png
http://s3.hubimg.com/u/7961770_f520.jpg
http://thecustomizewindows.com/wp-content/uploads/2011/07/Raw-Image-Format-Files-are-now-supported-inWindows-7-Explorer.jpg
file:///Users/localuser/Desktop/camera%20raw.jpg
http://i1-news.softpedia-static.com/images/news2/Downlo-Adobe-Camera-Raw-6-7-for-Mac-2.jpg
http://media.web.britannica.com/eb-media/30/27030-050-9546928D.jpg
http://www.naylor.com/images/300600.jpg



