An Introduction to Imaging Systems

Courtney Di Stasio



This book was created during the 2013 fall semester at the Rochester Institute of Technology for the Imaging Systems minor. All content was created under the instruction of Professor Nitin Sampat. Photographs by Courtney Di Stasio unless stated otherwise.



Table of Contents

Image Fundamentals
Input
Processing
Output
Special Topic: Lytro Camera



Image Fundamentals

Spatial Resolution
Tonal Resolution
Spectral Resolution


Image Fundamentals

Spatial Resolution Angular resolution, better known as spatial resolution, describes the resolving power of a given image. The original image (Figure 1) was enlarged by 150% and copied three times. Each copy was then resized using a different type of interpolation. Interpolation is a method of constructing new data points from a range of known data points; put simply, it is a form of estimation. Three common interpolation algorithms are nearest neighbor, bicubic, and bilinear. The three copies of Figure 1 were magnified by 200% in order to demonstrate the effects of each type of interpolation.

Figure 1



Nearest Neighbor Interpolation Also known as point sampling, this algorithm assigns each new pixel the value of the single closest pixel in the original image. Though this method is great at preserving hard edges, it often leaves an image looking pixelated.

Bicubic Interpolation When speed is not an issue, bicubic interpolation is chosen over nearest neighbor or bilinear interpolations. Bicubic interpolation considers a 4x4 (16 pixel) grid around a given pixel. This algorithm provides a smoother image and fewer interpolation artifacts.

Bilinear Interpolation Much like bicubic interpolation, bilinear interpolation takes a 2x2 (4 pixel) grid into consideration when determining a given pixel’s value. This algorithm is used to achieve a smooth pixel transition when resizing an image and works faster than bicubic interpolation.
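To compare the three methods side by side, here is a minimal sketch using the Pillow library (an assumption; the book's figures were prepared in Photoshop). The filename is a placeholder.

    from PIL import Image

    img = Image.open("figure1.jpg")               # placeholder path
    size = (img.width * 2, img.height * 2)        # magnify by 200%

    nearest  = img.resize(size, Image.NEAREST)    # hard edges, but pixelated
    bilinear = img.resize(size, Image.BILINEAR)   # 2x2 neighborhood, fast and smooth
    bicubic  = img.resize(size, Image.BICUBIC)    # 4x4 neighborhood, smoothest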




Tonal Resolution Our eyes can recognize the transitions between colors. Whether those transitions appear smooth in an image is determined by its tonal resolution. Tonal resolution is defined by the image's bit depth: an image with n bits per channel can represent 2^n levels from the darkest dark to the lightest light. Figure 2 has been divided into sections to demonstrate the relationship between the number of bits/levels in an image and the quality of its tonal transitions, corresponding to the gradient chart below.

Figure 2



256 levels = 8 bits

64 levels = 6 bits

16 levels = 4 bits

8 levels = 3 bits

4 levels = 2 bits
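The same effect can be reproduced in code. Below is a minimal sketch, using NumPy, that posterizes an 8-bit ramp down to a given number of bits; the ramp stands in for the gradient chart above.

    import numpy as np

    # A 256-level horizontal ramp standing in for the gradient chart.
    gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))

    def quantize(img, bits):
        levels = 2 ** bits            # e.g. 4 bits -> 16 levels
        step = 256 // levels
        return (img // step) * step   # snap each pixel to the nearest lower level

    four_bit = quantize(gradient, 4)  # visible banding
    two_bit  = quantize(gradient, 2)  # only 4 tones remain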




Spectral Resolution Spectral, or color, resolution breaks a color image down into channels. These channels map the areas in which certain tones reside. The first of the two color spaces we will discuss is RGB. This color space is made up of the additive primaries red, green, and blue; additive colors are created by combining light of two or more different colors. RGB is popular in devices that display color, like computer monitors. The second color space is CMYK, which is made up of the subtractive primaries cyan, magenta, and yellow plus black (the key). This color space is most commonly used in printing processes.

Figure 3



In the examples below, you can see that Figure 3 has been separated into channels. These channels represent the additive colors of the RGB color space and the subtractive colors of the CMYK color space. Even though the channels are separated into different colors, the channels of each color space can be combined to recreate the image you see on the left (Figure 3).

The red, green and blue channels above are combined to create the continuous tone image that you see to the left. The cyan, magenta, and yellow channels below can also be combined to create the continuous tone image to the left.
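For readers who want to reproduce the separation, here is a minimal sketch with the Pillow library (an assumption; any tool with channel operations would do). The filename is a placeholder.

    from PIL import Image

    img = Image.open("figure3.jpg")               # placeholder path

    r, g, b = img.convert("RGB").split()          # additive channels
    c, m, y, k = img.convert("CMYK").split()      # subtractive channels + black

    # Merging the channels back recreates the continuous tone image.
    restored = Image.merge("RGB", (r, g, b))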





Input

Sensors
Demosaicing


Input

Sensors Both film and digital cameras rely on light-sensitive materials to record an image. Film cameras collect image data on film strips: layered strips of specially coated plastic. The base layer carries a silver-halide emulsion, and above it sit three color-sensitive layers corresponding to red, green, and blue. When an image is captured, light travels through these layers and is absorbed, leaving a latent image that remains invisible until development. Digital cameras, however, use digital sensors: light-sensitive silicon chips that record the information from the scene. A digital sensor is also covered with layers of color filters to aid in collecting color information.


Images from Google Images


CMOS The complementary metal-oxide-semiconductor sensor, or CMOS sensor, uses a unique electrical process. Circuitry attached to each photo sensor converts light energy to a voltage. Then additional circuitry converts those voltages into digital information. CMOS sensors can be made with fewer components, use less power, and produce a faster readout than CCD sensors.

CCD The charge-coupled device, or CCD sensor, is an analog device. It is made up of many cells, or photosites, and when light hits the sensor it is held as a small electrical charge. The charges are then converted to voltages one pixel at a time. CCD technology is more mature and tends to have better noise characteristics than CMOS; however, neither sensor yields a clear advantage in image quality.

Foveon X3 The Foveon X3 sensor is essentially a CMOS sensor created by Foveon Inc. Instead of a single mosaic of color filters over the sensor, it stacks red, green, and blue layers. Each layer responds to different light wavelengths, resulting in three unique spectral sensitivity curves. Foveon X3 sensors have a lower spatial resolution than other sensors, but a higher spectral resolution.

Images from Google Images




Color Filter Array Much like film strips, digital sensors are covered with colored filters used to capture color information. The most common color filter array, or CFA, is the Bayer pattern, a mosaic of red, green, and blue filters. Green filters outnumber the red and blue because of the way our eyes work: the human eye is most sensitive to green light.
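As an illustration, the sketch below simulates an RGGB Bayer mosaic with NumPy: each photosite keeps only the one channel its filter passes.

    import numpy as np

    def bayer_mosaic(rgb):
        """rgb: (H, W, 3) array. Returns the single-channel sensor mosaic."""
        h, w, _ = rgb.shape
        mosaic = np.zeros((h, w), dtype=rgb.dtype)
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green (twice as many as red or blue)
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue
        return mosaic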


Images from Google Images


Demosaicing

Figure 5

Demosaicing is the process of reconstructing the missing color information in an image. This process begins as soon as the image is captured and occurs in-camera before we see the image. The color filter array tells the camera how much of each color was absorbed at each photosite; the demosaicing algorithm then fills in the missing color information. Figure 5 above demonstrates what an image looks like before the demosaicing algorithm is applied, and Figure 6 shows the same image after demosaicing.
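One simple demosaicing method is bilinear interpolation of each sparse channel. The sketch below, continuing the RGGB mosaic example above, is an illustrative approach with NumPy and SciPy, not the algorithm any particular camera uses.

    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_bilinear(mosaic):
        h, w = mosaic.shape
        # Masks marking where each color was actually sampled (RGGB layout).
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask

        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

        # Each convolution averages the nearest recorded samples of that color.
        r = convolve2d(mosaic * r_mask, k_rb, mode="same")
        g = convolve2d(mosaic * g_mask, k_g,  mode="same")
        b = convolve2d(mosaic * b_mask, k_rb, mode="same")
        return np.dstack([r, g, b])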

Figure 6





Processing

Imaging Pipeline
Histograms
Image Adjustments


Processing

Imaging Pipeline

Step 1: Image capture
Step 2: Film/sensor records light
Step 3: Sensor corrects corrupted pixels
Step 4: Analog signal recorded
Step 5: Analog signal converted to digital (ADC)
Step 6: RAW file from ADC
Step 7: RAW processor
Step 8: CFA interpolation
Step 9: Neutral balance
Step 10: Gamma correction
Step 11: Monitor/printer calibration
Step 12: Further image corrections
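As a rough illustration, the digital half of the pipeline can be sketched as a chain of operations. The code below reuses the demosaic_bilinear() sketch from the Input chapter; the white-balance gains and the gamma value of 2.2 are illustrative assumptions, not any camera's actual values.

    import numpy as np

    def develop(mosaic):
        rgb = demosaic_bilinear(mosaic)         # Step 8: CFA interpolation
        rgb = rgb * np.array([2.0, 1.0, 1.5])   # Step 9: neutral balance (example gains)
        rgb = np.clip(rgb / 255.0, 0.0, 1.0)
        rgb = rgb ** (1.0 / 2.2)                # Step 10: gamma correction for display
        return (rgb * 255).astype(np.uint8)     # ready for further corrections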




Histograms The histogram is quite possibly the most helpful tool in photography. Unfortunately, it is also one of the most misunderstood and neglected. In this section we will learn what a histogram does and how to read one. A histogram is a graphic representation of the tonal levels in a digital image. In an 8-bit image, tonal levels range from 0 to 255, with the lowest referring to the blackest black and the highest to the whitest white. Clipping occurs when shadow or highlight information is pushed past either end of that range and lost. Figures 7, 8, and 9 represent the same image, split into three sections and edited to be underexposed, properly exposed, and overexposed; their corresponding histograms are discussed below.

Figure 7


Figure 8

Figure 9


Underexposed When an image is underexposed, there is not enough light to fully illuminate the scene. The histogram of an underexposed image, like Figure 7, shifts to the left side, telling the photographer that there is a lot of dark information. In this case, clipping has occurred on the left, and the empty right-hand side indicates there is no bright information.

Properly Exposed The histogram of a properly exposed image, like Figure 8, shows tonal information throughout the entire histogram. Both dark and light levels are present, but there is no clipping.

Overexposed Figure 9 represents an overexposed image. The histogram shows both light and dark information, but the image is so blown out that a lot of information is missing. The histogram shows this with gaps between the lines; the spaces represent missing tonal levels. The spike on the right-hand side represents highlight clipping.
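A histogram is easy to compute directly. The sketch below uses NumPy on an 8-bit grayscale array; the 1% clipping threshold is an arbitrary assumption for illustration.

    import numpy as np

    def analyze_exposure(img):
        counts, _ = np.histogram(img, bins=256, range=(0, 256))
        if counts[0] / img.size > 0.01:         # pile-up at level 0
            print("shadow clipping detected")
        if counts[255] / img.size > 0.01:       # pile-up at level 255
            print("highlight clipping detected")
        return counts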




Noise Reduction Image noise is a random variation of brightness or color that is not present in the original scene. It is a product of the sensor circuitry or the digital camera itself and produces a fuzzy, snow-like arrangement of dots. Noise is an unwanted signal and can be removed using several different post-processing techniques. Figures 10 and 11 represent one image that has been divided in half. Figure 10 shows clear evidence of image noise. In this example, a median filter was applied in Photoshop in an attempt to remove the unwanted noise while keeping edges sharp. Figure 11 shows the same image after the median filter was applied.
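As a stand-in for Photoshop's Median filter, here is a minimal sketch using SciPy:

    from scipy.ndimage import median_filter

    def denoise(img, size=3):
        # Replace each pixel with the median of its size x size neighborhood,
        # suppressing isolated noisy dots while keeping edges sharp.
        return median_filter(img, size=size)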

Figure 10


Figure 11


Sharpening To make things clear, sharpening is a way to define soft edges. Though you cannot take a completely blurry image and bring it to pristine sharpness, there are several sharpening tools you can use to adjust edge strength. Some examples include convolution, using a high-pass filter, or, in this case, unsharp masking. Though it sounds contradictory, unsharp masking takes the original image, subtracts a blurred version, and adds the result back to the original, producing a sharpened image. The image to the left has been divided in half. Figure 12 represents a blurred image before the unsharp mask was applied; Figure 13 is the final, sharpened image after unsharp masking. All of these techniques can be found in Photoshop and are quick, easy solutions for a slightly blurry image.

Figure 12
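Unsharp masking is simple enough to express directly. The sketch below uses NumPy and SciPy on a grayscale array; the radius and amount parameters are illustrative defaults.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, radius=2.0, amount=1.0):
        img = img.astype(float)
        blurred = gaussian_filter(img, sigma=radius)
        detail = img - blurred                  # the "mask": mostly edge information
        sharpened = img + amount * detail       # add the edges back to the original
        return np.clip(sharpened, 0, 255).astype(np.uint8)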

Figure 13




RGB vs. L*a*b* In this section, we will discuss the difference between RGB processing and L*a*b* processing. To the far right, you will see two replications of Figure 14. Both images have had their contrast increased by 75%, the first using RGB processing and the second using L*a*b*. Though the differences are subtle, if you look closely you can spot them.

Figure 14



RGB Processing Increasing the contrast of an image using the RGB process is quite simple, but it doesn't give you much control over the individual tones. When you increase the contrast of an RGB image, you are adjusting all three channels at once. Notice that the subject's hair appears more red than it does in Figure 14, a yellow tone appears in her skin, and the color of her dress appears much darker. Though bumping up the contrast in RGB processing is quick and easy, it might not be the best option.

L*a*b* Processing Much like the RGB color space has three channels, L*a*b* is a three-axis color system. L*, the first axis, represents lightness; a* represents colors moving from green to magenta/red; and b* represents colors moving from blue to yellow. Since an entire channel is dedicated to lightness, adjusting L* provides complete control over contrast without altering colors. When comparing the L*a*b*-processed image with Figure 14, you will see that the colors remain the same and the difference lies in the highlights and shadows.
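Here is a minimal sketch of an L*-only contrast boost using scikit-image (an assumption; the book's example was done in Photoshop). It stretches L* around mid-gray while leaving a* and b* untouched.

    import numpy as np
    from skimage import color

    def lab_contrast(rgb, gain=1.75):
        lab = color.rgb2lab(rgb)                # rgb: floats in [0, 1]
        L = lab[..., 0]                         # lightness channel, 0-100
        lab[..., 0] = np.clip(50 + gain * (L - 50), 0, 100)
        return color.lab2rgb(lab)               # a* and b* are untouched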





Output

Calibration
Printer Types
DPI vs PPI
Halftoning


Output

Calibration Color is an important characteristic of an image, and ideally the same color profile carries all the way through from capture to print. The easiest way to stay consistent is to place a color checker in the scene. Color checkers come equipped with colored patches that can be used when color balancing an image in post-processing. Calibrating your monitor is another key part of keeping color consistent throughout the imaging process. Every display monitor and printer is characterized differently, so profiles vary with each device. Tools like the X-Rite ColorMunki aid in calibration to achieve a consistent color profile between a monitor and a printing device.
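One simple form of this correction can be sketched in code: scale each channel so a measured gray patch from the color checker reads neutral. The function below is an illustrative sketch with NumPy, assuming an 8-bit RGB image.

    import numpy as np

    def neutral_balance(img, patch_rgb):
        # patch_rgb: the measured (R, G, B) of a gray patch in the scene.
        patch = np.asarray(patch_rgb, dtype=float)
        gains = patch.mean() / patch            # per-channel correction factors
        balanced = img.astype(float) * gains
        return np.clip(balanced, 0, 255).astype(np.uint8)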


Images from Google Images


Printing

Laser Laser printing is a digital printing process that passes a laser beam over a charged drum, which then collects toner and transfers the image to paper. The paper is then heated to fuse the image in place. Laser printing is fast, cheap, and best suited for high-quality text and graphics.

Inkjet Inkjet printers are the most commonly used type of printer. They range from small desktop printers you might find in your home to large professional machines. Continuous inkjet printers use pressure to propel droplets of ink onto the paper, whereas drop-on-demand inkjet printers use heat to make the ink bubble and drop onto the surface of the paper.

Dye-sublimation Dye-sublimation printers use heat to transfer dye onto paper. These are not to be confused with inkjet printers; dye-sub printers produce true continuous tone images, whereas inkjet printers produce individual droplets of ink that can be seen under magnification.

Images from Google Images




Dots Per Inch vs Pixels Per Inch The resolution of an image defines the amount of information that image holds. Resolution takes on a few different identities depending on what processing source you are using. Two of the most commonly seen are PPI and DPI.

Pixels Per Inch Pixels per inch, or PPI, can be considered an image's true resolution as reported by a pixel-based application, such as Photoshop or Lightroom. Desktop scanners and computer monitors are two examples of systems that run on PPI resolution.

Dots Per Inch Dots per inch, or DPI, is mainly used on the output side of the imaging spectrum. Printers use DPI to describe resolution in terms of small dots.
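The relationship between pixels and print size is simple arithmetic: print size in inches equals pixel dimensions divided by PPI. A minimal worked example:

    def print_size(width_px, height_px, ppi=300):
        return width_px / ppi, height_px / ppi  # inches

    # A 3000 x 2000 pixel image printed at 300 PPI makes a 10 x 6.7 inch print.
    print(print_size(3000, 2000))               # (10.0, 6.666...)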


Images from Google Images


Halftoning Halftoning is a reproduction technique that simulates continuous tone imagery through the use of dots of varying shapes and sizes. Continuous tone images contain an effectively infinite range of tones, whereas halftone images are produced using only one color of ink: dots of different sizes are placed to represent the continuous tones. The human eye blends the dots together, and the optical illusion reads as the smooth tones of a continuous tone image. Figure 4 shows one image divided in half. The left side demonstrates halftoning while the right side yields a continuous tone image.
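Digitally, a related effect can be approximated with ordered dithering, which uses a tiled threshold matrix instead of variable dot sizes. A minimal sketch with NumPy:

    import numpy as np

    BAYER_4X4 = np.array([[ 0,  8,  2, 10],
                          [12,  4, 14,  6],
                          [ 3, 11,  1,  9],
                          [15,  7, 13,  5]]) / 16.0

    def halftone(gray):
        # gray: (H, W) floats in [0, 1]. Tile the threshold matrix across the
        # image; pixels brighter than their local threshold render as white.
        h, w = gray.shape
        thresholds = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return (gray > thresholds).astype(np.uint8)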

Figure 4





Lytro Camera

Introduction
Lytro vs DSLRs
Capture
Process & Share


Special Topic

Introduction In this section, we will discuss a recent invention that is considered a breakthrough in digital photographic technology. The Lytro camera has the special ability to make focal adjustments after an image has been captured. Lytro, Inc. was founded in 2006 by Ren Ng, a light-field photography researcher at Stanford University. The camera is approximately five inches long and weighs about eight ounces.

Lytro vs DSLRs Both the Lytro and DSLR cameras have the three main components a camera needs in order to capture an image: a lens, an aperture, and a sensor. The Lytro's sensor is what separates it from other cameras. In Figure #, we can see that DSLRs rely on mirrors to direct and converge light rays onto one spot on the sensor to establish a focal point. Figure # shows that the Lytro operates a little differently. The interior of the Lytro contains no mirrors; instead, the Lytro's sensor is covered with an array of microlenses. The microlenses focus and magnify light rays coming from every direction. The Lytro collects and stores all of the light-field information, allowing us to manipulate the focal point after capture.

Images from Google Images

Figure #: Canon DSLR interior

Figure #: Lytro interior


Capture

By selecting different areas of the picture via the LCD screen or using Lytro Desktop, you can alter an image’s focal point. The image above demonstrates two different focal points selected from the same image.

The Lytro is a very consumer-friendly camera. It is small, lightweight, and easily portable. It features a constant aperture of f/2.0, an 8x optical zoom, and a rear LCD screen. The microlens sensor captures all of the light-field information from the scene, which is quickly processed so that you can manipulate the focus in-camera as well as in post-production.

Process & Share

The image above is a screenshot taken from the Lytro Web Gallery. Viewers can browse through other members' images, like them, share them, and play with their focal points.

After an image is captured, simply transfer the file into the Lytro's accompanying software, Lytro Desktop, via the USB port located on the side of the camera. From there you can manipulate your image's focal point to your liking, as well as exposure, contrast, and more. Then you can upload your image to Lytro Web, Lytro's online sharing network, where other Lytro users can view and play with your images' focal points.

Images from Google Images




