Imaging Systems
Martin Matt
Table of Contents
Fundamentals
Image Input
Image Processing
Image Output
Slow Motion Video
Spatial Resolution

Spatial resolution refers to the number of pixels that make up a digital image. The more pixels an image has, the more detail you can see.

The number of pixels present in an image affects the outcome when resizing the image. The most common way to resize an image is a process called interpolation: creating or removing pixels in order to make the image larger or smaller. There are three main types of interpolation available in Adobe Photoshop: nearest neighbor, bilinear, and bicubic. Each uses a different algorithm to resize an image.

Nearest Neighbor creates each new pixel by copying the value of the closest original pixel. A single pixel scaled to 200% therefore becomes a 2x2 block of four pixels with the same color.

Bilinear interpolation creates new pixels from the weighted average of the pixels that surround them.

Bicubic interpolation is the most complex of the three methods, sampling a larger neighborhood of surrounding pixels. Because of its more involved calculations, it is said to produce the smoothest gradations of the three methods.
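As a sketch of the simplest method, the toy function below (not Photoshop's implementation) scales a grid of pixel values with nearest-neighbor interpolation:

```python
def nearest_neighbor_scale(pixels, factor):
    """Scale a 2-D grid of pixel values by `factor`, copying the
    nearest source pixel for each destination pixel."""
    src_h, src_w = len(pixels), len(pixels[0])
    dst_h, dst_w = int(src_h * factor), int(src_w * factor)
    return [
        [pixels[min(int(y / factor), src_h - 1)][min(int(x / factor), src_w - 1)]
         for x in range(dst_w)]
        for y in range(dst_h)
    ]

# A single pixel scaled to 200% becomes a 2x2 block of the same value.
print(nearest_neighbor_scale([[7]], 2))  # [[7, 7], [7, 7]]
```

Bilinear and bicubic differ only in how many neighbors they consult and how they weight them.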
Imaging Fundamentals

Tonal Resolution

(Figures: the same image reproduced at 1-bit through 7-bit depth.)

Tonal resolution is measured in bit depth. Bit depth tells how many grey levels are present in an image file. The greater the bit depth, the greater the number of tones: an image with a higher bit depth will have more continuous tones than an image with a smaller bit depth. To calculate the number of tones in an image, raise 2 to the power of the bit depth. For example, a 4-bit image has 16 tones (2^4 = 16).

1-bit = 2
2-bit = 4
3-bit = 8
4-bit = 16
5-bit = 32
6-bit = 64
7-bit = 128
8-bit = 256

The Posterize tool in Adobe Photoshop lets you change the bit depth of a file. It is located under Image > Adjustments at the top of the screen. Using the slider, you can set anywhere from 2 to 255 levels.
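The tone counts listed above follow directly from raising 2 to the bit depth; a quick sketch:

```python
def tone_count(bit_depth: int) -> int:
    """Number of distinct grey levels at a given bit depth: 2 ** bits."""
    return 2 ** bit_depth

# Reproduces the table above: 1-bit = 2, ..., 8-bit = 256.
for bits in range(1, 9):
    print(f"{bits}-bit = {tone_count(bits)} tones")
```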
(Figures: the same image at 3-bit, 8-bit, 16-bit, and 32-bit depth.)

For color images, the number of bits represents how many colors are present in the image file. It works the same way as tonal resolution, raising 2 to the power of the bit depth, but color images have three channels (RGB). Take an 8-bit image, for example: each channel has 256 possible values, so the image can produce about 16.7 million different colors.

In the figures shown to the left, the differences between a 3-bit, 8-bit, 16-bit, and 32-bit image are shown. Again, the difference between the files is the continuity of the tones: as you go up in bit depth, the smoother the tones are. After a certain point, however, the differences become less and less noticeable. This typically happens between the 16-bit and 32-bit image, depending on how complex the colors in the image are.
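The 16.7-million figure falls out of the same arithmetic applied across three channels:

```python
def color_count(bits_per_channel: int, channels: int = 3) -> int:
    """Total colors an RGB image can represent:
    (2 ** bits per channel) raised to the number of channels."""
    return (2 ** bits_per_channel) ** channels

print(color_count(8))  # 16777216, i.e. "16.7 million" colors
```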
Spectral Resolution

A single color image is made up of three monochromatic images generated through Red, Green, and Blue filters. This concept is known as spectral resolution. A sensor records light at different wavelengths, and each range of wavelengths corresponds to a specific color. These wavelengths are broken up into the three additive primaries: Red, Green, and Blue. Blue has the shortest wavelengths, ranging from approximately 380 nm to 495 nm; Green ranges from 495 nm to 590 nm; and Red ranges from 590 nm to 750 nm. Sensors are least sensitive to blue wavelengths and most sensitive to red wavelengths. The human eye is most sensitive to green wavelengths.
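The wavelength ranges above can be expressed as a small lookup (the boundaries are the approximate values from the text):

```python
def primary_band(wavelength_nm: float) -> str:
    """Classify a visible wavelength into the additive primary bands
    using the approximate boundaries given in the text."""
    if 380 <= wavelength_nm < 495:
        return "Blue"
    if 495 <= wavelength_nm < 590:
        return "Green"
    if 590 <= wavelength_nm <= 750:
        return "Red"
    return "outside the visible range"

print(primary_band(450))  # Blue
print(primary_band(550))  # Green (where the eye is most sensitive)
print(primary_band(650))  # Red
```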
Additive Color

The additive color system is made up of Red, Green, and Blue primaries. The addition of red, green, and blue light creates white light. The additive system is used in cameras, scanners, and displays.

Subtractive Color

The subtractive color system is made up of Cyan, Magenta, and Yellow primaries. Combined, these colors in theory create black; in practice, printers add a separate black (Key) ink, giving CMYK. The subtractive system is used in printers.
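Because each subtractive primary is the complement of one additive primary, an idealized 8-bit conversion is just subtraction. This is a sketch of the relationship only; real printer conversions go through ICC color profiles and a black channel:

```python
def rgb_to_cmy(r, g, b, scale=255):
    """Idealized additive-to-subtractive conversion: each of C, M, Y is
    the complement of R, G, B on an 8-bit scale."""
    return (scale - r, scale - g, scale - b)

print(rgb_to_cmy(255, 0, 0))      # pure red -> (0, 255, 255): magenta + yellow ink
print(rgb_to_cmy(255, 255, 255))  # white light -> (0, 0, 0): no ink at all
```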
Histograms

Histograms are a tool in almost every image processing program that lets the user visualize the distribution of tones in an image. Histograms are useful for detecting clipping, and for showing how much contrast is present in the image. If the histogram has spikes at both ends, there is a great amount of contrast in the tones. If the tones are piled toward the left, the image is low key or underexposed; if they are piled to the right, the image is high key or overexposed. Although an ideal histogram has tones spread evenly throughout, there is no "right" or "wrong" histogram.
(Figures: histograms for a normal-contrast, a low-key, and a high-key image.)
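A histogram is just a count of pixels at each tone. A minimal sketch using the standard library:

```python
from collections import Counter

def histogram(pixels):
    """Count how many pixels fall at each grey level."""
    return Counter(v for row in pixels for v in row)

# A tiny "low key" image: tones piled toward the dark (left) end.
dark_image = [[0, 10, 10], [20, 10, 0], [0, 230, 10]]
hist = histogram(dark_image)
print(sorted(hist.items()))  # most counts sit at low values
```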
Sensors
CCD
Image sensors are made up of picture elements, or pixels. These pixels register the amount of light that passes through the lens of the device. An A/D converter then turns this light into image data: the photons recorded at each pixel are converted into electrons, the electrons are converted into a voltage, and each voltage is assigned a number. These numbers are what make up your digital image.
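The final step, assigning each voltage a number, is quantization. A sketch of a hypothetical 12-bit ADC (the bit depth and full-scale voltage here are illustrative, not from the text):

```python
def adc(voltage, full_scale=1.0, bits=12):
    """Quantize an analog voltage (0..full_scale) to an integer code,
    as an A/D converter does for each pixel's accumulated charge."""
    levels = 2 ** bits
    code = int(voltage / full_scale * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp out-of-range input

print(adc(0.0))  # 0    (no light recorded)
print(adc(0.5))  # 2047 (about half of full scale)
print(adc(1.0))  # 4095 (pixel saturated)
```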
CCD (Charge-Coupled Device) sensors were developed specifically for cameras and have been used for over 30 years. They usually use only a few output nodes to transfer the signal, which means the signal from each row of pixels is transferred one by one to the amplifier. This works like the old method of putting out a fire by passing buckets of water down a line. CCDs are more sensitive to light than CMOS sensors and produce less noise. There are trade-offs, however: CCDs are more expensive to produce, and they also require more power to run.
CMOS
Foveon
CMOS (Complementary Metal-Oxide Semiconductor) sensors aren't that different from CCDs in what they do, but they are very different in how they are built. Unlike a CCD, each individual pixel of a CMOS sensor can be read directly, thanks to transistors located next to each pixel. This allows the sensor to run on less power and makes it faster. The downside is that this circuitry makes the chip less sensitive to light, and more noise is produced.

Foveon sensors were built to mimic film. They use three layers of pixels: one red, one blue, and one green. Each layer is set into the silicon at a different depth, because each color of light only penetrates the silicon so far.
Image Input

(Figures: CCD, CMOS, and Foveon sensor technology diagrams.)
Bayer Pattern / CFA Interpolation

The Bayer pattern was developed by Bryce Bayer, who worked for Eastman Kodak. It is a Color Filter Array (CFA) made up of Red, Green, and Blue filters over the photosensors, arranged in a particular pattern that mimics how the human eye works. Since human eyes are most sensitive to green light, 50% of the photosites are Green, and the other 50% is split between 25% Red and 25% Blue. Essentially, this is a filter placed over an image sensor so the sensor can create a full-color image. Reconstructing full color from the filtered samples is called demosaicing. Although demosaicing can be done in a number of different ways, the Bayer filter is one of the most common arrangements it operates on. The basic principle of the process is to interpolate the color of the light that passes through the filter and lands on the sensor. Demosaicing can be difficult for some images, and it is one way that noise is produced in an image.
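The 50/25/25 split can be seen by generating the classic RGGB layout (a sketch of the arrangement only, not a demosaicing algorithm):

```python
def bayer_pattern(height, width):
    """Generate the classic RGGB Bayer filter layout: green sits on
    every other photosite, so half of all photosites are green."""
    row_even = ["R", "G"]
    row_odd = ["G", "B"]
    return [
        [(row_even if y % 2 == 0 else row_odd)[x % 2] for x in range(width)]
        for y in range(height)
    ]

cfa = bayer_pattern(4, 4)
flat = [c for row in cfa for c in row]
print(flat.count("G") / len(flat))  # 0.5  -> 50% green
print(flat.count("R") / len(flat))  # 0.25 -> 25% red
print(flat.count("B") / len(flat))  # 0.25 -> 25% blue
```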
Image Processing
All images start out as light. The pipeline from light to finished file runs roughly as follows:

1. Light travels through an optics system, such as a lens.
2. Once the light travels through the lens, the photons are recorded on the camera's sensor.
3. CFA interpolation: the light passes through red, green, and blue filters before hitting the sensor; this is how the sensor creates a full-color image.
4. The signal then goes through an ADC (analog-to-digital converter), which converts the analog signal into a digital signal. This digital signal is the RAW file.
5. Raw files are made up of numbers that can be read as a digital image.
6. Raw processors such as Lightroom, Adobe Camera Raw, and Capture One Pro can read raw files and let you make adjustments and edits:
   - Gamma correction is needed for the image to display correctly on screen. Gamma defines the relationship between the numerical values and the luminance those numbers represent.
   - Exposure is how long the sensor was exposed to light; this time determines how light or dark the image appears.
   - Contrast in an image is the ability to distinguish between different aspects of the image.
   - Color balance is a subjective process that changes the overall appearance of the colors in the image.
   - Neutral balance picks a neutral color in the image as a reference point so that all the colors appear accurate.
   - Noise correction: artifacts created by the digital sensor and ADC conversion are called noise. Noise correction can reduce (or add) noise, depending on the desired effect.
   - Brightness is another way to make an image appear lighter or darker than the original.
   - Sharpening is a process used to enhance the texture of a photograph.
7. Output is the final step in the imaging pipeline. You can output to many different media, including print, mobile, web, and displays.
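The gamma correction step can be sketched as a per-pixel power curve (gamma = 2.2 is a common display-era assumption, not a value from the text):

```python
def gamma_correct(value, gamma=2.2, max_value=255):
    """Map a linear pixel value through a display gamma curve:
    normalize, apply the 1/gamma power, rescale."""
    normalized = value / max_value
    return round(normalized ** (1 / gamma) * max_value)

print(gamma_correct(0))    # black stays black
print(gamma_correct(128))  # mid grey is brightened noticeably
print(gamma_correct(255))  # white stays white
```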
Median Filter

The median filter helps reduce the amount of noise present in an image. Shown below are two images that illustrate the effect the median filter has on noise. Although this filter does a good job of reducing noise, the more you apply it, the more the colors blend together and begin to resemble a watercolor painting.
(Figures: an image with noise, and the same image after a median filter.)
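A minimal 3x3 median filter sketch shows why isolated noise spikes disappear: the spike is never the middle value of its neighborhood.

```python
from statistics import median

def median_filter(pixels):
    """Replace each pixel with the median of its 3x3 neighborhood
    (edges clamped), which suppresses isolated noise spikes."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                pixels[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ]
            out[y][x] = median(window)
    return out

# A flat grey patch with one noisy "hot" pixel in the middle:
noisy = [[50, 50, 50], [50, 255, 50], [50, 50, 50]]
print(median_filter(noisy)[1][1])  # 50 -- the spike is removed
```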
Brightness
Contrast
The Brightness tool in Adobe Photoshop is used to brighten or darken an image. The images to the right are shown at brightness -50, 0, and 50, respectively, from top to bottom.
Shown below are examples of different contrast levels at -50, 0, and 50, respectively, from top to bottom. Contrast is used to separate different aspects of the image. Less contrast produces a "flatter" image, while more contrast produces an image with more depth.
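Both adjustments are simple per-pixel operations. The sketch below uses a generic pivot-based formula, not Photoshop's exact math: contrast scales values away from a mid-grey pivot, and brightness shifts everything up or down.

```python
def adjust(value, brightness=0, contrast=1.0, pivot=128):
    """Scale around a mid-grey pivot for contrast, then shift for
    brightness; clamp the result to the 8-bit range."""
    result = (value - pivot) * contrast + pivot + brightness
    return max(0, min(255, round(result)))

print(adjust(200, brightness=50))  # 250: brighter
print(adjust(200, contrast=1.5))   # 236: tones pushed apart
print(adjust(100, contrast=0.5))   # 114: pulled toward grey, a "flatter" image
```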
Sharpening
Unsharp Mask
Sharpening is used to define the texture in an image. It does this by detecting the edges in the photograph and making them more noticeable. There are many ways of doing this, and Adobe Photoshop has tools that work well. The most basic way to sharpen an image is the Unsharp Mask tool, whereas a method like convolution kernels is a bit more involved.
The Unsharp Mask tool essentially works by subtracting a blurred version of the image from the original image. The subtraction leaves a result that represents the outlines of the image, or image mask. Adding that mask back to the original image produces a sharpened image.
(Figures: Original; Original Blurred; Original - Original Blurred; Original + (Original - Original Blurred).)
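That subtract-then-add procedure can be written out per pixel. A sketch on a one-row "image" containing a hard edge:

```python
def unsharp_mask(original, blurred, amount=1.0):
    """original + amount * (original - blurred), per pixel: the
    subtraction isolates edges, adding it back exaggerates them."""
    return [
        [max(0, min(255, round(o + amount * (o - b))))
         for o, b in zip(orow, brow)]
        for orow, brow in zip(original, blurred)
    ]

original = [[100, 100, 200, 200]]  # a hard edge
blurred  = [[100, 125, 175, 200]]  # the same edge after blurring
print(unsharp_mask(original, blurred))  # [[100, 75, 225, 200]] -- edge overshoot
```

The overshoot on either side of the edge is what makes the result look sharper.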
Convolution Matrix

Convolution matrices can be used to obtain many different effects on an image: sharpening, blurring, and edge detection, to name a few. A convolution matrix adjusts each pixel based on the pixels around it; the most common sizes are 3x3 and 5x5. The more a kernel makes each pixel differ from its neighbors, the sharper the image appears; the more it makes each pixel resemble its neighbors, the more blurred the image becomes. Shown below are two common convolution matrices: on the bottom left, a sharpening matrix and its effect; on the bottom right, a common blurring matrix and its effect.
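A sketch of the operation, using the common 3x3 sharpening kernel and a box-blur kernel as examples (the specific matrices in the book's figures may differ):

```python
def convolve3x3(pixels, kernel):
    """Apply a 3x3 convolution kernel to a grid of values (edges clamped)."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    py = min(max(y + ky - 1, 0), h - 1)
                    px = min(max(x + kx - 1, 0), w - 1)
                    acc += pixels[py][px] * kernel[ky][kx]
            out[y][x] = max(0, min(255, round(acc)))
    return out

SHARPEN = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]  # emphasizes differences
BOX_BLUR = [[1 / 9] * 3 for _ in range(3)]       # averages neighbors

flat = [[100] * 3 for _ in range(3)]
print(convolve3x3(flat, SHARPEN)[1][1])   # 100 -- uniform areas are unchanged
print(convolve3x3(flat, BOX_BLUR)[1][1])  # 100
```

Both kernels leave flat areas alone; they only diverge where neighboring pixels differ.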
Image Output
Analog
Digital
Analog printing is another term for conventional printing. This technology uses a plate or screen to transfer ink or dye onto the material you choose. Common types of analog printing in use include lithography, screen printing, flexography, and gravure. Analog printing is usually reserved for high-volume jobs of roughly 5,000 or more copies: the initial start-up cost of making the plates and screens is very high, so using analog printing for a small number of copies is not practical.
Digital printing uses computerized signals to control ink and toner in laser and inkjet printers. Digital is the more practical solution for low-volume runs, since the first print costs the same as the last, unlike analog.
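The break-even logic can be sketched with hypothetical costs. The dollar figures below are invented for illustration, chosen so the crossover lands near the 5,000-copy mark mentioned above:

```python
def cost_per_copy(fixed_setup, per_copy, copies):
    """Average unit cost for a print run: fixed setup cost spread
    over the run, plus the per-copy cost."""
    return (fixed_setup + per_copy * copies) / copies

# Hypothetical: analog carries a large plate/setup cost but cheap copies;
# digital has no setup cost but a higher per-copy cost.
for copies in (100, 5000, 50000):
    analog = cost_per_copy(fixed_setup=2000, per_copy=0.05, copies=copies)
    digital = cost_per_copy(fixed_setup=0, per_copy=0.45, copies=copies)
    print(copies, round(analog, 2), round(digital, 2))
# With these numbers the two methods cost the same at 5,000 copies;
# beyond that, analog pulls ahead.
```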
Laser
Inkjet
Laser printers use a laser to write an electrical charge pattern onto a drum, which then attracts toner. Toner is a powdered type of "ink" that gets pressed and fused onto the paper or other material you choose.
Inkjet printers use tiny jets that spray ink onto the material. The more jets there are, the better the quality and the faster the printing.
DPI vs PPI

DPI and PPI both describe resolution. Although most people use them interchangeably, they have different applications. DPI measures "dots per inch" on a print, whereas PPI measures "pixels per inch" on a digital display. DPI devices include inkjet and laser printers, which produce halftones; PPI devices include monitors, which produce continuous-tone images.
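The practical difference shows up when computing print size: pixel dimensions divided by resolution give physical inches. For example:

```python
def print_size_inches(pixels_wide, pixels_high, ppi):
    """Physical size of an image reproduced at a given resolution:
    pixels divided by pixels-per-inch."""
    return pixels_wide / ppi, pixels_high / ppi

# A 3000 x 2400 pixel image printed at 300 PPI:
print(print_size_inches(3000, 2400, 300))  # (10.0, 8.0) inches
```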
Raster Image Processor (RIP)

A RIP is software that sits between your computer and your printer. It takes your digital document and turns all the text and graphics into a raster file that the printer uses to make the print.
Continuous Tone

Halftone Printing
Continuous-tone devices include analog printing as well as pixel-based displays. Pixel-based displays are considered continuous because each color and tone is reproduced by a single tone, not a combination of dots. Display resolution is measured in PPI (pixels per inch); the more pixels you have, the more fine detail you can reproduce.
Halftone printing is a process that simulates continuous-tone images through combinations of dots of different colors and sizes. Our eyes connect these dots, which makes the images look continuous even though they technically are not. This process is used by most inkjet and laser printers today.
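A toy ordered-dither sketch illustrates the idea: each grey value is compared against a repeating threshold pattern, leaving only full-on or full-off dots whose average mimics the tone. (Real halftone screens use much larger patterns and variable dot sizes.)

```python
# A 2x2 grid of dither thresholds (scaled to 0-255), a toy halftone screen.
DITHER = [[0, 128], [192, 64]]

def halftone(pixels):
    """Simulate halftoning: compare each grey value against a repeating
    threshold pattern, producing only full-on or full-off dots."""
    return [
        [255 if v > DITHER[y % 2][x % 2] else 0 for x, v in enumerate(row)]
        for y, row in enumerate(pixels)
    ]

mid_grey = [[100] * 4 for _ in range(4)]
for row in halftone(mid_grey):
    print(row)  # a checkered dot pattern whose average approximates the grey
```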
Slow Motion Video
Slow motion dates back to the days when filmmakers had to crank film through a camera by hand to record motion. To create a slow-motion effect, they would crank the camera faster than normal, capturing more frames than were needed; when the film was played back at normal speed, the extra frames made the motion appear slow. Slow motion is used a lot today in sports and with other fast-paced subjects. We use it to see things we can't see at real speed: by slowing everything down, we can see the little details that otherwise happen too fast.

Slow motion is easy to create in software like Adobe Premiere Pro, provided you have enough frames of video to start with. Having more frames than needed is required for true slow motion: for a true, smooth slow-motion clip, you need to shoot at around 120 frames per second, four to five times the normal rate (typically 30 or 24 fps). If you have enough frames, bring your footage into Premiere and use the Rate Stretch tool, located in the tool selection menu near the timeline, to stretch the clip to the desired length. The more frames you have, the longer you can stretch the clip and still get smooth slow motion. Click and drag a clip to the right to slow it down, or to the left to speed it up.
Sample Calculations

1 second of video @ 300 frames/second = 10 seconds @ 30 fps

30 fps is a standard frame rate for video; it's enough for smooth motion. If you have 300 frames, you have ten times the frames needed, so you can stretch that clip out to 10 seconds and still have smooth motion. If instead you start with a 1-second clip shot at 30 fps and stretch it to 2 seconds, the frame rate drops to 15 fps, which makes the clip appear broken and glitchy. That's why quality slow motion needs to be shot at very high frame rates.
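The arithmetic generalizes: the playback frame rate is the shooting rate divided by the stretch factor.

```python
def stretched_fps(shot_fps, stretch_factor):
    """Effective playback frame rate after stretching a clip's
    duration by the given factor."""
    return shot_fps / stretch_factor

print(stretched_fps(300, 10))  # 30.0 -- 1 s at 300 fps plays smoothly for 10 s
print(stretched_fps(30, 2))    # 15.0 -- below normal rate, looks choppy
```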