Image creation - IMSY 1


Image Creation Alani Pien



Imaging Systems

Contents: Fundamentals, Input, Process, Output, Credits



Fundamentals

Image Resolution: Spatial, Tonal, Spectral, Temporal

File Formats


Image Resolution

Chapter Vocabulary

Imaging System - This is the process an image goes through from start to finish. It starts when a shutter button is pressed and an image is captured by a camera. All processes needed to make the image into the final product are included in this definition.

Imaging System: flow chart showing all components of an imaging system. A film camera produces "analog" electrical signals that pass through a digitizer (ADC) into an electronic darkroom, in place of the traditional film darkroom; an electronic camera produces an electronic signal that goes straight to the digital darkroom.

Histogram - A graph that shows the Dynamic Range and Room for Error in a given image, as well as other important visual information.

Labeled histogram: shadows, midtones and highlights marked along the x axis, which runs from 0 to 255 (bit depth) with corresponding f/stop markings (2.8 through 20); the y axis shows the number of pixels up to the maximum.

Dynamic Range - The number of f/stops of light that film or a sensor can capture from a scene.

Room for Error - The number of stops on the graph that are not filled with scene information.


Spatial Resolution

Spatial Resolution is the sampling of a scene into a 2D grid. Increasing the spatial resolution of an image improves the rendering of image details, resulting in an image with reduced pixelation. Pixelation is a type of artifact causing jagged edges within the image, decreasing detail quality. All of the images below (except for the full image) are magnified to 250%.

Bicubic - This is the best resizing method for smooth gradients.

Original full Image

Bilinear - This method uses an average of the 4 surrounding pixels to find appropriate pixel values.

Nearest Neighbor - The best method for resizing an image while preserving hard edges



Tonal Resolution

Tonal Resolution, also referred to as brightness resolution, represents the number of gray levels an image has and is easily read from a histogram (refer to the Histogram entry above). The human eye has a limit of about 8 bits/pixel (256 gray levels). On a histogram representing an 8-bit image, gray levels run from 0 (black) to 255 (white) along the x axis, with the y axis showing the number of pixels; this can vary by what camera is used. There are 256 levels in an 8-bit image, and one byte is equivalent to 8 bits. Low key image histograms will be shifted to the left and be darker, while high key images will be shifted towards the right of the histogram and be lighter. It's important to note that the higher the bit depth, the more continuous the tones in the image will be.
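The relationship between bit depth and gray levels can be illustrated with a few lines of Python (a sketch added for illustration, not part of the original guide):

```python
# Number of distinct gray levels representable at a given bit depth:
# levels = 2 ** bits, so each added bit doubles the tonal resolution.
for bits in (1, 2, 3, 8):
    print(f"{bits} bits/pixel -> {2 ** bits} gray levels")
```

At 8 bits/pixel this gives the 256 levels (0 through 255) used throughout this chapter.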

Low key (darker) image histogram and high key (lighter) image histogram, each with pixel values running from 0 to 255.

The same image quantized to 1 bit/channel (2 of 256 levels), 2 bits/channel (4 of 256 levels) and 3 bits/channel (8 of 256 levels).


Spectral Resolution

Color, or spectral, resolution is defined as the range of light that a sensor or film can capture. In the case of a sensor, we sacrifice spatial resolution for spectral (color) resolution by dyeing individual sensor pixels red, green or blue. This information is often displayed via a color gamut. Depending on the output, the gamut and spectral resolution may be mismatched. These become the 3 primary color channels, which are then de-mosaicked by a program. When viewing an image on a screen, the primary colors are red, green and blue, and they combine to dictate the colors displayed in the image. The 3 colors theoretically converge to become white, while an absence of all 3 primary channels creates black. This is different from print media, where cyan, magenta and yellow theoretically create black when combined; an absence of these colors is white, or whatever color the paper being printed on is.

Additive combinations of the color channels: Red + Blue (Red and Blue channels combined), Red + Green (Red and Green channels combined), Green + Blue (Green and Blue channels combined), and Red + Green + Blue (all three channels combined).



Calculating File Size

File size calculations may be an important factor in many areas of photography. These calculations are especially important where workflow and storage are concerned. Files are storage containers; they have tags, labels and metadata, which change based on what type of file you are storing and whether you choose to personalize the labels or tags to help you identify the file in your personal data storage system.

Digital Storage Conversion: 8 bits = 1 byte; 1024 bytes = 1 kilobyte; 1024 KB = 1 megabyte; 1024 MB = 1 gigabyte

File Size Calculation Equation: Spatial x Tonal x Spectral = File Size

Spatial Resolution: 3456 x 5148 pixels
Spectral Resolution: 3 channels (RGB)
Tonal Resolution: 8 bits/channel

(3456 x 5148) x 8 x 3 = 426,995,712 bits
426,995,712 bits / 8 = 53,374,464 bytes
53,374,464 bytes / 1024 = 52,123.5 kilobytes
52,123.5 KB / 1024 ≈ 50.9 megabytes
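The calculation above translates directly into a short Python sketch (added for illustration; note the division by 8 that converts the bit total into bytes):

```python
def file_size_bytes(width, height, bits_per_channel, channels):
    """Spatial x tonal x spectral resolution, converted from bits to bytes."""
    total_bits = width * height * bits_per_channel * channels
    return total_bits // 8  # 8 bits per byte

size = file_size_bytes(3456, 5148, 8, 3)
print(size, "bytes")                     # 53374464 bytes
print(round(size / 1024 ** 2, 1), "MB")  # 50.9 MB
```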



Input

Sensors - Film and Silicon

A digital sensor (silicon) and film both use light to capture a scene, but use different processes to get to equivalent outputs. Film uses a gelatin emulsion, which contains silver halide crystals and makes the film light sensitive. The sensitivity, contrast and resolution of the film depend on the crystal size and other variables in the emulsion. To create color film, color-sensitive emulsion layers are stacked over the base layer of the film. Film is inherently sensitive to blue, so the layers (red, green and blue, plus a yellow filter layer) are arranged so that the blue-sensitive layer gets the least amount of light, balancing out the film's native sensitivity.

Diagram of image capture via film: light exposes the photographic film (AgBr in gelatin), forming a latent image; development amplifies it into the developed image, fixing yields the negative image, and printing produces the positive image.

Pixels

Pixels are the photocells that take in light and read it out as a voltage value. They are the digital version of the crystals used to capture light on film. It's easiest to think of pixels as containers which catch the light; the light is then translated into voltages which are used by the camera to recreate the image on a screen. There are several different ways to translate the light, and also different ways to capture the color we see in photos.


Sensors

Charge-Coupled Device (CCD)

Charge-Coupled Device (CCD) sensors are the original sensors used in digital cameras. There are two different ways that CCD sensors work: Frame Transfer and Interline Transfer. Both take in light and translate it to voltage data; however, there are differences in the way the data is transferred and amplified.

Interline Transfer CCD

Interline Transfer CCD sensors place a vertical shift register next to each line of pixels; this part of the chip collects the information and transfers it to a horizontal shift register, which then passes the resulting image on to storage, often an SD or CF card. The biggest problem on an Interline Transfer chip is all the dead space (not light sensitive) that is created by the shift registers.

Interline Transfer CCD Sensor

Frame Transfer CCD

Frame Transfer CCD sensors use a bed of pixels that transfers the charge to a separate storage area under the pixels. This information is transferred more directly from each pixel to the storage area. However, the transfer occurs during the time of exposure, so to prevent blurring the process has to be done rapidly. After being transferred to the storage area, the charge is moved in the same way as in Interline Transfer CCD sensors.

Frame Transfer CCD Sensor



Complementary Metal Oxide Semiconductor (CMOS)


The photocells of a Passive Pixel sensor are the same pixels used in CCD and Active Pixel CMOS sensors. Passive Pixel sensors collect light and convert it globally into a voltage reading, turning light into an electrical value. The electrical signal moves off the chip and is amplified at a later stage. Passive Pixel CMOS sensors are usually small, which causes noise that needs to be corrected later on.

Passive Pixel CMOS sensor

CMOS sensors have smaller pixel sizes, eliminating some noise, and larger imaging arrays. CMOS sensors are low energy consumers and run off a single-voltage power supply (CCD chips need 5 or more voltage supplies to function). CCD and CMOS sensors use similar light processing procedures. One of the biggest differences between the two sensor types to keep in mind is the way the sensors are physically produced.

Active Pixel CMOS sensor diagram; note that the amplifier is on the photocell surface, so the voltage data is amplified before it is transferred off the chip.

Active Pixel CMOS sensors have an individual amplifier on each photocell, processing the data from that particular photocell as the data is generated. This integration of data collection and image processing produces an image faster and is often found in smartphones and other applications where time limitations are a factor. The drawback of an Active Pixel sensor is its poor performance in low light conditions, due to its fill factor. The fill factor is the light-sensitive surface area of a photocell, which in Active Pixel CMOS sensors is diminished because the per-pixel amplifier blocks a given amount of light from hitting the photocell's surface.

Active Pixel CMOS sensor

Passive Pixel CMOS sensor; note that the data is moved off the chip before being amplified.


Color Filter Arrays (CFA)

A Color Filter Array is a pattern, which can vary greatly, used when adding colored filters to the pixels of a digital sensor, resulting in a color image. Color Filter Array patterns are protected by law as intellectual property and are often patented. All variations of a Color Filter Array will output a full color image, including patterns that use pixels without filters and pixel filters colored with subtractive theory colors (cyan, magenta, yellow, black). The reason these patterns are so well protected is so that the company producing the particular sensor will always have the best program to de-mosaic an image, yielding the best color, because they have written their interpolations using knowledge of their own color filter array. This does not mean that programs not associated with the sensor producer are unable to de-mosaic an image; Adobe Lightroom is a good example of these types of programs.

Bayer Pattern

However, one of the most popular patterns is the Bayer pattern. This pattern divides red, green and blue filters across a sensor, but not equally. The human eye is most sensitive to green and least sensitive to blue, and the Bayer pattern was designed using this knowledge. The Bayer pattern colors 50% of the pixels, spread evenly across the sensor, green; equal numbers of red (25%) and blue (25%) filters cover the rest. This type of array sacrifices spatial resolution for spectral (color) resolution.

The Bayer Pattern
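The 50/25/25 split can be illustrated by tiling the Bayer pattern's 2 x 2 unit cell across a small sensor patch (a Python sketch; the RGGB arrangement shown is one common variant of the pattern):

```python
import numpy as np

# One 2x2 Bayer tile (RGGB variant), repeated across an 8x8 sensor patch.
tile = np.array([["R", "G"],
                 ["G", "B"]])
sensor = np.tile(tile, (4, 4))

# Half the pixels are green; red and blue each cover a quarter.
for color in "RGB":
    print(color, f"{(sensor == color).mean():.0%}")
```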

Foveon Sensors

Foveon sensors are very different from other types of Color Filter Arrays: they use 3 filter-covered layers of photocells (red, green and blue), modeling the layer arrangement of color film. This yields very good results; however, the production of this type of sensor is expensive.

The CFA on a Foveon Sensor



Scanners

Scanners are a secondary form of image input: secondary because a scanner does not create an image, but uses the same imaging technology as cameras to digitize one. Scanners use the same types of sensors as cameras, but have a light source built into them. They can input a variety of both 2D and 3D objects that are not digital.

3D Scanner

Epson Flat bed scanner

Common Sensor Sizes

Digital sensor sizes vary depending on the camera. The larger your sensor, the more light (information) your camera can collect, resulting in a higher quality image.



The Imaging Pipeline (read from left to right)

1. Light enters the camera and is registered by the sensor
2. Analog to digital conversion of light into a voltage value
3. Color Filter Array (CFA) interpolation; this can be done in Adobe raw processing programs, though the best color processing is done by company-provided programs
4. Neutral or "white" balancing of the image
5. Colorimetric transformation
6. Gamma correction; gamma space is the color space of the image, limited by the capturing sensor (including the human eye) and the output options
7. Contrast correction (black point)
8. Sharpening processing
9. File exportation (JPEG, TIFF, DNG)
10. Final image display output (monitor, print)




Process

Sharpening

The sharpening process happens towards the end of the pipeline and should be applied to most digital photos in some fashion. An image can be separated into high frequency (fine detail) and low frequency (gradient) components. Sharpening works by blurring the image, adding a mask to make the high frequency details more visually notable, then adding the results back into the original image, making the digital photograph visually sharper. However, it should be noted that this will not improve a blurry image.

Convolution kernels are algorithms that take the values of surrounding pixels to adjust the value of a central pixel. A simple way to think about it is like houses in a neighborhood: the neighborhood is a 3 x 3 or 5 x 5 grid with one house in each square. As the kernel shifts across the image, each pixel in turn becomes the central pixel of the grid and is adjusted based on its neighbors. The image is blurred as the difference in value between the outer and central pixels becomes smaller; the opposite applies for sharpening, where increasing the difference between the central pixel and the outer pixels raises the contrast between the higher and lower frequencies in the image.


A sharpening convolution kernel; note the difference in value between the outer pixels and the central pixel:

 1  -2   1
-2   5  -2
 1  -2   1

An averaging (blur) kernel, where the neighboring values stay close to the central value:

 1   1   1
 1   2   1
 1   1   1
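A minimal sketch of how such a kernel is applied (the 3 x 3 arrangement of the sharpening values is reconstructed from the figure above; edge pixels are skipped for brevity):

```python
import numpy as np

def convolve(image, kernel):
    """Slide a 3x3 kernel over the image; each inner pixel becomes the
    kernel-weighted sum of its 3x3 neighborhood."""
    out = image.astype(float).copy()
    h, w = image.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(image[y - 1:y + 2, x - 1:x + 2] * kernel)
    return out

sharpen = np.array([[ 1, -2,  1],
                    [-2,  5, -2],
                    [ 1, -2,  1]])

# The kernel sums to 1, so flat (low frequency) areas pass through unchanged,
# while pixels that differ from their neighbors are pushed further apart.
flat = np.full((5, 5), 100.0)
print(convolve(flat, sharpen)[2, 2])  # 100.0
```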


Unsharp Masking

Unsharp masking is a useful sharpening method. It works by finding high frequency pixels, without edge detection, and increasing the contrast along the edges. Unsharp Masking in Photoshop gives the user three controls: Amount, Radius and Threshold.

The Amount slider gives you control over how much contrast you want in the image. It makes the differences in pixel value larger between pixels, causing contrast; visually, this slider makes light pixels lighter and dark pixels darker.

The Radius slider controls the number of pixels affected around a high frequency central pixel. The higher the radius, the more pixels will be affected by the sharpening algorithm. Thinking back to the convolution kernel, this slider lets you determine the size of the grid used in the application of the unsharp mask.

The Threshold slider helps control where the unsharp mask is applied in the image. A higher threshold value restricts the unsharp mask to higher frequency areas, making them more contrasty, while a lower value causes the unsharp mask to be applied to lower frequency areas of the image as well.
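The Amount control can be sketched with the classic unsharp-mask formula, sharpened = original + amount x (original - blurred). In this illustrative sketch the blur is a simple 3 x 3 box average rather than the Gaussian blur Photoshop uses:

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """sharpened = original + amount * (original - blurred)."""
    blurred = image.astype(float).copy()
    h, w = image.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            blurred[y, x] = image[y - 1:y + 2, x - 1:x + 2].mean()
    return image + amount * (image - blurred)

# Across a hard edge, the dark side gets darker and the light side lighter,
# which is exactly the added contrast the Amount slider controls.
edge = np.array([[50.0] * 3 + [200.0] * 3] * 5)
out = unsharp_mask(edge)
```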

Original image

Original image blurred and monotone

Applying an unsharp mask to the original image in monotone



Noise

Noise in an image is usually a result of high sensor sensitivity (ISO) during capture. It is an artifact, a flaw due to the technical limits of digital sensors, the amount of light in the scene, and the amount of light let into the camera via settings (aperture, ISO and shutter speed). Noise is most often seen in low frequency areas and resembles grain in analog color film. This effect is unwanted in many photos and not always recoverable; however, there are ways to reduce noise in an image.

One way to reduce noise is to blur the image with a Median filter, found under Noise in Photoshop. This filter blurs the image, reducing the noise and contrast. Because we are least sensitive to the color blue, we can apply the Median filter to the blue channel only, saving the contrast in the image while reducing the noise.
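The blue-channel trick can be sketched with a hand-rolled 3 x 3 median filter (an illustrative Python sketch; in practice one would use Photoshop's Median filter or a library routine):

```python
import numpy as np

def median_filter(channel):
    """Replace each inner pixel with the median of its 3x3 neighborhood."""
    out = channel.copy()
    h, w = channel.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(channel[y - 1:y + 2, x - 1:x + 2])
    return out

# A lone noisy speck in an otherwise smooth blue channel is removed,
# because the median of the neighborhood ignores the outlier.
blue = np.full((5, 5), 80.0)
blue[2, 2] = 255.0  # noise speck
print(median_filter(blue)[2, 2])  # 80.0
```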

Image with noise

Zoomed in look at the noise of the above image

Blue channel of the image above with the Median blur filter applied

Note that only the blue channel is selected


Final Image without Noise



Output

Dots, Lines & Pixels

Dots, lines and pixels are the base units of measurement for images on output devices, referring to spatial resolution. Pixels per inch (PPI) and lines per inch (LPI) quantify the pixels of images being displayed on screens. Lines per inch will have a smaller value than pixels per inch, despite quantifying the same data, because lines of pixels in an inch are being counted rather than individual pixels (PPI). When output changes from a monitor to print, the units change from pixels to dots. This is still an indication of spatial resolution, exchanging pixels and lines of pixels for dots of ink, dye or pigment.

Mathematics of Dots, Lines and Pixels

The Rule of 16 is a formula that relates printer resolution to the number of gray levels in an image. Cameras are normally 8-bit, so one typically works with 256 levels including 0; reproducing 256 gray levels in halftone takes a 16 x 16 cell of printer dots per halftone line, so one can multiply or divide by 16 to convert between DPI and LPI. A situation where one might apply the Rule of 16 might be this: a photo taken with an 8-bit camera (256 gray levels) is to be displayed in halftone on a laser printer at 600 DPI. How many LPI can be printed on this printer?

PPI or LPI = DPI / sqrt(number of gray levels)

600 DPI / 16 = 37.5 LPI

Written-out formula for the Rule of 16, converting dots per inch (DPI) to pixels per inch (PPI) or lines per inch (LPI)
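The worked example translates directly into code (a sketch added for illustration; the function name is our own):

```python
import math

def max_lpi(printer_dpi, gray_levels=256):
    """Rule of 16: a halftone cell needs sqrt(gray_levels) printer dots
    per side, so LPI = DPI / sqrt(gray_levels) -- 16 for 256 levels."""
    return printer_dpi / math.sqrt(gray_levels)

print(max_lpi(600))  # 37.5
```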


Halftone

Halftoning is a practice that uses drops or dots of pigments, inks or dyes to create a printed image. The halftone process uses dots of different sizes to fool the eye into seeing continuous tone, with the eye filling in the small amount of blank space. The larger and closer together the dots are, the denser that part of the image looks; smaller dots with more space between them create a lighter area. It should be noted that different colors are made by dots being placed near or on top of each other, not by inks, pigments or dyes being mixed together on the page or in the printer.

Halftone image

Continuous tone image

Black and white halftone tone range
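How dot density simulates tone can be sketched with a toy ordered-dither halftone (the 2 x 2 threshold map and its values are our own illustration, not from the text): a pixel receives ink when its value falls below the local threshold, so darker regions collect more dots.

```python
import numpy as np

# A tiled 2x2 threshold map: each input pixel gets ink if its value
# is below the local threshold, so darker input turns on more dots.
thresholds = np.tile(np.array([[ 32, 160],
                               [224,  96]]), (2, 2))

dark = np.full((4, 4), 50)    # dark gray input patch
light = np.full((4, 4), 200)  # light gray input patch

print((dark < thresholds).sum(), "ink dots of 16 in the dark patch")
print((light < thresholds).sum(), "ink dots of 16 in the light patch")
```

The dark patch turns on 12 of 16 dots, the light patch only 4, which is exactly the density difference the eye reads as tone.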



Printers

Inkjet Printers

Inkjet printers project dye or pigments onto paper. Pigments are dry particles that sit on top of the paper, while dyes are liquid and absorb into the paper. There are two types of inkjet printers: thermal and piezoelectric. Thermal inkjet printers use heat to project ink onto paper: the ink is heated in a chamber to the point of expansion and projects itself onto the paper via a nozzle. Piezoelectric inkjet printers use an electric current to make a piezoelectric crystal expand and push the ink out of the nozzle and onto the paper.

Inkjet printer ink expulsion

Laser Printers

Laser printers are common in office settings, but are versatile enough to create photo grade color prints. These printers use an electrostatic drum that rolls with a positive charge. A laser uses negatively charged particles to inscribe the information that will be transferred to paper. The drum is then rolled in toner, which sticks to the negative charge. Paper is then rolled onto the drum, which deposits the toner; the paper is then rolled between two hot rollers, called the fusers, fusing the toner to the paper.

Inside of a laser printer



Credits

Nitin Sampat - Original Materials
Keith Angnello - "Imaging Systems" Guide Book

CCD sensor images, pg 15 (Chapter 2): http://www.looptechnology.com/ccd-sensor-cameras.asp#.VkDndcsZYm8
CMOS sensor information, pg 16 (Chapter 2): http://www.olympusmicro.com/primer/digitalimaging/cmosimagesensors.html
3D scanner image, pg 18 (Chapter 2): http://www.technocrazed.com/photon-3d-scanner-an-affordablescanner-to-digitize-3d-objects
Sensor size diagram, pg 18: http://arsgratiars.blogspot.com/2012/09/no-capitulo-anterior-javimos-como-o.html
Pixel vs print images, pg 30 (Chapter 4): http://www.247inktoner.com/blog/post/2012/10/31/Explanation-ofprinter-DPI.aspx
Halftone example b&w image, pg 31 (Chapter 4): http://www.tek-tips.com/viewthread.cfm?qid=1285311
Inkjet diagrams, pg 32 (Chapter 4): http://supplies-outlet-blog.com/different-types-ink-printing-inkjetprinting/
Laser printer diagrams, pg 32 (Chapter 4): https://imsinkspot.com/laser-printers-work/


