THE INS & OUTS OF IMAGING
Written by Meghan Connor
Contents

FUNDAMENTALS
INPUT
PROCESSING
OUTPUT
X-TRANS SENSOR TECHNOLOGY
FUNDAMENTALS
SPATIAL RESOLUTION

Spatial Resolution refers to the number of pixels in an image; it is the type of resolution that measures detail. Spatial Resolution depends not only on pixel count, often expressed as PPI (pixels per inch), but also on the properties and capabilities of the capture device. Cameras typically state their Spatial Resolution in megapixels. An image with more pixels has a higher Spatial Resolution and is therefore more detailed overall. The size of each pixel should also be considered: for a given sensor area, smaller pixels can record finer detail than larger pixels.
Interpolation

Interpolation is the method used to increase or decrease the number of pixels in an image, thereby altering the image's size. When an image is resized, new pixel values must be estimated or existing ones discarded, depending on the desired change. The three most common interpolation methods, Nearest Neighbor, Bilinear, and Bicubic, each estimate the value of a new pixel in a different way, as the comparison and code sketch below show.

Nearest Neighbor (enlarging this image by 350% took 0.2 seconds):
• Copies the value of the nearest sample point
• Does not consider surrounding pixel values
• Simplest method of interpolation
• Produces jagged edges
• Highest artifact generation
• Fastest method

Bilinear (enlarging this image by 350% took 0.5 seconds):
• Uses data from the 4 bordering pixels

Bicubic (enlarging this image by 350% took 0.6 seconds):
• Uses data from the 16 surrounding pixels (a 4×4 neighborhood)
• Produces the most subjectively appealing result
• Slowest method
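The same comparison can be run in a few lines. Below is a minimal sketch using the Pillow library; "photo.jpg" is a placeholder filename, not a file from this book.

```python
# Enlarging the same image by 350% with each interpolation method.
# Requires Pillow >= 9.1 for the Image.Resampling enum.
from PIL import Image

img = Image.open("photo.jpg")  # placeholder; any test image works
size = (int(img.width * 3.5), int(img.height * 3.5))

nearest = img.resize(size, Image.Resampling.NEAREST)    # fastest, jagged edges
bilinear = img.resize(size, Image.Resampling.BILINEAR)  # 2x2 neighborhood
bicubic = img.resize(size, Image.Resampling.BICUBIC)    # 4x4 neighborhood, smoothest
```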
A pixel is a contone object, meaning it reproduces a continuous range of light levels. Pixels can be thought of as small, square light meters, each measuring a level of brightness from 0 to 255, or black to white. A photograph is made up of pixels, each assigned a numerical value corresponding to its light level.
The images stacked on the right are zoomed-in crops of one small section of the full, larger image. Zooming in on a selected area lets you see each individual pixel.
PPI, DPI & LPI

PPI stands for "pixels per inch." The resolutions of continuous-tone devices, such as cameras, scanners, film and associated photographic papers, monitors, and dye-sublimation printers, are measured in PPI. LPI refers to "lines per inch," a closely related way of expressing the same kind of continuous-tone resolution.

DPI stands for "dots per inch." Dots are binary units of measurement, capable of recording only 2 gray levels. Binary devices, such as laser and inkjet printers, are measured in DPI. While photographs are measured in PPI, printed text is measured in DPI.
To convert DPI to PPI, use the "rule of 16":

PPI = DPI ÷ 16*   or, in general,   PPI = DPI ÷ √(number of dots per pixel)
*16 will change depending on how many gray levels are being simulated, but 16 is the common benchmark used in most conversions.
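As a quick illustration of the conversion (a sketch, not tied to any particular device):

```python
import math

def dpi_to_ppi(dpi, gray_levels=256):
    # An n x n cell of binary dots simulates roughly n*n gray levels,
    # so the divisor is sqrt(gray_levels); for 256 levels that is 16.
    return dpi / math.sqrt(gray_levels)

print(dpi_to_ppi(2400))       # 150.0 -- a 2400 DPI printer resolves ~150 PPI
print(dpi_to_ppi(1200, 64))   # 150.0 -- fewer simulated levels, smaller divisor
```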
TONAL RESOLUTION

Tonal Resolution is a measure of the range of light captured within an image. When thinking of an image in color, consider that all of the colors are simply colored gray levels. The pixels within an 8-bit image, and ultimately the image as a whole, are measured on a scale of 0 to 255; Tonal Resolution is therefore a count of the gray levels within an image. Tonal Resolution accounts for the Dynamic Range of an image, or its number of bits per pixel, and is the basis for Bit Depth and Histograms. As a measure of light it can be expressed in many different ways: f/stops, bits per channel, dynamic range, bit depth, or decibels (dB).
Bit Depth

Bit Depth, or Tonal Resolution, measures the number of gray levels within an image. A change of 1 bit is equivalent to a 1-stop change in exposure, 6 decibels, or a neutral density of 0.3. The number of gray levels can be calculated using powers of 2:

Number of gray levels = 2^(number of bits per pixel, or number of stops)
Bits per channel/pixel    Number of gray levels
 2                        4
 4                        16
 6                        64
 8                        256
10                        1,024
12                        4,096
14                        16,384
16                        65,536
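The table reduces to one line of arithmetic; a quick illustrative snippet:

```python
# Gray levels double with every added bit: levels = 2 ** bits.
for bits in (2, 4, 6, 8, 10, 12, 14, 16):
    print(f"{bits:2d} bits per pixel -> {2 ** bits:,} gray levels")
```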
This portrait and its corresponding diagram illustrate how different numbers of gray levels, and the resulting bit depths, affect overall image quality. The number of gray levels doubles with each segment as you move up the portrait. The bottom, least detailed segment has 4 gray levels (2 bits per channel). The next segment up has 8 gray levels, then 16, then 32, and finally the top, most detailed segment has 64 gray levels (6 bits per channel). As this example illustrates, as bit depth, and with it the number of tones reproduced in an image, increases, so does the overall detail of the image.
Histograms

Histograms measure frequencies; they are graphical representations of the distribution of pixel values in an image. By visually interpreting the numerical value of each pixel, a histogram lets us evaluate exposure and neutral balance. The overall graph represents the dynamic range, or overall tonal resolution, of the scene represented in the image.
[Diagram: a histogram plots pixel count (vertical axis, up to the maximum pixel count) against tonal value from 0 to 255 (horizontal axis); the occupied width corresponds to the dynamic range, in bits per pixel, of the detector.]
• Original image: properly exposed.
• +50% contrast: horizontal spreading indicates that the differences between pixel values have increased.
• +2 stops, overexposed: a shift to the right indicates increased frequencies of brighter tones.
• -2 stops, underexposed: a shift to the left indicates an increased amount of dark tones.
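For illustration, a minimal sketch of computing such a histogram with NumPy and Pillow; "photo.jpg" is a placeholder filename.

```python
import numpy as np
from PIL import Image

# Convert to single-channel grayscale, then count pixels at each of
# the 256 tonal values; counts[v] is the frequency of value v.
gray = np.asarray(Image.open("photo.jpg").convert("L"))
counts, _ = np.histogram(gray, bins=256, range=(0, 256))

print(counts.argmax())  # the most frequent tone (the histogram's peak)
```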
SPECTRAL RESOLUTION

Spectral Resolution, or Color Resolution, addresses how color is generated by different media. This type of resolution is determined by the number of color channels a capture device records. Most capture devices have a red, green, and blue channel. These three primary, or additive, colors are incorporated into a camera's sensor in order to absorb the wavelengths of light present in the visible spectrum, a process of transmittance and absorption that mimics the human eye. Each device, such as a camera or monitor, has its own gamut, or range of realizable color. A larger gamut means more colors can be represented on the device and consequently reproduced in output; a smaller gamut means fewer.
Additive Colors

[Diagram: red + green + blue = white]
The Additive colors represent three colors of light within the visible spectrum. Additive colors are used by monitors and cameras. These colors are Red, Blue, and Green (RGB). When equal quantities of these three colors are added together, they produce white. When unequal amounts of these colors are combined, different colors are reproduced. Devices which use this additive system tend to have larger color gamuts than those that don’t.
This image illustrates layered, filtered images. Three images were taken. A different colored filter (red, green, or blue) was placed in front of a lens for each image. When the three images are stacked and lined up, the girl’s skin turns white. This process shows us how Additive colors combine to make white.
Subtractive Colors

[Diagram: cyan + magenta + yellow = black]
Subtractive colors are pigments. The three subtractive colors are cyan, magenta, and yellow. These colors are commonly used in printing: printers reproduce color with these subtractive primaries through subtractive mixing. They are called "subtractive" because, unlike the additive RGB primaries, they work by absorbing (subtracting) light rather than emitting it. When equal amounts of these pigments are combined, they produce black. The four images below show the results of different subtractive primary colored filters applied in Photoshop. The image furthest to the left is the original, with no filter applied. In the following images, a cyan filter has been added, followed by a magenta and a yellow filter.
Additive and subtractive colors combine to produce one another:

[Diagrams: pairs of primaries mixing, e.g. two additive primaries combining to form a subtractive primary, and vice versa.]
TEMPORAL RESOLUTION

Temporal Resolution deals with time; it measures the refresh rate of a monitor or the frame rate of moving images. Temporal Resolution chiefly applies to video, as it is a measure of frame rate, or frames per second (FPS). The only aspect of still photography that incorporates time is shutter speed, the length of time a camera's shutter stays open while capturing an image; beyond shutter speed, still images do not have temporal resolution. Temporal Resolution and Spatial Resolution have an inverse relationship: a higher Spatial Resolution results in a lower Temporal Resolution, and vice versa. This is because an imaging system can only read out and transmit a finite amount of information per second; the more pixels that must be captured and transferred for each frame, the longer each frame takes, and the lower the Temporal Resolution becomes. Most films have an average frame rate of 24 FPS, while television usually averages about 30 FPS. For Temporal Resolution to remain ideal, monitors must be refreshed periodically; the higher the refresh rate of the monitor, the better the perceived image quality.
IMAGE SIZE

The size of an image is determined by multiplying together aspects of each of the four types of Resolution:
File Size = Spatial Resolution (pixels) × Tonal Resolution (bits) × Spectral Resolution (color channels) × Temporal Resolution (frames per second)
When calculating image size, it is crucial to remember bit to byte, or vice-versa, conversions:
1 byte = 8 bits
1,024 bytes = 1 kilobyte
1,024 kilobytes = 1 megabyte
1,024 megabytes = 1 gigabyte
1,024 gigabytes = 1 terabyte
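A worked example, assuming a hypothetical 24-megapixel, three-channel, 8-bit still image (temporal resolution drops out for a single frame):

```python
# Uncompressed size of one still frame.
pixels = 6000 * 4000            # spatial resolution
channels = 3                    # spectral resolution (R, G, B)
bits_per_channel = 8            # tonal resolution

size_bytes = pixels * channels * bits_per_channel // 8
print(size_bytes / 1024 ** 2)   # ~68.7 megabytes before compression
```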
INPUT
SENSORS A camera’s sensor is a silicon semiconductor made up of photosensitive diodes called “photosites,” which capture photons and convert them to electrons. This process is similar to that of solar panels converting light to energy. The electrons present in each photosite are converted to a voltage, which is then converted to digitial data, such as a pixel. Pixels are then relayed in consecutive order and stored as an image in camera made up of millions of pixels arranged in a grid. Each one of these pixels can be compared to mini-light meters. Upon capture, each pixel on the sensor measures the amount of light that hits it. Photons (light), creates an electronic charge when it hits the sensor. As a result of this process, each individual pixel will foster a different numerical charge depending on the amount and intensity of the light which hit it. Pixels read the charges and process them in order to create an image A higher charge will produce a brighter color which will be seen in the image once it’s been processed The Charged Coupled Device (CCD) and the Carbon Metal-Oxide Semiconductor (CMOS) sensors are the two most widely recognized sensor types today. The main difference between them regards how they process the light which their pixels have measured.
Complementary Metal-Oxide Semiconductor (CMOS)

CMOS, or "Complementary Metal-Oxide Semiconductor," is a technology used in the construction of analog and digital integrated circuits. In imaging, this technology is integrated into Active Pixel Sensors (APS). An Active Pixel Sensor is a single integrated circuit containing an array of pixel sensors; on each pixel is a photosite that works like a light meter, and each pixel has its own charge-to-voltage conversion. The most common type of camera sensor today uses CMOS technology. Active Pixel CMOS sensors are found in cell phone cameras, scanners, digital cameras, and video cameras. This type of sensor is a complex system that reads every pixel individually; attached to each pixel is at least one transistor, which enables that readout.
[Diagram: photon-to-electron conversion (photosites) → electron-to-voltage conversion (per-pixel transistors) → final image]
Charge-Coupled Device (CCD)

A CCD, or "Charge-Coupled Device," is a semiconductor sensor used in the production of high-quality images. CCD sensors work by converting light to charge, photons to electrons. The charge is transferred across the sensor, from one row of pixels to the next, and then off the chip: the charge of each pixel passes through a small number of output nodes, where it is converted to a voltage, buffered, and sent off the chip as an analog signal. The output signal produced by this transfer passes through an amplifier, making the signal readable as an image. The amplifier is a necessary component of this process because the original output signal is often too weak to be read directly.
[Diagram: photon-to-electron conversion → row-by-row charge transfer → final image]
CMOS

More Noise
Incoming light will sometimes hit the transistors rather than the photosite, which increases noise in images. Because this complex system of transistors occupies space on the sensor, more noise is present than with a CCD.
Less Power
CMOS sensors use less power than CCDs. This results in cameras with CMOS sensors having longer battery life.
Lower Price
CMOS sensors have reduced the cost of digital imaging because they are cheaper to produce than CCDs: they can be made with conventional semiconductor manufacturing processes.
Lower Quality
Because of their complex per-pixel circuitry, which leaves less room for photosites on the chip, CMOS sensors have traditionally produced lower quality images than CCDs.
CCD

Less Noise
CCDs are less complex than CMOS sensors, resulting in the production of less noise. The amplifier will cause noise, but not nearly as much as that produced by a CMOS sensor.
More Power
The process of reading pixels which is used by CCDs consumes as much as 100x the amount of power consumed by a CMOS sensor.
Higher Price
CCDs are more expensive because they require special manufacturing; the manufacturing process of a CCD enables conversion to take place within the chip without the occurrence of distortion.
Better Quality
Because of the specialized manufacturing process used in their creation, CCD sensors produce higher quality images than CMOS sensors, with high uniformity of output. CCDs have also been around longer than CMOS sensors, so there has been more time for advancements and refinements, image quality among them.
Color Filter Array (CFA)

In order for pixels to collect all the data necessary to reproduce accurate colors in an image, a Color Filter Array is used. The Color Filter Array (CFA) is a mosaic of small color filters placed over the photosites of the sensor. These filters enable color information to be captured: photosites measure light intensity, not wavelength, so the filters are what allow different wavelengths of light, or different colors, to be distinguished. Depending on the specific CFA pattern, the filters can be red, green, blue, white, cyan, magenta, yellow, and/or black.
Bayer Pattern

The most commonly used and best known CFA pattern is the Bayer Pattern, named after Bryce Bayer of Eastman Kodak, who patented it in 1976. A raw image made with a capture device that uses this pattern is referred to as a "Bayer Pattern Image." The pattern mimics the physiology of the human eye by incorporating twice as many green filters as red and blue ones: because of how our L and M cones "see" in daylight, the human eye is most sensitive to green. The pattern overall is 50% green, 25% blue, and 25% red, which makes the image appear more detailed and less noisy to the human eye; the green channel also carries less noise than the others, which contributes to this appeal. Because each pixel records only one of the three colors (RGB), the raw pixel data cannot fully determine accurate colors. To produce a full-color, fully resolved, and accurate image, demosaicing algorithms estimate the missing values using interpolation methods like those covered in Chapter 1.
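For illustration, a short sketch that builds the repeating RGGB Bayer mosaic and confirms its 50/25/25 proportions; the 6x6 grid size is an arbitrary choice.

```python
import numpy as np

# Tile the 2x2 RGGB unit across a small sensor grid.
h, w = 6, 6
cfa = np.empty((h, w), dtype="U1")
cfa[0::2, 0::2] = "R"
cfa[0::2, 1::2] = "G"   # green on the red rows
cfa[1::2, 0::2] = "G"   # green on the blue rows
cfa[1::2, 1::2] = "B"

values, counts = np.unique(cfa, return_counts=True)
print(dict(zip(values, counts / cfa.size)))  # {'B': 0.25, 'G': 0.5, 'R': 0.25}
```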
SCANNERS

Scanners, like cameras, are excellent sources of image input.
Flatbed Scanner

The Flatbed or "Desktop" scanner is the most versatile and well-known scanner today.
Sheet-fed Scanner

Sheet-fed scanners look like small printers. They are similar to flatbed scanners, except that the document moves during scanning while the scan head stays immobile.
Handheld Scanner

Handheld Scanners use the same basic technology as Flatbed Scanners, but they rely on the user to move them instead of a motorized belt. The Handheld Scanner does not provide good image quality, but it is well suited to capturing text.
Drum Scanner

Drum Scanners are used by the publishing industry to capture highly detailed images. First, the document to be scanned is mounted on a glass cylinder. At the center of this cylinder is a sensor that splits the light bounced from the document into three separate beams. Each beam is sent through a color filter into a photomultiplier tube (PMT), where light is changed into an electrical signal.
PROCESSING
IMAGE PROCESSING & PIPELINE

From the time an image is captured to when it is exported after post-processing, it travels through a specific, ordered set of steps often referred to as the "imaging pipeline." The pipelines for JPEGs and for RAW files differ mostly in their post-processing; image capture itself follows a universally established sequence. When an image is captured, it first passes through what is often called the "analog system." This initial capture has only one channel and records a black and white image. The analog signal is then sent through the analog-to-digital converter, where the electrical charge captured by the analog system is converted into numbers.
[Diagram: the imaging pipeline for a RAW file, from analog capture through rendered output.]
IMAGING PIPELINE

1. ANALOG SYSTEM

The Analog System captures a black and white, one-channel image. After photons pass through the lens and strike the sensor, they are converted into electrons. The sensor detects the amount of light captured for each individual pixel and sends that information along to the next step in the pipeline, the Analog to Digital Converter (ADC).
2. ANALOG TO DIGITAL CONVERTER (ADC)

The Analog to Digital Converter, or "ADC," is responsible for turning the analog signal provided by the previous step into a digital signal: it converts the voltages supplied by the Analog System into numerical values. The image remains black and white, but because numbers have now been assigned to pixels, it is a quantized grayscale image.
3. RAW DATA

The raw numerical data is stored within a "container," the camera's capture file format (NEF, CR2, DNG, etc.). The image is still a black and white, mosaiced file.
4. COLOR FILTER ARRAY (CFA) INTERPOLATION

CFA Interpolation, otherwise known as "demosaicing," constructs a full color image from the data provided by the previous step. Here, missing pixel values are estimated using interpolation methods like those covered in Chapter 1. Because each pixel sits behind one of the three filters (red, green, or blue) of the Bayer Color Filter Array, an algorithm is used to estimate the missing color levels for each pixel. As shown in the pipeline on the previous page, the black and white image is separated into these three color channels, and the image triples in size.
5. NEUTRAL WHITE BALANCE

Neutral White Balancing eliminates color casts, making neutrals appear truly neutral. This step can take place in camera, by adjusting the White Balance to match an environment's color temperature prior to capture, or in post-processing.
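One classic automatic approach is the "gray world" assumption, sketched below; real raw converters apply the photographer's chosen white-balance multipliers instead, but the per-channel scaling idea is the same.

```python
import numpy as np

def gray_world(img):
    """img: float RGB array in 0..1, shape (H, W, 3)."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means  # push each channel toward gray
    return np.clip(img * gains, 0.0, 1.0)
```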
6. GAMMA CORRECTION

Gamma Correction adjusts an image's tonal response for monitor display.
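A minimal sketch of the idea, assuming a simple power-law encode and a typical display gamma of 2.2; actual pipelines use the exact curve of the target color space.

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Map linear sensor values (0..1) to display-ready values."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

print(gamma_encode(0.18))  # mid-gray: ~0.46 after encoding
```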
7. COLOR SPACE

At this step, Tristimulus Values (XYZ) are translated into the working LAB/RGB space. Here, Color Spaces or "profiles" (sRGB, Adobe RGB) can be applied for output, prior to making any subjective changes.
8. SUBJECTIVE CORRECTIONS

The way subjective corrections are made is the one step in the pipeline that differentiates RAW files from JPEG files. For JPEGs, these corrections, which include increased contrast and detail, are referred to as "manufacturing processes" because they occur within the camera after the image is demosaiced. RAW files are much larger and lossless, meaning they can withstand significantly more adjustment in post-processing than a JPEG can without suffering in overall quality; professional photographers use the RAW format because of this flexibility. These adjustments are truly subjective, made in Photoshop or Lightroom after import. They include, but are not limited to: exposure correction (histograms), noise reduction (luminance), lens distortion correction (correcting for vignetting and aberrations), brightness and contrast adjustments, and sharpening. After these adjustments are finalized, the image is exported as a rendered output file.
OUTPUT
The final step in imaging is output. As explained in previous chapters, after an image is captured it travels through the imaging pipeline; after all final subjective corrections have been made, the image must be exported or printed in order to be displayed. "Output" in analog photography was the photographer using light to project the final image, post burning and dodging, onto photo-sensitive paper, which then traveled through the developer, stop bath, and so on to complete the process. Today, output mostly refers to the printing or exporting of a digital image. The output medium varies with the desired result: a final image can be displayed in two different ways, on a monitor or on print media. It can be a digital file format, such as a JPEG posted to a website, or a tangible photographic print. Today there are many different printers, file formats, display technologies, color spaces, and other factors that greatly affect how an image appears on output.
HALFTONING

Halftoning is a technique used during the printing stage of image output. The process traces back to William Fox Talbot, who proposed the use of photographic screens in the 1850s. Halftoning simulates continuous tone images through different sized, shaped, and colored dots. Digital Halftoning, which is used today, was developed in the 1970s. In this process, dots are laid down in specific arrangements in order to recreate photographs. Because color printers use a CMYK (cyan, magenta, yellow, and black) color space, subtractive mixing as described in Chapter 1, each of these dots is one of those four colors. Dots overlap and combine in particular places to produce the different colors within an image. At a normal viewing distance, the human eye sees a halftone image as continuous, because the dots are too small for us to resolve; viewed through a microscope, however, the dots become visible. The large image on the right is the original halftoned version of this portrait; the two smaller versions have been zoomed in, as if viewed under a microscope, to show how halftoning works. Today, inkjet and laser printers use halftoning in their printing processes. When printing in grayscale, only black dots are used, laid down in specific arrangements to reproduce the continuous tones within a grayscale image.
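A minimal sketch of one digital halftoning technique, ordered dithering with a 4x4 threshold matrix; this stands in for the proprietary screening a real printer driver performs.

```python
import numpy as np

# 4x4 index (dither) matrix, normalized to thresholds in 0..1.
DITHER = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) / 16.0

def halftone(gray):
    """gray: float array in 0..1, shape (H, W). Returns 0/1 dots."""
    h, w = gray.shape
    thresholds = np.tile(DITHER, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > thresholds).astype(np.uint8)  # 1 = white, 0 = black dot
```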
PRINTING

The fundamental difference between the analog and digital printing processes is the presence of a physical plate.
Analog Printing

In the 1440s, Gutenberg adapted the basic designs of older printing presses, making them more cost-effective and efficient. Gutenberg's letterpress and movable type revolutionized printing and its capabilities. Since then, many other analog printing processes, such as lithography, flexography, gravure printing, and screen printing, have been created. Analog printing requires a plate onto which the text or images to be printed have been engraved. The plate is inked and pressed against paper, completing the process. Analog presses are usually fed by large rolls of paper, allowing for continuous printing; this makes analog printing much faster for mass-produced, identical prints, and therefore ideal for newspapers and magazines. The process is expensive because each plate has to be custom made, and the plate material, such as aluminum, copper-coated zinc, or even rubber, can also be pricey.
Digital Printing

Digital printing is very flexible. In this system, software takes the place of the analog plate, so changes to print settings and design can be made much faster. Individual digital prints are much cheaper than analog ones, though in most cases they suffer somewhat in quality, and a digital printer working page by page is slower over long runs than a roll-fed analog press. The most common digital printer today is the inkjet; laser and dye-sublimation printers are also widely used, each with its own advantages and disadvantages. Digital printers are composed of a marking engine (the method of ink application) and a Raster Image Processor (RIP). The RIP is responsible for job queuing, halftoning, color management, and rasterizing. DPI is used to describe the images printed by digital printers because they are halftone devices.
INKJET PRINTERS

Inkjet printers, the most commonly used printer type today, work by selectively placing dots of ink onto paper as it passes beneath the print heads. The first inkjet printer capable of printing digital images was created in the 1970s; Canon, Epson, and Hewlett-Packard (HP) were the first companies to create and market these devices. This type of printer is popular because it produces higher quality photographic prints than laser printers. There are two types of inkjet printers, Drop-on-Demand (DOD) and Continuous Inkjet (CIJ), which differ in how they disperse ink onto the paper.
Drop-on-Demand

Drop-on-Demand (DOD) printers are currently the most popular and widely used inkjet printers. Their systems use compact technology, but their printing process takes longer. There are two types of Drop-on-Demand inkjet printers: Piezoelectric and Thermal. Piezoelectric printers use charged crystals to eject ink: an electric voltage applied at the nozzle deforms the piezo element, creating the pressure that forces ink out of the nozzle, and the piezo heads vibrate the drops of ink, allowing more control over their placement. Piezoelectric inkjets are more expensive, but also more versatile, as they can use a wider variety of inks than other inkjet types; Epson and Brother Industries produce piezoelectric printers. Thermal inkjet printers are more common than piezoelectric printers and are produced by Canon, Hewlett-Packard (HP), and Lexmark. The ink cartridges within thermal inkjets contain a series of small chambers, each with a heater. A pulse of current travels through the heater, causing the ink to form a bubble; the heated bubble expands, and ink is ejected onto the paper.
Continuous Inkjet (CIJ)

Continuous Inkjet printers contain a high-pressure pump that draws ink from a reservoir, through a gunbody, and out of a microscopic nozzle, producing a continuous stream of ink drops. The drops then pass through an electrostatic field created by a charging electrode as they pass; only a small portion of the droplets is used in printing, while the rest are recycled. CIJ printers are more expensive than Drop-on-Demand printers, but because of the high velocity of the ink droplets and their high ejection frequency, they are very fast and are mostly used for large prints. Unlike the majority of digital printers today, this printer type does not use halftoning.
LASER PRINTERS

Laser printers function by scanning a laser over a negatively charged drum; where the laser strikes, it removes the negative charge from that area. Once the drum carries this latent image, charged toner particles attach themselves to the neutralized portions of the drum, and the toner is then fused onto the paper with heat. Laser printers are used in many office all-in-one machines because of their speed and their ability to reproduce high-quality text, but they do not produce high quality photographic images.
DYE-SUBLIMATION PRINTERS

Dye-sublimation printers use heat to transfer dye onto paper. This printer type is the best at photographic output: it is a continuous tone printer that does not use halftoning, and it is commonly used by professional photographers. Dye-sub printers use dye, which is more archival than ink, so their prints last longer. Unfortunately, these printers are often much slower than typical printing technologies and their output sizes tend to be much more restricted.
MONITORS & DISPLAY TECHNOLOGIES

Monitors are continuous tone, or "contone," devices. They display color through pixels rather than dots, so their input resolutions are given in PPI rather than DPI; scanners and digital cameras are also contone devices. How an image is displayed on a monitor plays a major role in how it appears once printed. Many people editing photographs on computers do not consider how the color temperature of the lights in the room can affect how their image appears. To correct for this, profiles can be applied to monitors. A profile alters the brightness, vividness, and gamut of a monitor so that all of the colors appear neutral, or accurately color balanced. For a profile to be created, and for the colors within an image to be displayed accurately, a monitor must be calibrated with a colorimeter, a device that measures the color and intensity of the light the display emits at particular wavelengths. After profile adjustments have been made to suit the environment, different color spaces with different gamuts can be applied to an image within the system. Display technologies such as Light Emitting Diode (LED) and Liquid Crystal Display (LCD), the two most common today, arrange pixels differently, so each has its own advantages and disadvantages; the same image can look different on each display type.
Light Emitting Diode (LED)

Light Emitting Diode (LED) displays are widely used for outdoor signage, TV displays, and billboards. Within an LED, electrons recombine with electron holes, releasing energy in the form of photons. The color of the light corresponds to the energy of each photon, which is determined by the energy band gap of the semiconductor. This display requires an array of red, green, and blue diodes, which together form a full color pixel.
Liquid Crystal Display (LCD)

The Liquid Crystal Display (LCD) is a flat, thin display technology. LCDs consist of an array of pixels placed in front of a light source. Within an LCD, a layer of liquid crystal molecules is aligned between two transparent electrodes and two polarizing filters. The polarizing filters are perpendicular to one another, so on their own they block the light. The electrodes apply an electric field to the liquid crystals, which align to control how much light passes through the system. LCD technology is used most often in TVs, cell phone displays, and computer monitors.
COLOR SPACES

Color Spaces are specific arrangements of colors created by different companies within the imaging industry as an attempt to standardize how colors are represented on a monitor or device. Each color space is different, and choosing one, or a profile, to apply to an image depends on a variety of factors, from the desired effect while editing to the requirements of the output medium. Pictured on the following page are Chromaticity Diagrams, figures that illustrate the gamut, or range of realizable color, of each color space. The colored region contains the colors of the visible spectrum, and each triangle in the top diagram shows the range of values its color space can reproduce. Different color spaces have different white points, or neutral reference points, and larger color spaces can reproduce more colors than smaller ones. A color space can be applied to a file during editing; you can assign a color space, or profile, to an image within Photoshop. Different photographic papers also have their own gamuts: soft-proofing lets you apply the profile of the paper you will be printing on during the editing process, so you can cater to its gamut. Four of the most common Color Spaces are Adobe RGB, sRGB, CIE L*a*b*, and ProPhoto RGB.
Adobe RGB 1998
• RGB color space
• Developed by Adobe Systems, Inc. in 1998
• Designed to encompass most of the colors achievable on CMYK color printers
• Encompasses about 50% of the visible color spectrum
• Larger gamut than sRGB but smaller than ProPhoto
CIE L*a*b*
• Allows the photographer to edit 3 channels: L, a*, b*
• Created by the International Commission on Illumination (CIE)
• Describes all colors visible to the human eye
• L channel = lightness (0-100; 0 being black and 100 being white)
• a* & b* channels = color components (red-green and blue-yellow)
• Color opponent system
• Designed to match the perception of human vision
• Largest color gamut
• Three-dimensional model
• Device independent
sRGB
• Created cooperatively by HP and Microsoft in 1996 for use on the Internet, monitors, and printers
• Smallest gamut of the four color spaces
• Good for 8 or 16 bit images
• Most commonly used color space
• User can interact with the image as it was captured and stored within the camera, as many cameras allow the profile to be applied in camera
• Additive color model: the three primaries combine at full strength to make white; black is the absence of light
ProPhoto RGB
• Developed by Kodak
• RGB color space
• Very large gamut (encompasses over 90% of visible colors within the CIE L*a*b* space)
• Best used with 16-bit images, which avoids posterization
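Assigning and converting profiles can also be scripted; a brief sketch using Pillow's ImageCms module, with "photo.jpg" as a placeholder file.

```python
from PIL import Image, ImageCms

img = Image.open("photo.jpg")           # placeholder file
srgb = ImageCms.createProfile("sRGB")
lab = ImageCms.createProfile("LAB")

# Build and apply an sRGB -> L*a*b* transform.
transform = ImageCms.buildTransform(srgb, lab, "RGB", "LAB")
img_lab = ImageCms.applyTransform(img, transform)
```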
Here, different color spaces have been assigned to the same image. The zoomed-in segments to the right of each image show how the appearance of certain colors changes under different color spaces, or profiles.
Adobe RGB 1998
CIE L*a*b*
ProPhoto RGB
sRGB
FILE FORMATS

The information captured by a camera can be stored in one of many file formats used by photographers and consumers alike. Each file format is unique, with its own algorithm for compressing, encoding, organizing, and ultimately storing visual information. The best known photographic file formats are JPEG/JPG, RAW (NEF, CR2, DNG, etc.), GIF, TIFF, PSD, PNG, and PDF.
JPEG or JPG (Joint Photographic Experts Group)

The JPEG is the most commonly used image file format. JPEGs use lossy compression, meaning data is permanently discarded each time the file is compressed; the quality of the image therefore decreases every time it is edited and re-saved. A JPEG is much smaller than a RAW image, in file size and often in dimensions, which makes JPEGs versatile for Internet use: most blogs, social media platforms, and websites only support the JPEG image file format. The image below is a JPEG that has degraded over time as a result of over-editing and being re-saved many times.
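This generation loss is easy to demonstrate; a sketch with Pillow, where "original.jpg" and the quality setting are placeholders.

```python
from PIL import Image

img = Image.open("original.jpg")           # placeholder file
for generation in range(10):
    img.save("resaved.jpg", quality=75)    # each lossy encode discards data
    img = Image.open("resaved.jpg")        # decode, then re-encode again
# After repeated cycles, compression artifacts accumulate visibly.
```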
RAW Image (NEF, CR2, DNG, etc.)

The RAW image format is the most commonly used file format in the professional photographic industry. A RAW file contains unprocessed data from the image sensor, so it is not ready to be printed straight from capture; but because the file is unprocessed, it is a more accurate reproduction of the colors and overall scene captured by the camera. After edits are made, RAW files can be exported as JPEGs, PSDs, or any other desired file format. One of the most important aspects of RAW images is their use of lossless, rather than lossy, compression. Lossless compression allows the original data within an image to be perfectly reconstructed from the compressed data. This means a RAW file can withstand many changes and alterations, whether through editing of the image itself or through storage and transfer, without losing its high quality. Different companies have created their own RAW formats, which are essentially the same apart from their file extensions; Adobe has tried to establish a standard RAW format with its Digital Negative, or DNG. Pictured below is a RAW file that has gone through all of the same steps as the JPEG on the previous page. Notice how the image has not lost its high quality and detail.
PSD (Photoshop Document)
• Unique to Photoshop
• Supports the storage of layers, alpha channels, and other edits that can be made in Photoshop
• Works as a working file: you can save all of your progress
• Cannot be used/uploaded outside of Adobe programs
PNG (Portable Network Graphics)
• Raster graphics file format
• Supports lossless compression
• Created as an improved replacement for the GIF
• The most used lossless image compression format on the Internet
• Supports a variety of image types
• Designed for Internet use
• Does not support non-RGB color spaces (CMYK)
PDF (Portable Document Format)
• Used to present documents
• Independent of application software, hardware, and operating systems
• Largely based on PostScript
• Proprietary format
• Can include text and images
TIFF (Tagged Image File Format)
• Used to store raster graphics images
• Popular file format in the graphics and publishing industries
• Very large in size
• Flexible and adaptable (allows you to add header tags)
• Can be lossless or lossy
• Convenient for storing and emailing
• JPEGs can be saved within TIFFs
• Mostly used in the transfer of images to print
GIF (Graphics Interchange Format)
• Bitmap image format
• Created by CompuServe in 1987 for Internet use
• Limited palette: supports up to 8 bits per pixel per image (256 different colors can be reproduced)
• Small in size
• Low in quality
• Animated GIFs are created by looping a very short segment (a few seconds) of video or stop-motion
• Suited for small, simple images and graphics with a limited range of colors
FUJIFILM X-TRANS SENSOR TECHNOLOGY
FUJIFILM X-PRO1

In 2012, Fujifilm released the X-Pro1. A member of Fujifilm's X-Series, this digital camera looks just like a film camera, yet it has advanced digital camera technology in important ways. Compared with its digital counterparts, at first it doesn't seem very different: it is a 16-megapixel camera, compatible with Fujifilm's X-mount lenses, with a second generation Hybrid Viewfinder, a TTL hot shoe, and a sync terminal. What sets this camera apart is that it is mirrorless and that it uses an APS-C sized "X-Trans" CMOS sensor.
BAYER CFA: THE ISSUES

The Bayer Pattern (Chapter 2) is the most commonly used and best known Color Filter Array in digital sensor technology today. It was created to mimic the physiology of the human eye by incorporating twice as many green filters as blue and red ones, because the human eye is most sensitive to green; as a result, a Bayer image appears more detailed and less noisy to the human eye. The disadvantages of the Bayer Pattern are rarely discussed, because it has earned its place as the most successful CFA.
Moiré Pattern
A sensor with a Bayer CFA runs an increased risk of producing a moiré pattern in photographs of fine structures or patterns. A moiré pattern is a secondary, noticeable superimposed pattern created when two similar regular patterns are overlaid slightly displaced; the two patterns create a third pattern that does not exist in reality. This happens when the regular 2x2 repeat of the filters in the Bayer CFA interferes with the pattern being photographed. It not only produces an artificial pattern, but can also produce undesirable color artifacts.
LEFT: Image before the Bayer CFA has been applied. RIGHT: Image after the Bayer CFA has been applied. The moiré pattern can be seen.
Optical Low-Pass Filter (OLPF)
[Diagram: 1) lens, 2) optical low-pass filter, 3) sensor]
The Optical Low-Pass Filter was built into cameras with a Bayer CFA in order to reduce moiré and false color reproduction. The OLPF sits directly in front of the sensor, between the sensor and the lens. While this reduces moiré to an extent, an Optical Low-Pass Filter also decreases image resolution, because it works by slightly blurring the finest detail before it reaches the sensor.
LEFT: Image before the Bayer CFA and OLPF have been applied. RIGHT: Image after the Bayer CFA and OLPF have been applied. The moiré pattern is no longer visible because of the OLPF, but the resolution of the image has suffered significantly; the image now appears blurry, as detail has been lost.
X-TRANS SENSOR: THE SOLUTIONS

Film-based Logic

Fujifilm took a different approach to designing its new CFA pattern. Moiré was never an issue in film photography, because of the irregular size and layout of the silver halide particles in film. Fujifilm therefore settled on a much more random arrangement of filters than the Bayer Pattern's. With randomness increased, moiré is reduced enough that the need for an Optical Low-Pass Filter is eliminated entirely. This logic is a win-win: moiré is eliminated, and resolution increases because the OLPF no longer sits between the lens and the sensor.
LEFT: Image before the X-Trans Pattern has been applied. RIGHT: Image after the X-Trans Pattern has been applied. Moiré is no longer present and resolution is maintained.
6x6 X-Trans Sensor Pattern
The increased randomness of the X-Trans filter pattern keeps green filters dominant, with green still outnumbering red and blue as in the Bayer pattern, which keeps images appealing to the human eye. With the randomness of the pattern increased, moiré is reduced, false colors are eliminated, and high resolution is restored, as the full, unfiltered performance of the lens is captured. The pattern also ensures that every horizontal and vertical line of pixels contains at least one red, one green, and one blue pixel, whereas the Bayer pattern has no red or blue pixels in some lines, which can result in false color reproduction.
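That row-and-column property can be checked in a few lines. The 6x6 matrix below is a commonly published rendering of the X-Trans layout, included here as an illustrative assumption; verify it against Fujifilm's own documentation before relying on it.

```python
# A commonly published rendering of the 6x6 X-Trans layout (assumed).
XTRANS = [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]

# Every row and every column should contain all three colors.
rows_ok = all(set(row) == {"R", "G", "B"} for row in XTRANS)
cols_ok = all(set(col) == {"R", "G", "B"} for col in zip(*XTRANS))
print(rows_ok and cols_ok)  # True

green_share = sum(row.count("G") for row in XTRANS) / 36
print(green_share)  # ~0.56 -- green still dominates, as in the Bayer CFA
```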