Introduction to Image Processing


INTRODUCTION TO IMAGE PROCESSING

Politeknik Kota Malang Aditya Kurniawan, S.ST © 2011


WHAT IS THE DIFFERENCE?

Image Processing, Computer Vision, Robot Vision


IMAGE PROCESSING

A process applied to an image, focusing on transforming, encoding, and transmitting the image.

IMAGE → IMAGE


COMPUTER VISION

Computer Vision => a means of knowing the world visually, supported by knowledge and powered by computational instruments.

IMAGE → Description / human-readable information


ROBOT VISION

Robot Vision => a machine with the ability to see its environment, designed with a workflow algorithm so that it can make decisions and finish the job automatically.

IMAGE → Action


BLOCK DIAGRAM

Image Processing + Artificial Intelligence → Computer Vision (the "IT guys")

Computer Vision + Hardware → Robot Vision


BLOCK DIAGRAM

Image Processing + Artificial Intelligence → Computer Vision

Computer Vision + Hardware → Robot Vision (the "MECHATRONIC guys")


INTRODUCTION

Modern digital technology has made it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers. The goal of this manipulation can be divided into three categories (sketched in code below):

Image Processing: image in → image out

Image Analysis: image in → measurements out

Image Understanding: image in → high-level description out
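As a minimal illustration of the three categories, consider the NumPy sketch below. The image, the box blur, the brightness threshold, and the labelling rule are all hypothetical examples chosen for this sketch, not taken from the slides.

```python
import numpy as np

# A tiny 4x4 grayscale "image" (hypothetical example values).
img = np.array([[ 10.,  20.,  30.,  40.],
                [ 50., 200., 210.,  60.],
                [ 70., 220., 230.,  80.],
                [ 90., 100., 110., 120.]])

# Image processing: image in -> image out (a simple 3x3 box blur).
padded = np.pad(img, 1, mode="edge")
blurred = np.zeros_like(img)
for m in range(img.shape[0]):
    for n in range(img.shape[1]):
        blurred[m, n] = padded[m:m + 3, n:n + 3].mean()

# Image analysis: image in -> measurements out (e.g. count bright pixels).
bright_pixels = int((img > 128).sum())

# Image understanding: image in -> high-level description out (a toy rule).
description = "bright object present" if bright_pixels > 0 else "empty scene"

print(blurred.shape, bright_pixels, description)
```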


INTRODUCTION

Image Processing


INTRODUCTION

Image Analysis



INTRODUCTION

Image Understanding



INTRODUCTION

We begin with certain basic definitions. An image defined in the “real world” is considered to be a function of two real variables, for example, a(x,y) with a as the amplitude (e.g. brightness) of the image at the real coordinate position (x,y).



INTRODUCTION

An image may be considered to contain subimages, sometimes referred to as regions-of-interest, ROIs, or simply regions. This concept reflects the fact that images frequently contain collections of objects, each of which can be the basis for a region.



INTRODUCTION

Coordinate Position



INTRODUCTION

[Figure: a region of interest bounded by x-coordinates X1 to X2 and y-coordinates Y1 to Y2]


INTRODUCTION

[Figure: two regions of interest between X1 and X2, one spanning Y1 to Y2 and another spanning Y3 to Y4]


INTRODUCTION

In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image (region) might be processed to suppress motion blur while another part might be processed to improve color rendition.
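A minimal sketch of region-wise processing using NumPy slicing follows. The region coordinates and the two operations (a crude unsharp boost standing in for motion-blur suppression, and a red-channel scaling standing in for colour rendition) are assumptions for illustration; a real system would use more elaborate filters.

```python
import numpy as np

# Hypothetical 100x100 RGB image with values in [0, 1].
img = np.random.default_rng(0).random((100, 100, 3))

# Region 1 (rows 10..50, cols 10..50): crude sharpening as a stand-in for
# motion-blur suppression: push each pixel away from the region mean.
r1 = img[10:50, 10:50]
region_mean = r1.mean(axis=(0, 1), keepdims=True)
img[10:50, 10:50] = np.clip(r1 + 0.5 * (r1 - region_mean), 0.0, 1.0)

# Region 2 (rows 60..90, cols 60..90): a simple colour-rendition tweak:
# scale the red channel up slightly, leaving the rest of the image alone.
img[60:90, 60:90, 0] = np.clip(img[60:90, 60:90, 0] * 1.2, 0.0, 1.0)
```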



INTRODUCTION

Brightness enhancement

Contrast enhancement


DIGITAL IMAGE DEFINITIONS

A digital image a[m,n] described in a 2D discrete space is derived from an analog image a(x,y) in a 2D continuous space through a sampling process that is frequently referred to as digitization. For now we will look at some basic definitions associated with the digital image. The effect of digitization is shown in Figure 1.
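A minimal sketch of the sampling (digitization) step, assuming the continuous image is available as a Python function a(x, y); the particular function and the 16×16 sampling grid are hypothetical choices for illustration.

```python
import numpy as np

def a(x, y):
    """A stand-in continuous image a(x, y): brightness as a smooth function of position."""
    return 0.5 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

M, N = 16, 16                      # M columns, N rows
xs = (np.arange(M) + 0.5) / M      # sample positions along x in [0, 1)
ys = (np.arange(N) + 0.5) / N      # sample positions along y in [0, 1)

# Sampling (digitization): evaluate a(x, y) on the discrete grid to get a[m, n].
a_digital = np.array([[a(x, y) for x in xs] for y in ys])
print(a_digital.shape)             # (16, 16) -> N rows, M columns
```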



DIGITAL IMAGE DEFINITIONS

[Figure: a colour range spanning 0,2 nM to 3,2 nM]

A sample range of colour in the real world is an analog signal.


DIGITAL IMAGE DEFINITIONS

[Figure: the same colour range from 0,2 nM to 3,2 nM, with sample points marked]

The idea of digitization is taking samples of a range of analog values.


DIGITAL IMAGE DEFINITIONS

[Figure: the colour range from 0,2 nM to 3,2 nM divided into four sample levels coded 00, 01, 10, 11: a 2-bit colour representation]

The idea of digitization is taking samples of a range of analog values; with 2 bits, four distinct levels can be represented.
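A minimal sketch of this 2-bit quantization idea. The analog range 0,2 to 3,2 is taken from the figure labels; the uniform spacing of the four levels is an assumption for illustration.

```python
import numpy as np

lo, hi = 0.2, 3.2          # assumed analog range, taken from the figure labels
bits = 2
levels = 2 ** bits         # 4 levels -> codes 00, 01, 10, 11

def quantize(value):
    """Map an analog value in [lo, hi] to one of the four 2-bit codes."""
    t = (value - lo) / (hi - lo)                       # normalise to [0, 1]
    return int(np.clip(np.floor(t * levels), 0, levels - 1))

for v in (0.2, 1.0, 2.0, 3.2):
    print(v, format(quantize(v), "02b"))               # e.g. 3.2 -> '11'
```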


DIGITAL IMAGE DEFINITIONS



DIGITAL IMAGE DEFINITIONS

The 2D continuous image a(x,y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel (pixel comes from “picture element”). The value assigned to the integer coordinates [m,n] with {m=0,1,2,…,M–1} and {n=0,1,2,…,N–1} is a[m,n]. In fact, in most cases a(x,y), which we might consider to be the physical signal that impinges on the face of a 2D sensor, is actually a function of many variables including depth (z), color (l), and time (t). Unless otherwise stated, we will consider the case of 2D, monochromatic, static images in this chapter.
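A minimal sketch of the [m, n] indexing convention described above; the image content is a hypothetical random array.

```python
import numpy as np

N, M = 16, 16                                   # N rows, M columns
rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(N, M))           # the digital image a[m, n]

m, n = 3, 7                                     # column index m, row index n
print(f"a[{m},{n}] = {a[n, m]}")                # NumPy stores rows first, so index [n, m]
```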


DIGITAL IMAGE DEFINITIONS

A pixel contains this information: a(x, y, z, l, t)

a = illumination / light exposure at a certain pixel
x = horizontal coordinate
y = vertical coordinate
z = depth
l = colour
t = time frame
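A minimal sketch of this per-sample view as a data structure. The record below is a hypothetical representation for illustration; real image formats store these dimensions as array axes rather than per-pixel records.

```python
from dataclasses import dataclass

@dataclass
class PixelSample:
    """One sample a(x, y, z, l, t) of the physical signal reaching the sensor."""
    x: float   # horizontal coordinate
    y: float   # vertical coordinate
    z: float   # depth
    l: float   # colour (e.g. a wavelength or channel index)
    t: float   # time frame
    a: float   # illumination / light exposure at this sample

sample = PixelSample(x=0.25, y=0.75, z=0.0, l=550.0, t=0.0, a=0.8)
print(sample.a)
```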



DIGITAL IMAGE DEFINITIONS

A pixel contains this information: a(x, y, z, l, t)
a = illumination / light exposure at a certain pixel

[Example images: low light exposure vs. high light exposure]


DIGITAL IMAGE DEFINITIONS

A pixel contains this information: a(x, y, z, l, t)
x, y = 2-dimensional coordinates

[Figure: a point located by coordinates X1, X2 and Y1, Y2]


DIGITAL IMAGE DEFINITIONS

A pixel contains this information: a(x, y, z, l, t)
z = depth

[Example image: depth, from the surface down to the bottom]


DIGITAL IMAGE DEFINITIONS

A pixel contains this information: a(x, y, z, l, t)
l = colour

[Example images: a yellow colour and a red colour]


DIGITAL IMAGE DEFINITIONS

A pixel contains this information: a(x, y, z, l, t)
t = time frame

[Example images: pictures taken at different time frames t1, t2, t3, t4]


DIGITAL IMAGE DEFINITIONS

The image shown in Figure 1 has been divided into N = 16 rows and M = 16 columns. The value assigned to every pixel is the average brightness in the pixel rounded to the nearest integer value. The process of representing the amplitude of the 2D signal at a given coordinate as an integer value with L different gray levels is usually referred to as amplitude quantization or simply quantization.
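A minimal sketch of this amplitude quantization step, assuming brightness values in [0, 1] and L = 256 gray levels; the 16×16 input here is a hypothetical random array rather than the image from Figure 1.

```python
import numpy as np

L = 256                                              # number of gray levels
rng = np.random.default_rng(2)
a_continuous = rng.random((16, 16))                  # brightness values in [0, 1]

# Amplitude quantization: round each amplitude to the nearest of L integer levels.
a_quantized = np.rint(a_continuous * (L - 1)).astype(np.uint8)
print(a_quantized.min(), a_quantized.max())          # integers in 0..255
```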



COMMON VALUES

There are standard values for the various parameters encountered in digital image processing. These values can be caused by video standards, by algorithmic requirements, or by the desire to keep digital circuitry simple. Table 1 gives some commonly encountered values.



COMMON VALUES

Quite frequently we see cases of M = N = 2^K where {K = 8, 9, 10}. This can be motivated by digital circuitry or by the use of certain algorithms such as the (fast) Fourier transform. The number of distinct gray levels is usually a power of 2, that is, L = 2^B where B is the number of bits in the binary representation of the brightness levels. When B > 1 we speak of a gray-level image; when B = 1 we speak of a binary image. In a binary image there are just two gray levels which can be referred to, for example, as “black” and “white” or “0” and “1”.
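A minimal sketch relating B, L, gray-level images, and binary images; the random input and the mid-range threshold are hypothetical choices for illustration.

```python
import numpy as np

B = 8
L = 2 ** B                                        # 256 gray levels when B = 8
rng = np.random.default_rng(3)
gray = rng.integers(0, L, size=(16, 16))          # a gray-level image (B > 1)

# A binary image has B = 1: only two levels, "0" and "1".
threshold = L // 2                                 # hypothetical threshold
binary = (gray >= threshold).astype(np.uint8)
print(np.unique(binary))                          # [0 1]
```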


CHARACTERISTICS OF IMAGE OPERATIONS

There is a variety of ways to classify and characterize image operations. The reason for doing so is to understand what type of results we might expect to achieve with a given type of operation, and what the computational burden associated with a given operation might be.



ADVANTAGES OF IMAGE PROCESSING

Medical: sharpening X-ray results, analysis of MRI scans, etc.

Technology and communications: reducing noise in satellite images, video streaming

Games: shadow effects on water surfaces, lighting effects, etc.

Photography and film: contrast, brightness, illegal photo manipulations, etc.


MEDICAL APPLICATION



TECHNOLOGY AND COMMUNICATIONS



DIGITAL IMAGE ACQUISITION PROCESS


