http://howdyworldjavapgg.blogspot.com/2013/04/unit-b-background-context-rationale.html
A Hello World, Java based Introduction to Programming
Monday, April 1, 2013
UNIT B -- Background Context & Rationale
FOCUS: The PC and other ICCT (Information, Communication & Control Technologies . . . e.g. computers, phones, industrial control and web-oriented multimedia) in a Caribbean context, common digital formats for information/data, and cracking the box on hardware and software, information systems and networks, to motivate the need for programming for various purposes.
____________________ CONTENTS
INTRODUCTION
Backdrop -- the age of ICCT-based digital productivity
Digital information, digital devices, digital skills & productivity
Cracking the box -- how a PC works in a nutshell
Digital networks for communication, for learning and for industry
Object oriented programming as the gateway to digital productivity
For Discussion
Assignments
INTRODUCTION: In the Caribbean, we are by and large participating in the digital world as consumers [think: music players, cell and smart phones, laptops, etc.], rather than as economically viable and competitive producers. This is of serious concern in light of the accelerating pace of digitalisation of the global economy. Accordingly, we need to begin to prepare ourselves to be capable producers, which starts with basic programming capability.

Thirty years ago, that would have been "easier," as BASIC and PASCAL were popular and widely used programming languages that had been designed for the learning needs of beginners. However, since the 1990's, with the rise of the Internet and multimedia technologies to dominant positions in the consumer markets, multiplied by the rise of widespread digital industrial control systems, such early languages have faded from the focus of attention. (Even earlier ones such as COBOL, FORTRAN, ALGOL and the like are even more of a distant memory, save in very specialised contexts.) A new, much more sophisticated approach to programming emerged, object oriented programming, with the Java programming language -- roughly, an object oriented development of the "C" family of programming languages -- in the forefront.

Java, consequently, is not an educational language but instead one tuned to the needs of professional programmers, in light of the problems that emerged with earlier languages. For instance, it has tamed the notorious GO TO statement used in, say, BASIC, which tended to make for hard to follow, hard to debug "spaghetti code." (Cf. a famous "200klines . . ." horror story here.) Similarly, the object approach transcends the capacity of structured programming in terms of how it suggests development (though it can be argued that, strictly, the two can be shown to be formally equivalent). Similarly, Java tamed C's notorious pointer.

Perhaps its most important innovation was the introduction of the Java Virtual Machine [JVM] and bytecodes, which allowed for "write once, run anywhere" -- WORA -- programming, by creating a standard model software machine that the program is notionally run on, where the JVM then carries out the job of interfacing the program to its specific hardware environment. (This was also responsible in part for Java's early reputation for slowness, somewhat tamed since by the introduction of just-in-time compilers.) Only slightly behind this innovation was the creation of Applications Programming Interfaces (API's), which allowed Java to have a library of canned, off the shelf, packaged "modules" to do various jobs in programming. This means that a lot of Java programming is about calling canned modules rather than rolling one's own from scratch. That is especially great for windows based multimedia programming. Similarly, the ability to host native modules allows Java to interface with hardware systems through C or machine code, etc.

However, that focus on the needs of the professional programmer means that Java is significantly more challenging for gaining even first level programming proficiency, much less
seriously competing in the market place. (This last means that "toy" educational languages are not really appropriate to our needs. They work to some extent for Primary School or Secondary School students, and perhaps for those who know they just want some basic familiarity or will be able to take the time over a four year degree programme, but that just does not seem to fit well with the needs of those who need to be productive at least to a basic level in a "useful" language, but who do not have a lot of time to pick up such proficiency through doing a long course sequence. Fortunately, there is encouraging research that shows that an "objects first" exposure to programming can work, and this course will use a conceptual bridge from the classic input, processing, output [IPO] computing model to make sense of the Objects concept.)

We may not be wholly happy with a situation like this, but that is where we are now; the calendar is not saying 1983 anymore. So, the challenge before us is to find a way to develop initial proficiency in Java based programming, "for all." At least, for those who are already reasonably computer literate and who are willing to make the effort to master the knowledge, skills and habits that make for successful programming. Then, once such initial proficiency has been acquired, it can be built on for working with multimedia, networking, industrial and general programming, as appropriate.

This course, as a component of the AACCS programme under development for the Caribbean, targets that initial proficiency. Accordingly, in this backgrounder module, the context for that will be set up; the following foundational unit will use a Hello World program to get initial familiarity up and running; then, by working through some cases and doing programming exercises for oneself, basic initial proficiency can be acquired. Thereafter, it will have to be built on. So, now, we must look at:

Backdrop -- the age of ICCT-based digital productivity

Notoriously, this is a digital age. Just look around, at people glued to their digital smart phones, reading books or surfing the web or doing email on tablets, doing more serious work on laptops, all hooked together through Internet and telephony networks that are all based on digital technologies. Our cars are based on industrial computer networks, such as the CAN bus, and our factories are full of robots. In offices, the PC is king, replacing the typewriter, adding machine and the like from yesteryear. Even the fax machine is on the way out. In short, the world is now dominated by what we can summarise as ICCTs:

I -- Information
C -- Communication
C -- Control
T -- Technologies

However, if we look a little closer, we will see that there are several "digital divides" that limit the effectiveness of those technologies in transforming economic prospects. A lot of people cannot afford the machines and the subscriptions for services. Some are lacking in basic familiarity, and cannot use the machines at even a first level. Even many of those who can use the machines to do a little word processing, or make a phone call, etc., are not able to make good use of the full capabilities of the machines and the applications or "apps" they host. Very few in our region can program, and even fewer are proficient with the underlying hardware and electronics that make the technologies work. And yet, a world of bits and bricks in intelligent networks now clearly will dominate the global economy in the decades ahead.

As a first sign of this, let us notice Apple Corporation, which back in the 1970's was one of the first to create a viable personal computer, literally starting in a garage. Then, it was a dominant player in the first wave of eight bit personal computers, until it was overtaken by IBM with the sixteen bit PC in the early 1980's, which rapidly dominated industry and commerce and opened the way for the "compatibles" that then took away the market from IBM, to the point where IBM eventually sold its PC division to the Chinese company Lenovo. Apple, which had earlier pushed out its founder Steve Jobs, had drifted until, by the late 1990's, it was on the ropes. Bill Gates of Microsoft -- partly, it seems, to blunt legal challenges over monopoly practices -- injected US$150 million, and Jobs came back as "interim" CEO. Apple's comeback began, and slowly Apple moved beyond the personal computer market, taking off with the iPod music player and the iPhone, now followed by the iPad tablet. By the time Jobs died, Apple was a leading edge digital innovations company worth US$381 billion, and from there its market capitalisation soared to in excess of US$600 billion, coming back down to about US$400 billion at the moment. Nor is Apple's story the only success story.

If our region is to progress, we must now begin to develop digital productivity skills, and we must begin to tackle the affordability challenge so that we have access to digital productivity "for all."

Digital information, digital devices, digital skills & productivity

Perhaps the first technical issue is: just what is information, especially digital information? Let us consider a switch, a battery and a light bulb connected in an electrical circuit, similar to a battery powered flashlight:
Fig B.1: A circuit with a battery, a switch and a light bulb (SOURCE: All About Circuits)

The switch is spring loaded, so that it can latch into an OPEN or a CLOSED condition. And, accordingly, the light will be OFF or ON. The switch stores one bit of information, since it can be in one of two meaningful states. Similarly, some magnetisable powder in a varnish on a plastic disk, or on a glass or metal disk's surface, can be magnetised in two polarities: more or less, North pole UP or South pole UP. There are electrical circuits that can jump from having a "HIGH" to a "LOW" voltage at a certain point. In each of these cases, knowing which of the two possible states obtains gives us one binary digit worth of information, the BIT. Most often, bits are represented as being in the states 1 or 0.

A cluster of four of these switches or similar devices can store sixteen possible combinations, from 0000 to 1111, inclusive. Such a group of four bits is sometimes called a NIBBLE. For example, we can represent the number 9 as follows:

| 1 | 0 | 0 | 1 |

This is comparable to our familiar decimal place value notation, but instead of thousands, hundreds, tens and units, we have eights, fours, twos and units, so that:

9 = (1 x 8) + (0 x 4) + (0 x 2) + (1 x 1)

. . . where, just as 1,000 = 10^3, 100 = 10^2, 10 = 10^1 and 1 = 10^0, for the binary number system we have 8 = 2^3, 4 = 2^2, 2 = 2^1, and 1 = 2^0. That is, each place holds a value depending on 2 raised to a power: 0, 1, 2, 3, . . .

(In fact, we can make up a similar number system for any whole number "base," e.g. the Babylonians used to count by 60's, which is why there are 60 minutes in an hour and 60 seconds in a minute, and this is directly tied to why the angles of an equilateral triangle are 60 degrees of arc. One degree is of course broken into 60 minutes of arc, and each minute of arc is broken into 60 seconds of arc. The word "minute" in both cases is really "minute," as
in quite small part, and "second" is for the second, even smaller, part. The usual clock face is divided into twelve hours, and a base 12 system is called duodecimal. Students I taught used to find it funny that the base 60 system, which was used by astronomers for their calculations up to the 1600's, is called the sexagesimal system. No prizes for guessing their favourite system! Nowadays, it is common in computing to count by twos or by sixteens [hexadecimal numbers], and formerly by eights, called octal numbers.)

One nibble can store all ten different states required for the decimal digits. To make this crucial point quite clear, let's spell it out:

0 --> 0000
1 --> 0001
2 --> 0010
3 --> 0011
4 --> 0100
5 --> 0101
6 --> 0110
7 --> 0111
8 --> 1000
9 --> 1001

These ten states are used in binary coded decimal representation [BCD], which can be used with electronic storage registers and special circuits that carry out arithmetic, to make a digital electronic calculator or something like that. Of course, there will be six unused states left over: 1010, . . . 1111. That is, BCD is not particularly efficient in its use of the natural place value binary code's possibilities. However, it is convenient. I think some sophisticated calculators actually convert entered numbers into full binary numbers, including an equivalent to our scientific notation -- termed "floating point" notation: instead of 2.36 x 10^12, they may use numbers like 0.1011101000010011 x 2^11010101, etc. The calculator then calculates the result in binary arithmetic (which is much easier to implement and less wasteful), then converts back to the numbers that are displayed.

There are special circuits that will take in the BCD code and then drive a "seven segment light emitting diode" display to show the familiar blocky style of numbers. This can be extended to displaying letters as well, with a more complex display. Here is an old Hewlett-Packard calculator internal diagram:
Fig. B.1(a): A block diagram for an old HP calculator, moving up from batteries, switches and bulbs to seven-segment LEDs controlled by calculator logic circuits that are synchronised by a clock. By putting the switches in an array, the circuit knows which number etc. has been input, and the circuits and software interpret it in context. Calculations are then done and the results are displayed on the screen. (The seven-segment LEDs are sets of light emitting diodes driven by logic circuits to put out recognisable digits from 0 to 9, based on the codes for these digits, e.g. 6 = 0110.) (SOURCE)

Why have I bothered to show a somewhat complicated diagram of a calculator that was built in the 1970's? Because -- surprise -- the computer is essentially a calculator on steroids. Indeed, the first microprocessor (= a computer central processing unit, CPU, on one chip) -- a four-bit processor, essentially the 4004 -- was designed by Intel to provide a calculator chip for a Japanese calculator company, sometime about 1970. Intel's engineers realised that it was sensible to build a computer in miniature, since the capability to make integrated circuits allowed that. Within a few years, microprocessors evolved from 4-bit to 8-bit registers, and the first major eight bit microprocessors were born: the 8008 and then the 8080 by Intel, the 6800 by Motorola
[which was modelled after the design of the Digital Equipment Corporation [DEC] PDP 11], the Zilog Z80 and the MOS Technology 6502 (later also made by Rockwell). By the mid 1970's the first crude microcomputers were built. And the team of the two Steves, Jobs and Wozniak, built the first Apple computers, based on the 6502.

We are already seeing the basic recipe:

a --> once you can find a binary code to represent information, and
b --> once you can store it in registers of a reasonable width,
c --> you can build a device to take in the information, and store it.
d --> Other circuits, under control of a synchronising clock signal, can process the information according to logic implemented in electronic circuits -- circuits that carry out mathematical operations such as adding, subtracting, multiplying and dividing, plus special logical operations known as AND, OR, NOT etc.
e --> The results can be stored -- that's the job of memory (and Intel started out as a memory chip manufacturer).
f --> Display, or output to a printer etc., can then follow.
g --> Likewise, the computer can be used to control machinery, ranging from an elevator to a car engine or a robot.

The secret is that any process that can be described as a sequence of step by step logical operations can be implemented in such a machine. (And, to come full circle: nowadays, when calculators are actually built in hardware, this is usually done by using a one-chip small microcomputer, often called a microcontroller because they are mostly intended to control machinery. Since such chips are built literally by the million [especially for the automotive industry or for cell phones etc.], it is cheap to use one to build a calculator. Just change the program, feed it from a calculator keyboard, and display to a calculator screen. So, obviously, if you want to be able to design and build machinery to do something in this era, it would be helpful to learn your way around something like the ARM processor and simple computers built around it, e.g. the Raspberry Pi.)
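To make the recipe above a little more concrete in software terms, here is a minimal Java sketch (the class and variable names are my own, purely for illustration): it takes in a decimal number, stores it, processes it step by step into the binary place-value form discussed earlier, and then outputs the result.

    import java.util.Scanner;

    // A tiny input-process-output sketch: decimal number in, binary place-value string out.
    public class BinaryDemo {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);          // (a, c) take in information
            System.out.print("Enter a whole number: ");
            int number = in.nextInt();                    // (b) store it in a register-like variable

            // (d) process it: repeatedly divide by 2, collecting remainders,
            // which are the binary digits from least to most significant.
            StringBuilder bits = new StringBuilder();
            int n = number;
            while (n > 0) {
                bits.insert(0, n % 2);                    // the remainder is the next bit
                n = n / 2;
            }
            if (bits.length() == 0) {
                bits.append('0');                          // special case: zero
            }

            // (e, f) the result, held in memory, is then displayed
            System.out.println(number + " in binary is " + bits);
        }
    }

For example, entering 9 prints "9 in binary is 1001", matching the nibble example above. Notice the while loop and the if test: these step by step building blocks are the same "baby steps" we will meet again when we peek inside the processor below.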
Fig B.1(b): The Raspberry Pi, a low-cost credit card sized educational computer (Wiki & RP)

As a further instance of the use of bits: five bits -- 32 possible states -- could represent the alphabet by using some sort of code, but to store English language text it is more convenient to use seven bits, which allows us to represent 128 possibilities. (When we have a block of n bits, the number of possibilities that can be stored is 2 x 2 x . . . x 2, n times over, i.e. 2^n.) For instance, the American Standard Code for Information Interchange, ASCII, uses seven bits, and often an eighth bit is used to provide a check bit that says whether there is an even or odd number of 1's. That way, if one of the digits gets corrupted, there is a way to detect it. Not perfect, but useful. Here is capital A in ASCII, without the eighth, parity check bit:

A = | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
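As a hedged illustration of that idea in Java (the class and method names here are my own, for illustration only), the sketch below looks up the seven-bit ASCII code of a character and works out an even-parity check bit by counting the 1's:

    // A small sketch: seven-bit ASCII code of a character, plus an even-parity bit.
    public class AsciiParityDemo {

        // Returns 1 if an extra bit is needed to make the count of 1's even, else 0.
        static int evenParityBit(int code) {
            int ones = Integer.bitCount(code);   // how many 1 bits are in the code
            return (ones % 2 == 0) ? 0 : 1;
        }

        public static void main(String[] args) {
            char[] samples = {'A', 'M', 'm', '7'};
            for (char c : samples) {
                int code = c;                                    // ASCII value, e.g. 'A' = 65
                String bits = String.format("%7s", Integer.toBinaryString(code))
                                    .replace(' ', '0');          // pad to seven bits
                System.out.println("'" + c + "' = " + code
                        + " = " + bits
                        + ", parity bit = " + evenParityBit(code));
            }
        }
    }

Running it prints, for instance, 'A' = 65 = 1000001 with a parity bit of 0, agreeing with the table just given; the codes it prints for "7", "m" and "M" can be compared with those given alongside Fig. B.3 below.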
(NB: A cluster of eight bits is often called a BYTE, and one of sixteen a WORD, with one of 32 being a LONG WORD. However, these terms may vary, so check context.)

This summary from practical cases already allows us to give a basic general definition and
description of information relevant to data processing and the functioning of systems that have to sense and respond to their surroundings:

INFORMATION: In the context of computers and similar technologies, information is data -- i.e. digital representations of raw events, facts, numbers and letters, values of variables, etc. -- that have been put together or summarised in ways suitable for storing in special data structures (i.e. strings of characters: . . . -*-*-*- . . . , lists, tables, "trees," fields, records and files etc.), and for processing and output in ways that are useful (i.e. functional).

That's a definition, of sorts. The usefulness of the definition can be seen from the following description: In contexts where systems must sense and respond to their surroundings -- trends, disturbances, threats or changes in their environment -- promptly and advantageously to achieve goals (the domain of cybernetics), information allows systems to select appropriate alternatives among possibilities, and respond in good time.

Digital instrumentation and control systems, now common in industry, are examples of information processing systems. So is the system in the living cell that stores genetic information as a code in DNA and uses it to manufacture proteins by making messenger RNA, then transferring it to the ribosome, where protein molecules are assembled step by step based on the information. Vuk Nikolic's animation is excellent:
Protein Synthesis from Vuk Nikolic on Vimeo.

Communication systems, such as a cell phone network, are designed to transfer information at a distance. To do so, they encode, transmit and detect, then decode, the
information. Of course, when a computer accesses the Internet, it shows that it too uses communication systems within it. (In fact, a smart phone is a small computer, set up to make telephone calls as well.)

Information in such senses is distinguished from [a] data: raw events, signals, states etc. as sensed or captured and represented digitally, on one hand, and [b] knowledge: information that has been so verified that we can reasonably be warranted in believing it to be true. This may be visualised:
Fig B.2: Information in Information, Communication and Control Technologies (ICCT's) based systems. (SOURCE: Info Def'n, YourDictionary Computer) Textual information, as you might imagine, is a particularly important case. So, it is useful to look at a table showing the ASCII code:
Fig. B.3: The ASCII alphanumerical character text code, used commonly in data communications. For instance, the code for "7" is 0110111, and that for "m" is 1101101. The code for "M," by contrast, is 1001101. This allows us to process letters as though they were numbers. Taking into account the parity check bit, one character is equivalent to one byte of information. (SOURCE: Wiki, public domain)

Obviously, once we can code letters, numbers and other symbols or quantities as clusters of bits, we can apply mathematical techniques, and we can store them in various media such as magnetic tape, magnetic discs or optical disks, etc. And, once we know the codes, we can use the stored or transmitted information to act in the real world. For a simple instance, a "magic eye" system uses an infrared lamp that throws a narrow beam, with an electronic detector connected to circuits, to detect an intruder, who will break the beam and change a state in the circuits. The system senses this and sounds an alarm.

Another very important application of the above is to represent images, levels of light and colours, such as on the computer screen that you are using to read this. The trick to this is to take such a screen as a rectangle and to dice it up into tiny boxes in a table much like the above chart, but the "chart" may often have a size of 800 columns by 600
rows, or more -- much more, with modern screens. The resulting little boxes are called picture elements, or PIXELS for short. Then (for one system) we define three main colours of light -- red, green and blue -- as scientists have long known that when we add light, these are the primary colours. Using eight bits for each primary colour, we have 24-bit colour representation, with the level of each colour going in 256 steps from 00000000 to 11111111. That gives us up to 16,777,216 different colours, from black to white and around a ring of red, green and blue. Not perfect, but good enough.
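A hedged Java sketch of that packing idea (again with illustrative names of my own): each pixel's red, green and blue levels, 0 to 255 each, are packed side by side into one 24-bit number, and 2^24 such combinations are possible.

    // A small sketch of 24-bit RGB colour packing.
    public class ColourDemo {
        // Pack three 8-bit colour levels (0 - 255) into one 24-bit value.
        static int packRgb(int red, int green, int blue) {
            return (red << 16) | (green << 8) | blue;
        }

        public static void main(String[] args) {
            int magenta = packRgb(255, 0, 255);
            System.out.println("Magenta as 24 bits: "
                    + String.format("%24s", Integer.toBinaryString(magenta)).replace(' ', '0'));
            System.out.println("Number of possible colours: " + (1 << 24));  // 2^24 = 16,777,216
        }
    }

Multiply 24 bits by hundreds of thousands of pixels per frame, and by tens of frames per second for video, and it becomes clear why multimedia files are so large.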
Fig. B.4: 24-bit colour on a screen (Source: CSU LB)

So, with a bit of imagination, we can see how we can represent how things are in the world in a digital, numerical format that is amenable to being transmitted, stored and processed on a computer, and then used to do things back in the world again. All, using glorified calculators. This is very powerful, and points to the application areas: communication, control, calculation, and so forth. We are also seeing, already, two different sides to such digital technologies: (a) hardware, to actually physically implement the tasks in hand, and (b) software, to program computers to process the information according to step by baby step recipes.
This points to general electronics skills, to skills in control systems and instrumentation technologies, to skills in computer hardware, to skills in networks, to programming and to multimedia. (As well as, of course, the basic authoring of documents and data objects using applications, which is where we spend so much of our time.)

Cracking the box -- how a PC works, in a nutshell

Obviously, many of these technologies come together in the now ever so familiar personal computer, PC for short. But what goes on inside the box? Let's "crack" it:
Fig. B.5: Inside the box of a PC (Credit: Raj)

The anatomy is not too hard to figure out:

a --> The actual heart of the computer is the Motherboard, on which the Central Processing Unit [CPU] (typically, a microprocessor) sits, as well as memory and connectors for input and output devices.
b --> There may be ribbon cables going to a DVD/CD ROM drive and a Hard Disk Drive (which is the secondary "backing" storage unit; these days 250 - 750 gigabytes is not uncommon).
c --> A power supply is a necessity, and there may be a separate video and sound
card, or these may be on the Motherboard. Providing these are all set up correctly, and the machine is properly cared for, we should be able to get upwards of five years of service, though by then typically the machine will be seriously out of date. (Two easy upgrades that may extend life are additional memory and maybe a fresh hard drive.) But, how do these units work together to make a PC function? The motherboard of a PC can be reduced to a functional block diagram:
Fig B.6: A block diagram for a "typical" PC Motherboard, showing the CPU & clock driving the Northbridge [the memory controller hub chip], which then drives the memory slots, the graphics bus [fast output] and the internal bus that goes to the Southbridge [the input/output controller hub chip, the other part of the PC motherboard logic chipset], responsible for slower input/output functions and for the nonvolatile Flash memory (which holds the BIOS, the basic input/output system). Notice, the NB goes to the
CPU, and the SB to the NB, forming the "spine" of the Motherboard. The BIOS is responsible for booting up the system: doing power-on self tests, interfacing to the keyboard, the screen, disks and the like, and allowing the computer to "boot up" -- start and then set itself up to a known initial point, where the disk drive can then load the disk operating system, such as Windows, etc. Once the OS is loaded, more elaborate software can run. As technology has progressed, the tendency has been to push NB functions more and more into the CPU itself. (Source: Wiki)

Pulling back a bit, we should remember that one reason for resorting to such a "spinal bus" approach was that it was hard to incorporate all of the bus and interface control logic in the processor. Given the way that chip features are getting finer and finer in size (allowing more functions to be crammed into the same area) and how the technology keeps improving, there is currently a move to push the functions of the Northbridge more and more into the processor. The processor, too, is more and more becoming a multiple core entity, i.e. we commonly have two, three or four processors in one chip -- hence the promotion of dual and quad core processors.

Bearing that in mind, we can take a more generic overview of a single-processor, microprocessor based system that peeks just a bit inside the MPU:
Fig B.7: An overview of a microprocessor controlled system. The microprocessor is driven by a clock signal that also synchronises events in the whole system. On startup, the system initialises itself to a known startup configuration, and proceeds to fetch [from memory], decode and execute machine level instructions in a continuous cycle. Such instructions work on clusters of bits in registers, and generally carry out arithmetical and logical operations on them. Computer based information processing, in the end, comes down to a step-by-step, intelligently controlled sequence of register-transfer-level moving and transformation of data, through machine code instructions. (Assembly language is little more than a relatively human readable symbolism for the actual machine code, which looks like FC BC, CC BD, etc. -- hexadecimal numbers, or, worse, their binary equivalents: 11111100 10111100, etc.) Assembly language instructions or machine codes come in sequences: step 1, step 2, step 3 . . ., and at certain points tests are made on data that allow programmed decisions: IF C then do X, ELSE do Y. Such steps can be looped, so that a given set of instructions will be done WHILE a condition holds, or UNTIL a condition holds, or UNLESS a condition holds. As simplistic as such seems, that is enough to do anything that
can be reduced to such step by step sequences, forks and loops. (Of course, that is the real trick to the business! Namely, programming -- hopefully in a high level language such as Java, which the computer will translate into the actual pedal-to-the-metal machine code.) The computer thus takes in inputs, processes data step by step in accordance with a programmed logic, and then sends outputs back out. Data and instructions are stored in memory, and there are also interface ports for input [i/p] and output [o/p]. Computers are also often connected in networks, and may host machines that carry out work, step by step, under programmed control. (Step by step finite recipes/sequences of actions to do defined tasks
are called Algorithms.) [Source: TKI]

On system startup, we can summarise what happens to put the PC or a similar system into a known initial condition and hand it over to the Disk Operating System -- Windows, Macintosh OS X, Linux, or whatever:

1: At power-up, the central processor would load its program counter with the address of the boot ROM [--> Read Only Memory, with hard wired instructions] and start executing ROM instructions.
2: These instructions displayed system information on the screen,
3: ran memory checks, and then
4: loaded an operating system from an external or peripheral device (disk drive).
5: If none was available, then the computer would (a) perform tasks from other memory stores or (b) display an error message, . . . depending on the model and design of the computer and the version of the BIOS. [Wiki]

Once the operating system initialises, the various applications are accessible, and on being started they will load into memory and begin to execute based on inputs and outputs, such as through a keyboard and/or mouse. The screen interface will usually be a crucial part of how the computer then interacts with a user. In general, such computers can also host target systems that will interact with the computer through input-output connexions such as a USB socket and support circuitry on the Motherboard. A classic example of such a target would be a printer. A robot would be another.

Applications software is written to fit in this framework, often using input-processing-output as a pattern. Object oriented software -- one of the major philosophies for writing applications etc. -- encapsulates data storing the state or attributes of a software entity (called an object, which may be a software "shadow" of a real world physical object like a robot, or it may be a wholly internal software entity), surrounding its state-defining data or attributes with a capsule of
methods that act on it and interface to the external world by passing messages back and forth:
Fig. B.7: The IPO and Objects views of the computer in action, as the operator interacts with the host system and as the Host interacts with the target system. (Source: TKI)

Where also, of course, modern computers are seldom isolated, so we next turn to:

Digital networks for communication, for learning and for industry
In an Internet age, computers will normally be part of communication networks. Thus, it is helpful to understand the ABC-level basic principles of such communication networks. First, a typical network, with the inter-network [that's what Internet means] bringing the local area networks together:
Fig. B.8 (a): Local Area Networks (LANs) interconnected through the Internet. The nodes may be PC's, or other devices such as tablets or even smart phones, or printers, servers, etc. There is talk of even hooking up domestic appliances to the Internet. (SOURCE: Caltech)
Between two nodes, there is a definite system for communication, using compatible codes and standards for signals:
Fig. B.8(b): A framework for a communication system. The encoder and decoder must be compatible, and the transmitters and receivers must be compatible with one another and with the channel.
Noise is an inevitable physical reality, and tends to degrade performance. In the real world, there is usually a feedback communication path from the sink to the source, and the decoder and encoder may be multi-layered. Signals are captured, encoded, transmitted, then pass over the channel, and are received, decoded and then go to their destination. [SOURCE: TKI, adapted from Shannon et al.]

The Internet's layered communication is commonly described using the International Organisation for Standardisation (ISO) Open Systems Interconnection (OSI) "layer cake" architecture, which uses multiple layers of encoding and agreed standards, called protocols:
Fig B.8(c): The OSI Model for layered protocols used to communicate through the Internet. The only physical interconnection from node to node is through the physical layer, but because of the layered protocols, a level n communication acts as though it is virtually connected to the corresponding layer at the opposite node. In fact, layer n passes information down to layer n-1, which attaches a header and passes it on to the next layer, until it reaches the physical layer, which then transmits across the Internet. Intervening nodes in the network see to the proper transfer of packets of information from the source node to the destination node, and there the data headers are read and acted on, then the information is passed up the stack (being stripped of headers at each level, which tell how the information packet is to be handled) until it reaches the user application. The correspondence to the above two diagrams should be obvious. (SOURCE: Linked note)

The integration of computers and similar devices through the Internet and other networks allows for many uses, ranging from entertainment, to education, to commerce, to industry. Accordingly, network programming is an important aspect of the world of programming. Java servlets and applets are two ways Java fits in with this world, and indeed Java applications provide much of the multimedia and dynamic capability of the web. Not least, because the use of the Java Virtual Machine allows relevant programs to be written once and used "anywhere."
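One practical payoff of all that layering is that the application programmer's code can stay very simple: the protocol stack does the rest. As a hedged sketch (the host name and port here are placeholders, not anything prescribed by this course), a few lines of Java are enough to open a TCP connection, send a line of text down through the layers, and read a reply back up:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // A minimal application-layer client: the layers beneath it are handled by
    // the Java library, the operating system and the network hardware.
    public class TinyClient {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("example.com", 80)) {      // placeholder host and port
                PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));

                out.println("HEAD / HTTP/1.0");   // send a tiny request down the stack
                out.println();                     // blank line ends the request headers
                System.out.println(in.readLine()); // first line of the reply that came back up
            }
        }
    }

The point is not the details of the request, but that the application simply reads and writes a stream of characters; encoding into packets, routing and physical transmission all happen in the layers beneath.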
Applets -- now less popular, in part because of a partly deserved reputation for slowness -- are typically small Java programs that are hosted in a browser or the like. Servlets live on a server and deliver results from there. In short, object oriented programming is a major feature of the wired and wireless world we now live in. With that in mind, we can now turn to:

Object oriented programming as the gateway to digital productivity

In the modern, web oriented era, Java is a popular language for web applications and multimedia. A key feature of Java is that it is an object oriented language, which means that, to proceed to address web productivity, we need to understand what computing objects are, at an initial level. Above, we already saw that objects are an evolution of the classic input-process-output [IPO] approach to programming, which views programming as indeed being about coded algorithms with associated data and data structures. But the key development is that the program is now clustered as a structured collection of software objects that encapsulate data and the methods that manipulate the data, including passing messages back and forth. This allows for reduced coupling between components of a program, which makes it more robust. Also, by basing objects on classes -- in effect, blueprints for the actual objects to be set up and run on the stage set up by the software -- it is possible to develop a useful hierarchy of objects that can then inherit much of their structure from parent classes, and then add new, particular data or methods as required.

This already highlights the main features of object-oriented programming. Instead of seeing programming as fundamentally procedures that process inputs and stored data in accordance with algorithms to yield outputs, we instead set up a program as a cluster of interacting objects that pass messages back and forth, with processing and data localised to objects. That is, object-oriented programming [OOP] emphasises:

1: objects, based on classes that give their blueprint, and having state-defining data attributes accessed and interacted with by passing messages to methods. Thus, objects are granular software bundles that have a state and behaviours by which the state changes and the entity interacts with the world of surrounding objects in a program. That is, the object is in effect an actor on a virtual, software stage that we may view through the window of the program and the device we are using.

2: inheritance: using a parent-child hierarchy so that classes and associated objects derive their core structure from their parents, and may add further data and methods, allowing further inheritance; all of which makes for modular programming and for code reuse.
3: encapsulation and isolation of state-defining data/attributes by using the methods envelope, so that the degree of coupling of the components of an overall program is reduced as much as is reasonable. That is, messages are passed to/from objects, and it is the local method capsule that accesses and interacts directly with/modifies/processes the encapsulated data. This means that objects are "plug and play," and we can change/improve a given object so long as its message interface preserves the relevant function. That way, ideally, fixing one thing does not trigger a cascade of having to fix the various things it interacts with, and then the things they interact with, and so on.

4: polymorphism sets up an approach where messages interfacing with objects are allowed to be generic, and a particular object interprets and acts on them depending on its inner state-defining data and methods. For instance, adding, subtracting, multiplying and dividing can be asked for, and would take a different approach depending on whether we are dealing with whole number data [integers] or fractional data [floating point data]. Similarly, pressing a button triggers different things based on context.

Therefore, a useful way to view object oriented software is to see it as defining clusters of objects as a cast of "actors" on a stage that interact through their particular roles, as events and circumstances call for responses, under control of the program, and are being viewed and manipulated through a "software window" (a short Java sketch pulling these four ideas together follows the figure below):
Fig B.9 (a): The software window model for OO programming [SOURCE: TKI]
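Here is that promised sketch -- a minimal, hedged illustration only, with class names of my own invention -- showing a class as blueprint, encapsulated state behind a method capsule, inheritance from a parent class, and polymorphism through an overridden method:

    // Parent class: the blueprint for a generic "actor" on the software stage.
    class Shape {
        private String name;              // encapsulated, state-defining attribute

        Shape(String name) { this.name = name; }

        String getName() { return name; } // access only through the method capsule

        double area() { return 0.0; }     // generic behaviour, refined by children
    }

    // Child class: inherits the core structure, adds its own data and behaviour.
    class Circle extends Shape {
        private double radius;

        Circle(double radius) {
            super("circle");
            this.radius = radius;
        }

        @Override
        double area() { return Math.PI * radius * radius; }  // polymorphic response
    }

    class Square extends Shape {
        private double side;

        Square(double side) {
            super("square");
            this.side = side;
        }

        @Override
        double area() { return side * side; }
    }

    public class StageDemo {
        public static void main(String[] args) {
            Shape[] cast = { new Circle(1.0), new Square(2.0) };  // objects built from class blueprints
            for (Shape actor : cast) {
                // The same generic "message" (area()) gets a different, appropriate answer
                // from each object, depending on its own data and methods.
                System.out.println(actor.getName() + " area = " + actor.area());
            }
        }
    }

Nothing outside each object touches its radius or side directly; change how Circle computes its area, and nothing else in the program needs to know.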
Underlying this software window model is a "playful" frame of thought that sees the user interacting with a model through a view and through control devices, and in which the models are based on data in a context, interacting as the objects play roles as actors on a stage:
Fig. B.9(b): The Model-View-Controller (MVC) and Data-Context-Interaction (DCI) concepts

By now, this should be "suspiciously" familiar as we think of a typical windowing oriented interface for a typical application such as a browser:
Fig. B.9(c): A windowing environment in light of the Software Window, MVC-DCI view

Bingo, objects are everywhere. This should be borne in mind and/or referred back to from time to time, as we turn to actually starting to do object oriented programming.

For Discussion

1: How do you think we in the Caribbean can begin to be productive and competitive economically in a digital age? Why?

2: Why is it that this course intends to start out with a "realistic" programming language, and with objects instead of the older IPO, processing oriented view? Does that make sense? What are the advantages and disadvantages?

3: What is digital information and why is it naturally measured in bits?

4: How can bits be clustered to make characters, words, documentary records, documents such as spreadsheets and screens of information?

5: How do you think that information on a screen can not only be in colour but moving? (What does this mean for the size of video and multimedia files? How generous should we therefore be with pictorial, video and multimedia information? What are the pros and cons for the view you take?)

6: It seems that the key to the microprocessor is that bits can be drawn in as input, stored in registers and processed in execution units, under the control of clock signals, then put out as output. This seems to be a matter of "baby steps". What does this mean about what makes up a realistically sized program?
7: Also, why is it that instead of working with binary digits and machine code instructions that tell registers to move bits around and process them, we instead use higher level languages?

8: What do you think is the main reason why computer programmers have migrated to using object-oriented programming languages such as Java? What are the pros and cons?

9: What are the main features of an object, and how does the use of groups of interacting objects on a stage, viewed and controlled through a software window, contribute to effective programming?

10: Given what we saw above about layered communication protocols and codes, windows and browsers, how does an object-oriented approach work with and contribute to a wired and wireless, networked world?

11: What would you do to adapt object-oriented computer programs for the use of someone who is blind or someone who is deaf? What about someone who may be both? (Hint, look up Braille. View here, and read here also.) What does such imply about cost and digital divides?
Assignments