Past & Future — An Interactive Media Chronology


Shelley Russell
September 7, 2009

Book Synthesis: Past & Future — An Interactive Media Chronology
 

Interactive media today has become the focus of Web designers, online business ventures and futurists looking to predict its evolution years from now. But what is now thought of as a highly technological and advanced phenomenon once began as a simple act of communication between two or more human beings. Telephone conversations, story circles and newspaper or magazine articles in which readers were encouraged to respond to the reporter with comments or questions are among the earliest forms of interactive media.

 

The rapid development of interactive media as a professional field is largely due to the emergence of digital computers and the development of the Internet and the World Wide Web (p.2). The first computer, ENIAC, was developed in the 1940s. The machine was used to calculate and was thought of as an advancement over earlier non-electric calculating aids such as the abacus. Many scientists and mathematicians contributed to the development of the modern-day computer, among them key players Charles Babbage, Vannevar Bush, Alan Turing and Thomas Watson.

 


Babbage had the idea for an “analytical engine” in 1833, which resembled the modern-day computer. Bush invented the differential analyzer in 1925, allowing for more advanced electrical computation. While Babbage and Bush were focused on computers as tools for quick numerical computations, Turing was the first to create the design of a “general-purpose computer” (p.3). Watson led IBM engineers in building the first computer able to operate on software in 1947.
 

While general-purpose computers were quickly developing, the creation of the Internet was not far behind. President Dwight D. Eisenhower created the Advanced Research Projects Agency (ARPA) in 1957 to aid in the “scientific improvement” of U.S. defense and intelligence (p.4). In the early 1960s, J.C.R. Licklider, who was on the management team at ARPA, began trading information with colleagues through their computers in order to facilitate a more efficient work environment. Paul Baran and Donald Davies took Licklider’s original idea of trading information and expanded it to include the idea of sending data in “packets” through a “digital network” (p.4). Baran’s initial sketches of centralized, decentralized and distributed networks quickly evolved into what is known today as the Internet.
 

Although Licklider and his team are credited with spawning the creation of the Internet, the sending of data electronically dates back to 1844, when Samuel Morse sent the first telegraph message: “What hath God wrought?” Following the telegraph, radio grew in popularity after Guglielmo Marconi’s creation in the 1890s, and the 1930s were considered the “Golden Age of Radio.” Telephones were being placed in homes throughout the country in the early 1900s, although privacy was a major concern due to wiretapping. In the 1950s, television replaced radio as the dominant form of broadcast; RCA president David Sarnoff and Philo Taylor Farnsworth are credited with creating the earliest forms of television. The rise of the Internet occurred between the 1960s and 1990s.

 

ARPANET, ARPA’s network, first went online in 1969, connecting four major universities. Soon, more machines were connected and successfully operating. In order for scientists and Internet developers to make changes and create technical standards, Steve Crocker created the Request for Comments, or RFC, series. Perhaps one of the most well-remembered RFCs is RFC 354, the 1972 posting of the File Transfer Protocol (FTP). While the development of the Internet was moving ahead at a rapid pace, 75 percent of Internet traffic was email. It wasn’t until 1990 that Tim Berners-Lee expanded the Internet to include the World Wide Web by writing the first HTML code.

 


Berners-Lee first introduced the Web at a conference in 1990, and from there Internet Service Providers (ISPs) became more and more popular as people sought out businesses that could give them access to the Internet via “dial-up” connections. In order to further enhance ease of use on the Web, Marc Andreessen developed Mosaic, a browser that later became known as Netscape. Web users found it easy to navigate from page to page through the browser, which directed them to various documents online. Since Andreessen’s creation of Mosaic, numerous browsers have been created, and the Web continues to develop and grow as more users gain access.

 

The popularity and immense success of the Internet is most easily understood through a comparison with other popular mediums in history. Radio took 38 years to gain a minimum of 50 million users, and television reached 50 million users in 13 years. However, it took just four years for the Internet to reach 50 million users, and several years later an estimated one billion people were using it (p.9). Many people mistake the Internet for the World Wide Web, and vice versa, interchanging the two terms as though they are one and the same. In reality, the World Wide Web is merely one use of the Internet. The Web is a system of hyperlinked pages and documents that are accessible online, whereas the Internet is the network of computers under constant development that ultimately allows the Web to exist.
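
One way to see the layering concretely is that the same Internet connection, a plain TCP socket, can carry the Web’s protocol (HTTP) or any other application protocol. The short sketch below is my own illustration, not an example from the book; it uses Python’s standard library to open an ordinary network connection and then speak HTTP over it to fetch a hyperlinked document. The host name example.com is only a placeholder test address.

    # A rough sketch, not from the book: the Internet is the underlying
    # network connection; the Web (HTTP and hyperlinked documents) is one
    # protocol that runs on top of it. "example.com" is a placeholder host.
    import socket

    HOST = "example.com"

    # The Internet layer: an ordinary TCP connection to the host on port 80.
    with socket.create_connection((HOST, 80), timeout=10) as conn:
        # The Web layer: an HTTP request sent over that same connection.
        request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
        conn.sendall(request.encode("ascii"))
        response = b""
        while chunk := conn.recv(4096):
            response += chunk

    # First line of the reply, e.g. "HTTP/1.1 200 OK"
    print(response.split(b"\r\n", 1)[0].decode())

Email, file transfer and virtual worlds ride the same underlying network in the same way; the browser-and-hyperlink layer is simply the most visible use of it.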

 

But the Internet is not the only thing that continues to grow and develop. The Web began as what is known as “Web 1.0,” but it has grown into the newer, more interactive “Web 2.0.” When the Web first began, scientists were happy that sharing data electronically had become a success, and Web users found it convenient to be able to simply view documents online. Web 1.0 has been described by CNET as the “era of Web prior to the bursting of the dotcom bubble” (p.17). Web sites consisted mainly of static pages in which content was merely being presented on the Internet as another form of sending data. With Web 2.0, the idea is that content is being created exclusively for the Web, in terms of both writing and design. JavaScript, Wikipedia and digg are credited by CNET as some of the top contributors to Web 2.0. Under Web 2.0, users are given more freedom on the Web to discover their own paths of information and contribute to online content.

 

It is predicted that Web 3.0 will become even more integrated into the lives of Internet users, functioning more like a human being, or a “Semantic Web” (p.49, p.58). Columnist Mike Elgan predicts that Web 3.0 will be able to give users the sensation that they are interacting with another human being instead of a computer. Currently, users can search for various items in Web browsers and results will appear. With Web 3.0, the computer will understand your location, the current weather and your previous preferences based on past searches. Inklings of Web 3.0 can be seen in Google search, where a user can type terms into the search bar and Google may come back with other results, asking the user: “Did you mean this instead?” It is almost as though Google knows what the user is looking for, but not quite. With Web 3.0, Elgan and others predict that knowing the user will be a definite feature.

 

Still, some futurists are already discussing Web 4.0, which will manifest itself in an “augmented world where the virtual and real blur” (p.58). Nils Muller, CEO of TrendOne, declared that Web 4.0 would essentially be an “always-on” world, or a world of hyperconnectivity. Philip Tetlow, author of “The Web’s Awake: An Introduction to the Field of Web Science and the Concept of Web Life,” argues that the Web is already becoming an independent entity, self-controlled and separate from the lives of humans, and that it is already moving towards complete independence: “The Web should be considered a living organism – a new post-human species consisting of a single member” (p.48).
 

Predictions about the previously discussed mediums have ranged from skeptical to supportive, but the Internet instilled perhaps the greatest initial fear in society. People were concerned that the Internet would mean the end of the human race, and the start of a machine/robot-controlled world. For instance, Mondo 2000 editor Ken Goffman said in 1992: “Who’s going to control all this technology? The corporations, of course. And will that mean your brain implant is going to come complete with a corporate logo, and 20 percent of the time you’re going to be hearing commercials?” (p.41). Futurist Jim Dator predicted in 1993: “As the electronic revolution merges with the biological evolution, we will have – if we don’t have it already – artificial intelligence, and artificial life, and will be struggling even more than now with issues such as the legal rights of robots…” (p.42). Google’s official blog presented views compiled from 10 experts about the future of the Internet in 2008. Predictions from the blog revealed that experts believe 70 percent of the human population will have fixed or mobile access to the Internet in the next decade. Video was predicted to become a more interactive medium in which users could choose content and control advertisements (Official Google Blog).
 

But with the Internet deemed a worldwide success, attention now turns to its implications for the future. The Internet and the World Wide Web are quickly becoming more and more integrated into the lives of humans, somewhat subconsciously. Each time an email alert pops up on one’s iPhone, or a Twitter update pops up on a computer screen, it becomes second nature to respond to the alerts and check them on a regular basis. This is just the very surface of the newly emerging professional field of interactivity, a means for humans to communicate, browse and manipulate data freely on the Web. Mitch Kapor has spoken out about the importance of interactive design that is firm, suitable and easy to use (p.50). Interactive design must address “strategy (connecting the product with goals), experience (related interaction and activities in context), interaction (the interface in use over time by different people), interface (the presentation of information and controls) and functionality and information (the categories, types, attributes and relationships of users)” (p.50).

 

One of the clearest manifestations of modern interactivity can be seen in augmented reality (AR) and virtual reality (VR) worlds. In “The Future of the Internet III,” Janna Anderson focuses on breakthroughs in VR and AR and on uses of social networking across various fields, including the government and commercial sectors. Online games such as EverQuest and World of Warcraft have been shown to engage users in “the practice of useful pursuits, including rapid response…and leadership through collaboration” (p.52-53). These VR worlds could potentially lead to a future in leadership for some dedicated users, according to a 2008 study in Harvard Business Review. While online gaming is a valuable software tool, change and development in these programs is motivated largely by humans’ use of the various VR worlds (p.53). Although Second Life (a social VR world) and other synthetic gaming worlds have millions of registered users, Facebook and MySpace remain the most popular group-centered networks online.

 

David P. Reed presented the idea that the Internet is designed to be a “collaborative,” “group-forming” process, in which users work together to communicate and generate materials online (p.53). Reed’s Law states that “the utility of large networks can scale exponentially with the size of the network” (p.53). Facebook is a perfect example of Reed’s analysis of the Internet as a “group-forming” medium. Studies show that regular online networkers continue to use these communities to reach a point of self-actualization. Businesses have also taken great advantage of social networks and synthetic VR worlds to market new strategies and products to consumers and to train employees. Blogs and online writing via social networking allow for “collective intelligence” (p.74), or the ability of individuals to network their knowledge and collaborate with other users to create valuable projects and databases full of information.
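
To make the “scales exponentially” claim concrete, the minimal sketch below is my own illustration rather than anything from the book: it compares Reed’s count of possible subgroups with two older rules of thumb, Sarnoff’s Law (value grows with the number of receivers) and Metcalfe’s Law (value grows with the number of pairwise links), assuming each receiver, link or possible subgroup contributes one unit of utility.

    # A minimal sketch, not from the source book: comparing three common
    # network-value rules for a network of n members. Assumption: each
    # receiver, pairwise link, or possible subgroup adds one unit of utility.

    def sarnoff(n: int) -> int:
        """Broadcast value: proportional to the number of receivers."""
        return n

    def metcalfe(n: int) -> int:
        """Pairwise-connection value: n * (n - 1) / 2 possible links."""
        return n * (n - 1) // 2

    def reed(n: int) -> int:
        """Group-forming value: 2**n - n - 1 subgroups of two or more."""
        return 2 ** n - n - 1

    for n in (10, 20, 30):
        print(f"n={n:>2}  Sarnoff={sarnoff(n):>4}  "
              f"Metcalfe={metcalfe(n):>6}  Reed={reed(n):>13,}")

Even at 30 members, the number of possible subgroups dwarfs the number of pairwise links, which is the sense in which Reed argues that group-forming networks such as Facebook can scale exponentially in utility.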
 

More free wireless broadband access, as well as more advanced Internet phones, will further integrate VR and AR worlds into the everyday lives of users. While AR and VR have been used for personal gain and business ventures, these Web tools can also make a difference globally. One example is the MDGMONITOR, a poverty-tracking Web site on which poorer areas are tracked and publicly displayed. This tracker has raised awareness, and consequently money, for poorer areas around the globe.
 

While VR and AR worlds are largely confined to computers and the Web, wearable computing is not far from becoming a reality. Soon, clothes will be able to track the wearer’s emotions and change room lighting accordingly; they will also be able to monitor one’s posture. Although there are many positive implications of VR and AR worlds, these alternative Web communities do come with some safety risks, such as a loss of security and overuse of the Internet, which has been linked to increased obesity as well as more suicide cases stemming from harmful social networking practices (p.58).

 

Whereas humans currently seek out computers to search the Web, look up addresses on Google Earth or phone a friend using a free service such as Skype, futurists predict that human-computer interfaces will not remain so separate for long. Presently, when one uses the Internet or types a Word document, an observer notices a person and a computer: two separate entities. However, the “Internet of Things” will soon grow to include devices that are mixed in with the human world but barely visible to the naked eye.

 

The “Internet of Things” can be defined as a world in which any object can be tagged with an IP address via a small device that identifies it on the network. The integration of intelligent devices into the “Internet of Things” will mark a change in human organization. This phenomenon has also been referred to as “pervasive” or “ubiquitous computing,” as well as “ambient intelligence.” William Gibson, known by many as the “Father of Cyberspace,” says that soon society will not be able to distinguish between cyberspace and “that which isn’t cyberspace” (p.60). Society is quickly moving toward an unavoidable transparency with the “Internet of Things” and VR/AR worlds. Bill Gates has discussed the new goal of making “computing as pervasive as electricity” (p.61).
 

It is somewhat jarring to think that soon nearly every medium (a table, a shower curtain, a wall) will become a means of acquiring information. It is already somewhat difficult to get away from advertisements, the Internet and cell phones, but years from now it will be incredibly difficult to escape the world of cyberspace, and before long even a camping trip in the wilderness will likely be interrupted by various mediums receiving and sending information in the “Internet of Things.”

 

The human-computer interface is quickly evolving beyond the traditional WIMP (windows, icons, menus and pointing) display. Two important trends driving the emergence of new possibilities in this interface are 1) the move towards the mobile Internet and 2) embedded networked computing devices that provide more ways for human-computer interactions to occur (p.60). Already, many news stations are implementing touch screens to better display information to viewers, and many computers under development include “gesture-control and multi-touch features” (p.61).
 

Display screens are becoming more intuitive, Wii controllers can detect body movements, and projection breakthroughs will soon allow data on cell phone screens to be significantly enlarged. Still, developers cannot deny that efficiency does not always lie in the development of a new product. The most efficient human-to-computer input method remains the spoken word, and the most efficient computer-to-human output method is text. Speech recognition is improving but still makes many errors due to voice inflections and inconsistent background noise. Technology is also developing to include easy-to-use handwriting recognition from a stylus and pen-based computing, which allows users to transfer notes to a personal computer. Beyond basic human-computer interface development, brain-computer interfaces are a popular prediction among technology experts. Essentially, these interfaces would provide a direct connection between human brains and computers.

 

Another term for these pervasive computing devices is Adam Greenfield’s “everyware,” which rests on the idea that “nothing exists in isolation from other things” (p.75). According to Greenfield, “everyware” consists of devices that can be networked to send and receive data constantly. Military and global corporations are driving ubiquitous computing research. Two key principles of “everyware” are: “1) Build it as safely as possible and build into it all the safeguards to personal values, and 2) Tell the world at large that you are doing something dangerous” (p.79). “Everyware” creates an immortality of information, because every place becomes an opportunity for information output. With a more highly integrated human-computer interface comes the idea of seamless design (p.67), which will eventually lead to a world of hyperconnectivity, or the idea that humans will always be online.

 

According to a 2008 study conducted by the International Data Corporation (IDC), many people are already classified as “hyperconnected” users. These users are willing to email, text and communicate using other methods from any location, without differentiating between their work and personal lives. BlackBerry users have been known to be heavily addicted to the devices, with some checking them more than 85 times a day (p.69). Issues have recently been raised over businesses paying employees overtime for work done on BlackBerry devices, and many offices have an understanding with employees that they can conduct some personal correspondence on the devices during work hours. But being hyperconnected has been shown to decrease the quality and efficiency of work: when one is constantly interrupted by a phone call or text message, concentration drops, and it takes time for the mind to refocus on the task at hand (p.70).
 


Many young children are becoming hyperconnected too. In a 2007 report, Pew Internet indicated that 93 percent of U.S. teens use the Internet. Many users, children included, have had to adopt multitasking in order to monitor multiple goals at once. Linda Stone coined the term “continuous partial attention” to describe hyperconnected individuals who must focus attention on one task while thinking about several background tasks at the same time. While multitasking can be beneficial, it can also lead to information overload, which the research firm Basex chose as its “problem of the year” for 2008 (p.72). Many blogs and Web sites have been started that focus on this overload of data and suggest that technology complicates our lives rather than simplifying them. Gina Trapani’s Lifehacker site gives users tips to help them cut through massive amounts of information.

 

Looking ahead 150 years, more and more information will become available to users on a daily basis. Internet pioneer David D. Clark has predicted a “need to accommodate a trillion connected devices online in the next 13 to 18 years” (p.68). With a rapid increase in users and available information, the timeline for the future suggests that computers and technology will become even more integrated into our lives. By 2011, it is predicted, supercomputers operating close to the speed of the human brain will be on the market. Intelligent fabrics are expected by 2012, and human cloning and teleportation development are estimated to take place in 2015. By 2020, ubiquitous robots will be present on Earth and will acquire their own rights and jobs. “The Singularity,” or “a time at which the simultaneous acceleration of nanotechnology, robotics and genetics change our environment beyond the ability of humans to comprehend or predict,” is set to occur in 2045 or later (p.92).

 

Many of the predictions for the years to come may seem far-fetched, but in order to be a true futurist, one must create his or her own image of where he or she wants to be years from now (p.98). Futuring involves developing goals and answering key questions, as well as understanding stakeholders and their roles. Organizations cope by 1) intelligent horizon scanning, 2) continuous strategic thinking, 3) dynamic action planning and 4) engaging in collaborative foresight in order to embrace the future and all that it has to offer (p.100-101). Foresight thinking involves both strategic and tactical tools. Strategic tools reveal a vision of a plausible future world and challenge one to think about the world’s meaning and the future, whereas tactical tools involve creating short-term strategies, testing, risk assessment and problem solving (p.103). Mastery of the following cognitive skills is essential to becoming a true futurist: 1) trend assessment, 2) pattern recognition, 3) systems perspective, 4) anticipation, and 5) analysis and logic (p.114). Understanding trends, the bigger picture and short- and long-term consequences will allow one to determine the best form of response in the future.



 

Trend scanning, networking, action planning and horizon scanning are several methods one can use to understand the pace of change, research current trends and take appropriate action. Trend scanning involves looking at identified trends and analyzing their impact over time. Networking allows companies and individuals to communicate the outcomes of research through publications, events, case studies or final reports. Research is followed by action planning, in which an organizational strategy is defined and decisions are made to pursue the strategy identified in the research. Horizon scanning is a way to explore “external environmental factors in order to understand the pace of change, and identify opportunities, challenges and future developments” (p.149). Trends are much easier to identify than developing issues because trends are already labeled, whereas new issues arise from a value shift or a change in the view of society.

 

A key principle of horizon scanning is that “more is less” (p.157). According to the Law of Requisite Variety (Ashby 1956), “A system with the requisite control variety can deal with the complexity and challenges of its environment. A system that tries to insulate itself from environmental variety will become highly unstable” (p.157). Shielding oneself from cyberspace and the vast amounts of information available will only be harmful in the long term. Preparing for the future involves embracing the unknown and delving into research and readings on current trends as well as emerging issues.

