The Idea Of The Personal Computer Is Dead

"Microsoft was founded with a vision of a computer on every desk, and in
every home. We've never wavered from that vision."
--Bill Gates

You can't make things better for humanity as a whole without making things
better for individuals. Each person is their own consciousness, and while a
society is often more than the sum of its parts, it is still fundamentally a
collection of parts, i.e. people.

In the 20th century, a new generation of machines was becoming popular whose
primary function was not to transport heavy objects or indeed to incorporate
any kind of mechanical motion at all. These devices were made for the purpose
of processing information, and they came to be called computers, a word which
had formerly applied to a human being who performed the same task.

Computers started off big and shared. We've all seen pictures of those
mainframes which filled an entire room and required several people to run. It
was a significant development, then, when, towards the latter part of the
20th century, there was a shift toward computers which were inexpensive,
simple, and small enough to be realistically purchased, owned, and used by an
average member of society. This was nothing short of a revolution. To make
these engines of information available to every person could change society.
It could change the world. And it did.

Regardless of what anyone might think about companies like Microsoft and
Apple today--and to be sure, they have drifted very, very far from the kinds
of companies they were upon their founding in the 1970s--these companies were
largely responsible for making hardware and software accessible to the world
at large. So important was Bill Gates' reaffirmation of Microsoft's
vision--in the oft-cited quote which begins this article--that an audio clip
of Gates speaking these very words was included in the article on Bill Gates
in Microsoft's own Encarta encyclopedia program. Of course, Encarta
paraphrases Gates here, since Gates' original quote added the words "all
running Microsoft software" to the end of the first sentence, but at least
the paraphrased quote seems to have its heart in the right place.

Nothing ruins dreams like having them come true. The vision of the personal
computer was ultimately realized. As time went by, computers became
everything people envisioned they would be: Fast enough to perform common
tasks without requiring people to wait very long, small and light enough for
people to easily carry them around in a purse or bag, and affordable enough
that nearly everyone in a developed society can have one.

So what do you do with a vision after it's already been realized?

"So the vision of Microsoft is pretty simple. It changed a couple years ago.
For the first 25 years of the company, it was a personal computer on every
desk and in every home. And it was a very good vision; very rare for a
company to be able to stick with something like that for 25 years. The reason
we changed it was simply that it became acceptable... And so as we stepped
back and looked at what we were trying to do with the programming model,
turning the Internet into the fabric for distributed computing, getting your
information to replicate in a very invisible way so that it was available to
you everywhere, thinking of this programming model spanning all the different
devices, we changed to the mission statement we have now, which is empowering
people through great software anytime, any place and on any device."
--Bill Gates

Of course, by the time Gates said this, he was already becoming less
relevant. Perhaps further insight can be found from the man who runs
Microsoft now.

During a live interview held at The Churchill Club in Santa Clara, California
on September 25, 2008, Ann Winblad asked Steve Ballmer: "So Microsoft, over
the last 30 years when you joined Microsoft the mission of the company was a
computer on every desktop, and in every home. Is that still Microsoft's
mission statement today?"

Ballmer replied: "No, no. We're working on a broader agenda than that, and I
love that."

Ideas have a way of being temporal. Stored in a person's mind, they tend to
change over time. They can be stored by writing them down, but hardly anybody
reads anymore, and when they do, they're not interested in old ideas. It's
perhaps not surprising, then, that people have largely forgotten what
personal computers were about when they first started coming around. For
starters, you hardly ever hear the term "personal computer" anymore. They're
just "computers," and the abbreviation "PC," when used, has a technical
connotation rather than a social or ideological connotation.

Today, the idea of the personal computer is dead. There are multiple reasons
for this, but I believe they can be broadly broken down into three primary
points, which I will endeavor to explain in the words that follow.

FOCUS ON NON-COMPUTING ACTIVITIES

These devices are called "computers" because, at one point in time, they were
indeed seen as devices used for the purpose of computation. No longer. Today,
computers are used for almost everything but computing.

There are two primary purposes for which most consumers use computers today:
Entertainment and communication. I use the word "communication"
loosely here, since that word usually implies some kind of ideas or
information being exchanged, while a great deal of the "communication" which
transpires on the Internet today involves the exchange of neither of these in
any significant amount. 

When computers were still new and the world was getting used to having them
around, they were seen for what they were: Arrays of electronic bits, far
more than a human brain could conveniently store or quickly retrieve, made
accessible and modifiable in such a way that they could be processed and
presented far faster than the mind could ever manage on its own. Truly, that
is all a computer is. This focus on the actual data mechanics of computers
led to people
writing programs that helped analyze math problems, programs which simulated
physics experiments or other fields of hard science, programs which created
virtual environments for users to explore and experiment in, and the like.
Because the computer could take concepts the human mind already worked with
and process them far more quickly than a human ever could, it became a tool
of the mind, an extension and expansion of human consciousness. Using a
computer was as natural as performing basic
math or thinking logically. The very act of programming a computer, even if
the resultant program didn't actually "do" anything particularly useful,
became an exercise in logical thinking which had the ability to make people
smarter.

It is human nature to satisfy the id. Anything which can be made faster, more
convenient, more pleasurable, or less complicated is something that
mainstream society has always readily embraced. The computer was no
exception. It didn't take too long before people decided that being smart was
simply too much work, and that the computer should serve to amuse rather than
enhance. By the mid-1990s, computer games had become significantly less
intelligent, and focused on dazzling the player with sensational media rather
than requiring them to think. This trend never reversed itself, but rather
became more extreme as time went by, with the result that the 21st century is
a wasteland, unable to produce even one smart, well-made game in ten.

Today, when buying a computer, typical home users think of their PC in terms
of entertainment potential, especially with regard to graphics capabilities.
Indeed, for many people, the idea of the computer has fused entirely with the
lazy, passive field of mainstream entertainment, and so the computer quite
literally becomes little more than a machine to play movies and music; a
media center that happens to have a CPU in it. "Don't make me think," once
easily dismissible as an infantile tantrum, is now technology's guiding
mentality. The act of watching noninteractive movies or listening to recorded
music has always demanded almost nothing of the consumer, and now
this brainless field of entertainment is the primary reason people buy
so-called computers.

Today, there is such an emphasis on making computers networked that it seems
to go without saying. Mainstream computing culture seems to honestly believe
that a computer which is not networked is quite literally worthless. This
collective forgetfulness on computer users' part is almost
unbelievable, given that only 10 years before the Internet became something
most mainstream households were familiar with, a computer that was networked
in any way was a rare and unusual thing. Certainly, computer BBSes and other
forms of network links for exchanging a few files and e-mails had been around
since nearly the beginning of the microcomputer's existence (and indeed, the
idea of exchanging files and messages over networks predates the
microcomputer, since people were already doing it on ARPANET in the late
1960s and early 1970s), but networking was long seen as an add-on, a
supplementary functionality of the computer rather than its sole focus.
A personal computer, by definition, is something personal, meaning that it
can be used in solitude, without the need for connections to other people or
places. Certainly, the ability to connect to other people and places is
useful and important, but it is something to be done only in moderation.

Today's world is often described as "ultra-connected." People have so many
different ways of contacting each other that the list is almost ridiculous:
Telephone, e-mail, instant messaging (on a half-dozen major networks), blogs,
forums, etc. Yet increasing the different ways in which people can
communicate with each other does not increase the quality of the
communication that goes on; indeed, it has decreased. In order to communicate
an articulate, relevant idea, some amount of thought must go into the idea
before it is expressed. On today's Internet, that happens less and less;
ideas are expressed before being thought out. The shift in which hardware
bottleneck people worry about is a perfect metaphor: Today, a computer's
network throughput is considered more of a performance limitation than its
CPU speed.

The personal computer is not "connected" or "chatty." It is not a socialite.
It is a quiet, thoughtful sage, content to meditate and check its facts
before spreading memes to other people. With the extinction of this kind of
person from society, it is hardly surprising that the idea of such a computer
would die out as well.

LOGINS AND EXCESSIVELY LONG STARTUP TIMES

The world of personal computers became significantly less personal on the day
it became standard to require users to log in.

There is no other device in a typical person's home that requires them to
identify themselves before they use it. A television, a book, a refrigerator,
and a toothbrush are common items around the house, and they can simply be
accessed and used immediately. Why should a computer require someone to log
in before they can do anything?

The rationale behind this requirement is obvious. It aids in security, but
more to the point, it enables multiple people to use the same computer. Since
multiple logins can be created, each user can have their own profile and
customize their computing environment as they wish. While this is an
understandable paradigm for a family computer which is used by many different
people, it makes no sense for a personal computer: by definition, a computer
cannot be shared among multiple users and simultaneously belong to a single
user. Either the
computer is personal, or it isn't. A truly personal device which used a login
mechanism for security would only require a password, not a username.

The concept of logins has been extended into whole-filesystem access control
lists. On most operating systems used today, every single file on a computer
has its own ACL (access control list) which dictates which users can read and
write to the file, and which users cannot. This kind of whole-system security
is a consequence of the dying out of the open-access mindset. There was a
time when the computer community promoted understanding and access, but now
the opposite mindset prevails: Every single file access is tightly
controlled, turning the entire computer into a police state.
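
To make the point concrete, here is a minimal sketch in C, assuming a
POSIX-style system, of how this per-file access control surfaces to an
ordinary program. The filename "example.txt" is a hypothetical placeholder;
the owner/group/other permission bits shown here are the base mechanism, and
full ACLs merely extend the same idea with longer lists of users.

    /* Sketch: read back the access-control metadata the kernel consults on
       every open(). POSIX assumed; "example.txt" is hypothetical. */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;
        const char *path = "example.txt";        /* hypothetical file */

        if (stat(path, &st) != 0) {
            perror("stat");
            return 1;
        }
        printf("owned by uid %d, gid %d\n", (int)st.st_uid, (int)st.st_gid);
        printf("owner may:  %s%s%s\n",
               (st.st_mode & S_IRUSR) ? "read "    : "",
               (st.st_mode & S_IWUSR) ? "write "   : "",
               (st.st_mode & S_IXUSR) ? "execute " : "");
        printf("others may: %s%s%s\n",
               (st.st_mode & S_IROTH) ? "read "    : "",
               (st.st_mode & S_IWOTH) ? "write "   : "",
               (st.st_mode & S_IXOTH) ? "execute " : "");
        return 0;
    }

Every file on the system carries metadata like this, and the kernel checks it
on the user's behalf whether the user asked for such policing or not.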

Having to log into a computer also reduces the immediacy of use. One of the
appealing things about any accessible piece of technology is being able to
turn it on and use it immediately. This ability has been thoroughly lost in
today's computing environment. There was a time when you could turn on a
computer and have it ready for use in a matter of seconds. Commodore 64 and
Apple II computers powered up almost instantaneously; the system would be
ready for use before the CRT had even finished warming up and displaying a
picture.

Some people would counter this by noting that although the Commodore 64 and
Apple II could indeed get you to a BASIC prompt almost instantly, it took
several minutes on those systems to load a program from a floppy disk. It
could be said, then, that immediacy of computer access reached its apex with
DOS-based PCs using hard drives, which took a little longer to boot up but
not much longer, and which had hard drives fast enough to load most programs
in a matter of seconds. When you factored in the BIOS POST process and the
time it took to process everything in CONFIG.SYS and AUTOEXEC.BAT, these PCs
would typically boot in around 10-15 seconds, which is about the upper limit
for computers to feel like they start up quickly. Today, even a superfast PC
with hardware that is orders of magnitude more powerful than what was
available to DOS PCs in the early 1990s struggles to boot in under 60
seconds, turning startup alone into a major undertaking.

This is almost entirely a consequence of the operating systems. Virtually all
operating systems in widespread use today attempt to load way too much
overhead before they get to a user shell. This is partly due to the fact that
most operating systems now want to boot to a GUI, which takes considerably
longer than simply booting to a command-line interface (CLI). Indeed, the
significantly decreased popularity of CLIs in favor of GUIs is itself a
strong contributor to the death of the idea of the personal computer. This
leads rather directly into the next major point:

OPERATING SYSTEMS WHICH ISOLATE THE USER FROM THE COMPUTER

As any technology becomes less novel, there is a tendency to make the
technology more normalized. 90% or more of users of a specific technology use
that technology to do only a limited set of tasks, to the point where that
set of tasks becomes broadly assumed to be why people acquire and use that
technology in the first place. (See the point above about entertainment and
"communication.") As this conception of the applications of a technology
solidifies in the public's mind, the design of technology begins to center
around it. Systems are designed to facilitate what that 90+% of users are
doing, even if that design restricts potential alternative applications of
the technology which might be used by a minority--say, less than 1% of users.

It is a natural consequence, then, that when computer operating environments
are designed today, they are crafted to make it as easy as possible for
people to do things like listen to music, watch movies, and send e-mail to
other people. These kinds of tasks are given top priority and icons to access
them are usually made prominently available. Icons for running computer
algebra systems, fluid simulations, or hardware debuggers are somewhat less
in evidence on typical desktop environments.

A personal computer is personal partly because it forms a symbiotic
relationship with its user. The user acts on the computer, and vice-versa. In
order for this relationship to be viable and beneficial, the computer must
make itself open and available. It must not try to hide things from the user.
It must not restrict what the user can do.

Unfortunately, most operating systems today are designed with
information-hiding and restriction of tasks as key design goals. It would be
regrettable enough if these things happened by accident, but they don't--they
are actually seen as elements of good user interface design. There is a
strong fear in the computer community today that people might learn new
things, and so, in a concerted effort to keep this from happening, computers
try to hide information about themselves whenever possible. Designers worry
that people might be frightened by the prospect of seeing information, and so
when a computer encounters an error, the extremely unhelpful message "An
error has occurred" is often the only feedback a user gets. In times past, a
computer would readily perform a core dump on encountering an error,
displaying the contents of its memory buffer for the user to examine in
determining why an error occurred. No longer is this seen as good design; now
the focus is on shielding the hapless user from actually being able to
understand anything.

When microcomputers were first being developed, they were usually quite open
in their architecture. A program could readily modify any byte in memory, and
this was indeed normal practice for many years. At some point, it became
conventional wisdom that allowing computer programs to act as computer
programs was unsound, and so elaborate "memory protection" schemes were
devised, whereby some controlling operating system software is the only
program which has access to all memory, and user applications are actually
run as slaves to the operating system, which parcels out memory to the
applications and prevents one application from modifying memory that has been
assigned to another; indeed, applications which attempt to modify another
application's memory are often terminated immediately. The operating system
has been elevated far beyond what it once was; at one point in time, the
operating system was nothing more than a tool to help the user by supplying a
shell, along with a handful of useful function calls and utilities. Today,
the operating system acts as judge, jury, and executioner, completely
independently of user input, and indeed, largely beyond user control.
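
A minimal sketch in C illustrates the relationship, assuming any modern
protected-mode operating system; the address used below is arbitrary and
hypothetical. On an early microcomputer the store would simply happen (on a
DOS machine, that particular address would put a character on the screen);
under memory protection, the operating system terminates the program the
instant it touches memory it was never assigned.

    /* Sketch: attempt to write to memory the OS did not assign to us.
       The address is hypothetical (linear 0xB8000 was the text-mode video
       buffer on DOS-era PCs). Under a protected-mode OS, the write below
       ends the process with a segmentation fault. */
    #include <stdio.h>

    int main(void)
    {
        volatile unsigned char *p = (unsigned char *)0xB8000;

        printf("about to write outside our assigned memory...\n");
        *p = 'A';   /* terminated here; this was routine under DOS */
        printf("this line is never reached\n");
        return 0;
    }

The program is not consulted and cannot object; the operating system simply
ends it.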

Even operating systems which proudly trumpet messages of freedom suffer from
this mentality. Linux, widely touted as "free as in speech, not as in beer,"
runs in a protected mode and prohibits any application from accessing
resources that Linux has not assigned to the program. This makes it
appreciably difficult to hex-edit a running program in memory from the
command line. MS-DOS could do this readily; Linux will not allow it except
through a supervised debugging interface. Linux developers
know that very few users actually would want to (or know how to) do such a
thing, and so the idea of doing so is simply neglected and ignored.
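
For what it is worth, changing a byte of a running program on Linux is still
possible, but only by going through the kernel's own supervised debugging
interface, which is exactly the point. The sketch below, with a hypothetical
process ID and address, shows roughly what DEBUG's single "e" command once
did now requiring an attach, a wait, a read, a write, and a detach, all
subject to the operating system's permission.

    /* Sketch: patch one byte of another running process via ptrace().
       The PID and address are hypothetical placeholders; the calls fail
       outright unless the OS grants permission to trace the target. */
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = 12345;                  /* hypothetical target process */
        void *addr = (void *)0x400000;      /* hypothetical address in it  */

        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
            perror("PTRACE_ATTACH");        /* denied without privileges   */
            return 1;
        }
        waitpid(pid, NULL, 0);              /* wait until the target stops */

        long word = ptrace(PTRACE_PEEKDATA, pid, addr, NULL); /* read word */
        word = (word & ~0xFFL) | 0x90;                        /* patch low byte */
        ptrace(PTRACE_POKEDATA, pid, addr, (void *)word);     /* write back */

        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return 0;
    }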

Today, the layers of isolation from the hardware which are imposed upon a
user aren't even limited to those of the operating system. As programmers
become lazier and wish to be able to create data structures without
understanding how those structures actually exist, layers upon layers of
abstraction are created, resulting in software that is unnecessarily complex,
slow, and inordinately difficult to debug as an actual CPU process. Today,
incredibly enough, actual computer programs are losing favor to web
applications, which run entirely in a web browser. This means that users now
have an operating system laid on top of hardware which isolates them from the
hardware, running a web browser (which was probably coded in a high-level
language, since writing in assembler or machine language is all but abandoned
on today's operating systems), which in turn runs either a runtime
environment (such
as that used by Java) that executes the actual application code, or runs
stripped-down scripting languages that literally don't even have functions to
write to memory, let alone permissions from the operating system to do so.
People code not in computer languages like assembler or CPU opcodes, nor even
in high-level languages like C or Pascal; today, developers "program" in web
languages like HTML, JavaScript, and PHP. To call such activity "computer
programming" is akin to claiming that being able to turn a steering wheel
makes someone an auto mechanic.

------------------------------------------------------------------------------

None of these shifts in attitudes toward computers is a technology
limitation. (Some might argue that the use of protective operating systems is
a technology limitation, but it really isn't--the technology exists to create
an operating system that doesn't limit what the user can do. It's just that
mainstream computer users and manufacturers do not want to create or use
computers with such operating systems.) All of these shifts are a consequence
of how people think about computers. This is why I say that the idea of the
personal computer is dead. The personal computer isn't dead, and never will
be; it was never alive to begin with. It was, and always will be, just a
machine, and to recreate a machine, given the appropriate diagrams and
schematics, isn't an insurmountable challenge.

What is dead is the idea of the computer as a personal device, as a tool for
exploring the depths of science, creativity, math, and the human mind.
Computers are no longer seen as anything special, because people have
forgotten (or never realized in the first place) how powerful they can be in
educating, empowering, and inspiring people. Instead, a computer is now
regarded simply as a time- and labor-saving tool, like a screwdriver or a
washcloth. The function of such devices is devoid of variety or creativity;
it is the complete antithesis of what a computer is.

This article isn't about computers. It's about people. It's a summary of what
happens when something filled with limitless potential is drained of all life
as people seek to make the idea "mature" by utilizing* it, thinking only of
how ideas can be turned into money, power, amusement, or other indulgence.

*I use the word "utilize" here in its correct sense--not in the colloquial
sense, in which it becomes a synonym for "use," but rather the meaning "to
make utilitarian; to make useful or practical."

This isn't to say that nobody in the world cares about personal computers
anymore; it's just that the people who do are a marginalized minority, like
archaeologists, and people who are actually well-versed in literature,
geography, and history rather than merely pretending to be. There will
probably always be a small group of such people in the world, but their
pursuits are unlikely to ever be "popular" in the mainstream sense, nor are
they likely to ever make much money pursuing such ends. Make no mistake:
There was never much money in the personal computer industry to begin with.
The people who got rich in the computer industry are those who pandered to
big business. The few exceptions are those who happened to be lucky enough to
be the sole suppliers of personal computers at a time when the industry was
just beginning. If you're the sole source of water in the world, you don't
need marketing, sales, or research and development; the world will beat a
path to your door. The likes of those who got rich designing and selling the
world's first personal computers happened to be in the right place at the
right time, but they would not have gotten rich in today's world doing what
they did then.

I'll close this article as I began it: With a quote from the man himself.

"Information technology and business are becoming inextricably interwoven. I
don't think anybody can talk meaningfully about one without talking about the
other."
--Bill Gates

He's right. Say what you will about Gates, but in many ways, he's a man who
gets it. Goodbye, personal computer.

    Source: geocities.com/siliconvalley/2072
