Posted by: Khepera | Monday, 15 June 2009

Technology: Do We Really Know What We Have In our Hands?

In the rapidly quickening swirl of technological evolution, it’s sometimes difficult for those not at the forefront of development to keep track, much less fully grasp what is manifesting before their eyes. I laugh sometimes when folks – including me – fuss over the delay in waiting for their system to boot up, thinking back to the old DOS days and booting from floppies. Most know that the personal computers of the early ’90s – the 386, etc. – were far more advanced than the computers NASA used to launch the Apollo missions to the moon. But few of us have a comprehensive sense of the superior computational capacity (processor) and connectivity we hold in our hands today in the form of a BlackBerry or iPhone. If you think you truly grasp/appreciate where technology is today, walk with me on this recursive trail. Please note, what follows is not meant to overlook the development of computers & electronic technology prior to the mid-’60s. A good reference for this can also be found here.

One of the first major shifts was the advent of the handheld calculator. Developed by Texas Instruments (TI) in 1967 and intended as a competitor to the large and cumbersome mechanical adding machines of the day, this device could add, subtract, multiply & divide. Following TI’s development of the single-chip microcomputer in 1971 and the single-chip microprocessor in 1973, another major shift occurred. In 1972 the HP-35 was introduced, the first handheld electronic scientific calculator, often referred to as an ‘electronic slide rule’. This chart on the evolution of CPUs is a good reference on this progression.

Until this point, the line between calculators and computers was clear & decisive: calculators were devices with preset computational capacities – addition, subtraction, multiplication & division, even some higher math functions – whereas computers were programmable. That distinction blurred with the rise of the programmable electronic calculator, whose best-known early entry was perhaps the HP-65, the first programmable handheld calculator. Hewlett-Packard’s role in the development of the electronic calculator was pivotal. The addition of nonvolatile memory was a key feature, allowing for the retention of stored work while the unit was shut down.
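The calculator/computer divide described above can be sketched in a few lines of modern Python – purely illustrative, of course, and not how any HP or TI firmware actually worked. A fixed-function calculator exposes only its preset operations; a programmable one can record a sequence of keystrokes and replay it on new input, much as the HP-65 replayed stored keystroke programs:

```python
# Preset operations: all a fixed-function calculator can ever do.
FIXED_OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def run_program(steps, x):
    """Replay a recorded 'keystroke program' of (op, operand) pairs on a
    starting value -- the essence of programmability."""
    for op, operand in steps:
        x = FIXED_OPS[op](x, operand)
    return x

# A stored program: convert Fahrenheit to Celsius, (x - 32) * 5 / 9.
f_to_c = [("-", 32), ("*", 5), ("/", 9)]
print(run_program(f_to_c, 212))  # 100.0
```

The same four preset operations are present in both cases; what makes the second device a tiny computer is that the *sequence* of operations is itself data that can be stored and rerun.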

In the ’70s, these devices created a revolution on college campuses and in engineering facilities around the world. Very expensive in the beginning – some cost over $1000 – they shrank in both size and price with the swift development of CPUs and electronic miniaturization. Some may remember that during the early & middle parts of that decade, many college professors would not allow calculators to be used during exams. In part this was due to the disparity of access related to cost, but many professors also considered calculators ‘crutches’, devices which would make their students ‘computationally lazy’. This may sound ludicrous now, but we have all seen it play out at any store or fast food place when the electronic cash registers go down. The limited computational capacity of the cashiers is often readily apparent; this was much less the case before electronic cash registers. This theme of reliance on machines as a source of human atrophy is very old, yet it remains a vibrant and lively discussion for many…with good reason.

It’s worthwhile to pause here a moment and address a seldom discussed aspect of this burgeoning technological development: usage. For years we’ve been hearing – and I’ve been saying – that these tremendously advanced machines we now have on our desktops are largely wasted. This is because the vast majority of folks use their computer as a word processor, perhaps for some spreadsheets for budgets & such, email, and Internet access for all sorts of things. The proliferation of blogs and podcasts and other assorted forms of multimedia has helped to leverage more of the computer’s capacity, but is far from tapping its potential. We are still essentially driving the Ferrari back & forth for groceries and beer, careful to avoid getting on the freeway.

The evolution of processor capacity – computing power – was just starting to ramp up, but data storage technology was not keeping pace. Today, storage capacity may well be the most rapidly developing hardware side of computer technology – or rather, we could have said that a year or so ago, before the breakthrough of multi-core technology. Some of us are familiar with dual-core technology, which has been out a few years now, but multi-core technology – particularly massively multi-core technology – is a shift which will, imho, transform computing as we know it in a few short years, in ways far greater than, say, cell phones transformed the way we communicate. We are rapidly approaching the point of having the equivalent of what we once referred to as a “supercomputer” available on the desktop. There’s a good post on this on the CAD Craft blog. Now, Anwar Ghuloum of Intel has just made a bold proclamation:

“Ultimately, the advice I’ll offer is that…developers should start thinking about tens, hundreds, and thousands of cores now.”

If they are telling software developers to ‘prepare for thousands of cores’, then it seems prudent that we should be preparing as users too. Let me try to give you some perspective on this. The shift from your present single-core, or even dual-core, machine to a computer with, let’s say, a thousand cores is tantamount to putting a 16-year-old with a learner’s permit behind the wheel at the Indianapolis 500, or maybe even in the cockpit of a stealth fighter. For those who have been sleeping on the technology – or who, like those who never changed the clock on their VCR, have been practicing ‘technology avoidance’ because there seems to be too much to keep up with – to leverage the familiar quote, you’d better strap up, cuz “Kansas is about to go bye-bye.”
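What does “thinking about thousands of cores” actually look like for a developer? A toy sketch, in Python (my own illustration, not anything from Intel): the work gets split into independent chunks, a pool of worker processes – one per core – grinds through them, and the partial results are combined. The function names here are hypothetical:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def sum_squares(chunk):
    """Worker: sum of i*i over one half-open chunk [start, stop)."""
    start, stop = chunk
    return sum(i * i for i in range(start, stop))

def parallel_sum_squares(n, workers=None):
    """Split [0, n) into roughly one chunk per worker, farm the chunks
    out to a process pool, and combine the partial sums."""
    workers = workers or os.cpu_count() or 1
    step = max(1, n // workers)
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_squares, chunks))

if __name__ == "__main__":
    # On a dual-core machine this uses 2 workers; on a thousand-core
    # machine the very same code would use 1000.
    print(parallel_sum_squares(1_000_000))
```

The point of the sketch is the shape of the program, not the arithmetic: software written this way scales with the hardware, while software written as one long serial loop gains nothing from core number 2, let alone core number 1000.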

One of the issues here is that as computers – and computer usage/access – become more ubiquitous, concerns about their impact on our humanity and our culture once again arise. While there is ample grist for this mill, it may be useful to examine some of what has already occurred – examples for which there are some data and studies. MMORPGs (massively multiplayer online role-playing games) have roots going back decades, to the tabletop Dungeons & Dragons games of the mid-’70s and the early computer adaptations that followed on old CP/M and DOS machines. This has morphed into the video game craze some see as an epidemic now, but it has been around for a long time. It’s sort of like suddenly sitting up and saying we have a problem with drunk driving – it’s not new. As nearly any reference on MMORPGs will mention, subcultures have arisen around these games and the virtual worlds created to house them. Video games are big business:

“Overall for March, total sales, including hardware, software and accessories, rose 57 percent versus a year ago, hitting $1.7 billion. The number was also up month to month, besting the $1.33 billion in total sales registered in February.”

Video Game Industry: On a Winning Streak

For those who have not heard, it is no longer a secret, despite the fact they don’t teach it in public schools: the USA is a capitalist republic before all else, and, as such, the rules of business ($$$) generally predominate over the rule of law. If you’re talking about a business grossing nearly $2 billion per month, you can believe that everyone from Congress on down is going to pony up for a piece of that pie, and it is unlikely that anyone will be able to put a dent in that momentum. So, if it’s here to stay, what do we do?

One option is to make the ‘gaming’ premise/environment a business in & of itself, not just the business of selling video games. The Sims tried this, albeit somewhat feebly, and Adobe Atmosphere (discontinued) was moving in the right direction. Second Life, however, is a realm where, despite evolutionary challenges, they seem to be getting it right. As is all too often the case with groundbreaking developments, some of the key innovators have left Linden Lab, and it remains to be seen how things will play out. Meshverse has some insightful commentary on this. Second Life has been posting some impressive statistics. I have an account, and am developing a business inworld, and I would encourage you to at least check it out.

Well above the horizon now is a wave of new computing resources becoming available which were previously considered mostly the realm of corporate interests. Cloud computing, which lets one access computing resources on an as-needed basis, placed in tandem with the burgeoning multi-core developments, means that soon Second Life could be an application you run – and host – yourself, on your own computer, instead of logging into someone else’s world. This premise of ‘software as a service’ has spawned several new resources. But cloud computing is not the only option.

There’s a lot more out there, not to mention Web 2.0, but look for this as an evolving discussion, including questions & salient comments as we move forward.


