After watching the computer industry for about the last 15 years, I have come up with a couple of laws:
The history of the computer industry is full of examples of these laws in action.
Probably the most successful operating system introduction of the last 20 years was the advent of Microsoft Windows 3.0 in 1990. This was the product that finally spelt the end of the command line on most of the world's desktop computers. Millions of copies were sold, even though most of the software written for Intel-architecture computers at that time, and for some years thereafter, was still firmly based on MS-DOS, not Windows.
So why were people buying Windows 3.0? PC Magazine ran a survey on its dial-up bulletin board system (remember, this was still early days for the Internet outside academia), and found that most Windows buyers were using it as a multi-tasker. That is, it gave them a way to conveniently run several DOS programs at once, switch between them, and even copy and paste textual data back and forth to a limited extent. In other words, it gave them "a better DOS than DOS".
Most of the computer literati were still preoccupied with OS/2 at this time. Even though it was now two years old, and it was clear that its acceptance in the marketplace was taking rather a long time, nevertheless nobody was in any doubt about its technical superiority over MS-DOS (or even Windows, for that matter); it seemed inconceivable that it would not eventually win out, and become the standard operating system for Intel PCs.
Virtually all the major software developers of the late 1980s were behind OS/2 from the beginning: Aldus were one of the first with PageMaker for OS/2; Microsoft brought out Excel for OS/2 just weeks before Windows 3.0 was released; Lotus (the dominant spreadsheet vendor of the time) followed a few months after with 1-2-3/G for OS/2. So no-one can say that OS/2 failed because of lack of developer support: it was co-developed by IBM and Microsoft; it was their anointed successor to MS-DOS, and vendors, both big and small, duly fell into line. The magazines promptly started up regular OS/2 columns, and PC hardware advertisements started to tout OS/2 compatibility. But there were two initial shortcomings:
As hardware prices continued falling, the first shortcoming eventually went away. Indeed, the OS/2 versions of PageMaker and Excel were already performing tasks faster than their Windows counterparts, which perhaps showed where the resource hog really was. The second problem was eventually fixed in OS/2 2.0, which IBM released on its own after Microsoft had already given up to concentrate on Windows. IBM promoted the new version as "a better DOS than DOS, a better Windows than Windows", but it did nothing to reverse the long, slow slide of OS/2 into oblivion.
The general explanation given at the time for this was that OS/2 had lost momentum: the initial missteps had forever taken the shine off the product, so users would never be willing to reconsider it, no matter how good it became.
Yet that was not an explanation at all. For if it was, then why didn't it apply to other products? Take Windows: the early versions were embarrassingly bad in many ways, and it had to go through three major releases before achieving success. If Windows could do that, why not OS/2?
Clearly, the problem lay not with the way the product was marketed, but with the product itself. No amount of tweaking or repackaging can save a flawed idea. OS/2 had a clean new architecture, designed from the ground up with a minimum of legacy baggage like that which infested MS-DOS (and Windows). It had elegant new APIs, protected memory and pre-emptive multitasking: all things calculated to make programmers and other knowledgeable computer types drool with delight.
But with all the effort devoted to these features, its designers forgot to come up with reasons why users should buy it. The software vendors were seduced by the technical superiority of the new system, but it turned out they couldn't actually use the new technology to solve any important problems that users wanted to be solved; there was no OS/2-based killer app that made users want to switch.
And there was another computer platform, which came out about the same time as OS/2, that made the same mistake. This was the family of machines from NeXT, running their NextStep OS. It was based on a UNIX kernel, with protected memory and preemptive multi-tasking built in. It had a sophisticated graphics engine built around Display PostScript. It had developer tools so powerful and so advanced that programmers and other knowledgeable computer types positively wet themselves just thinking about them.
Reviewers were virtually unanimous in their praise of the new computer platform. Even in 1988, UNIX had already established its reputation as a complicated, unfriendly system. Yet the NeXT machines were so easy to use, you could hardly tell that they were running UNIX underneath.
So why didn't the NeXT machines succeed? Again, because they forgot about the users. With all those powerful developer tools available, no-one was able to produce any software that people actually wanted to use.
Probably the best-known product ever developed for NextStep was Lotus Improv. From the dominant spreadsheet vendor of the time, this was going to be the next advance beyond spreadsheets. It was developed first on NextStep because it was so easy to develop things there. It got lots of good reviews, many of which ended with "if only it were available for Windows...". So finally, Lotus took the plunge, and spent a large amount of effort porting Improv to Windows. Where it promptly disappeared without trace.
In fact, if I were going to add a third law to my current two, it might be something like "no worthwhile software was ever developed for any platform that made it too easy". Not all developers are good; lots of them are mediocre, lazy, and like to get away with the minimum possible amount of work. If a platform is easy to develop for, you will attract all kinds of developers to it, good as well as mediocre. If a platform is difficult to develop for, the mediocre developers will tend to be put off, whereas the good ones will stick with it, if there is a potential market for their products. Perhaps programmers, like all artists, really only produce good work when they feel pain.
Thus, it is the users that attract the developers to a platform, not the other way round. If you can attract lots of users to your platform, you almost don't have to worry about the developers at all; they will crawl over broken glass to develop for your platform, if that is what it takes to tap into the profits from your market.
It is true that a good selection of software packages is an important part of the attraction of a computer platform for users. But never forget that attracting developer support is only a means to an end, not an end in itself; that end is always the attraction of the users. If you forget this, you end up creating another OS/2 or NextStep.
The history of computing is littered with bright ideas that went nowhere. Lots of people have come up with clever pieces of technology that sounded so cool there just had to be a use for them. But nine times out of ten, none was ever found, and they remained solutions in search of a problem.
Or alternatively, some elegant, powerful piece of technology was already established and being heavily used in some market niche. But then along comes a technically inferior new competitor: cheaper, and not doing nearly as much, yet it manages to establish itself in the same market niche, and even ultimately renders the technically superior product extinct.
How could this happen? Two reasons: cost-effectiveness, and cash flow. The new competitor is a product with a large primary market outside that niche; with its costs spread over far more units, it is cheaper than the product specifically targeted at that niche. It might not be as well-suited to the niche market as the technically-superior product, but if it can do, say, 60% of the job at 30% of the price, then there is a place for it, perhaps around the edges of the niche market.
Thus, the vendor of the technically-inferior product is getting some additional income from nibbling at the edges of this niche market. This encourages the vendor to invest some of the income it gets from its primary market in improving the usefulness of the product to that niche market, which increases sales, which in turn increases investment, until it gets to the point where it can do, say, 90% of the job, still at only 30% of the price.
At this point, the technically-superior product, much more expensive, though still with a noticeable edge in quality (the difference between doing 90% and 100% of the job), is getting harder and harder to justify. The vendor can only drop the price by so much, since there is usually no other market it can break into to increase its cash flow. So ultimately, the technically-superior product becomes extinct.
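The 60%-of-the-job-at-30%-of-the-price argument can be made concrete with a crude "coverage per dollar" measure. A small sketch (the numbers are the illustrative ones from the text above, not real market data):

```python
# Illustrative sketch of the cost-effectiveness argument: compare how much
# of the job each product does per unit of price. Figures are the 60%/30%
# and 90%/30% examples from the text, not measurements of any real product.

def value_per_dollar(coverage, price):
    """Fraction of the job done per unit of price."""
    return coverage / price

niche = value_per_dollar(1.00, 100)  # niche product: 100% of the job at full price
early = value_per_dollar(0.60, 30)   # newcomer at first: 60% of the job, 30% of the price
later = value_per_dollar(0.90, 30)   # after reinvestment: 90% of the job, same price

print(f"niche product:    {niche:.3f} coverage per dollar")
print(f"newcomer, early:  {early:.3f}")
print(f"newcomer, later:  {later:.3f}")
```

On this crude measure the newcomer already beats the niche product at 60%/30%, and the gap only widens as reinvestment pushes its coverage up; that is the squeeze the next paragraph describes.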
The personal computer is a mass-market product. It is perhaps the ultimate mass-market product, given the large number of niche markets it has managed to conquer in this way. Other mass-market products may sell more in sheer numbers of units, but none has this ability to keep crossing over into new markets.
Example: consider how the Apple Macintosh came to take over the publishing industry. Even before the Macintosh, publishing was already making heavy use of computer-based systems. Publishers had their specialist Atex machines, with purpose-built tools for dealing with all parts of the publishing task, and a price tag to match.
Then, along came this little box, with its tiny screen. At first glance, nothing much to look at, except for two things: a piece of software called Aldus PageMaker, and an amazing printer called the Apple LaserWriter. For a tiny fraction of the price of an Atex system, anybody could set themselves up as a do-it-yourself printing/publishing bureau. True, the LaserWriter wasn't capable of genuine typeset quality, but for many purposes, it was close enough. That first version of PageMaker was hopeless for putting together a full-length book or a complex technical document, but it worked great for smaller publications like newsletters, brochures, and even entire magazines. And so desktop publishing was born. The professional print bureaus found they were losing business to these new, smaller, cheaper operations; in a case of "if you can't beat 'em, join 'em", they were practically forced into starting to use Macintosh-based systems themselves.
Clearly, it was desktop publishing that helped save the Macintosh from early extinction in the mid-1980s. When Apple's fortunes declined again in the mid-1990s, some analysts suggested that it should give up trying to sell to the mass PC market, and concentrate on its niche markets: publishing, graphic arts and education. If Apple had ever succumbed to that idea, it would have been vulnerable to an invasion from the mass market it had left behind: perhaps some new Microsoft-Windows-based product would finally have cracked the publishing market by now (where none has succeeded as yet), and the Macintosh would already be extinct.
Instead, Apple stuck to its mass-market roots, brought out its iMac and iBook products, and came storming back virtually from the dead.
Another little box that had a lot to offer to a niche market was the Commodore Amiga. With a revolutionary add-on product called the Video Toaster, it was capable of performing a lot of the tasks associated with TV and video production, at a fraction of the cost of the existing professional products. Throughout the late 1980s, and into the early 1990s, a lot of Amigas and Toasters were sold for this purpose, even as arguments continued over whether the Toaster was capable of true broadcast quality or not.
Unfortunately, Commodore lost its focus on the mass market, which in turn led to its financial troubles, which restricted the resources it had to continue in the mass market, and so on in a vicious circle. The Amiga's hardware architecture, which looked so innovative in 1985, was already showing its age by 1990. As the video-production world moved from analog to digital ways of doing things, the Amiga and the Toaster got left behind, relics of an obsolete way of doing things.
Moral: you must keep a hold on that mass-market high ground at all costs, otherwise you will eventually lose the niche markets as well.
Another example of my laws in action is being played out right now in the Linux market. Here is an operating system, being developed cooperatively by an informal network of programmers, who are giving their work away for free. Sounds wonderful, doesn't it? The stock market thought so, and drove up the share prices of Linux companies to absolutely ludicrous levels, until the bubble started to deflate in the early part of 2000. Today, it is clear that, while Linux has a definite role in the Internet server market, it is never going to succeed in establishing itself as a desktop operating system.
For me, the warning bells were already sounding a couple of years ago. From the beginning, the motives of many in the Linux community seemed less than pure: they were driven more by a hatred and jealousy of Microsoft than by a desire to empower the users of their systems. And are we not all taught as little children that we will never succeed in an important endeavour if our motives are not pure?
For an alliance formed purely out of fear of a common enemy tends to fall apart when it turns out that the enemy is not as powerful as once thought. This happened in the late 1980s, when IBM brought out its PS/2 range of personal computers, with the new Micro Channel bus instead of the older AT-bus (now called ISA) slots for expansion cards. Since anybody who wanted to build Micro Channel hardware had to license IBM's patents to do so, the various vendors of PC compatibles feared that this would lead to IBM cementing its dominance over their industry.
So a group led by Compaq created the Extended Industry Standard Architecture (EISA) bus, as a backward-compatible extension of ISA, to try to counter Micro Channel. What nobody had yet noticed was that IBM had already lost its dominance over the PC industry, and was no longer in a position to call the shots and impose new proprietary architectures. Micro Channel never really took off (it was only licensed by one major clone vendor, Tandy, as I recall), and EISA correspondingly went nowhere.
And so it is with Linux. Where it is having success is where its developers are concentrating on coming up with worthwhile solutions to important problems: in its use for Internet servers. On the desktop, its developers are simply trying to come up with alternatives to Microsoft products, and not even doing so very imaginatively, either. Here their efforts seem primarily targeted at other developers, in which case my First Law comes into play.
There is an interesting phenomenon that often happens in human societies, where, once a new idea is adopted by a small handful of influential people, it can spread rapidly and become dominant throughout the society. The evolution of human languages (new words, new slang and colloquial terms and so on) seems to happen in this way, and on the business side it is certainly the way the fashion industry works.
But, strangely enough, the computer business doesn't seem to work this way. The analysts and magazine writers have devoted a lot of space to many different technologies over the years, many of which have gone nowhere in the marketplace. Linux could very well hold the record for the sheer number of column-inches devoted to it in the computer publications, compared to actual numbers of units sold; not since the days of OS/2 has there been so much ongoing free publicity for a product with so little sales. And yet the customers for desktop operating systems are still refusing to buy into it.
One particular fact seems worth emphasizing: many people take it for granted that a system that offers memory protection and preemptive multitasking is obviously worth having. Yet, as the examples I have mentioned above make abundantly clear, in the desktop market these are not selling points. You can argue all you want about the logic of running a more stable system and so on, but the fact remains that this logic does not translate into sales. To the customers, compatibility is more important.
Probably the closest thing to a counterexample to my laws is Windows NT. Here is a system that does indeed put the emphasis on memory protection and preemptive multitasking, that was designed to leave behind a lot of the architectural baggage associated with regular Windows and make things easier to program, and yet it is achieving a respectable level of sales.
Well, yes and no. After 7 years of intensive promotional and developmental efforts by Microsoft, Windows NT (including the new Windows 2000) is still very much a minority-interest operating system. When Windows 95 came out, Microsoft announced that that was going to be the last OS built on the old Windows architecture, that the next consumer OS would be built on the Windows NT architecture. Then, when it realized that the market wasn't going to stand for this, it brought out Windows 98, then 98 SE, and now Windows ME (Millennium Edition). The current position is that this last OS will be the final one to be built on the Windows 9x architecture, that the next consumer OS will indeed be built on the Windows 2000 architecture.
My expectation is: don't hold your breath. Microsoft recently announced that it had sold 3 million copies of Windows 2000. Trouble is, it took 5 months to achieve that. It may sound like a respectable number, but it's still only at most about 20% of Windows 98 sales. This is the platform that's going to kill off the Windows 9x line? Somehow I don't think so.
(Though they might have to come up with a name for the next version other than "Windows ME 2".)
Another interesting complication is the outcome of the Department of Justice antitrust case against Microsoft. If Judge Jackson's verdict is not overturned on appeal, Microsoft will be split into two companies: one will own the various Windows operating systems, while the other will own Microsoft Office and Internet Explorer. If this were to happen, the OS company would have fewer resources available to keep plugging NT/2000 than Microsoft currently does. What would this mean for the future of NT/2000? Perhaps a more realistic focus on what's bringing in the profits, versus what isn't?
Apple is currently developing its next-generation operating system for its Macintosh computers, called MacOS X (the X is pronounced "ten"). This will be built on a UNIX kernel, and will incorporate memory protection and preemptive multitasking, features that many analysts and other knowledgeable types agree have long been needed in the MacOS.
Sound familiar? It should, since in fact most of the programmers who developed NextStep are working on this system; they came over when Apple acquired NeXT.
In fact, it is a familiar refrain in more ways than one: this is no less than Apple's fourth attempt to combine MacOS and UNIX into a new-generation desktop operating system. The previous attempts were:
The other thing they have in common is that all these attempts were unsuccessful. There is an interesting phenomenon of selective recall here: mention these failures to people today, and they will conclude that these products failed because Apple didn't support them properly, or didn't publicize them anywhere nearly enough, or that the idea was basically sound but something fell short in the execution. (Much the same is sometimes said about OS/2, and no doubt will be said about Linux in the future.)
I think a closer examination will show that all these arguments are nonsense. A/UX was promoted and continually updated over many years (with 3 major releases, no less). Rhapsody and Yellow Box were introduced with much fanfare and ongoing publicity (and major development effort), which only began to falter when Apple began to notice the deafening silence in response from the Macintosh community. (MAE might or might not have been a dumb idea which should never have seen the light of day, but let's not get into that now...)
No, the fundamental reason why all attempts to combine MacOS and UNIX on the desktop have failed is that the rationale for attempting such a combination only makes sense to developers, not to users. UNIX on the desktop is, as they say in the movie business, box-office poison: users just can't seem to find a reason why they should buy it.
I fear that the same thing is going to happen with MacOS X. Apple has already backed off from its initial ambitious plan to start including MacOS X with all new machines starting from early 2001. But one thing is clear: those engineers that Apple acquired when it bought NeXT still stubbornly believe (contrary to all the evidence) that there is something worthwhile in those NextStep technologies, and that the marketing clout of Apple will find a way to popularize them where NeXT on its own was unable to for ten years.
Personally I think all the engineers brought over from NeXT should be sacked. When Apple bought NeXT, it brought back Steve Jobs as the prize. He alone has been worth the purchase price, with his marketing successes like the iMac and so on. But the rest of NeXT, and all its technologies, has just been a liability that Apple should get rid of as soon as possible, before it's too late.
Apple has announced a range of new Macs with a couple of interesting features: dual processors, and gigabit Ethernet built-in. Several analysts have already pointed out that the dual-processor feature seems to be just a distraction from the fact that Motorola (Apple's sole supplier of PowerPC chips) seems incapable of increasing the clock speed of its processors at the moment. Precious little software is currently capable of using more than one processor effectively: thus, having two 500MHz processors will make little difference to most users doing most tasks, compared to older single-processor 500MHz machines.
Which is true as far as it goes, but remember my first law and users versus developers: if enough users buy these machines (and indications are that they will), then they will create a large potential market for software that is capable of using multiple processors, larger than any such market that has existed before. And where a potential market exists, you can bet that smart developers will jump in to try to sell to that market. Which means thinking up new and imaginative ways to use multiple processors. Obvious applications like speeding up MP3 audio encoding and MPEG movie compression would be of interest to lots of users right now. But who knows what else they will think up?
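The analysts' point about most software seeing little benefit is just Amdahl's law: the speedup from a second processor is limited by however much of the task remains serial. A small sketch (the parallel fractions are my illustrative assumptions, not measurements of any real application):

```python
# Amdahl's law: why a second processor helps only software written to use it.
# The parallel fractions below are assumed for illustration, not measured.

def speedup(parallel_fraction, n_processors):
    """Overall speedup when only parallel_fraction of the work
    can be divided among n_processors; the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# A typical desktop task of the day, with almost no parallel code:
print(f"10% parallel, 2 CPUs: {speedup(0.10, 2):.2f}x")   # barely faster
# An MP3 or MPEG encoder rewritten so most of its work is parallel:
print(f"90% parallel, 2 CPUs: {speedup(0.90, 2):.2f}x")   # close to double
```

Which is why the installed base matters: only once enough dual-processor machines are in users' hands is it worth developers' effort to push that parallel fraction up.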
I've been thinking of other areas in which my laws could be applied: for example, existing niche markets which could conceivably be taken over by mass-market products in the future. One such market that comes to mind is that for servers, including Internet servers. Currently it takes a certain amount of technical skill and ongoing administrative effort to look after a server: could this change in future, so that they become more like appliances that you switch on and, for the most part, forget?
Similarly, the operating systems that run on servers are specially designed to do so, with particular emphasis on high performance, high availability and other such features. We have already seen that this is one area where Linux is enjoying some success (along with other UNIX variants): it's also an area where Windows NT is popular.
Let's consider the following question: could UNIX (including Linux) and NT/2000 be ousted from the server market by a technically-inferior, yet cheaper and easier-to-set-up system built on some future variant of Windows 9x?
At first sight, the idea seems laughable. In order for a server to be a plug-in-and-forget device, it has to be reliable, and reliability is not generally considered a strong point of the Windows 9x family.
However, IBM, for one, has come up with a system for monitoring a server, and automatically rebooting it when it crashes. And this is not even the first product of its kind.
Consider this scenario: a whole bunch of bargain-basement, modest-performance servers, running some future variant of Windows 9x, and perhaps with built-in hardware monitors that will reboot them when they crash. The servers are all running the same e-commerce application in a load-sharing arrangement. At any moment, there will almost certainly be one or two machines rebooting from a crashed state, but that's not a big issue, because all the other machines are there to share the load.
And the whole arrangement costs less, both to buy and install and in ongoing running costs, than a "real" server setup running a "real" OS that will do the job with the same overall reliability and performance.
Sound plausible? There are obvious technical objections: what if the machine crashes in the middle of a transaction? The customer would certainly be annoyed. So perhaps all transactions are run in parallel on two machines at a time: if one crashes, the other one can still finish the job. (Doing things this way doubles the hardware cost, but that's only a small component of the total cost anyway.) If it's cheap and it works, do it, and to hell with technical elegance.
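A back-of-the-envelope calculation shows why running each transaction on two unreliable machines can be good enough. Assuming crashes are independent, a transaction is lost only if both replicas crash while handling it; the crash probability below is purely an assumed figure for illustration:

```python
# Sketch of the run-each-transaction-on-two-machines idea. Assumes crashes
# are independent events; the 1% per-transaction crash rate is invented
# purely for illustration, not a measurement of any real system.

def failure_probability(p_crash, replicas):
    """Probability a transaction is lost: every replica
    handling it must crash before it completes."""
    return p_crash ** replicas

p_crash = 0.01  # assumed: 1% chance a cheap box dies mid-transaction
print(f"1 machine:  {failure_probability(p_crash, 1):.4%} of transactions lost")
print(f"2 machines: {failure_probability(p_crash, 2):.4%} of transactions lost")
```

Even with flaky hardware, squaring a small crash probability makes the combined failure rate tiny, which is exactly the "cheap and it works" trade-off: redundancy in unreliable parts standing in for reliability in expensive ones.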
Remember, you have to start by nibbling around the edges. I know we're not there yet, but I think it's only a matter of time...
Created 2000 April 16 by Lawrence D'Oliveiro <firstname.lastname@example.org>, last modified 2000 August 5.