On this page, which I started a few years ago internally at IBM, I have collected my evaluation of technologies and strategies that I predict will succeed or fail. Every year or so I update my predictions, commenting on whether I was right or wrong.
Everything mentioned here reflects my personal opinion; I take no responsibility if you treat it as fact.
The following themes are available:
I think there are two tendencies clearly on the horizon:
As Wintel (Windows on Intel-based servers) machines are arriving at a dead end, because ever-increasing performance requires more and more electric energy (and, since most of it is converted into heat, at least the same amount of cooling energy), server farms are reaching their feasible limits. The number of CPUs and servers cannot be increased much further (even when using blades).
One has to limit the number of physical servers, but when you need more logical servers you have to use virtualization technologies, that is, Virtual Machines.
One way to limit that is not to go for higher clock speeds but for more parallelism. What began with SMT (or Hyperthreading, as Intel calls it, though it existed on other architectures for some time before Intel introduced it) is today expanded by putting multiple cores on a single CPU (which, again, e.g. IBM's PowerPC has had for some time now). Even though some hardware is still shared in a server with n CPUs of m cores each, there will be more and more hardware support to allow individual CPUs and even individual cores to run different operating systems.
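As a sketch of what this parallelism looks like from a programmer's point of view (a hypothetical JAVA example of my own, not taken from any product mentioned here), work can be split over one thread per available CPU/core, and the scheduler can then place each thread on its own core:

```java
// Sketch: split a summation over one thread per CPU/core; the OS
// scheduler can then run each thread on a different core.
public class ParallelSum {
    static long parallelSum(final int total) throws InterruptedException {
        int n = Runtime.getRuntime().availableProcessors(); // CPUs x cores
        final long[] partial = new long[n];
        Thread[] workers = new Thread[n];
        final int chunk = total / n;
        for (int t = 0; t < n; t++) {
            final int id = t;
            final int lo = id * chunk + 1;
            final int hi = (id == n - 1) ? total : (id + 1) * chunk;
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    long s = 0;                     // each thread sums its own range
                    for (int i = lo; i <= hi; i++) s += i;
                    partial[id] = s;
                }
            });
            workers[t].start();
        }
        long sum = 0;
        for (int t = 0; t < n; t++) {               // wait for all threads, combine
            workers[t].join();
            sum += partial[t];
        }
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        // Closed form check: 1 + 2 + ... + 1000000 = 500000500000
        System.out.println(parallelSum(1000000));
    }
}
```

Of course, on a machine with a single core the threads just time-slice; the point is that the same code automatically uses however many cores the hardware offers.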
You may think this is a new concept. Well, for the Wintel architecture it certainly is, but if you take a look at IBM's z-series (/390 architecture), it has been standard there for years. At my customers, a single host CPU (with about 3 to 5 processors) runs at least 3 production environments in parallel. That technology even allows you to dynamically define how much of the total CPU power each Virtual Machine (called an LPAR, Logical Partition, there) gets.
Nuclear-powered spaceflight is certainly the way to go. NASA is working on spacecraft where nuclear energy is not just used to power electric instruments, but to replace the chemical engine with electric engines. For that, the output of nuclear energy sources needs to be increased, eventually from about 200 watts to hundreds of kilowatts.
What makes most sense to me would be to develop engines that use nuclear power to heat hydrogen to extreme temperatures, resulting in high specific impulses (that's the key figure of merit for an engine's efficiency). You can read about a concept for a Saturn-5-sized rocket that would be so powerful that it could be launched into space, eject its mission load, and then land vertically again. The whole rocket! Compare that to what returned from a Moon mission with a Saturn-5: a few tons out of 2800 tons launched!
The exhaust speed would be higher than escape velocity, so you could throw waste into the engine and that waste would be blown into the sun or out of our solar system!
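A rough back-of-the-envelope check on that claim (my own figures, not from the article): exhaust velocity and specific impulse are tied together by

```latex
v_e = g_0 \, I_{sp}, \qquad g_0 \approx 9.81\ \mathrm{m/s^2}
```

A chemical hydrogen/oxygen engine reaches roughly $I_{sp} \approx 450\,\mathrm{s}$, i.e. $v_e \approx 4.4\,\mathrm{km/s}$; a solid-core nuclear engine heating pure hydrogen roughly doubles that; and only the more speculative gas-core concepts, with specific impulses of several thousand seconds, would push $v_e$ clearly past Earth's escape velocity of $11.2\,\mathrm{km/s}$.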
As we all can see, IBM's interest in OS/2 has diminished dramatically since 1995. We got a new server version, OS/2 Warp Server for e-business, but there is no sign of a new client. IBM has said that, at least for the moment, neither they nor any vendor like Stardock will do, or is allowed to do, a new client. Also, a vague announcement may be interpreted to mean that starting February 2000, all enhancements (like e.g. NetScape Communicator) and possibly also fixpacks will no longer be available free of charge.
That pretty much fits the picture of IBM trying to get rid of OS/2. While that leaves a bad impression on the remaining base of home OS/2 users like us, it's not such a bad message for IBM and their corporate OS/2 users, because:
Assuming that the importance of OS/2 (IBM), or of IBM software in general, is very minor compared to WINxx (MicroSoft), does that mean IBM will also become minor compared to MS? I think the clear answer is no, because:
Thus, I think IBM has a bright future, and the customers win too, as there is no monopoly in the services market. The absence of a monopoly is what matters, because only then is the market organized efficiently and both parties win. Consider, by contrast, how bad and inefficient the MicroSoft monopoly definitely is.
IBM is still here and still profitable, even though they are no longer competing with MicroSoft. OS/2 is still available from the OEM vendor Serenity Systems. It's still a niche market with a small, loyal fan club behind it.
As OS/2 does not support WIN32 applications (and IBM doesn't understand that vendors won't start writing for OS/2, so WIN32 support is needed for OS/2 to survive), a project has been established that provides a kind of compiler that compiles WIN32 applications into native OS/2 executables. You can get more information about the WIN32-OS/2 project from the page maintained by Timur Tabi, a former MMPM/2 device driver developer (before IBM decided a server OS doesn't need sound drivers), now working for Cirrus.
I think that if it works well (and I don't doubt it will, given the experts working on it), this will be significant enough to keep OS/2 around for a while. If applications even run more stably and faster there than natively on WIN32 (notably WIN95), and there are some signs of that, this could lead to the proposed OpenWIN32 API standard: control of the WIN32 API would be taken away from Microsoft and given to a kind of standards institute. A key prerequisite would be a compiler and toolkit outside Microsoft's control, so that alternatives exist when they again tune some APIs to break existing code.
The organization has changed; OS/2 Netlabs is now in charge. While the core team is still there, this has allowed others to join.
The Opera Browser is being ported to OS/2 in parallel to the ongoing development of Odin, as the project is now called.
It's still under development and has achieved some success, not just in running Doom under OS/2 but also business applications like Acrobat Reader, OpenOffice, and Lotus Notes. I only played with it a few years ago. While what has been achieved is certainly technically outstanding, the trend seems to me to be running virtual machines under a host operating system, e.g. Connectix VirtualPC (now part of MicroSoft as their strategic Virtual Server), VMWare (though they actively prevent running OS/2 in their VM), and others.
I'm not sure if that project is just a joke on the web; we will find out by the end of 1997, when the first betas are promised. But if it is really true, and technically I don't doubt it, though the effort must be enormous, it may become a future killer operating system. You can judge for yourself from the DOSiX home page.
What is DOSiX? DOSiX can best be described as what IBM promised (and failed to deliver) 3 years ago under the name Workplace OS. The concept is that you have a "kernel" operating system that runs other operating systems as personalities (that is, clients), so you could run OS/2, WIN32, Apple System 8,... simultaneously on a single machine.
DOSiX in my opinion goes even beyond that: one personality acts as the server for services (for example, the OS/2 personality for HPFS partitions) that other personalities can request as clients (for example, WIN32 accessing HPFS).
Even after Workplace OS failed, I still think the concept of such an operating system is excellent, and IBM should try again to develop such a beast. IBM makes money on the platforms where it controls the OS, so if users want WIN32, IBM, not Microsoft, should provide a WIN32-capable OS (and with a Workplace OS, IBM could still keep the OS/2 API alive and add support for other OSes).
Well, I have to admit it was just a fake, though running virtual machines is indeed possible (e.g. NT under Linux): just look at VMWare, which works quite well, or Bochs (which, because it emulates the whole x86 architecture, is much slower).
Virtual Machines are becoming increasingly popular; in fact, I'm writing today's update in an OS/2 guest session under XP SP2 using VirtualPC. If you ignore the thermal problems, you can throw so much CPU power into a PC that a Virtual Machine feels like a native one.
This area has nothing to do with PCs in particular, but nevertheless I think there will be a dramatic change real soon now, that is, within the next decade.
I predict that manned spaceflight will move away from non-reusable spacecraft like the European Ariane-5 and the Russian Soyuz, and also from the reusable US Space Shuttle, to a technology that is much cheaper (small devices with the required high reliability are much cheaper to build than large ones like the 2800-ton Shuttle).
Mankind will get into space with spacecraft that look similar to, work similarly to, and are about the size of today's fighter aircraft. In detail, a fighter-like spacecraft will either launch horizontally from a normal runway and be refueled in the air (before heading into space), or be carried to a height of more than 10 km by a carrier plane (like the 747). From that temporary position in the atmosphere it will start its journey into space.
Well, Space Ship One isn't exactly a jet, because it is powered not by a jet engine but by a chemical rocket; still, it operates exactly like a plane (horizontal launch and landing). It's still far away from real spaceflight, as it only does a ballistic shot touching the edge of space. For a real spaceflight, one would need about 10 times the speed and would have to handle the problem of reentry. I've read that Space Ship One's engine has only 2 percent of the energy that would be needed for a real spaceflight. So, still much homework to do.
But the real revolution is that this ballistic flight was privately funded (though Paul Allen surely has more money left over from his MicroSoft days than some small states ;-).
I found a study on the Internet (sorry, I haven't saved the URL) from a US firm showing that an F-16-sized spacecraft could carry about 500 pounds into space, or 6000 pounds to any location on Earth.
This craft would launch horizontally with about 7% of its fuel on board. In the air it would be refuelled to full tank capacity by a tanker plane before starting the final leg into space (the fuel is H2O2, hydrogen peroxide, which according to the study has double the weight/energy ratio of hydrogen). The study predicts that, as this spacecraft is fully reusable, it would take only about 1 day to prepare for a new mission. If I remember correctly, one unit would cost about 100 million dollars, and they suggest building a few hundred (exactly like a new jet fighter generation; besides space missions, the spacecraft could also be used for instantaneous military strikes on targets far away).
It looks like in-air refuelling is still used only for military jet missions when no airbase or carrier is nearby. But it's proven technology nowadays and could certainly also be used for spaceflight.
Though I haven't read about this technology yet, based on NASA's X-15 program I predict that with today's technology it would be very cheap, reliable, and easy to get humans into space with spacecraft that are carried on a 747 to more than 12 km altitude and accelerated to almost the speed of sound before separating from the 747 and using their own thrust to get into space.
The X-15 was about 2 (?) tons in weight and could reach 100 km altitude. As the 747 can lift 60 tons (as it does when transporting the Shuttle from California to Florida), I have no doubt that a spacecraft of that weight would have enough energy to get 500 pounds (a few humans, say) into an orbit of 400 km or so. This spacecraft would also be comparable in size to a fighter jet (though a larger one) and fully reusable. One thing I'm not sure about is whether the spacecraft would be completely powered by liquid fuel or would also have boosters to jettison like the Shuttle (boosters have the design problem that they can't be shut down once burning).
For lifting heavy payloads, carriers like the Shuttle, Ariane, or Progress will still be used, but as they would transport only freight, they don't have to be as reliable as when also transporting humans. And even slightly less reliable means much, much cheaper (a simple matter of probability theory). The small spacecraft transporting humans of course has to be reliable, but making such small devices reliable is much cheaper.
I think NASA once thought about using the STS (Space Transportation System, aka Shuttle) for unmanned flights by building a Shuttle-C (cargo). I still think it makes sense to attach a cheap, non-reusable, unmanned cargo craft to the tanks and boosters instead of the orbiter; not being reusable, it could launch about 60 additional tons into low orbit.
Well, I would count Space Ship One in that category, though it's not a real spacecraft, only making a ballistic flight to the edge of space.
Another example is the Pegasus rocket, which is carried to 12,000 meters by a passenger jet (I think alternatively also by an F-15 fighter jet, probably with less weight, but at higher speed and altitude) and then released.
JAVA promises that applets written in JAVA will be able to run on any platform that supports JAVA. This is done with a technology called p-code (I think). A p-code compiler doesn't compile source code into instructions from the destination CPU's instruction set (e.g. Intel, Motorola), but into an intermediate code that can be converted to a platform's native instruction set very quickly. To summarize: JAVA is compiled on the server to an intermediate instruction set, which gets interpreted at the client after the JAVA applet is downloaded.
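As a hedged illustration of the idea (my own minimal example, not an actual applet): the source below is compiled once into CPU-independent bytecode (a .class file), and that same bytecode then runs on any platform's JVM, which maps the intermediate instructions to the local CPU:

```java
// Compile once with javac; the resulting WhereAmI.class contains no
// Intel or Motorola instructions, only intermediate bytecode. Each
// platform's JVM translates it to the local instruction set at run time.
public class WhereAmI {
    static String describe() {
        // The very same bytecode reports a different platform wherever it runs.
        return "Running on " + System.getProperty("os.name")
             + " (" + System.getProperty("os.arch") + ")";
    }
    public static void main(String[] args) {
        System.out.println(describe());
    }
}
```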
The p-code technology is neither a new discovery nor something developed for JAVA, so why do I think JAVA will become a success?
A year ago I predicted that JAVA would be a success within about 2 years. After a year we can see almost industry-wide support for JAVA, and even some JAVA applications are in production now (e.g. Cera-Bank, IBM's TCP/IP 4.1 configuration notebooks).
MicroSoft is trying to torpedo JAVA and replace it with its proprietary Active-X, which not only has a much worse design but seems to serve mainly as a security hole that lets hackers break into Win-based PCs.
JAVA is still there and doing well in some niche markets, but it has definitely lost some momentum. Just note JavaOS, the joint effort between Sun and IBM to make a JAVA-based operating system, which was scrapped soon after it was completed, or Corel Office for JAVA, which silently disappeared after it became clear that a full-blown "fat" office suite just won't work in JAVA, as it became too big and too slow.
Now there are fantastic tools that should allow one to build JAVA applications easily, probably even without knowing how to program at all (which likely results in the kind of crap applications we know from VisualBasic), like e.g. IBM's WebSphere suite.
IBM is throwing all its mass behind JAVA, so customer projects are quite often done in JAVA. In fact, if one admits to not really knowing JAVA but preferring C and C++ (like me ;-), one is considered a peculiar type.
I still think C/C++ has its place beside JAVA; consider, for example, Linux: almost everything there is done in C. Using JAVA for business applications is certainly tempting, because it simply has a library for everything. You have all the building blocks, and you just need to connect them to form your business application. C/C++ certainly does not have as many libraries, so for larger parts of a project you would need to write your own.
What I want to say is: JAVA is certainly not the solution to every problem, but currently (at least at IBM) it is treated as the ultimate wisdom.
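To make the "building blocks" point concrete with a made-up example (nothing here is from an actual IBM project): a small task like counting word frequencies is mostly just wiring together classes the standard library already provides:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

// Word-frequency count built entirely from standard-library pieces:
// StringTokenizer splits the text, HashMap accumulates the counts.
public class WordCount {
    static Map<String, Integer> count(String text) {
        Map<String, Integer> freq = new HashMap<String, Integer>();
        StringTokenizer tok = new StringTokenizer(text.toLowerCase());
        while (tok.hasMoreTokens()) {
            String w = tok.nextToken();
            Integer c = freq.get(w);
            freq.put(w, c == null ? 1 : c + 1);
        }
        return freq;
    }
    public static void main(String[] args) {
        System.out.println(count("to be or not to be"));
    }
}
```

In C you would first have to find or write a hash table and a tokenizer before even starting on the actual task.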
The WWW (World Wide Web) currently uses HTML (HyperText Markup Language) as the format for the documents transferred between server and client. There are standards, e.g. HTML-1, HTML-2, and HTML-3, and there are also proprietary additions to these standards by the two main providers of client software, partly to overcome limitations of the standard, partly to establish their own extensions as the standard in order to monopolize the market and eliminate competition.
HTML can be called two-dimensional, like any other popular HyperText product. You can display text and pictures (and, with some of the proprietary extensions, also animated images), but your actions are limited to, let's say, flat interaction. You can change the point of your interaction within the document (that's the first dimension), e.g. by scrolling up and down, and between documents (that's the second dimension), e.g. by following links. It's similar to a picture: a picture may represent a room, but you can't get into the depths of that room.
What's missing is the three-dimensional component, that is, jumping into the depths of the room in the picture example above. With Hyper-G and VRML (Virtual Reality Modeling Language) this dimension is added. The picture of course stays two-dimensional on the display (unless holography becomes an alternative), but you can interact three-dimensionally with it.
As an example, take a museum. The first picture after entering the museum is the entrance hall. When walking into the depth of the entrance hall (using a mouse or joystick as the controlling instrument), the picture in your browser is constantly updated. You turn left towards a painting you are interested in, and the picture follows your movement. You get closer to the painting, and the picture in your browser zooms in accordingly. It's like one of the popular texture-mapping games such as DOOM or Duke3D: the picture you get depends on the interactions you make, and it's all done in real time.
I predict that this will become an important part of the WWW architecture within the next 3 years. One can only hope that this technology doesn't get monopolized the way mainstream business applications are today (or, to say it more clearly, I do expect an OS/2 server and browser; this is a chance to present OS/2 as the superior technology).
Unfortunately, not much seems to have happened in the last year. Sure, texture-mapping games are now standard, but they use either a proprietary HW interface or Open-GL (which, by the way, also seems to be a technology worth a closer look).
However, I still do expect these technologies to succeed within the next few years.
It looks like this technology is still on the drawing board. The latest study I've heard about that uses a 3-dimensional representation concerns advanced search engines that let you seek your data the way you would use a telescope to focus on an object in space.
Still no great progress; at the moment XML seems to have a better chance of becoming a standard.
The NC assumes that most if not all processing power, memory, and disk space is located at central servers. Clients with only minimal processing power, little memory, and no disk space are connected to the servers. The server does all the processing; the client just displays the results and requests data from the server on the user's behalf.
This indicates what the NC stands for: it is not very expensive because of its limited resources, it is specialized in displaying data via a nice user interface, and it requires fast connections to the server.
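The client/server split described above can be sketched in a few lines (a hypothetical toy protocol of my own invention; real NCs would boot and load whole applications over the network): the server owns all the processing power, and the thin client only sends a request and displays the reply:

```java
import java.io.*;
import java.net.*;

// Sketch of the NC idea with a made-up one-command protocol:
// all computation happens on the server; the thin client only
// forwards a request and displays the reply.
public class ThinClientDemo {
    // Server side: owns the data and the processing power.
    static void serveOnce(ServerSocket server) throws IOException {
        Socket s = server.accept();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(s.getInputStream()));
        PrintWriter out = new PrintWriter(s.getOutputStream(), true);
        String req = in.readLine();                  // e.g. "UPPER hello"
        out.println(req.substring(6).toUpperCase()); // the server does the work
        s.close();
    }

    // Client side: no local processing beyond displaying the result.
    static String request(int port, String text) throws IOException {
        Socket s = new Socket("localhost", port);
        PrintWriter out = new PrintWriter(s.getOutputStream(), true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(s.getInputStream()));
        out.println("UPPER " + text);
        String reply = in.readLine();
        s.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        final ServerSocket server = new ServerSocket(0); // any free port
        Thread t = new Thread(new Runnable() {
            public void run() {
                try { serveOnce(server); } catch (IOException e) { }
            }
        });
        t.start();
        System.out.println(request(server.getLocalPort(), "hello"));
        t.join();
        server.close();
    }
}
```

This also shows why the NC depends on fast connections: every keystroke of real work becomes a network round trip.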
Ask yourself a single question: "Have you ever seen an NC in everyday use (that is, outside magazines, fairs, demos, announcements,...)?" I doubt it.
As predicted, NCs are and will remain a niche market. On the one hand, one can argue that this is not good (NCs promised better administration and better security, and their failure means even more Microsoft dominance,...); on the other hand, one can see why the trend makes sense (PCs are cheap, PCs run all kinds of apps including games, PC administration can also be improved, e.g. by WorkSpace On Demand, and there are many cheap standard applications,...).
I think saying that the NC is dead comes pretty close to the truth, if we take the NC to be a PC-like computer implementing the "thin" client, where software and data are loaded from the server, in contrast to the "fat" client, which is synonymous with a PC.
What really will become successful is not what is usually called an NC, but intelligent devices. Intelligent business devices like mobile phones, multimedia devices like video players, TVs with built-in Internet access, and household devices like ovens and washing machines will become networked. These devices will become smarter and smarter and will be internetworked to cooperate, but I would not call them NCs (though JAVA makes very much sense for such devices too).
Do you even remember the term NC? Hey, even slim clients are no more than a niche market compared to the old, traditional fat clients.
The revolution lies not in PC type hardware, but smart small devices like mobile phones.
Open32 is a subset of native OS/2 APIs that are named, called, used, and behave like their Windows WIN32 counterparts. This should allow developers to compile WIN32 applications into OS/2 applications with little more than a change of compiler switches. Even better, SMART, a tool that analyzes source code (WIN16, WIN32, 16-bit OS/2,...), can give precious hints about what can be cross-compiled easily and what requires manual intervention.
Both should create native OS/2 applications derived mainly from WIN16 and WIN32 applications. I agree that this makes OS/2 development much easier for vendors who usually write Windows software, but in my eyes it comes at a terrible price: cross-compiling Windows applications to OS/2 means destabilizing the more stable OS/2 platform with badly programmed and badly behaved Windows applications.
Under Windows, no one seems to care about rebooting x times a day (and Windows reboots faster than OS/2), whereas the small fraction of OS/2 users certainly keeps stability in mind when thinking about OS/2.
Under Windows, no one cares if the print spooler monopolizes the system instead of doing its task in the background. And Windows95 has merely changed the mouse pointer shown while such applications monopolize the PC, without much improvement to the multitasking itself. Of course there are similar applications in OS/2, VIEW.EXE to name one, but they are a minority.
I would love to get more vendors writing OS/2 software, but simply porting their ugly Windows programs is not what I want, and that is what Open32 and SMART can do; unfortunately, not more.
I would really prefer the IBM Open Class Library to be used for both Windows and OS/2 development; this would give us both better Windows and better OS/2 applications. I don't think Open32 and SMART are bad; they are just pointing in the wrong direction (and you will see what they can and can't do when Lotus SmartSuite for OS/2 gets finished RSN (Real Soon Now, the same term used for Doom/2 for OS/2 two years ago...)).
Unfortunately, I was right with my prophecies too; thanks to IBM, even OS/2 vendors are now leaving the boat. OS/2 is doomed (mainly by IBM) to become a niche market unless a "bet the company" strategy becomes the operational concept. As discussed in OS2ARENA, the PC OS and application software market is the fastest-growing one, and IBM decided to retreat; only an effort like the development of the /360, one that could cause the whole company to fail, can stop the Wintel monopoly from dominating the coming years and likely decades.
This is not related to Open32, but Microsoft doesn't stop at controlling 95% of the desktops; engagements in Web technology, TV, and even satellites (by Bill himself) show that they have a vision of dominating everything. Unfortunately, IBM's vision is to get those parts of the cake Microsoft hasn't taken yet (but undoubtedly will).
Open32 was the mistake I predicted years ago. The only major program using it that I know of is Lotus SmartSuite for OS/2 Warp, and that bloatware is a bad example: neither V1.0 nor V1.1 works correctly. I installed it on multiple PCs, but something failed here, something trapped there.
At least there is a better alternative now: the now-free StarOffice from Sun, which is also a cross-platform office suite. Did I mention you can download it for free from Sun? Really worth a try!