What's wrong with good old Unix
The main focus is on the desktop.
- This is where the old Unix model shows its age.
- This is where any missing flexibility will promptly be "fixed" by the users.
- And this is where an ILoveYou attachment will be launched with enthusiasm.
- This is also where users will complain loudly every time they have to open a terminal
and add a line to /etc/somefile.cfg.
While the Unix kernel is an amazing piece of engineering, and the new efforts KDE and Gnome try
very hard to provide a modern user interface, a lot of system utilities are still based on old
programming models and hamper new UI development.
Some of the things listed here are completely out of the scope of my SYS++ project, but I list them
here as common Unix problems:
- The security model needs improvement
Pushing ahead with a security model which claims that the way to increase security is to reduce
the users' convenience and flexibility is not something the users will appreciate. For example,
asking for the root password in order to start the Internet dialer (kppp) is ridiculous, to say
the least. Of course, we can't jump on the W****** bandwagon either and allow our systems to
become virus breeding containers. We have to change the design to allow convenience while
increasing security.
- Virus-resistant systems
The big threat in the future will be malware. As Linux (for example)
becomes mainstream, it is likely to become the favorite target of virus writers. Despite the fact
that the Unix multiuser model provides a very solid base against virus attacks, there will always be
vulnerabilities that may be exploited. We need to back a model in which the damage can be contained
even in the case of a successful attack.
- A library for registering configuration and resource files
Many Linux programs can't be moved from the location where --prefix was set when they were
compiled. Many RPMs will just tell you "Package is not relocatable" if you try to
install them into another directory. Some programs try to solve this problem by looking
for their files in a series of standard locations until they find them, but if installed into
a nonstandard location they will fail to work. KDE tries to solve this problem with the KDEDIRS
environment variable: a KDE program will scan all the paths listed there if its files are not
found in the --prefix location. Well, if you want to install each program into its own directory
this variable will grow large, so at startup each program will lose time scanning tens of
directories.
A better solution would be an API managing a database of (key, value) pairs, where the key is
composed of a program's unique name, its version and the requested file, and the value is the
path of that file (a sketch follows below). It would also allow you to move a program's files
wherever you want, since the move operation could look up the entries in the database and
update them.
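To make the idea concrete, here is a minimal C++ sketch of such a registry; the class, the method
names and the in-memory map are my own assumptions, since SYS++ does not define them yet:

    #include <map>
    #include <optional>
    #include <string>
    #include <iostream>

    class ResourceRegistry {
    public:
        // Register the real location of a logical file for one program version.
        void register_file(const std::string& program, const std::string& version,
                           const std::string& logical_name, const std::string& path) {
            db_[key(program, version, logical_name)] = path;
        }
        // Look up where the file actually lives; no directory scanning needed.
        std::optional<std::string> locate(const std::string& program,
                                          const std::string& version,
                                          const std::string& logical_name) const {
            auto it = db_.find(key(program, version, logical_name));
            if (it == db_.end()) return std::nullopt;
            return it->second;
        }
    private:
        static std::string key(const std::string& p, const std::string& v,
                               const std::string& n) { return p + "/" + v + "/" + n; }
        std::map<std::string, std::string> db_;  // sketch: in memory only
    };

    int main() {
        ResourceRegistry reg;
        reg.register_file("kmail", "1.5", "config", "/opt/kde/etc/kmailrc");
        if (auto p = reg.locate("kmail", "1.5", "config"))
            std::cout << *p << '\n';   // prints the registered path
    }

In a real system the map would of course be a persistent, system-wide database, written by the
package installer and by a "move program" utility.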
- New installation programs
We need a set of installation utilities structured on
different security levels. Obviously, installing the kernel and the core system
is a completely different security issue than installing the interface of a network game. Not only
must the signature of the package be verified (to ensure it has not been tampered with), but the
installer should also check a database of "levels of trust" to know whether a particular package
is trustworthy enough to be installed with the privileges it requests.
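A minimal sketch of how an installer could combine the two checks; the trust levels, the key names
and the stubbed signature check are all hypothetical:

    #include <string>
    #include <map>
    #include <iostream>

    // Hypothetical trust levels, ordered from least to most privileged.
    enum class TrustLevel { Untrusted, UserApps, SystemServices, Core };

    // The signature check itself (e.g. a GPG verification) is out of scope
    // here and reduced to a stub.
    bool signature_valid(const std::string& /*package*/) { return true; }

    // The trust database maps a signing key to the highest level at which
    // its packages may be installed.
    std::map<std::string, TrustLevel> trust_db = {
        {"distro-core-key",  TrustLevel::Core},
        {"games-vendor-key", TrustLevel::UserApps},
    };

    bool may_install(const std::string& pkg, const std::string& signer,
                     TrustLevel requested) {
        if (!signature_valid(pkg)) return false;      // tampering check
        auto it = trust_db.find(signer);
        if (it == trust_db.end()) return false;       // unknown signer: refuse
        return static_cast<int>(requested) <= static_cast<int>(it->second);
    }

    int main() {
        // A network game's interface must not be installable at core level.
        std::cout << may_install("netgame-ui.rpm", "games-vendor-key",
                                 TrustLevel::Core) << '\n';      // prints 0
        std::cout << may_install("netgame-ui.rpm", "games-vendor-key",
                                 TrustLevel::UserApps) << '\n';  // prints 1
    }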
- The Sys V init
is still the major player in service management, despite the fact
that it is completely inappropriate for new GUI-oriented desktop systems. The daemontools package
tries to fix some aspects of this, but unfortunately it stopped halfway down the road, leaving the
"overscripting abuse" unsolved. For a home desktop user, the runlevel editor is an abomination.
- Overscripting abuse
A look at the /etc/init.d directory provides an insight into this problem.
Each service is started by a script that accepts only the parameters start, stop, restart and
status. Writing a GUI configuration program that would enable a nonprofessional to manage his
system in more detail is a "mission impossible" task. In the end, if you want to change the uid
and supplementary groups a specific service runs with, the only answer is:
joe /etc/init.d/servicename . However, some users still prefer vi :-)
This overscripting abuse is what is responsible for Linux's reputation of being "not for home
users" (a declarative alternative is sketched below). The same consideration also applies to
device driver management. The scripting abilities of modprobe and /etc/modules.conf are very
powerful, but they only hurt the ability to write a solid GUI driver management tool.
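For contrast, here is a sketch of a declarative, machine-editable service description; the
structure and its field names are my assumptions, not an existing format:

    #include <string>
    #include <vector>
    #include <sys/types.h>

    // Sketch: a service described as data instead of shell code. A GUI tool
    // can read and rewrite these fields safely, which it cannot do with a
    // script.
    struct ServiceDescription {
        std::string name;                    // e.g. "httpd"
        std::string executable;             // e.g. "/usr/sbin/httpd"
        std::vector<std::string> arguments;
        uid_t uid;                           // user to run as
        gid_t gid;                           // primary group
        std::vector<gid_t> supplementary_groups;
        std::vector<std::string> depends_on; // services that must be up first
    };

    int main() {
        ServiceDescription httpd{"httpd", "/usr/sbin/httpd", {},
                                 48, 48, {}, {"network"}};
        (void)httpd;   // a service manager would act on records like this
    }

A service manager would read such records and perform the setgroups/setuid/exec sequence itself,
so changing the uid a service runs with becomes a field edit in a GUI instead of a script patch.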
- A network executable format
Users will always ask for the ability to send animated
desktop toys by e-mail. As Christmas comes, there will be a frenzy for such things. An operating
system that can't start such a toy with a click (or double-click) on the attachment icon will not
be happily adopted as a home desktop operating system. But we need a way to enable this kind of
action without jeopardizing system security. This is the reason we need to define a network
executable format that can carry an executable payload and be cryptographically signed. The
network executable should be verifiable against a keyserver to confirm the origin of the software,
and in any case it must be shielded from accessing the user's files.
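One possible on-disk layout for such a container is sketched below; every field here is an
assumption, since the format itself is not defined yet:

    #include <cstdint>
    #include <vector>

    // Hypothetical container layout for a signed network executable.
    struct NetExecHeader {
        uint32_t magic;            // identifies the network-executable format
        uint32_t version;          // format revision
        uint32_t payload_size;     // size of the executable payload in bytes
        uint32_t signature_size;   // size of the detached signature in bytes
        char     signer_id[64];    // key fingerprint, checked on a keyserver
    };

    struct NetExecFile {
        NetExecHeader header;
        std::vector<uint8_t> payload;    // the toy itself
        std::vector<uint8_t> signature;  // covers header + payload
    };

    int main() {
        NetExecFile f{};
        (void)f;   // the loader would verify the signature against the
                   // keyserver, then run the payload in a sandbox with no
                   // access to the user's files
    }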
- Lack of the user data concept
It is unbelievable in an Open Source system such as Linux to see
that Ximian Evolution and KDE's KMail/KAddressBook/KOrganizer each have their own mailbox
directory, their own calendar files and their own contact database. There is a new project,
Chandler, and one of the first problems discussed on its mailing lists was "import filters". Man,
in the commercial world it is understandable that each company tries hard to keep its users locked
into its own application, even if "the others" have something more appealing. But here?
User data is user data, regardless of what application is used to access it. A cooperative effort
is required to define a common API, back it in all these projects, and build a common set of
libraries for accessing user data (a sketch follows below).
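As an illustration, here is a sketch of what a common address book API could look like; the
interface and all names are hypothetical:

    #include <algorithm>
    #include <string>
    #include <vector>
    #include <iostream>

    struct Contact { std::string name, email, phone; };

    // The shared API every mail/PIM application would link against, instead
    // of keeping a private contact database.
    class AddressBook {
    public:
        virtual ~AddressBook() = default;
        virtual std::vector<Contact> all_contacts() const = 0;
        virtual void add(const Contact& c) = 0;
        virtual void remove(const std::string& email) = 0;
    };

    // One shared in-memory implementation standing in for the common store.
    class SharedAddressBook : public AddressBook {
    public:
        std::vector<Contact> all_contacts() const override { return contacts_; }
        void add(const Contact& c) override { contacts_.push_back(c); }
        void remove(const std::string& email) override {
            contacts_.erase(std::remove_if(contacts_.begin(), contacts_.end(),
                [&](const Contact& c) { return c.email == email; }),
                contacts_.end());
        }
    private:
        std::vector<Contact> contacts_;
    };

    int main() {
        SharedAddressBook book;           // the same store for every client
        book.add({"Alice", "alice@example.org", ""});
        for (const auto& c : book.all_contacts())
            std::cout << c.name << " <" << c.email << ">\n";
    }

Evolution, KMail and the others would then be different front ends over the same store, and
"import filters" would become unnecessary.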
- GNU tools (automake, autoconf ...) are obsolete
Now, when the amount of software distributed as source code is huge, these tools
show their age. In many cases, if you have a customized system, the configure script
distributed to help you build the program just creates more trouble than it solves.
It is also damn slow. If I compile 100 programs on my computer, why do I need 100 checks for
whether I have libdl.so, whether Qt compiles without flags, and so on? The problem is the model.
It should be the responsibility of my system to configure the build environment, not the
responsibility of an incoming script. The only thing the incoming code should do is declare
what is required for building and what the specific parameters for this build are.
Then it is my system's responsibility to provide the right build environment. The local
configuration programs should work hand in hand with the library installation program so
that no "runtime" checks have to be done (see the sketch below).