If you own a website, I recommend creating an account at Bravenet to get various FREE site tools/services (such as a counter, a guestbook etc.; see affiliate.html for the full list); and if you decide to do so, then please register "through" my affiliate program's banner above.
This page contains especially important articles written by me (most of them exposing/debunking various PC-related myths), but it also contains other explanations, descriptions and general computing-related information. Similarly to the other pages on this site, the content is mostly personal (things that I've discovered myself and written about exclusively), plus material that I've found while browsing and reading on the Internet (i.e. articles written by others; by the way, for a few interesting web-coding related articles in particular, also see the "website.html" page). Note, however, that the non-personal pieces at least contain modified text and are not just copied and pasted, i.e. they are written in my own words, although the general point remains the same.
For example, all the articles on this "articles.html" page are written by me; however, particularly for the "COMPUTER-IDENTIFICATION ON THE NET" one, I got the general idea from some site while browsing the Internet, and then added and removed a few things, changed others etc. It's quite similar with the "RUNNING A COMPUTER NON-STOP OR NOT" one, i.e. the part about "MTBF ratings" is not solely mine, and with the "THE MEMORY-FREEING PROGRAMS MYTH" one, whose first part is modified text from the linked articles. All the others I wrote entirely by myself. But anyway, to write an article one needs to get the knowledge and the idea somewhere, so in my opinion the "originality" of something is a rather relative thing.
In short: my current hobby is, or better put, for the past five or so years I have been interested in computing in general; that means being interested in various computing concepts (i.e. how things work, for example by observing the OS's behaviour with the Process Explorer application, in particular its handle and DLL views/panes), in customizing the OS and finding its limits and capabilities, in having basic programming/scripting skills, and in pretty much anything else related to a desktop computer. This also includes knowing how to cope with various errors and how to change various computer settings (undocumented/optional parameters under various headers in the OS's or a program's .ini files, usage of Environment Variables, but mainly numerous registry hacks), and therefore, as the most important thing, increasing the speed and stability of the computer. I have summarized all the important things into the articles below.
My various blogs: Here is a list of links to my various blogs. First, a link to my Voljatel Blog, in the Slovenian language; second, links to the two "ad revenue sharing community" blogs, the Senserely Blog and the writingUp Blog, in English; then the three unsorted blogs, the Kuro5hin Blog, the Spread Firefox Blog and the CastleCops Blog, also in English; and finally the two futile blogs, the Slashdot Blog (this one doesn't get much attention) and the Techrepublic Blog (it's inaccessible to non-registered visitors), both in English too.
/NOTE: From the site's update on 5.6.2006 onwards, this particular site will not be updated anymore. To be honest, I made a few additional modifications on 6.6., 7.6., 8.6., 9.6., and 16.6., further on 13.7., 23.7., 23.8., and 26.9. in 2006, and finally on 14.1. in 2007 (which was the absolutely last update), but that was all just fixing old errors and formatting; no new content was added. Optionally see the last "events-entry" on the "events7.html" page (a short related announcement) and the first entry on the "events8.html" page (it describes all this in great detail); the second one, however, is located only on the still-updated site variants. Anyway, this notice applies to the Bravenet, Freehost386, Geocities, and Greatnow free hosts (and from 14.1.2007 also the Atspace free host), so for the current variant with fresh content please head to one of the two main sites, 50webs or Voljatel, which are, as mentioned, the only ones still being updated.
For starters, let me emphasize again that my involvement with computers started roughly three or four years ago, when I was still studying architecture; in fact it was because of the architecture studies that I got my second PC (the first one was a Sinclair Spectrum +, a decade ago), and that was the main reason I gave up studying. As you probably guessed, since then I have been obsessed with computing and with pretty much everything related to it, but especially with knowing how to use things more reasonably, efficiently and safely. During this "process" of using my PC, learning about it and changing its settings and configuration, I have also learned a little bit about programming basics in general, particularly the HTML and JS languages. I was also learning the ABC programming language, which is a totally neat thing: its implementation for Windows/DOS contains the interpreter and environment in one package, and heh, no installation is required at all. Finally there is the Python scripting language, which I am still learning very slowly, and a tiny bit of C++ and Intel's native assembler (a.k.a. assembly) language too. But in all the languages, except perhaps Python, I've learned less than the "raw" basics; for example, I've programmed a few "Hello World" windows and similar, mostly based on templates from tutorials I was using at the time. To summarize all this at the very beginning of this page in two short sentences: I have never ever seen an application or system crash because of so-called "low memory conditions". And further, all this stuff that you hear about "registry cleaning" (if it even worked that way) would only be true if you were running most of the time at 100% of RAM used (which is, by the way, a good thing, not a bad one), so "slowdowns" because of too many applications running (i.e. again, as long as they don't fill up the entire RAM) are a completely non-existent thing.
COMPUTER-IDENTIFICATION ON THE NET
The Internet uses a protocol called Transmission Control Protocol (TCP), developed in the 1970s by network engineers at Stanford University and others. Basically, it breaks large files down into small packets of about 1500 bytes, each carrying the address of the sender and the recipient. The sending computer transmits a packet, waits for a signal from the recipient that acknowledges its safe arrival, and then sends the next packet. If no acknowledgement comes back, the sender re-transmits the same packet at half the previous speed and repeats the process, getting slower each time, until it succeeds; this means that even minor glitches on the line can make a connection sluggish.
Further, billions of computers are connected to the Internet, and the web information located on it is stored as sites/pages, each with a unique name called a URL (Uniform Resource Locator). When you enter a Web address in the browser address bar or click a link in your Web browser to move to a new Web site, you are giving your browser the URL of the page that you want to view. For example, www.symantec.com is a typical URL. Each URL maps to the IP address of the computer that stores the Web page. URLs are used because they are easier to remember and type than IP addresses. Before your browser requests a page, it asks a DNS server for the IP address of the Web site. IP addresses are 32-bit numbers expressed as four decimal numbers, each ranging from 0 to 255 and separated by periods, for instance: 110.202.255.255. Every computer on the Internet has a unique IP address. So-called "subnet masks" are always used in conjunction with a base IP address.
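To illustrate the name-to-address step described above, here is a minimal Python sketch (just an illustration of the idea; the host name is simply the example used above) that asks the system's DNS resolver for an IP address:

    import socket

    # resolve a host name to its IP address - the same lookup the browser asks a
    # DNS server to perform before it requests a page
    print(socket.gethostbyname("www.symantec.com"))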
For example:
Base IP address: 10.0.0.1
Subnet mask: 255.255.255.0
When you are trying to identify computers, it is easier to work with groups of computers rather than having to identify each one individually. Subnet masks provide a way to identify a group of related computers, such as those on your local network. A typical subnet mask looks like this: 255.255.255.0. At its simplest, each 255 indicates the parts of the IP address that are the same for all computers within the subnet, while the 0s indicate the parts that differ. There is one particular URL that identifies your computer to itself, and that is localhost. The IP address that corresponds to localhost is 127.0.0.1 (also known as the "home IP", "loopback" or simply "this computer"); compare this to the 0.0.0.0 IP address, which means "no IP" (or unknown/any host, or simply "anywhere and everywhere"). So for example if you have a Web server on your computer, you can type http://localhost and see your web page - of course, if it exists at all.
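Here is a tiny Python sketch of the "255 means same, 0 means different" rule above: applying the subnet mask to the base IP address with a bitwise AND gives the network part that all computers on the subnet share (the two addresses are just the example values from above):

    ip = [10, 0, 0, 1]           # base IP address from the example
    mask = [255, 255, 255, 0]    # subnet mask from the example
    # 255 keeps an octet unchanged, 0 zeroes it out - the result is the network address
    network = [octet & m for octet, m in zip(ip, mask)]
    print(".".join(str(octet) for octet in network))   # prints 10.0.0.0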
Further, every device connected to the internet must have a unique IP; however, there are two types of them: "static" and "dynamic". There is also a device called a router that lets multiple computers share a single IP address. Static IP addresses are exactly what their name implies, i.e. they are static or unchanging. They are assigned by network administrators or ISPs, and one has to configure the computer or other internet device manually to respond to that specific address. But mostly this is not needed, because using "DHCP" or the "Dynamic Host Configuration Protocol" (which is the default for Windows TCP/IP connections), the computer broadcasts a special request for an IP address to the network. Another device, commonly belonging to an ISP, responds with an IP address that the computer then configures itself to use. Routers are devices that allow multiple computers to "share" a single IP address; the device that is actually connected to the internet is the router, and it is the one with the unique public IP address.
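As a small, hedged illustration of the router scenario, the following Python sketch shows the local ("private") address your own machine uses behind a typical home router; it connects a UDP socket only so that the OS picks an outgoing interface, and nothing is actually sent (the target address is an arbitrary, reserved test address):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("192.0.2.1", 80))   # reserved TEST-NET address, never actually contacted
    print(s.getsockname()[0])      # e.g. 192.168.1.100 - the private address behind the router
    s.close()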
PROCESSES AND THEIR BASE-PRIORITIES
Windows is a multitasking operating system, which means that various applications run simultaneously at any given time. The process priority class is therefore a parameter that tells the system which task has priority over the other task(s); for instance, if two programs are running at the same time and with the same priority, they will have equal shares of the CPU's time. But if you set a higher priority for one of them, the program with the higher priority will use all the free processor time, while the one with the lower priority will use only the rest of it.
/UPDATE: In the How do I set specific Win Processes to always High/Low Priority? thread: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/365007825731/inc/-1 on Ars Technica I wrote to "consider it as a joke". Although it could mean to consider my post as a joke, I indeed meant the Priority Master program: http://prioritymaster.com (written by Ted Waldron III, a guy who reminds us all at Ars Technica of Alexander Peter Kowalski, a.k.a. AlecStaar/APK, who defends his "RAM optimizer" and his other crappy programs to the point of frothing at the mouth; see this particular post in the Diabolical and SexyBiyatch wedding pics thread on Ars Technica with a collection of links to APK's posts: http://episteme.arstechnica.com/eve/ubb.x?a=tpc&s=50009562&f=34709834&m=8510980933&r=3650926043#3650926043); here is also a link to its dedicated page at "download.com": http://www.download.com/Priority-Master-2006/3000-2094_4-10498003.html. And so, I got an e-mail message from the author of the Priority Master 2006 program a few days later. Basically, he says that I need to explain myself for "stating that the program is a joke publicly on the web". The result of this is the Dear Arsians, I really need your opinion on this one ... thread: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/738001397731, which is "destined for greatness" as one of the Arsians wrote in it. Read it for its entertainment value. Oh, and by the way, he is also very similar to a certain Andrew K/Mastertech guy. You see, basically his main problem is that he failed to answer all the "hard" questions asked, for instance why he is spamming on Ars and on various other sites (i.e. reposting his crappy website numerous times to Digg) etc.; on the other hand he also failed to provide any links to the supposed origins of these so-called myths, not to mention that he also failed to explain why he is misquoting people and so on; see the Firefox Myths thread: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/558005957731 for a bit of fun, i.e. the debate that ensued was a hilarious one, and the thread is no less than 12 pages long.
And as opposed to what you may have been led to believe, a higher process priority doesn't make things run faster (i.e. it doesn't make a process or several processes run faster); remember that it is always about a particular process and its priority compared to other processes' priorities, and about the main question: is there spare CPU time? Also, a higher priority has nothing to do with how fast something "comes into action", again as long as there is spare CPU time. If you're using a program that is not being responsive because another one is hogging the CPU, you would be better off lowering the priority of the one that you aren't using. And also note that there are many things that can really bog down your system that have nothing to do with CPU utilization. If your system is busy processing disk I/O there will be little CPU activity, since disk I/O doesn't need much CPU attention, but the system will still be very sluggish to user input.
So for example, if your CD/DVD burning program consumes, let's say, 80% of the CPU when burning a CD, setting it to "Above Normal" (10) or even to "High" (13) priority will not speed it up (the process of burning) if there is no other program consuming the other 20% of the CPU. It would only make it a bit more "stable" compared to other processes; but once more, only in cases where those other processes (or better, a single process) would start consuming an enormous amount of CPU; in this particular case of the burning program consuming 80% of the CPU, that would mean a single hogging process starting to consume more than 20% of the CPU. I highly recommend you read through this thread on the Ars Technica forum. The title of the thread is A generic rule on process prioritizing (about process priorities), and here is a link pointing to the first one out of five posts: http://episteme.arstechnica.com/groupee/forums?a=tpc&s=50009562&f=99609816&m=468005824731&r=468005824731. But also note that the foreground task, i.e. the one that currently has keyboard focus, has slightly higher thread priorities anyway (this is because of the "threads queueing"); see the last paragraph in this entry below.
For instance, start Regedit and go to the main menu Edit - Find... (Ctrl+F), and start searching for some most likely non-existent string (such as "ab_ab" for instance; optionally also check the "Match whole string only" check-box), or alternatively launch your anti-adware program or on-demand antivirus scanner and let either of these scan your hard-disk. Since the default process priority is Normal, and the process is using 90-100% of the CPU, the system becomes sluggish. Now change the priority in Task Manager to Below Normal, and what do you notice? The process is still using 90-100% of the CPU, however, the system is suddenly NOT sluggish anymore. I hope you get the principle of priorities by now. So high priorities should be reserved only for things (i.e. programs/processes) that need to respond quickly to requests to run, but which don't need much CPU time when they do run. Low priorities are meant for compute-bound operations, in other words for "CPU hogging" processes, and have no effect on I/O-bound ones. Raising the priority of a process should be done only when you know that you need that particular process to run ahead of all the others at the current priority, or when you are sure that it won't hog the CPU unnecessarily itself. In fact, in some cases it may even help to lower the priority to make/keep things work right (i.e. an application running as it should); see the HELP: I can't normally play most of the games anymore thread: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/639002616731 that I opened on the Ars Technica forum when I had problems with the mouse being "delayed", or as I call it, the mouse "moving in steps", when trying to play the Star Wars - Knights of the Old Republic game from LucasArts.
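If you prefer doing the same thing from a script instead of Task Manager, here is a minimal Python sketch using the third-party psutil package (an assumption on my part, not something mentioned above; the PID is hypothetical) that lowers a CPU-hogging process to Below Normal priority on Windows:

    import psutil

    pid = 1234                                     # hypothetical PID of the CPU-hogging process
    proc = psutil.Process(pid)
    # on Windows, psutil exposes the same priority classes discussed in the text
    proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
    print(proc.name(), "is now running at Below Normal priority")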
However, it is a bit different with "thread priorities" and their queueing. You see, if a newly ready thread is at the same priority as what's currently running, it has to sit on the "ready queue" for that priority until what's currently running has used up its timeslice (along with everything else that was already on the ready queue); but if the newly ready thread is of higher priority than the currently running thread, it preempts the current thread immediately, regardless of the current thread's timeslice usage. Note also that threads have "dynamic" priorities besides their "base" ones. Finally, regarding "spare CPU time": the CPU being less than 100% busy does not necessarily mean that a thread that wants to run can run right away. On an instantaneous basis the CPU is never anything but 0% or 100% in use; it is either running a real thread or an idle thread. The "% busy" stat we see in places such as Task Manager and Perfmon is an average over the display interval, which is normally at least one second. A low "% busy" figure may be hiding busy periods that last significant fractions of a second.
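The averaging effect described above is easy to see for yourself; this hedged Python sketch (again assuming the third-party psutil package) samples CPU usage once over a whole second and then in many short bursts, where brief 100% spikes can show up even though the one-second average looks low:

    import psutil

    # one-second average, roughly like the Task Manager display interval
    print("1-second average:", psutil.cpu_percent(interval=1.0))
    # twenty 50-millisecond samples - short 100% bursts may appear here
    samples = [psutil.cpu_percent(interval=0.05) for _ in range(20)]
    print("short samples:", samples)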
RUNNING A COMPUTER NON-STOP OR NOT
The debate over whether it is better to leave a computer running 24/7 or to shut it down has been going on since the beginning of the "computer era". The answer has more to do with the type of computer, the user's usage patterns and a concern for power bills. But as a general rule, I was told on the Ars Technica forums that once an electrical device such as a computer is powered up, it appears to be best to leave it running non-stop. Power on/off cycles are damaging to a computer, i.e. they damage almost all of the crucial PC components, including the hard-disk, CPU, graphics card, buses, mobo chipsets, various "inner circuits", probably also RAM etc., and they shorten a particular device's lifetime. The cycles subject the microcircuits to flexing and fatigue due to changes in temperature; over time this can lead to a break in the circuitry and result in system failure. On the other hand, leaving the computer on all the time puts extra wear on the mechanical components, i.e. the hard drive spindle motor, cooling fans etc.

You see, thermal cycling also occurs at the digital semiconductor level as the state changes from 0 to 1 and 1 to 0; this is in fact a contributor to the early failure mode of semiconductors. The metallic leads are welded to the silicon, and any welding process has a risk of hydrogen embrittlement, which causes a rapid loss of strength and ductility at the point of the weld. For this reason the standard method to produce more reliable devices is to place them (after manufacture) in a circuit and operate them for 48 hours; throw away the failures, and the remaining devices are more reliable than the total lot was prior to burn-in. For an even more reliable lot of devices they can be vibrated while burning in - more initial failures, but the remaining devices will be more reliable than the as-manufactured lot. This is the thermal cycling of semiconductor devices, and the start-up/shut-down temperature changes are largely irrelevant to what's happening at the chip level.

Also, I suggest you see the "20.9.2005" entry on the "events3.html" page, or check the Theoretical question regarding DC-projects and 100% CPU usage thread: http://episteme.arstechnica.com/groupee/forums?a=tpc&s=50009562&f=122097561&m=309005425731&r=309005425731 for further info. In that one we discussed what heat actually does to the processor and other hardware components (especially see Rarian's posts); in one sentence, the problems that arise from heat, particularly temperature cycling, lead to metal fatigue and an increase in the speed of chemical reactions, so the bottom line is that running your computer at a constant high temperature is better than running it at an oscillating (and relatively high) temperature. A computer, like any mechanical device, sees most of the potentially "damaging" stress during power on/off cycling. So yes, by maintaining your CPU at full load (of course, provided you have adequate cooling) and not running it at an excessively high voltage for an overclock, you will reduce the thermal cycling and increase the life of your CPU; i.e. turning the computer on and off (and to some extent putting it under load and taking it off load) causes cycling of the CPU's temperature and metal fatigue. Actually, I am planning to write a full article about it in the near future.
Another interesting related thread is the Is there a limit on/in (not sure which) a number of "page faults" for a process ?? one: http://episteme.arstechnica.com/groupee/forums/a/tpc/f/99609816/m/607003096731, which I also opened on Ars Technica back then. It deals with the endlessly increasing number of page faults for the "svchost.exe" process (which is, by the way, a so-called "carrier process" for various native NT services), particularly the instance launched with the "-k netsvcs" switch and, in my case, hosting no less than 16 NT services.
I've read that many manufacturers of specific computer components (such as hard-drives and power supplies) have used MTBF ratings (which, by the way, stands for "mean time between failures") to express the life cycle of their products. This is an estimated frequency of mechanical failure based on stress testing. Note that "mean" roughly means that half of the units fail before that point and half after it, i.e. it is not a prediction of minimum life nor a prediction of estimated life for any single unit. Power supplies have published ratings such as 50,000 hours (a bit under 6 years), and hard drive ratings have been 300,000 hours or even higher (a bit over 34 years); note, however, that many computers are running 24 hours a day, 7 days a week, 365 days a year. For example, every network server must be running constantly, and generally they use the same basic components as the average user's machine. But just because we know that the components are capable of running all of the time (be it at "full load" or not) does not necessarily mean that they should. Laptops in particular have a higher chance of heat-related problems (because they have very limited ventilation systems), so in addition to the obvious battery power savings, shutting them down when they are not being used will allow them to run cooler and generally more efficiently. So if you use your computer only to check your e-mail once in a while and such (or even if you use it constantly throughout the day), leaving it on during the day and turning it off at night makes perfect sense to a normal user (in a "turn it on when you need it and turn it off when you don't" manner), but if we take into account everything mentioned above, you'll see that this is not the best practice. Also, if you hate waiting for the Microsoft Windows operating system to boot, leaving your computer on all the time will probably increase your "quality of life". If saving electricity is your concern, then the monitor is your biggest enemy. Your display screen is the biggest single power consumer, so you can simply turn it off whenever you are not using the computer, but leave the computer itself on so you don't have to wait as long when you want to use it.
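To give the MTBF figures above a more concrete meaning, here is a small Python sketch that assumes a simple exponential failure model (my assumption, not anything the manufacturers publish) and converts a 300,000-hour MTBF into a rough probability of failure within a few years of 24/7 operation:

    from math import exp

    mtbf_hours = 300000              # the hard-drive rating mentioned above
    hours = 5 * 24 * 365             # five years of non-stop operation
    # under an exponential model, P(failure within t) = 1 - exp(-t / MTBF)
    p_fail = 1 - exp(-hours / mtbf_hours)
    print(round(p_fail, 3))          # roughly 0.14, i.e. about a 14% chance under this simplistic model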
A monitoring study of new computers at Iowa State University found that the average computer running all the time costs only about $65 per year. If you were to shut your monitor off on nights and weekends but leave the computer running, the cost would drop to about $40 per year; if you turn everything off at night and on weekends, the cost would drop to about $21 per year. Power-saving features are now a part of almost every computer/operating system and will put your computer and monitor into "sleep mode", which also saves electricity. So there is no single answer to this question, but there are a few absolutes for those who plan to keep their computers running all the time. The first general recommendation is to invest in a good surge protector with a UL 1449 rating, or a UPS (Uninterruptible Power Supply), since the likelihood of a power-related issue increases with the length of time that your computer is running. The second is to always shut down and unplug your computer during an electrical storm (or a circuit cut-out or power outage/failure). There is no way for your computer to get hit if it is not plugged in, and it is a cheap way of protecting it.
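For comparison with the Iowa State figures above, here is a back-of-the-envelope Python calculation; the 90-watt average draw and the $0.08-per-kWh electricity price are my own assumptions (not taken from the study), chosen only to show how such a yearly figure is computed:

    watts = 90                    # assumed average draw of the system unit
    price_per_kwh = 0.08          # assumed electricity price in USD
    hours_per_year = 24 * 365
    cost = watts / 1000.0 * hours_per_year * price_per_kwh
    print(round(cost, 2))         # about 63 USD, in the same ballpark as the $65 figure above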
But this article/entry will also deal with "caching the pagefile" (with the help of SuperCache software, for instance) and with "placing the pagefile on a RAM-drive/disk". You see, the main problem/question is: why go through the extra steps of having pages moved in and out of your working set or the modified list cache just to be moved to a pagefile that lives in memory? I mean, why not rather use that extra memory for larger working sets and modified lists? If there is memory enough to host the pagefile in core, that memory is better used as directly accessible pages. Putting a pagefile in your cache simply adds another level of indirection, which can be avoided by making that physical memory accessible to the OS for program/data storage rather than for the cache. The simple fact is that placing the paging space in memory, whether by using a RAM-disk or such a cache, is wrong; it obviously doesn't make any logical sense. Caching the pagefile is really doing exactly what the OS is already doing with the modified and standby page lists. Windows actually does it much better: putting a page on the modified page list just involves unlinking it from one list, linking it to another, and updating maybe 12 bytes of other "bookkeeping" data, while "writing" it to a cached pagefile (or to a pagefile on a RAM-drive) takes a memory-to-memory copy, with all the implied bad effects on L1/L2 cache contents, i.e. now you've got two copies of the page in the L1 and L2 cache, with the second copy replacing 4K of other stuff that would have been better left in the cache. There's no real penalty from growing the pagefile or having it be fragmented; I/O to the pagefile is random anyway, and most of the pageable code in the OS tends not to get paged out once it's paged in. Demand-paged virtual memory and the merged VM/buffer cache mechanism are designed to efficiently satisfy applications with large appetites for memory.
So remember this: the cache takes RAM. RAM allocated to the cache reduces the RAM available for process working sets. If anything, a simple RAM-drive would be faster, since stuff written to the cached pagefile will still be written to the hard drive eventually, whereas the RAM-drive would leave it in the RAM allocated to the RAM-drive. Either way, it's still better to leave the RAM for the process working sets and so not incur the additional page faults at all. Caching only makes the I/Os go faster, but why wouldn't it be better to not have to do the I/Os in the first place? Now you might think: "yeah, but those additional page faults will go faster than they otherwise would, because they are satisfied in RAM". While this is true, it is still better not to incur them in the first place. Also, you will be increasing the page faults that have to be resolved to exes and dlls, and the pagefile in RAM won't do diddly to speed those up; but thanks to the pagefile in RAM, you'll have more of them. Furthermore, the system is ALREADY caching pages in memory. Pages lost from working sets are not written out to disk immediately (or at all, if they weren't modified), and even after being written out to disk, they are not assigned to another process immediately; they're kept on the modified and standby page lists, respectively. The memory access behaviour of most apps being what it is, you tend to access the same sets of pages over time, so if you access a page you recently lost from your working set, odds are its contents are still in memory, on one of those lists, and you don't have to go to disk for it.
Putting a pagefile in a RAM-drive is a self-evidently ridiculous idea in theory, and actual measurement under real-world workloads proves it to be a terrible idea in practice, almost always a performance hit. You can't do this unless you have plenty of RAM, and if you have plenty of RAM, you aren't hitting your pagefile very often in the first place. On the other hand, if you don't have plenty of RAM, dedicating some of it to a RAM-drive will only increase your page-fault rate. Committing RAM to a RAM-disk and putting a pagefile on it makes fewer pages available for those lists, making that mechanism much less effective. And even for those page faults resolved to the RAM-disk pagefile, you still have to go through the disk drivers, which you don't for page faults resolved on the standby or modified lists. So just forget about it. But do always remember the general "pagefile placement" rule: put the pagefile on the most-used partition of your least-used drive on your least-used IDE channel. The "most-used partition" because the head will be spending a lot of time there anyway (if there are multiple partitions on the drive), while for the "least-used drive and channel" it's hopefully pretty obvious why.
THE PREFETCH-FOLDER CLEANING MYTH
First of all, you should really check Ed Bott's site: http://www.edbott.com, specifically his article titled Windows Expertise: One more time: do not clean out your Prefetch folder!: http://www.edbott.com/weblog/archives/000743.html (note my own comments below it under the name "Ivan Tadej"), and the Popular Technology article CCleaner Cripples Application Load Times: http://poptech.blogspot.com/2005/10/ccleaner-cripples-application-load.html. As you can read in the Ed Bott article linked above (and I guess it is also stated somewhere on Microsoft's site), Windows cleans the old/obsolete files in the Prefetch folder by itself (after 128 files have been created); so why bother at all with doing the OS's job, i.e. with deleting these files manually (or with a third-party program)? Admittedly, I too used to delete at least the various "setup.exe-hash.pf" files that are remains/leftovers of setup/installation programs, but I know now that even that task was completely unnecessary, since Windows deletes the oldest files itself once there are 128 prefetch files in the Prefetch directory. So why on earth make programs launch slower, even if only once??!
In fact, the only things I do regarding the Prefetch folder are these: I delete the various "setup.exe-hash.pf" files belonging to the few of my programs that get updated frequently, where a different executable with the same name is used each time, i.e. on each installation/update. Then I also delete the .pf files of various temporary processes' executables, although those are rarely created/run on my system. And finally I delete the orphaned .pf files of executables that I moved after the .pf file for the new location was already created; I know the OS would delete them by itself after a while, but I am a "maintenance maniac" and so I do it myself in certain cases. As for the data these files contain, I guess it's quite obvious that they don't "pre-load" anything (or whatever); they just contain a list of directories, the OS libraries that the executable loads/maps/hooks on execution (not sure which term is appropriate) and other non-OS libraries that are called, or rather dynamically/delay-loaded, during run-time by the executable in question (I assume this because .pf files are created AFTER the respective process is closed and not on or right after execution), with regard to the device, i.e. the hard-disk volume on which they reside; so it is only a kind of map.
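If you are curious what is actually sitting in that folder, here is a small read-only Python sketch that lists the .pf files oldest-first (the path assumes a default C:\Windows installation, which is my assumption, not something stated above):

    import glob, os, time

    prefetch = r"C:\Windows\Prefetch"
    files = sorted(glob.glob(os.path.join(prefetch, "*.pf")), key=os.path.getmtime)
    print(len(files), "prefetch files (Windows prunes the oldest ones by itself at around 128)")
    for path in files[:5]:                     # show only the five oldest entries
        print(time.ctime(os.path.getmtime(path)), os.path.basename(path))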
A few lines from the AntiVir-related "AVGUARD.EXE-17927959.pf" file:
Also note: if prefetching simply doesn't seem to work for you (i.e. Prefetch files are not being created at all), the reason might be that you have disabled the "Task Scheduler" service; to enable prefetching again, just set it back to the Automatic startup type. The other possibility is that you've disabled prefetching itself. Open Regedit and check the value of the "EnablePrefetcher" entry under the "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" registry key.
Here are the descriptions of these values (all four possibilities; a small read-only script for checking the current setting follows the list):
0 = Disabled
1 = Application launch prefetching enabled
2 = Boot prefetching enabled
3 = Applaunch and Boot enabled (default and optimal setting)
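And here is the promised sketch; a minimal Python script (using the standard winreg module, so it only applies to reasonably recent Python versions on Windows) that merely reports the current EnablePrefetcher setting without changing anything:

    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters"
    MEANING = {0: "disabled", 1: "application launch only", 2: "boot only",
               3: "application launch and boot (default)"}
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        value, _ = winreg.QueryValueEx(key, "EnablePrefetcher")
    print("EnablePrefetcher =", value, "-", MEANING.get(value, "unknown"))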
Oh, and one note: this particular article of mine is more or less just copied text from my comment in the "comments section" of the Ed Bott article.
THE REGISTRY-CLEANING SOFTWARE MYTH
As far as registry "cleaning" programs being useless (and the things that they do to your registry) are concerned, I must say that I had always thought this was not the case. However, I must note at the beginning that there are actually two "major sorts" of registry cleaners. First, there are the ones that only clean (or should I rather say delete) any orphaned entries found during the registry-checking procedure; in other words, they are capable only of finding and then cleaning/deleting such orphaned entries. Second, there are the others that, besides finding these entries/values, also search the hard-disk's drives for corrections to such entries; these are mostly paths stored as values (i.e. files and folders), paths in the values of other similar entries, or the names of the entries themselves if they contain path references. Anyway, the important thing here is that the registry is basically just a rather huge monolithic flat-file database which Windows loads into the main memory (i.e. RAM) every time the computer is booted (and the operating system loaded), and keeps in memory while Windows is running. Lots of programs leave various "left-overs" (useless values) behind after you uninstall them. My opinion was (or rather used to be, please read on) that it is somehow clever to remove them, sooner rather than later: they just take up additional space, and every additional bit means a bigger registry file and more time for the system to find a particular entry/value, making registry operations slower. Yes, it is true, slower by a factor of 0.0000001, but slower anyway, and having left-overs from 20 or more uninstalled programs...
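Just to make the "orphaned entry" idea more concrete, here is a hedged, read-only Python sketch of what the first sort of cleaner essentially does; it walks the standard Uninstall key and reports entries whose InstallLocation points to a folder that no longer exists (nothing is deleted, and the specific key and value names are simply common conventions, not taken from the text above):

    import os
    import winreg

    UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
        subkey_count = winreg.QueryInfoKey(root)[0]        # number of subkeys
        for i in range(subkey_count):
            name = winreg.EnumKey(root, i)
            with winreg.OpenKey(root, name) as sub:
                try:
                    location, _ = winreg.QueryValueEx(sub, "InstallLocation")
                except OSError:
                    continue                                 # entry has no InstallLocation value
                if location and not os.path.isdir(location):
                    print("possibly orphaned:", name, "->", location)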
Well, then I was told on the Ars Technica forum (thanks go to a member/user with the nick DriverGuru) that registry cleaning is rather useless... Yes, it surely might save some disk space, but such a small amount. And as the most important thing, please remember that, as opposed to what the various sites offering these products say, the impact of so-called "registry cleaning" on any aspect of a computer's performance is only a minor one (or better: there is no impact at all), mainly because registry queries (reads and writes) are not linear, and also because the registry is a demand-paged database (whatever that means, lol); in other words, registry cleaning doesn't impact the speed of registry operations (and therefore the overall computer's speed) no matter how many entries were left behind after some program was uninstalled, or how "deep" the respective key/entry/value resides in the registry structure. And especially, removing unused/orphaned entries will not prevent operating system crashes or so-called "registry or application conflicts" (because an application in most cases simply rewrites old registry data, i.e. a key, entry or value that's already there) and such. Uhm, on some sites they even state that "cleaning" the registry will prevent BSODs; well, this is simply a big pile of bull-shit.
Of course you don't need special/additional software for that - one can simply delete such entries with Regedit - but it is much more comfortable this way. If you just *must* use one of these "registry cleaning" programs, see the first two threads above for programs used to do the job. One very good and powerful one is called Registry First Aid (or shortly: Reg 1 Aid): http://www.registry-first-aid.com, http://www.RoseCitySoftware.com/Reg1Aid, made by KsL Software and published by RoseCitySoftware: http://www.RoseCitySoftware.com. I would rather call this program a registry "maintainer" (an "editor" in a way) than a registry cleanup application. The best part is that it doesn't "clean" the registry automatically; it simply scans the registry for invalid/orphaned data, then scans the hard-disk for possible solutions and offers the best one, and you still decide by yourself. Sadly it is a SHAREWARE program, but anyway, it's so useful that it's worth paying for.
It offers these options after a completed scan:
"Fix entry" (to the suggested value or the one that you choose manually)
"Leave entry without change"
"Delete entry"
"Cut Invalid Substring" (for more complicated values, like those with more than one path etc.)
There is also another one that is pretty widely used. It is a FREEWARE program called RegCleaner, sometimes shortened to RegCleanr because of Microsoft's similar (or the same, I forgot) name for some built-in application on 9x systems. It was developed by Macecraft Software (Macecraft Inc.); their main site is http://www.jv16.org, while you can get RegCleaner here: http://www.worldstart.com/weekly-download/archives/reg-cleaner4.3.htm
The Registry First Aid application is without any doubt crucial for me, as a devoted user of "non-setup" applications and a devoted explorer in search of the optimal folder structures, and therefore someone who moves things around a lot. For example, when I move some "non-setup" programs (or a program "group", like players or Internet apps), it would be such a waste of time changing their paths in the registry manually, or deleting them in a few cases, i.e. for applications which do not overwrite an entry's value on the next execution, like ATM - Another Task Manager on my Win98, or old versions of the Soulseek p2p client etc. But there are other cases too. For example, when I renamed my Program Files folder to just Programs, all the paths of various Microsoft dlls, executables, data/config files etc. were suddenly wrong (and therefore Outlook stopped working etc.), and well, it was a matter of a few clicks to fix them all in one pass; I'd rather not think about what it would look like if I tried to fix them all manually. It is the word "cleanup" that is somehow wrong, in my opinion, for this kind of program; I would rather call it "registry maintaining" or "registry fixing" software, but only for the software capable of such operations.
THE MEMORY-FREEING PROGRAMS MYTH
This article is trying to persuade all of you who read it not to be taken in by sites offering memory freeing/boosting/optimizing/defragmenting, ehm, even washing. First, I urge you to check the article at the Winnetmag site written by Mark Russinovich (the co-author of the Winternals/Sysinternals utilities) and titled The Memory-Optimization Hoax: http://www.winnetmag.com/Windows/Article/ArticleID/41095/41095.html; second, the article written by Jeremy Collake (from Bitsum Technologies a.k.a. Collake Software) called The Truth About Windows Memory Optimizers: http://www.bitsum.com/winmemboost.htm; third, Fred Langa's article at InformationWeek titled The Explorer: Resource Leaks, Part Two > June 5, 2000: http://www.informationweek.com/story/showArticle.jhtml?articleID=17200583; then the entry titled RAM Optimizers/Defragmenters on the Mywebpages-SupportCD site's "XP Myths" page: http://mywebpages.comcast.net/SupportCD/XPMyths.html; and finally the article at the Aumha site by Alex Nichol titled Virtual Memory in Windows XP: http://aumha.org/win5/a/xpvm.htm. You see, the general bottom line is that you want your RAM to be at full load more or less all of the time (i.e. the RAM "space" to be as used as possible, in terms of allocated addresses holding "real" data), so remember: free RAM is wasted RAM.
A few words on how Windows manages virtual memory. With modern computing, the worst thing one can do for a computer's performance is to touch the hard drive, or in fact any non-memory storage. The fastest hard drives on earth are still slow compared to the computer's main memory (i.e. RAM), and even with "solid state" drives, in order to access the drive one has to jump into system code and drivers, which pushes your own program's code out of the CPU's L2 cache (this is, by the way, called a "locality loss"). There are two typical reasons one has to touch the disk: the first is when the application requests it explicitly (Word asks Windows to load the "somefile.doc" file into main memory), and the other is a so-called "hard fault", which occurs when the application tries to use memory that has been paged out to disk via "virtual memory" and needs to be paged back in.

The principle is quite simple: Windows tries to keep commonly used pages of data in RAM, and less commonly used ones in the pagefile. So if at a given moment there is no RAM available, pages not currently in use (i.e. not actively used, though they might be used in the near future) are moved to the pagefile. For this task it uses various lists, such as the "standby" and "modified" page lists mentioned earlier. The process of moving a page of memory from RAM to the pagefile is called "paging out", and conversely, the process of bringing a page of memory from the pagefile back into RAM is called "paging in". Virtual memory is limited only by the size of the pagefile plus the size of RAM, i.e. the system can use gigabytes of memory even if the RAM is only a few hundred megabytes.

Because these "optimizer programs" force the available-memory counter up, the contents of documents opened before this so-called "optimization" (code and data that were part of the processes' working sets, i.e. present in physical memory) must be re-read from the hard-disk when you continue to edit a document or open another instance of an already running process. Thus it only slows down overall performance and responsiveness. On this point, I urge you to read the five DriverGuru posts (posted one after another) on Ars Technica. The topic title is Where'd my free memory go?, and here is a link pointing to the first of the five posts: http://episteme.arstechnica.com/eve/ubb.x/a/tpc/f/99609816/m/2590999945/r/2870972055#2870972055. These programs all do basically the same thing: they free up physical RAM (that would otherwise be used), simply by forcing as many allocated pages as possible out of physical RAM into the pagefile (or by allocating the memory to themselves). The amount of free RAM is thereby increased (why would anyone in the world want that??), but the amount of virtual memory in use is not affected, i.e. there is no increase in free memory, only an increase in free RAM. When the applications whose memory was pushed into the pagefile become active again, the pages of memory they use must be loaded back into RAM, incurring substantial overhead and causing performance degradation.
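For the curious, here is a minimal, hedged Python sketch of the trick most of these "RAM optimizers" essentially rely on; it uses ctypes to call the real Win32 EmptyWorkingSet function on the current process, which trims its working set exactly as described above (run it only as a demonstration - the pages will simply have to be faulted back in later):

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    psapi = ctypes.WinDLL("psapi", use_last_error=True)
    kernel32.GetCurrentProcess.restype = wintypes.HANDLE    # declare proper handle types
    psapi.EmptyWorkingSet.argtypes = [wintypes.HANDLE]

    # trim the working set of the current process; the "freed" RAM shows up as
    # available memory, but the pages have to be faulted back in the next time
    # the program touches them, so nothing actually gets faster
    handle = kernel32.GetCurrentProcess()
    psapi.EmptyWorkingSet(handle)
    print("working set trimmed - free RAM went up, performance did not")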
Strictly my opinion, things that I've discovered
I must say that I agree 100% with the articles I linked above. You see, I have tried various programs of the mentioned kind (at the beginning of my "geek" era), and I actually used some of them in the past on both my Windows 98/SE and Windows XP setups. I tried to use the ones that were not bloated with additional useless features and generally looked more or less "promising", and I soon discovered most of the things that Mark mentions in his article. Freeing up memory (paging it out to the pagefile; by the way, do not confuse the pagefile concept with Memory Mapped Files) that would otherwise be used can only lead to performance degradation.
1. Why "free" RAM, as many of these programs offer (it actually means paging the data out to the hard-disk), when more than 10, 20, or 30% of RAM is still available, not allocated? Because if you do so, the system logically becomes much slower for quite some time (obviously slower), until all the needed data (the data that was meant to be in RAM) is paged back into RAM.
2. Why run an additional process to do exactly the same job that Windows Memory Management already does? In other words, why run an additional process if Windows itself manages paging data in/out sufficiently well, especially when the RAM is almost full (and of course in other cases too)? So the "Free RAM when only 5% is free" feature offered by some of those programs is completely and 100% useless.
3. There is no such thing, as far as memory management is concerned, as "This software will prevent crashes, freezes, lockups, BSODs (hehe), and generally improve the stability and performance of your computer". These programs actually tend to cause problems (crashes, freezes, or at least delays and sluggishness) during the very procedure of "freeing the RAM".
4. I suppose there is no such thing as "freeing/clearing RAM" in the sense claimed on many of these programs' home sites, i.e. that some amount of data is in RAM and after this "optimizing process" the data is freed; not in the sense of being paged out from RAM to the pagefile, but "freed" in the sense of being cleared (the portion of RAM that was previously allocated to programs but is not used anymore), so no longer allocated either in RAM or in the pagefile, i.e. simply vanished. Well, I imagine this might actually happen in certain situations, for instance in cases of so-called "memory leaks" caused by programs that don't do a good job of clearing old data from RAM (garbage collecting).
Although, on the other hand, I could partially agree with forcing the data to be paged out to disk, but only on 9x systems. I assume this could be useful for gamers, or for users of hogging music and graphics programs etc.; for example, after being on the internet (when lots of processes and IE instances were opened), and after disconnecting and closing all those processes, to additionally "clean" the so-called leaks (see above), because I suppose the Windows 9x platforms have/had completely different memory management compared to the Windows NT systems. However, the main question that remains for the Windows 9x platforms is: is really all the RAM freed after the user closes the process that allocated it at execution (or later while working with it), the same as on NT systems, or could there be some section/area that could be called "wasted" and therefore in need of being manually freed - and I mean freed and not paged out to the pagefile, since, uhm, the program that was using this portion of RAM has been closed, so there is no other program that would use this memory instead (we are not talking about shared dlls and MMF in this case)?
STRATEGIES ON COPING WITH A BSOD
Stop Errors or STOP Messages, also referred to as BSODs (Blue Screen Of Death, so called because of the blue background), occur when Windows XP Professional stops responding. Stop error messages can be caused by hardware (faulty or incompatible hardware) or by faulty software such as a bad driver, i.e. by malfunctions, incompatibilities and/or conflicts. But what is a driver anyway? Well, a driver is a sort of program that controls a device. Every device, whether it be a printer, disk drive, or keyboard, must have a driver program. Many drivers, such as the keyboard driver, come with the operating system; for other devices, you may need to load a new driver when you connect the device to your computer. In DOS systems, drivers are files with a ".sys" extension, while in Windows environments, drivers often have a ".drv" extension. A driver acts like a translator between the device and the programs that use the device. Each device has its own set of specialized commands that only its driver knows, whereas most programs access devices by using generic commands; the driver therefore accepts generic commands from a program and then translates them into specialized commands for the device.
Troubleshooting RAM-related stop errors
If the error occurred immediately after RAM was added to the computer, the paging file might be corrupted, or the new RAM might be either faulty or incompatible. In this case, delete the Pagefile.sys file and return the system to its original RAM configuration. Additionally, run the hardware diagnostics supplied by the hardware manufacturer, especially the memory checks.
Troubleshooting file system stop errors
If you’re using a small computer system interface (SCSI) adapter, obtain the latest Windows XP Professional driver from the hardware vendor, disable the sync negotiation for the SCSI device, verify that the SCSI chain is correctly terminated, and check the SCSI IDs of the devices. If you’re unsure how to do any of these steps, refer to the instructions for the device. If you’re using integrated device electronics (IDE) devices, define the on-board IDE port as Primary only. Check the Master/Slave/Only settings for the IDE devices. Remove all IDE devices except the hard disk. If you’re unsure how to do any of these steps, refer to the instructions for your hardware. Run Chkdsk /f to determine if the file system is corrupt. If Windows XP Professional can’t run Chkdsk, move the drive to another computer running Windows XP Professional, and run the Chkdsk command on the drive from that computer.
Troubleshooting device driver stop errors
Check that the devices on your computer (especially the one that appears in the stop message) have drivers that are signed and certified by the Windows Hardware Quality Labs (WHQL); run Sigverif.exe to check for unsigned drivers. If you've installed new drivers just before the problem appeared, try rolling them back to the older ones. Open the box and make sure all hardware is correctly installed, well seated, and solidly connected. Check the Microsoft Hardware Compatibility List (HCL) to confirm that all of your hardware is on it and therefore compatible with Windows XP Professional; if some of it isn't, examine that non-HCL hardware. If you have a video driver not supplied with Windows XP Professional, try switching to the standard VGA driver or to a compatible driver supplied with Windows XP. Uninstall any software that uses filter drivers, for example antivirus, disk defragmentation, remote control, firewall, or backup programs.
General - if you can start Windows XP
First off, restart your computer (if it wasn't restarted automatically), and if you are able to start Windows normally, then first check the System Log in Event Viewer for additional error messages that might help identify the device or driver causing the problem. To open Event Viewer, launch EventVwr.msc from a Run box, or click Start, then Control Panel, then Performance and Maintenance, then Administrative Tools, and double-click Event Viewer. Especially examine the "System" and "Application" logs for recent errors that might give you further clues. If you've recently added new hardware, remove it and retest. Uninstall any non-critical hardware and software to help isolate the item that may be causing the problem. Using a current version of your antivirus software, check your hard-disk for viruses and trojans; if the scan finds a virus, perform the steps required to eliminate it from your computer. Verify that your computer has the latest Service Pack installed; for a list of service packs and instructions for downloading them, go to the Windows Update Web site. Search the Microsoft Knowledge Base for "Windows XP Professional" and the number associated with the stop error you received; for example, if the message "Stop: 0x0000000A" appears, search for "0x0000000A". For more information, go to the Help and Support Center and type "Safe Mode Options" in the Search box, and if you have access to the Internet, visit the Microsoft Support site.
General - if you cannot start Windows XP
Same as above, restart your computer, and if you are unable to log on again, press F8 when the list of available operating systems appears, select Last Known Good Configuration on the Advanced Options screen, and press ENTER. Unplug each new hardware device, one at a time, to see if this resolves the error. Run the Recovery Console and allow the system to repair any errors that it detects. Try to start your computer in safe mode, then investigate your hardware-related software (drivers etc.), make sure any newly installed/added hardware or software is properly installed (RAM, adapters, hard-disks, modems, drivers, programs and so on), and then remove or at least disable it. To start your computer in safe mode, restart your computer and, same as above, when you see the list of available operating systems press F8, select Safe Mode on the "Advanced Options" screen, and press ENTER. Verify that your hardware device drivers are up-to-date and that your system BIOS is the latest available version. Also try disabling advanced BIOS memory options such as caching or Video BIOS Shadowing.