Modern operating systems all use virtual memory to manage the applications and data being processed by the system. This is a good thing in that it prevents the system from stopping when insufficient physical memory is available. Unfortunately, because virtual memory uses disk space as a substitute for physical memory, system performance will be impacted dramatically if virtual memory is being used to a significant extent.
Again, it might appear simple to know when a shortage of memory is proving to be a problem: look at the memory allocated, compare it to the physical memory in the machine, and if it is higher, add more memory. However, as with CPU utilization, it is a little more complicated. For example, most versions of Microsoft Exchange will allocate as much memory as they can (through the store.exe process) to maximize throughput and responsiveness. If another application requires more memory, store.exe is supposed to release memory to avoid the need to swap memory to disk. Hence, on most servers running Exchange it will almost always appear that all physical memory is allocated all the time.
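The naive check described above, comparing allocated (committed) memory against physical memory, can be sketched in a few lines. This is a minimal illustration assuming a Linux host and its /proc/meminfo interface; on Windows the equivalent figures would come from performance counters such as Memory\Committed Bytes. As the Exchange example shows, a high ratio alone proves nothing.

```python
def read_meminfo():
    """Parse /proc/meminfo into a dict of kB values (Linux-only sketch)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # values are reported in kB
    return info

def memory_pressure_ratio():
    """Return committed memory as a fraction of physical memory."""
    info = read_meminfo()
    return info["Committed_AS"] / info["MemTotal"]

if __name__ == "__main__":
    ratio = memory_pressure_ratio()
    # A ratio well above 1.0 *may* indicate over-commitment, but on a
    # server like Exchange this figure is almost always high by design.
    print(f"committed/physical = {ratio:.2f}")
```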
Adding more memory in this situation will make little or no difference to the performance of the server.
Significant over-allocation of memory may be an indication of a problem, but the key question is how much of that memory is in active use. If an application has requested a memory allocation but is almost completely idle, that memory can be swapped out to disk and rarely needs to be recalled, so performance will not be overly affected. Therefore, the other key parameter worth monitoring in relation to memory is the page fault rate: the frequency with which the operating system has to move some data from physical memory to disk storage in order to recall other data that is required in memory at that time. Every time this happens the system must wait until the swap has completed before it can carry on with the next processing task, and performance will be much slower.
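On Windows the page fault rate is normally read from the Memory\Pages/sec performance counter. As a hedged, POSIX-only sketch of the same measurement, the standard-library resource module exposes per-process fault counts: ru_minflt counts soft faults (resolved without disk I/O), while ru_majflt counts the hard faults that force the system to wait on disk and are the ones that hurt performance.

```python
import resource
import time

def fault_rates(interval=1.0):
    """Sample this process's page-fault rates over `interval` seconds.

    Returns (soft_faults_per_sec, hard_faults_per_sec). Only the hard
    (major) faults involve disk I/O and the stalls described above.
    """
    before = resource.getrusage(resource.RUSAGE_SELF)
    time.sleep(interval)
    after = resource.getrusage(resource.RUSAGE_SELF)
    return ((after.ru_minflt - before.ru_minflt) / interval,
            (after.ru_majflt - before.ru_majflt) / interval)

if __name__ == "__main__":
    soft, hard = fault_rates(1.0)
    print(f"soft faults/s: {soft:.0f}, hard faults/s: {hard:.0f}")
```

A sustained, non-trivial hard-fault rate across the whole system, rather than any single sample, is the signal that paging is actually costing performance.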
The solution to a memory problem is simple, at least up to a point. Adding more memory is often the simplest and cheapest way to boost the performance of a system, but with 32-bit versions of Windows the maximum addressable physical memory is 4GB, of which (by default) half is reserved for the system address space, leaving just 2GB for applications. Adding memory beyond 4GB will not bring any further improvement, so alternative approaches, such as splitting the application load across multiple servers, must be employed. Of course, 64-bit versions of Windows do not have this limitation, being able to address 16TB directly.
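The figures in this paragraph follow directly from pointer width, and a quick arithmetic sanity check makes the relationship explicit (the 16TB figure quoted for 64-bit Windows corresponds to a 44-bit address span, not the full 64 bits):

```python
# Address-space arithmetic behind the limits quoted above.
GiB = 2 ** 30
TiB = 2 ** 40

addressable_32bit = 2 ** 32                  # bytes reachable by a 32-bit pointer
user_space_default = addressable_32bit // 2  # half reserved for the system by default

assert addressable_32bit == 4 * GiB    # the 4GB ceiling
assert user_space_default == 2 * GiB   # the 2GB left for applications

# The 16TB addressable by 64-bit Windows equals 2**44 bytes.
assert 2 ** 44 == 16 * TiB
```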