02-22-2008, 07:03 PM
binny1070
Re: Very good info on Paging and the Windows CE Paging Pool

So how do you decide on the right pool size for your platform? I’m afraid it’s still a bit of a black art, and there aren’t many tools to help. You can turn on CeLog during boot and see how many page faults it records. Remote Kernel Tracker will show you those page faults, but in truth that kind of view isn’t much help here. The best tool I know of is readlog.exe, which will print a page fault report if you turn on its “verbose” and “summary” options. If you see multiple faults on the same pages, your pool may be too small (you may also be unloading and re-loading the same module, ejecting its pages from memory, so look for module load events in the log too). If you don’t see many repeats, your pool may be bigger than you need. In CE6 you can use IOCTL_KLIB_GET_POOL_STATE to get additional information: how many pages are currently in your pool, and how many times the kernel has had to free up pool pages to get down to the target size. There aren’t any tools like “mi” that query the pool state, so you’ll have to call the IOCTL yourself. On debug builds of the OS there is also a kernel debug zone you can turn on to see a lot of detail about paging and about when the pool trim thread runs, but CeLog is probably a better choice for collecting all of that data.
As I already mentioned, as of CE6 you can set separate “target” and “max” values for the paging pools. I don’t really like the semantics of having a “max”: it isn’t sensitive to how much memory the rest of the system is using or has free. If some application takes most of the available memory in the system, you’d want the pool to let go of more pages; if you have a lot of free memory and some application is reading a lot of file data, you’d want the pool to grow to use most of it. We supported the “max” as an option to limit the pool size, but I’m starting to think the best idea is to set your max to infinity and let the pool grow up to the size of available memory. We’ll still page out down to the target in the background. I’d have liked to add more sophisticated settings like “leave at least X amount of free memory,” but that’s quite difficult to implement.
You’ll want to examine your pool behavior during important “user scenarios” like boot or running a predefined set of applications. If the user runs a lot of applications at once, or a really big application, or one that reads a lot of file data, they could go through pool pages pretty quickly. There isn’t really a lot you can do about that. We don’t even have a set of recommended scenarios for you to examine. I wish we had more information and more tools for this, but I’ve described about all we have.
The approach I think most OEMs take is that they leave the pool at the default size until they discover a perf problem with too much paging (by profiling or otherwise observing) in a scenario that's important to users. Then they bump it up until the problem goes away. Not very scientific but it works, and it's not like we have any answer that's more scientific anyway.
What goes into the paging pool
This repeats some of the information above, but in more detail: how do you know exactly which pages will use the pool and which won’t? Keep in mind that paging is actually independent of the paging pool; paging can happen with or without it. If you turn off the paging pool, you remove the limit we set on the amount of RAM that paging can consume, but pageable data is still paged. If you turn the pool on, we enforce some limits; that’s all. So this isn’t really a question of which pages can use the pool, it’s a question of which pages are “pageable.”
Executables from the FILES section of ROM will use the paging pool for their code and R/O data. R/W data from executables can’t be paged out, so it will not be part of the pool. Compressed executables from the MODULES section of ROM will use the pool for their code and R/O data. If the image is running from NOR or from RAM, uncompressed executables from MODULES will run directly out of the image without using any pool memory. Executables from MODULES in images on NAND will be paged using the pool. (And by the way, I’m not terribly familiar with how we manage data on NAND/IMGFS so I might be missing some details here.)
Executables that would otherwise page but are marked as “non-pageable” will be paged fully into RAM as soon as they’re loaded, and not paged out again until they’re unloaded. These pages don’t use the pool. You can also create “partially pageable” executables by telling the linker to make individual sections of the executable non-pageable. Generally code and data can’t be pageable if it’s part of an interrupt service routine (ISR) or if it’s called during suspend/resume or other power management, because paging could cause crashes and deadlocks. And code/data shouldn’t be pageable if it’s accessed by an interrupt service thread (IST) because paging would negatively impact real-time performance.
Memory-mapped files which don’t have a file underneath them (a.k.a. RAM-backed mapfiles) will not use the pool. In CE5 and earlier, R/O file-backed mapfiles will use the pool while R/W mapfiles will not. In CE6, all file-backed memory-mapped files use the file pool. And the new file cache filter (cache manager) essentially memory-maps all open files, so the cached file data uses the file pool.
To look at that information from the opposite angle, if you are running all executables directly out of your image – all are uncompressed in the MODULES section of ROM, and the image is executing out of NOR or RAM, then the loader paging pool is probably a waste. You might still want to use the file pool to limit RAM use for file caching and memory-mapped files, but in that case you might want to turn off the loader pool.
Other Paging Pool Details
Someone once asked me whether the pool size affects demand paging. It doesn’t change demand paging behavior or timing. Demand paging is about delaying committing pages as long as possible, and it applies to pages regardless of the paging pool. Pages can be demand paged without being part of the pool; they won’t be paged in until absolutely necessary, and then they’ll stay in RAM without being paged out. Pool pages will be demand paged in, and may eventually be paged out again.
Another question was whether the paging pool uses up virtual address space. Actually, no, it doesn’t. The pool pages that are currently in use are assigned to virtual addresses that are already reserved. For example, when you load a DLL, you reserve virtual address space for the DLL; when you touch a page in the DLL, a physical page from the pool is assigned to the already-reserved virtual address. The pool pages that are NOT in use are not assigned virtual addresses at all; the kernel tracks them by their physical addresses only. The pool *does* use up physical RAM: in CE5 it consumes the whole size of the paging pool, while in CE6 it consumes physical memory equal to the “target” size of the pool. This guarantees that you have at least a minimum number of pages to page with, to avoid heavy thrashing over just a few pages when the rest of the memory in the system is taken.
Other Paging-Related Details
A related detail that occasionally confuses people is the “Paging” flag on file system drivers. This flag doesn’t control whether the driver code itself is pageable. Rather, it controls whether the file system allows files to be loaded into memory a page at a time or all at once. On typical file systems like FATFS the “Paging” flag is turned on, allowing executables and memory-mapped files to be accessed a page at a time. On other file system drivers, such as our release directory file system (RELFSD) and our network redirector, it’s turned off by default, causing executables and memory-mapped files to be read into memory all at once. I believe the reasoning is to improve performance and minimize problems when the network connection is lost.
This flag actually derives from the original Windows CE implementation of memory-mapped files. If the file system supported a couple of APIs, ReadFileWithSeek and WriteFileWithSeek, memory-mapped files on that file system would be pageable. If the file system did not support those APIs, the memory-mapped files would be non-pageable, in which case they’d be read entirely into RAM at load time and never paged out until the memory-mapped file is unloaded. The OS required pageability for special memory-mapped files like registry hives and CEDB database volumes, so file systems that did not support the required APIs could not hold these files. (If you ask me, there is no real need to require the seek + read/write to occur in one atomic API call, so the requirement on the “WithSeek” APIs was unnecessary, but perhaps there was a good reason back in the old days.)
As I already mentioned, the new CE6 file cache also uses the paging pool. The file cache is basically just memory-mapping files to hold the file data in RAM for a while. The file cache is enabled by default on top of FATFS volumes.