There are several more checkpoint-related parameters worth tuning in PostgreSQL, such as the following:
checkpoint_segments parameter
Each WAL segment takes up 16 MB. As described at http://www.postgresql.org/docs/current/interactive/wal-configuration.html, the maximum number of segments you can expect to be in use at any time is:
(2 + checkpoint_completion_target) * checkpoint_segments + 1
Note that in PostgreSQL versions before 8.3, which do not have spread checkpoints, you can still use this formula; just substitute the following value for the setting you'll be missing:
checkpoint_completion_target=0
| checkpoint_segments | target=0 | target=0.5 | target=0.9 |
| 3 | 112MB | 144MB | 160MB |
| 10 | 336MB | 416MB | 480MB |
| 32 | 1040MB | 1296MB | 1504MB |
| 64 | 2064MB | 2576MB | 2992MB |
| 128 | 4112MB | 5136MB | 5968MB |
| 256 | 8208MB | 10256MB | 11904MB |
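The table values can be reproduced from the formula above. The following is a small sketch of my own (not from the original text), assuming the segment count is rounded up to a whole segment of 16 MB:

```python
# Worst-case WAL disk footprint, per the formula:
#   segments = (2 + checkpoint_completion_target) * checkpoint_segments + 1
# Each WAL segment occupies 16 MB on disk.
import math

WAL_SEGMENT_MB = 16

def max_wal_mb(checkpoint_segments, completion_target):
    """Worst-case megabytes of WAL files, rounding up to whole segments."""
    segments = math.ceil((2 + completion_target) * checkpoint_segments + 1)
    return segments * WAL_SEGMENT_MB

# Reproduce the table above
for s in (3, 10, 32, 64, 128, 256):
    row = [max_wal_mb(s, t) for t in (0, 0.5, 0.9)]
    print(s, row)  # e.g. 3 -> [112, 144, 160]
```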
The general rule of thumb you can extract here is that for every 32 checkpoint segments, expect at least 1 GB of WAL files to accumulate. As database crash recovery can take quite a while to process even that much data, 32 is as high as you want to make this setting for anything but a serious database server. The default of 3 is very low for most systems though; even a small install should consider an increase to at least 10.
Normally, you'll only want a value greater than 32 on a smaller server when doing bulk-loading, where it can help performance significantly and crash recovery isn't important. Databases that routinely do bulk loads may need a higher setting.
checkpoint_timeout parameter
The default for checkpoint_timeout of 5 minutes is fine for most installations. If your system isn't able to keep up with writes, and you've already increased checkpoint_segments to the point where the timeout is the main thing driving when checkpoints happen, it's reasonable to consider increasing this value.
Aiming for 10 minutes or more between checkpoints isn't dangerous; again it just increases how long database recovery after a crash will take. As this is one component to database server downtime after a crash, that's something you need a healthy respect for.
checkpoint_completion_target parameter
If you have increased checkpoint_segments to at least 10, it's reasonable at that point to also increase checkpoint_completion_target to its practical maximum of 0.9. This gives maximum checkpoint spreading, which in theory means the smoothest I/O, too. In some cases keeping the default of 0.5 will still be better, however, as it makes it less likely that one checkpoint's writes will spill into the next one. It's unlikely that a value below 0.5 will be very effective at spreading checkpoints at all. Moreover, unless you have an extremely large value for the number of segments, the practical difference between small changes in this value is unlikely to matter. One approach for the really thorough is to try both 0.5 and 0.9 with your application and see which one gives the smoother disk I/O curve over time, as judged by OS-level monitoring.
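Putting these recommendations together, a starting point for a mid-sized server might look like the following postgresql.conf fragment. The specific numbers here are illustrative settings of my own, not values prescribed by this text, and log_checkpoints is an extra setting assumed here to help with the OS-level observation suggested above:

```
checkpoint_segments = 32            # roughly 1 GB of WAL in the worst case
checkpoint_timeout = 5min           # the default; consider 10min if timeouts drive checkpoints
checkpoint_completion_target = 0.9  # maximum spreading once segments >= 10
log_checkpoints = on                # log each checkpoint to judge timing and I/O behavior
```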
