fix overtuning on flush thread
also removes more reasons for a large slab to be "dirty".

I was trying to avoid over-flushing when the global page pool is low,
but it looks like the thread doesn't really keep up and we end up not
moving pages fast enough. So now, the lower the global page pool gets,
the more aggressive the flusher becomes.

Maybe instead of "oldest age" we could sort the large slabs by free
chunks?
dormando committed Nov 27, 2024
1 parent 8fd1949 commit 8611c10
Showing 2 changed files with 9 additions and 18 deletions.
16 changes: 9 additions & 7 deletions slab_automove_extstore.c
```diff
@@ -175,13 +175,15 @@ void slab_automove_extstore_run(void *arg, int *src, int *dst) {
 
         // if page delta, oom, or evicted delta, mark window dirty
         // classes marked dirty cannot donate memory back to global pool.
-        if (a->iam_after[n].evicted - a->iam_before[n].evicted > 0 ||
-            a->iam_after[n].outofmemory - a->iam_before[n].outofmemory > 0) {
-            wd->evicted = 1;
-            wd->dirty = 1;
-        }
-        if (a->sam_after[n].total_pages - a->sam_before[n].total_pages > 0) {
-            wd->dirty = 1;
+        if (small_slab) {
+            if (a->iam_after[n].evicted - a->iam_before[n].evicted > 0 ||
+                a->iam_after[n].outofmemory - a->iam_before[n].outofmemory > 0) {
+                wd->evicted = 1;
+                wd->dirty = 1;
+            }
+            if (a->sam_after[n].total_pages - a->sam_before[n].total_pages > 0) {
+                wd->dirty = 1;
+            }
         }
 
         // reclaim excessively free memory to global after a full window
```
11 changes: 0 additions & 11 deletions storage.c
```diff
@@ -618,17 +618,6 @@ static void *storage_write_thread(void *arg) {
             int target_pages = 0;
             if (global_pages < settings.ext_global_pool_min) {
                 target_pages = settings.ext_global_pool_min - global_pages;
-                // make sure there's room in each class but don't over-flush.
-                // ie: we're going to want to move pages from the class with the
-                // "oldest" items, but this thread has no way of getting that info
-                // up-to-date. So we just free some pages in all active classes
-                // and let the page mover free stuff up.
-                // However, if the target delta is large (100+ megs) we can end up
-                // flushing gigabytes of data across many classes.
-                // Thus, we clamp the target here.
-                if (target_pages > 5) {
-                    target_pages = 5;
-                }
             }
             counter++;
             if (to_sleep > settings.ext_max_sleep)
```
