help / color / mirror / Atom feed

* fs/buffer.c: update per-CPU bh_lru cache via RCU
@ 15:40 Marcelo Tosatti
  (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread

From: Marcelo Tosatti @ 15:40 UTC (permalink / raw)
Cc: Matthew Wilcox, Christoph Hellwig, Jens Axboe, Frederic Weisbecker

umount causes invalidate_bh_lrus(), which sends an IPI to each
CPU with a non-empty per-CPU cache:

	on_each_cpu_cond(has_bh_in_lru, invalidate_bh_lru, NULL, 1);

This interrupts CPUs which might be executing code sensitive
to interferences.

To avoid the IPI, free the per-CPU caches remotely via RCU.
Two bh_lrus structures for each CPU are allocated: one is being
used (assigned to the per-CPU bh_lru pointer), and the other is
being freed (or idle).

 static DEFINE_PER_CPU(struct bh_lru, bh_lrus) = {{ NULL }};
+static DEFINE_PER_CPU(struct bh_lru *, bh_lru);
+static DEFINE_MUTEX(bh_lru_invalidate_mutex);

@@ -1245,16 +1259,19 @@ static void bh_lru_install(struct buffer_head *bh)
+	b = rcu_dereference(per_cpu(bh_lru, smp_processor_id()));

 lookup_bh_lru(struct block_device *bdev, sector_t block, unsigned size)
+	lru = rcu_dereference(per_cpu(bh_lru, smp_processor_id()));
 	if (bh && bh->b_blocknr == block && bh->b_bdev == bdev &&

@@ -1381,35 +1402,56 @@ static void __invalidate_bh_lrus(struct bh_lru *b)
 * invalidate_bh_lrus() is called rarely - but not only at unmount.
 * This doesn't race because it runs in each cpu either in irq
 static void invalidate_bh_lru(void *arg)