Computer Science, asked by sudhakarli4296, 1 year ago

Cache performance of chaining is not as good because keys are stored in linked lists; open addressing provides better cache performance because everything is stored in the same table. Why is this?

Answers

Answered by Himshika
1

On modern CPUs, caches are used everywhere: to read program instructions and to read and write data in memory. On most CPUs the cache is transparent, i.e. there is no need to manage it explicitly.

Cache is much faster than main memory (DRAM). To give you some perspective: accessing data in the Level 1 cache takes ~4 CPU cycles, while accessing DRAM on the same CPU takes ~200 cycles, i.e. the cache is roughly 50 times faster.

Caches operate on small blocks called cache lines, which are usually 64 bytes long.

Why does chaining have bad cache performance?

Basically, chaining is not cache friendly. This is not specific to hash tables; classic linked lists have the same issue.

Hash keys (or list nodes) are far apart in memory, so each key access generates a cache miss, i.e. a slow DRAM access. Checking 10 keys in a chain therefore takes 10 DRAM accesses: 10 x 200 = 2000 cycles on our generic CPU.

The address of the next key is not known until the next pointer has been read from the current node, so the accesses cannot be overlapped or prefetched effectively; there is not much room for optimization.

Answered by kush193874
4

