US 11,809,315 B2
Fabricless allocation of cache slots of local shared caches with cache slot recycling in a fabric environment
Steve Ivester, Grafton, MA (US); and Kaustubh Sahasrabudhe, Westborough, MA (US)
Assigned to Dell Products L.P., Hopkinton, MA (US)
Filed by EMC IP HOLDING COMPANY LLC, Hopkinton, MA (US)
Filed on Mar. 17, 2021, as Appl. No. 17/203,854.
Prior Publication US 2022/0300420 A1, Sep. 22, 2022
Int. Cl. G06F 12/06 (2006.01); G06F 12/084 (2016.01); G06F 9/50 (2006.01); G06F 12/0871 (2016.01); G06F 11/20 (2006.01); G06F 12/0802 (2016.01)
CPC G06F 12/0646 (2013.01) [G06F 9/5016 (2013.01); G06F 11/2053 (2013.01); G06F 12/084 (2013.01); G06F 12/0871 (2013.01); G06F 12/0802 (2013.01); G06F 2212/1024 (2013.01); G06F 2212/1041 (2013.01); G06F 2212/282 (2013.01); G06F 2212/313 (2013.01); G06F 2212/502 (2013.01)] 5 Claims
OG exemplary drawing
 
1. An apparatus comprising:
a data storage system comprising:
a plurality of non-volatile drives;
a plurality of compute nodes that are interconnected by a fabric and that present at least one logical production volume to hosts and manage access to the drives, each of the compute nodes comprising a local memory and being configured to allocate a portion of the local memory to a shared memory that can be accessed by each of the compute nodes, the shared memory comprising cache slots that are used to store data for servicing input-output commands (IOs);
a plurality of primary queues, each associated with one of the compute nodes;
a plurality of secondary queues, each associated with one of the compute nodes;
a plurality of worker threads, each associated with one of the compute nodes and configured to:
recycle cache slots of the allocated portion of the local memory of that compute node;
allocate at least some of the recycled cache slots to the respective associated compute node prior to receipt of an IO for which the recycled cache slot will be utilized by providing temporary exclusive ownership of the allocated recycled cache slots that excludes non-owner compute nodes from writing to the allocated recycled cache slots;
add the allocated recycled cache slots to one of the primary queues;
send messages via the fabric to indicate allocation of the allocated recycled cache slots, the compute nodes configured to use the allocated recycled cache slots without sending messages via the fabric to claim ownership of the allocated recycled cache slots;
recycle at least some of the cache slots without allocation to any of the compute nodes such that unallocated recycled cache slots are available for use by any of the compute nodes by claiming at least one of the unallocated recycled cache slots to service an IO following receipt of the IO; and
add unallocated recycled cache slots to one of the secondary queues, the unallocated recycled cache slots being created by one of the worker threads only in response to the primary queue associated with the worker thread being full.
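The claim describes a two-tier recycling scheme: each compute node's worker thread pre-allocates recycled cache slots to its own node (primary queue, exclusive ownership announced over the fabric) so the IO fast path needs no fabric round trip to claim a slot, and only when the primary queue is full does it leave slots unallocated (secondary queue) for any node to claim after an IO arrives. The following is a minimal single-process sketch of that flow, not the patented implementation; the names CacheSlot, Node, Cluster, workerRecycle, slotForIO, and sendFabricMsg are illustrative assumptions.

```go
// Hypothetical sketch of the claimed allocation scheme; all identifiers are
// illustrative assumptions, not taken from the patent.
package main

import "fmt"

// CacheSlot is a shared-memory cache slot; Owner == -1 means unallocated.
type CacheSlot struct {
	ID    int
	Owner int
}

// Node holds the per-compute-node primary queue (slots pre-owned by this
// node) and secondary queue (unallocated slots claimable by any node).
type Node struct {
	id        int
	primary   chan *CacheSlot
	secondary chan *CacheSlot
}

// Cluster groups the nodes so any node can claim from any secondary queue.
type Cluster struct {
	nodes []*Node
}

// sendFabricMsg stands in for the fabric message that announces this node's
// exclusive ownership of a recycled slot to the other nodes.
func sendFabricMsg(nodeID int, slot *CacheSlot) {
	fmt.Printf("fabric: node %d owns recycled slot %d\n", nodeID, slot.ID)
}

// workerRecycle models one pass of the per-node worker thread: recycle a slot
// and, while the primary queue has room, pre-allocate it to this node ahead
// of any IO; only when the primary queue is full is the slot left unallocated
// on the secondary queue. (Single-threaded sketch; a real worker would need
// proper synchronization.)
func (n *Node) workerRecycle(slot *CacheSlot) {
	select {
	case n.primary <- slot:
		slot.Owner = n.id // temporary exclusive ownership, established pre-IO
		sendFabricMsg(n.id, slot)
	default:
		slot.Owner = -1 // primary queue full: recycle without allocation
		n.secondary <- slot
	}
}

// slotForIO models servicing an IO: prefer a slot this node already owns (no
// fabric message needed to claim it), otherwise claim an unallocated recycled
// slot from some node's secondary queue after the IO has arrived.
func (c *Cluster) slotForIO(n *Node) *CacheSlot {
	select {
	case s := <-n.primary:
		return s // pre-allocated before the IO arrived
	default:
	}
	for _, other := range c.nodes {
		select {
		case s := <-other.secondary:
			s.Owner = n.id // claimed on demand following receipt of the IO
			return s
		default:
		}
	}
	return nil // no recycled slot currently available
}

func main() {
	n0 := &Node{id: 0, primary: make(chan *CacheSlot, 2), secondary: make(chan *CacheSlot, 8)}
	cluster := &Cluster{nodes: []*Node{n0}}

	// Recycle four slots; two fit the primary queue, two overflow unallocated.
	for i := 0; i < 4; i++ {
		n0.workerRecycle(&CacheSlot{ID: i})
	}
	s := cluster.slotForIO(n0)
	fmt.Printf("IO serviced with slot %d (owner %d)\n", s.ID, s.Owner)
}
```

In this reading, the primary queue gives each node a reserve of already-owned slots so that, per the claim, nodes can use them "without sending messages via the fabric to claim ownership," while the secondary queue keeps a full primary queue from stranding recycled slots that other nodes could still use.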