What is a cache?

A cache is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache can be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in the cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; therefore, the more requests that can be served from the cache, the faster the system performs.

To be cost-effective and to enable efficient use of data, caches must be relatively small. Nevertheless, caches have proven effective in many areas of computing, because typical computer applications access data with a high degree of locality of reference. Such access patterns exhibit temporal locality, where data that has recently been requested is requested again, and spatial locality, where data is requested that is stored physically close to data that has already been requested.

How does it work?

Hardware implements a cache as a block of memory for temporary storage of data that is likely to be used again. Central processing units (CPUs) and hard disk drives (HDDs) frequently use a cache, as do web browsers and web servers.

A cache is made up of a pool of entries. Each entry has associated data, which is a copy of the same data in some backing store. Each entry also has a tag, which specifies the identity of the data in the backing store of which the entry is a copy.
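As a rough sketch, an entry can be modeled as a tag paired with a copy of the data (the field names below are illustrative, not part of any particular implementation):

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class CacheEntry:
        """One cache entry: a tag identifying the backing-store data plus a copy of that data."""
        tag: str    # identity of the data in the backing store (for example, a URL or a block address)
        data: Any   # the cached copy of that data

    # A cache is then simply a pool of such entries, indexed by their tags.
    entry = CacheEntry(tag="https://example.com/", data="<html>...</html>")
    cache = {entry.tag: entry}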

When the cache client (a CPU, web browser, or operating system) needs to access data presumed to exist in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead. This situation is known as a cache hit. For example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag and the content of the web page is the data. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.
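As a rough sketch of that check in Python (the dictionary-based cache and the hit/miss counters here are illustrative, not any particular browser's implementation):

    cache = {}        # tag -> data, e.g. URL -> page content
    hits = 0
    misses = 0

    def lookup(tag):
        """Return the cached data for this tag on a cache hit, or None on a cache miss."""
        global hits, misses
        if tag in cache:
            hits += 1    # cache hit: an entry with a matching tag exists
            return cache[tag]
        misses += 1      # cache miss: no entry carries this tag
        return None

    def hit_ratio():
        """Fraction of all accesses that resulted in cache hits."""
        total = hits + misses
        return hits / total if total else 0.0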

The alternative situation, when the cache is checked and found not to contain any entry with the desired tag, is known as a cache miss. This requires a more expensive access to the data in the backing store. Once the requested data is retrieved, it is typically copied into the cache, ready for the next access.
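A minimal sketch of that miss path, where fetch_from_backing_store stands in for whatever slow operation the real system performs (a disk read, a network request, a recomputation):

    def get(cache, tag, fetch_from_backing_store):
        """Serve data from the cache when possible; otherwise fetch it and cache a copy."""
        if tag in cache:
            return cache[tag]                   # cache hit: fast path
        data = fetch_from_backing_store(tag)    # cache miss: expensive access to the backing store
        cache[tag] = data                       # copy the result into the cache for the next access
        return data

    # Example usage with a plain dictionary standing in for the slower backing store.
    backing_store = {"https://example.com/": "<html>...</html>"}
    cache = {}
    get(cache, "https://example.com/", backing_store.get)   # miss: fetched from the backing store, then cached
    get(cache, "https://example.com/", backing_store.get)   # hit: served directly from the cache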

During a cache miss, some other previously existing cache entry is removed to make room for the newly retrieved data. The heuristic used to select the entry to replace is known as the replacement policy. One popular replacement policy, "least recently used" (LRU), replaces the oldest entry, the entry that was accessed less recently than any other entry (see cache algorithms); a minimal sketch of this policy appears below. More sophisticated caching algorithms also weigh the frequency of use of entries against the size of the stored contents, as well as the latencies and throughputs of both the cache and the backing store. This works well for larger amounts of data, longer latencies, and slower throughputs, such as those experienced with hard drives and networks, but it is not efficient for use within a CPU cache.
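The sketch below shows an LRU cache with a fixed capacity, built on Python's collections.OrderedDict (the capacity of 3 is only for illustration):

    from collections import OrderedDict

    class LRUCache:
        """Fixed-size cache that evicts the least recently used entry when it is full."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()         # least recently used entry is kept first

        def get(self, tag):
            if tag not in self.entries:
                return None                      # cache miss
            self.entries.move_to_end(tag)        # mark this entry as most recently used
            return self.entries[tag]

        def put(self, tag, data):
            if tag in self.entries:
                self.entries.move_to_end(tag)
            self.entries[tag] = data
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False) # evict the least recently used entry

    cache = LRUCache(capacity=3)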
