You can use caching to improve the performance of your site. This page will explore some of the ways you might do this within Totara. You might also want to look at our Caching documentation for guides on how to configure caching in Totara.
When trying to understand caching in Totara there are some key terms and concepts you will need to understand. These are set out in more detail on this page.
There are three cache modes within Totara.
- Application caches: Used to store data available across sessions
- Session caches: Used to store data specific to the user within their session
- Request caches: Used to store data just for the lifetime of the request
The cache definitions displayed on the interface show what areas Totara is caching data for, what mode is being used, and where the data for each definition is being stored.
Out of the box, Totara supports storing cached data in different locations, including the file system, Memcached, and Redis. Cache stores are also pluggable, and multiple stores can be configured on a single site.
Configuring caching within Totara
Caching is configurable at two levels: application caches and session caches.
Each cache mode has a default store. Application caches are stored as flat files in the site data directory by default, as these caches need to be shared between all sessions. Session caches are stored in the user's session, and request caches are held directly in memory. Each definition can be independently mapped to a specific store.
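The resolution order can be pictured as a simple lookup: an explicit per-definition mapping wins, otherwise the mode's default store is used. A hypothetical sketch, with store and definition names invented for illustration:

```python
# Hypothetical sketch of how a store is resolved for a definition.
# Store names and the example definition are invented for illustration.
DEFAULT_STORE = {
    'application': 'file',     # flat files in the site data directory
    'session': 'session',      # the user's session
    'request': 'memory',       # in-process memory
}

DEFINITION_MAPPINGS = {
    # explicit per-definition overrides, as set through the caching UI
    ('core', 'string'): 'redis',
}

def store_for(component, area, mode):
    """A per-definition mapping wins; otherwise fall back to the mode default."""
    return DEFINITION_MAPPINGS.get((component, area), DEFAULT_STORE[mode])
```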
Configuring application mode caching can have a significant impact on your site's performance, especially when it is under load. However, for it to be of benefit it needs to be carefully planned, and tailored to your hosting environment and web architecture.
By default, application cache data is stored in the site data directory. This is guaranteed to work; however, in horizontally scaled web architectures or cloud-hosted environments the site data directory is often accessed across the network, so it is not an ideal choice. Cache store solutions such as Redis are specifically designed for this task, and with careful configuration can often provide notably better performance.
For large installations, it is worth choosing an alternative cache store to use as the default, or mapping specific high-load caches to better-suited stores.
Evaluating your caching configuration
Unfortunately, there is no quick answer here.
Within Totara you can enable the display of performance information on each page (Quick-access menu > Development > Debugging > perfdebug), which reveals which caches are used on a particular page, and how many hits, misses, and sets are occurring. However, to understand where bottlenecks are, you will want to investigate how your current cache stores are performing, and test alternative stores and configurations.
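The hit and miss counters that perfdebug reports can be reduced to a hit ratio, which makes definitions easy to compare. A small sketch; the definition names and counter values below are made up:

```python
# Summarise perfdebug-style counters; names and numbers are invented examples.
stats = {
    'core/string': {'hits': 912, 'misses': 8, 'sets': 8},
    'core/coursemodinfo': {'hits': 40, 'misses': 60, 'sets': 60},
}

def hit_ratio(counters):
    """Fraction of reads served from the cache."""
    reads = counters['hits'] + counters['misses']
    return counters['hits'] / reads if reads else 0.0

# Definitions with the lowest hit ratio are the first candidates to investigate.
for name, counters in sorted(stats.items(), key=lambda kv: hit_ratio(kv[1])):
    print(f"{name}: {hit_ratio(counters):.0%} hits, {counters['sets']} sets")
```

A low hit ratio combined with a high set count suggests a write-heavy cache, which (as discussed below) belongs on a different store than a read-heavy one.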
When investigating bottlenecks, focus your attention on the cache stores themselves. If you are using the default application cache, this means monitoring the site data directory's file system. You want to understand the limitations of your chosen file system, and whether you are pushing against them. You will also want to understand the overhead of communication between the application and your file system, including network latency and any lock-and-wait situations that may arise when the file system is under load.
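One way to get a feel for that overhead is to time a write-then-read cycle against the directory in question, comparing your site data mount against a local disk. A rough sketch only; the helper name and defaults are invented, and a real benchmark should also vary payload size and concurrency:

```python
import os
import tempfile
import time

def file_store_latency(directory, payload=b'x' * 4096, iterations=100):
    """Mean seconds for one write+read cycle of a small cache-sized file."""
    total = 0.0
    for i in range(iterations):
        path = os.path.join(directory, f'cachetest_{i}.tmp')
        start = time.perf_counter()
        with open(path, 'wb') as f:
            f.write(payload)
        with open(path, 'rb') as f:
            f.read()
        total += time.perf_counter() - start
        os.remove(path)
    return total / iterations

# Compare e.g. a local disk against your site data mount:
print(f"{file_store_latency(tempfile.gettempdir()) * 1000:.3f} ms per cycle")
```

On a shared network mount the per-cycle figure is typically many times higher than on local disk, which is exactly the gap a store like Redis is meant to close.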
Some ideas for testing alternative stores and configurations
Totara has a built-in tool for measuring cache store performance at a basic level, which can be found at Quick-access menu > Plugins > Caching > Test performance. This should not be run on a production site, as it load-tests the cache stores and will itself impact performance.
The performance information in the footer can be monitored to understand the difference in page load times.
We would also suggest targeted planning, as not all caches are equal. Some caches are read heavy, others are write heavy. Some must use shared storage (by default) while others are safe to keep in local storage. Investigating and understanding how caches are used on your site will enable you to plan where each is best stored. See the hints and tips below for some ideas.
Hints and tips
Whether you have a single web server or have scaled horizontally has the biggest impact on how you should configure caching.
Cache stores like Redis and Memcached are specifically designed to facilitate fast and efficient storage and retrieval of data. They should be explored as default stores. They tend to offer much better performance than traditional file systems and are well suited to shared caching requirements like those in Totara.
Local in-memory caches like APCu are the fastest. They store data as PHP memory objects and require no serialisation or normalisation of data. However, they are not shared between servers, and are not suitable as a default unless you have a single web server. Even with a single server they should be used with caution, as some that the Totara team reviewed dealt poorly with memory exhaustion, which, if encountered, will have a devastating effect on site function. However, for read-heavy caches that you don't expect to change often and that have a predictable size (like the lang and config caches), these caches can be the perfect solution when looking to improve performance.
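The memory-exhaustion risk is why a hard size bound with eviction matters. APCu's limit is set via `apc.shm_size` in `php.ini`; the Python sketch below (class and names invented) only illustrates the idea of a bounded, read-mostly cache that evicts the least recently used entry instead of growing without limit:

```python
# Invented sketch of a size-bounded LRU cache; illustrative only.
from collections import OrderedDict

class BoundedCache:
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as recently used
        return self._data[key]

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)   # evict the least recently used
```

For a read-heavy, fixed-size cache like lang strings, evictions are rare, so the bound costs almost nothing while protecting the server from runaway memory use.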
If in-memory caches don't appeal and you have multiple web servers, you may want to consider running local cache stores if possible. These could be local Memcached instances, or even local file directories if the web servers have local disks. In this case don't make them defaults; instead, map specific caches that are safe for local use. When editing a definition's mapping, a notification at the top of the screen tells you whether that cache is safe to use with a local store. We suggest only mapping read-heavy, commonly used caches to local stores.
Memcached clustering deserves a special note. The memcached store in Totara supports memcached instance clustering. In an environment where you are running multiple web servers, this allows you to scale your caching infrastructure with the rest of your web architecture by running local memcached instances that are all kept in sync, and are thus suitable for shared caches as well. It should be used with caution; we would recommend it only for read-heavy caches, as writes are made to all nodes in the cluster. It can be a particularly effective strategy if you are running your web server environment over distributed zones where network latency becomes a bottleneck (e.g. zones in the Americas, Europe, and Asia). The biggest issue with memcached is that purges are indiscriminate: if a large number of definitions all use a single memcached instance, or cluster of instances, then overzealous purging by individual caches can become a performance bottleneck.
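The write-to-all, read-local pattern described above can be sketched as follows. Plain dicts stand in for memcached nodes, and all names are invented for illustration:

```python
# Invented sketch of clustered caching: writes fan out, reads stay local.
class ClusteredStore:
    def __init__(self, local_node, all_nodes):
        self.local = local_node   # the node on (or nearest to) this web server
        self.nodes = all_nodes    # every node in the cluster, local included

    def get(self, key):
        # Reads never cross the network, avoiding inter-zone latency.
        return self.local.get(key)

    def set(self, key, value):
        # Writes go to every node, which is why write-heavy caches
        # suffer: each set pays the full cross-zone round trip.
        for node in self.nodes:
            node[key] = value
```

A web server in one zone writes once, and a server in another zone can then read the same value from its own local node without any cross-zone traffic.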