| Feature | Function | Benefit |
| --- | --- | --- |
| Object-model-aware caching | The cache understands the object model, the data schema, and the mappings between them | Enables object uniqueness and relationship queries |
| Unique object caching | Uses the metamodel to ensure that every object in the cache is unique, even if several queries return the same object | Prevents subtle bugs caused by multiple inconsistent copies of the same object |
| Dynamic cache relationship queries | Uses the metamodel to automatically find related objects in the cache (lazy loading) | Improves performance by eliminating database queries for relationships |
| Shared object cache | Objects cached in memory can be shared by multiple threads | Improves performance by eliminating database queries for objects |
| Transactional cache | Isolates all objects manipulated within a single transaction until it commits successfully | Ensures transactional integrity at the cache level by isolating changes |
| Cache relationship knitting | Loads several tables efficiently, then knits object relationships together in the cache | Improves the efficiency of cache loading |
| Cache indexing | Creates in-memory indexes on one or more nonkey attributes | Accelerates performance by avoiding the need to query the database |
| Cache preloading | Uses bulk loading to populate the cache at application startup | Improves performance by maximizing the cache hit rate |
| Eager relationship loading | Traverses relationships automatically when an object is retrieved, using join query semantics | Improves performance for complex objects |
| Lazy relationship loading | Accesses related objects on an as-needed basis | Prevents a large result set or object graph from inadvertently filling the cache |
| Object clearing policy | Specifies when objects should be cleared from the cache | Enables object life-cycle management for more efficient use of the shared cache |
| Object pinning policy | Pins objects in the cache to prevent them from being cleared | Increases the cache hit rate by ensuring that selected objects always remain in the cache |
| Cache write-through policies | Writes objects to the database immediately or on transaction commit | Reduces database contention by shortening the time that database locks are held |
| Transient objects | Keeps cache objects that are never written to the database | Allows the cache to manage objects that have no back-end database representation |
| State-based cache synchronization | Ensures that all caches have up-to-date state (rather than simply invalidating stale objects) | Increases application scalability by allowing clusters of caches to work together |
| Resilient cache synchronization | Guarantees that every cache receives all cache-change messages, even across intermittent network failures | Keeps caches free of stale data and enables stateful failover between caches in a cluster |
| Express cache synchronization | Keeps caches consistent even if synchronization messages are delivered out of order | Maximizes synchronization performance by avoiding the need to temporarily store synchronization messages on disk |
| Cross-platform synchronization | Synchronizes caches across Java, C#, and C++ applications and across J2EE and .NET platforms | Keeps frequently accessed data up-to-date across multiple languages and platforms; supports heterogeneous deployment environments |
Table 2. Distributed Caching
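The unique object caching row above is essentially an identity map: every query that loads a row with a given primary key must hand back the same in-memory instance. A minimal sketch, using a hypothetical `IdentityCache` class (not part of any particular product's API), might look like this:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical identity map: repeated loads of the same primary key
// always return the one cached instance, never a second copy.
class IdentityCache<K, V> {
    private final Map<K, V> byKey = new ConcurrentHashMap<>();

    // Returns the already-cached instance if one exists; otherwise
    // registers the freshly loaded object and returns it.
    V register(K key, V loaded) {
        V existing = byKey.putIfAbsent(key, loaded);
        return existing != null ? existing : loaded;
    }
}

public class UniqueCachingDemo {
    record Customer(long id, String name) {}

    public static void main(String[] args) {
        IdentityCache<Long, Customer> cache = new IdentityCache<>();
        // Two independent "queries" happen to load the same row:
        Customer first  = cache.register(42L, new Customer(42L, "Ada"));
        Customer second = cache.register(42L, new Customer(42L, "Ada"));
        // Both references point at the same object, so an update seen
        // through one is seen through the other.
        System.out.println(first == second); // prints true
    }
}
```

Because both references resolve to a single instance, the "multiple inconsistent copies" bug class in the table simply cannot arise for cached objects.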
A DSA should provide caching to accelerate applications by avoiding unnecessary database operations.
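The lazy relationship loading feature in the table is commonly implemented with a value-holder indirection: the related objects are fetched only on first access, so retrieving a parent object does not drag its whole object graph into the cache. A minimal sketch, using a hypothetical `LazyRef` holder (illustrative only, not a real product API):

```java
import java.util.List;
import java.util.function.Supplier;

// Hypothetical lazy value holder: defers the relationship query
// until the related objects are actually needed.
class LazyRef<T> {
    private Supplier<T> loader;
    private T value;

    LazyRef(Supplier<T> loader) { this.loader = loader; }

    synchronized T get() {
        if (loader != null) {   // first access: run the query once
            value = loader.get();
            loader = null;      // drop the loader after resolution
        }
        return value;
    }
}

public class LazyLoadingDemo {
    public static void main(String[] args) {
        // Simulated relationship query; in a real DSA this would be
        // answered from the cache, or from the database on a miss.
        LazyRef<List<String>> lineItems = new LazyRef<>(() -> {
            System.out.println("loading line items...");
            return List.of("widget", "gadget");
        });

        System.out.println("order retrieved, nothing loaded yet");
        System.out.println(lineItems.get()); // first access triggers the load
        System.out.println(lineItems.get()); // already resolved; no reload
    }
}
```

The same indirection is what lets the write-through and clearing policies in the table reason about individual objects rather than whole graphs: only the objects an application actually touches ever occupy cache space.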