
Cache Scope

Enterprise, CloudHub

The Cache Scope saves on time and processing load by storing and reusing frequently called data. You can put any number of message processors into a cache scope and configure the caching strategy to store the responses (which contain the payload of the response message) produced by the processing that occurs within the scope. Mule’s default caching strategy defines how data are stored and reused, but if you want to adjust cache behavior, you can customize a global caching strategy in Mule and make it available for use by all cache scopes in your application.

Mule sends a message into the cache scope and the parent flow expects an output. The cache scope processes the message, delivers the output to the parent flow, and saves that output (that is, it caches the response). The next time Mule sends the same kind of message into the cache scope, the cache scope can offer the cached response rather than invoking a potentially time-consuming process again.

You can configure the exchange patterns of endpoints in a cache scope to operate as request-response or one-way. Regardless of exchange pattern settings or any data retrieval activity that might occur within it, the cache scope simply caches the output of the flow inside it; when a message next enters the scope, the cache scope offers the cached output, or cached response.

You can use cache scope to reduce the processing load on the Mule instance and speed up message processing within a flow. It is particularly effective for:

  • Processing repeated requests for the same information

  • Processing requests for information that involve large, non-consumable message payloads

For instance, you can use a cache scope to manage customer requests for flight information. Many customers may request the same pricing information about flights from San Francisco to Buenos Aires. Rather than using a lot of processing power to send separate requests to several airline databases with each customer query, you can use a cache scope to arrange to send a request to the databases fewer times - say, once every ten minutes - and present users with the cached flight pricing information. Where timeliness of data is not critical, cache scope can save time and processing power.
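
As a rough illustration of that scenario, the sketch below wraps an outbound call to a hypothetical fares service in a cache scope so that repeated identical requests are served from the cache rather than hitting the remote endpoint each time. The host, port, and path values are illustrative assumptions, not part of the flight example itself:

    <flow name="flightPricesFlow">
        <http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" path="prices"/>
        <ee:cache doc:name="Cache">
            <!-- Hypothetical fares lookup; its response is cached and reused for identical requests -->
            <http:outbound-endpoint exchange-pattern="request-response" host="fares.example.com" port="80" path="quotes"/>
        </ee:cache>
    </flow>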

Caching Strategy

The caching strategy defines the actions a cache scope takes when a message enters its subflow.

  • If there is no cached response event (a cache "miss"), cache scope processes the message.

  • If there is a cached response event (a cache "hit"), cache scope offers the cached response event rather than processing the message.

You can customize a global caching strategy in Mule for the cache scopes in your application to follow, or you can use Mule’s default caching strategy.

Default Caching Strategy

By default, all cache scopes in Mule applications follow the caching strategy procedure described below. Consult the Creating a Global Caching Strategy section below if you want to create your own custom global caching strategy.

  1. A message enters the cache scope.

  2. Cache scope determines whether the message’s payload is consumable. A consumable payload can only be read once before it is lost - such as a streaming payload - and cannot be cached.

    • If the message payload is consumable, cache scope always processes the message; nothing is cached and the caching strategy is abandoned.

    • If the message payload is not consumable, cache scope continues to the next step in the caching strategy.

  3. Cache scope generates a key to identify the message’s payload. Mule uses an MD5KeyGenerator and an MD5 digest to generate a unique key for the message payload.

  4. Cache scope compares the newly generated key to cached responses that it has previously processed and saved in an InMemoryObjectStore - a container for cached data. In other words, cache scope searches for a "cache hit" it can offer instead of processing the message.

    • Where there is a cache miss, cache scope processes the new message and produces a response. Cache scope also saves the resulting response in the object store. (If the response has a consumable payload, it does not cache the response).

    • Where there is a cache hit, the caching strategy uses a DefaultResponseGenerator to generate a response that combines data from both the new request and the cached response. (If the generated response has a consumable payload, it does not cache the response.)

  5. Cache scope pushes the response out into the parent flow for continued processing.
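
A cache scope that relies on this default behavior needs no caching-strategy reference at all; a minimal sketch (the nested element is a placeholder for your own message processors) looks roughly like this:

    <ee:cache doc:name="Cache">
        <!-- Any message processors; their combined output is cached by the default strategy -->
        <some-nested-element/>
    </ee:cache>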

Adding and Configuring a Cache Scope

STUDIO Visual Editor

  1. Drag and drop the cache scope icon from the palette into a flow on your canvas.

    Studio_Cache_Flow
  2. Drag one or more building blocks from the palette into the cache scope to build a chain of processors to which Mule will apply the caching strategy. A cache scope can contain any number of message processors.

  3. Open the cache scope’s Scope Properties panel, then configure the fields per the table below.

    Studio_CachePP
Field Value Description XML

Display Name

Cache

Customize to display a unique name for the scope in your application.

doc:name="Cache"

Caching strategy reference

Use Default caching strategy

(Default) Select if you want the cache scope to follow Mule’s default caching strategy.

N/A

Reference to a strategy

Select to configure the cache scope to follow a global caching strategy that you have created; select the global caching strategy from the drop-down menu or create one by clicking add.

cachingStrategy-ref="Caching_Strategy"

Filter

Process all messages

(Default) Select if you want the cache scope to execute the caching strategy for all messages that enter the scope.

N/A

Filter messages using an expression

Select if you want the cache scope to execute the caching strategy ONLY for messages that match the expression(s) defined in this field.

If the message matches the expression(s), Mule executes the caching strategy.

If the message does not match the expression(s), Mule processes the message through all message processors within the cache scope; Mule never saves or offers cached responses.

filterExpression="#[user.isPremium()]"

Filter messages using a global filter

Select if you want the cache scope to execute the caching strategy only for messages that successfully pass through the designated global filter.

If the message passes through the filter, Mule executes the caching strategy.

If the message fails to pass through the filter, Mule processes the message through all message processors within the cache scope; Mule never saves or offers cached responses.

filter-ref="MyGlobalFilter"

XML Editor or Standalone

  1. Add an ee:cache element to your flow at the point where you want to initiate a cache processing block. Refer to the code sample below.

  2. Optionally configure the scope according to the tables below.

    Element Description

    ee:cache

    Use to create a block of message processors that process a message, deliver the output to the parent flow, and cache the response for reuse (according to the rules of the caching strategy).

    Element Attribute Default Value Description

    doc:name

    Cache

    Customize to display a unique name for the cache scope in your application.

    Note: Attribute not required in Mule Standalone configuration.

    filterExpression

    (Optional) Specify one or more expressions against which the cache scope should evaluate the message to determine whether the caching strategy should be executed.

    filter-ref

    (Optional) Specify the name of a filtering strategy that you have defined as a global element. This attribute is mutually exclusive with filterExpression.

    cachingStrategy-ref

    (Optional) Specify the name of the global caching strategy that you have defined as a global element. If no cachingStrategy-ref is defined, Mule uses the default caching strategy.

  3. Add nested elements beneath your ee:cache element to define what processing should occur within the scope. The cache scope can contain any number of message processors as well as references to child flows.

    <ee:cache doc:name="Cache" filter-ref="Expression" cachingStrategy-ref="Caching_Strategy">
        <some-nested-element/>
        <some-other-nested-element/>
    </ee:cache>

Creating a Global Caching Strategy

Create a global caching strategy to customize some of the activities that your cache scopes perform.

For example, a cache scope that processes messages with large payloads - which, in turn, results in large cached responses in the InMemoryObjectStore - may quickly exhaust memory storage and slow the processing performance of your flow. In such a case, you may wish to create a global caching strategy that stores cached responses in a different type of object store and prevents memory exhaustion.
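
For instance, a global caching strategy along these lines moves cached responses out of memory and into a persistent managed store. This is only a sketch: the store name, entry limit, and timing values are assumptions you should adjust for your application:

    <ee:object-store-caching-strategy name="Persistent_Caching_Strategy" doc:name="Caching Strategy">
        <managed-store storeName="cachedFlightResponses" persistent="true"
                       maxEntries="500" entryTTL="600000" expirationInterval="60000"/>
    </ee:object-store-caching-strategy>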

STUDIO Visual Editor

  1. In the Scope Properties panel, click add next to the Reference to a strategy field.

  2. Configure the fields in the Global Element Properties panel that appears according to the tables below. The only required field is Name.

    Studio_GlobalCachingStrategy
    Field Value Description XML

    Name

    Caching_Strategy

    Customize to create a unique name for your global caching strategy.

    name="Caching_Strategy"

    Object Store

    (Optional) Configure an object store in which Mule will store all of the scope’s cached responses. Refer to the Configuring an Object Store for Cache section below for configuration specifics. Unless otherwise configured, Mule stores all cached responses in an InMemoryObjectStore by default.

    <custom-object-store>

    <in-memory-store>

    <managed-store>

    <simple-text-file-store>

    Event Key

    Default

    (Default) Mule utilizes an MD5KeyGenerator and an MD5 digest to generate a key. Use when you have objects that return the same MD5 hashcode for instances that represent the same value, such as the String class.

    N/A

    Key Expression

    (Optional) Enter an expression that Mule should use to generate a key. Use when request classes do not return the same MD5 hashcode for objects that represent the same value.

    keyGenerationExpression="#[some.expression]"

    Key Generator

    (Optional) Identify a custom-built Spring bean that generates a key. Use when request classes do not return the same MD5 hashcode for objects that represent the same value. If you have not created any custom key generators, the Key Generator drop-down box will be empty. Click add next to the field to create one.

    keyGenerator-ref="Bean"

  3. Optionally, click the Advanced tab in the Global Element Properties panel and configure further, if needed, according to the tables below. (A sample of the resulting XML configuration follows this list.)

    Studio_Cache_Global2
    Field Value Description XML

    Response Generator

    Specify the name of a Response Generator that will direct the cache strategy to use a custom-built Spring bean to generate a response that combines data from both the new request and the cached response. Click add next to the field to create a new Spring bean for your caching strategy to reference.

    responseGenerator-ref="Bean1"

    Consumable Message Filter

    Specify the name of a Consumable Message Filter to direct the cache strategy to use a custom-built Spring bean to detect whether a message contains a consumable payload. Click add next to the field to create a new Spring bean for your caching strategy to reference.

    consumableFilter-ref="Bean2"

    Event Copy Strategy

    Simple event copy strategy (data is immutable)

    Data is either immutable, like a String, or the Mule flow has not mutated the data. The payload that Mule caches is the same as that returned by the flow. Every generated response will contain the same payload.

    Serializable event copy strategy (data is mutable)

    Data is mutable or the Mule flow has mutated the data. The payload that Mule caches is not the same as that returned by the flow, which has been serialized/deserialized in order to create a new copy of the object. Every generated response will contain a new payload.

    <ee:serializable-event-copy-strategy/>
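
Putting these fields together, a customized global caching strategy might look roughly like the following in XML. The key expression, store settings, and choice of copy strategy shown here are illustrative assumptions, not required values:

    <ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy"
                                      keyGenerationExpression="#[message.inboundProperties['requestId']]">
        <in-memory-store name="myCachingStrategyStore" maxEntries="100" entryTTL="60000" expirationInterval="10000"/>
        <ee:serializable-event-copy-strategy/>
    </ee:object-store-caching-strategy>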

Configuring an Object Store for Cache

By default, Mule stores all cached responses in an InMemoryObjectStore. Create a global caching strategy and define a new object store if you want to customize the way Mule stores cached responses.

Object Store Description

custom-object-store

Create a custom class to instruct Mule where and how to store cached responses.

in-memory-store

Configure the following settings for an object store that saves cached responses in the system memory:

  • store name

  • maximum number of entries (i.e. cached responses)

  • the "life span" of a cached response within the object store (i.e. time to live)

  • the expiration interval between polls for expired cached responses

managed-store

Configure the following settings for an object store that saves cached responses in a place defined by ListableObjectStore:

  • store name

  • persistence of cached response (true/false)

  • maximum number of entries (i.e. cached responses)

  • the "life span" of a cached response within the object store (i.e time to live)

  • the expiration interval between polls for expired cached responses

simple-text-file-store

Configure the following settings for an object store that saves cached responses in a file:

  • store name

  • maximum number of entries (i.e. cached responses)

  • the "life span" of a cached response within the object store (i.e. time to live)

  • the expiration interval between polls for expired cached responses

  • the name and location of the file in which the object store saves cached responses

Configure the settings of your new object store. If you selected a custom-object-store, select or write a class and Spring property to define the object store. Configure the settings for all other object stores as described in the table below.

Field or Checkbox XML Attribute Instructions

Store Name

name (for in-memory, simple-text)

storeName (for managed)

Enter a unique name for your object store.

Persistent

persistent="true"

Check to ensure that the object store saves cached responses in persistent storage. Default is false.

Max Entries

maxEntries

Enter an integer to limit the number of cached responses the object store will save. When it reaches the maximum number of entries, the object store expunges the cached responses, trimming the first entries (first in, first out) and those which have exceeded their time to live.

Entry TTL

entryTTL

(Time To Live) Enter an integer to indicate the number of milliseconds that a cached response has to live in the object store before it is expunged.

Expiration Interval

expirationInterval

Enter an integer to indicate, in milliseconds, the frequency with which the object store checks for cached response events it should expunge. For example, if you enter “1000”, the object store reviews all cached response events every one thousand milliseconds to see which ones have exceeded their Time To Live and should be expunged.

Directory

directory

Enter the path of the file in which the object store saves cached responses.
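
As an example of the settings above, the sketch below defines a caching strategy backed by a simple-text-file-store. The store name, entry limit, timing values, and directory path are assumptions for illustration only:

    <ee:object-store-caching-strategy name="File_Caching_Strategy" doc:name="Caching Strategy">
        <simple-text-file-store name="fileCacheStore" maxEntries="200"
                                entryTTL="3600000" expirationInterval="60000"
                                directory="/tmp/mule-cache-store"/>
    </ee:object-store-caching-strategy>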

Example

The example that follows demonstrates the power of the cache scope with a Fibonacci function. The Fibonacci sequence is a series of numbers in which the next number in the series is always the sum of the two numbers preceding it.

In this example, the Mule flow receives and performs two tasks for each request:

  1. Executes, and returns the answer to, the Fibonacci equation (see below) using a number (n) provided by the caller: F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1

  2. Records and returns the cost of the calculation, wherein each individual invocation of a calculation task (i.e. add two numbers in the sequence) adds 1 to the cost

    cache_flow

    Note that this example requires FibonacciResponseGenerator.java

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core"
      xmlns:spring="http://www.springframework.org/schema/beans"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:vm="http://www.mulesoft.org/schema/mule/vm"
      xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
      version="EE-3.3.0"
      xsi:schemaLocation="

          http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd

          http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd

          http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd

          http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd

          http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd">

    <configuration>
        <expression-language>
            <global-functions>
                def fibonacciRequest(n, cached)
                {
                    import org.mule.DefaultMuleMessage;
                    import org.mule.RequestContext;

                    request = new DefaultMuleMessage("Fibonacci: " + n, app.registry['_muleContext']);

                    request.setOutboundProperty("n", Integer.toString(n));

                    if (!cached)
                    {
                        request.setOutboundProperty("nocache", true);
                    }

                    RequestContext.getEventContext().sendEvent(request, "vm://fibonacci");
                }
            </global-functions>
        </expression-language>
    </configuration>

    <spring:bean id="responseGenerator" class="com.mulesoft.mule.cache.FibonacciResponseGenerator"/>

    <ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy" keyGenerationExpression="#[message.inboundProperties['n']]" responseGenerator-ref="responseGenerator"/>

    <vm:connector name="vmConnector">
        <dispatcher-threading-profile maxThreadsActive="200"/>
    </vm:connector>


    <flow name="cache-exampleFlow1" doc:name="cache-exampleFlow1">
        <http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" path="fibonacci" doc:name="HTTP"/>
        <message-filter doc:name="Filter favicon">
            <not-filter>
                <wildcard-filter pattern="/favicon.ico" caseSensitive="true"/>
            </not-filter>
        </message-filter>
        <choice doc:name="Choice">
            <when expression="message.inboundProperties['n'] &lt; 20">
                <flow-ref name="calculateFibonacci"/>
                <expression-component>payload= "Fibonacci(" + message.inboundProperties['n'] + ") = " + payload +"\nCOST: " + message.outboundProperties['cost']</expression-component>
            </when>
            <otherwise>
                <expression-component>payload= "ERROR: n must be less than 20"</expression-component>
            </otherwise>
        </choice>
    </flow>

    <flow name="calculateFibonacci">
        <vm:inbound-endpoint path="fibonacci" exchange-pattern="request-response"/>
        <ee:cache cachingStrategy-ref="Caching_Strategy"
                  filterExpression="#[groovy:message.getInboundProperty('nocache') == null]" doc:name="Cache">
            <logger level="INFO" message="#[payload]"/>
            <expression-component><![CDATA[
                n = message.inboundProperties['n'];
                if (n < 2)
                {
                    payload = n;
                    message.outboundProperties["cost"] = 1;
                } else {
                    boolean cached = message.inboundProperties['nocache'] == null;
                    import org.mule.api.MuleMessage;
                    MuleMessage fib1 = fibonacciRequest(n-1, cached);
                    MuleMessage fib2 = fibonacciRequest(n-2, cached);
                    message.outboundProperties["cost"] = fib1.getInboundProperty("cost") + fib2.getInboundProperty("cost") + 1;
                    payload = Long.parseLong(fib1.getPayload()) + Long.parseLong(fib2.getPayload());
                }
            ]]>
            </expression-component>
        </ee:cache>
    </flow>
</mule>

If a call to the Fibonacci function has already been calculated and cached, the flow returns both the cached response and the cost of retrieving the cached response, which is 0. To demonstrate the number of invocations cache spares the function, this example includes the ability to force the flow to perform the full calculation by adding a nocache parameter to the request URL.

The following sequence illustrates a series of calls to the Fibonacci function. Notice that when the flow is able to return a cached value — because it has already performed an identical calculation — the cost returned is 0. When the flow is able to respond with a value it has calculated using another cached response (as in request-response C, below), the cost represents the difference between the cached response and the new request. (For example, if the Fibonacci function has already calculated and cached a request for n=10, and then receives a request for n=13, the cost to return the second response is 3.)

reqC

As this example illustrates, cache saves both time and processing load by reusing data it has already retrieved or calculated.