
JDBC Transport Reference

Mule Runtime Engine versions 3.5, 3.6, and 3.7 reached End of Life on or before January 25, 2020. For more information, contact your Customer Success Manager to determine how you can migrate to the latest Mule version.

As of Anypoint Studio May 2014 with 3.5.0 Runtime and its new Database connector, JDBC Transport is deprecated. Applications that use the JDBC transport continue to work with the 3.5.0 Enterprise Runtime, but the connector is no longer available in the Studio palette to add to applications.

Access the documentation for the [Database Connector] to take advantage of the newly-designed functionality.

Some features, noted below, are available only with the Mule Enterprise version of the JDBC transport. The JDBC transport lets you send and receive messages with a database using the JDBC protocol. Common usage includes retrieving, inserting, updating, and deleting database records, as well as invoking stored procedures, for example, to create new tables dynamically.

In code examples in this guide, spring-beans-current.xsd is a placeholder. To locate the correct version, see http://www.springframework.org/schema/beans/.

Transport Info

Transport: JDBC
Doc: [JavaDoc]
Inbound: supported
Outbound: supported
Request: supported
Transactions: supported (Local, XA)
Streaming: not supported
Retries: supported
MEPs: one-way, request-response
Default MEP: one-way
Maven Artifact: org.mule.transport:mule-transport-jdbc

Namespace and Syntax

XML namespace (Community Version):

xmlns:jdbc="http://www.mulesoft.org/schema/mule/jdbc"

XML namespace (Enterprise Version):

xmlns:jdbc="http://www.mulesoft.org/schema/mule/ee/jdbc"

XML schema location (Community Version)

http://www.mulesoft.org/schema/mule/jdbc http://www.mulesoft.org/schema/mule/jdbc/3.5/mule-jdbc.xsd

XML schema location (Enterprise Version)

http://www.mulesoft.org/schema/mule/ee/jdbc http://www.mulesoft.org/schema/mule/ee/jdbc/3.5/mule-jdbc-ee.xsd

For Community or Enterprise, if you create the Spring bean directly in the Mule configuration file, you must include the following namespaces:

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:spring="http://www.springframework.org/schema/beans"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xmlns:util="http://www.springframework.org/schema/util"
       xmlns:jdbc="http://www.mulesoft.org/schema/mule/jdbc"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
       http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-2.5.xsd
       http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd
       http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/3.5/mule.xsd
       http://www.mulesoft.org/schema/mule/jdbc http://www.mulesoft.org/schema/mule/jdbc/3.5/mule-jdbc.xsd">
In these code examples, spring-beans-current.xsd is a placeholder. To locate the correct version, see http://www.springframework.org/schema/beans/.

Connector Syntax (Community Version)

<spring:bean id="jdbcDataSource" class="org.enhydra.jdbc.standard.StandardDataSource" destroy-method="shutdown">
    <spring:property name="driverName" value="org.apache.derby.jdbc.EmbeddedDriver"/>
    <spring:property name="url" value="jdbc:derby:muleEmbeddedDB;create=true"/>
</spring:bean>

<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource" pollingFrequency="10000"
                queryRunner-ref="queryRunner" queryTimeout="10" resultSetHandler-ref="resultSetHandler"
                transactionPerMessage="true"/>

Connector Syntax (Enterprise Version)

<spring:bean id="jdbcDataSource" class="org.enhydra.jdbc.standard.StandardDataSource" destroy-method="shutdown">
    <spring:property name="driverName" value="org.apache.derby.jdbc.EmbeddedDriver"/>
    <spring:property name="url" value="jdbc:derby:muleEmbeddedDB;create=true"/>
</spring:bean>

<jdbc:connector name="jdbcEeConnector" pollingFrequency="1000" dataSource-ref="jdbcDataSource"
                queryRunner-ref="queryRunner" queryTimeout="10" resultSetHandler-ref="resultSetHandler"
                transactionPerMessage="true">
    <jdbc:ackSqlCommandExecutorFactory ref="ackSqlCommandExecutorFactory"/>
    <jdbc:sqlCommandRetryPolicyFactory ref="sqlCommandRetryPolicyFactory"/>
    <jdbc:query key="myQuery" value="select * from table"/>
    <jdbc:sqlCommandExecutorFactory ref="sqlCommandExecutorFactory"></jdbc:sqlCommandExecutorFactory>
    <jdbc:sqlStatementStrategyFactory ref="sqlStatementStrategyFactory"/>
</jdbc:connector>

Endpoint Syntax

Inbound Endpoints

You can define your endpoints in two different ways:

  1. Prefixed endpoint (valid for both Community and Enterprise JDBC endpoints)

    <jdbc:inbound-endpoint queryKey="selectQuery" name="jdbcInbound" pollingFrequency="10000" queryTimeout="10"
                           connector-ref="jdbcConnector" exchange-pattern="one-way">
        <jdbc:transaction action="ALWAYS_BEGIN" />
    </jdbc:inbound-endpoint>
  2. Non-prefixed URI:

    <inbound-endpoint address="jdbc://getTest?type=1"/>

Outbound Endpoints

  1. Prefixed endpoint (valid for both CE and EE jdbc endpoints)

    <jdbc:outbound-endpoint queryKey="selectCount" exchange-pattern="one-way" connector-ref="jdbcConnector" queryTimeout="10" >
        <jdbc:transaction action="ALWAYS_BEGIN"/>
    </jdbc:outbound-endpoint>
  2. Non-prefixed URI:

    <outbound-endpoint address="jdbc://writeTest?type=2"/>

Considerations

Using the JDBC transport is a good idea if you don’t already have a database abstraction layer defined for your application. It saves you the trouble of writing your own database client code and is more portable if you decide to change databases in the future. If your application already uses a database abstraction layer, it is usually preferable to use that instead of the JDBC transport.

Features

The Mule Enterprise JDBC Transport provides key functionality, performance improvements, transformers, and examples not available in the Mule Community release. The following table summarizes the feature differences.

[Inbound SELECT Queries]: Retrieve records using a SQL SELECT statement configured on inbound endpoints. (Community and Enterprise)

[Large Dataset Retrieval]: Enables retrieval of arbitrarily large datasets by consuming records in smaller batches. (Enterprise only)

[Acknowledgement Statements]: Supports ACK SQL statements that update the source or another table after a record is read. (Community and Enterprise)

[Basic Insert/Update/Delete Statements]: Individual SQL INSERT, UPDATE, and DELETE queries specified on outbound endpoints. One statement is executed at a time. (Community and Enterprise)

[Batch Insert/Update/Delete Statements]: Support for JDBC batch INSERT, UPDATE, and DELETE statements, so that many statements can be executed together. (Enterprise only)

[Advanced JDBC-Related Transformers]: XML and CSV transformers for easily converting to and from datasets in these common formats. (Enterprise only)

[Outbound SELECT Queries]: Retrieve records using a SQL SELECT statement configured on outbound endpoints. Supports synchronous queries with dynamic runtime parameters. (Community and Enterprise)

[Outbound Stored Procedure Support - Basic]: Ability to invoke stored procedures on outbound endpoints. Supports IN parameters but not OUT parameters. (Community and Enterprise)

[Outbound Stored Procedure Support - Advanced]: Same as Basic, but includes both IN and OUT parameter support. OUT parameters can be simple data types or cursors. (Enterprise only)

[Unnamed Queries]: Queries that can be invoked programmatically from within components or other Java code. This is the most flexible option, but also requires writing code. (Community and Enterprise)

[Flexible Data Source Configuration]: Support for configuring data sources through JNDI, XAPool, or Spring. (Community and Enterprise)

[Transactions]: Support for transactions via the underlying transaction manager. (Community and Enterprise)

Within this Features section, items marked Enterprise only are available only in the Enterprise version of Mule.

Inbound SELECT Queries

Inbound SELECT queries are queries that are executed periodically (according to the pollingFrequency set on the connector).

Here is an example:

<spring:bean id="jdbcDataSource" class="org.enhydra.jdbc.standard.StandardDataSource" destroy-method="shutdown">
  <spring:property name="driverName" value="oracle.jdbc.driver.OracleDriver"/>
  <spring:property name="url" value="jdbc:oracle:thin:user/pass@host:1521:db"/>
</spring:bean>
...
<jdbc:connector name="jdbcConnector" pollingFrequency="10000" dataSource-ref="jdbcDataSource"> ❷
        <jdbc:query key="selectLoadedMules"
                    value="SELECT ID, MULE_NAME, RANCH, COLOR, WEIGHT, AGE from mule_source"/>
</jdbc:connector>
...
    <flow name="AllMules">
        <jdbc:inbound-endpoint queryKey="selectLoadedMules" exchange-pattern="request-response"/> ❶
...
    </flow>
...

In this example, the selectLoadedMules query ❶ is invoked every 10 seconds (pollingFrequency=10000 ms) ❷. Each record from the result set is converted into a Map (consisting of column/value pairs).

Inbound SELECT queries are limited because (1) generally, they cannot be called synchronously (unnamed queries are an exception), and (2) they do not support runtime parameters.

Large Dataset Retrieval (Enterprise)

Overview

Large dataset retrieval is a strategy for retrieving large datasets by fetching records in smaller, more manageable batches. Mule Enterprise provides the key components and transformers needed to implement a wide range of these strategies.

When to Use It

  • When the dataset to be retrieved is large enough to overwhelm memory and connection resources.

  • When preserving the order of messages is important.

  • When resumable processing is desired (that is, retrieval of the dataset can pick up where it left off, even after service interruption).

  • When load balancing the data retrieval among clustered Mule nodes.

How it Works

Large dataset retrieval does not use conventional inbound SELECT queries to retrieve data. Instead, it uses a Batch Manager component to compute ID ranges for the next batch of records to be retrieved. An outbound SELECT query uses this range to actually fetch the records. The Batch Manager also controls batch processing flow to make sure that it does not process the next batch until the previous batch has finished processing.

Here is an example:

<spring:bean id="idStore" class="com.mulesoft.mule.transport.jdbc.util.IdStore"> ❶
     <spring:property name="fileName" value="/tmp/large-dataset.txt"/>
</spring:bean>
<spring:bean id="seqBatchManager" class="com.mulesoft.mule.transport.jdbc.components.BatchManager"> ❷
    <spring:property name="idStore" ref="idStore"/>
    <spring:property name="batchSize" value="10"/>
    <spring:property name="startingPointForNextBatch" value="0"/>
</spring:bean>
<spring:bean id="noArgsWrapper"
             class="com.mulesoft.mule.transport.jdbc.components.NoArgsWrapper"> ❸
    <spring:property name="batchManager" ref="seqBatchManager"/>
</spring:bean>
<flow name="LargeDataSet">
        <vm:inbound-endpoint exchange-pattern="one-way" path="vm://next.batch"/>
        <spring-object bean="noArgsWrapper" /> ❹
...

First you set up the file that holds the starting point ID for the next batch of records ❶. Next you define your BatchManager and set the idStore, batchSize, and starting point ❷. Then you define a 'noArgsWrapper' Spring bean and set a reference to the batch manager ❸. ❹ is where you define the component that gets called after the inbound endpoint triggers. Your outbound endpoints can use

#[map-payload:lowerId]

and

#[map-payload:upperId]

to reference a batch of database rows.
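
For illustration, here is a minimal sketch of the outbound side; the query name, columns, and table are assumptions, but the #[map-payload:lowerId] and #[map-payload:upperId] expressions are bound from the Map produced by the Batch Manager:

<jdbc:query key="selectNextBatch"
            value="SELECT ID, MULE_NAME, RANCH FROM mule_source
                   WHERE ID BETWEEN #[map-payload:lowerId] AND #[map-payload:upperId]
                   ORDER BY ID"/>
...
<jdbc:outbound-endpoint queryKey="selectNextBatch" exchange-pattern="request-response"/>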

Important Limitations

Large dataset retrieval requires that:

  1. The source data contains a unique, sequential numeric ID. Records should also be fetched in ascending order with respect to this ID.

  2. There are no large gaps in these IDs (that is, no gaps larger than the configured batch size).

In Combination with Batch Inserts

Combining large dataset retrieval with batch inserts can support simple but powerful ETL use cases.

Acknowledgment (ACK) Statements

ACK statements are optional SQL statements that are paired with inbound SELECT queries. When an inbound SELECT query is invoked by Mule, the ACK statement is invoked for each record returned by the query. Typically, the ACK statement is an UPDATE, INSERT, or DELETE.

An ACK statement would be configured as follows:

...
<jdbc:connector name="jdbcConnector" pollingFrequency="10000" dataSource-ref="jdbcDataSource">
    <jdbc:query key="selectLoadedMules"
                value="SELECT ID, PROCESSED from mule_source WHERE PROCESSED is null order by ID"/>
    <jdbc:query key="selectLoadedMules.ack"
                value="update mule_source set PROCESSED='Y'  where ID = #[map-payload:ID] "/>
</jdbc:connector>
...

Notice the required convention of appending an ".ack" extension to the query name. This convention lets Mule know which inbound SELECT query to pair with the ACK statement.

Also, note that the ACK statement supports parameters. These parameters are bound to any of the column values from the inbound SELECT query (such as #[map-payload:ID] in the case above).

ACK statements are useful when you want an inbound SELECT query to retrieve records from a source table no more than once. Be careful, however, when using ACK statements with larger result sets. As mentioned earlier, an ACK statement gets issued for each record retrieved, and this can be very resource-intensive for even a modest number of records per second (> 100).

Basic Insert, Update, and Delete Statements

SQL INSERT, UPDATE, and DELETE statements are specified on outbound endpoints. These statements are typically configured with parameters, which are bound with values passed along to the outbound endpoint from an upstream component.

Basic statements execute just one statement at a time, as opposed to batch statements, which execute multiple statements at a time. Basic statements are appropriate for low-volume record processing (<20 records per second), while batch statements are appropriate for high-volume record processing (thousands of records per second).

Note that Mule processes JDBC statements differently depending upon the format of the data it receives:

  • If the message payload is a collection, Mule uses batch to process the statement

  • If the message payload is not a collection, Mule uses basic to process the statement

For example, if you use a DataMapper to feed an ArrayList into a JDBC endpoint in your application, Mule uses batch and executes your JDBC statement once for every item in the ArrayList that emerged from the DataMapper.

For example, when a message with a java.util.Map payload is sent to a basic insert/update/delete endpoint, the parameters in the statement are bound with corresponding entries in the Map. In the configuration below, if the message contains a Map payload with {ID=1,TYPE=1,DATA=hello,ACK=0}, the following insert is issued: `INSERT INTO TEST (ID,TYPE,DATA,ACK) values (1,1,'hello',0)`.

<jdbc:connector name="jdbcConnector" pollingFrequency="10000" dataSource-ref="jdbcDataSource">
    <jdbc:query key="outboundInsertStatement"
              value="INSERT INTO TEST (ID, TYPE, DATA, ACK) VALUES (#[map-payload:ID],
                    #[map-payload:TYPE],#[map-payload:DATA], #[map-payload:ACK])"/>
</jdbc:connector>
...
<flow name="ExampleFlow">
    <inbound-endpoint address="vm://doInsert"/>
    <jdbc:outbound-endpoint queryKey="outboundInsertStatement"/>
</flow>
...

Batch Insert, Update, and Delete Statements (Enterprise)

As mentioned above, batch statements represent a significant performance improvement over their basic counterparts. Records can be inserted at a rate of thousands per second with this feature.

Usage of batch INSERT, UPDATE, and DELETE statements is the same as for basic statements, except the payload sent to the VM endpoint should be a List of Maps, instead of just a single Map.

Batch Callable Statements are also supported. Usage is identical to Batch Insert/Update/Delete.
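
For illustration, here is a minimal sketch (the query key and VM path are assumptions) that reuses the insert query from the basic example above; because the payload arriving at the outbound endpoint is a List of Maps, Mule executes the inserts as a JDBC batch:

<jdbc:connector name="jdbcEeConnector" dataSource-ref="jdbcDataSource">
    <jdbc:query key="batchInsertStatement"
                value="INSERT INTO TEST (ID, TYPE, DATA, ACK) VALUES (#[map-payload:ID],
                      #[map-payload:TYPE], #[map-payload:DATA], #[map-payload:ACK])"/>
</jdbc:connector>
...
<flow name="BatchInsertFlow">
    <!-- The payload received here is assumed to be a java.util.List of Maps, one Map per record -->
    <inbound-endpoint address="vm://doBatchInsert"/>
    <jdbc:outbound-endpoint queryKey="batchInsertStatement"/>
</flow>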

Note that Mule processes JDBC statements differently depending upon the format of the data it receives:

  • If the message payload is a collection, Mule uses batch to process the statement

  • If the message payload is not a collection, Mule uses basic to process the statement

For example, if you use a DataMapper to feed an ArrayList into a JDBC endpoint in your application, Mule uses batch and executes your JDBC statement once for every item in the ArrayList that emerged from the DataMapper.

JDBC Transformers (Enterprise)

Common integration use cases involve moving CSV and XML data from files to databases and back. This section describes the transformers that perform these actions. These transformers are available in Mule Enterprise only.

XML-JDBC Transformer

The XML Transformer converts between XML and JDBC-format Maps. The JDBC-format Maps can be used by JDBC outbound endpoints (for select, insert, update, or delete operations).

Transformer Details:

Name: XML → Maps
Class: com.mulesoft.mule.transport.jdbc.transformers.XMLToMapsTransformer
Input: java.lang.String (XML)
Output: java.util.List (a list of Maps; each Map corresponds to a "record" in the XML)

Name: Maps → XML
Class: com.mulesoft.mule.transport.jdbc.transformers.MapsToXMLTransformer
Input: java.util.List (a list of Maps; each Map is converted into a "record" in the XML)
Output: java.lang.String (XML)

Also, the XML message payload (passed in or out as a String) must adhere to a particular schema format:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
  <xs:element name="table">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="record"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="record">
    <xs:complexType>
      <xs:sequence>
        <xs:element maxOccurs="unbounded" ref="field"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="field">
    <xs:complexType>
      <xs:simpleContent>
        <xs:extension base="xs:NMTOKEN">
          <xs:attribute name="name" use="required" type="xs:NCName"/>
          <xs:attribute name="type" use="required" type="xs:NCName"/>
        </xs:extension>
      </xs:simpleContent>
    </xs:complexType>
  </xs:element>
</xs:schema>

Here is an example of a valid XML instance:

<table>
    <record>
  <field name="id" type="java.math.BigDecimal">0</field>
  <field name="name" type="java.lang.String">hello</field>
    </record>
</table>

The transformer converts each "record" element to a Map of column/value pairs using "fields". The collection of Maps is returned in a List.

The following configuration returns the matching rows in XML format when you go to http://localhost:8080/first20 in your browser:

<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
        <jdbc:query key="selectLoadedMules"
                    value="SELECT ID, PROCESSED from mule_source WHERE PROCESSED is null order by ID"/>
        <jdbc:query key="selectLoadedMules.ack"
          value="update mule_source set PROCESSED='Y'  where ID = #[map-payload:ID]"/>
    </jdbc:connector>

    <jdbc:maps-to-xml-transformer name="XMLResponseTransformer"/>

    <message-properties-transformer name="XMLContentTransformer">
        <add-message-property key="Content-Type" value="text/xml"/>
    </message-properties-transformer>

    <flow name="ReportModel">
            <inbound-endpoint address="http://localhost:8080/first20" responseTransformer-refs="XMLResponseTransformer XMLContentTransformer" exchange-pattern="request-response"/>
            <jdbc:outbound-endpoint queryKey="selectLoadedMules" exchange-pattern="request-response"/>
    </flow>

CSV-JDBC Transformer

The CSV Transformer converts between CSV data and JDBC-format Maps. The JDBC-format Maps can be used by JDBC outbound endpoints (for select, insert, update, or delete operations).

Transformer Details:

Name: CSV → Maps
Class: com.mulesoft.mule.transport.jdbc.transformers.CSVToMapsTransformer
Input: java.lang.String (CSV data)
Output: java.util.List (a list of Maps; each Map corresponds to a "record" in the CSV)

Name: Maps → CSV
Class: com.mulesoft.mule.transport.jdbc.transformers.MapsToCSVTransformer
Input: java.util.List (a list of Maps; each Map corresponds to a "record" in the CSV)
Output: java.lang.String (CSV data)

The following table summarizes the properties that can be set on this transformer:

delimiter: The delimiter character used in the CSV file. Defaults to comma.

qualifier: The qualifier character used in the CSV file. Used to indicate that a field's text contains the delimiter character. Defaults to double quote.

ignoreFirstRecord: Instructs the transformer to ignore the first record. Use this if your first row is a list of column names. Defaults to false.

mappingFile: Location of the mapping file. Required. Can be either a physical file location or a classpath resource name. The DTD for the mapping file is at http://flatpack.sourceforge.net/flatpack.dtd. For examples of this format, see http://flatpack.sourceforge.net/documentation/index.html.

This configuration loads a CSV file into the mule_source table of a database:

<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
    <jdbc:query key="commitLoadedMules"
                value="insert into mule_source
                (ID, MULE_NAME, RANCH, COLOR, WEIGHT, AGE)
                values
                (#[map-payload:ID;int;in], #[map-payload:MULE_NAME], #[map-payload:RANCH], #[map-payload:COLOR], #[map-payload:WEIGHT;int;in], #[map-payload:AGE;int;in])"/>
</jdbc:connector>

<file:connector name="fileConnector" autoDelete="false" pollingFrequency="100000000"/>
<file:endpoint path="/tmp/data" name="get" connector-ref="fileConnector"/>
<custom-transformer name="ObjectToString" class="org.mule.transformer.simple.ObjectToString"/>
<jdbc:csv-to-maps-transformer name="CSV2Maps" delimiter="," mappingFile="/tmp/mules-csv-format.xml" ignoreFirstRecord="true"/>

<flow name="CSVLoader">
    <file:inbound-endpoint ref="get" transformer-refs="ObjectToString CSV2Maps">
        <file:filename-wildcard-filter pattern="*.csv"/>
    </file:inbound-endpoint>
    <echo-component/>
    <jdbc:outbound-endpoint queryKey="commitLoadedMules"/>
</flow>
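
The mapping file referenced above (/tmp/mules-csv-format.xml) is a FlatPack column map. Here is a minimal sketch, assuming the FlatPack delimited-file mapping format (consult the FlatPack DTD for the authoritative syntax); the column names must match the #[map-payload:...] keys used in the insert query:

<!DOCTYPE PZMAP SYSTEM "flatpack.dtd">
<PZMAP>
    <COLUMN name="ID"/>
    <COLUMN name="MULE_NAME"/>
    <COLUMN name="RANCH"/>
    <COLUMN name="COLOR"/>
    <COLUMN name="WEIGHT"/>
    <COLUMN name="AGE"/>
</PZMAP>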

Outbound SELECT Queries

An inbound SELECT query is invoked on an inbound endpoint according to a specified polling frequency. A major improvement to the inbound SELECT query is the outbound SELECT query, which can be invoked on an outbound endpoint. As a result, the outbound SELECT query can do many things that the inbound SELECT query cannot, such as:

  1. Support synchronous invocation of queries. For example, you can implement the classic use case of a web page that serves content from a database using an HTTP inbound endpoint and an outbound SELECT query endpoint.

  2. Allow parameters so that values can be bound to the query at runtime. This requires that the message contain a Map payload whose key names match the parameter names. For example, the following configuration could be used to invoke an outbound SELECT query:

<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
        <jdbc:query key="selectMules"
                    value="select * from mule_source where ID between 0 and #[header:inbound:max]"/>
</jdbc:connector>
<jdbc:maps-to-xml-transformer name="XMLResponseTransformer"/>
<message-properties-transformer name="XMLContentTransformer">
    <add-message-property key="Content-Type" value="text/xml"/>
</message-properties-transformer>
<flow name="ExampleModel">
    <inbound-endpoint address="http://localhost:8080/getMules" exchange-pattern="request-response" responseTransformer-refs="XMLResponseTransformer XMLContentTransformer"/>
    <jdbc:outbound-endpoint queryKey="selectMules" exchange-pattern="request-response"/>
</flow>

In this scenario, if the URL http://localhost:8080/getMules?max=3 is hit, then the following query executes:

SELECT * FROM mule_source WHERE ID between 0 and 3

The database rows are transformed into XML which you see in your browser.

Outbound Stored Procedure Support - Basic

Stored procedures are supported on outbound endpoints in Mule. Like any other query, stored procedure queries can be listed in the queries map. Following is an example of how stored procedure queries could be defined:

<jdbc:connector name="jdbcConnector" pollingFrequency="10000" dataSource-ref="jdbcDataSource">
    <jdbc:query key="storedProc" value="CALL addField()"/>
</jdbc:connector>

To denote that we are going to execute a stored procedure and not a simple SQL query, we must start the query with the text CALL, followed by the name of the stored procedure.

Parameters to stored procedures can be forwarded by either passing static parameters in the configuration or using the same syntax as for SQL queries (see "Passing in Parameters" below). For example:

<jdbc:query key="storedProc1" value="CALL addFieldWithParams(24)"/>
<jdbc:query key="storedProc2" value="CALL addFieldWithParams(#[map-payload:value])"/>

<flow name="ExampleModel">
    <inbound-endpoint address="http://localhost:8080/get" exchange-pattern="request-response"/>
    <jdbc:outbound-endpoint queryKey="storedProc1" exchange-pattern="request-response"/>
</flow>

<flow name="ExampleModel">
    <inbound-endpoint address="http://localhost:8080/get2" exchange-pattern="request-response"/>
    <jdbc:outbound-endpoint address="jdbc://storedProc2?value=25"/>
</flow>

If you do not want to poll the database, you can write a stored procedure that uses HTTP to start a Mule flow. The stored procedure can be called from an Oracle trigger. If you take this approach, make sure the exchange pattern is 'one-way'. Otherwise, the trigger/transaction won’t commit until the HTTP post returns.
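
For example, the Mule side of such a setup might look like the following minimal sketch (the path and downstream processing are placeholders); the one-way exchange pattern lets the HTTP post return immediately so the database trigger can commit:

<flow name="triggeredFromStoredProcedure">
    <inbound-endpoint address="http://localhost:8080/dbTrigger" exchange-pattern="one-way"/>
    <!-- downstream processing of the posted data goes here -->
</flow>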

Note that stored procedures are only supported on outbound endpoints. If you want to set up a flow that calls a stored procedure at a regular interval, you can define a [Quartz] inbound endpoint and then define the stored procedure call in the outbound endpoint. For information on using Quartz to trigger flows, see the following [blog post].

Passing in Parameters

To pass in parameter values and get returned values to/from stored procedures or stored functions in Oracle, you declare the parameter name, direction, and type in the JDBC query key/value pairs on JDBC connectors using the following syntax:

Call #[<return parameter name>;<int | float | double | string | resultSet>;<out>] :=
<Oracle package name>.<stored procedure/function name>($PARAM1, $PARAM2, ...)

where $PARAMn is specified using the following syntax:

#[<parameter name>;<int | float | double | string | resultSet>;<in | out | inout>]

For example:

<jdbc:query key="SingleCursor"  value="call MULEPACK.TEST_CURSOR(#[mules;resultSet;out])"/>

This SQL statement calls a stored procedure TEST_CURSOR in the package of MULEPACK, specifying an out parameter whose name is "mules" of type java.sql.ResultSet.

Here is another example:

<jdbc:query key="itcCheckMsgProcessedOrNot"
value="call #[mules;int;out] := ITCPACK.CHECK_IF_MSG_IS_HANDLED_FNC(487568,#[mules1;string;out],
#[mules2;string;out],#[mules3;int;out],#[mules4;string;out])"/>

This SQL statement calls the stored function CHECK_IF_MSG_IS_HANDLED_FNC in the ITCPACK package, assigning its integer return value to the parameter named "mules" and specifying additional parameters; for example, "mules2" is an out string parameter.

Stored procedures and functions can only be called on JDBC outbound endpoints. Once the values are returned from the database, they are placed in a java.util.HashMap as key/value pairs. The keys are the parameter names (for example, "mules2"), and the values are the Java data values (Integer, String, and so on). This map becomes the payload of the MuleMessage that is returned to the caller or sent to the next endpoint, depending on the Mule configuration.

Outbound Stored Procedure Support - Advanced (Enterprise)

Mule Enterprise provides advanced stored procedure support for outbound endpoints beyond what is available in the Mule community release. This section describes the advanced support.

OUT Parameters

In Mule Enterprise, you can execute your stored procedures with out and inout scalar parameters. The syntax for such parameters is:

<jdbc:query key="storedProc1" value="CALL myProc(#[a], #[b;int;inout], #[c;string;out])"/>

You must specify the type of each output parameter (OUT, INOUT) and its data type (int, string, etc.). The result of such stored procedures is a map containing (out parameter name, value) entries.

Oracle Cursor Support

For Oracle databases only, an OUT parameter can return a cursor. The following example shows how this works.

If you want to handle the cursor as a java.sql.ResultSet, see the "cursorOutputAsResultSet" flow below, which uses the "MapLookup" transformer to return the ResultSet.

If you want to handle the cursor by fetching the java.sql.ResultSet to a collection of Map objects, see the "cursorOutputAsMaps" flow below, which uses both the "MapLookup" and "ResultSet2Maps" transformers to achieve this result.

<jdbc:connector name="jdbcConnector" pollingFrequency="1000" cursorTypeConstant="-10"
      dataSource-ref="jdbcDataSource">
    <jdbc:query key="SingleCursor"  value="call TEST_CURSOR(#[mules;resultSet;out])"/>
</jdbc:connector>

<custom-transformer class="org.mule.transformer.simple.MapLookup" name="MapLookup">
    <spring:property name="key" value="mules"/>
</custom-transformer>

<jdbc:resultset-to-maps-transformer name="ResultSet2Maps"/>

<flow name="SPModel">
    <vm:inbound-endpoint path="returns.maps" responseTransformer-refs="ResultSet2Maps MapLookup"/>
    <jdbc:outbound-endpoint queryKey="SingleCursor"/>
</flow>
<flow name="cursorOutputAsResultSet">
    <vm:inbound-endpoint  path="returns.resultset"  responseTransformer-refs="MapLookup"/>
    <jdbc:outbound-endpoint queryKey="SingleCursor"/>
</flow>

In the above example, note that it is also possible to call a function that returns a cursor ref. For example, if TEST_CURSOR2() returns a cursor ref, the following statement could be used to get that cursor as a ResultSet:

<jdbc:query key="SingleCursor"  value="call #[mules;resultSet;out] := TEST_CURSOR2()"/>
Important note on transactions: When calling stored procedures or functions that return cursors (ResultSet), it is recommended that you process the ResultSet within a transaction.

Unnamed Queries

SQL statements can also be executed without configuring queries in the Mule configuration file. For a given endpoint, the query to execute can be specified as the address of the URI.

// Inside a Mule component, execute an unnamed query by embedding the SQL in the endpoint URI
MuleMessage msg = eventContext.receiveEvent("jdbc://SELECT * FROM TEST", 0);

Flexible Data Source Configuration

You can use any JDBC data source library with the JDBC Connector. The "myDataSource" reference below refers to a DataSource bean created in Spring:

<jdbc:connector name="jdbcConnector" pollingFrequency="10000" dataSource-ref="myDataSource">
        ...
</jdbc:connector>

You can also create a JDBC connection pool so that you don’t create a new connection to the database for each message. You can easily create a pooled data source in Spring using [xapool]. The following example shows how to create the Spring bean right in the Mule configuration file.

<spring:bean id="pooledDS" class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
  <spring:property name="driverName" value="oracle.jdbc.driver.OracleDriver"/>
  <spring:property name="url" value="jdbc:oracle:thin:user/pass@host:1521:db"/>
  <spring:property name="user" value="USER" />
  <spring:property name="password" value="PWD" />
  <spring:property name="minCon" value="10" />
  <spring:property name="maxCon" value="100" />
</spring:bean>

If you need more control over the configuration of the pool, you can use the standard JDBC classes. For example, you could create the following bean in the Spring configuration file (you could also create them in the Mule configuration file by prefixing everything with the Spring namespace):

<bean id="c3p0DataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
        <property name="driverClass">
            <value>oracle.jdbc.driver.OracleDriver</value>
        </property>
        <property name="jdbcUrl">
            <value>jdbc:oracle:thin:@MyUrl:MySID</value>
        </property>

        <property name="user">
            <value>USER</value>
        </property>
        <property name="password">
            <value>PWD</value>
        </property>

        <property name="properties">
            <props>
                <prop key="c3p0.acquire_increment">5</prop>
                <prop key=" c3p0.idle_test_period">100</prop>
                <prop key="c3p0.max_size">100</prop>
                <prop key="c3p0.max_statements">1</prop>
                <prop key=" c3p0.min_size">10</prop>
                <prop key="user">USER</prop>
                <prop key="password">PWD</prop>
            </props>
        </property>
    </bean>

You could then reference the c3p0DataSource bean in your Mule configuration:

<connector name="C3p0Connector" className="org.mule.providers.jdbc.JdbcConnector">
  <properties>
    <container-property name="dataSource" reference="c3p0DataSource"/>
      <map name="queries">
        <property name="test1" value="select * from Tablel"/>
        <property name="test2" value="call testd(1)"/>
      </map>
  </properties>
</connector>

Or you could call it from your application as follows:

JdbcConnector jdbcConnector = (JdbcConnector) MuleServer.getMuleContext().getRegistry().lookupConnector("C3p0Connector");
ComboPooledDataSource datasource = (ComboPooledDataSource)jdbcConnector.getDataSource();
Connection connection = (Connection)datasource.getConnection();

String query = "select * from Table1"; //any query
Statement stat = connection.createStatement();
ResultSet rs = stat.executeQuery(query);

To retrieve the data source from a JNDI repository, you would configure the data source lookup as follows:

<spring:beans>
  <jee:jndi-lookup id="myDataSource" jndi-name="yourJndiName" environment-ref="jndiEnv" />
  <util:map id="jndiEnv">
    <spring:entry key="java.naming.factory.initial" value="yourJndiFactory" />
   </util:map>
</spring:beans>

Transactions

Transactions are supported on JDBC endpoints. See [Transaction Management] for details.

Usage

Copy your JDBC client jar to the <MULE_HOME>/lib/user directory of your installation.

If you want to include the JDBC transport in your configuration, these are the namespaces you need to define:

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:spring="http://www.springframework.org/schema/beans"
       xmlns:jdbc="http://www.mulesoft.org/schema/mule/jdbc"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
       http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/3.5/mule.xsd
       http://www.mulesoft.org/schema/mule/jdbc http://www.mulesoft.org/schema/mule/jdbc/3.5/mule-jdbc.xsd">
...

For the enterprise version of the JDBC transport:

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:spring="http://www.springframework.org/schema/beans"
      xmlns:jdbc="http://www.mulesoft.org/schema/mule/ee/jdbc"
      xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
       http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/3.5/mule.xsd
       http://www.mulesoft.org/schema/mule/ee/jdbc http://www.mulesoft.org/schema/mule/ee/jdbc/3.5/mule-jdbc-ee.xsd">
...

Then you need to define a connector:

<spring:bean id="jdbcDataSource" class="org.enhydra.jdbc.standard.StandardDataSource" destroy-method="shutdown">
    <spring:property name="driverName" value="org.apache.derby.jdbc.EmbeddedDriver"/>
    <spring:property name="url" value="jdbc:derby:muleEmbeddedDB;create=true"/>
</spring:bean>

<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource" pollingFrequency="10000"
                queryRunner-ref="queryRunner" queryTimeout="10" resultSetHandler-ref="resultSetHandler"
                transactionPerMessage="true"/>

Finally, you define an inbound or outbound endpoint.

  • Use an inbound endpoint if you want changes to your database to trigger a Mule flow.

  • Use an outbound endpoint to make changes to the database data or to return database data to an inbound endpoint, such as using an HTTP endpoint to display database data.

Endpoints

Inbound Endpoints

<jdbc:inbound-endpoint queryKey="selectQuery" name="jdbcInbound" pollingFrequency="10000" queryTimeout="10"
                       connector-ref="jdbcConnector" exchange-pattern="one-way">
    <jdbc:transaction action="ALWAYS_BEGIN" />
</jdbc:inbound-endpoint>

Outbound Endpoints

<jdbc:outbound-endpoint queryKey="selectCount" exchange-pattern="one-way" connector-ref="jdbcConnector" queryTimeout="10" >
    <jdbc:transaction action="ALWAYS_BEGIN"/>
</jdbc:outbound-endpoint>
If you are using Mule Enterprise edition, then you must use the EE version of the JDBC transport. Therefore, if you are migrating from CE to EE, update the namespace and schemaLocation declarations to the EE versions as described above.

Exchange Patterns

The one-way and request-response exchange patterns are supported. If an exchange pattern is not defined, 'one-way' is the default.

Polling Transport

The inbound endpoint for JDBC transport uses polling to look for new data. The default is to check every second, but it can be changed via the 'pollingFrequency' attribute on the connector.
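
For example, to poll once per minute instead of every second (a minimal sketch reusing the connector syntax shown earlier):

<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource" pollingFrequency="60000"/>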

Features Supported by this Module

Most standard transport features are supported for the JDBC transport: transactions, retry, expressions, etc. Streaming is not supported for the JDBC transport.

Example Configurations

The following example demonstrates how you would write rows in a database to their own files.

Writing Database Rows to their Own Files
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:spring="http://www.springframework.org/schema/beans"
       xmlns:jdbc="http://www.mulesoft.org/schema/mule/jdbc"
       xmlns:file="http://www.mulesoft.org/schema/mule/file"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
       http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/3.5/mule.xsd
       http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/3.5/mule-file.xsd
       http://www.mulesoft.org/schema/mule/jdbc http://www.mulesoft.org/schema/mule/jdbc/3.5/mule-jdbc.xsd">

    <!-- This placeholder bean lets you import the properties from the db.properties file. -->
    <spring:bean id="property-placeholder" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> ❶
        <spring:property name="location" value="classpath:db.properties"/>
    </spring:bean>

    <!-- This data source is used to connect to the database using the values loaded from the properties file -->
    <spring:bean id="jdbcDataSource"
        class="org.enhydra.jdbc.standard.StandardDataSource"
        destroy-method="shutdown">
        <spring:property name="driverName" value="${database.driver}"/> ❷
        <spring:property name="url" value="${database.connection}"/> ❸
    </spring:bean>

    <jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource" pollingFrequency="5000" transactionPerMessage="false"> ❹
        <jdbc:query key="read" value="SELECT id, type, data FROM test3 WHERE type=1"/> ❺
        <jdbc:query key="read.ack" value="UPDATE test3 SET type=2 WHERE id=#[map-payload:id]"/> ❻
    </jdbc:connector>

    <file:connector name="output" outputAppend="true" outputPattern="#[function:datestamp].txt" /> ❼

    <flow name="allDbRows">
        <jdbc:inbound-endpoint queryKey="read" connector-ref="jdbcConnector"/> ❽
        <object-to-string-transformer /> ❾
        <file:outbound-endpoint connector-ref="output" path="/tmp/rows"/> ❿
    </flow>
</mule>

The database authentication information is stored in a properties file named 'db.properties' ❶. For a MySQL database, the file would look similar to this:

database.driver=com.mysql.jdbc.Driver

database.connection=jdbc:mysql://localhost/test?user=<user>&password=<password>

The values in the properties file are used in ❷ and ❸ to configure the data source bean. The JDBC connector references the data source ❹ and defines a couple of queries (❺ and ❻) which the inbound endpoint uses. The 'read' query checks the database for rows whose 'type' column is set to 1. The 'read.ack' query is automatically run for every new record found and sets the 'type' column to 2 so the row is not picked up again by the inbound endpoint. A file connector is defined at ❼ to write each row found to a file with a date-stamped name. Next, the flow is defined, which calls the JDBC 'read' query on the inbound endpoint ❽. New database rows are then processed by the object-to-string transformer ❾ and finally written to the '/tmp/rows' directory ❿.

This example shows how to display database rows in a browser:

Display Database Row in a Browser
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:spring="http://www.springframework.org/schema/beans"
       xmlns:jdbc="http://www.mulesoft.org/schema/mule/ee/jdbc"
       xmlns:file="http://www.mulesoft.org/schema/mule/file"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
       http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/3.5/mule.xsd
       http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/3.5/mule-file.xsd
       http://www.mulesoft.org/schema/mule/ee/jdbc http://www.mulesoft.org/schema/mule/ee/jdbc/3.5/mule-jdbc-ee.xsd">

    <!-- This placeholder bean lets you import the properties from the db.properties file. -->
    <spring:bean id="property-placeholder" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <spring:property name="location" value="classpath:db.properties"/>
    </spring:bean>

    <!-- This data source connects to the database using the values loaded from the properties file -->
    <spring:bean id="jdbcDataSource"
        class="org.enhydra.jdbc.standard.StandardDataSource"
        destroy-method="shutdown">
        <spring:property name="driverName" value="${database.driver}"/>
        <spring:property name="url" value="${database.connection}"/>
    </spring:bean>
    <jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
        <jdbc:query key="selectRows"
                    value="select * from mule_source where ID between 0 and #[header:inbound:max]"/> ❶
    </jdbc:connector>
    <jdbc:maps-to-xml-transformer name="XMLResponseTransforer"/> ❷
    <message-properties-transformer name="XMLContentTransformer"> ❸
        <add-message-property key="Content-Type" value="text/xml"/>
    </message-properties-transformer>
    <flow name="ExampleModel">
        <inbound-endpoint address="http://localhost:8080/rows" exchange-pattern="request-response" responseTransformer-refs="XMLResponseTransformer XMLContentTransformer"/> ❹
        <jdbc:outbound-endpoint queryKey="selectRows" exchange-pattern="request-response"/> ❺
    </flow>
</mule>
In these code examples, spring-beans-current.xsd is a placeholder. To locate the correct version, see http://www.springframework.org/schema/beans/.

Configuration Reference (Community)

Connector

Table 1. Attributes of <connector…​>
pollingFrequency (long, optional): The delay in milliseconds between two subsequent polls of the database. This applies only to queries configured on inbound endpoints.

dataSource-ref (string, required): Reference to the JDBC DataSource object. This object is typically created using Spring. When using XA transactions, an XADataSource object must be provided.

queryRunner-ref (string, optional): Reference to the QueryRunner object, which is the object that actually runs the query. This object is typically created using Spring. Default is org.apache.commons.dbutils.QueryRunner.

resultSetHandler-ref (string, optional): Reference to the ResultSetHandler object, which determines how the java.sql.ResultSet is handled. This object is typically created using Spring. Default is org.apache.commons.dbutils.handlers.MapListHandler, which steps through the ResultSet and stores records as Map objects in a List.

transactionPerMessage (boolean, optional, default true): Whether each database record should be received in a separate transaction. If false, there is a single transaction for the entire result set.

queryTimeout (integer, optional): The timeout in seconds used as the query timeout for the SQL statement.

Table 2. Child Elements of <connector…​>
abstract-sqlStatementStrategyFactory (0..1): The factory that determines the execution strategy based on the SQL provided.

abstract-query (0..*): Defines a set of queries. Each query has a key and a value (SQL statement). Queries are later referenced by key.

Inbound Endpoint

Receives or fetches data from a database. You can reference SQL SELECT statements or call stored procedures on inbound endpoints. Statements on the inbound endpoint are invoked periodically according to the polling interval. Statements that contain an INSERT, UPDATE, or DELETE are not allowed.

Table 3. Attributes of <inbound-endpoint…​>
pollingFrequency (long, optional): The delay in milliseconds between two subsequent polls of the database.

queryTimeout (integer, optional, default -1): The timeout in seconds used as the query timeout for the SQL statement.

queryKey (string, optional): The key of the query to use.

Table 4. Child Elements of <inbound-endpoint…​>
abstract-query (0..*)

Outbound Endpoint

You can reference SQL statements or call stored procedures on outbound endpoints. Statements on the outbound endpoint are invoked synchronously. SQL SELECT statements or stored procedures may return output that is handled by the ResultSetHandler and then attached to the message as the payload.

Table 5. Attributes of <outbound-endpoint…​>

queryTimeout (integer, optional, default -1): The timeout in seconds used as the query timeout for the SQL statement.

queryKey (string, optional): The key of the query to use.

Table 6. Child Elements of <outbound-endpoint…​>
abstract-query (0..*)

Configuration Reference (Enterprise)

Connector

Table 7. Attributes of <connector…​>
handleOutputResultSets (boolean, optional, default false): Whether output java.sql.ResultSet instances should be handled with the ResultSetHandler object. This attribute is useful when executing stored procedures that return java.sql.ResultSet instances as output parameters.

Table 8. Child Elements of <connector…​>
sqlCommandExecutorFactory (0..1): The factory that creates the command executor for the read SQL statement.

ackSqlCommandExecutorFactory (0..1): The factory that creates the command executor for the acknowledgment SQL statement.

sqlCommandRetryPolicyFactory (0..1): The factory that creates the retry policies that determine whether a SQL statement must be re-executed in case of errors.

Inbound Endpoint

Table 9. Child Elements of <inbound-endpoint…​>
Name Cardinality Description

Outbound Endpoint

Table 10. Child Elements of <outbound-endpoint…​>
Name Cardinality Description

Transformers

The following transformers are available in the Enterprise version of the JDBC transport:

Maps to XML transformer

Converts a List of Map objects to XML. The Map List is the same as what you get from using the default ResultSetHandler. The required XML schema format is shown in the XML-JDBC Transformer section above.

Table 11. Attributes of <maps-to-xml-transformer…​>

includeNullValues (boolean, optional, default false): Whether to include fields with null values.

Table 12. Child Elements of <maps-to-xml-transformer…​>
Name Cardinality Description

XML to Maps Transformer

Converts XML to a List of Map objects. The Map List is the same as what you get from using the default ResultSetHandler. The required XML schema format is shown in the XML-JDBC Transformer section above.

Table 13. Child Elements of <xml-to-maps-transformer…​>
Name Cardinality Description

Maps to CSV Transformer

Converts a List of Map objects to CSV data. The Map List is the same as what you get from using the default ResultSetHandler.

Table 14. Attributes of <maps-to-csv-transformer…​>

delimiter (string, optional): Delimiter used in the CSV output. Default is comma.

mappingFile (string, optional): Name of the mapping file used to describe the CSV format. See http://flatpack.sourceforge.net for details.

ignoreFirstRecord (boolean, optional): Whether to ignore the first record. If the first record is a header, you should ignore it.

qualifier (string, optional): The character used to escape text that contains a delimiter.

Table 15. Child Element of <maps-to-csv-transformer…​>
Name Cardinality Description

CSV to Maps Transformer

Converts a CSV file to a List of Map objects. The Map List is the same as what you get from using the default ResultSetHandler.

Table 16. Attributes of <csv-to-maps-transformer…​>

delimiter (string, optional): Delimiter used in the CSV file. Default is comma.

mappingFile (string, optional): Name of the mapping file used to describe the CSV file. See http://flatpack.sourceforge.net for details.

ignoreFirstRecord (boolean, optional): Whether to ignore the first record. If the first record is a header, you should ignore it.

qualifier (string, optional): The character used to escape text that contains a delimiter.

Table 17. Child Elements of <csv-to-maps-transformer…​>
Name Cardinality Description

ResultSet to Maps Transformer

Transforms a java.sql.ResultSet to a List of Map objects, just like the default ResultSetHandler. Useful with Oracle stored procedures that return cursors (ResultSets).

Table 18. Child Elements of <resultset-to-maps-transformer…​>
Name Cardinality Description

Javadoc API Reference

Refer to the EE distribution for the Enterprise version of the JDBC transport javadocs.

Maven

The JDBC transport is implemented by the mule-transport-jdbc module. You can find the source for the jdbc transport under transports/jdbc.

If you are using Maven to build your application, use the following dependency snippet to include the JDBC transport in your project:

Community version

<dependency>
  <groupId>org.mule.transports</groupId>
  <artifactId>mule-transport-jdbc</artifactId>
  <version>3.4.0</version>
</dependency>

Enterprise Version

<dependency>
    <groupId>com.mulesoft.muleesb.transports</groupId>
    <artifactId>mule-transport-jdbc-ee</artifactId>
    <version>3.4.0</version>
</dependency>

Best Practice

  • Put your database connection and credential information in a separate properties file. This makes it easier to port your configuration file to different environments. See [Example Configurations] for an example of how this is done.

Data Source Configuration

Data source configuration has become much simpler. Previously, a data source had to be configured with Spring:

<spring:bean id="dataSource" class="org.enhydra.jdbc.standard.StandardDataSource" destroy-method="shutdown">
    <spring:property name="driverName" value="com.mysql.jdbc.Driver"/>
    <spring:property name="url" value="jdbc:mysql://localhost/mule"/>
    <spring:property name="user" value="mysql"/>
    <spring:property name="password" value="secret"/>
</spring:bean>

Now this is greatly simplified:

<jdbc:mysql-data-source name="dataSource" database="mule" user="mysql" password="secret"/>

Data Sources

The following attributes can be used with all the database-specific data sources listed below:

loginTimeout: Login timeout.

transactionIsolation: Transaction isolation level to set on the newly created java.sql.Connection object.
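
A hypothetical sketch showing both attributes on a MySQL data source; the transactionIsolation value name is an assumption here, so check the schema for the accepted enumeration:

<jdbc:mysql-data-source name="dataSource" database="mule" user="mysql" password="secret"
                        loginTimeout="10" transactionIsolation="READ_COMMITTED"/>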

Derby

Derby data sources are created as embedded data sources, so defining a user and password is not required.

Use the jdbc:derby-data-source configuration element to configure Derby. If you use a regular bean instead, you may receive errors when undeploying or redeploying the application.

Example:

<jdbc:derby-data-source name="dataSource" database="mule"/>

The following attributes are available on the derby-data-source element:

create: If true, the database is created upon first access. See the [Derby Documentation] for details.

database: Name of the database to connect to. This attribute cannot be used together with the url attribute.

name: Unique identifier of the data source. Use this name to reference the data source from the JDBC connector.

url: JDBC URL to use when connecting to the database. This attribute cannot be used together with the database attribute.
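
For example, to create the embedded database on first access (a sketch based on the attributes above):

<jdbc:derby-data-source name="dataSource" database="mule" create="true"/>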

MySQL

Example:

<jdbc:mysql-data-source name="dataSource" database="mule" user="mysql" password="secret"/>

The following attributes are available on the mysql-data-source element:

database: Name of the database to connect to. This attribute cannot be used together with the url attribute.

host: Database host to connect to. This attribute cannot be used together with the url attribute.

name: Unique identifier of the data source. Use this name to reference the data source from the JDBC connector.

password: Password for connecting to the database. This attribute is required.

port: Database port to connect to. This attribute cannot be used together with the url attribute.

url: JDBC URL to use when connecting to the database. This attribute cannot be used together with the database, host, or port attribute.

user: User for connecting to the database. This attribute is required.

Oracle

Example:

<jdbc:oracle-data-source name="dataSource" user="scott" password="tiger"/>

The following attributes are available on the oracle-data-source element:

host: Database host to connect to. This attribute cannot be used together with the url attribute.

instance: Oracle instance to connect to. This attribute cannot be used together with the url attribute.

name: Unique identifier of the data source. Use this name to reference the data source from the JDBC connector.

password: Password for connecting to the database. This attribute is required.

port: Database port to connect to. This attribute cannot be used together with the url attribute.

url: JDBC URL to use when connecting to the database. This attribute cannot be used together with the instance, host, or port attribute.

user: User for connecting to the database. This attribute is required.
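
Alternatively, the connection can be given through the host, port, and instance attributes instead of a full url (the values below are placeholders):

<jdbc:oracle-data-source name="dataSource" host="dbhost" port="1521" instance="orcl" user="scott" password="tiger"/>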

PostgreSQL

Example:

<jdbc:postgresql-data-source name="dataSource" database="mule" user="postgres" password="secret"/>

The following attributes are available on the postgresql-data-source element:

database: Name of the database to connect to. This attribute cannot be used together with the url attribute.

host: Database host to connect to. This attribute cannot be used together with the url attribute.

name: Unique identifier of the data source. Use this name to reference the data source from the JDBC connector.

password: Password for connecting to the database. This attribute is required.

port: Database port to connect to. This attribute cannot be used together with the url attribute.

url: JDBC URL to use when connecting to the database. This attribute cannot be used together with the database, host, or port attribute.

user: User for connecting to the database. This attribute is required.