The metadata APIs have been updated, with the DatabaseMetaData interface now able to retrieve SQL type hierarchies and a new ParameterMetaData interface to describe the types and properties of the parameters in PreparedStatement objects.

Prior to JDBC 3.0, the way to set a parameter in a stored procedure was by specifying the parameter's index, not its name. The CallableStatement interface has been updated so that you can now specify parameters by their names.
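As a minimal sketch of both additions, assuming an open Connection named conn and a hypothetical stored procedure update_author whose parameters are named first_name and last_name (the procedure and its parameter names are illustrative, not part of the specification):

PreparedStatement pstmt = conn.prepareStatement(
    "INSERT INTO authors (first_name, last_name) VALUES (?, ?)");
// Describe the parameters of the prepared statement with ParameterMetaData.
ParameterMetaData pmd = pstmt.getParameterMetaData();
System.out.println("Parameter count: " + pmd.getParameterCount());
System.out.println("Type of parameter 1: " + pmd.getParameterTypeName(1));

// Set stored procedure parameters by name instead of by index.
CallableStatement cstmt = conn.prepareCall("{call update_author(?, ?)}");
cstmt.setString("first_name", "George");
cstmt.setString("last_name", "Orwell");
cstmt.execute();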
Several changes have been made to the data types that JDBC supports and, as part of the changes, two new data types have been added. To facilitate modifying the values of CLOB (Character Large OBject), BLOB (Binary Large OBject), and REF (SQL structured) types, the data type interfaces of the same names have been updated. Because the values of these types can now be updated, the ResultSet interface has been revised to support updating columns of these types, including columns of the ARRAY type.
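As a minimal sketch of the revised ResultSet support, assuming an updatable result set over a hypothetical documents table with a BLOB column named contents (the table and column names are illustrative):

Statement stmt = conn.createStatement(
    ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet rs = stmt.executeQuery("SELECT id, contents FROM documents");
if (rs.next()) {
    Blob blob = rs.getBlob("contents");
    // Blob.setBytes() is new in JDBC 3.0 and modifies the BLOB's value.
    blob.setBytes(1, new byte[] { 0x43, 0x41, 0x46, 0x45 });
    // updateBlob() is one of the new update methods on ResultSet.
    rs.updateBlob("contents", blob);
    rs.updateRow();
}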
The two new data types are java.sql.Types.DATALINK and java.sql.Types.BOOLEAN; they refer to the SQL types of the same names. DATALINK provides access to external resources, or URLs, while BOOLEAN is logically equivalent to the BIT type but adds semantic meaning. The value of a DATALINK column is retrieved from a ResultSet instance using the new getURL() methods, while a BOOLEAN value is retrieved using getBoolean().
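Reading the new types is a matter of two getter calls. A minimal sketch, assuming the same hypothetical documents table with a DATALINK column named source_url and a BOOLEAN column named published (again, illustrative names):

ResultSet rs = stmt.executeQuery("SELECT source_url, published FROM documents");
while (rs.next()) {
    // getURL() is new in JDBC 3.0 and returns a java.net.URL.
    java.net.URL url = rs.getURL("source_url");
    boolean published = rs.getBoolean("published");
    // ...
}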
To address the common need to obtain the value of an auto-generated or auto-incremented key, the JDBC 3.0 API makes retrieving that value painless. To determine the value of any generated keys, simply pass to the statement's execute() or executeUpdate() method an optional flag denoting whether you are interested in the generated value. The flag can be either Statement.RETURN_GENERATED_KEYS or Statement.NO_GENERATED_KEYS. After executing the statement, the values for the generated keys are obtained by retrieving a ResultSet from the Statement's getGeneratedKeys() instance method. The ResultSet contains a row for each generated key. The example in Listing 1 creates a new author and returns the corresponding auto-generated key.

Listing 1. Retrieving an auto-generated key
Statement stmt = conn.createStatement();

// Obtain the generated key that results from the query.
stmt.executeUpdate("INSERT INTO authors " +
                   "(first_name, last_name) " +
                   "VALUES ('George', 'Orwell')",
                   Statement.RETURN_GENERATED_KEYS);

ResultSet rs = stmt.getGeneratedKeys();
if (rs.next()) {
    // Retrieve the auto-generated key(s).
    int key = rs.getInt(1);
}
Most application developers do not need to be aware of the relationship between JDBC and the J2EE Connector Architecture in order to be effective users of the JDBC API. But knowing that the JDBC 3.0 specification has taken the new architecture into consideration does yield a better understanding of where JDBC fits into the J2EE standard and the direction in which the specification is moving.
The J2EE Connector Architecture specifies a set of contracts that allow enterprise information systems to connect to an application server in a pluggable manner. The architecture defines resource adapters that are responsible for connectivity to external systems. The Connector's Service Provider Interface (SPI) just so happens to closely match the services provided by the JDBC interfaces.
The JDBC API fulfills two of the three contracts defined by the Connector Architecture. First, connection management for connecting application components to back-end systems is fulfilled by the JDBC DataSource and ConnectionPoolDataSource interfaces. Second, transaction management for supporting transactional access to resources is handled by XADataSource. Third -- and where the JDBC specification does not have any corresponding focus -- is security management for enabling secure access to back-end systems. Despite the last shortcoming, it is still possible to map the JDBC interfaces to the Connector SPI. A driver vendor that maps its JDBC driver to the Connector system contracts will be able to deploy its driver as a resource adapter and immediately receive all the benefits of pluggability, packaging, and deployment from within an application server. Therefore, a standard API will exist across heterogeneous enterprise information systems for enterprise developers to use.
A holdable cursor, or result, is one that does not automatically close when the transaction that contains the cursor is committed. JDBC 3.0 adds support for specifying cursor holdability. To specify the holdability of your ResultSet, you must do so when preparing a statement with the createStatement(), prepareStatement(), or prepareCall() methods. The holdability may be one of the following constants:

HOLD_CURSORS_OVER_COMMIT | ResultSet objects (cursors) are not closed; they are held open when a commit operation is implicitly or explicitly performed.
CLOSE_CURSORS_AT_COMMIT | ResultSet objects (cursors) are closed when a commit operation is implicitly or explicitly performed.
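A minimal sketch of requesting a holdable cursor when creating a statement, assuming the driver supports holdability (the query reuses the authors table from the earlier listings):

conn.setAutoCommit(false);
// Ask for a cursor that stays open across commits.
Statement stmt = conn.createStatement(
    ResultSet.TYPE_SCROLL_INSENSITIVE,
    ResultSet.CONCUR_READ_ONLY,
    ResultSet.HOLD_CURSORS_OVER_COMMIT);
ResultSet rs = stmt.executeQuery("SELECT first_name, last_name FROM authors");
conn.commit();
// Because the cursor is holdable, rs is still open and usable here.
while (rs.next()) {
    // ...
}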
In general, closing a cursor when a transaction is committed results in better performance. Unless you require the cursor after the transaction, it is best to have it closed when the commit operation is carried out. Because the specification does not define the default holdability of a ResultSet, the behavior will depend on the implementation. I expect, however, that as JDBC 3.0 drivers become available, most implementations will continue to close cursors when the transaction completes.
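Since the default is driver-dependent, you can ask the driver what it does. A one-line check, assuming a JDBC 3.0 driver, uses DatabaseMetaData:

// Returns ResultSet.HOLD_CURSORS_OVER_COMMIT or ResultSet.CLOSE_CURSORS_AT_COMMIT.
int defaultHoldability = conn.getMetaData().getResultSetHoldability();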
A limitation of the JDBC 2 specification is that statements that return multiple results can have only one ResultSet open at any given time. As part of the changes in JDBC 3.0, the specification allows the Statement interface to support multiple open ResultSets. It is important to note, however, that the execute() method still closes any ResultSets that were opened from a previous call to execute(). So, to support multiple open results, the Statement interface adds an overloaded version of the getMoreResults() method. The new form of the method takes an integer flag that specifies the behavior of previously opened ResultSets when getMoreResults() is called. The interface defines the flags as follows:

CLOSE_ALL_RESULTS | All previously opened ResultSet objects should be closed when calling getMoreResults().
CLOSE_CURRENT_RESULT | The current ResultSet object should be closed when calling getMoreResults().
KEEP_CURRENT_RESULT | The current ResultSet object should not be closed when calling getMoreResults().
Listing 2 shows an example of handling multiple open results.
Listing 2. How to handle multiple open results
String procCall = "";
// Set the value of procCall to call a stored procedure.
// ...
CallableStatement cstmt = connection.prepareCall(procCall);
boolean retval = cstmt.execute();
if (retval == false) {
    // The statement returned an update count, so handle it.
    // ...
} else {
    ResultSet rs1 = cstmt.getResultSet();
    // ...
    retval = cstmt.getMoreResults(Statement.KEEP_CURRENT_RESULT);
    if (retval == true) {
        ResultSet rs2 = cstmt.getResultSet();
        // Both ResultSets are open and ready for use.
        rs2.next();
        rs1.next();
        // ...
    }
}
JDBC 3.0 defines several standard connection pool properties. Developers are not expected to modify these properties directly through the API, but rather through their application server or data store instrumentation. Because developers are only indirectly affected by the standardization of connection pool properties, the benefit might not be immediately obvious. However, by reducing the number of vendor-specific configuration properties and replacing them with standardized ones, it becomes even easier for developers to swap in a different vendor's JDBC driver. Additionally, the properties allow administrators to fine-tune the connection pool to maximize the performance characteristics of the application. The properties are shown in the table below, and a configuration sketch follows the table.
Property name | Description
maxStatements | The number of statements that the connection pool should keep open.
initialPoolSize | The number of physical connections that the pool should create when it is initialized.
minPoolSize | The minimum number of physical connections that the pool should contain.
maxPoolSize | The maximum number of physical connections that the pool should contain. Zero specifies that there is no maximum size.
maxIdleTime | The duration, in seconds, that an unused physical connection should remain in the pool before being closed. Zero specifies that there is no limit.
propertyCycle | The interval, in seconds, that the connection pool should wait before enforcing its property policy.
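How these properties are actually set is left to the vendor and the application server. Purely as an illustration, a vendor ConnectionPoolDataSource implementation that exposes the standard properties as JavaBeans setters might be configured as follows (VendorConnectionPoolDataSource and its setters are hypothetical, not part of the JDBC API):

// Hypothetical vendor class; the standard property names map to JavaBeans setters.
VendorConnectionPoolDataSource cpds = new VendorConnectionPoolDataSource();
cpds.setInitialPoolSize(5);
cpds.setMinPoolSize(5);
cpds.setMaxPoolSize(20);     // Zero would mean no maximum size.
cpds.setMaxIdleTime(300);    // Close unused physical connections after 5 minutes.
cpds.setMaxStatements(50);   // Number of statements kept open by the pool.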
In addition to improved connection pool support, it is now possible to pool prepared statements. A prepared statement allows you to take a commonly used SQL statement and pre-compile it, thereby dramatically improving performance if the statement is executed multiple times. On the other hand, creating a PreparedStatement object introduces a certain amount of overhead, so, ideally, the statement's lifetime will be long enough to compensate for that overhead. Performance-conscious developers sometimes contort their object models to increase the lifetime of a PreparedStatement object. JDBC 3.0 frees the developer from this concern because the data source layer is now responsible for caching prepared statements.

Listing 3 demonstrates how to take advantage of JDBC's prepared statement pooling support. The observant reader may notice that the statements in the listing are no different from vanilla JDBC 2 code. That is because statements are pooled entirely under the covers, which means that under JDBC 3.0 your existing code will automatically take advantage of statement pooling. Unfortunately, it also means that you have no control over which prepared statements are pooled, only over the number of statements that are cached.
Listing 3. Pooling prepared statements
String INSERT_BOOK_QUERY = "INSERT INTO BOOKLIST " +
                           "(AUTHOR, TITLE) " +
                           "VALUES (?, ?) ";

Connection conn = aPooledConnection.getConnection();
PreparedStatement ps = conn.prepareStatement(INSERT_BOOK_QUERY);
ps.setString(1, "Orwell, George");
ps.setString(2, "1984");
ps.executeUpdate();
ps.close();
conn.close();
// ...
conn = aPooledConnection.getConnection();
// Since the connection is from a PooledConnection, the data layer has
// the option to retrieve this statement from its statement pool,
// saving the VM from re-compiling the statement again.
PreparedStatement cachedStatement = conn.prepareStatement(INSERT_BOOK_QUERY);
// ...
Perhaps the most exciting addition to JDBC 3.0 is savepoints. The transaction support in JDBC 2 gave developers control over concurrent access to their data, thereby ensuring that persisted data is always in a consistent state. Unfortunately, it is sometimes desirable to have more control over a transaction than simply rolling back every change in the current transaction. With JDBC 3.0, savepoints give you that control. The Savepoint interface allows you to partition a transaction into logical breakpoints, providing control over how much of the transaction gets rolled back. Figure 1 illustrates how savepoints can be used in a transaction.

Figure 1. Visual representation of a savepoint
You probably won't need to use savepoints frequently. However, one situation in which savepoints can be useful is when you have a set of changes but cannot determine which subset of those changes must be kept until all of the results are known. The code example in Listing 4 illustrates how to use the Savepoint interface.

Listing 4. Using savepoints
conn.setAutoCommit(false);
// Set a conservative transaction isolation level.
conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
Statement stmt = conn.createStatement();
int rows = stmt.executeUpdate(
    "INSERT INTO authors " +
    "(first_name, last_name) VALUES " +
    "('Lewis', 'Carroll')");
// Set a named savepoint.
Savepoint svpt = conn.setSavepoint("NewAuthor");
// ...
rows = stmt.executeUpdate(
    "UPDATE authors SET type = 'fiction' " +
    "WHERE last_name = 'Carroll'");
// ...
conn.rollback(svpt);
// ...
// The author has been added, but not updated.
conn.commit();