NOTE: The material in this chapter is based on JDBC™ API Tutorial and Reference, Second Edition: Universal Data Access for the Java™ 2 Platform, published by Addison-Wesley as part of the Java series, ISBN 0-201-43328-1.
A Connection
object
represents a connection with a database. A connection session
includes the SQL statements that are executed and the results that
are returned over that connection. A single application can have
one or more connections with a single database, or it can have
connections with many different databases.
A user can get information about a
Connection
object's database by invoking the
Connection.getMetaData
method. This method returns a
DatabaseMetaData
object that contains information
about the database's tables, the SQL grammar it supports, its
stored procedures, the capabilities of this connection, and so
on.
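As a brief sketch, assuming con is an already-open Connection (the methods shown are standard DatabaseMetaData methods, but the surrounding class is illustrative only):

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

public class MetaDataExample {
    // con is assumed to be an open Connection obtained elsewhere
    static void printInfo(Connection con) throws SQLException {
        DatabaseMetaData dbmd = con.getMetaData();
        // a few of the many kinds of information available
        System.out.println(dbmd.getDatabaseProductName());   // name of the DBMS
        System.out.println(dbmd.supportsStoredProcedures()); // stored procedure support
        System.out.println(dbmd.getSQLKeywords());           // DBMS-specific SQL keywords
    }
}
```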
The traditional way to establish a
connection with a database is to call the method
DriverManager.getConnection
. This method takes a
string containing a URL. The DriverManager
class,
referred to as the JDBC management layer, attempts to locate a
driver that can connect to the database represented by that URL.
The DriverManager
class maintains a list of registered
Driver
classes, and when the method
getConnection
is called, it checks with each driver in
the list until it finds one that can connect to the database
specified in the URL. The Driver
method
connect
uses this URL to actually establish the
connection.
A user can bypass the JDBC management
layer and call Driver
methods directly. This could be
useful in the rare case that two drivers can connect to a database
and the user wants to explicitly select a particular driver.
Normally, however, it is much easier to just let the
DriverManager
class handle opening a connection.
The following code exemplifies opening a
connection to a database located at the URL
jdbc:odbc:wombat
with a user ID of oboy
and 12Java
as the password:
String url = "jdbc:odbc:wombat";
Connection con = DriverManager.getConnection(url, "oboy", "12Java");
The JDBC 2.0 Standard Extension API
provides the DataSource
interface as an alternative to
the DriverManager
for establishing a connection. When
a DataSource
class has been implemented appropriately,
a DataSource
object can be used to produce
Connection
objects that participate in connection
pooling and/or Connection
objects that can participate
in distributed transactions. See the chapter "DataSource" for more
information and to see example code for creating a connection using
a DataSource
object. This chapter also explains why
using a DataSource
object is the preferred alternative
for creating a connection.
An application uses a
Connection
object produced by a
DataSource
object in essentially the same way it uses
a Connection
object produced by the
DriverManager
. There are some differences, however. If
the Connection
object is a pooled connection, an
application should include a finally
block to assure
that the connection is closed even if an exception is thrown. That
way a valid connection will always be put back into the pool of
available connections.
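A minimal sketch of this pattern, assuming ds is an already-configured DataSource whose connections participate in a pool; the table name is hypothetical:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class PooledConnectionExample {
    // ds is assumed to be a pooling DataSource obtained elsewhere,
    // for example from a JNDI naming service
    static void doWork(DataSource ds) throws SQLException {
        Connection con = ds.getConnection();
        try {
            Statement stmt = con.createStatement();
            stmt.executeUpdate("UPDATE EMPLOYEES SET SALARY = SALARY * 1.1");
            stmt.close();
        } finally {
            // executed even if an exception is thrown, so a valid
            // connection always goes back into the pool
            con.close();
        }
    }
}
```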
If a Connection
object is
part of a distributed transaction, an application should not call
the methods Connection.commit
or
Connection.rollback
, nor should it turn on the
connection's auto-commit mode. These would interfere with the
transaction manager's handling of the distributed transaction.
When an application uses the
DriverManager
to create a Connection
object, it must supply a URL to the
DriverManager.getConnection
method. Since URLs often
cause some confusion, we will first give a brief explanation of
URLs in general and then go on to a discussion of JDBC URLs.
A URL (Uniform Resource Locator) gives information for locating a resource on the Internet. It can be thought of as an address.
The first part of a URL specifies the
protocol used to access information, and it is always followed by a
colon. Some common protocols are ftp
, which specifies
"file transfer protocol," and http
, which specifies
"hypertext transfer protocol." If the protocol is
file
, it indicates that the resource is in a local
file system rather than on the Internet.
ftp://javasoft.com/docs/JDK-1_apidocs.zip
http://java.sun.com/products/JDK/CurrentRelease
file:/home/haroldw/docs/tutorial.html
The rest of a URL, everything after the
first colon, gives information about where the data source is
located. If the protocol is file
, the rest of the URL
is the path for the file. For the protocols ftp
and
http
, the rest of the URL identifies the host and may
optionally give a path to a more specific site. For example, here
is the URL for the Java Software home page. This URL identifies
only the host:
http://www.java.sun.com
By navigating from this home page, you can go to many other pages, one of which is the JDBC home page. The URL for the JDBC home page is more specific and looks like this:
http://www.java.sun.com/products/jdbc
A JDBC URL provides a way of identifying a data source so that the appropriate driver will recognize it and establish a connection with it. Driver writers are the ones who actually determine what the JDBC URL that identifies a particular driver will be. Users do not need to worry about how to form a JDBC URL; they simply use the URL supplied with the drivers they are using. JDBC's role is to recommend some conventions for driver writers to follow in structuring their JDBC URLs.
Since JDBC URLs are used with various
kinds of drivers, the conventions are, of necessity, very flexible.
First, they allow different drivers to use different schemes for
naming databases. The odbc
subprotocol, for example,
lets the URL contain attribute values (but does not require
them).
Second, JDBC URLs allow driver writers to encode all necessary connection information within them. This makes it possible, for example, for an applet that wants to talk to a given database to open the database connection without requiring the user to do any system administration chores.
Third, JDBC URLs allow a level of indirection. This means that the JDBC URL may refer to a logical host or database name that is dynamically translated to the actual name by a network naming system. This allows system administrators to avoid specifying particular hosts as part of the JDBC name. There are a number of different network name services (such as DNS, NIS, and DCE), and there is no restriction about which ones can be used.
The standard syntax for JDBC URLs is shown here. It has three parts, which are separated by colons.
jdbc:<subprotocol>:<subname>
The three parts of a JDBC URL are broken down as follows:
jdbc
-the protocol. The protocol in a JDBC URL is
always jdbc
. <subprotocol>
-the name of the driver or the
name of a database connectivity mechanism, which may be supported
by one or more drivers. A prominent example of a subprotocol name
is odbc
, which has been reserved for URLs that specify
ODBC-style data source names. For example, to access a database
through a JDBC-ODBC bridge, one might use a URL such as the
following:
jdbc:odbc:fred
In this example, the subprotocol is
odbc
, and the subname fred
is a local
ODBC data source.
If one wants to use a network name service (so that the database name in the JDBC URL does not have to be its actual name), the naming service can be the subprotocol. So, for example, one might have a URL like:
jdbc:dcenaming:accounts-payable
In this example, the URL specifies that the local DCE naming service should resolve the database name
accounts-payable
into a more specific name that can be
used to connect to the real database.
<subname>
-a way to identify the data source.
The subname can vary, depending on the subprotocol, and it can have
any internal syntax the driver writer chooses, including a
subsubname. The point of a subname is to give enough information to
locate the data source. In the previous example, fred
is enough because ODBC provides the remainder of the information. A
data source on a remote server requires more information, however.
If the data source is to be accessed over the Internet, for
example, the network address should be included in the JDBC URL as
part of the subname and should adhere to the following standard URL
naming convention:
//hostname:port/subsubname
Supposing that dbnet
is a
protocol for connecting to a host on the Internet, a JDBC URL might
look like this:
jdbc:dbnet://wombat:356/fred
The subprotocol odbc
is a
special case. It has been reserved for URLs that specify ODBC-style
data source names and has the special feature of allowing any
number of attribute values to be specified after the subname (the
data source name). The full syntax for the odbc subprotocol is:
jdbc:odbc:<data-source-name>[;<attribute-name>=<attribute-value>]*
Thus all of the following are valid jdbc:odbc names:
jdbc:odbc:qeor7
jdbc:odbc:wombat
jdbc:odbc:wombat;CacheSize=20;ExtensionCase=LOWER
jdbc:odbc:qeora;UID=kgh;PWD=fooey
A driver developer can reserve a name to
be used as the subprotocol in a JDBC URL. When the
DriverManager
class presents this name to its list of
registered drivers, the driver for which this name is reserved
should recognize it and establish a connection to the database it
identifies. For example, "odbc" is reserved for the JDBC-ODBC
Bridge. If there were a Miracle Corporation, it might want to
register "miracle" as the subprotocol for the JDBC driver that
connects to its Miracle DBMS so that no one else would use that
name.
Java Software is acting as an informal registry for JDBC subprotocol names. To register a subprotocol name, send email to:
jdbc@eng.sun.com
Once a connection is established, it is used to pass SQL statements to its underlying database. The JDBC API does not put any restrictions on the kinds of SQL statements that can be sent; this provides a great deal of flexibility, allowing the use of database-specific statements or even non-SQL statements. It requires, however, that the user be responsible for making sure that the underlying database can process the SQL statements being sent and suffer the consequences if it cannot. For example, an application that tries to send a stored procedure call to a DBMS that does not support stored procedures will be unsuccessful and will generate an exception.
The JDBC API provides three interfaces for
sending SQL statements to the database, and corresponding methods
in the Connection
interface create instances of them.
The interfaces for sending SQL statements and the
Connection
methods that create them are as
follows:
Statement
-created by the
Connection.createStatement
methods. A
Statement
object is used for sending SQL statements
with no parameters. PreparedStatement
-created by the
Connection.prepareStatement
methods. A
PreparedStatement
object is used for precompiled SQL
statements. These can take one or more parameters as input
arguments (IN parameters). PreparedStatement
has a
group of methods that set the value of IN parameters, which are
sent to the database when the statement is executed.
PreparedStatement
extends Statement
and
therefore includes Statement
methods. A
PreparedStatement
object has the potential to be more
efficient than a Statement
object because it has been
precompiled and stored for future use. Therefore, in order to
improve performance, a PreparedStatement
object is
sometimes used for an SQL statement that is executed many times.
CallableStatement
-created by the
Connection.prepareCall
methods.
CallableStatement
objects are used to execute SQL
stored procedures-a group of SQL statements that is called by name,
much like invoking a function. A CallableStatement
object inherits methods for handling IN parameters from
PreparedStatement
; it adds methods for handling OUT
and INOUT parameters.
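The three interfaces can be sketched together as follows, assuming an open Connection con; the table COFFEES and the stored procedure SHOW_SUPPLIERS are hypothetical:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StatementKinds {
    // con is assumed to be an open Connection obtained elsewhere
    static void demo(Connection con) throws SQLException {
        // Statement: an SQL statement with no parameters
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM COFFEES");
        rs.close();
        stmt.close();

        // PreparedStatement: precompiled SQL with IN parameters
        PreparedStatement pstmt = con.prepareStatement(
            "UPDATE COFFEES SET PRICE = ? WHERE COF_NAME = ?");
        pstmt.setFloat(1, 10.99f);       // first IN parameter
        pstmt.setString(2, "Colombian"); // second IN parameter
        pstmt.executeUpdate();
        pstmt.close();

        // CallableStatement: a call to a stored procedure
        CallableStatement cstmt = con.prepareCall("{call SHOW_SUPPLIERS}");
        cstmt.execute();
        cstmt.close();
    }
}
```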
The following list gives a quick way to
determine which Connection
method is appropriate for
creating different types of SQL statements:
createStatement methods-for a simple SQL statement (no parameters)
prepareStatement methods-for an SQL statement that is executed frequently
prepareCall methods-for a call to a stored procedure
The versions of these methods that take no arguments create statements that will produce default
ResultSet
objects; that is, they produce result sets
that are not scrollable and that cannot be updated. With the JDBC
2.0 API, it is possible to create statements that will produce
result sets that are scrollable and/or updatable. This is done by
using new versions of the methods createStatement
,
prepareStatement
, and prepareCall
that
take additional parameters for specifying the type of result set
and the concurrency level of the result set being created. In
Chapter 5, "ResultSet," the section on
the types of ResultSet
objects on page 52 explains the different types
of ResultSet
objects and the constants that specify
them. The section "Concurrency
Types" on page 53 does the same for concurrency levels.
"Creating Different Types of
Result Sets" on page 55 gives examples of how to create
ResultSet
objects using the new versions of the
Connection
methods for creating statements.
A transaction consists of one or more
statements that have been executed, completed, and then either
committed or rolled back. When the method commit
or
rollback
is called, the current transaction ends and
another one begins.
Generally a new Connection
object is in auto-commit mode by default, meaning that when a
statement is completed, the method commit
will be
called on that statement automatically. In this case, since each
statement is committed individually, a transaction consists of only
one statement. If auto-commit mode has been disabled, a transaction
will not terminate until the method commit
or
rollback
is called explicitly, so it will include all
the statements that have been executed since the last invocation of
either commit
or rollback
. In this second
case, all the statements in the transaction are committed or rolled
back as a group.
The beginning of a transaction requires
no explicit call; it is implicitly initiated after disabling
auto-commit mode or after calling the methods commit
or rollback
. The method commit
makes
permanent any changes an SQL statement makes to a database, and it
also releases any locks held by the transaction. The method
rollback
will discard those changes.
Sometimes a user doesn't want one change
to take effect unless another one does also. This can be
accomplished by disabling auto-commit and grouping both updates
into one transaction. If both updates are successful, then the
commit
method is called, making the effects of both
updates permanent; if one fails or both fail, then the
rollback
method is called, restoring the values that
existed before the updates were executed.
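Such a two-update transaction might be sketched as follows; con is assumed to be an open Connection, and the ACCOUNTS table is hypothetical:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TransactionExample {
    static void transfer(Connection con) throws SQLException {
        con.setAutoCommit(false); // group the statements into one transaction
        try {
            Statement stmt = con.createStatement();
            stmt.executeUpdate("UPDATE ACCOUNTS SET BALANCE = BALANCE - 100 WHERE ID = 1");
            stmt.executeUpdate("UPDATE ACCOUNTS SET BALANCE = BALANCE + 100 WHERE ID = 2");
            stmt.close();
            con.commit();   // make both updates permanent together
        } catch (SQLException e) {
            con.rollback(); // discard both updates if either fails
            throw e;
        } finally {
            con.setAutoCommit(true); // restore the default mode
        }
    }
}
```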
Most JDBC drivers will support transactions. In order to be designated JDBC Compliant, a JDBC driver must support transactions.
The JDBC 2.0 Standard Extension API makes
it possible for Connection
objects to be part of a
distributed transaction, a transaction that involves connections to
more than one DBMS server. When a Connection
object is
part of a distributed transaction, a transaction manager determines
when the methods commit
or rollback
are
called on it. Thus, when a Connection
object is
participating in a distributed transaction, an application should
not do anything that affects when a transaction begins or ends.
In order to be able to participate in
distributed transactions, a Connection
object must be
produced by a DataSource
object that has been
implemented to work with the middle tier server's distributed
transaction infrastructure. Unlike Connection
objects
produced by the DriverManager
, a
Connection
object produced by such a
DataSource
object will have its auto-commit mode
disabled by default. A standard implementation of a
DataSource
object, on the other hand, will produce
Connection
objects that are exactly the same as those
produced by the DriverManager
class.
If a DBMS supports transaction processing, it will have some way of managing potential conflicts that can arise when two transactions are operating on a database at the same time. A user can specify a transaction isolation level to indicate what level of care the DBMS should exercise in resolving potential conflicts. For example, what happens when one transaction changes a value and a second transaction reads that value before the change has been committed or rolled back? Should that be allowed, given that the changed value read by the second transaction will be invalid if the first transaction is rolled back? A JDBC user can instruct the DBMS to allow a value to be read before it has been committed (a "dirty read") with the following code, where con is the current connection:
con.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
The higher the transaction isolation
level, the more care is taken to avoid conflicts. The
Connection
interface defines five levels, with the
lowest specifying that transactions are not supported at all and
the highest specifying that while one transaction is operating on a
database, no other transactions may make any changes to the data
read by that transaction.
TRANSACTION_READ_UNCOMMITTED
, used in the previous
example, is one level up from the lowest level. Typically, the
higher the level of isolation, the slower the application executes
(due to increased locking overhead and decreased concurrency
between users). The developer must balance the need for performance
with the need for data consistency when making a decision about
what isolation level to use. Of course, the level that can actually
be supported depends on the capabilities of the underlying
DBMS.
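These five levels are defined as int constants in the Connection interface; a quick way to see their values is:

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // the five constants, from lowest (transactions not supported)
        // to highest
        System.out.println(Connection.TRANSACTION_NONE);             // 0
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```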
When a new Connection
object
is created, its transaction isolation level depends on the driver,
but normally it is the default for the underlying data source. A
user may call the method setTransactionIsolation to change
to change
the transaction isolation level, and the new level will be in
effect for the rest of the connection session. To change the
transaction isolation level for just one transaction, one needs to
set it before executing any statements in the transaction and then
reset it after the transaction terminates. Changing the transaction
isolation level during a transaction is not recommended, for it
will trigger an immediate call to the method commit
,
causing any changes up to that point to be made permanent.
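The set-then-reset pattern for a single transaction might be sketched like this, assuming con is an open Connection with auto-commit disabled and sql is the statement making up the transaction:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class IsolationForOneTransaction {
    static void runSerializably(Connection con, String sql) throws SQLException {
        int oldLevel = con.getTransactionIsolation();
        // set the level before executing any statements in the transaction
        con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        try {
            Statement stmt = con.createStatement();
            stmt.executeUpdate(sql);
            stmt.close();
            con.commit();
        } finally {
            // reset the level after the transaction terminates
            con.setTransactionIsolation(oldLevel);
        }
    }
}
```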
It is recommended that programmers explicitly close connections and statements they have created when they are no longer needed.
A programmer writing code in the Java programming language and not using any outside resources does not need to worry about memory management. The garbage collector automatically removes objects when they are no longer being used and frees the memory they were using. When memory is running low, it will recycle discarded objects, making the memory they currently occupy available for quick reuse.
However, if an application uses external
resources, as it does when it accesses a DBMS with the JDBC API,
the garbage collector has no way of knowing the status of those
resources. It will still recycle discarded objects, but if there is
lots of free memory in the Java heap, it may garbage collect
infrequently, even though the (small) amount of Java garbage is
holding open large amounts of expensive database resources.
Therefore, it is recommended that programmers explicitly close all
connections (with the method Connection.close
) and
statements (with the method Statement.close
) as soon
as they are no longer needed, thereby freeing DBMS resources as
early as possible. This applies especially to applications that are
intended to work with different DBMSs because of variations from
one DBMS to another.
Note that the method
Connection.isClosed
is guaranteed to return
true
only when it is called after the method
Connection.close
has been called. As a result, a
programmer cannot depend on this method to indicate whether a
connection is valid or not. Instead, a typical JDBC client can
determine that a connection is invalid by catching the exception
that is thrown when a JDBC operation is attempted.
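One possible sketch of this approach, using an inexpensive JDBC call as the probe (the helper class and method are illustrative only):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class ConnectionValidity {
    // con is assumed to be a Connection that may or may not still be usable
    static boolean isUsable(Connection con) {
        try {
            // attempt a cheap JDBC operation; an invalid connection
            // makes the driver throw SQLException
            con.getMetaData();
            return true;
        } catch (SQLException e) {
            return false;
        }
    }
}
```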
The two new SQL3 data types that are
user-defined types (UDTs), SQL structured types and
DISTINCT
types, can be custom mapped to a class in the
Java programming language. Like all the SQL3 data types, they have
standard mappings, but a programmer may create a custom mapping as
well. The fact that there is a custom mapping for a particular UDT
is declared in a java.util.Map
object. This
Map
object may be the one that is associated with a
connection, or it may be one that is passed to a method.
A programmer declares a custom mapping by
adding an entry to a Map
object. This entry must
contain two things: (1) the name of the UDT to be mapped and (2)
the Class
object for the class in the Java programming
language to which the UDT is to be mapped. The class itself, which
must implement the SQLData
interface, will contain the
specific mappings.
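A minimal sketch of such a class, assuming a hypothetical SQL structured type with two VARCHAR attributes; readSQL and writeSQL must handle the attributes in the order in which they appear in the SQL type definition:

```java
import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;

// maps a hypothetical SQL structured type with two VARCHAR attributes
public class Addresses implements SQLData {
    public String street;
    public String city;
    private String sqlTypeName;

    public String getSQLTypeName() {
        return sqlTypeName;
    }

    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        sqlTypeName = typeName;
        street = stream.readString(); // attributes read in declaration order
        city = stream.readString();
    }

    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeString(street);   // and written back in the same order
        stream.writeString(city);
    }
}
```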
Each Connection
object
created using a JDBC 2.0 driver that supports custom mapping will
have an empty type map to which custom mappings may be added. This
type map is an instance of the interface
java.util.Map
, which is new in the Java 2 platform and
replaces java.util.Dictionary.
Until custom map
entries are added to this type map, all operations for
STRUCT
and DISTINCT
values will use the
standard mappings (the Struct
interface for
STRUCT
values and the underlying type for
DISTINCT
values).
The following code fragment, in which con
is a Connection
object and ADDRESSES
is
an SQL structured type, demonstrates retrieving the type map
associated with con and adding a new entry to it. After the type
map is modified, it is set as the new type map for con.
java.util.Map map = con.getTypeMap();
map.put("SchemaName.ADDRESSES", Class.forName("Addresses"));
con.setTypeMap(map);
The Map
object map, the type
map associated with con, now contains at least one custom mapping
(or more if any mappings have already been added). The programmer
will have previously created the class Addresses
,
probably using a tool to generate it. Note that it is an error to
supply a class that does not implement the interface
SQLData
. The class Addresses
, which does
implement SQLData
, will have a field for each
attribute in ADDRESSES
, and whenever a value of type
ADDRESSES
is operated on by a method in the Java
programming language, the default will be to map it to an instance
of the class Addresses
. The type map associated with a
connection is the default type map in the sense that a method will
use it if no other type map is explicitly passed to it.
Note that the name of the UDT should be
the fully-qualified name. For some DBMSs, this will be of the form
catalogName.schemaName.UDTName.
Many DBMSs, however,
do not use this form and, for example, use a schema name but no
catalog name. The important thing is to use the form appropriate
for a particular DBMS. The DatabaseMetaData
methods
getCatalogs
, getCatalogTerm
,
getCatalogSeparator
, getSchemas
and
getSchemaTerm
give information about a DBMS's
catalogs, schemas, preferred terms, and the separator it uses.
Instead of modifying the existing type
map, an application can replace it with a completely different type
map. This is done with the Connection
method
setTypeMap
, as shown in the following code fragment.
It creates a new type map, gives it two entries (each with an SQL
UDT name and the class to which values of that type should be
mapped), and then installs the new type map as the one associated
with the Connection
con.
java.util.Map newConnectionMap = new java.util.Hashtable();
newConnectionMap.put("SchemaName.UDTName1", Class.forName("className1"));
newConnectionMap.put("SchemaName.UDTName2", Class.forName("className2"));
con.setTypeMap(newConnectionMap);
The Map
object
newConnectionMap now replaces the type map originally associated
with the Connection
con, and it will be used for
custom type mappings unless it is itself replaced. Note that the
example uses the default constructor for the class
Hashtable
to create the new type map. This class is
one of many implementations of java.util.Map
provided
in the Java 2 platform API, and one of the others could have been
used as well.
In the previous examples, the type map associated with a connection was modified to contain additional mappings or set to be a different type map altogether. In either case, though, the connection's type map is the default for custom mapping JDBC types to types in the Java programming language. The next example will show how to supersede the connection's type map by supplying a method with a different type map.
Methods whose implementations may involve
a custom mapping for UDTs have two versions, one that takes a type
map and one that does not. If a type map is passed to one of these
methods, the given type map will be used instead of the one
associated with the connection. For example, the Array
methods getArray
and getResultSet
have
versions that take a type map and versions that do not. If a type
map is passed to a method, it will map the array elements using the
given type map. If no type map is specified, the method will use
the type map associated with the connection.
The capability for supplying a type map to a method makes it possible for values of the same user-defined type to have different mappings. For example, if two applications are using the same connection and operating on the same column value, one could use the type map associated with the connection, and the other could use a different type map by supplying it as an argument to the appropriate method.
The following code fragment creates a new
type map and provides it as a parameter to the Array
method getArray
.
java.util.Map arrayMap = new java.util.Hashtable();
arrayMap.put("SchemaName.DIMENSIONS", Class.forName("Dimensions"));
Dimensions [] d = (Dimensions [])array.getArray(arrayMap);
In the second line, the new type map
arrayMap is given an entry with the fully-qualified name of an SQL
structured type (SchemaName.DIMENSIONS)
and the Java
class object (Class.forName("Dimensions"))
. This
establishes the mapping between the Java type
Dimensions
and the SQL type DIMENSIONS
.
In the third line, arrayMap is specified as the type map to use for
mapping the contents of this Array
object, whose base
type is SchemaName.DIMENSIONS
.
The method getArray
will
materialize the elements of the SQL3 ARRAY
value
designated by array, with each element being mapped according to
the mapping specified in arrayMap. In other words, each element,
which is a value of type Schema.DIMENSIONS
, will be
translated to an instance of the class Dimensions
by
mapping the attributes of each DIMENSIONS
value to the
fields of a Dimensions
object. If the base type of the
array does not match the UDT named in arrayMap, the driver will
convert the array's elements according to the standard mapping. If
no type map is specified to the method getArray
, the
driver uses the mapping indicated in the connection's type map. If
that type map has no entry for Schema.DIMENSIONS
, the
driver will instead use the standard mapping.
Copyright © 1993, 2010, Oracle and/or its affiliates. All rights reserved.