Release: 1.0.0 | Release Date: Not released

SQLAlchemy 1.0 Documentation

Oracle

Support for the Oracle database.

DBAPI Support

The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.

Connect Arguments

The dialect supports several create_engine() arguments which affect the behavior of the dialect regardless of driver in use.

  • use_ansi - Use ANSI JOIN constructs (see the section on Oracle 8). Defaults to True. If False, Oracle-8 compatible constructs are used for joins.
  • optimize_limits - defaults to False. See the section on LIMIT/OFFSET.
  • use_binds_for_limits - defaults to True. See the section on LIMIT/OFFSET.
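For example, these flags can be combined in a single create_engine() call (the DSN and credentials here are placeholders):

```python
from sqlalchemy import create_engine

# hypothetical DSN; adjust user/password/host for your environment
engine = create_engine(
    "oracle://scott:tiger@dsn",
    use_ansi=True,            # ANSI JOIN syntax (the default)
    optimize_limits=False,    # see LIMIT/OFFSET Support below
    use_binds_for_limits=True,
)
```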

Auto Increment Behavior

SQLAlchemy Table objects which include integer primary keys are usually assumed to have “autoincrementing” behavior, meaning they can generate their own primary key values upon INSERT. Since Oracle has no “autoincrement” feature, SQLAlchemy relies upon sequences to produce these values. With the Oracle dialect, a sequence must always be explicitly specified to enable autoincrement. This diverges from the majority of documentation examples, which assume the usage of an autoincrement-capable database. To specify sequences, use the sqlalchemy.schema.Sequence object which is passed to a Column construct:

t = Table('mytable', metadata,
      Column('id', Integer, Sequence('id_seq'), primary_key=True),
      Column(...), ...
)

This step is also required when using table reflection, i.e. autoload=True:

t = Table('mytable', metadata,
      Column('id', Integer, Sequence('id_seq'), primary_key=True),
      autoload=True
)

Identifier Casing

In Oracle, the data dictionary represents all case insensitive identifier names using UPPERCASE text. SQLAlchemy on the other hand considers an all-lower case identifier name to be case insensitive. The Oracle dialect converts all case insensitive identifiers to and from those two formats during schema level communication, such as reflection of tables and indexes. Using an UPPERCASE name on the SQLAlchemy side indicates a case sensitive identifier, and SQLAlchemy will quote the name - this will cause mismatches against data dictionary data received from Oracle, so unless identifier names have been truly created as case sensitive (i.e. using quoted names), all lowercase names should be used on the SQLAlchemy side.
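The convention can be sketched with a small illustration of the rule itself (this is not SQLAlchemy's internal implementation, only a model of the casing behavior described above):

```python
def to_oracle(name):
    """SQLAlchemy -> Oracle: an all-lowercase name is case insensitive,
    so the data dictionary stores it as UPPERCASE."""
    return name.upper() if name == name.lower() else name

def from_oracle(name):
    """Oracle -> SQLAlchemy: an all-UPPERCASE name is case insensitive,
    so it is presented as lowercase on the SQLAlchemy side."""
    return name.lower() if name == name.upper() else name

print(to_oracle("mytable"))    # case insensitive -> "MYTABLE"
print(from_oracle("MYTABLE"))  # case insensitive -> "mytable"
print(to_oracle("MyTable"))    # mixed case stays quoted/case sensitive -> "MyTable"
```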

LIMIT/OFFSET Support

Oracle has no support for the LIMIT or OFFSET keywords. SQLAlchemy uses a wrapped subquery approach in conjunction with ROWNUM. The exact methodology is taken from http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html .

There are two options which affect its behavior:

  • the “FIRST ROWS()” optimization keyword is not used by default. To enable the usage of this optimization directive, specify optimize_limits=True to create_engine().
  • the values passed for the limit/offset are sent as bound parameters. Some users have observed that Oracle produces a poor query plan when the values are sent as binds and not rendered literally. To render the limit/offset values literally within the SQL statement, specify use_binds_for_limits=False to create_engine().

Some users have reported better performance when the entirely different approach of a window query is used, i.e. ROW_NUMBER() OVER (ORDER BY), to provide LIMIT/OFFSET (note that the majority of users don’t observe this). To suit this case the method used for LIMIT/OFFSET can be replaced entirely. See the recipe at http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowFunctionsByDefault which installs a select compiler that overrides the generation of limit/offset with a window function.
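The general shape of the wrapped-subquery approach can be sketched as follows (a simplified illustration of the ROWNUM technique from the article above; the SQL SQLAlchemy actually emits differs in its aliasing and bind-parameter details):

```python
def rownum_paginate(inner_sql, limit, offset):
    # ROWNUM is assigned before ORDER BY takes effect, so the ordered
    # query must be nested: the middle query caps the row count, and the
    # outer query skips past the offset using the captured counter.
    return (
        "SELECT * FROM ("
        "SELECT sub.*, ROWNUM AS ora_rn FROM ({inner}) sub "
        "WHERE ROWNUM <= {stop}) "
        "WHERE ora_rn > {start}"
    ).format(inner=inner_sql, stop=limit + offset, start=offset)

sql = rownum_paginate("SELECT id FROM mytable ORDER BY id", limit=10, offset=20)
print(sql)
```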

RETURNING Support

The Oracle database supports a limited form of RETURNING, in order to retrieve result sets of matched rows from INSERT, UPDATE and DELETE statements. Oracle’s RETURNING..INTO syntax only supports one row being returned, as it relies upon OUT parameters in order to function. In addition, supported DBAPIs have further limitations (see RETURNING Support).

SQLAlchemy’s “implicit returning” feature, which employs RETURNING within an INSERT and sometimes an UPDATE statement in order to fetch newly generated primary key values and other SQL defaults and expressions, is normally enabled on the Oracle backend. By default, “implicit returning” typically only fetches the value of a single nextval(some_seq) expression embedded into an INSERT in order to increment a sequence within an INSERT statement and get the value back at the same time. To disable this feature across the board, specify implicit_returning=False to create_engine():

engine = create_engine("oracle://scott:tiger@dsn",
                       implicit_returning=False)

Implicit returning can also be disabled on a table-by-table basis as a table option:

# Core Table
my_table = Table("my_table", metadata, ..., implicit_returning=False)


# declarative
class MyClass(Base):
    __tablename__ = 'my_table'
    __table_args__ = {"implicit_returning": False}

See also

RETURNING Support - additional cx_oracle-specific restrictions on implicit returning.

ON UPDATE CASCADE

Oracle doesn’t have native ON UPDATE CASCADE functionality. A trigger based solution is available at http://asktom.oracle.com/tkyte/update_cascade/index.html .

When using the SQLAlchemy ORM, the ORM has limited ability to manually issue cascading updates - specify ForeignKey objects using the “deferrable=True, initially=’deferred’” keyword arguments, and specify “passive_updates=False” on each relationship().
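A sketch of this configuration (the table, class, and column names here are hypothetical, and Base is assumed to be a declarative base):

```python
from sqlalchemy import Column, ForeignKey, Integer

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    # deferred FK check allows the parent PK to change within the transaction
    parent_id = Column(
        Integer,
        ForeignKey('parent.id', deferrable=True, initially='deferred'))
    # passive_updates=False has the ORM issue UPDATEs to dependent rows itself
    parent = relationship(Parent, passive_updates=False)
```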

Oracle 8 Compatibility

When Oracle 8 is detected, the dialect internally configures itself to the following behaviors:

  • the use_ansi flag is set to False. This has the effect of converting all JOIN phrases into the WHERE clause, and in the case of LEFT OUTER JOIN makes use of Oracle’s (+) operator.
  • the NVARCHAR2 and NCLOB datatypes are no longer generated as DDL when the Unicode type is used - VARCHAR2 and CLOB are issued instead. This is because these types don’t seem to work correctly on Oracle 8 even though they are available. The NVARCHAR and NCLOB types will always generate NVARCHAR2 and NCLOB.
  • the “native unicode” mode is disabled when using cx_oracle, i.e. SQLAlchemy encodes all Python unicode objects to “string” before passing in as bind parameters.

DateTime Compatibility

Oracle has no datatype known as DATETIME, it instead has only DATE, which can actually store a date and time value. For this reason, the Oracle dialect provides a type oracle.DATE which is a subclass of DateTime. This type has no special behavior, and is only present as a “marker” for this type; additionally, when a database column is reflected and the type is reported as DATE, the time-supporting oracle.DATE type is used.

Changed in version 0.9.4: Added oracle.DATE to subclass DateTime. This is a change as previous versions would reflect a DATE column as types.DATE, which subclasses Date. The only significance here is for schemes that are examining the type of column for use in special Python translations or for migrating schemas to other database backends.

Oracle Table Options

The CREATE TABLE phrase supports the following options with Oracle in conjunction with the Table construct:

  • ON COMMIT:

    Table(
        "some_table", metadata, ...,
        prefixes=['GLOBAL TEMPORARY'], oracle_on_commit='PRESERVE ROWS')

New in version 1.0.0.

Oracle Data Types

As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with Oracle are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:

from sqlalchemy.dialects.oracle import \
            BFILE, BLOB, CHAR, CLOB, DATE, \
            DOUBLE_PRECISION, FLOAT, INTERVAL, LONG, NCLOB, \
            NUMBER, NVARCHAR, NVARCHAR2, RAW, TIMESTAMP, VARCHAR, \
            VARCHAR2

Types which are specific to Oracle, or have Oracle-specific construction arguments, are as follows:

class sqlalchemy.dialects.oracle.BFILE(length=None)

Bases: sqlalchemy.types.LargeBinary

__init__(length=None)

Construct a LargeBinary type.

Parameters: length – optional, a length for the column for use in DDL statements, for those BLOB types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.
class sqlalchemy.dialects.oracle.DATE(timezone=False)

Bases: sqlalchemy.types.DateTime

Provide the oracle DATE type.

This type has no special Python behavior, except that it subclasses types.DateTime; this is to suit the fact that the Oracle DATE type supports a time value.

New in version 0.9.4.

__init__(timezone=False)

Construct a new DateTime.

Parameters: timezone – boolean. If True, and supported by the backend, will produce ‘TIMESTAMP WITH TIMEZONE’. For backends that don’t support timezone aware timestamps, has no effect.
class sqlalchemy.dialects.oracle.DOUBLE_PRECISION(precision=None, scale=None, asdecimal=None)

Bases: sqlalchemy.types.Numeric

class sqlalchemy.dialects.oracle.INTERVAL(day_precision=None, second_precision=None)

Bases: sqlalchemy.types.TypeEngine

__init__(day_precision=None, second_precision=None)

Construct an INTERVAL.

Note that only DAY TO SECOND intervals are currently supported. This is due to a lack of support for YEAR TO MONTH intervals within available DBAPIs (cx_oracle and zxjdbc).

Parameters:
  • day_precision – the day precision value. This is the number of digits to store for the day field. Defaults to “2”.
  • second_precision – the second precision value. This is the number of digits to store for the fractional seconds field. Defaults to “6”.
class sqlalchemy.dialects.oracle.NCLOB(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Bases: sqlalchemy.types.Text

__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Create a string-holding type.

Parameters:
  • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
  • collation

    Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and Postgresql. E.g.:

    >>> from sqlalchemy import cast, select, String
    >>> print select([cast('some string', String(collation='utf8'))])
    SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

    New in version 0.8: Added support for COLLATE to all string types.

  • convert_unicode

    When set to True, the String type will assume that input is to be passed as Python unicode objects, and results returned as Python unicode objects. If the DBAPI in use does not support Python unicode (which is fewer and fewer these days), SQLAlchemy will encode/decode the value, using the value of the encoding parameter passed to create_engine() as the encoding.

    When using a DBAPI that natively supports Python unicode objects, this flag generally does not need to be set. For columns that are explicitly intended to store non-ASCII data, the Unicode or UnicodeText types should be used regardless, which feature the same behavior of convert_unicode but also indicate an underlying column type that directly supports unicode, such as NVARCHAR.

    For the extremely rare case that Python unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python unicode, the value force can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.

  • unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that convert_unicode is set to force - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.
class sqlalchemy.dialects.oracle.NUMBER(precision=None, scale=None, asdecimal=None)

Bases: sqlalchemy.types.Numeric, sqlalchemy.types.Integer

class sqlalchemy.dialects.oracle.LONG(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Bases: sqlalchemy.types.Text

__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

Create a string-holding type.

Parameters:
  • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
  • collation

    Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and Postgresql. E.g.:

    >>> from sqlalchemy import cast, select, String
    >>> print select([cast('some string', String(collation='utf8'))])
    SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

    New in version 0.8: Added support for COLLATE to all string types.

  • convert_unicode

    When set to True, the String type will assume that input is to be passed as Python unicode objects, and results returned as Python unicode objects. If the DBAPI in use does not support Python unicode (which is fewer and fewer these days), SQLAlchemy will encode/decode the value, using the value of the encoding parameter passed to create_engine() as the encoding.

    When using a DBAPI that natively supports Python unicode objects, this flag generally does not need to be set. For columns that are explicitly intended to store non-ASCII data, the Unicode or UnicodeText types should be used regardless, which feature the same behavior of convert_unicode but also indicate an underlying column type that directly supports unicode, such as NVARCHAR.

    For the extremely rare case that Python unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python unicode, the value force can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.

  • unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that convert_unicode is set to force - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.
class sqlalchemy.dialects.oracle.RAW(length=None)

Bases: sqlalchemy.types._Binary

cx_Oracle

Support for the Oracle database via the cx-Oracle driver.

DBAPI

Documentation and download information (if applicable) for cx-Oracle is available at: http://cx-oracle.sourceforge.net/

Connecting

Connect String:

oracle+cx_oracle://user:pass@host:port/dbname[?key=value&key=value...]

Additional Connect Arguments

When connecting with dbname present, the host, port, and dbname tokens are converted to a TNS name using the cx_oracle makedsn() function. Otherwise, the host token is taken directly as a TNS name.
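The resulting DSN is a TNS connect descriptor; a simplified sketch of the transformation follows (cx_Oracle's actual makedsn() handles additional options such as service names, so this is an illustration only):

```python
def makedsn_sketch(host, port, sid):
    # simplified form of the TNS descriptor cx_Oracle.makedsn() builds
    return (
        "(DESCRIPTION="
        "(ADDRESS=(PROTOCOL=TCP)(HOST={host})(PORT={port}))"
        "(CONNECT_DATA=(SID={sid})))"
    ).format(host=host, port=port, sid=sid)

print(makedsn_sketch("dbhost", 1521, "orcl"))
```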

Additional arguments which may be specified either as query string arguments on the URL, or as keyword arguments to create_engine() are:

  • allow_twophase - enable two-phase transactions. Defaults to True.

  • arraysize - set the cx_oracle.arraysize value on cursors, defaulting to 50. This setting is significant with cx_Oracle as the contents of LOB objects are only readable within a “live” row (e.g. within a batch of 50 rows).

  • auto_convert_lobs - defaults to True; See LOB Objects.

  • auto_setinputsizes - the cx_oracle.setinputsizes() call is issued for all bind parameters. This is required for LOB datatypes but can be disabled to reduce overhead. Defaults to True. Specific types can be excluded from this process using the exclude_setinputsizes parameter.

  • coerce_to_unicode - see Unicode for detail.

  • coerce_to_decimal - see Precision Numerics for detail.

  • exclude_setinputsizes - a tuple or list of string DBAPI type names to be excluded from the “auto setinputsizes” feature. The type names here must match DBAPI types that are found in the “cx_Oracle” module namespace, such as cx_Oracle.UNICODE, cx_Oracle.NCLOB, etc. Defaults to (STRING, UNICODE).

    New in version 0.8: specific DBAPI types can be excluded from the auto_setinputsizes feature via the exclude_setinputsizes attribute.

  • mode - given the string value SYSDBA or SYSOPER, or alternatively an integer value. This value is only available as a URL query string argument.

  • threaded - enable multithreaded access to cx_oracle connections. Defaults to True. Note that this is the opposite default of the cx_Oracle DBAPI itself.
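For example, URL-only options such as mode go in the query string, while the remainder may be given either way (credentials and host are hypothetical):

```python
from sqlalchemy import create_engine

# flags passed via the URL query string
engine = create_engine(
    "oracle+cx_oracle://scott:tiger@host:1521/orcl"
    "?mode=SYSDBA&threaded=true")

# equivalent keyword-argument style for the non-URL-only flags
engine = create_engine(
    "oracle+cx_oracle://scott:tiger@host:1521/orcl",
    arraysize=100,
    auto_convert_lobs=True)
```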

Unicode

The cx_Oracle DBAPI as of version 5 fully supports unicode, and has the ability to return string results as Python unicode objects natively.

When used in Python 3, cx_Oracle returns all strings as Python unicode objects (that is, plain str in Python 3). In Python 2, it will return as Python unicode those column values that are of type NVARCHAR or NCLOB. For column values that are of type VARCHAR or other non-unicode string types, it will return values as Python strings (e.g. bytestrings).

The cx_Oracle SQLAlchemy dialect presents two different options for the use case of returning VARCHAR column values as Python unicode objects under Python 2:

  • the cx_Oracle DBAPI has the ability to coerce all string results to Python unicode objects unconditionally using output type handlers. This has the advantage that the unicode conversion is global to all statements at the cx_Oracle driver level, meaning it works with raw textual SQL statements that have no typing information associated. However, this system has been observed to incur significant performance overhead, not only because it takes effect for all string values unconditionally, but also because cx_Oracle under Python 2 seems to use a pure-Python function call in order to do the decode operation, which under cPython can be orders of magnitude slower than doing it using C functions alone.
  • SQLAlchemy has unicode-decoding services built in, and when using SQLAlchemy’s C extensions, these functions do not use any Python function calls and are very fast. The disadvantage to this approach is that the unicode conversion only takes effect for statements where the Unicode type or String type with convert_unicode=True is explicitly associated with the result column. This is the case for any ORM or Core query or SQL expression as well as for a text() construct that specifies output column types, so in the vast majority of cases this is not an issue. However, when sending a completely raw string to Connection.execute(), this typing information isn’t present, unless the string is handled within a text() construct that adds typing information.

As of version 0.9.2 of SQLAlchemy, the default approach is to use SQLAlchemy’s typing system. This keeps cx_Oracle’s expensive Python 2 approach disabled unless the user explicitly wants it. Under Python 3, SQLAlchemy detects that cx_Oracle is returning unicode objects natively and cx_Oracle’s system is used.

To re-enable cx_Oracle’s output type handler under Python 2, the coerce_to_unicode=True flag (new in 0.9.4) can be passed to create_engine():

engine = create_engine("oracle+cx_oracle://dsn", coerce_to_unicode=True)

Alternatively, to run a pure string SQL statement and get VARCHAR results as Python unicode under Python 2 without using cx_Oracle’s native handlers, the text() feature can be used:

from sqlalchemy import text, Unicode
result = conn.execute(
    text("select username from user").columns(username=Unicode))

Changed in version 0.9.2: cx_Oracle’s outputtypehandlers are no longer used for unicode results of non-unicode datatypes in Python 2, after they were identified as a major performance bottleneck. SQLAlchemy’s own unicode facilities are used instead.

New in version 0.9.4: Added the coerce_to_unicode flag, to re-enable cx_Oracle’s outputtypehandler and revert to pre-0.9.2 behavior.

RETURNING Support

The cx_oracle DBAPI supports a limited subset of Oracle’s already limited RETURNING support. Typically, results can only be guaranteed for at most one column being returned; this is the typical case when SQLAlchemy uses RETURNING to get just the value of a primary-key-associated sequence value. Additional column expressions will cause problems in a non-deterministic way, due to cx_oracle’s lack of support for the OCI_DATA_AT_EXEC API which is required for more complex RETURNING scenarios.

For this reason, stability may be enhanced by disabling RETURNING support completely; SQLAlchemy otherwise will use RETURNING to fetch newly sequence-generated primary keys. As illustrated in RETURNING Support:

engine = create_engine("oracle://scott:tiger@dsn",
                       implicit_returning=False)

LOB Objects

cx_oracle returns Oracle LOBs using the cx_oracle.LOB object. SQLAlchemy converts these to strings so that the interface of the Binary type is consistent with that of other backends, and so that the linkage to a live cursor is not needed in scenarios like result.fetchmany() and result.fetchall(). This means that by default, LOB objects are fully fetched unconditionally by SQLAlchemy, and the linkage to a live cursor is broken.

To disable this processing, pass auto_convert_lobs=False to create_engine().
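For example (the DSN and credentials are placeholders):

```python
from sqlalchemy import create_engine

# leave LOB values as cx_oracle.LOB objects; rows must then be read
# while the originating cursor is still live
engine = create_engine("oracle+cx_oracle://scott:tiger@dsn",
                       auto_convert_lobs=False)
```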

Two Phase Transaction Support

Two Phase transactions are implemented using XA transactions, and are known to work in a rudimentary fashion with recent versions of cx_Oracle as of SQLAlchemy 0.8.0b2, 0.7.10. However, the mechanism is not yet considered to be robust and should still be regarded as experimental.

In particular, the cx_Oracle DBAPI as recently as 5.1.2 has a bug regarding two phase which prevents a particular DBAPI connection from being consistently usable in both prepared transactions as well as traditional DBAPI usage patterns; therefore once a particular connection is used via Connection.begin_prepared(), all subsequent usages of the underlying DBAPI connection must be within the context of prepared transactions.

The default behavior of Engine is to maintain a pool of DBAPI connections. Therefore, due to the above glitch, a DBAPI connection that has been used in a two-phase operation, and is then returned to the pool, will not be usable in a non-two-phase context. To avoid this situation, the application can make one of several choices:

  • Disable connection pooling using NullPool
  • Ensure that the particular Engine in use is only used for two-phase operations. An Engine bound to an ORM Session which includes twophase=True will consistently use the two-phase transaction style.
  • For ad-hoc two-phase operations without disabling pooling, the DBAPI connection in use can be evicted from the connection pool using the Connection.detach() method.
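Sketches of the first and third options (connection details are hypothetical):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# option 1: disable pooling; each connection is discarded when released
engine = create_engine("oracle+cx_oracle://scott:tiger@dsn",
                       poolclass=NullPool)

# option 3: evict a connection used for two-phase work from the pool
conn = engine.connect()
try:
    # ... two-phase operations here ...
    conn.detach()  # underlying DBAPI connection won't return to the pool
finally:
    conn.close()
```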

Changed in version 0.8.0b2,0.7.10: Support for cx_oracle prepared transactions has been implemented and tested.

Precision Numerics

The SQLAlchemy dialect goes through a lot of steps to ensure that decimal numbers are sent and received with full accuracy. An “outputtypehandler” callable is associated with each cx_oracle connection object which detects numeric types and receives them as string values, instead of receiving a Python float directly, which is then passed to the Python Decimal constructor. The Numeric and Float types under the cx_oracle dialect are aware of this behavior, and will coerce the Decimal to float if the asdecimal flag is False (default on Float, optional on Numeric).
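For example, the asdecimal flag controls which Python type is delivered for a numeric column (the column names here are hypothetical):

```python
from sqlalchemy import Column, Numeric, Float

# delivered as decimal.Decimal (the Numeric default)
Column('price', Numeric(10, 2))

# delivered as float; the outputtypehandler's Decimal is coerced back
Column('ratio', Numeric(10, 5, asdecimal=False))

# Float defaults to asdecimal=False, i.e. plain floats
Column('score', Float())
```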

Because the handler coerces to Decimal in all cases first, the feature can detract significantly from performance. If precision numerics aren’t required, the decimal handling can be disabled by passing the flag coerce_to_decimal=False to create_engine():

engine = create_engine("oracle+cx_oracle://dsn", coerce_to_decimal=False)

New in version 0.7.6: Add the coerce_to_decimal flag.

Another alternative for improving performance is to use the cdecimal library; see Numeric for additional notes.

The handler attempts to use the “precision” and “scale” attributes of the result set column to best determine if subsequent incoming values should be received as Decimal as opposed to int (in which case no processing is added). There are several scenarios where OCI does not provide unambiguous data as to the numeric type, including some situations where individual rows may return a combination of floating point and integer values. Certain values for “precision” and “scale” have been observed to determine this scenario. When it occurs, the outputtypehandler receives as string and then passes off to a processing function which detects, for each returned value, if a decimal point is present, and if so converts to Decimal, otherwise to int. The intention is that simple int-based statements like “SELECT my_seq.nextval() FROM DUAL” continue to return ints and not Decimal objects, and that any kind of floating point value is received as a string so that there is no floating point loss of precision.

The “decimal point is present” logic itself is also sensitive to locale. Under OCI, this is controlled by the NLS_LANG environment variable. Upon first connection, the dialect runs a test to determine the current “decimal” character, which can be a comma "," for European locales. From that point forward the outputtypehandler uses that character to represent a decimal point. Note that cx_oracle 5.0.3 or greater is required when dealing with numerics with locale settings that don’t use a period "." as the decimal character.

Changed in version 0.6.6: The outputtypehandler supports the case where the locale uses a comma "," character to represent a decimal point.

zxjdbc

Support for the Oracle database via the zxJDBC for Jython driver.

DBAPI

Drivers for this database are available at: http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html.

Connecting

Connect String:

oracle+zxjdbc://user:pass@host/dbname