SQLAlchemy 0.7 Documentation
- Connect Arguments
- Auto Increment Behavior
- Identifier Casing
- LIMIT/OFFSET Support
- ON UPDATE CASCADE
- Oracle 8 Compatibility
- Synonym/DBLINK Reflection
- Oracle Data Types
- cx_Oracle Notes
- zxjdbc Notes
Support for the Oracle database.
Oracle versions 8 through current (11g at the time of this writing) are supported.
For information on connecting via specific drivers, see the documentation for that driver.
Connect Arguments¶
The dialect supports several create_engine() arguments which affect the behavior of the dialect regardless of the driver in use.
- use_ansi - Use ANSI JOIN constructs (see the section on Oracle 8). Defaults to True. If False, Oracle-8 compatible constructs are used for joins.
- optimize_limits - defaults to False. See the section on LIMIT/OFFSET.
- use_binds_for_limits - defaults to True. See the section on LIMIT/OFFSET.
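As a minimal sketch (no database or driver needed), the flags above can be observed on the dialect class itself, which accepts the same keyword arguments that create_engine() forwards to it:

```python
# A minimal sketch: constructing the Oracle dialect directly shows the
# keyword arguments that create_engine() forwards to it.  Equivalent to
#   create_engine("oracle://user:pass@dsn", use_ansi=False,
#                 optimize_limits=True)
# where the DSN is a placeholder.
from sqlalchemy.dialects.oracle.base import OracleDialect

dialect = OracleDialect(use_ansi=False, optimize_limits=True)
print(dialect.use_ansi, dialect.optimize_limits)  # prints False True
```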
Auto Increment Behavior¶
SQLAlchemy Table objects which include integer primary keys are usually assumed to have “autoincrementing” behavior, meaning they can generate their own primary key values upon INSERT. Since Oracle has no “autoincrement” feature, SQLAlchemy relies upon sequences to produce these values. With the Oracle dialect, a sequence must always be explicitly specified to enable autoincrement. This diverges from the majority of documentation examples, which assume the usage of an autoincrement-capable database. To specify sequences, use the sqlalchemy.schema.Sequence object, which is passed to a Column construct:
t = Table('mytable', metadata,
    Column('id', Integer, Sequence('id_seq'), primary_key=True),
    Column(...), ...
)
This step is also required when using table reflection, i.e. autoload=True:
t = Table('mytable', metadata,
    Column('id', Integer, Sequence('id_seq'), primary_key=True),
    autoload=True
)
Identifier Casing¶
In Oracle, the data dictionary represents all case insensitive identifier names using UPPERCASE text. SQLAlchemy, on the other hand, considers an all-lowercase identifier name to be case insensitive. The Oracle dialect converts all case insensitive identifiers to and from those two formats during schema-level communication, such as reflection of tables and indexes. Using an UPPERCASE name on the SQLAlchemy side indicates a case sensitive identifier, and SQLAlchemy will quote the name - this will cause mismatches against data dictionary data received from Oracle, so unless identifier names have been truly created as case sensitive (i.e. using quoted names), all-lowercase names should be used on the SQLAlchemy side.
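The quoting behavior can be seen by compiling DDL offline (no database needed); an all-lowercase name renders unquoted while an UPPERCASE name is quoted:

```python
# Illustration of the casing rules: an all-lowercase table name is
# treated as case insensitive and rendered unquoted, while an UPPERCASE
# name is treated as case sensitive and is quoted in the DDL.
from sqlalchemy import Table, Column, Integer, MetaData
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects.oracle.base import OracleDialect

metadata = MetaData()
lower = Table('mytable', metadata, Column('id', Integer))
upper = Table('MYTABLE2', metadata, Column('ID', Integer))

lower_ddl = str(CreateTable(lower).compile(dialect=OracleDialect()))
upper_ddl = str(CreateTable(upper).compile(dialect=OracleDialect()))
print(lower_ddl)  # table and column names rendered unquoted
print(upper_ddl)  # table and column names rendered quoted
```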
Changed in version 0.6: SQLAlchemy uses the “native unicode” mode provided as of cx_oracle 5. cx_oracle 5.0.2 or greater is recommended for support of NCLOB. If not using cx_oracle 5, the NLS_LANG environment variable needs to be set in order for the oracle client library to use proper encoding, such as “AMERICAN_AMERICA.UTF8”.
Also note that Oracle supports unicode data through the NVARCHAR and NCLOB data types. When using the SQLAlchemy Unicode and UnicodeText types, these DDL types will be used within CREATE TABLE statements. Usage of VARCHAR2 and CLOB with unicode text still requires NLS_LANG to be set.
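For explicit control over the DDL emitted, the dialect-level NVARCHAR2 and NCLOB types can also be used directly; compiling a type object offline (no database needed) shows the DDL it produces:

```python
# Illustration: the Oracle dialect's unicode-capable types compile to
# the corresponding Oracle DDL keywords.
from sqlalchemy.dialects.oracle import NVARCHAR2, NCLOB
from sqlalchemy.dialects.oracle.base import OracleDialect

nvarchar_ddl = NVARCHAR2(100).compile(dialect=OracleDialect())
nclob_ddl = NCLOB().compile(dialect=OracleDialect())
print(nvarchar_ddl)  # prints NVARCHAR2(100)
print(nclob_ddl)     # prints NCLOB
```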
LIMIT/OFFSET Support¶
Oracle has no support for the LIMIT or OFFSET keywords. SQLAlchemy uses a wrapped subquery approach in conjunction with ROWNUM. The exact methodology is taken from http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html .
There are two options which affect its behavior:
- the “FIRST_ROWS(N)” optimization keyword is not used by default. To enable the usage of this optimization directive, specify optimize_limits=True to create_engine().
- the values passed for the limit/offset are sent as bound parameters. Some users have observed that Oracle produces a poor query plan when the values are sent as binds and not rendered literally. To render the limit/offset values literally within the SQL statement, specify use_binds_for_limits=False to create_engine().
Some users have reported better performance when the entirely different approach of a window query is used, i.e. ROW_NUMBER() OVER (ORDER BY), to provide LIMIT/OFFSET (note that the majority of users don’t observe this). To suit this case the method used for LIMIT/OFFSET can be replaced entirely. See the recipe at http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowFunctionsByDefault which installs a select compiler that overrides the generation of limit/offset with a window function.
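The wrapping can be observed by compiling a limited SELECT offline (no database needed); note that no LIMIT keyword appears in the rendered SQL, with Oracle-specific constructs used instead:

```python
# Sketch: compiling a SELECT with .limit()/.offset() against the Oracle
# dialect.  The rendered SQL uses Oracle constructs (such as a ROWNUM
# subquery) rather than a LIMIT keyword.
from sqlalchemy import Table, Column, Integer, MetaData
from sqlalchemy.dialects.oracle.base import OracleDialect

t = Table('mytable', MetaData(), Column('id', Integer))
stmt = t.select().limit(10).offset(5)
sql = str(stmt.compile(dialect=OracleDialect()))
print(sql)
```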
ON UPDATE CASCADE¶
Oracle doesn’t have native ON UPDATE CASCADE functionality. A trigger based solution is available at http://asktom.oracle.com/tkyte/update_cascade/index.html .
When using the SQLAlchemy ORM, the ORM has limited ability to manually issue cascading updates - specify ForeignKey objects using the deferrable=True, initially='deferred' keyword arguments, and specify passive_updates=False on each relationship().
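A sketch of this configuration (table and relationship names here are hypothetical):

```python
# Sketch of the ON UPDATE CASCADE workaround configuration described
# above: a deferred ForeignKey, with passive_updates=False to be given
# on the corresponding relationship().
from sqlalchemy import Table, Column, Integer, MetaData, ForeignKey

metadata = MetaData()
parent = Table('parent', metadata,
    Column('id', Integer, primary_key=True))
child = Table('child', metadata,
    Column('id', Integer, primary_key=True),
    Column('parent_id', Integer,
           ForeignKey('parent.id', deferrable=True, initially='deferred')))

# on the ORM side, each relationship() against this ForeignKey would
# additionally specify:
#   relationship(Child, passive_updates=False)
```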
Oracle 8 Compatibility¶
When Oracle 8 is detected, the dialect internally configures itself to the following behaviors:
- the use_ansi flag is set to False. This has the effect of converting all JOIN phrases into the WHERE clause, and in the case of LEFT OUTER JOIN makes use of Oracle’s (+) operator.
- the NVARCHAR2 and NCLOB datatypes are no longer generated as DDL when the Unicode type is used - VARCHAR2 and CLOB are issued instead. This is because these types don’t seem to work correctly on Oracle 8 even though they are available. The NVARCHAR and NCLOB types will always generate NVARCHAR2 and NCLOB.
- the “native unicode” mode is disabled when using cx_oracle, i.e. SQLAlchemy encodes all Python unicode objects to “string” before passing in as bind parameters.
Oracle Data Types¶
As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with Oracle are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:
from sqlalchemy.dialects.oracle import \
    BFILE, BLOB, CHAR, CLOB, DATE, DATETIME, \
    DOUBLE_PRECISION, FLOAT, INTERVAL, LONG, NCLOB, \
    NUMBER, NVARCHAR, NVARCHAR2, RAW, TIMESTAMP, VARCHAR, \
    VARCHAR2
Types which are specific to Oracle, or have Oracle-specific construction arguments, are as follows:
- class sqlalchemy.dialects.oracle.BFILE(length=None)¶
Construct a LargeBinary type.
Parameters: length – optional, a length for the column for use in DDL statements, for those BLOB types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.
- class sqlalchemy.dialects.oracle.DOUBLE_PRECISION(precision=None, scale=None, asdecimal=None)¶
- class sqlalchemy.dialects.oracle.INTERVAL(day_precision=None, second_precision=None)¶
- __init__(day_precision=None, second_precision=None)¶
Construct an INTERVAL.
Note that only DAY TO SECOND intervals are currently supported. This is due to a lack of support for YEAR TO MONTH intervals within available DBAPIs (cx_oracle and zxjdbc).
- day_precision – the day precision value. This is the number of digits to store for the day field. Defaults to “2”.
- second_precision – the second precision value. This is the number of digits to store for the fractional seconds field. Defaults to “6”.
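Compiling the type offline (no database needed) shows the DDL these precision arguments produce:

```python
# Sketch: the INTERVAL type's precision arguments map directly onto the
# DAY(n) TO SECOND(n) clause of the rendered DDL.
from sqlalchemy.dialects.oracle import INTERVAL
from sqlalchemy.dialects.oracle.base import OracleDialect

ddl = INTERVAL(day_precision=2, second_precision=6).compile(
    dialect=OracleDialect())
print(ddl)  # prints INTERVAL DAY(2) TO SECOND(6)
```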
- class sqlalchemy.dialects.oracle.NCLOB(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)¶
- __init__(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)¶
Create a string-holding type.
- length – optional, a length for the column for use in DDL statements. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
- convert_unicode –
When set to True, the String type will assume that input is to be passed as Python unicode objects, and results returned as Python unicode objects. If the DBAPI in use does not support Python unicode (which is fewer and fewer these days), SQLAlchemy will encode/decode the value, using the value of the encoding parameter passed to create_engine() as the encoding.
When using a DBAPI that natively supports Python unicode objects, this flag generally does not need to be set. For columns that are explicitly intended to store non-ASCII data, the Unicode or UnicodeText types should be used regardless, which feature the same behavior of convert_unicode but also indicate an underlying column type that directly supports unicode, such as NVARCHAR.
For the extremely rare case that Python unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python unicode, the value force can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.
- assert_unicode – Deprecated. A warning is emitted when a non-unicode object is passed to the Unicode subtype of String, or the UnicodeText subtype of Text. See Unicode for information on how to control this warning.
- unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that convert_unicode is set to force - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.
- class sqlalchemy.dialects.oracle.NUMBER(precision=None, scale=None, asdecimal=None)¶
- class sqlalchemy.dialects.oracle.LONG(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)¶
- __init__(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)¶
Create a string-holding type.
- convert_unicode – this and the remaining parameters behave as described for NCLOB above.
- class sqlalchemy.dialects.oracle.RAW(length=None)¶
cx_Oracle Notes¶
Support for the Oracle database via the cx_oracle driver.
The Oracle dialect uses the cx_oracle driver, available at http://cx-oracle.sourceforge.net/ . The dialect has several behaviors which are specifically tailored towards compatibility with this module. Version 5.0 or greater is strongly recommended, as SQLAlchemy makes extensive use of the cx_oracle output converters for numeric and string conversions.
Connecting with create_engine() uses the standard URL approach of oracle://user:pass@host:port/dbname[?key=value&key=value...]. If dbname is present, the host, port, and dbname tokens are converted to a TNS name using the cx_oracle makedsn() function. Otherwise, the host token is taken directly as a TNS name.
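The two URL forms can be sketched with make_url(), which shows how the tokens are parsed without requiring the driver (credentials and host names below are placeholders):

```python
# Sketch of the two Oracle URL forms described above, parsed offline.
from sqlalchemy.engine.url import make_url

# host, port and dbname all present: these tokens are combined into a
# TNS name via cx_Oracle's makedsn() at connect time
url = make_url("oracle://scott:tiger@hostname:1521/dbname")
print(url.host, url.port, url.database)  # prints hostname 1521 dbname

# no dbname: the host token is taken directly as the TNS name
tns = make_url("oracle://scott:tiger@tnsname")
print(tns.host, tns.database)  # prints tnsname None
```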
Additional arguments which may be specified either as query string arguments on the URL, or as keyword arguments to create_engine() are:
- allow_twophase - enable two-phase transactions. Defaults to True.
- arraysize - set the cx_oracle.arraysize value on cursors, in SQLAlchemy it defaults to 50. See the section on “LOB Objects” below.
- auto_convert_lobs - defaults to True, see the section on LOB objects.
- auto_setinputsizes - the cx_oracle.setinputsizes() call is issued for all bind parameters. This is required for LOB datatypes but can be disabled to reduce overhead. Defaults to True.
- mode - This is given the string value of SYSDBA or SYSOPER, or alternatively an integer value. This value is only available as a URL query string argument.
- threaded - enable multithreaded access to cx_oracle connections. Defaults to True. Note that this is the opposite default of cx_oracle itself.
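The URL query string form of these arguments can be sketched as follows (values shown are placeholders; note that query string values arrive as strings):

```python
# Sketch: passing cx_oracle options as URL query string arguments,
# parsed offline with make_url().
from sqlalchemy.engine.url import make_url

url = make_url("oracle://scott:tiger@dsn?mode=SYSDBA&arraysize=100")
query = dict(url.query)
print(query)  # mode and arraysize available as string values
```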
cx_oracle 5 fully supports Python unicode objects. SQLAlchemy will pass all unicode strings directly to cx_oracle, and additionally uses an output handler so that all string based result values are returned as unicode as well. Generally, the NLS_LANG environment variable determines the nature of the encoding to be used.
Note that this behavior is disabled when Oracle 8 is detected, as it has been observed that issues remain when passing Python unicode objects to cx_oracle with Oracle 8.
LOB Objects¶
cx_oracle returns Oracle LOBs using the cx_oracle.LOB object. SQLAlchemy converts these to strings so that the interface of the Binary type is consistent with that of other backends, and so that the linkage to a live cursor is not needed in scenarios like result.fetchmany() and result.fetchall(). This means that by default, LOB objects are fully fetched unconditionally by SQLAlchemy, and the linkage to a live cursor is broken.
To disable this processing, pass auto_convert_lobs=False to create_engine().
Two Phase Transaction Support¶
Two Phase transactions are implemented using XA transactions, and are known to work in a rudimentary fashion with recent versions of cx_Oracle as of SQLAlchemy 0.8.0b2, 0.7.10. However, the mechanism is not yet considered to be robust and should still be regarded as experimental.
In particular, the cx_Oracle DBAPI as recently as 5.1.2 has a bug regarding two phase which prevents a particular DBAPI connection from being consistently usable in both prepared transactions as well as traditional DBAPI usage patterns; therefore once a particular connection is used via Connection.begin_prepared(), all subsequent usages of the underlying DBAPI connection must be within the context of prepared transactions.
The default behavior of Engine is to maintain a pool of DBAPI connections. Therefore, due to the above glitch, a DBAPI connection that has been used in a two-phase operation, and is then returned to the pool, will not be usable in a non-two-phase context. To avoid this situation, the application can make one of several choices:
- Disable connection pooling using NullPool
- Ensure that the particular Engine in use is only used for two-phase operations. An Engine bound to an ORM Session which includes twophase=True will consistently use the two-phase transaction style.
- For ad-hoc two-phase operations without disabling pooling, the DBAPI connection in use can be evicted from the connection pool using the Connection.detach method.
Changed in version 0.8.0b2,0.7.10: Support for cx_oracle prepared transactions has been implemented and tested.
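The first two strategies above can be sketched as follows; the DSN is a placeholder and actually creating the engine requires the cx_Oracle driver, so that line is shown as a comment:

```python
# Sketch of the pooling strategies for two-phase usage described above.
from sqlalchemy.pool import NullPool
from sqlalchemy.orm import sessionmaker

# 1. disable pooling entirely (requires cx_Oracle to actually connect):
# engine = create_engine("oracle://scott:tiger@dsn", poolclass=NullPool)

# 2. a Session factory whose sessions consistently use the two-phase
#    transaction style:
Session = sessionmaker(twophase=True)
```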
Precision Numerics¶
The SQLAlchemy dialect goes through a lot of steps to ensure that decimal numbers are sent and received with full accuracy. An “outputtypehandler” callable is associated with each cx_oracle connection object which detects numeric types and receives them as string values instead of as Python floats; each string is then passed to the Python Decimal constructor. The Numeric and Float types under the cx_oracle dialect are aware of this behavior, and will coerce the Decimal to float if the asdecimal flag is False (default on Float, optional on Numeric).
Because the handler coerces to Decimal in all cases first, the feature can detract significantly from performance. If precision numerics aren’t required, the decimal handling can be disabled by passing the flag coerce_to_decimal=False to create_engine():
engine = create_engine("oracle+cx_oracle://dsn", coerce_to_decimal=False)
New in version 0.7.6: Add the coerce_to_decimal flag.
The handler attempts to use the “precision” and “scale” attributes of the result set column to best determine if subsequent incoming values should be received as Decimal as opposed to int (in which case no processing is added). There are several scenarios where OCI does not provide unambiguous data as to the numeric type, including some situations where individual rows may return a combination of floating point and integer values. Certain values for “precision” and “scale” have been observed to determine this scenario. When it occurs, the outputtypehandler receives each value as a string and passes it to a processing function which detects, for each returned value, whether a decimal point is present, converting to Decimal if so and to int otherwise. The intention is that simple int-based statements like “SELECT my_seq.nextval() FROM DUAL” continue to return ints and not Decimal objects, and that any kind of floating point value is received as a string so that there is no floating point loss of precision.
The “decimal point is present” logic itself is also sensitive to locale. Under OCI, this is controlled by the NLS_LANG environment variable. Upon first connection, the dialect runs a test to determine the current “decimal” character, which can be a comma "," for European locales. From that point forward the outputtypehandler uses that character to represent a decimal point. Note that cx_oracle 5.0.3 or greater is required when dealing with numerics with locale settings that don’t use a period "." as the decimal character.
Changed in version 0.6.6: The outputtypehandler supports the use of a comma "," character to represent a decimal point.
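The detection can be illustrated in pure Python (this is a hypothetical helper sketching the logic, not the dialect's actual implementation):

```python
# Illustration of the "decimal point is present" detection described
# above: values arrive as strings; a value containing the locale's
# decimal character becomes a Decimal, otherwise an int.
from decimal import Decimal

def to_number(value, decimal_char="."):
    # hypothetical helper mirroring the outputtypehandler's logic
    if decimal_char in value:
        # normalize the locale decimal character before Decimal parsing
        return Decimal(value.replace(decimal_char, "."))
    return int(value)

print(to_number("15"))                        # prints 15
print(to_number("15.753"))                    # prints 15.753
print(to_number("15,753", decimal_char=","))  # prints 15.753
```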
zxjdbc Notes¶
Support for the Oracle database via the zxjdbc JDBC connector.
The official Oracle JDBC driver is at http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html.