SQLAlchemy 1.4 Documentation
- On this page:
- PostgreSQL
- Support for the PostgreSQL database.
- Sequences/SERIAL/IDENTITY
- Server Side Cursors
- Transaction Isolation Level
- Setting READ ONLY / DEFERRABLE
- Temporary Table / Resource Reset for Connection Pooling
- Setting Alternate Search Paths on Connect
- Remote-Schema Table Introspection and PostgreSQL search_path
- INSERT/UPDATE…RETURNING
- INSERT…ON CONFLICT (Upsert)
- Full Text Search
- FROM ONLY …
- PostgreSQL-Specific Index Options
- PostgreSQL Index Reflection
- Special Reflection Options
- PostgreSQL Table Options
- PostgreSQL Constraint Options
- Table values, Table and Column valued functions, Row and Tuple objects
- ARRAY Types
- JSON Types
- HSTORE Type
- ENUM Types
- PostgreSQL Data Types and Custom SQL Constructs
aggregate_order_by
array
ARRAY
array_agg()
Any()
All()
BIT
BYTEA
CIDR
ENUM
HSTORE
HSTORE.Comparator
HSTORE.Comparator.array()
HSTORE.Comparator.contained_by()
HSTORE.Comparator.contains()
HSTORE.Comparator.defined()
HSTORE.Comparator.delete()
HSTORE.Comparator.has_all()
HSTORE.Comparator.has_any()
HSTORE.Comparator.has_key()
HSTORE.Comparator.keys()
HSTORE.Comparator.matrix()
HSTORE.Comparator.slice()
HSTORE.Comparator.vals()
HSTORE.__init__()
HSTORE.bind_processor()
HSTORE.comparator_factory
HSTORE.hashable
HSTORE.result_processor()
hstore
INET
INTERVAL
JSON
JSONB
MACADDR
MACADDR8
MONEY
OID
REGCLASS
TIMESTAMP
TIME
TSVECTOR
UUID
- Range Types
INT4RANGE
INT8RANGE
NUMRANGE
DATERANGE
TSRANGE
TSTZRANGE
RangeOperators
RangeOperators.comparator_factory
RangeOperators.comparator_factory.__ne__()
RangeOperators.comparator_factory.adjacent_to()
RangeOperators.comparator_factory.contained_by()
RangeOperators.comparator_factory.contains()
RangeOperators.comparator_factory.not_extend_left_of()
RangeOperators.comparator_factory.not_extend_right_of()
RangeOperators.comparator_factory.overlaps()
RangeOperators.comparator_factory.strictly_left_of()
RangeOperators.comparator_factory.strictly_right_of()
- PostgreSQL Constraint Types
- PostgreSQL DML Constructs
- psycopg2
- DBAPI
- Connecting
- psycopg2 Connect Arguments
- SSL Connections
- Unix Domain Connections
- Specifying multiple fallback hosts
- Empty DSN Connections / Environment Variable Connections
- Per-Statement/Connection Execution Options
- Psycopg2 Fast Execution Helpers
- Unicode with Psycopg2
- Transactions
- Psycopg2 Transaction Isolation Level
- NOTICE logging
- HSTORE type
- pg8000
- asyncpg
- psycopg2cffi
- py-postgresql
- pygresql
PostgreSQL¶
Support for the PostgreSQL database.
The following table summarizes current support levels for database release versions.
Support type | Versions
---|---
Fully tested in CI | 9.6+
Best effort | 8+
DBAPI Support¶
The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.
- psycopg2
- pg8000
- asyncpg
- psycopg2cffi
- py-postgresql
- pygresql
Sequences/SERIAL/IDENTITY¶
PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating new primary key values for integer-based primary key columns. When creating tables, SQLAlchemy will issue the SERIAL datatype for integer-based primary key columns, which generates a sequence and server side default corresponding to the column.
To specify a specific named sequence to be used for primary key generation, use the Sequence() construct:
Table('sometable', metadata,
    Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
)
When SQLAlchemy issues a single INSERT statement, to fulfill the contract of having the “last insert identifier” available, a RETURNING clause is added to the INSERT statement which specifies that the primary key columns should be returned after the statement completes. The RETURNING functionality only takes place if PostgreSQL 8.2 or later is in use. As a fallback approach, the sequence, whether specified explicitly or implicitly via SERIAL, is executed independently beforehand, and the returned value is used in the subsequent insert. Note that when an insert() construct is executed using “executemany” semantics, the “last inserted identifier” functionality does not apply; no RETURNING clause is emitted nor is the sequence pre-executed in this case.
To turn off the default use of RETURNING, pass the flag implicit_returning=False to create_engine().
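For example, a minimal sketch (the connection URL is illustrative only):

from sqlalchemy import create_engine

# with implicit_returning=False, the dialect pre-executes the sequence
# for single-row INSERTs instead of rendering a RETURNING clause
engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    implicit_returning=False,
)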
PostgreSQL 10 and above IDENTITY columns¶
PostgreSQL 10 and above have a new IDENTITY feature that supersedes the use of SERIAL. The Identity construct in a Column can be used to control its behavior:
from sqlalchemy import Table, Column, Identity, Integer, MetaData, String

metadata = MetaData()

data = Table(
    "data",
    metadata,
    Column(
        'id', Integer, Identity(start=42, cycle=True), primary_key=True
    ),
    Column('data', String)
)
The CREATE TABLE for the above Table object would be:

CREATE TABLE data (
    id INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 42 CYCLE),
    data VARCHAR,
    PRIMARY KEY (id)
)
Changed in version 1.4: Added the Identity construct in a Column to specify the option of an autoincrementing column.
Note
Previous versions of SQLAlchemy did not have built-in support for rendering of IDENTITY, and could use the following compilation hook to replace occurrences of SERIAL with IDENTITY:
from sqlalchemy.schema import CreateColumn
from sqlalchemy.ext.compiler import compiles

@compiles(CreateColumn, 'postgresql')
def use_identity(element, compiler, **kw):
    text = compiler.visit_create_column(element, **kw)
    text = text.replace(
        "SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY"
    )
    return text
Using the above, a table such as:
t = Table(
    't', m,
    Column('id', Integer, primary_key=True),
    Column('data', String)
)
Will generate on the backing database as:
CREATE TABLE t (
    id INT GENERATED BY DEFAULT AS IDENTITY,
    data VARCHAR,
    PRIMARY KEY (id)
)
Server Side Cursors¶
Server-side cursor support is available for the psycopg2, asyncpg dialects and may also be available in others.
Server side cursors are enabled on a per-statement basis by using the Connection.execution_options.stream_results connection execution option:
with engine.connect() as conn:
    result = conn.execution_options(stream_results=True).execute(text("select * from table"))
Note that some kinds of SQL statements may not be supported with server side cursors; generally, only SQL statements that return rows should be used with this option.
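When streaming a large result set, rows can also be consumed in batches rather than one at a time; a minimal sketch using the Result.partitions() method, assuming a hypothetical large_table and an existing engine:

from sqlalchemy import text

with engine.connect() as conn:
    result = conn.execution_options(stream_results=True).execute(
        text("SELECT * FROM large_table")  # hypothetical table
    )
    # fetch rows from the server side cursor in chunks of 100
    for partition in result.partitions(100):
        for row in partition:
            print(row)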
Deprecated since version 1.4: The dialect-level server_side_cursors flag is deprecated and will be removed in a future release. Please use the Connection.stream_results execution option for unbuffered cursor support.
Transaction Isolation Level¶
Most SQLAlchemy dialects support setting of transaction isolation level using the create_engine.isolation_level parameter at the create_engine() level, and at the Connection level via the Connection.execution_options.isolation_level parameter.
For PostgreSQL dialects, this feature works either by making use of the DBAPI-specific features, such as psycopg2’s isolation level flags which will embed the isolation level setting inline with the "BEGIN" statement, or for DBAPIs with no direct support by emitting SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL <level> ahead of the "BEGIN" statement emitted by the DBAPI. For the special AUTOCOMMIT isolation level, DBAPI-specific techniques are used, typically an .autocommit flag on the DBAPI connection object.
To set isolation level using create_engine():
engine = create_engine(
    "postgresql+pg8000://scott:tiger@localhost/test",
    isolation_level="REPEATABLE READ"
)
To set using per-connection execution options:
with engine.connect() as conn:
    conn = conn.execution_options(
        isolation_level="REPEATABLE READ"
    )
    with conn.begin():
        # ... work with transaction
There are also more options for isolation level configurations, such as “sub-engine” objects linked to a main Engine which each apply different isolation level settings. See the discussion at Setting Transaction Isolation Levels including DBAPI Autocommit for background.
Valid values for isolation_level on most PostgreSQL dialects include:
READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
AUTOCOMMIT
Setting READ ONLY / DEFERRABLE¶
Most PostgreSQL dialects support setting the “READ ONLY” and “DEFERRABLE” characteristics of the transaction, which is in addition to the isolation level setting. These two attributes can be established either in conjunction with or independently of the isolation level by passing the postgresql_readonly and postgresql_deferrable flags with Connection.execution_options(). The example below illustrates passing the "SERIALIZABLE" isolation level at the same time as setting “READ ONLY” and “DEFERRABLE”:
with engine.connect() as conn:
    conn = conn.execution_options(
        isolation_level="SERIALIZABLE",
        postgresql_readonly=True,
        postgresql_deferrable=True
    )
    with conn.begin():
        # ... work with transaction
Note that some DBAPIs such as asyncpg only support “readonly” with SERIALIZABLE isolation.
New in version 1.4: added support for the postgresql_readonly and postgresql_deferrable execution options.
Temporary Table / Resource Reset for Connection Pooling¶
The QueuePool connection pool implementation used by the SQLAlchemy Engine object includes reset on return behavior that will invoke the DBAPI .rollback() method when connections are returned to the pool. While this rollback will clear out the immediate state used by the previous transaction, it does not cover a wider range of session-level state, including temporary tables as well as other server state such as prepared statement handles and statement caches. The PostgreSQL database includes a variety of commands which may be used to reset this state, including DISCARD, RESET, DEALLOCATE, and UNLISTEN.
To install one or more of these commands as the means of performing reset-on-return, the PoolEvents.reset() event hook may be used, as demonstrated in the example below (requires SQLAlchemy 1.4.43 or greater). The implementation will end transactions in progress as well as discard temporary tables using the CLOSE, RESET and DISCARD commands; see the PostgreSQL documentation for background on what each of these statements do.
The create_engine.pool_reset_on_return parameter is set to None so that the custom scheme can replace the default behavior completely. The custom hook implementation calls .rollback() in any case, as it’s usually important that the DBAPI’s own tracking of commit/rollback will remain consistent with the state of the transaction:
from sqlalchemy import create_engine
from sqlalchemy import event

postgresql_engine = create_engine(
    "postgresql+psycopg2://scott:tiger@hostname/dbname",
    # disable default reset-on-return scheme
    pool_reset_on_return=None,
)

@event.listens_for(postgresql_engine, "reset")
def _reset_postgresql(dbapi_connection, connection_record, reset_state):
    dbapi_connection.execute("CLOSE ALL")
    dbapi_connection.execute("RESET ALL")
    dbapi_connection.execute("DISCARD TEMP")

    # so that the DBAPI itself knows that the connection has been
    # reset
    dbapi_connection.rollback()
Changed in version 1.4.43: Ensured the PoolEvents.reset() event is invoked for all “reset” occurrences, so that it’s appropriate as a place for custom “reset” handlers. Previous schemes which use the PoolEvents.checkin() handler remain usable as well.
See also
Reset On Return - in the Connection Pooling documentation
Setting Alternate Search Paths on Connect¶
The PostgreSQL search_path variable refers to the list of schema names that will be implicitly referred towards when a particular table or other object is referenced in a SQL statement. As detailed in the next section Remote-Schema Table Introspection and PostgreSQL search_path, SQLAlchemy is generally organized around the concept of keeping this variable at its default value of public; however, in order to have it set to any arbitrary name or names when connections are used automatically, the “SET SESSION search_path” command may be invoked for all connections in a pool using the following event handler, as discussed at Setting a Default Schema for New Connections:
from sqlalchemy import event
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://scott:tiger@host/dbname")

@event.listens_for(engine, "connect", insert=True)
def set_search_path(dbapi_connection, connection_record):
    existing_autocommit = dbapi_connection.autocommit
    dbapi_connection.autocommit = True
    cursor = dbapi_connection.cursor()
    cursor.execute("SET SESSION search_path='%s'" % schema_name)
    cursor.close()
    dbapi_connection.autocommit = existing_autocommit
The reason the recipe is complicated by use of the .autocommit DBAPI attribute is so that when the SET SESSION search_path directive is invoked, it is invoked outside of the scope of any transaction and therefore will not be reverted when the DBAPI connection has a rollback.
See also
Setting a Default Schema for New Connections - in the Describing Databases with MetaData documentation
Remote-Schema Table Introspection and PostgreSQL search_path¶
Section Best Practices Summarized
Keep the search_path variable set to its default of public, without any other schema names. Ensure the username used to connect does not match remote schemas, or ensure the "$user" token is removed from search_path. For other schema names, name these explicitly within Table definitions. Alternatively, the postgresql_ignore_search_path option will cause all reflected Table objects to have a Table.schema attribute set up.
The PostgreSQL dialect can reflect tables from any schema, as outlined in Reflecting Tables from Other Schemas.
In all cases, the first thing SQLAlchemy does when reflecting tables is to determine the default schema for the current database connection. It does this using the PostgreSQL current_schema() function, illustrated below using a PostgreSQL client session (i.e. using the psql tool):
test=> select current_schema();
current_schema
----------------
public
(1 row)
Above we see that on a plain install of PostgreSQL, the default schema name is the name public.
However, if your database username matches the name of a schema, PostgreSQL’s default is to then use that name as the default schema. Below, we log in using the username scott. When we create a schema named scott, it implicitly changes the default schema:
test=> select current_schema();
current_schema
----------------
public
(1 row)
test=> create schema scott;
CREATE SCHEMA
test=> select current_schema();
current_schema
----------------
scott
(1 row)
The behavior of current_schema() is derived from the PostgreSQL search path variable search_path, which in modern PostgreSQL versions defaults to this:
test=> show search_path;
search_path
-----------------
"$user", public
(1 row)
Where above, the "$user" variable will inject the current username as the default schema, if one exists. Otherwise, public is used.
When a Table object is reflected, if it is present in the schema indicated by the current_schema() function, the schema name assigned to the “.schema” attribute of the Table is the Python “None” value. Otherwise, the “.schema” attribute will be assigned the string name of that schema.
With regards to tables which these Table objects refer to via foreign key constraint, a decision must be made as to how the .schema is represented in those remote tables, in the case where that remote schema name is also a member of the current search_path.
By default, the PostgreSQL dialect mimics the behavior encouraged by PostgreSQL’s own pg_get_constraintdef() builtin procedure. This function returns a sample definition for a particular foreign key constraint, omitting the referenced schema name from that definition when the name is also in the PostgreSQL schema search path. The interaction below illustrates this behavior:
test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
CREATE TABLE
test=> CREATE TABLE referring(
test(> id INTEGER PRIMARY KEY,
test(> referred_id INTEGER REFERENCES test_schema.referred(id));
CREATE TABLE
test=> SET search_path TO public, test_schema;
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f'
test-> ;
pg_get_constraintdef
---------------------------------------------------
FOREIGN KEY (referred_id) REFERENCES referred(id)
(1 row)
Above, we created a table referred as a member of the remote schema test_schema, however when we added test_schema to the PG search_path and then asked pg_get_constraintdef() for the FOREIGN KEY syntax, test_schema was not included in the output of the function.
On the other hand, if we set the search path back to the typical default of public:
test=> SET search_path TO public;
SET
The same query against pg_get_constraintdef() now returns the fully schema-qualified name for us:
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f';
pg_get_constraintdef
---------------------------------------------------------------
FOREIGN KEY (referred_id) REFERENCES test_schema.referred(id)
(1 row)
SQLAlchemy will by default use the return value of pg_get_constraintdef() in order to determine the remote schema name. That is, if our search_path were set to include test_schema, and we invoked a table reflection process as follows:
>>> from sqlalchemy import Table, MetaData, create_engine, text
>>> engine = create_engine("postgresql://scott:tiger@localhost/test")
>>> with engine.connect() as conn:
...     conn.execute(text("SET search_path TO test_schema, public"))
...     metadata_obj = MetaData()
...     referring = Table('referring', metadata_obj,
...                       autoload_with=conn)
...
<sqlalchemy.engine.result.CursorResult object at 0x101612ed0>
The above process would deliver to the MetaData.tables collection the referred table named without the schema:
>>> metadata_obj.tables['referred'].schema is None
True
To alter the behavior of reflection such that the referred schema is maintained regardless of the search_path setting, use the postgresql_ignore_search_path option, which can be specified as a dialect-specific argument to both Table as well as MetaData.reflect():
>>> with engine.connect() as conn:
...     conn.execute(text("SET search_path TO test_schema, public"))
...     metadata_obj = MetaData()
...     referring = Table('referring', metadata_obj,
...                       autoload_with=conn,
...                       postgresql_ignore_search_path=True)
...
<sqlalchemy.engine.result.CursorResult object at 0x1016126d0>
We will now have test_schema.referred stored as schema-qualified:
>>> metadata_obj.tables['test_schema.referred'].schema
'test_schema'
See also
Interaction of Schema-qualified Reflection with the Default Schema - discussion of the issue from a backend-agnostic perspective
The Schema Search Path - on the PostgreSQL website.
INSERT/UPDATE…RETURNING¶
The dialect supports PG 8.2’s INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes. INSERT..RETURNING is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit RETURNING clause, use the UpdateBase.returning() method on a per-statement basis:
# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print(result.fetchall())

# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo').values(name='bar')
print(result.fetchall())

# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo')
print(result.fetchall())
INSERT…ON CONFLICT (Upsert)¶
Starting with version 9.5, PostgreSQL allows “upserts” (update or insert) of rows into a table via the ON CONFLICT clause of the INSERT statement. A candidate row will only be inserted if that row does not violate any unique constraints. In the case of a unique constraint violation, a secondary action can occur which can be either “DO UPDATE”, indicating that the data in the target row should be updated, or “DO NOTHING”, which indicates to silently skip this row.
Conflicts are determined using existing unique constraints and indexes. These constraints may be identified either using their name as stated in DDL, or they may be inferred by stating the columns and conditions that comprise the indexes.
SQLAlchemy provides ON CONFLICT support via the PostgreSQL-specific insert() function, which provides the generative methods Insert.on_conflict_do_update() and Insert.on_conflict_do_nothing():
>>> from sqlalchemy.dialects.postgresql import insert
>>> insert_stmt = insert(my_table).values(
...     id='some_existing_id',
...     data='inserted value')

>>> do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
...     index_elements=['id']
... )

>>> print(do_nothing_stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO NOTHING

>>> do_update_stmt = insert_stmt.on_conflict_do_update(
...     constraint='pk_my_table',
...     set_=dict(data='updated value')
... )

>>> print(do_update_stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT ON CONSTRAINT pk_my_table DO UPDATE SET data = %(param_1)s
New in version 1.1.
See also
INSERT .. ON CONFLICT - in the PostgreSQL documentation.
Specifying the Target¶
Both methods supply the “target” of the conflict using either the named constraint or by column inference:
- The Insert.on_conflict_do_update.index_elements argument specifies a sequence containing string column names, Column objects, and/or SQL expression elements, which would identify a unique index:

>>> do_update_stmt = insert_stmt.on_conflict_do_update(
...     index_elements=['id'],
...     set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s

>>> do_update_stmt = insert_stmt.on_conflict_do_update(
...     index_elements=[my_table.c.id],
...     set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s

- When using Insert.on_conflict_do_update.index_elements to infer an index, a partial index can be inferred by also specifying the Insert.on_conflict_do_update.index_where parameter:

>>> stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
>>> stmt = stmt.on_conflict_do_update(
...     index_elements=[my_table.c.user_email],
...     index_where=my_table.c.user_email.like('%@gmail.com'),
...     set_=dict(data=stmt.excluded.data)
... )
>>> print(stmt)
INSERT INTO my_table (data, user_email)
VALUES (%(data)s, %(user_email)s) ON CONFLICT (user_email)
WHERE user_email LIKE %(user_email_1)s DO UPDATE SET data = excluded.data

- The Insert.on_conflict_do_update.constraint argument is used to specify an index directly rather than inferring it. This can be the name of a UNIQUE constraint, a PRIMARY KEY constraint, or an INDEX:

>>> do_update_stmt = insert_stmt.on_conflict_do_update(
...     constraint='my_table_idx_1',
...     set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT ON CONSTRAINT my_table_idx_1 DO UPDATE SET data = %(param_1)s

>>> do_update_stmt = insert_stmt.on_conflict_do_update(
...     constraint='my_table_pk',
...     set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT ON CONSTRAINT my_table_pk DO UPDATE SET data = %(param_1)s

- The Insert.on_conflict_do_update.constraint argument may also refer to a SQLAlchemy construct representing a constraint, e.g. UniqueConstraint, PrimaryKeyConstraint, Index, or ExcludeConstraint. In this use, if the constraint has a name, it is used directly. Otherwise, if the constraint is unnamed, then inference will be used, where the expressions and optional WHERE clause of the constraint will be spelled out in the construct. This use is especially convenient to refer to the named or unnamed primary key of a Table using the Table.primary_key attribute:

>>> do_update_stmt = insert_stmt.on_conflict_do_update(
...     constraint=my_table.primary_key,
...     set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s
The SET Clause¶
ON CONFLICT...DO UPDATE is used to perform an update of the already existing row, using any combination of new values as well as values from the proposed insertion. These values are specified using the Insert.on_conflict_do_update.set_ parameter. This parameter accepts a dictionary which consists of direct values for UPDATE:
>>> stmt = insert(my_table).values(id='some_id', data='inserted value')
>>> do_update_stmt = stmt.on_conflict_do_update(
...     index_elements=['id'],
...     set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s
Warning
The Insert.on_conflict_do_update() method does not take into account Python-side default UPDATE values or generation functions, e.g. those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.
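If such a value is required, one approach is to restate it explicitly in the set_ dictionary; a minimal sketch, assuming a hypothetical updated_at column that would normally rely on Column.onupdate:

from sqlalchemy import func
from sqlalchemy.dialects.postgresql import insert

stmt = insert(my_table).values(id='some_id', data='inserted value')
do_update_stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(
        data='updated value',
        # Column.onupdate is not applied by ON CONFLICT; restate it here
        updated_at=func.now(),
    ),
)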
Updating using the Excluded INSERT Values¶
In order to refer to the proposed insertion row, the special alias Insert.excluded is available as an attribute on the Insert object; this object is a ColumnCollection which contains all columns of the target table as they appear in the EXCLUDED alias:
>>> stmt = insert(my_table).values(
...     id='some_id',
...     data='inserted value',
...     author='jlh'
... )
>>> do_update_stmt = stmt.on_conflict_do_update(
...     index_elements=['id'],
...     set_=dict(data='updated value', author=stmt.excluded.author)
... )
>>> print(do_update_stmt)
INSERT INTO my_table (id, data, author)
VALUES (%(id)s, %(data)s, %(author)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s, author = excluded.author
Additional WHERE Criteria¶
The Insert.on_conflict_do_update() method also accepts a WHERE clause using the Insert.on_conflict_do_update.where parameter, which will limit those rows which receive an UPDATE:
>>> stmt = insert(my_table).values(
...     id='some_id',
...     data='inserted value',
...     author='jlh'
... )
>>> on_update_stmt = stmt.on_conflict_do_update(
...     index_elements=['id'],
...     set_=dict(data='updated value', author=stmt.excluded.author),
...     where=(my_table.c.status == 2)
... )
>>> print(on_update_stmt)
INSERT INTO my_table (id, data, author)
VALUES (%(id)s, %(data)s, %(author)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s, author = excluded.author
WHERE my_table.status = %(status_1)s
Skipping Rows with DO NOTHING¶
ON CONFLICT may be used to skip inserting a row entirely if any conflict with a unique or exclusion constraint occurs; below this is illustrated using the Insert.on_conflict_do_nothing() method:
>>> stmt = insert(my_table).values(id='some_id', data='inserted value')
>>> stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
>>> print(stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO NOTHING
If DO NOTHING is used without specifying any columns or constraint, it has the effect of skipping the INSERT for any unique or exclusion constraint violation which occurs:
>>> stmt = insert(my_table).values(id='some_id', data='inserted value')
>>> stmt = stmt.on_conflict_do_nothing()
>>> print(stmt)
INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT DO NOTHING
Full Text Search¶
SQLAlchemy makes available the PostgreSQL @@ operator via the ColumnElement.match() method on any textual column expression. On the PostgreSQL dialect, an expression like the following:
select(sometable.c.text.match("search string"))
will emit to the database:
SELECT text @@ to_tsquery('search string') FROM table
Various other PostgreSQL text search functions such as to_tsquery(), to_tsvector(), and plainto_tsquery() are available by explicitly using the standard SQLAlchemy func construct.
For example:
select(func.to_tsvector('fat cats ate rats').match('cat & rat'))
Emits the equivalent of:
SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')
The TSVECTOR type can provide for explicit CAST:
from sqlalchemy.dialects.postgresql import TSVECTOR
from sqlalchemy import select, cast
select(cast("some text", TSVECTOR))
produces a statement equivalent to:
SELECT CAST('some text' AS TSVECTOR) AS anon_1
Tip
It’s important to remember that text searching in PostgreSQL is powerful but complicated, and SQLAlchemy users are advised to reference the PostgreSQL documentation regarding Full Text Search.
There are important differences between to_tsquery and plainto_tsquery, the most significant of which is that to_tsquery expects specially formatted “querytext” that is written to PostgreSQL’s own specification, while plainto_tsquery expects unformatted text that is transformed into to_tsquery compatible querytext. This means the input to .match() under PostgreSQL may be incompatible with the input to .match() under another database backend. SQLAlchemy users who support multiple backends are advised to carefully implement their usage of .match() to work around these constraints.
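One portable approach is to bypass .match() and spell out the desired tsquery function explicitly using the generic Operators.op() method; a minimal sketch, reusing the sometable expression from above (the id column is assumed):

from sqlalchemy import func, select

# renders: sometable.text @@ plainto_tsquery(%(plainto_tsquery_1)s)
stmt = select(sometable.c.id).where(
    sometable.c.text.op("@@")(func.plainto_tsquery("search string"))
)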
Full Text Searches in PostgreSQL are influenced by a combination of: the PostgreSQL setting of default_text_search_config, the regconfig used to build the GIN/GiST indexes, and the regconfig optionally passed in during a query.
When performing a Full Text Search against a column that has a GIN or GiST index that is already pre-computed (which is common on full text searches) one may need to explicitly pass in a particular PostgreSQL regconfig value to ensure the query-planner utilizes the index and does not re-compute the column on demand.
In order to provide for this explicit query planning, or to use different search strategies, the match method accepts a postgresql_regconfig keyword argument:
select(mytable.c.id).where(
    mytable.c.title.match('somestring', postgresql_regconfig='english')
)
Emits the equivalent of:
SELECT mytable.id FROM mytable
WHERE mytable.title @@ to_tsquery('english', 'somestring')
One can also specifically pass in a ‘regconfig’ value to the to_tsvector() command as the initial argument:
select(mytable.c.id).where(
    func.to_tsvector('english', mytable.c.title)\
        .match('somestring', postgresql_regconfig='english')
)
produces a statement equivalent to:
SELECT mytable.id FROM mytable
WHERE to_tsvector('english', mytable.title) @@
to_tsquery('english', 'somestring')
It is recommended that you use the EXPLAIN ANALYZE... tool from PostgreSQL to ensure that you are generating queries with SQLAlchemy that take full advantage of any indexes you may have created for full text search.
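SQLAlchemy does not wrap EXPLAIN itself, but a query plan can be inspected by executing the statement as literal SQL; a minimal sketch, assuming the mytable example above and an existing engine:

from sqlalchemy import text

with engine.connect() as conn:
    result = conn.execute(text(
        "EXPLAIN ANALYZE SELECT mytable.id FROM mytable "
        "WHERE to_tsvector('english', mytable.title) "
        "@@ to_tsquery('english', 'somestring')"
    ))
    # each row is one line of the PostgreSQL query plan
    for line in result:
        print(line[0])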
See also
Full Text Search - in the PostgreSQL documentation
FROM ONLY …¶
The dialect supports PostgreSQL’s ONLY keyword for targeting only a particular table in an inheritance hierarchy. This can be used to produce the SELECT ... FROM ONLY, UPDATE ONLY ..., and DELETE FROM ONLY ... syntaxes. It uses SQLAlchemy’s hints mechanism:
# SELECT ... FROM ONLY ...
result = table.select().with_hint(table, 'ONLY', 'postgresql')
print(result.fetchall())

# UPDATE ONLY ...
table.update(values=dict(foo='bar')).with_hint('ONLY',
    dialect_name='postgresql')

# DELETE FROM ONLY ...
table.delete().with_hint('ONLY', dialect_name='postgresql')
PostgreSQL-Specific Index Options¶
Several extensions to the Index construct are available, specific to the PostgreSQL dialect.
Covering Indexes¶
The postgresql_include option renders INCLUDE(colname) for the given string names:

Index("my_index", table.c.x, postgresql_include=['y'])

would render the index as CREATE INDEX my_index ON table (x) INCLUDE (y).
Note that this feature requires PostgreSQL 11 or later.
New in version 1.4.
Partial Indexes¶
Partial indexes add criterion to the index definition so that the index is applied to a subset of rows. These can be specified on Index using the postgresql_where keyword argument:
Index('my_index', my_table.c.id, postgresql_where=my_table.c.value > 10)
Operator Classes¶
PostgreSQL allows the specification of an operator class for each column of an index (see https://www.postgresql.org/docs/current/interactive/indexes-opclass.html). The Index construct allows these to be specified via the postgresql_ops keyword argument:
Index(
    'my_index', my_table.c.id, my_table.c.data,
    postgresql_ops={
        'data': 'text_pattern_ops',
        'id': 'int4_ops'
    })
Note that the keys in the postgresql_ops dictionaries are the “key” name of the Column, i.e. the name used to access it from the .c collection of Table, which can be configured to be different than the actual name of the column as expressed in the database.
If postgresql_ops is to be used against a complex SQL expression such as a function call, then to apply to the column it must be given a label that is identified in the dictionary by name, e.g.:
Index(
    'my_index', my_table.c.id,
    func.lower(my_table.c.data).label('data_lower'),
    postgresql_ops={
        'data_lower': 'text_pattern_ops',
        'id': 'int4_ops'
    })
Operator classes are also supported by the ExcludeConstraint construct using the ExcludeConstraint.ops parameter. See that parameter for details.
New in version 1.3.21: added support for operator classes with ExcludeConstraint.
Index Types¶
PostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as well as the ability for users to create their own (see https://www.postgresql.org/docs/current/static/indexes-types.html). These can be specified on Index using the postgresql_using keyword argument:
Index('my_index', my_table.c.data, postgresql_using='gin')
The value passed to the keyword argument will be simply passed through to the underlying CREATE INDEX command, so it must be a valid index type for your version of PostgreSQL.
Index Storage Parameters¶
PostgreSQL allows storage parameters to be set on indexes. The storage parameters available depend on the index method used by the index. Storage parameters can be specified on Index using the postgresql_with keyword argument:
Index('my_index', my_table.c.data, postgresql_with={"fillfactor": 50})
New in version 1.0.6.
PostgreSQL allows the tablespace in which to create the index to be specified. The tablespace can be specified on Index using the postgresql_tablespace keyword argument:
Index('my_index', my_table.c.data, postgresql_tablespace='my_tablespace')
New in version 1.1.
Note that the same option is available on Table as well.
Indexes with CONCURRENTLY¶
The PostgreSQL index option CONCURRENTLY is supported by passing the flag postgresql_concurrently to the Index construct:
tbl = Table('testtbl', m, Column('data', Integer))
idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)
The above index construct will render DDL for CREATE INDEX, assuming PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as:
CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)
For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for a connection-less dialect, it will emit:
DROP INDEX CONCURRENTLY test_idx1
New in version 1.1: support for CONCURRENTLY on DROP INDEX. The CONCURRENTLY keyword is now only emitted if a high enough version of PostgreSQL is detected on the connection (or for a connection-less dialect).
When using CONCURRENTLY, the PostgreSQL database requires that the statement be invoked outside of a transaction block. The Python DBAPI enforces that even for a single statement, a transaction is present, so to use this construct, the DBAPI’s “autocommit” mode must be used:
metadata = MetaData()
table = Table(
    "foo", metadata,
    Column("id", String))

index = Index(
    "foo_idx", table.c.id, postgresql_concurrently=True)

with engine.connect() as conn:
    with conn.execution_options(isolation_level='AUTOCOMMIT'):
        table.create(conn)
PostgreSQL Index Reflection¶
The PostgreSQL database creates a UNIQUE INDEX implicitly whenever the UNIQUE CONSTRAINT construct is used. When inspecting a table using Inspector, the Inspector.get_indexes() and the Inspector.get_unique_constraints() will report on these two constructs distinctly; in the case of the index, the key duplicates_constraint will be present in the index entry if it is detected as mirroring a constraint. When performing reflection using Table(..., autoload_with=engine), the UNIQUE INDEX is not returned in Table.indexes when it is detected as mirroring a UniqueConstraint in the Table.constraints collection.
Changed in version 1.0.0: Table reflection now includes UniqueConstraint objects present in the Table.constraints collection; the PostgreSQL backend will no longer include a “mirrored” Index construct in Table.indexes if it is detected as corresponding to a unique constraint.
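A minimal sketch showing how the duplicates_constraint key appears during inspection (the table name is hypothetical):

from sqlalchemy import create_engine, inspect

engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
insp = inspect(engine)

for idx in insp.get_indexes("sometable"):  # hypothetical table name
    if "duplicates_constraint" in idx:
        # the index mirrors a UNIQUE CONSTRAINT of the given name
        print(idx["name"], "duplicates", idx["duplicates_constraint"])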
Special Reflection Options¶
The Inspector used for the PostgreSQL backend is an instance of PGInspector, which offers additional methods:
from sqlalchemy import create_engine, inspect
engine = create_engine("postgresql+psycopg2://localhost/test")
insp = inspect(engine) # will be a PGInspector
print(insp.get_enums())
- class sqlalchemy.dialects.postgresql.base.PGInspector(bind)¶
  Class signature: class sqlalchemy.dialects.postgresql.base.PGInspector (sqlalchemy.engine.reflection.Inspector)
  - method PGInspector.get_enums(schema=None)¶
    Return a list of ENUM objects.
Each member is a dictionary containing these fields:
name - name of the enum
schema - the schema name for the enum.
visible - boolean, whether or not this enum is visible in the default search path.
labels - a list of string labels that apply to the enum.
- Parameters:
schema¶ – schema name. If None, the default schema (typically ‘public’) is used. May also be set to ‘*’ to indicate load enums for all schemas.
New in version 1.0.0.
  - method PGInspector.get_foreign_table_names(schema=None)¶
    Return a list of FOREIGN TABLE names.
    Behavior is similar to that of Inspector.get_table_names(), except that the list is limited to those tables that report a relkind value of f.
    New in version 1.0.0.
  - method PGInspector.get_table_oid(table_name, schema=None)¶
    Return the OID for the given table name.
  - method PGInspector.get_view_names(schema=None, include=('plain', 'materialized'))¶
    Return all view names in schema.
    - Parameters:
      schema¶ – Optional, retrieve names from a non-default schema. For special quoting, use quoted_name.
      include¶ – specify which types of views to return. Passed as a string value (for a single type) or a tuple (for any number of types). Defaults to ('plain', 'materialized').
      New in version 1.1.
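For example, to retrieve only materialized view names, a minimal sketch assuming an existing engine:

from sqlalchemy import inspect

insp = inspect(engine)  # a PGInspector on the PostgreSQL backend
# pass a tuple such as ('plain', 'materialized') to request several kinds
print(insp.get_view_names(include="materialized"))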
PostgreSQL Table Options¶
Several options for CREATE TABLE are supported directly by the PostgreSQL dialect in conjunction with the Table construct:
- TABLESPACE:

  Table("some_table", metadata, ..., postgresql_tablespace='some_tablespace')

  The above option is also available on the Index construct.

- ON COMMIT:

  Table("some_table", metadata, ..., postgresql_on_commit='PRESERVE ROWS')

- WITH OIDS:

  Table("some_table", metadata, ..., postgresql_with_oids=True)

- WITHOUT OIDS:

  Table("some_table", metadata, ..., postgresql_with_oids=False)

- INHERITS:

  Table("some_table", metadata, ..., postgresql_inherits="some_supertable")
  Table("some_table", metadata, ..., postgresql_inherits=("t1", "t2", ...))

  New in version 1.0.0.

- PARTITION BY:

  Table("some_table", metadata, ..., postgresql_partition_by='LIST (part_column)')

  New in version 1.2.6.
See also
PostgreSQL CREATE TABLE options - in the PostgreSQL documentation.
PostgreSQL Constraint Options¶
The following option(s) are supported by the PostgreSQL dialect in conjunction with selected constraint constructs:

- NOT VALID: This option applies towards CHECK and FOREIGN KEY constraints when the constraint is being added to an existing table via ALTER TABLE, and has the effect that existing rows are not scanned during the ALTER operation against the constraint being added.

  When using a SQL migration tool such as Alembic that renders ALTER TABLE constructs, the postgresql_not_valid argument may be specified as an additional keyword argument within the operation that creates the constraint, as in the following Alembic example:

  def upgrade():
      op.create_foreign_key(
          "fk_user_address",
          "address",
          "user",
          ["user_id"],
          ["id"],
          postgresql_not_valid=True
      )

  The keyword is ultimately accepted directly by the CheckConstraint, ForeignKeyConstraint and ForeignKey constructs; when using a tool like Alembic, dialect-specific keyword arguments are passed through to these constructs from the migration operation directives:

  CheckConstraint("some_field IS NOT NULL", postgresql_not_valid=True)

  ForeignKeyConstraint(["some_id"], ["some_table.some_id"], postgresql_not_valid=True)

  New in version 1.4.32.
See also
PostgreSQL ALTER TABLE options - in the PostgreSQL documentation.
Table values, Table and Column valued functions, Row and Tuple objects¶
PostgreSQL makes great use of modern SQL forms such as table-valued functions, tables and rows as values. These constructs are commonly used as part of PostgreSQL’s support for complex datatypes such as JSON, ARRAY, and other datatypes. SQLAlchemy’s SQL expression language has native support for most table-valued and row-valued forms.
Table-Valued Functions¶
Many PostgreSQL built-in functions are intended to be used in the FROM clause of a SELECT statement, and are capable of returning table rows or sets of table rows. A large portion of PostgreSQL’s JSON functions for example such as json_array_elements(), json_object_keys(), json_each_text(), json_each(), json_to_record(), json_populate_recordset() use such forms. These classes of SQL function calling forms in SQLAlchemy are available using the FunctionElement.table_valued() method in conjunction with Function objects generated from the func namespace.
Examples from PostgreSQL’s reference documentation follow below:
- json_each():

>>> from sqlalchemy import select, func
>>> stmt = select(func.json_each('{"a":"foo", "b":"bar"}').table_valued("key", "value"))
>>> print(stmt)
SELECT anon_1.key, anon_1.value
FROM json_each(:json_each_1) AS anon_1

- json_populate_record():

>>> from sqlalchemy import select, func, literal_column
>>> stmt = select(
...     func.json_populate_record(
...         literal_column("null::myrowtype"),
...         '{"a":1,"b":2}'
...     ).table_valued("a", "b", name="x")
... )
>>> print(stmt)
SELECT x.a, x.b
FROM json_populate_record(null::myrowtype, :json_populate_record_1) AS x

- json_to_record() - this form uses a PostgreSQL specific form of derived columns in the alias, where we may make use of column() elements with types to produce them. The FunctionElement.table_valued() method produces a TableValuedAlias construct, and the TableValuedAlias.render_derived() method sets up the derived columns specification:

>>> from sqlalchemy import select, func, column, Integer, Text
>>> stmt = select(
...     func.json_to_record('{"a":1,"b":[1,2,3],"c":"bar"}').table_valued(
...         column("a", Integer), column("b", Text), column("d", Text),
...     ).render_derived(name="x", with_types=True)
... )
>>> print(stmt)
SELECT x.a, x.b, x.d
FROM json_to_record(:json_to_record_1) AS x(a INTEGER, b TEXT, d TEXT)

- WITH ORDINALITY - part of the SQL standard, WITH ORDINALITY adds an ordinal counter to the output of a function and is accepted by a limited set of PostgreSQL functions including unnest() and generate_series(). The FunctionElement.table_valued() method accepts a keyword parameter with_ordinality for this purpose, which accepts the string name that will be applied to the “ordinality” column:

>>> from sqlalchemy import select, func
>>> stmt = select(
...     func.generate_series(4, 1, -1).
...     table_valued("value", with_ordinality="ordinality").
...     render_derived()
... )
>>> print(stmt)
SELECT anon_1.value, anon_1.ordinality
FROM generate_series(:generate_series_1, :generate_series_2, :generate_series_3)
WITH ORDINALITY AS anon_1(value, ordinality)
New in version 1.4.0b2.
See also
Table-Valued Functions - in the SQLAlchemy 1.4 / 2.0 Tutorial
Column Valued Functions¶
Similar to the table valued function, a column valued function is present in the FROM clause, but delivers itself to the columns clause as a single scalar value. PostgreSQL functions such as json_array_elements(), unnest() and generate_series() may use this form. Column valued functions are available using the FunctionElement.column_valued() method of FunctionElement:
- json_array_elements():

>>> from sqlalchemy import select, func
>>> stmt = select(func.json_array_elements('["one", "two"]').column_valued("x"))
>>> print(stmt)
SELECT x
FROM json_array_elements(:json_array_elements_1) AS x

- unnest() - in order to generate a PostgreSQL ARRAY literal, the array() construct may be used:

>>> from sqlalchemy.dialects.postgresql import array
>>> from sqlalchemy import select, func
>>> stmt = select(func.unnest(array([1, 2])).column_valued())
>>> print(stmt)
SELECT anon_1
FROM unnest(ARRAY[%(param_1)s, %(param_2)s]) AS anon_1

The function can of course be used against an existing table-bound column that’s of type ARRAY:

>>> from sqlalchemy import table, column, ARRAY, Integer
>>> from sqlalchemy import select, func
>>> t = table("t", column('value', ARRAY(Integer)))
>>> stmt = select(func.unnest(t.c.value).column_valued("unnested_value"))
>>> print(stmt)
SELECT unnested_value
FROM unnest(t.value) AS unnested_value
Row Types¶
Built-in support for rendering a ROW may be approximated using func.ROW with the sqlalchemy.func namespace, or by using the tuple_() construct:
>>> from sqlalchemy import table, column, func, tuple_
>>> t = table("t", column("id"), column("fk"))
>>> stmt = t.select().where(
...     tuple_(t.c.id, t.c.fk) > (1, 2)
... ).where(
...     func.ROW(t.c.id, t.c.fk) < func.ROW(3, 7)
... )
>>> print(stmt)
SELECT t.id, t.fk
FROM t
WHERE (t.id, t.fk) > (:param_1, :param_2) AND ROW(t.id, t.fk) < ROW(:ROW_1, :ROW_2)
Table Types passed to Functions¶
PostgreSQL supports passing a table as an argument to a function, which it refers towards as a “record” type. SQLAlchemy FromClause objects such as Table support this special form using the FromClause.table_valued() method, which is comparable to the FunctionElement.table_valued() method except that the collection of columns is already established by that of the FromClause itself:
>>> from sqlalchemy import table, column, func, select
>>> a = table("a", column("id"), column("x"), column("y"))
>>> stmt = select(func.row_to_json(a.table_valued()))
>>> print(stmt)
SELECT row_to_json(a) AS row_to_json_1
FROM a
New in version 1.4.0b2.
ARRAY Types¶
The PostgreSQL dialect supports arrays, both as multidimensional column types as well as array literals:
- ARRAY - ARRAY datatype
- array - array literal
- array_agg() - ARRAY_AGG SQL function
- aggregate_order_by - helper for PG’s ORDER BY aggregate function syntax.
JSON Types¶
The PostgreSQL dialect supports both JSON and JSONB datatypes, including psycopg2’s native support and support for all of PostgreSQL’s special operators:
- JSON
- JSONB
HSTORE Type¶
The PostgreSQL HSTORE type as well as hstore literals are supported:
- HSTORE - HSTORE datatype
- hstore - hstore literal
ENUM Types¶
PostgreSQL has an independently creatable TYPE structure which is used to implement an enumerated type. This approach introduces significant complexity on the SQLAlchemy side in terms of when this type should be CREATED and DROPPED. The type object is also an independently reflectable entity. The following sections should be consulted:
- ENUM - DDL and typing support for ENUM.
- PGInspector.get_enums() - retrieve a listing of current ENUM types
- ENUM.create(), ENUM.drop() - individual CREATE and DROP commands for ENUM.
Using ENUM with ARRAY¶
The combination of ENUM and ARRAY is not directly supported by backend DBAPIs at this time. Prior to SQLAlchemy 1.3.17, a special workaround was needed in order to allow this combination to work, described below.
Changed in version 1.3.17: The combination of ENUM and ARRAY is now directly handled by SQLAlchemy’s implementation without any workarounds needed.
import re

import sqlalchemy as sa
from sqlalchemy import TypeDecorator
from sqlalchemy.dialects.postgresql import ARRAY

class ArrayOfEnum(TypeDecorator):
    impl = ARRAY

    def bind_expression(self, bindvalue):
        return sa.cast(bindvalue, self)

    def result_processor(self, dialect, coltype):
        super_rp = super(ArrayOfEnum, self).result_processor(
            dialect, coltype)

        def handle_raw_string(value):
            inner = re.match(r"^{(.*)}$", value).group(1)
            return inner.split(",") if inner else []

        def process(value):
            if value is None:
                return None
            return super_rp(handle_raw_string(value))
        return process
E.g.:
Table(
    'mydata', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', ArrayOfEnum(ENUM('a', 'b', 'c', name='myenum')))
)
This type is not included as a built-in type as it would be incompatible with a DBAPI that suddenly decides to support ARRAY of ENUM directly in a new version.
Using JSON/JSONB with ARRAY¶
Similar to using ENUM, prior to SQLAlchemy 1.3.17, for an ARRAY of JSON/JSONB it was necessary to render the appropriate CAST. Current psycopg2 drivers accommodate the result set correctly without any special steps.
Changed in version 1.3.17: The combination of JSON/JSONB and ARRAY is now directly handled by SQLAlchemy’s implementation without any workarounds needed.
import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import ARRAY

class CastingArray(ARRAY):
    def bind_expression(self, bindvalue):
        return sa.cast(bindvalue, self)
E.g.:
Table(
    'mydata', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', CastingArray(JSONB))
)
PostgreSQL Data Types and Custom SQL Constructs¶
As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with PostgreSQL are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:
from sqlalchemy.dialects.postgresql import (
    ARRAY,
    BIGINT,
    BIT,
    BOOLEAN,
    BYTEA,
    CHAR,
    CIDR,
    DATE,
    DOUBLE_PRECISION,
    ENUM,
    FLOAT,
    HSTORE,
    INET,
    INTEGER,
    INTERVAL,
    JSON,
    JSONB,
    MACADDR,
    MACADDR8,
    MONEY,
    NUMERIC,
    OID,
    REAL,
    SMALLINT,
    TEXT,
    TIME,
    TIMESTAMP,
    UUID,
    VARCHAR,
    INT4RANGE,
    INT8RANGE,
    NUMRANGE,
    DATERANGE,
    TSRANGE,
    TSTZRANGE,
    TSVECTOR,
)
Types which are specific to PostgreSQL, or have PostgreSQL-specific construction arguments, are as follows:
Object Name | Description
---|---
aggregate_order_by | Represent a PostgreSQL aggregate order by expression.
All(other, arrexpr[, operator]) | A synonym for the ARRAY-level Comparator.all() method.
Any(other, arrexpr[, operator]) | A synonym for the ARRAY-level Comparator.any() method.
array | A PostgreSQL ARRAY literal.
ARRAY | PostgreSQL ARRAY type.
array_agg(*arg, **kw) | PostgreSQL-specific form of array_agg.
ENUM | PostgreSQL ENUM type.
HSTORE | Represent the PostgreSQL HSTORE type.
hstore | Construct an hstore value within a SQL expression using the PostgreSQL hstore() function.
INTERVAL | PostgreSQL INTERVAL type.
JSON | Represent the PostgreSQL JSON type.
JSONB | Represent the PostgreSQL JSONB type.
MONEY | Provide the PostgreSQL MONEY type.
OID | Provide the PostgreSQL OID type.
REGCLASS | Provide the PostgreSQL REGCLASS type.
TIME | PostgreSQL TIME type.
TIMESTAMP | Provide the PostgreSQL TIMESTAMP type.
TSVECTOR | The TSVECTOR type implements the PostgreSQL text search type TSVECTOR.
UUID | PostgreSQL UUID type.
- class sqlalchemy.dialects.postgresql.aggregate_order_by(target, *order_by)¶
  Represent a PostgreSQL aggregate order by expression.
  E.g.:

  from sqlalchemy.dialects.postgresql import aggregate_order_by
  expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
  stmt = select(expr)

  would represent the expression:

  SELECT array_agg(a ORDER BY b DESC) FROM table;

  Similarly:

  expr = func.string_agg(
      table.c.a,
      aggregate_order_by(literal_column("','"), table.c.a)
  )
  stmt = select(expr)

  Would represent:

  SELECT string_agg(a, ',' ORDER BY a) FROM table;

  New in version 1.1.
  Changed in version 1.2.13: the ORDER BY argument may be multiple terms
  Class signature: class sqlalchemy.dialects.postgresql.aggregate_order_by (sqlalchemy.sql.expression.ColumnElement)
- class sqlalchemy.dialects.postgresql.array(clauses, **kw)¶
  A PostgreSQL ARRAY literal.
  This is used to produce ARRAY literals in SQL expressions, e.g.:

  from sqlalchemy.dialects.postgresql import array
  from sqlalchemy.dialects import postgresql
  from sqlalchemy import select, func

  stmt = select(array([1,2]) + array([3,4,5]))
  print(stmt.compile(dialect=postgresql.dialect()))

  Produces the SQL:

  SELECT ARRAY[%(param_1)s, %(param_2)s] ||
      ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1

  An instance of array will always have the datatype ARRAY. The “inner” type of the array is inferred from the values present, unless the type_ keyword argument is passed:

  array(['foo', 'bar'], type_=CHAR)

  Multidimensional arrays are produced by nesting array constructs. The dimensionality of the final ARRAY type is calculated by recursively adding the dimensions of the inner ARRAY type:

  stmt = select(
      array([
          array([1, 2]), array([3, 4]), array([column('q'), column('x')])
      ])
  )
  print(stmt.compile(dialect=postgresql.dialect()))

  Produces:

  SELECT ARRAY[ARRAY[%(param_1)s, %(param_2)s],
      ARRAY[%(param_3)s, %(param_4)s], ARRAY[q, x]] AS anon_1

  New in version 1.3.6: added support for multidimensional array literals
- class sqlalchemy.dialects.postgresql.ARRAY(item_type, as_tuple=False, dimensions=None, zero_indexes=False)¶
PostgreSQL ARRAY type.
The
ARRAY
type is constructed in the same way as the coreARRAY
type; a member type is required, and a number of dimensions is recommended if the type is to be used for more than one dimension:from sqlalchemy.dialects import postgresql mytable = Table("mytable", metadata, Column("data", postgresql.ARRAY(Integer, dimensions=2)) )
The
ARRAY
type provides all operations defined on the coreARRAY
type, including support for “dimensions”, indexed access, and simple matching such asComparator.any()
andComparator.all()
.ARRAY
class also provides PostgreSQL-specific methods for containment operations, includingComparator.contains()
Comparator.contained_by()
, andComparator.overlap()
, e.g.:mytable.c.data.contains([1, 2])
The ARRAY type may not be supported on all PostgreSQL DBAPIs; it is currently known to work on psycopg2 only.
Additionally, the ARRAY type does not work directly in conjunction with the ENUM type. For a workaround, see the special type at Using ENUM with ARRAY.
Detecting Changes in ARRAY columns when using the ORM
The ARRAY type, when used with the SQLAlchemy ORM, does not detect in-place mutations to the array. In order to detect these, the sqlalchemy.ext.mutable extension must be used, using the MutableList class:
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.ext.mutable import MutableList

class SomeOrmClass(Base):
    # ...

    data = Column(MutableList.as_mutable(ARRAY(Integer)))
This extension will allow in-place changes to the array, such as .append(), to produce events which will be detected by the unit of work. Note that changes to elements inside the array, including subarrays that are mutated in place, are not detected.
Alternatively, assigning a new array value to an ORM element that replaces the old one will always trigger a change event.
Members
Class signature
class sqlalchemy.dialects.postgresql.ARRAY (sqlalchemy.types.ARRAY)
- class Comparator(expr)¶
Define comparison operations for ARRAY.
Note that these operations are in addition to those provided by the base Comparator class, including Comparator.any() and Comparator.all().
Class signature
class sqlalchemy.dialects.postgresql.ARRAY.Comparator (sqlalchemy.types.Comparator)
- method sqlalchemy.dialects.postgresql.ARRAY.Comparator.contained_by(other)¶
Boolean expression. Test if elements are a proper subset of the elements of the argument array expression.
- method sqlalchemy.dialects.postgresql.ARRAY.Comparator.contains(other, **kwargs)¶
Boolean expression. Test if elements are a superset of the elements of the argument array expression.
kwargs may be ignored by this operator but are required for API conformance.
- method sqlalchemy.dialects.postgresql.ARRAY.Comparator.overlap(other)¶
Boolean expression. Test if array has elements in common with an argument array expression.
- method sqlalchemy.dialects.postgresql.ARRAY.__init__(item_type, as_tuple=False, dimensions=None, zero_indexes=False)¶
Construct an ARRAY.
E.g.:
Column('myarray', ARRAY(Integer))
Arguments are:
- Parameters:
item_type¶ – The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][] are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such.
as_tuple=False¶ – Specify whether return results should be converted to tuples from lists. DBAPIs such as psycopg2 return lists by default. When tuples are returned, the results are hashable.
dimensions¶ – if non-None, the ARRAY will assume a fixed number of dimensions. This will cause the DDL emitted for this ARRAY to include the exact number of bracket clauses [], and will also optimize the performance of the type overall. Note that PG arrays are always implicitly "non-dimensioned", meaning they can store any number of dimensions no matter how they were declared.
zero_indexes=False¶ – when True, index values will be converted between Python zero-based and PostgreSQL one-based indexes, e.g. a value of one will be added to all index values before passing to the database.
New in version 0.9.5.
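As a brief illustration of the zero_indexes behavior, a minimal sketch; the table and column names here are hypothetical:
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects.postgresql import ARRAY

metadata = MetaData()

# with zero_indexes=True, Python zero-based indexes are converted to
# PostgreSQL one-based indexes when the expression is rendered
tbl = Table(
    "example",
    metadata,
    Column("data", ARRAY(Integer, zero_indexes=True)),
)

# the bound index parameter is sent to the database as 1,
# i.e. the first element of the array
stmt = select(tbl.c.data[0])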
- function sqlalchemy.dialects.postgresql.array_agg(*arg, **kw)¶
PostgreSQL-specific form of array_agg; ensures the return type is the PostgreSQL ARRAY and not the plain core ARRAY type, unless an explicit type_ is passed.
New in version 1.1.
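A minimal usage sketch, assuming a hypothetical accounts table:
from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.dialects.postgresql import array_agg

metadata = MetaData()
accounts = Table(
    "accounts",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

# array_agg() here returns the PostgreSQL ARRAY type, so the result
# expression supports PG-specific operators such as .contains()
stmt = select(array_agg(accounts.c.name))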
- function sqlalchemy.dialects.postgresql.Any(other, arrexpr, operator=<built-in function eq>)¶
A synonym for the ARRAY-level Comparator.any() method. See that method for details.
- function sqlalchemy.dialects.postgresql.All(other, arrexpr, operator=<built-in function eq>)¶
A synonym for the ARRAY-level Comparator.all() method. See that method for details.
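A short hedged sketch of the Any() form; the table is hypothetical, and t.c.scores.any(5) is the equivalent method form:
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects.postgresql import ARRAY, Any

metadata = MetaData()
t = Table("t", metadata, Column("scores", ARRAY(Integer)))

# rows where any element of "scores" equals 5
stmt = select(t).where(Any(5, t.c.scores))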
- class sqlalchemy.dialects.postgresql.BIT(length=None, varying=False)¶
Class signature
class sqlalchemy.dialects.postgresql.BIT (sqlalchemy.types.TypeEngine)
- class sqlalchemy.dialects.postgresql.BYTEA(length=None)¶
Members
Class signature
class sqlalchemy.dialects.postgresql.BYTEA (sqlalchemy.types.LargeBinary)
- method sqlalchemy.dialects.postgresql.BYTEA.__init__(length=None)¶
inherited from the sqlalchemy.types.LargeBinary.__init__ method of LargeBinary
Construct a LargeBinary type.
- Parameters:
length¶ – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.
- class sqlalchemy.dialects.postgresql.CIDR¶
Class signature
class sqlalchemy.dialects.postgresql.CIDR (sqlalchemy.types.TypeEngine)
- class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION(precision=None, asdecimal=False, decimal_return_scale=None)
Class signature
class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION (sqlalchemy.types.Float)
- method sqlalchemy.dialects.postgresql.DOUBLE_PRECISION.__init__(precision=None, asdecimal=False, decimal_return_scale=None)
inherited from the sqlalchemy.types.Float.__init__ method of Float
Construct a Float.
- Parameters:
precision¶ – the numeric precision for use in DDL CREATE TABLE.
asdecimal¶ – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
decimal_return_scale¶ – Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don't have a notion of "scale", so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include "scale", will use "scale" as the default for decimal_return_scale, if not otherwise specified.
New in version 0.9.0.
- class sqlalchemy.dialects.postgresql.ENUM(*enums, **kw)¶
PostgreSQL ENUM type.
This is a subclass of Enum which includes support for PG's CREATE TYPE and DROP TYPE.
When the builtin type Enum is used and the Enum.native_enum flag is left at its default of True, the PostgreSQL backend will use an ENUM type as the implementation, so the special create/drop rules will be used.
The create/drop behavior of ENUM is necessarily intricate, due to the awkward relationship the ENUM type has to the parent table, in that it may be "owned" by just a single table, or may be shared among many tables.
When using Enum or ENUM in an "inline" fashion, the CREATE TYPE and DROP TYPE are emitted corresponding to when the Table.create() and Table.drop() methods are called:
table = Table('sometable', metadata,
    Column('some_enum', ENUM('a', 'b', 'c', name='myenum'))
)

table.create(engine)  # will emit CREATE ENUM and CREATE TABLE
table.drop(engine)  # will emit DROP TABLE and DROP ENUM
To use a common enumerated type between multiple tables, the best practice is to declare the Enum or ENUM independently, and associate it with the MetaData object itself:
my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)

t1 = Table('sometable_one', metadata,
    Column('some_enum', my_enum)
)

t2 = Table('sometable_two', metadata,
    Column('some_enum', my_enum)
)
When this pattern is used, care must still be taken at the level of individual table creates. Emitting CREATE TABLE without also specifying checkfirst=True will still cause issues:
t1.create(engine)  # will fail: no such type 'myenum'
If we specify checkfirst=True, the individual table-level create operation will check for the ENUM and create if not exists:
# will check if enum exists, and emit CREATE TYPE if not
t1.create(engine, checkfirst=True)
When using a metadata-level ENUM type, the type will always be created and dropped when the metadata-wide create/drop is called:
metadata.create_all(engine)  # will emit CREATE TYPE

metadata.drop_all(engine)  # will emit DROP TYPE
The type can also be created and dropped directly:
my_enum.create(engine)
my_enum.drop(engine)
Changed in version 1.0.0: The PostgreSQL ENUM type now behaves more strictly with regards to CREATE/DROP. A metadata-level ENUM type will only be created and dropped at the metadata level, not the table level, with the exception of table.create(checkfirst=True). The table.drop() call will now emit a DROP TYPE for a table-level enumerated type.
Members
Class signature
class sqlalchemy.dialects.postgresql.ENUM (sqlalchemy.types.NativeForEmulated, sqlalchemy.types.Enum)
- method sqlalchemy.dialects.postgresql.ENUM.__init__(*enums, **kw)¶
Construct an ENUM.
Arguments are the same as that of Enum, but also including the following parameters.
- Parameters:
create_type¶ – Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped. When False, no check will be performed and no CREATE TYPE or DROP TYPE is emitted, unless ENUM.create() or ENUM.drop() are called directly. Setting to False is helpful when invoking a creation scheme to a SQL file without access to the actual database - the ENUM.create() and ENUM.drop() methods can be used to emit SQL to a target bind. A brief sketch of this pattern follows.
- method sqlalchemy.dialects.postgresql.ENUM.create(bind=None, checkfirst=True)¶
Emit CREATE TYPE for this ENUM.
If the underlying dialect does not support PostgreSQL CREATE TYPE, no action is taken.
- Parameters:
bind¶ – a connectable Engine, Connection, or similar object to emit SQL.
checkfirst¶ – if True, a query against the PG catalog will be first performed to see if the type does not exist already before creating.
- method sqlalchemy.dialects.postgresql.ENUM.drop(bind=None, checkfirst=True)¶
Emit DROP TYPE for this ENUM.
If the underlying dialect does not support PostgreSQL DROP TYPE, no action is taken.
- Parameters:
bind¶ – a connectable Engine, Connection, or similar object to emit SQL.
checkfirst¶ – if True, a query against the PG catalog will be first performed to see if the type actually exists before dropping.
- class sqlalchemy.dialects.postgresql.HSTORE(text_type=None)¶
Represent the PostgreSQL HSTORE type.
The HSTORE type stores dictionaries containing strings, e.g.:
data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', HSTORE)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data={"key1": "value1", "key2": "value2"}
    )
HSTORE provides for a wide range of operations, including:
Index operations:
data_table.c.data['some key'] == 'some value'
Containment operations:
data_table.c.data.has_key('some key')

data_table.c.data.has_all(['one', 'two', 'three'])
Concatenation:
data_table.c.data + {"k1": "v1"}
For a full list of special methods see comparator_factory.
Detecting Changes in HSTORE columns when using the ORM
For usage with the SQLAlchemy ORM, it may be desirable to combine the usage of HSTORE with the MutableDict dictionary now part of the sqlalchemy.ext.mutable extension. This extension will allow "in-place" changes to the dictionary, e.g. addition of new keys or replacement/removal of existing keys to/from the current dictionary, to produce events which will be detected by the unit of work:
from sqlalchemy.ext.mutable import MutableDict

class MyClass(Base):
    __tablename__ = 'data_table'

    id = Column(Integer, primary_key=True)
    data = Column(MutableDict.as_mutable(HSTORE))

my_object = session.query(MyClass).one()

# in-place mutation, requires Mutable extension
# in order for the ORM to detect
my_object.data['some_key'] = 'some value'

session.commit()
When the sqlalchemy.ext.mutable extension is not used, the ORM will not be alerted to any changes to the contents of an existing dictionary, unless that dictionary value is re-assigned to the HSTORE-attribute itself, thus generating a change event.
See also
hstore - render the PostgreSQL hstore() function.
Members
array(), contained_by(), contains(), defined(), delete(), has_all(), has_any(), has_key(), keys(), matrix(), slice(), vals(), __init__(), bind_processor(), comparator_factory, hashable, result_processor()
Class signature
class sqlalchemy.dialects.postgresql.HSTORE (sqlalchemy.types.Indexable, sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine)
- class Comparator(expr)¶
Define comparison operations for HSTORE.
Class signature
class sqlalchemy.dialects.postgresql.HSTORE.Comparator (sqlalchemy.types.Comparator, sqlalchemy.types.Comparator)
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.array()¶
Text array expression. Returns array of alternating keys and values.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.contained_by(other)¶
Boolean expression. Test if keys are a proper subset of the keys of the argument hstore expression.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.contains(other, **kwargs)¶
Boolean expression. Test if keys (or array) are a superset of the keys of the argument hstore expression.
kwargs may be ignored by this operator but are required for API conformance.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.defined(key)¶
Boolean expression. Test for presence of a non-NULL value for the key. Note that the key may be a SQLA expression.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.delete(key)¶
HStore expression. Returns the contents of this hstore with the given key deleted. Note that the key may be a SQLA expression.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.has_all(other)¶
Boolean expression. Test for presence of all keys in the hstore.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.has_any(other)¶
Boolean expression. Test for presence of any key in the hstore.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.has_key(other)¶
Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.keys()¶
Text array expression. Returns array of keys.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.matrix()¶
Text array expression. Returns array of [key, value] pairs.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.slice(array)¶
HStore expression. Returns a subset of an hstore defined by array of keys.
- method sqlalchemy.dialects.postgresql.HSTORE.Comparator.vals()¶
Text array expression. Returns array of values.
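As a quick sketch of a few of these comparator methods in query expressions, reusing the data_table defined earlier in this section; the expressions are illustrative:
from sqlalchemy import select
from sqlalchemy.dialects.postgresql import array

# text array of the keys stored in the "data" hstore column
stmt = select(data_table.c.data.keys())

# a sub-hstore containing only the given keys
stmt = select(data_table.c.data.slice(array(["key1", "key2"])))

# rows where a non-NULL value is present for "key1"
stmt = select(data_table.c.id).where(data_table.c.data.defined("key1"))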
- method sqlalchemy.dialects.postgresql.HSTORE.__init__(text_type=None)¶
Construct a new HSTORE.
- method sqlalchemy.dialects.postgresql.HSTORE.bind_processor(dialect)¶
Return a conversion function for processing bind values.
Returns a callable which will receive a bind parameter value as the sole positional argument and will return a value to send to the DB-API.
If processing is not necessary, the method should return None.
Note
This method is only called relative to a dialect specific type object, which is often private to a dialect in use and is not the same type object as the public facing one, which means it's not feasible to subclass a TypeEngine class in order to provide an alternate TypeEngine.bind_processor() method, unless subclassing the UserDefinedType class explicitly.
To provide alternate behavior for TypeEngine.bind_processor(), implement a TypeDecorator class and provide an implementation of TypeDecorator.process_bind_param().
- Parameters:
dialect¶ – Dialect instance in use.
- attribute sqlalchemy.dialects.postgresql.HSTORE.comparator_factory¶
alias of Comparator
- attribute sqlalchemy.dialects.postgresql.HSTORE.hashable = False¶
Flag, if False, means values from this type aren't hashable.
Used by the ORM when uniquing result lists.
- method sqlalchemy.dialects.postgresql.HSTORE.result_processor(dialect, coltype)¶
Return a conversion function for processing result row values.
Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user.
If processing is not necessary, the method should return None.
Note
This method is only called relative to a dialect specific type object, which is often private to a dialect in use and is not the same type object as the public facing one, which means it's not feasible to subclass a TypeEngine class in order to provide an alternate TypeEngine.result_processor() method, unless subclassing the UserDefinedType class explicitly.
To provide alternate behavior for TypeEngine.result_processor(), implement a TypeDecorator class and provide an implementation of TypeDecorator.process_result_value().
- class sqlalchemy.dialects.postgresql.hstore(*args, **kwargs)¶
Construct an hstore value within a SQL expression using the PostgreSQL hstore() function.
The hstore function accepts one or two arguments as described in the PostgreSQL documentation.
E.g.:
from sqlalchemy.dialects.postgresql import array, hstore

select(hstore('key1', 'value1'))

select(
    hstore(
        array(['key1', 'key2', 'key3']),
        array(['value1', 'value2', 'value3'])
    )
)
See also
HSTORE - the PostgreSQL HSTORE datatype.
Members
Class signature
class sqlalchemy.dialects.postgresql.hstore (sqlalchemy.sql.functions.GenericFunction)
- attribute sqlalchemy.dialects.postgresql.hstore.inherit_cache = True¶
Indicate if this HasCacheKey instance should make use of the cache key generation scheme used by its immediate superclass.
The attribute defaults to None, which indicates that a construct has not yet taken into account whether or not it is appropriate for it to participate in caching; this is functionally equivalent to setting the value to False, except that a warning is also emitted.
This flag can be set to True on a particular class, if the SQL that corresponds to the object does not change based on attributes which are local to this class, and not its superclass.
See also
Enabling Caching Support for Custom Constructs - General guidelines for setting the HasCacheKey.inherit_cache attribute for third-party or user defined SQL constructs.
- attribute sqlalchemy.dialects.postgresql.hstore.type¶
alias of HSTORE
- class sqlalchemy.dialects.postgresql.INET¶
Class signature
class sqlalchemy.dialects.postgresql.INET (sqlalchemy.types.TypeEngine)
- class sqlalchemy.dialects.postgresql.INTERVAL(precision=None, fields=None)¶
PostgreSQL INTERVAL type.
Members
Class signature
class sqlalchemy.dialects.postgresql.INTERVAL (sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval)
- method sqlalchemy.dialects.postgresql.INTERVAL.__init__(precision=None, fields=None)¶
Construct an INTERVAL.
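A minimal sketch of constructing an INTERVAL column; the fields string is rendered into the column's DDL as in PostgreSQL, and the table name is illustrative:
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import INTERVAL

metadata = MetaData()
jobs = Table(
    "jobs",
    metadata,
    Column("id", Integer, primary_key=True),
    # renders as "duration INTERVAL DAY TO SECOND" in CREATE TABLE
    Column("duration", INTERVAL(fields="DAY TO SECOND")),
)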
- class sqlalchemy.dialects.postgresql.JSON(none_as_null=False, astext_type=None)¶
Represent the PostgreSQL JSON type.
JSON is used automatically whenever the base JSON datatype is used against a PostgreSQL backend, however the base JSON datatype does not provide Python accessors for PostgreSQL-specific comparison methods such as Comparator.astext(); additionally, to use PostgreSQL JSONB, the JSONB datatype should be used explicitly.
See also
JSON - main documentation for the generic cross-platform JSON datatype.
The operators provided by the PostgreSQL version of JSON include:
Index operations (the -> operator):
data_table.c.data['some key']

data_table.c.data[5]
Index operations returning text (the ->> operator):
data_table.c.data['some key'].astext == 'some value'
Note that equivalent functionality is available via the Comparator.as_string accessor.
Index operations with CAST (equivalent to CAST(col ->> ['some key'] AS <type>)):
data_table.c.data['some key'].astext.cast(Integer) == 5
Note that equivalent functionality is available via the Comparator.as_integer and similar accessors.
Path index operations (the #> operator):
data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]
Path index operations returning text (the #>> operator):
data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')].astext == 'some value'
Changed in version 1.1: The ColumnElement.cast() operator on JSON objects now requires that the Comparator.astext modifier be called explicitly, if the cast works only from a textual string.
Index operations return an expression object whose type defaults to JSON, so that further JSON-oriented instructions may be called upon the result type.
Custom serializers and deserializers are specified at the dialect level, that is using create_engine(). The reason for this is that when using psycopg2, the DBAPI only allows serializers at the per-cursor or per-connection level. E.g.:
engine = create_engine(
    "postgresql://scott:tiger@localhost/test",
    json_serializer=my_serialize_fn,
    json_deserializer=my_deserialize_fn
)
When using the psycopg2 dialect, the json_deserializer is registered against the database using psycopg2.extras.register_default_json.
Members
Class signature
class sqlalchemy.dialects.postgresql.JSON (sqlalchemy.types.JSON)
- class Comparator(expr)¶
Define comparison operations for JSON.
Class signature
class sqlalchemy.dialects.postgresql.JSON.Comparator (sqlalchemy.types.Comparator)
- attribute sqlalchemy.dialects.postgresql.JSON.Comparator.astext¶
On an indexed expression, use the "astext" (e.g. "->>") conversion when rendered in SQL.
E.g.:
select(data_table.c.data['some key'].astext)
- method sqlalchemy.dialects.postgresql.JSON.__init__(none_as_null=False, astext_type=None)¶
Construct a JSON type.
- Parameters:
none_as_null¶ – if True, persist the value None as a SQL NULL value, not the JSON encoding of null. Note that when this flag is False, the null() construct can still be used to persist a NULL value:
from sqlalchemy import null
conn.execute(table.insert(), data=null())
Changed in version 0.9.8: Added none_as_null, and null() is now supported in order to persist a NULL value.
astext_type¶ – the type to use for the Comparator.astext accessor on indexed attributes. Defaults to Text.
New in version 1.1.
- attribute sqlalchemy.dialects.postgresql.JSON.comparator_factory¶
alias of Comparator
- class sqlalchemy.dialects.postgresql.JSONB(none_as_null=False, astext_type=None)¶
Represent the PostgreSQL JSONB type.
The JSONB type stores arbitrary JSONB format data, e.g.:
data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', JSONB)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data={"key1": "value1", "key2": "value2"}
    )
The JSONB type includes all operations provided by JSON, including the same behaviors for indexing operations. It also adds additional operators specific to JSONB, including Comparator.has_key(), Comparator.has_all(), Comparator.has_any(), Comparator.contains(), and Comparator.contained_by().
Like the JSON type, the JSONB type does not detect in-place changes when used with the ORM, unless the sqlalchemy.ext.mutable extension is used.
Custom serializers and deserializers are shared with the JSON class, using the json_serializer and json_deserializer keyword arguments. These must be specified at the dialect level using create_engine(). When using psycopg2, the serializers are associated with the jsonb type using psycopg2.extras.register_default_jsonb on a per-connection basis, in the same way that psycopg2.extras.register_default_json is used to register these handlers with the json type.
New in version 0.9.7.
See also
JSON
Members
contained_by(), contains(), has_all(), has_any(), has_key(), comparator_factory
Class signature
class sqlalchemy.dialects.postgresql.JSONB (sqlalchemy.dialects.postgresql.json.JSON)
- class Comparator(expr)¶
Define comparison operations for JSON.
Class signature
class sqlalchemy.dialects.postgresql.JSONB.Comparator (sqlalchemy.dialects.postgresql.json.Comparator)
- method sqlalchemy.dialects.postgresql.JSONB.Comparator.contained_by(other)¶
Boolean expression. Test if keys are a proper subset of the keys of the argument jsonb expression (equivalent of the <@ operator).
- method sqlalchemy.dialects.postgresql.JSONB.Comparator.contains(other, **kwargs)¶
Boolean expression. Test if keys (or array) are a superset of the keys of the argument jsonb expression (equivalent of the @> operator).
kwargs may be ignored by this operator but are required for API conformance.
- method sqlalchemy.dialects.postgresql.JSONB.Comparator.has_all(other)¶
Boolean expression. Test for presence of all keys in jsonb (equivalent of the ?& operator).
- method sqlalchemy.dialects.postgresql.JSONB.Comparator.has_any(other)¶
Boolean expression. Test for presence of any key in jsonb (equivalent of the ?| operator).
- method sqlalchemy.dialects.postgresql.JSONB.Comparator.has_key(other)¶
Boolean expression. Test for presence of a key (equivalent of the ? operator). Note that the key may be a SQLA expression.
- attribute sqlalchemy.dialects.postgresql.JSONB.comparator_factory¶
alias of Comparator
- class sqlalchemy.dialects.postgresql.MACADDR¶
Class signature
class sqlalchemy.dialects.postgresql.MACADDR (sqlalchemy.types.TypeEngine)
- class sqlalchemy.dialects.postgresql.MACADDR8¶
Class signature
class sqlalchemy.dialects.postgresql.MACADDR8 (sqlalchemy.types.TypeEngine)
- class sqlalchemy.dialects.postgresql.MONEY¶
Provide the PostgreSQL MONEY type.
Depending on driver, result rows using this type may return a string value which includes currency symbols.
For this reason, it may be preferable to provide conversion to a numerically-based currency datatype using TypeDecorator:
import re
import decimal
from typing import Any

from sqlalchemy import TypeDecorator

class NumericMoney(TypeDecorator):
    impl = MONEY

    def process_result_value(self, value: Any, dialect: Any) -> Any:
        if value is not None:
            # adjust this for the currency and numeric format in use
            m = re.match(r"\$([\d.]+)", value)
            if m:
                value = decimal.Decimal(m.group(1))
        return value
Alternatively, the conversion may be applied as a CAST using the TypeDecorator.column_expression() method as follows:
from typing import Any

from sqlalchemy import Numeric, TypeDecorator, cast

class NumericMoney(TypeDecorator):
    impl = MONEY

    def column_expression(self, column: Any):
        return cast(column, Numeric())
New in version 1.2.
Class signature
class sqlalchemy.dialects.postgresql.MONEY (sqlalchemy.types.TypeEngine)
- class sqlalchemy.dialects.postgresql.OID¶
Provide the PostgreSQL OID type.
New in version 0.9.5.
Class signature
class sqlalchemy.dialects.postgresql.OID (sqlalchemy.types.TypeEngine)
- class sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, decimal_return_scale=None)
The SQL REAL type.
Class signature
class sqlalchemy.dialects.postgresql.REAL (sqlalchemy.types.Float)
- method sqlalchemy.dialects.postgresql.REAL.__init__(precision=None, asdecimal=False, decimal_return_scale=None)
inherited from the sqlalchemy.types.Float.__init__ method of Float
Construct a Float.
- Parameters:
precision¶ – the numeric precision for use in DDL CREATE TABLE.
asdecimal¶ – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
decimal_return_scale¶ – Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don't have a notion of "scale", so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include "scale", will use "scale" as the default for decimal_return_scale, if not otherwise specified.
New in version 0.9.0.
- class sqlalchemy.dialects.postgresql.REGCLASS¶
Provide the PostgreSQL REGCLASS type.
New in version 1.2.7.
Class signature
class sqlalchemy.dialects.postgresql.REGCLASS (sqlalchemy.types.TypeEngine)
- class sqlalchemy.dialects.postgresql.TIMESTAMP(timezone=False, precision=None)¶
Provide the PostgreSQL TIMESTAMP type.
Members
Class signature
class sqlalchemy.dialects.postgresql.TIMESTAMP (sqlalchemy.types.TIMESTAMP)
- method sqlalchemy.dialects.postgresql.TIMESTAMP.__init__(timezone=False, precision=None)¶
Construct a TIMESTAMP.
- class sqlalchemy.dialects.postgresql.TIME(timezone=False, precision=None)¶
PostgreSQL TIME type.
Members
Class signature
class sqlalchemy.dialects.postgresql.TIME (sqlalchemy.types.TIME)
- method sqlalchemy.dialects.postgresql.TIME.__init__(timezone=False, precision=None)¶
Construct a TIME.
- class sqlalchemy.dialects.postgresql.TSVECTOR¶
The TSVECTOR type implements the PostgreSQL text search type TSVECTOR.
It can be used to do full text queries on natural language documents.
New in version 0.9.0.
See also
Full Text Search
Class signature
class sqlalchemy.dialects.postgresql.TSVECTOR (sqlalchemy.types.TypeEngine)
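A brief hedged sketch of querying a TSVECTOR column, assuming a hypothetical table that stores a precomputed tsvector:
from sqlalchemy import Column, Integer, MetaData, Table, func, select
from sqlalchemy.dialects.postgresql import TSVECTOR

metadata = MetaData()
docs = Table(
    "docs",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("tsv", TSVECTOR),
)

# renders "docs.tsv @@ to_tsquery(...)" on PostgreSQL
stmt = select(docs.c.id).where(
    docs.c.tsv.op("@@")(func.to_tsquery("sqlalchemy & postgresql"))
)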
- class sqlalchemy.dialects.postgresql.UUID(as_uuid=False)¶
PostgreSQL UUID type.
Represents the UUID column type, interpreting data either as natively returned by the DBAPI or as Python uuid objects.
The UUID type is currently known to work within the prominent DBAPI drivers supported by SQLAlchemy including psycopg2, pg8000 and asyncpg. Support for other DBAPI drivers may be incomplete or non-present.
Members
Class signature
class sqlalchemy.dialects.postgresql.UUID (sqlalchemy.types.TypeEngine)
- method sqlalchemy.dialects.postgresql.UUID.__init__(as_uuid=False)¶
Construct a UUID type.
- Parameters:
as_uuid=False¶ – if True, values will be interpreted as Python uuid objects, converting to/from string via the DBAPI.
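A minimal sketch of the as_uuid=True behavior; table and column names are illustrative:
import uuid

from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import UUID

metadata = MetaData()
tokens = Table(
    "tokens",
    metadata,
    Column("id", Integer, primary_key=True),
    # as_uuid=True: values come back as uuid.UUID objects
    Column("token", UUID(as_uuid=True), default=uuid.uuid4),
)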
Range Types¶
The new range column types found in PostgreSQL 9.2 onwards are catered for by the following types:
Object Name | Description |
---|---|
DATERANGE | Represent the PostgreSQL DATERANGE type. |
INT4RANGE | Represent the PostgreSQL INT4RANGE type. |
INT8RANGE | Represent the PostgreSQL INT8RANGE type. |
NUMRANGE | Represent the PostgreSQL NUMRANGE type. |
RangeOperators | This mixin provides functionality for the Range Operators listed in the Range Operators table of the PostgreSQL documentation for Range Functions and Operators. |
TSRANGE | Represent the PostgreSQL TSRANGE type. |
TSTZRANGE | Represent the PostgreSQL TSTZRANGE type. |
- class sqlalchemy.dialects.postgresql.INT4RANGE¶
Represent the PostgreSQL INT4RANGE type.
- class sqlalchemy.dialects.postgresql.INT8RANGE¶
Represent the PostgreSQL INT8RANGE type.
- class sqlalchemy.dialects.postgresql.NUMRANGE¶
Represent the PostgreSQL NUMRANGE type.
- class sqlalchemy.dialects.postgresql.DATERANGE¶
Represent the PostgreSQL DATERANGE type.
- class sqlalchemy.dialects.postgresql.TSRANGE¶
Represent the PostgreSQL TSRANGE type.
- class sqlalchemy.dialects.postgresql.TSTZRANGE¶
Represent the PostgreSQL TSTZRANGE type.
The types above get most of their functionality from the following mixin:
- class sqlalchemy.dialects.postgresql.ranges.RangeOperators¶
This mixin provides functionality for the Range Operators listed in the Range Operators table of the PostgreSQL documentation for Range Functions and Operators. It is used by all the range types provided in the postgres dialect and can likely be used for any range types you create yourself.
Members
__ne__(), adjacent_to(), contained_by(), contains(), not_extend_left_of(), not_extend_right_of(), overlaps(), strictly_left_of(), strictly_right_of()
No extra support is provided for the Range Functions listed in the Range Functions table of the PostgreSQL documentation. For these, the normal func() object should be used.
- class comparator_factory(expr)¶
Define comparison operations for range types.
Class signature
class sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory (sqlalchemy.types.Comparator)
- method sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory.__ne__(other)¶
Boolean expression. Returns true if two ranges are not equal.
- method sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory.adjacent_to(other)¶
Boolean expression. Returns true if the range in the column is adjacent to the range in the operand.
- method sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory.contained_by(other)¶
Boolean expression. Returns true if the column is contained within the right hand operand.
- method sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory.contains(other, **kw)¶
Boolean expression. Returns true if the right hand operand, which can be an element or a range, is contained within the column.
kwargs may be ignored by this operator but are required for API conformance.
- method sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory.not_extend_left_of(other)¶
Boolean expression. Returns true if the range in the column does not extend left of the range in the operand.
- method sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory.not_extend_right_of(other)¶
Boolean expression. Returns true if the range in the column does not extend right of the range in the operand.
- method sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory.overlaps(other)¶
Boolean expression. Returns true if the column overlaps (has points in common with) the right hand operand.
- method sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory.strictly_left_of(other)¶
Boolean expression. Returns true if the column is strictly left of the right hand operand.
- method sqlalchemy.dialects.postgresql.ranges.RangeOperators.comparator_factory.strictly_right_of(other)¶
Boolean expression. Returns true if the column is strictly right of the right hand operand.
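A short sketch of one of these operators in a query, using the psycopg2 range type discussed in the warning below; table and column names are illustrative:
from datetime import datetime

from psycopg2.extras import DateTimeRange
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects.postgresql import TSRANGE

metadata = MetaData()
booking = Table(
    "booking",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("during", TSRANGE),
)

# rows whose range overlaps the given range; renders the && operator
window = DateTimeRange(datetime(2013, 3, 23), datetime(2013, 3, 24))
stmt = select(booking.c.id).where(booking.c.during.overlaps(window))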
Warning
The range type DDL support should work with any PostgreSQL DBAPI driver, however the data types returned may vary. If you are using psycopg2, it's recommended to upgrade to version 2.5 or later before using these column types.
When instantiating models that use these column types, you should pass whatever data type is expected by the DBAPI driver you're using for the column type. For psycopg2 these are psycopg2.extras.NumericRange, psycopg2.extras.DateRange, psycopg2.extras.DateTimeRange and psycopg2.extras.DateTimeTZRange, or the class you've registered with psycopg2.extras.register_range.
For example:
from psycopg2.extras import DateTimeRange
from sqlalchemy.dialects.postgresql import TSRANGE
class RoomBooking(Base):
__tablename__ = 'room_booking'
room = Column(Integer(), primary_key=True)
during = Column(TSRANGE())
booking = RoomBooking(
room=101,
during=DateTimeRange(datetime(2013, 3, 23), None)
)
PostgreSQL Constraint Types¶
SQLAlchemy supports PostgreSQL EXCLUDE constraints via the ExcludeConstraint class:
Object Name | Description |
---|---|
ExcludeConstraint | A table-level EXCLUDE constraint. |
- class sqlalchemy.dialects.postgresql.ExcludeConstraint(*elements, **kw)¶
A table-level EXCLUDE constraint.
Defines an EXCLUDE constraint as described in the PostgreSQL documentation.
Members
Class signature
class sqlalchemy.dialects.postgresql.ExcludeConstraint (sqlalchemy.schema.ColumnCollectionConstraint)
- method sqlalchemy.dialects.postgresql.ExcludeConstraint.__init__(*elements, **kw)¶
Create an ExcludeConstraint object.
E.g.:
const = ExcludeConstraint(
    (Column('period'), '&&'),
    (Column('group'), '='),
    where=(Column('group') != 'some group'),
    ops={'group': 'my_operator_class'}
)
The constraint is normally embedded into the Table construct directly, or added later using append_constraint():
some_table = Table(
    'some_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('period', TSRANGE()),
    Column('group', String)
)

some_table.append_constraint(
    ExcludeConstraint(
        (some_table.c.period, '&&'),
        (some_table.c.group, '='),
        where=some_table.c.group != 'some group',
        name='some_table_excl_const',
        ops={'group': 'my_operator_class'}
    )
)
- Parameters:
*elements¶ – A sequence of two tuples of the form (column, operator) where "column" is a SQL expression element or a raw SQL string, most typically a Column object, and "operator" is a string containing the operator to use. In order to specify a column name when a Column object is not available, while ensuring that any necessary quoting rules take effect, an ad-hoc Column or column() object should be used.
name¶ – Optional, the in-database name of this constraint.
deferrable¶ – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
initially¶ – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
using¶ – Optional string. If set, emit USING <index_method> when issuing DDL for this constraint. Defaults to ‘gist’.
where¶ – Optional SQL expression construct or literal SQL string. If set, emit WHERE <predicate> when issuing DDL for this constraint.
Warning
The ExcludeConstraint.where argument to ExcludeConstraint can be passed as a Python string argument, which will be treated as trusted SQL text and rendered as given. DO NOT PASS UNTRUSTED INPUT TO THIS PARAMETER.
ops¶ – Optional dictionary. Used to define operator classes for the elements; works the same way as that of the postgresql_ops parameter specified to the Index construct.
New in version 1.3.21.
See also
Operator Classes - general description of how PostgreSQL operator classes are specified.
For example:
from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE
class RoomBooking(Base):
__tablename__ = "room_booking"
room = Column(Integer(), primary_key=True)
during = Column(TSRANGE())
__table_args__ = (ExcludeConstraint(("room", "="), ("during", "&&")),)
PostgreSQL DML Constructs¶
Object Name | Description |
---|---|
insert(table[, values, inline, bind, ...], **dialect_kw) | Construct an Insert object. |
Insert | PostgreSQL-specific implementation of INSERT. |
- function sqlalchemy.dialects.postgresql.insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)¶
Construct an Insert object.
This documentation is inherited from sqlalchemy.sql.expression.insert(); this constructor, sqlalchemy.dialects.postgresql.insert(), creates a sqlalchemy.dialects.postgresql.Insert object. See that class for additional details describing this subclass.
E.g.:
from sqlalchemy import insert

stmt = (
    insert(user_table).
    values(name='username', fullname='Full Username')
)
Similar functionality is available via the TableClause.insert() method on Table.
Inserting Rows with Core - in the SQLAlchemy 1.4 / 2.0 Tutorial
- Parameters:
table¶ – TableClause which is the subject of the insert.
values¶ – collection of values to be inserted; see Insert.values() for a description of allowed formats here. Can be omitted entirely; an Insert construct will also dynamically render the VALUES clause at execution time based on the parameters passed to Connection.execute().
Deprecated since version 1.4: The insert.values parameter will be removed in SQLAlchemy 2.0. Please refer to the Insert.values() method.
inline¶ – if True, no attempt will be made to retrieve the SQL-generated default values to be provided within the statement; in particular, this allows SQL expressions to be rendered 'inline' within the statement without the need to pre-execute them beforehand; for backends that support "returning", this turns off the "implicit returning" feature for the statement.
Deprecated since version 1.4: The insert.inline parameter will be removed in SQLAlchemy 2.0. Please use the Insert.inline() method.
If both Insert.values and compile-time bind parameters are present, the compile-time bind parameters override the information specified within Insert.values on a per-key basis.
The keys within Insert.values can be either Column objects or their string identifiers. Each key may reference one of:
a literal data value (i.e. string, number, etc.);
a Column object;
a SELECT statement.
If a SELECT statement is specified which references this INSERT statement's table, the statement will be correlated against the INSERT statement.
See also
Inserting Rows with Core - in the SQLAlchemy 1.4 / 2.0 Tutorial
- class sqlalchemy.dialects.postgresql.Insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)¶
PostgreSQL-specific implementation of INSERT.
Adds methods for PG-specific syntaxes such as ON CONFLICT.
The Insert object is created using the sqlalchemy.dialects.postgresql.insert() function.
New in version 1.1.
Class signature
class sqlalchemy.dialects.postgresql.Insert (sqlalchemy.sql.expression.Insert)
- attribute sqlalchemy.dialects.postgresql.Insert.excluded¶
Provide the excluded namespace for an ON CONFLICT statement.
PG's ON CONFLICT clause allows reference to the row that would be inserted, known as excluded. This attribute provides all columns in this row to be referenceable.
Tip
The Insert.excluded attribute is an instance of ColumnCollection, which provides an interface the same as that of the Table.c collection described at Accessing Tables and Columns. With this collection, ordinary names are accessible like attributes (e.g. stmt.excluded.some_column), but special names and dictionary method names should be accessed using indexed access, such as stmt.excluded["column name"] or stmt.excluded["values"]. See the docstring for ColumnCollection for further examples.
See also
INSERT…ON CONFLICT (Upsert) - example of how to use Insert.excluded
- attribute sqlalchemy.dialects.postgresql.Insert.inherit_cache = False¶
Indicate if this HasCacheKey instance should make use of the cache key generation scheme used by its immediate superclass.
The attribute defaults to None, which indicates that a construct has not yet taken into account whether or not it is appropriate for it to participate in caching; this is functionally equivalent to setting the value to False, except that a warning is also emitted.
This flag can be set to True on a particular class, if the SQL that corresponds to the object does not change based on attributes which are local to this class, and not its superclass.
See also
Enabling Caching Support for Custom Constructs - General guidelines for setting the HasCacheKey.inherit_cache attribute for third-party or user defined SQL constructs.
- method sqlalchemy.dialects.postgresql.Insert.on_conflict_do_nothing(constraint=None, index_elements=None, index_where=None)¶
Specifies a DO NOTHING action for ON CONFLICT clause.
The constraint and index_elements arguments are optional, but only one of these can be specified.
- Parameters:
constraint¶ – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.
index_elements¶ – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.
index_where¶ – Additional WHERE criterion that can be used to infer a conditional target index.
New in version 1.1.
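A minimal sketch, assuming a hypothetical user_table with a unique id column:
from sqlalchemy.dialects.postgresql import insert

stmt = insert(user_table).values(id=1, name="spongebob")

# silently skip the row if an id conflict occurs
stmt = stmt.on_conflict_do_nothing(index_elements=["id"])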
- method sqlalchemy.dialects.postgresql.Insert.on_conflict_do_update(constraint=None, index_elements=None, index_where=None, set_=None, where=None)¶
Specifies a DO UPDATE SET action for ON CONFLICT clause.
Either the constraint or index_elements argument is required, but only one of these can be specified.
- Parameters:
constraint¶ – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.
index_elements¶ – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.
index_where¶ – Additional WHERE criterion that can be used to infer a conditional target index.
set_¶ – A dictionary or other mapping object where the keys are either names of columns in the target table, or Column objects or other ORM-mapped columns matching that of the target table, and expressions or literals as values, specifying the SET actions to take.
New in version 1.4: The Insert.on_conflict_do_update.set_ parameter supports Column objects from the target Table as keys.
Warning
This dictionary does not take into account Python-specified default UPDATE values or generation functions, e.g. those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.
where¶ – Optional argument. If present, can be a literal SQL string or an acceptable expression for a WHERE clause that restricts the rows affected by DO UPDATE SET. Rows not meeting the WHERE condition will not be updated (effectively a DO NOTHING for those rows).
New in version 1.1.
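A brief hedged upsert sketch using these parameters, again assuming a hypothetical user_table; the excluded namespace refers to the row proposed for insertion:
from sqlalchemy.dialects.postgresql import insert

insert_stmt = insert(user_table).values(id=1, name="spongebob")
do_update_stmt = insert_stmt.on_conflict_do_update(
    index_elements=[user_table.c.id],
    # take the incoming name from the row proposed for insertion
    set_={"name": insert_stmt.excluded.name},
)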
psycopg2¶
Support for the PostgreSQL database via the psycopg2 driver.
DBAPI¶
Documentation and download information (if applicable) for psycopg2 is available at: https://pypi.org/project/psycopg2/
Connecting¶
Connect String:
postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]
psycopg2 Connect Arguments¶
Keyword arguments that are specific to the SQLAlchemy psycopg2 dialect may be passed to create_engine(), and include the following:
- isolation_level: This option, available for all PostgreSQL dialects, includes the AUTOCOMMIT isolation level when using the psycopg2 dialect. This option sets the default isolation level for the connection that is set immediately upon connection to the database before the connection is pooled. This option is generally superseded by the more modern Connection.execution_options.isolation_level execution option, detailed at Setting Transaction Isolation Levels including DBAPI Autocommit.
- client_encoding: sets the client encoding in a libpq-agnostic way, using psycopg2's set_client_encoding() method.
- use_native_unicode: Under Python 2 only, this can be set to False to disable the use of psycopg2's native Unicode support.
- executemany_mode, executemany_batch_page_size, executemany_values_page_size: Allows use of psycopg2 extensions for optimizing "executemany"-style queries. See the referenced section below for details.
Tip
The above keyword arguments are dialect keyword arguments, meaning that they are passed as explicit keyword arguments to create_engine():
engine = create_engine(
"postgresql+psycopg2://scott:tiger@localhost/test",
isolation_level="SERIALIZABLE",
)
These should not be confused with DBAPI connect arguments, which
are passed as part of the create_engine.connect_args
dictionary and/or are passed in the URL query string, as detailed in
the section Custom DBAPI connect() arguments / on-connect routines.
SSL Connections¶
The psycopg2 module has a connection argument named sslmode for controlling its behavior regarding secure (SSL) connections. The default is sslmode=prefer; it will attempt an SSL connection and if that fails it will fall back to an unencrypted connection. sslmode=require may be used to ensure that only secure connections are established. Consult the psycopg2 / libpq documentation for further options that are available.
Note that sslmode is specific to psycopg2 so it is included in the connection URI:
engine = sa.create_engine(
"postgresql+psycopg2://scott:tiger@192.168.0.199:5432/test?sslmode=require"
)
Unix Domain Connections¶
psycopg2 supports connecting via Unix domain connections. When the host
portion of the URL is omitted, SQLAlchemy passes None
to psycopg2,
which specifies Unix-domain communication rather than TCP/IP communication:
create_engine("postgresql+psycopg2://user:password@/dbname")
By default, the socket file used is the Unix-domain socket in /tmp, or whatever socket directory was specified when PostgreSQL was built. This value can be overridden by passing a pathname to psycopg2, using host as an additional keyword argument:
create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql")
Specifying multiple fallback hosts¶
psycopg2 supports multiple connection points in the connection string. When the host parameter is used multiple times in the query section of the URL, SQLAlchemy will create a single string of the host and port information provided to make the connections. Tokens may consist of host:port or just host; in the latter case, the default port is selected by libpq. In the example below, three host connections are specified, for HostA:PortA, HostB connecting to the default port, and HostC:PortC:
create_engine(
"postgresql+psycopg2://user:password@/dbname?host=HostA:PortA&host=HostB&host=HostC:PortC"
)
As an alternative, libpq query string format also may be used; this specifies host and port as single query string arguments with comma-separated lists - the default port can be chosen by indicating an empty value in the comma separated list:
create_engine(
"postgresql+psycopg2://user:password@/dbname?host=HostA,HostB,HostC&port=PortA,,PortC"
)
With either URL style, connections to each host are attempted based on a configurable strategy, which may be configured using the libpq target_session_attrs parameter. Per libpq this defaults to any, which indicates a connection to each host is then attempted until a connection is successful. Other strategies include primary, prefer-standby, etc. The complete list is documented by PostgreSQL at libpq connection strings.
For example, to indicate multiple hosts using the primary strategy:
create_engine(
"postgresql+psycopg2://user:password@/dbname?host=HostA:PortA&host=HostB&host=HostC:PortC&target_session_attrs=primary"
)
Changed in version 1.4.40: Port specification in psycopg2 multiple host format is repaired, previously ports were not correctly interpreted in this context. libpq comma-separated format is also now supported.
New in version 1.3.20: Support for multiple hosts in PostgreSQL connection string.
See also
libpq connection strings - please refer to this section in the libpq documentation for complete background on multiple host support.
Empty DSN Connections / Environment Variable Connections¶
The psycopg2 DBAPI can connect to PostgreSQL by passing an empty DSN to the libpq client library, which by default indicates to connect to a localhost PostgreSQL database that is open for "trust" connections. This behavior can be further tailored using a particular set of environment variables which are prefixed with PG (such as PGHOST and PGPORT), which are consumed by libpq to take the place of any or all elements of the connection string.
For this form, the URL can be passed without any elements other than the initial scheme:
engine = create_engine('postgresql+psycopg2://')
In the above form, a blank “dsn” string is passed to the psycopg2.connect()
function which in turn represents an empty DSN passed to libpq.
New in version 1.3.2: support for parameter-less connections with psycopg2.
See also
Environment Variables - PostgreSQL documentation on how to use PG-prefixed environment variables for connections.
Per-Statement/Connection Execution Options¶
The following DBAPI-specific options are respected when used with Connection.execution_options(), Executable.execution_options(), Query.execution_options(), in addition to those not specific to DBAPIs:
- isolation_level - Set the transaction isolation level for the lifespan of a Connection (can only be set on a connection, not a statement or query). See Psycopg2 Transaction Isolation Level.
- stream_results - Enable or disable usage of psycopg2 server side cursors - this feature makes use of "named" cursors in combination with special result handling methods so that result rows are not fully buffered. Defaults to False, meaning cursors are buffered by default. A short usage sketch appears after this list.
- max_row_buffer - when using stream_results, an integer value that specifies the maximum number of rows to buffer at a time. This is interpreted by the BufferedRowCursorResult, and if omitted the buffer will grow to ultimately store 1000 rows at a time.
Changed in version 1.4: The max_row_buffer size can now be greater than 1000, and the buffer will grow to that size.
Psycopg2 Fast Execution Helpers¶
Modern versions of psycopg2 include a feature known as Fast Execution Helpers, which have been shown in benchmarking to improve psycopg2's executemany() performance, primarily with INSERT statements, by multiple orders of magnitude. SQLAlchemy internally makes use of these extensions for executemany() style calls, which correspond to lists of parameters being passed to Connection.execute() as detailed in multiple parameter sets. The ORM also uses this mode internally whenever possible.
The two available extensions on the psycopg2 side are the execute_values() and execute_batch() functions. The psycopg2 dialect defaults to using the execute_values() extension for all qualifying INSERT statements.
Changed in version 1.4: The psycopg2 dialect now defaults to a new mode "values_only" for executemany_mode, which allows an order of magnitude performance improvement for INSERT statements, but does not include "batch" mode for UPDATE and DELETE statements which removes the ability of cursor.rowcount to function correctly.
The use of these extensions is controlled by the executemany_mode flag which may be passed to create_engine():
engine = create_engine(
"postgresql+psycopg2://scott:tiger@host/dbname",
executemany_mode='values_plus_batch')
Possible options for executemany_mode include:
- values_only - this is the default value. The psycopg2 execute_values() extension is used for qualifying INSERT statements, which rewrites the INSERT to include multiple VALUES clauses so that many parameter sets can be inserted with one statement.
New in version 1.4: Added "values_only" setting for executemany_mode which is also now the default.
- None - No psycopg2 extensions are used, and the usual cursor.executemany() method is used when invoking statements with multiple parameter sets.
- 'batch' - Uses psycopg2.extras.execute_batch for all qualifying INSERT, UPDATE and DELETE statements, so that multiple copies of a SQL query, each one corresponding to a parameter set passed to executemany(), are joined into a single SQL string separated by a semicolon. When using this mode, the CursorResult.rowcount attribute will not contain a value for executemany-style executions.
- 'values_plus_batch' - execute_values is used for qualifying INSERT statements, execute_batch is used for UPDATE and DELETE. When using this mode, the CursorResult.rowcount attribute will not contain a value for executemany-style executions against UPDATE and DELETE statements.
By "qualifying statements", we mean that the statement being executed must be a Core insert(), update() or delete() construct, and not a plain textual SQL string or one constructed using text(). When using the ORM, all insert/update/delete statements used by the ORM flush process are qualifying.
The "page size" for the "values" and "batch" strategies can be affected by using the executemany_batch_page_size and executemany_values_page_size engine parameters. These control how many parameter sets should be represented in each execution. The "values" page size defaults to 1000, which is different from psycopg2's default. The "batch" page size defaults to 100. These can be affected by passing new values to create_engine():
engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='values_plus_batch',
    executemany_values_page_size=10000, executemany_batch_page_size=500)
Changed in version 1.4: The default for executemany_values_page_size is now 1000, up from 100.
See also
Sending Multiple Parameters - General information on using the
Connection
object to execute statements in such a way as to make
use of the DBAPI .executemany()
method.
Unicode with Psycopg2¶
The psycopg2 DBAPI driver supports Unicode data transparently. Under Python 2 only, the SQLAlchemy psycopg2 dialect will enable the psycopg2.extensions.UNICODE extension by default to ensure Unicode is handled properly; under Python 3, this is psycopg2's default behavior.
The client character encoding can be controlled for the psycopg2 dialect in the following ways:
For PostgreSQL 9.1 and above, the client_encoding parameter may be passed in the database URL; this parameter is consumed by the underlying libpq PostgreSQL client library:
engine = create_engine("postgresql+psycopg2://user:pass@host/dbname?client_encoding=utf8")
Alternatively, the above client_encoding value may be passed using create_engine.connect_args for programmatic establishment with libpq:
engine = create_engine(
    "postgresql+psycopg2://user:pass@host/dbname",
    connect_args={'client_encoding': 'utf8'}
)
For all PostgreSQL versions, psycopg2 supports a client-side encoding value that will be passed to database connections when they are first established. The SQLAlchemy psycopg2 dialect supports this using the client_encoding parameter passed to create_engine():
engine = create_engine(
    "postgresql+psycopg2://user:pass@host/dbname",
    client_encoding="utf8"
)
Tip
The above client_encoding parameter admittedly is very similar in appearance to usage of the parameter within the create_engine.connect_args dictionary; the difference above is that the parameter is consumed by psycopg2 and is passed to the database connection using SET client_encoding TO 'utf8'; in the previously mentioned style, the parameter is instead passed through psycopg2 and consumed by the libpq library.
A common way to set up client encoding with PostgreSQL databases is to ensure it is configured within the server-side postgresql.conf file; this is the recommended way to set encoding for a server that is consistently of one encoding in all databases:
# postgresql.conf file # client_encoding = sql_ascii # actually, defaults to database # encoding client_encoding = utf8
Disabling Native Unicode¶
Under Python 2 only, SQLAlchemy can also be instructed to skip the usage of the psycopg2 UNICODE extension and to instead utilize its own unicode encode/decode services, which are normally reserved only for those DBAPIs that don’t fully support unicode directly. Passing use_native_unicode=False to create_engine() will disable usage of psycopg2.extensions.UNICODE. SQLAlchemy will instead encode data itself into Python bytestrings on the way in and coerce from bytes on the way back, using the value of the create_engine() encoding parameter, which defaults to utf-8.
SQLAlchemy’s own unicode encode/decode functionality is steadily becoming
obsolete as most DBAPIs now support unicode fully.
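As a minimal sketch of the above (applicable to Python 2 deployments only), both parameters would be passed together to create_engine():

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://user:pass@host/dbname",
    use_native_unicode=False,  # skip psycopg2.extensions.UNICODE
    encoding="utf-8",          # codec used by SQLAlchemy's own converters
)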
Transactions¶
The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.
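As a brief sketch of the SAVEPOINT support (the table name t is hypothetical), a nested transaction may be demarcated using Connection.begin_nested(), which emits SAVEPOINT, RELEASE SAVEPOINT and ROLLBACK TO SAVEPOINT as appropriate:

from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://scott:tiger@host/dbname")

with engine.connect() as conn:
    with conn.begin():                   # outer transaction
        savepoint = conn.begin_nested()  # emits SAVEPOINT
        try:
            conn.execute(text("UPDATE t SET x = 1"))  # hypothetical table
            savepoint.commit()           # emits RELEASE SAVEPOINT
        except Exception:
            savepoint.rollback()         # emits ROLLBACK TO SAVEPOINT
            raise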
Psycopg2 Transaction Isolation Level¶
As discussed in Transaction Isolation Level, all PostgreSQL dialects support setting of transaction isolation level both via the isolation_level parameter passed to create_engine(), as well as the isolation_level argument used by Connection.execution_options(). When using the psycopg2 dialect, these options make use of psycopg2’s set_isolation_level() connection method, rather than emitting a PostgreSQL directive; this is because psycopg2’s API-level setting is always emitted at the start of each transaction in any case.
The psycopg2 dialect supports these constants for isolation level:
READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
AUTOCOMMIT
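As a brief sketch, the isolation level may be set engine-wide via create_engine(), or per-connection via Connection.execution_options():

from sqlalchemy import create_engine

# engine-wide default isolation level
engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    isolation_level="SERIALIZABLE",
)

# per-connection override
with engine.connect() as conn:
    conn = conn.execution_options(isolation_level="AUTOCOMMIT")
    # statements on this connection now run in autocommit mode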
NOTICE logging¶
The psycopg2 dialect will log PostgreSQL NOTICE messages
via the sqlalchemy.dialects.postgresql
logger. When this logger
is set to the logging.INFO
level, notice messages will be logged:
import logging
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)
Above, it is assumed that logging is configured externally. If this is not
the case, configuration such as logging.basicConfig()
must be utilized:
import logging
logging.basicConfig() # log messages to the console
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)
See also
Logging HOWTO - on the python.org website
HSTORE type¶
The psycopg2
DBAPI includes an extension to natively handle marshalling of
the HSTORE type. The SQLAlchemy psycopg2 dialect will enable this extension
by default when psycopg2 version 2.4 or greater is used, and
it is detected that the target database has the HSTORE type set up for use.
In other words, when the dialect makes the first
connection, a sequence like the following is performed:
1. Request the available HSTORE oids using psycopg2.extras.HstoreAdapter.get_oids(). If this function returns a list of HSTORE identifiers, we then determine that the HSTORE extension is present. This function is skipped if the version of psycopg2 installed is less than version 2.4.

2. If the use_native_hstore flag is at its default of True, and we’ve detected that HSTORE oids are available, the psycopg2.extensions.register_hstore() extension is invoked for all connections.
The register_hstore() extension has the effect of all Python dictionaries being accepted as parameters regardless of the type of target column in SQL. The dictionaries are converted by this extension into a textual HSTORE expression. If this behavior is not desired, disable the use of the hstore extension by setting use_native_hstore to False as follows:

engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test",
                       use_native_hstore=False)
The HSTORE type is still supported when the psycopg2.extensions.register_hstore() extension is not used. It merely means that the coercion between Python dictionaries and the HSTORE string format, on both the parameter side and the result side, will take place within SQLAlchemy’s own marshalling logic, and not that of psycopg2, which may be more performant.
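As a brief sketch of the round trip (the table t here is hypothetical), a plain Python dictionary may be passed as the parameter for an HSTORE column; it is marshalled to the HSTORE format either by psycopg2's register_hstore() extension or by SQLAlchemy's own logic, per the above:

from sqlalchemy import Column, Integer, MetaData, Table, create_engine
from sqlalchemy.dialects.postgresql import HSTORE

metadata = MetaData()

# hypothetical table with an HSTORE column
t = Table(
    "t",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("attrs", HSTORE),
)

engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
metadata.create_all(engine)

with engine.connect() as conn:
    conn.execute(t.insert(), {"attrs": {"color": "red", "size": "m"}})
    row = conn.execute(t.select()).first()
    # the HSTORE value comes back as a Python dictionary
    print(row.attrs["color"])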
pg8000¶
Support for the PostgreSQL database via the pg8000 driver.
DBAPI¶
Documentation and download information (if applicable) for pg8000 is available at: https://pypi.org/project/pg8000/
Connecting¶
Connect String:
postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]
Changed in version 1.4: The pg8000 dialect has been updated for version 1.16.6 and higher, and is again part of SQLAlchemy’s continuous integration with full feature support.
Unicode¶
pg8000 will encode / decode string values between it and the server using the PostgreSQL client_encoding parameter; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf-8, as a more useful default:
#client_encoding = sql_ascii # actually, defaults to database
# encoding
client_encoding = utf8
The client_encoding can be overridden for a session by executing the SQL:

SET CLIENT_ENCODING TO 'utf8';
SQLAlchemy will execute this SQL on all new connections based on the value passed to create_engine() using the client_encoding parameter:

engine = create_engine(
    "postgresql+pg8000://user:pass@host/dbname", client_encoding='utf8')
SSL Connections¶
pg8000 accepts a Python SSLContext
object which may be specified using the
create_engine.connect_args
dictionary:
import ssl

import sqlalchemy as sa

ssl_context = ssl.create_default_context()
engine = sa.create_engine(
    "postgresql+pg8000://scott:tiger@192.168.0.199/test",
    connect_args={"ssl_context": ssl_context},
)
If the server uses an automatically-generated certificate that is self-signed or does not match the host name (as seen from the client), it may also be necessary to disable hostname checking:
import ssl

import sqlalchemy as sa

ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
engine = sa.create_engine(
    "postgresql+pg8000://scott:tiger@192.168.0.199/test",
    connect_args={"ssl_context": ssl_context},
)
pg8000 Transaction Isolation Level¶
The pg8000 dialect offers the same isolation level settings as that of the psycopg2 dialect:
READ COMMITTED
READ UNCOMMITTED
REPEATABLE READ
SERIALIZABLE
AUTOCOMMIT
asyncpg¶
Support for the PostgreSQL database via the asyncpg driver.
DBAPI¶
Documentation and download information (if applicable) for asyncpg is available at: https://magicstack.github.io/asyncpg/
Connecting¶
Connect String:
postgresql+asyncpg://user:password@host:port/dbname[?key=value&key=value...]
The asyncpg dialect is SQLAlchemy’s first Python asyncio dialect.
Using a special asyncio mediation layer, the asyncpg dialect is usable as the backend for the SQLAlchemy asyncio extension package.
This dialect should normally be used only with the
create_async_engine()
engine creation function:
from sqlalchemy.ext.asyncio import create_async_engine
engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname")
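As a brief sketch of how such an engine is then used with the asyncio extension, connections are acquired and statements awaited within a coroutine:

import asyncio

from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine

async def main():
    engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname")
    async with engine.connect() as conn:
        result = await conn.execute(text("SELECT 1"))
        print(result.scalar())
    # release pooled connections held by the engine
    await engine.dispose()

asyncio.run(main())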
The dialect can also be run as a “synchronous” dialect within the
create_engine()
function, which will pass “await” calls into
an ad-hoc event loop. This mode of operation is of limited use
and is for special testing scenarios only. The mode can be enabled by
adding the SQLAlchemy-specific flag async_fallback
to the URL
in conjunction with create_engine()
:
# for testing purposes only; do not use in production!
engine = create_engine("postgresql+asyncpg://user:pass@hostname/dbname?async_fallback=true")
New in version 1.4.
Note
By default asyncpg does not decode the json and jsonb types and returns them as strings. SQLAlchemy sets a default type decoder for the json and jsonb types using the Python builtin json.loads function. The json implementation used can be changed by setting the attribute json_deserializer when creating the engine with create_engine() or create_async_engine().
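As a brief sketch, an alternate deserializer (here just a wrapper around the stdlib json.loads) would be passed as follows:

import json

from sqlalchemy.ext.asyncio import create_async_engine

def my_json_deserializer(value):
    # any callable accepting the raw string and returning the
    # deserialized object may be used here
    return json.loads(value)

engine = create_async_engine(
    "postgresql+asyncpg://user:pass@hostname/dbname",
    json_deserializer=my_json_deserializer,
)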
Prepared Statement Cache¶
The asyncpg SQLAlchemy dialect makes use of asyncpg.connection.prepare()
for all statements. The prepared statement objects are cached after
construction which appears to grant a 10% or more performance improvement for
statement invocation. The cache is on a per-DBAPI connection basis, which
means that the primary storage for prepared statements is within DBAPI
connections pooled within the connection pool. The size of this cache
defaults to 100 statements per DBAPI connection and may be adjusted using the
prepared_statement_cache_size
DBAPI argument (note that while this argument
is implemented by SQLAlchemy, it is part of the DBAPI emulation portion of the
asyncpg dialect, therefore is handled as a DBAPI argument, not a dialect
argument):
engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=500")
To disable the prepared statement cache, use a value of zero:
engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=0")
New in version 1.4.0b2: Added prepared_statement_cache_size
for asyncpg.
Warning
The asyncpg
database driver necessarily uses caches for
PostgreSQL type OIDs, which become stale when custom PostgreSQL datatypes
such as ENUM
objects are changed via DDL operations. Additionally,
prepared statements themselves which are optionally cached by SQLAlchemy’s
driver as described above may also become “stale” when DDL has been emitted
to the PostgreSQL database which modifies the tables or other objects
involved in a particular prepared statement.
The SQLAlchemy asyncpg dialect will invalidate these caches within its local
process when statements that represent DDL are emitted on a local
connection, but this is only controllable within a single Python process /
database engine. If DDL changes are made from other database engines
and/or processes, a running application may encounter asyncpg exceptions InvalidCachedStatementError and/or InternalServerError("cache lookup failed for type <oid>") if it refers to pooled database connections which
operated upon the previous structures. The SQLAlchemy asyncpg dialect will
recover from these error cases when the driver raises these exceptions by
clearing its internal caches as well as those of the asyncpg driver in
response to them, but cannot prevent them from being raised in the first
place if the cached prepared statement or asyncpg type caches have gone
stale, nor can it retry the statement as the PostgreSQL transaction is
invalidated when these errors occur.
Disabling the PostgreSQL JIT to improve ENUM datatype handling¶
Asyncpg has an issue when using PostgreSQL ENUM datatypes, where upon the creation of new database connections, an expensive query may be emitted in order to retrieve metadata regarding custom types; this has been shown to negatively affect performance. To mitigate this issue, the PostgreSQL “jit” setting may be disabled from the client using this setting passed to create_async_engine():
engine = create_async_engine(
"postgresql+asyncpg://user:password@localhost/tmp",
connect_args={"server_settings": {"jit": "off"}},
)
psycopg2cffi¶
Support for the PostgreSQL database via the psycopg2cffi driver.
DBAPI¶
Documentation and download information (if applicable) for psycopg2cffi is available at: https://pypi.org/project/psycopg2cffi/
Connecting¶
Connect String:
postgresql+psycopg2cffi://user:password@host:port/dbname[?key=value&key=value...]
psycopg2cffi is an adaptation of psycopg2, using CFFI for the C layer. This makes it suitable for use in environments such as PyPy. Documentation is as per psycopg2.
New in version 1.0.0.
py-postgresql¶
Support for the PostgreSQL database via the py-postgresql driver.
DBAPI¶
Documentation and download information (if applicable) for py-postgresql is available at: https://python.projects.pgfoundry.org/
Connecting¶
Connect String:
postgresql+pypostgresql://user:password@host:port/dbname[?key=value&key=value...]
Note
The pypostgresql dialect is not tested as part of SQLAlchemy’s continuous integration and may have unresolved issues. The recommended PostgreSQL driver is psycopg2.
Deprecated since version 1.4: The py-postgresql DBAPI is deprecated and will be removed in a future version. This DBAPI is superseded by the external version available at external-dialect. Please use the external version or one of the supported DBAPIs to connect to PostgreSQL.
pygresql¶
Support for the PostgreSQL database via the pygresql driver.
DBAPI¶
Documentation and download information (if applicable) for pygresql is available at: https://www.pygresql.org/
Connecting¶
Connect String:
postgresql+pygresql://user:password@host:port/dbname[?key=value&key=value...]
Note
The pygresql dialect is not tested as part of SQLAlchemy’s continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.
Deprecated since version 1.4: The pygresql DBAPI is deprecated and will be removed in a future version. Please use one of the supported DBAPIs to connect to PostgreSQL.