Release: 1.0.0 | Release Date: Not released

SQLAlchemy 1.0 Documentation

Mapper Configuration

This section describes a variety of configurational patterns that are usable with mappers. It assumes you’ve worked through Object Relational Tutorial and know how to construct and use rudimentary mappers and relationships.

Classical Mappings

A Classical Mapping refers to the configuration of a mapped class using the mapper() function, without using the Declarative system. As an example, start with the declarative mapping introduced in Object Relational Tutorial:

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)

In “classical” form, the table metadata is created separately with the Table construct, then associated with the User class via the mapper() function:

from sqlalchemy import Table, MetaData, Column, ForeignKey, Integer, String
from sqlalchemy.orm import mapper

metadata = MetaData()

user = Table('user', metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String(50)),
            Column('fullname', String(50)),
            Column('password', String(12))
        )

class User(object):
    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

mapper(User, user)

Information about mapped attributes, such as relationships to other classes, is provided via the properties dictionary. The example below illustrates a second Table object, mapped to a class called Address, then linked to User via relationship():

from sqlalchemy.orm import relationship

address = Table('address', metadata,
            Column('id', Integer, primary_key=True),
            Column('user_id', Integer, ForeignKey('user.id')),
            Column('email_address', String(50))
            )

class Address(object):
    pass

mapper(User, user, properties={
    'addresses': relationship(Address, backref='user', order_by=address.c.id)
})

mapper(Address, address)

When using classical mappings, classes must be provided directly without the benefit of the “string lookup” system provided by Declarative. SQL expressions are typically specified in terms of the Table objects, i.e. address.c.id above for the Address relationship, and not Address.id, as Address may not yet be linked to table metadata, nor can we specify a string here.

Some examples in the documentation still use the classical approach, but note that the classical as well as Declarative approaches are fully interchangeable. Both systems ultimately create the same configuration, consisting of a Table and a user-defined class, linked together with a mapper(). When we talk about “the behavior of mapper()”, this includes when using the Declarative system as well - it’s still used, just behind the scenes.

Customizing Column Properties

The default behavior of mapper() is to assemble all the columns in the mapped Table into mapped object attributes, each of which is named according to the name of the column itself (specifically, the key attribute of Column). This behavior can be modified in several ways.

Naming Columns Distinctly from Attribute Names

A mapping by default shares the same name for a Column as that of the mapped attribute - specifically it matches the Column.key attribute on Column, which by default is the same as the Column.name.

The name assigned to the Python attribute which maps to Column can be different from either Column.name or Column.key just by assigning it that way, as we illustrate here in a Declarative mapping:

class User(Base):
    __tablename__ = 'user'
    id = Column('user_id', Integer, primary_key=True)
    name = Column('user_name', String(50))

Where above User.id resolves to a column named user_id and User.name resolves to a column named user_name.
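This distinction can be verified programmatically. Below is a minimal, self-contained sketch (the declarative Base here is illustrative); the Table knows the column under its database name, while the mapper property uses the attribute name as its key:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column('user_id', Integer, primary_key=True)
    name = Column('user_name', String(50))

# the Table collection is keyed by the database-side name
column = User.__table__.c.user_id      # column.name == 'user_id'

# the mapped property is keyed by the Python attribute name
prop = User.id.property               # prop.key == 'id'
```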

When mapping to an existing table, the Column object can be referenced directly:

class User(Base):
    __table__ = user_table
    id = user_table.c.user_id
    name = user_table.c.user_name

Or in a classical mapping, placed in the properties dictionary with the desired key:

mapper(User, user_table, properties={
   'id': user_table.c.user_id,
   'name': user_table.c.user_name,
})

In the next section we’ll examine the usage of .key more closely.

Automating Column Naming Schemes from Reflected Tables

In the previous section Naming Columns Distinctly from Attribute Names, we showed how a Column explicitly mapped to a class can have a different attribute name than the column. But what if we aren’t listing out Column objects explicitly, and instead are automating the production of Table objects using reflection (e.g. as described in Reflecting Database Objects)? In this case we can make use of the DDLEvents.column_reflect() event to intercept the production of Column objects and provide them with the Column.key of our choice:

@event.listens_for(Table, "column_reflect")
def column_reflect(inspector, table, column_info):
    # set column.key = "attr_<lower_case_name>"
    column_info['key'] = "attr_%s" % column_info['name'].lower()

With the above event in place, the reflection of Column objects will be intercepted by our listener, which assigns a new “.key” element, allowing a mapping such as the one below:

class MyClass(Base):
    __table__ = Table("some_table", Base.metadata,
                autoload=True, autoload_with=some_engine)

If we want to qualify our event to only react for the specific MetaData object above, we can check for it in our event:

@event.listens_for(Table, "column_reflect")
def column_reflect(inspector, table, column_info):
    if table.metadata is Base.metadata:
        # set column.key = "attr_<lower_case_name>"
        column_info['key'] = "attr_%s" % column_info['name'].lower()
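A minimal end-to-end sketch of this event is below, using an in-memory SQLite database created on the fly (the table and its columns are illustrative; note that with modern SQLAlchemy, passing autoload_with alone triggers reflection, while older versions also require autoload=True):

```python
from sqlalchemy import create_engine, event, text, MetaData, Table

# create a throwaway database with a table to reflect
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE some_table (id INTEGER PRIMARY KEY, name VARCHAR(50))"))

metadata = MetaData()

@event.listens_for(Table, "column_reflect")
def column_reflect(inspector, table, column_info):
    # only touch tables reflected into this particular MetaData
    if table.metadata is metadata:
        column_info['key'] = "attr_%s" % column_info['name'].lower()

# reflection runs the event for each column
some_table = Table("some_table", metadata, autoload_with=engine)

# columns are now addressable by the generated keys
id_column = some_table.c.attr_id
```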

Naming All Columns with a Prefix

A quick approach to prefix column names, typically when mapping to an existing Table object, is to use column_prefix:

class User(Base):
    __table__ = user_table
    __mapper_args__ = {'column_prefix':'_'}

The above will place attribute names such as _user_id, _user_name, _password etc. on the mapped User class.

This approach is uncommon in modern usage. For dealing with reflected tables, a more flexible approach is the one described in Automating Column Naming Schemes from Reflected Tables.

Using column_property for column level options

Options can be specified when mapping a Column using the column_property() function. This function explicitly creates the ColumnProperty used by the mapper() to keep track of the Column; normally, the mapper() creates this automatically. Using column_property(), we can pass additional arguments about how we’d like the Column to be mapped. Below, we pass an option active_history, which specifies that a change to this column’s value should result in the former value being loaded first:

from sqlalchemy.orm import column_property

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = column_property(Column(String(50)), active_history=True)

column_property() is also used to map a single attribute to multiple columns. This use case arises when mapping to a join() which has attributes which are equated to each other:

class User(Base):
    __table__ = user_table.join(address_table)

    # assign "user.id", "address.user_id" to the
    # "id" attribute
    id = column_property(user_table.c.id, address_table.c.user_id)

For more examples featuring this usage, see Mapping a Class against Multiple Tables.

Another place where column_property() is needed is to specify SQL expressions as mapped attributes, such as below where we create an attribute fullname that is the string concatenation of the firstname and lastname columns:

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    fullname = column_property(firstname + " " + lastname)

See examples of this usage at SQL Expressions as Mapped Attributes.
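The “fullname” mapping can be exercised end to end against an in-memory SQLite database; below is a minimal runnable sketch (the engine and sample data are illustrative). The expression is rendered inline in the SELECT that loads the row, and is usable in filter criteria as well:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import column_property, Session

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    fullname = column_property(firstname + " " + lastname)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(bind=engine)
session.add(User(firstname="John", lastname="Smith"))
session.commit()

# the column_property participates in SELECT and in WHERE criteria
user = session.query(User).filter(User.fullname == "John Smith").one()
```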

sqlalchemy.orm.column_property(*columns, **kwargs)

Provide a column-level property for use with a Mapper.

Column-based properties can normally be applied to the mapper’s properties dictionary using the Column element directly. Use this function when the given column is not directly present within the mapper’s selectable; examples include SQL expressions, functions, and scalar SELECT queries.

Columns that aren’t present in the mapper’s selectable won’t be persisted by the mapper and are effectively “read-only” attributes.

Parameters:
  • *columns – list of Column objects to be mapped.
  • active_history=False

    When True, indicates that the “previous” value for a scalar attribute should be loaded when replaced, if not already loaded. Normally, history tracking logic for simple non-primary-key scalar values only needs to be aware of the “new” value in order to perform a flush. This flag is available for applications that make use of attributes.get_history() or Session.is_modified() which also need to know the “previous” value of the attribute.

    New in version 0.6.6.

  • comparator_factory – a class which extends ColumnProperty.Comparator which provides custom SQL clause generation for comparison operations.
  • group – a group name for this property when marked as deferred.
  • deferred – when True, the column property is “deferred”, meaning that it does not load immediately, and is instead loaded when the attribute is first accessed on an instance. See also deferred().
  • doc – optional string that will be applied as the doc on the class-bound descriptor.
  • expire_on_flush=True

    Disable expiry on flush. A column_property() which refers to a SQL expression (and not a single table-bound column) is considered to be a “read only” property; populating it has no effect on the state of data, and it can only return database state. For this reason a column_property()’s value is expired whenever the parent object is involved in a flush, that is, has any kind of “dirty” state within a flush. Setting this parameter to False will have the effect of leaving any existing value present after the flush proceeds. Note, however, that the Session with default expiration settings still expires all attributes after a Session.commit() call.

    New in version 0.7.3.

  • info

    Optional data dictionary which will be populated into the MapperProperty.info attribute of this object.

    New in version 0.8.

  • extension – an AttributeExtension instance, or list of extensions, which will be prepended to the list of attribute listeners for the resulting descriptor placed on the class. Deprecated. Please see AttributeEvents.

Mapping a Subset of Table Columns

Sometimes a Table object is made available using the reflection process described at Reflecting Database Objects, which loads the table’s structure from the database. For such a table that has lots of columns that don’t need to be referenced in the application, the include_properties or exclude_properties arguments can specify that only a subset of columns should be mapped. For example:

class User(Base):
    __table__ = user_table
    __mapper_args__ = {
        'include_properties': ['user_id', 'user_name']
    }

...will map the User class to the user_table table, only including the user_id and user_name columns - the rest are not referenced. Similarly:

class Address(Base):
    __table__ = address_table
    __mapper_args__ = {
        'exclude_properties': ['street', 'city', 'state', 'zip']
    }

...will map the Address class to the address_table table, including all columns present except street, city, state, and zip.

When this mapping is used, the columns that are not included will not be referenced in any SELECT statements emitted by Query, nor will there be any mapped attribute on the mapped class which represents the column; assigning an attribute of that name will have no effect beyond that of a normal Python attribute assignment.
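The effect on the mapped class is straightforward to observe; in the minimal sketch below (the table and column names are illustrative), the excluded columns simply never become class attributes:

```python
from sqlalchemy import Column, Integer, String, Table
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

address_table = Table(
    'address', Base.metadata,
    Column('id', Integer, primary_key=True),
    Column('email_address', String(50)),
    Column('street', String(50)),
    Column('city', String(50)),
)

class Address(Base):
    __table__ = address_table
    __mapper_args__ = {'exclude_properties': ['street', 'city']}

# included columns are mapped; excluded ones are absent entirely
has_email = hasattr(Address, 'email_address')
has_street = hasattr(Address, 'street')
```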

In some cases, multiple columns may have the same name, such as when mapping to a join of two or more tables that share some column name. include_properties and exclude_properties can also accommodate Column objects to more accurately describe which columns should be included or excluded:

class UserAddress(Base):
    __table__ = user_table.join(address_table)
    __mapper_args__ = {
        'exclude_properties': [address_table.c.id],
        'primary_key': [user_table.c.id]
    }

Note

insert and update defaults configured on individual Column objects, i.e. those described at Column Insert/Update Defaults, including those configured by the default, onupdate, server_default and server_onupdate arguments, will continue to function normally even if those Column objects are not mapped. This is because in the case of default and onupdate, the Column object is still present on the underlying Table, thus allowing the default functions to take place when the ORM emits an INSERT or UPDATE, and in the case of server_default and server_onupdate, the relational database itself maintains these functions.

Deferred Column Loading

This feature allows particular columns of a table to be loaded only upon direct access, instead of when the entity is queried using Query. This feature is useful when one wants to avoid loading a large text or binary field into memory when it’s not needed. Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together, using the orm.deferred() function to mark them as “deferred”. In the example below, we define a mapping that will load each of .excerpt and .photo in separate, individual-row SELECT statements when each attribute is first referenced on the individual object instance:

from sqlalchemy.orm import deferred
from sqlalchemy import Integer, String, Text, Binary, Column

class Book(Base):
    __tablename__ = 'book'

    book_id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    summary = Column(String(2000))
    excerpt = deferred(Column(Text))
    photo = deferred(Column(Binary))

With classical mappings, as always, orm.deferred() is placed in the properties dictionary against the table-bound Column:

mapper(Book, book_table, properties={
    'photo':deferred(book_table.c.photo)
})

Deferred columns can be associated with a “group” name, so that they load together when any of them are first accessed. The example below defines a mapping with a photos deferred group. When one .photo is accessed, all three photos will be loaded in one SELECT statement. The .excerpt will be loaded separately when it is accessed:

class Book(Base):
    __tablename__ = 'book'

    book_id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    summary = Column(String(2000))
    excerpt = deferred(Column(Text))
    photo1 = deferred(Column(Binary), group='photos')
    photo2 = deferred(Column(Binary), group='photos')
    photo3 = deferred(Column(Binary), group='photos')

You can defer or undefer columns at the Query level using options, including orm.defer() and orm.undefer():

from sqlalchemy.orm import defer, undefer

query = session.query(Book)
query = query.options(defer('summary'))
query = query.options(undefer('excerpt'))
query.all()
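The deferral behavior can be observed directly on an instance by checking its loaded state; below is a minimal runnable sketch against an in-memory SQLite database (the sample data is illustrative):

```python
from sqlalchemy import create_engine, Column, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import deferred, Session

Base = declarative_base()

class Book(Base):
    __tablename__ = 'book'
    book_id = Column(Integer, primary_key=True)
    title = Column(String(200))
    excerpt = deferred(Column(Text))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(bind=engine)
session.add(Book(title="Some Title", excerpt="a very long excerpt"))
session.commit()

book = session.query(Book).first()
# the deferred column is absent from the instance's loaded state...
loaded_before = 'excerpt' in book.__dict__
# ...until it is accessed, which emits a second SELECT for that column
text_value = book.excerpt
loaded_after = 'excerpt' in book.__dict__
```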

orm.deferred() attributes which are marked with a “group” can be undeferred using orm.undefer_group(), sending in the group name:

from sqlalchemy.orm import undefer_group

query = session.query(Book)
query.options(undefer_group('photos')).all()

Load Only Cols

An arbitrary set of columns can be selected as “load only” columns, which will be loaded while deferring all other columns on a given entity, using orm.load_only():

from sqlalchemy.orm import load_only

session.query(Book).options(load_only("summary", "excerpt"))

New in version 0.9.0.

Deferred Loading with Multiple Entities

To specify column deferral options within a Query that loads multiple types of entity, the Load object can specify which parent entity to start with:

from sqlalchemy.orm import Load

query = session.query(Book, Author).join(Book.author)
query = query.options(
            Load(Book).load_only("summary", "excerpt"),
            Load(Author).defer("bio")
        )

To specify column deferral options along the path of various relationships, the options support chaining, where the loading style of each relationship is specified first, then is chained to the deferral options. Such as, to load Book instances, then joined-eager-load the Author, then apply deferral options to the Author entity:

from sqlalchemy.orm import joinedload

query = session.query(Book)
query = query.options(
            joinedload(Book.author).load_only("summary", "excerpt"),
        )

In the case where the loading style of parent relationships should be left unchanged, use orm.defaultload():

from sqlalchemy.orm import defaultload

query = session.query(Book)
query = query.options(
            defaultload(Book.author).load_only("summary", "excerpt"),
        )

New in version 0.9.0: support for Load and other options which allow for better targeting of deferral options.

Column Deferral API

sqlalchemy.orm.deferred(*columns, **kw)

Indicate a column-based mapped attribute that by default will not load unless accessed.

Parameters:
  • *columns – columns to be mapped. This is typically a single Column object, however a collection is supported in order to support multiple columns mapped under the same attribute.
  • **kw – additional keyword arguments passed to ColumnProperty.
sqlalchemy.orm.defer(key, *addl_attrs)

Indicate that the given column-oriented attribute should be deferred, e.g. not loaded until accessed.

This function is part of the Load interface and supports both method-chained and standalone operation.

e.g.:

from sqlalchemy.orm import defer

session.query(MyClass).options(
                    defer("attribute_one"),
                    defer("attribute_two"))

session.query(MyClass).options(
                    defer(MyClass.attribute_one),
                    defer(MyClass.attribute_two))

To specify a deferred load of an attribute on a related class, the path can be specified one token at a time, specifying the loading style for each link along the chain. To leave the loading style for a link unchanged, use orm.defaultload():

session.query(MyClass).options(defaultload("someattr").defer("some_column"))

A Load object that is present on a certain path can have Load.defer() called multiple times, each will operate on the same parent entity:

session.query(MyClass).options(
                defaultload("someattr").
                    defer("some_column").
                    defer("some_other_column").
                    defer("another_column")
    )
Parameters:
  • key – Attribute to be deferred.
  • *addl_attrs – Deprecated; this option supports the old 0.8 style of specifying a path as a series of attributes, which is now superseded by the method-chained style.
sqlalchemy.orm.load_only(*attrs)

Indicate that for a particular entity, only the given list of column-based attribute names should be loaded; all others will be deferred.

This function is part of the Load interface and supports both method-chained and standalone operation.

Example - given a class User, load only the name and fullname attributes:

session.query(User).options(load_only("name", "fullname"))

Example - given a relationship User.addresses -> Address, specify subquery loading for the User.addresses collection, but on each Address object load only the email_address attribute:

session.query(User).options(
        subqueryload("addresses").load_only("email_address")
)

For a Query that has multiple entities, the lead entity can be specifically referred to using the Load constructor:

session.query(User, Address).join(User.addresses).options(
            Load(User).load_only("name", "fullname"),
            Load(Address).load_only("email_address")
        )

New in version 0.9.0.

sqlalchemy.orm.undefer(key, *addl_attrs)

Indicate that the given column-oriented attribute should be undeferred, e.g. specified within the SELECT statement of the entity as a whole.

The column being undeferred is typically set up on the mapping as a deferred() attribute.

This function is part of the Load interface and supports both method-chained and standalone operation.

Examples:

# undefer two columns
session.query(MyClass).options(undefer("col1"), undefer("col2"))

# undefer all columns specific to a single class using Load + *
session.query(MyClass, MyOtherClass).options(
    Load(MyClass).undefer("*"))
Parameters:
  • key – Attribute to be undeferred.
  • *addl_attrs – Deprecated; this option supports the old 0.8 style of specifying a path as a series of attributes, which is now superseded by the method-chained style.
sqlalchemy.orm.undefer_group(name)

Indicate that columns within the given deferred group name should be undeferred.

The columns being undeferred are set up on the mapping as deferred() attributes and include a “group” name.

E.g.:

session.query(MyClass).options(undefer_group("large_attrs"))

To undefer a group of attributes on a related entity, the path can be spelled out using relationship loader options, such as orm.defaultload():

session.query(MyClass).options(
    defaultload("someattr").undefer_group("large_attrs"))

Changed in version 0.9.0: orm.undefer_group() is now specific to a particular entity load path.

SQL Expressions as Mapped Attributes

Attributes on a mapped class can be linked to SQL expressions, which can be used in queries.

Using a Hybrid

The easiest and most flexible way to link relatively simple SQL expressions to a class is to use a so-called “hybrid attribute”, described in the section Hybrid Attributes. The hybrid provides for an expression that works at both the Python level as well as at the SQL expression level. For example, below we map a class User, containing attributes firstname and lastname, and include a hybrid that will provide for us the fullname, which is the string concatenation of the two:

from sqlalchemy.ext.hybrid import hybrid_property

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))

    @hybrid_property
    def fullname(self):
        return self.firstname + " " + self.lastname

Above, the fullname attribute is interpreted at both the instance and class level, so that it is available from an instance:

some_user = session.query(User).first()
print some_user.fullname

as well as usable within queries:

some_user = session.query(User).filter(User.fullname == "John Smith").first()

The string concatenation example is a simple one, where the Python expression can be dual purposed at the instance and class level. Often, the SQL expression must be distinguished from the Python expression, which can be achieved using hybrid_property.expression(). Below we illustrate the case where a conditional needs to be present inside the hybrid, using the if statement in Python and the sql.expression.case() construct for SQL expressions:

from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.sql import case

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))

    @hybrid_property
    def fullname(self):
        if self.firstname is not None:
            return self.firstname + " " + self.lastname
        else:
            return self.lastname

    @fullname.expression
    def fullname(cls):
        return case([
            (cls.firstname != None, cls.firstname + " " + cls.lastname),
        ], else_ = cls.lastname)

Using column_property

The orm.column_property() function can be used to map a SQL expression in a manner similar to a regularly mapped Column. With this technique, the attribute is loaded along with all other column-mapped attributes at load time. This is in some cases an advantage over the usage of hybrids, as the value can be loaded up front at the same time as the parent row of the object, particularly if the expression is one which links to other tables (typically as a correlated subquery) to access data that wouldn’t normally be available on an already loaded object.

Disadvantages to using orm.column_property() for SQL expressions include that the expression must be compatible with the SELECT statement emitted for the class as a whole, and there are also some configurational quirks which can occur when using orm.column_property() from declarative mixins.

Our “fullname” example can be expressed using orm.column_property() as follows:

from sqlalchemy.orm import column_property

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    fullname = column_property(firstname + " " + lastname)

Correlated subqueries may be used as well. Below we use the select() construct to create a SELECT that links together the count of Address objects available for a particular User:

from sqlalchemy.orm import column_property
from sqlalchemy import select, func
from sqlalchemy import Column, Integer, String, ForeignKey

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'))

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    address_count = column_property(
        select([func.count(Address.id)]).\
            where(Address.user_id==id).\
            correlate_except(Address)
    )

In the above example, we define a select() construct like the following:

select([func.count(Address.id)]).\
    where(Address.user_id==id).\
    correlate_except(Address)

The meaning of the above statement is: select the count of Address.id rows where the Address.user_id column is equated to id, which in the context of the User class is the Column named id (note that id is also the name of a Python built-in function, which is not what we want to use here - if we were outside of the User class definition, we’d use User.id).

The select.correlate_except() directive indicates that each element in the FROM clause of this select() may be omitted from the FROM list (that is, correlated to the enclosing SELECT statement against User) except for the one corresponding to Address. This isn’t strictly necessary, but prevents Address from being inadvertently omitted from the FROM list in the case of a long string of joins between User and Address tables where SELECT statements against Address are nested.

If import issues prevent the column_property() from being defined inline with the class, it can be assigned to the class after both are configured. In Declarative this has the effect of calling Mapper.add_property() to add an additional property after the fact:

User.address_count = column_property(
        select([func.count(Address.id)]).\
            where(Address.user_id==User.id)
    )

For many-to-many relationships, use and_() to join the fields of the association table to both tables in a relation, illustrated here with a classical mapping:

from sqlalchemy import and_

mapper(Author, authors, properties={
    'book_count': column_property(
                        select([func.count(books.c.id)],
                            and_(
                                book_authors.c.author_id==authors.c.id,
                                book_authors.c.book_id==books.c.id
                            )))
    })

Using a plain descriptor

In cases where a SQL query more elaborate than what orm.column_property() or hybrid_property can provide must be emitted, a regular Python function accessed as an attribute can be used, assuming the expression only needs to be available on an already-loaded instance. The function is decorated with Python’s own @property decorator to mark it as a read-only attribute. Within the function, object_session() is used to locate the Session corresponding to the current object, which is then used to emit a query:

from sqlalchemy.orm import object_session
from sqlalchemy import select, func

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))

    @property
    def address_count(self):
        return object_session(self).\
            scalar(
                select([func.count(Address.id)]).\
                    where(Address.user_id==self.id)
            )

The plain descriptor approach is useful as a last resort, but is less performant in the usual case than both the hybrid and column property approaches, in that it needs to emit a SQL query upon each access.

Changing Attribute Behavior

Simple Validators

A quick way to add a “validation” routine to an attribute is to use the validates() decorator. An attribute validator can raise an exception, halting the process of mutating the attribute’s value, or can change the given value into something different. Validators, like all attribute extensions, are only called by normal userland code; they are not issued when the ORM is populating the object:

from sqlalchemy.orm import validates

class EmailAddress(Base):
    __tablename__ = 'address'

    id = Column(Integer, primary_key=True)
    email = Column(String)

    @validates('email')
    def validate_email(self, key, address):
        assert '@' in address
        return address

Changed in version 1.0.0: validators are no longer triggered within the flush process when newly fetched values for primary key columns, as well as some Python- or server-side defaults, are fetched. Prior to 1.0, validators could be triggered in those cases as well.
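Because validation fires on ordinary attribute assignment (including keyword arguments to the declarative constructor), the behavior is easy to demonstrate without a database; a minimal sketch, with illustrative sample values:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import validates

Base = declarative_base()

class EmailAddress(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    email = Column(String)

    @validates('email')
    def validate_email(self, key, address):
        assert '@' in address
        return address

# a valid assignment passes through the validator unchanged
ok = EmailAddress(email="user@example.com")

# an invalid assignment raises before the attribute is set
try:
    EmailAddress(email="not-an-address")
    raised = False
except AssertionError:
    raised = True
```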

Validators also receive collection append events, when items are added to a collection:

from sqlalchemy.orm import validates

class User(Base):
    # ...

    addresses = relationship("Address")

    @validates('addresses')
    def validate_address(self, key, address):
        assert '@' in address.email
        return address

The validation function by default does not get emitted for collection remove events, as the typical expectation is that a value being discarded doesn’t require validation. However, validates() supports reception of these events by specifying include_removes=True to the decorator. When this flag is set, the validation function must receive an additional boolean argument which if True indicates that the operation is a removal:

from sqlalchemy.orm import validates

class User(Base):
    # ...

    addresses = relationship("Address")

    @validates('addresses', include_removes=True)
    def validate_address(self, key, address, is_remove):
        if is_remove:
            raise ValueError(
                    "not allowed to remove items from the collection")
        else:
            assert '@' in address.email
            return address

The case where mutually dependent validators are linked via a backref can also be tailored, using the include_backrefs=False option; this option, when set to False, prevents a validation function from emitting if the event occurs as a result of a backref:

from sqlalchemy.orm import validates

class User(Base):
    # ...

    addresses = relationship("Address", backref='user')

    @validates('addresses', include_backrefs=False)
    def validate_address(self, key, address):
        assert '@' in address.email
        return address

Above, if we were to assign to Address.user as in some_address.user = some_user, the validate_address() function would not be emitted, even though an append occurs to some_user.addresses - the event is caused by a backref.

Note that the validates() decorator is a convenience function built on top of attribute events. An application that requires more control over configuration of attribute change behavior can make use of this system, described at AttributeEvents.

sqlalchemy.orm.validates(*names, **kw)

Decorate a method as a ‘validator’ for one or more named properties.

Designates a method as a validator, a method which receives the name of the attribute as well as a value to be assigned, or in the case of a collection, the value to be added to the collection. The function can then raise validation exceptions to halt the process (where Python’s built-in ValueError and AssertionError exceptions are reasonable choices), or can modify or replace the value before proceeding. The function should otherwise return the given value.

Note that a validator for a collection cannot issue a load of that collection within the validation routine - this usage raises an assertion to avoid recursion overflows. This is a reentrant condition which is not supported.

Parameters:
  • *names – list of attribute names to be validated.
  • include_removes

    if True, “remove” events will be sent as well - the validation function must accept an additional argument “is_remove” which will be a boolean.

    New in version 0.7.7.

  • include_backrefs

    defaults to True; if False, the validation function will not emit if the originator is an attribute event related via a backref. This can be used for bi-directional validates() usage where only one validator should emit per attribute operation.

    New in version 0.9.0.

See also

Simple Validators - usage examples for validates()

Using Descriptors and Hybrids

A more comprehensive way to produce modified behavior for an attribute is to use descriptors. These are commonly used in Python using the property() function. The standard SQLAlchemy technique for descriptors is to create a plain descriptor, and to have it read/write from a mapped attribute with a different name. Below we illustrate this using Python 2.6-style properties:

class EmailAddress(Base):
    __tablename__ = 'email_address'

    id = Column(Integer, primary_key=True)

    # name the attribute with an underscore,
    # different from the column name
    _email = Column("email", String)

    # then create an ".email" attribute
    # to get/set "._email"
    @property
    def email(self):
        return self._email

    @email.setter
    def email(self, email):
        self._email = email

The approach above will work, but there’s more we can add. While our EmailAddress object will shuttle the value through the email descriptor and into the _email mapped attribute, the class level EmailAddress.email attribute does not have the usual expression semantics usable with Query. To provide these, we instead use the hybrid extension as follows:

from sqlalchemy.ext.hybrid import hybrid_property

class EmailAddress(Base):
    __tablename__ = 'email_address'

    id = Column(Integer, primary_key=True)

    _email = Column("email", String)

    @hybrid_property
    def email(self):
        return self._email

    @email.setter
    def email(self, email):
        self._email = email

The .email attribute, in addition to providing getter/setter behavior when we have an instance of EmailAddress, also provides a SQL expression when used at the class level, that is, from the EmailAddress class directly:

from sqlalchemy.orm import Session
session = Session()

address = session.query(EmailAddress).\
                 filter(EmailAddress.email == 'address@example.com').\
                 one()

address.email = 'otheraddress@example.com'
session.commit()

The hybrid_property also allows us to change the behavior of the attribute, including defining separate behaviors when the attribute is accessed at the instance level versus at the class/expression level, using the hybrid_property.expression() modifier. For example, if we wanted to add a host name automatically, we might define two sets of string manipulation logic:

class EmailAddress(Base):
    __tablename__ = 'email_address'

    id = Column(Integer, primary_key=True)

    _email = Column("email", String)

    @hybrid_property
    def email(self):
        """Return the value of _email up until the last twelve
        characters."""

        return self._email[:-12]

    @email.setter
    def email(self, email):
        """Set the value of _email, tacking on the twelve character
        value @example.com."""

        self._email = email + "@example.com"

    @email.expression
    def email(cls):
        """Produce a SQL expression that represents the value
        of the _email column, minus the last twelve characters."""

        return func.substr(cls._email, 0, func.length(cls._email) - 12)

Above, accessing the email property of an instance of EmailAddress returns the value of the _email attribute with the @example.com hostname removed; setting the property appends the hostname to the given value. When we query against the email attribute, a SQL function is rendered which produces the same effect:

address = session.query(EmailAddress).filter(EmailAddress.email == 'address').one()

Read more about Hybrids at Hybrid Attributes.

Synonyms

Synonyms are a mapper-level construct that allow any attribute on a class to “mirror” another attribute that is mapped.

In the most basic sense, the synonym is an easy way to make a certain attribute available by an additional name:

class MyClass(Base):
    __tablename__ = 'my_table'

    id = Column(Integer, primary_key=True)
    job_status = Column(String(50))

    status = synonym("job_status")

The above class MyClass has two attributes, .job_status and .status that will behave as one attribute, both at the expression level:

>>> print MyClass.job_status == 'some_status'
my_table.job_status = :job_status_1

>>> print MyClass.status == 'some_status'
my_table.job_status = :job_status_1

and at the instance level:

>>> m1 = MyClass(status='x')
>>> m1.status, m1.job_status
('x', 'x')

>>> m1.job_status = 'y'
>>> m1.status, m1.job_status
('y', 'y')

The synonym() can be used for any kind of mapped attribute that subclasses MapperProperty, including mapped columns and relationships, as well as synonyms themselves.

Beyond a simple mirror, synonym() can also be made to reference a user-defined descriptor. We can supply our status synonym with a @property:

class MyClass(Base):
    __tablename__ = 'my_table'

    id = Column(Integer, primary_key=True)
    status = Column(String(50))

    @property
    def job_status(self):
        return "Status: " + self.status

    job_status = synonym("status", descriptor=job_status)

When using Declarative, the above pattern can be expressed more succinctly using the synonym_for() decorator:

from sqlalchemy.ext.declarative import synonym_for

class MyClass(Base):
    __tablename__ = 'my_table'

    id = Column(Integer, primary_key=True)
    status = Column(String(50))

    @synonym_for("status")
    @property
    def job_status(self):
        return "Status: " + self.status

While the synonym() is useful for simple mirroring, the use case of augmenting attribute behavior with descriptors is better handled in modern usage using the hybrid attribute feature, which is more oriented towards Python descriptors. Technically, a synonym() can do everything that a hybrid_property can do, as it also supports injection of custom SQL capabilities, but the hybrid is more straightforward to use in more complex situations.

sqlalchemy.orm.synonym(name, map_column=None, descriptor=None, comparator_factory=None, doc=None, info=None)

Denote an attribute name as a synonym to a mapped property, in that the attribute will mirror the value and expression behavior of another attribute.

Parameters:
  • name – the name of the existing mapped property. This can refer to the string name of any MapperProperty configured on the class, including column-bound attributes and relationships.
  • descriptor – a Python descriptor that will be used as a getter (and potentially a setter) when this attribute is accessed at the instance level.
  • map_column

    if True, the synonym() construct will locate the existing named MapperProperty based on the attribute name of this synonym(), and assign it to a new attribute linked to the name of this synonym(). That is, given a mapping like:

    class MyClass(Base):
        __tablename__ = 'my_table'
    
        id = Column(Integer, primary_key=True)
        job_status = Column(String(50))
    
        job_status = synonym("_job_status", map_column=True)

    The above class MyClass will now have the job_status Column object mapped to the attribute named _job_status, and the attribute named job_status will refer to the synonym itself. This feature is typically used in conjunction with the descriptor argument in order to link a user-defined descriptor as a “wrapper” for an existing column.

  • info

    Optional data dictionary which will be populated into the InspectionAttr.info attribute of this object.

    New in version 1.0.0.

  • comparator_factory

    A subclass of PropComparator that will provide custom comparison behavior at the SQL expression level.

    Note

    For the use case of providing an attribute which redefines both Python-level and SQL-expression level behavior of an attribute, please refer to the Hybrid attribute introduced at Using Descriptors and Hybrids for a more effective technique.

See also

Synonyms - examples of functionality.

Using Descriptors and Hybrids - Hybrids provide a better approach for more complicated attribute-wrapping schemes than synonyms.

Operator Customization

The “operators” used by the SQLAlchemy ORM and Core expression language are fully customizable. For example, the comparison expression User.name == 'ed' makes usage of an operator built into Python itself called operator.eq - the actual SQL construct which SQLAlchemy associates with such an operator can be modified. New operations can be associated with column expressions as well. The operators which take place for column expressions are most directly redefined at the type level - see the section Redefining and Creating New Operators for a description.

ORM level functions like column_property(), relationship(), and composite() also provide for operator redefinition at the ORM level, by passing a PropComparator subclass to the comparator_factory argument of each function. Customization of operators at this level is a rare use case. See the documentation at PropComparator for an overview.

Composite Column Types

Sets of columns can be associated with a single user-defined datatype. The ORM provides a single attribute which represents the group of columns using the class you provide.

Changed in version 0.7: Composites have been simplified such that they no longer “conceal” the underlying column based attributes. Additionally, in-place mutation is no longer automatic; see the section below on enabling mutability to support tracking of in-place changes.

Changed in version 0.9: Composites will return their object-form, rather than as individual columns, when used in a column-oriented Query construct. See Composite attributes are now returned as their object form when queried on a per-attribute basis.

A simple example represents pairs of columns as a Point object. Point represents such a pair as .x and .y:

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __composite_values__(self):
        return self.x, self.y

    def __repr__(self):
        return "Point(x=%r, y=%r)" % (self.x, self.y)

    def __eq__(self, other):
        return isinstance(other, Point) and \
            other.x == self.x and \
            other.y == self.y

    def __ne__(self, other):
        return not self.__eq__(other)

The requirements for the custom datatype class are that it have a constructor which accepts positional arguments corresponding to its column format, and also provides a method __composite_values__() which returns the state of the object as a list or tuple, in order of its column-based attributes. It also should supply adequate __eq__() and __ne__() methods which test the equality of two instances.
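These requirements can be checked in plain Python, with no ORM involved: constructing a new object from the tuple returned by __composite_values__() should produce an equal object. A small sketch:

```python
class Point(object):
    def __init__(self, x, y):
        # constructor accepts the column values positionally
        self.x = x
        self.y = y

    def __composite_values__(self):
        # return state in the same order as the columns
        return self.x, self.y

    def __eq__(self, other):
        return isinstance(other, Point) and \
            other.x == self.x and \
            other.y == self.y

    def __ne__(self, other):
        return not self.__eq__(other)

# round trip: the values the ORM would persist reconstruct
# an equal object
p = Point(3, 4)
copy = Point(*p.__composite_values__())
```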

We will create a mapping to a table vertice, which represents two points as x1/y1 and x2/y2. These are created normally as Column objects. Then, the composite() function is used to assign new attributes that will represent sets of columns via the Point class:

from sqlalchemy import Column, Integer
from sqlalchemy.orm import composite
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Vertex(Base):
    __tablename__ = 'vertice'

    id = Column(Integer, primary_key=True)
    x1 = Column(Integer)
    y1 = Column(Integer)
    x2 = Column(Integer)
    y2 = Column(Integer)

    start = composite(Point, x1, y1)
    end = composite(Point, x2, y2)

A classical mapping above would define each composite() against the existing table:

mapper(Vertex, vertice_table, properties={
    'start':composite(Point, vertice_table.c.x1, vertice_table.c.y1),
    'end':composite(Point, vertice_table.c.x2, vertice_table.c.y2),
})

We can now persist and use Vertex instances, as well as query for them, using the .start and .end attributes against ad-hoc Point instances:

>>> v = Vertex(start=Point(3, 4), end=Point(5, 6))
>>> session.add(v)
>>> q = session.query(Vertex).filter(Vertex.start == Point(3, 4))
>>> print q.first().start
Point(x=3, y=4)

sqlalchemy.orm.composite(class_, *attrs, **kwargs)

Return a composite column-based property for use with a Mapper.

See the mapping documentation section Composite Column Types for a full usage example.

The MapperProperty returned by composite() is the CompositeProperty.

Parameters:
  • class_ – The “composite type” class.
  • *cols – List of Column objects to be mapped.
  • active_history=False

    When True, indicates that the “previous” value for a scalar attribute should be loaded when replaced, if not already loaded. See the same flag on column_property().

    Changed in version 0.7: This flag specifically becomes meaningful - previously it was a placeholder.

  • group – A group name for this property when marked as deferred.
  • deferred – When True, the column property is “deferred”, meaning that it does not load immediately, and is instead loaded when the attribute is first accessed on an instance. See also deferred().
  • comparator_factory – a class which extends CompositeProperty.Comparator which provides custom SQL clause generation for comparison operations.
  • doc – optional string that will be applied as the doc on the class-bound descriptor.
  • info

    Optional data dictionary which will be populated into the MapperProperty.info attribute of this object.

    New in version 0.8.

  • extension – an AttributeExtension instance, or list of extensions, which will be prepended to the list of attribute listeners for the resulting descriptor placed on the class. Deprecated. Please see AttributeEvents.

Tracking In-Place Mutations on Composites

In-place changes to an existing composite value are not tracked automatically. Instead, the composite class needs to provide events to its parent object explicitly. This task is largely automated via the usage of the MutableComposite mixin, which uses events to associate each user-defined composite object with all parent associations. Please see the example in Establishing Mutability on Composites.

Changed in version 0.7: In-place changes to an existing composite value are no longer tracked automatically; the functionality is superseded by the MutableComposite class.

Redefining Comparison Operations for Composites

The “equals” comparison operation by default produces an AND of all corresponding columns equated to one another. This can be changed using the comparator_factory argument to composite(), where we specify a custom CompositeProperty.Comparator class to define existing or new operations. Below we illustrate the “greater than” operator, implementing the same expression that the base “greater than” does:

from sqlalchemy.orm.properties import CompositeProperty
from sqlalchemy import sql

class PointComparator(CompositeProperty.Comparator):
    def __gt__(self, other):
        """redefine the 'greater than' operation"""

        return sql.and_(*[a>b for a, b in
                          zip(self.__clause_element__().clauses,
                              other.__composite_values__())])

class Vertex(Base):
    __tablename__ = 'vertice'

    id = Column(Integer, primary_key=True)
    x1 = Column(Integer)
    y1 = Column(Integer)
    x2 = Column(Integer)
    y2 = Column(Integer)

    start = composite(Point, x1, y1,
                        comparator_factory=PointComparator)
    end = composite(Point, x2, y2,
                        comparator_factory=PointComparator)

Column Bundles

The Bundle may be used to query for groups of columns under one namespace.

New in version 0.9.0.

The bundle allows columns to be grouped together:

from sqlalchemy.orm import Bundle

bn = Bundle('mybundle', MyClass.data1, MyClass.data2)
for row in session.query(bn).filter(bn.c.data1 == 'd1'):
    print row.mybundle.data1, row.mybundle.data2

The bundle can be subclassed to provide custom behaviors when results are fetched. The method Bundle.create_row_processor() is given the Query and a set of “row processor” functions at query execution time; these processor functions when given a result row will return the individual attribute value, which can then be adapted into any kind of return data structure. Below illustrates replacing the usual KeyedTuple return structure with a straight Python dictionary:

from sqlalchemy.orm import Bundle

class DictBundle(Bundle):
    def create_row_processor(self, query, procs, labels):
        """Override create_row_processor to return values as dictionaries"""
        def proc(row):
            return dict(
                        zip(labels, (proc(row) for proc in procs))
                    )
        return proc

Changed in version 1.0: The proc() callable passed to the create_row_processor() method of custom Bundle classes now accepts only a single “row” argument.

A result from the above bundle will return dictionary values:

bn = DictBundle('mybundle', MyClass.data1, MyClass.data2)
for row in session.query(bn).filter(bn.c.data1 == 'd1'):
    print row.mybundle['data1'], row.mybundle['data2']

The Bundle construct is also integrated into the behavior of composite(), where it is used to return composite attributes as objects when queried as individual attributes.

Mapping a Class against Multiple Tables

Mappers can be constructed against arbitrary relational units (called selectables) in addition to plain tables. For example, the join() function creates a selectable unit comprised of multiple tables, complete with its own composite primary key, which can be mapped in the same way as a Table:

from sqlalchemy import Table, Column, Integer, \
        String, MetaData, join, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import column_property

metadata = MetaData()

# define two Table objects
user_table = Table('user', metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String),
        )

address_table = Table('address', metadata,
            Column('id', Integer, primary_key=True),
            Column('user_id', Integer, ForeignKey('user.id')),
            Column('email_address', String)
            )

# define a join between them.  This
# takes place across the user.id and address.user_id
# columns.
user_address_join = join(user_table, address_table)

Base = declarative_base()

# map to it
class AddressUser(Base):
    __table__ = user_address_join

    id = column_property(user_table.c.id, address_table.c.user_id)
    address_id = address_table.c.id

In the example above, the join expresses columns for both the user and the address table. The user.id and address.user_id columns are equated by foreign key, so in the mapping they are defined as one attribute, AddressUser.id, using column_property() to indicate a specialized column mapping. Based on this part of the configuration, the mapping will copy new primary key values from user.id into the address.user_id column when a flush occurs.

Additionally, the address.id column is mapped explicitly to an attribute named address_id. This is to disambiguate the mapping of the address.id column from the same-named AddressUser.id attribute, which here has been assigned to refer to the user table combined with the address.user_id foreign key.

The natural primary key of the above mapping is the composite of (user.id, address.id), as these are the primary key columns of the user and address table combined together. The identity of an AddressUser object will be in terms of these two values, and is represented from an AddressUser object as (AddressUser.id, AddressUser.address_id).

Mapping a Class against Arbitrary Selects

Similar to mapping against a join, a plain select() object can be used with a mapper as well. The example fragment below illustrates mapping a class called Customer to a select() which includes a join to a subquery:

from sqlalchemy import select, func

subq = select([
            func.count(orders.c.id).label('order_count'),
            func.max(orders.c.price).label('highest_order'),
            orders.c.customer_id
            ]).group_by(orders.c.customer_id).alias()

customer_select = select([customers, subq]).\
            select_from(
                join(customers, subq,
                        customers.c.id == subq.c.customer_id)
            ).alias()

class Customer(Base):
    __table__ = customer_select

Above, the full row represented by customer_select will be all the columns of the customers table, in addition to those columns exposed by the subq subquery, which are order_count, highest_order, and customer_id. Mapping the Customer class to this selectable then creates a class which will contain those attributes.

When the ORM persists new instances of Customer, only the customers table will actually receive an INSERT. This is because the primary key of the orders table is not represented in the mapping; the ORM will only emit an INSERT into a table for which it has mapped the primary key.

Note

The practice of mapping to arbitrary SELECT statements, especially complex ones as above, is almost never needed; it necessarily tends to produce complex queries which are often less efficient than that which would be produced by direct query construction. The practice is to some degree based on the very early history of SQLAlchemy where the mapper() construct was meant to represent the primary querying interface; in modern usage, the Query object can be used to construct virtually any SELECT statement, including complex composites, and should be favored over the “map-to-selectable” approach.

Multiple Mappers for One Class

In modern SQLAlchemy, a particular class is only mapped by one mapper() at a time. The rationale here is that the mapper() modifies the class itself, not only persisting it towards a particular Table, but also instrumenting attributes upon the class which are structured specifically according to the table metadata.

One potential use case for another mapper to exist at the same time is if we wanted to load instances of our class not just from the immediate Table to which it is mapped, but from another selectable that is a derivation of that Table. To create a second mapper that only handles querying when used explicitly, we can use the mapper.non_primary argument. In practice, this approach is usually not needed, as we can do this sort of thing at query time using methods such as Query.select_from(); however, it is useful in the rare case that we wish to build a relationship() to such a mapper. An example of this is at Relationship to Non Primary Mapper.

Another potential use is if we genuinely want instances of our class to be persisted into different tables at different times; certain kinds of data sharding configurations may persist a particular class into tables that are identical in structure except for their name. For this kind of pattern, Python offers a better approach than the complexity of mapping the same class multiple times, which is to instead create new mapped classes for each target table. SQLAlchemy refers to this as the “entity name” pattern, which is described as a recipe at Entity Name.

Constructors and Object Initialization

Mapping imposes no restrictions or requirements on the constructor (__init__) method for the class. You are free to require any arguments for the function that you wish, assign attributes to the instance that are unknown to the ORM, and generally do anything else you would normally do when writing a constructor for a Python class.

The SQLAlchemy ORM does not call __init__ when recreating objects from database rows. The ORM’s process is somewhat akin to the Python standard library’s pickle module, invoking the low level __new__ method and then quietly restoring attributes directly on the instance rather than calling __init__.
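As a rough, pure-Python illustration of that process (this is not the ORM's actual code), the instance can be produced via __new__ and its state restored directly, without __init__ ever running:

```python
class MyMappedClass(object):
    def __init__(self, data):
        # this would fire for MyMappedClass(data), but not during
        # the pickle-like load illustrated below
        raise AssertionError("__init__ is not called during load")

# a "row" of state to restore, keyed by attribute name
row = {'data': 'some value'}

# create the instance without invoking __init__ ...
obj = MyMappedClass.__new__(MyMappedClass)

# ... then quietly restore attributes directly on the instance
for key, value in row.items():
    setattr(obj, key, value)
```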

If you need to do some setup on database-loaded instances before they’re ready to use, you can use the @reconstructor decorator to tag a method as the ORM counterpart to __init__. SQLAlchemy will call this method with no arguments every time it loads or reconstructs one of your instances. This is useful for recreating transient properties that are normally assigned in your __init__:

from sqlalchemy import orm

class MyMappedClass(object):
    def __init__(self, data):
        self.data = data
        # we need stuff on all instances, but not in the database.
        self.stuff = []

    @orm.reconstructor
    def init_on_load(self):
        self.stuff = []

When obj = MyMappedClass() is executed, Python calls the __init__ method as normal and the data argument is required. When instances are loaded during a Query operation as in query(MyMappedClass).one(), init_on_load is called.

Any method may be tagged as the reconstructor(), even the __init__ method. SQLAlchemy will call the reconstructor method with no arguments. Scalar (non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush() operation, so the activity within a reconstructor should be conservative.

reconstructor() is a shortcut into a larger system of “instance level” events, which can be subscribed to using the event API - see InstanceEvents for the full API description of these events.

sqlalchemy.orm.reconstructor(fn)

Decorate a method as the ‘reconstructor’ hook.

Designates a method as the “reconstructor”, an __init__-like method that will be called by the ORM after the instance has been loaded from the database or otherwise reconstituted.

The reconstructor will be invoked with no arguments. Scalar (non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush() operation, so the activity within a reconstructor should be conservative.

Configuring a Version Counter

The Mapper supports management of a version id column, which is a single table column that increments or otherwise updates its value each time an UPDATE to the mapped table occurs. This value is checked each time the ORM emits an UPDATE or DELETE against the row to ensure that the value held in memory matches the database value.

Warning

Because the versioning feature relies upon comparison of the in-memory record of an object, the feature only applies to the Session.flush() process, where the ORM flushes individual in-memory rows to the database. It does not take effect when performing a multirow UPDATE or DELETE using the Query.update() or Query.delete() methods, as these methods only emit an UPDATE or DELETE statement but otherwise do not have direct access to the contents of those rows being affected.

The purpose of this feature is to detect when two concurrent transactions are modifying the same row at roughly the same time, or alternatively to provide a guard against the usage of a “stale” row in a system that might be re-using data from a previous transaction without refreshing (e.g. if one sets expire_on_commit=False with a Session, it is possible to re-use the data from a previous transaction).

Concurrent transaction updates

When detecting concurrent updates within transactions, it is typically the case that the database’s transaction isolation level is below the level of repeatable read; otherwise, the transaction will not be exposed to a new row value created by a concurrent update which conflicts with the locally updated value. In this case, the SQLAlchemy versioning feature will typically not be useful for in-transaction conflict detection, though it still can be used for cross-transaction staleness detection.

The database that enforces repeatable reads will typically either have locked the target row against a concurrent update, or is employing some form of multi version concurrency control such that it will emit an error when the transaction is committed. SQLAlchemy’s version_id_col is an alternative which allows version tracking to occur for specific tables within a transaction that otherwise might not have this isolation level set.

See also

Repeatable Read Isolation Level - Postgresql’s implementation of repeatable read, including a description of the error condition.

Simple Version Counting

The most straightforward way to track versions is to add an integer column to the mapped table, then establish it as the version_id_col within the mapper options:

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    version_id = Column(Integer, nullable=False)
    name = Column(String(50), nullable=False)

    __mapper_args__ = {
        "version_id_col": version_id
    }

Above, the User mapping tracks integer versions using the column version_id. When an object of type User is first flushed, the version_id column will be given a value of “1”. Then, an UPDATE of the table later on will always be emitted in a manner similar to the following:

UPDATE user SET version_id=:version_id, name=:name
WHERE user.id = :user_id AND user.version_id = :user_version_id
{"name": "new name", "version_id": 2, "user_id": 1, "user_version_id": 1}

The above UPDATE statement not only matches the row where user.id = 1, it also requires that user.version_id = 1, where “1” is the last version identifier we are known to have used on this object. If a transaction elsewhere has modified the row independently, this version id will no longer match, and the UPDATE statement will report that no rows matched; this is the condition that SQLAlchemy tests: that exactly one row matched our UPDATE (or DELETE) statement. If zero rows match, that indicates our version of the data is stale, and a StaleDataError is raised.
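The check itself amounts to an optimistic compare-and-update, which can be sketched in plain Python (the row dictionary and function below are illustrative only, not ORM internals):

```python
class StaleDataError(Exception):
    pass

# one "row", keyed by column name
row = {'id': 1, 'version_id': 1, 'name': 'old name'}

def versioned_update(row, known_version, new_name):
    # emulates: UPDATE ... SET name=:name, version_id=:next
    #           WHERE id = :id AND version_id = :known_version
    if row['version_id'] != known_version:
        # zero rows matched the UPDATE -> our data is stale
        raise StaleDataError("zero rows matched the UPDATE")
    row['name'] = new_name
    row['version_id'] = known_version + 1

# succeeds and bumps the version from 1 to 2
versioned_update(row, 1, 'new name')
```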

Custom Version Counters / Types

Other kinds of values or counters can be used for versioning. Common types include dates and GUIDs. When using an alternate type or counter scheme, SQLAlchemy provides a hook for this scheme using the version_id_generator argument, which accepts a version generation callable. This callable is passed the value of the current known version, and is expected to return the subsequent version.

For example, if we wanted to track the versioning of our User class using a randomly generated GUID, we could do this (note that some backends support a native GUID type, but we illustrate here using a simple string):

import uuid

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    version_uuid = Column(String(32))
    name = Column(String(50), nullable=False)

    __mapper_args__ = {
        'version_id_col': version_uuid,
        'version_id_generator': lambda version: uuid.uuid4().hex
    }

The persistence engine will call upon uuid.uuid4() each time a User object is subject to an INSERT or an UPDATE. In this case, our version generation function can disregard the incoming value of version, as the uuid4() function generates identifiers without any prerequisite value. If we were using a sequential versioning scheme such as numeric or a special character system, we could make use of the given version in order to help determine the subsequent value.
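As a hypothetical sketch of such a sequential scheme (the function name and zero-padding width are illustrative), the callable can consult the incoming value; note that when the row is first INSERTed, the incoming value is None:

```python
# Hypothetical sequential string-based counter: "01", "02", "03", ...
def next_string_version(current):
    # "current" is the current known version; it is None when the
    # row is first INSERTed, so start counting from zero.
    n = 0 if current is None else int(current)
    return "%02d" % (n + 1)

# It would then be wired in via the mapper arguments, for example:
#
# __mapper_args__ = {
#     'version_id_col': version_str,      # a String column (illustrative)
#     'version_id_generator': next_string_version
# }
```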

Server Side Version Counters

The version_id_generator can also be configured to rely upon a value that is generated by the database. In this case, the database would need some means of generating new identifiers when a row is subject to an INSERT as well as with an UPDATE. For the UPDATE case, typically an update trigger is needed, unless the database in question supports some other native version identifier. The Postgresql database in particular supports a system column called xmin which provides UPDATE versioning. We can make use of the Postgresql xmin column to version our User class as follows:

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)
    xmin = Column("xmin", Integer, system=True)

    __mapper_args__ = {
        'version_id_col': xmin,
        'version_id_generator': False
    }

With the above mapping, the ORM will rely upon the xmin column for automatically providing the new value of the version id counter.

Creating tables that refer to system columns

In the above scenario, as xmin is a system column provided by Postgresql, we use the system=True argument to mark it as a system-provided column, omitted from the CREATE TABLE statement.

The ORM typically does not actively fetch the values of database-generated values when it emits an INSERT or UPDATE, instead leaving these columns as “expired” and to be fetched when they are next accessed, unless the eager_defaults mapper() flag is set. However, when a server side version column is used, the ORM needs to actively fetch the newly generated value. This is so that the version counter is set up before any concurrent transaction may update it again. This fetching is also best done simultaneously within the INSERT or UPDATE statement using RETURNING, otherwise if emitting a SELECT statement afterwards, there is still a potential race condition where the version counter may change before it can be fetched.

When the target database supports RETURNING, an INSERT statement for our User class will look like this:

INSERT INTO "user" (name) VALUES (%(name)s) RETURNING "user".id, "user".xmin
{'name': 'ed'}

Where above, the ORM can acquire any newly generated primary key values along with server-generated version identifiers in one statement. When the backend does not support RETURNING, an additional SELECT must be emitted for every INSERT and UPDATE, which is much less efficient, and also introduces the possibility of missed version counters:

INSERT INTO "user" (name) VALUES (%(name)s)
{'name': 'ed'}

SELECT "user".version_id AS user_version_id FROM "user" WHERE
"user".id = :param_1
{"param_1": 1}

It is strongly recommended that server side version counters only be used when absolutely necessary and only on backends that support RETURNING, e.g. Postgresql, Oracle, SQL Server (though SQL Server has major caveats when triggers are used), Firebird.

New in version 0.9.0: Support for server side version identifier tracking.

Programmatic or Conditional Version Counters

When version_id_generator is set to False, we can also programmatically (and conditionally) set the version identifier on our object, in the same way we assign any other mapped attribute. For example, if we use our UUID scheme from above, but set version_id_generator to False, we can set the version identifier explicitly at a time of our choosing:

import uuid

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    version_uuid = Column(String(32))
    name = Column(String(50), nullable=False)

    __mapper_args__ = {
        'version_id_col': version_uuid,
        'version_id_generator': False
    }

u1 = User(name='u1', version_uuid=uuid.uuid4().hex)

session.add(u1)

session.commit()

u1.name = 'u2'
u1.version_uuid = uuid.uuid4().hex

session.commit()

We can update our User object without incrementing the version counter as well; the value of the counter will remain unchanged, and the UPDATE statement will still check against the previous value. This may be useful for schemes where only certain classes of UPDATE are sensitive to concurrency issues:

# will leave version_uuid unchanged
u1.name = 'u3'
session.commit()

New in version 0.9.0: Support for programmatic and conditional version identifier tracking.

Class Mapping API

sqlalchemy.orm.mapper(class_, local_table=None, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, inherit_foreign_keys=None, extension=None, order_by=False, always_refresh=False, version_id_col=None, version_id_generator=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, with_polymorphic=None, allow_partial_pks=True, batch=True, column_prefix=None, include_properties=None, exclude_properties=None, passive_updates=True, confirm_deleted_rows=True, eager_defaults=False, legacy_is_orphan=False, _compiled_cache_size=100)

Return a new Mapper object.

This function is typically used behind the scenes via the Declarative extension. When using Declarative, many of the usual mapper() arguments are handled by the Declarative extension itself, including class_, local_table, properties, and inherits. Other options are passed to mapper() using the __mapper_args__ class variable:

class MyClass(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True)
    type = Column(String(50))
    alt = Column("some_alt", Integer)

    __mapper_args__ = {
        'polymorphic_on' : type
    }

Explicit use of mapper() is often referred to as classical mapping. The above declarative example is equivalent in classical form to:

my_table = Table("my_table", metadata,
    Column('id', Integer, primary_key=True),
    Column('type', String(50)),
    Column("some_alt", Integer)
)

class MyClass(object):
    pass

mapper(MyClass, my_table,
    polymorphic_on=my_table.c.type,
    properties={
        'alt':my_table.c.some_alt
    })

See also

Classical Mappings - discussion of direct usage of mapper()

Parameters:
  • class_ – The class to be mapped. When using Declarative, this argument is automatically passed as the declared class itself.
  • local_table – The Table or other selectable to which the class is mapped. May be None if this mapper inherits from another mapper using single-table inheritance. When using Declarative, this argument is automatically passed by the extension, based on what is configured via the __table__ argument or via the Table produced as a result of the __tablename__ and Column arguments present.
  • always_refresh – If True, all query operations for this mapped class will overwrite all data within object instances that already exist within the session, erasing any in-memory changes with whatever information was loaded from the database. Usage of this flag is highly discouraged; as an alternative, see the method Query.populate_existing().
  • allow_partial_pks – Defaults to True. Indicates that a composite primary key with some NULL values should be considered as possibly existing within the database. This affects whether a mapper will assign an incoming row to an existing identity, as well as whether Session.merge() will check the database first for a particular primary key value. A “partial primary key” can occur if one has mapped to an OUTER JOIN, for example.
  • batch – Defaults to True, indicating that save operations of multiple entities can be batched together for efficiency. Setting to False indicates that an instance will be fully saved before saving the next instance. This is used in the extremely rare case that a MapperEvents listener requires being called in between individual row persistence operations.
  • column_prefix

    A string which will be prepended to the mapped attribute name when Column objects are automatically assigned as attributes to the mapped class. Does not affect explicitly specified column-based properties.

    See the section Naming All Columns with a Prefix for an example.

  • concrete

    If True, indicates this mapper should use concrete table inheritance with its parent mapper.

    See the section Concrete Table Inheritance for an example.

  • confirm_deleted_rows

    Defaults to True; when a DELETE of one or more rows based on specific primary keys occurs, a warning is emitted if the number of rows matched does not equal the number of rows expected. This parameter may be set to False to handle the case where database ON DELETE CASCADE rules may be deleting some of those rows automatically. The warning may be changed to an exception in a future release.

    New in version 0.9.4: Added mapper.confirm_deleted_rows as well as conditional matched-row checking on delete.

  • eager_defaults

    if True, the ORM will immediately fetch the value of server-generated default values after an INSERT or UPDATE, rather than leaving them as expired to be fetched on next access. By default, this scheme will emit an individual SELECT statement per row inserted or updated, which can add significant performance overhead. However, if the target database supports RETURNING, the default values will be returned inline with the INSERT or UPDATE statement, which can greatly enhance performance for an application that needs frequent access to just-generated server defaults.

    Changed in version 0.9.0: The eager_defaults option can now make use of RETURNING for backends which support it.

  • exclude_properties

    A list or set of string column names to be excluded from mapping.

    See Mapping a Subset of Table Columns for an example.

  • extension – A MapperExtension instance or list of MapperExtension instances which will be applied to all operations by this Mapper. Deprecated. Please see MapperEvents.
  • include_properties

    An inclusive list or set of string column names to map.

    See Mapping a Subset of Table Columns for an example.

  • inherits

    A mapped class or the corresponding Mapper of one, indicating a superclass from which this Mapper should inherit. The mapped class here must be a subclass of the other mapper’s class. When using Declarative, this argument is passed automatically as a result of the natural class hierarchy of the declared classes.

  • inherit_condition – For joined table inheritance, a SQL expression which will define how the two tables are joined; defaults to a natural join between the two tables.
  • inherit_foreign_keys – When inherit_condition is used and the columns present are missing a ForeignKey configuration, this parameter can be used to specify which columns are “foreign”. In most cases can be left as None.
  • legacy_is_orphan

    Boolean, defaults to False. When True, specifies that “legacy” orphan consideration is to be applied to objects mapped by this mapper, which means that a pending (that is, not persistent) object is auto-expunged from an owning Session only when it is de-associated from all parents that specify a delete-orphan cascade towards this mapper. The new default behavior is that the object is auto-expunged when it is de-associated with any of its parents that specify delete-orphan cascade. This behavior is more consistent with that of a persistent object, and allows behavior to be consistent in more scenarios independently of whether or not an orphanable object has been flushed yet or not.

    See the change note and example at The consideration of a “pending” object as an “orphan” has been made more aggressive for more detail on this change.

    New in version 0.8: The consideration of a pending object as an “orphan” has been modified to more closely match the behavior as that of persistent objects, which is that the object is expunged from the Session as soon as it is de-associated from any of its orphan-enabled parents. Previously, the pending object would be expunged only if de-associated from all of its orphan-enabled parents. The new flag legacy_is_orphan is added to orm.mapper() which re-establishes the legacy behavior.

  • non_primary

    Specify that this Mapper is in addition to the “primary” mapper, that is, the one used for persistence. The Mapper created here may be used for ad-hoc mapping of the class to an alternate selectable, for loading only.

    Mapper.non_primary is not an often used option, but is useful in some specific relationship() cases.

  • order_by – A single Column or list of Column objects for which selection operations should use as the default ordering for entities. By default mappers have no pre-defined ordering.
  • passive_updates

    Indicates UPDATE behavior of foreign key columns when a primary key column changes on a joined-table inheritance mapping. Defaults to True.

    When True, it is assumed that ON UPDATE CASCADE is configured on the foreign key in the database, and that the database will handle propagation of an UPDATE from a source column to dependent columns on joined-table rows.

    When False, it is assumed that the database does not enforce referential integrity and will not be issuing its own CASCADE operation for an update. The unit of work process will emit an UPDATE statement for the dependent columns during a primary key change.

    See also

    Mutable Primary Keys / Update Cascades - description of a similar feature as used with relationship()

  • polymorphic_on

    Specifies the column, attribute, or SQL expression used to determine the target class for an incoming row, when inheriting classes are present.

    This value is commonly a Column object that’s present in the mapped Table:

    class Employee(Base):
        __tablename__ = 'employee'
    
        id = Column(Integer, primary_key=True)
        discriminator = Column(String(50))
    
        __mapper_args__ = {
            "polymorphic_on":discriminator,
            "polymorphic_identity":"employee"
        }

    It may also be specified as a SQL expression, as in this example where we use the case() construct to provide a conditional approach:

    class Employee(Base):
        __tablename__ = 'employee'
    
        id = Column(Integer, primary_key=True)
        discriminator = Column(String(50))
    
        __mapper_args__ = {
            "polymorphic_on":case([
                (discriminator == "EN", "engineer"),
                (discriminator == "MA", "manager"),
            ], else_="employee"),
            "polymorphic_identity":"employee"
        }

    It may also refer to any attribute configured with column_property(), or to the string name of one:

    class Employee(Base):
        __tablename__ = 'employee'
    
        id = Column(Integer, primary_key=True)
        discriminator = Column(String(50))
        employee_type = column_property(
            case([
                (discriminator == "EN", "engineer"),
                (discriminator == "MA", "manager"),
            ], else_="employee")
        )
    
        __mapper_args__ = {
            "polymorphic_on":employee_type,
            "polymorphic_identity":"employee"
        }

    Changed in version 0.7.4: polymorphic_on may be specified as a SQL expression, or refer to any attribute configured with column_property(), or to the string name of one.

    When setting polymorphic_on to reference an attribute or expression that’s not present in the locally mapped Table, yet the value of the discriminator should be persisted to the database, the value of the discriminator is not automatically set on new instances; this must be handled by the user, either through manual means or via event listeners. A typical approach to establishing such a listener looks like:

    from sqlalchemy import event
    from sqlalchemy.orm import object_mapper
    
    @event.listens_for(Employee, "init", propagate=True)
    def set_identity(instance, *arg, **kw):
        mapper = object_mapper(instance)
        instance.discriminator = mapper.polymorphic_identity

    Where above, we assign the value of polymorphic_identity for the mapped class to the discriminator attribute, thus persisting the value to the discriminator column in the database.

    Warning

    Currently, only one discriminator column may be set, typically on the base-most class in the hierarchy. “Cascading” polymorphic columns are not yet supported.

  • polymorphic_identity – Specifies the value which identifies this particular class as returned by the column expression referred to by the polymorphic_on setting. As rows are received, the value corresponding to the polymorphic_on column expression is compared to this value, indicating which subclass should be used for the newly reconstructed object.
  • properties – A dictionary mapping the string names of object attributes to MapperProperty instances, which define the persistence behavior of that attribute. Note that Column objects present in the mapped Table are automatically placed into ColumnProperty instances upon mapping, unless overridden. When using Declarative, this argument is passed automatically, based on all those MapperProperty instances declared in the declared class body.
  • primary_key – A list of Column objects which define the primary key to be used against this mapper’s selectable unit. This is normally simply the primary key of the local_table, but can be overridden here.
  • version_id_col

    A Column that will be used to keep a running version id of rows in the table. This is used to detect concurrent updates or the presence of stale data in a flush. The scheme works by detecting when an UPDATE statement does not match the last known version id; in that case, a StaleDataError exception is raised. By default, the column must be of Integer type, unless version_id_generator specifies an alternative version generator.

    See also

    Configuring a Version Counter - discussion of version counting and rationale.

  • version_id_generator

    Define how new version ids should be generated. Defaults to None, which indicates that a simple integer counting scheme be employed. To provide a custom versioning scheme, provide a callable function of the form:

    def generate_version(version):
        return next_version

    Alternatively, server-side versioning functions such as triggers, or programmatic versioning schemes outside of the version id generator may be used, by specifying the value False. Please see Server Side Version Counters for a discussion of important points when using this option.

    New in version 0.9.0: version_id_generator supports server-side version number generation.

  • with_polymorphic

    A tuple in the form (<classes>, <selectable>) indicating the default style of “polymorphic” loading, that is, which tables are queried at once. <classes> is any single or list of mappers and/or classes indicating the inherited classes that should be loaded at once. The special value '*' may be used to indicate all descending classes should be loaded immediately. The second tuple argument <selectable> indicates a selectable that will be used to query for multiple classes.

    See also

    Basic Control of Which Tables are Queried - discussion of polymorphic querying techniques.

sqlalchemy.orm.object_mapper(instance)

Given an object, return the primary Mapper associated with the object instance.

Raises sqlalchemy.orm.exc.UnmappedInstanceError if no mapping is configured.

This function is available via the inspection system as:

inspect(instance).mapper

Using the inspection system will raise sqlalchemy.exc.NoInspectionAvailable if the instance is not part of a mapping.
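A brief sketch of the equivalence described above (the mapped class here is illustrative, and no database connection is required):

```python
from sqlalchemy import Column, Integer, inspect
from sqlalchemy.orm import object_mapper
from sqlalchemy.orm.exc import UnmappedInstanceError
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Widget(Base):  # illustrative mapped class
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)

w = Widget()
m = object_mapper(w)
assert m is inspect(w).mapper   # the two spellings return the same Mapper
assert m.class_ is Widget

# an unmapped instance raises UnmappedInstanceError
caught = False
try:
    object_mapper(object())
except UnmappedInstanceError:
    caught = True
assert caught
```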

sqlalchemy.orm.class_mapper(class_, configure=True)

Given a class, return the primary Mapper associated with the class.

Raises UnmappedClassError if no mapping is configured on the given class, or ArgumentError if a non-class object is passed.

Equivalent functionality is available via the inspect() function as:

inspect(some_mapped_class)

Using the inspection system will raise sqlalchemy.exc.NoInspectionAvailable if the class is not mapped.
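A short sketch of both behaviors (the class names are illustrative):

```python
from sqlalchemy import Column, Integer, inspect
from sqlalchemy.orm import class_mapper
from sqlalchemy.orm.exc import UnmappedClassError
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Gadget(Base):  # illustrative mapped class
    __tablename__ = 'gadget'
    id = Column(Integer, primary_key=True)

# class_mapper() and inspect() return the same primary Mapper
assert class_mapper(Gadget) is inspect(Gadget)

class Plain(object):
    pass

caught = False
try:
    class_mapper(Plain)      # not mapped
except UnmappedClassError:
    caught = True
assert caught
```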

sqlalchemy.orm.configure_mappers()

Initialize the inter-mapper relationships of all mappers that have been constructed thus far.

This function can be called any number of times, but in most cases is handled internally.

sqlalchemy.orm.clear_mappers()

Remove all mappers from all classes.

This function removes all instrumentation from classes and disposes of their associated mappers. Once called, the classes are unmapped and can be later re-mapped with new mappers.

clear_mappers() is not for normal use, as there is literally no valid usage for it outside of very specific testing scenarios. Normally, mappers are permanent structural components of user-defined classes, and are never discarded independently of their class. If a mapped class itself is garbage collected, its mapper is automatically disposed of as well. As such, clear_mappers() is only for usage in test suites that re-use the same classes with different mappings, which is itself an extremely rare use case - the only such use case is in fact SQLAlchemy’s own test suite, and possibly the test suites of other ORM extension libraries which intend to test various combinations of mapper construction upon a fixed set of classes.

sqlalchemy.orm.util.identity_key(*args, **kwargs)

Generate “identity key” tuples, as are used as keys in the Session.identity_map dictionary.

This function has several call styles:

  • identity_key(class, ident)

    This form receives a mapped class and a primary key scalar or tuple as an argument.

    E.g.:

    >>> identity_key(MyClass, (1, 2))
    (<class '__main__.MyClass'>, (1, 2))

    Parameters:
      • class – mapped class (must be a positional argument)
      • ident – primary key, may be a scalar or tuple argument.
  • identity_key(instance=instance)

    This form will produce the identity key for a given instance. The instance need not be persistent, only that its primary key attributes are populated (else the key will contain None for those missing values).

    E.g.:

    >>> instance = MyClass(1, 2)
    >>> identity_key(instance=instance)
    (<class '__main__.MyClass'>, (1, 2))

    In this form, the given instance is ultimately run through Mapper.identity_key_from_instance(), which will have the effect of performing a database check for the corresponding row if the object is expired.

    Parameters:
      • instance – object instance (must be given as a keyword arg)
  • identity_key(class, row=row)

    This form is similar to the class/tuple form, except that it is passed a database result row as a RowProxy object.

    E.g.:

    >>> row = engine.execute("select * from table where a=1 and b=2").first()
    >>> identity_key(MyClass, row=row)
    (<class '__main__.MyClass'>, (1, 2))

    Parameters:
      • class – mapped class (must be a positional argument)
      • row – RowProxy row returned by a ResultProxy (must be given as a keyword arg)
sqlalchemy.orm.util.polymorphic_union(table_map, typecolname, aliasname='p_union', cast_nulls=True)

Create a UNION statement used by a polymorphic mapper.

See Concrete Table Inheritance for an example of how this is used.

Parameters:
  • table_map – mapping of polymorphic identities to Table objects.
  • typecolname – string name of a “discriminator” column, which will be derived from the query, producing the polymorphic identity for each row. If None, no polymorphic discriminator is generated.
  • aliasname – name of the alias() construct generated.
  • cast_nulls – if True, non-existent columns, which are represented as labeled NULLs, will be passed into CAST. This is a legacy behavior that is problematic on some backends such as Oracle - in which case it can be set to False.
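A minimal sketch of polymorphic_union() in use (the table and column names are illustrative); columns missing from one side of the union are rendered as labeled NULLs, and the discriminator column is added:

```python
from sqlalchemy import Table, Column, Integer, String, MetaData
from sqlalchemy.orm import polymorphic_union

metadata = MetaData()

engineer = Table('engineer', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('engineer_info', String(50)))

manager = Table('manager', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('manager_data', String(50)))

# UNION ALL of both tables, aliased as "pjoin", with a "type"
# discriminator column identifying each row's source table
pjoin = polymorphic_union(
    {'engineer': engineer, 'manager': manager}, 'type', 'pjoin')

# every column from both tables appears, plus the discriminator
assert {'type', 'engineer_info', 'manager_data'} <= set(pjoin.c.keys())
```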
class sqlalchemy.orm.mapper.Mapper(class_, local_table=None, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, inherit_foreign_keys=None, extension=None, order_by=False, always_refresh=False, version_id_col=None, version_id_generator=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, with_polymorphic=None, allow_partial_pks=True, batch=True, column_prefix=None, include_properties=None, exclude_properties=None, passive_updates=True, confirm_deleted_rows=True, eager_defaults=False, legacy_is_orphan=False, _compiled_cache_size=100)

Bases: sqlalchemy.orm.base.InspectionAttr

Define the correlation of class attributes to database table columns.

The Mapper object is instantiated using the mapper() function. For information about instantiating new Mapper objects, see that function’s documentation.

When mapper() is used explicitly to link a user defined class with table metadata, this is referred to as classical mapping. Modern SQLAlchemy usage tends to favor the sqlalchemy.ext.declarative extension for class configuration, which makes usage of mapper() behind the scenes.

Given a particular class known to be mapped by the ORM, the Mapper which maintains it can be acquired using the inspect() function:

from sqlalchemy import inspect

mapper = inspect(MyClass)

A class which was mapped by the sqlalchemy.ext.declarative extension will also have its mapper available via the __mapper__ attribute.

__init__(class_, local_table=None, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, inherit_foreign_keys=None, extension=None, order_by=False, always_refresh=False, version_id_col=None, version_id_generator=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, with_polymorphic=None, allow_partial_pks=True, batch=True, column_prefix=None, include_properties=None, exclude_properties=None, passive_updates=True, confirm_deleted_rows=True, eager_defaults=False, legacy_is_orphan=False, _compiled_cache_size=100)

Construct a new Mapper object.

This constructor is mirrored as a public API function; see mapper() for a full usage and argument description.

add_properties(dict_of_properties)

Add the given dictionary of properties to this mapper, using add_property.

add_property(key, prop)

Add an individual MapperProperty to this mapper.

If the mapper has not been configured yet, just adds the property to the initial properties dictionary sent to the constructor. If this Mapper has already been configured, then the given MapperProperty is configured immediately.

all_orm_descriptors

A namespace of all InspectionAttr attributes associated with the mapped class.

These attributes are in all cases Python descriptors associated with the mapped class or its superclasses.

This namespace includes attributes that are mapped to the class as well as attributes declared by extension modules. It includes any Python descriptor type that inherits from InspectionAttr. This includes QueryableAttribute, as well as extension types such as hybrid_property, hybrid_method and AssociationProxy.

To distinguish between mapped attributes and extension attributes, the attribute InspectionAttr.extension_type will refer to a constant that distinguishes between different extension types.

When dealing with a QueryableAttribute, the QueryableAttribute.property attribute refers to the MapperProperty property, which is what you get when referring to the collection of mapped properties via Mapper.attrs.

New in version 0.8.0.

See also

Mapper.attrs

attrs

A namespace of all MapperProperty objects associated with this mapper.

This is an object that provides each property based on its key name. For instance, the mapper for a User class which has User.name attribute would provide mapper.attrs.name, which would be the ColumnProperty representing the name column. The namespace object can also be iterated, which would yield each MapperProperty.

Mapper has several pre-filtered views of this attribute which limit the types of properties returned, including synonyms, column_attrs, relationships, and composites.
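A short sketch of the attrs namespace (the mapped class is illustrative, and no database connection is required):

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import ColumnProperty
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):  # illustrative
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

m = inspect(User)

# each mapped attribute is available by its key name
assert isinstance(m.attrs.name, ColumnProperty)
assert m.attrs['name'] is m.attrs.name

# iterating the namespace yields each MapperProperty
assert {prop.key for prop in m.attrs} == {'id', 'name'}
```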

base_mapper = None

The base-most Mapper in an inheritance chain.

In a non-inheriting scenario, this attribute will always be this Mapper. In an inheritance scenario, it references the Mapper which is parent to all other Mapper objects in the inheritance chain.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

c = None

A synonym for columns.

cascade_iterator(type_, state, halt_on=None)

Iterate each element and its mapper in an object graph, for all relationships that meet the given cascade rule.

Parameters:
  • type_ – The name of the cascade rule (i.e. save-update, delete, etc.)
  • state – The lead InstanceState. child items will be processed per the relationships defined for this object’s mapper.

The return value consists of object instances; this provides a strong reference so that they don’t fall out of scope immediately.

class_ = None

The Python class which this Mapper maps.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

class_manager = None

The ClassManager which maintains event listeners and class-bound descriptors for this Mapper.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

column_attrs

Return a namespace of all ColumnProperty properties maintained by this Mapper.

See also

Mapper.attrs - namespace of all MapperProperty objects.

columns = None

A collection of Column or other scalar expression objects maintained by this Mapper.

The collection behaves the same as that of the c attribute on any Table object, except that only those columns included in this mapping are present, and are keyed based on the attribute name defined in the mapping, not necessarily the key attribute of the Column itself. Additionally, scalar expressions mapped by column_property() are also present here.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.
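A short sketch illustrating the attribute-name keying described above (the class and column names are illustrative):

```python
from sqlalchemy import Column, Integer, String, inspect
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):  # illustrative
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    email = Column('email_address', String(50))

m = inspect(User)

# keyed by the mapped attribute name, not the database column name
assert m.columns['email'].name == 'email_address'

# "c" is a synonym for "columns"
assert m.c['email'] is m.columns['email']
```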

common_parent(other)

Return True if the given mapper shares a common inherited parent with this mapper.

composites

Return a namespace of all CompositeProperty properties maintained by this Mapper.

See also

Mapper.attrs - namespace of all MapperProperty objects.

concrete = None

Represent True if this Mapper is a concrete inheritance mapper.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

configured = None

Represent True if this Mapper has been configured.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

entity

Part of the inspection API.

Returns self.class_.

get_property(key, _configure_mappers=True)

return a MapperProperty associated with the given key.

get_property_by_column(column)

Given a Column object, return the MapperProperty which maps this column.

identity_key_from_instance(instance)

Return the identity key for the given instance, based on its primary key attributes.

If the instance’s state is expired, calling this method will result in a database check to see if the object has been deleted. If the row no longer exists, ObjectDeletedError is raised.

This value is typically also found on the instance state under the attribute name key.

identity_key_from_primary_key(primary_key)

Return an identity-map key for use in storing/retrieving an item from an identity map.

Parameters:primary_key – A list of values indicating the identifier.
identity_key_from_row(row, adapter=None)

Return an identity-map key for use in storing/retrieving an item from the identity map.

Parameters: row – A RowProxy instance. The columns which are mapped by this Mapper should be locatable in the row, preferably via the Column object directly (as is the case when a select() construct is executed), or via string names of the form <tablename>_<colname>.

inherits = None

References the Mapper which this Mapper inherits from, if any.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

is_mapper = True

Part of the inspection API.

isa(other)

Return True if this mapper inherits from the given mapper.

iterate_properties

Return an iterator of all MapperProperty objects.
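
A brief sketch of iterating the properties of a hypothetical mapped class:

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

mapper = inspect(User)

# each item is a MapperProperty subclass, e.g. ColumnProperty
for prop in mapper.iterate_properties:
    print(type(prop).__name__, prop.key)
```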

local_table = None

The Selectable which this Mapper manages.

Typically is an instance of Table or Alias. May also be None.

The “local” table is the selectable that the Mapper is directly responsible for managing from an attribute access and flush perspective. For non-inheriting mappers, the local table is the same as the “mapped” table. For joined-table inheritance mappers, local_table will be the particular sub-table of the overall “join” which this Mapper represents. If this mapper is a single-table inheriting mapper, local_table will be None.

See also

mapped_table.

mapped_table = None

The Selectable to which this Mapper is mapped.

Typically an instance of Table, Join, or Alias.

The “mapped” table is the selectable that the mapper selects from during queries. For non-inheriting mappers, the mapped table is the same as the “local” table. For joined-table inheritance mappers, mapped_table references the full Join representing full rows for this particular subclass. For single-table inheritance mappers, mapped_table references the base table.

See also

local_table.

mapper

Part of the inspection API.

Returns self.

non_primary = None

Represent True if this Mapper is a “non-primary” mapper, e.g. a mapper that is used only to select rows but not for persistence management.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

polymorphic_identity = None

Represent an identifier which is matched against the polymorphic_on column during result row loading.

Used only with inheritance, this object can be of any type which is comparable to the type of column represented by polymorphic_on.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

polymorphic_iterator()

Iterate through the collection including this mapper and all descendant mappers.

This includes not just the immediately inheriting mappers but all their inheriting mappers as well.

To iterate through an entire hierarchy, use mapper.base_mapper.polymorphic_iterator().
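
A minimal single-table-inheritance sketch (the `Employee` hierarchy is illustrative) showing the iterator yielding the base mapper and every descendant:

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    type = Column(String(20))
    __mapper_args__ = {'polymorphic_on': type,
                       'polymorphic_identity': 'employee'}

class Manager(Employee):
    __mapper_args__ = {'polymorphic_identity': 'manager'}

class Engineer(Employee):
    __mapper_args__ = {'polymorphic_identity': 'engineer'}

base_mapper = inspect(Employee)

# yields the Employee mapper itself plus all inheriting mappers
for m in base_mapper.polymorphic_iterator():
    print(m.polymorphic_identity)
```

The same hierarchy also populates the shared polymorphic_map, keyed by these identity values.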

polymorphic_map = None

A mapping of “polymorphic identity” identifiers mapped to Mapper instances, within an inheritance scenario.

The identifiers can be of any type which is comparable to the type of column represented by polymorphic_on.

An inheritance chain of mappers will all reference the same polymorphic map object. The object is used to correlate incoming result rows to target mappers.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

polymorphic_on = None

The Column or SQL expression specified as the polymorphic_on argument for this Mapper, within an inheritance scenario.

This attribute is normally a Column instance but may also be an expression, such as one derived from cast().

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

primary_key = None

An iterable containing the collection of Column objects which comprise the ‘primary key’ of the mapped table, from the perspective of this Mapper.

This list is against the selectable in mapped_table. In the case of inheriting mappers, some columns may be managed by a superclass mapper. For example, in the case of a Join, the primary key is determined by all of the primary key columns across all tables referenced by the Join.

The list is also not necessarily the same as the primary key column collection associated with the underlying tables; the Mapper features a primary_key argument that can override what the Mapper considers as primary key columns.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

primary_key_from_instance(instance)

Return the list of primary key values for the given instance.

If the instance’s state is expired, calling this method will result in a database check to see if the object has been deleted. If the row no longer exists, ObjectDeletedError is raised.
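
A short sketch, assuming a hypothetical single-column-primary-key `User` class (for an unexpired instance no database access occurs):

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

mapper = inspect(User)

# the current primary key attribute values, as a list
u = User(id=5, name='ed')
print(mapper.primary_key_from_instance(u))
```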

primary_mapper()

Return the primary mapper corresponding to this mapper’s class key (class).

relationships

Return a namespace of all RelationshipProperty properties maintained by this Mapper.

See also

Mapper.attrs - namespace of all MapperProperty objects.
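
A sketch of the relationships namespace, using an illustrative User/Address pair:

```python
from sqlalchemy import Column, ForeignKey, Integer, inspect
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'))
    user = relationship(User, backref='addresses')

mapper = inspect(Address)

# the namespace is keyed by relationship attribute name
print(list(mapper.relationships.keys()))  # ['user']
```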

selectable

The select() construct this Mapper selects from by default.

Normally, this is equivalent to mapped_table, unless the with_polymorphic feature is in use, in which case the full “polymorphic” selectable is returned.

self_and_descendants

The collection including this mapper and all descendant mappers.

This includes not just the immediately inheriting mappers but all their inheriting mappers as well.

single = None

Represent True if this Mapper is a single table inheritance mapper.

local_table will be None if this flag is set.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

synonyms

Return a namespace of all SynonymProperty properties maintained by this Mapper.

See also

Mapper.attrs - namespace of all MapperProperty objects.

tables = None

An iterable containing the collection of Table objects with which this Mapper is aware.

If the mapper is mapped to a Join, or an Alias representing a Select, the individual Table objects that comprise the full construct will be represented here.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

validators = None

An immutable dictionary of attributes which have been decorated using the validates() decorator.

The dictionary contains string attribute names as keys mapped to the actual validation method.
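
A brief sketch of how the dictionary is populated (the `check_email` validator and `User` class are illustrative):

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import declarative_base, validates

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    email = Column(String(50))

    @validates('email')
    def check_email(self, key, value):
        if '@' not in value:
            raise ValueError("invalid email: %r" % value)
        return value

mapper = inspect(User)

# keys are the attribute names decorated with @validates
print(list(mapper.validators))  # ['email']
```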

with_polymorphic_mappers

The list of Mapper objects included in the default “polymorphic” query.