Release: 0.7.10 | Release Date: February 7, 2013

SQLAlchemy 0.7 Documentation

Mapper Configuration

This section describes a variety of configurational patterns that are usable with mappers. It assumes you’ve worked through Object Relational Tutorial and know how to construct and use rudimentary mappers and relationships.

Classical Mappings

A Classical Mapping refers to the configuration of a mapped class using the mapper() function, without using the Declarative system. As an example, start with the declarative mapping introduced in Object Relational Tutorial:

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)

In “classical” form, the table metadata is created separately with the Table construct, then associated with the User class via the mapper() function:

from sqlalchemy import Table, MetaData, Column, ForeignKey, Integer, String
from sqlalchemy.orm import mapper, relationship

metadata = MetaData()

user = Table('user', metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String(50)),
            Column('fullname', String(50)),
            Column('password', String(12))
        )

class User(object):
    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

mapper(User, user)

Information about mapped attributes, such as relationships to other classes, is provided via the properties dictionary. The example below illustrates a second Table object, mapped to a class called Address, then linked to User via relationship():

address = Table('address', metadata,
            Column('id', Integer, primary_key=True),
            Column('user_id', Integer, ForeignKey('user.id')),
            Column('email_address', String(50))
            )

mapper(User, user, properties={
    'addresses' : relationship(Address, backref='user', order_by=address.c.id)
})

mapper(Address, address)

When using classical mappings, classes must be provided directly without the benefit of the “string lookup” system provided by Declarative. SQL expressions are typically specified in terms of the Table objects, i.e. address.c.id above for the Address relationship, and not Address.id, as Address may not yet be linked to table metadata, nor can we specify a string here.

Some examples in the documentation still use the classical approach, but note that the classical and Declarative approaches are fully interchangeable. Both systems ultimately create the same configuration, consisting of a Table and a user-defined class linked together with a mapper(). When we talk about “the behavior of mapper()”, this includes the Declarative system as well - it’s still used, just behind the scenes.

Customizing Column Properties

The default behavior of mapper() is to assemble all the columns in the mapped Table into mapped object attributes, each of which are named according to the name of the column itself (specifically, the key attribute of Column). This behavior can be modified in several ways.

Naming Columns Distinctly from Attribute Names

A mapping by default shares the same name for a Column as that of the mapped attribute. The name assigned to the Column can be different, as we illustrate here in a Declarative mapping:

class User(Base):
    __tablename__ = 'user'
    id = Column('user_id', Integer, primary_key=True)
    name = Column('user_name', String(50))

Where above User.id resolves to a column named user_id and User.name resolves to a column named user_name.

When mapping to an existing table, the Column object can be referenced directly:

class User(Base):
    __table__ = user_table
    id = user_table.c.user_id
    name = user_table.c.user_name

Or in a classical mapping, placed in the properties dictionary with the desired key:

mapper(User, user_table, properties={
   'id': user_table.c.user_id,
   'name': user_table.c.user_name,
})

Naming All Columns with a Prefix

A way to automate the assignment of a prefix to the mapped attribute names relative to the column name is to use column_prefix:

class User(Base):
    __table__ = user_table
    __mapper_args__ = {'column_prefix':'_'}

The above will place attribute names such as _user_id, _user_name, _password etc. on the mapped User class.

The classical version of the above:

mapper(User, user_table, column_prefix='_')

Using column_property for column level options

Options can be specified when mapping a Column using the column_property() function. This function explicitly creates the ColumnProperty used by the mapper() to keep track of the Column; normally, the mapper() creates this automatically. Using column_property(), we can pass additional arguments about how we’d like the Column to be mapped. Below, we pass an option active_history, which specifies that a change to this column’s value should result in the former value being loaded first:

from sqlalchemy.orm import column_property

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = column_property(Column(String(50)), active_history=True)

column_property() is also used to map a single attribute to multiple columns. This use case arises when mapping to a join() which has columns that are equated to each other:

class User(Base):
    __table__ = user_table.join(address_table)

    # assign "user.id", "address.user_id" to the
    # "id" attribute
    id = column_property(user_table.c.id, address_table.c.user_id)

For more examples featuring this usage, see Mapping a Class against Multiple Tables.

Another place where column_property() is needed is to specify SQL expressions as mapped attributes, such as below where we create an attribute fullname that is the string concatenation of the firstname and lastname columns:

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    fullname = column_property(firstname + " " + lastname)

See examples of this usage at SQL Expressions as Mapped Attributes.

sqlalchemy.orm.column_property(*cols, **kw)

Provide a column-level property for use with a Mapper.

Column-based properties can normally be applied to the mapper’s properties dictionary using the Column element directly. Use this function when the given column is not directly present within the mapper’s selectable; examples include SQL expressions, functions, and scalar SELECT queries.

Columns that aren’t present in the mapper’s selectable won’t be persisted by the mapper and are effectively “read-only” attributes.

Parameters:
  • *cols – list of Column objects to be mapped.
  • active_history=False

    When True, indicates that the “previous” value for a scalar attribute should be loaded when replaced, if not already loaded. Normally, history tracking logic for simple non-primary-key scalar values only needs to be aware of the “new” value in order to perform a flush. This flag is available for applications that make use of attributes.get_history() or Session.is_modified() which also need to know the “previous” value of the attribute.

    New in version 0.6.6.

  • comparator_factory – a class which extends ColumnProperty.Comparator which provides custom SQL clause generation for comparison operations.
  • group – a group name for this property when marked as deferred.
  • deferred – when True, the column property is “deferred”, meaning that it does not load immediately, and is instead loaded when the attribute is first accessed on an instance. See also deferred().
  • doc – optional string that will be applied as the doc on the class-bound descriptor.
  • expire_on_flush=True

    Disable expiry on flush. A column_property() which refers to a SQL expression (and not a single table-bound column) is considered to be a “read only” property; populating it has no effect on the state of data, and it can only return database state. For this reason a column_property()’s value is expired whenever the parent object is involved in a flush, that is, has any kind of “dirty” state within a flush. Setting this parameter to False will have the effect of leaving any existing value present after the flush proceeds. Note however that the Session with default expiration settings still expires all attributes after a Session.commit() call.

    New in version 0.7.3.

  • extension – an AttributeExtension instance, or list of extensions, which will be prepended to the list of attribute listeners for the resulting descriptor placed on the class. Deprecated. Please see AttributeEvents.

Mapping a Subset of Table Columns

Sometimes, a Table object was made available using the reflection process described at Reflecting Database Objects to load the table’s structure from the database. For such a table that has lots of columns that don’t need to be referenced in the application, the include_properties or exclude_properties arguments can specify that only a subset of columns should be mapped. For example:

class User(Base):
    __table__ = user_table
    __mapper_args__ = {
        'include_properties' :['user_id', 'user_name']
    }

...will map the User class to the user_table table, only including the user_id and user_name columns - the rest are not referenced. Similarly:

class Address(Base):
    __table__ = address_table
    __mapper_args__ = {
        'exclude_properties' : ['street', 'city', 'state', 'zip']
    }

...will map the Address class to the address_table table, including all columns present except street, city, state, and zip.

When this mapping is used, the columns that are not included will not be referenced in any SELECT statements emitted by Query, nor will there be any mapped attribute on the mapped class which represents the column; assigning an attribute of that name will have no effect beyond that of a normal Python attribute assignment.

In some cases, multiple columns may have the same name, such as when mapping to a join of two or more tables that share some column name. include_properties and exclude_properties can also accommodate Column objects to more accurately describe which columns should be included or excluded:

class UserAddress(Base):
    __table__ = user_table.join(addresses_table)
    __mapper_args__ = {
        'exclude_properties' : [addresses_table.c.id],
        'primary_key' : [user_table.c.id]
    }

Note

insert and update defaults configured on individual Column objects, i.e. those described at Column Insert/Update Defaults including those configured by the default, update, server_default and server_onupdate arguments, will continue to function normally even if those Column objects are not mapped. This is because in the case of default and update, the Column object is still present on the underlying Table, thus allowing the default functions to take place when the ORM emits an INSERT or UPDATE, and in the case of server_default and server_onupdate, the relational database itself maintains these functions.

Deferred Column Loading

This feature allows particular columns of a table to be loaded only upon direct access, instead of when the entity is queried using Query. This feature is useful when one wants to avoid loading a large text or binary field into memory when it’s not needed. Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together, using the orm.deferred() function to mark them as “deferred”. In the example below, we define a mapping that will load each of .excerpt and .photo in separate, individual-row SELECT statements when each attribute is first referenced on the individual object instance:

from sqlalchemy.orm import deferred
from sqlalchemy import Integer, String, Text, Binary, Column

class Book(Base):
    __tablename__ = 'book'

    book_id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    summary = Column(String(2000))
    excerpt = deferred(Column(Text))
    photo = deferred(Column(Binary))

Classical mappings as always place the usage of orm.deferred() in the properties dictionary against the table-bound Column:

mapper(Book, book_table, properties={
    'photo':deferred(book_table.c.photo)
})

Deferred columns can be associated with a “group” name, so that they load together when any of them are first accessed. The example below defines a mapping with a photos deferred group. When one .photo is accessed, all three photos will be loaded in one SELECT statement. The .excerpt will be loaded separately when it is accessed:

class Book(Base):
    __tablename__ = 'book'

    book_id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    summary = Column(String(2000))
    excerpt = deferred(Column(Text))
    photo1 = deferred(Column(Binary), group='photos')
    photo2 = deferred(Column(Binary), group='photos')
    photo3 = deferred(Column(Binary), group='photos')

You can defer or undefer columns at the Query level using the orm.defer() and orm.undefer() query options:

from sqlalchemy.orm import defer, undefer

query = session.query(Book)
query.options(defer('summary')).all()
query.options(undefer('excerpt')).all()

And an entire “deferred group”, i.e. which uses the group keyword argument to orm.deferred(), can be undeferred using orm.undefer_group(), sending in the group name:

from sqlalchemy.orm import undefer_group

query = session.query(Book)
query.options(undefer_group('photos')).all()

Column Deferral API

sqlalchemy.orm.deferred(*columns, **kwargs)

Return a DeferredColumnProperty, which indicates that this object attribute should only be loaded from its corresponding table column when first accessed.

Used with the “properties” dictionary sent to mapper().

See also:

Deferred Column Loading

sqlalchemy.orm.defer(*key)

Return a MapperOption that will convert the column property of the given name into a deferred load.

Used with Query.options().

e.g.:

from sqlalchemy.orm import defer

query(MyClass).options(defer("attribute_one"),
                    defer("attribute_two"))

A class bound descriptor is also accepted:

query(MyClass).options(
                    defer(MyClass.attribute_one),
                    defer(MyClass.attribute_two))

A “path” can be specified onto a related or collection object using a dotted name. The orm.defer() option will be applied to that object when loaded:

query(MyClass).options(
                    defer("related.attribute_one"),
                    defer("related.attribute_two"))

To specify a path via class, send multiple arguments:

query(MyClass).options(
                    defer(MyClass.related, MyOtherClass.attribute_one),
                    defer(MyClass.related, MyOtherClass.attribute_two))

See also:

Deferred Column Loading

Parameters:
  • *key – A key representing an individual path. Multiple entries are accepted to allow a multiple-token path for a single target, not multiple targets.

sqlalchemy.orm.undefer(*key)

Return a MapperOption that will convert the column property of the given name into a non-deferred (regular column) load.

Used with Query.options().

e.g.:

from sqlalchemy.orm import undefer

query(MyClass).options(undefer("attribute_one"),
                        undefer("attribute_two"))

A class bound descriptor is also accepted:

query(MyClass).options(
                    undefer(MyClass.attribute_one),
                    undefer(MyClass.attribute_two))

A “path” can be specified onto a related or collection object using a dotted name. The orm.undefer() option will be applied to that object when loaded:

query(MyClass).options(
                    undefer("related.attribute_one"),
                    undefer("related.attribute_two"))

To specify a path via class, send multiple arguments:

query(MyClass).options(
                    undefer(MyClass.related, MyOtherClass.attribute_one),
                    undefer(MyClass.related, MyOtherClass.attribute_two))

See also:

orm.undefer_group() as a means to “undefer” a group of attributes at once.

Deferred Column Loading

Parameters:
  • *key – A key representing an individual path. Multiple entries are accepted to allow a multiple-token path for a single target, not multiple targets.

sqlalchemy.orm.undefer_group(name)

Return a MapperOption that will convert the given group of deferred column properties into a non-deferred (regular column) load.

Used with Query.options().

e.g.:

query(MyClass).options(undefer_group("group_one"))

See also:

Deferred Column Loading

Parameters:
  • name – String name of the deferred group. This name is established using the “group” name given to the orm.deferred() configurational function.

SQL Expressions as Mapped Attributes

Attributes on a mapped class can be linked to SQL expressions, which can be used in queries.

Using a Hybrid

The easiest and most flexible way to link relatively simple SQL expressions to a class is to use a so-called “hybrid attribute”, described in the section Hybrid Attributes. The hybrid provides for an expression that works at both the Python level as well as at the SQL expression level. For example, below we map a class User, containing attributes firstname and lastname, and include a hybrid that will provide for us the fullname, which is the string concatenation of the two:

from sqlalchemy.ext.hybrid import hybrid_property

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))

    @hybrid_property
    def fullname(self):
        return self.firstname + " " + self.lastname

Above, the fullname attribute is interpreted at both the instance and class level, so that it is available from an instance:

some_user = session.query(User).first()
print some_user.fullname

as well as usable within queries:

some_user = session.query(User).filter(User.fullname == "John Smith").first()

The string concatenation example is a simple one, where the Python expression can be dual purposed at the instance and class level. Often, the SQL expression must be distinguished from the Python expression, which can be achieved using hybrid_property.expression(). Below we illustrate the case where a conditional needs to be present inside the hybrid, using the if statement in Python and the sql.expression.case() construct for SQL expressions:

from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.sql import case

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))

    @hybrid_property
    def fullname(self):
        if self.firstname is not None:
            return self.firstname + " " + self.lastname
        else:
            return self.lastname

    @fullname.expression
    def fullname(cls):
        return case([
            (cls.firstname != None, cls.firstname + " " + cls.lastname),
        ], else_ = cls.lastname)

Using column_property

The orm.column_property() function can be used to map a SQL expression in a manner similar to a regularly mapped Column. With this technique, the attribute is loaded along with all other column-mapped attributes at load time. This is in some cases an advantage over the usage of hybrids, as the value can be loaded up front at the same time as the parent row of the object, particularly if the expression is one which links to other tables (typically as a correlated subquery) to access data that wouldn’t normally be available on an already loaded object.

Disadvantages to using orm.column_property() for SQL expressions include that the expression must be compatible with the SELECT statement emitted for the class as a whole, and there are also some configurational quirks which can occur when using orm.column_property() from declarative mixins.

Our “fullname” example can be expressed using orm.column_property() as follows:

from sqlalchemy.orm import column_property

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    fullname = column_property(firstname + " " + lastname)

Correlated subqueries may be used as well. Below we use the select() construct to create a SELECT that links together the count of Address objects available for a particular User:

from sqlalchemy.orm import column_property
from sqlalchemy import select, func
from sqlalchemy import Column, Integer, String, ForeignKey

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'))

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    address_count = column_property(
        select([func.count(Address.id)]).\
            where(Address.user_id==id)
    )

If import issues prevent the column_property() from being defined inline with the class, it can be assigned to the class after both are configured. In Declarative this has the effect of calling Mapper.add_property() to add an additional property after the fact:

User.address_count = column_property(
        select([func.count(Address.id)]).\
            where(Address.user_id==User.id)
    )

For many-to-many relationships, use and_() to join the fields of the association table to both tables in a relation, illustrated here with a classical mapping:

from sqlalchemy import and_

mapper(Author, authors, properties={
    'book_count': column_property(
                        select([func.count(books.c.id)],
                            and_(
                                book_authors.c.author_id==authors.c.id,
                                book_authors.c.book_id==books.c.id
                            )))
    })

Using a plain descriptor

In cases where a SQL query more elaborate than what orm.column_property() or hybrid_property can provide must be emitted, a regular Python function accessed as an attribute can be used, assuming the expression only needs to be available on an already-loaded instance. The function is decorated with Python’s own @property decorator to mark it as a read-only attribute. Within the function, object_session() is used to locate the Session corresponding to the current object, which is then used to emit a query:

from sqlalchemy.orm import object_session
from sqlalchemy import select, func

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))

    @property
    def address_count(self):
        return object_session(self).\
            scalar(
                select([func.count(Address.id)]).\
                    where(Address.user_id==self.id)
            )

The plain descriptor approach is useful as a last resort, but is less performant in the usual case than both the hybrid and column property approaches, in that it needs to emit a SQL query upon each access.

Changing Attribute Behavior

Simple Validators

A quick way to add a “validation” routine to an attribute is to use the validates() decorator. An attribute validator can raise an exception, halting the process of mutating the attribute’s value, or can change the given value into something different. Validators, like all attribute extensions, are only called by normal userland code; they are not issued when the ORM is populating the object:

from sqlalchemy.orm import validates

class EmailAddress(Base):
    __tablename__ = 'address'

    id = Column(Integer, primary_key=True)
    email = Column(String)

    @validates('email')
    def validate_email(self, key, address):
        assert '@' in address
        return address

Validators also receive collection events, when items are added to a collection:

from sqlalchemy.orm import validates

class User(Base):
    # ...

    addresses = relationship("Address")

    @validates('addresses')
    def validate_address(self, key, address):
        assert '@' in address.email
        return address

Note that the validates() decorator is a convenience function built on top of attribute events. An application that requires more control over configuration of attribute change behavior can make use of this system, described at AttributeEvents.
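
For applications that want that level of control, a rough equivalent of the email validator shown above can be expressed directly with the attribute event system. The sketch below assumes the EmailAddress class from the earlier example; the listener name is our own:

from sqlalchemy import event

# a "set" attribute listener receives the target object, the new value,
# the previous value, and the event initiator; with retval=True, the value
# returned by the listener is what actually gets applied to the attribute.
def check_email(target, value, oldvalue, initiator):
    assert '@' in value
    return value

event.listen(EmailAddress.email, 'set', check_email, retval=True)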

sqlalchemy.orm.validates(*names, **kw)

Decorate a method as a ‘validator’ for one or more named properties.

Designates a method as a validator, a method which receives the name of the attribute as well as a value to be assigned, or in the case of a collection, the value to be added to the collection. The function can then raise validation exceptions to halt the process from continuing (where Python’s built-in ValueError and AssertionError exceptions are reasonable choices), or can modify or replace the value before proceeding. The function should otherwise return the given value.

Note that a validator for a collection cannot issue a load of that collection within the validation routine - this usage raises an assertion to avoid recursion overflows. This is a reentrant condition which is not supported.

Parameters:
  • *names – list of attribute names to be validated.
  • include_removes

    if True, “remove” events will be sent as well - the validation function must accept an additional argument “is_remove” which will be a boolean. A brief sketch of this usage follows this parameter list.

    New in version 0.7.7.
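
As a brief sketch of the include_removes usage, building on the collection validator shown earlier (only the extra boolean argument is dictated by the API; the rest is our own illustration):

from sqlalchemy.orm import validates

class User(Base):
    # ...

    addresses = relationship("Address")

    # with include_removes=True, the validator also receives a boolean
    # indicating whether the value is being removed from the collection
    @validates('addresses', include_removes=True)
    def validate_address(self, key, address, is_remove):
        if not is_remove:
            assert '@' in address.email
        return address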

Using Descriptors and Hybrids

A more comprehensive way to produce modified behavior for an attribute is to use descriptors. These are commonly used in Python using the property() function. The standard SQLAlchemy technique for descriptors is to create a plain descriptor, and to have it read/write from a mapped attribute with a different name. Below we illustrate this using Python 2.6-style properties:

class EmailAddress(Base):
    __tablename__ = 'email_address'

    id = Column(Integer, primary_key=True)

    # name the attribute with an underscore,
    # different from the column name
    _email = Column("email", String)

    # then create an ".email" attribute
    # to get/set "._email"
    @property
    def email(self):
        return self._email

    @email.setter
    def email(self, email):
        self._email = email

The approach above will work, but there’s more we can add. While our EmailAddress object will shuttle the value through the email descriptor and into the _email mapped attribute, the class level EmailAddress.email attribute does not have the usual expression semantics usable with Query. To provide these, we instead use the hybrid extension as follows:

from sqlalchemy.ext.hybrid import hybrid_property

class EmailAddress(Base):
    __tablename__ = 'email_address'

    id = Column(Integer, primary_key=True)

    _email = Column("email", String)

    @hybrid_property
    def email(self):
        return self._email

    @email.setter
    def email(self, email):
        self._email = email

The .email attribute, in addition to providing getter/setter behavior when we have an instance of EmailAddress, also provides a SQL expression when used at the class level, that is, from the EmailAddress class directly:

from sqlalchemy.orm import Session
session = Session()

address = session.query(EmailAddress).\
                 filter(EmailAddress.email == 'address@example.com').\
                 one()

address.email = 'otheraddress@example.com'
session.commit()

The hybrid_property also allows us to change the behavior of the attribute, including defining separate behaviors when the attribute is accessed at the instance level versus at the class/expression level, using the hybrid_property.expression() modifier. Such as, if we wanted to add a host name automatically, we might define two sets of string manipulation logic:

class EmailAddress(Base):
    __tablename__ = 'email_address'

    id = Column(Integer, primary_key=True)

    _email = Column("email", String)

    @hybrid_property
    def email(self):
        """Return the value of _email up until the last twelve
        characters."""

        return self._email[:-12]

    @email.setter
    def email(self, email):
        """Set the value of _email, tacking on the twelve character
        value @example.com."""

        self._email = email + "@example.com"

    @email.expression
    def email(cls):
        """Produce a SQL expression that represents the value
        of the _email column, minus the last twelve characters."""

        return func.substr(cls._email, 1, func.length(cls._email) - 12)

Above, accessing the email property of an instance of EmailAddress will return the value of the _email attribute with the hostname @example.com removed, while setting it appends the hostname to the given value. When we query against the email attribute, a SQL function is rendered which produces the same effect:

address = session.query(EmailAddress).filter(EmailAddress.email == 'address').one()

Read more about Hybrids at Hybrid Attributes.

Synonyms

Synonyms are a mapper-level construct that applies expression behavior to a descriptor based attribute.

Changed in version 0.7: The functionality of synonym is superseded as of 0.7 by hybrid attributes.

sqlalchemy.orm.synonym(name, map_column=False, descriptor=None, comparator_factory=None, doc=None)

Denote an attribute name as a synonym to a mapped property.

Changed in version 0.7: synonym() is superseded by the hybrid extension. See the documentation for hybrids at Hybrid Attributes.

Used with the properties dictionary sent to mapper():

class MyClass(object):
    def _get_status(self):
        return self._status
    def _set_status(self, value):
        self._status = value
    status = property(_get_status, _set_status)

mapper(MyClass, sometable, properties={
    "status":synonym("_status", map_column=True)
})

Above, the status attribute of MyClass will produce expression behavior against the table column named status, using the Python attribute _status on the mapped class to represent the underlying value.

Parameters:
  • name – the name of the existing mapped property, which can be any other MapperProperty including column-based properties and relationships.
  • map_column – if True, an additional ColumnProperty is created on the mapper automatically, using the synonym’s name as the keyname of the property, and the keyname of this synonym() as the name of the column to map.
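
The same mapping in Declarative form might look like the sketch below, placing the synonym() inline in the class body; the table and attribute names here are illustrative:

from sqlalchemy.orm import synonym

class MyClass(Base):
    __tablename__ = 'my_table'

    id = Column(Integer, primary_key=True)

    # the column named "status" is mapped under the attribute name "_status"
    _status = Column('status', String(50))

    def _get_status(self):
        return self._status

    def _set_status(self, value):
        self._status = value

    # "status" proxies to "_status" on instances, and produces expression
    # behavior against the "status" column at the class level
    status = synonym('_status', descriptor=property(_get_status, _set_status))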

Custom Comparators

The expressions returned by comparison operations, such as User.name=='ed', can be customized by implementing an object that explicitly defines each comparison method needed.

This is a relatively rare use case which generally applies only to highly customized types. Usually, custom SQL behaviors can be associated with a mapped class by composing together the class’s existing mapped attributes with other expression components, using the techniques described in SQL Expressions as Mapped Attributes. Those approaches should be considered first before resorting to custom comparison objects.

Each of orm.column_property(), composite(), relationship(), and comparable_property() accept an argument called comparator_factory. A subclass of PropComparator can be provided for this argument, which can then reimplement basic Python comparison methods such as __eq__(), __ne__(), __lt__(), and so on.

It’s best to subclass the PropComparator subclass provided by each type of property. For example, to allow a column-mapped attribute to do case-insensitive comparison:

from sqlalchemy.orm import column_property
from sqlalchemy.orm.properties import ColumnProperty
from sqlalchemy import Column, Integer, String
from sqlalchemy.sql import func

class MyComparator(ColumnProperty.Comparator):
    def __eq__(self, other):
        return func.lower(self.__clause_element__()) == func.lower(other)

class EmailAddress(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    email = column_property(
                    Column('email', String),
                    comparator_factory=MyComparator
                )

Above, comparisons on the email column are wrapped in the SQL lower() function to produce case-insensitive matching:

>>> str(EmailAddress.email == 'SomeAddress@foo.com')
lower(address.email) = lower(:lower_1)

When building a PropComparator, the __clause_element__() method should be used in order to acquire the underlying mapped column. This will return a column that is appropriately wrapped in any kind of subquery or aliasing that has been applied in the context of the generated SQL statement.

sqlalchemy.orm.comparable_property(comparator_factory, descriptor=None)

Provides a method of applying a PropComparator to any Python descriptor attribute.

Changed in version 0.7: comparable_property() is superseded by the hybrid extension. See the example at Building Custom Comparators.

Allows any Python descriptor to behave like a SQL-enabled attribute when used at the class level in queries, allowing redefinition of expression operator behavior.

In the example below we redefine PropComparator.operate() to wrap both sides of an expression in func.lower() to produce case-insensitive comparison:

from sqlalchemy.orm import comparable_property
from sqlalchemy.orm.interfaces import PropComparator
from sqlalchemy.sql import func
from sqlalchemy import Integer, String, Column
from sqlalchemy.ext.declarative import declarative_base

class CaseInsensitiveComparator(PropComparator):
    def __clause_element__(self):
        return self.prop

    def operate(self, op, other):
        return op(
            func.lower(self.__clause_element__()),
            func.lower(other)
        )

Base = declarative_base()

class SearchWord(Base):
    __tablename__ = 'search_word'
    id = Column(Integer, primary_key=True)
    word = Column(String)
    word_insensitive = comparable_property(lambda prop, mapper:
                            CaseInsensitiveComparator(mapper.c.word, mapper)
                        )

A mapping like the above allows the word_insensitive attribute to render an expression like:

>>> print SearchWord.word_insensitive == "Trucks"
lower(search_word.word) = lower(:lower_1)

Parameters:
  • comparator_factory – A PropComparator subclass or factory that defines operator behavior for this property.
  • descriptor

    Optional when used in a properties={} declaration. The Python descriptor or property to layer comparison behavior on top of.

    The like-named descriptor will be automatically retrieved from the mapped class if left blank in a properties declaration.

Composite Column Types

Sets of columns can be associated with a single user-defined datatype. The ORM provides a single attribute which represents the group of columns using the class you provide.

Changed in version 0.7: Composites have been simplified such that they no longer “conceal” the underlying column based attributes. Additionally, in-place mutation is no longer automatic; see the section below on enabling mutability to support tracking of in-place changes.

A simple example represents pairs of columns as a Point object. Point represents such a pair as .x and .y:

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __composite_values__(self):
        return self.x, self.y

    def __repr__(self):
        return "Point(x=%r, y=%r)" % (self.x, self.y)

    def __eq__(self, other):
        return isinstance(other, Point) and \
            other.x == self.x and \
            other.y == self.y

    def __ne__(self, other):
        return not self.__eq__(other)

The requirements for the custom datatype class are that it have a constructor which accepts positional arguments corresponding to its column format, and also provides a method __composite_values__() which returns the state of the object as a list or tuple, in order of its column-based attributes. It also should supply adequate __eq__() and __ne__() methods which test the equality of two instances.

We will create a mapping to a table vertice, which represents two points as x1/y1 and x2/y2. These are created normally as Column objects. Then, the composite() function is used to assign new attributes that will represent sets of columns via the Point class:

from sqlalchemy import Column, Integer
from sqlalchemy.orm import composite
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Vertex(Base):
    __tablename__ = 'vertice'

    id = Column(Integer, primary_key=True)
    x1 = Column(Integer)
    y1 = Column(Integer)
    x2 = Column(Integer)
    y2 = Column(Integer)

    start = composite(Point, x1, y1)
    end = composite(Point, x2, y2)

A classical mapping above would define each composite() against the existing table:

mapper(Vertex, vertice_table, properties={
    'start':composite(Point, vertice_table.c.x1, vertice_table.c.y1),
    'end':composite(Point, vertice_table.c.x2, vertice_table.c.y2),
})

We can now persist and use Vertex instances, as well as query for them, using the .start and .end attributes against ad-hoc Point instances:

>>> v = Vertex(start=Point(3, 4), end=Point(5, 6))
>>> session.add(v)
>>> q = session.query(Vertex).filter(Vertex.start == Point(3, 4))
>>> print q.first().start
Point(x=3, y=4)

sqlalchemy.orm.composite(class_, *cols, **kwargs)

Return a composite column-based property for use with a Mapper.

See the mapping documentation section Composite Column Types for a full usage example.

Parameters:
  • class_ – The “composite type” class.
  • *cols – List of Column objects to be mapped.
  • active_history=False

    When True, indicates that the “previous” value for a scalar attribute should be loaded when replaced, if not already loaded. See the same flag on column_property().

    Changed in version 0.7: This flag specifically becomes meaningful - previously it was a placeholder.

  • group – A group name for this property when marked as deferred.
  • deferred – When True, the column property is “deferred”, meaning that it does not load immediately, and is instead loaded when the attribute is first accessed on an instance. See also deferred().
  • comparator_factory – a class which extends CompositeProperty.Comparator which provides custom SQL clause generation for comparison operations.
  • doc – optional string that will be applied as the doc on the class-bound descriptor.
  • extension – an AttributeExtension instance, or list of extensions, which will be prepended to the list of attribute listeners for the resulting descriptor placed on the class. Deprecated. Please see AttributeEvents.

Tracking In-Place Mutations on Composites

In-place changes to an existing composite value are not tracked automatically. Instead, the composite class needs to provide events to its parent object explicitly. This task is largely automated via the usage of the MutableComposite mixin, which uses events to associate each user-defined composite object with all parent associations. Please see the example in Establishing Mutability on Composites.

Changed in version 0.7: No automatic tracking of in-place changes to an existing composite value.
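
As a brief sketch of the pattern described there, the Point class can be written against the MutableComposite mixin so that in-place changes to .x or .y are propagated to all parent objects; the Vertex mapping then uses this class with composite() exactly as before:

from sqlalchemy.ext.mutable import MutableComposite

class Point(MutableComposite):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __setattr__(self, key, value):
        # set the attribute as usual, then alert all parents of the change
        object.__setattr__(self, key, value)
        self.changed()

    def __composite_values__(self):
        return self.x, self.y

    def __eq__(self, other):
        return isinstance(other, Point) and \
            other.x == self.x and \
            other.y == self.y

    def __ne__(self, other):
        return not self.__eq__(other)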

Redefining Comparison Operations for Composites

The “equals” comparison operation by default produces an AND of all corresponding columns equated to one another. This can be changed using the comparator_factory, described in Custom Comparators. Below we illustrate the “greater than” operator, implementing the same expression that the base “greater than” does:

from sqlalchemy.orm.properties import CompositeProperty
from sqlalchemy import sql

class PointComparator(CompositeProperty.Comparator):
    def __gt__(self, other):
        """redefine the 'greater than' operation"""

        return sql.and_(*[a>b for a, b in
                          zip(self.__clause_element__().clauses,
                              other.__composite_values__())])

class Vertex(Base):
    __tablename__ = 'vertice'

    id = Column(Integer, primary_key=True)
    x1 = Column(Integer)
    y1 = Column(Integer)
    x2 = Column(Integer)
    y2 = Column(Integer)

    start = composite(Point, x1, y1,
                        comparator_factory=PointComparator)
    end = composite(Point, x2, y2,
                        comparator_factory=PointComparator)

Mapping a Class against Multiple Tables

Mappers can be constructed against arbitrary relational units (called selectables) in addition to plain tables. For example, the join() function creates a selectable unit comprised of multiple tables, complete with its own composite primary key, which can be mapped in the same way as a Table:

from sqlalchemy import Table, Column, Integer, \
        String, MetaData, join, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import column_property

metadata = MetaData()

# define two Table objects
user_table = Table('user', metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String),
        )

address_table = Table('address', metadata,
            Column('id', Integer, primary_key=True),
            Column('user_id', Integer, ForeignKey('user.id')),
            Column('email_address', String)
            )

# define a join between them.  This
# takes place across the user.id and address.user_id
# columns.
user_address_join = join(user_table, address_table)

Base = declarative_base()

# map to it
class AddressUser(Base):
    __table__ = user_address_join

    id = column_property(user_table.c.id, address_table.c.user_id)
    address_id = address_table.c.id

In the example above, the join expresses columns for both the user and the address table. The user.id and address.user_id columns are equated by foreign key, so in the mapping they are defined as one attribute, AddressUser.id, using column_property() to indicate a specialized column mapping. Based on this part of the configuration, the mapping will copy new primary key values from user.id into the address.user_id column when a flush occurs.

Additionally, the address.id column is mapped explicitly to an attribute named address_id. This is to disambiguate the mapping of the address.id column from the same-named AddressUser.id attribute, which here has been assigned to refer to the user table combined with the address.user_id foreign key.

The natural primary key of the above mapping is the composite of (user.id, address.id), as these are the primary key columns of the user and address table combined together. The identity of an AddressUser object will be in terms of these two values, and is represented from an AddressUser object as (AddressUser.id, AddressUser.address_id).

Mapping a Class against Arbitrary Selects

Similar to mapping against a join, a plain select() object can be used with a mapper as well. The example fragment below illustrates mapping a class called Customer to a select() which includes a join to a subquery:

from sqlalchemy import select, func

subq = select([
            func.count(orders.c.id).label('order_count'),
            func.max(orders.c.price).label('highest_order'),
            orders.c.customer_id
            ]).group_by(orders.c.customer_id).alias()

customer_select = select([customers, subq]).\
            select_from(
                join(customers, subq,
                        customers.c.id == subq.c.customer_id)
            ).alias()

class Customer(Base):
    __table__ = customer_select

Above, the full row represented by customer_select will be all the columns of the customers table, in addition to those columns exposed by the subq subquery, which are order_count, highest_order, and customer_id. Mapping the Customer class to this selectable then creates a class which will contain those attributes.

When the ORM persists new instances of Customer, only the customers table will actually receive an INSERT. This is because the primary key of the orders table is not represented in the mapping; the ORM will only emit an INSERT into a table for which it has mapped the primary key.

Note

The practice of mapping to arbitrary SELECT statements, especially complex ones as above, is almost never needed; it necessarily tends to produce complex queries which are often less efficient than that which would be produced by direct query construction. The practice is to some degree based on the very early history of SQLAlchemy where the mapper() construct was meant to represent the primary querying interface; in modern usage, the Query object can be used to construct virtually any SELECT statement, including complex composites, and should be favored over the “map-to-selectable” approach.

Multiple Mappers for One Class

In modern SQLAlchemy, a particular class is only mapped by one mapper() at a time. The rationale here is that the mapper() modifies the class itself, not only persisting it towards a particular Table, but also instrumenting attributes upon the class which are structured specifically according to the table metadata.

One potential use case for another mapper to exist at the same time is if we wanted to load instances of our class not just from the immediate Table to which it is mapped, but from another selectable that is a derivation of that Table. While there technically is a way to create such a mapper(), using the non_primary=True option, this approach is virtually never needed. Instead, we use the functionality of the Query object to achieve this, using a method such as Query.select_from() or Query.from_statement() to specify a derived selectable.
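
As a rough sketch of that alternative, assuming the User class and the user_table Table from earlier examples, a derived selectable can be handed to an existing Query rather than mapped a second time:

from sqlalchemy import select

# a SELECT derived from the mapped table, selecting only a subset of rows
stmt = select([user_table]).where(user_table.c.name.like('j%'))

# load User instances from the derived statement using the primary mapper
for user in session.query(User).from_statement(stmt):
    print user.name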

Another potential use is if we genuinely want instances of our class to be persisted into different tables at different times; certain kinds of data sharding configurations may persist a particular class into tables that are identical in structure except for their name. For this kind of pattern, Python offers a better approach than the complexity of mapping the same class multiple times, which is to instead create new mapped classes for each target table. SQLAlchemy refers to this as the “entity name” pattern, which is described as a recipe at Entity Name.

Constructors and Object Initialization

Mapping imposes no restrictions or requirements on the constructor (__init__) method for the class. You are free to require any arguments for the function that you wish, assign attributes to the instance that are unknown to the ORM, and generally do anything else you would normally do when writing a constructor for a Python class.

The SQLAlchemy ORM does not call __init__ when recreating objects from database rows. The ORM’s process is somewhat akin to the Python standard library’s pickle module, invoking the low level __new__ method and then quietly restoring attributes directly on the instance rather than calling __init__.

If you need to do some setup on database-loaded instances before they’re ready to use, you can use the @reconstructor decorator to tag a method as the ORM counterpart to __init__. SQLAlchemy will call this method with no arguments every time it loads or reconstructs one of your instances. This is useful for recreating transient properties that are normally assigned in your __init__:

from sqlalchemy import orm

class MyMappedClass(object):
    def __init__(self, data):
        self.data = data
        # we need stuff on all instances, but not in the database.
        self.stuff = []

    @orm.reconstructor
    def init_on_load(self):
        self.stuff = []

When obj = MyMappedClass() is executed, Python calls the __init__ method as normal and the data argument is required. When instances are loaded during a Query operation as in query(MyMappedClass).one(), init_on_load is called.

Any method may be tagged as the reconstructor(), even the __init__ method. SQLAlchemy will call the reconstructor method with no arguments. Scalar (non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush() operation, so the activity within a reconstructor should be conservative.

reconstructor() is a shortcut into a larger system of “instance level” events, which can be subscribed to using the event API - see InstanceEvents for the full API description of these events.
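
For example, a rough equivalent of the reconstructor above, expressed with the event API, might look like the following sketch (the listener name is our own):

from sqlalchemy import event

# the "load" instance event fires each time an instance is loaded from the
# database, analogously to the @orm.reconstructor method shown earlier
def setup_on_load(target, context):
    target.stuff = []

event.listen(MyMappedClass, 'load', setup_on_load)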

sqlalchemy.orm.reconstructor(fn)

Decorate a method as the ‘reconstructor’ hook.

Designates a method as the “reconstructor”, an __init__-like method that will be called by the ORM after the instance has been loaded from the database or otherwise reconstituted.

The reconstructor will be invoked with no arguments. Scalar (non-collection) database-mapped attributes of the instance will be available for use within the function. Eagerly-loaded collections are generally not yet available and will usually only contain the first element. ORM state changes made to objects at this stage will not be recorded for the next flush() operation, so the activity within a reconstructor should be conservative.

Class Mapping API

sqlalchemy.orm.mapper(class_, local_table=None, *args, **params)

Return a new Mapper object.

This function is typically used behind the scenes via the Declarative extension. When using Declarative, many of the usual mapper() arguments are handled by the Declarative extension itself, including class_, local_table, properties, and inherits. Other options are passed to mapper() using the __mapper_args__ class variable:

class MyClass(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True)
    type = Column(String(50))
    alt = Column("some_alt", Integer)

    __mapper_args__ = {
        'polymorphic_on' : type
    }

Explicit use of mapper() is often referred to as classical mapping. The above declarative example is equivalent in classical form to:

my_table = Table("my_table", metadata,
    Column('id', Integer, primary_key=True),
    Column('type', String(50)),
    Column("some_alt", Integer)
)

class MyClass(object):
    pass

mapper(MyClass, my_table,
    polymorphic_on=my_table.c.type,
    properties={
        'alt':my_table.c.some_alt
    })

See also:

Classical Mappings - discussion of direct usage of mapper()

Parameters:
  • class_ – The class to be mapped. When using Declarative, this argument is automatically passed as the declared class itself.
  • local_table – The Table or other selectable to which the class is mapped. May be None if this mapper inherits from another mapper using single-table inheritance. When using Declarative, this argument is automatically passed by the extension, based on what is configured via the __table__ argument or via the Table produced as a result of the __tablename__ and Column arguments present.
  • always_refresh – If True, all query operations for this mapped class will overwrite all data within object instances that already exist within the session, erasing any in-memory changes with whatever information was loaded from the database. Usage of this flag is highly discouraged; as an alternative, see the method Query.populate_existing().
  • allow_null_pks – This flag is deprecated; use allow_partial_pks instead, which defaults to True.
  • allow_partial_pks – Defaults to True. Indicates that a composite primary key with some NULL values should be considered as possibly existing within the database. This affects whether a mapper will assign an incoming row to an existing identity, as well as if Session.merge() will check the database first for a particular primary key value. A “partial primary key” can occur if one has mapped to an OUTER JOIN, for example.
  • batch – Defaults to True, indicating that save operations of multiple entities can be batched together for efficiency. Setting to False indicates that an instance will be fully saved before saving the next instance. This is used in the extremely rare case that a MapperEvents listener requires being called in between individual row persistence operations.
  • column_prefix

    A string which will be prepended to the mapped attribute name when Column objects are automatically assigned as attributes to the mapped class. Does not affect explicitly specified column-based properties.

    See the section Naming All Columns with a Prefix for an example.

  • concrete

    If True, indicates this mapper should use concrete table inheritance with its parent mapper.

    See the section Concrete Table Inheritance for an example.

  • exclude_properties

    A list or set of string column names to be excluded from mapping.

    See Mapping a Subset of Table Columns for an example.

  • extension – A MapperExtension instance or list of MapperExtension instances which will be applied to all operations by this Mapper. Deprecated. Please see MapperEvents.
  • include_properties

    An inclusive list or set of string column names to map.

    See Mapping a Subset of Table Columns for an example.

  • inherits

    A mapped class or the corresponding Mapper of one indicating a superclass to which this Mapper should inherit from. The mapped class here must be a subclass of the other mapper’s class. When using Declarative, this argument is passed automatically as a result of the natural class hierarchy of the declared classes.

    See also:

    Mapping Class Inheritance Hierarchies

  • inherit_condition – For joined table inheritance, a SQL expression which will define how the two tables are joined; defaults to a natural join between the two tables.
  • inherit_foreign_keys – When inherit_condition is used and the columns present are missing a ForeignKey configuration, this parameter can be used to specify which columns are “foreign”. In most cases can be left as None.
  • non_primary

    Specify that this Mapper is in addition to the “primary” mapper, that is, the one used for persistence. The Mapper created here may be used for ad-hoc mapping of the class to an alternate selectable, for loading only.

    The non_primary feature is rarely needed with modern usage.

  • order_by – A single Column or list of Column objects for which selection operations should use as the default ordering for entities. By default mappers have no pre-defined ordering.
  • passive_updates

    Indicates UPDATE behavior of foreign key columns when a primary key column changes on a joined-table inheritance mapping. Defaults to True.

    When True, it is assumed that ON UPDATE CASCADE is configured on the foreign key in the database, and that the database will handle propagation of an UPDATE from a source column to dependent columns on joined-table rows.

    When False, it is assumed that the database does not enforce referential integrity and will not be issuing its own CASCADE operation for an update. The Mapper here will emit an UPDATE statement for the dependent columns during a primary key change.

    See also:

    Mutable Primary Keys / Update Cascades - description of a similar feature as used with relationship()

  • polymorphic_on

    Specifies the column, attribute, or SQL expression used to determine the target class for an incoming row, when inheriting classes are present.

    This value is commonly a Column object that’s present in the mapped Table:

    class Employee(Base):
        __tablename__ = 'employee'
    
        id = Column(Integer, primary_key=True)
        discriminator = Column(String(50))
    
        __mapper_args__ = {
            "polymorphic_on":discriminator,
            "polymorphic_identity":"employee"
        }

    It may also be specified as a SQL expression, as in this example where we use the case() construct to provide a conditional approach:

    class Employee(Base):
        __tablename__ = 'employee'
    
        id = Column(Integer, primary_key=True)
        discriminator = Column(String(50))
    
        __mapper_args__ = {
            "polymorphic_on":case([
                (discriminator == "EN", "engineer"),
                (discriminator == "MA", "manager"),
            ], else_="employee"),
            "polymorphic_identity":"employee"
        }

    It may also refer to any attribute configured with column_property(), or to the string name of one:

    class Employee(Base):
        __tablename__ = 'employee'
    
        id = Column(Integer, primary_key=True)
        discriminator = Column(String(50))
        employee_type = column_property(
            case([
                (discriminator == "EN", "engineer"),
                (discriminator == "MA", "manager"),
            ], else_="employee")
        )
    
        __mapper_args__ = {
            "polymorphic_on":employee_type,
            "polymorphic_identity":"employee"
        }

    Changed in version 0.7.4: polymorphic_on may be specified as a SQL expression, or refer to any attribute configured with column_property(), or to the string name of one.

    When setting polymorphic_on to reference an attribute or expression that’s not present in the locally mapped Table, yet the value of the discriminator should be persisted to the database, the value of the discriminator is not automatically set on new instances; this must be handled by the user, either through manual means or via event listeners. A typical approach to establishing such a listener looks like:

    from sqlalchemy import event
    from sqlalchemy.orm import object_mapper
    
    @event.listens_for(Employee, "init", propagate=True)
    def set_identity(instance, *arg, **kw):
        mapper = object_mapper(instance)
        instance.discriminator = mapper.polymorphic_identity

    Where above, we assign the value of polymorphic_identity for the mapped class to the discriminator attribute, thus persisting the value to the discriminator column in the database.

    See also:

    Mapping Class Inheritance Hierarchies

  • polymorphic_identity

    Specifies the value which identifies this particular class as returned by the column expression referred to by the polymorphic_on setting. As rows are received, the value corresponding to the polymorphic_on column expression is compared to this value, indicating which subclass should be used for the newly reconstructed object.
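
    Continuing the Employee examples above, each inheriting class supplies its own identity value (a minimal single-table sketch; the subclass names are illustrative):

    class Engineer(Employee):
        __mapper_args__ = {
            "polymorphic_identity":"engineer"
        }

    class Manager(Employee):
        __mapper_args__ = {
            "polymorphic_identity":"manager"
        }
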
  • properties – A dictionary mapping the string names of object attributes to MapperProperty instances, which define the persistence behavior of that attribute. Note that Column objects present in the mapped Table are automatically placed into ColumnProperty instances upon mapping, unless overridden. When using Declarative, this argument is passed automatically, based on all those MapperProperty instances declared in the declared class body.
  • primary_key – A list of Column objects which define the primary key to be used against this mapper’s selectable unit. This is normally simply the primary key of the local_table, but can be overridden here.
  • version_id_col – A Column that will be used to keep a running version id of mapped entities in the database. This is used during save operations to ensure that no other thread or process has updated the instance during the lifetime of the entity, else a StaleDataError exception is thrown. By default the column must be of Integer type, unless version_id_generator specifies a new generation algorithm.
  • version_id_generator

    A callable which defines the algorithm used to generate new version ids. Defaults to an integer generator. Can be replaced with one that generates timestamps, uuids, etc. e.g.:

    import uuid
    
    class MyClass(Base):
        __tablename__ = 'mytable'
        id = Column(Integer, primary_key=True)
        version_uuid = Column(String(32))
    
        __mapper_args__ = {
            'version_id_col':version_uuid,
            'version_id_generator':lambda version:uuid.uuid4().hex
        }

    The callable receives the current version identifier as its single argument.

  • with_polymorphic

    A tuple in the form (<classes>, <selectable>) indicating the default style of “polymorphic” loading, that is, which tables are queried at once. <classes> is a single mapper or class, or a list of mappers and/or classes, indicating the inherited classes that should be loaded at once. The special value '*' may be used to indicate that all descending classes should be loaded immediately. The second tuple argument <selectable> indicates a selectable that will be used to query for multiple classes.
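
    For example, to have queries against a base class load all subclass tables at once by default (a minimal sketch, assuming a joined-table Engineer subclass of Employee exists; the names are illustrative):

    class Employee(Base):
        __tablename__ = 'employee'

        id = Column(Integer, primary_key=True)
        type = Column(String(50))

        __mapper_args__ = {
            'polymorphic_on':type,
            'polymorphic_identity':'employee',
            # '*' loads all mapped subclasses; None means the selectable
            # (typically a LEFT OUTER JOIN of the tables) is derived automatically
            'with_polymorphic':('*', None)
        }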

    See also:

    Concrete Table Inheritance - typically uses with_polymorphic to specify a UNION statement to select from.

    Basic Control of Which Tables are Queried - usage example of the related Query.with_polymorphic() method

sqlalchemy.orm.object_mapper(instance)

Given an object, return the primary Mapper associated with the object instance.

Raises UnmappedInstanceError if no mapping is configured.

sqlalchemy.orm.class_mapper(class_, compile=True)

Given a class, return the primary Mapper associated with that class.

Raises UnmappedClassError if no mapping is configured on the given class, or ArgumentError if a non-class object is passed.
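
A brief usage sketch, assuming a mapped class MyClass (the name is illustrative):

from sqlalchemy.orm import class_mapper, object_mapper

m = class_mapper(MyClass)       # the primary Mapper for MyClass
print(m.primary_key)            # collection of primary key Column objects
print(m.columns.keys())         # names of the mapped column attributes

obj = MyClass()
assert object_mapper(obj) is m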

sqlalchemy.orm.compile_mappers()

Initialize the inter-mapper relationships of all mappers that have been defined.

Deprecated since version 0.7: compile_mappers() is renamed to configure_mappers()

sqlalchemy.orm.configure_mappers()

Initialize the inter-mapper relationships of all mappers that have been constructed thus far.

This function can be called any number of times, but in most cases is handled internally.

sqlalchemy.orm.clear_mappers()

Remove all mappers from all classes.

This function removes all instrumentation from classes and disposes of their associated mappers. Once called, the classes are unmapped and can be later re-mapped with new mappers.

clear_mappers() is not for normal use, as there is literally no valid usage for it outside of very specific testing scenarios. Normally, mappers are permanent structural components of user-defined classes, and are never discarded independently of their class. If a mapped class itself is garbage collected, its mapper is automatically disposed of as well. As such, clear_mappers() is only for usage in test suites that re-use the same classes with different mappings, which is itself an extremely rare use case - the only such use case is in fact SQLAlchemy’s own test suite, and possibly the test suites of other ORM extension libraries which intend to test various combinations of mapper construction upon a fixed set of classes.

sqlalchemy.orm.util.identity_key(*args, **kwargs)

Get an identity key.

Valid call signatures:

  • identity_key(class, ident)

    class

    mapped class (must be a positional argument)

    ident

    primary key, if the key is composite this is a tuple

  • identity_key(instance=instance)

    instance

    object instance (must be given as a keyword arg)

  • identity_key(class, row=row)

    class

    mapped class (must be a positional argument)

    row

    result proxy row (must be given as a keyword arg)
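
A brief usage sketch, assuming a mapped class MyClass with a single integer primary key (the names are illustrative); the returned key is a tuple of the class and a tuple of primary key values:

from sqlalchemy.orm.util import identity_key

# from the class and a primary key value
key = identity_key(MyClass, 7)          # (MyClass, (7,))

# from an existing instance, using its current primary key attributes
key = identity_key(instance=some_object)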

sqlalchemy.orm.util.polymorphic_union(table_map, typecolname, aliasname='p_union', cast_nulls=True)

Create a UNION statement used by a polymorphic mapper.

See Concrete Table Inheritance for an example of how this is used.

Parameters:
  • table_map – mapping of polymorphic identities to Table objects.
  • typecolname – string name of a “discriminator” column, which will be derived from the query, producing the polymorphic identity for each row. If None, no polymorphic discriminator is generated.
  • aliasname – name of the alias() construct generated.
  • cast_nulls – if True, non-existent columns, which are represented as labeled NULLs, will be passed into CAST. This is a legacy behavior that is problematic on some backends such as Oracle - in which case it can be set to False.
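
A brief sketch, assuming two concrete Table objects engineers and managers have been defined (the names are illustrative):

from sqlalchemy.orm.util import polymorphic_union

pjoin = polymorphic_union(
    {
        'engineer': engineers,
        'manager': managers
    },
    'type',             # name of the generated discriminator column
    aliasname='pjoin'
)

The resulting selectable is typically passed as the <selectable> member of with_polymorphic on the base mapper of a concrete inheritance hierarchy.
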
class sqlalchemy.orm.mapper.Mapper(class_, local_table, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, inherit_foreign_keys=None, extension=None, order_by=False, always_refresh=False, version_id_col=None, version_id_generator=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, with_polymorphic=None, allow_null_pks=None, allow_partial_pks=True, batch=True, column_prefix=None, include_properties=None, exclude_properties=None, passive_updates=True, eager_defaults=False, _compiled_cache_size=100)

Define the correlation of class attributes to database table columns.

Instances of this class should be constructed via the mapper() function.

__init__(class_, local_table, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, inherit_foreign_keys=None, extension=None, order_by=False, always_refresh=False, version_id_col=None, version_id_generator=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, with_polymorphic=None, allow_null_pks=None, allow_partial_pks=True, batch=True, column_prefix=None, include_properties=None, exclude_properties=None, passive_updates=True, eager_defaults=False, _compiled_cache_size=100)

Construct a new mapper.

Mappers are normally constructed via the mapper() function. See that function for details.

add_properties(dict_of_properties)

Add the given dictionary of properties to this mapper, using add_property.

add_property(key, prop)

Add an individual MapperProperty to this mapper.

If the mapper has not been configured yet, just adds the property to the initial properties dictionary sent to the constructor. If this Mapper has already been configured, then the given MapperProperty is configured immediately.

base_mapper = None

The base-most Mapper in an inheritance chain.

In a non-inheriting scenario, this attribute will always be this Mapper. In an inheritance scenario, it references the Mapper which is parent to all other Mapper objects in the inheritance chain.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

c = None

A synonym for columns.

cascade_iterator(type_, state, halt_on=None)

Iterate each element and its mapper in an object graph, for all relationships that meet the given cascade rule.

Parameters:
  • type_ – The name of the cascade rule (i.e. save-update, delete, etc.)
  • state – The lead InstanceState. child items will be processed per the relationships defined for this object’s mapper.

The return values are object instances; this provides a strong reference so that they don’t fall out of scope immediately.

class_ = None

The Python class which this Mapper maps.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

class_manager = None

The ClassManager which maintains event listeners and class-bound descriptors for this Mapper.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

columns = None

A collection of Column or other scalar expression objects maintained by this Mapper.

The collection behaves the same as that of the c attribute on any Table object, except that only those columns included in this mapping are present, and are keyed based on the attribute name defined in the mapping, not necessarily the key attribute of the Column itself. Additionally, scalar expressions mapped by column_property() are also present here.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

common_parent(other)

Return True if the given mapper shares a common inherited parent with this mapper.

compile()

Initialize the inter-mapper relationships of all mappers that have been constructed thus far.

Deprecated since version 0.7: Mapper.compile() is replaced by configure_mappers()

compiled

Deprecated since version 0.7: Mapper.compiled is replaced by Mapper.configured

concrete = None

Represent True if this Mapper is a concrete inheritance mapper.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

configured = None

Represent True if this Mapper has been configured.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

See also configure_mappers().

get_property(key, _compile_mappers=True)

Return a MapperProperty associated with the given key.

get_property_by_column(column)

Given a Column object, return the MapperProperty which maps this column.

identity_key_from_instance(instance)

Return the identity key for the given instance, based on its primary key attributes.

This value is typically also found on the instance state under the attribute name key.

identity_key_from_primary_key(primary_key)

Return an identity-map key for use in storing/retrieving an item from an identity map.

Parameters:
  • primary_key – A list of values indicating the identifier.

identity_key_from_row(row, adapter=None)

Return an identity-map key for use in storing/retrieving an item from the identity map.

Parameters:
  • row – A sqlalchemy.engine.base.RowProxy instance, or a dictionary mapping result-set ColumnElement instances to their values within a row.

inherits = None

References the Mapper which this Mapper inherits from, if any.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

isa(other)

Return True if this mapper inherits from the given mapper.

iterate_properties

Return an iterator of all MapperProperty objects.

local_table = None

The Selectable which this Mapper manages.

Typically is an instance of Table or Alias. May also be None.

The “local” table is the selectable that the Mapper is directly responsible for managing from an attribute access and flush perspective. For non-inheriting mappers, the local table is the same as the “mapped” table. For joined-table inheritance mappers, local_table will be the particular sub-table of the overall “join” which this Mapper represents. If this mapper is a single-table inheriting mapper, local_table will be None.

See also mapped_table.

mapped_table = None

The Selectable to which this Mapper is mapped.

Typically an instance of Table, Join, or Alias.

The “mapped” table is the selectable that the mapper selects from during queries. For non-inheriting mappers, the mapped table is the same as the “local” table. For joined-table inheritance mappers, mapped_table references the full Join representing full rows for this particular subclass. For single-table inheritance mappers, mapped_table references the base table.

See also local_table.

non_primary = None

Represent True if this Mapper is a “non-primary” mapper, e.g. a mapper that is used only to select rows but not for persistence management.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

polymorphic_identity = None

Represent an identifier which is matched against the polymorphic_on column during result row loading.

Used only with inheritance, this object can be of any type which is comparable to the type of column represented by polymorphic_on.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

polymorphic_iterator()

Iterate through the collection including this mapper and all descendant mappers.

This includes not just the immediately inheriting mappers but all their inheriting mappers as well.

To iterate through an entire hierarchy, use mapper.base_mapper.polymorphic_iterator().
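
A brief sketch, assuming a mapped Employee hierarchy (the name is illustrative):

from sqlalchemy.orm import class_mapper

# collect the polymorphic identity of every mapper in the hierarchy,
# starting from the base-most mapper
identities = [
    m.polymorphic_identity
    for m in class_mapper(Employee).base_mapper.polymorphic_iterator()
]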

polymorphic_map = None

A mapping of “polymorphic identity” identifiers mapped to Mapper instances, within an inheritance scenario.

The identifiers can be of any type which is comparable to the type of column represented by polymorphic_on.

An inheritance chain of mappers will all reference the same polymorphic map object. The object is used to correlate incoming result rows to target mappers.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

polymorphic_on = None

The Column specified as the polymorphic_on column for this Mapper, within an inheritance scenario.

This attribute may also be of other types besides Column in a future SQLAlchemy release.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

primary_key = None

An iterable containing the collection of Column objects which comprise the ‘primary key’ of the mapped table, from the perspective of this Mapper.

This list is against the selectable in mapped_table. In the case of inheriting mappers, some columns may be managed by a superclass mapper. For example, in the case of a Join, the primary key is determined by all of the primary key columns across all tables referenced by the Join.

The list is also not necessarily the same as the primary key column collection associated with the underlying tables; the Mapper features a primary_key argument that can override what the Mapper considers as primary key columns.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

primary_key_from_instance(instance)

Return the list of primary key values for the given instance.

primary_mapper()

Return the primary mapper corresponding to this mapper’s class key (class).

self_and_descendants

The collection including this mapper and all descendant mappers.

This includes not just the immediately inheriting mappers but all their inheriting mappers as well.

single = None

Represent True if this Mapper is a single table inheritance mapper.

local_table will be None if this flag is set.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

tables = None

An iterable containing the collection of Table objects which this Mapper is aware of.

If the mapper is mapped to a Join, or an Alias representing a Select, the individual Table objects that comprise the full construct will be represented here.

This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

validators = None

An immutable dictionary of attributes which have been decorated using the validates() decorator.

The dictionary contains string attribute names as keys mapped to the actual validation method.