SQLAlchemy 0.7 Documentation
- Adjacency List
- Attribute Instrumentation
- Beaker Caching
- Declarative Reflection
- Directed Graphs
- Dynamic Relations as Dictionaries
- Generic Associations
- Horizontal Sharding
- Inheritance Mappings
- Large Collections
- Nested Sets
- Polymorphic Associations
- PostGIS Integration
- Versioned Objects
- Vertical Attribute Mapping
- XML Persistence
The SQLAlchemy distribution includes a variety of code examples illustrating a select set of patterns, some typical and some not so typical. All are runnable and can be found in the /examples directory of the distribution. Each example contains a README in its __init__.py file, each of which is listed below.
Additional SQLAlchemy examples, some user contributed, are available on the wiki at http://www.sqlalchemy.org/trac/wiki/UsageRecipes.
An example of a dictionary-of-dictionaries structure mapped using an adjacency list model.
    node = TreeNode('rootnode')
    node.append('node1')
    node.append('node3')
    session.add(node)
    session.commit()
    dump_tree(node)
Examples illustrating the usage of the “association object” pattern, where an intermediary class mediates the relationship between two classes that are associated in a many-to-many pattern.
This directory includes the following examples:
- basic_association.py - illustrate a many-to-many relationship between an “Order” and a collection of “Item” objects, associating a purchase price with each via an association object called “OrderItem”
- proxied_association.py - same example as basic_association, adding in sqlalchemy.ext.associationproxy to make explicit references to “OrderItem” optional.
- dict_of_sets_with_default.py - an advanced association proxy example which illustrates nesting of association proxies to produce multi-level Python collections, in this case a dictionary with string keys and sets of integers as values, which conceal the underlying mapped classes.
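The association-object pattern itself is independent of the ORM machinery. As a minimal sketch of the underlying schema, using only the stdlib sqlite3 module (the table and column names here are illustrative, not the example files' actual schema), the intermediary row is what carries the extra attribute:

```python
import sqlite3

# an "order" / "item" many-to-many, where the association row
# ("orderitem") carries an extra attribute: the purchase price
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE "order" (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE item (id INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE orderitem (
        order_id INTEGER REFERENCES "order"(id),
        item_id INTEGER REFERENCES item(id),
        price NUMERIC,   -- the attribute that motivates the pattern
        PRIMARY KEY (order_id, item_id)
    );
""")
conn.execute("INSERT INTO \"order\" VALUES (1, 'john smith')")
conn.execute("INSERT INTO item VALUES (1, 'SA T-Shirt'), (2, 'SA Mug')")

# the same item may appear in many orders, at a different price each time
conn.execute("INSERT INTO orderitem VALUES (1, 1, 10.99), (1, 2, 6.50)")

total = conn.execute(
    "SELECT SUM(price) FROM orderitem WHERE order_id = 1"
).fetchone()[0]
print(total)
```

The “OrderItem” class in basic_association.py maps that middle table, so the price becomes an ordinary attribute of the association object.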
Two examples illustrating modifications to SQLAlchemy’s attribute management system.
listen_for_events.py illustrates the usage of
AttributeExtension to intercept attribute events, and additionally illustrates a way to automatically attach these listeners to all class attributes.
custom_management.py illustrates much deeper usage of
InstrumentationManager as well as collection adaptation, to completely change the underlying method used to store state on an object. This example was developed to illustrate techniques which would be used by other third party object instrumentation systems to interact with SQLAlchemy’s event system and is only intended for very intricate framework integrations.
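Outside of SQLAlchemy's own hooks, the basic shape of intercepting attribute events can be sketched in pure Python with a data descriptor; this is only an analogy for what the instrumentation system does, and all names here are illustrative, not part of the SQLAlchemy API:

```python
class InterceptedAttribute:
    """Descriptor that fires a callback whenever the attribute is set."""
    def __init__(self, name, on_set):
        self.name = name
        self.on_set = on_set

    def __get__(self, obj, owner):
        if obj is None:
            return self
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        old = obj.__dict__.get(self.name)
        self.on_set(obj, self.name, old, value)   # the "event"
        obj.__dict__[self.name] = value

events = []

class Widget:
    name = InterceptedAttribute(
        "name",
        lambda obj, key, old, new: events.append((key, old, new)))

w = Widget()
w.name = "spoke"
w.name = "flange"
print(events)  # [('name', None, 'spoke'), ('name', 'spoke', 'flange')]
```

SQLAlchemy's real instrumentation additionally tracks history for flushes, which is why the listen_for_events.py example goes through AttributeExtension rather than plain descriptors.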
Illustrates how to embed Beaker cache functionality within the Query object, allowing full cache control as well as the ability to pull “lazy loaded” attributes from long term cache.
In this demo, the following techniques are illustrated:
- Using custom subclasses of Query
- Basic technique of circumventing Query to pull from a custom cache source instead of the database.
- Rudimental caching with Beaker, using “regions” which allow global control over a fixed set of configurations.
- Using custom MapperOption objects to configure options on a Query, including the ability to invoke the options deep within an object graph when lazy loads occur.
    # query for Person objects, specifying cache
    q = Session.query(Person).options(FromCache("default", "all_people"))

    # specify that each Person's "addresses" collection comes from
    # cache too
    q = q.options(RelationshipCache("default", "by_person", Person.addresses))

    # query
    print q.all()
To run, both SQLAlchemy and Beaker (1.4 or greater) must be installed or on the current PYTHONPATH. The demo will create a local directory for datafiles, insert initial data, and run. Running the demo a second time will utilize the cache files already present, and exactly one SQL statement against two tables will be emitted - the displayed result however will utilize dozens of lazyloads that all pull from cache.
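The "circumventing Query" technique reduces to a check-the-cache-else-run shape. A minimal pure-Python sketch, with a plain dict standing in for a Beaker region (the names here are hypothetical, not the example's actual classes):

```python
# generic "pull from cache, else run the query" pattern
cache = {}
calls = []   # tracks how many times the "database" is actually hit

def run_query(key, creator):
    """Return cached results for `key`, invoking `creator` on a miss."""
    if key not in cache:
        calls.append(key)
        cache[key] = creator()
    return cache[key]

people = run_query("all_people", lambda: ["ed", "wendy", "mary"])
people_again = run_query("all_people", lambda: ["ed", "wendy", "mary"])

assert people == people_again
print(len(calls))  # 1 - the creator ran only once
```

In the example, FromCache plays the role of the key-and-region lookup, and the "creator" is the Query's normal database round trip.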
The demo scripts themselves, in order of complexity, are run as follows:
    python examples/beaker_caching/helloworld.py
    python examples/beaker_caching/relationship_caching.py
    python examples/beaker_caching/advanced.py
    python examples/beaker_caching/local_session_caching.py
Listing of files:
environment.py - Establish the Session, the Beaker cache manager, data / cache file paths, and configurations, bootstrap fixture data if necessary.
caching_query.py - Represent functions and classes which allow the usage of Beaker caching with SQLAlchemy. Introduces a query option called FromCache.
model.py - The datamodel, which represents Person that has multiple Address objects, each with PostalCode, City, Country
fixture_data.py - creates demo PostalCode, Address, Person objects in the database.
helloworld.py - the basic idea.
relationship_caching.py - Illustrates how to add cache options on relationship endpoints, so that lazyloads load from cache.
advanced.py - Further examples of how to use FromCache. Combines techniques from the first two scripts.
local_session_caching.py - Grok everything so far? This example creates a new Beaker container that will persist data in a dictionary which is local to the current session. remove() the session, and the cache is gone.
Illustrates how to mix table reflection with Declarative, such that the reflection process itself can take place after all classes are defined. Declarative classes can also override column definitions loaded from the database.
At the core of this example is the ability to change how Declarative
assigns mappings to classes. The
__mapper_cls__ special attribute
is overridden to provide a function that gathers mapping requirements
as they are established, without actually creating the mapping.
Then, a second class-level method
prepare() is used to iterate
through all mapping configurations collected, reflect the tables
named within and generate the actual mappers.
New in version 0.7.5: This example makes use of the new
autoload_replace flag on
Table to allow declared
classes to override reflected columns.
    Base = declarative_base(cls=DeclarativeReflectedBase)

    class Foo(Base):
        __tablename__ = 'foo'
        bars = relationship("Bar")

    class Bar(Base):
        __tablename__ = 'bar'

        # illustrate overriding of "bar.foo_id" to have
        # a foreign key constraint otherwise not
        # reflected, such as when using MySQL
        foo_id = Column(Integer, ForeignKey('foo.id'))

    Base.prepare(e)

    s = Session(e)
    s.add_all([
        Foo(bars=[Bar(data='b1'), Bar(data='b2')], data='f1'),
        Foo(bars=[Bar(data='b3'), Bar(data='b4')], data='f2')
    ])
    s.commit()
An example of persistence for a directed graph structure. The graph is stored as a collection of edges, each referencing both a “lower” and an “upper” node in a table of nodes. Basic persistence and querying for lower- and upper- neighbors are illustrated:
    n2 = Node(2)
    n5 = Node(5)
    n2.add_neighbor(n5)
    print n2.higher_neighbors()
Dynamic Relations as Dictionaries
Illustrates how to place a dictionary-like facade on top of a “dynamic” relation, so that dictionary operations (assuming simple string keys) can operate upon a large collection without loading the full collection at once.
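The facade idea is independent of the ORM: dictionary operations translate into narrow per-key queries against the backend rather than a full collection load. A minimal pure-Python sketch, where the backend object stands in for a "dynamic" relationship's query (all names here are illustrative):

```python
class KeyedCollectionFacade:
    """Dict-like view over a backend that is queried one key at a time."""
    def __init__(self, backend):
        self._backend = backend   # e.g. a "dynamic" relationship query

    def __getitem__(self, key):
        # translates to a filtered query, not a full collection load
        row = self._backend.fetch_one(key)
        if row is None:
            raise KeyError(key)
        return row

    def __setitem__(self, key, value):
        self._backend.upsert(key, value)

class FakeBackend:
    """Stands in for the database-backed collection."""
    def __init__(self):
        self.rows = {}
        self.full_loads = 0   # would count full-collection loads
    def fetch_one(self, key):
        return self.rows.get(key)
    def upsert(self, key, value):
        self.rows[key] = value

backend = FakeBackend()
d = KeyedCollectionFacade(backend)
d["color"] = "blue"
print(d["color"])  # blue
```

The example applies the same idea on top of a lazy="dynamic" relationship, so each item access issues a small filtered SELECT.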
Illustrates various methods of associating multiple types of parents with a particular child object.
The examples all use the declarative extension along with
declarative mixins. Each one presents the identical use
case at the end - two classes, Customer and Supplier, both
subclassing the HasAddresses mixin, which ensures that the
parent class is provided with an addresses collection
which contains Address objects.
The configurations include:
- table_per_related.py - illustrates a distinct table per related collection.
- table_per_association.py - illustrates a shared collection table, using a table per association.
- discriminator_on_association.py - shared collection table and shared association table, including a discriminator column.
The discriminator_on_association.py script in particular is a modernized
version of the “polymorphic associations” example present in older versions of
SQLAlchemy, originally from the blog post at http://techspot.zzzeek.org/2007/05/29/polymorphic-associations-with-sqlalchemy/.
A basic example of using the SQLAlchemy Sharding API. Sharding refers to horizontally scaling data across multiple databases.
The basic components of a “sharded” mapping are:
- multiple databases, each assigned a ‘shard id’
- a function which can return a single shard id, given an instance to be saved; this is called “shard_chooser”
- a function which can return a list of shard ids which apply to a particular instance identifier; this is called “id_chooser”. If it returns all shard ids, all shards will be searched.
- a function which can return a list of shard ids to try, given a particular Query (“query_chooser”). If it returns all shard ids, all shards will be queried and the results joined together.
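The three chooser callables can be sketched in plain Python; the shard layout and the instance shape here are illustrative, not the example's actual weather-data schema:

```python
# four shards keyed by continent (hypothetical layout)
shards = {"north_america": [], "asia": [], "europe": [], "africa": []}

def shard_chooser(instance):
    """Return the single shard id for an instance being saved."""
    return instance["continent"]

def id_chooser(ident):
    """Return shard ids that may hold a given primary-key identity;
    here the shard is not encoded in the id, so all are candidates."""
    return list(shards)

def query_chooser(query_filter):
    """Return shard ids to run a query against; a recognized filter
    narrows the search, anything else fans out to every shard."""
    if query_filter in shards:
        return [query_filter]
    return list(shards)

city = {"name": "Tokyo", "continent": "asia"}
shards[shard_chooser(city)].append(city)

print(query_chooser("asia"))      # ['asia']
print(len(query_chooser(None)))   # 4 - fan out to all shards
```

The example's query_chooser does the same narrowing by inspecting the SQL expression attached to the Query, rather than a plain string.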
In this example, four sqlite databases will store information about weather data on a database-per-continent basis. We provide example shard_chooser, id_chooser and query_chooser functions. The query_chooser illustrates inspection of the SQL expression element in order to attempt to determine a single shard being requested.
The construction of generic sharding routines is an ambitious approach to the issue of organizing instances among multiple databases. For a more plain-spoken alternative, the “distinct entity” approach is a simple method of assigning objects to different tables (and potentially database nodes) in an explicit way - described on the wiki at EntityName.
Working examples of single-table, joined-table, and concrete-table inheritance as described in datamapping_inheritance.
Large collection example.
Illustrates the options to use with
relationship() when the list of related objects is very large, including:
- “dynamic” relationships which query slices of data as accessed
- how to use ON DELETE CASCADE in conjunction with passive_deletes=True to greatly improve the performance of related collection deletion.
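The database side of the passive_deletes technique can be seen with plain DDL. A minimal sqlite3 sketch (hypothetical parent/child tables): deleting the parent removes the entire child collection inside the database, with no child rows ever loaded into memory:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs this per-connection
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE
    );
""")
conn.execute("INSERT INTO parent VALUES (1)")
conn.executemany("INSERT INTO child (parent_id) VALUES (?)",
                 [(1,)] * 10000)          # a "large collection"

# one DELETE statement; the database cascades to all 10000 child rows
conn.execute("DELETE FROM parent WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM child").fetchone()[0]
print(remaining)  # 0
```

With passive_deletes=True, the ORM issues only that one parent DELETE and trusts the database to do the rest, instead of loading the collection to delete each member individually.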
Illustrates a rudimentary way to implement the “nested sets” pattern for hierarchical data using the SQLAlchemy ORM.
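The essence of the pattern: each node stores "left" and "right" counters taken from a depth-first walk, so that a node's descendants are exactly the rows whose counters fall strictly inside its own. A pure-Python sketch of the numbering (the dict-based tree shape is illustrative, not the example's mapped classes):

```python
def assign_nested_sets(tree, counter=None):
    """Assign (left, right) to each node of {'name': ..., 'children': [...]}
    via a depth-first walk with a single shared counter."""
    if counter is None:
        counter = [0]
    counter[0] += 1
    tree["left"] = counter[0]
    for child in tree["children"]:
        assign_nested_sets(child, counter)
    counter[0] += 1
    tree["right"] = counter[0]

root = {"name": "albert", "children": [
    {"name": "bert", "children": []},
    {"name": "chuck", "children": [
        {"name": "donna", "children": []},
    ]},
]}
assign_nested_sets(root)

chuck = root["children"][1]
donna = chuck["children"][0]
print(root["left"], root["right"])    # 1 8
print(chuck["left"], chuck["right"])  # 4 7

# finding descendants needs no recursion, just a range comparison,
# which is what makes the pattern attractive for SQL
assert chuck["left"] < donna["left"] and donna["right"] < chuck["right"]
```

The SQL query for "all descendants of chuck" is then a single BETWEEN on the left/right columns, at the cost of renumbering rows on insert.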
See Generic Associations for a modern version of polymorphic associations.
A naive example illustrating techniques to help embed PostGIS functionality.
This example was originally developed in the hopes that it would be extrapolated into a comprehensive PostGIS integration layer. We are pleased to announce that this has come to fruition as GeoAlchemy.
The example illustrates:
- a DDL extension which allows CREATE/DROP to work in conjunction with AddGeometryColumn/DropGeometryColumn
- a Geometry type, as well as a few subtypes, which convert result row values to a GIS-aware object, and also integrates with the DDL extension.
- a GIS-aware object which stores a raw geometry value and provides a factory for functions such as AsText().
- an ORM comparator which can override standard column methods on mapped objects to produce GIS operators.
- an attribute event listener that intercepts strings and converts them to GeomFromText().
- a standalone operator example.
The implementation is limited to public, well-known, and simple-to-use extension points.
Illustrates an extension which creates version tables for entities and stores records for each change. The same idea as Elixir’s versioned extension, but more efficient (uses attribute API to get history) and handles class inheritance. The given extensions generate an anonymous “history” class which represents historical versions of the target object.
Usage is illustrated via a unit test module
test_versioning.py, which can
be run via nose:
    cd examples/versioning
    nosetests -v
A fragment of example usage, using declarative:
    from history_meta import Versioned, versioned_session

    Base = declarative_base()

    class SomeClass(Versioned, Base):
        __tablename__ = 'sometable'

        id = Column(Integer, primary_key=True)
        name = Column(String(50))

        def __eq__(self, other):
            assert type(other) is SomeClass and other.id == self.id

    Session = sessionmaker(bind=engine)
    versioned_session(Session)
    sess = Session()

    sc = SomeClass(name='sc1')
    sess.add(sc)
    sess.commit()

    sc.name = 'sc1modified'
    sess.commit()
    assert sc.version == 2

    SomeClassHistory = SomeClass.__history_mapper__.class_

    assert sess.query(SomeClassHistory).\
                filter(SomeClassHistory.version == 1).\
                all() \
                == [SomeClassHistory(version=1, name='sc1')]
The Versioned mixin is designed to work with declarative. To use the extension with
classical mappers, the
_history_mapper function can be applied:
    from history_meta import _history_mapper

    m = mapper(SomeClass, sometable)
    _history_mapper(m)

    SomeHistoryClass = SomeClass.__history_mapper__.class_
Vertical Attribute Mapping
Illustrates “vertical table” mappings.
A “vertical table” refers to a technique where individual attributes of an object are stored as distinct rows in a table. The “vertical table” technique is used to persist objects which can have a varied set of attributes, at the expense of simple query control and brevity. It is commonly found in content/document management systems in order to represent user-created structures flexibly.
Two variants on the approach are given. In the second, each row references a “datatype” which contains information about the type of information stored in the attribute, such as integer, string, or date.
    shrew = Animal(u'shrew')
    shrew[u'cuteness'] = 5
    shrew[u'weasel-like'] = False
    shrew[u'poisonous'] = True

    session.add(shrew)
    session.flush()

    q = (session.query(Animal).
         filter(Animal.facts.any(
             and_(AnimalFact.key == u'weasel-like',
                  AnimalFact.value == True))))
    print 'weasel-like animals', q.all()
Illustrates three strategies for persisting and querying XML documents as represented by ElementTree in a relational database. The techniques do not apply any mappings to the ElementTree objects directly, so are compatible with the native cElementTree as well as lxml, and can be adapted to suit any kind of DOM representation system. Querying along xpath-like strings is illustrated as well.
In order of complexity:
- pickle.py - Quick and dirty, serialize the whole DOM into a BLOB column. While the example is very brief, it has very limited functionality.
- adjacency_list.py - Each DOM node is stored in an individual table row, with attributes represented in a separate table. The nodes are associated in a hierarchy using an adjacency list structure. A query function is introduced which can search for nodes along any path with a given structure of attributes, basically a (very narrow) subset of xpath.
- optimized_al.py - Uses the same strategy as adjacency_list.py, but associates each DOM row with its owning document row, so that a full document of DOM nodes can be loaded using O(1) queries - the construction of the “hierarchy” is performed after the load in a non-recursive fashion and is much more efficient.
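The non-recursive "hierarchy after the load" step can be sketched in pure Python: given all of a document's node rows from a single query, link children to parents in one pass over an id map (the row shape here is hypothetical, not the example's actual tables):

```python
def build_tree(rows):
    """rows: (node_id, parent_id, tag) tuples for one document,
    as fetched by a single query. Returns the root node as a dict."""
    nodes = {nid: {"tag": tag, "children": []} for nid, _, tag in rows}
    root = None
    for nid, parent_id, _ in rows:      # single non-recursive pass
        if parent_id is None:
            root = nodes[nid]
        else:
            nodes[parent_id]["children"].append(nodes[nid])
    return root

rows = [
    (1, None, "somefile"),
    (2, 1, "header"),
    (3, 2, "field1"),
    (4, 2, "field2"),
]
doc = build_tree(rows)
print(doc["tag"])  # somefile
print([c["tag"] for c in doc["children"][0]["children"]])
```

Because the parent lookup is a dictionary access rather than a per-node query, loading a whole document stays at a constant number of SQL statements regardless of tree depth.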
    # parse an XML file and persist in the database
    doc = ElementTree.parse("test.xml")
    session.add(Document(file, doc))
    session.commit()

    # locate documents with a certain path/attribute structure
    for document in find_document('/somefile/header/field2[@attr=foo]'):
        # dump the XML
        print document