ORM Utilities¶
These modules provide higher-level abstractions on top of Peewee’s core ORM and work with any database backend.
Shortcuts¶
playhouse.shortcuts provides helpers for serializing model instances to
and from dictionaries, resolving compound queries, and thread-safe database
swapping.
Model Serialization¶
- model_to_dict(model, recurse=True, backrefs=False, only=None, exclude=None, extra_attrs=None, fields_from_query=None, max_depth=None, manytomany=False)¶
Convert a model instance to a dictionary.
- Parameters:
recurse (bool) – Follow foreign keys and include the related object as a nested dict (default: True).
backrefs (bool) – Follow back-references and include related collections as nested lists of dicts.
only – A list or set of field instances to include exclusively.
exclude – A list or set of field instances to exclude.
extra_attrs – A list of attribute or method names to include in the output dict.
fields_from_query (Select) – Restrict serialization to only the fields that were explicitly selected in the generating query.
max_depth (int) – Maximum depth when following relations.
manytomany (bool) – Include many-to-many fields.
Examples:
user = User.create(username='alice')
model_to_dict(user)
# {'id': 1, 'username': 'alice'}

model_to_dict(user, backrefs=True)
# {'id': 1, 'username': 'alice', 'tweets': []}

t = Tweet.create(user=user, content='hello')
model_to_dict(t)
# {'id': 1, 'content': 'hello', 'user': {'id': 1, 'username': 'alice'}}

model_to_dict(t, recurse=False)
# {'id': 1, 'content': 'hello', 'user': 1}

model_to_dict(user, backrefs=True)
# {'id': 1, 'tweets': [{'id': 1, 'content': 'hello'}], 'username': 'alice'}
Note
If your use case is unusual, write a small custom function rather than trying to coerce
model_to_dict with a complex combination of parameters.
- dict_to_model(model_class, data, ignore_unknown=False)¶
Construct a model instance from a dictionary. Foreign keys may be provided as nested dicts; back-references as lists of dicts.
- Parameters:
model_class (Model) – The model class to construct.
data (dict) – A dictionary of data. Foreign keys can be included as nested dictionaries, and back-references as lists of dictionaries.
ignore_unknown (bool) – Allow keys that do not correspond to any field on the model.
user = dict_to_model(User, {'id': 1, 'username': 'alice'})
user.username  # 'alice'

# Nested foreign key:
tweet = dict_to_model(Tweet, {
    'id': 1,
    'content': 'hi',
    'user': {'id': 1, 'username': 'alice'}})
tweet.user.username  # 'alice'
- update_model_from_dict(instance, data, ignore_unknown=False)¶
Update an existing model instance with values from a dictionary. Follows the same rules as dict_to_model().
- Parameters:
instance (Model) – The model instance to update.
data (dict) – A dictionary of data. Foreign keys can be included as nested dictionaries, and back-references as lists of dictionaries.
ignore_unknown (bool) – Allow keys that do not correspond to any field on the model.
Compound Query Resolution¶
- resolve_multimodel_query(query, key='_model_identifier')¶
Resolve rows from a compound UNION (or similar) query to the correct model class. Useful when two tables are unioned and you need each row as an instance of the appropriate model.
- Parameters:
query – A compound SelectQuery.
key (str) – Name of the column used to identify the model.
- Returns:
An iterable that yields properly typed model instances.
Thread-Safe Database Swapping¶
- class ThreadSafeDatabaseMetadata¶
Model Metadata implementation that enables the database attribute to be safely changed in a multi-threaded application. Use this when your application may swap the active database (e.g. primary / read replica) at runtime across threads:
from playhouse.shortcuts import ThreadSafeDatabaseMetadata

primary = PostgresqlDatabase('main')
replica = PostgresqlDatabase('replica')

class BaseModel(Model):
    class Meta:
        database = primary
        model_metadata_class = ThreadSafeDatabaseMetadata

# Safe to do at runtime from any thread:
BaseModel._meta.database = replica
Pydantic Integration¶
playhouse.pydantic_utils generates Pydantic v2
models from Peewee Model classes using the to_pydantic()
function.
Example¶
import datetime
from peewee import *
from playhouse.pydantic_utils import to_pydantic
db = SqliteDatabase(':memory:')
class User(db.Model):
    name = CharField(verbose_name='Full Name', help_text='Display name')
    age = IntegerField()
    active = BooleanField(default=True)
    bio = TextField(null=True)
    status = CharField(
        verbose_name='Status',
        help_text='Record status',
        choices=[
            ('active', 'Active'),
            ('archived', 'Archived'),
            ('deleted', 'Deleted'),
        ])
    created = DateTimeField(default=datetime.datetime.now)
# Generate a Pydantic model in one call:
UserSchema = to_pydantic(User)
UserSchema is a standard Pydantic BaseModel. You can validate data,
serialize instances, or populate instances from user data:
# Validate a dict (e.g. from an HTTP request body).
data = UserSchema.model_validate({'name': 'Huey', 'age': 14, 'status': 'active'})
print(data.model_dump())
# {'name': 'Huey', 'age': 14, 'active': True, 'bio': None,
#  'status': 'active', 'created': datetime.datetime(...)}
# Populate an instance from the validated data.
user = User(**data.model_dump())
# Validate directly from a Peewee model instance:
huey = User.create(name='Huey', age=14, status='active')
data = UserSchema.model_validate(huey)
How field metadata is mapped¶
to_pydantic() reads the metadata you already set on your Peewee fields and
translates it into the Pydantic equivalents:
Peewee attribute | Pydantic effect
---|---
choices | The generated field uses a Literal type restricted to the declared choice values.
default | Sets the generated field's default value.
null=True | Wraps the type in Optional[...] with a default of None.
verbose_name | Becomes the field's title in the JSON schema.
help_text | Becomes the field's description in the JSON schema.
Fields with no default and null=False (default) are required in the
generated Pydantic model.
Field type mapping¶
Peewee field types are mapped to a Python type that Pydantic uses for validation.
Peewee field | Python type
---|---
AutoField, BigAutoField | int
IntegerField, BigIntegerField, SmallIntegerField | int
FloatField, DoubleField | float
DecimalField | decimal.Decimal
CharField, FixedCharField, TextField | str
BlobField | bytes
BooleanField | bool
DateTimeField | datetime.datetime
DateField | datetime.date
TimeField | datetime.time
UUIDField | uuid.UUID
ForeignKeyField | type of related PK
AutoField and BigAutoField are excluded from the generated schema by default (exclude_autofield=True); they can be included by passing exclude_autofield=False.
ForeignKeyField resolves through the related model’s primary-key field, so
a foreign key to a model with an AutoField PK becomes int. This is
overridden when you provide a nested schema via the relationships
parameter.
Any field whose field_type is not present in the map falls back to Any, which means Pydantic will accept any value without validation. If you use custom field types and want strict validation, ensure they set a recognized field_type or handle the conversion yourself.
When a field has choices defined, the mapped Python type above is
replaced by a Literal constrained to the choice values, regardless of
the underlying field type.
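Conceptually, the resulting annotation behaves like a hand-written Literal; a minimal pure-Pydantic sketch of the equivalent:

```python
from typing import Literal

from pydantic import BaseModel, ValidationError

# Roughly what a CharField(choices=[('active', ...), ('archived', ...),
# ('deleted', ...)]) becomes after conversion:
class StatusSchema(BaseModel):
    status: Literal['active', 'archived', 'deleted']

print(StatusSchema(status='active').status)  # 'active'

try:
    StatusSchema(status='bogus')
    rejected = False
except ValidationError:
    rejected = True
print(rejected)  # True
```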
API reference¶
- to_pydantic(model_cls, exclude=None, include=None, exclude_autofield=True, model_name=None, relationships=None, base_model=None)¶
Generate a Pydantic BaseModel class from a Peewee model.
- Parameters:
model_cls (Model) – Peewee model class.
exclude (set or list) – Field names to exclude from the generated schema.
include (set or list) – If provided, only these field names will appear in the generated schema. All other fields are excluded.
exclude_autofield (bool) – When True (the default), the auto-incrementing primary-key field is omitted from the schema. Set to False when you need the id field in responses.
model_name (str) – Name for the generated Pydantic class. Defaults to <ModelName>Schema.
relationships (dict) – A mapping that tells to_pydantic how to handle foreign-key or back-reference fields as nested Pydantic models instead of flat scalar values. See Nested relationships below.
base_model – User-provided subclass of Pydantic BaseModel to use as the base class for the generated model.
- Returns:
A Pydantic BaseModel subclass configured with from_attributes=True.
Generate a Pydantic model for the given Peewee model_cls. The generated model will preserve Peewee field metadata:
choices - restrict the acceptable values for the field.
default - provide a default value for the field.
verbose_name - provide a human-readable title for the field.
help_text - provide a human-readable description for the field.
null - control whether the field is optional or required.
Foreign-key fields are exposed using the underlying column name, and accept a scalar value unless you specify the schema for the relation using the relationships parameter. See below for an example.
Foreign-key handling¶
By default, foreign-key fields are exposed using their underlying column name
(e.g. user_id rather than user) and accept a plain scalar value, typically
an integer primary key. This keeps the schema flat and is a good fit when you
are accepting input data:
class Tweet(db.Model):
    user = ForeignKeyField(User, backref='tweets')
    content = TextField()
    timestamp = DateTimeField(default=datetime.datetime.now)
    is_published = BooleanField(default=True)
TweetSchema = to_pydantic(Tweet)
# The schema exposes the column name "user_id", not "user":
data = TweetSchema.model_validate({'user_id': 1, 'content': 'hello'})
print(data.model_dump())
# {'user_id': 1,
# 'content': 'hello',
# 'timestamp': datetime.datetime(...),
# 'is_published': True}
# Works when validating from a model instance too:
tweet = Tweet.create(user=huey, content='hello')
data = TweetSchema.model_validate(tweet)
print(data.model_dump())
# {'user_id': 1,
# 'content': 'hello',
# 'timestamp': datetime.datetime(...),
# 'is_published': True}
Nested relationships¶
When you wish to embed the related object rather than just its ID, pass a
relationships dict that maps a Peewee ForeignKeyField
(or backref) to the Pydantic schema that should be used for the nested object.
Nested foreign key
# Include the id field so it appears in the response.
UserSchema = to_pydantic(User, exclude_autofield=False)
TweetResponse = to_pydantic(
    Tweet,
    exclude_autofield=False,
    relationships={Tweet.user: UserSchema})
tweet = Tweet.create(user=huey, content='hello')
data = TweetResponse.model_validate(tweet)
print(data.model_dump())
# {'id': 1,
# 'content': 'hello',
# 'user': {'id': 1, 'name': 'Huey', 'age': 14, ...},
# 'timestamp': datetime.datetime(...),
# 'is_published': True}
Note
Validating from a model instance will access tweet.user, which triggers
a SELECT query if the relation is not already loaded. To avoid the extra
query, use a join:
tweet = (Tweet
         .select(Tweet, User)
         .join(User)
         .get())
data = TweetResponse.model_validate(tweet) # No additional query.
Nested back-references
Back-references work the same way, but the schema must be wrapped in
List[...] since back-references may contain 0..n records.
from typing import List
# Exclude the "user" FK from the tweet schema to avoid circular nesting.
TweetResponse = to_pydantic(Tweet, exclude={'user'}, exclude_autofield=False)
UserDetail = to_pydantic(
    User,
    exclude_autofield=False,
    relationships={User.tweets: List[TweetResponse]})
user = User.create(name='Huey', age=14, status='active')
Tweet.create(user=user, content='tweet 0')
Tweet.create(user=user, content='tweet 1')
data = UserDetail.model_validate(user)
print(data.model_dump())
# {'id': 1, 'name': 'Huey', ...,
# 'tweets': [{'id': 1, 'content': 'tweet 0', ...},
# {'id': 2, 'content': 'tweet 1', ...}]}
Note
As with foreign keys, accessing a back-reference triggers a query. Use
prefetch() to load the collection up front:
users = (User
         .select()
         .where(User.id == 123)
         .prefetch(Tweet))
data = UserDetail.model_validate(users[0]) # No additional query.
JSON schema output¶
Because the generated class is a regular Pydantic model, you can call
model_json_schema() to get a JSON-schema dict suitable for OpenAPI docs:
import json
print(json.dumps(UserSchema.model_json_schema(), indent=2))
{
  "properties": {
    "name": {
      "description": "Display name",
      "title": "Full Name",
      "type": "string"
    },
    "age": {
      "title": "Age",
      "type": "integer"
    },
    "active": {
      "default": true,
      "title": "Active",
      "type": "boolean"
    },
    "bio": {
      "anyOf": [{"type": "string"}, {"type": "null"}],
      "default": null,
      "title": "Bio"
    },
    "status": {
      "description": "Record status | Choices: 'active' = Active, 'archived' = Archived, 'deleted' = Deleted",
      "enum": ["active", "archived", "deleted"],
      "title": "Status",
      "type": "string"
    },
    "created": {
      "format": "date-time",
      "title": "Created",
      "type": "string"
    }
  },
  "required": ["name", "age", "status"],
  "title": "UserSchema",
  "type": "object"
}
Note that name, age, and status are the only required fields. All
other fields have defaults (active defaults to True, bio defaults
to None, and created uses a default_factory).
Hybrid Attributes¶
A hybrid attribute behaves differently depending on whether it is accessed on a model instance (executes Python logic) or on the model class (generates a SQL expression). This lets you write Python methods that work both as Python computations and as composable SQL clauses.
The concept is borrowed from SQLAlchemy’s hybrid extension.
from playhouse.hybrid import hybrid_property, hybrid_method
class Interval(Model):
    start = IntegerField()
    end = IntegerField()

    @hybrid_property
    def length(self):
        return self.end - self.start

    @hybrid_method
    def contains(self, point):
        return (self.start <= point) & (point < self.end)
On an instance, Python arithmetic runs:
i = Interval(start=1, end=5)
i.length # 4 (Python arithmetic)
i.contains(3) # True (Python comparison)
On the class, SQL is generated:
Interval.select().where(Interval.length > 5)
# WHERE ("end" - "start") > 5
Interval.select().where(Interval.contains(2))
# WHERE ("start" <= 2) AND (2 < "end")
When the Python and SQL implementations differ, provide a separate
expression override:
class Interval(Model):
    start = IntegerField()
    end = IntegerField()

    @hybrid_property
    def radius(self):
        return abs(self.length) / 2  # Python: uses Python abs()

    @radius.expression
    def radius(cls):
        return fn.ABS(cls.length) / 2  # SQL: uses fn.ABS()
Example:
query = Interval.select().where(Interval.radius < 3)
This query is equivalent to the following SQL:
SELECT "t1"."id", "t1"."start", "t1"."end"
FROM "interval" AS t1
WHERE ((abs("t1"."end" - "t1"."start") / 2) < 3)
- class hybrid_property(fget, fset=None, fdel=None, expr=None)¶
Decorator for defining a property with separate instance and class behaviors. Use @prop.expression to specify the SQL form when it differs from the Python form.
Examples:
class Interval(Model):
    start = IntegerField()
    end = IntegerField()

    @hybrid_property
    def length(self):
        return self.end - self.start

    @hybrid_property
    def radius(self):
        return abs(self.length) / 2

    @radius.expression
    def radius(cls):
        return fn.ABS(cls.length) / 2
When accessed on an Interval instance, the length and radius properties will behave as you would expect. When accessed as class attributes, though, a SQL expression will be generated instead:
query = (Interval
         .select()
         .where(
             (Interval.length > 6) &
             (Interval.radius >= 3)))
Would generate the following SQL:
SELECT "t1"."id", "t1"."start", "t1"."end"
FROM "interval" AS t1
WHERE (
    (("t1"."end" - "t1"."start") > 6) AND
    ((abs("t1"."end" - "t1"."start") / 2) >= 3)
)
- class hybrid_method(func, expr=None)¶
Decorator for defining a method with separate instance and class behaviors. Use @method.expression to specify the SQL form.
Example:
class Interval(Model):
    start = IntegerField()
    end = IntegerField()

    @hybrid_method
    def contains(self, point):
        return (self.start <= point) & (point < self.end)
When called with an Interval instance, the contains method will behave as you would expect. When called as a classmethod, though, a SQL expression will be generated:
query = Interval.select().where(Interval.contains(2))
Would generate the following SQL:
SELECT "t1"."id", "t1"."start", "t1"."end"
FROM "interval" AS t1
WHERE (("t1"."start" <= 2) AND (2 < "t1"."end"))
Key/Value Store¶
playhouse.kv.KeyValue provides a persistent dictionary backed by a Peewee
database instance.
from playhouse.kv import KeyValue
KV = KeyValue() # Defaults to an in-memory SQLite database.
KV['k1'] = 'v1'
KV.update(k2='v2', k3='v3')
assert KV['k2'] == 'v2'
print(dict(KV)) # {'k1': 'v1', 'k2': 'v2', 'k3': 'v3'}
# Expression-based access:
for value in KV[KV.key > 'k1']:
print(value) # 'v2', 'v3'
# Expression-based bulk update:
KV[KV.key > 'k1'] = 'updated'
# Expression-based deletion:
del KV[KV.key > 'k1']
- class KeyValue(key_field=None, value_field=None, ordered=False, database=None, table_name='keyvalue')¶
- Parameters:
key_field (Field) – Field to use for the key. Defaults to CharField. Must specify primary_key=True.
value_field (Field) – Field to use for the value. Defaults to PickleField.
ordered (bool) – Return keys in sorted order when iterating.
database (Database) – Database to use. Defaults to an in-memory SQLite database.
table_name (str) – Name of the underlying table.
The table is created automatically on construction if it does not exist. Supports the standard dictionary interface plus expression-based access.
- __contains__(expr)¶
- Parameters:
expr – a single key or an expression
- Returns:
Boolean whether key/expression exists.
Example:
kv = KeyValue()
kv.update(k1='v1', k2='v2')

'k1' in kv  # True
'kx' in kv  # False

(kv.key < 'k2') in kv  # True
(kv.key > 'k2') in kv  # False
- __len__()¶
- Returns:
Count of items stored.
- __getitem__(expr)¶
- Parameters:
expr – a single key or an expression.
- Returns:
value(s) corresponding to key/expression.
- Raises:
KeyError if a single key is given and not found.
Examples:
KV = KeyValue()
KV.update(k1='v1', k2='v2', k3='v3')

KV['k1']  # 'v1'
KV['kx']  # KeyError: "kx" not found
KV[KV.key > 'k1']  # ['v2', 'v3']
KV[KV.key < 'k1']  # []
- __setitem__(expr, value)¶
- Parameters:
expr – a single key or an expression.
value – value to set for key(s)
Set the value for the given key. If expr is an expression, then any keys matching the expression will have their value updated.
Example:
KV = KeyValue()
KV.update(k1='v1', k2='v2', k3='v3')

KV['k1'] = 'v1-x'
print(KV['k1'])  # 'v1-x'

KV[KV.key >= 'k2'] = 'v99'
print(dict(KV))
# {'k1': 'v1-x', 'k2': 'v99', 'k3': 'v99'}
- __delitem__(expr)¶
- Parameters:
expr – a single key or an expression.
Delete the given key. If an expression is given, delete all keys that match the expression.
Example:
KV = KeyValue()
KV.update(k1=1, k2=2, k3=3)

del KV['k1']  # Deletes "k1".
del KV['k1']  # KeyError: "k1" does not exist
del KV[KV.key > 'k2']   # Deletes "k3".
del KV[KV.key > 'k99']  # Nothing deleted, no keys match.
- keys()¶
- Returns:
an iterable of all keys in the table.
- values()¶
- Returns:
an iterable of all values in the table.
- items()¶
- Returns:
an iterable of all key/value pairs in the table.
- update(__data=None, **mapping)¶
Efficiently bulk-insert or replace the given key/value pairs.
Example:
KV = KeyValue()

KV.update(k1=1, k2=2)  # Sets 'k1'=1, 'k2'=2.
print(dict(KV))  # {'k1': 1, 'k2': 2}

KV.update(k2=22, k3=3)  # Updates 'k2'->22, sets 'k3'=3.
print(dict(KV))  # {'k1': 1, 'k2': 22, 'k3': 3}

KV.update({'k2': -2, 'k4': 4})  # Can also pass a dictionary.
print(dict(KV))  # {'k1': 1, 'k2': -2, 'k3': 3, 'k4': 4}
- get(expr, default=None)¶
- Parameters:
expr – a single key or an expression.
default – default value if key not found.
- Returns:
value of given key/expr or default if single key not found.
Get the value at the given key. If the key does not exist, the default value is returned, unless the key is an expression in which case an empty list will be returned.
- pop(expr, default=Sentinel)¶
- Parameters:
expr – a single key or an expression.
default – default value if key does not exist.
- Returns:
value of given key/expr or default if single key not found.
Get value and delete the given key. If the key does not exist, the default value is returned, unless the key is an expression in which case an empty list is returned.
- clear()¶
Remove all items from the key-value table.
Signals¶
playhouse.signals adds Django-style model lifecycle signals. Models must
subclass playhouse.signals.Model (not peewee.Model) for hooks to fire.
from playhouse.signals import Model, post_save
class MyModel(Model):
    data = IntegerField()

    class Meta:
        database = db

@post_save(sender=MyModel)
def on_save(model_class, instance, created):
    if created:
        notify_new(instance)
The following signals are provided:
pre_save
Called immediately before an object is saved to the database. Provides an additional keyword argument created, indicating whether the model is being saved for the first time or updated.
post_save
Called immediately after an object is saved to the database. Provides an additional keyword argument created, indicating whether the model is being saved for the first time or updated.
pre_delete
Called immediately before an object is deleted from the database when Model.delete_instance() is used.
post_delete
Called immediately after an object is deleted from the database when Model.delete_instance() is used.
pre_init
Called when a model class is first instantiated.
Warning
Signals fire only through the high-level instance methods
(save(), delete_instance()). Bulk
operations via insert(), update(), and
delete() do not trigger signals because no model instance
is involved.
Connecting handlers¶
Whenever a signal is dispatched, it will call any handlers that have been registered. This allows totally separate code to respond to events like model save and delete.
The Signal class provides a connect() method,
which takes a callback function and two optional parameters for “sender” and
“name”. If specified, the “sender” parameter should be a single model class
and allows your callback to only receive signals from that one model class.
The “name” parameter is used as a convenient alias in the event you wish to
unregister your signal handler.
Example:
@post_save(sender=MyModel, name='project.cache_buster')
def cache_bust(sender, instance, created):
    cache.delete(make_cache_key(instance))
Or connect manually:
def on_delete(sender, instance):
    audit_log(instance)

pre_delete.connect(on_delete, sender=MyModel)
Disconnect by name or reference:
post_save.disconnect(name='project.cache_buster')
pre_delete.disconnect(on_delete)
Signal callback signature:
pre_init(sender, instance)
pre_save(sender, instance, created)
post_save(sender, instance, created)
pre_delete(sender, instance)
post_delete(sender, instance)
- class Signal¶
Stores a list of receivers (callbacks) and calls them when the “send” method is invoked.
- connect(receiver, name=None, sender=None)¶
- Parameters:
receiver (callable) – a callable that takes at least two parameters, a “sender”, which is the Model subclass that triggered the signal, and an “instance”, which is the actual model instance.
name (string) – a short alias
sender (Model) – if specified, only instances of this model class will trigger the receiver callback.
Add the receiver to the internal list of receivers, which will be called whenever the signal is sent.
from playhouse.signals import post_save
from project.handlers import cache_buster

post_save.connect(cache_buster, name='project.cache_buster')
- disconnect(receiver=None, name=None, sender=None)¶
- Parameters:
receiver (callable) – the callback to disconnect.
name (string) – a short alias.
sender (Model) – disconnect a model-specific handler.
Disconnect the given receiver (or the receiver with the given name alias) so that it no longer is called. Either the receiver or the name must be provided.
post_save.disconnect(name='project.cache_buster')
- send(instance, *args, **kwargs)¶
- Parameters:
instance – a model instance
Iterates over the receivers and will call them in the order in which they were connected. If the receiver specified a sender, it will only be called if the instance is an instance of the sender.
DataSet¶
playhouse.dataset exposes a dict-oriented API for relational data, modeled
after the dataset library. It is useful
for quick scripts, data loading, and CSV/JSON import-export.
Basic operations:
from playhouse.dataset import DataSet
db = DataSet('sqlite:///data.db')
# Access a table (created automatically if it doesn't exist):
users = db['user']
# Insert rows with any columns:
users.insert(name='Alice', age=30)
users.insert(name='Bob', age=25, active=True) # New column added automatically.
# Retrieve rows:
alice = users.find_one(name='Alice')
print(alice) # {'id': 1, 'name': 'Alice', 'age': 30, 'active': None}
for user in users:
    print(user['name'])

for admin in users.find(active=True):
    print(admin['name'])  # Bob.
# Update:
users.update(name='Alice', age=31, columns=['name']) # 'name' is the lookup.
# Update all records:
users.update(admin=False)
# Delete:
users.delete(name='Bob')
Export and import data:
# Export to JSON:
db.freeze(users.all(), format='json', filename='users.json')
# Export CSV to stdout:
db.freeze(users.all(), format='csv', file_obj=sys.stdout)
# Import from CSV:
db.thaw('user', format='csv', filename='import.csv')
# Import a JSON file to a new table.
db.thaw('new_table', format='json', filename='json-data.json')
Transactions:
with db.transaction() as txn:
    users.insert(name='Charlie')

    with db.transaction() as nested_txn:
        users.update(name='Charlie', favorite_orm='sqlalchemy', columns=['name'])
        nested_txn.rollback()  # JK.
Introspection:
print(db.tables)
# ['new_table', 'user']
print(db['user'].columns)
# ['id', 'age', 'name', 'active', 'admin', 'favorite_orm']
print(len(db['user']))
# 2
- class DataSet(url, **kwargs)¶
- Parameters:
url – A database URL or a Database instance.
kwargs – Additional keyword arguments passed to Introspector.generate_models() when introspecting the db.
- tables¶
List of table names in the database (computed dynamically).
- query(sql, params=None, commit=True)¶
- Parameters:
sql (str) – A SQL query.
params (list) – Optional parameters for the query.
commit (bool) – Whether the query should be committed upon execution.
- Returns:
A database cursor.
Execute the provided query against the database.
- transaction()¶
Return a context manager representing a transaction.
- freeze(query, format='csv', filename=None, file_obj=None, encoding='utf8', iso8601_datetimes=False, base64_bytes=False, **kwargs)¶
- Parameters:
query – A SelectQuery, generated using all() or find().
format – Output format. By default, csv and json are supported.
filename – Filename to write output to.
file_obj – File-like object to write output to.
encoding (str) – File encoding.
iso8601_datetimes (bool) – Encode datetimes and dates in ISO 8601 format.
base64_bytes (bool) – Encode binary data as base64. By default hex is used.
kwargs – Arbitrary parameters for export-specific functionality.
Export data to a file.
- thaw(table, format='csv', filename=None, file_obj=None, strict=False, encoding='utf8', iso8601_datetimes=False, base64_bytes=False, **kwargs)¶
- Parameters:
table (str) – The name of the table to load data into.
format – Input format. By default, csv and json are supported.
filename – Filename to read data from.
file_obj – File-like object to read data from.
strict (bool) – Whether to store values for columns that do not already exist on the table.
encoding (str) – File encoding.
iso8601_datetimes (bool) – Decode datetimes and dates from ISO 8601 format.
base64_bytes (bool) – Decode BLOB field-data from base64. By default hex is assumed.
kwargs – Arbitrary parameters for import-specific functionality.
Import data from a file into
table. Ifstrict=False(default), new columns are added automatically.
- class Table(dataset, name, model_class)¶
Provides a high-level API for working with rows in a given table.
- columns¶
List of column names.
- insert(**data)¶
Insert a row, adding new columns as needed.
- update(columns=None, conjunction=None, **data)¶
Update the table using the provided data. If one or more columns are specified in the columns parameter, then those columns’ values in the data dictionary will be used to determine which rows to update.
# Update all rows.
db['users'].update(favorite_orm='peewee')

# Only update Huey's record, setting his age to 3.
db['users'].update(name='Huey', age=3, columns=['name'])
- find(**query)¶
Return all rows matching equality conditions (all rows if no conditions given).
- find_one(**query)¶
Return the first matching row, or
None.
- all()¶
Return all rows.
- delete(**query)¶
Delete matching rows (all rows if no conditions given).
- create_index(columns, unique=False)¶
Create an index on the given columns:
# Create a unique index on the `username` column.
db['users'].create_index(['username'], unique=True)
- freeze(format='csv', filename=None, file_obj=None, encoding='utf8', iso8601_datetimes=False, base64_bytes=False, **kwargs)¶
- Parameters:
format – Output format. By default, csv and json are supported.
filename – Filename to write output to.
file_obj – File-like object to write output to.
encoding (str) – File encoding.
iso8601_datetimes (bool) – Encode datetimes and dates in ISO 8601 format.
base64_bytes (bool) – Encode binary data as base64. By default hex is used.
kwargs – Arbitrary parameters for export-specific functionality.
- thaw(format='csv', filename=None, file_obj=None, strict=False, encoding='utf8', iso8601_datetimes=False, base64_bytes=False, **kwargs)¶
- Parameters:
format – Input format. By default, csv and json are supported.
filename – Filename to read data from.
file_obj – File-like object to read data from.
strict (bool) – Whether to store values for columns that do not already exist on the table.
encoding (str) – File encoding.
iso8601_datetimes (bool) – Decode datetimes and dates from ISO 8601 format.
base64_bytes (bool) – Decode BLOB field-data from base64. By default hex is assumed.
kwargs – Arbitrary parameters for import-specific functionality.
Extra Field Types¶
playhouse.fields provides two general-purpose field types.
- class CompressedField(compression_level=6, algorithm='zlib', **kwargs)¶
Stores compressed binary data using zlib or bz2. Extends BlobField; compression and decompression are transparent:
from playhouse.fields import CompressedField

class LogEntry(Model):
    payload = CompressedField(algorithm='zlib', compression_level=9)
- Parameters:
compression_level (int) – 0-9 (9 is maximum compression).
algorithm (str) – 'zlib' or 'bz2'.
Flask Utilities¶
playhouse.flask_utils simplifies Peewee integration with
Flask.
FlaskDB Wrapper¶
FlaskDB handles three boilerplate tasks:
Creates a Peewee database instance from Flask's app.config.
Provides a Model base class whose Meta.database is wired to the Peewee instance.
Registers before_request / teardown_request hooks that open and close a connection for every request.
Basic setup:
from flask import Flask
from playhouse.flask_utils import FlaskDB
app = Flask(__name__)
app.config['DATABASE'] = 'postgresql://postgres:pw@localhost/my_app'
db_wrapper = FlaskDB(app)
class User(db_wrapper.Model):
    username = CharField(unique=True)

class Tweet(db_wrapper.Model):
    user = ForeignKeyField(User, backref='tweets')
    content = TextField()
Access the underlying Peewee database:
peewee_db = db_wrapper.database
@app.route('/transfer', methods=['POST'])
def transfer():
    with peewee_db.atomic():
        # ... transactional logic ...
        return jsonify({'ok': True})
Application factory pattern:
db_wrapper = FlaskDB()

class User(db_wrapper.Model):
    username = CharField(unique=True)

def create_app():
    app = Flask(__name__)
    app.config['DATABASE'] = 'sqlite:///my_app.db'
    db_wrapper.init_app(app)
    return app
Configuration via dict or a Database instance directly:
# Dictionary-based (uses playhouse.db_url under the hood):
app.config['DATABASE'] = {
    'name': 'my_app',
    'engine': 'playhouse.pool.PooledPostgresqlDatabase',
    'user': 'postgres',
    'max_connections': 32,
}
# Pass a database object:
peewee_db = PostgresqlExtDatabase('my_app')
db_wrapper = FlaskDB(app, peewee_db)
Excluding routes from connection management:
app.config['FLASKDB_EXCLUDED_ROUTES'] = ('health_check', 'static')
Query Helpers¶
- get_object_or_404(query_or_model, *query)¶
- Parameters:
query_or_model – Either a Model class or a pre-filtered SelectQuery.
query – Peewee filter expressions.
Retrieve a single object matching the given query, or abort with HTTP 404 if no match is found.
@app.route('/post/<slug>/')
def post_detail(slug):
    post = get_object_or_404(
        Post.select().where(Post.published == True),
        Post.slug == slug)
    return render_template('post_detail.html', post=post)
- object_list(template_name, query, context_variable='object_list', paginate_by=20, page_var='page', check_bounds=True, **kwargs)¶
Paginate a query and render a template with the results.
- Parameters:
template_name (str) – Template to render.
query – SelectQuery to paginate.
context_variable (str) – Template variable name for the page of objects (default: 'object_list').
'object_list').paginate_by (int) – Items per page.
page_var (str) – GET parameter name for the page number.
check_bounds (bool) – Return 404 for invalid page numbers.
kwargs – Extra template context variables.
The template receives:
object_list (or context_variable) - the page of objects.
page - the current page number.
pagination - a PaginatedQuery instance.
@app.route('/posts/')
def post_list():
    return object_list(
        'post_list.html',
        query=Post.select().where(Post.published == True),
        paginate_by=10)
- class PaginatedQuery(query_or_model, paginate_by, page_var='page', check_bounds=False)¶
- Parameters:
query_or_model – Either a Model or a SelectQuery instance containing the collection of records you wish to paginate.
paginate_by – Number of objects per page.
page_var – The name of the GET argument which contains the page number.
check_bounds – Whether to check that the given page is a valid page. If check_bounds is True and an invalid page is specified, then a 404 will be returned.
Helper class to perform pagination based on GET arguments.
- get_page()¶
Return the current page number (1-based; defaults to 1).
- get_page_count()¶
Return the total number of pages.
- get_object_list()¶
Return the SelectQuery for the requested page, with appropriate LIMIT and OFFSET applied. Returns a 404 if check_bounds=True and the page is empty.