
Releases: piccolo-orm/piccolo

0.74.3

30 Apr 19:11

If you had a table containing an array of BigInt, then migrations could fail:

from piccolo.table import Table
from piccolo.columns.column_types import Array, BigInt

class MyTable(Table):
    my_column = Array(base_column=BigInt())

This happened because the BigInt base column needs access to its parent table, to know whether it's targeting Postgres or SQLite. See PR 501.

Thanks to @cheesycod for reporting this issue.

0.74.2

27 Apr 18:23

If a user created a custom Column subclass, then migrations would fail. For example:

from piccolo.columns.column_types import Varchar

class CustomColumn(Varchar):
    def __init__(self, custom_arg: str = '', *args, **kwargs):
        self.custom_arg = custom_arg
        super().__init__(*args, **kwargs)

    @property
    def column_type(self):
        return 'VARCHAR'
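The subclass pattern itself is plain Python. A Piccolo-free sketch of the same shape (Base and Custom are made-up stand-in names) shows the extra constructor argument that migrations have to cope with:

```python
# A stand-in for Piccolo's Varchar, with one built-in argument:
class Base:
    def __init__(self, length: int = 255):
        self.length = length

# The custom subclass stores its own argument, then forwards the rest upwards:
class Custom(Base):
    def __init__(self, custom_arg: str = '', *args, **kwargs):
        self.custom_arg = custom_arg
        super().__init__(*args, **kwargs)

c = Custom(custom_arg='hello', length=100)
print(c.custom_arg, c.length)  # hello 100
```

Migrations serialise column constructor arguments, so they have to handle custom arguments like custom_arg as well as the built-in ones.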

See PR 497. Thanks to @WintonLi for reporting this issue.

0.74.1

23 Apr 08:35

pip install piccolo[all] would fail on Windows, because uvloop isn't supported there. Thanks to @jack1142 for reporting this issue.

0.74.0

13 Apr 20:49

We've had the ability to bulk modify rows for a while. Here we append '!!!' to each band's name:

>>> await Band.update({Band.name: Band.name + '!!!'}, force=True)

Previously, this only worked for certain column types - Varchar, Text, Integer, etc.

We now allow Date, Timestamp, Timestamptz and Interval columns to be bulk modified using a timedelta. Here we modify each concert's start date, so it starts one day later:

>>> from datetime import timedelta
>>> await Concert.update(
...     {Concert.starts: Concert.starts + timedelta(days=1)},
...     force=True
... )
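The expression mirrors ordinary Python datetime + timedelta arithmetic, which is what gets applied to each row (the start time below is made up):

```python
from datetime import datetime, timedelta

# A made-up start time, standing in for one value in the Concert.starts column:
starts = datetime(2022, 4, 13, 20, 0)

# The update shifts every row by the same delta - here, one day later:
print(starts + timedelta(days=1))  # 2022-04-14 20:00:00
```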

Thanks to @theelderbeever for suggesting this feature.

0.73.0

08 Apr 16:37

You can now specify extra nodes for a database - for example, a read replica.

  from piccolo.engine.postgres import PostgresEngine

  DB = PostgresEngine(
      config={'database': 'main_db', 'host': 'prod.my_db.com'},
      extra_nodes={
          'read_replica_1': PostgresEngine(
              config={
                  'database': 'main_db',
                  'host': 'read_replica_1.my_db.com'
              }
          )
      }
  )

You can then run queries on these other nodes:

  >>> await MyTable.select().run(node="read_replica_1")

See PR 481. Thanks to @dashsatish for suggesting this feature.

Also, the targ library has been updated, so it now tells users about the --trace argument, which can be used to get a full traceback when a CLI command fails.

0.72.0

30 Mar 09:32

Fixed typos with drop_constraints. Courtesy @smythp.

Lots of documentation improvements, such as fixing Sphinx's autodoc for the Array column.

AppConfig now accepts a pathlib.Path instance. For example:

# piccolo_app.py

import pathlib

from piccolo.conf.apps import AppConfig

APP_CONFIG = AppConfig(
    app_name="blog",
    migrations_folder_path=pathlib.Path(__file__).parent / "piccolo_migrations"
)
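A stdlib-only sketch of the pathlib arithmetic involved (the file location is made up; PurePosixPath keeps the example platform-independent):

```python
import pathlib

# A made-up location for piccolo_app.py:
app_file = pathlib.PurePosixPath("/code/blog/piccolo_app.py")

# .parent drops the filename, so the migrations folder resolves
# next to the app file, not inside it:
migrations = app_file.parent / "piccolo_migrations"
print(migrations)  # /code/blog/piccolo_migrations
```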

Thanks to @theelderbeever for recommending this feature.

0.71.1

13 Mar 08:35

Fixed a bug with ModelBuilder and nullable columns (see PR 462). Thanks to @fiolet069 for reporting this issue.

0.71.0

11 Mar 20:13

The ModelBuilder class, which is used to generate mock data in tests, now supports Array columns. Courtesy @backwardspy.

Lots of internal code optimisations and clean up. Courtesy @yezz123.

Added docs for troubleshooting common MyPy errors.

Also thanks to @adriangb for helping us with our dependency issues.

0.70.1

09 Mar 10:02

Fixed a bug with auto migrations: if multiple columns were renamed at once, the migrations could get confused. Thanks to @theelderbeever for reporting this issue, and @sinisaos for helping to replicate it. See PR 457.

0.70.0

08 Mar 00:24

We ran a profiler on the Piccolo codebase and identified some optimisations. For example, we were calling self.querystring multiple times in a method, rather than assigning it to a local variable.

We also ran a linter which identified when list / set / dict comprehensions could be more efficient.

The performance is now slightly improved (especially when fetching large numbers of rows from the database).

Example query times on a MacBook, when fetching 1000 rows from a local Postgres database (using await SomeTable.select()):

  • 8 ms without a connection pool
  • 2 ms with a connection pool

As you can see, having a connection pool is the main thing you can do to improve performance.
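A minimal sketch of the connection pool setup (the database name is made up); the pool is opened and closed via the engine's start/close coroutines:

```python
# piccolo_conf.py - a minimal sketch; the database name is made up.
from piccolo.engine.postgres import PostgresEngine

DB = PostgresEngine(config={'database': 'main_db'})

# Then, in your app's startup/shutdown hooks:
#
#     await DB.start_connection_pool()
#     ...
#     await DB.close_connection_pool()
```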

Thanks to @AliSayyah for all his work on this.