Python CLI utility and library for manipulating SQLite databases
- The `sqlite-utils query`, `memory` and `bulk` commands now all accept a new `--functions` option. This can be passed a string of Python code, and any callable objects defined in that code will be made available to SQL queries as custom SQL functions. See Defining custom SQL functions for details. (#471)
- The `db[table].create(...)` method now accepts a new `transform=True` parameter. If the table already exists it will be transformed to match the schema configuration options passed to the function. This may result in columns being added or dropped, column types being changed, column order being updated or not null and default values for columns being set. (#467)
- The `sqlite-utils create-table` command now accepts a `--transform` option.
- `table.default_values` returns a dictionary mapping each column name with a default value to the configured default value. (#475)
- The `--load-extension` option can now be provided a path to a compiled SQLite extension module accompanied by the name of an entrypoint, separated by a colon - for example `--load-extension ./lines0:sqlite3_lines0_noread_init`. This feature is modelled on code first contributed to Datasette by Alex Garcia. (#470)
- New `db.register_function(fn, name=...)` parameter for registering a function under a custom name. (#458)
- The `sqlite-utils rows` command has a new `--order` option for specifying the sort order for the returned rows. (#469)
- … `global` keyword. (#472)
- Fixed a bug where `table.extract()` would not behave correctly for columns containing null values. Thanks, Forest Gregg. (#423)
- New tutorial showing how to use `sqlite-utils` to import and clean an example CSV file.
- Datasette and `sqlite-utils` now have a Discord community. Join the Discord here.
- New `sqlite-utils duplicate data.db table_name new_name` CLI command for Duplicating tables. (#454)
- `sqlite_utils.utils.rows_from_file()` is now a documented API. It can be used to read a sequence of dictionaries from a file-like object containing CSV, TSV, JSON or newline-delimited JSON. It can be passed an explicit format or can attempt to detect the format automatically. (#443)
- `sqlite_utils.utils.TypeTracker` is now a documented API for detecting the likely column types for a sequence of string rows, see Detecting column types using TypeTracker. (#445)
- `sqlite_utils.utils.chunks()` is now a documented API for splitting an iterator into chunks. (#451)
- `sqlite-utils enable-fts` now has a `--replace` option for replacing the existing FTS configuration for a table. (#450)
- The `create-index`, `add-column` and `duplicate` commands all now take a `--ignore` option for ignoring errors should the database not be in the right state for them to operate. (#450)

See also the annotated release notes for this release.
- `sqlite_utils.utils.rows_from_file()` is now a documented API, see Reading rows from a file. (#443)
- `rows_from_file()` has two new parameters to help handle CSV files with rows that contain more values than are listed in that CSV file's headings: `ignore_extras=True` and `extras_key="name-of-key"`. (#440)
- New `sqlite_utils.utils.maximize_csv_field_size_limit()` helper function for increasing the field size limit for reading CSV files to its maximum, see Setting the maximum CSV field size limit. (#442)
- New `table.search(where=, where_args=)` parameters for adding additional `WHERE` clauses to a search query. The `where=` parameter is available on `table.search_sql(...)` as well. See Searching with table.search(). (#441)
- Fixed a bug where `table.detect_fts()` and other search-related functions could fail if two FTS-enabled tables had names that were prefixes of each other. (#434)
- Now depends on click-default-group-wheel, a pure Python wheel package. This means you can install and use this package with Pyodide, which can run Python entirely in your browser using WebAssembly. (#429)
Try that out using the Pyodide REPL:
>>> import micropip
>>> await micropip.install("sqlite-utils")
>>> import sqlite_utils
>>> db = sqlite_utils.Database(memory=True)
>>> list(db.query("select 3 * 5"))
[{'3 * 5': 15}]
- New `errors=r.IGNORE/r.SET_NULL` parameter for the `r.parsedatetime()` and `r.parsedate()` convert recipes. (#416)
- Fixed a bug where `--multi` could not be used in combination with `--dry-run` for the `convert` command. (#415)
- … `deterministic=True` is supported. (#425)
- New `hash_id_columns=` parameter for creating a primary key that's a hash of the content of specific columns - see Setting an ID based on the hash of the row contents for details. (#343)
- New `db.sqlite_version` property, returning the version of SQLite as a tuple of integers such as `(3, 38, 0)`.
- SpatiaLite helpers for the `sqlite-utils` command-line tool - thanks, Chris Amico. (#398)
- New `--init-spatialite` option for initializing SpatiaLite on a newly created database.
- New `db[table].create(..., if_not_exists=True)` option for creating a table only if it does not already exist. (#397)
- New `Database(memory_name="my_shared_database")` parameter for creating a named in-memory database that can be shared between multiple connections. (#405)
- … `sqlite-utils transform`. (#403)

This release introduces four new utility methods for working with SpatiaLite. Thanks, Chris Amico. (#330)
- `sqlite_utils.utils.find_spatialite()` finds the location of the SpatiaLite module on disk.
- `db.init_spatialite()` initializes SpatiaLite for the given database.
- `table.add_geometry_column(...)` adds a geometry column to an existing table.
- `table.create_spatial_index(...)` creates a spatial index for a column.
- `sqlite-utils batch` now accepts a `--batch-size` option. (#392)
- … `--help` - see CLI reference. (#384)