Simple DDL Parser parses SQL DDL files (HQL, TSQL, AWS Redshift, BigQuery, Snowflake, and other dialects) into JSON or a Python dict with full information about columns (types, defaults, primary keys, etc.), table properties, types, domains, and more.
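To illustrate the idea of DDL-to-dict parsing (a hand-rolled toy sketch only, not the library's implementation — simple-ddl-parser itself handles full dialect grammars, defaults, primary keys, and more):

```python
import re

# Toy sketch: extract column names and types from a simple CREATE TABLE
# statement into a list of dicts, the kind of structure simple-ddl-parser
# produces (with far more detail) for real DDL.
def parse_columns(ddl: str) -> list:
    # Grab everything between the outermost parentheses.
    body = re.search(r"\((.*)\)", ddl, re.S).group(1)
    columns = []
    for chunk in body.split(","):
        parts = chunk.split()
        if len(parts) >= 2:
            columns.append({"name": parts[0], "type": parts[1]})
    return columns

ddl = "CREATE TABLE users (id INT, name VARCHAR)"
print(parse_columns(ddl))
# → [{'name': 'id', 'type': 'INT'}, {'name': 'name', 'type': 'VARCHAR'}]
```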
🐬 MySQL:
❄ Snowflake:
- Fixes in DEFAULT and CHECK statements - https://github.com/xnuinside/simple-ddl-parser/issues/240
v1.0.3
- Support for CREATE OR REPLACE SCHEMA.
- Fixed parsing of the Snowflake .stage_ fileformat option when its value is a single string, e.g. FIELD_OPTIONALLY_ENCLOSED_BY = '\"', FIELD_DELIMITER = '|'.
v1.0.0

Important changes were made to the output structure that can, in theory, break existing code.
All custom table properties that are defined after the column definitions in a CREATE TABLE statement and are specific to a single dialect (only SparkSQL, or HQL, etc.), for example:
https://github.com/xnuinside/simple-ddl-parser/blob/main/tests/dialects/test_snowflake.py#L767 or https://github.com/xnuinside/simple-ddl-parser/blob/main/tests/dialects/test_spark_sql.py#L133
are now saved as a dict in the table_properties property. Previously they were placed at the same level of the table output as columns, alter, etc.; now they are grouped and moved to the table_properties key.
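Schematically, the v1.0.0 change looks like this (hand-written dicts for illustration, not captured parser output; the property name shown is hypothetical):

```python
# Sketch of the v1.0.0 output change; hand-written, not real parser output.

# Before v1.0.0: dialect-specific properties sat at the top level of the
# table dict, next to keys like "columns" and "alter".
before_1_0_0 = {
    "table_name": "some_table",
    "columns": [{"name": "id", "type": "INT"}],
    "tblproperties": {"parquet.compression": "SNAPPY"},  # hypothetical SparkSQL property
}

# From v1.0.0: the same dialect-specific properties are grouped as a dict
# under the "table_properties" key.
from_1_0_0 = {
    "table_name": "some_table",
    "columns": [{"name": "id", "type": "INT"}],
    "table_properties": {
        "tblproperties": {"parquet.compression": "SNAPPY"},
    },
}
```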
And more.
Full Changelog of version 1.0.0: https://github.com/xnuinside/simple-ddl-parser/blob/main/CHANGELOG.txt#L1
v0.32.0
The parser now matches tables between the create table statement and alter statements even when the quoting differs. For example, if in create table you use quotes like "schema_name"."table_name" but in alter it was schema_name.table_name, previously this didn't work; now the parser understands that it is the same table.

v0.31.3
- Support for the AS () statement - https://github.com/xnuinside/simple-ddl-parser/issues/218

Big thanks for the contributions go to https://github.com/dmaresma and https://github.com/cfhowes.
v0.31.2
- Support for the ORDER|NOORDER statement - https://github.com/xnuinside/simple-ddl-parser/issues/213. Thanks to https://github.com/dmaresma for the contribution.
- Support for the WITH TAG statement.