Graph convolutions in Keras with TensorFlow, PyTorch or Jax.
- `ExtensiveMolecularLabelScaler.transform`: missing default value.
- `kgcnn.layers.geom.PositionEncodingBasisLayer`
- `kgcnn.literature.GCN.make_model_weighted`
- `kgcnn.literature.AttentiveFP.make_model`
- `kgcnn.ops.activ`: to `leaky_relu2` and `swish2`.
- `kgcnn.__safe_scatter_max_min_to_zero__` for tensorflow and jax backend scattering, with default set to `True`.
- `train_force.py`
- `GraphBatchNormalization`
- `kgcnn.io.loader` for unused IDs and graph state input.
- `DisjointForceMeanAbsoluteError`
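As a rough illustration of what "safe scatter max/min to zero" means: for backends where a segment-wise max over an empty segment returns the identity element (the dtype minimum), the safe variant replaces that identity with zero. The sketch below is plain numpy, not the kgcnn internals; the function name is illustrative.

```python
import numpy as np

def scatter_max_to_zero(values, indices, num_segments):
    """Segment-wise max that yields 0 for empty segments instead of the
    identity element of max (the dtype minimum / -inf)."""
    out = np.full(num_segments, -np.inf)
    for i, v in zip(indices, values):
        out[i] = max(out[i], v)
    # Replace the -inf identity of empty segments with zero.
    out[out == -np.inf] = 0.0
    return out

# Segment 1 receives no values; without the safeguard it would stay -inf.
print(scatter_max_to_zero(np.array([1.0, 3.0, 2.0]), np.array([0, 0, 2]), 3))
# -> [3. 0. 2.]
```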
Completely reworked version of kgcnn for Keras 3.0 and multi-backend support. A lot of fundamental changes have been made. However, we tried to keep as much of the API from kgcnn 3.0 as possible, so that models in literature can be used with minimal changes. Mainly, the `input_tensor_type="ragged"` model parameter has to be added if ragged tensors are used as input in tensorflow. For very few models the order of inputs also had to be changed. Also note that the input embedding layer now requires integer tensor input and does not cast from float anymore.

The scope of models has been reduced for the initial release but will be extended in upcoming versions. Note that some changes also stem from Keras API changes, for example the `learning_rate` parameter or serialization. Moreover, tensorflow-addons had to be dropped for Keras 3.0.

The general representation of graphs has been changed from ragged tensors (tensorflow only, not supported by Keras 3.0) to the disjoint graph representation compatible with e.g. PyTorch Geometric. Input can be padded or (still) ragged, or a direct disjoint representation via the batch loader (see the models chapter in the docs).
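As a rough sketch of the disjoint representation (plain numpy, not the kgcnn loader API): all graphs of a batch are concatenated into one large graph, a batch-assignment vector records which graph each node belongs to, and edge indices are shifted by each graph's node offset.

```python
import numpy as np

def to_disjoint(node_lists, edge_index_lists):
    """Merge a batch of graphs into one disjoint graph.

    Returns concatenated node features, edge indices shifted per graph,
    and a batch id for every node."""
    nodes, edges, batch_id, offset = [], [], [], 0
    for b, (n, e) in enumerate(zip(node_lists, edge_index_lists)):
        nodes.append(n)
        edges.append(e + offset)            # shift indices into the merged graph
        batch_id.append(np.full(len(n), b))  # which graph each node came from
        offset += len(n)
    return np.concatenate(nodes), np.concatenate(edges), np.concatenate(batch_id)

# Two graphs: 2 nodes and 3 nodes.
n, e, b = to_disjoint(
    [np.array([[0.0], [1.0]]), np.array([[2.0], [3.0], [4.0]])],
    [np.array([[0, 1]]), np.array([[0, 1], [1, 2]])],
)
# b -> [0 0 1 1 1]; the second graph's edges become [[2, 3], [3, 4]].
```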
For jax we added a `padded_disjoint` parameter that can enable jit-able jax models, but it requires a data loader, which is not yet thoroughly implemented in kgcnn. For padded samples it can already be tested, but padding each sample adds a much larger overhead than padding the whole batch.
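A back-of-the-envelope comparison of the two padding strategies (all numbers illustrative): padding every sample to a fixed per-sample maximum wastes slots on every graph, while padding the concatenated (disjoint) batch once only wastes the difference to one fixed total budget.

```python
# Node counts of four graphs in one batch.
sizes = [12, 30, 7, 25]                    # 74 real nodes in total
max_nodes_per_sample = 32                  # fixed shape needed for jit

# (a) Pad every sample to the per-sample maximum.
per_sample_total = len(sizes) * max_nodes_per_sample   # 4 * 32 = 128 slots

# (b) Pad only the concatenated disjoint batch to one fixed budget.
batch_budget = 80
per_batch_total = batch_budget                          # 80 slots

waste_a = per_sample_total - sum(sizes)    # 54 padded (wasted) slots
waste_b = per_batch_total - sum(sizes)     # 6 padded slots
```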
Some other changes:
- `train_graph.py` script: command line arguments are now optional and just used for verification, all but `category`, which has to select a model/hyperparameter combination from the hyper file, since the hyperparameter file already contains all necessary information.
- `transform_dataset`: key names of properties to transform have been moved to the constructor! Also be sure to check `StandardLabelScaler` if you want to scale regression targets, since target properties are the default here.
- `kgcnn.layers.scale` layer controlled by the `output_scaling` model argument.
- `input_node_embedding` or `input_edge_embedding` arguments, which can be set to `None` for no embedding.
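The embedding change (integer tokens, no implicit cast from float) can be pictured as a plain lookup table; the sketch below is numpy, not the kgcnn embedding layer itself.

```python
import numpy as np

embedding_table = np.random.rand(10, 4)   # 10 token ids, embedding dim 4

tokens = np.array([3, 1, 7])              # dtype int: a valid index lookup
vectors = embedding_table[tokens]          # shape (3, 4)

# Float token inputs are no longer cast implicitly; an explicit cast
# would be needed, e.g. embedding_table[float_tokens.astype(int)].
```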
- Embedding input tokens must be of dtype int now. No auto-casting from float anymore.
- `kgcnn.ops` with `kgcnn.backend` to generalize aggregation functions for graph operations.
- `rdkit_xyz_to_mol` as e.g. list.
- `from_xyz` to `MolecularGraphRDKit`.
- `kgcnn.molecule.preprocessor` module for graph preprocessors.
- `kgcnn.layers.pooling` to `kgcnn.layers.aggr` for better compatibility.
  However, the legacy pooling module and all old aliases are kept.
- `RelationalMLP`
- `HyperParameter` is not verified on initialization anymore; just call `hyper.verify()`.
- `kgcnn.metrics.loss` into a separate module `kgcnn.losses`, to be more compatible with Keras.
- `get_split_indices`: to make the graph indexing more consistent.
- `add_eps` to `PAiNNUpdate` layer as option.
- `data.transform.scaler.standard`: to hopefully fix all errors with the scalers.
- `kgcnn.ops.activ` and layers `kgcnn.layers.activ` that have trainable parameters, due to Keras changes in 2.13.0. Please check your config, since parameters are ignored in normal functions!
  If you use for example `"kgcnn>leaky_relu"`, you can not change the leak anymore; you must use a `kgcnn.layers.activ` layer for that.
- `kgcnn.graph.methods.range_neighbour_lattice`: to use pymatgen.
- `PolynomialDecayScheduler`
- `use_batch_jacobian`
- `kgcnn.layers.gather`: to reduce/simplify code and speed up some models. The behaviour of `GatherNodes` has changed a little in that it first splits and then concatenates. The default parameters now have `split_axis` and `concat_axis` set to 2; `concat_indices` has been removed.
  The default behaviour of the layer, however, stays the same.
- `FracToRealCoordinates` has been fixed and its speed improved.
- `kgcnn.data.transform.scaler.serial`
- `QMDataset` if attributes have been chosen: now `set_attributes` does not cause an error.
- `QMDataset` with labels but without an SDF file.
- `kgcnn.layers.conv.GraphSageNodeLayer`
- `reverse_edge_indices` option to `GraphDict.from_networkx`. Fixed error in connection with `kgcnn.crystal`.
- `kgcnn.io.file`. Experimental; will get more updates.
- `StandardLabelScaler` inheritance.
- `kgcnn.crystal.periodic_table`: to now properly include package data.

Major refactoring of kgcnn layers and models. We try to provide the most important layers for graph convolution as `kgcnn.layers` with ragged tensor representation. For literature models, only input and output are matched with kgcnn.
- `kgcnn.layers.conv` to `kgcnn.literature`.
- `graph.methods`
- `kgcnn.mol.*` and `kgcnn.moldyn.*` into `kgcnn.molecule`.
- `hyper` into `training`.
- `crystal`
- `ACSFConstNormalization` to literature models as option.
- `MLP`: now includes more normalization options.
- `GraphBaseLayer`; added it to the pooling layers directly.
- `MemoryGraphList.tensor()`: so that the correct dtype is given to the tensor output. This is important for model loading etc.
- `CENTChargePlusElectrostaticEnergy` to `kgcnn.layers.conv.hdnnp_conv` and `kgcnn.literature.HDNNP4th`.
- `train_force.py` of v2.2.2: forgot to apply inverse scaling to the dataset, causing subsequent folds to have wrong labels.
- `MolDynamicsModelPredictor`: to call the keras model without very expensive retracing. For the alternative mode, use `use_predict=True`.
- `GraphInstanceNormalization` and `GraphNormalization` to `kgcnn.layers.norm`.
- `StandardScaler` or `StandardLabelScaler`.
- `kgcnn.data.transform`. We will expand on this in the future.
- `EnergyForceExtensiveScaler`: new name is `EnergyForceExtensiveLabelScaler`. Return is just `y` now. Added experimental functionality for transforming a dataset.
- `kgcnn.md` to `kgcnn.moldyn`, due to naming conflicts with markdown.
- `MolDynamicsModelPredictor`: renamed argument `model_postprocessor` to `graph_postprocessor`.
- `tensorflow_gpu` from `setup.py`.
- `HDNNP4th.py` to literature.
- `ChangeTensorType` config for model save.
- `kgcnn.xai`
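The `train_force.py` bug of v2.2.2 mentioned above (inverse scaling not applied to the dataset before the next fold) is a classic cross-validation pitfall, sketched below with a minimal standard scaler (illustrative code, not the kgcnn training script or its scaler classes).

```python
import numpy as np

class SimpleLabelScaler:
    """Minimal standard scaler for labels (illustrative only)."""
    def fit_transform(self, y):
        self.mean, self.std = y.mean(), y.std()
        return (y - self.mean) / self.std
    def inverse_transform(self, y):
        return y * self.std + self.mean

labels = np.array([10.0, 12.0, 14.0, 16.0])

# Fold 1: scale the labels in place ...
scaler = SimpleLabelScaler()
labels = scaler.fit_transform(labels)

# ... and undo the scaling before fold 2 starts. Forgetting this step
# leaves fold 2 fitting on already-scaled labels, i.e. wrong targets.
labels = scaler.inverse_transform(labels)

assert np.allclose(labels, [10.0, 12.0, 14.0, 16.0])
```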