TDNN

PyTorch implementation of a Time Delay Neural Network (TDNN)


Fast TDNN layer implementation

This is an alternative implementation of the TDNN layer proposed by Waibel et al. [1]. The main difference from other implementations is that it exploits PyTorch's Conv1d dilation argument, making it many times faster than other popular implementations such as SiddGururani's PyTorch-TDNN.
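The core idea can be sketched directly in plain PyTorch: an evenly spaced context such as [-2, 0, 2] is equivalent to a single Conv1d whose kernel_size is the number of offsets and whose dilation is the spacing between them. The channel sizes and sequence length below are illustrative assumptions, not values from the repository.

```python
import torch
import torch.nn as nn

# An evenly spaced context [-2, 0, 2] maps onto a single dilated convolution:
# kernel_size = 3 taps, dilation = 2 frames between taps.
conv = nn.Conv1d(in_channels=24, out_channels=512, kernel_size=3, dilation=2)

x = torch.randn(8, 24, 100)  # [batch, channels, time]
y = conv(x)

# Each output frame reads inputs at offsets -2, 0, +2, so 4 frames are lost
# at the sequence edges: 100 - dilation * (kernel_size - 1) = 96.
print(y.shape)  # torch.Size([8, 512, 96])
```

Because the whole context is handled by one vectorized convolution call, no Python-level loop over time offsets is needed, which is where the speedup over gather-based implementations comes from.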

Usage

# Create a TDNN layer 
layer_context = [-2, 0, 2]
input_n_feat = previous_layer_n_feat 
tdnn_layer = TDNN(context=layer_context, input_channels=input_n_feat, output_channels=512, full_context=False)

# Run a forward pass; batch.shape = [BATCH_SIZE, INPUT_CHANNELS, SEQUENCE_LENGTH]
out = tdnn_layer(batch)
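To make the snippet above self-contained, here is a minimal sketch of a TDNN class with the same interface, built on the Conv1d dilation trick. It is an illustrative re-implementation under stated assumptions (evenly spaced context, ReLU activation), not the repository's exact class.

```python
import torch
import torch.nn as nn


class TDNN(nn.Module):
    """Minimal TDNN layer sketch built on Conv1d dilation (assumption-based,
    not the repository's exact implementation)."""

    def __init__(self, context, input_channels, output_channels, full_context=False):
        super().__init__()
        if full_context:
            # Full context uses every step in [min(context), max(context)].
            kernel_size = context[-1] - context[0] + 1
            dilation = 1
        else:
            # Sparse context must be evenly spaced; the spacing is the dilation.
            steps = [b - a for a, b in zip(context, context[1:])]
            assert len(set(steps)) == 1, "context must be evenly spaced"
            kernel_size = len(context)
            dilation = steps[0]
        self.conv = nn.Conv1d(input_channels, output_channels,
                              kernel_size=kernel_size, dilation=dilation)

    def forward(self, x):
        # x: [batch, input_channels, time]
        return torch.relu(self.conv(x))


layer = TDNN(context=[-2, 0, 2], input_channels=24, output_channels=512)
out = layer(torch.randn(8, 24, 100))
print(out.shape)  # torch.Size([8, 512, 96])
```

The ReLU after the convolution is a common choice for TDNN layers but is an assumption here; the actual repository may use a different (or configurable) nonlinearity.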

References

[1] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, “Phoneme Recognition Using Time-Delay Neural Networks,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 3, 1989.

Open Source Agenda is not affiliated with "TDNN" Project. README Source: jonasvdd/TDNN
Stars: 37 · Open Issues: 0 · Last Commit: 4 years ago · License: MIT
