MIDI / symbolic music tokenizers for Deep Learning models 🎶
Python package to tokenize MIDI music files, presented at the ISMIR 2021 LBDs.
MidiTok can tokenize MIDI files, i.e. convert them into sequences of tokens ready to be fed to models such as Transformers, for any generation, transcription or MIR task. MidiTok features the most common MIDI tokenizations (e.g. REMI, Compound Word...), and is built around the idea that they all share common parameters and methods. It supports Byte Pair Encoding (BPE) and data augmentation.
MidiTok is integrated with the Hugging Face Hub 🤗! Don't hesitate to share your models to the community!
Documentation: miditok.readthedocs.io
pip install miditok
MidiTok uses Symusic to read and write MIDI files, and BPE is backed by Hugging Face 🤗 tokenizers for super-fast encoding.
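For instance, reading and writing a MIDI with Symusic directly takes only a couple of calls (a minimal sketch; the file paths are placeholders):
from symusic import Score
# Load a MIDI file into a Score object
score = Score("path/to/your_midi.mid")
# ... inspect or modify the score here ...
# Write it back to disk
score.dump_midi("path/to/output.mid")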
Tokenizing and detokenizing can be done by calling the tokenizer:
from miditok import REMI, TokenizerConfig
from symusic import Score
# Creating a multitrack tokenizer, read the doc to explore all the parameters
config = TokenizerConfig(num_velocities=16, use_chords=True, use_programs=True)
tokenizer = REMI(config)
# Load a MIDI file, convert it to tokens, then convert the tokens back to a MIDI
midi = Score("path/to/your_midi.mid")
tokens = tokenizer(midi) # calling the tokenizer will automatically detect MIDIs, paths and tokens
converted_back_midi = tokenizer(tokens) # PyTorch / Tensorflow / Numpy tensors supported
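Depending on the tokenizer's configuration, the output is a TokSequence (or a list of them), holding both the token ids and their readable string forms. A minimal sketch of inspecting it, assuming the single-stream config above:
# Print the first few tokens, as integer ids and as readable strings
print(tokens.ids[:8])     # e.g. [1, 156, 89, ...]
print(tokens.tokens[:8])  # e.g. ['Bar_None', 'Position_0', ...]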
Below is a complete yet concise example of how you can use MidiTok to prepare data and train any PyTorch model. A simple notebook example showing how to use Hugging Face models to generate music, with MidiTok taking care of tokenizing MIDIs, is also available.
from miditok import REMI, TokenizerConfig
from miditok.pytorch_data import DatasetMIDI, DataCollator, split_midis_for_training
from torch.utils.data import DataLoader
from pathlib import Path
# Creating a multitrack tokenizer, read the doc to explore all the parameters
config = TokenizerConfig(num_velocities=16, use_chords=True, use_programs=True)
tokenizer = REMI(config)
# Train the tokenizer with Byte Pair Encoding (BPE)
midi_paths = list(Path("path", "to", "midis").glob("**/*.mid"))
tokenizer.train(vocab_size=30000, files_paths=midi_paths)
tokenizer.save_params(Path("path", "to", "save", "tokenizer.json"))
# And pushing it to the Hugging Face hub (you can download it back with .from_pretrained)
tokenizer.push_to_hub("username/model-name", private=True, token="your_hf_token")
# Split MIDIs into smaller chunks for training
dataset_chunks_dir = Path("path", "to", "midi_chunks")
split_midis_for_training(
    files_paths=midi_paths,
    tokenizer=tokenizer,
    save_dir=dataset_chunks_dir,
    max_seq_len=1024,
)
# Create a Dataset, a DataLoader and a collator to train a model
dataset = DatasetMIDI(
    files_paths=list(dataset_chunks_dir.glob("**/*.mid")),
    tokenizer=tokenizer,
    max_seq_len=1024,
    bos_token_id=tokenizer["BOS_None"],
    eos_token_id=tokenizer["EOS_None"],
)
collator = DataCollator(tokenizer["PAD_None"])
dataloader = DataLoader(dataset, batch_size=64, collate_fn=collator)
# Iterate over the dataloader to train a model
for batch in dataloader:
    print("Train your model on this batch...")
MidiTok implements the most common tokenizations; links to the original papers and short presentations of each can be found in the documentation.
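Since all tokenizers share the same interface, switching tokenization is a one-line change. A minimal sketch using TSD, another implemented tokenization:
from miditok import TSD, TokenizerConfig
# The same configuration object works across tokenizations
config = TokenizerConfig(num_velocities=16, use_chords=True, use_programs=True)
tokenizer = TSD(config)  # same methods: __call__, train, save_params, push_to_hub...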
Contributions are gratefully welcomed, feel free to open an issue or send a PR if you want to add a tokenization or speed up the code. You can read the contribution guide for details.
If you use MidiTok for your research, a citation in your manuscript would be gladly appreciated. ❤️
[MidiTok paper] [MidiTok original ISMIR publication]
@inproceedings{miditok2021,
title={{MidiTok}: A Python package for {MIDI} file tokenization},
author={Fradet, Nathan and Briot, Jean-Pierre and Chhel, Fabien and El Fallah Seghrouchni, Amal and Gutowski, Nicolas},
booktitle={Extended Abstracts for the Late-Breaking Demo Session of the 22nd International Society for Music Information Retrieval Conference},
year={2021},
url={https://archives.ismir.net/ismir2021/latebreaking/000005.pdf},
}
The BibTeX citations of all tokenizations can be found in the documentation.
Special thanks to all the contributors. We acknowledge Aubay, the LIP6, LERIA and ESEO for the initial financing and support.