
LoRD: Low-Rank Decomposition of finetuned Large Language Models

This repository contains code for extracting LoRA adapters from fine-tuned Transformers models using Singular Value Decomposition (SVD).

LoRA (Low-Rank Adaptation) is a technique for parameter-efficient fine-tuning of large language models. The technique presented here extracts PEFT-compatible low-rank adapters from full fine-tunes or merged models.
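The core idea is to take the difference between a fine-tuned weight matrix and its base-model counterpart and approximate it with a truncated SVD, yielding the two low-rank factors LoRA expects. Below is a minimal sketch of that step, not the repository's actual code; the function name and the fixed rank are illustrative:

```python
import torch

def extract_lora_factors(base_weight: torch.Tensor, tuned_weight: torch.Tensor, rank: int = 16):
    """Approximate the fine-tuning delta (W_tuned - W_base) with a rank-`rank` factorization."""
    delta = (tuned_weight - base_weight).float()
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep only the top-`rank` singular directions.
    U, S, Vh = U[:, :rank], S[:rank], Vh[:rank, :]
    # Split the singular values evenly between the two factors.
    lora_B = U * S.sqrt()                 # shape: (out_features, rank)
    lora_A = S.sqrt().unsqueeze(1) * Vh   # shape: (rank, in_features)
    return lora_A, lora_B                 # lora_B @ lora_A approximates delta
```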

Getting started

Everything you need to extract and publish your LoRA adapter is available in the LoRD.ipynb notebook.

Running the notebook on Colab is the easiest way to get started.
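Once extracted, the adapter can be loaded on top of the base model with the standard PEFT API. A hypothetical example, with placeholder model ID and adapter path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"   # placeholder: the base model the adapter was extracted against
adapter_path = "./extracted-lora"      # placeholder: path or Hub ID of the extracted adapter

model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_path)  # attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
```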

Special thanks

Thanks to @kohya_ss for their prior work on LoRA extraction for Stable Diffusion.
