Open Source LLM toolkit to build trustworthy LLM applications. TigerArmo...
simpleT5 is built on top of PyTorch-lightning⚡️ and Transformers🤗 that...
A library for easily merging multiple LLM experts, and efficiently train...
Huozi, a general-purpose large language model
Personal Project: MPP-Qwen14B (Multimodal Pipeline Parallel-Qwen14B). Don...
Using BERT for attribute extraction in knowledge graphs. Fine-tuning and f...
🛰️ Fine-tuning ChatGLM on real medical dialogue data with LoRA, P-Tuning V2, Freeze, RLHF, and other...
🐳 Aurora is a Chinese-version MoE model. Aurora is further work bas...
🔥 Korean GPT-2 (KoGPT2) fine-tuning example, trained on Korean song-lyric data 🔥
AutoAudit: an LLM for cybersecurity
ALBERT model pretraining and fine-tuning using TensorFlow 2.0
an MLOps/LLMOps platform
[NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Lan...
ChatGLM-6B fine-tuning/LoRA/PPO/inference; training samples are auto-generated integer/decimal arithmetic problems (addition, subtraction, multiplication, division); can g...
Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or...
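Several entries above (e.g. Notus) fine-tune with DPO. As an illustrative sketch only, not code from any listed project, the core per-pair DPO objective can be written in plain Python; the function name and the default `beta` value here are assumptions:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair (illustrative sketch).

    Arguments are log-probabilities of the chosen/rejected responses
    under the policy being trained (pi_*) and under the frozen
    reference model (ref_*).
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response over the rejected one, relative to the reference
    # model, scaled by beta.
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Negative log-sigmoid of the margin: minimized by raising the
    # policy's relative probability of the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy still matches the reference model the margin is zero and the loss is log 2; it falls below that as soon as the policy shifts probability toward the chosen responses.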