Text Generation Webui Colab

A Colab Gradio web UI for running large language models

🐣 Please follow me for new updates https://twitter.com/camenduru
🔥 Please join our discord server https://discord.gg/k5BwmmvJJU

🚦 WIP 🚦

🦒 Colab

Colab | Info | Model Page
Open In Colab | vicuna-13b-GPTQ-4bit-128g | https://vicuna.lmsys.org
Open In Colab | vicuna-13B-1.1-GPTQ-4bit-128g | https://vicuna.lmsys.org
Open In Colab | stable-vicuna-13B-GPTQ-4bit-128g | https://huggingface.co/CarperAI/stable-vicuna-13b-delta
Open In Colab | gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/gpt4-x-alpaca
Open In Colab | pyg-7b-GPTQ-4bit-128g | https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b
Open In Colab | koala-13B-GPTQ-4bit-128g | https://bair.berkeley.edu/blog/2023/04/03/koala
Open In Colab | oasst-llama13b-GPTQ-4bit-128g | https://open-assistant.io
Open In Colab | wizard-lm-uncensored-7b-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM
Open In Colab | mpt-storywriter-7b-GPTQ-4bit-128g | https://www.mosaicml.com
Open In Colab | wizard-lm-uncensored-13b-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM
Open In Colab | pyg-13b-GPTQ-4bit-128g | https://huggingface.co/PygmalionAI/pygmalion-13b
Open In Colab | falcon-7b-instruct-GPTQ-4bit | https://falconllm.tii.ae/
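
The notebooks linked above boil down to the same recipe: install text-generation-webui, pull a pre-quantized GPTQ checkpoint into its models/ folder, and start the Gradio server in 4-bit mode. The cell below is a minimal sketch of that flow rather than code copied from any notebook; the WizardLM-7B-uncensored-GPTQ checkpoint, the git-lfs download, and the server.py flags are examples, and exact flag names differ between webui versions.

    # Rough Colab-cell sketch (illustrative, not copied from the notebooks above).
    # Assumes a GPU runtime; run `python server.py --help` in the version you
    # clone, because flag names have changed between webui releases.
    %cd /content
    !git clone https://github.com/oobabooga/text-generation-webui
    %cd /content/text-generation-webui
    !pip install -q -r requirements.txt

    # Fetch a pre-quantized GPTQ checkpoint into models/ (example repo).
    !apt-get -y install -qq git-lfs && git lfs install
    !git clone https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ models/WizardLM-7B-uncensored-GPTQ

    # Launch the Gradio UI with 4-bit settings matching the "-4bit-128g" naming.
    !python server.py --share --chat --model WizardLM-7B-uncensored-GPTQ --wbits 4 --groupsize 128 --model_type llama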

🦒 Colab Pro

According to the Facebook Research LLaMA license (a non-commercial bespoke license), we may not be able to use these models with a Colab Pro account. However, Yann LeCun referred to the license as "GPL v3" (https://twitter.com/ylecun/status/1629189925089296386), so I am a little confused. Is it possible to use this with a paid Colab Pro account?

Tutorial

https://www.youtube.com/watch?v=kgA7eKU1XuA

Text Generation Web UI

https://github.com/oobabooga/text-generation-webui (Thanks to @oobabooga ❤)

Models License

Model | License
vicuna-13b-GPTQ-4bit-128g | From https://vicuna.lmsys.org: The online demo is a research preview intended for non-commercial use only, subject to the model License of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violations. The code is released under the Apache License 2.0.
gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/alpaca-native -> https://huggingface.co/chavinlo/alpaca-13b -> https://huggingface.co/chavinlo/gpt4-x-alpaca

Special Thanks

Thanks to facebookresearch ❤ for https://github.com/facebookresearch/llama
Thanks to lmsys ❤ for https://huggingface.co/lmsys/vicuna-13b-delta-v0
Thanks to anon8231489123 ❤ for https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/lmsys/vicuna-13b-delta-v0)
Thanks to tatsu-lab ❤ for https://github.com/tatsu-lab/stanford_alpaca
Thanks to chavinlo ❤ for https://huggingface.co/chavinlo/gpt4-x-alpaca
Thanks to qwopqwop200 ❤ for https://github.com/qwopqwop200/GPTQ-for-LLaMa
Thanks to tsumeone ❤ for https://huggingface.co/tsumeone/gpt4-x-alpaca-13b-native-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/chavinlo/gpt4-x-alpaca)
Thanks to transformers ❤ for https://github.com/huggingface/transformers
Thanks to gradio-app ❤ for https://github.com/gradio-app/gradio
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ
Thanks to Neko-Institute-of-Science ❤ for https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b
Thanks to gozfarb ❤ for https://huggingface.co/gozfarb/pygmalion-7b-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b)
Thanks to young-geng ❤ for https://huggingface.co/young-geng/koala
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/young-geng/koala)
Thanks to dvruette ❤ for https://huggingface.co/dvruette/oasst-llama-13b-2-epochs
Thanks to gozfarb ❤ for https://huggingface.co/gozfarb/oasst-llama13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/dvruette/oasst-llama-13b-2-epochs)
Thanks to ehartford ❤ for https://huggingface.co/ehartford/WizardLM-7B-Uncensored
Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-7B-Uncensored)
Thanks to mosaicml ❤ for https://huggingface.co/mosaicml/mpt-7b-storywriter
Thanks to OccamRazor ❤ for https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/mosaicml/mpt-7b-storywriter)
Thanks to ehartford ❤ for https://huggingface.co/ehartford/WizardLM-13B-Uncensored
Thanks to ausboss ❤ for https://huggingface.co/ausboss/WizardLM-13B-Uncensored-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-13B-Uncensored)
Thanks to PygmalionAI ❤ for https://huggingface.co/PygmalionAI/pygmalion-13b
Thanks to notstoic ❤ for https://huggingface.co/notstoic/pygmalion-13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/PygmalionAI/pygmalion-13b)
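
The 4-bit checkpoints credited above are ordinary Hugging Face model repositories, so they can also be fetched outside of Colab. Below is a minimal sketch using huggingface_hub; the repo id and target folder are just examples taken from the list above, and after the download the folder name under models/ is what gets passed to server.py via --model.

    # Example: download one of the pre-quantized checkpoints listed above.
    # Requires `pip install huggingface_hub`; the repo id is an example, not a recommendation.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="TheBloke/koala-13B-GPTQ-4bit-128g",  # GPTQ 4-bit quantization of young-geng/koala
        local_dir="models/koala-13B-GPTQ-4bit-128g",  # text-generation-webui looks for models under models/
    )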
