Utilities to use the Hugging Face Hub API
```ts
// Programmatically interact with the Hub

await createRepo({
  repo: { type: "model", name: "my-user/nlp-model" },
  credentials: { accessToken: HF_TOKEN }
});

await uploadFile({
  repo: "my-user/nlp-model",
  credentials: { accessToken: HF_TOKEN },
  // Can work with native File in browsers
  file: {
    path: "pytorch_model.bin",
    content: new Blob(...)
  }
});

// Use hosted inference

await inference.translation({
  model: 't5-base',
  inputs: 'My name is Wolfgang and I live in Berlin'
});

await inference.textToImage({
  model: 'stabilityai/stable-diffusion-2',
  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
  parameters: {
    negative_prompt: 'blurry',
  }
});

// and much more…
```
This is a collection of JS libraries to interact with the Hugging Face API, with TS types included.
We use modern features to avoid polyfills and dependencies, so the libraries will only work on modern browsers / Node.js >= 18 / Bun / Deno.
The libraries are still very young; please help us by opening issues!
To install via npm, add the libraries you need:
```shell
npm install @huggingface/inference
npm install @huggingface/hub
npm install @huggingface/agents
```
Then import the libraries in your code:
```ts
import { HfInference } from "@huggingface/inference";
import { HfAgent } from "@huggingface/agents";
import { createRepo, commit, deleteRepo, listFiles } from "@huggingface/hub";
import type { RepoId, Credentials } from "@huggingface/hub";
```
You can also run our packages with vanilla JS, without any bundler, by using a CDN or static hosting. Using ES modules, i.e. `<script type="module">`, you can import the libraries in your code:
```html
<script type="module">
  import { HfInference } from "https://cdn.jsdelivr.net/npm/@huggingface/[email protected]/+esm";
  import { createRepo, commit, deleteRepo, listFiles } from "https://cdn.jsdelivr.net/npm/@huggingface/[email protected]/+esm";
</script>
```
```ts
// esm.sh
import { HfInference } from "https://esm.sh/@huggingface/inference";
import { HfAgent } from "https://esm.sh/@huggingface/agents";
import { createRepo, commit, deleteRepo, listFiles } from "https://esm.sh/@huggingface/hub";

// or npm:
import { HfInference } from "npm:@huggingface/inference";
import { HfAgent } from "npm:@huggingface/agents";
import { createRepo, commit, deleteRepo, listFiles } from "npm:@huggingface/hub";
```
Get your HF access token in your account settings.
```ts
import { HfInference } from "@huggingface/inference";

const HF_TOKEN = "hf_...";

const inference = new HfInference(HF_TOKEN);

// You can also omit "model" to use the recommended model for the task
await inference.translation({
  model: 't5-base',
  inputs: 'My name is Wolfgang and I live in Amsterdam'
});

await inference.textToImage({
  model: 'stabilityai/stable-diffusion-2',
  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
  parameters: {
    negative_prompt: 'blurry',
  }
});

await inference.imageToText({
  data: await (await fetch('https://picsum.photos/300/300')).blob(),
  model: 'nlpconnect/vit-gpt2-image-captioning',
});

// Using your own dedicated inference endpoint: https://hf.co/docs/inference-endpoints/
const gpt2 = inference.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
const { generated_text } = await gpt2.textGeneration({ inputs: 'The answer to the universe is' });
```
```ts
import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub";

const HF_TOKEN = "hf_...";

await createRepo({
  repo: "my-user/nlp-model", // or { type: "model", name: "my-user/nlp-model" },
  credentials: { accessToken: HF_TOKEN }
});

await uploadFile({
  repo: "my-user/nlp-model",
  credentials: { accessToken: HF_TOKEN },
  // Can work with native File in browsers
  file: {
    path: "pytorch_model.bin",
    content: new Blob(...)
  }
});

await deleteFiles({
  repo: { type: "space", name: "my-user/my-space" }, // or "spaces/my-user/my-space"
  credentials: { accessToken: HF_TOKEN },
  paths: ["README.md", ".gitattributes"]
});
```
```ts
import { HfAgent, LLMFromHub, defaultTools } from "@huggingface/agents";

const HF_TOKEN = "hf_...";

const agent = new HfAgent(
  HF_TOKEN,
  LLMFromHub(HF_TOKEN),
  [...defaultTools]
);

// You can generate the code, inspect it, and then run it
const code = await agent.generateCode("Draw a picture of a cat wearing a top hat. Then caption the picture and read it out loud.");
console.log(code);
const messages = await agent.evaluateCode(code);
console.log(messages); // contains the data

// Or you can run the code directly. However, you can't check that the
// code is safe to execute this way; use at your own risk.
const directMessages = await agent.run("Draw a picture of a cat wearing a top hat. Then caption the picture and read it out loud.");
console.log(directMessages);
```
There are many more features, of course; check each library's README!
```shell
sudo corepack enable
pnpm install
pnpm -r format:check
pnpm -r lint:check
pnpm -r test
pnpm -r build
```
This will generate ESM and CJS JavaScript files in `packages/*/dist`, e.g. `packages/inference/dist/index.mjs`.