LLM release notes

Access large language models from the command-line

0.14

2 days ago
  • Support for OpenAI's new GPT-4o model: llm -m gpt-4o 'say hi in Spanish' #490
  • The gpt-4-turbo alias is now a model ID, which indicates the latest version of OpenAI's GPT-4 Turbo text and image model. Your existing logs.db database may contain records under the previous model ID of gpt-4-turbo-preview. #493
  • New llm logs -r/--response option for outputting just the last captured response, without wrapping it in Markdown or including the prompt (see the example after this list). #431
  • Nine new plugins since version 0.13; see the plugin directory for the full list.
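
A quick sketch of the new --response option in action (the prompt is just an illustration):

llm 'Say hi in Spanish'
# Print only the text of that last response
llm logs -r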

0.13.1

3 months ago
  • Fix for No module named 'readline' error on Windows. #407

0.13

3 months ago

See also LLM 0.13: The annotated release notes.

  • Added support for new OpenAI embedding models: 3-small and 3-large and three variants of those with different dimension sizes, 3-small-512, 3-large-256 and 3-large-1024. See OpenAI embedding models for details. #394
  • The default gpt-4-turbo model alias now points to gpt-4-turbo-preview, which uses the most recent OpenAI GPT-4 turbo model (currently gpt-4-0125-preview). #396
  • New OpenAI model aliases gpt-4-1106-preview and gpt-4-0125-preview.
  • OpenAI models now support a -o json_object 1 option, which causes their output to be returned as a valid JSON object (see the sketch after this list). #373
  • New plugins since the last release include llm-mistral, llm-gemini, llm-ollama and llm-bedrock-meta.
  • The keys.json file for storing API keys is now created with 600 file permissions. #351
  • Documented a pattern for installing plugins that depend on PyTorch using the Homebrew version of LLM, despite Homebrew using Python 3.12, for which PyTorch has not yet released a stable package. #397
  • The underlying OpenAI Python library has been upgraded to >1.0. It is possible this could cause compatibility issues with LLM plugins that also depend on that library. #325
  • Arrow keys now work inside the llm chat command. #376
  • LLM_OPENAI_SHOW_RESPONSES=1 environment variable now outputs much more detailed information about the HTTP request and response made to OpenAI (and OpenAI-compatible) APIs. #404
  • Dropped support for Python 3.7.
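
As a sketch of the new JSON mode (the model and prompt are illustrative; note that OpenAI's API expects the prompt itself to mention JSON when this option is set):

# -o json_object 1 constrains the response to valid JSON
llm -m gpt-4-turbo-preview 'Invent a dog and describe it as a JSON object' -o json_object 1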

0.12

6 months ago
  • Support for the new GPT-4 Turbo model from OpenAI. Try it using llm chat -m gpt-4-turbo or llm chat -m 4t. #323
  • New -o seed 1 option for OpenAI models, which sets a seed the API can use to attempt to evaluate the prompt deterministically. #324
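
A sketch of the seed option (determinism is best-effort on OpenAI's side, so identical outputs are not guaranteed):

# Run this twice with the same seed and compare the outputs
llm -m gpt-4-turbo 'Pick a random animal' -o seed 1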

0.11.2

6 months ago
  • Pinned the OpenAI Python library to a version prior to 1.0, to avoid breakage from its 1.0 release. #327

0.11.1

6 months ago

0.11

7 months ago

LLM now supports the new OpenAI gpt-3.5-turbo-instruct model, and OpenAI completion (as opposed to chat completion) models in general. #284

llm -m gpt-3.5-turbo-instruct 'Reasons to tame a wild beaver:'

OpenAI completion models like this support a -o logprobs 3 option, which accepts a number between 1 and 5 and includes log probabilities in the logged response - for each generated token, the top 3 options the model considered.

llm -m gpt-3.5-turbo-instruct 'Say hello succinctly' -o logprobs 3

You can then view the logprobs that were recorded in the SQLite logs database like this:

sqlite-utils "$(llm logs path)" \
  'select * from responses order by id desc limit 1' | \
  jq '.[0].response_json' -r | jq

Truncated output looks like this:

  [
    {
      "text": "Hi",
      "top_logprobs": [
        {
          "Hi": -0.13706253,
          "Hello": -2.3714375,
          "Hey": -3.3714373
        }
      ]
    },
    {
      "text": " there",
      "top_logprobs": [
        {
          " there": -0.96057636,
          "!\"": -0.5855763,
          ".\"": -3.2574513
        }
      ]
    }
  ]

Also in this release:

  • The llm.user_dir() function, used by plugins, now ensures the directory exists before returning it. #275
  • New LLM_OPENAI_SHOW_RESPONSES=1 environment variable for displaying the full HTTP response returned by OpenAI compatible APIs. #286
  • The llm embed-multi command now has a --batch-size X option for setting the batch size used when processing embeddings - useful if you have limited memory available (see the sketch after this list). #273
  • The collection.embed_multi() method also now accepts an optional batch_size=int argument.
  • Fixed two bugs with llm embed-multi --files relating to handling of directories. Thanks, ealvar3z. #274, #280
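
A sketch of the --batch-size option mentioned above (the collection name and file pattern are hypothetical):

# Embed Markdown files in batches of 50 rather than all at once
llm embed-multi docs --files docs/ '*.md' --batch-size 50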

0.10

8 months ago

The two major features in this release are the llm chat command and support for embedding binary data.

llm chat

The new llm chat command starts an ongoing chat conversation with a model in your terminal. It works with all models supported by LLM and its plugins, including locally installed models such as Llama 2. #231

This offers a big performance boost for local models, since they don't need to be freshly loaded into memory for each prompt.

Here's an example chat with Llama 2 13B, provided by the llm-mlc plugin.

llm chat -m mlc-chat-Llama-2-13b-chat-hf-q4f16_1

This starts a chat session:

Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Who are you?
Hello! I'm just an AI, here to assist you with any questions you may have.
My name is LLaMA, and I'm a large language model trained to provide helpful
and informative responses to a wide range of questions and topics. I'm here
to help you find the information you need, while ensuring a safe and
respectful conversation. Please feel free to ask me anything, and I'll do my
best to provide a helpful and accurate response.
> Tell me a joke about otters
Sure, here's a joke about otters:

Why did the otter go to the party?

Because he heard it was a "whale" of a time!

(Get it? Whale, like a big sea mammal, but also a "wild" or "fun" time.
Otters are known for their playful and social nature, so it's a lighthearted
and silly joke.)

I hope that brought a smile to your face! Do you have any other questions or
topics you'd like to discuss?
> exit

Chat sessions are logged to SQLite - use llm logs to view them. They can accept system prompts, templates and model options - consult the chat documentation for details.
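
For example, here is a sketch combining a system prompt with a model option (the values are illustrative):

# -s sets a system prompt, -o passes a model option
llm chat -m gpt-4 -s 'You are a sarcastic pirate' -o temperature 0.5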

Binary embedding support

LLM's embeddings feature has been expanded to provide support for embedding binary data, in addition to text. #254

This enables models like CLIP, supported by the new llm-clip plugin.

CLIP is a multi-modal embedding model which can embed images and text into the same vector space. This means you can use it to create an embedding index of photos, and then search for the embedding vector for "a happy dog" and get back images that are semantically closest to that string.

To create embeddings for every JPEG in a directory, storing them in a collection called photos, run:

llm install llm-clip
llm embed-multi photos --files photos/ '*.jpg' --binary -m clip

Now you can search for photos of raccoons using:

llm similar photos -c 'raccoon'

This spits out a list of images, ranked by how similar they are to the string "raccoon":

{"id": "IMG_4801.jpeg", "score": 0.28125139257127457, "content": null, "metadata": null}
{"id": "IMG_4656.jpeg", "score": 0.26626441704164294, "content": null, "metadata": null}
{"id": "IMG_2944.jpeg", "score": 0.2647445926996852, "content": null, "metadata": null}
...

Also in this release

  • The LLM_LOAD_PLUGINS environment variable can be used to control which plugins are loaded when llm starts running (see the sketch after this list). #256
  • The llm plugins --all option includes builtin plugins in the list of plugins. #259
  • The llm embed-db family of commands has been renamed to llm collections. #229
  • llm embed-multi --files now has an --encoding option and defaults to falling back to latin-1 if a file cannot be processed as utf-8. #225
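
A sketch of that environment variable in use (the plugin name is illustrative):

# Load only the named plugin for this invocation; an empty value
# (LLM_LOAD_PLUGINS='') skips plugin loading entirely
LLM_LOAD_PLUGINS='llm-clip' llm plugins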

0.10a1

8 months ago
  • Support for embedding binary data. #254
  • llm chat now works for models with API keys. #247
  • llm chat -o for passing options to a model. #244
  • llm chat --no-stream option. #248
  • LLM_LOAD_PLUGINS environment variable. #256
  • llm plugins --all option for including builtin plugins. #259
  • llm embed-db has been renamed to llm collections. #229
  • Fixed bug where llm embed -c option was treated as a filepath, not a string. Thanks, mhalle. #263

0.10a0

8 months ago
  • New llm chat command for starting an interactive terminal chat with a model. #231
  • llm embed-multi --files now has an --encoding option and defaults to falling back to latin-1 if a file cannot be processed as utf-8. #225