Roboflow Inference Versions

A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.

v0.9.22

3 weeks ago

What's Changed

New Contributors

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.20...v0.9.22

v0.9.20

1 month ago

What's Changed

  • Bump version for pypi wheels

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.19...v0.9.20

v0.9.19

1 month ago

GroundingDINO bugfixes and enhancements!

This release:

  • Allows users to pass custom box_threshold and text_threshold params to the Grounding DINO core model, and updates the docs to reflect those params.
  • Fixes an error by filtering out detections where text similarity is lower than text_threshold and Grounding DINO returns None for the class ID.
  • Fixes images passed to the Grounding DINO model being loaded as RGB instead of BGR.
  • Adds NMS to Grounding DINO, optionally using class-agnostic NMS via the CLASS_AGNOSTIC_NMS env var.

Try it out:

from inference.models.grounding_dino import GroundingDINO

model = GroundingDINO(api_key="")

results = model.infer(
    {
        "image": {
            "type": "url",
            "value": "https://media.roboflow.com/fruit.png",
        },
        "text": ["apple"],

        # Optional params
        "box_threshold": 0.5
        "text_threshold": 0.5
    }
)

print(results.json())
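
Class-agnostic NMS is toggled through an environment variable rather than a request parameter. A minimal sketch of how you might enable it (the accepted value format is an assumption; verify against the env var documentation):

import os

# Assumption: the flag is parsed from a boolean-like string;
# set it before importing/instantiating the model
os.environ["CLASS_AGNOSTIC_NMS"] = "True"

from inference.models.grounding_dino import GroundingDINO

model = GroundingDINO(api_key="")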

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.18...v0.9.19

v0.9.18

1 month ago

🚀 Added

🎥 🎥 Multiple video sources 🤝 InferencePipeline

Previous versions of the InferencePipeline could only support a single video source. From now on, you can pass multiple videos into a single pipeline and have all of them processed!

Here's how to achieve the result:

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    video_reference=["your_video.mp4", "your_other_ideo.mp4"],
    model_id="yolov8n-640",
    on_prediction=render_boxes,
)
pipeline.start()
pipeline.join()

A lot of internal changes were made, but the majority of users should not experience any breaking changes. Please visit our 📖 documentation to discover all the differences. If you are affected by the changes we needed to introduce, here is the 🔧 migration guide.

Barcode detector in workflows

Thanks to @chandlersupple, we now have the ability to detect and read barcodes in workflows.

Visit our 📖 documentation to see how to bring this step into your workflow.
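
For orientation, here is a minimal sketch of a workflow specification using the barcode step; the exact step type name (BarcodeDetector) and its fields are assumptions, so verify them against the documentation linked above:

BARCODE_WORKFLOW = {
    "specification": {
        "version": "1.0",
        "inputs": [
            { "type": "InferenceImage", "name": "image" },
        ],
        "steps": [
            {
                "type": "BarcodeDetector",  # assumed step type name
                "name": "step_1",
                "image": "$inputs.image",
            },
        ],
        "outputs": [
            { "type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions" },
        ]
    }
}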

🌱 Changed

Easier data collection in inference 🔥

We've introduced a new parameter handled by the inference server (including hosted inference on the Roboflow platform). This parameter, called active_learning_target_dataset, can now be added to requests to specify the Roboflow project where collected data should be stored.

Thanks to this change, you can now collect datasets while using Universe models. We've also updated the Active Learning 📖 docs.

from inference_sdk import InferenceHTTPClient, InferenceConfiguration

# prepare and set configuration
configuration = InferenceConfiguration(
    active_learning_target_dataset="my_dataset",
)
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
).configure(configuration)

# run a normal request and have your data sampled 🤯
client.infer(
    "./path_to/your_image.jpg",
    model_id="yolov8n-640",
)

Other changes

🔨 Fixed

Thanks to a contribution from @hvaria 🏅, we have two problems solved:

New Contributors

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.17...v0.9.18

v0.9.17

1 month ago

🚀 Added

YOLOWorld - new versions and Roboflow hosted inference 🤯

The inference package now supports 5 new versions of the YOLOWorld model. We've added versions x, v2-s, v2-m, v2-l, and v2-x. Versions with the v2 prefix have better performance than the previously published ones.

To use YOLOWorld in inference, use the following model_id: yolo_world/<version>, substituting <version> with one of [s, m, l, x, v2-s, v2-m, v2-l, v2-x].

You can use the models in different contexts:

Roboflow hosted inference - the easiest way to get your predictions 💥

💡 Please make sure you have inference-sdk installed

If you do not have the whole inference package installed, you will need to install at least inference-sdk:

pip install inference-sdk
💡 You need a Roboflow account to use our hosted platform
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://infer.roboflow.com",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # <-- you do not need to provide `yolo_world/` prefix here
)

Self-hosted inference server

💡 Please remember to clean up the old version of the docker image

If you have ever used the inference server before, please run:

docker rmi roboflow/roboflow-inference-server-cpu:latest

# or, if you have GPU on the machine
docker rmi roboflow/roboflow-inference-server-gpu:latest

to make sure the newest version of the image is pulled.

💡 Please make sure you run the server and have the SDK installed

If you do not have the whole inference package installed, you will need to install at least inference-cli and inference-sdk:

pip install inference-sdk inference-cli

Make sure you start a local instance of the inference server before running the code:

inference server start
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # <-- you do not need to provide `yolo_world/` prefix here
)

In the inference Python package

💡 Please remember to install inference with the yolo-world extras
pip install "inference[yolo-world]"
import cv2
from inference.models import YOLOWorld

image = cv2.imread("<path_to_your_image>")
model = YOLOWorld(model_id="yolo_world/s")
results = model.infer(
    image, 
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"]
)

🌱 Changed

New Contributors

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.16...v0.9.17

v0.9.16

1 month ago

🚀 Added

🎬 InferencePipeline can now process video using your custom logic

Prior to v0.9.16, InferencePipeline was only able to make inferences against Roboflow models. Now you can inject arbitrary logic of your choice and process videos (files and streams) using a custom function you create. Just look at the example:

import os
import json
from inference.core.interfaces.camera.entities import VideoFrame
from inference import InferencePipeline

TARGET_DIR = "./my_predictions"
os.makedirs(TARGET_DIR, exist_ok=True)  # make sure the output directory exists

class MyModel:

  def __init__(self, weights_path: str):
    self._model = your_model_loader(weights_path)  # placeholder for your own model-loading logic

  def infer(self, video_frame: VideoFrame) -> dict:
    return self._model(video_frame.image)


def save_prediction(prediction: dict, video_frame: VideoFrame) -> None:
  with open(os.path.join(TARGET_DIR, f"{video_frame.frame_id}.json"), "w") as f:
    json.dump(prediction, f)

my_model = MyModel("./my_model.pt")

pipeline = InferencePipeline.init_with_custom_logic(
  video_reference="./my_video.mp4",
  on_video_frame=my_model.infer,   # <-- your custom video frame processing function
  on_prediction=save_prediction,  # <-- your custom sink for predictions
)

# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()

That's not everything! Remember our workflows feature? We've just added workflows into InferencePipeline (in experimental mode). Check InferencePipeline.init_with_workflow(...) to test the feature.
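
As a rough sketch of what the experimental API could look like (the workflow_specification parameter name and the step type are assumptions; consult the docs before relying on them):

from inference import InferencePipeline
from inference.core.interfaces.camera.entities import VideoFrame

# a tiny workflow definition in the same format as the specifications shown in the releases below
my_workflow_specification = {
    "version": "1.0",
    "inputs": [{ "type": "InferenceImage", "name": "image" }],
    "steps": [
        {
            "type": "ObjectDetectionModel",  # assumed step type name
            "name": "step_1",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
    ],
    "outputs": [
        { "type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions" },
    ],
}

def print_prediction(prediction: dict, video_frame: VideoFrame) -> None:
    print(video_frame.frame_id, prediction)

pipeline = InferencePipeline.init_with_workflow(
    video_reference="./my_video.mp4",
    workflow_specification=my_workflow_specification,  # assumed parameter name
    on_prediction=print_prediction,
)
pipeline.start()
pipeline.join()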

❗ Breaking change: we've reverted the changes introduced in v0.9.15 to InferencePipeline.init(...) that made it compatible with the YOLOWorld model. Now you need to use InferencePipeline.init_with_yolo_world(...) as shown here:

pipeline = InferencePipeline.init_with_yolo_world(
    video_reference="YOUR-VIDEO",
    on_prediction=...,
    classes=["person", "dog", "car", "truck"],
)

We've updated the 📖 docs to make it easy to use the new feature.

Thanks @paulguerrie for a great contribution!

🌱 Changed

  • Huge changes in 📖 docs - thanks @capjamesg, @SkalskiP, @SolomonLake for the contribution
  • Improved the contributor experience by adding a contributor guide and separating the GHA CI, such that the most important tests can run against a repository fork
  • OpenVINO is now the default ONNX Execution Provider for x86-based docker images, improving the speed of inference (@probicheaux)
  • Camera properties in InferencePipeline can now be set by the caller (@sberan) - see the sketch after this list
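
A minimal sketch of what setting camera properties could look like (the video_source_properties parameter name and its keys are assumptions; verify them against the InferencePipeline documentation):

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    video_reference=0,  # a local webcam, where camera properties matter most
    model_id="yolov8n-640",
    on_prediction=render_boxes,
    # assumed parameter name and keys - check the docs
    video_source_properties={"frame_width": 1280, "frame_height": 720, "fps": 30},
)
pipeline.start()
pipeline.join()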

🔨 Fixed

  • added the missing structlog dependency to the package (@paulguerrie)
  • clarified the models licence (@yeldarby)
  • fixed bugs in lambda HTTP inference
  • fixed a portion of security vulnerabilities
  • ❗ breaking: two exceptions (WorkspaceLoadError, MalformedWorkflowResponseError), when raised, will now be given an HTTP 502 error instead of HTTP 500 as previously
  • fixed a bug in workflows where the class filter at the level of detection-based model blocks was not being applied

New Contributors

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.15...v0.9.16

v0.9.15

2 months ago

What's Changed

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.14...v0.9.15

v0.9.15rc1

2 months ago

What's Changed

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.14...v0.9.15rc1

v0.9.14

2 months ago

🚀 Added

LMMs (GPT-4V and CogVLM) 🤝 workflows

Now, with Roboflow workflows, LMM integration is made easy 💪.

As always, we encourage you to visit the workflows docs 📖 and examples.

This is how to create a multi-functional app with workflows and LMMs. First, start the server:

inference server start

Then run:
from inference_sdk import InferenceHTTPClient

LOCAL_CLIENT = InferenceHTTPClient(
    api_url="http://127.0.0.1:9001", 
    api_key=ROBOFLOW_API_KEY,
)
FLEXIBLE_SPECIFICATION = {
    "version": "1.0",
    "inputs": [
        { "type": "InferenceImage", "name": "image" },
        { "type": "InferenceParameter", "name": "open_ai_key" },
        { "type": "InferenceParameter", "name": "lmm_type" },
        { "type": "InferenceParameter", "name": "prompt" },
        { "type": "InferenceParameter", "name": "expected_output" },
    ],
    "steps": [     
        {
            "type": "LMM",
            "name": "step_1",
            "image": "$inputs.image",
            "lmm_type": "$inputs.lmm_type",
            "prompt": "$inputs.prompt",
            "json_output": "$inputs.expected_output",
            "remote_api_key": "$inputs.open_ai_key",
        },
    ],
    "outputs": [
        { "type": "JsonField", "name": "structured_output", "selector": "$steps.step_1.structured_output" },
        { "type": "JsonField", "name": "llm_output", "selector": "$steps.step_1.*" },
    ]   
}

response_gpt = LOCAL_CLIENT.infer_from_workflow(
    specification=FLEXIBLE_SPECIFICATION,
    images={
        "image": cars_image,
    },
    parameters={
        "open_ai_key": OPEN_AI_KEY,
        "lmm_type": "gpt_4v",
        "prompt": "You are supposed to act as object counting expert. Please provide number of **CARS** visible in the image",
        "expected_output": {
            "objects_count": "Integer value with number of objects",
        }
    }
)

🌱 Changed

🔨 Fixed

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.13...v0.9.14

v0.9.13

2 months ago

🚀 Added

YOLO World 🤝 workflows

We've introduced the YOLO World model into workflows, making it trivially easy to use it like any other object-detection model ☺️

To try this out, install dependencies first:

pip install inference-sdk inference-cli

Start the server:

inference server start

And run the script:

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(api_url="http://127.0.0.1:9001", api_key="YOUR_API_KEY")

YOLO_WORLD = {
    "specification": {
        "version": "1.0",
        "inputs": [
            { "type": "InferenceImage", "name": "image" },
            { "type": "InferenceParameter", "name": "classes" },
            { "type": "InferenceParameter", "name": "confidence", "default_value": 0.003 },
        ],
        "steps": [
            {
                "type": "YoloWorld",
                "name": "step_1",
                "image": "$inputs.image",
                "class_names": "$inputs.classes",
                "confidence": "$inputs.confidence",
            },
        ],
        "outputs": [
            { "type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions" },
        ]   
    }
}

response = CLIENT.infer_from_workflow(
    specification=YOLO_WORLD["specification"],
    images={
        "image": frame,
    },
    parameters={
        "classes": ["yellow filling", "black hole"]  # each time you may specify different classes!
    }
)

Check the details in the documentation 📖 and discover usage examples.

πŸ† Contributors

@PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: https://github.com/roboflow/inference/compare/v0.9.12...v0.9.13