Effective prompting for Large Multimodal Models like GPT-4 Vision, LLaVA or CogVLM. 🔥
Multimodal-Maestro gives you more control over large multimodal models to get the outputs you want. With more effective prompting tactics, you can get multimodal models to do tasks you didn't know (or think!) were possible. Curious how it works? Try our HF space!
⚠️ Our package has been renamed to **maestro**. Install the package in a Python>=3.8,<=3.11 environment.
```bash
pip install maestro
```
🚧 The project is still under construction. The redesigned API is coming soon.
| Description | Colab |
|---|---|
| Prompt LMMs with Multimodal Maestro | |
| Manually annotate ONE image and let GPT-4V annotate ALL of them | |
```
Find dog.

>>> The dog is prominently featured in the center of the image with the label [9].
```
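The model answers by referring to marks via bracketed labels such as `[9]`. Pulling those labels out of a response is a simple regex job; here is a minimal sketch (independent of maestro's own parsing logic):

```python
import re

response = "The dog is prominently featured in the center of the image with the label [9]."

# collect every bracketed mark label mentioned in the response
labels = re.findall(r"\[(\d+)\]", response)
print(labels)  # ['9']
```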
```python
import cv2
import maestro

# load image
image = cv2.imread("...")

# create and refine marks
generator = maestro.SegmentAnythingMarkGenerator(device='cuda')
marks = generator.generate(image=image)
marks = maestro.refine_marks(marks=marks)

# visualize marks
mark_visualizer = maestro.MarkVisualizer()
marked_image = mark_visualizer.visualize(image=image, marks=marks)

# prompt the model with the marked image (api_key: your OpenAI API key)
prompt = "Find dog."
response = maestro.prompt_image(api_key=api_key, image=marked_image, prompt=prompt)
# >>> "The dog is prominently featured in the center of the image with the label [9]."

# extract relevant masks
masks = maestro.extract_relevant_masks(text=response, detections=marks)
# >>> {'6': array([
# ...     [False, False, False, ..., False, False, False],
# ...     [False, False, False, ..., False, False, False],
# ...     [False, False, False, ..., False, False, False],
# ...     ...,
# ...     [ True,  True,  True, ..., False, False, False],
# ...     [ True,  True,  True, ..., False, False, False],
# ...     [ True,  True,  True, ..., False, False, False]])
# ...    }
```
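The extracted masks are plain boolean NumPy arrays keyed by mark label, so you can post-process them with standard NumPy. A minimal sketch, using a small hypothetical mask in place of a real image-sized one:

```python
import numpy as np

# toy stand-in for one extracted mask (real masks are image-sized boolean arrays)
mask = np.array([
    [False, False, False],
    [True,  True,  False],
    [True,  True,  False],
])

# pixel area covered by the mask
area = int(mask.sum())  # 4

# tight bounding box (y_min, y_max, x_min, x_max) around the True region
ys, xs = np.nonzero(mask)
bbox = (ys.min(), ys.max(), xs.min(), xs.max())  # (1, 2, 0, 1)
```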
We would love your help in making this repository even better! If you notice any bugs or have suggestions for improvement, feel free to open an issue or submit a pull request.