Gpt Index Versions

LlamaIndex is a data framework for your LLM applications

v0.10.32

3 weeks ago

v0.10.31

3 weeks ago

llama-index-core [0.10.31]

  • fix async streaming response from query engine (#12953)
  • enforce uuid in element node parsers (#12951)
  • add function calling LLM program (#12980)
  • make the PydanticSingleSelector work with async api (#12964)
  • fix query pipeline's arun_with_intermediates (#13002)

llama-index-agent-coa [0.1.0]

  • Add COA Agent integration (#13043)

llama-index-agent-lats [0.1.0]

  • Official LATS agent integration (#13031)

llama-index-agent-llm-compiler [0.1.0]

  • Add LLMCompiler Agent Integration (#13044)

llama-index-llms-anthropic [0.1.10]

  • Add the ability to pass custom headers to Anthropic LLM requests (#12819)

llama-index-llms-bedrock [0.1.7]

  • Add Claude 3 Opus to Bedrock integration (#13033)

llama-index-llms-fireworks [0.1.5]

  • Add new Llama 3 and Mixtral 8x22b models into LlamaIndex for Fireworks (#12970)

llama-index-llms-openai [0.1.16]

  • Fix AsyncOpenAI "RuntimeError: Event loop is closed bug" when instances of AsyncOpenAI are rapidly created & destroyed (#12946)
  • Don't retry on all OpenAI APIStatusError exceptions - just InternalServerError (#12947)

llama-index-llms-watsonx [0.1.7]

  • Updated IBM watsonx foundation models (#12973)

llama-index-packs-code-hierarchy [0.1.6]

  • Return the parent node if the query node is not present (#12983)
  • fixed bug when function is defined twice (#12941)

llama-index-program-openai [0.1.6]

  • Add support for streaming partial instances of the Pydantic output class in OpenAIPydanticProgram (#13021)

llama-index-readers-openapi [0.1.0]

  • add reader for openapi files (#12998)

llama-index-readers-slack [0.1.4]

  • Avoid infinite loop when an unhandled exception is raised (#12963)

llama-index-readers-web [0.1.10]

  • Improve whole site reader to remove duplicate links (#12977)

llama-index-retrievers-bedrock [0.1.1]

  • Fix Bedrock KB retriever to use query bundle (#12910)

llama-index-vector-stores-awsdocdb [0.1.0]

  • Integrating AWS DocumentDB as a vector storage method (#12217)

llama-index-vector-stores-databricks [0.1.2]

  • Fix databricks vector search metadata (#12999)

llama-index-vector-stores-neo4j [0.1.4]

  • Neo4j metadata filtering support (#12923)

llama-index-vector-stores-pinecone [0.1.5]

  • Fix error querying PineconeVectorStore using sparse query mode (#12967)

llama-index-vector-stores-qdrant [0.2.5]

  • Many fixes for async and checking if collection exists (#12916)

llama-index-vector-stores-weaviate [0.1.5]

  • Add index deletion functionality to the WeaviateVectorStore (#12993)

v0.10.30

4 weeks ago

llama-index-core [0.10.30]

  • Add intermediate outputs to QueryPipeline (#12683)
  • Fix show progress causing results to be out of order (#12897)
  • add OR filter condition support to simple vector store (#12823)
  • improved custom agent init (#12824)
  • fix pipeline load without docstore (#12808)
  • Use async _aprocess_actions in _arun_step_stream (#12846)
  • provide the exception to the StreamChatErrorEvent (#12879)
  • fix bug in load and search tool spec (#12902)

llama-index-embeddings-azure-openai [0.1.7]

  • Expose azure_ad_token_provider argument to support token expiration (#12818)

llama-index-embeddings-cohere [0.1.8]

  • Add httpx_async_client option (#12896)

llama-index-embeddings-ipex-llm [0.1.0]

  • add ipex-llm embedding integration (#12740)

llama-index-embeddings-octoai [0.1.0]

  • add octoai embeddings (#12857)

llama-index-llms-azure-openai [0.1.6]

  • Expose azure_ad_token_provider argument to support token expiration (#12818)

llama-index-llms-ipex-llm [0.1.2]

  • add support for loading "low-bit format" model to IpexLLM integration (#12785)

llama-index-llms-mistralai [0.1.11]

  • support open-mixtral-8x22b (#12894)

llama-index-packs-agents-lats [0.1.0]

  • added LATS agent pack (#12735)

llama-index-readers-smart-pdf-loader [0.1.4]

  • Use passed in metadata for documents (#12844)

llama-index-readers-web [0.1.9]

  • added Firecrawl Web Loader (#12825)

llama-index-vector-stores-milvus [0.1.10]

  • use batch insertions into Milvus vector store (#12837)

llama-index-vector-stores-vearch [0.1.0]

  • add vearch to vector stores (#10972)

v0.10.29

1 month ago

llama-index-core [0.10.29]

  • BREAKING Moved PandasQueryEngine and PandasInstruction parser to llama-index-experimental (#12419)
    • new install: pip install -U llama-index-experimental
    • new import: from llama_index.experimental.query_engine import PandasQueryEngine
  • Fixed some core dependencies to make python3.12 work nicely (#12762)
  • update async utils run_jobs() to include tqdm description (#12812)
  • Refactor kvdocstore delete methods (#12681)
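
The breaking PandasQueryEngine move above boils down to an install step plus an import change. A minimal migration sketch (assumes the llama-index-experimental package is installed; the usage line is illustrative only and needs an LLM configured):

```python
# pip install -U llama-index-experimental

# Before 0.10.29 (removed from llama-index-core):
#   from llama_index.core.query_engine import PandasQueryEngine

# From 0.10.29 on:
from llama_index.experimental.query_engine import PandasQueryEngine

import pandas as pd

df = pd.DataFrame({"city": ["Toronto", "Tokyo"], "population": [2_930_000, 13_960_000]})

# Hypothetical usage; construction and querying require an LLM to be configured.
# query_engine = PandasQueryEngine(df=df)
# response = query_engine.query("Which city has the larger population?")
```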

llama-index-llms-bedrock [0.1.6]

  • Support for Mistral Large from Bedrock (#12804)

llama-index-llms-openvino [0.1.0]

  • Added OpenVINO LLMs (#12639)

llama-index-llms-predibase [0.1.4]

  • Update LlamaIndex-Predibase Integration to latest API (#12736)
  • Enable choice of either Predibase-hosted or HuggingFace-hosted fine-tuned adapters in LlamaIndex-Predibase integration (#12789)

llama-index-output-parsers-guardrails [0.1.3]

  • Modernize GuardrailsOutputParser (#12676)

llama-index-packs-agents-coa [0.1.0]

  • Chain-of-Abstraction Agent Pack (#12757)

llama-index-packs-code-hierarchy [0.1.3]

  • Fixed issue with chunking multi-byte characters (#12715)

llama-index-packs-raft-dataset [0.1.4]

  • Fix bug in raft dataset generator - multiple system prompts (#12751)

llama-index-postprocessor-openvino-rerank [0.1.2]

  • Add openvino rerank support (#12688)

llama-index-readers-file [0.1.18]

  • Convert input path str to Path in docx reader (#12807)
  • make pip check work for optional pdf packages (#12758)

llama-index-readers-s3 [0.1.7]

  • Fix wrong doc id when using default s3 endpoint in S3Reader (#12803)

llama-index-retrievers-bedrock [0.1.0]

  • Add Amazon Bedrock knowledge base integration as retriever (#12737)

llama-index-retrievers-mongodb-atlas-bm25-retriever [0.1.3]

  • Add mongodb atlas bm25 retriever (#12519)

llama-index-storage-chat-store-redis [0.1.3]

  • fix message serialization in redis chat store (#12802)

llama-index-vector-stores-astra-db [0.1.6]

  • Relax dependency version to accept astrapy 1.* (#12792)

llama-index-vector-stores-couchbase [0.1.0]

  • Add support for Couchbase as a Vector Store (#12680)

llama-index-vector-stores-elasticsearch [0.1.7]

  • Fix elasticsearch hybrid rrf window_size (#12695)

llama-index-vector-stores-milvus [0.1.8]

  • Added support to retrieve metadata fields from milvus (#12626)

llama-index-vector-stores-redis [0.2.0]

  • Modernize redis vector store, use redisvl (#12386)

llama-index-vector-stores-qdrant [0.2.0]

  • refactor: Switch default Qdrant sparse encoder (#12512)

v0.10.28

1 month ago

llama-index-core [0.10.28]

  • Support indented code block fences in markdown node parser (#12393)
  • Pass in output parser to guideline evaluator (#12646)
  • Added example of query pipeline + memory (#12654)
  • Add missing node postprocessor in CondensePlusContextChatEngine async mode (#12663)
  • Added return_direct option to tools /tool metadata (#12587)
  • Add retry for batch eval runner (#12647)
  • Thread-safe instrumentation (#12638)
  • Coroutine-safe instrumentation Spans (#12589)
  • Add in-memory loading for non-default filesystems in PDFReader (#12659)
  • Remove redundant tokenizer call in sentence splitter (#12655)
  • Add SynthesizeComponent import to shortcut imports (#12655)
  • Improved truncation in SimpleSummarize (#12655)
  • adding err handling in eval_utils default_parser for correctness (#12624)
  • Add async_postprocess_nodes at RankGPT Postprocessor Nodes (#12620)
  • Fix MarkdownNodeParser ref_doc_id (#12615)

llama-index-embeddings-openvino [0.1.5]

  • Added initial support for openvino embeddings (#12643)

llama-index-llms-anthropic [0.1.9]

  • add anthropic tool calling (#12591)

llama-index-llms-ipex-llm [0.1.1]

  • add ipex-llm integration (#12322)
  • add more data types support to ipex-llm llm integration (#12635)

llama-index-llms-openllm [0.1.4]

  • Proper PrivateAttr usage in OpenLLM (#12655)

llama-index-multi-modal-llms-anthropic [0.1.4]

  • Bumped anthropic dep version (#12655)

llama-index-multi-modal-llms-gemini [0.1.5]

  • bump generativeai dep (#12645)

llama-index-packs-dense-x-retrieval [0.1.4]

  • Add streaming support for DenseXRetrievalPack (#12607)

llama-index-readers-mongodb [0.1.4]

  • Improve efficiency of MongoDB reader (#12664)

llama-index-readers-wikipedia [0.1.4]

  • Added multilingual support for the Wikipedia reader (#12616)

llama-index-storage-index-store-elasticsearch [0.1.3]

  • remove invalid chars from default collection name (#12672)

llama-index-vector-stores-milvus [0.1.8]

  • Added support to retrieve metadata fields from milvus (#12626)
  • Bug fix - Similarity metric is always IP for MilvusVectorStore (#12611)

v0.10.27

1 month ago

llama-index-agent-openai [0.2.2]

  • Update imports for message thread typing (#12437)

llama-index-core [0.10.27]

  • Fix for pydantic query engine outputs being blank (#12469)
  • Add span_id attribute to Events (instrumentation) (#12417)
  • Fix RedisDocstore node retrieval from docs property (#12324)
  • Add node-postprocessors to retriever_tool (#12415)
  • FLAREInstructQueryEngine: delegate to the retriever API if the query engine supports it (#12503)
  • Make chat message to dict safer (#12526)
  • fix check in batch eval runner for multi-kwargs (#12563)
  • Fixes agent_react_multimodal_step.py bug with partial args (#12566)

llama-index-embeddings-clip [0.1.5]

  • Added support to load clip model from local file path (#12577)

llama-index-embeddings-cloudflare-workersai [0.1.0]

  • text embedding integration: Cloudflare Workers AI (#12446)

llama-index-embeddings-voyageai [0.1.4]

  • Fix pydantic issue in class definition (#12469)

llama-index-finetuning [0.1.5]

  • Small typo fix in QA generation prompt (#12470)

llama-index-graph-stores-falkordb [0.1.3]

  • Replace redis driver with FalkorDB driver (#12434)

llama-index-llms-anthropic [0.1.8]

  • Add ability to pass custom HTTP headers to Anthropic client (#12558)

llama-index-llms-cohere [0.1.6]

  • Add support for Cohere Command R+ model (#12581)

llama-index-llms-databricks [0.1.0]

  • Integration with the Databricks LLM API (#12432)

llama-index-llms-watsonx [0.1.6]

  • Updated Watsonx foundation models (#12493)
  • Updated base model name in watsonx integration (#12491)

llama-index-postprocessor-rankllm-rerank [0.1.2]

  • Add RankGPT support inside RankLLM (#12475)

llama-index-readers-microsoft-sharepoint [0.1.7]

  • Use recursive strategy by default for SharePoint (#12557)

llama-index-readers-web [0.1.8]

  • Fix playwright async api bug in Readability web page reader (#12520)

llama-index-vector-stores-kdbai [0.1.5]

  • small to_list fix (#12515)

llama-index-vector-stores-neptune [0.1.0]

  • Add support for Neptune Analytics as a Vector Store (#12423)

llama-index-vector-stores-postgres [0.1.5]

  • fix(postgres): numeric metadata filters (#12583)

v0.10.26

1 month ago

v0.10.25

1 month ago

v0.10.24

1 month ago

v0.10.23

1 month ago