LlamaIndex Release Notes

Last updated: Mar 26, 2026

LlamaIndex Products

All LlamaIndex Release Notes (10)

  • Mar 25, 2026
    • Date parsed from source: Mar 25, 2026
    • First seen by Releasebot: Mar 26, 2026

    LlamaIndex

    v0.14.19

    LlamaIndex releases a broad update with core fixes, new and improved LLM integrations, and storage and tooling enhancements. Highlights include Azure OpenAI responses support, Gemini 3 defaults, GPT-5.4 Mini and Nano variants, a new MiniMax provider, Redis node-safety updates, and an AgentCore adapter.

    Release Notes

    [2026-03-25]

    llama-index-agent-agentmesh [0.2.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)

    llama-index-callbacks-argilla [0.5.0]

    • chore(deps): bump the uv group across 3 directories with 1 update (#21069)

    llama-index-core [0.14.19]

    • fix: pass delete_from_docstore parameter in BaseIndex.delete_ref_doc (#20990)
    • fix(core): preserve CTE names during schema prefixing in SQLDatabase.run_sql (#21028)
    • fix(core): align sync retrieval dedup key with async (hash + ref_doc_id) (#21034)
    • fix(core): raise ValueError instead of returning string from structured_predict (#21036)
    • fix(core): remove incorrect per-node delete calls in index helpers (#21050)
    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)
    • enable llama-cloud>1.0 install (#21140)
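
    The dedup alignment in #21034 boils down to keying seen nodes on a (content hash, ref_doc_id) pair rather than the hash alone, so identical text from different source documents survives while true duplicates are dropped. A minimal self-contained sketch of that idea (the function and node shape here are illustrative, not LlamaIndex's actual internals):

    ```python
    from hashlib import sha256

    def dedup_nodes(nodes):
        """Drop nodes whose (text hash, ref_doc_id) pair was already seen."""
        seen = set()
        unique = []
        for node in nodes:
            key = (sha256(node["text"].encode()).hexdigest(), node.get("ref_doc_id"))
            if key not in seen:
                seen.add(key)
                unique.append(node)
        return unique

    nodes = [
        {"text": "alpha", "ref_doc_id": "doc1"},
        {"text": "alpha", "ref_doc_id": "doc1"},  # same text, same doc: dropped
        {"text": "alpha", "ref_doc_id": "doc2"},  # same text, different doc: kept
    ]
    result = dedup_nodes(nodes)  # keeps 2 of the 3 nodes
    ```

    Keying on the pair rather than the hash alone is what keeps the sync and async retrieval paths returning the same node set.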

    llama-index-embeddings-fireworks [0.5.2]

    • test(embeddings-fireworks): add test suite and fix docs (#20977)

    llama-index-embeddings-upstage [0.6.1]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)

    llama-index-indices-managed-llama-cloud [0.11.1]

    • fix: llama-cloud managed index and remove llamaparse reader (#21043)
    • enable llama-cloud>1.0 install (#21140)

    llama-index-llms-azure-openai [0.5.3]

    • azure openai responses support (#21088)
    • fix azure openai responses (#21099)

    llama-index-llms-bedrock-converse [0.14.3]

    • use proper tool choice format in bedrock converse (#21098)

    llama-index-llms-cohere [0.8.0]

    • docs(cohere): update first basic usage example to chat API (#21108)

    llama-index-llms-google-genai [0.9.1]

    • feat: gemini 3 default and temperature (#21060)
    • fix(google-genai): avoid mutating messages list in prepare_chat_params (#21141)

    llama-index-llms-litellm [0.7.1]

    • Add support for custom LLM provider in model kwargs (#21095)

    llama-index-llms-minimax [0.1.0]

    • feat: add MiniMax LLM provider integration with M2.7 default (#20955)

    llama-index-llms-ollama [0.10.1]

    • fix(ollama): pass custom headers to auto-created clients (#21091)

    llama-index-llms-openai [0.7.3]

    • feat(llms/openai): Add support for Mini and Nano variants of GPT 5.4 (#21065)

    llama-index-llms-ovhcloud [0.2.1]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)

    llama-index-packs-agent-search-retriever [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-amazon-product-extraction [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-arize-phoenix-query-engine [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-auto-merging-retriever [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-code-hierarchy [0.7.1]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-cohere-citation-chat [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-deeplake-deepmemory-retriever [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-deeplake-multimodal-retrieval [0.4.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-dense-x-retrieval [0.6.1]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-diff-private-simple-dataset [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-evaluator-benchmarker [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-fusion-retriever [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-fuzzy-citation [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-gmail-openai-agent [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-koda-retriever [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-llama-dataset-metadata [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-llama-guard-moderator [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-llava-completion [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-longrag [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-mixture-of-agents [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-multi-tenancy-rag [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-multidoc-autoretrieval [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-nebulagraph-query-engine [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-neo4j-query-engine [0.5.1]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-node-parser-semantic-chunking [0.5.1]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-ollama-query-engine [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-panel-chatbot [0.5.0]

    • chore(deps): bump the uv group across 3 directories with 1 update (#21069)
    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-raft-dataset [0.5.1]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-ragatouille-retriever [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-raptor [0.4.1]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-recursive-retriever [0.8.1]

    • chore(deps): bump the uv group across 3 directories with 1 update (#21069)
    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-searchain [0.3.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-self-discover [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-self-rag [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-sentence-window-retriever [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-snowflake-query-engine [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-stock-market-data-query-engine [0.6.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-streamlit-chatbot [0.5.2]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-sub-question-weaviate [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-packs-timescale-vector-autoretrieval [0.5.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)
    • chore(deps): bump the uv group across 44 directories with 1 update (#21097)

    llama-index-postprocessor-google-rerank [0.1.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)

    llama-index-readers-llama-parse [0.6.1]

    • enable llama-cloud>1.0 install (#21140)

    llama-index-readers-service-now [0.3.0]

    • chore(deps): bump nltk from 3.9.1 to 3.9.3 in /llama-index-integrations/readers/llama-index-readers-service-now in the uv group across 1 directory (#21080)

    llama-index-storage-chat-store-opensearch [0.2.0]

    • chore(deps): bump the uv group across 49 directories with 1 update (#21083)

    llama-index-tools-aws-bedrock-agentcore [0.3.1]

    • feat(tools/agentcore): add AgentCoreRuntime adapter (#21008)
    • fix bedrock tests (#21129)

    llama-index-tools-exa [0.5.1]

    • update exa tool description and default search type (#21096)

    llama-index-vector-stores-redis [0.8.0]

    • feat(redis): implement safe get_nodes and delete_nodes support (#20972)

    llama-index-voice-agents-gemini-live [0.4.0]

    • feat: latest gemini model default (#21061)
  • Mar 16, 2026
    • Date parsed from source: Mar 16, 2026
    • First seen by Releasebot: Mar 26, 2026

    LlamaIndex

    v0.14.18

    LlamaIndex releases a broad March 2026 update with Python 3.9 deprecation across many packages, core text-match filter alignment, OpenAI GPT-5.4 model support, new Google Discovery Engine rerank and Google auth for Calendar and Gmail, plus multiple bug fixes and backend improvements.

    Release Notes

    [2026-03-16]

    llama-index-agent-agentmesh [0.2.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-agent-azure [0.3.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-agentops [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-argilla [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-arize-phoenix [0.7.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-honeyhive [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)
    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)

    llama-index-callbacks-langfuse [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-literalai [1.4.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-openinference [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-opik [1.3.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-promptlayer [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-uptrain [0.6.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-callbacks-wandb [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-core [0.14.18]

    • feat: align text match filters across core and vector backends (#20883)
    • fix(chat_engine): preserve chat history on incomplete stream consumption (#20897)
    • fix: guard against ZeroDivisionError in LlamaDebugHandler._get_time_stats_from_event_pairs (#20937)
    • fix: add stacklevel=2 to warnings.warn() for accurate caller reporting (#20939)
    • chore: deprecate python 3.9 once and for all (#20956)
    • Release 0.14.17 (#20957)
    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • fix: use apostprocess_nodes() in async retrieval paths (#20974)
    • fix(test): use >= 1 to avoid racy stream_chat memory assertion (#20980)
    • fix(core): preserve response metadata in async _aretrieve_from_object (#20995)
    • fix: preserve non-ASCII schema descriptions in PydanticOutputParser (#21016)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)
    • fix(core): structured_predict() returns default values for single-field models (#21025)
    • fix openai mimetype guess (#21030)
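
    The non-ASCII fix in #21016 reflects a common Python pitfall: json.dumps escapes every non-ASCII character by default, which mangles schema descriptions written in other languages unless ensure_ascii=False is passed. A self-contained illustration (not LlamaIndex's actual parser code):

    ```python
    import json

    # A schema field description containing non-ASCII text (Japanese here).
    schema = {"description": "日付を YYYY-MM-DD 形式で返す"}

    escaped = json.dumps(schema)                        # default: \uXXXX escapes
    preserved = json.dumps(schema, ensure_ascii=False)  # keeps characters readable

    "日付" in escaped    # False: the text became \u65e5\u4ed8...
    "日付" in preserved  # True: the description survives verbatim
    ```

    Passing the preserved form to an LLM keeps field descriptions legible in the prompt instead of handing the model escape sequences.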

    llama-index-embeddings-adapter [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-alephalpha [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-alibabacloud-aisearch [0.4.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-anyscale [0.5.1]

    • chore: deprecate python 3.9 once and for all (#20956)
    • vbump all the things (#20978)

    llama-index-embeddings-autoembeddings [0.3.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-azure-inference [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-azure-openai [0.5.1]

    • chore: deprecate python 3.9 once and for all (#20956)
    • vbump all the things (#20978)

    llama-index-embeddings-baseten [0.2.1]

    • chore: deprecate python 3.9 once and for all (#20956)
    • vbump all the things (#20978)

    llama-index-embeddings-bedrock [0.8.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-clarifai [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-clip [0.6.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-cloudflare-workersai [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-cohere [0.8.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-dashscope [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-databricks [0.5.1]

    • chore: deprecate python 3.9 once and for all (#20956)
    • vbump all the things (#20978)

    llama-index-embeddings-deepinfra [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-elasticsearch [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-fastembed [0.6.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-fireworks [0.5.1]

    • chore: deprecate python 3.9 once and for all (#20956)
    • vbump all the things (#20978)

    llama-index-embeddings-gaudi [0.4.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-gigachat [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-google-genai [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-heroku [0.2.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-huggingface [0.7.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-huggingface-api [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-huggingface-openvino [0.7.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-huggingface-optimum-intel [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-instructor [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-isaacus [0.2.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-jinaai [0.6.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-langchain [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-litellm [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-llamafile [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-llm-rails [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-mistralai [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-embeddings-modelscope [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)

    llama-index-embeddings-nebius [0.5.1]

    • vbump all the things (#20978)

    llama-index-embeddings-opea [0.3.1]

    • vbump all the things (#20978)

    llama-index-embeddings-openai-like [0.3.1]

    • vbump all the things (#20978)

    llama-index-embeddings-upstage [0.6.1]

    • vbump all the things (#20978)

    llama-index-indices-managed-lancedb [0.3.1]

    • drop the mutable default in init (#20998)
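
    The fix in #20998 targets a classic Python pitfall: a mutable default argument is evaluated once at definition time and then shared by every call. A self-contained illustration of the bug and the conventional None-sentinel fix (not the LanceDB integration's actual code):

    ```python
    # Buggy pattern: one list object is created when the function is defined,
    # then reused and mutated across every call.
    def append_bad(item, items=[]):
        items.append(item)
        return items

    # Safe pattern: use None as a sentinel and build a fresh list per call.
    def append_good(item, items=None):
        if items is None:
            items = []
        items.append(item)
        return items

    append_bad("a")
    append_bad("b")   # returns ['a', 'b']: state leaked from the first call
    append_good("b")  # returns ['b']: each call starts clean
    ```

    In an `__init__` signature the same leak means every instance silently shares one list, which is why linters flag mutable defaults by default.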

    llama-index-instrumentation [0.5.0]

    • chore: deprecate python 3.9 once and for all (#20956)

    llama-index-llms-anthropic [0.11.1]

    • Bugfix: Pydantic validation error in AnthropicCompletionResponse (#21027)

    llama-index-llms-anyscale [0.5.1]

    • vbump all the things (#20978)

    llama-index-llms-azure-openai [0.5.1]

    • vbump all the things (#20978)

    llama-index-llms-baseten [0.2.1]

    • vbump all the things (#20978)

    llama-index-llms-bedrock-converse [0.14.2]

    • vbump all the things (#20978)
    • feat(bedrock-converse): Set context window size to 1M for Opus 4.6 & Sonnet 4.6 (#20982)

    llama-index-llms-deepinfra [0.6.1]

    • vbump all the things (#20978)

    llama-index-llms-everlyai [0.5.1]

    • vbump all the things (#20978)

    llama-index-llms-fireworks [0.5.1]

    • vbump all the things (#20978)

    llama-index-llms-keywordsai [1.2.1]

    • vbump all the things (#20978)

    llama-index-llms-monsterapi [0.5.1]

    • vbump all the things (#20978)

    llama-index-llms-openai [0.7.2]

    • fix openai document block format (#20975)
    • feat(openai): add support for GPT-5.4 and GPT-5.4-pro models (#20976)

    llama-index-llms-openai-like [0.7.1]

    • vbump all the things (#20978)

    llama-index-llms-ovhcloud [0.2.1]

    • vbump all the things (#20978)

    llama-index-llms-perplexity [0.5.1]

    • vbump all the things (#20978)

    llama-index-llms-portkey [0.5.1]

    • vbump all the things (#20978)

    llama-index-llms-upstage [0.8.1]

    • vbump all the things (#20978)

    llama-index-llms-yi [0.5.1]

    • vbump all the things (#20978)

    llama-index-packs-agent-search-retriever [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-amazon-product-extraction [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-arize-phoenix-query-engine [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-auto-merging-retriever [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-code-hierarchy [0.7.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • vbump all the things (#20978)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-cohere-citation-chat [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-deeplake-deepmemory-retriever [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-deeplake-multimodal-retrieval [0.4.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)

    llama-index-packs-dense-x-retrieval [0.6.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • vbump all the things (#20978)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-diff-private-simple-dataset [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-evaluator-benchmarker [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-fusion-retriever [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-fuzzy-citation [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-gmail-openai-agent [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-koda-retriever [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-llama-dataset-metadata [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-llama-guard-moderator [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-llava-completion [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-longrag [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-mixture-of-agents [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-multi-tenancy-rag [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-multidoc-autoretrieval [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-nebulagraph-query-engine [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-neo4j-query-engine [0.5.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-node-parser-semantic-chunking [0.5.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • vbump all the things (#20978)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-ollama-query-engine [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-panel-chatbot [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-raft-dataset [0.5.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • vbump all the things (#20978)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-rag-evaluator [0.5.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • vbump all the things (#20978)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-ragatouille-retriever [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-raptor [0.4.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)

    llama-index-packs-recursive-retriever [0.8.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • vbump all the things (#20978)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-resume-screener [0.10.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • vbump all the things (#20978)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-retry-engine-weaviate [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-searchain [0.3.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-self-discover [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-self-rag [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-sentence-window-retriever [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-snowflake-query-engine [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-stock-market-data-query-engine [0.6.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-streamlit-chatbot [0.5.2]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)

    llama-index-packs-sub-question-weaviate [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-timescale-vector-autoretrieval [0.5.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)
    • chore(deps): bump the uv group across 42 directories with 2 updates (#21020)

    llama-index-packs-trulens-eval-packs [0.4.1]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)
    • chore(deps): bump langchain-community from 0.0.38 to 0.3.27 in /llama-index-packs/llama-index-packs-trulens-eval-packs (#20983)
    • chore(deps): bump the uv group across 43 directories with 5 updates (#20988)

    llama-index-postprocessor-google-rerank [0.1.0]

    • feat(postprocessor): add Google Discovery Engine rerank integration (#20893)

    llama-index-readers-gcs [0.6.1]

    • vbump all the things (#20978)

    llama-index-readers-github [0.11.2]

    • fix(github-reader): replace run_until_complete with asyncio_run for async compatibility (#20963)
    • vbump all the things (#20978)

    llama-index-readers-joplin [0.6.1]

    • vbump all the things (#20978)

    llama-index-readers-mbox [0.6.1]

    • vbump all the things (#20978)

    llama-index-readers-microsoft-sharepoint [0.9.1]

    • vbump all the things (#20978)

    llama-index-readers-obsidian [0.7.1]

    • vbump all the things (#20978)

    llama-index-readers-pandas-ai [0.6.1]

    • vbump all the things (#20978)

    llama-index-readers-pebblo [0.6.1]

    • vbump all the things (#20978)

    llama-index-readers-s3 [0.6.1]

    • vbump all the things (#20978)

    llama-index-readers-service-now [0.3.0]

    • chore(deps): bump the uv group across 51 directories with 3 updates (#20962)

    llama-index-retrievers-bm25 [0.7.1]

    • fix: handle empty corpus after metadata filtering in BM25Retriever (#20926)

    llama-index-storage-docstore-postgres [0.5.0]

    • Expose Postgres KVStore engine settings for timeouts (fix #15888) (#20951)

    llama-index-storage-kvstore-postgres [0.5.0]

    • Expose Postgres KVStore engine settings for timeouts (fix #15888) (#20951)

    llama-index-tools-google [0.7.1]

    • feat(google-tools): support service account and cloud auth for Calendar and Gmail (#20879)

    llama-index-vector-stores-couchbase [0.7.1]

    • vbump all the things (#20978)

    llama-index-vector-stores-mongodb [0.10.1]

    • vbump all the things (#20978)

    llama-index-vector-stores-opensearch [1.2.0]

    • feat: align text match filters across core and vector backends (#20883)

    llama-index-vector-stores-postgres [0.8.1]

    • feat(postgres): add MMR (Maximal Marginal Relevance) query support (#20860)
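The MMR entry above refers to Maximal Marginal Relevance, a greedy re-ranking scheme that trades query relevance against redundancy among already-selected results. The actual Postgres integration works inside the database; the sketch below only illustrates the scoring rule itself, with hypothetical names and precomputed similarity matrices.

```python
def mmr_select(query_sims, doc_sims, k, lam=0.5):
    """Greedy MMR: balance relevance to the query against similarity
    to documents already picked.

    query_sims[i]  -- similarity of doc i to the query
    doc_sims[i][j] -- similarity between docs i and j
    lam            -- 1.0 = pure relevance, 0.0 = pure diversity
    """
    selected = []
    candidates = list(range(len(query_sims)))
    while candidates and len(selected) < k:
        def score(i):
            # Penalize candidates that resemble anything already selected.
            redundancy = max((doc_sims[i][j] for j in selected), default=0.0)
            return lam * query_sims[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected


# Docs 0 and 1 are near-duplicates; MMR picks 0, then skips 1 for the
# more diverse doc 2 even though doc 1 is more relevant.
picked = mmr_select(
    query_sims=[0.9, 0.8, 0.3],
    doc_sims=[[1.0, 0.95, 0.1], [0.95, 1.0, 0.1], [0.1, 0.1, 1.0]],
    k=2,
)
```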

    llama-index-vector-stores-qdrant [0.10.0]

    • feat: align text match filters across core and vector backends (#20883)

    llama-index-vector-stores-solr [0.2.0]

    • feat: align text match filters across core and vector backends (#20883)
    • fix: correct typo 'compatability' to 'compatibility' in Solr client (#21029)
  • Mar 10, 2026
    • Date parsed from source:
      Mar 10, 2026
    • First seen by Releasebot:
      Mar 26, 2026

    LlamaIndex

    v0.14.16

    LlamaIndex releases a broad core update with new rate limiting, multimodal reranking, stronger OpenAI and Anthropic support, richer vector store filtering, and multiple stability, security, and observability improvements across integrations.

    Release Notes

    [2026-03-10]

    llama-index-core [0.14.16]

    • Add token-bucket rate limiter for LLM and embedding API calls (#20712)
    • Fix/20706 chonkie init doc (#20713)
    • fix: pass tool_choice through FunctionCallingProgram (#20740)
    • feat: Multimodal LLMReranker (#20743)
    • feat: add optional embed_model to SemanticDoubleMergingSplitterNodeParser (#20748)
    • fix(core): preserve doc_id in legacy_json_to_doc (#20750)
    • fix: async retry backoff to avoid blocking event loop (#20764)
    • Fix additionalProperties in auto-generated KG schema models (#20768)
    • fix: respect db_schema when custom async_engine is provided (#20779)
    • fix(core): replace blocking run_async_tasks with asyncio.gather (#20795)
    • feat(rate_limiter): add SlidingWindowRateLimiter for strict per-minute caps (#20799)
    • fix(core): preserve docstore_strategy across pipeline runs when no vector store is attached (#20824)
    • Fix FunctionTool not respecting pydantic Field defaults (#20839)
    • Fix MarkdownElementNodeParser to extract code blocks (#20840)
    • security: add RestrictedUnpickler to SimpleObjectNodeMapping (CWE-502) (#20857)
    • feat: extend vector store metadata filters (#20861)
    • fix(react): pass system_prompt to ReActChatFormatter template (#20873)
    • refactor: deprecate asyncio_module in favour of get_asyncio_module (#20902)
    • fix(core): partial-failure handling in SubQuestionQueryEngine (#20905)
    • fix: add bounds check to prevent infinite loop in ChatMemoryBuffer.get() (#20914)
    • fix: ensure streaming flag reset on exception in CondenseQuestionChatEngine (#20915)
    • fix: pass through run id correctly (#20928)
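The token-bucket rate limiter added in #20712 caps LLM and embedding call rates. The sketch below shows the general token-bucket technique only; the class and method names are hypothetical and may not match the llama-index-core API.

```python
import time


class TokenBucket:
    """Minimal token bucket: refills at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self, tokens: float = 1.0) -> None:
        # Refill based on elapsed time, then block until enough tokens exist.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= tokens:
                self.tokens -= tokens
                return
            time.sleep((tokens - self.tokens) / self.rate)


bucket = TokenBucket(rate=100.0, capacity=2.0)
bucket.acquire()  # would sleep here if the bucket were empty
bucket.acquire()
```

A sliding-window limiter (#20799) differs in that it counts calls within a fixed trailing window rather than draining a refillable budget, which gives strict per-minute caps.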

    llama-index-embeddings-bedrock [0.7.4]

    • fix: raise ValueError when 'model' is passed instead of 'model_name' in BedrockEmbedding (#20836)

    llama-index-embeddings-openai [0.5.2]

    • Respect Retry-After header in OpenAI retry decorator (#20813)
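Respecting `Retry-After` (#20813) means preferring the server's own backoff hint over a fixed retry delay. A minimal sketch of the idea, assuming the header carries a delay in seconds (HTTP also permits an HTTP-date form, which this toy helper ignores):

```python
def retry_delay(headers: dict, fallback: float) -> float:
    # Prefer the server's Retry-After hint (in seconds) over a fixed backoff.
    value = headers.get("Retry-After")
    try:
        return float(value)
    except (TypeError, ValueError):
        return fallback
```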

    llama-index-embeddings-upstage [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-graph-stores-neo4j [0.6.0]

    • Add Neo4j user agent (#20827)
    • feat(neo4j): add apoc_sample parameter for large database schema introspection (#20859)

    llama-index-instrumentation [0.4.3]

    • otel instrumentation enhancements (#20816)

    llama-index-llms-anthropic [0.10.11]

    • Add User-Agent header for Anthropic API calls (#20771)
    • fix: apply cache_control only to last block to respect Anthropic's 4-block limit (#20875)

    llama-index-llms-azure-inference [0.6.0]

    • fix(azure-inference): properly manage async client lifecycle to prevent unclosed sessions (#20885)

    llama-index-llms-bedrock-converse [0.13.0]

    • fix(bedrock-converse): Improve handling of reasoningContent in responses from Converse & ConverseStream requests (#20853)

    llama-index-llms-langchain [0.7.2]

    • fix: bump ver to trigger llama-index-llms-langchain integration release (#20751)

    llama-index-llms-mistralai [0.10.0.post2]

    • Fix mistralai pkg version bump (#20776)
    • fix: update Mistral package Python requirement (#20777)

    llama-index-llms-modelslab [0.1.0]

    • feat: Add ModelsLab LLM integration (llama-index-llms-modelslab) (#20731)

    llama-index-llms-openai [0.6.26]

    • fix-openai-toolcall-after-thinking #20333 (#20725)
    • fix: forward allow_parallel_tool_calls for OpenAI chat completions (#20744)
    • feat: gpt-5-chat support (#20774)
    • feat: support reasoning_content in OpenAI Chat Completions (#20786)
    • nit: add openai model name (#20800)
    • fix: Use constrained decoding for OpenAIResponses structured_predict (#20808)
    • Respect Retry-After header in OpenAI retry decorator (#20813)
    • fix openai tool calls (#20831)
    • fix: strip parallel_tool_calls for reasoning models (#20866)

    llama-index-node-parser-chonkie [0.1.2]

    • Fix/20706 chonkie init doc (#20713)

    llama-index-observability-otel [0.5.1]

    • feat: add extra span processors to register within the otel tracer (#20747)
    • feat: pass a custom tracer provider (#20765)
    • feat: add inheritance for external context (#20788)
    • otel instrumentation enhancements (#20816)

    llama-index-packs-agent-search-retriever [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-amazon-product-extraction [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-arize-phoenix-query-engine [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)
    • chore(deps): bump the uv group across 6 directories with 2 updates (#20856)

    llama-index-packs-auto-merging-retriever [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-code-hierarchy [0.6.1]

    • chore(deps): bump the uv group across 8 directories with 2 updates (#20758)
    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)
    • bump the uv group across 9 directories with 2 updates (#20798)
    • chore(deps): bump the uv group across 6 directories with 2 updates (#20856)

    llama-index-packs-cohere-citation-chat [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-deeplake-deepmemory-retriever [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-deeplake-multimodal-retrieval [0.3.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-dense-x-retrieval [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-diff-private-simple-dataset [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-evaluator-benchmarker [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-fusion-retriever [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-fuzzy-citation [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-gmail-openai-agent [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-koda-retriever [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-llama-dataset-metadata [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-llama-guard-moderator [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-llava-completion [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-longrag [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-mixture-of-agents [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-multi-tenancy-rag [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-multidoc-autoretrieval [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-nebulagraph-query-engine [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-neo4j-query-engine [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)
    • feat(neo4j): add apoc_sample parameter for large database schema introspection (#20859)

    llama-index-packs-node-parser-semantic-chunking [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-ollama-query-engine [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-panel-chatbot [0.4.1]

    • chore(deps): bump the uv group across 8 directories with 2 updates (#20758)
    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)
    • bump the uv group across 9 directories with 2 updates (#20798)
    • chore(deps): bump the uv group across 6 directories with 2 updates (#20856)

    llama-index-packs-raft-dataset [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-rag-evaluator [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-ragatouille-retriever [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-raptor [0.4.1]

    • chore(deps): bump the uv group across 8 directories with 2 updates (#20758)
    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)
    • bump the uv group across 9 directories with 2 updates (#20798)

    llama-index-packs-recursive-retriever [0.7.1]

    • chore(deps): bump the uv group across 8 directories with 2 updates (#20758)
    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)
    • bump the uv group across 9 directories with 2 updates (#20798)
    • chore(deps): bump the uv group across 6 directories with 2 updates (#20856)

    llama-index-packs-resume-screener [0.9.3]

    • chore(deps): bump the uv group across 8 directories with 2 updates (#20758)
    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)
    • bump the uv group across 9 directories with 2 updates (#20798)
    • chore(deps): bump the uv group across 6 directories with 2 updates (#20856)

    llama-index-packs-retry-engine-weaviate [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-searchain [0.2.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-self-discover [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-self-rag [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-sentence-window-retriever [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-snowflake-query-engine [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-stock-market-data-query-engine [0.5.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-streamlit-chatbot [0.5.2]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-sub-question-weaviate [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-timescale-vector-autoretrieval [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-packs-trulens-eval-packs [0.4.1]

    • chore(deps): bump the uv group across 47 directories with 3 updates (#20793)

    llama-index-postprocessor-cohere-rerank [0.7.0]

    • Update CohereRerank to ClientV2 to enable V4 rerankers (#20778)

    llama-index-readers-github [0.10.0]

    • bump the uv group across 9 directories with 2 updates (#20798)

    llama-index-readers-igpt-email [0.1.0]

    • feat: Add iGPT Email Intelligence tool and reader integrations (#20727)

    llama-index-readers-microsoft-sharepoint [0.8.1]

    • fix: set _drive_id_endpoint before early return in SharePointReader._get_drive_id (#20837)

    llama-index-readers-preprocess [0.5.0]

    • Deprecate Preprocess reader: service discontinued (#20759)

    llama-index-readers-screenpipe [0.1.0]

    • feat: add Screenpipe reader integration for screen OCR and audio tran… (#20789)

    llama-index-storage-chat-store-opensearch [0.1.0]

    • feat: add OpenSearch chat store integration (#20796)

    llama-index-storage-chat-store-redis [0.6.0]

    • perf(redis-chat-store): Use Pydantic directly for ChatMessage serialization & deserialization (#20931)

    llama-index-tools-aws-bedrock-agentcore [0.2.0]

    • feat(tools): add browser management and code interpreter lifecycle to AWS Bedrock AgentCore (#20811)

    llama-index-tools-igpt-email [0.1.0]

    • feat: Add iGPT Email Intelligence tool and reader integrations (#20727)

    llama-index-tools-mcp [0.4.8]

    • fix: handle enum types in _resolve_union_option for Literal unions (#20780)

    llama-index-tools-moss [0.2.0]

    • fix: Moss integration bug with QueryOptions (#20815)

    llama-index-tools-seltz [0.2.0]

    • feat(seltz): update Seltz integration to SDK 0.2.0 (#20906)

    llama-index-vector-stores-azureaisearch [0.4.5]

    • fix(azureaisearch): raise on unsupported query modes (#20846)

    llama-index-vector-stores-lancedb [0.4.5]

    • fix(lancedb): paginate table existence checks (#20841)

    llama-index-vector-stores-lantern [0.4.2]

    • fix(lantern,yugabytedb): remove deprecated sessionmaker.close_all() from close() (#20884)

    llama-index-vector-stores-neo4jvector [0.5.3]

    • Add Neo4j user agent (#20827)

    llama-index-vector-stores-opensearch [1.1.1]

    • fix(opensearch): defer OpensearchVectorClient index creation to first use (#20849)
    • fix(opensearch): track client ownership and clean up unclosed sessions (#20903)

    llama-index-vector-stores-qdrant [0.9.2]

    • fix(qdrant): prevent alpha=0.0 from incorrectly falling back to 0.5 (#20880)
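The qdrant fix above is an instance of a classic Python pitfall: using `or` to supply a default treats a legitimate `0.0` as "missing". A minimal illustration (function names are hypothetical, not the Qdrant integration's actual code):

```python
def pick_alpha_buggy(alpha=None):
    # Buggy: `or` treats 0.0 as falsy, so an explicit alpha=0.0 is discarded.
    return alpha or 0.5


def pick_alpha_fixed(alpha=None):
    # Fixed: substitute the default only when alpha was genuinely omitted.
    return 0.5 if alpha is None else alpha
```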

    llama-index-vector-stores-weaviate [1.5.0]

    • fix: coerce Weaviate MetadataFilter values to match collection schema types (#20730)

    llama-index-vector-stores-yugabytedb [0.5.5]

    • fix(lantern,yugabytedb): remove deprecated sessionmaker.close_all() from close() (#20884)
  • Feb 18, 2026
    • Date parsed from source:
      Feb 18, 2026
    • First seen by Releasebot:
      Mar 26, 2026

    LlamaIndex

    v0.14.15

    LlamaIndex ships broader multimodal and agent updates, adds new model support across Anthropic, Bedrock, IBM, Mistral, and OCI, and expands readers, observability, vector stores, and MCP tooling with new integrations, fixes, and reliability improvements.

    Release Notes

    [2026-02-18]

    llama-index-agent-agentmesh [0.1.0]

    • [Integration] AgentMesh: Trust Layer for LlamaIndex Agents (#20644)

    llama-index-core [0.14.15]

    • Support basic operations for multimodal types (#20640)
    • Feat recursive llm type support (#20642)
    • fix: remove redundant metadata_seperator field from TextNode (#20649)
    • fix(tests): update mock prompt type in mock_prompts.py (#20661)
    • Feat multimodal template var formatting (#20682)
    • Feat multimodal prompt templates (#20683)
    • Feat multimodal chat prompt helper (#20684)
    • Add retry and error handling to BaseExtractor (#20693)
    • ensure at least one message/content block is returned by the old memory (#20729)

    llama-index-embeddings-ibm [0.6.0.post1]

    • chore: Remove persistent_connection parameter support, update (#20714)
    • docs: Update IBM docs (#20718)

    llama-index-llms-anthropic [0.10.9]

    • Sonnet 4-6 addition (#20723)

    llama-index-llms-bedrock-converse [0.12.10]

    • fix(bedrock-converse): ensure thinking_delta is populated in all chat modes (#20664)
    • feat(bedrock-converse): Add support for Claude Sonnet 4.6 (#20726)

    llama-index-llms-ibm [0.7.0.post1]

    • chore: Remove persistent_connection parameter support, update (#20714)
    • docs: Update IBM docs (#20718)

    llama-index-llms-mistralai [0.10.0]

    • Rrubini/mistral azure sdk (#20668)

    llama-index-llms-oci-data-science [1.0.0]

    • Add support for new OCI DataScience endpoint /predictWithStream for streaming use case (#20545)

    llama-index-observability-otel [0.3.0]

    • improve otel data serialization by flattening dicts (#20719)
    • feat: support custom span processor; refactor: use llama-index-instrumentation instead of llama-index-core (#20732)

    llama-index-program-evaporate [0.5.2]

    • Sandbox LLM-generated code execution in EvaporateExtractor (#20676)

    llama-index-readers-bitbucket [0.4.2]

    • fix: replace mutable default argument in load_all_file_paths (#20698)
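The bitbucket-reader fix addresses Python's mutable-default-argument trap: a default list is created once at function definition and shared across every call. A generic illustration of the bug and the standard `None`-sentinel fix (names are hypothetical):

```python
def load_paths_buggy(paths=[]):
    # Buggy: this single list is shared by every call that omits `paths`,
    # so results accumulate across calls.
    paths.append("file.txt")
    return paths


def load_paths_fixed(paths=None):
    # Fixed: use None as the sentinel and build a fresh list per call.
    if paths is None:
        paths = []
    paths.append("file.txt")
    return paths
```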

    llama-index-readers-github [0.10.0]

    • feat: Enhance GitHubRepoReader with selective file fetching and deduplication (Issue #20471) (#20550)

    llama-index-readers-layoutir [0.1.1]

    • feat: Add LayoutIR reader integration (#20708)
    • fix(layoutir): hotfix for output_dir crash and Block extraction (#20708 follow-up) (#20715)
    • fix(layoutir): restrict requires-python to >=3.12 to match layoutir dependency (#20733)

    llama-index-readers-microsoft-sharepoint [0.8.0]

    • Add pagination support for Microsoft Graph API calls in SharePoint reader (#20704)

    llama-index-readers-whatsapp [0.4.2]

    • fix: Update WhatsAppChatLoader to retrieve DataFrame in pandas format (#20722)

    llama-index-tools-mcp [0.4.7]

    • feat: propagate partial_params to get_tools_from_mcp utils (#20669)

    llama-index-vector-stores-faiss [0.5.3]

    • Replace eval() with json.loads in FaissMapVectorStore persistence (#20675)
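Replacing `eval()` with `json.loads` closes a code-execution hole: `eval` runs arbitrary Python, so persisted data that an attacker can influence becomes an attack vector, while `json.loads` only parses data and rejects anything that is not valid JSON. A generic before/after sketch (the payload shown is illustrative, not the store's actual persistence format):

```python
import json

payload = '{"dim": 1536, "metric": "cosine"}'

# Unsafe: eval(payload) would execute whatever Python the string contains.
# Safe: json.loads parses the data and raises on non-JSON input.
config = json.loads(payload)
```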

    llama-index-vector-stores-milvus [1.0.0]

    • Fix: remove ORM Collection mix-usage with MilvusClient in Milvus vector store (#20687)
  • Feb 10, 2026
    • Date parsed from source:
      Feb 10, 2026
    • First seen by Releasebot:
      Mar 26, 2026

    LlamaIndex

    v0.14.14

    LlamaIndex ships a broad release with stronger core stability, security defaults, and LangChain 1.x support, plus new governance and retry features, expanded model and tool integrations, and fresh vector store and reader capabilities across the ecosystem.

    Release Notes

    [2026-02-10]

    llama-index-callbacks-wandb [0.4.2]

    • Fix potential crashes and improve security defaults in core components (#20610)

    llama-index-core [0.14.14]

    • fix: catch pydantic ValidationError in VectorStoreQueryOutputParser (#20450)
    • fix: distinguish empty string from None in MediaResource.hash (#20451)
    • Langchain1.x support (#20472)
    • Fix DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated (#20517)
    • fix(core): fallback to bundled nltk cache if env var missing (#20528)
    • feat(callbacks): add TokenBudgetHandler for cost governance (#20546)
    • fix(core): handle an edge case in the truncate_text function (#20551)
    • fix(core): in types, have Thread pass None when target is None instead of copy_context().run (#20553)
    • chore: bump llama-index lockfile, and minor test tweaks (#20556)
    • Compatibility for workflows context changes (#20557)
    • test(core): fix cache dir path test for Windows compatibility (#20566)
    • fix(tests): enforce utf-8 encoding in json reader tests for windows compatibility (#20576)
    • Fix BM25Retriever mapping in upgrade tool (#20582)
    • fix(agent): handle empty LLM responses with retry logic and add test cases (#20596)
    • fix: add show_progress parameter to run_transformations to prevent unexpected keyword argument error (#20608)
    • Fix potential crashes and improve security defaults in core components (#20610)
    • Add core 3.14 tests (#20619)
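The `asyncio.iscoroutinefunction` deprecation fixed above (and in the siliconflow integrations below) has a direct stdlib replacement: `inspect.iscoroutinefunction` behaves the same way without the warning on newer Python releases. A quick sketch:

```python
import inspect


async def fetch():
    return 42


def run():
    pass


# asyncio.iscoroutinefunction emits a DeprecationWarning on newer Pythons;
# inspect.iscoroutinefunction is the supported equivalent.
is_coro = inspect.iscoroutinefunction(fetch)
is_sync = inspect.iscoroutinefunction(run)
```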

    llama-index-embeddings-cohere [0.7.0]

    • fix(embeddings-cohere): add retry logic with tenacity (#20592)

    llama-index-embeddings-google-genai [0.3.2]

    • Add client headers to Gemini API requests (#20519)

    llama-index-embeddings-siliconflow [0.3.2]

    • Fix DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated (#20517)

    llama-index-embeddings-upstage [0.5.1]

    • chore(deps): bump the uv group across 4 directories with 4 updates (#20531)

    llama-index-graph-stores-falkordb [0.4.2]

    • fix(falkordb): Fix MENTIONS relationship creation with triplet_source_id (#20650)

    llama-index-llms-anthropic [0.10.8]

    • chore: Update cacheable Anthropic models (#20581)
    • chore: add support for opus 4.6 (#20635)

    llama-index-llms-bedrock-converse [0.12.8]

    • fix bedrock converse empty tool config issue (#20571)
    • fix(llms-bedrock-converse): improve bedrock converse retry handling (#20590)
    • feat(bedrock-converse): Add support for Claude Opus 4.6 (#20637)
    • Add support for adaptive thinking in Bedrock (#20659)
    • chore(deps): bump the pip group across 2 directories with 7 updates (#20662)

    llama-index-llms-cohere [0.7.1]

    • Feat: add custom base_url support to Cohere LLM (#20534)
    • fix(llms-cohere): handle additional error types in retry logic (#20591)

    llama-index-llms-dashscope [0.5.2]

    • fix(dashscope): remove empty tool_calls from assistant messages (#20535)

    llama-index-llms-google-genai [0.8.7]

    • Add client headers to Gemini API requests (#20519)
    • fix(decorator):adds logic to llm_retry_decorator for async methods. (#20588)
    • Fix/google genai cleanup (#20607)
    • fix(google-genai): skip model meta fetch when not needed (#20639)

    llama-index-llms-huggingface-api [0.6.2]

    • Update sensible default provider for huggingface inference api (#20589)

    llama-index-llms-langchain [0.7.1]

    • Langchain1.x support (#20472)

    llama-index-llms-openai [0.6.18]

    • OpenAI response fix (#20538)
    • feat: Add support for gpt-5.2-chat model (#20549)
    • fix(openai): make image_url detail optional in message dict (#20609)
    • Add new reasoning types (#20612)
    • fix(openai): exclude unsupported params for all reasoning models (#20627)

    llama-index-llms-openai-like [0.6.0]

    • make transformers an optional dependency for openai-like (#20580)

    llama-index-llms-openrouter [0.4.4]

    • make transformers an optional dependency for openai-like (#20580)

    llama-index-llms-siliconflow [0.4.3]

    • Fix DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated (#20517)

    llama-index-llms-upstage [0.7.0]

    • add new upstage model(solar-pro3) (#20544)

    llama-index-llms-vllm [0.6.2]

    • feat: add openai-like server mode for VllmServer (#20537)

    llama-index-memory-bedrock-agentcore [0.1.2]

    • Add event and memory record deletion methods in bedrock-agentcorememory (#20428)
    • chore(deps): update llama-index-core dependency lock to include 0.14.x (#20483)

    llama-index-memory-mem0 [1.0.0]

    • fix: mem0 integration cleanup + refactor (#20532)

    llama-index-node-parser-chonkie [0.1.1]

    • feat: add chonkie integration (#20622)
    • update readme (#20656)

    llama-index-node-parser-docling [0.4.2]

    • fix: catch pydantic ValidationError in VectorStoreQueryOutputParser (#20450)

    llama-index-packs-code-hierarchy [0.6.1]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)

    llama-index-packs-gmail-openai-agent [0.4.1]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)

    llama-index-packs-multidoc-autoretrieval [0.4.1]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)

    llama-index-packs-panel-chatbot [0.4.1]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)

    llama-index-packs-recursive-retriever [0.7.1]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)
    • chore(deps): bump the pip group across 2 directories with 7 updates (#20662)

    llama-index-packs-resume-screener [0.9.3]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)

    llama-index-packs-retry-engine-weaviate [0.5.1]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)

    llama-index-packs-streamlit-chatbot [0.5.2]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)

    llama-index-packs-sub-question-weaviate [0.4.1]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)

    llama-index-packs-timescale-vector-autoretrieval [0.4.1]

    • chore(deps): bump the uv group across 12 directories with 14 updates (#20578)

    llama-index-postprocessor-cohere-rerank [0.6.0]

    • fix(cohere-rerank): add retry logic and tenacity dependency to cohere rerank (#20593)

    llama-index-postprocessor-nvidia-rerank [0.5.4]

    • fix(nvidia-rerank): fix initialization logic for on-prem auth (#20560)
    • fix(nvidia-rerank): correct private attribute reference (#20570)
    • fix(nvidia-rerank): Fix POST request url for locally hosted NIM rerankers (#20579)

    llama-index-postprocessor-tei-rerank [0.4.2]

    • fix(tei-rerank): use index field from API response for correct score … (#20599)
    • test(tei-rerank): add test coverage for rerank retry coverage (#20600)

    llama-index-protocols-ag-ui [0.2.4]

    • fix: avoid ValueError in ag-ui message conversion for multi-block ChatMessages (#20648)

    llama-index-readers-datasets [0.1.0]

    • chore(deps): bump the uv group across 4 directories with 4 updates (#20531)

    llama-index-readers-microsoft-sharepoint [0.7.0]

    • Sharepoint page support events (#20572)

    llama-index-readers-obsidian [0.6.1]

    • Langchain1.x support (#20472)

    llama-index-readers-service-now [0.2.2]

    • chore(deps): bump the pip group across 2 directories with 7 updates (#20662)

    llama-index-tools-mcp [0.4.6]

    • feat: implement partial_params support to McpToolSpec (#20554)
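
The `partial_params` feature lets callers pre-fill some tool arguments before the tool is exposed to an agent, so the LLM never controls them. Conceptually this is `functools.partial`; the `search` tool below is a hypothetical illustration, not the actual `McpToolSpec` API:

```python
from functools import partial

def search(query: str, workspace: str, limit: int = 10) -> str:
    """A toy tool with one caller-controlled and two pre-bound parameters."""
    return f"searching {workspace!r} for {query!r} (limit={limit})"

# Pre-bind parameters the agent should never set, e.g. the workspace.
scoped_search = partial(search, workspace="docs", limit=5)

print(scoped_search("vector stores"))
# searching 'docs' for 'vector stores' (limit=5)
```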

    llama-index-tools-mcp-discovery [0.1.0]

    • Add llama-index-tools-mcp-discovery integration (#20502)

    llama-index-tools-moss [0.1.0]

    • feat(tools): add Moss search engine integration (#20615)

    llama-index-tools-seltz [0.1.0]

    • feat(tools): add Seltz web knowledge tool integration (#20626)

    llama-index-tools-typecast [0.1.0]

    • Migrate Typecast tool to V2 API for voices endpoints (#20548)

    llama-index-tools-wolfram-alpha [0.5.0]

    • feat(wolfram-alpha): switch to LLM API with bearer auth (#20586)

    llama-index-vector-stores-clickhouse [0.6.2]

    • fix(clickhouse): Add drop_existing_table parameter to prevent data loss (#20651)

    llama-index-vector-stores-milvus [0.9.6]

    • chore(deps): bump the uv group across 4 directories with 4 updates (#20531)

    llama-index-vector-stores-mongodb [0.9.1]

    • Update MongoDB vector store tests to use newer model (#20515)

    llama-index-vector-stores-oceanbase [0.4.0]

    • feat(oceanbase): add sparse/fulltext/hybrid search (#20524)

    llama-index-vector-stores-opensearch [1.0.0]

    • Changed OpenSearch engine default from deprecated nmslib to faiss (#20507)
    • chore(deps): bump the uv group across 4 directories with 4 updates (#20531)

    llama-index-vector-stores-postgres [0.7.3]

    • fix(postgres): disable bitmap scan for vector queries (#20514)

    llama-index-vector-stores-yugabytedb [0.5.4]

    • Add YugabyteDB as a Vector Store (#20559)
    • chore(deps): bump the pip group across 2 directories with 7 updates (#20662)

    llama-index-voice-agents-gemini-live [0.2.2]

    • Add client headers to Gemini API requests (#20519)
  • Jan 21, 2026
    • Date parsed from source:
      Jan 21, 2026
    • First seen by Releasebot:
      Mar 26, 2026

    LlamaIndex

    v0.14.13

    LlamaIndex ships a broad release with new agent workflow and code splitting features, Ray distributed ingestion, multi-modal conversation support, refreshed memory handling, and improved integrations across LLMs, embeddings, vector stores, readers, and tools, plus several security and bug fixes.

    Release Notes

    [2026-01-21]

    llama-index-core [0.14.13]

    • feat: add early_stopping_method parameter to agent workflows (#20389)
    • feat: Add token-based code splitting support to CodeSplitter (#20438)
    • Add RayIngestionPipeline integration for distributed data ingestion (#20443)
    • Added the multi-modal version of the Condensed Conversation & Context… (#20446)
    • Replace ChatMemoryBuffer with Memory (#20458)
    • fix(bug): raise ValueError when input is an empty list in mean_agg instead of returning a float (#20466)
    • fix: The classmethod of ReActChatFormatter should use cls instead of the class name (#20475)
    • feat: add configurable empty response message to synthesizers (#20503)
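
The mean_agg fix in the list above changes the empty-input behavior from silently returning a value to raising. A rough sketch of the guarded aggregation (not the actual core implementation):

```python
def mean_agg(embeddings: list) -> list:
    """Element-wise mean of a list of embedding vectors."""
    if not embeddings:
        raise ValueError("mean_agg requires at least one embedding")
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]

print(mean_agg([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```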

    llama-index-embeddings-bedrock [0.7.3]

    • Enable use of ARNs for Bedrock Embedding Models (#20435)

    llama-index-embeddings-ollama [0.8.6]

    • Improved Ollama batch embedding (#20447)

    llama-index-embeddings-voyageai [0.5.3]

    • Adding voyage-4 models (#20497)

    llama-index-ingestion-ray [0.1.0]

    • Add RayIngestionPipeline integration for distributed data ingestion (#20443)

    llama-index-llms-anthropic [0.10.6]

    • feat: enhance structured predict methods for anthropic (#20440)
    • fix: preserve input_tokens in Anthropic stream_chat responses (#20512)

    llama-index-llms-apertis [0.1.0]

    • Add Apertis LLM integration with example notebook (#20436)

    llama-index-llms-bedrock-converse [0.12.4]

    • chore(bedrock-converse): Remove extraneous thinking_delta kwarg from ChatMessage (#20455)

    llama-index-llms-gemini [0.6.2]

    • chore: deprecate llama-index-llms-gemini (#20511)

    llama-index-llms-openai [0.6.13]

    • Sanitize OpenAI structured output JSON schema name for generic Pydantic models (#20452)
    • chore: vbump openai (#20482)
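
OpenAI's structured-output endpoint only accepts schema names matching `^[a-zA-Z0-9_-]+$`, so names generated from generic Pydantic models (e.g. `Response[Item]`) must be sanitized. A rough regex sketch of the idea (not the exact library code):

```python
import re

def sanitize_schema_name(name: str) -> str:
    """Replace characters OpenAI rejects in JSON schema names with underscores."""
    return re.sub(r"[^a-zA-Z0-9_-]", "_", name)

print(sanitize_schema_name("Response[Item]"))  # Response_Item_
```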

    llama-index-llms-openrouter [0.4.3]

    • Feature/openrouter provider routing support (#20431)

    llama-index-packs-recursive-retriever [0.7.1]

    • security: remove exposed OpenAI API keys from notebook outputs (#20474)

    llama-index-packs-sentence-window-retriever [0.5.1]

    • security: remove exposed OpenAI API keys from notebook outputs (#20474)

    llama-index-readers-datasets [0.1.0]

    • Add HuggingFace datasets reader integration (#20468)

    llama-index-readers-patentsview [1.0.0]

    • Patentsview reader api changes (#20481)

    llama-index-retrievers-you [1.0.0]

    • Revamp YouRetriever integration (#20493)

    llama-index-tools-parallel-web-systems [0.1.0]

    • feat: added Parallel Web System tools (#20442)

    llama-index-vector-stores-alibabacloud-mysql [0.1.0]

    • Feature/alibaba mysql vector integration (#20396)

    llama-index-vector-stores-milvus [0.9.6]

    • Feat milvus partition names (#20445)
    • improve(llama-index-vector-stores-milvus): Changed the partition parameter to milvus_partition_name in add/delete. (#20460)

    llama-index-vector-stores-mongodb [0.9.1]

    • INTPYTHON-863 Fix mongodb async integration (#20444)

    llama-index-vector-stores-neo4jvector [0.5.2]

    • Handle missing metadata for neo4j vector store (#20491)

    llama-index-vector-stores-opensearch [0.6.3]

    • fix (opensearch): add close and aclose methods to vector client (#20463)

    llama-index-vector-stores-qdrant [0.9.1]

    • Qdrant search params (#20476)

    llama-index-vector-stores-vertexaivectorsearch [0.3.4]

    • feat(vertexaivectorsearch): add hybrid search support (#20487)

    llama-index-vector-stores-volcenginemysql [0.2.0]

    • feat: Volcengine MySQL vector store integration (#20404)
  • Dec 30, 2025
    • Date parsed from source:
      Dec 30, 2025
    • First seen by Releasebot:
      Mar 26, 2026

    LlamaIndex

    v0.14.12

    LlamaIndex releases broad platform updates with async tool spec support, new and improved LLM and vector store integrations, better file and metadata handling, and fixes across core, readers, and tools for a smoother developer experience.

    Release Notes

    [2025-12-30]

    llama-index-callbacks-agentops [0.4.1]

    • Feat/async tool spec support (#20338)

    llama-index-core [0.14.12]

    • Feat/async tool spec support (#20338)
    • Improve MockFunctionCallingLLM (#20356)
    • fix(openai): sanitize generic Pydantic model schema names (#20371)
    • Element node parser (#20399)
    • improve llama dev logging (#20411)
    • test(node_parser): add unit tests for Java CodeSplitter (#20423)
    • fix: crash in log_vector_store_query_result when result.ids is None (#20427)

    llama-index-embeddings-litellm [0.4.1]

    • Add docstring to LiteLLM embedding class (#20336)

    llama-index-embeddings-ollama [0.8.5]

    • feat(llama-index-embeddings-ollama): Add keep_alive parameter (#20395)
    • docs: improve Ollama embeddings README with comprehensive documentation (#20414)

    llama-index-embeddings-voyageai [0.5.2]

    • Voyage multimodal 35 (#20398)

    llama-index-graph-stores-nebula [0.5.1]

    • feat(nebula): add MENTIONS edge to property graph store (#20401)

    llama-index-llms-aibadgr [0.1.0]

    • feat(llama-index-llms-aibadgr): Add AI Badgr OpenAI-compatible LLM integration (#20365)

    llama-index-llms-anthropic [0.10.4]

    • add back haiku-3 support (#20408)

    llama-index-llms-bedrock-converse [0.12.3]

    • fix: bedrock converse thinking block issue (#20355)

    llama-index-llms-google-genai [0.8.3]

    • Switch use_file_api to Flexible file_mode; Improve File Upload Handling & Bump google-genai to v1.52.0 (#20347)
    • Fix missing role from Google-GenAI (#20357)
    • Add signature index fix (#20362)
    • Add positional thought signature for thoughts (#20418)

    llama-index-llms-ollama [0.9.1]

    • feature: pydantic no longer complains if you pass 'low', 'medium', 'h… (#20394)

    llama-index-llms-openai [0.6.12]

    • fix: Handle tools=None in OpenAIResponses._get_model_kwargs (#20358)
    • feat: add support for gpt-5.2 and 5.2 pro (#20361)

    llama-index-readers-confluence [0.6.1]

    • fix(confluence): support Python 3.14 (#20370)

    llama-index-readers-file [0.5.6]

    • Loosen constraint on pandas version (#20387)

    llama-index-readers-service-now [0.2.2]

    • chore(deps): bump urllib3 from 2.5.0 to 2.6.0 in /llama-index-integrations/readers/llama-index-readers-service-now in the pip group across 1 directory (#20341)

    llama-index-tools-mcp [0.4.5]

    • fix: pass timeout parameters to transport clients in BasicMCPClient (#20340)
    • feature: Permit to pass a custom httpx.AsyncClient when creating a BasicMcpClient (#20368)

    llama-index-tools-typecast [0.1.0]

    • feat: add Typecast tool integration with text to speech features (#20343)

    llama-index-vector-stores-azurepostgresql [0.2.0]

    • Feat/async tool spec support (#20338)

    llama-index-vector-stores-chroma [0.5.5]

    • Fix chroma nested metadata filters (#20424)
    • fix(chroma): support multimodal results (#20426)

    llama-index-vector-stores-couchbase [0.6.0]

    • Update FTS & GSI reference docs for Couchbase vector-store (#20346)

    llama-index-vector-stores-faiss [0.5.2]

    • fix(faiss): pass numpy array instead of int to add_with_ids (#20384)

    llama-index-vector-stores-lancedb [0.4.4]

    • Feat/async tool spec support (#20338)
    • fix(vector_stores/lancedb): add missing '<' filter operator (#20364)
    • fix(lancedb): fix metadata filtering logic and list value SQL generation (#20374)

    llama-index-vector-stores-mongodb [0.9.0]

    • Update mongo vector store to initialize without list permissions (#20354)
    • add mongodb delete index (#20429)
    • async mongodb atlas support (#20430)

    llama-index-vector-stores-redis [0.6.2]

    • Redis metadata filter fix (#20359)

    llama-index-vector-stores-vertexaivectorsearch [0.3.3]

    • feat(vertex-vector-search): Add Google Vertex AI Vector Search v2.0 support (#20351)
  • Dec 4, 2025
    • Date parsed from source:
      Dec 4, 2025
    • First seen by Releasebot:
      Mar 26, 2026

    LlamaIndex

    v0.14.10

    LlamaIndex adds mock function calling LLM support and an Airweave tool integration with advanced search.

    Release Notes

    [2025-12-04]

    llama-index-core [0.14.10]

    • feat: add mock function calling llm (#20331)
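
A mock function-calling LLM lets agent tests run deterministically without network calls, by returning scripted tool calls and then a final answer. A generic sketch of the pattern (class and return shapes are hypothetical, not the actual core class):

```python
class MockFunctionCallingLLM:
    """Returns scripted tool calls in order, then a final text answer."""

    def __init__(self, tool_calls, final_answer):
        self._tool_calls = list(tool_calls)
        self._final = final_answer

    def chat(self, messages):
        # Emit the next scripted tool call, if any remain.
        if self._tool_calls:
            name, args = self._tool_calls.pop(0)
            return {"tool_call": {"name": name, "args": args}}
        # Otherwise, return the canned final answer.
        return {"text": self._final}

llm = MockFunctionCallingLLM([("add", {"a": 2, "b": 3})], "The sum is 5.")
first = llm.chat([])
second = llm.chat([])
print(first, second)
```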

    llama-index-llms-qianfan [0.4.1]

    • test: fix typo 'reponse' to 'response' in variable names (#20329)

    llama-index-tools-airweave [0.1.0]

    • feat: add Airweave tool integration with advanced search features (#20111)

    llama-index-utils-qianfan [0.4.1]

    • test: fix typo 'reponse' to 'response' in variable names (#20329)

  • Dec 2, 2025
    • Date parsed from source:
      Dec 2, 2025
    • First seen by Releasebot:
      Mar 26, 2026

    LlamaIndex

    v0.14.9

    LlamaIndex ships a broad update across core, LLM, embedding, reader, and vector store packages, adding new model and provider support, improving multimodal and ingestion behavior, and tightening fixes for chat, async, and database integrations.

    Release Notes

    [2025-12-02]

    llama-index-agent-azure [0.2.1]

    • fix: Pin azure-ai-projects version to prevent breaking changes (#20255)

    llama-index-core [0.14.9]

    • MultiModalVectorStoreIndex now returns a multi-modal ContextChatEngine. (#20265)
    • Ingestion to vector store now ensures that _node-content is readable (#20266)
    • fix: ensure context is copied with async utils run_async (#20286)
    • fix(memory): ensure first message in queue is always a user message after flush (#20310)
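
The run_async fix above concerns contextvars being lost when a coroutine runs outside the caller's context (new threads start with an empty context). A stdlib sketch of the copy-the-context idea, using a hypothetical `run_async_in_thread` helper rather than the actual core utility:

```python
import asyncio
import contextvars
import threading

request_id = contextvars.ContextVar("request_id", default=None)

async def handler():
    return request_id.get()

def run_async_in_thread(coro):
    """Run a coroutine on a fresh event loop in another thread,
    copying the caller's contextvars so they stay visible."""
    ctx = contextvars.copy_context()
    result = {}

    def target():
        # Without ctx.run, the new thread would see the defaults instead.
        result["value"] = ctx.run(asyncio.run, coro)

    t = threading.Thread(target=target)
    t.start()
    t.join()
    return result["value"]

request_id.set("req-42")
print(run_async_in_thread(handler()))  # req-42
```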

    llama-index-embeddings-bedrock [0.7.2]

    • feat(embeddings-bedrock): Add support for Amazon Bedrock Application Inference Profiles (#20267)
    • fix(embeddings-bedrock): correct extraction of provider from model_name (#20295)
    • Bump version of bedrock-embedding (#20304)

    llama-index-embeddings-voyageai [0.5.1]

    • VoyageAI correction and documentation (#20251)

    llama-index-llms-anthropic [0.10.3]

    • feat: add anthropic opus 4.5 (#20306)

    llama-index-llms-bedrock-converse [0.12.2]

    • fix(bedrock-converse): Only use guardrail_stream_processing_mode in streaming functions (#20289)
    • feat: add anthropic opus 4.5 (#20306)
    • feat(bedrock-converse): Additional support for Claude Opus 4.5 (#20317)

    llama-index-llms-google-genai [0.7.4]

    • Fix gemini-3 support and gemini function call support (#20315)

    llama-index-llms-helicone [0.1.1]

    • update helicone docs + examples (#20208)

    llama-index-llms-openai [0.6.10]

    • Smallest Nit (#20252)
    • Feat: Add gpt-5.1-chat model support (#20311)

    llama-index-llms-ovhcloud [0.1.0]

    • Add OVHcloud AI Endpoints provider (#20288)

    llama-index-llms-siliconflow [0.4.2]

    • [Bugfix] None check on content in delta in siliconflow LLM (#20327)

    llama-index-node-parser-docling [0.4.2]

    • Relax docling Python constraints (#20322)

    llama-index-packs-resume-screener [0.9.3]

    • feat: Update pypdf to latest version (#20285)

    llama-index-postprocessor-voyageai-rerank [0.4.1]

    • VoyageAI correction and documentation (#20251)

    llama-index-protocols-ag-ui [0.2.3]

    • fix: correct order of ag-ui events to avoid event conflicts (#20296)

    llama-index-readers-confluence [0.6.0]

    • Refactor Confluence integration: update license to MIT, remove requirements.txt, implement HtmlTextParser for HTML-to-Markdown conversion, and update dependencies and tests accordingly (#20262)

    llama-index-readers-docling [0.4.2]

    • Relax docling Python constraints (#20322)

    llama-index-readers-file [0.5.5]

    • feat: Update pypdf to latest version (#20285)

    llama-index-readers-reddit [0.4.1]

    • Fix typo in README.md for Reddit integration (#20283)

    llama-index-storage-chat-store-postgres [0.3.2]

    • [FIX] Postgres ChatStore automatically prefix table name with "data_" (#20241)

    llama-index-vector-stores-azureaisearch [0.4.4]

    • vector-azureaisearch: check if user agent already in policy before add it to azure client (#20243)
    • fix(azureaisearch): Add close/aclose methods to fix unclosed client session warnings (#20309)

    llama-index-vector-stores-milvus [0.9.4]

    • Fix/consistency level param for milvus (#20268)

    llama-index-vector-stores-postgres [0.7.2]

    • Fix postgresql dispose (#20312)

    llama-index-vector-stores-qdrant [0.9.0]

    • fix: Update qdrant-client version constraints (#20280)
    • Feat: update Qdrant client to 1.16.0 (#20287)

    llama-index-vector-stores-vertexaivectorsearch [0.3.2]

    • fix: update blob path in batch_update_index (#20281)

    llama-index-voice-agents-openai [0.2.2]

    • Smallest Nit (#20252)
  • Nov 10, 2025
    • Date parsed from source:
      Nov 10, 2025
    • First seen by Releasebot:
      Mar 26, 2026

    LlamaIndex

    v0.14.8

    LlamaIndex ships a broad release with stronger ReAct and multi-block agent handling, new tool-call support across OpenAI, Google GenAI, Anthropic, and Bedrock Converse, plus Scrapy web reading, OpenAI v2 SDK updates, and several fixes for storage, vector search, and parsing.

    Release Notes

    [2025-11-10]

    llama-index-core [0.14.8]

    • Fix ReActOutputParser getting stuck when "Answer:" contains "Action:" (#20098)
    • Add buffer to image, audio, video and document blocks (#20153)
    • fix(agent): Handle multi-block ChatMessage in ReActAgent (#20196)
    • Fix/20209 (#20214)
    • Preserve Exception in ToolOutput (#20231)
    • fix weird pydantic warning (#20235)
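
The ReActOutputParser fix above handles outputs where the final answer itself contains the string "Action:"; once "Answer:" appears, everything after it belongs to the answer. A simplified sketch of that precedence rule (not the actual parser):

```python
def parse_react_output(text: str) -> dict:
    """Prefer a final answer over an action when both markers appear."""
    if "Answer:" in text:
        return {"type": "answer", "content": text.split("Answer:", 1)[1].strip()}
    if "Action:" in text:
        return {"type": "action", "content": text.split("Action:", 1)[1].strip()}
    raise ValueError("no Answer: or Action: marker found")

out = parse_react_output("Thought: done.\nAnswer: Call the Action: endpoint last.")
print(out["type"])  # answer
```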

    llama-index-embeddings-nvidia [0.4.2]

    • docs: Edit pass and update example model (#20198)

    llama-index-embeddings-ollama [0.8.4]

    • Added a test case (no code change) that checks embeddings over an actual connection to an Ollama server, after verifying the server exists (#20230)

    llama-index-llms-anthropic [0.10.2]

    • feat(llms/anthropic): Add support for RawMessageDeltaEvent in streaming (#20206)
    • chore: remove unsupported models (#20211)

    llama-index-llms-bedrock-converse [0.11.1]

    • feat: integrate bedrock converse with tool call block (#20099)
    • feat: Update model name extraction to include 'jp' region prefix and … (#20233)

    llama-index-llms-google-genai [0.7.3]

    • feat: google genai integration with tool block (#20096)
    • fix: non-streaming gemini tool calling (#20207)
    • Add token usage information in GoogleGenAI chat additional_kwargs (#20219)
    • bug fix google genai stream_complete (#20220)

    llama-index-llms-nvidia [0.4.4]

    • docs: Edit pass and code example updates (#20200)

    llama-index-llms-openai [0.6.8]

    • FixV2: Correct DocumentBlock type for OpenAI from 'input_file' to 'file' (#20203)
    • OpenAI v2 sdk support (#20234)

    llama-index-llms-upstage [0.6.5]

    • OpenAI v2 sdk support (#20234)

    llama-index-packs-streamlit-chatbot [0.5.2]

    • OpenAI v2 sdk support (#20234)

    llama-index-packs-voyage-query-engine [0.5.2]

    • OpenAI v2 sdk support (#20234)

    llama-index-postprocessor-nvidia-rerank [0.5.1]

    • docs: Edit pass (#20199)

    llama-index-readers-web [0.5.6]

    • feat: Add ScrapyWebReader Integration (#20212)
    • Update Scrapy dependency to 2.13.3 (#20228)

    llama-index-readers-whisper [0.3.0]

    • OpenAI v2 sdk support (#20234)

    llama-index-storage-kvstore-postgres [0.4.3]

    • fix: Ensure schema creation only occurs if it doesn't already exist (#20225)

    llama-index-tools-brightdata [0.2.1]

    • docs: add api key claim instructions (#20204)

    llama-index-tools-mcp [0.4.3]

    • Added test case for issue 19211. No code change (#20201)

    llama-index-utils-oracleai [0.3.1]

    • Update llama-index-core dependency to 0.12.45 (#20227)

    llama-index-vector-stores-lancedb [0.4.2]

    • fix: FTS index recreation bug on every LanceDB query (#20213)
