Java version of LangChain
Models now return `Response<T>` instead of `T`. `Response<T>` contains the token usage and finish reason.
`InMemoryEmbeddingStore` can now be easily persisted and restored; see `serializeToJson()`, `serializeToFile()`, `fromJson()` and `fromFile()`.
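A persistence round-trip with these methods might look like the following sketch (the embedding values and embedded text are invented for illustration, and the package names are assumed from the LangChain4j module layout):

```java
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class PersistenceExample {

    public static void main(String[] args) {
        InMemoryEmbeddingStore<String> store = new InMemoryEmbeddingStore<>();

        // Add an embedding together with the text it represents
        store.add(Embedding.from(new float[]{0.1f, 0.2f, 0.3f}), "hello world");

        // Persist the whole store as a JSON string...
        String json = store.serializeToJson();

        // ...and rebuild an equivalent store from it later
        InMemoryEmbeddingStore<String> restored = InMemoryEmbeddingStore.fromJson(json);
    }
}
```

`serializeToFile()` and `fromFile()` follow the same pattern, writing to and reading from a path instead of a string.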
Added an option to set up a proxy for OpenAI models (#93)
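Configuring the proxy presumably goes through the model builder; a minimal sketch, assuming the builder exposes a `proxy(...)` parameter and using a made-up proxy host and port:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

import dev.langchain4j.model.openai.OpenAiChatModel;

public class ProxyExample {

    public static void main(String[] args) {
        // Hypothetical proxy address, for illustration only
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("proxy.example.com", 8080));

        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .proxy(proxy)
                .build();
    }
}
```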
Added more pre-packaged in-process embedding models (#91)
`InMemoryEmbeddingStore` now returns matches ordered from highest to lowest score (#90)
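With this fix, the first match is always the best one; a minimal sketch (the two-dimensional vectors are invented for illustration, and `findRelevant` is assumed from the `EmbeddingStore` interface):

```java
import java.util.List;

import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.store.embedding.EmbeddingMatch;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class OrderingExample {

    public static void main(String[] args) {
        InMemoryEmbeddingStore<String> store = new InMemoryEmbeddingStore<>();
        store.add(Embedding.from(new float[]{1f, 0f}), "a");
        store.add(Embedding.from(new float[]{0f, 1f}), "b");

        // Matches come back ordered by score, best first
        List<EmbeddingMatch<String>> matches =
                store.findRelevant(Embedding.from(new float[]{1f, 0f}), 2);
        // matches.get(0) now holds the highest-scoring match
    }
}
```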
Added `DocumentTransformer` and its first implementation: `HtmlTextExtractor`
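As a sketch of how the transformer could be used (the package location and the `Document.from(...)` factory are assumptions based on the library's layout):

```java
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.transformer.HtmlTextExtractor;

public class HtmlExtractionExample {

    public static void main(String[] args) {
        Document html = Document.from(
                "<html><body><p>Hello, <b>world</b>!</p></body></html>");

        // Strip the HTML markup, keeping only the text content
        Document text = new HtmlTextExtractor().transform(html);
    }
}
```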
`OpenAiTokenizer` is now more precise and can estimate tokens for tools/functions in `OpenAiChatModel` and `OpenAiStreamingChatModel`
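A basic token-estimation call might look like this sketch (the model name string and sample text are arbitrary):

```java
import dev.langchain4j.model.openai.OpenAiTokenizer;

public class TokenizerExample {

    public static void main(String[] args) {
        OpenAiTokenizer tokenizer = new OpenAiTokenizer("gpt-3.5-turbo");

        // Estimate how many tokens this text would consume
        int tokens = tokenizer.estimateTokenCountInText("Hello, world!");
        System.out.println(tokens);
    }
}
```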
Added in-process embedding models:
The idea is to give users an option to embed documents/texts in the same Java process, without any external dependencies. ONNX Runtime is used to run the models inside the JVM. Each model resides in its own Maven module (inside the jar).
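Using one of these models could look like the sketch below, taking the all-MiniLM-L6-v2 module as an example (the class name is an assumption based on the library's embeddings modules; depending on the version, `embed(...)` may return the `Embedding` directly or wrapped in `Response<T>`):

```java
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;

public class InProcessEmbeddingExample {

    public static void main(String[] args) {
        // Runs entirely inside the JVM via ONNX Runtime; no external service needed
        AllMiniLmL6V2EmbeddingModel model = new AllMiniLmL6V2EmbeddingModel();

        Embedding embedding = model.embed("Hello, world!");
        System.out.println(embedding.dimensions());
    }
}
```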
Added more request parameters for OpenAI models
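The exact set of new parameters is not listed here; as a sketch, builder-style configuration with a couple of common ones (`temperature`, `maxTokens`) might look like:

```java
import dev.langchain4j.model.openai.OpenAiChatModel;

public class RequestParametersExample {

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-3.5-turbo")
                .temperature(0.2)
                .maxTokens(200)
                .build();
    }
}
```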
You can now try out OpenAI's `gpt-3.5-turbo` and `text-embedding-ada-002` models with LangChain4j for free, without needing an OpenAI account and keys! Simply use the API key "demo".
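Getting started with the demo key might look like this sketch (the prompt text is arbitrary):

```java
import dev.langchain4j.model.openai.OpenAiChatModel;

public class DemoKeyExample {

    public static void main(String[] args) {
        // The "demo" key works without an OpenAI account, per the release notes
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey("demo")
                .build();

        String answer = model.generate("Hello!");
        System.out.println(answer);
    }
}
```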
Removed the `Result` class. Models now return results (`AiMessage`/`Embedding`/`Moderation`/etc.) directly, without wrapping them in a `Result` object.
`@UserMessage` in AI Services.
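An AI Service using `@UserMessage` might look like this sketch (the `Translator` interface and its prompt template are made up for illustration; `{{it}}` is assumed to refer to the single method parameter):

```java
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

public class AiServiceExample {

    // Hypothetical service interface, for illustration only
    interface Translator {

        @UserMessage("Translate the following text to German: {{it}}")
        String translate(String text);
    }

    public static void main(String[] args) {
        Translator translator = AiServices.create(
                Translator.class,
                OpenAiChatModel.withApiKey(System.getenv("OPENAI_API_KEY")));

        String result = translator.translate("Hello");
        System.out.println(result);
    }
}
```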