Dataset Viewer
Auto-converted to Parquet

Column   Type     String length (min–max)
id       string   14–16
text     string   31–2.73k
source   string   49–114
317715aa0412-0
Welcome to LangChain

LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:

- Be data-aware: connect a language model to other sources of data.
- Be agentic: allow a language model to interact with its environment.

The LangChain framework is designed with these principles in mind. This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.

Getting Started

Check out the guide below for a walkthrough of how to get started using LangChain to create a language model application. Getting Started Documentation

Modules

There are several main modules that LangChain provides support for. For each module we provide examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:

- Models: The various model types and model integrations LangChain supports.
- Prompts: Prompt management, prompt optimization, and prompt serialization.
- Memory: Memory is the concept of persisting state between calls of a chain or agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
- Indexes: Language models are often more powerful when combined with your own text data; this module covers best practices for doing exactly that.
- Chains: Chains go beyond a single LLM call and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, many integrations with other tools, and end-to-end chains for common applications (a minimal sketch follows this row).
https://python.langchain.com/en/latest/index.html
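To make the Prompts/Models/Chains descriptions in this row concrete, here is a minimal sketch using the early-2023 langchain Python API that this snapshot documents; the model settings and the product string are illustrative assumptions, and OPENAI_API_KEY must be set.

```python
# Minimal sketch (assumed: langchain ~0.0.x as of April 2023, OPENAI_API_KEY set).
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Prompts module: a reusable template with a declared input variable.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# Models module: a wrapper around a completion-style LLM (settings illustrative).
llm = OpenAI(temperature=0.9)

# Chains module: composes the prompt and model into a single callable unit.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```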
317715aa0412-1
- Agents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents (a minimal sketch follows this row).

Use Cases

The above modules can be used in a variety of ways, and LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports:

- Personal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
- Question Answering: The second big LangChain use case. Answering questions over specific documents, using only the information in those documents to construct an answer.
- Chatbots: Since language models are good at producing text, they are ideal for creating chatbots.
- Querying Tabular Data: If you want to understand how to use LLMs to query data stored in a tabular format (CSVs, SQL, dataframes, etc.), you should read this page.
- Interacting with APIs: Enabling LLMs to interact with APIs is extremely powerful: it gives them more up-to-date information and allows them to take actions.
- Extraction: Extract structured information from text.
- Summarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.
- Evaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is to use language models themselves to do the evaluation. LangChain provides some prompts/chains to assist with this.

Reference Docs

All of LangChain's reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain. Reference Documentation

LangChain Ecosystem

Guides for how other companies/products can be used with LangChain.
https://python.langchain.com/en/latest/index.html
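The Agents paragraph in this row describes the Action/Observation loop; below is a minimal hedged sketch of wiring it up with tools. The tool names, agent string, and question are illustrative, and the search tool assumes a SERPAPI_API_KEY.

```python
# Minimal sketch of the agent loop (assumed: April-2023 langchain API).
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Tools are the Actions the agent may take (names illustrative; serpapi needs a key).
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# The agent decides on an Action, runs it, reads the Observation, repeats until done.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Who founded LangChain, and what is 7 raised to the 0.5 power?")
```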
317715aa0412-2
Additional Resources

An additional collection of resources we think may be useful as you develop your application:

- LangChainHub: A place to share and explore other prompts, chains, and agents.
- Glossary: A glossary of all related terms, papers, methods, etc., whether implemented in LangChain or not.
- Gallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.
- Deployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
- Tracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.
- Model Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
- Discord: Join us on our Discord to discuss all things LangChain!
- Production Support: As you move your LangChains into production, we'd love to offer more comprehensive support. Please fill out this form and we'll set up a dedicated support Slack channel.
https://python.langchain.com/en/latest/index.html
ec8471acd39b-0
LangChain Gallery

Lots of people have built some pretty awesome stuff with LangChain. This is a collection of our favorites. If you see any other demos that you think we should highlight, be sure to let us know!

Open Source

- HowDoI.ai: An experiment in building a large-language-model-backed chatbot. It can hold a conversation, remember previous comments/questions, and answer all types of queries (history, web search, movie data, weather, news, and more).
- YouTube Transcription QA with Sources: An end-to-end example of doing question answering on YouTube transcripts, returning the timestamps as sources to legitimize the answer.
- QA Slack Bot: A Slack bot that uses LangChain and OpenAI's GPT-3 language model to provide domain-specific answers. You provide the documents.
- ThoughtSource: A central, open resource and community around data and tools related to chain-of-thought reasoning in large language models.
- LLM Strategy: This Python package adds a decorator, llm_strategy, that connects to an LLM (such as OpenAI's GPT-3) and uses the LLM to "implement" abstract methods in interface classes. It does this by forwarding requests to the LLM and converting the responses back to Python data using Python's @dataclasses.
- Zero-Shot Corporate Lobbyist: A notebook showing how to use GPT to help with the work of a corporate lobbyist.
- Dagster Documentation ChatBot
- Google Folder Semantic Search: A Jupyter notebook demonstrating how you could create a semantic search engine on documents in one of your Google Folders.
- GitHub support bot: Build a GitHub support bot with GPT-3, LangChain, and Python.
- Talk With Wind: Record sounds of anything (birds, wind, fire, train station) and chat with it.
https://python.langchain.com/en/latest/gallery.html
ec8471acd39b-1
- ChatGPT LangChain: This simple application demonstrates a conversational agent implemented with OpenAI GPT-3.5 and LangChain. When necessary, it leverages tools for complex math, searching the internet, and accessing news and weather.
- GPT Math Techniques: A Hugging Face Spaces project showing off the benefits of using PAL for math problems.
- GPT Political Compass: Measure the political compass of GPT.
- Notion Database Question-Answering Bot: An open-source GitHub project that shows how to use LangChain to create a chatbot that can answer questions about an arbitrary Notion database.
- LlamaIndex: LlamaIndex (formerly GPT Index) is a project consisting of a set of data structures that are created using GPT-3 and can be traversed using GPT-3 in order to answer queries.
- Grover's Algorithm: Leveraging Qiskit, OpenAI, and LangChain to demonstrate Grover's algorithm.
- QNimGPT: A chat UI to play Nim, where a player can select an opponent: either a quantum computer or an AI.
- ReAct TextWorld: Leveraging the ReActTextWorldAgent to play TextWorld with an LLM!
- Fact Checker: This repo is a simple demonstration of using LangChain to do fact-checking with prompt chaining.
- DocsGPT: Answer questions about the documentation of any project.

Misc. Colab Notebooks

- Wolfram Alpha in Conversational Agent: Give ChatGPT a WolframAlpha neural implant.
- Tool Updates in Agents: Agent improvements (6th Jan 2023).
- Conversational Agent with Tools (Langchain AGI): Langchain AGI (23rd Dec 2022).

Proprietary

- Daimon: A chat-based AI personal assistant with long-term memory about you.
https://python.langchain.com/en/latest/gallery.html
ec8471acd39b-2
- AI Assisted SQL Query Generator: An app to write SQL using natural language and execute it against a real database.
- Clerkie: Stack-tracing QA bot to help debug complex stack traces (especially the ones that go multi-function/file deep).
- Sales Email Writer: By Raza Habib, this demo uses LangChain + SerpAPI + HumanLoop to write sales emails. Give it a company name and a person, and this application will use Google Search (via SerpAPI) to get more information on the company and the person, and then write them a sales message.
- Question-Answering on a Web Browser: By Zahid Khawaja, this demo uses question answering to answer questions about a given website. A followup added this for YouTube videos, and another followup added it for Wikipedia.
- Mynd: A journaling app for self-care that uses AI to uncover insights and patterns over time.
https://python.langchain.com/en/latest/gallery.html
c850c360a1e9-0
Index

__call__() (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method)
https://python.langchain.com/en/latest/genindex.html
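All of the __call__() entries in this row point at one shared behavior: an LLM wrapper can be invoked directly as a function on a prompt string. A minimal sketch (model and prompt are placeholders; OPENAI_API_KEY assumed):

```python
# Minimal sketch of the shared __call__() interface indexed above.
from langchain.llms import OpenAI

llm = OpenAI()            # assumes OPENAI_API_KEY is set in the environment
text = llm("Say hello.")  # invoking the object calls __call__() on the prompt
print(text)
```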
c850c360a1e9-1
(langchain.llms.StochasticAI method) (langchain.llms.Writer method) A aapply() (langchain.chains.LLMChain method) aapply_and_parse() (langchain.chains.LLMChain method) add() (langchain.docstore.InMemoryDocstore method) add_documents() (langchain.vectorstores.VectorStore method) add_embeddings() (langchain.vectorstores.FAISS method) add_example() (langchain.prompts.example_selector.LengthBasedExampleSelector method) (langchain.prompts.example_selector.SemanticSimilarityExampleSelector method) add_texts() (langchain.vectorstores.AtlasDB method) (langchain.vectorstores.Chroma method) (langchain.vectorstores.DeepLake method) (langchain.vectorstores.ElasticVectorSearch method) (langchain.vectorstores.FAISS method) (langchain.vectorstores.Milvus method) (langchain.vectorstores.OpenSearchVectorSearch method) (langchain.vectorstores.Pinecone method) (langchain.vectorstores.Qdrant method) (langchain.vectorstores.VectorStore method) (langchain.vectorstores.Weaviate method) agenerate() (langchain.chains.LLMChain method) (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method)
https://python.langchain.com/en/latest/genindex.html
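The add_texts() entries in this row are the same method across every vector store; a hedged sketch against FAISS (the texts and the embedding backend are illustrative; assumes faiss-cpu and an OpenAI key):

```python
# Minimal sketch of the shared VectorStore.add_texts() interface indexed above.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

store = FAISS.from_texts(["hello world"], OpenAIEmbeddings())  # initial index
store.add_texts(["goodbye world", "hello langchain"])          # append more texts
print(store.similarity_search("hello", k=1))
```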
c850c360a1e9-2
(langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) agenerate_prompt() (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-3
(langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) agent (langchain.agents.AgentExecutor attribute) (langchain.agents.MRKLChain attribute) (langchain.agents.ReActChain attribute) (langchain.agents.SelfAskWithSearchChain attribute) AgentType (class in langchain.agents) ai_prefix (langchain.agents.ConversationalAgent attribute) aiosession (langchain.serpapi.SerpAPIWrapper attribute) (langchain.utilities.searx_search.SearxSearchWrapper attribute) aleph_alpha_api_key (langchain.llms.AlephAlpha attribute) allowed_tools (langchain.agents.Agent attribute) (langchain.agents.ReActTextWorldAgent attribute) (langchain.agents.ZeroShotAgent attribute) answers (langchain.utilities.searx_search.SearxResults property) api_answer_chain (langchain.chains.APIChain attribute) api_docs (langchain.chains.APIChain attribute) api_operation (langchain.chains.OpenAPIEndpointChain attribute) api_request_chain (langchain.chains.APIChain attribute) (langchain.chains.OpenAPIEndpointChain attribute) api_response_chain (langchain.chains.OpenAPIEndpointChain attribute) api_url (langchain.llms.StochasticAI attribute) aplan() (langchain.agents.Agent method) (langchain.agents.BaseMultiActionAgent method) (langchain.agents.BaseSingleActionAgent method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-4
(langchain.agents.BaseSingleActionAgent method) (langchain.agents.LLMSingleActionAgent method) apply() (langchain.chains.LLMChain method) apply_and_parse() (langchain.chains.LLMChain method) apredict() (langchain.chains.LLMChain method) apredict_and_parse() (langchain.chains.LLMChain method) aprep_prompts() (langchain.chains.LLMChain method) are_all_true_prompt (langchain.chains.LLMSummarizationCheckerChain attribute) aresults() (langchain.utilities.searx_search.SearxSearchWrapper method) arun() (langchain.serpapi.SerpAPIWrapper method) (langchain.utilities.searx_search.SearxSearchWrapper method) as_retriever() (langchain.vectorstores.VectorStore method) AtlasDB (class in langchain.vectorstores) B bad_words (langchain.llms.NLPCloud attribute) base_embeddings (langchain.chains.HypotheticalDocumentEmbedder attribute) base_url (langchain.llms.AI21 attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.Writer attribute) batch_size (langchain.llms.AzureOpenAI attribute) beam_search_diversity_rate (langchain.llms.Writer attribute) beam_width (langchain.llms.Writer attribute) best_of (langchain.llms.AlephAlpha attribute) (langchain.llms.AzureOpenAI attribute) C callback_manager (langchain.agents.MRKLChain attribute) (langchain.agents.ReActChain attribute) (langchain.agents.SelfAskWithSearchChain attribute) categories (langchain.utilities.searx_search.SearxSearchWrapper attribute) chain (langchain.chains.ConstitutionalChain attribute) chains (langchain.chains.SequentialChain attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-5
chains (langchain.chains.SequentialChain attribute) (langchain.chains.SimpleSequentialChain attribute) CharacterTextSplitter (class in langchain.text_splitter) CHAT_CONVERSATIONAL_REACT_DESCRIPTION (langchain.agents.AgentType attribute) CHAT_ZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute) check_assertions_prompt (langchain.chains.LLMCheckerChain attribute) (langchain.chains.LLMSummarizationCheckerChain attribute) Chroma (class in langchain.vectorstores) CHUNK_LEN (langchain.llms.RWKV attribute) chunk_size (langchain.embeddings.OpenAIEmbeddings attribute) client (langchain.llms.Petals attribute) combine_docs_chain (langchain.chains.AnalyzeDocumentChain attribute) combine_documents_chain (langchain.chains.MapReduceChain attribute) combine_embeddings() (langchain.chains.HypotheticalDocumentEmbedder method) completion_bias_exclusion_first_token_only (langchain.llms.AlephAlpha attribute) compress_to_size (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute) constitutional_principles (langchain.chains.ConstitutionalChain attribute) construct() (langchain.llms.AI21 class method) (langchain.llms.AlephAlpha class method) (langchain.llms.Anthropic class method) (langchain.llms.AzureOpenAI class method) (langchain.llms.Banana class method) (langchain.llms.CerebriumAI class method) (langchain.llms.Cohere class method) (langchain.llms.DeepInfra class method) (langchain.llms.ForefrontAI class method) (langchain.llms.GooseAI class method) (langchain.llms.GPT4All class method) (langchain.llms.HuggingFaceEndpoint class method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-6
(langchain.llms.HuggingFaceEndpoint class method) (langchain.llms.HuggingFaceHub class method) (langchain.llms.HuggingFacePipeline class method) (langchain.llms.LlamaCpp class method) (langchain.llms.Modal class method) (langchain.llms.NLPCloud class method) (langchain.llms.OpenAI class method) (langchain.llms.OpenAIChat class method) (langchain.llms.Petals class method) (langchain.llms.PromptLayerOpenAI class method) (langchain.llms.PromptLayerOpenAIChat class method) (langchain.llms.Replicate class method) (langchain.llms.RWKV class method) (langchain.llms.SagemakerEndpoint class method) (langchain.llms.SelfHostedHuggingFaceLLM class method) (langchain.llms.SelfHostedPipeline class method) (langchain.llms.StochasticAI class method) (langchain.llms.Writer class method) content_handler (langchain.embeddings.SagemakerEndpointEmbeddings attribute) (langchain.llms.SagemakerEndpoint attribute) CONTENT_KEY (langchain.vectorstores.Qdrant attribute) contextual_control_threshold (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute) (langchain.llms.AlephAlpha attribute) control_log_additive (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute) (langchain.llms.AlephAlpha attribute) CONVERSATIONAL_REACT_DESCRIPTION (langchain.agents.AgentType attribute) copy() (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-7
(langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) coroutine (langchain.agents.Tool attribute) countPenalty (langchain.llms.AI21 attribute) create_assertions_prompt (langchain.chains.LLMSummarizationCheckerChain attribute) create_csv_agent() (in module langchain.agents) create_documents() (langchain.text_splitter.TextSplitter method) create_draft_answer_prompt (langchain.chains.LLMCheckerChain attribute) create_index() (langchain.vectorstores.AtlasDB method) create_json_agent() (in module langchain.agents) create_llm_result() (langchain.llms.AzureOpenAI method) (langchain.llms.OpenAI method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-8
(langchain.llms.OpenAI method) (langchain.llms.PromptLayerOpenAI method) create_openapi_agent() (in module langchain.agents) create_outputs() (langchain.chains.LLMChain method) create_pandas_dataframe_agent() (in module langchain.agents) create_prompt() (langchain.agents.Agent class method) (langchain.agents.ConversationalAgent class method) (langchain.agents.ConversationalChatAgent class method) (langchain.agents.ReActTextWorldAgent class method) (langchain.agents.ZeroShotAgent class method) create_sql_agent() (in module langchain.agents) create_vectorstore_agent() (in module langchain.agents) create_vectorstore_router_agent() (in module langchain.agents) credentials_profile_name (langchain.embeddings.SagemakerEndpointEmbeddings attribute) (langchain.llms.SagemakerEndpoint attribute) critique_chain (langchain.chains.ConstitutionalChain attribute) D database (langchain.chains.SQLDatabaseChain attribute) decider_chain (langchain.chains.SQLDatabaseSequentialChain attribute) DeepLake (class in langchain.vectorstores) delete() (langchain.vectorstores.DeepLake method) delete_collection() (langchain.vectorstores.Chroma method) delete_dataset() (langchain.vectorstores.DeepLake method) deployment_name (langchain.llms.AzureOpenAI attribute) description (langchain.agents.Tool attribute) deserialize_json_input() (langchain.chains.OpenAPIEndpointChain method) device (langchain.llms.SelfHostedHuggingFaceLLM attribute) dict() (langchain.agents.BaseMultiActionAgent method) (langchain.agents.BaseSingleActionAgent method) (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method)
https://python.langchain.com/en/latest/genindex.html
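Among the create_*_agent() factories in this row, the pandas one is easy to demonstrate; a minimal sketch (the dataframe and question are made up):

```python
# Minimal sketch of create_pandas_dataframe_agent(), indexed above.
import pandas as pd
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.DataFrame({"age": [21, 34, 45]})  # toy data for illustration
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("What is the average age?")
```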
c850c360a1e9-9
(langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) (langchain.prompts.BasePromptTemplate method) (langchain.prompts.FewShotPromptTemplate method) (langchain.prompts.FewShotPromptWithTemplates method) do_sample (langchain.llms.NLPCloud attribute) (langchain.llms.Petals attribute) E early_stopping (langchain.llms.NLPCloud attribute) early_stopping_method (langchain.agents.AgentExecutor attribute) (langchain.agents.MRKLChain attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-10
(langchain.agents.MRKLChain attribute) (langchain.agents.ReActChain attribute) (langchain.agents.SelfAskWithSearchChain attribute) echo (langchain.llms.AlephAlpha attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) ElasticVectorSearch (class in langchain.vectorstores) embed_documents() (langchain.chains.HypotheticalDocumentEmbedder method) (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding method) (langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding method) (langchain.embeddings.CohereEmbeddings method) (langchain.embeddings.FakeEmbeddings method) (langchain.embeddings.HuggingFaceEmbeddings method) (langchain.embeddings.HuggingFaceHubEmbeddings method) (langchain.embeddings.HuggingFaceInstructEmbeddings method) (langchain.embeddings.LlamaCppEmbeddings method) (langchain.embeddings.OpenAIEmbeddings method) (langchain.embeddings.SagemakerEndpointEmbeddings method) (langchain.embeddings.SelfHostedEmbeddings method) (langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings method) (langchain.embeddings.TensorflowHubEmbeddings method) embed_instruction (langchain.embeddings.HuggingFaceInstructEmbeddings attribute) (langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute) embed_query() (langchain.chains.HypotheticalDocumentEmbedder method) (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding method) (langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding method) (langchain.embeddings.CohereEmbeddings method) (langchain.embeddings.FakeEmbeddings method) (langchain.embeddings.HuggingFaceEmbeddings method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-11
(langchain.embeddings.HuggingFaceEmbeddings method) (langchain.embeddings.HuggingFaceHubEmbeddings method) (langchain.embeddings.HuggingFaceInstructEmbeddings method) (langchain.embeddings.LlamaCppEmbeddings method) (langchain.embeddings.OpenAIEmbeddings method) (langchain.embeddings.SagemakerEndpointEmbeddings method) (langchain.embeddings.SelfHostedEmbeddings method) (langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings method) (langchain.embeddings.TensorflowHubEmbeddings method) embedding (langchain.llms.GPT4All attribute) endpoint_kwargs (langchain.embeddings.SagemakerEndpointEmbeddings attribute) (langchain.llms.SagemakerEndpoint attribute) endpoint_name (langchain.embeddings.SagemakerEndpointEmbeddings attribute) (langchain.llms.SagemakerEndpoint attribute) endpoint_url (langchain.llms.CerebriumAI attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.HuggingFaceEndpoint attribute) (langchain.llms.Modal attribute) engines (langchain.utilities.searx_search.SearxSearchWrapper attribute) entity_extraction_chain (langchain.chains.GraphQAChain attribute) error (langchain.chains.OpenAIModerationChain attribute) example_keys (langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute) example_prompt (langchain.prompts.example_selector.LengthBasedExampleSelector attribute) (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) example_selector (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) example_separator (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute)
https://python.langchain.com/en/latest/genindex.html
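The embed_documents()/embed_query() pairs in this row define the common Embeddings interface; a minimal sketch with one backend (relying on the wrapper's default sentence-transformers model is an assumption):

```python
# Minimal sketch of the shared Embeddings interface indexed above.
from langchain.embeddings import HuggingFaceEmbeddings  # runs locally, no API key

emb = HuggingFaceEmbeddings()                                # default model assumed
doc_vecs = emb.embed_documents(["first doc", "second doc"])  # list of vectors
query_vec = emb.embed_query("a search query")                # single vector
print(len(doc_vecs), len(query_vec))
```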
c850c360a1e9-12
(langchain.prompts.FewShotPromptWithTemplates attribute) examples (langchain.prompts.example_selector.LengthBasedExampleSelector attribute) (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) F f16_kv (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) FAISS (class in langchain.vectorstores) fetch_k (langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector attribute) finish_tool_name (langchain.agents.Agent property) (langchain.agents.ConversationalAgent property) format() (langchain.prompts.BaseChatPromptTemplate method) (langchain.prompts.BasePromptTemplate method) (langchain.prompts.ChatPromptTemplate method) (langchain.prompts.FewShotPromptTemplate method) (langchain.prompts.FewShotPromptWithTemplates method) (langchain.prompts.PromptTemplate method) format_messages() (langchain.prompts.BaseChatPromptTemplate method) (langchain.prompts.ChatPromptTemplate method) (langchain.prompts.MessagesPlaceholder method) format_prompt() (langchain.prompts.BaseChatPromptTemplate method) (langchain.prompts.BasePromptTemplate method) (langchain.prompts.StringPromptTemplate method) frequency_penalty (langchain.llms.AlephAlpha attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.Cohere attribute) (langchain.llms.GooseAI attribute) frequencyPenalty (langchain.llms.AI21 attribute) from_agent_and_tools() (langchain.agents.AgentExecutor class method) from_api_operation() (langchain.chains.OpenAPIEndpointChain class method) from_chains() (langchain.agents.MRKLChain class method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-13
from_chains() (langchain.agents.MRKLChain class method) from_colored_object_prompt() (langchain.chains.PALChain class method) from_documents() (langchain.vectorstores.AtlasDB class method) (langchain.vectorstores.Chroma class method) (langchain.vectorstores.Qdrant class method) (langchain.vectorstores.VectorStore class method) from_embeddings() (langchain.vectorstores.FAISS class method) from_examples() (langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector class method) (langchain.prompts.example_selector.SemanticSimilarityExampleSelector class method) (langchain.prompts.PromptTemplate class method) from_existing_index() (langchain.vectorstores.Pinecone class method) from_file() (langchain.prompts.PromptTemplate class method) from_huggingface_tokenizer() (langchain.text_splitter.TextSplitter class method) from_llm() (langchain.chains.ChatVectorDBChain class method) (langchain.chains.ConstitutionalChain class method) (langchain.chains.ConversationalRetrievalChain class method) (langchain.chains.GraphQAChain class method) (langchain.chains.HypotheticalDocumentEmbedder class method) (langchain.chains.QAGenerationChain class method) (langchain.chains.SQLDatabaseSequentialChain class method) from_llm_and_api_docs() (langchain.chains.APIChain class method) from_llm_and_tools() (langchain.agents.Agent class method) (langchain.agents.ConversationalAgent class method) (langchain.agents.ConversationalChatAgent class method) (langchain.agents.ZeroShotAgent class method) from_math_prompt() (langchain.chains.PALChain class method) from_model_id() (langchain.llms.HuggingFacePipeline class method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-14
from_model_id() (langchain.llms.HuggingFacePipeline class method) from_params() (langchain.chains.MapReduceChain class method) from_pipeline() (langchain.llms.SelfHostedHuggingFaceLLM class method) (langchain.llms.SelfHostedPipeline class method) from_string() (langchain.chains.LLMChain class method) from_template() (langchain.prompts.PromptTemplate class method) from_texts() (langchain.vectorstores.AtlasDB class method) (langchain.vectorstores.Chroma class method) (langchain.vectorstores.DeepLake class method) (langchain.vectorstores.ElasticVectorSearch class method) (langchain.vectorstores.FAISS class method) (langchain.vectorstores.Milvus class method) (langchain.vectorstores.OpenSearchVectorSearch class method) (langchain.vectorstores.Pinecone class method) (langchain.vectorstores.Qdrant class method) (langchain.vectorstores.VectorStore class method) (langchain.vectorstores.Weaviate class method) from_tiktoken_encoder() (langchain.text_splitter.TextSplitter class method) from_url_and_method() (langchain.chains.OpenAPIEndpointChain class method) func (langchain.agents.Tool attribute) G generate() (langchain.chains.LLMChain method) (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method)
https://python.langchain.com/en/latest/genindex.html
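Two of the from_*() factory constructors in this row are especially common; a hedged sketch (the template text and corpus are placeholders):

```python
# Minimal sketch of from_template() and from_texts(), both indexed above.
from langchain.prompts import PromptTemplate
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# from_template infers the input variable ("text") from the braces.
prompt = PromptTemplate.from_template("Summarize this: {text}")

# from_texts builds a searchable index straight from raw strings.
db = FAISS.from_texts(["alpha", "beta"], OpenAIEmbeddings())  # OPENAI_API_KEY assumed
```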
c850c360a1e9-15
(langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) generate_prompt() (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-16
(langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) get_all_tool_names() (in module langchain.agents) get_allowed_tools() (langchain.agents.Agent method) (langchain.agents.BaseMultiActionAgent method) (langchain.agents.BaseSingleActionAgent method) get_answer_expr (langchain.chains.PALChain attribute) get_full_inputs() (langchain.agents.Agent method) get_num_tokens() (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-17
(langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) get_num_tokens_from_messages() (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method)
https://python.langchain.com/en/latest/genindex.html
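get_num_tokens() in this row is available on every LLM wrapper and is useful for budgeting prompts against a context window; a minimal sketch:

```python
# Minimal sketch of get_num_tokens(), indexed above (OPENAI_API_KEY assumed).
from langchain.llms import OpenAI

llm = OpenAI()
print(llm.get_num_tokens("How many tokens is this sentence?"))
```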
c850c360a1e9-18
(langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) get_params() (langchain.serpapi.SerpAPIWrapper method) get_principles() (langchain.chains.ConstitutionalChain class method) get_sub_prompts() (langchain.llms.AzureOpenAI method) (langchain.llms.OpenAI method) (langchain.llms.PromptLayerOpenAI method) get_text_length (langchain.prompts.example_selector.LengthBasedExampleSelector attribute) globals (langchain.python.PythonREPL attribute) graph (langchain.chains.GraphQAChain attribute) H hardware (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) (langchain.llms.SelfHostedPipeline attribute) headers (langchain.utilities.searx_search.SearxSearchWrapper attribute) hosting (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute) I inference_fn (langchain.embeddings.SelfHostedEmbeddings attribute) (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) (langchain.llms.SelfHostedPipeline attribute) inference_kwargs (langchain.embeddings.SelfHostedEmbeddings attribute) initialize_agent() (in module langchain.agents) InMemoryDocstore (class in langchain.docstore)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-19
InMemoryDocstore (class in langchain.docstore) input_key (langchain.chains.QAGenerationChain attribute) input_keys (langchain.chains.ConstitutionalChain property) (langchain.chains.ConversationChain property) (langchain.chains.HypotheticalDocumentEmbedder property) (langchain.chains.QAGenerationChain property) (langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute) input_variables (langchain.chains.SequentialChain attribute) (langchain.chains.TransformChain attribute) (langchain.prompts.BasePromptTemplate attribute) (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) (langchain.prompts.MessagesPlaceholder property) (langchain.prompts.PromptTemplate attribute) J json() (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-20
(langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) K k (langchain.chains.QAGenerationChain attribute) (langchain.chains.VectorDBQA attribute) (langchain.chains.VectorDBQAWithSourcesChain attribute) (langchain.llms.Cohere attribute) (langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute) (langchain.utilities.searx_search.SearxSearchWrapper attribute) L langchain.agents module langchain.chains module langchain.docstore module langchain.embeddings module langchain.llms module langchain.prompts module langchain.prompts.example_selector module langchain.python module langchain.serpapi module langchain.text_splitter module langchain.utilities.searx_search module langchain.vectorstores module last_n_tokens_size (langchain.llms.LlamaCpp attribute) LatexTextSplitter (class in langchain.text_splitter) length (langchain.llms.ForefrontAI attribute) (langchain.llms.Writer attribute) length_no_input (langchain.llms.NLPCloud attribute) length_penalty (langchain.llms.NLPCloud attribute) length_pentaly (langchain.llms.Writer attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-21
length_pentaly (langchain.llms.Writer attribute) list_assertions_prompt (langchain.chains.LLMCheckerChain attribute) llm (langchain.chains.LLMBashChain attribute) (langchain.chains.LLMChain attribute) (langchain.chains.LLMCheckerChain attribute) (langchain.chains.LLMMathChain attribute) (langchain.chains.LLMSummarizationCheckerChain attribute) (langchain.chains.PALChain attribute) (langchain.chains.SQLDatabaseChain attribute) llm_chain (langchain.agents.Agent attribute) (langchain.agents.LLMSingleActionAgent attribute) (langchain.agents.ReActTextWorldAgent attribute) (langchain.agents.ZeroShotAgent attribute) (langchain.chains.HypotheticalDocumentEmbedder attribute) (langchain.chains.LLMRequestsChain attribute) (langchain.chains.QAGenerationChain attribute) llm_prefix (langchain.agents.Agent property) (langchain.agents.ConversationalAgent property) (langchain.agents.ConversationalChatAgent property) (langchain.agents.ZeroShotAgent property) load_agent() (in module langchain.agents) load_chain() (in module langchain.chains) load_fn_kwargs (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) (langchain.llms.SelfHostedPipeline attribute) load_local() (langchain.vectorstores.FAISS class method) load_prompt() (in module langchain.prompts) load_tools() (in module langchain.agents) locals (langchain.python.PythonREPL attribute) log_probs (langchain.llms.AlephAlpha attribute) logit_bias (langchain.llms.AlephAlpha attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-22
logit_bias (langchain.llms.AlephAlpha attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.GooseAI attribute) logitBias (langchain.llms.AI21 attribute) logits_all (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) logprobs (langchain.llms.LlamaCpp attribute) (langchain.llms.Writer attribute) lookup_tool() (langchain.agents.AgentExecutor method) M MarkdownTextSplitter (class in langchain.text_splitter) max_checks (langchain.chains.LLMSummarizationCheckerChain attribute) max_execution_time (langchain.agents.AgentExecutor attribute) (langchain.agents.MRKLChain attribute) (langchain.agents.ReActChain attribute) (langchain.agents.SelfAskWithSearchChain attribute) max_iterations (langchain.agents.AgentExecutor attribute) (langchain.agents.MRKLChain attribute) (langchain.agents.ReActChain attribute) (langchain.agents.SelfAskWithSearchChain attribute) max_length (langchain.llms.NLPCloud attribute) (langchain.llms.Petals attribute) (langchain.prompts.example_selector.LengthBasedExampleSelector attribute) max_marginal_relevance_search() (langchain.vectorstores.Chroma method) (langchain.vectorstores.DeepLake method) (langchain.vectorstores.FAISS method) (langchain.vectorstores.Milvus method) (langchain.vectorstores.Qdrant method) (langchain.vectorstores.VectorStore method) max_marginal_relevance_search_by_vector() (langchain.vectorstores.Chroma method) (langchain.vectorstores.DeepLake method) (langchain.vectorstores.FAISS method) (langchain.vectorstores.VectorStore method)
https://python.langchain.com/en/latest/genindex.html
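max_marginal_relevance_search() in this row balances similarity to the query against diversity among results; a hedged sketch on FAISS (the corpus and keyword arguments are assumptions):

```python
# Minimal sketch of MMR search, indexed above (OPENAI_API_KEY assumed).
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

db = FAISS.from_texts(["cats", "kittens", "felines", "dogs"], OpenAIEmbeddings())
# fetch_k candidates are pulled by similarity; k diverse ones are returned.
docs = db.max_marginal_relevance_search("cat", k=2, fetch_k=4)
```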
c850c360a1e9-23
(langchain.vectorstores.FAISS method) (langchain.vectorstores.VectorStore method) max_new_tokens (langchain.llms.Petals attribute) max_retries (langchain.embeddings.OpenAIEmbeddings attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.OpenAIChat attribute) (langchain.llms.PromptLayerOpenAIChat attribute) max_tokens (langchain.llms.AzureOpenAI attribute) (langchain.llms.Cohere attribute) (langchain.llms.GooseAI attribute) (langchain.llms.LlamaCpp attribute) max_tokens_for_prompt() (langchain.llms.AzureOpenAI method) (langchain.llms.OpenAI method) (langchain.llms.PromptLayerOpenAI method) max_tokens_limit (langchain.chains.ConversationalRetrievalChain attribute) (langchain.chains.RetrievalQAWithSourcesChain attribute) (langchain.chains.VectorDBQAWithSourcesChain attribute) max_tokens_per_generation (langchain.llms.RWKV attribute) max_tokens_to_sample (langchain.llms.Anthropic attribute) maximum_tokens (langchain.llms.AlephAlpha attribute) maxTokens (langchain.llms.AI21 attribute) memory (langchain.agents.MRKLChain attribute) (langchain.agents.ReActChain attribute) (langchain.agents.SelfAskWithSearchChain attribute) (langchain.chains.ConversationChain attribute) merge_from() (langchain.vectorstores.FAISS method) METADATA_KEY (langchain.vectorstores.Qdrant attribute) Milvus (class in langchain.vectorstores) min_length (langchain.llms.NLPCloud attribute) min_tokens (langchain.llms.GooseAI attribute) minimum_tokens (langchain.llms.AlephAlpha attribute) minTokens (langchain.llms.AI21 attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-24
minTokens (langchain.llms.AI21 attribute) model (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute) (langchain.embeddings.CohereEmbeddings attribute) (langchain.llms.AI21 attribute) (langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute) (langchain.llms.Cohere attribute) (langchain.llms.GPT4All attribute) (langchain.llms.RWKV attribute) model_id (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute) (langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute) (langchain.llms.HuggingFacePipeline attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) (langchain.llms.Writer attribute) model_key (langchain.llms.Banana attribute) model_kwargs (langchain.embeddings.HuggingFaceHubEmbeddings attribute) (langchain.embeddings.SagemakerEndpointEmbeddings attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.Banana attribute) (langchain.llms.CerebriumAI attribute) (langchain.llms.GooseAI attribute) (langchain.llms.HuggingFaceEndpoint attribute) (langchain.llms.HuggingFaceHub attribute) (langchain.llms.HuggingFacePipeline attribute) (langchain.llms.Modal attribute) (langchain.llms.OpenAIChat attribute) (langchain.llms.Petals attribute) (langchain.llms.PromptLayerOpenAIChat attribute) (langchain.llms.SagemakerEndpoint attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) (langchain.llms.StochasticAI attribute) model_load_fn (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-25
model_load_fn (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) (langchain.llms.SelfHostedPipeline attribute) model_name (langchain.chains.OpenAIModerationChain attribute) (langchain.embeddings.HuggingFaceEmbeddings attribute) (langchain.embeddings.HuggingFaceInstructEmbeddings attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.GooseAI attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.OpenAIChat attribute) (langchain.llms.Petals attribute) (langchain.llms.PromptLayerOpenAIChat attribute) model_path (langchain.llms.LlamaCpp attribute) model_reqs (langchain.embeddings.SelfHostedHuggingFaceEmbeddings attribute) (langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) (langchain.llms.SelfHostedPipeline attribute) model_url (langchain.embeddings.TensorflowHubEmbeddings attribute) modelname_to_contextsize() (langchain.llms.AzureOpenAI method) (langchain.llms.OpenAI method) (langchain.llms.PromptLayerOpenAI method) module langchain.agents langchain.chains langchain.docstore langchain.embeddings langchain.llms langchain.prompts langchain.prompts.example_selector langchain.python langchain.serpapi langchain.text_splitter langchain.utilities.searx_search langchain.vectorstores N n (langchain.llms.AlephAlpha attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.GooseAI attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-26
(langchain.llms.AzureOpenAI attribute) (langchain.llms.GooseAI attribute) n_batch (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) n_ctx (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) n_parts (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) n_predict (langchain.llms.GPT4All attribute) n_threads (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) NLTKTextSplitter (class in langchain.text_splitter) normalize (langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding attribute) num_beams (langchain.llms.NLPCloud attribute) num_return_sequences (langchain.llms.NLPCloud attribute) numResults (langchain.llms.AI21 attribute) O observation_prefix (langchain.agents.Agent property) (langchain.agents.ConversationalAgent property) (langchain.agents.ConversationalChatAgent property) (langchain.agents.ZeroShotAgent property) openai_api_key (langchain.chains.OpenAIModerationChain attribute) OpenSearchVectorSearch (class in langchain.vectorstores) output_key (langchain.chains.QAGenerationChain attribute) output_keys (langchain.chains.ConstitutionalChain property) (langchain.chains.HypotheticalDocumentEmbedder property) (langchain.chains.QAGenerationChain property) output_parser (langchain.agents.ConversationalChatAgent attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-27
output_parser (langchain.agents.ConversationalChatAgent attribute) (langchain.agents.LLMSingleActionAgent attribute) (langchain.prompts.BasePromptTemplate attribute) output_variables (langchain.chains.TransformChain attribute) P p (langchain.llms.Cohere attribute) param_mapping (langchain.chains.OpenAPIEndpointChain attribute) params (langchain.serpapi.SerpAPIWrapper attribute) (langchain.utilities.searx_search.SearxSearchWrapper attribute) parse() (langchain.agents.AgentOutputParser method) partial() (langchain.prompts.BasePromptTemplate method) (langchain.prompts.ChatPromptTemplate method) penalty_alpha_frequency (langchain.llms.RWKV attribute) penalty_alpha_presence (langchain.llms.RWKV attribute) penalty_bias (langchain.llms.AlephAlpha attribute) penalty_exceptions (langchain.llms.AlephAlpha attribute) penalty_exceptions_include_stop_sequences (langchain.llms.AlephAlpha attribute) persist() (langchain.vectorstores.Chroma method) (langchain.vectorstores.DeepLake method) Pinecone (class in langchain.vectorstores) plan() (langchain.agents.Agent method) (langchain.agents.BaseMultiActionAgent method) (langchain.agents.BaseSingleActionAgent method) (langchain.agents.LLMSingleActionAgent method) predict() (langchain.chains.LLMChain method) predict_and_parse() (langchain.chains.LLMChain method) prefix (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) prefix_messages (langchain.llms.OpenAIChat attribute) (langchain.llms.PromptLayerOpenAIChat attribute) prep_prompts() (langchain.chains.LLMChain method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-28
prep_prompts() (langchain.chains.LLMChain method) prep_streaming_params() (langchain.llms.AzureOpenAI method) (langchain.llms.OpenAI method) (langchain.llms.PromptLayerOpenAI method) presence_penalty (langchain.llms.AlephAlpha attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.Cohere attribute) (langchain.llms.GooseAI attribute) presencePenalty (langchain.llms.AI21 attribute) Prompt (in module langchain.prompts) prompt (langchain.chains.ConversationChain attribute) (langchain.chains.LLMBashChain attribute) (langchain.chains.LLMChain attribute) (langchain.chains.LLMMathChain attribute) (langchain.chains.PALChain attribute) (langchain.chains.SQLDatabaseChain attribute) python_globals (langchain.chains.PALChain attribute) python_locals (langchain.chains.PALChain attribute) PythonCodeTextSplitter (class in langchain.text_splitter) Q qa_chain (langchain.chains.GraphQAChain attribute) Qdrant (class in langchain.vectorstores) query_instruction (langchain.embeddings.HuggingFaceInstructEmbeddings attribute) (langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings attribute) query_suffix (langchain.utilities.searx_search.SearxSearchWrapper attribute) R random_seed (langchain.llms.Writer attribute) raw_completion (langchain.llms.AlephAlpha attribute) REACT_DOCSTORE (langchain.agents.AgentType attribute) RecursiveCharacterTextSplitter (class in langchain.text_splitter) reduce_k_below_max_tokens (langchain.chains.RetrievalQAWithSourcesChain attribute) (langchain.chains.VectorDBQAWithSourcesChain attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-29
(langchain.chains.VectorDBQAWithSourcesChain attribute) region_name (langchain.embeddings.SagemakerEndpointEmbeddings attribute) (langchain.llms.SagemakerEndpoint attribute) remove_end_sequence (langchain.llms.NLPCloud attribute) remove_input (langchain.llms.NLPCloud attribute) repeat_last_n (langchain.llms.GPT4All attribute) repeat_penalty (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) repetition_penalties_include_completion (langchain.llms.AlephAlpha attribute) repetition_penalties_include_prompt (langchain.llms.AlephAlpha attribute) repetition_penalty (langchain.llms.ForefrontAI attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.Writer attribute) repo_id (langchain.embeddings.HuggingFaceHubEmbeddings attribute) (langchain.llms.HuggingFaceHub attribute) request_timeout (langchain.llms.AzureOpenAI attribute) requests (langchain.chains.OpenAPIEndpointChain attribute) requests_wrapper (langchain.chains.APIChain attribute) (langchain.chains.LLMRequestsChain attribute) results() (langchain.serpapi.SerpAPIWrapper method) (langchain.utilities.searx_search.SearxSearchWrapper method) retriever (langchain.chains.ConversationalRetrievalChain attribute) (langchain.chains.RetrievalQA attribute) (langchain.chains.RetrievalQAWithSourcesChain attribute) return_all (langchain.chains.SequentialChain attribute) return_direct (langchain.chains.SQLDatabaseChain attribute) return_intermediate_steps (langchain.agents.AgentExecutor attribute) (langchain.agents.MRKLChain attribute) (langchain.agents.ReActChain attribute) (langchain.agents.SelfAskWithSearchChain attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-30
(langchain.agents.SelfAskWithSearchChain attribute) (langchain.chains.OpenAPIEndpointChain attribute) (langchain.chains.PALChain attribute) (langchain.chains.SQLDatabaseChain attribute) (langchain.chains.SQLDatabaseSequentialChain attribute) return_stopped_response() (langchain.agents.Agent method) (langchain.agents.BaseMultiActionAgent method) (langchain.agents.BaseSingleActionAgent method) return_values (langchain.agents.Agent property) (langchain.agents.BaseMultiActionAgent property) (langchain.agents.BaseSingleActionAgent property) revised_answer_prompt (langchain.chains.LLMCheckerChain attribute) revised_summary_prompt (langchain.chains.LLMSummarizationCheckerChain attribute) revision_chain (langchain.chains.ConstitutionalChain attribute) run() (langchain.python.PythonREPL method) (langchain.serpapi.SerpAPIWrapper method) (langchain.utilities.searx_search.SearxSearchWrapper method) rwkv_verbose (langchain.llms.RWKV attribute) S save() (langchain.agents.AgentExecutor method) (langchain.agents.BaseMultiActionAgent method) (langchain.agents.BaseSingleActionAgent method) (langchain.llms.AI21 method) (langchain.llms.AlephAlpha method) (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.Banana method) (langchain.llms.CerebriumAI method) (langchain.llms.Cohere method) (langchain.llms.DeepInfra method) (langchain.llms.ForefrontAI method) (langchain.llms.GooseAI method) (langchain.llms.GPT4All method) (langchain.llms.HuggingFaceEndpoint method) (langchain.llms.HuggingFaceHub method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-31
(langchain.llms.HuggingFaceHub method) (langchain.llms.HuggingFacePipeline method) (langchain.llms.LlamaCpp method) (langchain.llms.Modal method) (langchain.llms.NLPCloud method) (langchain.llms.OpenAI method) (langchain.llms.OpenAIChat method) (langchain.llms.Petals method) (langchain.llms.PromptLayerOpenAI method) (langchain.llms.PromptLayerOpenAIChat method) (langchain.llms.Replicate method) (langchain.llms.RWKV method) (langchain.llms.SagemakerEndpoint method) (langchain.llms.SelfHostedHuggingFaceLLM method) (langchain.llms.SelfHostedPipeline method) (langchain.llms.StochasticAI method) (langchain.llms.Writer method) (langchain.prompts.BasePromptTemplate method) (langchain.prompts.ChatPromptTemplate method) save_agent() (langchain.agents.AgentExecutor method) save_local() (langchain.vectorstores.FAISS method) search() (langchain.docstore.InMemoryDocstore method) (langchain.docstore.Wikipedia method) (langchain.vectorstores.DeepLake method) search_kwargs (langchain.chains.ChatVectorDBChain attribute) (langchain.chains.VectorDBQA attribute) (langchain.chains.VectorDBQAWithSourcesChain attribute) search_type (langchain.chains.VectorDBQA attribute) searx_host (langchain.utilities.searx_search.SearxSearchWrapper attribute) SearxResults (class in langchain.utilities.searx_search) seed (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) select_examples() (langchain.prompts.example_selector.LengthBasedExampleSelector method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-32
select_examples() (langchain.prompts.example_selector.LengthBasedExampleSelector method) (langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector method) (langchain.prompts.example_selector.SemanticSimilarityExampleSelector method) SELF_ASK_WITH_SEARCH (langchain.agents.AgentType attribute) serpapi_api_key (langchain.serpapi.SerpAPIWrapper attribute) similarity_search() (langchain.vectorstores.AtlasDB method) (langchain.vectorstores.Chroma method) (langchain.vectorstores.DeepLake method) (langchain.vectorstores.ElasticVectorSearch method) (langchain.vectorstores.FAISS method) (langchain.vectorstores.Milvus method) (langchain.vectorstores.OpenSearchVectorSearch method) (langchain.vectorstores.Pinecone method) (langchain.vectorstores.Qdrant method) (langchain.vectorstores.VectorStore method) (langchain.vectorstores.Weaviate method) similarity_search_by_vector() (langchain.vectorstores.Chroma method) (langchain.vectorstores.DeepLake method) (langchain.vectorstores.FAISS method) (langchain.vectorstores.VectorStore method) similarity_search_with_score() (langchain.vectorstores.Chroma method) (langchain.vectorstores.DeepLake method) (langchain.vectorstores.FAISS method) (langchain.vectorstores.Milvus method) (langchain.vectorstores.Pinecone method) (langchain.vectorstores.Qdrant method) similarity_search_with_score_by_vector() (langchain.vectorstores.FAISS method) SpacyTextSplitter (class in langchain.text_splitter) split_documents() (langchain.text_splitter.TextSplitter method) split_text() (langchain.text_splitter.CharacterTextSplitter method) (langchain.text_splitter.NLTKTextSplitter method) (langchain.text_splitter.RecursiveCharacterTextSplitter method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-33
(langchain.text_splitter.RecursiveCharacterTextSplitter method) (langchain.text_splitter.SpacyTextSplitter method) (langchain.text_splitter.TextSplitter method) (langchain.text_splitter.TokenTextSplitter method) sql_chain (langchain.chains.SQLDatabaseSequentialChain attribute) stop (langchain.agents.LLMSingleActionAgent attribute) (langchain.chains.PALChain attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.Writer attribute) stop_sequences (langchain.llms.AlephAlpha attribute) strategy (langchain.llms.RWKV attribute) stream() (langchain.llms.Anthropic method) (langchain.llms.AzureOpenAI method) (langchain.llms.OpenAI method) (langchain.llms.PromptLayerOpenAI method) streaming (langchain.llms.Anthropic attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.GPT4All attribute) (langchain.llms.OpenAIChat attribute) (langchain.llms.PromptLayerOpenAIChat attribute) strip_outputs (langchain.chains.SimpleSequentialChain attribute) suffix (langchain.llms.LlamaCpp attribute) (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) T task (langchain.embeddings.HuggingFaceHubEmbeddings attribute) (langchain.llms.HuggingFaceEndpoint attribute) (langchain.llms.HuggingFaceHub attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) temp (langchain.llms.GPT4All attribute) temperature (langchain.llms.AI21 attribute) (langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-34
(langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.Cohere attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.GooseAI attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.Petals attribute) (langchain.llms.RWKV attribute) (langchain.llms.Writer attribute) template (langchain.prompts.PromptTemplate attribute) template_format (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) (langchain.prompts.PromptTemplate attribute) text_length (langchain.chains.LLMRequestsChain attribute) text_splitter (langchain.chains.AnalyzeDocumentChain attribute) (langchain.chains.MapReduceChain attribute) (langchain.chains.QAGenerationChain attribute) TextSplitter (class in langchain.text_splitter) tokenizer (langchain.llms.Petals attribute) tokens (langchain.llms.AlephAlpha attribute) tokens_path (langchain.llms.RWKV attribute) tokens_to_generate (langchain.llms.Writer attribute) TokenTextSplitter (class in langchain.text_splitter) tool() (in module langchain.agents) tool_run_logging_kwargs() (langchain.agents.Agent method) (langchain.agents.BaseMultiActionAgent method) (langchain.agents.BaseSingleActionAgent method) (langchain.agents.LLMSingleActionAgent method) tools (langchain.agents.AgentExecutor attribute) (langchain.agents.MRKLChain attribute) (langchain.agents.ReActChain attribute) (langchain.agents.SelfAskWithSearchChain attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-35
(langchain.agents.SelfAskWithSearchChain attribute) top_k (langchain.chains.SQLDatabaseChain attribute) (langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.Petals attribute) (langchain.llms.Writer attribute) top_k_docs_for_context (langchain.chains.ChatVectorDBChain attribute) top_p (langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.GooseAI attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.Petals attribute) (langchain.llms.RWKV attribute) (langchain.llms.Writer attribute) topP (langchain.llms.AI21 attribute) transform (langchain.chains.TransformChain attribute) truncate (langchain.embeddings.CohereEmbeddings attribute) (langchain.llms.Cohere attribute) U unsecure (langchain.utilities.searx_search.SearxSearchWrapper attribute) update_forward_refs() (langchain.llms.AI21 class method) (langchain.llms.AlephAlpha class method) (langchain.llms.Anthropic class method) (langchain.llms.AzureOpenAI class method) (langchain.llms.Banana class method) (langchain.llms.CerebriumAI class method) (langchain.llms.Cohere class method) (langchain.llms.DeepInfra class method)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-36
(langchain.llms.DeepInfra class method) (langchain.llms.ForefrontAI class method) (langchain.llms.GooseAI class method) (langchain.llms.GPT4All class method) (langchain.llms.HuggingFaceEndpoint class method) (langchain.llms.HuggingFaceHub class method) (langchain.llms.HuggingFacePipeline class method) (langchain.llms.LlamaCpp class method) (langchain.llms.Modal class method) (langchain.llms.NLPCloud class method) (langchain.llms.OpenAI class method) (langchain.llms.OpenAIChat class method) (langchain.llms.Petals class method) (langchain.llms.PromptLayerOpenAI class method) (langchain.llms.PromptLayerOpenAIChat class method) (langchain.llms.Replicate class method) (langchain.llms.RWKV class method) (langchain.llms.SagemakerEndpoint class method) (langchain.llms.SelfHostedHuggingFaceLLM class method) (langchain.llms.SelfHostedPipeline class method) (langchain.llms.StochasticAI class method) (langchain.llms.Writer class method) use_mlock (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) use_multiplicative_presence_penalty (langchain.llms.AlephAlpha attribute) V validate_template (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) (langchain.prompts.PromptTemplate attribute) VectorStore (class in langchain.vectorstores) vectorstore (langchain.chains.ChatVectorDBChain attribute) (langchain.chains.VectorDBQA attribute) (langchain.chains.VectorDBQAWithSourcesChain attribute)
https://python.langchain.com/en/latest/genindex.html
c850c360a1e9-37
(langchain.chains.VectorDBQAWithSourcesChain attribute) (langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute) verbose (langchain.agents.MRKLChain attribute) (langchain.agents.ReActChain attribute) (langchain.agents.SelfAskWithSearchChain attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.OpenAI attribute) (langchain.llms.OpenAIChat attribute) vocab_only (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) W Weaviate (class in langchain.vectorstores) Wikipedia (class in langchain.docstore) Z ZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute)
https://python.langchain.com/en/latest/genindex.html
0d886e15c527-0
LangChain Ecosystem# Guides for how other companies/products can be used with LangChain AI21 Labs Aim Apify AtlasDB Banana CerebriumAI Chroma ClearML Integration Getting API Credentials Setting Up Scenario 1: Just an LLM Scenario 2: Creating an agent with tools Tips and Next Steps Cohere DeepInfra Deep Lake ForefrontAI Google Search Wrapper Google Serper Wrapper GooseAI GPT4All Graphsignal Hazy Research Helicone Hugging Face Jina Llama.cpp Milvus Modal NLPCloud OpenAI OpenSearch Petals PGVector Pinecone PromptLayer Qdrant Replicate Runhouse RWKV-4 SearxNG Search API SerpAPI StochasticAI Unstructured Weights & Biases Weaviate Wolfram Alpha Wrapper Writer
https://python.langchain.com/en/latest/ecosystem.html
24188b0e0762-0
Tracing# By enabling tracing in your LangChain runs, you’ll be able to more effectively visualize, step through, and debug your chains and agents. First, you should install tracing and set up your environment properly. You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha). If you’re interested in using the hosted platform, please fill out the form here. Locally Hosted Setup Cloud Hosted Setup Tracing Walkthrough# When you first access the UI, you should see a page with your tracing sessions. An initial one “default” should already be created for you. A session is just a way to group traces together. If you click on a session, it will take you to a page with no recorded traces that says “No Runs.” You can create a new session with the new session form. If we click on the default session, we can see that to start we have no traces stored. If we now start running chains and agents with tracing enabled, we will see data show up here. To do so, we can run this notebook as an example. After running it, we will see an initial trace show up. From here we can explore the trace at a high level by clicking on the arrow to show nested runs. We can keep on clicking further and further down to explore deeper and deeper. We can also click on the “Explore” button of the top level run to dive even deeper. Here, we can see the inputs and outputs in full, as well as all the nested traces. We can keep on exploring each of these nested traces in more detail. For example, here is the lowest level trace with the exact inputs/outputs to the LLM. Changing Sessions#
https://python.langchain.com/en/latest/tracing.html
24188b0e0762-1
Changing Sessions# To initially record traces to a session other than "default", you can set the LANGCHAIN_SESSION environment variable to the name of the session you want to record to: import os os.environ["LANGCHAIN_HANDLER"] = "langchain" os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually exists. You can create a new session in the UI. To switch sessions mid-script or mid-notebook, do NOT set the LANGCHAIN_SESSION environment variable. Instead: langchain.set_tracing_callback_manager(session_name="my_session")
https://python.langchain.com/en/latest/tracing.html
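Putting the two mechanisms above together — a minimal sketch, assuming the tracing backend is already running and that both session names (placeholders here) exist in the UI:

```python
import os

import langchain

# Enable tracing before any chains/agents run
os.environ["LANGCHAIN_HANDLER"] = "langchain"
# Record the first traces to an existing session
os.environ["LANGCHAIN_SESSION"] = "my_session"

# ... run chains/agents here; their traces land in "my_session" ...

# Switch sessions mid-script WITHOUT touching the environment variable
langchain.set_tracing_callback_manager(session_name="another_session")
```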
e5c95207e101-0
API References# All of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, and APIs in LangChain. Prompts Utilities Chains Agents
https://python.langchain.com/en/latest/reference.html
907892935143-0
Glossary# This is a collection of terminology commonly used when developing LLM applications. It contains references to external papers or sources where the concept was first introduced, as well as to places in LangChain where the concept is used. Chain of Thought Prompting# A prompting technique used to encourage the model to generate a series of intermediate reasoning steps. A less formal way to induce this behavior is to include “Let’s think step-by-step” in the prompt. Resources: Chain-of-Thought Paper Step-by-Step Paper Action Plan Generation# A prompt usage that uses a language model to generate actions to take. The results of these actions can then be fed back into the language model to generate a subsequent action. Resources: WebGPT Paper SayCan Paper ReAct Prompting# A prompting technique that combines Chain-of-Thought prompting with action plan generation. This induces the model to think about what action to take, then take it. Resources: Paper LangChain Example Self-ask# A prompting method that builds on top of chain-of-thought prompting. In this method, the model explicitly asks itself follow-up questions, which are then answered by an external search engine. Resources: Paper LangChain Example Prompt Chaining# Combining multiple LLM calls together, with the output of one step being the input to the next. Resources: PromptChainer Paper Language Model Cascades ICE Primer Book Socratic Models Memetic Proxy#
https://python.langchain.com/en/latest/glossary.html
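To make the informal “Let’s think step-by-step” trick above concrete — a minimal sketch using LangChain’s own PromptTemplate and LLMChain (the question and temperature are arbitrary choices for illustration):

```python
from langchain import LLMChain, OpenAI, PromptTemplate

# Appending "Let's think step-by-step" nudges the model to emit
# intermediate reasoning before its final answer.
prompt = PromptTemplate(
    template="Q: {question}\nA: Let's think step-by-step.",
    input_variables=["question"],
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run("If I buy 3 apples and then 2 more, how many do I have?"))
```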
907892935143-1
Language Model Cascades ICE Primer Book Socratic Models Memetic Proxy# Encouraging the LLM to respond in a certain way by framing the discussion in a context that the model knows of and that will result in that type of response. For example, as a conversation between a student and a teacher. Resources: Paper Self Consistency# A decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer. It is most effective when combined with chain-of-thought prompting. Resources: Paper Inception# Also called “First Person Instruction”. Encouraging the model to think a certain way by including the start of the model’s response in the prompt. Resources: Example MemPrompt# MemPrompt maintains a memory of errors and user feedback, and uses them to prevent repetition of mistakes. Resources: Paper
https://python.langchain.com/en/latest/glossary.html
000e0716e4a1-0
Model Comparison# Constructing your language model application will likely involve choosing among many different prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way. LangChain provides the concept of a ModelLaboratory to test out and try different models. from langchain import LLMChain, OpenAI, Cohere, HuggingFaceHub, PromptTemplate from langchain.model_laboratory import ModelLaboratory llms = [ OpenAI(temperature=0), Cohere(model="command-xlarge-20221108", max_tokens=20, temperature=0), HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":1}) ] model_lab = ModelLaboratory.from_llms(llms) model_lab.compare("What color is a flamingo?") Input: What color is a flamingo? OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} Flamingos are pink. Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} Pink HuggingFaceHub Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1} pink
https://python.langchain.com/en/latest/model_laboratory.html
000e0716e4a1-1
pink prompt = PromptTemplate(template="What is the capital of {state}?", input_variables=["state"]) model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt) model_lab_with_prompt.compare("New York") Input: New York OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} The capital of New York is Albany. Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} The capital of New York is Albany. HuggingFaceHub Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1} st john s from langchain import SelfAskWithSearchChain, SerpAPIWrapper open_ai_llm = OpenAI(temperature=0) search = SerpAPIWrapper() self_ask_with_search_openai = SelfAskWithSearchChain(llm=open_ai_llm, search_chain=search, verbose=True) cohere_llm = Cohere(temperature=0, model="command-xlarge-20221108") search = SerpAPIWrapper() self_ask_with_search_cohere = SelfAskWithSearchChain(llm=cohere_llm, search_chain=search, verbose=True) chains = [self_ask_with_search_openai, self_ask_with_search_cohere] names = [str(open_ai_llm), str(cohere_llm)]
https://python.langchain.com/en/latest/model_laboratory.html
000e0716e4a1-2
names = [str(open_ai_llm), str(cohere_llm)] model_lab = ModelLaboratory(chains, names=names) model_lab.compare("What is the hometown of the reigning men's U.S. Open champion?") Input: What is the hometown of the reigning men's U.S. Open champion? OpenAI Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1} > Entering new chain... What is the hometown of the reigning men's U.S. Open champion? Are follow up questions needed here: Yes. Follow up: Who is the reigning men's U.S. Open champion? Intermediate answer: Carlos Alcaraz. Follow up: Where is Carlos Alcaraz from? Intermediate answer: El Palmar, Spain. So the final answer is: El Palmar, Spain > Finished chain. So the final answer is: El Palmar, Spain Cohere Params: {'model': 'command-xlarge-20221108', 'max_tokens': 256, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0} > Entering new chain... What is the hometown of the reigning men's U.S. Open champion? Are follow up questions needed here: Yes. Follow up: Who is the reigning men's U.S. Open champion? Intermediate answer: Carlos Alcaraz. So the final answer is: Carlos Alcaraz > Finished chain. So the final answer is: Carlos Alcaraz
https://python.langchain.com/en/latest/model_laboratory.html
6fd108073ec2-0
Deployments# So you’ve made a really cool chain - now what? How do you deploy it and make it easily sharable with the world? This section covers several options for that. Note that these are meant as quick deployment options for prototypes and demos, and not for production systems. If you are looking for help with deployment of a production system, please contact us directly. What follows is a list of template GitHub repositories that are intended to be very easy to fork and modify to use your chain. This is far from an exhaustive list of options, and we are EXTREMELY open to contributions here. Streamlit# This repo serves as a template for how to deploy a LangChain app with Streamlit. It implements a chatbot interface. It also contains instructions for how to deploy this app on the Streamlit platform. Gradio (on Hugging Face)# This repo serves as a template for how to deploy a LangChain app with Gradio. It implements a chatbot interface, with a “Bring-Your-Own-Token” approach (nice for not racking up big bills). It also contains instructions for how to deploy this app on the Hugging Face platform. This is heavily influenced by James Weaver’s excellent examples. Beam# This repo serves as a template for how to deploy a LangChain app with Beam. It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API. Vercel# A minimal example of how to run LangChain on Vercel using Flask. SteamShip# This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship.
https://python.langchain.com/en/latest/deployments.html
6fd108073ec2-1
This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc. Langchain-serve# This repository allows users to serve local chains and agents as RESTful, gRPC, or Websocket APIs thanks to Jina. Deploy your chains & agents with ease and enjoy independent scaling, serverless and autoscaling APIs, as well as a Streamlit playground on Jina AI Cloud.
https://python.langchain.com/en/latest/deployments.html
bc41a7e94c0f-0
Source code for langchain.text_splitter """Functionality for splitting text.""" from __future__ import annotations import copy import logging from abc import ABC, abstractmethod from typing import ( AbstractSet, Any, Callable, Collection, Iterable, List, Literal, Optional, Union, ) from langchain.docstore.document import Document logger = logging.getLogger() [docs]class TextSplitter(ABC): """Interface for splitting text into chunks.""" def __init__( self, chunk_size: int = 4000, chunk_overlap: int = 200, length_function: Callable[[str], int] = len, ): """Create a new TextSplitter.""" if chunk_overlap > chunk_size: raise ValueError( f"Got a larger chunk overlap ({chunk_overlap}) than chunk size " f"({chunk_size}), should be smaller." ) self._chunk_size = chunk_size self._chunk_overlap = chunk_overlap self._length_function = length_function [docs] @abstractmethod def split_text(self, text: str) -> List[str]: """Split text into multiple components.""" [docs] def create_documents( self, texts: List[str], metadatas: Optional[List[dict]] = None ) -> List[Document]: """Create documents from a list of texts.""" _metadatas = metadatas or [{}] * len(texts) documents = [] for i, text in enumerate(texts): for chunk in self.split_text(text): new_doc = Document( page_content=chunk, metadata=copy.deepcopy(_metadatas[i]) )
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
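A quick usage sketch of the interface so far, using the CharacterTextSplitter subclass defined later in this module (the sizes here are arbitrary illustrative choices):

```python
from langchain.text_splitter import CharacterTextSplitter

# chunk_overlap must be smaller than chunk_size, or __init__ raises ValueError
splitter = CharacterTextSplitter(separator=" ", chunk_size=20, chunk_overlap=5)

docs = splitter.create_documents(
    ["one two three four five six seven eight nine ten"],
    metadatas=[{"source": "example"}],
)
for doc in docs:
    # each chunk carries a deep copy of its source text's metadata
    print(doc.page_content, doc.metadata)
```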
bc41a7e94c0f-1
page_content=chunk, metadata=copy.deepcopy(_metadatas[i]) ) documents.append(new_doc) return documents [docs] def split_documents(self, documents: List[Document]) -> List[Document]: """Split documents.""" texts = [doc.page_content for doc in documents] metadatas = [doc.metadata for doc in documents] return self.create_documents(texts, metadatas) def _join_docs(self, docs: List[str], separator: str) -> Optional[str]: text = separator.join(docs) text = text.strip() if text == "": return None else: return text def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]: # We now want to combine these smaller pieces into medium size # chunks to send to the LLM. separator_len = self._length_function(separator) docs = [] current_doc: List[str] = [] total = 0 for d in splits: _len = self._length_function(d) if ( total + _len + (separator_len if len(current_doc) > 0 else 0) > self._chunk_size ): if total > self._chunk_size: logger.warning( f"Created a chunk of size {total}, " f"which is longer than the specified {self._chunk_size}" ) if len(current_doc) > 0: doc = self._join_docs(current_doc, separator) if doc is not None: docs.append(doc) # Keep on popping if: # - we have a larger chunk than in the chunk overlap
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
bc41a7e94c0f-2
# - we have a larger chunk than in the chunk overlap # - or if we still have any chunks and the length is long while total > self._chunk_overlap or ( total + _len + (separator_len if len(current_doc) > 0 else 0) > self._chunk_size and total > 0 ): total -= self._length_function(current_doc[0]) + ( separator_len if len(current_doc) > 1 else 0 ) current_doc = current_doc[1:] current_doc.append(d) total += _len + (separator_len if len(current_doc) > 1 else 0) doc = self._join_docs(current_doc, separator) if doc is not None: docs.append(doc) return docs [docs] @classmethod def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter: """Text splitter that uses HuggingFace tokenizer to count length.""" try: from transformers import PreTrainedTokenizerBase if not isinstance(tokenizer, PreTrainedTokenizerBase): raise ValueError( "Tokenizer received was not an instance of PreTrainedTokenizerBase" ) def _huggingface_tokenizer_length(text: str) -> int: return len(tokenizer.encode(text)) except ImportError: raise ValueError( "Could not import transformers python package. " "Please install it with `pip install transformers`." ) return cls(length_function=_huggingface_tokenizer_length, **kwargs) [docs] @classmethod def from_tiktoken_encoder( cls, encoding_name: str = "gpt2",
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
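A hedged sketch of the Hugging Face path above; it assumes `transformers` is installed and downloads the GPT-2 tokenizer on first use:

```python
from transformers import GPT2TokenizerFast

from langchain.text_splitter import CharacterTextSplitter

# GPT2TokenizerFast subclasses PreTrainedTokenizerBase, so it passes the check above
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Chunk length is now measured in GPT-2 tokens rather than characters
splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
chunks = splitter.split_text("some long document text " * 100)
```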
bc41a7e94c0f-3
cls, encoding_name: str = "gpt2", allowed_special: Union[Literal["all"], AbstractSet[str]] = set(), disallowed_special: Union[Literal["all"], Collection[str]] = "all", **kwargs: Any, ) -> TextSplitter: """Text splitter that uses tiktoken encoder to count length.""" try: import tiktoken except ImportError: raise ValueError( "Could not import tiktoken python package. " "This is needed in order to calculate max_tokens_for_prompt. " "Please it install it with `pip install tiktoken`." ) # create a GPT-3 encoder instance enc = tiktoken.get_encoding(encoding_name) def _tiktoken_encoder(text: str, **kwargs: Any) -> int: return len( enc.encode( text, allowed_special=allowed_special, disallowed_special=disallowed_special, **kwargs, ) ) return cls(length_function=_tiktoken_encoder, **kwargs) [docs]class CharacterTextSplitter(TextSplitter): """Implementation of splitting text that looks at characters.""" def __init__(self, separator: str = "\n\n", **kwargs: Any): """Create a new TextSplitter.""" super().__init__(**kwargs) self._separator = separator [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" # First we naively split the large input into a bunch of smaller ones. if self._separator: splits = text.split(self._separator) else: splits = list(text)
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
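Similarly, a sketch of counting length with tiktoken instead (requires `pip install tiktoken`; the default "gpt2" encoding shown above is used):

```python
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=20)
text = "\n\n".join("paragraph number %d" % i for i in range(50))
chunks = splitter.split_text(text)  # splits on "\n\n", then merges by token count
```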
bc41a7e94c0f-4
splits = text.split(self._separator) else: splits = list(text) return self._merge_splits(splits, self._separator) [docs]class TokenTextSplitter(TextSplitter): """Implementation of splitting text that looks at tokens.""" def __init__( self, encoding_name: str = "gpt2", allowed_special: Union[Literal["all"], AbstractSet[str]] = set(), disallowed_special: Union[Literal["all"], Collection[str]] = "all", **kwargs: Any, ): """Create a new TextSplitter.""" super().__init__(**kwargs) try: import tiktoken except ImportError: raise ValueError( "Could not import tiktoken python package. " "This is needed for TokenTextSplitter. " "Please install it with `pip install tiktoken`." ) # create a GPT-3 encoder instance self._tokenizer = tiktoken.get_encoding(encoding_name) self._allowed_special = allowed_special self._disallowed_special = disallowed_special [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" splits = [] input_ids = self._tokenizer.encode( text, allowed_special=self._allowed_special, disallowed_special=self._disallowed_special, ) start_idx = 0 cur_idx = min(start_idx + self._chunk_size, len(input_ids)) chunk_ids = input_ids[start_idx:cur_idx] while start_idx < len(input_ids): splits.append(self._tokenizer.decode(chunk_ids)) start_idx += self._chunk_size - self._chunk_overlap
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
bc41a7e94c0f-5
start_idx += self._chunk_size - self._chunk_overlap cur_idx = min(start_idx + self._chunk_size, len(input_ids)) chunk_ids = input_ids[start_idx:cur_idx] return splits [docs]class RecursiveCharacterTextSplitter(TextSplitter): """Implementation of splitting text that looks at characters. Recursively tries to split by different characters to find one that works. """ def __init__(self, separators: Optional[List[str]] = None, **kwargs: Any): """Create a new TextSplitter.""" super().__init__(**kwargs) self._separators = separators or ["\n\n", "\n", " ", ""] [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" final_chunks = [] # Get appropriate separator to use separator = self._separators[-1] for _s in self._separators: if _s == "": separator = _s break if _s in text: separator = _s break # Now that we have the separator, split the text if separator: splits = text.split(separator) else: splits = list(text) # Now go merging things, recursively splitting longer texts. _good_splits = [] for s in splits: if self._length_function(s) < self._chunk_size: _good_splits.append(s) else: if _good_splits: merged_text = self._merge_splits(_good_splits, separator) final_chunks.extend(merged_text) _good_splits = [] other_info = self.split_text(s)
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
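A small sketch of TokenTextSplitter as implemented above; because it slices the token id sequence directly, chunks can begin or end mid-word:

```python
from langchain.text_splitter import TokenTextSplitter

splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=2)
for chunk in splitter.split_text("LangChain provides a standard interface for chains and agents."):
    print(repr(chunk))
```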
bc41a7e94c0f-6
_good_splits = [] other_info = self.split_text(s) final_chunks.extend(other_info) if _good_splits: merged_text = self._merge_splits(_good_splits, separator) final_chunks.extend(merged_text) return final_chunks [docs]class NLTKTextSplitter(TextSplitter): """Implementation of splitting text that looks at sentences using NLTK.""" def __init__(self, separator: str = "\n\n", **kwargs: Any): """Initialize the NLTK splitter.""" super().__init__(**kwargs) try: from nltk.tokenize import sent_tokenize self._tokenizer = sent_tokenize except ImportError: raise ImportError( "NLTK is not installed, please install it with `pip install nltk`." ) self._separator = separator [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" # First we naively split the large input into a bunch of smaller ones. splits = self._tokenizer(text) return self._merge_splits(splits, self._separator) [docs]class SpacyTextSplitter(TextSplitter): """Implementation of splitting text that looks at sentences using Spacy.""" def __init__( self, separator: str = "\n\n", pipeline: str = "en_core_web_sm", **kwargs: Any ): """Initialize the spacy text splitter.""" super().__init__(**kwargs) try: import spacy except ImportError: raise ImportError( "Spacy is not installed, please install it with `pip install spacy`." ) self._tokenizer = spacy.load(pipeline) self._separator = separator
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
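A sketch of the recursive strategy above — it prefers "\n\n", then "\n", then " ", and finally falls back to single characters:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=60, chunk_overlap=0)
text = "First paragraph.\n\nA second paragraph that is quite a bit longer than the first."
print(splitter.split_text(text))  # the long paragraph is re-split on spaces
```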
bc41a7e94c0f-7
self._tokenizer = spacy.load(pipeline) self._separator = separator [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" splits = (str(s) for s in self._tokenizer(text).sents) return self._merge_splits(splits, self._separator) [docs]class MarkdownTextSplitter(RecursiveCharacterTextSplitter): """Attempts to split the text along Markdown-formatted headings.""" def __init__(self, **kwargs: Any): """Initialize a MarkdownTextSplitter.""" separators = [ # First, try to split along Markdown headings (starting with level 2) "\n## ", "\n### ", "\n#### ", "\n##### ", "\n###### ", # Note the alternative syntax for headings (below) is not handled here # Heading level 2 # --------------- # End of code block "```\n\n", # Horizontal lines "\n\n***\n\n", "\n\n---\n\n", "\n\n___\n\n", # Note that this splitter doesn't handle horizontal lines defined # by *three or more* of ***, ---, or ___, but this is not handled "\n\n", "\n", " ", "", ] super().__init__(separators=separators, **kwargs) [docs]class LatexTextSplitter(RecursiveCharacterTextSplitter): """Attempts to split the text along Latex-formatted layout elements.""" def __init__(self, **kwargs: Any): """Initialize a LatexTextSplitter.""" separators = [
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
bc41a7e94c0f-8
"""Initialize a LatexTextSplitter.""" separators = [ # First, try to split along Latex sections "\n\\chapter{", "\n\\section{", "\n\\subsection{", "\n\\subsubsection{", # Now split by environments "\n\\begin{enumerate}", "\n\\begin{itemize}", "\n\\begin{description}", "\n\\begin{list}", "\n\\begin{quote}", "\n\\begin{quotation}", "\n\\begin{verse}", "\n\\begin{verbatim}", ## Now split by math environments "\n\\begin{align}", "$$", "$", # Now split by the normal type of lines " ", "", ] super().__init__(separators=separators, **kwargs) [docs]class PythonCodeTextSplitter(RecursiveCharacterTextSplitter): """Attempts to split the text along Python syntax.""" def __init__(self, **kwargs: Any): """Initialize a MarkdownTextSplitter.""" separators = [ # First, try to split along class definitions "\nclass ", "\ndef ", "\n\tdef ", # Now split by the normal type of lines "\n\n", "\n", " ", "", ] super().__init__(separators=separators, **kwargs) By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 06, 2023.
https://python.langchain.com/en/latest/_modules/langchain/text_splitter.html
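A sketch of the Markdown-aware variant defined above; the "## " headings below match the "\n## " separator it tries first (note the separator itself is consumed from the following chunk):

```python
from langchain.text_splitter import MarkdownTextSplitter

markdown = "## Setup\n\nInstall the package.\n\n## Usage\n\nImport it and go."
splitter = MarkdownTextSplitter(chunk_size=40, chunk_overlap=0)
print(splitter.split_text(markdown))
```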
68601317a35b-0
Source code for langchain.python """Mock Python REPL.""" import sys from io import StringIO from typing import Dict, Optional from pydantic import BaseModel, Field [docs]class PythonREPL(BaseModel): """Simulates a standalone Python REPL.""" globals: Optional[Dict] = Field(default_factory=dict, alias="_globals") locals: Optional[Dict] = Field(default_factory=dict, alias="_locals") [docs] def run(self, command: str) -> str: """Run command with own globals/locals and returns anything printed.""" old_stdout = sys.stdout sys.stdout = mystdout = StringIO() try: exec(command, self.globals, self.locals) sys.stdout = old_stdout output = mystdout.getvalue() except Exception as e: sys.stdout = old_stdout output = str(e) return output
https://python.langchain.com/en/latest/_modules/langchain/python.html
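A usage sketch of the REPL above — stdout is captured and returned, and exceptions come back as their string form rather than being raised:

```python
from langchain.python import PythonREPL

repl = PythonREPL()
print(repl.run("x = 2 + 2\nprint(x)"))  # -> "4\n"
print(repl.run("1 / 0"))                # -> "division by zero"
```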
2fd77d19d3e4-0
Source code for langchain.vectorstores.pinecone """Wrapper around Pinecone vector database.""" from __future__ import annotations import uuid from typing import Any, Callable, Iterable, List, Optional, Tuple from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.vectorstores.base import VectorStore [docs]class Pinecone(VectorStore): """Wrapper around Pinecone vector database. To use, you should have the ``pinecone-client`` python package installed. Example: .. code-block:: python from langchain.vectorstores import Pinecone from langchain.embeddings.openai import OpenAIEmbeddings import pinecone # The environment should be the one specified next to the API key # in your Pinecone console pinecone.init(api_key="***", environment="...") index = pinecone.Index("langchain-demo") embeddings = OpenAIEmbeddings() vectorstore = Pinecone(index, embeddings.embed_query, "text") """ def __init__( self, index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None, ): """Initialize with Pinecone client.""" try: import pinecone except ImportError: raise ValueError( "Could not import pinecone python package. " "Please install it with `pip install pinecone-client`." ) if not isinstance(index, pinecone.index.Index): raise ValueError( f"client should be an instance of pinecone.index.Index, " f"got {type(index)}" ) self._index = index self._embedding_function = embedding_function self._text_key = text_key
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html
2fd77d19d3e4-1
self._embedding_function = embedding_function self._text_key = text_key self._namespace = namespace [docs] def add_texts( self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any, ) -> List[str]: """Run more texts through the embeddings and add to the vectorstore. Args: texts: Iterable of strings to add to the vectorstore. metadatas: Optional list of metadatas associated with the texts. ids: Optional list of ids to associate with the texts. namespace: Optional pinecone namespace to add the texts to. Returns: List of ids from adding the texts into the vectorstore. """ if namespace is None: namespace = self._namespace # Embed and create the documents docs = [] ids = ids or [str(uuid.uuid4()) for _ in texts] for i, text in enumerate(texts): embedding = self._embedding_function(text) metadata = metadatas[i] if metadatas else {} metadata[self._text_key] = text docs.append((ids[i], embedding, metadata)) # upsert to Pinecone self._index.upsert(vectors=docs, namespace=namespace, batch_size=batch_size) return ids [docs] def similarity_search_with_score( self, query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None,
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html
2fd77d19d3e4-2
filter: Optional[dict] = None, namespace: Optional[str] = None, ) -> List[Tuple[Document, float]]: """Return pinecone documents most similar to query, along with scores. Args: query: Text to look up documents similar to. k: Number of Documents to return. Defaults to 4. filter: Dictionary of argument(s) to filter on metadata namespace: Namespace to search in. Default will search in '' namespace. Returns: List of Documents most similar to the query and score for each """ if namespace is None: namespace = self._namespace query_obj = self._embedding_function(query) docs = [] results = self._index.query( [query_obj], top_k=k, include_metadata=True, namespace=namespace, filter=filter, ) for res in results["matches"]: metadata = res["metadata"] text = metadata.pop(self._text_key) docs.append((Document(page_content=text, metadata=metadata), res["score"])) return docs [docs] def similarity_search( self, query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any, ) -> List[Document]: """Return pinecone documents most similar to query. Args: query: Text to look up documents similar to. k: Number of Documents to return. Defaults to 4. filter: Dictionary of argument(s) to filter on metadata namespace: Namespace to search in. Default will search in '' namespace. Returns:
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html
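A hedged sketch of the scored search above; the API key, environment, and index name are placeholders, and it assumes the index was already populated as in the class docstring:

```python
import pinecone

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="***", environment="...")  # placeholders
index = pinecone.Index("langchain-demo")
vectorstore = Pinecone(index, OpenAIEmbeddings().embed_query, "text")

for doc, score in vectorstore.similarity_search_with_score("What is LangChain?", k=2):
    print(round(score, 3), doc.page_content[:80])
```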
2fd77d19d3e4-3
namespace: Namespace to search in. Default will search in '' namespace. Returns: List of Documents most similar to the query and score for each """ if namespace is None: namespace = self._namespace query_obj = self._embedding_function(query) docs = [] results = self._index.query( [query_obj], top_k=k, include_metadata=True, namespace=namespace, filter=filter, ) for res in results["matches"]: metadata = res["metadata"] text = metadata.pop(self._text_key) docs.append(Document(page_content=text, metadata=metadata)) return docs [docs] @classmethod def from_texts( cls, texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = "text", index_name: Optional[str] = None, namespace: Optional[str] = None, **kwargs: Any, ) -> Pinecone: """Construct Pinecone wrapper from raw documents. This is a user friendly interface that: 1. Embeds documents. 2. Adds the documents to a provided Pinecone index This is intended to be a quick way to get started. Example: .. code-block:: python from langchain import Pinecone from langchain.embeddings import OpenAIEmbeddings import pinecone # The environment should be the one specified next to the API key # in your Pinecone console pinecone.init(api_key="***", environment="...")
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html
2fd77d19d3e4-4
pinecone.init(api_key="***", environment="...") embeddings = OpenAIEmbeddings() pinecone = Pinecone.from_texts( texts, embeddings, index_name="langchain-demo" ) """ try: import pinecone except ImportError: raise ValueError( "Could not import pinecone python package. " "Please install it with `pip install pinecone-client`." ) indexes = pinecone.list_indexes() # checks if provided index exists if index_name in indexes: index = pinecone.Index(index_name) elif len(indexes) == 0: raise ValueError( "No active indexes found in your Pinecone project, " "are you sure you're using the right API key and environment?" ) else: raise ValueError( f"Index '{index_name}' not found in your Pinecone project. " f"Did you mean one of the following indexes: {', '.join(indexes)}" ) for i in range(0, len(texts), batch_size): # set end position of batch i_end = min(i + batch_size, len(texts)) # get batch of texts and ids lines_batch = texts[i:i_end] # create ids if not provided if ids: ids_batch = ids[i:i_end] else: ids_batch = [str(uuid.uuid4()) for n in range(i, i_end)] # create embeddings embeds = embedding.embed_documents(lines_batch) # prep metadata and upsert batch if metadatas: metadata = metadatas[i:i_end] else:
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html
2fd77d19d3e4-5
metadata = metadatas[i:i_end] else: metadata = [{} for _ in range(i, i_end)] for j, line in enumerate(lines_batch): metadata[j][text_key] = line to_upsert = zip(ids_batch, embeds, metadata) # upsert to Pinecone index.upsert(vectors=list(to_upsert), namespace=namespace) return cls(index, embedding.embed_query, text_key, namespace) [docs] @classmethod def from_existing_index( cls, index_name: str, embedding: Embeddings, text_key: str = "text", namespace: Optional[str] = None, ) -> Pinecone: """Load pinecone vectorstore from index name.""" try: import pinecone except ImportError: raise ValueError( "Could not import pinecone python package. " "Please install it with `pip install pinecone-client`." ) return cls( pinecone.Index(index_name), embedding.embed_query, text_key, namespace )
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html
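And a sketch of reconnecting to an index that was populated earlier (for example by `from_texts` above); `pinecone.init` must already have been called, and the index name is a placeholder:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

vectorstore = Pinecone.from_existing_index(
    index_name="langchain-demo",  # placeholder
    embedding=OpenAIEmbeddings(),
    text_key="text",
)
docs = vectorstore.similarity_search("query text", k=4)
```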
409f41501d76-0
Source code for langchain.vectorstores.base """Interface for vector stores.""" from __future__ import annotations from abc import ABC, abstractmethod from typing import Any, Dict, Iterable, List, Optional from pydantic import BaseModel, Field, root_validator from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.schema import BaseRetriever [docs]class VectorStore(ABC): """Interface for vector stores.""" [docs] @abstractmethod def add_texts( self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> List[str]: """Run more texts through the embeddings and add to the vectorstore. Args: texts: Iterable of strings to add to the vectorstore. metadatas: Optional list of metadatas associated with the texts. kwargs: vectorstore specific parameters Returns: List of ids from adding the texts into the vectorstore. """ [docs] def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]: """Run more documents through the embeddings and add to the vectorstore. Args: documents (List[Document]): Documents to add to the vectorstore. Returns: List[str]: List of IDs of the added texts. """ # TODO: Handle the case where the user doesn't provide ids on the Collection texts = [doc.page_content for doc in documents] metadatas = [doc.metadata for doc in documents] return self.add_texts(texts, metadatas, **kwargs) [docs] @abstractmethod def similarity_search(
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html
409f41501d76-1
[docs] @abstractmethod def similarity_search( self, query: str, k: int = 4, **kwargs: Any ) -> List[Document]: """Return docs most similar to query.""" [docs] def similarity_search_by_vector( self, embedding: List[float], k: int = 4, **kwargs: Any ) -> List[Document]: """Return docs most similar to embedding vector. Args: embedding: Embedding to look up documents similar to. k: Number of Documents to return. Defaults to 4. Returns: List of Documents most similar to the query vector. """ raise NotImplementedError [docs] def max_marginal_relevance_search( self, query: str, k: int = 4, fetch_k: int = 20 ) -> List[Document]: """Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Args: query: Text to look up documents similar to. k: Number of Documents to return. Defaults to 4. fetch_k: Number of Documents to fetch to pass to MMR algorithm. Returns: List of Documents selected by maximal marginal relevance. """ raise NotImplementedError [docs] def max_marginal_relevance_search_by_vector( self, embedding: List[float], k: int = 4, fetch_k: int = 20 ) -> List[Document]: """Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Args: embedding: Embedding to look up documents similar to.
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html
409f41501d76-2
Args: embedding: Embedding to look up documents similar to. k: Number of Documents to return. Defaults to 4. fetch_k: Number of Documents to fetch to pass to MMR algorithm. Returns: List of Documents selected by maximal marginal relevance. """ raise NotImplementedError [docs] @classmethod def from_documents( cls, documents: List[Document], embedding: Embeddings, **kwargs: Any, ) -> VectorStore: """Return VectorStore initialized from documents and embeddings.""" texts = [d.page_content for d in documents] metadatas = [d.metadata for d in documents] return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs) [docs] @classmethod @abstractmethod def from_texts( cls, texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> VectorStore: """Return VectorStore initialized from texts and embeddings.""" [docs] def as_retriever(self, **kwargs: Any) -> BaseRetriever: return VectorStoreRetriever(vectorstore=self, **kwargs) class VectorStoreRetriever(BaseRetriever, BaseModel): vectorstore: VectorStore search_type: str = "similarity" search_kwargs: dict = Field(default_factory=dict) class Config: """Configuration for this pydantic object.""" arbitrary_types_allowed = True @root_validator() def validate_search_type(cls, values: Dict) -> Dict: """Validate search type.""" if "search_type" in values:
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html
409f41501d76-3
"""Validate search type.""" if "search_type" in values: search_type = values["search_type"] if search_type not in ("similarity", "mmr"): raise ValueError(f"search_type of {search_type} not allowed.") return values def get_relevant_documents(self, query: str) -> List[Document]: if self.search_type == "similarity": docs = self.vectorstore.similarity_search(query, **self.search_kwargs) elif self.search_type == "mmr": docs = self.vectorstore.max_marginal_relevance_search( query, **self.search_kwargs ) else: raise ValueError(f"search_type of {self.search_type} not allowed.") return docs async def aget_relevant_documents(self, query: str) -> List[Document]: raise NotImplementedError("VectorStoreRetriever does not support async") By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 06, 2023.
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/base.html
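A short sketch of the retriever interface defined above, assuming `vectorstore` is any concrete implementation (such as the Pinecone or Elasticsearch wrappers in this document):

```python
# search_type defaults to "similarity"; "mmr" is also accepted by the
# root_validator, for stores that implement max_marginal_relevance_search
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
docs = retriever.get_relevant_documents("query text")
```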
db18f5297dd2-0
Source code for langchain.vectorstores.elastic_vector_search """Wrapper around Elasticsearch vector database.""" from __future__ import annotations import uuid from abc import ABC from typing import Any, Dict, Iterable, List, Optional from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.utils import get_from_dict_or_env from langchain.vectorstores.base import VectorStore def _default_text_mapping(dim: int) -> Dict: return { "properties": { "text": {"type": "text"}, "vector": {"type": "dense_vector", "dims": dim}, } } def _default_script_query(query_vector: List[float]) -> Dict: return { "script_score": { "query": {"match_all": {}}, "script": { "source": "cosineSimilarity(params.query_vector, 'vector') + 1.0", "params": {"query_vector": query_vector}, }, } } # ElasticVectorSearch is a concrete implementation of the abstract base class # VectorStore, which defines a common interface for all vector database # implementations. By inheriting from the ABC class, ElasticVectorSearch can be # defined as an abstract base class itself, allowing the creation of subclasses with # their own specific implementations. If you plan to subclass ElasticVectorSearch, # you can inherit from it and define your own implementation of the necessary methods # and attributes. [docs]class ElasticVectorSearch(VectorStore, ABC): """Wrapper around Elasticsearch as a vector database. To connect to an Elasticsearch instance that does not require login credentials, pass the Elasticsearch URL and index name along with the embedding object to the constructor. Example: .. code-block:: python
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html
db18f5297dd2-1
embedding object to the constructor. Example: .. code-block:: python from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticVectorSearch( elasticsearch_url="http://localhost:9200", index_name="test_index", embedding=embedding ) To connect to an Elasticsearch instance that requires login credentials, including Elastic Cloud, use the Elasticsearch URL format https://username:password@es_host:9243. For example, to connect to Elastic Cloud, create the Elasticsearch URL with the required authentication details and pass it to the ElasticVectorSearch constructor as the named parameter elasticsearch_url. You can obtain your Elastic Cloud URL and login credentials by logging in to the Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and navigating to the "Deployments" page. To obtain your Elastic Cloud password for the default "elastic" user: 1. Log in to the Elastic Cloud console at https://cloud.elastic.co 2. Go to "Security" > "Users" 3. Locate the "elastic" user and click "Edit" 4. Click "Reset password" 5. Follow the prompts to reset the password The format for Elastic Cloud URLs is https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243. Example: .. code-block:: python from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html
db18f5297dd2-2
elastic_host = "cluster_id.region_id.gcp.cloud.es.io" elasticsearch_url = f"https://username:password@{elastic_host}:9243" elastic_vector_search = ElasticVectorSearch( elasticsearch_url=elasticsearch_url, index_name="test_index", embedding=embedding ) Args: elasticsearch_url (str): The URL for the Elasticsearch instance. index_name (str): The name of the Elasticsearch index for the embeddings. embedding (Embeddings): An object that provides the ability to embed text. It should be an instance of a class that subclasses the Embeddings abstract base class, such as OpenAIEmbeddings() Raises: ValueError: If the elasticsearch python package is not installed. """ def __init__(self, elasticsearch_url: str, index_name: str, embedding: Embeddings): """Initialize with necessary components.""" try: import elasticsearch except ImportError: raise ValueError( "Could not import elasticsearch python package. " "Please install it with `pip install elasticsearch`." ) self.embedding = embedding self.index_name = index_name try: es_client = elasticsearch.Elasticsearch(elasticsearch_url) # noqa except ValueError as e: raise ValueError( f"Your elasticsearch client string is misformatted. Got error: {e} " ) self.client = es_client [docs] def add_texts( self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, refresh_indices: bool = True, **kwargs: Any, ) -> List[str]:
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html
db18f5297dd2-3
**kwargs: Any, ) -> List[str]: """Run more texts through the embeddings and add to the vectorstore. Args: texts: Iterable of strings to add to the vectorstore. metadatas: Optional list of metadatas associated with the texts. refresh_indices: Whether to refresh the Elasticsearch indices after inserting, so the new documents are immediately searchable. Defaults to True. Returns: List of ids from adding the texts into the vectorstore. """ try: from elasticsearch.helpers import bulk except ImportError: raise ValueError( "Could not import elasticsearch python package. " "Please install it with `pip install elasticsearch`." ) requests = [] ids = [] embeddings = self.embedding.embed_documents(list(texts)) for i, text in enumerate(texts): metadata = metadatas[i] if metadatas else {} _id = str(uuid.uuid4()) request = { "_op_type": "index", "_index": self.index_name, "vector": embeddings[i], "text": text, "metadata": metadata, "_id": _id, } ids.append(_id) requests.append(request) bulk(self.client, requests) if refresh_indices: self.client.indices.refresh(index=self.index_name) return ids [docs] def similarity_search( self, query: str, k: int = 4, **kwargs: Any ) -> List[Document]: """Return docs most similar to query. Args: query: Text to look up documents similar to. k: Number of Documents to return. Defaults to 4. Returns: List of Documents most similar to the query. """
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html
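A minimal usage sketch of the insert path described above (the URL, index name, and metadata values are placeholders; a running Elasticsearch instance, the `elasticsearch` package, and an OpenAI API key are assumed):

.. code-block:: python

    from langchain import ElasticVectorSearch
    from langchain.embeddings import OpenAIEmbeddings

    db = ElasticVectorSearch(
        elasticsearch_url="http://localhost:9200",  # placeholder URL
        index_name="test_index",                    # placeholder index name
        embedding=OpenAIEmbeddings(),
    )
    # Each text is embedded, given a uuid4 id, and bulk-indexed;
    # refresh_indices=True makes the new documents searchable right away.
    ids = db.add_texts(
        ["foo", "bar"],
        metadatas=[{"source": "a.txt"}, {"source": "b.txt"}],
        refresh_indices=True,
    )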
db18f5297dd2-4
Returns: List of Documents most similar to the query. """ embedding = self.embedding.embed_query(query) script_query = _default_script_query(embedding) response = self.client.search(index=self.index_name, query=script_query) hits = [hit["_source"] for hit in response["hits"]["hits"][:k]] documents = [ Document(page_content=hit["text"], metadata=hit["metadata"]) for hit in hits ] return documents [docs] @classmethod def from_texts( cls, texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> ElasticVectorSearch: """Construct ElasticVectorSearch wrapper from raw documents. This is a user-friendly interface that: 1. Embeds documents. 2. Creates a new index for the embeddings in the Elasticsearch instance. 3. Adds the documents to the newly created Elasticsearch index. This is intended to be a quick way to get started. Example: .. code-block:: python from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() elastic_vector_search = ElasticVectorSearch.from_texts( texts, embeddings, elasticsearch_url="http://localhost:9200" ) """ elasticsearch_url = get_from_dict_or_env( kwargs, "elasticsearch_url", "ELASTICSEARCH_URL" ) try: import elasticsearch from elasticsearch.helpers import bulk except ImportError: raise ValueError( "Could not import elasticsearch python package. "
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html
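And a hedged sketch of querying the store, reusing the `db` object from the previous sketch; the query embedding is scored against stored vectors with the script_score cosineSimilarity query shown earlier:

.. code-block:: python

    # k caps how many of the cosine-scored hits are returned.
    docs = db.similarity_search("What is foo?", k=2)
    for doc in docs:
        print(doc.page_content, doc.metadata)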
db18f5297dd2-5
raise ValueError( "Could not import elasticsearch python package. " "Please install it with `pip install elasticsearch`." ) try: client = elasticsearch.Elasticsearch(elasticsearch_url) except ValueError as e: raise ValueError( "Your elasticsearch client string is malformed. " f"Got error: {e}" ) index_name = kwargs.get("index_name", uuid.uuid4().hex) embeddings = embedding.embed_documents(texts) dim = len(embeddings[0]) mapping = _default_text_mapping(dim) # TODO: it would be nice to create the index before embedding, so the # expensive embedding step is not wasted if index creation fails client.indices.create(index=index_name, mappings=mapping) requests = [] for i, text in enumerate(texts): metadata = metadatas[i] if metadatas else {} request = { "_op_type": "index", "_index": index_name, "vector": embeddings[i], "text": text, "metadata": metadata, } requests.append(request) bulk(client, requests) client.indices.refresh(index=index_name) return cls(elasticsearch_url, index_name, embedding)
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/elastic_vector_search.html
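A hedged quick-start sketch for the classmethod above; per the code, `elasticsearch_url` may instead come from the ELASTICSEARCH_URL environment variable, and `index_name` is optional (a uuid4 hex string is generated otherwise). The names below are placeholders:

.. code-block:: python

    from langchain import ElasticVectorSearch
    from langchain.embeddings import OpenAIEmbeddings

    db = ElasticVectorSearch.from_texts(
        ["foo", "bar"],
        OpenAIEmbeddings(),
        metadatas=[{"source": "a"}, {"source": "b"}],
        elasticsearch_url="http://localhost:9200",
        index_name="my_index",  # optional; defaults to a random hex string
    )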
cc83e8624591-0
Source code for langchain.vectorstores.milvus """Wrapper around the Milvus vector database.""" from __future__ import annotations import uuid from typing import Any, Iterable, List, Optional, Tuple import numpy as np from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.vectorstores.base import VectorStore from langchain.vectorstores.utils import maximal_marginal_relevance [docs]class Milvus(VectorStore): """Wrapper around the Milvus vector database.""" def __init__( self, embedding_function: Embeddings, connection_args: dict, collection_name: str, text_field: str, ): """Initialize wrapper around the milvus vector database. In order to use this you need to have `pymilvus` installed and a running Milvus instance. See the following documentation for how to run a Milvus instance: https://milvus.io/docs/install_standalone-docker.md Args: embedding_function (Embeddings): Function used to embed the text connection_args (dict): Arguments for pymilvus connections.connect() collection_name (str): The name of the collection to search. text_field (str): The field in Milvus schema where the original text is stored. """ try: from pymilvus import Collection, DataType, connections except ImportError: raise ValueError( "Could not import pymilvus python package. " "Please install it with `pip install pymilvus`." ) # Connecting to Milvus instance if not connections.has_connection("default"): connections.connect(**connection_args)
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
cc83e8624591-1
if not connections.has_connection("default"): connections.connect(**connection_args) self.embedding_func = embedding_function self.collection_name = collection_name self.text_field = text_field self.auto_id = False self.primary_field = None self.vector_field = None self.fields = [] self.col = Collection(self.collection_name) schema = self.col.schema # Grabbing the fields for the existing collection. for x in schema.fields: self.fields.append(x.name) if x.auto_id: self.fields.remove(x.name) if x.is_primary: self.primary_field = x.name if x.dtype == DataType.FLOAT_VECTOR or x.dtype == DataType.BINARY_VECTOR: self.vector_field = x.name # Default search params when one is not provided. self.index_params = { "IVF_FLAT": {"params": {"nprobe": 10}}, "IVF_SQ8": {"params": {"nprobe": 10}}, "IVF_PQ": {"params": {"nprobe": 10}}, "HNSW": {"params": {"ef": 10}}, "RHNSW_FLAT": {"params": {"ef": 10}}, "RHNSW_SQ": {"params": {"ef": 10}}, "RHNSW_PQ": {"params": {"ef": 10}}, "IVF_HNSW": {"params": {"nprobe": 10, "ef": 10}}, "ANNOY": {"params": {"search_k": 10}}, } [docs] def add_texts( self, texts: Iterable[str], metadatas: Optional[List[dict]] = None,
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
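A hedged construction sketch for the wrapper above: it assumes `pymilvus`, a Milvus instance on the default port, and an existing, already-indexed collection (the collection and field names are placeholders):

.. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import Milvus

    store = Milvus(
        embedding_function=OpenAIEmbeddings(),
        connection_args={"host": "localhost", "port": 19530},
        collection_name="my_collection",  # must already exist and be indexed
        text_field="text",                # field holding the original text
    )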
cc83e8624591-2
texts: Iterable[str], metadatas: Optional[List[dict]] = None, partition_name: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any, ) -> List[str]: """Insert text data into Milvus. When using add_texts() it is assumed that a collection has already been made and indexed. If metadata is included, it is assumed that it is ordered correctly to match the schema provided to the Collection and that the embedding vector is the first schema field. Args: texts (Iterable[str]): The text being embedded and inserted. metadatas (Optional[List[dict]], optional): The metadata that corresponds to each insert. Defaults to None. partition_name (str, optional): The partition of the collection to insert data into. Defaults to None. timeout (int, optional): Timeout for the insert operation. Defaults to None. Returns: List[str]: The resulting keys for each inserted element. """ insert_dict: Any = {self.text_field: list(texts)} try: insert_dict[self.vector_field] = self.embedding_func.embed_documents( list(texts) ) except NotImplementedError: insert_dict[self.vector_field] = [ self.embedding_func.embed_query(x) for x in texts ] # Collect the metadata into the insert dict. if len(self.fields) > 2 and metadatas is not None: for d in metadatas: for key, value in d.items(): if key in self.fields: insert_dict.setdefault(key, []).append(value) # Convert dict to list of lists for insertion insert_list = [insert_dict[x] for x in self.fields] # Insert into the collection.
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
cc83e8624591-3
# Insert into the collection. res = self.col.insert( insert_list, partition_name=partition_name, timeout=timeout ) # Flush to make sure the newly inserted data is immediately searchable. self.col.flush() return res.primary_keys def _worker_search( self, query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = -1, timeout: Optional[int] = None, **kwargs: Any, ) -> Tuple[List[float], List[Tuple[Document, Any, Any]]]: # Load the collection into memory for searching. self.col.load() # Decide to use default params if not passed in. if param is None: index_type = self.col.indexes[0].params["index_type"] param = self.index_params[index_type] # Embed the query text. data = [self.embedding_func.embed_query(query)] # Determine result metadata fields. output_fields = self.fields[:] output_fields.remove(self.vector_field) # Perform the search. res = self.col.search( data, self.vector_field, param, k, expr=expr, output_fields=output_fields, partition_names=partition_names, round_decimal=round_decimal, timeout=timeout, **kwargs, ) # Organize results. ret = [] for result in res[0]: meta = {x: result.entity.get(x) for x in output_fields} ret.append( (
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
cc83e8624591-4
ret.append( ( Document(page_content=meta.pop(self.text_field), metadata=meta), result.distance, result.id, ) ) return data[0], ret [docs] def similarity_search_with_score( self, query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = -1, timeout: Optional[int] = None, **kwargs: Any, ) -> List[Tuple[Document, float]]: """Perform a search on a query string and return results with scores. Args: query (str): The text being searched. k (int, optional): The number of results to return. Defaults to 4. param (dict, optional): The search params for the specified index. Defaults to None. expr (str, optional): Filtering expression. Defaults to None. partition_names (List[str], optional): Partitions to search through. Defaults to None. round_decimal (int, optional): Round the resulting distance. Defaults to -1. timeout (int, optional): Amount to wait before timeout error. Defaults to None. kwargs: Collection.search() keyword arguments. Returns: List[Tuple[Document, float]]: A list of (Document, distance) result pairs. """ _, result = self._worker_search( query, k, param, expr, partition_names, round_decimal, timeout, **kwargs
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
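A hedged usage sketch, reusing the `store` object from the earlier construction sketch; what the score means depends on the collection's metric type (L2 distance for collections built by `from_texts` below, where smaller is more similar):

.. code-block:: python

    # Each result is a (Document, distance) pair from _worker_search.
    results = store.similarity_search_with_score("query text", k=4)
    for doc, score in results:
        print(score, doc.page_content)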
cc83e8624591-5
) return [(x, y) for x, y, _ in result] [docs] def max_marginal_relevance_search( self, query: str, k: int = 4, fetch_k: int = 20, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = -1, timeout: Optional[int] = None, **kwargs: Any, ) -> List[Document]: """Perform a search and return results that are reordered by MMR. Args: query (str): The text being searched. k (int, optional): How many results to return. Defaults to 4. fetch_k (int, optional): Total results to select k from. Defaults to 20. param (dict, optional): The search params for the specified index. Defaults to None. expr (str, optional): Filtering expression. Defaults to None. partition_names (List[str], optional): What partitions to search. Defaults to None. round_decimal (int, optional): Round the resulting distance. Defaults to -1. timeout (int, optional): Amount to wait before timeout error. Defaults to None. Returns: List[Document]: Document results for search. """ data, res = self._worker_search( query, fetch_k, param, expr, partition_names, round_decimal, timeout, **kwargs, ) # Extract result IDs. ids = [x for _, _, x in res]
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
cc83e8624591-6
# Extract result IDs. ids = [x for _, _, x in res] # Get the raw vectors from Milvus. vectors = self.col.query( expr=f"{self.primary_field} in {ids}", output_fields=[self.primary_field, self.vector_field], ) # Reorganize the results from query to match result order. vectors = {x[self.primary_field]: x[self.vector_field] for x in vectors} search_embedding = data ordered_result_embeddings = [vectors[x] for x in ids] # Get the new order of results. new_ordering = maximal_marginal_relevance( np.array(search_embedding), ordered_result_embeddings, k=k ) # Reorder the values and return. ret = [] for x in new_ordering: if x == -1: break else: ret.append(res[x][0]) return ret [docs] def similarity_search( self, query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = -1, timeout: Optional[int] = None, **kwargs: Any, ) -> List[Document]: """Perform a similarity search against the query string. Args: query (str): The text to search. k (int, optional): How many results to return. Defaults to 4. param (dict, optional): The search params for the index type. Defaults to None. expr (str, optional): Filtering expression. Defaults to None.
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
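A hedged sketch of the MMR path just shown: `fetch_k` candidates are retrieved, their raw vectors are queried back out of Milvus, and maximal_marginal_relevance re-ranks them down to `k` diverse results (again reusing the placeholder `store`):

.. code-block:: python

    docs = store.max_marginal_relevance_search(
        "query text",
        k=4,         # final number of diverse results
        fetch_k=20,  # candidate pool that MMR re-ranks
    )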
cc83e8624591-7
Defaults to None. expr (str, optional): Filtering expression. Defaults to None. partition_names (List[str], optional): What partitions to search. Defaults to None. round_decimal (int, optional): What decimal point to round to. Defaults to -1. timeout (int, optional): How long to wait before timeout error. Defaults to None. Returns: List[Document]: Document results for search. """ _, docs_and_scores = self._worker_search( query, k, param, expr, partition_names, round_decimal, timeout, **kwargs ) return [doc for doc, _, _ in docs_and_scores] [docs] @classmethod def from_texts( cls, texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> Milvus: """Create a Milvus collection, index it with HNSW, and insert data. Args: texts (List[str]): Text to insert. embedding (Embeddings): Embedding function to use. metadatas (Optional[List[dict]], optional): Dict metadata. Defaults to None. Returns: VectorStore: The Milvus vector store. """ try: from pymilvus import ( Collection, CollectionSchema, DataType, FieldSchema, connections, ) from pymilvus.orm.types import infer_dtype_bydata except ImportError: raise ValueError( "Could not import pymilvus python package. " "Please install it with `pip install pymilvus`."
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
cc83e8624591-8
"Please install it with `pip install pymilvus`." ) # Connect to Milvus instance if not connections.has_connection("default"): connections.connect(**kwargs.get("connection_args", {"port": 19530})) # Determine embedding dim embeddings = embedding.embed_query(texts[0]) dim = len(embeddings) # Generate unique names primary_field = "c" + str(uuid.uuid4().hex) vector_field = "c" + str(uuid.uuid4().hex) text_field = "c" + str(uuid.uuid4().hex) collection_name = "c" + str(uuid.uuid4().hex) fields = [] # Determine metadata schema if metadatas: # Check if all metadata keys line up key = metadatas[0].keys() for x in metadatas: if key != x.keys(): raise ValueError( "Mismatched metadata. " "Make sure all metadata has the same keys and datatype." ) # Create FieldSchema for each entry in singular metadata. for key, value in metadatas[0].items(): # Infer the corresponding datatype of the metadata dtype = infer_dtype_bydata(value) if dtype == DataType.UNKNOWN: raise ValueError(f"Unrecognized datatype for {key}.") elif dtype == DataType.VARCHAR: # Find out max length text based metadata max_length = 0 for subvalues in metadatas: max_length = max(max_length, len(subvalues[key])) fields.append( FieldSchema(key, DataType.VARCHAR, max_length=max_length + 1) ) else: fields.append(FieldSchema(key, dtype))
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
cc83e8624591-9
) else: fields.append(FieldSchema(key, dtype)) # Find out max length of texts max_length = 0 for y in texts: max_length = max(max_length, len(y)) # Create the text field fields.append( FieldSchema(text_field, DataType.VARCHAR, max_length=max_length + 1) ) # Create the primary key field fields.append( FieldSchema(primary_field, DataType.INT64, is_primary=True, auto_id=True) ) # Create the vector field fields.append(FieldSchema(vector_field, DataType.FLOAT_VECTOR, dim=dim)) # Create the schema for the collection schema = CollectionSchema(fields) # Create the collection collection = Collection(collection_name, schema) # Index parameters for the collection index = { "index_type": "HNSW", "metric_type": "L2", "params": {"M": 8, "efConstruction": 64}, } # Create the index collection.create_index(vector_field, index) # Create the VectorStore milvus = cls( embedding, kwargs.get("connection_args", {"port": 19530}), collection_name, text_field, ) # Add the texts. milvus.add_texts(texts, metadatas) return milvus
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/milvus.html
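A hedged quick-start sketch for the classmethod above: it creates a brand-new collection with generated field names and an HNSW/L2 index, then inserts the texts. The connection details are placeholders, and (per the schema check in the code) all metadata dicts must share the same keys:

.. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import Milvus

    store = Milvus.from_texts(
        ["foo", "bar"],
        OpenAIEmbeddings(),
        metadatas=[{"source": "a"}, {"source": "b"}],  # keys must match
        connection_args={"host": "localhost", "port": 19530},
    )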
f83ef5f57362-0
Source code for langchain.vectorstores.weaviate """Wrapper around weaviate vector database.""" from __future__ import annotations from typing import Any, Dict, Iterable, List, Optional from uuid import uuid4 from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.vectorstores.base import VectorStore [docs]class Weaviate(VectorStore): """Wrapper around Weaviate vector database. To use, you should have the ``weaviate-client`` python package installed. Example: .. code-block:: python import weaviate from langchain.vectorstores import Weaviate client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...) weaviate = Weaviate(client, index_name, text_key) """ def __init__( self, client: Any, index_name: str, text_key: str, attributes: Optional[List[str]] = None, ): """Initialize with Weaviate client.""" try: import weaviate except ImportError: raise ValueError( "Could not import weaviate python package. " "Please install it with `pip install weaviate-client`." ) if not isinstance(client, weaviate.Client): raise ValueError( f"client should be an instance of weaviate.Client, got {type(client)}" ) self._client = client self._index_name = index_name self._text_key = text_key self._query_attrs = [self._text_key] if attributes is not None: self._query_attrs.extend(attributes) [docs] def add_texts( self,
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html
f83ef5f57362-1
[docs] def add_texts( self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> List[str]: """Upload texts with metadata (properties) to Weaviate.""" from weaviate.util import get_valid_uuid with self._client.batch as batch: ids = [] for i, doc in enumerate(texts): data_properties = { self._text_key: doc, } if metadatas is not None: for key in metadatas[i].keys(): data_properties[key] = metadatas[i][key] _id = get_valid_uuid(uuid4()) batch.add_data_object(data_properties, self._index_name, _id) ids.append(_id) return ids [docs] def similarity_search( self, query: str, k: int = 4, **kwargs: Any ) -> List[Document]: """Look up similar documents in weaviate.""" content: Dict[str, Any] = {"concepts": [query]} if kwargs.get("search_distance"): content["certainty"] = kwargs.get("search_distance") query_obj = self._client.query.get(self._index_name, self._query_attrs) result = query_obj.with_near_text(content).with_limit(k).do() docs = [] for res in result["data"]["Get"][self._index_name]: text = res.pop(self._text_key) docs.append(Document(page_content=text, metadata=res)) return docs [docs] @classmethod def from_texts( cls, texts: List[str], embedding: Embeddings,
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html
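A hedged end-to-end sketch of the two methods above; the Weaviate URL, class name, and `search_distance` value are placeholders, and the `weaviate-client` package plus a reachable instance are assumed:

.. code-block:: python

    import os

    import weaviate
    from langchain.vectorstores import Weaviate

    client = weaviate.Client(url=os.environ["WEAVIATE_URL"])
    store = Weaviate(client, index_name="Document", text_key="text")
    store.add_texts(["foo", "bar"], metadatas=[{"source": "a"}, {"source": "b"}])
    # search_distance is forwarded as a near-text "certainty" threshold.
    docs = store.similarity_search("foo", k=2, search_distance=0.7)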
f83ef5f57362-2
cls, texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> VectorStore: """Not implemented for Weaviate yet.""" raise NotImplementedError("weaviate does not currently support `from_texts`.")
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/weaviate.html
83ccd01628fc-0
Source code for langchain.vectorstores.faiss """Wrapper around FAISS vector database.""" from __future__ import annotations import pickle import uuid from pathlib import Path from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple import numpy as np from langchain.docstore.base import AddableMixin, Docstore from langchain.docstore.document import Document from langchain.docstore.in_memory import InMemoryDocstore from langchain.embeddings.base import Embeddings from langchain.vectorstores.base import VectorStore from langchain.vectorstores.utils import maximal_marginal_relevance def dependable_faiss_import() -> Any: """Import faiss if available, otherwise raise error.""" try: import faiss except ImportError: raise ValueError( "Could not import faiss python package. " "Please install it with `pip install faiss` " "or `pip install faiss-cpu` (depending on Python version)." ) return faiss [docs]class FAISS(VectorStore): """Wrapper around FAISS vector database. To use, you should have the ``faiss`` python package installed. Example: .. code-block:: python from langchain import FAISS faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id) """ def __init__( self, embedding_function: Callable, index: Any, docstore: Docstore, index_to_docstore_id: Dict[int, str], ): """Initialize with necessary components.""" self.embedding_function = embedding_function self.index = index self.docstore = docstore self.index_to_docstore_id = index_to_docstore_id def __add(
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html
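A hedged sketch of assembling the four constructor pieces by hand with an exact L2 index; `faiss-cpu` and an OpenAI API key are assumed, and the empty docstore and id mapping start the store off blank:

.. code-block:: python

    import faiss
    from langchain import FAISS
    from langchain.docstore.in_memory import InMemoryDocstore
    from langchain.embeddings import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()
    dim = len(embeddings.embed_query("hello"))  # embedding dimensionality
    index = faiss.IndexFlatL2(dim)              # exact L2 search index
    store = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})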
83ccd01628fc-1
self.index_to_docstore_id = index_to_docstore_id def __add( self, texts: Iterable[str], embeddings: Iterable[List[float]], metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> List[str]: if not isinstance(self.docstore, AddableMixin): raise ValueError( "If trying to add texts, the underlying docstore should support " f"adding items, which {self.docstore} does not" ) documents = [] for i, text in enumerate(texts): metadata = metadatas[i] if metadatas else {} documents.append(Document(page_content=text, metadata=metadata)) # Add to the index, the index_to_id mapping, and the docstore. starting_len = len(self.index_to_docstore_id) self.index.add(np.array(embeddings, dtype=np.float32)) # Get list of index, id, and docs. full_info = [ (starting_len + i, str(uuid.uuid4()), doc) for i, doc in enumerate(documents) ] # Add information to docstore and index. self.docstore.add({_id: doc for _, _id, doc in full_info}) index_to_id = {index: _id for index, _id, _ in full_info} self.index_to_docstore_id.update(index_to_id) return [_id for _, _id, _ in full_info] [docs] def add_texts( self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> List[str]:
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html
83ccd01628fc-2
**kwargs: Any, ) -> List[str]: """Run more texts through the embeddings and add to the vectorstore. Args: texts: Iterable of strings to add to the vectorstore. metadatas: Optional list of metadatas associated with the texts. Returns: List of ids from adding the texts into the vectorstore. """ if not isinstance(self.docstore, AddableMixin): raise ValueError( "If trying to add texts, the underlying docstore should support " f"adding items, which {self.docstore} does not" ) # Embed and create the documents. embeddings = [self.embedding_function(text) for text in texts] return self.__add(texts, embeddings, metadatas, **kwargs) [docs] def add_embeddings( self, text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> List[str]: """Run more texts through the embeddings and add to the vectorstore. Args: text_embeddings: Iterable pairs of string and embedding to add to the vectorstore. metadatas: Optional list of metadatas associated with the texts. Returns: List of ids from adding the texts into the vectorstore. """ if not isinstance(self.docstore, AddableMixin): raise ValueError( "If trying to add texts, the underlying docstore should support " f"adding items, which {self.docstore} does not" ) # Embed and create the documents. texts = [te[0] for te in text_embeddings]
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html
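A hedged sketch of add_embeddings, which accepts precomputed (text, vector) pairs so nothing is re-embedded; `store` and `embeddings` are the objects from the previous sketch, and the vectors must match the index dimensionality:

.. code-block:: python

    pairs = [
        ("foo", embeddings.embed_query("foo")),
        ("bar", embeddings.embed_query("bar")),
    ]
    ids = store.add_embeddings(pairs, metadatas=[{"source": "a"}, {"source": "b"}])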
83ccd01628fc-3
texts = [te[0] for te in text_embeddings] embeddings = [te[1] for te in text_embeddings] return self.__add(texts, embeddings, metadatas, **kwargs) [docs] def similarity_search_with_score_by_vector( self, embedding: List[float], k: int = 4 ) -> List[Tuple[Document, float]]: """Return docs most similar to the embedding vector. Args: embedding: Embedding vector to look up documents similar to. k: Number of Documents to return. Defaults to 4. Returns: List of Documents most similar to the embedding and score for each """ scores, indices = self.index.search(np.array([embedding], dtype=np.float32), k) docs = [] for j, i in enumerate(indices[0]): if i == -1: # This happens when not enough docs are returned. continue _id = self.index_to_docstore_id[i] doc = self.docstore.search(_id) if not isinstance(doc, Document): raise ValueError(f"Could not find document for id {_id}, got {doc}") docs.append((doc, scores[0][j])) return docs [docs] def similarity_search_with_score( self, query: str, k: int = 4 ) -> List[Tuple[Document, float]]: """Return docs most similar to query. Args: query: Text to look up documents similar to. k: Number of Documents to return. Defaults to 4. Returns: List of Documents most similar to the query and score for each """ embedding = self.embedding_function(query)
https://python.langchain.com/en/latest/_modules/langchain/vectorstores/faiss.html
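Finally, a hedged scoring sketch: with the IndexFlatL2 index from the earlier sketch, the returned scores are raw L2 distances, so smaller means more similar:

.. code-block:: python

    for doc, score in store.similarity_search_with_score("foo", k=2):
        print(score, doc.page_content)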
