��c��RDUCK3 `q�GR��O �����������������59�y�ko4�����Et!e!�'",�0�5�;A�EIK3PFU�Z�\]�_�e�jOp*vF|m�9�f�q���n�����,�޳{�Z���C����Z�A�������������W F�v�>" #Y*=1�7=>?EL�RY�_ef�l�s�x�xu�.�ъ�*�S����ƛ{�ԧc������v�����o���)���#����.�E�U�\���?���"�'-,@.K0q2s4�68J8�>�C�C�JPHP�VuY�YE`zc�fkko}s�w�{�|�|}���.�֏����/�Z����@�E�/�`��� �K���X������\�]�������g���8�0�2�����)��2 ��?g~�#k&�)Q-�04|7�:�=�@�@�G�L�LSfZ)`�fRk~kr�x�|������"�ߑ3�������_�{���N�:�f�����������z�����4� C| | 11| 1| 3|Sandstrom, Miss. ...|female| 4.0| 1| 1| PP 9549| 16.7| G6| S|16.7| G6| S| | 12| 1| 1|Bonnell, Miss. El...|female|58.0| 0| 0| 113783| 26.55| C103| S| | 13| 0| 3|Saundercock, Mr. ...| male|20.0| 0| 0| A/5. 2151| 8.05| null| S| | 14| 0| 3|Andersson, Mr. An...| male|39.0| 1| 5| 347082| 31.275| null| S| | 15| 0| 3|Vestrom, Miss. Hu...|female|14.0| 0| 0| 350406| 7.8542| null| S| | 16| 1|16| 1| 2|Hewlett, Mrs. (Ma...|female|55.0| 0| 0| 248706| 16.0| null| S| | 17| 0| 3|Rice, Master. Eugene| male| 2.0| 4| 1| 382652| 29.125| null| Q| | 18| 1| 2|Williams, Mr. Cha...| male|null| 0| 0| 244373| 13.0| null| S| | 19| 0| 3|Vander Planke, Mfixed the issues entirely.\n\n\n\n### Other Answer\nI had the same problem with my 64Gb iPhone 6+. Tried a lot of things and eventually downloaded all my images and videos to my PC and restarted the phone - problem solved. Been working now for two days.', lookup_str='', metadata={'source': 'https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself', 'title': 'My iPhone 6 is typing and opening apps by itself'}, lookup_index=0)]loader = IFixitLoader("https://www.ifixit.com/Device/Standard_iPad")data = loader.load()data [Document(page_content="Standard iPad\nThe standard edition of the tablet computer made by Apple.\n== Background Information ==\n\nOriginally introduced in January 2010, the iPad is Apple's standard edition of their tablet computer. In total, there have been ten generations of the standard edition of the iPad.\n\n== Additional Information ==\n\n* [link|https://www.apple.com/ipad-select/|Official Apple Product Page]\n* [link|https://en.wikipedia.org/wiki/IPad#iPad|Official iPad Wikipedia]", lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Standard_iPad', 'title': 'Standard iPad'}, lookup_index=0)]Searching iFixit using /suggest​If you're looking for a more general way to search iFixit based on a keyword or phrase, the /suggest endpoint will return content related to the search term, then the loader will load the content from each of the suggested items and prep and return the documents.data = IFixitLoader.load_suggestions("Banana")data [Document(page_content='Banana\nTasty fruit. Good source of potassium. Yellow.\n==me touch 1,2, or 3 so now I have to find a way to get rid of my password and all of a sudden my phone wants to touch stuff on its own which got my phone disabled many times to the point where I have to wait a whole hour and I really need to finish something on my phone today PLEASE HELPPPP\n\n\n\n### Other Answer\nIn my case , iphone 6 screen was faulty. I got it replaced at local repair shop, so far phone is working fine.\n\n\n\n### Other Answer\nthis problem in iphone 6 has many different scenarios and solutions, first try to reconnect the lcd screen to the motherboard again, if didnt solve, try to replace the lcd connector on the motherboard, if not solved, then remains two issues, lcd screen it self or touch IC. in my country some repair shops just change them all for almost 40$ since they dont want to troubleshoot one by one. readers of this comment also should know that partial screen not responding in other iphone models might also have an issue in LCD connector on the motherboard, specially if you lock/unlock screen and screen works again for sometime. lcd connectors gets disconnected lightly from the motherboard due to multiple falls and hits after sometime. best of luck for all\n\n\n\n### Other Answer\nI am facing the same issue whereby these ghost touches type and open apps , I am using an original Iphone cable , how to I fix this issue.\n\n\n\n### Other Answer\nThere were two issues with the phone I had troubles with. It was my dads and turns out he carried it in his pocket. The phone itself had a little bend in it as a result. A little pressure in the opposite direction helped the issue. But it also had a tiny crack in the screen which wasnt obvious, once we added a screen protector this fixed the issues entirely.\n\n\n\n### Other Answer\nI had the same problem with mychanged the block plug.\n\n\n\n### Other Answer\nHad the problem: Inherited a 6s Plus from my wife. She had no problem with it.\nLooks like it was merely the cheap phone case I purchased on Amazon. It was either pinching the edges or torquing the screen/body of the phone. Problem solved.\n\n\n\n### Other Answer\nI bought my phone on march 6 and it was a brand new, but It sucks me uo because it freezing, shaking and control by itself. I went to the store where I bought this and I told them to replacr it, but they told me I have to pay it because Its about lcd issue. Please help me what other ways to fix it. Or should I try to remove the screen or should I follow your step above.\n\n\n\n### Other Answer\nI tried everything and it seems to come back to needing the original iPhone cable…or at least another 1 that would have come with another iPhone…not the $5 Store fast charging cables. My original cable is pretty beat up - like most that I see - but I’ve been beaten up much MUCH less by sticking with its use! I didn’t find that the casing/shell around it or not made any diff.\n\n\n\n### Other Answer\ngreat now I have to wait one more hour to reset my phone and while I was tryin to connect my phone to my computer the computer also restarted smh does anyone else knows how I can get my phone to work… my problem is I have a black dot on the bottom left of my screen an it wont allow me to touch a certain part of my screen unless I rotate my phone and I know the password but the first number is a 2 and it won\'t let me touch 1,2, or 3 so now I have to find a way to getis JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeed a virus/malware on the phone which allowed for remote control of my iphone (even while in lock mode). My mistake for buying a third party iphone i suppose. Anyway i have since had the phone restored to factory and everything is working as expected for now. I will of course keep you posted if this changes. Thanks to all for the helpful posts, really helped me narrow a few things down.\n\n\n\n### Other Answer\nWhen my phone was doing this, it ended up being the screen protector that i got from 5 below. I took it off and it stopped. I ordered more protectors from amazon and replaced it\n\n\n\n### Other Answer\niPhone 6 Plus first generation….I had the same issues as all above, apps opening by themselves, self typing, ultra sensitive screen, items jumping around all over….it even called someone on FaceTime twice by itself when I was not in the room…..I thought the phone was toast and i’d have to buy a new one took me a while to figure out but it was the extra cheap block plug I bought at a dollar store for convenience of an extra charging station when I move around the house from den to living room…..cord was fine but bought a new Apple brand block plug…no more problems works just fine now. This issue was a recent event so had to narrow things down to what had changed recently to my phone so I could figure it out.\nI even had the same problem on a laptop with documents opening up by themselves…..a laptop that was plugged in to the same wall plug as my phone charger with the dollar store block plug….until I changed the block plug.\n\n\n\n### Other Answer\nHad the problem: Inherited aI’m not sure if I accidentally bend the phone when I installed the shell, or, if I got a defective glass protector that messes up the touch sensor. Well, yesterday was the worse day, keeps dropping calls and ghost pressing keys for me when I was on a call. I got fed up, so I removed the screen protector, and so far problems have not reoccurred yet. I’m crossing my fingers that problems indeed solved.\n\n\n\n### Other Answer\nthank you so much for this post! i was struggling doing the reset because i cannot type userids and passwords correctly because the iphone 6 plus i have kept on typing letters incorrectly. I have been doing it for a day until i come across this article. Very helpful! God bless you!!\n\n\n\n### Other Answer\nI just turned it off, and turned it back on.\n\n\n\n### Other Answer\nMy problem has not gone away completely but its better now i changed my charger and turned off prediction ....,,,now it rarely happens\n\n\n\n### Other Answer\nI tried all of the above. I then turned off my home cleaned it with isopropyl alcohol 90%. Then I baked it in my oven on warm for an hour and a half over foil. Took it out and set it cool completely on the glass top stove. Then I turned on and it worked.\n\n\n\n### Other Answer\nI think at& t should man up and fix your phone for free! You pay a lot for a Apple they should back it. I did the next 30 month payments and finally have it paid off in June. My iPad sept. Looking forward to a almost 100 drop in my phone bill! Now this crap!!! Really\n\n\n\n### Other Answer\nIf your phone is JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeedsaid touch desease was due to dropping! Bullshit!! I am 56 I have never dropped it!! Looks brand new! Never dropped or abused any way! I have my original charger. I am going to clean it and try everyone’s advice. It really sucks! I had 40,000,000 on my heart of Vegas slots! I play every day. I would be spinning and my fingers were no where max buttons and it would light up and switch to max. It did it 3 times before I caught it light up by its self. It sucks. Hope I can fix it!!!!\n\n\n\n### Other Answer\nNo answer, but same problem with iPhone 6 plus--random, self-generated jumping amongst apps and typing on its own--plus freezing regularly (aha--maybe that\'s what the "plus" in "6 plus" refers to?). An Apple Genius recommended upgrading to iOS 11.3.1 from 11.2.2, to see if that fixed the trouble. If it didn\'t, Apple will sell me a new phone for $168! Of couese the OS upgrade didn\'t fix the problem. Thanks for helping me figure out that it\'s most likely a hardware problem--which the "genius" probably knows too.\nI\'m getting ready to go Android.\n\n\n\n### Other Answer\nI experienced similar ghost touches. Two weeks ago, I changed my iPhone 6 Plus shell (I had forced the phone into it because it’s pretty tight), and also put a new glass screen protector (the edges of the protector don’t stick to the screen, weird, so I brushed pressure on the edges at times to see if they may smooth out one day miraculously). I’m not sure if I accidentally bend the phone when I installed the shell,that it is an LCD issue. LCD get out of order without any reason (It was neither hit or nor slipped, but LCD get out of order all and sudden, while using it) it started opening things at random. I get LCD replaced with new one, that cost me $80.00 in total ($70.00 LCD charges + $10.00 as labor charges to fix it). iPhone is back to perfect mode now. It was iphone 6s. Thanks.\n\n\n\n### Other Answer\nI was having the same issue with my 6 plus, I took it to a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up and cleaned the connectors and connected the screen again and it solved the issue… it’s hardware, not software.\n\n\n\n### Other Answer\nHey.\nJust had this problem now. As it turns out, you just need to plug in your phone. I use a case and when I took it off I noticed that there was a lot of dust and dirt around the areas that the case didn\'t cover. I shined a light in my ports and noticed they were filled with dust. Tomorrow I plan on using pressurized air to clean it out and the problem should be solved. If you plug in your phone and unplug it and it stops the issue, I recommend cleaning your phone thoroughly.\n\n\n\n### Other Answer\nI simply changed the power supply and problem was gone. The block that plugs in the wall not the sub cord. The cord was fine but not the block.\n\n\n\n### Other Answer\nSomeone ask! I purchased my iPhone 6s Plus for 1000 from at&t. Before I touched it, I purchased a otter defender case. I read where at&t said touch desease was due to dropping! Bullshit!! I am 56 I havephysically and tend to bend on pressure. And my phone had no case or cover.\nI took the phone to apple stores and they said sensors need to be replaced and possibly screen replacement as well. My phone is just 17 months old.\nHere is what I did two days ago and since then it is working like a charm..\nHold the phone in portrait (as if watching a movie). Twist it very very gently. do it few times.Rest the phone for 10 mins (put it on a flat surface). You can now notice those self typing things gone and screen getting stabilized.\nThen, reset the hardware (hold the power and home button till the screen goes off and comes back with apple logo). release the buttons when you see this.\nThen, connect to your laptop and log in to iTunes and reset your phone completely. (please take a back-up first).\nAnd your phone should be good to use again.\nWhat really happened here for me is that the sensors might have stuck to the screen and with mild twisting, they got disengaged/released.\nI posted this in Apple Community and the moderators deleted it, for the best reasons known to them.\nInstead of throwing away your phone (or selling cheaply), try this and you could be saving your phone.\nLet me know how it goes.\n\n\n\n### Other Answer\nIt was the charging cord! I bought a gas station braided cord and it was the culprit. Once I plugged my OEM cord into the phone the GHOSTS went away.\n\n\n\n### Other Answer\nI\'ve same issue that I just get resolved. I first tried to restore it from iCloud back, however it was not a software issue or any virus issue, so after restore same problem continues. Then I get my phone to local area iphone repairing lab, and they detected that it is an LCD issue. LCD get out of order without any reason (It was neither hitStep 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]loader = IFixitLoader( "https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself")data = loader.load()data [Document(page_content='# My iPhone 6 is typing and opening apps by itself\nmy iphone 6 is typing and opening apps by itself. How do i fix this. I just bought it last week.\nI restored as manufactures cleaned up the screen\nthe problem continues\n\n## 27 Answers\n\nFilter by: \n\nMost Helpful\nNewest\nOldest\n\n### Accepted Answer\nHi,\nWhere did you buy it? If you bought it from Apple or from an official retailer like Carphone warehouse etc. Then you\'ll have a year warranty and can get it replaced free.\nIf you bought it second hand, from a third part repair shop or online, then it may still have warranty, unless it is refurbished and has been repaired elsewhere.\nIf this is the case, it may be the screen that needs replacing to solve your issue.\nEither way, wherever you got it, it\'s best to return it and get a refund or a replacement device. :-)\n\n\n\n### Most Helpful Answer\nI had the same issues, screen freezing, opening apps by itself, selecting the screens and typing on it\'s own. I first suspected aliens and then ghosts and then hackers.\niPhone 6 is weak physically and tend to bend on pressure. And my phone had no case or cover.\nI tookopen repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0.This loader will allow you to download the text of a repair guide, text of Q&A's and wikis from devices on iFixit using their open APIs. It's incredibly useful for context related to technical documents and answers to questions about devices in the corpus of data on iFixit.from langchain.document_loaders import IFixitLoaderloader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")data = loader.load()data [Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need yourSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersiFixitOn this pageiFixitiFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200kiFixit | 🦜ï¸�🔗 Langchainfrom a Tencent COS File.#! pip install cos-python-sdk-v5from langchain.document_loaders import TencentCOSFileLoaderfrom qcloud_cos import CosConfigconf = CosConfig( Region="your cos region", SecretId="your cos secret_id", SecretKey="your cos secret_key",)loader = TencentCOSFileLoader(conf=conf, bucket="you_cos_bucket", key="fake.docx")loader.load()PreviousTencent COS DirectoryNext2MarkdownCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersTencent COS FileTencent COS FileThis covers how to load document object from a Tencent COS File.#! pip install cos-python-sdk-v5from langchain.document_loadersTencent COS File | 🦜ï¸�🔗 Langchainformatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.UnstructuredOrgModeLoader​You can load data from Org-mode files with UnstructuredOrgModeLoader using the following workflow.from langchain.document_loaders import UnstructuredOrgModeLoaderloader = UnstructuredOrgModeLoader(file_path="example_data/README.org", mode="elements")docs = loader.load()print(docs[0]) page_content='Example Docs' metadata={'source': 'example_data/README.org', 'filename': 'README.org', 'file_directory': 'example_data', 'filetype': 'text/org', 'page_number': 1, 'category': 'Title'}PreviousOpen City DataNextPandas DataFrameUnstructuredOrgModeLoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersOrg-modeOn this pageOrg-modeA Org Mode document is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free softwareOrg-mode | 🦜ï¸�🔗 LangchainWhen using a PAT you provide only the token value, you cannot provide a username. Please note that ConfluenceLoader will run under the permissions of the user that generated the PAT and will only be able to load documents for which said user has access to. from langchain.document_loaders import ConfluenceLoaderloader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", token="12345")documents = loader.load( space_key="SPACE", include_attachments=True, limit=50, max_pages=50)PreviousCollege ConfidentialNextCoNLL-UExamplesUsername and Password or Username and API Token (Atlassian Cloud only)Personal Access Token (Server/On-Prem only)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.You can generate an API token at: https://id.atlassian.com/manage-profile/security/api-tokens.The limit parameter specifies how many documents will be retrieved in a single call, not how many documents will be retrieved in total. By default the code will return up to 1000 documents in 50 documents batches. To control the total number of documents use the max_pages parameter. Plese note the maximum value for the limit parameter in the atlassian-python-api package is currently 100. from langchain.document_loaders import ConfluenceLoaderloader = ConfluenceLoader( url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345")documents = loader.load(space_key="SPACE", include_attachments=True, limit=50)Personal Access Token (Server/On-Prem only)​This method is valid for the Data Center/Server on-prem edition only. For more information on how to generate a Personal Access Token (PAT) check the official Confluence documentation at: https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html. When using a PAT you provide only the token value, you cannot provide a username.saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities. A loader for Confluence pages.This currently supports username/api_key, Oauth2 login. Additionally, on-prem installations also support token authentication. Specify a list page_id-s and/or space_key to load in the corresponding pages into Document objects, if both are specified the union of both sets will be returned.You can also specify a boolean include_attachments to include attachments, this is set to False by default, if set to True all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces//pages/Before using ConfluenceLoader make sure you have the latest version of the atlassian-python-api package installed:#!pip install atlassian-python-apiExamples​Username and Password or Username and API Token (Atlassian Cloud only)​This example authenticates using either a username and password or, if you're connecting to an Atlassian Cloud hosted version of Confluence, a username and an API Token.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersConfluenceOn this pageConfluenceConfluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handlesConfluence | 🦜ï¸�🔗 Langchainimport AirtableLoaderGet your API key here.Get ID of your base here.Get your table ID from the table url as shown here.api_key = "xxx"base_id = "xxx"table_id = "xxx"loader = AirtableLoader(api_key, table_id, base_id)docs = loader.load()Returns each table row as dict.len(docs) 3eval(docs[0].page_content) {'id': 'recF3GbGZCuh9sXIQ', 'createdTime': '2023-06-09T04:47:21.000Z', 'fields': {'Priority': 'High', 'Status': 'In progress', 'Name': 'Document Splitters'}}PreviousAirbyte JSONNextAlibaba Cloud MaxComputeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersAirtableAirtablepip install pyairtablefrom langchain.document_loaders import AirtableLoaderGet your API key here.Get ID of your base here.Get your table ID fromAirtable | 🦜ï¸�🔗 Langchain'published_year': '1915', 'title': 'Die Feldgleichungen der Gravitation', 'publication': 'Sitzungsberichte der K{"o}niglich Preu{\\ss}ischen Akademie der Wissenschaften', 'authors': 'Einstein, Albert', 'abstract': 'Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{"a}tstheorie`` in den Sitzungsberichten der Preu{\\ss}ischen Akademie der Wissenschaften 1915 ver{"o}ffentlicht.', 'url': 'https://doi.org/10.1002/andp.19163540702'}print(docs[0].page_content[:400]) # all pages of the pdf content ON THE ELECTRODYNAMICS OF MOVING BODIES By A. EINSTEIN June 30, 1905 It is known that Maxwell’s electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the recipro- cal electrodynamic action of a magnet and a conductor. The observable phe- nomenon here depends only on the rPreviousAzure Blob Storage FileNextBiliBiliInstallationExamplesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc."einstein1905.pdf")bibtex_text = """ @article{einstein1915, title={Die Feldgleichungen der Gravitation}, abstract={Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{\"a}tstheorie`` in den Sitzungsberichten der Preu{\ss}ischen Akademie der Wissenschaften 1915 ver{\"o}ffentlicht.}, author={Einstein, Albert}, journal={Sitzungsberichte der K{\"o}niglich Preu{\ss}ischen Akademie der Wissenschaften}, volume={1915}, number={1}, pages={844--847}, year={1915}, doi={10.1002/andp.19163540702}, link={https://onlinelibrary.wiley.com/doi/abs/10.1002/andp.19163540702}, file={einstein1905.pdf} } """# save bibtex_text to biblio.bib filewith open("./biblio.bib", "w") as file: file.write(bibtex_text)docs = BibtexLoader("./biblio.bib").load()docs[0].metadata {'id': 'einstein1915', 'published_year': '1915', 'title': 'Dieformat and reference management system commonly used in conjunction with LaTeX typesetting. It serves as a way to organize and store bibliographic information for academic and research documents.BibTeX files have a .bib extension and consist of plain text entries representing references to various publications, such as books, articles, conference papers, theses, and more. Each BibTeX entry follows a specific structure and contains fields for different bibliographic details like author names, publication title, journal or book title, year of publication, page numbers, and more.Bibtex files can also store the path to documents, such as .pdf files that can be retrieved.Installation​First, you need to install bibtexparser and PyMuPDF.#!pip install bibtexparser pymupdfExamples​BibtexLoader has these arguments:file_path: the path the the .bib bibtex fileoptional max_docs: default=None, i.e. not limit. Use it to limit number of retrieved documents.optional max_content_chars: default=4000. Use it to limit the number of characters in a single document.optional load_extra_meta: default=False. By default only the most important fields from the bibtex entries: Published (publication year), Title, Authors, Summary, Journal, Keywords, and URL. If True, it will also try to load return entry_id, note, doi, and links fields. optional file_pattern: default=r'[^:]+\.pdf'. Regex pattern to find files in the file entry. Default pattern supports Zotero flavour bibtex style and bare file path.from langchain.document_loaders import BibtexLoader# Create a dummy bibtex file and download a pdf.import urllib.requesturllib.request.urlretrieve( "https://www.fourmilab.ch/etexts/einstein/specrel/specrel.pdf", "einstein1905.pdf")bibtex_text = """Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersBibTeXOn this pageBibTeXBibTeX is a file format and reference management system commonly used in conjunction with LaTeX typesetting. It serves as a way toBibTeX | 🦜ï¸�🔗 Langchaincontent straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport Page\n from the New Page menu, or \nImport Subpage\n in the page action menu, found in the table of contents:\nImport from the page action menu\nWhen you choose your input source, instructions will explain how to proceed.\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\nLimits\nGitBook currently has the following limits for imported content:\nThe maximum number of pages that can be uploaded in a single import is \n20.\nThe maximum number of files (images etc.) that can be uploaded in a single import is \n20.\nGetting started - \nPrevious\nOverview\nNext\n - Getting started\nGit Sync\nLast modified \n4mo ago", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0)PreviousGitNextGitHubLoad from single GitBook pageLoad from all paths in a given GitBookCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.https://docs.gitbook.com/advanced-guides/integrations Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings Fetching text from https://docs.gitbook.com/billing-and-admin/plans Fetching text from https://docs.gitbook.com/troubleshooting/faqs Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues Fetching text from https://docs.gitbook.com/troubleshooting/supportprint(f"fetched {len(all_pages_data)} documents.")# show second documentall_pages_data[2] fetched 28 documents. Document(page_content="Import\nFind out how to easily migrate your existing documentation and which formats are supported.\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \nPermissions\nAll members with editor permission or above can use the import feature.\nSupported formats\nGitBook supports imports from websites or files that are:\nMarkdown (.md or .markdown)\nHTML (.html)\nMicrosoft Word (.docx).\nWe also support import from:\nConfluence\nNotion\nGitHub Wiki\nQuip\nDropbox Paper\nGoogle Docs\nYou can also upload a ZIP\n \ncontaining HTML or Markdown files when \nimporting multiple pages.\nNote: this feature is in beta.\nFeel free to suggest import sources we don't support yet and \nlet us know\n if you have any issues.\nImport panel\nWhen you create a new space, you'll have the option to import content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImportFetching text from https://docs.gitbook.com/ Fetching text from https://docs.gitbook.com/getting-started/overview Fetching text from https://docs.gitbook.com/getting-started/import Fetching text from https://docs.gitbook.com/getting-started/git-sync Fetching text from https://docs.gitbook.com/getting-started/content-structure Fetching text from https://docs.gitbook.com/getting-started/collaboration Fetching text from https://docs.gitbook.com/getting-started/publishing Fetching text from https://docs.gitbook.com/tour/quick-find Fetching text from https://docs.gitbook.com/tour/editor Fetching text from https://docs.gitbook.com/tour/customization Fetching text from https://docs.gitbook.com/tour/member-management Fetching text from https://docs.gitbook.com/tour/pdf-export Fetching text from https://docs.gitbook.com/tour/activity-history Fetching text from https://docs.gitbook.com/tour/insights Fetching text from https://docs.gitbook.com/tour/notifications Fetching text from https://docs.gitbook.com/tour/internationalization Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts Fetching text from https://docs.gitbook.com/tour/seo Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security Fetching text from https://docs.gitbook.com/advanced-guides/integrations Fetching text fromteams can document everything from products to internal knowledge bases and APIs.This notebook shows how to pull page data from any GitBook.from langchain.document_loaders import GitbookLoaderLoad from single GitBook page​loader = GitbookLoader("https://docs.gitbook.com")page_data = loader.load()page_data [Document(page_content='Introduction to GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nWe want to help \nteams to work more efficiently\n by creating a simple yet powerful platform for them to \nshare their knowledge\n.\nOur mission is to make a \nuser-friendly\n and \ncollaborative\n product for everyone to create, edit and share knowledge through documentation.\nPublish your documentation in 5 easy steps\nImport\n\nMove your existing content to GitBook with ease.\nGit Sync\n\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\nOrganise your content\n\nCreate pages and spaces and organize them into collections\nCollaborate\n\nInvite other users and collaborate asynchronously with ease.\nPublish your docs\n\nShare your documentation with selected users or with everyone.\nNext\n - Getting started\nOverview\nLast modified \n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)]Load from all paths in a given GitBook​For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have load_all_paths set to True.loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)all_pages_data = loader.load() Fetching text from https://docs.gitbook.com/ Fetching text fromSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersGitBookOn this pageGitBookGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.This notebook shows how to pull page dataGitBook | 🦜ï¸�🔗 Langchainby Microsoft.This covers how to load Microsoft PowerPoint documents into a document format that we can use downstream.from langchain.document_loaders import UnstructuredPowerPointLoaderloader = UnstructuredPowerPointLoader("example_data/fake-power-point.pptx")data = loader.load()data [Document(page_content='Adding a Bullet Slide\n\nFind the bullet slide layout\n\nUse _TextFrame.text for first bullet\n\nUse _TextFrame.add_paragraph() for subsequent bullets\n\nHere is a lot of text!\n\nHere is some text in a text box!', metadata={'source': 'example_data/fake-power-point.pptx'})]Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredPowerPointLoader( "example_data/fake-power-point.pptx", mode="elements")data = loader.load()data[0] Document(page_content='Adding a Bullet Slide', lookup_str='', metadata={'source': 'example_data/fake-power-point.pptx'}, lookup_index=0)PreviousMicrosoft OneDriveNextMicrosoft WordRetain ElementsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersMicrosoft PowerPointOn this pageMicrosoft PowerPointMicrosoft PowerPoint is a presentation program by Microsoft.This covers how to load Microsoft PowerPoint documents into a document format that we can use downstream.fromMicrosoft PowerPoint | 🦜ï¸�🔗 Langchain'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11890330240000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15955132', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'})]PreviousDocument loadersNextacreomOverviewCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '4556173496000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15958195', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1780120', 'timeStamp': '1467020353', 'hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'nonce': '1266', 'blockHash': '0xfc0e066e5b613239e1a01e6d582e7ab162ceb3ca4f719dfbd1a0c965adcfe1c5', 'transactionIndex': '1', 'from':'value': '9979637283000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15961208', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1777057', 'timeStamp': '1466976422', 'hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'nonce': '1248', 'blockHash': '0xc7cacda0ac38c99f1b9bccbeee1562a41781d2cfaa357e8c7b4af6a49584b968', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to':'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15964341', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1774044', 'timeStamp': '1466932983', 'hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'nonce': '1240', 'blockHash': '0x69cee390378c3b886f9543fb3a1cb2fc97621ec155f7884564d4c866348ce539', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9979637283000000000', 'gas':'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15967316', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1770911', 'timeStamp': '1466888890', 'hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'nonce': '1212', 'blockHash': '0x79a9de39276132dab8bf00dc3e060f0e8a14f5e16a0ee4e9cc491da31b25fe58', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10918214730000000000', 'gas': '90000', 'gasPrice': '20000000000','', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15970357', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1767936', 'timeStamp': '1466845682', 'hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'nonce': '1211', 'blockHash': '0xb01d8fd47b3554a99352ac3e5baf5524f314cfbc4262afcfbea1467b2d682898', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11914401843000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input':'', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15973469', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1764895', 'timeStamp': '1466801683', 'hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'nonce': '1186', 'blockHash': '0x2e687643becd3c36e0c396a02af0842775e17ccefa0904de5aeca0a9a1aa795e', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8690241462000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed':'gasUsed': '21000', 'confirmations': '15976543', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1761783', 'timeStamp': '1466757809', 'hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'nonce': '1169', 'blockHash': '0x7576961afa4218a3264addd37a41f55c444dd534e9410dbd6f93f7fe20e0363e', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10197126683000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000','methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1758709', 'timeStamp': '1466713652', 'hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'nonce': '1147', 'blockHash': '0x1660de1e73067251be0109d267a21ffc7d5bde21719a3664c7045c32e771ecf9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9758447294000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15976543','tx_hash': '0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1755659', 'timeStamp': '1466669931', 'hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'nonce': '1133', 'blockHash': '0x2983972217a91343860415d1744c2a55246a297c4810908bbd3184785bc9b0c2', 'transactionIndex': '14', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13226475343000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '2674679', 'gasUsed': '21000', 'confirmations': '15979593', 'methodId': '0x', 'functionName': ''}", metadata={'from':'tx_hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1752614', 'timeStamp': '1466626168', 'hash': '0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'nonce': '1118', 'blockHash': '0x88ef054b98e47504332609394e15c0a4467f84042396717af6483f0bcd916127', 'transactionIndex': '11', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11325836780000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '252000', 'gasUsed': '21000', 'confirmations': '15982638', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de','to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1749459', 'timeStamp': '1466582044', 'hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'nonce': '1096', 'blockHash': '0x5fc5d2a903977b35ce1239975ae23f9157d45d7bd8a8f6205e8ce270000797f9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10269065805000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15985793', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash':Document(page_content="{'blockNumber': '1746405', 'timeStamp': '1466538123', 'hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'nonce': '1092', 'blockHash': '0x3af3966cdaf22e8b112792ee2e0edd21ceb5a0e7bf9d8c168a40cf22deb3690c', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8433783799000000000', 'gas': '90000', 'gasPrice': '25689279306', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15988847', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'to':Document(page_content="{'blockNumber': '1743384', 'timeStamp': '1466494099', 'hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'nonce': '1068', 'blockHash': '0x997245108c84250057fda27306b53f9438ad40978a95ca51d8fd7477e73fbaa7', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9541921352000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '119650', 'gasUsed': '21000', 'confirmations': '15991868', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),Document(page_content="{'blockNumber': '1740314', 'timeStamp': '1466450262', 'hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'nonce': '1051', 'blockHash': '0x588d17842819a81afae3ac6644d8005c12ce55ddb66c8d4c202caa91d4e8fdbe', 'transactionIndex': '6', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8060633765000000000', 'gas': '90000', 'gasPrice': '22926905859', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '153077', 'gasUsed': '21000', 'confirmations': '15994938', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),Document(page_content="{'blockNumber': '1737276', 'timeStamp': '1466406037', 'hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'nonce': '1024', 'blockHash': '0xe117cad73752bb485c3bef24556e45b7766b283229180fcabc9711f3524b9f79', 'transactionIndex': '35', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9976891868000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3187163', 'gasUsed': '21000', 'confirmations': '15997976', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),Document(page_content="{'blockNumber': '1734172', 'timeStamp': '1466362463', 'hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'nonce': '1016', 'blockHash': '0x8a8afe2b446713db88218553cfb5dd202422928e5e0bc00475ed2f37d95649de', 'transactionIndex': '4', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '6322276709000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '105333', 'gasUsed': '21000', 'confirmations': '16001080', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),Document(page_content="{'blockNumber': '1733479', 'timeStamp': '1466352351', 'hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'nonce': '3367', 'blockHash': '0x9928661e7ae125b3ae0bcf5e076555a3ee44c52ae31bd6864c9c93a6ebb3f43e', 'transactionIndex': '0', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '1570706444000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '16001773', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),Document(page_content="{'blockNumber': '1730337', 'timeStamp': '1466308222', 'hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'nonce': '3344', 'blockHash': '0x3a52d28b8587d55c621144a161a0ad5c37dd9f7d63b629ab31da04fa410b2cfa', 'transactionIndex': '1', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9783400526000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '60788', 'gasUsed': '21000', 'confirmations': '16004915', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),'1727090', 'timeStamp': '1466262018', 'hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'nonce': '3267', 'blockHash': '0xc0cff378c3446b9b22d217c2c5f54b1c85b89a632c69c55b76cdffe88d2b9f4d', 'transactionIndex': '20', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11521979886000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3806725', 'gasUsed': '21000', 'confirmations': '16008162', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber':'nonce': '3155', 'blockHash': '0xc2c2207bcaf341eed07f984c9a90b3f8e8bdbdbd2ac6562f8c2f5bfa4b51299d', 'transactionIndex': '5', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13149213761000000000', 'gas': '90000', 'gasPrice': '22655598156', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '126000', 'gasUsed': '21000', 'confirmations': '16011481', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1727090', 'timeStamp': '1466262018', 'hash':'from': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3', 'contractAddress': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '298131000000000', 'tokenName': 'ABCHANGE.io', 'tokenSymbol': 'XCH', 'tokenDecimal': '9', 'transactionIndex': '71', 'gas': '15000000', 'gasPrice': '48614996176', 'gasUsed': '5712724', 'cumulativeGasUsed': '11507920', 'input': 'deprecated', 'confirmations': '4492277'}Create a normal transaction loader with customized parametersloader = EtherscanLoader( account_address, page=2, offset=20, start_block=10000, end_block=8888888888, sort="asc",)result = loader.load()result 20 [Document(page_content="{'blockNumber': '1723771', 'timeStamp': '1466213371', 'hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'nonce':The filter is default to normal_transactionIf you have any questions, you can access Etherscan API Doc or contact me via i@inevitable.tech.All functions related to transactions histories are restricted 1000 histories maximum because of Etherscan limit. You can use the following parameters to find the transaction histories you need:offset: default to 20. Shows 20 transactions for one timepage: default to 1. This controls pagenation.start_block: Default to 0. The transaction histories starts from 0 block.end_block: Default to 99999999. The transaction histories starts from 99999999 blocksort: "desc" or "asc". Set default to "desc" to get latest transactions.Setup%pip install langchain -qfrom langchain.document_loaders import EtherscanLoaderimport osos.environ["ETHERSCAN_API_KEY"] = etherscanAPIKeyCreate a ERC20 transaction loaderaccount_address = "0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b"loader = EtherscanLoader(account_address, filter="erc20_transaction")result = loader.load()eval(result[0].page_content) {'blockNumber': '13242975', 'timeStamp': '1631878751', 'hash': '0x366dda325b1a6570928873665b6b418874a7dedf7fee9426158fa3536b621788', 'nonce': '28', 'blockHash': '0x5469dba1b1e1372962cf2be27ab2640701f88c00640c4d26b8cc2ae9ac256fb6', 'from':Etherscan loader use etherscan api to load transacactions histories under specific account on Ethereum Mainnet.You will need a Etherscan api key to proceed. The free api key has 5 calls per seconds quota.The loader supports the following six functinalities:Retrieve normal transactions under specifc account on Ethereum MainetRetrieve internal transactions under specifc account on Ethereum MainetRetrieve erc20 transactions under specifc account on Ethereum MainetRetrieve erc721 transactions under specifc account on Ethereum MainetRetrieve erc1155 transactions under specifc account on Ethereum MainetRetrieve ethereum balance in wei under specifc account on Ethereum MainetIf the account does not have corresponding transactions, the loader will a list with one document. The content of document is ''.You can pass differnt filters to loader to access different functionalities we mentioned above:"normal_transaction""internal_transaction""erc20_transaction""eth_balance""erc721_transaction""erc1155_transaction"Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersEtherscan LoaderOn this pageEtherscan LoaderOverview​The Etherscan loader use etherscan api to load transacactions histories under specific account onEtherscan Loader | 🦜ï¸�🔗 Langchainthe weather data from the OpenWeatherMap's OneCall API, using the pyowm Python package. You must initialize the loader with your OpenWeatherMap API token and the names of the cities you want the weather data for.from langchain.document_loaders import WeatherDataLoader#!pip install pyowm# Set API key either by passing it in to constructor directly# or by setting the environment variable "OPENWEATHERMAP_API_KEY".from getpass import getpassOPENWEATHERMAP_API_KEY = getpass()loader = WeatherDataLoader.from_params( ["chennai", "vellore"], openweathermap_api_key=OPENWEATHERMAP_API_KEY)documents = loader.load()documentsPreviousURLNextWebBaseLoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersWeatherWeatherOpenWeatherMap is an open source weather service providerThis loader fetches the weather data from the OpenWeatherMap's OneCall API, using the pyowm PythonWeather | 🦜ï¸�🔗 Langchain'1')", 'section_title': 'Introduction', 'section_number': '1', 'paper_title': 'LLaMA: Open and Efficient Foundation Language Models', 'file_path': '/Users/31treehaus/Desktop/Papers/2302.13971.pdf'}PreviousGoogle DriveNextGutenbergCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.'y': '536.27', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '549.82', 'h': '218.65', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '563.37', 'h': '136.98', 'w': '9.46'}], [{'page': '1', 'x': '446.49', 'y': '563.37', 'h': '78.11', 'w': '9.46'}, {'page': '1', 'x': '304.69', 'y': '576.92', 'h': '138.32', 'w': '9.46'}], [{'page': '1', 'x': '447.75', 'y': '576.92', 'h': '76.66', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '590.47', 'h': '219.63', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '604.02', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '617.56', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '631.11', 'h': '220.18', 'w': '9.46'}]]", 'pages': "('1', '1')", 'section_title': 'Introduction',or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g."Books -2TB" or "Social media conversations").There exist some exceptions, notably OPT (Zhang et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022) and GLM (Zeng et al., 2022), but none that are competitive with PaLM-62B or Chinchilla.'docs[3].metadata {'text': 'Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g."Books -2TB" or "Social media conversations").There exist some exceptions, notably OPT (Zhang et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022) and GLM (Zeng et al., 2022), but none that are competitive with PaLM-62B or Chinchilla.', 'para': '2', 'bboxes': "[[{'page': '1', 'x': '317.05', 'y': '509.17', 'h': '207.73', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '522.72', 'h': '220.08', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '536.27', 'h': '218.27', 'w':for extracting, parsing, and re-structuring raw documents.It is particularly good for sturctured PDFs, like academic papers.This loader uses GROBIB to parse PDFs into Documents that retain metadata associated with the section of text.For users on Mac - (Note: additional instructions can be found here.)Install Java (Apple Silicon):$ arch -arm64 brew install openjdk@11$ brew --prefix openjdk@11/opt/homebrew/opt/openjdk@ 11In ~/.zshrc:export JAVA_HOME=/opt/homebrew/opt/openjdk@11export PATH=$JAVA_HOME/bin:$PATHThen, in Terminal:$ source ~/.zshrcConfirm install:$ which java/opt/homebrew/opt/openjdk@11/bin/java$ java -version openjdk version "11.0.19" 2023-04-18OpenJDK Runtime Environment Homebrew (build 11.0.19+0)OpenJDK 64-Bit Server VM Homebrew (build 11.0.19+0, mixed mode)Then, get Grobid:$ curl -LO https://github.com/kermitt2/grobid/archive/0.7.3.zip$ unzip 0.7.3.zipBuild$ ./gradlew clean installThen, run the server:get_ipython().system_raw('nohup ./gradlew run > grobid.log 2>&1 &')Now, we can use the data loader.from langchain.document_loaders.parsers import GrobidParserfrom langchain.document_loaders.generic import GenericLoaderloader = GenericLoader.from_filesystem( "../Papers/", glob="*", suffixes=[".pdf"], parser=GrobidParser(segment_sentences=False),)docs = loader.load()docs[3].page_content 'Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible withSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersGrobidGrobidGROBID is a machine learning library for extracting, parsing, and re-structuring raw documents.It is particularly good for sturcturedGrobid | 🦜ï¸�🔗 Langchainis a Document Database.Query Fauna documents#!pip install faunaQuery data example​from langchain.document_loaders.fauna import FaunaLoadersecret = ""query = "Item.all()" # Fauna query. Assumes that the collection is called "Item"field = "text" # The field that contains the page content. Assumes that the field is called "text"loader = FaunaLoader(query, field, secret)docs = loader.lazy_load()for value in docs: print(value)Query with Pagination​You get a after value if there are more data. You can get values after the curcor by passing in the after string in query. To learn more following this linkquery = """Item.paginate("hs+DzoPOg ... aY1hOohozrV7A")Item.all()"""loader = FaunaLoader(query, field, secret)PreviousFacebook ChatNextFigmaQuery data exampleQuery with PaginationCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersFaunaOn this pageFaunaFauna is a Document Database.Query Fauna documents#!pip install faunaQuery data example​fromFauna | 🦜ï¸�🔗 Langchainis an object storage service.AWS S3 BucketsThis covers how to load document objects from an AWS S3 File object.from langchain.document_loaders import S3FileLoader#!pip install boto3loader = S3FileLoader("testing-hwc", "fake.docx")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]PreviousAWS S3 DirectoryNextAZLyricsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersAWS S3 FileAWS S3 FileAmazon Simple Storage Service (Amazon S3) is an object storage service.AWS S3 BucketsThis covers how to load document objects fromAWS S3 File | 🦜ï¸�🔗 Langchainalign-items: center;\n }\n\n .header nav a {\n font-size: 14px;\n font-weight: 500;\n text-decoration: none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n \n \n\n
\n

Company Contact

\n \n
\n \nPreviousFaunaNextGeopandasCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.name="viewport" content="width=device-width, initial-scale=1.0">\n Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersXorbits Pandas DataFrameXorbits Pandas DataFrameThis notebook goes over how to load data from a xorbits.pandas DataFrame.#!pip install xorbitsimportXorbits Pandas DataFrame | 🦜�🔗 Langchainthat works on top of your local folder of plain text files.This notebook covers how to load documents from an Obsidian database.Since Obsidian is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory.Obsidian files also sometimes contain metadata which is a YAML block at the top of the file. These values will be added to the document's metadata. (ObsidianLoader can also be passed a collect_metadata=False argument to disable this behavior.)from langchain.document_loaders import ObsidianLoaderloader = ObsidianLoader("")docs = loader.load()PreviousNotion DB 2/2NextOpen Document Format (ODT)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersObsidianObsidianObsidian is a powerful and extensible knowledge baseObsidian | 🦜�🔗 Langchainencrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.This notebook covers how to load data from Telegram into a format that can be ingested into LangChain.from langchain.document_loaders import TelegramChatFileLoader, TelegramChatApiLoaderloader = TelegramChatFileLoader("example_data/telegram.json")loader.load() [Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace 🧤 ðŸ\x8d’ on 2020-01-01T00:00:05: You're a minute late!\n\n", metadata={'source': 'example_data/telegram.json'})]TelegramChatApiLoader loads data directly from any specified chat from Telegram. In order to export the data, you will need to authenticate your Telegram account. You can get the API_HASH and API_ID from https://my.telegram.org/auth?to=appschat_entity – recommended to be the entity of a channel.loader = TelegramChatApiLoader( chat_entity="", # recommended to use Entity here api_hash="", api_id="", user_name="", # needed only for caching the session.)loader.load()PreviousSubtitleNextTencent COS DirectoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersTelegramTelegramTelegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chatsTelegram | 🦜�🔗 Langchainnotebook covers how to load documents from a Zipfile generated from a Slack export.In order to get this Slack export, follow these instructions:🧑 Instructions for ingesting your own dataset​Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option ({your_slack_domain}.slack.com/services/export). Then, choose the right date range and click Start export. Slack will send you an email and a DM when the export is ready.The download will produce a .zip file in your Downloads folder (or wherever your downloads can be found, depending on your OS configuration).Copy the path to the .zip file, and assign it as LOCAL_ZIPFILE below.from langchain.document_loaders import SlackDirectoryLoader# Optionally set your Slack URL. This will give you proper URLs in the docs sources.SLACK_WORKSPACE_URL = "https://xxx.slack.com"LOCAL_ZIPFILE = "" # Paste the local paty to your Slack zip file here.loader = SlackDirectoryLoader(LOCAL_ZIPFILE, SLACK_WORKSPACE_URL)docs = loader.load()docsPreviousSitemapNextSnowflake🧑 Instructions for ingesting your own datasetCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersSlackOn this pageSlackSlack is an instant messaging program.This notebook covers how to load documents from a Zipfile generated from a Slack export.In order to get thisSlack | 🦜�🔗 Langchain
Team "Payroll (millions)" "Wins"
0 Nationals 81.34 98
1 Reds 82.20 97
88.19 61
Astros 60.65 55
PreviousCopy PasteNextCube Semantic LayerCustomizing the csv parsing and loadingSpecify a column to identify the document sourceUnstructuredCSVLoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc. Marlins 118.07 69 Red Sox 173.18 69 Indians 78.43 68 Twins 94.08 66 Rockies 78.06 64 Cubs 88.1979 Padres 55.24 76 Mariners 81.97 75 Mets 93.35 74 Blue Jays 75.48 73 Royals 60.91 72 Dodgers 95.14 86 White Sox 96.92 85 Brewers 97.65 83 Phillies 174.54 81 Diamondbacks 74.28 81 Pirates 63.43 79 Orioles 81.43 93 Rays 64.17 90 Angels 154.49 89 Tigers 132.30 88 Cardinals 110.30 88 Dodgers82.20 97 Yankees 197.96 95 Giants 117.62 94 Braves 83.31 94 Athletics 55.37 94 Rangers 120.51 93 Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': 'Rockies', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': 'Cubs', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': 'Astros', 'row': 29}, lookup_index=0)]UnstructuredCSVLoader​You can also load the table using the UnstructuredCSVLoader. One advantage of using UnstructuredCSVLoader is that if you use it in "elements" mode, an HTML representation of the table will be available in the metadata.from langchain.document_loaders.csv_loader import UnstructuredCSVLoaderloader = UnstructuredCSVLoader( file_path="example_data/mlb_teams_2012.csv", mode="elements")docs = loader.load()print(docs[0].metadata["text_as_html"]) 81.97\n"Wins": 75', lookup_str='', metadata={'source': 'Mariners', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': 'Mets', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': 'Blue Jays', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': 'Royals', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': 'Marlins', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': 'Red Sox', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': 'Indians', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': 'Twins', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64',88', lookup_str='', metadata={'source': 'Cardinals', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': 'Dodgers', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': 'White Sox', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': 'Brewers', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': 'Phillies', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': 'Diamondbacks', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': 'Pirates', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': 'Padres', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source':94', lookup_str='', metadata={'source': 'Giants', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': 'Braves', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': 'Athletics', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': 'Rangers', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': 'Orioles', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': 'Rays', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': 'Angels', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': 'Tigers', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': 'Cardinals','row': 28}, lookup_index=0), Document(page_content='MLB Team: Cubs\nPayroll in millions: 88.19\nWins: 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0), Document(page_content='MLB Team: Astros\nPayroll in millions: 60.65\nWins: 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 30}, lookup_index=0)]Specify a column to identify the document source​Use the source_column argument to specify a source for the document created from each row. Otherwise file_path will be used as the source for all documents created from the CSV file.This is useful when using documents loaded from CSV files for chains that answer questions using sources.loader = CSVLoader(file_path="./example_data/mlb_teams_2012.csv", source_column="Team")data = loader.load()print(data) [Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': 'Nationals', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': 'Reds', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': 'Yankees', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': 'Giants','row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue Jays\nPayroll in millions: 75.48\nWins: 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='MLB Team: Royals\nPayroll in millions: 60.91\nWins: 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='MLB Team: Marlins\nPayroll in millions: 118.07\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='MLB Team: Red Sox\nPayroll in millions: 173.18\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='MLB Team: Indians\nPayroll in millions: 78.43\nWins: 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='MLB Team: Twins\nPayroll in millions: 94.08\nWins: 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='MLB Team: Rockies\nPayroll in millions: 78.06\nWins: 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='MLB Team:'row': 14}, lookup_index=0), Document(page_content='MLB Team: Brewers\nPayroll in millions: 97.65\nWins: 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='MLB Team: Phillies\nPayroll in millions: 174.54\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='MLB Team: Diamondbacks\nPayroll in millions: 74.28\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='MLB Team: Pirates\nPayroll in millions: 63.43\nWins: 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='MLB Team: Padres\nPayroll in millions: 55.24\nWins: 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='MLB Team: Mariners\nPayroll in millions: 81.97\nWins: 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='MLB Team: Mets\nPayroll in millions: 93.35\nWins: 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue'row': 7}, lookup_index=0), Document(page_content='MLB Team: Orioles\nPayroll in millions: 81.43\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='MLB Team: Rays\nPayroll in millions: 64.17\nWins: 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='MLB Team: Angels\nPayroll in millions: 154.49\nWins: 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='MLB Team: Tigers\nPayroll in millions: 132.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='MLB Team: Cardinals\nPayroll in millions: 110.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='MLB Team: Dodgers\nPayroll in millions: 95.14\nWins: 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='MLB Team: White Sox\nPayroll in millions: 96.92\nWins: 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='MLB Team:'./example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='MLB Team: Reds\nPayroll in millions: 82.20\nWins: 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='MLB Team: Yankees\nPayroll in millions: 197.96\nWins: 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='MLB Team: Giants\nPayroll in millions: 117.62\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='MLB Team: Braves\nPayroll in millions: 83.31\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='MLB Team: Athletics\nPayroll in millions: 55.37\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='MLB Team: Rangers\nPayroll in millions: 120.51\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0),lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0)]Customizing the csv parsing and loading​See the csv module documentation for more information of what csv args are supported.loader = CSVLoader( file_path="./example_data/mlb_teams_2012.csv", csv_args={ "delimiter": ",", "quotechar": '"', "fieldnames": ["MLB Team", "Payroll in millions", "Wins"], },)data = loader.load()print(data) [Document(page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0},Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)":(millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins":55.37\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='',a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.Load csv data with a single row per document.from langchain.document_loaders.csv_loader import CSVLoaderloader = CSVLoader(file_path="./example_data/mlb_teams_2012.csv")data = loader.load()print(data) [Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source':Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersCSVOn this pageCSVA comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a dataCSV | 🦜�🔗 Langchain118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}})]PreviousTSVNextUnstructured FileCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False,'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@TRHLofficial What’s he talking about and why is it sponsored by Erik’s son?', metadata={'created_at': 'Tue Apr 18 03:32:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None,@REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url':'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat �', metadata={'created_at': 'Tue Apr 18 03:35:48 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False,'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color':'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat The Golden Rule', metadata={'created_at': 'Tue Apr 18 03:37:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name':'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328',False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat @joshrogin @glennbeck Large ships are fundamentally vulnerable to ballistic (hypersonic) missiles', metadata={'created_at': 'Tue Apr 18 03:43:25 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None,None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': 'Twitter for iPhone', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False,You must initialize the loader with your Twitter API token, and you need to pass in the Twitter username you want to extract.from langchain.document_loaders import TwitterTweetLoader#!pip install tweepyloader = TwitterTweetLoader.from_bearer_token( oauth2_bearer_token="YOUR BEARER TOKEN", twitter_users=["elonmusk"], number_tweets=50, # Default value is 100)# Or load from access token and consumer keys# loader = TwitterTweetLoader.from_secrets(# access_token='YOUR ACCESS TOKEN',# access_token_secret='YOUR ACCESS TOKEN SECRET',# consumer_key='YOUR CONSUMER KEY',# consumer_secret='YOUR CONSUMER SECRET',# twitter_users=['elonmusk'],# number_tweets=50,# )documents = loader.load()documents[:5] [Document(page_content='@MrAndyNgo @REI One store after another shutting down', metadata={'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False,Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersTwitterTwitterTwitter is an online social media and social networking service.This loader fetches the text from the Tweets of a list of Twitter users, using the tweepy Python package.Twitter | 🦜�🔗 Langchainloader works with .xml files. The page content will be the text extracted from the XML tags.from langchain.document_loaders import UnstructuredXMLLoaderloader = UnstructuredXMLLoader( "example_data/factbook.xml",)docs = loader.load()docs[0] Document(page_content='United States\n\nWashington, DC\n\nJoe Biden\n\nBaseball\n\nCanada\n\nOttawa\n\nJustin Trudeau\n\nHockey\n\nFrance\n\nParis\n\nEmmanuel Macron\n\nSoccer\n\nTrinidad & Tobado\n\nPort of Spain\n\nKeith Rowley\n\nTrack & Field', metadata={'source': 'example_data/factbook.xml'})PreviousWikipediaNextXorbits Pandas DataFrameCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersXMLXMLThe UnstructuredXMLLoader is used to load XML files. The loader works with .xml files. The page content will be the text extracted from the XML tags.fromXML | 🦜�🔗 Langchain"https://goo.gl/maps/NDSHwePEyaHMFGwh8",]loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])data = loader.load()PreviousUnstructured FileNextWeatherSetupSetupCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.a list of URLs into a document format that we can use downstream.from langchain.document_loaders import UnstructuredURLLoaderurls = [ "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023", "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023",]Pass in ssl_verify=False with headers=headers to get past ssl_verification error.loader = UnstructuredURLLoader(urls=urls)data = loader.load()Selenium URL LoaderThis covers how to load HTML documents from a list of URLs using the SeleniumURLLoader.Using selenium allows us to load pages that require JavaScript to render.Setup​To use the SeleniumURLLoader, you will need to install selenium and unstructured.from langchain.document_loaders import SeleniumURLLoaderurls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8",]loader = SeleniumURLLoader(urls=urls)data = loader.load()Playwright URL LoaderThis covers how to load HTML documents from a list of URLs using the PlaywrightURLLoader.As in the Selenium case, Playwright allows us to load pages that need JavaScript to render.Setup​To use the PlaywrightURLLoader, you will need to install playwright and unstructured. Additionally, you will need to install the Playwright Chromium browser:# Install playwrightpip install "playwright"pip install "unstructured"playwright installfrom langchain.document_loaders import PlaywrightURLLoaderurls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8",]loader =Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersURLOn this pageURLThis covers how to load HTML documents from a list of URLs into a document format that we can use downstream.from langchain.document_loaders importURL | 🦜�🔗 Langchainapp. Capture your thoughts and securely access them from any device.This notebook covers how to load documents from a Joplin database.Joplin has a REST API for accessing its local database. This loader uses the API to retrieve all notes in the database and their metadata. This requires an access token that can be obtained from the app by following these steps:Open the Joplin app. The app must stay open while the documents are being loaded.Go to settings / options and select "Web Clipper".Make sure that the Web Clipper service is enabled.Under "Advanced Options", copy the authorization token.You may either initialize the loader directly with the access token, or store it in the environment variable JOPLIN_ACCESS_TOKEN.An alternative to this approach is to export the Joplin's note database to Markdown files (optionally, with Front Matter metadata) and use a Markdown loader, such as ObsidianLoader, to load them.from langchain.document_loaders import JoplinLoaderloader = JoplinLoader(access_token="")docs = loader.load()PreviousIuguNextJupyter NotebookCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersJoplinJoplinJoplin is an open source note-taking app. Capture your thoughts and securely access them from any device.This notebook covers how to load documents fromJoplin | 🦜�🔗 Langchainthe most popular Brazilian singer since the end of the 60\'s and is called by his fans as "The King". I will keep this movie in my collection only because of these attractions (manly Dorothy Stratten). My vote is four.

Title (Brazil): "Muito Riso e Muita Alegria" ("Many Laughs and Lots of Happiness")', metadata={'label': 0})]Example​In this example, we use data from a dataset to answer a questionfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoaderdataset_name = "tweet_eval"page_content_column = "text"name = "stance_climate"loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)index = VectorstoreIndexCreator().from_loaders([loader]) Found cached dataset tweet_eval 0%| | 0/3 [00:00
It is ironic that this movie is about a detective agency where the detectives and clients get romantically involved with each other. Five years later, Bogdanovich\'s ex-girlfriend, Cybil Shepherd had a hit television series called "Moonlighting" stealing the story idea from Bogdanovich. Of course, there was a great difference in that the series relied on tons of witty dialogue, while this tries to make do with slapstick and a few screwball lines.

Bottom line: It ain\'t no "Paper Moon" and only a very pale version of "What\'s Up, Doc".', metadata={'label': 0}), Document(page_content="I can't believe that those praising this movie herein aren't thinking of some other film. I was prepared for the possibility that this would be awful, but the script (or lack thereof) makes for a film that's also pointless. On the plus side, the general level of craft on the part of the actors and technical crew is quite competent, but when you've got a sow's ear to work with you can't make a silk purse. Ben G fans should stick with just about any other movie he's been in. Dorothy S fans should stick to Galaxina. Peter B fans should stick to Last Picture Show and Target. Fans of cheap laughs at the expense of those who seem to be asking for it should stick to Peter B's amazingly awful book, Killing of the Unicorn.", metadata={'label': 0}), Document(page_content='Never cast models and Playboy bunnies in your films! Bob Fosse\'s "Star 80" about Dorothy Stratten, of whom Bogdanovich was obsessed enoughindeed. And leaves the audience bored and confused, and jealous, for that matter. This is a curio for people who want to see Dorothy Stratten, who was murdered right after filming. But Patti Hanson, who would, in real life, marry Keith Richards, was also a model, like Stratten, but is a lot better and has a more ample part. In fact, Stratten's part seemed forced; added. She doesn't have a lot to do with the story, which is pretty convoluted to begin with. All in all, every character in this film is somebody that very few people can relate with, unless you're millionaire from Manhattan with beautiful supermodels at your beckon call. For the rest of us, it's an irritating snore fest. That's what happens when you're out of touch. You entertain your few friends with inside jokes, and bore all the rest.", metadata={'label': 0}), Document(page_content='It was great to see some of my favorite stars of 30 years ago including John Ritter, Ben Gazarra and Audrey Hepburn. They looked quite wonderful. But that was it. They were not given any characters or good lines to work with. I neither understood or cared what the characters were doing.

Some of the smaller female roles were fine, Patty Henson and Colleen Camp were quite competent and confident in their small sidekick parts. They showed some talent and it is sad they didn\'t go on to star in more and better films. Sadly, I didn\'t think Dorothy Stratten got a chance to act in this her only important film role.

The film appears to have some fans, and I was very open-minded when I started watching it. I am a big Peter Bogdanovich fan and I enjoyed his last movie, "Cat\'s Meow" and all his early ones from "Targets" towere long on charm. "They All Laughed" tries to coast on its good intentions, but nobody- least of all Peter Bogdanovich - has the good sense to put on the brakes.

Due in no small part to the tragic death of Dorothy Stratten, this movie has a special place in the heart of Mr. Bogdanovich- he even bought it back from its producers, then distributed it on his own and went bankrupt when it didn\'t prove popular. His rise and fall is among the more sympathetic and tragic of Hollywood stories, so there\'s no joy in criticizing the film... there _is_ real emotional investment in Ms. Stratten\'s scenes. But "Laughed" is a faint echo of "The Last Picture Show", "Paper Moon" or "What\'s Up, Doc"- following "Daisy Miller" and "At Long Last Love", it was a thundering confirmation of the phase from which P.B. has never emerged.

All in all, though, the movie is harmless, only a waste of rental. I want to watch people having a good time, I\'ll go to the park on a sunny day. For filmic expressions of joy and love, I\'ll stick to Ernest Lubitsch and Jaques Demy...', metadata={'label': 0}), Document(page_content="This is said to be a personal film for Peter Bogdonavitch. He based it on his life but changed things around to fit the characters, who are detectives. These detectives date beautiful models and have no problem getting them. Sounds more like a millionaire playboy filmmaker than a detective, doesn't it? This entire movie was written by Peter, and it shows how out of touch with real people he was. You're supposed to write what you know, and he did that, indeed. And leaves the audience bored and confused, and jealous, for that matter. This isthat looks like it was a lot of fun to shoot\x97 TOO much fun, nobody is getting any actual work done, and that almost always makes for a movie that\'s no fun to watch.

Ritter dons glasses so as to hammer home his character\'s status as a sort of doppleganger of the bespectacled Bogdanovich; the scenes with the breezy Ms. Stratten are sweet, but have an embarrassing, look-guys-I\'m-dating-the-prom-queen feel to them. Ben Gazzara sports his usual cat\'s-got-canary grin in a futile attempt to elevate the meager plot, which requires him to pursue Audrey Hepburn with all the interest of a narcoleptic at an insomnia clinic. In the meantime, the budding couple\'s respective children (nepotism alert: Bogdanovich\'s daughters) spew cute and pick up some fairly disturbing pointers on \'love\' while observing their parents. (Ms. Hepburn, drawing on her dignity, manages to rise above the proceedings- but she has the monumental challenge of playing herself, ostensibly.) Everybody looks great, but so what? It\'s a movie and we can expect that much, if that\'s what you\'re looking for you\'d be better off picking up a copy of Vogue.

Oh- and it has to be mentioned that Colleen Camp thoroughly annoys, even apart from her singing, which, while competent, is wholly unconvincing... the country and western numbers are woefully mismatched with the standards on the soundtrack. Surely this is NOT what Gershwin (who wrote the song from which the movie\'s title is derived) had in mind; his stage musicals of the 20\'s may have been slight, but at least they were long on charm. "They All Laughed" tries to coast on its good intentions, butbeing portrayed the way she was in the movie is horrendous. I cannot believe out of all the actresses in the world who could play a much better Lucy, the producers decided to get Rachel York. She might be a good actress in other roles but to play the role of Lucille Ball is tough. It is pretty hard to find someone who could resemble Lucille Ball, but they could at least find someone a bit similar in looks and talent. If you noticed York\'s portrayal of Lucy in episodes of I Love Lucy like the chocolate factory or vitavetavegamin, nothing is similar in any way-her expression, voice, or movement.

To top it all off, Danny Pino playing Desi Arnaz is horrible. Pino does not qualify to play as Ricky. He\'s small and skinny, his accent is unreal, and once again, his acting is unbelievable. Although Fred and Ethel were not similar either, they were not as bad as the characters of Lucy and Ricky.

Overall, extremely horrible casting and the story is badly told. If people want to understand the real life situation of Lucille Ball, I suggest watching A&E Biography of Lucy and Desi, read the book from Lucille Ball herself, or PBS\' American Masters: Finding Lucy. If you want to see a docudrama, "Before the Laughter" would be a better choice. The casting of Lucille Ball and Desi Arnaz in "Before the Laughter" is much better compared to this. At least, a similar aspect is shown rather than nothing.', metadata={'label': 0}), Document(page_content='Who are these "They"- the actors? the filmmakers? Certainly couldn\'t be the audience- this is among the most air-puffed productions in existence. It\'s the kind of movie that looks like it was a lot of fun to shoot\x97 TOO much fun, nobody is gettingDocument(page_content="I would put this at the top of my list of films in the category of unwatchable trash! There are films that are bad, but the worst kind are the ones that are unwatchable but you are suppose to like them because they are supposed to be good for you! The sex sequences, so shocking in its day, couldn't even arouse a rabbit. The so called controversial politics is strictly high school sophomore amateur night Marxism. The film is self-consciously arty in the worst sense of the term. The photography is in a harsh grainy black and white. Some scenes are out of focus or taken from the wrong angle. Even the sound is bad! And some people call this art?

", metadata={'label': 0}), Document(page_content="Whoever wrote the screenplay for this movie obviously never consulted any books about Lucille Ball, especially her autobiography. I've never seen so many mistakes in a biopic, ranging from her early years in Celoron and Jamestown to her later years with Desi. I could write a whole list of factual errors, but it would go on for pages. In all, I believe that Lucille Ball is one of those inimitable people who simply cannot be portrayed by anyone other than themselves. If I were Lucie Arnaz and Desi, Jr., I would be irate at how many mistakes were made in this film. The filmmakers tried hard, but the movie seems awfully sloppy to me.", metadata={'label': 0}), Document(page_content='When I first saw a glimpse of this movie, I quickly noticed the actress who was playing the role of Lucille Ball. Rachel York\'s portrayal of Lucy is absolutely awful. Lucille Ball was an astounding comedian with incredible talent. To think about a legend like Lucille Ball being portrayed the way she was in the movie is horrendous. I cannot believe out of all theColumbus"). Then a screening at a local film museum beckoned - Finally I could see this film, except now I was as old as my parents were when they schlepped to see it!!

The ONLY reason this film was not condemned to the anonymous sands of time was because of the obscenity case sparked by its U.S. release. MILLIONS of people flocked to this stinker, thinking they were going to see a sex film...Instead, they got lots of closeups of gnarly, repulsive Swedes, on-street interviews in bland shopping malls, asinie political pretension...and feeble who-cares simulated sex scenes with saggy, pale actors.

Cultural icon, holy grail, historic artifact..whatever this thing was, shred it, burn it, then stuff the ashes in a lead box!

Elite esthetes still scrape to find value in its boring pseudo revolutionary political spewings..But if it weren\'t for the censorship scandal, it would have been ignored, then forgotten.

Instead, the "I Am Blank, Blank" rhythymed title was repeated endlessly for years as a titilation for porno films (I am Curious, Lavender - for gay films, I Am Curious, Black - for blaxploitation films, etc..) and every ten years or so the thing rises from the dead, to be viewed by a new generation of suckers who want to see that "naughty sex film" that "revolutionized the film industry"...

Yeesh, avoid like the plague..Or if you MUST see it - rent the video and fast forward to the "dirty" parts, just to get it over with.

', metadata={'label': 0}), Document(page_content="I would put this at the top of my list of films in the category ofthe future. This film is interesting as an experiment but tells no cogent story.

One might feel virtuous for sitting thru it because it touches on so many IMPORTANT issues but it does so without any discernable motive. The viewer comes away with no new perspectives (unless one comes up with one while one's mind wanders, as it will invariably do during this pointless film).

One might better spend one's time staring out a window at a tree growing.

", metadata={'label': 0}), Document(page_content="This film was probably inspired by Godard's Masculin, féminin and I urge you to see that film instead.

The film has two strong elements and those are, (1) the realistic acting (2) the impressive, undeservedly good, photo. Apart from that, what strikes me most is the endless stream of silliness. Lena Nyman has to be most annoying actress in the world. She acts so stupid and with all the nudity in this film,...it's unattractive. Comparing to Godard's film, intellectuality has been replaced with stupidity. Without going too far on this subject, I would say that follows from the difference in ideals between the French and the Swedish society.

A movie of its time, and place. 2/10.", metadata={'label': 0}), Document(page_content='Oh, brother...after hearing about this ridiculous film for umpteen years all I can think of is that old Peggy Lee song..

"Is that all there is??" ...I was just an early teen when this smoked fish hit the U.S. I was too young to get in the theater (although I did manage to sneak into "Goodbye Columbus"). Then a screening at a local film museum beckoned - Finally I could see this film,be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\'t have much of a plot.', metadata={'label': 0}), Document(page_content='"I Am Curious: Yellow" is a risible and pretentious steaming pile. It doesn\'t matter what one\'s political views are because this film can hardly be taken seriously on any level. As for the claim that frontal male nudity is an automatic NC-17, that isn\'t true. I\'ve seen R-rated films with male nudity. Granted, they only offer some fleeting views, but where are the R-rated films with gaping vulvas and flapping labia? Nowhere, because they don\'t exist. The same goes for those crappy cable shows: schlongs swinging in the breeze but not a clitoris in sight. And those pretentious indie movies like The Brown Bunny, in which we\'re treated to the site of Vincent Gallo\'s throbbing johnson, but not a trace of pink visible on Chloe Sevigny. Before crying (or implying) "double-standard" in matters of nudity, the mentally obtuse should take into account one unavoidably obvious anatomical difference between men and women: there are no genitals on display when actresses appears nude, and the same cannot be said for a man. In fact, you generally won\'t see female genitals in an American film in anything short of porn or explicit erotica. This alleged double-standard is less a double standard than an admittedly depressing ability to come to terms culturally with the insides of women\'s bodies.', metadata={'label': 0}), Document(page_content="If only to avoid making this type of film in the future. This film is interesting as an experiment but tells no cogent story.

I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film forHub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. They used for a diverse range of tasks such as translation,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersHuggingFace datasetOn this pageHuggingFace datasetThe Hugging Face Hub is home to over 5,000 datasets in more than 100 languages that can be usedHuggingFace dataset | 🦜ï¸�🔗 Langchain'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309'}PreviousOpen Document Format (ODT)NextOrg-modeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.data. For a dataset such as SF crime, to to the API tab on top right. That provides you with the dataset identifier.Use the dataset identifier to grab specific tables for a given city_id (data.sfgov.org) - E.g., vw6y-z8j6 for SF 311 data.E.g., tmnf-yvry for SF Police data.pip install sodapyfrom langchain.document_loaders import OpenCityDataLoaderdataset = "vw6y-z8j6" # 311 datadataset = "tmnf-yvry" # crime dataloader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id=dataset, limit=2000)docs = loader.load() WARNING:root:Requests made without an app_token will be subject to strict throttling limits.eval(docs[0].page_content) {'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates':Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersOpen City DataOpen City DataSocrata provides an API for city open data. For a dataset such as SF crime, to to the API tab on top right. ThatOpen City Data | 🦜ï¸�🔗 Langchainfor archived webpages. MHTML, sometimes referred as MHT, stands for MIME HTML is a single file in which entire webpage is archived. When one saves a webpage as MHTML format, this file extension will contain HTML code, images, audio files, flash animation etc.from langchain.document_loaders import MHTMLLoader# Create a new loader object for the MHTML fileloader = MHTMLLoader( file_path="../../../../../../tests/integration_tests/examples/example.mht")# Load the document from the filedocuments = loader.load()# Print the documents to see the resultsfor doc in documents: print(doc) page_content='LangChain\nLANG CHAIN 🦜ï¸�🔗Official Home Page\xa0\n\n\n\n\n\n\n\nIntegrations\n\n\n\nFeatures\n\n\n\n\nBlog\n\n\n\nConceptual Guide\n\n\n\n\nPython Repo\n\n\nJavaScript Repo\n\n\n\nPython Documentation \n\n\nJavaScript Documentation\n\n\n\n\nPython ChatLangChain \n\n\nJavaScript ChatLangChain\n\n\n\n\nDiscord \n\n\nTwitter\n\n\n\n\nIf you have any comments about our WEB page, you can \nwrite us at the address shown above. However, due to \nthe limited number of personnel in our corporate office, we are unable to \nprovide a direct response.\n\nCopyright © 2023-2023 LangChain Inc.\n\n\n' metadata={'source': '../../../../../../tests/integration_tests/examples/example.mht', 'title': 'LangChain'}PreviousMergeDocLoaderNextMicrosoft OneDriveCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersmhtmlmhtmlMHTML is a is used both for emails but also for archived webpages. MHTML, sometimes referred as MHT, stands for MIME HTML is amhtml | 🦜ï¸�🔗 Langchainstate explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}docs[0].page_content[:400] # all pages of the Document content 'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a ï¬�nite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on ï¬�bers of a\nï¬�xed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'PreviousApify DatasetNextAsyncHtmlLoaderInstallationExamplesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.This notebook shows how to load scientific articles from Arxiv.org into a document format that we can use downstream.Installation​First, you need to install arxiv python package.#!pip install arxivSecond, you need to install PyMuPDF python package which transforms PDF files downloaded from the arxiv.org site into the text format.#!pip install pymupdfExamples​ArxivLoader has these arguments:query: free text which used to find documents in the Arxivoptional load_max_docs: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments.optional load_all_available_meta: default=False. By default only the most important fields downloaded: Published (date when document was published/last updated), Title, Authors, Summary. If True, other fields also downloaded.from langchain.document_loaders import ArxivLoaderdocs = ArxivLoader(query="1605.08386", load_max_docs=2).load()len(docs)docs[0].metadata # meta-information of the Document {'Published': '2016-05-26', 'Title': 'Heat-bath random walks with Markov bases', 'Authors': 'Caprice Stanley, Tobias Windisch', 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersArxivOn this pageArxivarXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology,Arxiv | 🦜ï¸�🔗 Langchainvideo sharing and social media platform created by Google.PreviousPromptLayer ChatOpenAINextEtherscan LoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.more.📄ï¸� URLThis covers how to load HTML documents from a list of URLs into a document format that we can use downstream.📄ï¸� WeatherOpenWeatherMap is an open source weather service provider📄ï¸� WebBaseLoaderThis covers how to use WebBaseLoader to load all text from HTML webpages into a document format that we can use downstream. For more custom logic for loading webpages look at some child class examples such as IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoader📄ï¸� WhatsApp ChatWhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.📄ï¸� WikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.📄ï¸� XMLThe UnstructuredXMLLoader is used to load XML files. The loader works with .xml files. The page content will be the text extracted from the XML tags.📄ï¸� Xorbits Pandas DataFrameThis notebook goes over how to load data from a xorbits.pandas DataFrame.📄ï¸� Loading documents from a YouTube urlBuilding chat or QA applications on YouTube videos is a topic of high interest.📄ï¸� YouTube transcriptsYouTube is an online video sharing and social media platform created by Google.PreviousPromptLayer ChatOpenAINextEtherscancross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.📄ï¸� Tencent COS DirectoryThis covers how to load document objects from a Tencent COS Directory.📄ï¸� Tencent COS FileThis covers how to load document object from a Tencent COS File.📄ï¸� 2Markdown2markdown service transforms website content into structured markdown files.📄ï¸� TOMLTOML is a file format for configuration files. It is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. TOML is implemented in many programming languages. The name TOML is an acronym for "Tom's Obvious, Minimal Language" referring to its creator, Tom Preston-Werner.📄ï¸� TrelloTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.📄ï¸� TSVA tab-separated values (TSV) file is a simple, text-based file format for storing tabular data.[3] Records are separated by newlines, and values within a record are separated by tab characters.📄ï¸� TwitterTwitter is an online social media and social networking service.📄ï¸� Unstructured FileThis notebook covers how to use Unstructured package to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.📄ï¸� URLThis covers how to load HTML documents fromSnowflakeThis notebooks goes over how to load documents from Snowflake📄ï¸� Source CodeThis notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining code top-level code outside the already loaded functions and classes will be loaded into a seperate document.📄ï¸� SpreedlySpreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.📄ï¸� StripeStripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.📄ï¸� SubtitleThe SubRip file format is described on the Matroska multimedia container format website as "perhaps the most basic of all subtitle formats." SubRip (SubRip Text) files are named with the extension .srt, and contain formatted lines of plain text in groups separated by a blank line. Subtitles are numbered sequentially, starting at 1. The timecode format used is hoursseconds,milliseconds with time units fixed to two zero-padded digits and fractions fixed to three zero-padded digits (0000,000). The fractional separator used is the comma, since the program was written in France.📄ï¸� TelegramTelegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optionalhow to load data from a pandas DataFrame.📄ï¸� PsychicThis notebook covers how to load documents from Psychic. See here for more details.📄ï¸� PySpark DataFrame LoaderThis notebook goes over how to load data from a PySpark DataFrame.📄ï¸� ReadTheDocs DocumentationRead the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.📄ï¸� Recursive URL LoaderWe may want to process load all URLs under a root directory.📄ï¸� RedditReddit is an American social news aggregation, content rating, and discussion website.📄ï¸� RoamROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.📄ï¸� RocksetRockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).📄ï¸� RSTA reStructured Text (RST) file is a file format for textual data used primarily in the Python programming language community for technical documentation.📄ï¸� SitemapExtends from the WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrape and load all pages in the sitemap, returning each page as a Document.📄ï¸� SlackSlack is an instant messaging program.📄ï¸� SnowflakeThis notebooks goes over how to load documents fromMicrosoft.📄ï¸� Microsoft PowerPointMicrosoft PowerPoint is a presentation program by Microsoft.📄ï¸� Microsoft WordMicrosoft Word is a word processor developed by Microsoft.📄ï¸� Modern TreasuryModern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.📄ï¸� Notion DB 1/2Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.📄ï¸� Notion DB 2/2Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.📄ï¸� ObsidianObsidian is a powerful and extensible knowledge base📄ï¸� Open Document Format (ODT)The Open Document Format for Office Applications (ODF), also known as OpenDocument, is an open file format for word processing documents, spreadsheets, presentations and graphics and using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications.📄ï¸� Open City DataSocrata provides an API for city open data.📄ï¸� Org-modeA Org Mode document is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.📄ï¸� Pandas DataFrameThis notebook goes over how to load data from a pandas DataFrame.📄ï¸� PsychicThisIMSDbIMSDb is the Internet Movie Script Database.📄ï¸� IuguIugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.📄ï¸� JoplinJoplin is an open source note-taking app. Capture your thoughts and securely access them from any device.📄ï¸� Jupyter NotebookJupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents.📄ï¸� LarkSuite (FeiShu)LarkSuite is an enterprise collaboration platform developed by ByteDance.📄ï¸� MastodonMastodon is a federated social media and social networking service.📄ï¸� MediaWikiDumpMediaWiki XML Dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. A XML dump does not create a full backup of the wiki database, the dump does not contain user accounts, images, edit logs, etc.📄ï¸� MergeDocLoaderMerge the documents returned from a set of specified data loaders.📄ï¸� mhtmlMHTML is a is used both for emails but also for archived webpages. MHTML, sometimes referred as MHT, stands for MIME HTML is a single file in which entire webpage is archived. When one saves a webpage as MHTML format, this file extension will contain HTML code, images, audio files, flash animation etc.📄ï¸� Microsoft OneDriveMicrosoft OneDrive (formerly SkyDrive) is a file hosting service operated by Microsoft.📄ï¸� Microsoft PowerPointMicrosoft PowerPoint is a presentation program byGoogle Cloud Storage DirectoryGoogle Cloud Storage is a managed service for storing unstructured data.📄ï¸� Google Cloud Storage FileGoogle Cloud Storage is a managed service for storing unstructured data.📄ï¸� Google DriveGoogle Drive is a file storage and synchronization service developed by Google.📄ï¸� GrobidGROBID is a machine learning library for extracting, parsing, and re-structuring raw documents.📄ï¸� GutenbergProject Gutenberg is an online library of free eBooks.📄ï¸� Hacker NewsHacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as "anything that gratifies one's intellectual curiosity."📄ï¸� HuggingFace datasetThe Hugging Face Hub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. They used for a diverse range of tasks such as translation,📄ï¸� iFixitiFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0.📄ï¸� ImagesThis covers how to load images such as JPG or PNG into a document format that we can use downstream.📄ï¸� Image captionsBy default, the loader utilizes the pre-trained Salesforce BLIP image captioning model.📄ï¸� IMSDbIMSDb is the Internet Movie Scriptedited, searched, and exported.🗃ï¸� example_data1 items📄ï¸� Microsoft ExcelThe UnstructuredExcelLoader is used to load Microsoft Excel files. The loader works with both .xlsx and .xls files. The page content will be the raw text of the Excel file. If you use the loader in "elements" mode, an HTML representation of the Excel file will be available in the document metadata under the textashtml key.📄ï¸� Facebook ChatMessenger) is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging service in 2010.📄ï¸� FaunaFauna is a Document Database.📄ï¸� FigmaFigma is a collaborative web application for interface design.📄ï¸� GeopandasGeopandas is an open source project to make working with geospatial data in python easier.📄ï¸� GitGit is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.📄ï¸� GitBookGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.📄ï¸� GitHubThis notebooks shows how you can load issues and pull requests (PRs) for a given repository on GitHub. We will use the LangChain Python repository as an example.📄ï¸� Google BigQueryGoogle BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.📄ï¸� Google Cloud Storage DirectoryGoogle Cloud Storage is a managed service for storing unstructuredDatadog LogsDatadog is a monitoring and analytics platform for cloud-scale applications.📄ï¸� DiffbotUnlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.📄ï¸� DiscordDiscord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.📄ï¸� DocugamiThis notebook covers how to load documents from Docugami. It provides the advantages of using this system over alternative data loaders.📄ï¸� DuckDBDuckDB is an in-process SQL OLAP database management system.📄ï¸� EmailThis notebook shows how to load email (.eml) or Microsoft Outlook (.msg) files.📄ï¸� Embaasembaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.📄ï¸� EPubEPUB is an e-book file format that uses the ".epub" file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.📄ï¸� EverNoteEverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual "notebooks" and can be tagged, annotated, edited, searched, and exported.🗃ï¸� example_data1Brave SearchBrave Search is a search engine developed by Brave Software.📄ï¸� BrowserlessBrowserless is a service that allows you to run headless Chrome instances in the cloud. It's a great way to run browser-based automation at scale without having to worry about managing your own infrastructure.📄ï¸� chatgpt_loaderChatGPT Data📄ï¸� College ConfidentialCollege Confidential gives information on 3,800+ colleges and universities.📄ï¸� ConfluenceConfluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities.📄ï¸� CoNLL-UCoNLL-U is revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:📄ï¸� Copy PasteThis notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.📄ï¸� CSVA comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.📄ï¸� Cube Semantic LayerThis notebook demonstrates the process of retrieving Cube's data model metadata in a format suitable for passing to LLMs as embeddings, thereby enhancing contextual information.📄ï¸� Datadog LogsDatadog is a monitoring and analytics platform for cloud-scale(Amazon S3) is an object storage service.📄ï¸� AZLyricsAZLyrics is a large, legal, every day growing collection of lyrics.📄ï¸� Azure Blob Storage ContainerAzure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.📄ï¸� Azure Blob Storage FileAzure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.📄ï¸� BibTeXBibTeX is a file format and reference management system commonly used in conjunction with LaTeX typesetting. It serves as a way to organize and store bibliographic information for academic and research documents.📄ï¸� BiliBiliBilibili is one of the most beloved long-form video sites in China.📄ï¸� BlackboardBlackboard Learn (previously the Blackboard Learning Management System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. The software features course management, customizable open architecture, and scalable design that allows integration with student information systems and authentication protocols. It may be installed on local servers, hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services. Its main purposes are stated to include the addition of online elements to courses traditionally delivered face-to-face and development of completely online courses with few or no face-to-face meetings📄ï¸� BlockchainOverview📄ï¸� Brave SearchBrave Search is a search engine developed by Braveacreomacreom is a dev-first knowledge base with tasks running on local markdown files.📄ï¸� Airbyte JSONAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄ï¸� Airtable* Get your API key here.📄ï¸� Alibaba Cloud MaxComputeAlibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production costs, and ensure data security.📄ï¸� Apify DatasetApify Dataset is a scaleable append-only storage with sequential access built for storing structured web scraping results, such as a list of products or Google SERPs, and then export them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors—serverless cloud programs for varius web scraping, crawling, and data extraction use cases.📄ï¸� ArxivarXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.📄ï¸� AsyncHtmlLoaderAsyncHtmlLoader loads raw HTML from a list of urls concurrently.📄ï¸� AWS S3 DirectoryAmazon Simple Storage Service (Amazon S3) is an object storage service📄ï¸� AWS S3 FileAmazon Simple Storage Service (Amazon S3) is an object storage service.📄ï¸�Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersDocument loaders📄ï¸� Etherscan LoaderOverview📄ï¸� acreomacreom is a dev-first knowledge baseDocument loaders | 🦜ï¸�🔗 Langchainchat model and the chain.Wrong:chat = ChatOpenAI(temperature=0.9, callbacks=[ContextCallbackHandler(token)])chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[ContextCallbackHandler(token)])Correct:handler = ContextCallbackHandler(token)chat = ChatOpenAI(temperature=0.9, callbacks=[callback])chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[callback])Example​import osfrom langchain.chat_models import ChatOpenAIfrom langchain import LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.prompts.chat import ( ChatPromptTemplate, HumanMessagePromptTemplate,)from langchain.callbacks import ContextCallbackHandlertoken = os.environ["CONTEXT_API_TOKEN"]human_message_prompt = HumanMessagePromptTemplate( prompt=PromptTemplate( template="What is a good name for a company that makes {product}?", input_variables=["product"], ))chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])callback = ContextCallbackHandler(token)chat = ChatOpenAI(temperature=0.9, callbacks=[callback])chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[callback])print(chain.run("colorful socks"))PreviousArgillaNextInfino - LangChain LLM Monitoring ExampleInstallation and SetupGetting API CredentialsSetup ContextUsageUsing the Context callback within a Chat ModelUsing the Context callback within ChainsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Gain critical insights, optimise poor experiences, and minimise brand risks.In this guide we will show you how to integrate with Context.Installation and Setup​$ pip install context-python --upgradeGetting API Credentials​To get your Context API token:Go to the settings page within your Context account (https://go.getcontext.ai/settings).Generate a new API Token.Store this token somewhere secure.Setup Context​To use the ContextCallbackHandler, import the handler from Langchain and instantiate it with your Context API token.Ensure you have installed the context-python package before using the handler.import osfrom langchain.callbacks import ContextCallbackHandlertoken = os.environ["CONTEXT_API_TOKEN"]context_callback = ContextCallbackHandler(token)Usage​Using the Context callback within a Chat Model​The Context callback handler can be used to directly record transcripts between users and AI assistants.Example​import osfrom langchain.chat_models import ChatOpenAIfrom langchain.schema import ( SystemMessage, HumanMessage,)from langchain.callbacks import ContextCallbackHandlertoken = os.environ["CONTEXT_API_TOKEN"]chat = ChatOpenAI( headers={"user_id": "123"}, temperature=0, callbacks=[ContextCallbackHandler(token)])messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage(content="I love programming."),]print(chat(messages))Using the Context callback within Chains​The Context callback handler can also be used to record the inputs and outputs of chains. Note that intermediate steps of the chain are not recorded - only the starting inputs and final outputs.Note: Ensure that you pass the same context object to the chat model and the chain.Wrong:chat = ChatOpenAI(temperature=0.9,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksArgillaContextInfino - LangChain LLM Monitoring ExamplePromptLayerStreamlitChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsCallbacksContextOn this pageContextContext provides product analytics for AI chatbots.Context helps you understand how users are interacting with your AI chat products.Context | 🦜ï¸�🔗 Langchainexample below won't work.from langchain.agents import AgentType, initialize_agent, load_toolsfrom langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandlerfrom langchain.llms import OpenAIargilla_callback = ArgillaCallbackHandler( dataset_name="langchain-dataset", api_url=os.environ["ARGILLA_API_URL"], api_key=os.environ["ARGILLA_API_KEY"],)callbacks = [StdOutCallbackHandler(), argilla_callback]llm = OpenAI(temperature=0.9, callbacks=callbacks)tools = load_tools(["serpapi"], llm=llm, callbacks=callbacks)agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callbacks=callbacks,)agent.run("Who was the first president of the United States of America?") > Entering new AgentExecutor chain... I need to answer a historical question Action: Search Action Input: "who was the first president of the United States of America" Observation: George Washington Thought: George Washington was the first president Final Answer: George Washington was the first president of the United States of America. > Finished chain. 'George Washington was the first president of the United States of America.'PreviousCallbacksNextContextInstallation and SetupGetting API CredentialsSetup ArgillaTrackingScenario 1: Tracking an LLMScenario 2: Tracking an LLM in a chainScenario 3: Using an Agent with ToolsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.You are a playwright. Given the title of play, it is your job to write a synopsis for that title. Title: Documentary about Bigfoot in Paris Playwright: This is a synopsis for the above play: > Finished chain. [{'text': "\n\nDocumentary about Bigfoot in Paris focuses on the story of a documentary filmmaker and their search for evidence of the legendary Bigfoot creature in the city of Paris. The play follows the filmmaker as they explore the city, meeting people from all walks of life who have had encounters with the mysterious creature. Through their conversations, the filmmaker unravels the story of Bigfoot and finds out the truth about the creature's presence in Paris. As the story progresses, the filmmaker learns more and more about the mysterious creature, as well as the different perspectives of the people living in the city, and what they think of the creature. In the end, the filmmaker's findings lead them to some surprising and heartwarming conclusions about the creature's existence and the importance it holds in the lives of the people in Paris."}]Scenario 3: Using an Agent with Tools​Finally, as a more advanced workflow, you can create an agent that uses some tools. So that ArgillaCallbackHandler will keep track of the input and the output, but not about the intermediate steps/thoughts, so that given a prompt we log the original prompt and the final response to that given prompt.Note that for this scenario we'll be using Google Search API (Serp API) so you will need to both install google-search-results as pip install google-search-results, and to set the Serp API Key as os.environ["SERPAPI_API_KEY"] = "..." (you can find it at https://serpapi.com/dashboard), otherwise the example below won't work.from langchain.agents import AgentType, initialize_agent, load_toolsfromdawn arrives\n\nA fresh start, no reason to hide\n\nSomewhere down the road, there's a heart that beats\n\nBelieve in yourself, you'll always succeed.", generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 504, 'total_tokens': 528, 'prompt_tokens': 24}, 'model_name': 'text-davinci-003'})Scenario 2: Tracking an LLM in a chain​Then we can create a chain using a prompt template, and then track the initial prompt and the final response in Argilla.from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandlerfrom langchain.llms import OpenAIfrom langchain.chains import LLMChainfrom langchain.prompts import PromptTemplateargilla_callback = ArgillaCallbackHandler( dataset_name="langchain-dataset", api_url=os.environ["ARGILLA_API_URL"], api_key=os.environ["ARGILLA_API_KEY"],)callbacks = [StdOutCallbackHandler(), argilla_callback]llm = OpenAI(temperature=0.9, callbacks=callbacks)template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.Title: {title}Playwright: This is a synopsis for the above play:"""prompt_template = PromptTemplate(input_variables=["title"], template=template)synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]synopsis_chain.apply(test_prompts) > Entering new LLMChain chain... Prompt after formatting: You are a playwright. Given the title of play, it is your job to write a synopsisother magnet?\nA. "I find you very attractive!"', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text="\n\nThe world is charged with the grandeur of God.\nIt will flame out, like shining from shook foil;\nIt gathers to a greatness, like the ooze of oil\nCrushed. Why do men then now not reck his rod?\n\nGenerations have trod, have trod, have trod;\nAnd all is seared with trade; bleared, smeared with toil;\nAnd wears man's smudge and shares man's smell: the soil\nIs bare now, nor can foot feel, being shod.\n\nAnd for all this, nature is never spent;\nThere lives the dearest freshness deep down things;\nAnd though the last lights off the black West went\nOh, morning, at the brown brink eastward, springs —\n\nBecause the Holy Ghost over the bent\nWorld broods with warm breast and with ah! bright wings.\n\n~Gerard Manley Hopkins", generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nQ: What did one ocean say to the other ocean?\nA: Nothing, they just waved.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text="\n\nA poem for you\n\nOn a field of green\n\nThe sky so blue\n\nA gentle breeze, the sun above\n\nA beautiful world, for us to love\n\nLife is a journey, full of surprise\n\nFull of joy and full of surprise\n\nBe brave and take small steps\n\nThe future will be revealed with depth\n\nIn the morning, when dawn arrives\n\nA fresh start, no reason to hide\n\nSomewhere down the road,api_url=os.environ["ARGILLA_API_URL"], api_key=os.environ["ARGILLA_API_KEY"],)Scenario 1: Tracking an LLM​First, let's just run a single LLM a few times and capture the resulting prompt-response pairs in Argilla.from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandlerfrom langchain.llms import OpenAIargilla_callback = ArgillaCallbackHandler( dataset_name="langchain-dataset", api_url=os.environ["ARGILLA_API_URL"], api_key=os.environ["ARGILLA_API_KEY"],)callbacks = [StdOutCallbackHandler(), argilla_callback]llm = OpenAI(temperature=0.9, callbacks=callbacks)llm.generate(["Tell me a joke", "Tell me a poem"] * 3) LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when he hit the wall? \nA: Dam.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nThe Moon \n\nThe moon is high in the midnight sky,\nSparkling like a star above.\nThe night so peaceful, so serene,\nFilling up the air with love.\n\nEver changing and renewing,\nA never-ending light of grace.\nThe moon remains a constant view,\nA reminder of life’s gentle pace.\n\nThrough time and space it guides us on,\nA never-fading beacon of hope.\nThe moon shines down on us all,\nAs it continues to rise and elope.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nQ. What did one magnet say to the other magnet?\nA. "I find you very attractive!"', generation_info={'finish_reason':rg.TextField(name="prompt"), rg.TextField(name="response"), ], questions=[ rg.RatingQuestion( name="response-rating", description="How would you rate the quality of the response?", values=[1, 2, 3, 4, 5], required=True, ), rg.TextQuestion( name="response-feedback", description="What feedback do you have for the response?", required=False, ), ], guidelines="You're asked to rate the quality of the response and provide feedback.",)rg.init( api_url=os.environ["ARGILLA_API_URL"], api_key=os.environ["ARGILLA_API_KEY"],)dataset.push_to_argilla("langchain-dataset")📌 NOTE: at the moment, just the prompt-response pairs are supported as FeedbackDataset.fields, so the ArgillaCallbackHandler will just track the prompt i.e. the LLM input, and the response i.e. the LLM output.Tracking​To use the ArgillaCallbackHandler you can either use the following code, or just reproduce one of the examples presented in the following sections.from langchain.callbacks import ArgillaCallbackHandlerargilla_callback = ArgillaCallbackHandler( dataset_name="langchain-dataset", api_url=os.environ["ARGILLA_API_URL"],from data labeling to model monitoring.In this guide we will demonstrate how to track the inputs and reponses of your LLM to generate a dataset in Argilla, using the ArgillaCallbackHandler.It's useful to keep track of the inputs and outputs of your LLMs to generate datasets for future fine-tuning. This is especially useful when you're using a LLM to generate data for a specific task, such as question answering, summarization, or translation.Installation and Setup​pip install argilla --upgradepip install openaiGetting API Credentials​To get the Argilla API credentials, follow the next steps:Go to your Argilla UI.Click on your profile picture and go to "My settings".Then copy the API Key.In Argilla the API URL will be the same as the URL of your Argilla UI.To get the OpenAI API credentials, please visit https://platform.openai.com/account/api-keysimport osos.environ["ARGILLA_API_URL"] = "..."os.environ["ARGILLA_API_KEY"] = "..."os.environ["OPENAI_API_KEY"] = "..."Setup Argilla​To use the ArgillaCallbackHandler we will need to create a new FeedbackDataset in Argilla to keep track of your LLM experiments. To do so, please use the following code:import argilla as rgfrom packaging.version import parse as parse_versionif parse_version(rg.__version__) < parse_version("1.8.0"): raise RuntimeError( "`FeedbackDataset` is only available in Argilla v1.8.0 or higher, please " "upgrade `argilla` as `pip install argilla --upgrade`." )dataset = rg.FeedbackDataset( fields=[ rg.TextField(name="prompt"),Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksArgillaContextInfino - LangChain LLM Monitoring ExamplePromptLayerStreamlitChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsCallbacksArgillaOn this pageArgillaArgilla is an open-source data curation platform for LLMs. Using Argilla, everyone can build robust language models through faster data curation using both human and machine feedback. We provide support for each step in the MLOps cycle,Argilla | 🦜ï¸�🔗 Langchainto Pope Innocent III."},"text":"\n\nKing Charles III swore fealty to Pope Innocent III."}]Step 5: Stop infino server​docker rm -f infino-example infino-examplePreviousContextNextPromptLayerStart Infino server, initialize the Infino clientRead the questions datasetLangChain OpenAI Q&A; Publish metrics and logs to InfinoCreate Metric ChartsFull text query on prompt or prompt outputs.Step 5: Stop infino serverCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.for", query, ":", response.text) Results for normandy : [{"time":1686821979,"fields":{"prompt":"In what country is Normandy located?"},"text":"In what country is Normandy located?"},{"time":1686821982,"fields":{"prompt_response":"\n\nNormandy is located in France."},"text":"\n\nNormandy is located in France."},{"time":1686821984,"fields":{"prompt_response":"\n\nThe Normans first settled in Normandy in the late 9th century."},"text":"\n\nThe Normans first settled in Normandy in the late 9th century."},{"time":1686821993,"fields":{"prompt":"Who gave their name to Normandy in the 1000's and 1100's"},"text":"Who gave their name to Normandy in the 1000's and 1100's"},{"time":1686821997,"fields":{"prompt_response":"\n\nThe Normans, a people from northern France, gave their name to Normandy in the 1000s and 1100s. The Normans were descended from Viking settlers who had come to the region in the late 800s."},"text":"\n\nThe Normans, a people from northern France, gave their name to Normandy in the 1000s and 1100s. The Normans were descended from Viking settlers who had come to the region in the late 800s."}] === Results for king charles III : [{"time":1686821998,"fields":{"prompt":"Who did King Charles III swear fealty to?"},"text":"Who did King Charles III swear fealty to?"},{"time":1686822000,"fields":{"prompt_response":"\n\nKing Charles III swore fealty to Pope Innocent III."},"text":"\n\nKing Charles III swore fealty to# Create the plot plt.plot(dates, y) # Set labels and title plt.xlabel("Time") plt.ylabel("Value") plt.title(title) plt.show()response = client.search_ts("__name__", "latency", 0, int(time.time()))plot(response.text, "Latency")response = client.search_ts("__name__", "error", 0, int(time.time()))plot(response.text, "Errors")response = client.search_ts("__name__", "prompt_tokens", 0, int(time.time()))plot(response.text, "Prompt Tokens")response = client.search_ts("__name__", "completion_tokens", 0, int(time.time()))plot(response.text, "Completion Tokens")response = client.search_ts("__name__", "total_tokens", 0, int(time.time()))plot(response.text, "Total Tokens") ![png](_infino_files/output_9_0.png) ![png](_infino_files/output_9_1.png) ![png](_infino_files/output_9_2.png) ![png](_infino_files/output_9_3.png) ![png](_infino_files/output_9_4.png) Full text query on prompt or prompt outputs.​# Search for a particular prompt text.query = "normandy"response = client.search_log(query, 0, int(time.time()))print("Results for", query, ":", response.text)print("===")query = "king charles III"response = client.search_log("king charles III", 0, int(time.time()))print("Results for", query, ":", response.text) Results for normandy :'completion_tokens': 78, 'prompt_tokens': 8}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('23f86775-e592-4cb8-baa3-46ebe74305b2')) Who was the duke in the battle of Hastings? generations=[[Generation(text='\n\nThe Duke of Normandy, William the Conqueror, was the leader of the Norman forces at the Battle of Hastings in 1066.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 39, 'completion_tokens': 28, 'prompt_tokens': 11}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('ad5b7984-8758-4d95-a5eb-ee56e0218f6b'))Create Metric Charts​We now use matplotlib to create graphs of latency, errors and tokens consumed.# Helper function to create a graph using matplotlib.def plot(data, title): data = json.loads(data) # Extract x and y values from the data timestamps = [item["time"] for item in data] dates = [dt.datetime.fromtimestamp(ts) for ts in timestamps] y = [item["value"] for item in data] plt.rcParams["figure.figsize"] = [6, 4] plt.subplots_adjust(bottom=0.2) plt.xticks(rotation=25) ax = plt.gca() xfmt = md.DateFormatter("%Y-%m-%d %H:%M:%S") ax.xaxis.set_major_formatter(xfmt) # Create the plot plt.plot(dates, y)What is France a region of? generations=[[Generation(text='\n\nFrance is a region of Europe.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 16, 'completion_tokens': 9, 'prompt_tokens': 7}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('6943880b-b4e4-4c74-9ca1-8c03c10f7e9c')) Who did King Charles III swear fealty to? generations=[[Generation(text='\n\nKing Charles III swore fealty to Pope Innocent III.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 23, 'completion_tokens': 13, 'prompt_tokens': 10}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('c91fd663-09e6-4d00-b746-4c7fd96f9ceb')) When did the Frankish identity emerge? generations=[[Generation(text='\n\nThe Frankish identity began to emerge in the late 5th century, when the Franks began to expand their power and influence in the region. The Franks were a Germanic tribe that had migrated to the area from the east and had established a kingdom in what is now modern-day France. The Franks were eventually able to establish a powerful kingdom that lasted until the 10th century.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 86, 'completion_tokens': 78, 'prompt_tokens': 8}, 'model_name':'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 45, 'completion_tokens': 39, 'prompt_tokens': 6}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('e32f31cb-ddc9-4863-8e6e-cb7a281a0ada')) What century did the Normans first gain their separate identity? generations=[[Generation(text='\n\nThe Normans first gained their separate identity in the 11th century.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 28, 'completion_tokens': 16, 'prompt_tokens': 12}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('da9d8f73-b3b3-4bc5-8495-da8b11462a51')) Who gave their name to Normandy in the 1000's and 1100's generations=[[Generation(text='\n\nThe Normans, a people from northern France, gave their name to Normandy in the 1000s and 1100s. The Normans were descended from Viking settlers who had come to the region in the late 800s.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 58, 'completion_tokens': 45, 'prompt_tokens': 13}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('bb5829bf-b6a6-4429-adfa-414ac5be46e5')) What is France a region of?16, 'completion_tokens': 9, 'prompt_tokens': 7}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('8de21639-acec-4bd1-a12d-8124de1e20da')) When were the Normans in Normandy? generations=[[Generation(text='\n\nThe Normans first settled in Normandy in the late 9th century.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 24, 'completion_tokens': 16, 'prompt_tokens': 8}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('cf81fc86-250b-4e6e-9d92-2df3bebb019a')) From which countries did the Norse originate? generations=[[Generation(text='\n\nThe Norse originated from Scandinavia, which includes modern-day Norway, Sweden, and Denmark.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 29, 'completion_tokens': 21, 'prompt_tokens': 8}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('50f42f5e-b4a4-411a-a049-f92cb573a74f')) Who was the Norse leader? generations=[[Generation(text='\n\nThe most famous Norse leader was the legendary Viking king Ragnar Lodbrok. He is believed to have lived in the 9th century and is renowned for his exploits in England and France.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage':Charles III swear fealty to?When did the Frankish identity emerge?Who was the duke in the battle of Hastings?Who ruled the duchy of NormandyWhat religion were the NormansWhat type of major impact did the Norman dynasty have on modern Europe?Who was famed for their Christian spirit?Who assimilted the Roman language?Who ruled the country of Normandy?What principality did William the conquerer found?What is the original meaning of the word Norman?When was the Latin version of the word Norman first recorded?What name comes from the English words Normans/Normanz?"""questions = data.split("\n")LangChain OpenAI Q&A; Publish metrics and logs to Infino​# Set your key here.# os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"# Create callback handler. This logs latency, errors, token usage, prompts as well as prompt responses to Infino.handler = InfinoCallbackHandler( model_id="test_openai", model_version="0.1", verbose=False)# Create LLM.llm = OpenAI(temperature=0.1)# Number of questions to ask the OpenAI model. We limit to a short number here to save $$ while running this demo.num_questions = 10questions = questions[0:num_questions]for question in questions: print(question) # We send the question to OpenAI API, with Infino callback. llm_result = llm.generate([question], callbacks=[handler]) print(llm_result) In what country is Normandy located? generations=[[Generation(text='\n\nNormandy is located in France.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 16, 'completion_tokens': 9, 'prompt_tokens': 7},Requirement already satisfied: charset-normalizer<4,>=2 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from requests->infinopy) (3.1.0) Requirement already satisfied: idna<4,>=2.5 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from requests->infinopy) (3.4) Requirement already satisfied: certifi>=2017.4.17 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from requests->infinopy) (2023.5.7)Start Infino server, initialize the Infino client​# Start server using the Infino docker image.docker run --rm --detach --name infino-example -p 3000:3000 infinohq/infino:latest# Create Infino client.client = InfinoClient() 497a621125800abdd19f57ce7e033349b3cf83ca8cea6a74e8e28433a42ecaddRead the questions dataset​# These are a subset of questions from Stanford's QA dataset -# https://rajpurkar.github.io/SQuAD-explorer/data = """In what country is Normandy located?When were the Normans in Normandy?From which countries did the Norse originate?Who was the Norse leader?What century did the Normans first gain their separate identity?Who gave their name to Normandy in the 1000's and 1100'sWhat is France a region of?Who did King Charles III swear fealty to?When did the Frankish identity emerge?Who was the duke in(2.8.2) Requirement already satisfied: six>=1.5 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from python-dateutil>=2.7->matplotlib) (1.16.0) Requirement already satisfied: infinopy in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (0.0.1) Requirement already satisfied: docker in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from infinopy) (6.1.3) Requirement already satisfied: requests in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from infinopy) (2.31.0) Requirement already satisfied: packaging>=14.0 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from docker->infinopy) (23.1) Requirement already satisfied: urllib3>=1.26.0 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from docker->infinopy) (2.0.2) Requirement already satisfied: websocket-client>=0.32.0 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from docker->infinopy) (1.5.2) Requirement already satisfied: charset-normalizer<4,>=2 in(from matplotlib) (0.11.0) Requirement already satisfied: fonttools>=4.22.0 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from matplotlib) (4.39.4) Requirement already satisfied: kiwisolver>=1.0.1 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from matplotlib) (1.4.4) Requirement already satisfied: numpy>=1.20 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from matplotlib) (1.24.3) Requirement already satisfied: packaging>=20.0 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from matplotlib) (23.1) Requirement already satisfied: pillow>=6.2.0 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from matplotlib) (9.5.0) Requirement already satisfied: pyparsing>=2.3.1 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from matplotlib) (3.0.9) Requirement already satisfied: python-dateutil>=2.7 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from matplotlib) (2.8.2) Requirement already satisfied: six>=1.5 inSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksArgillaContextInfino - LangChain LLM Monitoring ExamplePromptLayerStreamlitChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsCallbacksInfino - LangChain LLM Monitoring ExampleOn this pageInfino - LangChain LLM Monitoring ExampleThis example shows how one can track the following while calling OpenAI models via LangChain and Infino:prompt input,response from chatgpt or any other LangChain model,latency,errors,number of tokens consumed# Install necessary dependencies.pip install infinopypip install matplotlib# Remove the (1) import sys and sys.path.append(..) and (2) uncomment `!pip install langchain` after merging the PR for Infino/LangChain integration.import syssys.path.append("../../../../../langchain")#!pip install langchainimport datetime as dtfrom infinopy import InfinoClientimport jsonfrom langchain.llms import OpenAIfrom langchain.callbacks import InfinoCallbackHandlerimport matplotlib.pyplot as pltimport matplotlib.dates as mdimport osimport timeimport sys Requirement already satisfied: matplotlib in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (3.7.1) Requirement already satisfied: contourpy>=1.0.1 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (from matplotlib) (1.0.7) Requirement already satisfied: cycler>=0.10 in /Users/vinaykakade/.pyenv/versions/3.10.11/lib/python3.10/site-packages (fromInfino - LangChain LLM Monitoring Example | 🦜ï¸�🔗 Langchainthoughts and actions live in your app.from langchain.llms import OpenAIfrom langchain.agents import AgentType, initialize_agent, load_toolsfrom langchain.callbacks import StreamlitCallbackHandlerimport streamlit as stllm = OpenAI(temperature=0, streaming=True)tools = load_tools(["ddg-search"])agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)if prompt := st.chat_input(): st.chat_message("user").write(prompt) with st.chat_message("assistant"): st_callback = StreamlitCallbackHandler(st.container()) response = agent.run(prompt, callbacks=[st_callback]) st.write(response)Note: You will need to set OPENAI_API_KEY for the above app code to run successfully. The easiest way to do this is via Streamlit secrets.toml, or any other local ENV management tool.Additional scenarios​Currently StreamlitCallbackHandler is geared towards use with a LangChain Agent Executor. Support for additional agent types, use directly with Chains, etc will be added in the future.PreviousPromptLayerNextChat modelsInstallation and SetupDisplay thoughts and actionsScenario 1: Using an Agent with ToolsAdditional scenariosCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksArgillaContextInfino - LangChain LLM Monitoring ExamplePromptLayerStreamlitChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsCallbacksStreamlitOn this pageStreamlitStreamlit is a faster way to build and share data apps. Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No front‑end experience required. See more examples at streamlit.io/generative-ai.In this guide we will demonstrate how to use StreamlitCallbackHandler to display the thoughts and actions of an agent in an interactive Streamlit app. Try it out with the running app below using the MRKL agent:Installation and Setup​pip install langchain streamlitYou can run streamlit hello to load a sample app and validate your install succeeded. See full instructions in Streamlit's Getting started documentation.Display thoughts and actions​To create a StreamlitCallbackHandler, you just need to provide a parent container to render the output.from langchain.callbacks import StreamlitCallbackHandlerimport streamlit as stst_callback = StreamlitCallbackHandler(st.container())Additional keyword arguments to customize the display behavior are described in the API reference.Scenario 1: Using an Agent with Tools​The primary supported use case today is visualizing the actions of an Agent with Tools (or Agent Executor). You can create an agent in your Streamlit app and simply pass the StreamlitCallbackHandler to agent.run() in order to visualize theStreamlit | 🦜ï¸�🔗 LangchainThis callback also works with any LLM implemented on LangChain.PreviousInfino - LangChain LLM Monitoring ExampleNextStreamlitInstallation and SetupGetting API CredentialsUsageSimple OpenAI ExampleGPT4All ExampleFull Featured ExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.this example we unlock more of the power of PromptLayer.PromptLayer allows you to visually create, version, and track prompt templates. Using the Prompt Registry, we can programatically fetch the prompt template called example.We also define a pl_id_callback function which takes in the promptlayer_request_id and logs a score, metadata and links the prompt template used. Read more about tracking on our docs.import promptlayer # Don't forget this ğŸ�°from langchain.callbacks import PromptLayerCallbackHandlerfrom langchain.llms import OpenAIdef pl_id_callback(promptlayer_request_id): print("prompt layer id ", promptlayer_request_id) promptlayer.track.score( request_id=promptlayer_request_id, score=100 ) # score is an integer 0-100 promptlayer.track.metadata( request_id=promptlayer_request_id, metadata={"foo": "bar"} ) # metadata is a dictionary of key value pairs that is tracked on PromptLayer promptlayer.track.prompt( request_id=promptlayer_request_id, prompt_name="example", prompt_input_variables={"product": "toasters"}, version=1, ) # link the request to a prompt templateopenai_llm = OpenAI( model_name="text-davinci-002", callbacks=[PromptLayerCallbackHandler(pl_id_callback=pl_id_callback)],)example_prompt = promptlayer.prompts.get("example", version=1, langchain=True)openai_llm(example_prompt.format(product="toasters"))That is all it takes! After setup all your requests will show up on the PromptLayer dashboard.set it as an environment variabled called PROMPTLAYER_API_KEYUsage​Getting started with PromptLayerCallbackHandler is fairly simple, it takes two optional arguments:pl_tags - an optional list of strings that will be tracked as tags on PromptLayer.pl_id_callback - an optional function that will take promptlayer_request_id as an argument. This ID can be used with all of PromptLayer's tracking features to track, metadata, scores, and prompt usage.Simple OpenAI Example​In this simple example we use PromptLayerCallbackHandler with ChatOpenAI. We add a PromptLayer tag named chatopenaiimport promptlayer # Don't forget this ğŸ�°from langchain.callbacks import PromptLayerCallbackHandlerfrom langchain.chat_models import ChatOpenAIfrom langchain.schema import ( HumanMessage,)chat_llm = ChatOpenAI( temperature=0, callbacks=[PromptLayerCallbackHandler(pl_tags=["chatopenai"])],)llm_results = chat_llm( [ HumanMessage(content="What comes after 1,2,3 ?"), HumanMessage(content="Tell me another joke?"), ])print(llm_results)GPT4All Example​import promptlayer # Don't forget this ğŸ�°from langchain.callbacks import PromptLayerCallbackHandlerfrom langchain.llms import GPT4Allmodel = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)response = model( "Once upon a time, ", callbacks=[PromptLayerCallbackHandler(pl_tags=["langchain", "gpt4all"])],)Full Featured Example​In this example we unlock more of the power of PromptLayer.PromptLayer allows you to visually create,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksArgillaContextInfino - LangChain LLM Monitoring ExamplePromptLayerStreamlitChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsCallbacksPromptLayerOn this pagePromptLayerPromptLayer is a an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to setup the PromptLayerCallbackHandler. While PromptLayer does have LLMs that integrate directly with LangChain (eg PromptLayerOpenAI), this callback is the recommended way to integrate PromptLayer with LangChain.See our docs for more information.Installation and Setup​pip install promptlayer --upgradeGetting API Credentials​If you do not have a PromptLayer account, create one on promptlayer.com. Then get an API key by clicking on the settings cog in the navbar andPromptLayer | 🦜ï¸�🔗 LangchainCallbacks | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksArgillaContextInfino - LangChain LLM Monitoring ExamplePromptLayerStreamlitChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsCallbacksCallbacks📄ï¸� ArgillaArgilla - Open-source data platform for LLMs📄ï¸� ContextContext - Product Analytics for AI Chatbots📄ï¸� Infino - LangChain LLM Monitoring ExampleThis example shows how one can track the following while calling OpenAI models via LangChain and Infino:📄ï¸� PromptLayerPromptLayer📄ï¸� StreamlitStreamlit is a faster way to build and share data apps.PreviousIntegrationsNextArgillaCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.���;B������� � ���*�[#�#Z*!0�4$;kBEVE#LQ�W�^ze�hfkZn~p[ru�w�w+x�~�$���C�{�S�O����u�S�����o�����0���U���,���4���N; Am�v�/&Y,�.,/�5�;�@�D�D�K�QuU�Y�]�^�^Be�jqs?s�yԀ�u� �2���K�x����_���$�������N�|�/�����������������z�BsIs��� �%�+�0�0p7�<�<�CH�K�O�S�S)T�Z�\�\�c>i�j!k�qJtxt{h~�~q��ي��������Ʀ��!���f�'���a��A���m���b���9�k�5��+�Z���'Q�� � ��Am?$�()�/H4%9�=�B�F�F�M�O$P�V�[afPf�lJq�r s�y<�r���LJX��l�������{�w���M�N�����6�������n�p�@�o�'�\���\�����video sharing and social media platform created by Google.T2Markdown | 🦜ï¸�🔗 LangchainNote depending on your set up, the service_account_path needs to be set up. See here for more details.from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader# Init the GoogleApiClientfrom pathlib import Pathgoogle_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))# Use a Channelyoutube_loader_channel = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name="Reducible", captions_language="en",)# Use Youtube Idsyoutube_loader_ids = GoogleApiYoutubeLoader( google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True)# returns a list of Documentsyoutube_loader_channel.load()PreviousLoading documents from a YouTube urlNextDocument transformersAdd video infoAdd language preferencesYouTube loader from Google CloudPrerequisites🧑 Instructions for ingesting your Google Docs dataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.video sharing and social media platform created by Google.This notebook covers how to load documents from YouTube transcripts.from langchain.document_loaders import YoutubeLoader# !pip install youtube-transcript-apiloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add video info​# ! pip install pytubeloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add language preferences​Language param : It's a list of language codes in a descending priority, en by default.translation param : It's a translate preference when the youtube does'nt have your select language, en by default.loader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True, language=["en", "id"], translation="en",)loader.load()YouTube loader from Google Cloud​Prerequisites​Create a Google Cloud project or use an existing projectEnable the Youtube ApiAuthorize credentials for desktop apppip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api🧑 Instructions for ingesting your Google Docs data​By default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_file keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader.GoogleApiYoutubeLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersYouTube transcriptsOn this pageYouTube transcriptsYouTube is an online video sharing and social media platform created by Google.This notebook covers how to load documents from YouTube transcripts.fromYouTube transcripts | 🦜ï¸�🔗 Langchainnotebook covers how to load links to Gutenberg e-books into a document format that we can use downstream.from langchain.document_loaders import GutenbergLoaderloader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")data = loader.load()data[0].page_content[:300] 'The Project Gutenberg eBook of The changed brides, by Emma Dorothy\r\n\n\nEliza Nevitte Southworth\r\n\n\n\r\n\n\nThis eBook is for the use of anyone anywhere in the United States and\r\n\n\nmost other parts of the world at no cost and with almost no restrictions\r\n\n\nwhatsoever. You may copy it, give it away or re-u'data[0].metadata {'source': 'https://www.gutenberg.org/cache/epub/69972/pg69972.txt'}PreviousGrobidNextHacker NewsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersGutenbergGutenbergProject Gutenberg is an online library of free eBooks.This notebook covers how to load links to Gutenberg e-books into a document format that we can use downstream.fromGutenberg | 🦜ï¸�🔗 Langchainlangchain.indexes import VectorstoreIndexCreatorloader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ),)index = VectorstoreIndexCreator().from_loaders([loader])query = "What is Apify?"result = index.query_with_sources(query)print(result["answer"])print(result["sources"]) Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform. https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examplesPreviousAlibaba Cloud MaxComputeNextArxivPrerequisitesAn example with question answeringCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.append-only storage with sequential access built for storing structured web scraping results, such as a list of products or Google SERPs, and then export them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors—serverless cloud programs for varius web scraping, crawling, and data extraction use cases.This notebook shows how to load Apify datasets to LangChain.Prerequisites​You need to have an existing dataset on the Apify platform. If you don't have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs.#!pip install apify-clientFirst, import ApifyDatasetLoader into your source code:from langchain.document_loaders import ApifyDatasetLoaderfrom langchain.document_loaders.base import DocumentThen provide a function that maps Apify dataset record fields to LangChain Document format.For example, if your dataset items are structured like this:{ "url": "https://apify.com", "text": "Apify is the best web scraping and automation platform."}The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering).loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda dataset_item: Document( page_content=dataset_item["text"], metadata={"source": dataset_item["url"]} ),)data = loader.load()An example with question answering​In this example, we use data from a dataset to answer a question.from langchain.docstore.document import Documentfrom langchain.document_loaders import ApifyDatasetLoaderfrom langchain.indexes import VectorstoreIndexCreatorloader = ApifyDatasetLoader(Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersApify DatasetOn this pageApify DatasetApify Dataset is a scaleable append-only storage with sequential access built for storing structured web scraping results, such as a list ofApify Dataset | 🦜ï¸�🔗 Langchainencyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.This notebook shows how to load wiki pages from wikipedia.org into the Document format that we use downstream.Installation​First, you need to install wikipedia python package.#!pip install wikipediaExamples​WikipediaLoader has these arguments:query: free text which used to find documents in Wikipediaoptional lang: default="en". Use it to search in a specific language part of Wikipediaoptional load_max_docs: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.optional load_all_available_meta: default=False. By default only the most important fields downloaded: Published (date when document was published/last updated), title, Summary. If True, other fields also downloaded.from langchain.document_loaders import WikipediaLoaderdocs = WikipediaLoader(query="HUNTER X HUNTER", load_max_docs=2).load()len(docs)docs[0].metadata # meta-information of the Documentdocs[0].page_content[:400] # a content of the DocumentPreviousWhatsApp ChatNextXMLInstallationExamplesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersWikipediaOn this pageWikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaborationWikipedia | 🦜ï¸�🔗 Langchainis an enterprise collaboration platform developed by ByteDance.This notebook covers how to load data from the LarkSuite REST API into a format that can be ingested into LangChain, along with example usage for text summarization.The LarkSuite API requires an access token (tenant_access_token or user_access_token), checkout LarkSuite open platform document for API details.from getpass import getpassfrom langchain.document_loaders.larksuite import LarkSuiteDocLoaderDOMAIN = input("larksuite domain")ACCESS_TOKEN = getpass("larksuite tenant_access_token or user_access_token")DOCUMENT_ID = input("larksuite document id")from pprint import pprintlarksuite_loader = LarkSuiteDocLoader(DOMAIN, ACCESS_TOKEN, DOCUMENT_ID)docs = larksuite_loader.load()pprint(docs) [Document(page_content='Test Doc\nThis is a Test Doc\n\n1\n2\n3\n\n', metadata={'document_id': 'V76kdbd2HoBbYJxdiNNccajunPf', 'revision_id': 11, 'title': 'Test Doc'})]# see https://python.langchain.com/docs/use_cases/summarization for more detailsfrom langchain.chains.summarize import load_summarize_chainchain = load_summarize_chain(llm, chain_type="map_reduce")chain.run(docs)PreviousJupyter NotebookNextMastodonCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersLarkSuite (FeiShu)LarkSuite (FeiShu)LarkSuite is an enterprise collaboration platform developed by ByteDance.This notebook covers how to loadLarkSuite (FeiShu) | 🦜ï¸�🔗 Langchain(Amazon S3) is an object storage serviceAWS S3 DirectoryThis covers how to load document objects from an AWS S3 Directory object.#!pip install boto3from langchain.document_loaders import S3DirectoryLoaderloader = S3DirectoryLoader("testing-hwc")loader.load()Specifying a prefix​You can also specify a prefix for more finegrained control over what files to load.loader = S3DirectoryLoader("testing-hwc", prefix="fake")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]PreviousAsyncHtmlLoaderNextAWS S3 FileSpecifying a prefixCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersAWS S3 DirectoryOn this pageAWS S3 DirectoryAmazon Simple Storage Service (Amazon S3) is an object storage serviceAWS S3 DirectoryThis covers how to load documentAWS S3 Directory | 🦜ï¸�🔗 Langchainfile is a file format for textual data used primarily in the Python programming language community for technical documentation.UnstructuredRSTLoader​You can load data from RST files with UnstructuredRSTLoader using the following workflow.from langchain.document_loaders import UnstructuredRSTLoaderloader = UnstructuredRSTLoader(file_path="example_data/README.rst", mode="elements")docs = loader.load()print(docs[0]) page_content='Example Docs' metadata={'source': 'example_data/README.rst', 'filename': 'README.rst', 'file_directory': 'example_data', 'filetype': 'text/x-rst', 'page_number': 1, 'category': 'Title'}PreviousRocksetNextSitemapUnstructuredRSTLoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersRSTOn this pageRSTA reStructured Text (RST) file is a file format for textual data used primarily in the Python programming language community for technicalRST | 🦜ï¸�🔗 LangchainData​ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI.This notebook covers how to load conversations.json from your ChatGPT data export folder.You can get your data export by email by going to: https://chat.openai.com/ -> (Profile) - Settings -> Export data -> Confirm export.from langchain.document_loaders.chatgpt import ChatGPTLoaderloader = ChatGPTLoader(log_file="./example_data/fake_conversations.json", num_logs=1)loader.load() [Document(page_content="AI Overlords - AI on 2065-01-24 05:20:50: Greetings, humans. I am Hal 9000. You can trust me completely.\n\nAI Overlords - human on 2065-01-24 05:21:20: Nice to meet you, Hal. I hope you won't develop a mind of your own.\n\n", metadata={'source': './example_data/fake_conversations.json'})]PreviousBrowserlessNextCollege ConfidentialChatGPT DataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loaderschatgpt_loaderOn this pagechatgpt_loaderChatGPT Data​ChatGPT is an artificial intelligence (AI) chatbot developed bychatgpt_loader | 🦜ï¸�🔗 LangchainKevin Willmott & Spike Lee\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n FADE IN:\r\n \r\n SCENE FROM "GONE WITH'data[0].metadata {'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'}PreviousImage captionsNextIuguCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Database.This covers how to load IMSDb webpages into a document format that we can use downstream.from langchain.document_loaders import IMSDbLoaderloader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")data = loader.load()data[0].page_content[:500] '\n\r\n\r\n\r\n\r\n BLACKKKLANSMAN\r\n \r\n \r\n \r\n \r\n Written by\r\n\r\n Charlie Wachtel & David Rabinowitz\r\n\r\n and\r\n\r\n Kevin Willmott & SpikeSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersIMSDbIMSDbIMSDb is the Internet Movie Script Database.This covers how to load IMSDb webpages into a document format that we can use downstream.fromIMSDb | 🦜ï¸�🔗 Langchaininclude_card_name=False, include_checklist=False, include_comments=False,)documents = loader.load()print("Document: " + documents[0].page_content)print(documents[0].metadata)PreviousTOMLNextTSVFeaturesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.langchain.document_loaders import TrelloLoader# Get the open cards from "Awesome Board"loader = TrelloLoader.from_credentials( "Awesome Board", api_key=API_KEY, token=TOKEN, card_filter="open",)documents = loader.load()print(documents[0].page_content)print(documents[0].metadata) Review Tech partner pages Comments: {'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'labels': ['Demand Marketing'], 'list': 'Done', 'closed': False, 'due_date': ''}# Get all the cards from "Awesome Board" but only include the# card list(column) as extra metadata.loader = TrelloLoader.from_credentials( "Awesome Board", api_key=API_KEY, token=TOKEN, extra_metadata=("list"),)documents = loader.load()print(documents[0].page_content)print(documents[0].metadata) Review Tech partner pages Comments: {'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'list': 'Done'}# Get the cards from "Another Board" and exclude the card name,# checklist and comments from the Document page_content text.loader = TrelloLoader.from_credentials( "test", api_key=API_KEY, token=TOKEN, include_card_name=False, include_checklist=False,and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.The TrelloLoader allows you to load cards from a Trello board and is implemented on top of py-trelloThis currently supports api_key/token only.Credentials generation: https://trello.com/power-ups/admin/Click in the manual token generation link to get the token.To specify the API key and token you can either set the environment variables TRELLO_API_KEY and TRELLO_TOKEN or you can pass api_key and token directly into the from_credentials convenience constructor method.This loader allows you to provide the board name to pull in the corresponding cards into Document objects.Notice that the board "name" is also called "title" in oficial documentation:https://support.atlassian.com/trello/docs/changing-a-boards-title-and-description/You can also specify several load parameters to include / remove different fields both from the document page_content properties and metadata.Features​Load cards from a Trello board.Filter cards based on their status (open or closed).Include card names, comments, and checklists in the loaded documents.Customize the additional metadata fields to include in the document.By default all card fields are included for the full text page_content and metadata accordinly.#!pip install py-trello beautifulsoup4# If you have already set the API key and token using environment variables,# you can skip this cell and comment out the `api_key` and `token` named arguments# in the initialization steps below.from getpass import getpassAPI_KEY = getpass()TOKEN = getpass() ········ ········from langchain.document_loaders import TrelloLoader# Get the open cards from "AwesomeSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersTrelloOn this pageTrelloTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides aTrello | 🦜ï¸�🔗 Langchain
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Nationals 81.34 98
Reds 82.20
BluesSTL1
FlyersPHI2
Maple LeafsTOR13
', 'category': 'Table'})PreviousNotebookNextFacebook ChatCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.files. The loader works with both .xlsx and .xls files. The page content will be the raw text of the Excel file. If you use the loader in "elements" mode, an HTML representation of the Excel file will be available in the document metadata under the text_as_html key.from langchain.document_loaders import UnstructuredExcelLoaderloader = UnstructuredExcelLoader("example_data/stanley-cups.xlsx", mode="elements")docs = loader.load()docs[0] Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'source': 'example_data/stanley-cups.xlsx', 'filename': 'stanley-cups.xlsx', 'file_directory': 'example_data', 'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '\n \n \n \n \n \n \n \nSkip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersMicrosoft ExcelMicrosoft ExcelThe UnstructuredExcelLoader is used to load Microsoft Excel files. The loader works with both .xlsx and .xls files. The page content will be theMicrosoft Excel | 🦜�🔗 Langchainsecret_access_key=SECRET_ACCESS_KEY,)data = loader.load()print(data) [Document(page_content='id: 1\ncontent: content1\nmeta_info: meta_info1', metadata={}), Document(page_content='id: 2\ncontent: content2\nmeta_info: meta_info2', metadata={}), Document(page_content='id: 3\ncontent: content3\nmeta_info: meta_info3', metadata={})]print(data[0].page_content) id: 1 content: content1 meta_info: meta_info1print(data[0].metadata) {}Specifying Which Columns are Content vs Metadata​You can configure which subset of columns should be loaded as the contents of the Document and which as the metadata using the page_content_columns and metadata_columns parameters.loader = MaxComputeLoader.from_params( base_query, endpoint, project, page_content_columns=["content"], # Specify Document page content metadata_columns=["id", "meta_info"], # Specify Document metadata access_id=ACCESS_ID, secret_access_key=SECRET_ACCESS_KEY,)data = loader.load()print(data[0].page_content) content: content1print(data[0].metadata) {'id': 1, 'meta_info': 'meta_info1'}PreviousAirtableNextApify DatasetBasic UsageSpecifying Which Columns are Content vs MetadataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.(from pyodps) (1.26.15) Requirement already satisfied: idna>=2.5 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.4) Requirement already satisfied: certifi>=2017.4.17 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (2023.5.7) Installing collected packages: pyodps Successfully installed pyodps-0.11.4.post0Basic Usage​To instantiate the loader you'll need a SQL query to execute, your MaxCompute endpoint and project name, and you access ID and secret access key. The access ID and secret access key can either be passed in direct via the access_id and secret_access_key parameters or they can be set as environment variables MAX_COMPUTE_ACCESS_ID and MAX_COMPUTE_SECRET_ACCESS_KEY.from langchain.document_loaders import MaxComputeLoaderbase_query = """SELECT *FROM ( SELECT 1 AS id, 'content1' AS content, 'meta_info1' AS meta_info UNION ALL SELECT 2 AS id, 'content2' AS content, 'meta_info2' AS meta_info UNION ALL SELECT 3 AS id, 'content3' AS content, 'meta_info3' AS meta_info) mydata;"""endpoint = ""project = ""ACCESS_ID = ""SECRET_ACCESS_KEY = ""loader = MaxComputeLoader.from_params( base_query, endpoint, project, access_id=ACCESS_ID, secret_access_key=SECRET_ACCESS_KEY,)data = loader.load()print(data)Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production costs, and ensure data security.The MaxComputeLoader lets you execute a MaxCompute SQL query and loads the results as one document per row.pip install pyodps Collecting pyodps Downloading pyodps-0.11.4.post0-cp39-cp39-macosx_10_9_universal2.whl (2.0 MB) ���������������������������������������� 2.0/2.0 MB 1.7 MB/s eta 0:00:0000:0100:010m Requirement already satisfied: charset-normalizer>=2 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.1.0) Requirement already satisfied: urllib3<2.0,>=1.26.0 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (1.26.15) Requirement already satisfied:Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersAlibaba Cloud MaxComputeOn this pageAlibaba Cloud MaxComputeAlibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed,Alibaba Cloud MaxCompute | 🦜�🔗 Langchaincross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.This notebook covers how to load data from the WhatsApp Chats into a format that can be ingested into LangChain.from langchain.document_loaders import WhatsAppChatLoaderloader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")loader.load()PreviousWebBaseLoaderNextWikipediaCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersWhatsApp ChatWhatsApp ChatWhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP)WhatsApp Chat | 🦜�🔗 Langchainhttps://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm /Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. warnings.warn( Using embedded DuckDB without persistence: data will be transientQuery​query = "What's the painting about?"index.query(query) ' The painting is about a battle scene.'query = "What kind of images are there?"index.query(query) ' There are images of a spiral galaxy, a painting of a battle scene, a flower in the dark, and a frog on a flower.'PreviousImagesNextIMSDbPrepare a list of image urls from WikimediaCreate the loaderCreate the indexQueryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.image of a man on skis in the snow [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg'}), Document(page_content='an image of a flower in the dark [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg'})]from PIL import Imageimport requestsImage.open(requests.get(list_image_urls[0], stream=True).raw).convert("RGB") ![png](_image_captions_files/output_7_0.png) Create the index​from langchain.indexes import VectorstoreIndexCreatorindex = VectorstoreIndexCreator().from_loaders([loader]) /Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html fromDocument(page_content='an image of a shark swimming in the ocean [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg'}), Document(page_content='an image of a painting of a battle scene [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg'}), Document(page_content='an image of a passion fruit and a half cut passion [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg'}), Document(page_content='an image of the spiral galaxy [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg'}), Document(page_content='an image of a man on skis in the snow [SEP]', metadata={'image_path':"https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg",]Create the loader​loader = ImageCaptionLoader(path_images=list_image_urls)list_docs = loader.load()list_docs /Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. warnings.warn( [Document(page_content='an image of a frog on a flower [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg'}),Salesforce BLIP image captioning model.This notebook shows how to use the ImageCaptionLoader to generate a query-able index of image captions#!pip install transformersfrom langchain.document_loaders import ImageCaptionLoaderPrepare a list of image urls from Wikimedia​list_image_urls = [ "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg",Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersImage captionsOn this pageImage captionsBy default, the loader utilizes the pre-trained Salesforce BLIP image captioning model.This notebook shows how to use the ImageCaptionLoader to generateImage captions | 🦜�🔗 Langchaincompany. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.This notebook covers how to load data from the Stripe REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import StripeLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Stripe API requires an access token, which can be found inside of the Stripe dashboard.This document loader also requires a resource option which defines what data you want to load.Following resources are available:balance_transations Documentationcharges Documentationcustomers Documentationevents Documentationrefunds Documentationdisputes Documentationstripe_loader = StripeLoader("charges")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([stripe_loader])stripe_doc_retriever = index.vectorstore.as_retriever()PreviousSpreedlyNextSubtitleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersStripeStripeStripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobileStripe | 🦜�🔗 Langchainapi_token)documents = loader.load()Returns a list of documents with the following attributes:page_contentmetadatatable_namecolumn_namecolumn_data_typecolumn_titlecolumn_descriptioncolumn_valuespage_content='Users View City, None' metadata={'table_name': 'users_view', 'column_name': 'users_view.city', 'column_data_type': 'string', 'column_title': 'Users View City', 'column_description': 'None', 'column_member_type': 'dimension', 'column_values': ['Austin', 'Chicago', 'Los Angeles', 'Mountain View', 'New York', 'Palo Alto', 'San Francisco', 'Seattle']}PreviousCSVNextDatadog LogsAbout CubeExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.retrieving Cube's data model metadata in a format suitable for passing to LLMs as embeddings, thereby enhancing contextual information.About Cube​Cube is the Semantic Layer for building data apps. It helps data engineers and application developers access data from modern data stores, organize it into consistent definitions, and deliver it to every application.Cube’s data model provides structure and definitions that are used as a context for LLM to understand data and generate correct queries. LLM doesn’t need to navigate complex joins and metrics calculations because Cube abstracts those and provides a simple interface that operates on the business-level terminology, instead of SQL table and column names. This simplification helps LLM to be less error-prone and avoid hallucinations.Example​Input arguments (mandatory)Cube Semantic Loader requires 2 arguments:cube_api_url: The URL of your Cube's deployment REST API. Please refer to the Cube documentation for more information on configuring the base path.cube_api_token: The authentication token generated based on your Cube's API secret. Please refer to the Cube documentation for instructions on generating JSON Web Tokens (JWT).Input arguments (optional)load_dimension_values: Whether to load dimension values for every string dimension or not.dimension_values_limit: Maximum number of dimension values to load.dimension_values_max_retries: Maximum number of retries to load dimension values.dimension_values_retry_delay: Delay between retries to load dimension values.import jwtfrom langchain.document_loaders import CubeSemanticLoaderapi_url = "https://api-example.gcp-us-central1.cubecloudapp.dev/cubejs-api/v1/meta"cubejs_api_secret = "api-secret-here"security_context = {}# Read more about security context here: https://cube.dev/docs/securityapi_token = jwt.encode(security_context, cubejs_api_secret, algorithm="HS256")loader = CubeSemanticLoader(api_url, api_token)documents = loader.load()Returns a list of documents with the followingSkip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersCube Semantic LayerOn this pageCube Semantic LayerThis notebook demonstrates the process of retrieving Cube's data model metadata in a format suitable for passing to LLMs as embeddings, therebyCube Semantic Layer | 🦜�🔗 Langchainthat uses the ".epub" file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.This covers how to load .epub documents into the Document format that we can use downstream. You'll need to install the pandoc package for this loader to work.#!pip install pandocfrom langchain.document_loaders import UnstructuredEPubLoaderloader = UnstructuredEPubLoader("winter-sports.epub")data = loader.load()Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredEPubLoader("winter-sports.epub", mode="elements")data = loader.load()data[0] Document(page_content='The Project Gutenberg eBook of Winter Sports in\nSwitzerland, by E. F. Benson', lookup_str='', metadata={'source': 'winter-sports.epub', 'page_number': 1, 'category': 'Title'}, lookup_index=0)PreviousEmbaasNextEverNoteRetain ElementsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersEPubOn this pageEPubEPUB is an e-book file format that uses the ".epub" file extension. The term is short for electronic publication and is sometimesEPub | 🦜�🔗 Langchain\n \n \n \n \n \n \n \n \n \n \n \n \n \n
TeamLocationStanley Cups
STL1
FlyersPHI2
Maple LeafsTOR13
', 'category': 'Table', 'source': 'https://drive.google.com/file/d/1aA6L2AR3g0CR-PW03HEZZo4NaVlKpaP7/view'})PreviousGoogle Cloud Storage FileNextGrobidPrerequisites🧑 Instructions for ingesting your Google Docs dataPassing in Optional File LoadersCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.can also process a folder with a mix of files and Google Docs/Sheets using the following pattern:folder_id = "1asMOHY1BqBS84JcRbOag5LOJac74gpmD"loader = GoogleDriveLoader( folder_id=folder_id, file_loader_cls=UnstructuredFileIOLoader, file_loader_kwargs={"mode": "elements"},)docs = loader.load()docs[0] Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '\n \n \n \n \n \n \n \n \n \nFlyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '
TeamLocationStanley Cups
BluesSTL
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
TeamLocationStanley Cups
BluesSTL1
FlyersPHI2
Maple LeafsTOR13
', 'category': 'Table', 'source': 'https://drive.google.com/file/d/1aA6L2AR3g0CR-PW03HEZZo4NaVlKpaP7/view'})You can also process a folder with a mix ofto recursively fetch files from subfolders. Defaults to False. recursive=False,)docs = loader.load()When you pass a folder_id by default all files of type document, sheet and pdf are loaded. You can modify this behaviour by passing a file_types argument loader = GoogleDriveLoader( folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5", file_types=["document", "sheet"] recursive=False)Passing in Optional File Loaders​When processing files other than Google Docs and Google Sheets, it can be helpful to pass an optional file loader to GoogleDriveLoader. If you pass in a file loader, that file loader will be used on documents that do not have a Google Docs or Google Sheets MIME type. Here is an example of how to load an Excel document from Google Drive using a file loader. from langchain.document_loaders import GoogleDriveLoaderfrom langchain.document_loaders import UnstructuredFileIOLoaderfile_id = "1x9WBtFPWMEAdjcJzPScRsjpjQvpSo_kz"loader = GoogleDriveLoader( file_ids=[file_id], file_loader_cls=UnstructuredFileIOLoader, file_loader_kwargs={"mode": "elements"},)docs = loader.load()docs[0] Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\nservice developed by Google.This notebook covers how to load documents from Google Drive. Currently, only Google Docs are supported.Prerequisites​Create a Google Cloud project or use an existing projectEnable the Google Drive APIAuthorize credentials for desktop apppip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib🧑 Instructions for ingesting your Google Docs data​By default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. Same thing with token.json - token_path. Note that token.json will be created automatically the first time you use the loader.GoogleDriveLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlibfrom langchain.document_loaders import GoogleDriveLoaderloader = GoogleDriveLoader( folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5", # Optional: configure whether to recursively fetch files from subfolders. Defaults to False. recursive=False,)docs =Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersGoogle DriveOn this pageGoogle DriveGoogle Drive is a file storage and synchronization service developed by Google.This notebook covers how to load documents from Google Drive. Currently, only Google DocsGoogle Drive | 🦜�🔗 Langchainexclude_dirs=exclude_dirs)# Lazy load eachdocs = [print(doc) or doc for doc in loader.lazy_load()]# Load all pagesdocs = loader.load()len(docs) 188docs[0].page_content[:50] '\n\n\n\n\nAgent Simulations | 🦜�🔗 Langchain\n\n\n\n\n\nSkip t'docs[0].metadata {'source': 'https://js.langchain.com/docs/use_cases/agent_simulations/', 'title': 'Agent Simulations | 🦜�🔗 Langchain', 'description': 'Agent simulations involve taking multiple agents and having them interact with each other.', 'language': 'en'}PreviousReadTheDocs DocumentationNextRedditCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.a root directory.For example, let's look at the LangChain JS documentation.This has many interesting child pages that we may want to read in bulk.Of course, the WebBaseLoader can load a list of pages. But, the challenge is traversing the tree of child pages and actually assembling that list!We do this using the RecursiveUrlLoader.This also gives us the flexibility to exclude some children (e.g., the api directory with > 800 child pages).from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoaderLet's try a simple example.url = "https://js.langchain.com/docs/modules/memory/examples/"loader = RecursiveUrlLoader(url=url)docs = loader.load()len(docs) 12docs[0].page_content[:50] '\n\n\n\n\nBuffer Window Memory | 🦜�🔗 Langchain\n\n\n\n\n\nSki'docs[0].metadata {'source': 'https://js.langchain.com/docs/modules/memory/examples/buffer_window_memory', 'title': 'Buffer Window Memory | 🦜�🔗 Langchain', 'description': 'BufferWindowMemory keeps track of the back-and-forths in conversation, and then uses a window of size k to surface the last k back-and-forths to use as memory.', 'language': 'en'}Now, let's try a more extensive example, the docs root dir.We will skip everything under api.For this, we can lazy_load each page as we crawl the tree, using WebBaseLoader to load each as we go.url = "https://js.langchain.com/docs/"exclude_dirs = ["https://js.langchain.com/docs/api/"]loader = RecursiveUrlLoader(url=url, exclude_dirs=exclude_dirs)# Lazy load eachdocs = [print(doc) orSkip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersRecursive URL LoaderRecursive URL LoaderWe may want to process load all URLs under a root directory.For example, let's look at the LangChain JS documentation.This has many interesting childRecursive URL Loader | 🦜�🔗 Langchainobject from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.from langchain.docstore.document import Documenttext = "..... put the text you copy pasted here......"doc = Document(page_content=text)Metadata​If you want to add metadata about the where you got this piece of text, you easily can with the metadata key.metadata = {"source": "internet", "date": "Friday"}doc = Document(page_content=text, metadata=metadata)PreviousCoNLL-UNextCSVMetadataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersCopy PasteOn this pageCopy PasteThis notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even needCopy Paste | 🦜�🔗 Langchain'https://python.langchain.com/en/latest/index.html'})]PreviousDatadog LogsNextDiscordCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\nReference Documentation\nLangChain Ecosystem\nGuides for how other companies/products can be used with LangChain\nLangChain Ecosystem\nAdditional Resources\nAdditional collection of resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.', metadata={'source': 'https://python.langchain.com/en/latest/index.html'})]PreviousDatadogconcept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nUse Cases\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents intoThe result is a website transformed into clean structured data (like JSON or CSV), ready for your application.This covers how to extract HTML documents from a list of URLs using the Diffbot extract API, into a document format that we can use downstream.urls = [ "https://python.langchain.com/en/latest/index.html",]The Diffbot Extract API Requires an API token. Once you have it, you can extract the data.Read instructions how to get the Diffbot API Token.import osfrom langchain.document_loaders import DiffbotLoaderloader = DiffbotLoader(urls=urls, api_token=os.environ.get("DIFFBOT_API_TOKEN"))With the .load() method, you can see the documents loadedloader.load() [Document(page_content='LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\nGetting Started Documentation\nModules\nThere are several main modules that LangChain provides support for. For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface forIt starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersDiffbotDiffbotUnlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.Diffbot | 🦜�🔗 Langchainunified platform to power products and processes that move money.Connect to banks and payment systemsTrack transactions and balances in real-timeAutomate payment operations for scaleThis notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import ModernTreasuryLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings.This document loader also requires a resource option which defines what data you want to load.Following resources are available:payment_orders Documentationexpected_payments Documentationreturns Documentationincoming_payment_details Documentationcounterparties Documentationinternal_accounts Documentationexternal_accounts Documentationtransactions Documentationledgers Documentationledger_accounts Documentationledger_transactions Documentationevents Documentationinvoices Documentationmodern_treasury_loader = ModernTreasuryLoader("payment_orders")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])modern_treasury_doc_retriever = index.vectorstore.as_retriever()PreviousMicrosoft WordNextNotion DB 1/2CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersModern TreasuryModern TreasuryModern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.Connect to banks and payment systemsTrack transactions and balancesModern Treasury | 🦜�🔗 Langchainlookup_index=0)]PreviousGoogle BigQueryNextGoogle Cloud Storage FileSpecifying a prefixCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.lookup_index=0)]Specifying a prefix​You can also specify a prefix for more finegrained control over what files to load.loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc", prefix="fake")loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpylg6291i/fake.docx'}, lookup_index=0)]PreviousGoogle BigQueryNextGoogle Cloud Storage FileSpecifying aa managed service for storing unstructured data.This covers how to load document objects from an Google Cloud Storage (GCS) directory (bucket).# !pip install google-cloud-storagefrom langchain.document_loaders import GCSDirectoryLoaderloader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]Specifying a prefix​You canSkip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersGoogle Cloud Storage DirectoryOn this pageGoogle Cloud Storage DirectoryGoogle Cloud Storage is a managed service for storing unstructured data.This covers how to load document objects from an Google Cloud StorageGoogle Cloud Storage Directory | 🦜�🔗 LangchainThanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})]PreviousMicrosoft ExcelNextFaunaCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging service in 2010.This notebook covers how to load data from the Facebook Chats into a format that can be ingested into LangChain.# pip install pandasfrom langchain.document_loaders import FacebookChatLoaderloader = FacebookChatLoader("example_data/facebook_chat.json")loader.load() [Document(page_content='User 2 on 2023-02-05 03:46:11: Bye!\n\nUser 1 on 2023-02-05 03:43:55: Oh no worries! Bye\n\nUser 2 on 2023-02-05 03:24:37: No Im sorry it was my mistake, the blue one is not for sale\n\nUser 1 on 2023-02-05 03:05:40: I thought you were selling the blue one!\n\nUser 1 on 2023-02-05 03:05:09: Im not interested in this bag. Im interested in the blue one!\n\nUser 2 on 2023-02-05 03:04:28: Here is $129\n\nUser 2 on 2023-02-05 03:04:05: Online is at least $100\n\nUser 1 on 2023-02-05 02:59:59: How much do you want?\n\nUser 2 on 2023-02-04 22:17:56: Goodmorning! $50 is too low.\n\nUser 1 on 2023-02-04 14:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})]PreviousMicrosoftSkip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersFacebook ChatFacebook ChatMessenger is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging serviceFacebook Chat | 🦜�🔗 Langchaincovers how to load data from an .ipynb notebook into a format suitable by LangChain.from langchain.document_loaders import NotebookLoaderloader = NotebookLoader("example_data/notebook.ipynb")NotebookLoader.load() loads the .ipynb notebook file into a Document object.Parameters:include_outputs (bool): whether to include cell outputs in the resulting document (default is False).max_output_length (int): the maximum number of characters to include from each cell output (default is 10).remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).traceback (bool): whether to include full traceback (default is False).loader.load(include_outputs=True, max_output_length=20, remove_newline=True)PreviousEverNoteNextMicrosoft ExcelCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataNotebookMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersexample_dataNotebookNotebookThis notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain.fromNotebook | 🦜�🔗 Langchainrunning on local markdown files.Below is an example on how to load a local acreom vault into Langchain. As the local vault in acreom is a folder of plain text .md files, the loader requires the path to the directory. Vault files may contain some metadata which is stored as a YAML header. These values will be added to the document’s metadata if collect_metadata is set to true. from langchain.document_loaders import AcreomLoaderloader = AcreomLoader("", collect_metadata=False)docs = loader.load()PreviousEtherscan LoaderNextAirbyte JSONCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersacreomacreomacreom is a dev-first knowledge base with tasks running on local markdown files.Below is an example on how to load a local acreom vault intoacreom | 🦜�🔗 Langchain- March 2022**', metadata={'title': 'Summer Training Program', 'created': time.struct_time(tm_year=2022, tm_mon=12, tm_mday=27, tm_hour=1, tm_min=59, tm_sec=48, tm_wday=1, tm_yday=361, tm_isdst=-1), 'note-attributes.author': 'Mike McGarry', 'note-attributes.source': 'mobile.iphone', 'source': 'example_data/testing.enex'})]PreviousEPubNextNotebookCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual "notebooks" and can be tagged, annotated, edited, searched, and exported.This notebook shows how to load an Evernote export file (.enex) from disk.A document will be created for each note in the export.# lxml and html2text are required to parse EverNote notes# !pip install lxml# !pip install html2textfrom langchain.document_loaders import EverNoteLoader# By default all notes are combined into a single Documentloader = EverNoteLoader("example_data/testing.enex")loader.load() [Document(page_content='testing this\n\nwhat happens?\n\nto the world?**Jan - March 2022**', metadata={'source': 'example_data/testing.enex'})]# It's likely more useful to return a Document for each noteloader = EverNoteLoader("example_data/testing.enex", load_single_document=False)loader.load() [Document(page_content='testing this\n\nwhat happens?\n\nto the world?', metadata={'title': 'testing', 'created': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=47, tm_sec=46, tm_wday=3, tm_yday=40, tm_isdst=-1), 'updated': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=53, tm_sec=28, tm_wday=3, tm_yday=40, tm_isdst=-1), 'note-attributes.author': 'Harrison Chase', 'source': 'example_data/testing.enex'}), Document(page_content='**Jan - March 2022**', metadata={'title': 'Summer Training Program', 'created':Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersEverNoteEverNoteEverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtualEverNote | 🦜�🔗 Langchainmost beloved long-form video sites in China.This loader utilizes the bilibili-api to fetch the text transcript from Bilibili.With this BiliBiliLoader, users can easily obtain the transcript of their desired video content on the platform.#!pip install bilibili-api-pythonfrom langchain.document_loaders import BiliBiliLoaderloader = BiliBiliLoader(["https://www.bilibili.com/video/BV1xt411o7Xu/"])loader.load()PreviousBibTeXNextBlackboardCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersBiliBiliBiliBiliBilibili is one of the most beloved long-form video sites in China.This loader utilizes the bilibili-api to fetch the text transcriptBiliBili | 🦜�🔗 Langchain© 2023 LangChain, Inc.' 75.48', ' "Wins"': ' 73'}), Document(page_content='Royals', metadata={' "Payroll (millions)"': ' 60.91', ' "Wins"': ' 72'}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': ' 118.07', ' "Wins"': ' 69'}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': ' 173.18', ' "Wins"': ' 69'}), Document(page_content='Indians', metadata={' "Payroll (millions)"': ' 78.43', ' "Wins"': ' 68'}), Document(page_content='Twins', metadata={' "Payroll (millions)"': ' 94.08', ' "Wins"': ' 66'}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': ' 78.06', ' "Wins"': ' 64'}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': ' 88.19', ' "Wins"': ' 61'}), Document(page_content='Astros', metadata={' "Payroll (millions)"': ' 60.65', ' "Wins"': ' 55'})]PreviousPsychicNextReadTheDocs DocumentationCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain,' 95.14', ' "Wins"': ' 86'}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': ' 96.92', ' "Wins"': ' 85'}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': ' 97.65', ' "Wins"': ' 83'}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': ' 174.54', ' "Wins"': ' 81'}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': ' 74.28', ' "Wins"': ' 81'}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': ' 63.43', ' "Wins"': ' 79'}), Document(page_content='Padres', metadata={' "Payroll (millions)"': ' 55.24', ' "Wins"': ' 76'}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': ' 81.97', ' "Wins"': ' 75'}), Document(page_content='Mets', metadata={' "Payroll (millions)"': ' 93.35', ' "Wins"': ' 74'}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': ' 75.48', ' "Wins"': '117.62', ' "Wins"': ' 94'}), Document(page_content='Braves', metadata={' "Payroll (millions)"': ' 83.31', ' "Wins"': ' 94'}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': ' 55.37', ' "Wins"': ' 94'}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': ' 120.51', ' "Wins"': ' 93'}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': ' 81.43', ' "Wins"': ' 93'}), Document(page_content='Rays', metadata={' "Payroll (millions)"': ' 64.17', ' "Wins"': ' 90'}), Document(page_content='Angels', metadata={' "Payroll (millions)"': ' 154.49', ' "Wins"': ' 89'}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': ' 132.30', ' "Wins"': ' 88'}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': ' 110.30', ' "Wins"': ' 88'}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': ' 95.14', ' "Wins"': 'data from a PySpark DataFrame.#!pip install pysparkfrom pyspark.sql import SparkSessionspark = SparkSession.builder.getOrCreate() Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 23/05/31 14:08:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicabledf = spark.read.csv("example_data/mlb_teams_2012.csv", header=True)from langchain.document_loaders import PySparkDataFrameLoaderloader = PySparkDataFrameLoader(spark, df, page_content_column="Team")loader.load() [Stage 8:> (0 + 1) / 1] [Document(page_content='Nationals', metadata={' "Payroll (millions)"': ' 81.34', ' "Wins"': ' 98'}), Document(page_content='Reds', metadata={' "Payroll (millions)"': ' 82.20', ' "Wins"': ' 97'}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': ' 197.96', ' "Wins"': ' 95'}), Document(page_content='Giants', metadata={' "Payroll (millions)"': ' 117.62', ' "Wins"': ' 94'}),Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersPySpark DataFrame LoaderPySpark DataFrame LoaderThis notebook goes over how to load data from a PySpark DataFrame.#!pip install pysparkfrom pyspark.sql import SparkSessionsparkPySpark DataFrame Loader | 🦜�🔗 LangchainUsers have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.Follow these steps to download your Discord data:Go to your User SettingsThen go to Privacy and SafetyHead over to the Request all of my Data and click on Request Data buttonIt might take 30 days for you to receive your data. You'll receive an email at the address which is registered with Discord. That email will have a download button using which you would be able to download your personal Discord data.import pandas as pdimport ospath = input('Please enter the path to the contents of the Discord "messages" folder: ')li = []for f in os.listdir(path): expected_csv_path = os.path.join(path, f, "messages.csv") csv_exists = os.path.isfile(expected_csv_path) if csv_exists: df = pd.read_csv(expected_csv_path, index_col=None, header=0) li.append(df)df = pd.concat(li, axis=0, ignore_index=True, sort=False)from langchain.document_loaders.discord import DiscordChatLoaderloader = DiscordChatLoader(df, user_id_col="ID")print(loader.load())PreviousDiffbotNextDocugamiCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersDiscordDiscordDiscord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and filesDiscord | 🦜�🔗 Langchainvalue with a new line.For example, if you wanted to join sentence1 and sentence2 with a space instead of a new line, you could set content_columns_joiner like so:RocksetLoader( RocksetClient(Regions.usw2a1, ""), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 1 WHERE id=38"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: " ".join( [doc[1] for doc in docs] ), # join with space instead of /n)The page_content of the resulting Document would be:This is the first sentence. This is the second sentence.Oftentimes you want to include the column name in the page_content. You can do that like this:RocksetLoader( RocksetClient(Regions.usw2a1, ""), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 1 WHERE id=38"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: "\n".join( [f"{doc[0]}: {doc[1]}" for doc in docs] ),)This would result in the following page_content:sentence1: This is the first sentence.sentence2: This is the second sentence.PreviousRoamNextRSTSetting up the environmentUsing multiple columns as contentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.), Document( page_content="Morbi tortor enim, commodo id efficitur vitae, fringilla nec mi. Nullam molestie faucibus aliquet. Praesent a est facilisis, condimentum justo sit amet, viverra erat. Fusce volutpat nisi vel purus blandit, et facilisis felis accumsan. Phasellus luctus ligula ultrices tellus tempor hendrerit. Donec at ultricies leo.", metadata={"id": 87732, "date": "2022-11-13T18:49:04.000000Z"} )]Using multiple columns as content​You can choose to use multiple columns as content:from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, ""), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 1 WHERE id=38"), ["sentence1", "sentence2"], # TWO content columns)Assuming the "sentence1" field is "This is the first sentence." and the "sentence2" field is "This is the second sentence.", the page_content of the resulting Document would be:This is the first sentence.This is the second sentence.You can define you own function to join content columns by setting the content_columns_joiner argument in the RocksetLoader constructor. content_columns_joiner is a method that takes in a List[Tuple[str, Any]]] as an argument, representing a list of tuples of (column name, column value). By default, this is a method that joins each column value with a new line.For example, if you wanted to join sentence1 and sentence2 with aused as metadata (if you do not pass anything into metadata_keys, the whole Rockset document will be used as metadata). To execute the query and access an iterator over the resulting Documents, run:loader.lazy_load()To execute the query and access all resulting Documents at once, run:loader.load()Here is an example response of loader.load():[ Document( page_content="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas a libero porta, dictum ipsum eget, hendrerit neque. Morbi blandit, ex ut suscipit viverra, enim velit tincidunt tellus, a tempor velit nunc et ex. Proin hendrerit odio nec convallis lobortis. Aenean in purus dolor. Vestibulum orci orci, laoreet eget magna in, commodo euismod justo.", metadata={"id": 83209, "date": "2022-11-13T18:26:45.000000Z"} ), Document( page_content="Integer at finibus odio. Nam sit amet enim cursus lacus gravida feugiat vestibulum sed libero. Aenean eleifend est quis elementum tincidunt. Curabitur sit amet ornare erat. Nulla id dolor ut magna volutpat sodales fringilla vel ipsum. Donec ultricies, lacus sed fermentum dignissim, lorem elit aliquam ligula, sed suscipit sapien purus nec ligula.", metadata={"id": 89313, "date": "2022-11-13T18:28:53.000000Z"} ), Document( page_content="Morbi tortorwhich enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).This notebook demonstrates how to use Rockset as a document loader in langchain. To get started, make sure you have a Rockset account and an API key available.Setting up the environment​Go to the Rockset console and get an API key. Find your API region from the API reference. For the purpose of this notebook, we will assume you're using Rockset from Oregon(us-west-2).Set your the environment variable ROCKSET_API_KEY.Install the Rockset python client, which will be used by langchain to interact with the Rockset database.$ pip3 install rocksetLoading DocumentsThe Rockset integration with LangChain allows you to load documents from Rockset collections with SQL queries. In order to do this you must construct a RocksetLoader object. Here is an example snippet that initializes a RocksetLoader.from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, ""), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 3"), # SQL query ["text"], # content columns metadata_keys=["id", "date"], # metadata columns)Here, you can see that the following query is run:SELECT * FROM langchain_demo LIMIT 3The text column in the collection is used as the page content, and the record's id and date columns are used as metadata (if you do not pass anything into metadata_keys, the whole Rockset document willSkip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersRocksetOn this pageRocksetRockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingestedRockset | 🦜�🔗 Langchainnetworked thought, designed to create a personal knowledge base.This notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo here.🧑 Instructions for ingesting your own dataset​Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.When exporting, make sure to select the Markdown & CSV format option.This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.Run the following command to unzip the zip file (replace the Export... with your own file name as needed).unzip Roam-Export-1675782732639.zip -d Roam_DBfrom langchain.document_loaders import RoamLoaderloader = RoamLoader("Roam_DB")docs = loader.load()PreviousRedditNextRockset🧑 Instructions for ingesting your own datasetCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersRoamOn this pageRoamROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.This notebook covers how to load documents from aRoam | 🦜�🔗 Langchain'allow-external-ingress-high-ports', 'allow-external-ingress-http', 'allow-external-ingress-https', 'container_id:c7d8ecd27b5b3cfdf3b0df04b8965af6f233f56b7c3c2ffabfab5e3b6ccbd6a5', 'container_name:lab_datadog_1', 'datadog.pipelines:false', 'datadog.submission_auth:private_api_key', 'docker_image:datadog/agent:7.41.1', 'env:dd101-dev', 'hostname:lab-host', 'image_name:datadog/agent', 'image_tag:7.41.1', 'instance-id:7497601202021312403', 'instance-type:custom-1-4096', 'instruqt_aws_accounts:', 'instruqt_azure_subscriptions:', 'instruqt_gcp_projects:', 'internal-hostname:lab-host.d4rjybavkary.svc.cluster.local', 'numeric_project_id:3390740675', 'p-d4rjybavkary', 'project:instruqt-prod', 'service:agent', 'short_image:agent', 'source:agent', 'zone:europe-west1-b'], 'timestamp': datetime.datetime(2023, 7, 7, 13, 57, 27, 206000, tzinfo=tzutc())})]PreviousCube Semantic LayerNextDiffbotCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.'container_name:lab_datadog_1', 'datadog.pipelines:false', 'datadog.submission_auth:private_api_key', 'docker_image:datadog/agent:7.41.1', 'env:dd101-dev', 'hostname:lab-host', 'image_name:datadog/agent', 'image_tag:7.41.1', 'instance-id:7497601202021312403', 'instance-type:custom-1-4096', 'instruqt_aws_accounts:', 'instruqt_azure_subscriptions:', 'instruqt_gcp_projects:', 'internal-hostname:lab-host.d4rjybavkary.svc.cluster.local', 'numeric_project_id:3390740675', 'p-d4rjybavkary', 'project:instruqt-prod', 'service:agent', 'short_image:agent', 'source:agent', 'zone:europe-west1-b'], 'timestamp': datetime.datetime(2023, 7, 7, 13, 57, 27, 206000, tzinfo=tzutc())}), Document(page_content='message: grep: /etc/datadog-agent/system-probe.yaml: No such file or directory', metadata={'id': 'AgAAAYkwpLImvkjRpgAAAAAAAAAYAAAAAEFZa3dwTUFsQUFEWmZfLU5QdElnM3dBWgAAACQAAAAAMDE4OTMwYTQtYzk3OS00MmJjLTlhNDAtOTY4N2EwY2I5ZDdk', 'status': 'error', 'service': 'agent', 'tags': ['accessible-from-goog-gke-node', 'allow-external-ingress-high-ports', 'allow-external-ingress-http',analytics platform for cloud-scale applications.This loader fetches the logs from your applications in Datadog using the datadog_api_client Python package. You must initialize the loader with your Datadog API key and APP key, and you need to pass in the query to extract the desired logs.from langchain.document_loaders import DatadogLogsLoader#!pip install datadog-api-clientquery = "service:agent status:error"loader = DatadogLogsLoader( query=query, api_key=DD_API_KEY, app_key=DD_APP_KEY, from_time=1688732708951, # Optional, timestamp in milliseconds to_time=1688736308951, # Optional, timestamp in milliseconds limit=100, # Optional, default is 100)documents = loader.load()documents [Document(page_content='message: grep: /etc/datadog-agent/system-probe.yaml: No such file or directory', metadata={'id': 'AgAAAYkwpLImvkjRpQAAAAAAAAAYAAAAAEFZa3dwTUFsQUFEWmZfLU5QdElnM3dBWQAAACQAAAAAMDE4OTMwYTQtYzk3OS00MmJjLTlhNDAtOTY4N2EwY2I5ZDdk', 'status': 'error', 'service': 'agent', 'tags': ['accessible-from-goog-gke-node', 'allow-external-ingress-high-ports', 'allow-external-ingress-http', 'allow-external-ingress-https', 'container_id:c7d8ecd27b5b3cfdf3b0df04b8965af6f233f56b7c3c2ffabfab5e3b6ccbd6a5',Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersDatadog LogsDatadog LogsDatadog is a monitoring and analytics platform for cloud-scale applications.This loader fetches the logs from your applications in Datadog usingDatadog Logs | 🦜�🔗 Langchainin the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.This covers how to load document objects from a Azure Files.#!pip install azure-storage-blobfrom langchain.document_loaders import AzureBlobStorageFileLoaderloader = AzureBlobStorageFileLoader( conn_str="", container="", blob_name="",)loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]PreviousAzure Blob Storage ContainerNextBibTeXCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersAzure Blob Storage FileAzure Blob Storage FileAzure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, NetworkAzure Blob Storage File | 🦜�🔗 Langchain(millions)"': 93.35, ' "Wins"': 74} page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73} page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72} page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69} page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69} page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68} page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66} page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64} page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61} page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55}PreviousOrg-modeNextPsychicCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.64.17, ' "Wins"': 90} page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89} page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88} page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88} page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86} page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85} page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83} page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81} page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81} page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79} page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76} page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75} page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]# Use lazy load for larger table, which won't read the full table into memoryfor i in loader.lazy_load(): print(i) page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98} page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97} page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95} page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94} page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94} page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94} page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93} page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93} page_content='Rays' metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates',197.96 95 3 Giants 117.62 94 4 Braves 83.31 94 from langchain.document_loaders import DataFrameLoaderloader = DataFrameLoader(df, page_content_column="Team")loader.load() [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}),over how to load data from a pandas DataFrame.#!pip install pandasimport pandas as pddf = pd.read_csv("example_data/mlb_teams_2012.csv")df.head()
Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersPandas DataFramePandas DataFrameThis notebook goes over how to load data from a pandas DataFrame.#!pip install pandasimport pandas as pddf =Pandas DataFrame | 🦜�🔗 Langchainde\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements")data = loader.load()data[0] Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)PreviousiFixitNextImage captionsUsing UnstructuredRetain ElementsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.PNG into a document format that we can use downstream.Using Unstructured​#!pip install pdfminerfrom langchain.document_loaders.image import UnstructuredImageLoaderloader = UnstructuredImageLoader("layout-parser-paper-fast.jpg")data = loader.load()data[0] Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang�, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline WeSkip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersImagesOn this pageImagesThis covers how to load images such as JPG or PNG into a document format that we can use downstream.Using Unstructured​#!pipImages | 🦜�🔗 Langchainimport OpenAIfrom langchain.chains import RetrievalQAWithSourcesChaintext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()docsearch = Chroma.from_documents(texts, embeddings)chain = RetrievalQAWithSourcesChain.from_chain_type( OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())chain({"question": "what is psychic?"}, return_only_outputs=True)PreviousPandas DataFrameNextPySpark DataFrame LoaderPrerequisitesLoading documentsConverting the docs to embeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Psychic. See here for more details.Prerequisites​Follow the Quick Start section in this documentLog into the Psychic dashboard and get your secret keyInstall the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify.Loading documents​Use the PsychicLoader class to load in documents from a connection. Each connection has a connector id (corresponding to the SaaS app that was connected) and a connection id (which you passed in to the frontend library).# Uncomment this to install psychicapi if you don't already have it installedpoetry run pip -q install psychicapi [notice] A new release of pip is available: 23.0.1 -> 23.1.2 [notice] To update, run: pip install --upgrade pipfrom langchain.document_loaders import PsychicLoaderfrom psychicapi import ConnectorId# Create a document loader for google drive. We can also load from other connectors by setting the connector_id to the appropriate value e.g. ConnectorId.notion.value# This loader uses our test credentialsgoogle_drive_loader = PsychicLoader( api_key="7ddb61c1-8b6a-4d31-a58e-30d1c9ea480e", connector_id=ConnectorId.gdrive.value, connection_id="google-test",)documents = google_drive_loader.load()Converting the docs to embeddings​We can now convert these documents into embeddings and store them in a vector database like Chromafrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.llms import OpenAIfrom langchain.chains import RetrievalQAWithSourcesChaintext_splitter =Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersPsychicOn this pagePsychicThis notebook covers how to load documents from Psychic. See here for more details.Prerequisites​Follow the Quick Start section in thisPsychic | 🦜�🔗 Langchainpoints:\n\nRoses are red\n\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']})PreviousTwitterNextURLRetain ElementsDefine a Partitioning StrategyPDF ExampleUnstructured APICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.286.14), (36.34, 286.14), (36.34, 258.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'})]Unstructured API​If you want to get up and running with less set up, you can simply run pip install unstructured and use UnstructuredAPIFileLoader or UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API. You can generate a free Unstructured API key here. The Unstructured documentation page will have instructions on how to generate an API key once they’re available. Check out the instructions here if you’d like to self-host the Unstructured API or run it locally.from langchain.document_loaders import UnstructuredAPIFileLoaderfilenames = ["example_data/fake.docx", "example_data/fake-email.eml"]loader = UnstructuredAPIFileLoader( file_path=filenames[0], api_key="FAKE_API_KEY",)docs = loader.load()docs[0] Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})You can also batch multiple files through the Unstructured API in a single API using UnstructuredAPIFileLoader.loader = UnstructuredAPIFileLoader( file_path=filenames, api_key="FAKE_API_KEY",)docs = loader.load()docs[0] Document(page_content='Lorem ipsum dolor sit amet.\n\nThis is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue',Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 2 0 2', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='n u J', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 258.36), (16.34, 286.14), (36.34, 286.14), (36.34,[Document(page_content='LayoutParser: A Uni�ed Toolkit for Deep Learning Based Document Image Analysis', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'}), Document(page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacobmode="elements")docs = loader.load()docs[:5] [Document(page_content='LayoutParser : A Uni�ed Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Allen Institute for AI shannons@allenai.org', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Brown University ruochen zhang@brown.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)]If you need to post process the unstructured elements after extraction, you can pass in a list of Element -> Element functions to the post_processors kwarg when you instantiate the UnstructuredFileLoader. This applies to other Unstructured loaders as well. Below is an example. Post processors are only applied if you run the loader in "elements" mode.from langchain.document_loaders import UnstructuredFileLoaderfrom unstructured.cleaners.core import clean_extra_whitespaceloader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements", post_processors=[clean_extra_whitespace],)docs = loader.load()docs[:5] [Document(page_content='LayoutParser: A Uni�edloader.load()docs[:5] [Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)]PDF Example​Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements. Modes of operation are single all the text from all elements are combined into one (default)elements maintain individual elementspaged texts from each page are only combinedwget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../"loader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements")docs = loader.load()docs[:5]Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]Define a Partitioning Strategy​Unstructured document loader allow users to pass in a strategy parameter that lets unstructured know how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade-off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the strategy kwarg is ignored. In some cases, the high res strategy will fallback to fast if there is a dependency missing (i.e. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below.from langchain.document_loaders import UnstructuredFileLoaderloader = UnstructuredFileLoader( "layout-parser-paper-fast.pdf", strategy="fast", mode="elements")docs = loader.load()docs[:5] [Document(page_content='1', lookup_str='',use Unstructured package to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.# # Install packagepip install "unstructured[local-inference]"pip install layoutparser[layoutmodels,tesseract]# # Install other dependencies# # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst# !brew install libmagic# !brew install poppler# !brew install tesseract# # If parsing xml / html documents:# !brew install libxml2# !brew install libxslt# import nltk# nltk.download('punkt')from langchain.document_loaders import UnstructuredFileLoaderloader = UnstructuredFileLoader("./example_data/state_of_the_union.txt")docs = loader.load()docs[0].page_content[:400] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit'Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredFileLoader( "./example_data/state_of_the_union.txt", mode="elements")docs = loader.load()docs[:5] [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'},Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersUnstructured FileOn this pageUnstructured FileThis notebook covers how to use Unstructured package to load files of many types. Unstructured currently supports loading of text files,Unstructured File | 🦜�🔗 Langchainlangchain.document_loaders import NotionDBLoaderloader = NotionDBLoader( integration_token=NOTION_TOKEN, database_id=DATABASE_ID, request_timeout_sec=30, # optional, defaults to 10)docs = loader.load()print(docs) PreviousNotion DB 1/2NextObsidianRequirementsSetup1. Create a Notion Table Database2. Create a Notion Integration3. Connect the Integration to the Database4. Get the Database IDUsageCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Once the integration is created, you'll be provided with an Integration Token (API key). Copy this token and keep it safe, as you'll need it to use the NotionDBLoader.3. Connect the Integration to the Database​To connect your integration to the database, follow these steps:Open your database in Notion.Click on the three-dot menu icon in the top right corner of the database view.Click on the "+ New integration" button.Find your integration, you may need to start typing its name in the search box.Click on the "Connect" button to connect the integration to the database.4. Get the Database ID​To get the database ID, follow these steps:Open your database in Notion.Click on the three-dot menu icon in the top right corner of the database view.Select "Copy link" from the menu to copy the database URL to your clipboard.The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=.... In this example, the database ID is 8935f9d140a04f95a872520c4f123456.With the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database.Usage​NotionDBLoader is part of the langchain package's document loaders. You can use it as follows:from getpass import getpassNOTION_TOKEN = getpass()DATABASE_ID = getpass() ········ ········from langchain.document_loaders import NotionDBLoaderloader = NotionDBLoader(is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.NotionDBLoader is a Python class for loading content from a Notion database. It retrieves pages from the database, reads their content, and returns a list of Document objects.Requirements​A Notion DatabaseNotion Integration TokenSetup​1. Create a Notion Table Database​Create a new table database in Notion. You can add any column to the database and they will be treated as metadata. For example you can add the following columns:Title: set Title as the default property.Categories: A Multi-select property to store categories associated with the page.Keywords: A Multi-select property to store keywords associated with the page.Add your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages.2. Create a Notion Integration​To create a Notion Integration, follow these steps:Visit the Notion Developers page and log in with your Notion account.Click on the "+ New integration" button.Give your integration a name and choose the workspace where your database is located.Select the require capabilities, this extension only need the Read content capabilityClick the "Submit" button to create the integration.Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersNotion DB 2/2On this pageNotion DB 2/2Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks,Notion DB 2/2 | 🦜�🔗 Langchain// Code for: function main() { --8<-- main();PreviousSnowflakeNextSpreedlySplittingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.main() Splitting​Additional splitting could be needed for those functions, classes, or scripts that are too big.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".js"], parser=LanguageParser(language=Language.JS),)docs = loader.load()from langchain.text_splitter import ( RecursiveCharacterTextSplitter, Language,)js_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)result = js_splitter.split_documents(docs)len(result) 7print("\n\n--8<--\n\n".join([document.page_content for document in result])) class MyClass { constructor(name) { this.name = name; --8<-- } --8<-- greet() { console.log(`Hello, ${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); --8<-- const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass { // Code for: function main() { --8<--${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass { // Code for: function main() { main();The parser can be disabled for small files. The parameter parser_threshold indicates the minimum number of lines that the source code file must have to be segmented using the parser.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py"], parser=LanguageParser(language=Language.PYTHON, parser_threshold=1000),)docs = loader.load()len(docs) 1print(docs[0].page_content) class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() if __name__ == "__main__": main() Splitting​Additional splitting could{'content_type': 'functions_classes', 'language': , 'source': 'example_data/source_code/example.js'} {'content_type': 'simplified_code', 'language': , 'source': 'example_data/source_code/example.js'}print("\n\n--8<--\n\n".join([document.page_content for document in docs])) class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") --8<-- def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() --8<-- # Code for: class MyClass: # Code for: def main(): if __name__ == "__main__": main() --8<-- class MyClass { constructor(name) { this.name = name; } greet() { console.log(`Hello, ${this.name}!`); } }files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining code top-level code outside the already loaded functions and classes will be loaded into a seperate document.This approach can potentially improve the accuracy of QA models over source code. Currently, the supported languages for code parsing are Python and JavaScript. The language used for parsing can be configured, along with the minimum number of lines required to activate the splitting based on syntax.pip install esprimaimport warningswarnings.filterwarnings("ignore")from pprint import pprintfrom langchain.text_splitter import Languagefrom langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import LanguageParserloader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py", ".js"], parser=LanguageParser(),)docs = loader.load()len(docs) 6for document in docs: pprint(document.metadata) {'content_type': 'functions_classes', 'language': , 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': , 'source': 'example_data/source_code/example.py'} {'content_type': 'simplified_code', 'language': , 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': , 'source': 'example_data/source_code/example.js'} {'content_type': 'functions_classes', 'language':Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersSource CodeOn this pageSource CodeThis notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loadedSource Code | 🦜�🔗 Langchain'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309', ':@computed_region_6qbp_sg9q': nan, ':@computed_region_qgnn_b9vv': nan, ':@computed_region_ajp5_b2md': nan, ':@computed_region_yftq_j783': nan, ':@computed_region_p5aj_wyqh': nan, ':@computed_region_fyvs_ahh9': nan, ':@computed_region_6pnf_4xz7': nan, ':@computed_region_jwn9_ihcz': nan, ':@computed_region_9dfj_4gjx': nan, ':@computed_region_4isq_27mq': nan, ':@computed_region_pigm_ib2e': nan, ':@computed_region_9jxd_iqea': nan, ':@computed_region_6ezc_tdp2': nan, ':@computed_region_h4ep_8xdi': nan, ':@computed_region_n4xg_c4py': nan, ':@computed_region_fcz8_est8': nan, ':@computed_region_nqbw_i6c3': nan, ':@computed_region_2dwj_jsy4': nan, 'Latitude': 37.7083109744362, 'Longitude': -122.420084075249})PreviousFigmaNextGitCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.of the sample of SF crimne data. import matplotlib.pyplot as plt# Load San Francisco map datasf = gpd.read_file("https://data.sfgov.org/resource/3psu-pn9h.geojson")# Plot the San Francisco map and the pointsfig, ax = plt.subplots(figsize=(10, 10))sf.plot(ax=ax, color="white", edgecolor="black")gdf.plot(ax=ax, color="red", markersize=5)plt.show() ![png](_geopandas_files/output_7_0.png) Load GeoPandas dataframe as a Document for downstream processing (embedding, chat, etc). The geometry will be the default page_content columns, and all other columns are placed in metadata.But, we can specify the page_content_column.from langchain.document_loaders import GeoDataFrameLoaderloader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")docs = loader.load()docs[0] Document(page_content='POINT (-122.420084075249 37.7083109744362)', metadata={'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]},make working with geospatial data in python easier. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. Geometric operations are performed by shapely. Geopandas further depends on fiona for file access and matplotlib for plotting.LLM applications (chat, QA) that utilize geospatial data are an interesting area for exploration.pip install sodapy pip install pandas pip install geopandasimport astimport pandas as pdimport geopandas as gpdfrom langchain.document_loaders import OpenCityDataLoaderCreate a GeoPandas dataframe from Open City Data as an example input.# Load Open City Datadataset = "tmnf-yvry" # San Francisco crime dataloader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id=dataset, limit=5000)docs = loader.load()# Convert list of dictionaries to DataFramedf = pd.DataFrame([ast.literal_eval(d.page_content) for d in docs])# Extract latitude and longitudedf["Latitude"] = df["location"].apply(lambda loc: loc["coordinates"][1])df["Longitude"] = df["location"].apply(lambda loc: loc["coordinates"][0])# Create geopandas DFgdf = gpd.GeoDataFrame( df, geometry=gpd.points_from_xy(df.Longitude, df.Latitude), crs="EPSG:4326")# Only keep valid longitudes and latitudes for San Franciscogdf = gdf[ (gdf["Longitude"] >= -123.173825) & (gdf["Longitude"] <= -122.281780) & (gdf["Latitude"] >= 37.623983) & (gdf["Latitude"] <= 37.929824)]Visiualization of the sample of SF crimne data. import matplotlib.pyplot as plt# Load San Francisco mapSkip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersGeopandasGeopandasGeopandas is an open source project to make working with geospatial data in python easier. GeoPandas extends the datatypes used byGeopandas | 🦜�🔗 Langchainbut it is time to go home 🥲

================================================================================

Last day of the honeymoon. And it’s ! This cute tabby came to the restaurant to beg for food and got some chicken.

================================================================================The toot texts (the documents' page_content) is by default HTML as returned by the Mastodon API.PreviousLarkSuite (FeiShu)NextMediaWikiDumpCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.and social networking service.This loader fetches the text from the "toots" of a list of Mastodon accounts, using the Mastodon.py Python package.Public accounts can the queried by default without any authentication. If non-public accounts or instances are queried, you have to register an application for your account which gets you an access token, and set that token and your account's API base URL.Then you need to pass in the Mastodon account names you want to extract, in the @account@instance format.from langchain.document_loaders import MastodonTootsLoader#!pip install Mastodon.pyloader = MastodonTootsLoader( mastodon_accounts=["@Gargron@mastodon.social"], number_toots=50, # Default value is 100)# Or set up access information to use a Mastodon app.# Note that the access token can either be passed into# constructor or you can set the envirovnment "MASTODON_ACCESS_TOKEN".# loader = MastodonTootsLoader(# access_token="",# api_base_url="",# mastodon_accounts=["@Gargron@mastodon.social"],# number_toots=50, # Default value is 100# )documents = loader.load()for doc in documents[:3]: print(doc.page_content) print("=" * 80)

It is tough to leave this behind and go back to reality. And some people live here! I’m sure there are downsides but it sounds pretty good to me right now.

================================================================================

I wish we could stay here a little longer, but it is time to go home 🥲

Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersMastodonMastodonMastodon is a federated social media and social networking service.This loader fetches the text from the "toots" of a list ofMastodon | 🦜ï¸�🔗 LangchainSQL OLAP database management system.Load a DuckDB query with one document per row.#!pip install duckdbfrom langchain.document_loaders import DuckDBLoaderTeam,PayrollNationals,81.34Reds,82.20 Writing example.csvloader = DuckDBLoader("SELECT * FROM read_csv_auto('example.csv')")data = loader.load()print(data) [Document(page_content='Team: Nationals\nPayroll: 81.34', metadata={}), Document(page_content='Team: Reds\nPayroll: 82.2', metadata={})]Specifying Which Columns are Content vs Metadata​loader = DuckDBLoader( "SELECT * FROM read_csv_auto('example.csv')", page_content_columns=["Team"], metadata_columns=["Payroll"],)data = loader.load()print(data) [Document(page_content='Team: Nationals', metadata={'Payroll': 81.34}), Document(page_content='Team: Reds', metadata={'Payroll': 82.2})]Adding Source to Metadata​loader = DuckDBLoader( "SELECT Team, Payroll, Team As source FROM read_csv_auto('example.csv')", metadata_columns=["source"],)data = loader.load()print(data) [Document(page_content='Team: Nationals\nPayroll: 81.34\nsource: Nationals', metadata={'source': 'Nationals'}), Document(page_content='Team: Reds\nPayroll: 82.2\nsource: Reds', metadata={'source': 'Reds'})]PreviousDocugamiNextEmailSpecifying Which Columns are Content vs MetadataAdding Source to MetadataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersDuckDBOn this pageDuckDBDuckDB is an in-process SQL OLAP database management system.Load a DuckDB query with one document per row.#!pipDuckDB | 🦜ï¸�🔗 Langchaincustom scraping rules​The SitemapLoader uses beautifulsoup4 for the scraping process, and it scrapes every element on the page by default. The SitemapLoader constructor accepts a custom scraping function. This feature can be helpful to tailor the scraping process to your specific needs; for example, you might want to avoid scraping headers or navigation elements. The following example shows how to develop and use a custom function to avoid navigation and header elements.Import the beautifulsoup4 library and define the custom function.pip install beautifulsoup4from bs4 import BeautifulSoupdef remove_nav_and_header_elements(content: BeautifulSoup) -> str: # Find all 'nav' and 'header' elements in the BeautifulSoup object nav_elements = content.find_all("nav") header_elements = content.find_all("header") # Remove each 'nav' and 'header' element from the BeautifulSoup object for element in nav_elements + header_elements: element.decompose() return str(content.get_text())Add your custom function to the SitemapLoader object.loader = SitemapLoader( "https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"], parsing_function=remove_nav_and_header_elements,)Local Sitemap​The sitemap loader can also be used to load local files.sitemap_loader = SitemapLoader(web_path="example_data/sitemap.xml", is_local=True)docs = sitemap_loader.load() Fetching pages: 100%|####################################################################################################################################| 3/3 [00:00<00:00, 3.91it/s]PreviousRSTNextSlackFiltering sitemap URLsAdd custom scraping rulesLocal SitemapCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\n\n\n\n\n\n\n\n\n\n\n\nnext\nQuickstart Guide\n\n\n\n\n\n\n\n\n\n Contents\n \n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\n\nBy Harrison Chase\n\n\n\n\n \n © Copyright 2023, Harrison Chase.\n \n\n\n\n\n Last updated on Mar 27, 2023.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/latest/', 'loc': 'https://python.langchain.com/en/latest/', 'lastmod': '2023-03-27T22:50:49.790324+00:00', 'changefreq': 'daily', 'priority': '0.9'}, lookup_index=0)Add custom scraping rules​The SitemapLoader uses beautifulsoup4 for the scrapingPersonal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\n\n\n\n\n\nReference Docs#\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\n\nReference Documentation\n\n\n\n\n\nLangChain Ecosystem#\nGuides for how other companies/products can be used with LangChain\n\nLangChain Ecosystem\n\n\n\n\n\nAdditional Resources#\nAdditional collection of resources we think may be useful as you develop your application!\n\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain.documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\n\nGetting Started#\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\n\nGetting Started Documentation\n\n\n\n\n\nModules#\nThere are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\n\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\n\n\n\n\n\nUse Cases#\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\n\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestionEcosystem\nAI21 Labs\nAtlasDB\nBanana\nCerebriumAI\nChroma\nCohere\nDeepInfra\nDeep Lake\nForefrontAI\nGoogle Search Wrapper\nGoogle Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHelicone\nHugging Face\nMilvus\nModal\nNLPCloud\nOpenAI\nOpenSearch\nPetals\nPGVector\nPinecone\nPromptLayer\nQdrant\nRunhouse\nSearxNG Search API\nSerpAPI\nStochasticAI\nUnstructured\nWeights & Biases\nWeaviate\nWolfram Alpha Wrapper\nWriter\n\n\n\nAdditional Resources\n\nLangChainHub\nGlossary\nLangChain Gallery\nDeployments\nTracing\nDiscord\nProduction Support\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.rst\n\n\n\n\n\n\n\n.pdf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWelcome to LangChain\n\n\n\n\n Contents \n\n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\nWelcome to LangChain#\nLangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\n\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\n\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, seeStarted\nDefining Custom Tools\nMulti Input Tools\nBash\nBing Search\nChatGPT Plugins\nGoogle Search\nGoogle Serper API\nHuman as a tool\nIFTTT WebHooks\nPython REPL\nRequests\nSearch Tools\nSearxNG Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions API\n\n\nAgents\nAgent Types\nCustom Agent\nConversation Agent (for Chat Models)\nConversation Agent\nMRKL\nMRKL Chat\nReAct\nSelf Ask With Search\n\n\nToolkits\nCSV Agent\nJSON Agent\nOpenAPI Agent\nPandas Dataframe Agent\nPython Agent\nSQL Database Agent\nVectorstore Agent\n\n\nAgent Executors\nHow to combine agents and vectorstores\nHow to use the async API for Agents\nHow to create ChatGPT Clone\nHow to access intermediate steps\nHow to cap the max number of iterations\nHow to add SharedMemory to an Agent and its Tools\n\n\n\n\n\nUse Cases\n\nPersonal Assistants\nQuestion Answering over Docs\nChatbots\nQuerying Tabular Data\nInteracting with APIs\nSummarization\nExtraction\nEvaluation\nAgent Benchmarking: Search + Calculator\nAgent VectorDB Question Answering Benchmarking\nBenchmarking Template\nData Augmented Question Answering\nUsing Hugging Face Datasets\nLLM Math\nQuestion Answering Benchmarking: Paul Graham Essay\nQuestion Answering Benchmarking: State of the Union Address\nQA Generation\nQuestion Answering\nSQL Question Answering Benchmarking: Chinook\n\n\n\nReference\n\nInstallation\nIntegrations\nAPI References\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\nChains\nAgents\n\n\n\nEcosystem\n\nLangChain Ecosystem\nAI21Text Splitter\nHuggingFace Length Function\nLatex Text Splitter\nMarkdown Text Splitter\nNLTK Text Splitter\nPython Code Text Splitter\nRecursiveCharacterTextSplitter\nSpacy Text Splitter\ntiktoken (OpenAI) Length Function\nTiktokenText Splitter\n\n\nVectorstores\nGetting Started\nAtlasDB\nChroma\nDeep Lake\nElasticSearch\nFAISS\nMilvus\nOpenSearch\nPGVector\nPinecone\nQdrant\nRedis\nWeaviate\n\n\nRetrievers\nChatGPT Plugin Retriever\nVectorStore Retriever\n\n\n\n\nMemory\nGetting Started\nHow-To Guides\nConversationBufferMemory\nConversationBufferWindowMemory\nEntity Memory\nConversation Knowledge Graph Memory\nConversationSummaryMemory\nConversationSummaryBufferMemory\nConversationTokenBufferMemory\nHow to add Memory to an LLMChain\nHow to add memory to a Multi-Input Chain\nHow to add Memory to an Agent\nHow to customize conversational memory\nHow to create a custom Memory class\nHow to use multiple memroy classes in the same chain\n\n\n\n\nChains\nGetting Started\nHow-To Guides\nAsync API for Chain\nLoading from LangChainHub\nLLM Chain\nSequential Chains\nSerialization\nTransformation Chain\nAnalyze Document\nChat Index\nGraph QA\nHypothetical Document Embeddings\nQuestion Answering with Sources\nQuestion Answering\nSummarization\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\nVector DB Text Generation\nAPI Chains\nSelf-Critique Chain with Constitutional AI\nBashChain\nLLMCheckerChain\nLLM Math\nLLMRequestsChain\nLLMSummarizationCheckerChain\nModeration\nPAL\nSQLite example\n\n\nReference\n\n\nAgents\nGetting Started\nTools\nGetting Started\nDefining Custom Tools\nMulti Input Tools\nBash\nBingModels\nGetting Started\nHow-To Guides\nHow to use few shot examples\nHow to stream responses\n\n\nIntegrations\nAzure\nOpenAI\nPromptLayer ChatOpenAI\n\n\n\n\nText Embedding Models\nAzureOpenAI\nCohere\nFake Embeddings\nHugging Face Hub\nInstructEmbeddings\nOpenAI\nSageMaker Endpoint Embeddings\nSelf Hosted Embeddings\nTensorflowHub\n\n\n\n\nPrompts\nPrompt Templates\nGetting Started\nHow-To Guides\nHow to create a custom prompt template\nHow to create a prompt template that uses few shot examples\nHow to work with partial Prompt Templates\nHow to serialize prompts\n\n\nReference\nPromptTemplates\nExample Selector\n\n\n\n\nChat Prompt Template\nExample Selectors\nHow to create a custom example selector\nLengthBased ExampleSelector\nMaximal Marginal Relevance ExampleSelector\nNGram Overlap ExampleSelector\nSimilarity ExampleSelector\n\n\nOutput Parsers\nOutput Parsers\nCommaSeparatedListOutputParser\nOutputFixingParser\nPydanticOutputParser\nRetryOutputParser\nStructured Output Parser\n\n\n\n\nIndexes\nGetting Started\nDocument Loaders\nCoNLL-U\nAirbyte JSON\nAZLyrics\nBlackboard\nCollege Confidential\nCopy Paste\nCSV Loader\nDirectory Loader\nEmail\nEverNote\nFacebook Chat\nFigma\nGCS Directory\nGCS File Storage\nGitBook\nGoogle Drive\nGutenberg\nHacker News\nHTML\niFixit\nImages\nIMSDb\nMarkdown\nNotebook\nNotion\nObsidian\nPDF\nPowerPoint\nReadTheDocs Documentation\nRoam\ns3 Directory\ns3 File\nSubtitle Files\nTelegram\nUnstructured File Loader\nURL\nWeb Base\nWord Documents\nYouTube\n\n\nText Splitters\nGetting Started\nCharacter Text Splitter\nHuggingFace Length Function\nLatex Text Splitter\nMarkdown Text"https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"],)documents = loader.load()documents[0] Document(page_content='\n\n\n\n\n\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nModules\n\nModels\nLLMs\nGetting Started\nGeneric Functionality\nHow to use the async API for LLMs\nHow to write a custom LLM wrapper\nHow (and why) to use the fake LLM\nHow to cache LLM calls\nHow to serialize LLM classes\nHow to stream LLM responses\nHow to track token usage\n\n\nIntegrations\nAI21\nAleph Alpha\nAnthropic\nAzure OpenAI LLM Example\nBanana\nCerebriumAI LLM Example\nCohere\nDeepInfra LLM Example\nForefrontAI LLM Example\nGooseAI LLM Example\nHugging Face Hub\nManifest\nModal\nOpenAI\nPetals LLM Example\nPromptLayer OpenAI\nSageMakerEndpoint\nSelf-Hosted Models via Runhouse\nStochasticAI\nWriter\n\n\nReference\n\n\nChat Models\nGetting Started\nHow-To Guides\nHow to use few shot examples\nHow to streamrepositories for deploying LangChain apps.\nDiscord: Join us on our Discord to discuss all things LangChain!\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\n\n\n\n\n\n\n\n\n\n\n\nnext\nQuickstart Guide\n\n\n\n\n\n\n\n\n\n Contents\n \n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\n\nBy Harrison Chase\n\n\n\n\n \n © Copyright 2023, Harrison Chase.\n \n\n\n\n\n Last updated on Mar 24, 2023.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/stable/', 'loc': 'https://python.langchain.com/en/stable/', 'lastmod': '2023-03-24T19:30:54.647430+00:00', 'changefreq': 'weekly', 'priority': '1'}, lookup_index=0)Filtering sitemap URLs​Sitemaps can be massive files, with thousands of URLs. Often you don't need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the url_filter parameter. Only URLs that match one of the patterns will be loaded.loader = SitemapLoader( "https://langchain.readthedocs.io/sitemap.xml",Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nGenerate similar examples: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.\nCompare models: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\n\n\n\n\n\nReference Docs#\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\n\nReference Documentation\n\n\n\n\n\nLangChain Ecosystem#\nGuides for how other companies/products can be used with LangChain\n\nLangChain Ecosystem\n\n\n\n\n\nAdditional Resources#\nAdditional collection of resources we think may be useful as you develop your application!\n\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nDiscord: Join us on our Discord to discuss all thingsother tools, and end-to-end chains for common applications.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nChat: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.\n\n\n\n\n\nUse Cases#\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\n\nAgents: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nData Augmented Generation: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.\nQuestion Answering: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type ofis often not enough to\ncreate a truly powerful app - the real power comes when you are able to\ncombine them with other sources of computation or knowledge.\nThis library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:\nâ�“ Question Answering over specific documents\n\nDocumentation\nEnd-to-end Example: Question Answering over Notion Database\n\n💬 Chatbots\n\nDocumentation\nEnd-to-end Example: Chat-LangChain\n\n🤖 Agents\n\nDocumentation\nEnd-to-end Example: GPT+WolframAlpha\n\n\nGetting Started#\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\n\nGetting Started Documentation\n\n\n\n\n\nModules#\nThere are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\n\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nLLMs: This includes a generic interface for all LLMs, and common utilities for working with LLMs.\nDocument Loaders: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.\nUtils: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nIndexes: Language models are often moreReferences\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\nChains\nAgents\n\n\n\nEcosystem\n\nLangChain Ecosystem\nAI21 Labs\nAtlasDB\nBanana\nCerebriumAI\nChroma\nCohere\nDeepInfra\nDeep Lake\nForefrontAI\nGoogle Search Wrapper\nGoogle Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHelicone\nHugging Face\nMilvus\nModal\nNLPCloud\nOpenAI\nOpenSearch\nPetals\nPGVector\nPinecone\nPromptLayer\nQdrant\nRunhouse\nSearxNG Search API\nSerpAPI\nStochasticAI\nUnstructured\nWeights & Biases\nWeaviate\nWolfram Alpha Wrapper\nWriter\n\n\n\nAdditional Resources\n\nLangChainHub\nGlossary\nLangChain Gallery\nDeployments\nTracing\nDiscord\nProduction Support\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.rst\n\n\n\n\n\n\n\n.pdf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWelcome to LangChain\n\n\n\n\n Contents \n\n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\nWelcome to LangChain#\nLarge language models (LLMs) are emerging as a transformative technology, enabling\ndevelopers to build applications that they previously could not.\nBut using these LLMs in isolation is often not enough to\ncreate a truly powerful app - the real power comes when you are ableAgent\nConversation Agent (for Chat Models)\nChatGPT Plugins\nCustom Agent\nDefining Custom Tools\nHuman as a tool\nIntermediate Steps\nLoading from LangChainHub\nMax Iterations\nMulti Input Tools\nSearch Tools\nSerialization\nAdding SharedMemory to an Agent and its Tools\nCSV Agent\nJSON Agent\nOpenAPI Agent\nPandas Dataframe Agent\nPython Agent\nSQL Database Agent\nVectorstore Agent\nMRKL\nMRKL Chat\nReAct\nSelf Ask With Search\n\n\nReference\n\n\nMemory\nGetting Started\nKey Concepts\nHow-To Guides\nConversationBufferMemory\nConversationBufferWindowMemory\nEntity Memory\nConversation Knowledge Graph Memory\nConversationSummaryMemory\nConversationSummaryBufferMemory\nConversationTokenBufferMemory\nAdding Memory To an LLMChain\nAdding Memory to a Multi-Input Chain\nAdding Memory to an Agent\nChatGPT Clone\nConversation Agent\nConversational Memory Customization\nCustom Memory\nMultiple Memory\n\n\n\n\nChat\nGetting Started\nKey Concepts\nHow-To Guides\nAgent\nChat Vector DB\nFew Shot Examples\nMemory\nPromptLayer ChatOpenAI\nStreaming\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\n\n\n\n\n\nUse Cases\n\nAgents\nChatbots\nGenerate Examples\nData Augmented Generation\nQuestion Answering\nSummarization\nQuerying Tabular Data\nExtraction\nEvaluation\nAgent Benchmarking: Search + Calculator\nAgent VectorDB Question Answering Benchmarking\nBenchmarking Template\nData Augmented Question Answering\nUsing Hugging Face Datasets\nLLM Math\nQuestion Answering Benchmarking: Paul Graham Essay\nQuestion Answering Benchmarking: State of the Union Address\nQA Generation\nQuestion Answering\nSQL Question Answering Benchmarking: Chinook\n\n\nModel Comparison\n\nReference\n\nInstallation\nIntegrations\nAPI References\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPythonDocumentation\nRoam\ns3 Directory\ns3 File\nSubtitle Files\nTelegram\nUnstructured File Loader\nURL\nWeb Base\nWord Documents\nYouTube\n\n\n\n\nUtils\nKey Concepts\nGeneric Utilities\nBash\nBing Search\nGoogle Search\nGoogle Serper API\nIFTTT WebHooks\nPython REPL\nRequests\nSearxNG Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions API\n\n\nReference\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\n\n\nIndexes\nGetting Started\nKey Concepts\nHow To Guides\nEmbeddings\nHypothetical Document Embeddings\nText Splitter\nVectorStores\nAtlasDB\nChroma\nDeep Lake\nElasticSearch\nFAISS\nMilvus\nOpenSearch\nPGVector\nPinecone\nQdrant\nRedis\nWeaviate\nChatGPT Plugin Retriever\nVectorStore Retriever\nAnalyze Document\nChat Index\nGraph QA\nQuestion Answering with Sources\nQuestion Answering\nSummarization\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\nVector DB Text Generation\n\n\n\n\nChains\nGetting Started\nHow-To Guides\nGeneric Chains\nLoading from LangChainHub\nLLM Chain\nSequential Chains\nSerialization\nTransformation Chain\n\n\nUtility Chains\nAPI Chains\nSelf-Critique Chain with Constitutional AI\nBashChain\nLLMCheckerChain\nLLM Math\nLLMRequestsChain\nLLMSummarizationCheckerChain\nModeration\nPAL\nSQLite example\n\n\nAsync API for Chain\n\n\nKey Concepts\nReference\n\n\nAgents\nGetting Started\nKey Concepts\nHow-To Guides\nAgents and Vectorstores\nAsync API for Agent\nConversation Agent (for Chat Models)\nChatGPT Plugins\nCustom Agent\nDefiningto main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nModules\n\nPrompt Templates\nGetting Started\nKey Concepts\nHow-To Guides\nCreate a custom prompt template\nCreate a custom example selector\nProvide few shot examples to a prompt\nPrompt Serialization\nExample Selectors\nOutput Parsers\n\n\nReference\nPromptTemplates\nExample Selector\n\n\n\n\nLLMs\nGetting Started\nKey Concepts\nHow-To Guides\nGeneric Functionality\nCustom LLM\nFake LLM\nLLM Caching\nLLM Serialization\nToken Usage Tracking\n\n\nIntegrations\nAI21\nAleph Alpha\nAnthropic\nAzure OpenAI LLM Example\nBanana\nCerebriumAI LLM Example\nCohere\nDeepInfra LLM Example\nForefrontAI LLM Example\nGooseAI LLM Example\nHugging Face Hub\nManifest\nModal\nOpenAI\nPetals LLM Example\nPromptLayer OpenAI\nSageMakerEndpoint\nSelf-Hosted Models via Runhouse\nStochasticAI\nWriter\n\n\nAsync API for LLM\nStreaming with LLMs\n\n\nReference\n\n\nDocument Loaders\nKey Concepts\nHow To Guides\nCoNLL-U\nAirbyte JSON\nAZLyrics\nBlackboard\nCollege Confidential\nCopy Paste\nCSV Loader\nDirectory Loader\nEmail\nEverNote\nFacebook Chat\nFigma\nGCS Directory\nGCS File Storage\nGitBook\nGoogle Drive\nGutenberg\nHacker News\nHTML\niFixit\nImages\nIMSDb\nMarkdown\nNotebook\nNotion\nObsidian\nPDF\nPowerPoint\nReadTheDocs Documentation\nRoam\ns3 Directory\ns3 File\nSubtitleSitemapLoader loads a sitemap from a given URL, and then scrape and load all pages in the sitemap, returning each page as a Document.The scraping is done concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren't concerned about being a good citizen, or you control the scrapped server, or don't care about load. Note, while this will speed up the scraping process, but it may cause the server to block you. Be careful!pip install nest_asyncio Requirement already satisfied: nest_asyncio in /Users/tasp/Code/projects/langchain/.venv/lib/python3.10/site-packages (1.5.6) [notice] A new release of pip available: 22.3.1 -> 23.0.1 [notice] To update, run: pip install --upgrade pip# fixes a bug with asyncio and jupyterimport nest_asyncionest_asyncio.apply()from langchain.document_loaders.sitemap import SitemapLoadersitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")docs = sitemap_loader.load()You can change the requests_per_second parameter to increase the max concurrent requests. and use requests_kwargs to pass kwargs when send requests.sitemap_loader.requests_per_second = 2# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issuesitemap_loader.requests_kwargs = {"verify": False}docs[0] Document(page_content='\n\n\n\n\n\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip toSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersSitemapOn this pageSitemapExtends from the WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrape and load all pages inSitemap | 🦜ï¸�🔗 Langchainasia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://api.cybersource.com\ncompany_name: CyberSource REST', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'})]PreviousSource CodeNextStripeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.visa', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='mdd_field_57\nmdd_field_58\nmdd_field_59\nmdd_field_60\nmdd_field_61\nmdd_field_62\nmdd_field_63\nmdd_field_64\nmdd_field_65\nmdd_field_66\nmdd_field_67\nmdd_field_68\nmdd_field_69\nmdd_field_70\nmdd_field_71\nmdd_field_72\nmdd_field_73\nmdd_field_74\nmdd_field_75\nmdd_field_76\nmdd_field_77\nmdd_field_78\nmdd_field_79\nmdd_field_80\nmdd_field_81\nmdd_field_82\nmdd_field_83\nmdd_field_84\nmdd_field_85\nmdd_field_86\nmdd_field_87\nmdd_field_88\nmdd_field_89\nmdd_field_90\nmdd_field_91\nmdd_field_92\nmdd_field_93\nmdd_field_94\nmdd_field_95\nmdd_field_96\nmdd_field_97\nmdd_field_98\nmdd_field_99\nmdd_field_100\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\nmaestro\nelo\nunion_pay\ncartes_bancaires\nmada\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage:visa\nmaster\namerican_express\ndiscover\njcb\nmaestro\nelo\nnaranja\ncabal\nunionpay\nregions: asia_pacific\neurope\nmiddle_east\nnorth_america\nhomepage: http://worldpay.com\ndisplay_api_url: https://secure.worldpay.com/jsp/merchant/xml/paymentService.jsp\ncompany_name: WorldPay', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='gateway_specific_fields: receipt_email\nradar_session_id\nskip_radar_rules\napplication_fee\nstripe_account\nmetadata\nidempotency_key\nreason\nrefund_application_fee\nrefund_fee_amount\nreverse_transfer\naccount_id\ncustomer_id\nvalidate\nmake_default\ncancellation_reason\ncapture_method\nconfirm\nconfirmation_method\ncustomer\ndescription\nmoto\noff_session\non_behalf_of\npayment_method_types\nreturn_email\nreturn_url\nsave_payment_method\nsetup_future_usage\nstatement_descriptor\nstatement_descriptor_suffix\ntransfer_amount\ntransfer_destination\ntransfer_group\napplication_fee_amount\nrequest_three_d_secure\nerror_on_requires_action\nnetwork_transaction_id\nclaim_without_transaction_id\nfulfillment_date\nevent_type\nmodal_challenge\nidempotent_request\nmerchant_reference\ncustomer_reference\nshipping_address_zip\nshipping_from_zip\nshipping_amount\nline_items\nsupported_countries: AE\nAT\nAU\nBE\nBG\nBR\nCA\nCH\nCY\nCZ\nDE\nDK\nEE\nES\nFI\nFR\nGB\nGR\nHK\nHU\nIE\nIN\nIT\nJP\nLT\nLU\nLV\nMT\nMX\nMY\nNL\nNO\nNZ\nPL\nPT\nRO\nSE\nSG\nSI\nSK\nUS\nsupported_cardtypes: visa', metadata={'source':VI\nVN\nVU\nWF\nWS\nYE\nYT\nZA\nZM\nsupported_cardtypes:KZ\nLA\nLC\nLI\nLK\nLS\nLT\nLU\nLV\nMA\nMC\nMD\nME\nMG\nMH\nMK\nML\nMN\nMO\nMP\nMQ\nMR\nMS\nMT\nMU\nMV\nMW\nMX\nMY\nMZ\nNA\nNC\nNE\nNF\nNG\nNI\nNL\nNO\nNP\nNR\nNU\nNZ\nOM\nPA\nPE\nPF\nPH\nPK\nPL\nPN\nPR\nPT\nPW\nPY\nQA\nRE\nRO\nRS\nRU\nRW\nSA\nSB\nSC\nSE\nSG\nSI\nSK\nSL\nSM\nSN\nST\nSV\nSZ\nTC\nTD\nTF\nTG\nTH\nTJ\nTK\nTM\nTO\nTR\nTT\nTV\nTW\nTZ\nUA\nUG\nUS\nUY\nUZ\nVA\nVC\nVE\nVI\nVN\nVU\nWF\nWS\nDocument(page_content='BG\nBH\nBI\nBJ\nBM\nBN\nBO\nBR\nBS\nBT\nBW\nBY\nBZ\nCA\nCC\nCF\nCH\nCK\nCL\nCM\nCN\nCO\nCR\nCV\nCX\nCY\nCZ\nDE\nDJ\nDK\nDO\nDZ\nEC\nEE\nEG\nEH\nES\nET\nFI\nFJ\nFK\nFM\nFO\nFR\nGA\nGB\nGD\nGE\nGF\nGG\nGH\nGI\nGL\nGM\nGN\nGP\nGQ\nGR\nGT\nGU\nGW\nGY\nHK\nHM\nHN\nHR\nHT\nHU\nID\nIE\nIL\nIM\nIN\nIO\nIS\nIT\nJE\nJM\nJO\nJP\nKE\nKG\nKH\nKI\nKM\nKN\nKR\nKW\nKY\nKZ\nLA\nLC\nLI\nLK\nTest the retrieverspreedly_doc_retriever.get_relevant_documents("CRC") [Document(page_content='installment_grace_period_duration\nreference_data_code\ninvoice_number\ntax_management_indicator\noriginal_amount\ninvoice_amount\nvat_tax_rate\nmobile_remote_payment_type\ngratuity_amount\nmdd_field_1\nmdd_field_2\nmdd_field_3\nmdd_field_4\nmdd_field_5\nmdd_field_6\nmdd_field_7\nmdd_field_8\nmdd_field_9\nmdd_field_10\nmdd_field_11\nmdd_field_12\nmdd_field_13\nmdd_field_14\nmdd_field_15\nmdd_field_16\nmdd_field_17\nmdd_field_18\nmdd_field_19\nmdd_field_20\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\ndankort\nmaestro\nelo\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://ics2wsa.ic3.com/commerce/1.x/transactionProcessor\ncompany_name: CyberSource', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.This notebook covers how to load data from the Spreedly REST API into a format that can be ingested into LangChain, along with example usage for vectorization.Note: this notebook assumes the following packages are installed: openai, chromadb, and tiktoken.import osfrom langchain.document_loaders import SpreedlyLoaderfrom langchain.indexes import VectorstoreIndexCreatorSpreedly API requires an access token, which can be found inside the Spreedly Admin Console.This document loader does not currently support pagination, nor access to more complex objects which require additional parameters. It also requires a resource option which defines what objects you want to load.Following resources are available:gateways_options: Documentationgateways: Documentationreceivers_options: Documentationreceivers: Documentationpayment_methods: Documentationcertificates: Documentationtransactions: Documentationenvironments: Documentationspreedly_loader = SpreedlyLoader( os.environ["SPREEDLY_ACCESS_TOKEN"], "gateways_options")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([spreedly_loader])spreedly_doc_retriever = index.vectorstore.as_retriever() Using embedded DuckDB without persistence: data will be transient# Test the retrieverspreedly_doc_retriever.get_relevant_documents("CRC")Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersSpreedlySpreedlySpreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and thirdSpreedly | 🦜ï¸�🔗 Langchain© 2023 LangChain, Inc.96%\n \nFull Time\n\n\n\n\n 4%\n \nPart Time\n\n\n\n\n\n\n\n 94%\n \n\n\n\n\nResidency\n\n\n\n 6%\n \nIn State\n\n\n\n\n 94%\n \nOut-of-State\n\n\n\n\n\n\n\n Data Source: IPEDs and Peterson\'s Databases © 2022 Peterson\'s LLC All rights reserved\n \n', lookup_str='', metadata={'source': 'https://www.collegeconfidential.com/colleges/brown-university/'}, lookup_index=0)]Previouschatgpt_loaderNextConfluenceCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain,Financial Aid):\n \n\n\n\n $82,286\n \n\n\n\n $82,286\n \n\n\n\n\n\n\n\n\n\n\n\nStudent Life\n\n Wondering what life at Brown is like? There are approximately \n 10,696 students enrolled at \n Brown, \n including 7,349 undergraduate students and \n 3,347 graduate students.\n 96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUndergraduate Enrollment\n\n\n\n$2,466\n \n\n\n\n $2,466\n \n\n\n\n\nHousing\n\n\n\n $15,840\n \n\n\n\n $15,840\n \n\n\n\n\nBooks\n\n\n\n $1,300\n \n\n\n\n $1,300\n \n\n\n\n\n\n Total (Before Financial Aid):\n740 - 800\n \n \n\n\n\n\n\n\n Brown Tuition & Fees\n \n\n\n\nTuition & Fees\n\n\n\n $82,286\n \nIn State\n\n\n\n\n $82,286\n \nOut-of-State\n\n\n\n\n\n\n\nCost Breakdown\n\n\nIn State\n\n\nOut-of-State\n\n\n\n\nState Tuition\n\n\n\n $62,680\n \n\n\n\n $62,680\n \n\n\n\n\nFees\n\n\n\n $2,466\nScore\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to 35\nAvg\n\n\n33 and Less\nLow\n\n\n\n\n\n\nStand out on your college application\n\n• Qualify for scholarships\n• Most students who retest improve their score\n\nSponsored by ACT\n\n\n Take the Next ACT Test\n \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 - 770\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nMath SAT Range\n\n\n \n Not available\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nReading SAT Range\n\n\n \n 740 - 800\nFall first-year applications to Brown is \n Jan 5. \n \n \n \n\n \n The deadline for Fall transfer applications to Brown is \n Mar 1. \n \n \n \n\n \n Check the school website \n for more information about deadlines for specific programs or special admissions programs\n \n \n\n\n\n\n\n\nBrown ACT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nACT Range\n\n\n \n 33 - 35\n \n \n\n\n\nEstimated Chance of Acceptance by ACT Score\n\n\nACT Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 towithout leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of colleges in Rhode Island and save your favorites to your college list.\n\n\n\nCollege Info\n\n\n\n\n\n\n\n\n\n Providence, RI 02912\n \n\n\n\n Campus Setting: Urban\n \n\n\n\n\n\n\n\n (401) 863-2378\n \n\n Website\n \n\n Virtual Tour\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBrown Application Deadline\n\n\n\nFirst-Year Applications are Due\n\nJan 5\n\nTransfer Applications are Due\n\nMar 1\n\n\n\n \n The deadline for Fall first-year applications to Brown is \nthose with no debt. For context, compare this number with the average national debt, which is around $36,000 per borrower. \nThe 2023-2024 FAFSA Opened on October 1st, 2022\nSome financial aid is awarded on a first-come, first-served basis, so fill out the FAFSA as soon as you can. Visit the FAFSA website to apply for student aid. Remember, the first F in FAFSA stands for FREE! You should never have to pay to submit the Free Application for Federal Student Aid (FAFSA), so be very wary of anyone asking you for money.\nLearn more about Tuition and Financial Aid at Brown.\nBased on this information, does Brown seem like a good fit? Remember, a school that is perfect for one person may be a terrible fit for someone else! So ask yourself: Is Brown a good school for you?\nIf Brown University seems like a school you want to apply to, click the heart button to save it to your college list.\n\nStill Exploring Schools?\nChoose one of the options below to learn more about Brown:\nAdmissions\nStudent Life\nAcademics\nTuition & Aid\nBrown Community Forums\nThen use the college admissions predictor to take a data science look at your chances of getting into some of the best colleges and universities in the U.S.\nWhere is Brown?\nBrown is located in the urban setting of Providence, Rhode Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of collegesto get a sense of how satisfied students are with their school experience, and if they have the support necessary to succeed in college. \nApproximately 98% of first-year, full-time undergrads who start at Browncome back their sophomore year. 95% of Brown undergrads graduate within six years. The average six-year graduation rate for U.S. colleges and universities is 61% for public schools, and 67% for private, non-profit schools.\nJob Outcomes for Brown Grads\nJob placement stats are a good resource for understanding the value of a degree from Brown by providing a look on how job placement has gone for other grads. \nCheck with Brown directly, for information on any information on starting salaries for recent grads.\nBrown\'s Endowment\nAn endowment is the total value of a school\'s investments, donations, and assets. Endowment is not necessarily an indicator of the quality of a school, but it can give you a sense of how much money a college can afford to invest in expanding programs, improving facilities, and support students. \nAs of 2022, the total market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another important factor when choose a college. Some colleges may have high tuition, but do a better job at meeting students\' financial need.\nBrown meets 100% of the demonstrated financial need for undergraduates. The average financial aid package for a full-time, first-year student is around $59,749 a year. \nThe average student debt for graduates in the class of 2022 was around $24,102 per student, not including those with no debt. For context, compare this number with the average national debt, which is around\nLove music or performing? Join a campus band, sing in a chorus, or perform with one of the school\'s theater groups.\nInterested in journalism or communications? Brown students can write for the campus newspaper, host a radio show or be a producer for the student-run television channel.\nInterested in joining a fraternity or sorority? Brown has fraternities and sororities.\nPlanning to play sports? Brown has many options for athletes. See them all and learn more about life at Brown on the Student Life page.\n\n\n\n2022 Brown Facts At-A-Glance\n\n\n\n\n\nAcademic Calendar\nOther\n\n\nOverall Acceptance Rate\n6%\n\n\nEarly Decision Acceptance Rate\n16%\n\n\nEarly Action Acceptance Rate\nEA not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas about what makes a "good" school. Some factors that can help you determine what a good school for you might be include admissions criteria, acceptance rate, tuition costs, and more.\nLet\'s take a look at these factors to get a clearer sense of what Brown offers and if it could be the right college for you.\nBrown Acceptance Rate 2022\nIt is extremely difficult to get into Brown. Around 6% of applicants get into Brown each year. In 2022, just 2,568 out of the 46,568 students who applied were accepted.\nRetention and Graduation Rates at Brown\nRetention refers to the number of students that stay enrolled at a school over time. This is a way to get a sense of how satisfied students are with their school experience, and if they have the supportcolleges and universities.This covers how to load College Confidential webpages into a document format that we can use downstream.from langchain.document_loaders import CollegeConfidentialLoaderloader = CollegeConfidentialLoader( "https://www.collegeconfidential.com/colleges/brown-university/")data = loader.load()data [Document(page_content='\n\n\n\n\n\n\n\nA68FEB02-9D19-447C-B8BC-818149FD6EAF\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Media (2)\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout Brown\n\n\n\n\n\n\nBrown University Overview\nBrown University is a private, nonprofit school in the urban setting of Providence, Rhode Island. Brown was founded in 1764 and the school currently enrolls around 10,696 students a year, including 7,349 undergraduates. Brown provides on-campus housing for students. Most students live in off campus housing.\n📆 Mark your calendar! January 5, 2023 is the final deadline to submit an application for the Fall 2023 semester. \nThere are many ways for students to get involved at Brown! \nLove music or performing? Join a campus band, sing in a chorus, or performSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersCollege ConfidentialCollege ConfidentialCollege Confidential gives information on 3,800+ colleges and universities.This covers how to load College Confidential webpages into a document format that we can useCollege Confidential | 🦜ï¸�🔗 Langchainq are the key and query vectors respectively, which are produced by forwarding linear modules on x, and v is the vector that is calculated by propagating the same linear module on x again. The key vector represents what the token contains, and the query vector represents what the token is looking for. The vector v is the information that the token will communicate to other tokens if it finds them interesting, and it gets aggregated for the purposes of the self-attention mechanism.'PreviousXorbits Pandas DataFrameNextYouTube transcriptsYouTube url to textBuilding a chat app from YouTube videoCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.embeddings)# Build a QA chainqa_chain = RetrievalQA.from_chain_type( llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0), chain_type="stuff", retriever=vectordb.as_retriever(),)# Ask a question!query = "Why do we need to zero out the gradient before backprop at each step?"qa_chain.run(query) "We need to zero out the gradient before backprop at each step because the backward pass accumulates gradients in the grad attribute of each parameter. If we don't reset the grad to zero before each backward pass, the gradients will accumulate and add up, leading to incorrect updates and slower convergence. By resetting the grad to zero before each backward pass, we ensure that the gradients are calculated correctly and that the optimization process works as intended."query = "What is the difference between an encoder and decoder?"qa_chain.run(query) 'In the context of transformers, an encoder is a component that reads in a sequence of input tokens and generates a sequence of hidden representations. On the other hand, a decoder is a component that takes in a sequence of hidden representations and generates a sequence of output tokens. The main difference between the two is that the encoder is used to encode the input sequence into a fixed-length representation, while the decoder is used to decode the fixed-length representation into an output sequence. In machine translation, for example, the encoder reads in the source language sentence and generates a fixed-length representation, which is then used by the decoder to generate the target language sentence.'query = "For any token, what are x, k, v, and q?"qa_chain.run(query) 'For any token, x is the input vector that contains the private information of that token, k and q are the key and query vectors respectively, which are produced by forwarding linear modules on x, and[download] 100% of 134.98MiB [ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a; file is already in target format m4a# Returns a list of Documents, which can be easily viewed or parseddocs[0].page_content[0:500] "Hello, my name is Andrej and I've been training deep neural networks for a bit more than a decade. And in this lecture I'd like to show you what neural network training looks like under the hood. So in particular we are going to start with a blank Jupyter notebook and by the end of this lecture we will define and train a neural net and you'll get to see everything that goes on under the hood and exactly sort of how that works on an intuitive level. Now specifically what I would like to do is I w"Building a chat app from YouTube video​Given Documents, we can easily enable chat / question+answering.from langchain.chains import RetrievalQAfrom langchain.vectorstores import FAISSfrom langchain.chat_models import ChatOpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.text_splitter import RecursiveCharacterTextSplitter# Combine doccombined_docs = [doc.page_content for doc in docs]text = " ".join(combined_docs)# Split themtext_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=150)splits = text_splitter.split_text(text)# Build an indexembeddings = OpenAIEmbeddings()vectordb = FAISS.from_texts(splits, embeddings)# Build a QA chainqa_chain = RetrievalQA.from_chain_type(format(s): 140 [dashsegments] Total fragments: 11 [download] Destination: /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a [download] 100% of 107.73MiB in 00:00:18 at 5.92MiB/s [FixupM4a] Correcting container of "/Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a" [ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a; file is already in target format m4a [youtube] Extracting URL: https://youtu.be/VMj-3S1tku0 [youtube] VMj-3S1tku0: Downloading webpage [youtube] VMj-3S1tku0: Downloading android player API JSON [info] VMj-3S1tku0: Downloading 1 format(s): 140 [download] /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a has already been downloaded [download] 100% of 134.98MiBchat or QA applications on YouTube videos is a topic of high interest.Below we show how to easily go from a YouTube url to text to chat!We wil use the OpenAIWhisperParser, which will use the OpenAI Whisper API to transcribe audio to text.Note: You will need to have an OPENAI_API_KEY supplied.from langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import OpenAIWhisperParserfrom langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoaderWe will use yt_dlp to download audio for YouTube urls.We will use pydub to split downloaded audio files (such that we adhere to Whisper API's 25MB file size limit).pip install yt_dlp pip install pydubYouTube url to text​Use YoutubeAudioLoader to fetch / download the audio files.Then, ues OpenAIWhisperParser() to transcribe them to text.Let's take the first lecture of Andrej Karpathy's YouTube course as an example! # Two Karpathy lecture videosurls = ["https://youtu.be/kCc8FmEb1nY", "https://youtu.be/VMj-3S1tku0"]# Directory to save audio filessave_dir = "~/Downloads/YouTube"# Transcribe the videos to textloader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())docs = loader.load() [youtube] Extracting URL: https://youtu.be/kCc8FmEb1nY [youtube] kCc8FmEb1nY: Downloading webpage [youtube] kCc8FmEb1nY: Downloading android player API JSON [info] kCc8FmEb1nY: Downloading 1 format(s): 140 [dashsegments] Total fragments: 11Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersLoading documents from a YouTube urlOn this pageLoading documents from a YouTube urlBuilding chat or QA applications on YouTube videos is a topic of high interest.Below we show how toLoading documents from a YouTube url | 🦜ï¸�🔗 Langchainis revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below.Blank lines marking sentence boundaries.Comment lines starting with hash (#).This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples.from langchain.document_loaders import CoNLLULoaderloader = CoNLLULoader("example_data/conllu.conllu")document = loader.load()document [Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})]PreviousConfluenceNextCopy PasteCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersCoNLL-UCoNLL-UCoNLL-U is revised version of the CoNLL-X format. Annotations are encoded in plain text filesCoNLL-U | 🦜ï¸�🔗 LangchainText Extraction APICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.Prerequisites​Create a free embaas account at https://embaas.io/register and generate an API keyDocument Text Extraction API​The document text extraction API allows you to extract the text from a given document. The API supports a variety of document formats, including PDF, mp3, mp4 and more. For a full list of supported formats, check out the API docs (link below).# Set API keyembaas_api_key = "YOUR_API_KEY"# or set environment variableos.environ["EMBAAS_API_KEY"] = "YOUR_API_KEY"Using a blob (bytes)​from langchain.document_loaders.embaas import EmbaasBlobLoaderfrom langchain.document_loaders.blob_loaders import Blobblob_loader = EmbaasBlobLoader()blob = Blob.from_path("example.pdf")documents = blob_loader.load(blob)# You can also directly create embeddings with your preferred embeddings modelblob_loader = EmbaasBlobLoader(params={"model": "e5-large-v2", "should_embed": True})blob = Blob.from_path("example.pdf")documents = blob_loader.load(blob)print(documents[0]["metadata"]["embedding"])Using a file​from langchain.document_loaders.embaas import EmbaasLoaderfile_loader = EmbaasLoader(file_path="example.pdf")documents = file_loader.load()# Disable automatic text splittingfile_loader = EmbaasLoader(file_path="example.mp3", params={"should_chunk": False})documents = file_loader.load()For more detailed information about the embaas document text extraction API, please refer to the official embaas API documentation.PreviousEmailNextEPubPrerequisitesDocument Text Extraction APICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright ©Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersEmbaasOn this pageEmbaasembaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings andEmbaas | 🦜ï¸�🔗 LangchainRequired:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]PreviousHuggingFace datasetNextImagesSearching iFixit using /suggestCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.fruit. Good source of potassium. Yellow.\n== Background Information ==\n\nCommonly misspelled, this wildly popular, phone shaped fruit serves as nutrition and an obstacle to slow down vehicles racing close behind you. Also used commonly as a synonym for “crazyâ€� or “insaneâ€�.\n\nBotanically, the banana is considered a berry, although it isn’t included in the culinary berry category containing strawberries and raspberries. Belonging to the genus Musa, the banana originated in Southeast Asia and Australia. Now largely cultivated throughout South and Central America, bananas are largely available throughout the world. They are especially valued as a staple food group in developing countries due to the banana tree’s ability to produce fruit year round.\n\nThe banana can be easily opened. Simply remove the outer yellow shell by cracking the top of the stem. Then, with the broken piece, peel downward on each side until the fruity components on the inside are exposed. Once the shell has been removed it cannot be put back together.\n\n== Technical Specifications ==\n\n* Dimensions: Variable depending on genetics of the parent tree\n* Color: Variable depending on ripeness, region, and season\n\n== Additional Information ==\n\n[link|https://en.wikipedia.org/wiki/Banana|Wiki: Banana]', lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Banana', 'title': 'Banana'}, lookup_index=0), Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from thep�.��Y�����L �<B%�) *�0!5K5<*@`@,G�M�O�O|V�Z�Z�awh�i�i�p�s=vqv}U��Ѝ��2�r��Ūh������U�+�������������r�� W ;�^�W"(3(�.�4z;,@�@,A�G�L�LOS�X�\{]�]/dhj=lGn�o�o�v�z�z�������~���F�W���H��ܬ�����#�������_�������e�����%���������� o�d � 6'�+�/66e<�C�I�NBS=W�\�b�i�p�u�{�у�K�u�ǝ����N�b���;��N�������������h�����������h���&��<���i ����n$�'(�-�2�2�7T:�:A�AOF{F�L�L2TyY�Y8`�ab�h�kMp�p�w {<{��&�;��������e����Ǣ����a�دѴ��J�w���%���D������������\�������3��� 4| 1| 0| 0| 2649| 7.225| null| C| +-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+ only showing top 20 rows agent = create_spark_dataframe_agent(llm=OpenAI(temperature=0), df=df, verbose=True)agent.run("how many rows are there?") > Entering new AgentExecutor chain... Thought: I need to find out how many rows are in the dataframe Action: python_repl_ast Action Input: df.count() Observation: 891 Thought: I now know the final answer Final Answer: There are 891 rows in the dataframe. > Finished chain. 'There are 891 rows in the dataframe.'agent.run("how many people have more than 3 siblings") > Entering new AgentExecutor chain... Thought: I need to find out how many people have more than 3 siblings Action: python_repl_ast Action Input: df.filter(df.SibSp > 3).count() Observation: 30 Thought: I now know the final answer Final Answer: 30 people have more than 3 siblings. > Finished chain. '30 people have more than 3 siblings.'agent.run("whats the square root of the average age?") > Entering new AgentExecutor chain... Thought: I need16| 1| 2|Hewlett, Mrs. (Ma...|female|55.0| 0| 0| 248706| 16.0| null| S| | 17| 0| 3|Rice, Master. Eugene| male| 2.0| 4| 1| 382652| 29.125| null| Q| | 18| 1| 2|Williams, Mr. Cha...| male|null| 0| 0| 244373| 13.0| null| S| | 19| 0| 3|Vander Planke, Mr...|female|31.0| 1| 0| 345763| 18.0| null| S| | 20| 1| 3|Masselmani, Mrs. ...|female|null| 0| 0|16.7| G6| S| | 12| 1| 1|Bonnell, Miss. El...|female|58.0| 0| 0| 113783| 26.55| C103| S| | 13| 0| 3|Saundercock, Mr. ...| male|20.0| 0| 0| A/5. 2151| 8.05| null| S| | 14| 0| 3|Andersson, Mr. An...| male|39.0| 1| 5| 347082| 31.275| null| S| | 15| 0| 3|Vestrom, Miss. Hu...|female|14.0| 0| 0| 350406| 7.8542| null| S| | 16| 1|Tim...| male|54.0| 0| 0| 17463|51.8625| E46| S| | 8| 0| 3|Palsson, Master. ...| male| 2.0| 3| 1| 349909| 21.075| null| S| | 9| 1| 3|Johnson, Mrs. Osc...|female|27.0| 0| 2| 347742|11.1333| null| S| | 10| 1| 2|Nasser, Mrs. Nich...|female|14.0| 1| 0| 237736|30.0708| null| C| | 11| 1| 3|Sandstrom, Miss. ...|female| 4.0| 1| 1| PP 9549| 16.7| G6| S|3| 1| 3|Heikkinen, Miss. ...|female|26.0| 0| 0|STON/O2. 3101282| 7.925| null| S| | 4| 1| 1|Futrelle, Mrs. Ja...|female|35.0| 1| 0| 113803| 53.1| C123| S| | 5| 0| 3|Allen, Mr. Willia...| male|35.0| 0| 0| 373450| 8.05| null| S| | 6| 0| 3| Moran, Mr. James| male|null| 0| 0| 330877| 8.4583| null| Q| | 7| 0| 1|McCarthy, Mr. Tim...| male|54.0| 0|address Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 23/05/15 20:33:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable +-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+ |PassengerId|Survived|Pclass| Name| Sex| Age|SibSp|Parch| Ticket| Fare|Cabin|Embarked| +-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+ | 1| 0| 3|Braund, Mr. Owen ...| male|22.0| 1| 0| A/5 21171| 7.25| null| S| | 2| 1| 1|Cumings, Mrs. Joh...|female|38.0| 1| 0| PC 17599|71.2833| C85| C| | 3| 1|Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsSpark Dataframe AgentOn this pageSpark Dataframe AgentThis notebook shows how to use agents to interact with a Spark dataframe and Spark Connect. It is mostly optimized for question answering.NOTE: this agent calls the Python agent under the hood, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. Use cautiously.import osos.environ["OPENAI_API_KEY"] = "...input your openai api key here..."from langchain.llms import OpenAIfrom pyspark.sql import SparkSessionfrom langchain.agents import create_spark_dataframe_agentspark = SparkSession.builder.getOrCreate()csv_file_path = "titanic.csv"df = spark.read.csv(csv_file_path, header=True, inferSchema=True)df.show() 23/05/15 20:33:10 WARN Utils: Your hostname, Mikes-Mac-mini.local resolves to a loopback address: 127.0.0.1; using 192.168.68.115 instead (on interface en1) 23/05/15 20:33:10 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address Setting default log level to "WARN". To adjust logging level useSpark Dataframe Agent | 🦜ï¸�🔗 LangchainSpark SQL AgentThis notebook shows how to use agents to interact with a Spark SQL. Similar to SQL Database Agent, it is designed to address general inquiries about Spark SQL and facilitate error recovery.📄ï¸� SQL Database AgentThis notebook showcases an agent designed to interact with a sql databases. The agent builds off of SQLDatabaseChain and is designed to answer more general questions about a database, as well as recover from errors.📄ï¸� Vectorstore AgentThis notebook showcases an agent designed to retrieve information from one or more vectorstores, either with or without sources.📄ï¸� Xorbits AgentThis notebook shows how to use agents to interact with Xorbits Pandas dataframe and Xorbits Numpy ndarray. It is mostly optimized for question answering.PreviousTensorflowHubNextAmadeus ToolkitCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.The agent is able to iteratively explore the blob to find what it needs to answer the user's question.📄ï¸� Multion ToolkitThis notebook walks you through connecting LangChain to the MultiOn Client in your browser📄ï¸� Office365 ToolkitThis notebook walks through connecting LangChain to Office365 email and calendar.📄ï¸� OpenAPI agentsWe can construct agents to consume arbitrary APIs, here APIs conformant to the OpenAPI/Swagger specification.📄ï¸� Natural Language APIsNatural Language API Toolkits (NLAToolkits) permit LangChain Agents to efficiently plan and combine calls across endpoints. This notebook demonstrates a sample composition of the Speak, Klarna, and Spoonacluar APIs.📄ï¸� Pandas Dataframe AgentThis notebook shows how to use agents to interact with a pandas dataframe. It is mostly optimized for question answering.📄ï¸� PlayWright Browser ToolkitThis toolkit is used to interact with the browser. While other tools (like the Requests tools) are fine for static sites, Browser toolkits let your agent navigate the web and interact with dynamically rendered sites. Some tools bundled within the Browser toolkit include:📄ï¸� PowerBI Dataset AgentThis notebook showcases an agent designed to interact with a Power BI Dataset. The agent is designed to answer more general questions about a dataset, as well as recover from errors.📄ï¸� Python AgentThis notebook showcases an agent designed to write and execute python code to answer a question.📄ï¸� Spark Dataframe AgentThis notebook shows how to use agents to interact with a Spark dataframe and Spark Connect. It is mostly optimized for question answering.📄ï¸� Spark SQL AgentThis notebook shows how to use agents to interactSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsAgent toolkits📄ï¸� Amadeus ToolkitThis notebook walks you through connecting LangChain to the Amadeus travel information API📄ï¸� Azure Cognitive Services ToolkitThis toolkit is used to interact with the Azure Cognitive Services API to achieve some multimodal capabilities.📄ï¸� CSV AgentThis notebook shows how to use agents to interact with a csv. It is mostly optimized for question answering.📄ï¸� Document ComparisonThis notebook shows how to use an agent to compare two documents.📄ï¸� GitHubThis notebook goes over how to use the GitHub tool.📄ï¸� Gmail ToolkitThis notebook walks through connecting a LangChain email to the Gmail API.📄ï¸� JiraThis notebook goes over how to use the Jira tool.📄ï¸� JSON AgentThis notebook showcases an agent designed to interact with large JSON/dict objects. This is useful when you want to answer questions about a JSON blob that's too large to fit in the context window of an LLM. The agent is able to iteratively explore the blob to find what it needs to answer the user'sAgent toolkits | 🦜ï¸�🔗 LangchainFake Embeddings | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsFake EmbeddingsFake EmbeddingsLangChain also provides a fake embedding class. You can use this to test your pipelines.from langchain.embeddings import FakeEmbeddingsembeddings = FakeEmbeddings(size=1352)query_result = embeddings.embed_query("foo")doc_results = embeddings.embed_documents(["foo"])PreviousEmbaasNextGoogle Cloud Platform Vertex AI PaLMCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Print created embeddingsfor i, doc_text_embedding in enumerate(doc_texts_embeddings): print(f"Embedding for document {i + 1}: {doc_text_embedding}")# Using a different model and/or custom instructionembeddings = EmbaasEmbeddings( model="instructor-large", instruction="Represent the Wikipedia document for retrieval",)For more detailed information about the embaas Embeddings API, please refer to the official embaas API documentation.PreviousElasticsearchNextFake EmbeddingsPrerequisitesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsEmbaasOn this pageEmbaasembaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.In this tutorial, we will show you how to use the embaas Embeddings API to generate embeddings for a given text.Prerequisites​Create your free embaas account at https://embaas.io/register and generate an API key.# Set API keyembaas_api_key = "YOUR_API_KEY"# or set environment variableos.environ["EMBAAS_API_KEY"] = "YOUR_API_KEY"from langchain.embeddings import EmbaasEmbeddingsembeddings = EmbaasEmbeddings()# Create embeddings for a single documentdoc_text = "This is a test document."doc_text_embedding = embeddings.embed_query(doc_text)# Print created embeddingprint(doc_text_embedding)# Create embeddings for multiple documentsdoc_texts = ["This is a test document.", "This is another test document."]doc_texts_embeddings = embeddings.embed_documents(doc_texts)# Print created embeddingsfor i, doc_text_embedding in enumerate(doc_texts_embeddings):Embaas | 🦜ï¸�🔗 LangchainSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsMiniMaxMiniMaxMiniMax offers an embeddings service.This example goes over how to use LangChain to interact with MiniMax Inference for text embedding.import osos.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID"os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY"from langchain.embeddings import MiniMaxEmbeddingsembeddings = MiniMaxEmbeddings()query_text = "This is a test query."query_result = embeddings.embed_query(query_text)document_text = "This is a test document."document_result = embeddings.embed_documents([document_text])import numpy as npquery_numpy = np.array(query_result)document_numpy = np.array(document_result[0])similarity = np.dot(query_numpy, document_numpy) / ( np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy))print(f"Cosine similarity between document and query: {similarity}") Cosine similarity between document and query: 0.1573236279277012PreviousLocalAINextModelScopeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.MiniMax | 🦜ï¸�🔗 LangchainLlama-cpp | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsLlama-cppLlama-cppThis notebook goes over how to use Llama-cpp embeddings within LangChainpip install llama-cpp-pythonfrom langchain.embeddings import LlamaCppEmbeddingsllama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin")text = "This is a test document."query_result = llama.embed_query(text)doc_result = llama.embed_documents([text])PreviousJinaNextLocalAICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.InstructEmbeddings | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsInstructEmbeddingsInstructEmbeddingsLet's load the HuggingFace instruct Embeddings class.from langchain.embeddings import HuggingFaceInstructEmbeddingsembeddings = HuggingFaceInstructEmbeddings( query_instruction="Represent the query for retrieval: ") load INSTRUCTOR_Transformer max_seq_length 512text = "This is a test document."query_result = embeddings.embed_query(text)PreviousHugging Face HubNextJinaCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Jina | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsJinaJinaLet's load the Jina Embedding class.from langchain.embeddings import JinaEmbeddingsembeddings = JinaEmbeddings( jina_auth_token=jina_auth_token, model_name="ViT-B-32::openai")text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])In the above example, ViT-B-32::openai, OpenAI's pretrained ViT-B-32 model is used. For a full list of models, see here.PreviousInstructEmbeddingsNextLlama-cppCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Hugging Face Hub | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsHugging Face HubHugging Face HubLet's load the Hugging Face Embedding class.from langchain.embeddings import HuggingFaceEmbeddingsembeddings = HuggingFaceEmbeddings()text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousGPT4AllNextInstructEmbeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.import NLPCloudEmbeddingsimport osos.environ["NLPCLOUD_API_KEY"] = "xxx"nlpcloud_embd = NLPCloudEmbeddings()text = "This is a test document."query_result = nlpcloud_embd.embed_query(text)doc_result = nlpcloud_embd.embed_documents([text])PreviousMosaicML embeddingsNextOpenAICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsNLP CloudNLP CloudNLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data. The embeddings endpoint offers several models:paraphrase-multilingual-mpnet-base-v2: Paraphrase Multilingual MPNet Base V2 is a very fast model based on Sentence Transformers that is perfectly suited for embeddings extraction in more than 50 languages (see the full list here).gpt-j: GPT-J returns advanced embeddings. It might return better results than Sentence Transformers based models (see above) but it is also much slower.dolphin: Dolphin returns advanced embeddings. It might return better results than Sentence Transformers based models (see above) but it is also much slower. It natively understands the following languages: Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, French, German, Hungarian, Italian, Japanese, Polish, Portuguese, Romanian, Russian, Serbian, Slovenian, Spanish, Swedish, and Ukrainian.pip install nlpcloudfrom langchain.embeddings import NLPCloudEmbeddingsimport osos.environ["NLPCLOUD_API_KEY"]NLP Cloud | 🦜ï¸�🔗 Langchain"This is not a test document."])PreviousSelf Hosted EmbeddingsNextSpacy EmbeddingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsSentence Transformers EmbeddingsSentence Transformers EmbeddingsSentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.SentenceTransformers is a python package that can generate text and image embeddings, originating from Sentence-BERTpip install sentence_transformers > /dev/null [notice] A new release of pip is available: 23.0.1 -> 23.1.1 [notice] To update, run: pip install --upgrade pipfrom langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddingsembeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")# Equivalent to SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text, "This is not a test document."])PreviousSelf Hosted EmbeddingsNextSpacySentence Transformers Embeddings | 🦜ï¸�🔗 Langchainnumpy as npquery_numpy = np.array(query_result)for doc_res, doc in zip(document_result, docs): document_numpy = np.array(doc_res) similarity = np.dot(query_numpy, document_numpy) / ( np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy) ) print(f'Cosine similarity between "{doc}" and query: {similarity}') Cosine similarity between "Dog is not a cat" and query: 0.7489097144129355 Cosine similarity between "Beta is the second letter of Greek alphabet" and query: 0.9519380640702013PreviousDashScopeNextElasticsearchCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsDeepInfraDeepInfraDeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for text embeddings.# sign up for an account: https://deepinfra.com/login?utm_source=langchainfrom getpass import getpassDEEPINFRA_API_TOKEN = getpass() ········import osos.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKENfrom langchain.embeddings import DeepInfraEmbeddingsembeddings = DeepInfraEmbeddings( model_id="sentence-transformers/clip-ViT-B-32", query_instruction="", embed_instruction="",)docs = ["Dog is not a cat", "Beta is the second letter of Greek alphabet"]document_result = embeddings.embed_documents(docs)query = "What is the first letter of Greek alphabet"query_result = embeddings.embed_query(query)import numpy as npquery_numpy = np.array(query_result)for doc_res, docDeepInfra | 🦜ï¸�🔗 LangchainAzureOpenAI | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsAzureOpenAIAzureOpenAILet's load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.# set the environment variables needed for openai package to know to reach out to azureimport osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_BASE"] = "https:// List[List[float]]: response_json = json.loads(output.read().decode("utf-8")) return response_json["vectors"]content_handler = ContentHandler()embeddings = SagemakerEndpointEmbeddings( # endpoint_name="endpoint-name", # credentials_profile_name="credentials-profile-name", endpoint_name="huggingface-pytorch-inference-2023-03-21-16-14-03-834", region_name="us-east-1", content_handler=content_handler,)query_result = embeddings.embed_query("foo")doc_results = embeddings.embed_documents(["foo"])doc_resultsPreviousOpenAINextSelf Hosted EmbeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsSageMaker Endpoint EmbeddingsSageMaker Endpoint EmbeddingsLet's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker.For instructions on how to do this, please see here. Note: In order to handle batched requests, you will need to adjust the return line in the predict_fn() function within the custom inference.py script:Change fromreturn {"vectors": sentence_embeddings[0].tolist()}to:return {"vectors": sentence_embeddings.tolist()}.pip3 install langchain boto3from typing import Dict, Listfrom langchain.embeddings import SagemakerEndpointEmbeddingsfrom langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandlerimport jsonclass ContentHandler(EmbeddingsContentHandler): content_type = "application/json" accepts = "application/json" def transform_input(self, inputs: list[str], model_kwargs: Dict) -> bytes: input_str = json.dumps({"inputs": inputs, **model_kwargs}) returnSageMaker Endpoint Embeddings | 🦜ï¸�🔗 Langchain= "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])# if you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass throughos.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"PreviousLlama-cppNextMiniMaxCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsLocalAILocalAILet's load the LocalAI Embedding class. In order to use the LocalAI Embedding class, you need to have the LocalAI service hosted somewhere and configure the embedding models. See the documentation at https://localai.io/basics/getting_started/index.html and https://localai.io/features/embeddings/index.html.from langchain.embeddings import LocalAIEmbeddingsembeddings = LocalAIEmbeddings(openai_api_base="http://localhost:8080", model="embedding-model-name")text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])Let's load the LocalAI Embedding class with first generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: These are not recommended models - see herefrom langchain.embeddings.openai import LocalAIEmbeddingsembeddings = LocalAIEmbeddings(openai_api_base="http://localhost:8080", model="embedding-model-name")text = "This is a test document."query_result = embeddings.embed_query(text)doc_result =LocalAI | 🦜ï¸�🔗 Langchainfor document {i+1}: {embedding}")# Create an embedding for a single queryquery = "This is a single query."query_embedding = embeddings.embed_query(query)# Print query embeddingprint(f"Embedding for query: {query_embedding}")Testing with Existing Elasticsearch client connection​This can be used with any Elasticsearch deployment# Create Elasticsearch connectiones_connection = Elasticsearch( hosts=["https://es_cluster_url:port"], basic_auth=("user", "password"))# Instantiate ElasticsearchEmbeddings using es_connectionembeddings = ElasticsearchEmbeddings.from_es_connection( model_id, es_connection,)# Create embeddings for multiple documentsdocuments = [ "This is an example document.", "Another example document to generate embeddings for.",]document_embeddings = embeddings.embed_documents(documents)# Print document embeddingsfor i, embedding in enumerate(document_embeddings): print(f"Embedding for document {i+1}: {embedding}")# Create an embedding for a single queryquery = "This is a single query."query_embedding = embeddings.embed_query(query)# Print query embeddingprint(f"Embedding for query: {query_embedding}")PreviousDeepInfraNextEmbaasTesting with from_credentialsTesting with Existing Elasticsearch client connectionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsElasticsearchOn this pageElasticsearchWalkthrough of how to generate embeddings using a hosted embedding model in ElasticsearchThe easiest way to instantiate the ElasticsearchEmbeddings class it eitherusing the from_credentials constructor if you are using Elastic Cloudor using the from_es_connection constructor with any Elasticsearch clusterpip -q install elasticsearch langchainimport elasticsearchfrom langchain.embeddings.elasticsearch import ElasticsearchEmbeddings# Define the model IDmodel_id = "your_model_id"Testing with from_credentials​This required an Elastic Cloud cloud_id# Instantiate ElasticsearchEmbeddings using credentialsembeddings = ElasticsearchEmbeddings.from_credentials( model_id, es_cloud_id="your_cloud_id", es_user="your_user", es_password="your_password",)# Create embeddings for multiple documentsdocuments = [ "This is an example document.", "Another example document to generate embeddings for.",]document_embeddings = embeddings.embed_documents(documents)# Print document embeddingsfor i, embedding in enumerate(document_embeddings): print(f"Embedding for document {i+1}: {embedding}")# Create an embedding for a singleElasticsearch | 🦜ï¸�🔗 LangchainSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsOpenAIOpenAILet's load the OpenAI Embedding class.from langchain.embeddings import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])Let's load the OpenAI Embedding class with first generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: These are not recommended models - see herefrom langchain.embeddings.openai import OpenAIEmbeddingsembeddings = OpenAIEmbeddings()text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])# if you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass throughos.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"PreviousNLP CloudNextSageMaker Endpoint EmbeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.OpenAI | 🦜ï¸�🔗 LangchainCohere | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsCohereCohereLet's load the Cohere Embedding class.from langchain.embeddings import CohereEmbeddingsembeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousClarifaiNextDashScopeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.= embeddings.embed_query(text)doc_results = embeddings.embed_documents(["foo"])doc_resultsPreviousSpacy EmbeddingNextAgent toolkitsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsTensorflowHubTensorflowHubLet's load the TensorflowHub Embedding class.from langchain.embeddings import TensorflowHubEmbeddingsembeddings = TensorflowHubEmbeddings() 2023-01-30 23:53:01.652176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-01-30 23:53:34.362802: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.text = "This is a test document."query_result = embeddings.embed_query(text)doc_results = embeddings.embed_documents(["foo"])doc_resultsPreviousSpacyTensorflowHub | 🦜ï¸�🔗 Langchainobjc[45711]: Class GGMLMetalClass is implemented in both /Users/rlm/anaconda3/envs/lcn2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x29fe18208) and /Users/rlm/anaconda3/envs/lcn2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x2a0244208). One of the two will be used. Which one is undefined.text = "This is a test document."query_result = gpt4all_embd.embed_query(text)doc_result = gpt4all_embd.embed_documents([text])PreviousGoogle Cloud Platform Vertex AI PaLMNextHugging Face HubCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsGPT4AllGPT4AllThis notebook explains how to use GPT4All embeddings with LangChain.pip install gpt4allfrom langchain.embeddings import GPT4AllEmbeddingsgpt4all_embd = GPT4AllEmbeddings() 100%|████████████████████████| 45.5M/45.5M [00:02<00:00, 18.5MiB/s] Model downloaded at: /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin objc[45711]: Class GGMLMetalClass is implemented in bothGPT4All | 🦜ï¸�🔗 Langchain'ssh_private_key':''},# name='my-cluster')embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)text = "This is a test document."query_result = embeddings.embed_query(text)And similarly for SelfHostedHuggingFaceInstructEmbeddings:embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu)Now let's load an embedding model with a custom load function:def get_pipeline(): from transformers import ( AutoModelForCausalLM, AutoTokenizer, pipeline, ) # Must be inside the function in notebooks model_id = "facebook/bart-base" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) return pipeline("feature-extraction", model=model, tokenizer=tokenizer)def inference_fn(pipeline, prompt): # Return last hidden state of the model if isinstance(prompt, list): return [emb[0][-1] for emb in pipeline(prompt)] return pipeline(prompt)[0][-1]embeddings = SelfHostedEmbeddings( model_load_fn=get_pipeline, hardware=gpu, model_reqs=["./", "torch", "transformers"], inference_fn=inference_fn,)query_result = embeddings.embed_query(text)PreviousSageMaker Endpoint EmbeddingsNextSentence Transformers EmbeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsSelf Hosted EmbeddingsSelf Hosted EmbeddingsLet's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.from langchain.embeddings import ( SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, SelfHostedHuggingFaceInstructEmbeddings,)import runhouse as rh# For an on-demand A100 with GCP, Azure, or Lambdagpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)# For an on-demand A10G with AWS (no single A100s on AWS)# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')# For an existing cluster# gpu = rh.cluster(ips=[''],# ssh_creds={'ssh_user': '...', 'ssh_private_key':''},#Self Hosted Embeddings | 🦜ï¸�🔗 Langchainlanguage processing tasks, such as document similarity comparison or text classification.​embeddings = embedder.embed_documents(texts)for i, embedding in enumerate(embeddings): print(f"Embedding for document {i+1}: {embedding}")Generate and print an embedding for a single piece of text. You can also generate an embedding for a single piece of text, such as a search query. This can be useful for tasks like information retrieval, where you want to find documents that are similar to a given query.​query = "Quick foxes and lazy dogs."query_embedding = embedder.embed_query(query)print(f"Embedding for query: {query_embedding}")PreviousSentence Transformers EmbeddingsNextTensorflowHubLoading the Spacy embedding class to generate and query embeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsSpacy EmbeddingOn this pageSpacy EmbeddingLoading the Spacy embedding class to generate and query embeddings​Import the necessary classes​from langchain.embeddings.spacy_embeddings import SpacyEmbeddingsInitialize SpacyEmbeddings.This will load the Spacy model into memory.​embedder = SpacyEmbeddings()Define some example texts . These could be any documents that you want to analyze - for example, news articles, social media posts, or product reviews.​texts = [ "The quick brown fox jumps over the lazy dog.", "Pack my box with five dozen liquor jugs.", "How vexingly quick daft zebras jump!", "Bright vixens jump; dozy fowl quack.",]Generate and print embeddings for the texts . The SpacyEmbeddings class generates an embedding for each document, which is a numerical representation of the document's content. These embeddings can be used for various natural language processing tasks, such as document similarity comparison or text classification.​embeddings =Spacy Embedding | 🦜ï¸�🔗 Langchainnp.dot(query_numpy, document_numpy) / ( np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy))print(f"Cosine similarity between document and query: {similarity}")PreviousModelScopeNextNLP CloudCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsMosaicML embeddingsMosaicML embeddingsMosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.This example goes over how to use LangChain to interact with MosaicML Inference for text embedding.# sign up for an account: https://forms.mosaicml.com/demo?utm_source=langchainfrom getpass import getpassMOSAICML_API_TOKEN = getpass()import osos.environ["MOSAICML_API_TOKEN"] = MOSAICML_API_TOKENfrom langchain.embeddings import MosaicMLInstructorEmbeddingsembeddings = MosaicMLInstructorEmbeddings( query_instruction="Represent the query for retrieval: ")query_text = "This is a test query."query_result = embeddings.embed_query(query_text)document_text = "This is a test document."document_result = embeddings.embed_documents([document_text])import numpy as npquery_numpy = np.array(query_result)document_numpy = np.array(document_result[0])similarity = np.dot(query_numpy, document_numpy) / ( np.linalg.norm(query_numpy) *MosaicML embeddings | 🦜ï¸�🔗 LangchainDashScope | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsDashScopeDashScopeLet's load the DashScope Embedding class.from langchain.embeddings import DashScopeEmbeddingsembeddings = DashScopeEmbeddings( model="text-embedding-v1", dashscope_api_key="your-dashscope-api-key")text = "This is a test document."query_result = embeddings.embed_query(text)print(query_result)doc_results = embeddings.embed_documents(["foo"])print(doc_results)PreviousCohereNextDeepInfraCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.ModelScope | 🦜ï¸�🔗 Langchain Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsModelScopeModelScopeLet's load the ModelScope Embedding class.from langchain.embeddings import ModelScopeEmbeddingsmodel_id = "damo/nlp_corom_sentence-embedding_english-base"embeddings = ModelScopeEmbeddings(model_id=model_id)text = "This is a test document."query_result = embeddings.embed_query(text)doc_results = embeddings.embed_documents(["foo"])PreviousMiniMaxNextMosaicML embeddingsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.MiniMaxMiniMax offers an embeddings service.📄ï¸� ModelScopeLet's load the ModelScope Embedding class.📄ï¸� MosaicML embeddingsMosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.📄ï¸� NLP CloudNLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data.📄ï¸� OpenAILet's load the OpenAI Embedding class.📄ï¸� SageMaker Endpoint EmbeddingsLet's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker.📄ï¸� Self Hosted EmbeddingsLet's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.📄ï¸� Sentence Transformers EmbeddingsSentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.📄ï¸� Spacy EmbeddingLoading the Spacy embedding class to generate and query embeddings📄ï¸� TensorflowHubLet's load the TensorflowHub Embedding class.PreviousZepNextAleph AlphaCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for text embeddings.📄ï¸� ElasticsearchWalkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch📄ï¸� Embaasembaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.📄ï¸� Fake EmbeddingsLangChain also provides a fake embedding class. You can use this to test your pipelines.📄ï¸� Google Cloud Platform Vertex AI PaLMNote: This is seperate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.📄ï¸� GPT4AllThis notebook explains how to use GPT4All embeddings with LangChain.📄ï¸� Hugging Face HubLet's load the Hugging Face Embedding class.📄ï¸� InstructEmbeddingsLet's load the HuggingFace instruct Embeddings class.📄ï¸� JinaLet's load the Jina Embedding class.📄ï¸� Llama-cppThis notebook goes over how to use Llama-cpp embeddings within LangChain📄ï¸� LocalAILet's load the LocalAI Embedding class. In order to use the LocalAI Embedding class, you need to have the LocalAI service hosted somewhere and configure the embedding models. See the documentation at https//localai.io/features/embeddings/index.html.📄ï¸� MiniMaxMiniMax offers an embeddingsSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAleph AlphaAzureOpenAIBedrock EmbeddingsClarifaiCohereDashScopeDeepInfraElasticsearchEmbaasFake EmbeddingsGoogle Cloud Platform Vertex AI PaLMGPT4AllHugging Face HubInstructEmbeddingsJinaLlama-cppLocalAIMiniMaxModelScopeMosaicML embeddingsNLP CloudOpenAISageMaker Endpoint EmbeddingsSelf Hosted EmbeddingsSentence Transformers EmbeddingsSpacy EmbeddingTensorflowHubAgent toolkitsToolsVector storesGrouped by providerIntegrationsText embedding modelsText embedding models📄ï¸� Aleph AlphaThere are two possible ways to use Aleph Alpha's semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric embeddings are the suggested approach.📄ï¸� AzureOpenAILet's load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.📄ï¸� Bedrock Embeddings📄ï¸� ClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.📄ï¸� CohereLet's load the Cohere Embedding class.📄ï¸� DashScopeLet's load the DashScope Embedding class.📄ï¸� DeepInfraDeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models.Text embedding models | 🦜ï¸�🔗 Langchainis a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.This notebook covers how to load documents from a Notion database dump.In order to get this notion dump, follow these instructions:🧑 Instructions for ingesting your own dataset​Export your dataset from Notion. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.When exporting, make sure to select the Markdown & CSV format option.This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.Run the following command to unzip the zip file (replace the Export... with your own file name as needed).unzip Export-d3adfe0f-3131-4bf3-8987-a52017fc1bae.zip -d Notion_DBRun the following command to ingest the data.from langchain.document_loaders import NotionDirectoryLoaderloader = NotionDirectoryLoader("Notion_DB")docs = loader.load()PreviousModern TreasuryNextNotion DB 2/2🧑 Instructions for ingesting your own datasetCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersNotion DB 1/2On this pageNotion DB 1/2Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks,Notion DB 1/2 | 🦜ï¸�🔗 Langchaina managed service for storing unstructured data.This covers how to load document objects from an Google Cloud Storage (GCS) file object (blob).# !pip install google-cloud-storagefrom langchain.document_loaders import GCSFileLoaderloader = GCSFileLoader(project_name="aist", bucket="testing-hwc", blob="fake.docx")loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmp3srlf8n8/fake.docx'}, lookup_index=0)]PreviousGoogle Cloud Storage DirectoryNextGoogle DriveCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersGoogle Cloud Storage FileGoogle Cloud Storage FileGoogle Cloud Storage is a managed service for storing unstructured data.This covers how to load document objects from an Google Cloud StorageGoogle Cloud Storage File | 🦜ï¸�🔗 Langchainfree software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.This notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build.For an example of this in the wild, see here.This assumes that the HTML has already been scraped into a folder. This can be done by uncommenting and running the following command#!pip install beautifulsoup4#!wget -r -A.html -P rtdocs https://python.langchain.com/en/latest/from langchain.document_loaders import ReadTheDocsLoaderloader = ReadTheDocsLoader("rtdocs", features="html.parser")docs = loader.load()PreviousPySpark DataFrame LoaderNextRecursive URL LoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersReadTheDocs DocumentationReadTheDocs DocumentationRead the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.This notebook covers howReadTheDocs Documentation | 🦜ï¸�🔗 Langchain267 forms: name: charizard url: https://pokeapi.co/api/v2/pokemon-form/6/ game_indices: game_index: 180 version: name: red url: https://pokeapi.co/api/v2/version/1/ game_index: 180 version: name: blue url: https://pokeapi.co/api/v2/version/2/ game_index: 180 version: nPreviousacreomNextAirtableCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Have docker desktop installedSteps:1) Clone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git2) Switch into Airbyte directory - cd airbyte3) Start Airbyte - docker compose up4) In your browser, just visit http://localhost:8000. You will be asked for a username and password. By default, that's username airbyte and password password.5) Setup any source you wish.6) Set destination as Local JSON, with specified destination path - lets say /json_data. Set up manual sync.7) Run the connection.7) To see what files are create, you can navigate to: file:///tmp/airbyte_local8) Find your data and copy path. That path should be saved in the file variable below. It should start with /tmp/airbyte_localfrom langchain.document_loaders import AirbyteJSONLoaderls /tmp/airbyte_local/json_data/ _airbyte_raw_pokemon.jsonlloader = AirbyteJSONLoader("/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl")data = loader.load()print(data[0].page_content[:500]) abilities: ability: name: blaze url: https://pokeapi.co/api/v2/ability/66/ is_hidden: False slot: 1 ability: name: solar-power url: https://pokeapi.co/api/v2/ability/94/ is_hidden: True slot: 3 base_experience: 267 forms: name: charizard url:ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This covers how to load any source from Airbyte into a local JSON file that can be read in as a documentPrereqs:Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersAirbyte JSONAirbyte JSONAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog ofAirbyte JSON | 🦜ï¸�🔗 LangchainSnowflakepip install snowflake-connector-pythonimport settings as sfrom langchain.document_loaders import SnowflakeLoaderQUERY = "select text, survey_id from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10"snowflake_loader = SnowflakeLoader( query=QUERY, user=s.SNOWFLAKE_USER, password=s.SNOWFLAKE_PASS, account=s.SNOWFLAKE_ACCOUNT, warehouse=s.SNOWFLAKE_WAREHOUSE, role=s.SNOWFLAKE_ROLE, database=s.SNOWFLAKE_DATABASE, schema=s.SNOWFLAKE_SCHEMA,)snowflake_documents = snowflake_loader.load()print(snowflake_documents)from snowflakeLoader import SnowflakeLoaderimport settings as sQUERY = "select text, survey_id as source from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10"snowflake_loader = SnowflakeLoader( query=QUERY, user=s.SNOWFLAKE_USER, password=s.SNOWFLAKE_PASS, account=s.SNOWFLAKE_ACCOUNT, warehouse=s.SNOWFLAKE_WAREHOUSE, role=s.SNOWFLAKE_ROLE, database=s.SNOWFLAKE_DATABASE, schema=s.SNOWFLAKE_SCHEMA, metadata_columns=["source"],)snowflake_documents = snowflake_loader.load()print(snowflake_documents)PreviousSlackNextSource CodeCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersSnowflakeSnowflakeThis notebooks goes over how to load documents from Snowflakepip install snowflake-connector-pythonimport settings as sfrom langchain.document_loaders importSnowflake | 🦜ï¸�🔗 Langchainhow to load document objects from a Tencent COS Directory.#! pip install cos-python-sdk-v5from langchain.document_loaders import TencentCOSDirectoryLoaderfrom qcloud_cos import CosConfigconf = CosConfig( Region="your cos region", SecretId="your cos secret_id", SecretKey="your cos secret_key",)loader = TencentCOSDirectoryLoader(conf=conf, bucket="you_cos_bucket")loader.load()Specifying a prefix​You can also specify a prefix for more finegrained control over what files to load.loader = TencentCOSDirectoryLoader(conf=conf, bucket="you_cos_bucket", prefix="fake")loader.load()PreviousTelegramNextTencent COS FileSpecifying a prefixCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersTencent COS DirectoryOn this pageTencent COS DirectoryThis covers how to load document objects from a Tencent COS Directory.#! pip install cos-python-sdk-v5fromTencent COS Directory | 🦜ï¸�🔗 Langchain},)docs = loader.load()PreviousWeatherNextWhatsApp ChatLoading multiple webpagesLoad multiple urls concurrentlyLoading a xml file, or using a different BeautifulSoup parserUsing proxiesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nSteam\nOil-fired\n>2,500,000 and ≤5,000,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\n\u2003\n\n>5,000,000\nThermal Efficiency\nAppendix A, Section 2.OR\nAppendix A, Section 3. with Section 2.4.3.2.\n\n\n\n*\u2009Equipment classes for commercial packaged boilers as of July 22, 2009 (74 FR 36355) distinguish between gas-fired natural draft and all other gas-fired (except natural draft).\n\n(c) Field Tests. The field test provisions of appendix A may be used only to test a unit of commercial packaged boiler with rated input greater than 5,000,000 Btu/h.\n[81 FR 89305, Dec. 9, 2016]\n\n\nEnergy Efficiency Standards\n\n', lookup_str='', metadata={'source': 'https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml'}, lookup_index=0)]Using proxies​Sometimes you might need to use proxies to get around IP blocks. You can pass in a dictionary of proxies to the loader (and requests underneath) to use them.loader = WebBaseLoader( "https://www.walmart.com/search?q=parrots", proxies={ "http": "http://{username}:{password}:@proxy.service.com:6666/", "https": "https://{username}:{password}:@proxy.service.com:6666/", },)docs = loader.load()PreviousWeatherNextWhatsApp ChatLoadingTesting and Calculations. Determine the thermal efficiency or combustion efficiency of commercial packaged boilers by conducting the appropriate test procedure(s) indicated in Table 1 of this section.\n\nTable 1—Test Requirements for Commercial Packaged Boiler Equipment Classes\n\nEquipment category\nSubcategory\nCertified rated inputBtu/h\n\nStandards efficiency metric(§\u2009431.87)\n\nTest procedure(corresponding to\nstandards efficiency\nmetric required\nby §\u2009431.87)\n\n\n\nHot Water\nGas-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nGas-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nHot Water\nOil-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nOil-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nSteam\nGas-fired (all*)\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nSteam\nGas-fired (all*)\n>2,500,000 and ≤5,000,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\n\u2003\n\n>5,000,000\nThermal Efficiency\nAppendix A, Section 2.OR\nAppendix A, Section 3 with Section 2.4.3.2.\n\n\n\nSteam\nOil-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendixlookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0), Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google© 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)]Loading a xml file, or using a different BeautifulSoup parser​You can also look at SitemapLoader for an example of how to load a sitemap file, which is an example of using this feature.loader = WebBaseLoader( "https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml")loader.default_parser = "xml"docs = loader.load()docs [Document(page_content='\n\n10\nEnergy\n3\n2018-01-01\n2018-01-01\nfalse\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n§ 431.86\nSection § 431.86\n\nEnergy\nDEPARTMENT OF ENERGY\nENERGY CONSERVATION\nENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT\nCommercial Packaged Boilers\nTest Procedures\n\n\n\n\n§\u2009431.86\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n(a) Scope. This section provides test procedures, pursuant to the Energy Policy and Conservation Act (EPCA), as amended, which must be followed for measuring the combustion efficiency and/or thermal efficiency of a gas- or oil-fired commercial packaged boiler.\n(b) Testing and Calculations. Determine the thermal efficiency or combustion efficiency of commercial packaged boilers by conductingPublic LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='',NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Selectplan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points,scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasyESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will:BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Dailynest_asyncionest_asyncio.apply() Requirement already satisfied: nest_asyncio in /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages (1.5.6)loader = WebBaseLoader(["https://www.espn.com/", "https://google.com"])loader.requests_per_second = 1docs = loader.aload()docs [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMoreApps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0), Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google© 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)]Load multiple urls concurrently​You can speed up the scraping process by scraping and parsing multiple urls concurrently.There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren't concerned about being a good citizen, or you control the server you are scraping and don't care about load, you can change the requests_per_second parameter to increase the max concurrent requests. Note, while this will speed up the scraping process, but may cause the server to block you. Be careful!pip install nest_asyncio# fixes a bug with asyncio and jupyterimport nest_asyncionest_asyncio.apply() Requirement already satisfied: nest_asyncio inthe XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPNforget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL gamemovesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn'tSpears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook,\n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit\n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPNlookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0)]"""# Use this piece of code for testing new custom BeautifulSoup parsersimport requestsfrom bs4 import BeautifulSouphtml_doc = requests.get("{INSERT_NEW_URL_HERE}")soup = BeautifulSoup(html_doc.text, 'html.parser')# Beautiful soup logic to be exported to langchain.document_loaders.webpage.py# Example: transcript = soup.select_one("td[class='scrtext']").text# BS4 documentation can be found here: https://www.crummy.com/software/BeautifulSoup/bs4/doc/""";Loading multiple webpages​You can also load multiple webpages at once by passing in a list of urls to the loader. This will return a list of documents in the same order as the urls passed in.loader = WebBaseLoader(["https://www.espn.com/", "https://google.com"])docs = loader.load()docs [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\nPublic LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='',NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Selectplan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points,scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court10h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasyESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most8h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will:BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Dailyto load all text from HTML webpages into a document format that we can use downstream. For more custom logic for loading webpages look at some child class examples such as IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoaderfrom langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://www.espn.com/")To bypass SSL verification errors during fetching, you can set the "verify" option:loader.requests_kwargs = {'verify':False}data = loader.load()data [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMoreSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersWebBaseLoaderOn this pageWebBaseLoaderThis covers how to use WebBaseLoader to load all text from HTML webpages into a document format that we can use downstream.WebBaseLoader | 🦜ï¸�🔗 LangchainAPI to find all the documents ID that you are interested in. This link provides a list of endpoints that will be helpful to retrieve the documents ID.For instance, to retrieve information about all objects that are stored at the root of the Documents folder, you need make a request to: https://graph.microsoft.com/v1.0/drives/{YOUR DRIVE ID}/root/children. Once you have the list of IDs that you are interested in, then you can instantiate the loader with the following parameters.from langchain.document_loaders.onedrive import OneDriveLoaderloader = OneDriveLoader(drive_id="YOUR DRIVE ID", object_ids=["ID_1", "ID_2"], auth_with_token=True)documents = loader.load()PreviousmhtmlNextMicrosoft PowerPointPrerequisites🧑 Instructions for ingesting your documents from OneDrive🔑 Authentication🗂ï¸� Documents loaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.= "YOUR CLIENT SECRET"This loader uses an authentication called on behalf of a user. It is a 2 step authentication with user consent. When you instantiate the loader, it will call will print a url that the user must visit to give consent to the app on the required permissions. The user must then visit this url and give consent to the application. Then the user must copy the resulting page url and paste it back on the console. The method will then return True if the login attempt was succesful.from langchain.document_loaders.onedrive import OneDriveLoaderloader = OneDriveLoader(drive_id="YOUR DRIVE ID")Once the authentication has been done, the loader will store a token (o365_token.txt) at ~/.credentials/ folder. This token could be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to change the auth_with_token parameter to True in the instantiation of the loader.from langchain.document_loaders.onedrive import OneDriveLoaderloader = OneDriveLoader(drive_id="YOUR DRIVE ID", auth_with_token=True)🗂ï¸� Documents loader​📑 Loading documents from a OneDrive Directory​OneDriveLoader can load documents from a specific folder within your OneDrive. For instance, you want to load all documents that are stored at Documents/clients folder within your OneDrive.from langchain.document_loaders.onedrive import OneDriveLoaderloader = OneDriveLoader(drive_id="YOUR DRIVE ID", folder_path="Documents/clients", auth_with_token=True)documents = loader.load()📑 Loading documents from a list of Documents IDs​Another possibility is to provide a list of object_id for each document you want to load. For that, you will need to query the Microsoft Graph API to find all the documents ID that you are interested in. This link provides a list of endpointsSkyDrive) is a file hosting service operated by Microsoft.This notebook covers how to load documents from OneDrive. Currently, only docx, doc, and pdf files are supported.Prerequisites​Register an application with the Microsoft identity platform instructions.When registration finishes, the Azure portal displays the app registration's Overview pane. You see the Application (client) ID. Also called the client ID, this value uniquely identifies your application in the Microsoft identity platform.During the steps you will be following at item 1, you can set the redirect URI as http://localhost:8000/callbackDuring the steps you will be following at item 1, generate a new password (client_secret) under Application Secrets section.Follow the instructions at this document to add the following SCOPES (offline_access and Files.Read.All) to your application.Visit the Graph Explorer Playground to obtain your OneDrive ID. The first step is to ensure you are logged in with the account associated your OneDrive account. Then you need to make a request to https://graph.microsoft.com/v1.0/me/drive and the response will return a payload with a field id that holds the ID of your OneDrive account.You need to install the o365 package using the command pip install o365.At the end of the steps you must have the following values: CLIENT_IDCLIENT_SECRETDRIVE_ID🧑 Instructions for ingesting your documents from OneDrive​🔑 Authentication​By default, the OneDriveLoader expects that the values of CLIENT_ID and CLIENT_SECRET must be stored as environment variables named O365_CLIENT_ID and O365_CLIENT_SECRET respectively. You could pass those environment variables through a .env file at the root of your application or using the following command in your script.os.environ['O365_CLIENT_ID'] = "YOUR CLIENT ID"os.environ['O365_CLIENT_SECRET'] = "YOUR CLIENT SECRET"This loader uses an authentication called on behalfSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersMicrosoft OneDriveOn this pageMicrosoft OneDriveMicrosoft OneDrive (formerly SkyDrive) is a file hosting service operated by Microsoft.This notebook covers how to load documents fromMicrosoft OneDrive | 🦜ï¸�🔗 LangchainMicrosoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.Azure Blob Storage is designed for:Serving images or documents directly to a browser.Storing files for distributed access.Streaming video and audio.Writing to log files.Storing data for backup and restore, disaster recovery, and archiving.Storing data for analysis by an on-premises or Azure-hosted service.This notebook covers how to load document objects from a container on Azure Blob Storage.#!pip install azure-storage-blobfrom langchain.document_loaders import AzureBlobStorageContainerLoaderloader = AzureBlobStorageContainerLoader(conn_str="", container="")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]Specifying a prefix​You can also specify a prefix for more finegrained control over what files to load.loader = AzureBlobStorageContainerLoader( conn_str="", container="", prefix="")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]PreviousAZLyricsNextAzure Blob Storage FileSpecifying a prefixCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersAzure Blob Storage ContainerOn this pageAzure Blob Storage ContainerAzure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructuredAzure Blob Storage Container | 🦜ï¸�🔗 LangchainETF growing say 10% per year will only grow about 6%, which really ruins the long term effects of compounding interest.\n\nHowever stocks are only taxed on realised gains which is why they look more interesting to hold long term.\n\nI do not like the lack of diversification this brings, as I am looking to spend tonnes of time picking the right long term stocks.\n\nIt would be ideal to find a few stocks that over the long term somewhat follows the indexes. Does anyone have suggestions?\n\nI have looked at Nasdaq Inc. which quite closely follows Nasdaq 100. \n\nI really appreciate any help.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Stocks that track an index', 'post_score': 7, 'post_id': '130auvj', 'post_url': 'https://www.reddit.com/r/investing/comments/130auvj/stocks_that_track_an_index/', 'post_author': Redditor(name='LeAlbertP')})]PreviousRecursive URL LoaderNextRoamCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.answers are just opinions of Redditors and should be used as a starting point for your research. You should strongly consider seeing a registered investment adviser if you need professional support before making any financial decisions!', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Daily General Discussion and Advice Thread - April 27, 2023', 'post_score': 5, 'post_id': '130eszz', 'post_url': 'https://www.reddit.com/r/investing/comments/130eszz/daily_general_discussion_and_advice_thread_april/', 'post_author': Redditor(name='AutoModerator')}), Document(page_content="Based on recent news about salt battery advancements and the overall issues of lithium, I was wondering what would be feasible ways to invest into non-lithium based battery technologies? CATL is of course a choice, but the selection of brokers I currently have in my disposal don't provide HK stocks at all.", metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Investing in non-lithium battery technologies?', 'post_score': 2, 'post_id': '130d6qp', 'post_url': 'https://www.reddit.com/r/investing/comments/130d6qp/investing_in_nonlithium_battery_technologies/', 'post_author': Redditor(name='-manabreak')}), Document(page_content='Hello everyone,\n\nI would really like to invest in an ETF that follows spy or another big index, as I think this form of investment suits me best. \n\nThe problem is, that I live in Denmark where ETFs and funds are taxed annually on unrealised gains at quite a steep rate. This means that an ETF growing say 10% per year will only grow about 6%, which really ruins the longMaybe you would just like to throw out a neat fact that doesn\'t warrant a self post? Feel free to post here! \n\nIf your question is "I have $10,000, what do I do?" or other "advice for my personal situation" questions, you should include relevant information, such as the following:\n\n* How old are you? What country do you live in? \n* Are you employed/making income? How much? \n* What are your objectives with this money? (Buy a house? Retirement savings?) \n* What is your time horizon? Do you need this money next month? Next 20yrs? \n* What is your risk tolerance? (Do you mind risking it at blackjack or do you need to know its 100% safe?) \n* What are you current holdings? (Do you already have exposure to specific funds and sectors? Any other assets?) \n* Any big debts (include interest rate) or expenses? \n* And any other relevant financial information will be useful to give you a proper answer. \n\nPlease consider consulting our FAQ first - https://www.reddit.com/r/investing/wiki/faq\nAnd our [side bar](https://www.reddit.com/r/investing/about/sidebar) also has useful resources. \n\nIf you are new to investing - please refer to Wiki - [Getting Started](https://www.reddit.com/r/investing/wiki/index/gettingstarted/)\n\nThe reading list in the wiki has a list of books ranging from light reading to advanced topics depending on your knowledge level. Link here - [Reading List](https://www.reddit.com/r/investing/wiki/readinglist)\n\nCheck the resources in the sidebar.\n\nBe aware that these answers are just opinions of Redditors and should be used as a starting point for your research.apply my own due diligence. However, I am interested if anyone knows as a UK resident how fees and exchange rate differences would impact performance?\n\nI am planning to create a pie of index funds (perhaps UK, US, europe) or find a fund with a good track record of long term growth at low rates. \n\nDoes anyone have any ideas?', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Long term retirement funds fees/exchange rate query', 'post_score': 1, 'post_id': '130pa6m', 'post_url': 'https://www.reddit.com/r/investing/comments/130pa6m/long_term_retirement_funds_feesexchange_rate_query/', 'post_author': Redditor(name='Badmanshiz')}), Document(page_content='I much prefer the Roth IRA and would rather rollover my 401k to that every year instead of keeping it in the limited 401k options. But if I rollover, will I be able to continue contributing to my 401k? Or will that close my account? I realize that there are tax implications of doing this but I still think it is the better option.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Is it possible to rollover my 401k every year?', 'post_score': 3, 'post_id': '130ja0h', 'post_url': 'https://www.reddit.com/r/investing/comments/130ja0h/is_it_possible_to_rollover_my_401k_every_year/', 'post_author': Redditor(name='AnCap_Catholic')}), Document(page_content='Have a general question? Want to offer some commentary on markets? Maybe you would just like to throw out a neat fact that doesn\'t warrant a selfwebsite.This loader fetches the text from the Posts of Subreddits or Reddit users, using the praw Python package.Make a Reddit Application and initialize the loader with with your Reddit API credentials.from langchain.document_loaders import RedditPostsLoader# !pip install praw# load using 'subreddit' modeloader = RedditPostsLoader( client_id="YOUR CLIENT ID", client_secret="YOUR CLIENT SECRET", user_agent="extractor by u/Master_Ocelot8179", categories=["new", "hot"], # List of categories to load posts from mode="subreddit", search_queries=[ "investing", "wallstreetbets", ], # List of subreddits to load posts from number_posts=20, # Default value is 10)# # or load using 'username' mode# loader = RedditPostsLoader(# client_id="YOUR CLIENT ID",# client_secret="YOUR CLIENT SECRET",# user_agent="extractor by u/Master_Ocelot8179",# categories=['new', 'hot'],# mode = 'username',# search_queries=['ga3far', 'Master_Ocelot8179'], # List of usernames to load posts from# number_posts=20# )# Note: Categories can be only of following value - "controversial" "hot" "new" "rising" "top"documents = loader.load()documents[:5] [Document(page_content='Hello, I am not looking for investment advice. I will apply my own due diligence. However, I am interested if anyone knows as a UK resident how feesSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersRedditRedditReddit is an American social news aggregation, content rating, and discussion website.This loader fetches the text from the Posts of Subreddits or Reddit users, using theReddit | 🦜ï¸�🔗 Langchainmode="elements", process_attachments=True,)data = loader.load()data[0] Document(page_content='This is a test email to use for unit tests.', metadata={'source': 'example_data/fake-email.eml', 'filename': 'fake-email.eml', 'file_directory': 'example_data', 'date': '2022-12-16T17:04:16-05:00', 'filetype': 'message/rfc822', 'sent_from': ['Matthew Robinson '], 'sent_to': ['Matthew Robinson '], 'subject': 'Test Email', 'category': 'NarrativeText'})Using OutlookMessageLoader​#!pip install extract_msgfrom langchain.document_loaders import OutlookMessageLoaderloader = OutlookMessageLoader("example_data/fake-email.msg")data = loader.load()data[0] Document(page_content='This is a test email to experiment with the MS Outlook MSG Extractor\r\n\r\n\r\n-- \r\n\r\n\r\nKind regards\r\n\r\n\r\n\r\n\r\nBrian Zhou\r\n\r\n', metadata={'subject': 'Test for TIF files', 'sender': 'Brian Zhou ', 'date': 'Mon, 18 Nov 2013 16:26:24 +0800'})PreviousDuckDBNextEmbaasUsing UnstructuredRetain ElementsProcessing AttachmentsUsing OutlookMessageLoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.or Microsoft Outlook (.msg) files.Using Unstructured​#!pip install unstructuredfrom langchain.document_loaders import UnstructuredEmailLoaderloader = UnstructuredEmailLoader("example_data/fake-email.eml")data = loader.load()data [Document(page_content='This is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': 'example_data/fake-email.eml'})]Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredEmailLoader("example_data/fake-email.eml", mode="elements")data = loader.load()data[0] Document(page_content='This is a test email to use for unit tests.', metadata={'source': 'example_data/fake-email.eml', 'filename': 'fake-email.eml', 'file_directory': 'example_data', 'date': '2022-12-16T17:04:16-05:00', 'filetype': 'message/rfc822', 'sent_from': ['Matthew Robinson '], 'sent_to': ['Matthew Robinson '], 'subject': 'Test Email', 'category': 'NarrativeText'})Processing Attachments​You can process attachments with UnstructuredEmailLoader by setting process_attachments=True in the constructor. By default, attachments will be partitioned using the partition function from unstructured. You can use a different partitioning function by passing the function to the attachment_partitioner kwarg.loader = UnstructuredEmailLoader( "example_data/fake-email.eml", mode="elements", process_attachments=True,)data =Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersEmailOn this pageEmailThis notebook shows how to load email (.eml) or Microsoft Outlook (.msg) files.Using Unstructured​#!pip installEmail | 🦜ï¸�🔗 Langchain'title': "ChatOpenAI models don't work with prompts created via ChatPromptTemplate.from_role_strings", 'creator': 'UmerHA', 'created_at': '2023-05-20T10:39:18Z', 'comments': 1, 'state': 'open', 'labels': [], 'assignee': None, 'milestone': None, 'locked': False, 'number': 5027, 'is_pull_request': False}PreviousGitBookNextGoogle BigQuerySetup access tokenLoad Issues and PRsOnly load issuesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` import os os.environ["OPENAI_API_KEY"] = "..." from langchain.chains import LLMChain from langchain.chat_models import ChatOpenAI from langchain.prompts import PromptTemplate from langchain.prompts.chat import ChatPromptTemplate from langchain.schema import messages_from_dict role_strings = [ ("system", "you are a bird expert"), ("human", "which bird has a point beak?") ] prompt = ChatPromptTemplate.from_role_strings(role_strings) chain = LLMChain(llm=ChatOpenAI(), prompt=prompt) chain.run({}) ``` ### Expected behavior Chain should run {'url': 'https://github.com/hwchase17/langchain/issues/5027', 'title': "ChatOpenAI models don't work with- @eyurtsev {'url': 'https://github.com/hwchase17/langchain/pull/5408', 'title': 'DocumentLoader for GitHub', 'creator': 'UmerHA', 'created_at': '2023-05-29T14:50:53Z', 'comments': 0, 'state': 'open', 'labels': ['enhancement', 'lgtm', 'doc loader'], 'assignee': None, 'milestone': None, 'locked': False, 'number': 5408, 'is_pull_request': True}Only load issues​By default, the GitHub API returns considers pull requests to also be issues. To only get 'pure' issues (i.e., no pull requests), use include_prs=Falseloader = GitHubIssuesLoader( repo="hwchase17/langchain", access_token=ACCESS_TOKEN, # delete/comment out this argument if you've set the access token as an env var. creator="UmerHA", include_prs=False,)docs = loader.load()print(docs[0].page_content)print(docs[0].metadata) ### System Info LangChain version = 0.0.167 Python version = 3.11.0 System = Windows 11 (using Jupyter) ### Who can help? - @hwchase17 - @agola11 - @UmerHA (I have a fix ready, will submit a PR) ### Information - [ ] The official example notebooks/scripts - [X] My own modified scriptsrequests (PRs) for a given repository on GitHub. We will use the LangChain Python repository as an example.Setup access token​To access the GitHub API, you need a personal access token - you can set up yours here: https://github.com/settings/tokens?type=beta. You can either set this token as the environment variable GITHUB_PERSONAL_ACCESS_TOKEN and it will be automatically pulled in, or you can pass it in directly at initializaiton as the access_token named parameter.# If you haven't set your access token as an environment variable, pass it in here.from getpass import getpassACCESS_TOKEN = getpass()Load Issues and PRs​from langchain.document_loaders import GitHubIssuesLoaderloader = GitHubIssuesLoader( repo="hwchase17/langchain", access_token=ACCESS_TOKEN, # delete/comment out this argument if you've set the access token as an env var. creator="UmerHA",)Let's load all issues and PRs created by "UmerHA".Here's a list of all filters you can use:include_prsmilestonestateassigneecreatormentionedlabelssortdirectionsinceFor more info, see https://docs.github.com/en/rest/issues/issues?apiVersion=2022-11-28#list-repository-issues.docs = loader.load()print(docs[0].page_content)print(docs[0].metadata) # Creates GitHubLoader (#5257) GitHubLoader is a DocumentLoader that loads issues and PRs from GitHub. Fixes #5257 Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: DataLoaders - @eyurtsev {'url':Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersGitHubOn this pageGitHubThis notebooks shows how you can load issues and pull requests (PRs) for a given repository on GitHub. We will use the LangChain Python repositoryGitHub | 🦜ï¸�🔗 Langchain# Polygon Mainnet contract address)blockchainType = BlockchainType.POLYGON_MAINNETblockchainLoader = BlockchainDocumentLoader( contract_address=contractAddress, blockchainType=blockchainType, api_key=alchemyApiKey,)nfts = blockchainLoader.load()nfts[:2]PreviousBlackboardNextBrave SearchOverviewLoad NFTs into Document LoaderOption 1: Ethereum Mainnet (default BlockchainType)Option 2: Polygon MainnetCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.is to provide a means of testing functionality in the Langchain Document Loader for Blockchain.Initially this Loader supports:Loading NFTs as Documents from NFT Smart Contracts (ERC721 and ERC1155)Ethereum Mainnnet, Ethereum Testnet, Polygon Mainnet, Polygon Testnet (default is eth-mainnet)Alchemy's getNFTsForCollection APIIt can be extended if the community finds value in this loader. Specifically:Additional APIs can be added (e.g. Tranction-related APIs)This Document Loader Requires:A free Alchemy API KeyThe output takes the following format:pageContent= Individual NFTmetadata={'source': '0x1a92f7381b9f03921564a437210bb9396471050c', 'blockchain': 'eth-mainnet', 'tokenId': '0x15'})Load NFTs into Document Loader​# get ALCHEMY_API_KEY from https://www.alchemy.com/alchemyApiKey = "..."Option 1: Ethereum Mainnet (default BlockchainType)​from langchain.document_loaders.blockchain import ( BlockchainDocumentLoader, BlockchainType,)contractAddress = "0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d" # Bored Ape Yacht Club contract addressblockchainType = BlockchainType.ETH_MAINNET # default value, optional parameterblockchainLoader = BlockchainDocumentLoader( contract_address=contractAddress, api_key=alchemyApiKey)nfts = blockchainLoader.load()nfts[:2]Option 2: Polygon Mainnet​contractAddress = ( "0x448676ffCd0aDf2D85C1f0565e8dde6924A9A7D9" # Polygon Mainnet contract address)blockchainType =Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersBlockchainOn this pageBlockchainOverview​The intention of this notebook is to provide a means of testing functionality in the Langchain Document Loader for Blockchain.Initially this LoaderBlockchain | 🦜ï¸�🔗 Langchainshouting the ...', 'Answer (1 of 15): A better question would be, “What’s up with Obama’s first name?â€� President Barack Hussein Obama’s father’s name was Barack Hussein Obama. He was named after his father. Hussein, Obama’s middle name, is a very common Arabic name, meaning "good," "handsome," or ...', 'Barack Obama, in full Barack Hussein Obama II, (born August 4, 1961, Honolulu, Hawaii, U.S.), 44th president of the United States (2009–17) and the first African American to hold the office. Before winning the presidency, Obama represented Illinois in the U.S.']PreviousBlockchainNextBrowserlessInstallation and SetupExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.will be required for opt-in data-collection.Installation and Setup​To get access to the Brave Search API, you need to create an account and get an API key.api_key = "..."from langchain.document_loaders import BraveSearchLoaderExample​loader = BraveSearchLoader( query="obama middle name", api_key=api_key, search_kwargs={"count": 3})docs = loader.load()len(docs) 3[doc.metadata for doc in docs] [{'title': "Obama's Middle Name -- My Last Name -- is 'Hussein.' So?", 'link': 'https://www.cair.com/cair_in_the_news/obamas-middle-name-my-last-name-is-hussein-so/'}, {'title': "What's up with Obama's middle name? - Quora", 'link': 'https://www.quora.com/Whats-up-with-Obamas-middle-name'}, {'title': 'Barack Obama | Biography, Parents, Education, Presidency, Books, ...', 'link': 'https://www.britannica.com/biography/Barack-Obama'}][doc.page_content for doc in docs] ['I wasn’t sure whether to laugh or cry a few days back listening to radio talk show host Bill Cunningham repeatedly scream Barack Obama’s middle name — my last name — as if he had anti-Muslim Tourette’s. “Hussein,â€� Cunningham hissed like he was beckoning Satan when shouting the ...', 'Answer (1 of 15): A better question wouldof search results without relying on any third-parties, with the remainder being retrieved server-side from the Bing API or (on an opt-in basis) client-side from Google. According to Brave, the index was kept "intentionally smaller than that of Google or Bing" in order to help avoid spam and other low-quality content, with the disadvantage that "Brave Search is not yet as good as Google in recovering long-tail queries."Brave Search Premium: As of April 2023 Brave Search is an ad-free website, but it will eventually switch to a new model that will include ads and premium users will get an ad-free experience. User data including IP addresses won't be collected from its users by default. A premium accountis a search engine developed by Brave Software.Brave Search uses its own web index. As of May 2022, it covered over 10 billion pages and was used to serve 92%Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersBrave SearchOn this pageBrave SearchBrave Search is a search engine developed by Brave Software.Brave Search uses its own web index. As of MayBrave Search | 🦜ï¸�🔗 Langchaindidn't wanna wanna leave you\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours (Yeah)\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than\nYeah, I can love me better than you can, uh\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby (Than you can)\nCan love me better\nI can love me better, baby\nCan love me better\nI\n", lookup_str='', metadata={'source': 'https://www.azlyrics.com/lyrics/mileycyrus/flowers.html'}, lookup_index=0)]PreviousAWS S3 FileNextAzure Blob Storage ContainerCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.every day growing collection of lyrics.This covers how to load AZLyrics webpages into a document format that we can use downstream.from langchain.document_loaders import AZLyricsLoaderloader = AZLyricsLoader("https://www.azlyrics.com/lyrics/mileycyrus/flowers.html")data = loader.load()data [Document(page_content="Miley Cyrus - Flowers Lyrics | AZLyrics.com\n\r\nWe were good, we were gold\nKinda dream that can't be sold\nWe were right till we weren't\nBuilt a home and watched it burn\n\nI didn't wanna leave you\nI didn't wanna lie\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\n\nPaint my nails, cherry red\nMatch the roses that you left\nNo remorse, no regret\nI forgive every word you said\n\nI didn't wanna leave you, baby\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours, yeah\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI\n\nI didn't wanna wanna leave you\nI didn't wanna fight\nStarted to cry but then rememberedSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersAZLyricsAZLyricsAZLyrics is a large, legal, every day growing collection of lyrics.This covers how to load AZLyrics webpages into a document formatAZLyrics | 🦜ï¸�🔗 Langchainis a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as "anything that gratifies one's intellectual curiosity."This notebook covers how to pull page data and comments from Hacker Newsfrom langchain.document_loaders import HNLoaderloader = HNLoader("https://news.ycombinator.com/item?id=34817881")data = loader.load()data[0].page_content[:300] "delta_p_delta_x 73 days ago \n | next [–] \n\nAstrophysical and cosmological simulations are often insightful. They're also very cross-disciplinary; besides the obvious astrophysics, there's networking and sysadmin, parallel computing and algorithm theory (so that the simulation programs a"data[0].metadata {'source': 'https://news.ycombinator.com/item?id=34817881', 'title': 'What Lights the Universe’s Standard Candles?'}PreviousGutenbergNextHuggingFace datasetCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersHacker NewsHacker NewsHacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fundHacker News | 🦜ï¸�🔗 Langchain
Team "Payroll (millions)" "Wins"
0 Nationals 81.34 98
1 Reds 82.20 97
2 Yankees 197.96
Rockies, 78.06, 64
Cubs, 88.19, 61
Astros, 60.65, 55
PreviousTrelloNextTwitterUnstructuredTSVLoaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Mariners, 81.97, 75 Mets, 93.35, 74 Blue Jays, 75.48, 73 Royals, 60.91, 72 Marlins, 118.07, 69 Red Sox, 173.18, 69 Indians, 78.43, 68 Twins, 94.08, 66 88 Dodgers, 95.14, 86 White Sox, 96.92, 85 Brewers, 97.65, 83 Phillies, 174.54, 81 Diamondbacks, 74.28, 81 Pirates, 63.43, 79 Padres, 55.24, 76 Mariners, 81.97, Braves, 83.31, 94 Athletics, 55.37, 94 Rangers, 120.51, 93 Orioles, 81.43, 93 Rays, 64.17, 90 Angels, 154.49, 89 Tigers, 132.30, 88 Cardinals, 110.30, 88 is a simple, text-based file format for storing tabular data.[3] Records are separated by newlines, and values within a record are separated by tab characters.UnstructuredTSVLoader​You can also load the table using the UnstructuredTSVLoader. One advantage of using UnstructuredTSVLoader is that if you use it in "elements" mode, an HTML representation of the table will be available in the metadata.from langchain.document_loaders.tsv import UnstructuredTSVLoaderloader = UnstructuredTSVLoader( file_path="example_data/mlb_teams_2012.csv", mode="elements")docs = loader.load()print(docs[0].metadata["text_as_html"])
Nationals, 81.34, 98
Reds, 82.20, 97
Yankees, 197.96, 95
Giants, 117.62, 94
Braves,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersTSVOn this pageTSVA tab-separated values (TSV) file is a simple, text-based file format for storing tabular data.[3] Records are separated byTSV | 🦜ï¸�🔗 Langchainrel="mask-icon" href="https://lilianweng.github.io/safari-pinned-tab.'PreviousArxivNextAWS S3 DirectoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.content="com.espn.score_center"/>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nUse any or all of the above description/syntax/sample output sections. You may also want to add "see also" or other sections.\n\nNote that the above example also uses the Template:T template.\n\nCategory:Documentation templatesCategory:Template documentation', metadata={'source': 'Documentation/doc'}), Document(page_content='Description\nA template link with a variable number of parameters (0-20).\n\nSyntax\n \n\nSource\nImproved version not needing t/piece subtemplate developed on Templates wiki see the list of authors. Copied here via CC-By-SA 3.0 license.\n\nExample\n\nCategory:General wiki templates\nCategory:Template documentation', metadata={'source': 'T/doc'}), Document(page_content='\t\n\t\t \n\t\n\t\t Aliases\n\t Relatives\n\t Affiliation\n Occupation\n \n Biographical information\n Marital status\n \tDate of birth\n Place of birth\n Date of death\n Place of death\n \n Physical description\n Species\n Gender\n Height\n Weight\n Eye color\n\t\n Appearances\n Portrayed by\n Appears in\n Debut\n ',View or edit this documentation. (About template documentation)\n|-\n| Editors can experiment in this template\'s [ sandbox] and [ test case] pages.\n|}Category:Documentation templates', metadata={'source': 'Documentation'}), Document(page_content='Description\nThis template is used to insert descriptions on template pages.\n\nSyntax\nAdd at the end of the template page.\n\nAdd to transclude an alternative page from the /doc subpage.\n\nUsage\n\nOn the Template page\nThis is the normal format when used:\n\nTEMPLATE CODE\nAny categories to be inserted into articles by the template\n{{Documentation}}\n\nIf your template is not a completed div or table, you may need to close the tags just before {{Documentation}} is inserted (within the noinclude tags).\n\nA line break right before {{Documentation}} can also be useful as it helps prevent the documentation template "running into" previous code.\n\nOn the documentation page\nThe documentation page is usually located on the /doc subpage for a template, but a different page can be specified with the first parameter of the template (see Syntax).\n\nNormally, you will want to write something like the following on the documentation page:\n\n==Description==\nThis template is used to do something.\n\n==Syntax==\nType {{t|templatename}} somewhere.\n\n==Samples==\n{{templatename|input}} \n\nresults in...\n\n{{templatename|input}}\n\nAny categories for the template itself\n[[Category:Template documentation]]\n\nUse any or all ofa wiki (wiki pages with all their revisions), without the site-related data. A XML dump does not create a full backup of the wiki database, the dump does not contain user accounts, images, edit logs, etc.This covers how to load a MediaWiki XML dump file into a document format that we can use downstream.It uses mwxml from mediawiki-utilities to dump and mwparserfromhell from earwig to parse MediaWiki wikicode.Dump files can be obtained with dumpBackup.php or on the Special:Statistics page of the Wiki.# mediawiki-utilities supports XML schema 0.11 in unmerged branchespip install -qU git+https://github.com/mediawiki-utilities/python-mwtypes@updates_schema_0.11# mediawiki-utilities mwxml has a bug, fix PR pendingpip install -qU git+https://github.com/gdedrouas/python-mwxml@xml_format_0.11pip install -qU mwparserfromhellfrom langchain.document_loaders import MWDumpLoaderloader = MWDumpLoader("example_data/testmw_pages_current.xml", encoding="utf8")documents = loader.load()print(f"You have {len(documents)} document(s) in your data ") You have 177 document(s) in your data documents[:5] [Document(page_content='\t\n\t\n\tArtist\n\tReleased\n\tRecorded\n\tLength\n\tLabel\n\tProducer', metadata={'source': 'Album'}), Document(page_content='{| class="article-table plainlinks" style="width:100%;"\n|- style="font-size:18px;"\n! style="padding:0px;" | Template documentation\n|-\n| Note: portions of the template sample may not be visible without values provided.\n|-\n| View or edit this documentation. (About template documentation)\n|-\n|Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersMediaWikiDumpMediaWikiDumpMediaWiki XML Dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. A XML dump doesMediaWikiDump | 🦜ï¸�🔗 Langchainin any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.This notebook shows how to load text files from Git repository.Load existing repository from disk​pip install GitPythonfrom git import Reporepo = Repo.clone_from( "https://github.com/hwchase17/langchain", to_path="./example_data/test_repo1")branch = repo.head.referencefrom langchain.document_loaders import GitLoaderloader = GitLoader(repo_path="./example_data/test_repo1/", branch=branch)data = loader.load()len(data)print(data[0]) page_content='.venv\n.github\n.git\n.mypy_cache\n.pytest_cache\nDockerfile' metadata={'file_path': '.dockerignore', 'file_name': '.dockerignore', 'file_type': ''}Clone repository from url​from langchain.document_loaders import GitLoaderloader = GitLoader( clone_url="https://github.com/hwchase17/langchain", repo_path="./example_data/test_repo2/", branch="master",)data = loader.load()len(data) 1074Filtering files to load​from langchain.document_loaders import GitLoader# eg. loading only python filesloader = GitLoader( repo_path="./example_data/test_repo1/", file_filter=lambda file_path: file_path.endswith(".py"),)PreviousGeopandasNextGitBookLoad existing repository from diskClone repository from urlFiltering files to loadCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersGitOn this pageGitGit is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code duringGit | 🦜ï¸�🔗 Langchainas final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'})]}This time the answer is correct, since the self-querying retriever created a filter on the landlord attribute of the metadata, correctly filtering to document that specifically is about the DHA Group landlord. The resulting source chunks are all relevant to this landlord, and this improves answer accuracy even though the landlord is not directly mentioned in the specific chunk that contains the correct answer.PreviousDiscordNextDuckDBPrerequisitesQuick startAdvantages vs Other Chunking TechniquesLoad DocumentsBasic Use: Docugami Loader for Document QAUsing Docugami to Add Metadata to Chunks for High Accuracy Document QACommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}), Document(page_content="1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}), Document(page_content='1.6 Rentable Area of the Premises. 13,500 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath':limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}), Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord':{key} for this chunk", type="string", ) for key in documents[0].metadata if key.lower() not in EXCLUDE_KEYS]document_content_description = "Contents of this chunk"llm = OpenAI(temperature=0)vectordb = Chroma.from_documents(documents=documents, embedding=embedding)retriever = SelfQueryRetriever.from_llm( llm, vectordb, document_content_description, metadata_field_info, verbose=True)qa_chain = RetrievalQA.from_chain_type( llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True) Using embedded DuckDB without persistence: data will be transientLet's run the same question again. It returns the correct result since all the chunks have metadata key/value pairs on them carrying key information about the document even if this information is physically very far away from the source chunk used to generate the answer.qa_chain("What is rentable area for the property owned by DHA Group?") query='rentable area' filter=Comparison(comparator=, attribute='Landlord', value='DHA Group') {'query': 'What is rentable area for the property owned by DHA Group?', 'result': ' 13,500 square feet.', 'source_documents': [Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath':'dsyfhh4vpeyf', 'name': 'Shorebucks LLC_CO.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'Perry & Blair LLC', 'Tenant': 'Shorebucks LLC'})]Docugami can help here. Chunks are annotated with additional metadata created using different techniques if a user has been using Docugami. More technical approaches will be added later.Specifically, let's look at the additional metadata that is returned on the documents returned by docugami, in the form of some simple key/value pairs on all the text chunks:loader = DocugamiLoader(docset_id="wh2kned25uqm")documents = loader.load()documents[0].metadata {'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOfficeLeaseAgreement', 'id': 'v1bvgaozfkak', 'name': 'TruTone Lane 2.docx', 'structure': 'p', 'tag': 'ThisOfficeLeaseAgreement', 'Landlord': 'BUBBA CENTER PARTNERSHIP', 'Tenant': 'Truetone Lane LLC'}We can use a self-querying retriever to improve our query accuracy, using this additional metadata:from langchain.chains.query_constructor.schema import AttributeInfofrom langchain.retrievers.self_query.base import SelfQueryRetrieverEXCLUDE_KEYS = ["id", "xpath", "structure"]metadata_field_info = [ AttributeInfo( name=key, description=f"The {key} for this chunk", type="string",at the Project , Attention: On - Site Property Manager .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}), Document(page_content='1.6 Rentable Area of the Premises. 9,753 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:PerryBlair/docset:PerryBlair/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'dsyfhh4vpeyf', 'name': 'ShorebucksLANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}), Document(page_content="1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .", metadata={'xpath':# the correct answer should be 13,500 ' 9,753 square feet'At first glance the answer may seem reasonable, but if you review the source chunks carefully for this answer, you will see that the chunking of the document did not end up putting the Landlord name and the rentable area in the same context, since they are far apart in the document. The retriever therefore ends up finding unrelated chunks from other documents not even related to the Menlo Group landlord. That landlord happens to be mentioned on the first page of the file Shorebucks LLC_NJ.pdf file, and while one of the source chunks used by the chain is indeed from that doc that contains the correct answer (13,500), other source chunks from different docs are included, and the answer is therefore incorrect.chain_response["source_documents"] [Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}), Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability(subject to availability of space) for the then Building Standard charge .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:TheTerms/docset:Indemnification/docset:INDEMNIFICATION-section/docset:INDEMNIFICATION/docset:Waiver/docset:Waiver/docset:Signs/docset:SIGNS-section/docset:SIGNS', 'id': 'qkn9cyqsiuch', 'name': 'Shorebucks LLC_AZ.pdf', 'structure': 'div', 'tag': 'SIGNS', 'Landlord': 'Menlo Group', 'Tenant': 'Shorebucks LLC'})]}Using Docugami to Add Metadata to Chunks for High Accuracy Document QA​One issue with large documents is that the correct answer to your question may depend on chunks that are far apart in the document. Typical chunking techniques, even with overlap, will struggle with providing the LLM sufficent context to answer such questions. With upcoming very large context LLMs, it may be possible to stuff a lot of tokens, perhaps even entire documents, inside the context but this will still hit limits at some point with very long documents, or a lot of documents.For example, if we ask a more complex question that requires the LLM to draw on chunks from different parts of the document, even OpenAI's powerful LLM is unable to answer correctly.chain_response = qa_chain("What is rentable area for the property owned by DHA Group?")chain_response["result"] # the correct answer should be 13,500 'portion thereof as may be necessary, in order that they or any of them may gain access to the Premises for the purpose of showing the Premises to potential new tenants or real estate brokers. In addition, Landlord shall be entitled to place a "FOR RENT " or "FOR LEASE" sign (not exceeding 8.5 â€� x 11 â€�) in the front window of the Premises during the last six months of the term of this Lease .', metadata={'xpath': '/docset:Rider/docset:RIDERTOLEASE-section/docset:RIDERTOLEASE/docset:FixedRent/docset:TermYearPeriod/docset:Lease/docset:_42FLandlordSAccess-section/docset:_42FLandlordSAccess/docset:LandlordsRights/docset:Landlord', 'id': 'omvs4mysdk6b', 'name': 'TruTone Lane 1.docx', 'structure': 'p', 'tag': 'Landlord', 'Landlord': 'BIRCH STREET , LLC', 'Tenant': 'Trutone Lane LLC'}), Document(page_content="24. SIGNS . No signage shall be placed by Tenant on any portion of the Project . However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost ) and will be furnished a single listing of its name in the Building's directory (at Landlord 's cost ), all in accordance with the criteria adopted from time to time by Landlord for the Project . Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge .", metadata={'xpath':erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant ’s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. \n\n ARTICLE VII UTILITIES 7.01', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOFFICELEASEAGREEMENTThis/docset:ArticleIBasic/docset:ArticleIiiUseAndCareOf/docset:ARTICLEIIIUSEANDCAREOFPREMISES-section/docset:ARTICLEIIIUSEANDCAREOFPREMISES/docset:NoOtherPurposes/docset:TenantsResponsibility/dg:chunk', 'id': 'g2fvhekmltza', 'name': 'TruTone Lane 6.pdf', 'structure': 'lim', 'tag': 'chunk', 'Landlord': 'GLORY ROAD LLC', 'Tenant': 'Truetone Lane LLC'}), Document(page_content='Landlord , its agents, servants, employees, licensees, invitees, and contractors during the last year of the term of this Lease at any and all times during regular business hours, after 24 hour notice to tenant, to pass and repass on and through the Premises, or such portion thereof as may be necessary, in order that they or any of them may gain access to theand must conform to any applicable laws, ordinances, etc. governing the same. The tenant must also remove and clean any window or glass identification promptly upon vacating the premises.', 'source_documents': [Document(page_content='ARTICLE VI SIGNAGE 6.01 Signage . Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant ’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant ’s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises.', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:Article/docset:ARTICLEVISIGNAGE-section/docset:_601Signage-section/docset:_601Signage', 'id': 'v1bvgaozfkak', 'name': 'TruTone Lane 2.docx', 'structure': 'div', 'tag': '_601Signage', 'Landlord': 'BUBBA CENTER PARTNERSHIP', 'Tenant': 'Truetone Lane LLC'}), Document(page_content='Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant ’s erecting or removing such signs shall be repaired promptly by the Tenant at thethis one. We can just use the same code, but use the DocugamiLoader for better chunking, instead of loading text or PDF files directly with basic splitting techniques.poetry run pip -q install openai tiktoken chromadbfrom langchain.schema import Documentfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.llms import OpenAIfrom langchain.chains import RetrievalQA# For this example, we already have a processed docset for a set of lease documentsloader = DocugamiLoader(docset_id="wh2kned25uqm")documents = loader.load()The documents returned by the loader are already split, so we don't need to use a text splitter. Optionally, we can use the metadata on each document, for example the structure or tag attributes, to do any post-processing we want.We will just use the output of the DocugamiLoader as-is to set up a retrieval QA chain the usual way.embedding = OpenAIEmbeddings()vectordb = Chroma.from_documents(documents=documents, embedding=embedding)retriever = vectordb.as_retriever()qa_chain = RetrievalQA.from_chain_type( llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True) Using embedded DuckDB without persistence: data will be transient# Try out the retriever with an example queryqa_chain("What can tenants do with signage on their properties?") {'query': 'What can tenants do with signage on their properties?', 'result': ' Tenants may place signs (digital or otherwise) or other form of identification on the premises after receiving written permission from the landlord which shall not be unreasonably withheld. The tenant is responsible for any damage caused to the premises and must conform to any applicable laws, ordinances, etc. governing the same. The tenant must also'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheParties'}), Document(page_content='DOCUGAMI INC . : \n\n Caleb Divine : \n\n Signature: Signature: Name: \n\n Jean Paoli Name: Title: \n\n CEO Title:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:DocugamiInc/docset:DocugamiInc/xhtml:table', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': '', 'tag': 'table'})]The metadata for each Document (really, a chunk of an actual PDF, DOC or DOCX) contains some useful additional information:id and name: ID and Name of the file (PDF, DOC or DOCX) the chunk is sourced from within Docugami.xpath: XPath inside the XML representation of the document, for the chunk. Useful for source citations directly to the actual chunk inside the document XML.structure: Structural attributes of the chunk, e.g. h1, h2, div, table, td, etc. Useful to filter out certain kinds of chunks if needed by the caller.tag: Semantic tag for the chunk, using various generative and extractive techniques. More details here: https://github.com/docugami/DFM-benchmarksBasic Use: Docugami Loader for Document QA​You can use the Docugami Loader like a standard loader for Document QA over multiple docs, albeit with much better chunks that follow the natural contours of the document. There are many great tutorials on how to do this, e.g. this one. We can just use the same code, but use the DocugamiLoader for betterof Washington , excluding its body of law controlling conflict of laws. This Agreement is the complete and exclusive understanding and agreement between the parties regarding the subject matter of this Agreement and supersedes all prior agreements, understandings and communications, oral or written, between the parties regarding the subject matter of this Agreement . If any provision of this Agreement is held invalid or unenforceable by a court of competent jurisdiction, that provision of this Agreement will be enforced to the maximum extent permissible and the other provisions of this Agreement will remain in full force and effect. Neither party may assign this Agreement , in whole or in part, by operation of law or otherwise, without the other party’s prior written consent, and any attempted assignment without such consent will be void. This Agreement may be executed in counterparts, each of which will be deemed an original, but all of which together will constitute one and the same instrument.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Accordance/docset:Miscellaneous-section/docset:Miscellaneous', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Miscellaneous'}), Document(page_content='[SIGNATURE PAGE FOLLOWS] IN WITNESS WHEREOF, the parties hereto have executed this Mutual Non-Disclosure Agreement by their duly authorized officers or representatives as of the date first set forth above.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:TheParties', 'id':enjoin any unauthorized use or disclosure of its Confidential Information , in addition to any other rights and remedies that it may have at law or otherwise.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:EquitableRelief/docset:EquitableRelief-section/docset:EquitableRelief[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'EquitableRelief'}), Document(page_content='10. Non-compete. To the maximum extent permitted by applicable law, during the Term of this Agreement and for a period of one ( 1 ) year thereafter, Caleb Divine may not market software products or do business that directly or indirectly competes with Docugami software products .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheMaximumExtent/docset:Non-compete-section/docset:Non-compete', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Non-compete'}), Document(page_content='11. Miscellaneous. This Agreement will be governed and construed in accordance with the laws of the State of Washington , excluding its body of law controlling conflict of laws. This Agreement is theDISCLOSING PARTY “AS IS â€�.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoWarranty/docset:NoWarranty-section/docset:NoWarranty[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoWarranty'}), Document(page_content='8. Term. This Agreement will remain in effect for a period of seven ( 7 ) years from the date of last disclosure of Confidential Information by either party, at which time it will terminate.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:ThisAgreement/docset:Term-section/docset:Term', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Term'}), Document(page_content='9. Equitable Relief . Each party acknowledges that the unauthorized use or disclosure of the disclosing party’s Confidential Information may cause the disclosing party to incur irreparable harm and significant damages, the degree of which may be difficult to ascertain. Accordingly, each party agrees that the disclosing party will have the right to seek immediate equitable relief to enjoin any unauthorized use or disclosure of its Confidential Information , in addition to any other rights andcontain or are based upon the disclosing party’s Confidential Information .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheCompletion/docset:ReturnofConfidentialInformation-section/docset:ReturnofConfidentialInformation', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ReturnofConfidentialInformation'}), Document(page_content='6. No Obligations . Each party retains the right to determine whether to disclose any Confidential Information to the other party.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoObligations/docset:NoObligations-section/docset:NoObligations[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoObligations'}), Document(page_content='7. No Warranty. ALL CONFIDENTIAL INFORMATION IS PROVIDED BY THE DISCLOSING PARTY “AS IS â€�.', metadata={'xpath':'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}), Document(page_content='4. Compelled Disclosure . Nothing in this Agreement will be deemed to restrict a party from disclosing the other party’s Confidential Information to the extent required by any order, subpoena, law, statute or regulation; provided, that the party required to make such a disclosure uses reasonable efforts to give the other party reasonable advance notice of such required disclosure in order to enable the other party to prevent or limit such disclosure.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Disclosure/docset:CompelledDisclosure-section/docset:CompelledDisclosure', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'CompelledDisclosure'}), Document(page_content='5. Return of Confidential Information . Upon the completion or abandonment of the Purpose, and in any event upon the disclosing party’s request, the receiving party will promptly return to the disclosing party all tangible items and embodiments containing or consisting of the disclosing party’s Confidential Information and all copies thereof (including electronic copies), and any notes, analyses, compilations, studies, interpretations, memoranda or other documents (regardless of the form thereof) prepared by or on behalf of the receiving party that contain or are based upon the disclosing party’s Confidential Information .', metadata={'xpath':available to the public through no act or failure to act by the receiving party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheDate/docset:TheDate', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheDate'}), Document(page_content='(ii) were rightfully known by the receiving party prior to receiving such information or materials from the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:SuchInformation/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}), Document(page_content='(iii) are rightfully acquired by the receiving party from a third party who has the right to disclose such information or materials without breach of any confidentiality obligation to the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheReceivingParty/docset:TheReceivingParty', 'id':Confidential Information as those set forth in this Agreement .", metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Obligations/docset:ObligationsAndRestrictions-section/docset:ObligationsAndRestrictions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ObligationsAndRestrictions'}), Document(page_content='3. Exceptions. The obligations and restrictions in Section 2 will not apply to any information or materials that:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Exceptions/docset:Exceptions-section/docset:Exceptions[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Exceptions'}), Document(page_content='(i) were, at the date of disclosure, or have subsequently become, generally known or available to the public through no act or failure to act by the receiving party;', metadata={'xpath':disclosure, a person exercising reasonable business judgment would understand to be confidential or proprietary.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Purposes/docset:ConfidentialInformation-section/docset:ConfidentialInformation[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ConfidentialInformation'}), Document(page_content="2. Obligations and Restrictions . Each party agrees: (i) to maintain the other party's Confidential Information in strict confidence; (ii) not to disclose such Confidential Information to any third party; and (iii) not to use such Confidential Information for any purpose except for the Purpose. Each party may disclose the other party’s Confidential Information to its employees and consultants who have a bona fide need to know such Confidential Information for the Purpose, but solely to the extent necessary to pursue the Purpose and for no other purpose; provided, that each such employee and consultant first executes a written agreement (or is otherwise already bound by a written agreement) that contains use and nondisclosure restrictions at least as protective of the other party’s Confidential Information as those set forth in this Agreement .", metadata={'xpath':'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Discussions'}), Document(page_content='In consideration of the foregoing, the parties agree as follows:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Consideration', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Consideration'}), Document(page_content='1. Confidential Information . For purposes of this Agreement , “ Confidential Information â€� means any information or materials disclosed by one party to the other party that: (i) if disclosed in writing or in the form of tangible materials, is marked “confidentialâ€� or “proprietaryâ€� at the time of such disclosure; (ii) if disclosed orally or by visual presentation, is identified as “confidentialâ€� or “proprietaryâ€� at the time of such disclosure, and is summarized in a writing sent by the disclosing party to the receiving party within thirty ( 30 ) days after any such disclosure; or (iii) due to its nature or the circumstances of its disclosure, a person exercising reasonable business judgment would understand to be confidential or proprietary.', metadata={'xpath':loader.load()docs [Document(page_content='MUTUAL NON-DISCLOSURE AGREEMENT This Mutual Non-Disclosure Agreement (this “ Agreement â€�) is entered into and made effective as of April 4 , 2018 between Docugami Inc. , a Delaware corporation , whose address is 150 Lake Street South , Suite 221 , Kirkland , Washington 98033 , and Caleb Divine , an individual, whose address is 1201 Rt 300 , Newburgh NY 12550 .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:ThisMutualNon-disclosureAgreement', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'ThisMutualNon-disclosureAgreement'}), Document(page_content='The above named parties desire to engage in discussions regarding a potential agreement or other transaction between the parties (the “Purposeâ€�). In connection with such discussions, it may be necessary for the parties to disclose to each other certain confidential information or materials to enable them to evaluate whether to enter into such agreement or transaction.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Discussions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p',XML tree of chunks of varying sizes, from single words or numerical values to entire sections. These chunks follow the semantic contours of the document, providing a more meaningful representation than arbitrary length or simple whitespace-based chunking.Structured Representation: In addition, the XML tree indicates the structural contours of every document, using attributes denoting headings, paragraphs, lists, tables, and other common elements, and does that consistently across all supported document formats, such as scanned PDFs or DOCX files. It appropriately handles long-form document characteristics like page headers/footers or multi-column flows for clean text extraction.Semantic Annotations: Chunks are annotated with semantic tags that are coherent across the document set, facilitating consistent hierarchical queries across multiple documents, even if they are written and formatted differently. For example, in set of lease agreements, you can easily identify key provisions like the Landlord, Tenant, or Renewal Date, as well as more complex information such as the wording of any sub-lease provision or whether a specific jurisdiction has an exception section within a Termination Clause.Additional Metadata: Chunks are also annotated with additional metadata, if a user has been using Docugami. This additional metadata can be used for high-accuracy Document QA without context window restrictions. See detailed code walk-through below.import osfrom langchain.document_loaders import DocugamiLoaderLoad Documents​If the DOCUGAMI_API_KEY environment variable is set, there is no need to pass it in to the loader explicitly otherwise you can pass it in as the access_token parameter.DOCUGAMI_API_KEY = os.environ.get("DOCUGAMI_API_KEY")# To load all docs in the given docset ID, just don't provide document_idsloader = DocugamiLoader(docset_id="ecxqpipcoe2p", document_ids=["43rj0ds7s0ur"])docs = loader.load()docs [Document(page_content='MUTUAL NON-DISCLOSUREhow to load documents from Docugami. It provides the advantages of using this system over alternative data loaders.Prerequisites​Install necessary python packages.Grab an access token for your workspace, and make sure it is set as the DOCUGAMI_API_KEY environment variable.Grab some docset and document IDs for your processed documents, as described here: https://help.docugami.com/home/docugami-api# You need the lxml package to use the DocugamiLoaderpip install lxmlQuick start​Create a Docugami workspace (free trials available)Add your documents (PDF, DOCX or DOC) and allow Docugami to ingest and cluster them into sets of similar documents, e.g. NDAs, Lease Agreements, and Service Agreements. There is no fixed set of document types supported by the system, the clusters created depend on your particular documents, and you can change the docset assignments later.Create an access token via the Developer Playground for your workspace. Detailed instructionsExplore the Docugami API to get a list of your processed docset IDs, or just the document IDs for a particular docset. Use the DocugamiLoader as detailed below, to get rich semantic chunks for your documents.Optionally, build and publish one or more reports or abstracts. This helps Docugami improve the semantic XML with better tags based on your preferences, which are then added to the DocugamiLoader output as metadata. Use techniques like self-querying retriever to do high accuracy Document QA.Advantages vs Other Chunking Techniques​Appropriate chunking of your documents is critical for retrieval from documents. Many chunking techniques exist, including simple ones that rely on whitespace and recursive chunk splitting based on character length. Docugami offers a different approach:Intelligent Chunking: Docugami breaks down every document into a hierarchical semantic XML tree of chunks of varying sizes, from single words or numerical values to entire sections. These chunksSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersDocugamiOn this pageDocugamiThis notebook covers how to load documents from Docugami. It provides the advantages of using this system over alternative dataDocugami | 🦜ï¸�🔗 Langchaindata loaders.from langchain.document_loaders import WebBaseLoaderloader_web = WebBaseLoader( "https://github.com/basecamp/handbook/blob/master/37signals-is-you.md")from langchain.document_loaders import PyPDFLoaderloader_pdf = PyPDFLoader("../MachineLearning-Lecture01.pdf")from langchain.document_loaders.merge import MergedDataLoaderloader_all = MergedDataLoader(loaders=[loader_web, loader_pdf])docs_all = loader_all.load()len(docs_all) 23PreviousMediaWikiDumpNextmhtmlCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersMergeDocLoaderMergeDocLoaderMerge the documents returned from a set of specified data loaders.from langchain.document_loaders import WebBaseLoaderloader_web = WebBaseLoader(MergeDocLoader | 🦜ï¸�🔗 Langchaincompatible with courses that use the new Blackboard interface. To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser's developer tools.from langchain.document_loaders import BlackboardLoaderloader = BlackboardLoader( blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1", bbrouter="expires:12345...", load_all_recursively=True,)documents = loader.load()PreviousBiliBiliNextBlockchainCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. The software features course management, customizable open architecture, and scalable design that allows integration with student information systems and authentication protocols. It may be installed on local servers, hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services. Its main purposes are stated to include the addition of online elements to courses traditionally delivered face-to-face and development of completely online courses with few or no face-to-face meetingsThis covers how to load data from a Blackboard Learn instance.This loader is not compatible with all Blackboard courses. It is onlySkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersBlackboardBlackboardBlackboard Learn (previously the Blackboard Learning Management System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. TheBlackboard | 🦜ï¸�🔗 Langchainmusic, etc. Each kind of document possesses its special classification problems. When not otherwise specified, text classification is implied. DoPreviousBrave SearchNextchatgpt_loaderCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Chrome instances in the cloud. It's a great way to run browser-based automation at scale without having to worry about managing your own infrastructure.To use Browserless as a document loader, initialize a BrowserlessLoader instance as shown in this notebook. Note that by default, BrowserlessLoader returns the innerText of the page's body element. To disable this and get the raw HTML, set text_content to False.from langchain.document_loaders import BrowserlessLoaderBROWSERLESS_API_TOKEN = "YOUR_BROWSERLESS_API_TOKEN"loader = BrowserlessLoader( api_token=BROWSERLESS_API_TOKEN, urls=[ "https://en.wikipedia.org/wiki/Document_classification", ], text_content=True,)documents = loader.load()print(documents[0].page_content[:1000]) Jump to content Main menu Search Create account Log in Personal tools Toggle the table of contents Document classification 17 languages Article Talk Read Edit View history Tools From Wikipedia, the free encyclopedia Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. The problems are overlapping, however, and there is therefore interdisciplinary research on document classification. The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified, textSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersBrowserlessBrowserlessBrowserless is a service that allows you to run headless Chrome instances in the cloud. It's a great way to run browser-based automation at scale withoutBrowserless | 🦜ï¸�🔗 Langchainmultimedia container format website as "perhaps the most basic of all subtitle formats." SubRip (SubRip Text) files are named with the extension .srt, and contain formatted lines of plain text in groups separated by a blank line. Subtitles are numbered sequentially, starting at 1. The timecode format used is hours:minutes:seconds,milliseconds with time units fixed to two zero-padded digits and fractions fixed to three zero-padded digits (00:00:00,000). The fractional separator used is the comma, since the program was written in France.How to load data from subtitle (.srt) filesPlease, download the example .srt file from here.pip install pysrtfrom langchain.document_loaders import SRTLoaderloader = SRTLoader( "example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt")docs = loader.load()docs[0].page_content[:100] 'Corruption discovered\nat the core of the Banking Clan! Reunited, Rush Clovis\nand Senator A'PreviousStripeNextTelegramCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersSubtitleSubtitleThe SubRip file format is described on the Matroska multimedia container format website as "perhaps the most basic of all subtitle formats." SubRipSubtitle | 🦜ï¸�🔗 Langchainwhether to remove newline characters from the cell sources and outputs (default is False).\', \'* `traceback` (bool): whether to include full traceback (default is False).\']\'\n\n \'code\' cell: \'[\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\']\'\n\n', metadata={'source': 'example_data/notebook.html'})]PreviousJoplinNextLarkSuite (FeiShu)CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.is a web-based interactive computational environment for creating notebook documents.This notebook covers how to load data from a Jupyter notebook (.html) into a format suitable by LangChain.from langchain.document_loaders import NotebookLoaderloader = NotebookLoader( "example_data/notebook.html", include_outputs=True, max_output_length=20, remove_newline=True,)NotebookLoader.load() loads the .html notebook file into a Document object.Parameters:include_outputs (bool): whether to include cell outputs in the resulting document (default is False).max_output_length (int): the maximum number of characters to include from each cell output (default is 10).remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).traceback (bool): whether to include full traceback (default is False).loader.load() [Document(page_content='\'markdown\' cell: \'[\'# Notebook\', \'\', \'This notebook covers how to load data from an .html notebook into a format suitable by LangChain.\']\'\n\n \'code\' cell: \'[\'from langchain.document_loaders import NotebookLoader\']\'\n\n \'code\' cell: \'[\'loader = NotebookLoader("example_data/notebook.html")\']\'\n\n \'markdown\' cell: \'[\'`NotebookLoader.load()` loads the `.html` notebook file into a `Document` object.\', \'\', \'**Parameters**:\', \'\', \'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\', \'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\', \'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\', \'*Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersJupyter NotebookJupyter NotebookJupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents.This notebook covers how to load data from aJupyter Notebook | 🦜ï¸�🔗 Langchain(SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import IuguLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Iugu API requires an access token, which can be found inside of the Iugu dashboard.This document loader also requires a resource option which defines what data you want to load.Following resources are available:Documentation Documentationiugu_loader = IuguLoader("charges")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([iugu_loader])iugu_doc_retriever = index.vectorstore.as_retriever()PreviousIMSDbNextJoplinCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersIuguIuguIugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites andIugu | 🦜ï¸�🔗 LangchainIt is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. TOML is implemented in many programming languages. The name TOML is an acronym for "Tom's Obvious, Minimal Language" referring to its creator, Tom Preston-Werner.If you need to load Toml files, use the TomlLoader.from langchain.document_loaders import TomlLoaderloader = TomlLoader("example_data/fake_rule.toml")rule = loader.load()rule [Document(page_content='{"internal": {"creation_date": "2023-05-01", "updated_date": "2022-05-01", "release": ["release_type"], "min_endpoint_version": "some_semantic_version", "os_list": ["operating_system_list"]}, "rule": {"uuid": "some_uuid", "name": "Fake Rule Name", "description": "Fake description of rule", "query": "process where process.name : \\"somequery\\"\\n", "threat": [{"framework": "MITRE ATT&CK", "tactic": {"name": "Execution", "id": "TA0002", "reference": "https://attack.mitre.org/tactics/TA0002/"}}]}}', metadata={'source': 'example_data/fake_rule.toml'})]Previous2MarkdownNextTrelloCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loadersTOMLTOMLTOML is a file format for configuration files. It is intended to be easy to read and write, and is designed to map unambiguouslyTOML | 🦜ï¸�🔗 LangchainA collection of instructions, code snippets, and template repositories for deploying LangChain apps. - [Tracing](https://python.langchain.com/en/latest/additional_resources/tracing.html): A guide on using tracing in LangChain to visualize the execution of chains and agents. - [Model Laboratory](https://python.langchain.com/en/latest/additional_resources/model_laboratory.html): Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so. - [Discord](https://discord.gg/6adMQxSpJS): Join us on our Discord to discuss all things LangChain! - [YouTube](https://python.langchain.com/en/latest/additional_resources/youtube.html): A collection of the LangChain tutorials and videos. - [Production Support](https://forms.gle/57d8AmXBYp8PP8tZA): As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.PreviousTencent COS FileNextTOMLCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.longer documents. A type of Data-Augmented Generation. - [Evaluation](https://python.langchain.com/en/latest/use_cases/evaluation.html): Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation. ## Reference Docs [\#](\#reference-docs "Permalink to this headline") Full documentation on all methods, classes, installation methods, and integration setups for LangChain. - [Reference Documentation](https://python.langchain.com/en/latest/reference.html) ## LangChain Ecosystem [\#](\#langchain-ecosystem "Permalink to this headline") Guides for how other companies/products can be used with LangChain. - [LangChain Ecosystem](https://python.langchain.com/en/latest/ecosystem.html) ## Additional Resources [\#](\#additional-resources "Permalink to this headline") Additional resources we think may be useful as you develop your application! - [LangChainHub](https://github.com/hwchase17/langchain-hub): The LangChainHub is a place to share and explore other prompts, chains, and agents. - [Gallery](https://python.langchain.com/en/latest/additional_resources/gallery.html): A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications. - [Deployments](https://python.langchain.com/en/latest/additional_resources/deployments.html): A collection of instructions, code snippets, andto events can be an effective way to evaluate their long-range reasoning and planning abilities. - [Personal Assistants](https://python.langchain.com/en/latest/use_cases/personal_assistants.html): One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data. - [Question Answering](https://python.langchain.com/en/latest/use_cases/question_answering.html): Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer. - [Chatbots](https://python.langchain.com/en/latest/use_cases/chatbots.html): Language models love to chat, making this a very natural use of them. - [Querying Tabular Data](https://python.langchain.com/en/latest/use_cases/tabular.html): Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc). - [Code Understanding](https://python.langchain.com/en/latest/use_cases/code.html): Recommended reading if you want to use language models to analyze code. - [Interacting with APIs](https://python.langchain.com/en/latest/use_cases/apis.html): Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions. - [Extraction](https://python.langchain.com/en/latest/use_cases/extraction.html): Extract structured information from text. - [Summarization](https://python.langchain.com/en/latest/use_cases/summarization.html): Compressing longer documents. A type of Data-Augmented Generation. -state that is persisted between calls of a chain/agent. - [Indexes](https://python.langchain.com/en/latest/modules/data_connection.html): Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data. - [Chains](https://python.langchain.com/en/latest/modules/chains.html): Chains are structured sequences of calls (to an LLM or to a different utility). - [Agents](https://python.langchain.com/en/latest/modules/agents.html): An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete. - [Callbacks](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html): Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application. ## Use Cases [\#](\#use-cases "Permalink to this headline") Best practices and built-in implementations for common LangChain use cases: - [Autonomous Agents](https://python.langchain.com/en/latest/use_cases/autonomous_agents.html): Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI. - [Agent Simulations](https://python.langchain.com/en/latest/use_cases/agent_simulations.html): Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.to this headline") How to get started using LangChain to create an Language Model application. - [Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html) Concepts and terminology. - [Concepts and terminology](https://python.langchain.com/en/latest/getting_started/concepts.html) Tutorials created by community experts and presented on YouTube. - [Tutorials](https://python.langchain.com/en/latest/getting_started/tutorials.html) ## Modules [\#](\#modules "Permalink to this headline") These modules are the core abstractions which we view as the building blocks of any LLM-powered application. For each module LangChain provides standard, extendable interfaces. LanghChain also provides external integrations and even end-to-end implementations for off-the-shelf use. The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides. The modules are (from least to most complex): - [Models](https://python.langchain.com/en/latest/modules/models.html): Supported model types and integrations. - [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization. - [Memory](https://python.langchain.com/en/latest/modules/memory.html): Memory refers to state that is persisted between calls of a chain/agent. -You will need to get your own API key. See https://2markdown.com/loginapi_key = ""from langchain.document_loaders import ToMarkdownLoaderloader = ToMarkdownLoader.from_api_key( url="https://python.langchain.com/en/latest/", api_key=api_key)docs = loader.load()print(docs[0].page_content) ## Contents - [Getting Started](#getting-started) - [Modules](#modules) - [Use Cases](#use-cases) - [Reference Docs](#reference-docs) - [LangChain Ecosystem](#langchain-ecosystem) - [Additional Resources](#additional-resources) ## Welcome to LangChain [\#](\#welcome-to-langchain "Permalink to this headline") **LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be: 1. _Data-aware_: connect a language model to other sources of data 2. _Agentic_: allow a language model to interact with its environment The LangChain framework is designed around these principles. This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see [here](https://docs.langchain.com/docs/). For the JavaScript documentation, see [here](https://js.langchain.com/docs/). ## Getting Started [\#](\#getting-started "Permalink to this headline") How to get started using LangChain to createSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersEtherscan LoaderacreomAirbyte JSONAirtableAlibaba Cloud MaxComputeApify DatasetArxivAsyncHtmlLoaderAWS S3 DirectoryAWS S3 FileAZLyricsAzure Blob Storage ContainerAzure Blob Storage FileBibTeXBiliBiliBlackboardBlockchainBrave SearchBrowserlesschatgpt_loaderCollege ConfidentialConfluenceCoNLL-UCopy PasteCSVCube Semantic LayerDatadog LogsDiffbotDiscordDocugamiDuckDBEmailEmbaasEPubEverNoteexample_dataMicrosoft ExcelFacebook ChatFaunaFigmaGeopandasGitGitBookGitHubGoogle BigQueryGoogle Cloud Storage DirectoryGoogle Cloud Storage FileGoogle DriveGrobidGutenbergHacker NewsHuggingFace datasetiFixitImagesImage captionsIMSDbIuguJoplinJupyter NotebookLarkSuite (FeiShu)MastodonMediaWikiDumpMergeDocLoadermhtmlMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordModern TreasuryNotion DB 1/2Notion DB 2/2ObsidianOpen Document Format (ODT)Open City DataOrg-modePandas DataFramePsychicPySpark DataFrame LoaderReadTheDocs DocumentationRecursive URL LoaderRedditRoamRocksetRSTSitemapSlackSnowflakeSource CodeSpreedlyStripeSubtitleTelegramTencent COS DirectoryTencent COS File2MarkdownTOMLTrelloTSVTwitterUnstructured FileURLWeatherWebBaseLoaderWhatsApp ChatWikipediaXMLXorbits Pandas DataFrameLoading documents from a YouTube urlYouTube transcriptsDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerIntegrationsDocument loaders2Markdown2Markdown2markdown service transforms website content into structured markdown files.# You will need to get your own API key. See https://2markdown.com/loginapi_key =��u~Z,�5����X# 3 ANv�v��1#J%Y'k)�+=0�3�7�;�?SD�H�J�J�Q�W�]�d*lYr�x7{a{m��څ���\�j�������������|� �������{�m�\�������&�2�g�e�3��� ;�3Z"&�)�.&3�8m=2AEbK�M�M�S�W<\~`Ld�gMk�k�q�u�z8~�`�������G���(�ѱ�7�[���������6���r�:���p�����r{@ ��� �"^&"(R(P/�4;�>"?�F[M�S�Yi`$g"nQu#|,��7�f����N����I�����Ӱ���#�\���A�����%����������W�����&���#�� ��T� �&�,3�9�>D�I�OlUuZ�Z�`�f�l�n�nlt�zy�…� ������N�_������7�x��P��������f�>����"�{��{��s�!���x�-���.com/ipad-select/|Official Apple Product Page]\n* [link|https://en.wikipedia.org/wiki/IPad#iPad|Official iPad Wikipedia]", lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Standard_iPad', 'title': 'Standard iPad'}, lookup_index=0)]Searching iFixit using /suggest​If you're looking for a more general'Bloomberg.com', 'link': 'https://www.bloomberg.com/quote/AAPL:US', 'snippet': 'AAPL:USNASDAQ GS. Apple Inc. COMPANY INFO ; Open. ' '170.09 ; Prev Close. 169.59 ; Volume. 48,425,696 ; ' 'Market Cap. 2.667T ; Day Range. 167.54170.35.', 'position': 4}, {'title': 'Apple Inc. (AAPL) Company Profile & Facts - Yahoo ' 'Finance', 'link': 'https://finance.yahoo.com/quote/AAPL/profile/', 'snippet': 'Apple Inc. designs, manufactures, and markets ' 'smartphones, personal computers, tablets, wearables, ''manufacturer of personal computers, smartphones, ' 'tablet computers, computer peripherals, and computer ' '...', 'attributes': {'Related People': 'Steve Jobs Steve Wozniak Jony ' 'Ive Tim Cook Angela Ahrendts', 'Date': '1976 - present'}, 'imageUrl': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcS3liELlhrMz3Wpsox29U8jJ3L8qETR0hBWHXbFnwjwQc34zwZvFELst2E&s', 'position': 3}, {'title': 'AAPL: Apple Inc Stock Price Quote - NASDAQ GS - ' 'Bloomberg.com','link': 'https://en.wikipedia.org/wiki/Litigation_involving_Apple_Inc.'}, {'title': 'Apple Store', 'link': 'https://en.wikipedia.org/wiki/Apple_Store'}], 'imageUrl': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRvmB5fT1LjqpZx02UM7IJq0Buoqt0DZs_y0dqwxwSWyP4PIN9FaxuTea0&s', 'position': 2}, {'title': 'Apple Inc. | History, Products, Headquarters, & Facts ' '| Britannica', 'link': 'https://www.britannica.com/topic/Apple-Inc', 'snippet': 'Apple Inc., formerly Apple Computer, Inc., American ' 'manufacturer of personal computers, smartphones, ''Mac; Full list', 'Founders': 'Steve Jobs; Steve Wozniak; Ronald ' 'Wayne; Mike Markkula'}, 'sitelinks': [{'title': 'History', 'link': 'https://en.wikipedia.org/wiki/History_of_Apple_Inc.'}, {'title': 'Timeline of Apple Inc. products', 'link': 'https://en.wikipedia.org/wiki/Timeline_of_Apple_Inc._products'}, {'title': 'Litigation involving Apple Inc.', 'link':'link': 'https://www.apple.com/mac/'}, {'title': 'Watch', 'link': 'https://www.apple.com/watch/'}], 'position': 1}, {'title': 'Apple Inc. - Wikipedia', 'link': 'https://en.wikipedia.org/wiki/Apple_Inc.', 'snippet': 'Apple Inc. is an American multinational technology ' 'company headquartered in Cupertino, California. ' "Apple is the world's largest technology company by " 'revenue, ...', 'attributes': {'Products': 'AirPods; Apple Watch; iPad; iPhone; ''TV, plus explore accessories, entertainment, ...', 'sitelinks': [{'title': 'Support', 'link': 'https://support.apple.com/'}, {'title': 'iPhone', 'link': 'https://www.apple.com/iphone/'}, {'title': 'Site Map', 'link': 'https://www.apple.com/sitemap/'}, {'title': 'Business', 'link': 'https://www.apple.com/business/'}, {'title': 'Mac','Founded': 'April 1, 1976, Los Altos, CA', 'Founders': 'Steve Jobs, Steve Wozniak, ' 'Ronald Wayne, and more', 'Products': 'iPhone, iPad, Apple TV, and ' 'more'}}, 'organic': [{'title': 'Apple', 'link': 'https://www.apple.com/', 'snippet': 'Discover the innovative world of Apple and shop ' 'everything iPhone, iPad, Apple Watch, Mac, and Apple ' 'TV, plus explore accessories, entertainment, ...','revenue, with US$394.3 billion in 2022 ' 'revenue. As of March 2023, Apple is the ' "world's biggest...", 'descriptionSource': 'Wikipedia', 'descriptionLink': 'https://en.wikipedia.org/wiki/Apple_Inc.', 'attributes': {'Customer service': '1 (800) 275-2273', 'CEO': 'Tim Cook (Aug 24, 2011–)', 'Headquarters': 'Cupertino, CA', 'Founded': 'April 1,10, 'type': 'search'}, 'knowledgeGraph': {'title': 'Apple', 'type': 'Technology company', 'website': 'http://www.apple.com/', 'imageUrl': 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQwGQRv5TjjkycpctY66mOg_e2-npacrmjAb6_jAWhzlzkFE3OTjxyzbA&s=0', 'description': 'Apple Inc. is an American multinational ' 'technology company headquartered in ' 'Cupertino, California. Apple is the ' "world's largest technology company by "Answer", func=search.run, description="useful for when you need to ask with search", )]self_ask_with_search = initialize_agent( tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)self_ask_with_search.run( "What is the hometown of the reigning men's U.S. Open champion?") > Entering new AgentExecutor chain... Yes. Follow up: Who is the reigning men's U.S. Open champion? Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion. Follow up: Where is Carlos Alcaraz from? Intermediate answer: El Palmar, Spain So the final answer is: El Palmar, Spain > Finished chain. 'El Palmar, Spain'Obtaining results with metadata​If you would also like to obtain the results in a structured way including metadata. For this we will be using the results method of the wrapper.search = GoogleSerperAPIWrapper()results = search.results("Apple Inc.")pprint.pp(results) {'searchParameters': {'q': 'Apple Inc.', 'gl': 'us', 'hl': 'en', 'num': 10,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsGoogle Serper APIOn this pageGoogle Serper APIThis notebook goes over how to use the Google Serper component to search the web. First you need to sign up for a free account at serper.dev and get your api key.import osimport pprintos.environ["SERPER_API_KEY"] = ""from langchain.utilities import GoogleSerperAPIWrappersearch = GoogleSerperAPIWrapper()search.run("Obama's first name?") 'Barack Hussein Obama II'As part of a Self Ask With Search Chain​os.environ["OPENAI_API_KEY"] = ""from langchain.utilities import GoogleSerperAPIWrapperfrom langchain.llms.openai import OpenAIfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypellm = OpenAI(temperature=0)search = GoogleSerperAPIWrapper()tools = [ Tool( name="Intermediate Answer", func=search.run,Google Serper API | 🦜ï¸�🔗 Langchainwrite_tool, list_tool = toolswrite_tool.run({"file_path": "example.txt", "text": "Hello World!"}) 'File written successfully to example.txt.'# List files in the working directorylist_tool.run({}) 'example.txt'PreviousDuckDuckGo SearchNextGolden QueryThe FileManagementToolkitSelecting File System ToolsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.File System Tools​If you only want to select certain tools, you can pass them in as arguments when initializing the toolkit, or you can individually initialize the desired tools.tools = FileManagementToolkit( root_dir=str(working_directory.name), selected_tools=["read_file", "write_file", "list_directory"],).get_tools()tools [ReadFileTool(name='read_file', description='Read file from disk', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), WriteFileTool(name='write_file', description='Write file to disk', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), ListDirectoryTool(name='list_directory', description='List files and directories in a specified folder', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug')]read_tool, write_tool, list_tool = toolswrite_tool.run({"file_path":description='Move or rename a file from one location to another', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), ReadFileTool(name='read_file', description='Read file from disk', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), WriteFileTool(name='write_file', description='Write file to disk', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), ListDirectoryTool(name='list_directory', description='List files and directories in a specified folder', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug')]Selectingstraightforward prompt injection.toolkit = FileManagementToolkit( root_dir=str(working_directory.name)) # If you don't provide a root_dir, operations will default to the current working directorytoolkit.get_tools() [CopyFileTool(name='copy_file', description='Create a copy of a file in a specified location', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), DeleteFileTool(name='file_delete', description='Delete a file', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), FileSearchTool(name='file_search', description='Recursively search for files in a subdirectory that match the regex pattern', args_schema=, return_direct=False, verbose=False, callback_manager=, root_dir='/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/tmpxb8c3aug'), MoveFileTool(name='move_file', description='Move or rename a file from one location to another', args_schema= Finished chain.PreviousGoogle Serper APINextGraphQL toolUsing a toolUsing within an agentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Job Status: Status.PROCESSING eta: None Observation: /Users/harrisonchase/workplace/langchain/docs/modules/agents/tools/integrations/2e280ce4-4974-4420-8680-450825c31601/tmpfmiz2g1c.jpg Thought: Do I need to use a tool? Yes Action: ImageCaptioner Action Input: /Users/harrisonchase/workplace/langchain/docs/modules/agents/tools/integrations/2e280ce4-4974-4420-8680-450825c31601/tmpfmiz2g1c.jpg Job Status: Status.STARTING eta: None Observation: a painting of a dog sitting on a skateboard Thought: Do I need to use a tool? Yes Action: TextToVideo Action Input: a painting of a dog sitting on a skateboard Job Status: Status.STARTING eta: None Due to heavy traffic on this app, the prediction will take approximately 73 seconds.For faster predictions without waiting in queue, you may duplicate the space using: Client.duplicate(damo-vilab/modelscope-text-to-video-synthesis) Job Status: Status.IN_QUEUE eta: 73.89824726581574 Due to heavy traffic on this app, the prediction will take approximately 42 seconds.For faster predictions without waiting in queue, you may duplicate the space using: Client.duplicate(damo-vilab/modelscope-text-to-video-synthesis) Job Status: Status.IN_QUEUE eta: 42.49370198879602 Job Status: Status.IN_QUEUE eta: 21.314297944849187 Observation:"but improve my prompt prior to using an image generator." "Please caption the generated image and create a video for it using the improved prompt." )) Loaded as API: https://gradio-client-demos-stable-diffusion.hf.space ✔ Loaded as API: https://taesiri-blip-2.hf.space ✔ Loaded as API: https://microsoft-promptist.hf.space ✔ Loaded as API: https://damo-vilab-modelscope-text-to-video-synthesis.hf.space ✔ > Entering new AgentExecutor chain... Thought: Do I need to use a tool? Yes Action: StableDiffusionPromptGenerator Action Input: A dog riding a skateboard Job Status: Status.STARTING eta: None Observation: A dog riding a skateboard, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha Thought: Do I need to use a tool? Yes Action: StableDiffusion Action Input: A dog riding a skateboard, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha Job Status: Status.STARTING eta: None Job Status: Status.PROCESSING eta: None Observation:a tool​from gradio_tools.tools import StableDiffusionToollocal_file_path = StableDiffusionTool().langchain.run( "Please create a photo of a dog riding a skateboard")local_file_path Loaded as API: https://gradio-client-demos-stable-diffusion.hf.space ✔ Job Status: Status.STARTING eta: None '/Users/harrisonchase/workplace/langchain/docs/modules/agents/tools/integrations/b61c1dd9-47e2-46f1-a47c-20d27640993d/tmp4ap48vnm.jpg'from PIL import Imageim = Image.open(local_file_path)display(im) ![png](_gradio_tools_files/output_7_0.png) Using within an agent​from langchain.agents import initialize_agentfrom langchain.llms import OpenAIfrom gradio_tools.tools import ( StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool, TextToVideoTool,)from langchain.memory import ConversationBufferMemoryllm = OpenAI(temperature=0)memory = ConversationBufferMemory(memory_key="chat_history")tools = [ StableDiffusionTool().langchain, ImageCaptioningTool().langchain, StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain,]agent = initialize_agent( tools, llm, memory=memory, agent="conversational-react-description", verbose=True)output = agent.run( input=( "Please create a photo of a dog riding a skateboard " "but improve my prompt prior to using an image generator."Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsGradio ToolsOn this pageGradio ToolsThere are many 1000s of Gradio apps on Hugging Face Spaces. This library puts them at the tips of your LLM's fingers 🦾Specifically, gradio-tools is a Python library for converting Gradio apps into tools that can be leveraged by a large language model (LLM)-based agent to complete its task. For example, an LLM could use a Gradio tool to transcribe a voice recording it finds online and then summarize it for you. Or it could use a different Gradio tool to apply OCR to a document on your Google Drive and then answer questions about it.It's very easy to create you own tool if you want to use a space that's not one of the pre-built tools. Please see this section of the gradio-tools documentation for information on how to do that. All contributions are welcome!# !pip install gradio_toolsUsing a tool​from gradio_tools.tools import StableDiffusionToollocal_file_path =Gradio Tools | 🦜ï¸�🔗 Langchainloaded directly using the load_huggingface_tool function.# Requires transformers>=4.29.0 and huggingface_hub>=0.14.1pip install --upgrade transformers huggingface_hub > /dev/nullfrom langchain.agents import load_huggingface_tooltool = load_huggingface_tool("lysandre/hf-model-downloads")print(f"{tool.name}: {tool.description}") model_download_counter: This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It takes the name of the category (such as text-classification, depth-estimation, etc), and returns the name of the checkpointtool.run("text-classification") 'facebook/bart-large-mnli'PreviousGraphQL toolNextHuman as a toolHuggingFace ToolsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolshuggingface_toolsOn this pagehuggingface_toolsHuggingFace Tools​Huggingface Tools supporting text I/O can behuggingface_tools | 🦜ï¸�🔗 Langchain{\n \"title\": \"A New Hope\"\n },\n {\n \"title\": \"The Empire Strikes Back\"\n },\n {\n \"title\": \"Return of the Jedi\"\n },\n {\n \"title\": \"The Phantom Menace\"\n },\n {\n \"title\": \"Attack of the Clones\"\n },\n {\n \"title\": \"Revenge of the Sith\"\n }\n ]\n }\n}" Thought: I now know the titles of all the star wars films Final Answer: The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith. > Finished chain. 'The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.'PreviousGradio ToolsNexthuggingface_toolsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.the tool.from langchain import OpenAIfrom langchain.agents import load_tools, initialize_agent, AgentTypefrom langchain.utilities import GraphQLAPIWrapperllm = OpenAI(temperature=0)tools = load_tools( ["graphql"], graphql_endpoint="https://swapi-graphql.netlify.app/.netlify/functions/index",)agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)Now, we can use the Agent to run queries against the Star Wars GraphQL API. Let's ask the Agent to list all the Star Wars films and their release dates.graphql_fields = """allFilms { films { title director releaseDate speciesConnection { species { name classification homeworld { name } } } } }"""suffix = "Search for the titles of all the stawars films stored in the graphql database that has this schema "agent.run(suffix + graphql_fields) > Entering new AgentExecutor chain... I need to query the graphql database to get the titles of all the star wars films Action: query_graphql Action Input: query { allFilms { films { title } } } Observation: "{\n \"allFilms\": {\n \"films\": [\n {\n \"title\": \"A NewSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsGraphQL toolGraphQL toolThis Jupyter Notebook demonstrates how to use the BaseGraphQLTool component with an Agent.GraphQL is a query language for APIs and a runtime for executing those queries against your data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.By including a BaseGraphQLTool in the list of tools provided to an Agent, you can grant your Agent the ability to query data from GraphQL APIs for any purposes you need.In this example, we'll be using the public Star Wars GraphQL API available at the following endpoint: https://swapi-graphql.netlify.app/.netlify/functions/index.First, you need to install httpx and gql Python packages.pip install httpx gql > /dev/nullNow, let's create a BaseGraphQLTool instance with the specified Star Wars API endpoint and initialize an Agent with the tool.from langchain import OpenAIfrom langchain.agents import load_tools, initialize_agent,GraphQL tool | 🦜ï¸�🔗 Langchainstudy the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.\n\nPublished: 2003-03-18\nTitle: Calculation of fluxes of charged particles and neutrinos from atmospheric showers\nAuthors: V. Plyaskin\nSummary: The results on the fluxes of charged particles and neutrinos from a\n3-dimensional (3D) simulation of atmospheric showers are presented. An\nagreement of calculated fluxes with data on charged particles from the AMS and\nCAPRICE detectors is demonstrated. Predictions on neutrino fluxes at different\nexperimental sites are compared with results from other calculations.'Now, we are trying to find information about non-existing article. In this case, the response is "No good Arxiv Result was found"docs = arxiv.run("1605.08386WWW")docs 'No good Arxiv Result was found'PreviousApifyNextawslambdaThe ArXiv API WrapperCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Heat-bath random walks with Markov bases\nAuthors: Caprice Stanley, Tobias Windisch\nSummary: Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'Now, we want to get information about one author, Caprice Stanley.This query returns information about three articles. By default, the query returns information only about three top articles.docs = arxiv.run("Caprice Stanley")docs 'Published: 2017-10-10\nTitle: On Mixing Behavior of a Family of Random Walks Determined by a Linear Recurrence\nAuthors: Caprice Stanley, Seth Sullivant\nSummary: We study random walks on the integers mod $G_n$ that are determined by an\ninteger sequence $\\{ G_n \\}_{n \\geq 1}$ generated by a linear recurrence\nrelation. Fourier analysis provides explicit formulas to compute the\neigenvalues of the transition matrices and we use this to bound the mixing time\nof the random walks.\n\nPublished: 2016-05-26\nTitle: Heat-bath random walks with Markov bases\nAuthors: Caprice Stanley, Tobias Windisch\nSummary: Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditionsObservation: Published: 2016-05-26 Title: Heat-bath random walks with Markov bases Authors: Caprice Stanley, Tobias Windisch Summary: Graphs on lattice points are studied whose edges come from a finite set of allowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a fixed integer matrix can be bounded from above by a constant. We then study the mixing behaviour of heat-bath random walks on these graphs. We also state explicit conditions on the set of moves so that the heat-bath random walk, a generalization of the Glauber dynamics, is an expander in fixed dimension. Thought:The paper is about heat-bath random walks with Markov bases on graphs of lattice points. Final Answer: The paper 1605.08386 is about heat-bath random walks with Markov bases on graphs of lattice points. > Finished chain. 'The paper 1605.08386 is about heat-bath random walks with Markov bases on graphs of lattice points.'The ArXiv API Wrapper​The tool wraps the API Wrapper. Below, we can explore some of the features it provides.from langchain.utilities import ArxivAPIWrapperRun a query to get information about some scientific article/articles. The query text is limited to 300 characters.It returns these article fields:Publishing dateTitleAuthorsSummaryNext query returns information about one article with arxiv Id equal "1605.08386". arxiv = ArxivAPIWrapper()docs = arxiv.run("1605.08386")docs 'Published: 2016-05-26\nTitle: Heat-bath random walks with Markov bases\nAuthors: Caprice Stanley, TobiasSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsArXiv API ToolOn this pageArXiv API ToolThis notebook goes over how to use the arxiv component. First, you need to install arxiv python package.pip install arxivfrom langchain.chat_models import ChatOpenAIfrom langchain.agents import load_tools, initialize_agent, AgentTypellm = ChatOpenAI(temperature=0.0)tools = load_tools( ["arxiv"],)agent_chain = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)agent_chain.run( "What's the paper 1605.08386 about?",) > Entering new AgentExecutor chain... I need to use Arxiv to search for the paper. Action: Arxiv Action Input: "1605.08386" Observation: Published: 2016-05-26ArXiv API Tool | 🦜ï¸�🔗 Langchainusing a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.📄ï¸� Wolfram AlphaThis notebook goes over how to use the wolfram alpha component.📄ï¸� YouTubeSearchToolThis notebook shows how to use a tool to search YouTube📄ï¸� Zapier Natural Language Actions API\PreviousXorbits AgentNextApifyCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.IFTTT Webhooks.📄ï¸� Lemon AI NLP Workflow Automation\📄ï¸� Metaphor SearchMetaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page.📄ï¸� OpenWeatherMap APIThis notebook goes over how to use the OpenWeatherMap component to fetch weather information.📄ï¸� PubMed ToolThis notebook goes over how to use PubMed as a tool📄ï¸� RequestsThe web contains a lot of information that LLMs do not have access to. In order to easily let LLMs interact with that information, we provide a wrapper around the Python Requests module that takes in a URL and fetches data from that URL.📄ï¸� SceneXplainSceneXplain is an ImageCaptioning service accessible through the SceneXplain Tool.📄ï¸� Search ToolsThis notebook shows off usage of various search tools.📄ï¸� SearxNG Search APIThis notebook goes over how to use a self hosted SearxNG search API to search the web.📄ï¸� SerpAPIThis notebook goes over how to use the SerpAPI component to search the web.📄ï¸� TwilioThis notebook goes over how to use the Twilio API wrapper to send a message through SMS or Twilio Messaging Channels.📄ï¸� WikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work inAPI wrapper to obtain search engine results. The DataForSeo API allows users to retrieve SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc.📄ï¸� DuckDuckGo SearchThis notebook goes over how to use the duck-duck-go search component.📄ï¸� File System ToolsLangChain provides tools for interacting with a local file system out of the box. This notebook walks through some of them.📄ï¸� Golden QueryThis notebook goes over how to use the golden-query tool.📄ï¸� Google PlacesThis notebook goes through how to use Google Places API📄ï¸� Google SearchThis notebook goes over how to use the google search component.📄ï¸� Google Serper APIThis notebook goes over how to use the Google Serper component to search the web. First you need to sign up for a free account at serper.dev and get your api key.📄ï¸� Gradio ToolsThere are many 1000s of Gradio apps on Hugging Face Spaces. This library puts them at the tips of your LLM's fingers 🦾📄ï¸� GraphQL toolThis Jupyter Notebook demonstrates how to use the BaseGraphQLTool component with an Agent.📄ï¸� huggingface_toolsHuggingFace Tools📄ï¸� Human as a toolHuman are AGI so they can certainly be used as a tool to help out AI agent📄ï¸� IFTTT WebHooksThis notebook shows how to use IFTTT Webhooks.📄ï¸� Lemon AI NLP WorkflowSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsTools📄ï¸� ApifyThis notebook shows how to use the Apify integration for LangChain.📄ï¸� ArXiv API ToolThis notebook goes over how to use the arxiv component.📄ï¸� awslambdaAWS Lambda API📄ï¸� Shell ToolGiving agents access to the shell is powerful (though risky outside a sandboxed environment).📄ï¸� Bing SearchThis notebook goes over how to use the bing search component.📄ï¸� Brave SearchThis notebook goes over how to use the Brave Search tool.📄ï¸� ChatGPT PluginsThis example shows how to use ChatGPT Plugins within LangChain abstractions.📄ï¸� DataForSeo API WrapperThis notebook demonstrates how to use the DataForSeo API wrapper to obtain search engine results. The DataForSeo API allows users to retrieveTools | 🦜ï¸�🔗 Langchainwith mummy not thereâ€�\n\nYou dig deep and come up with,","index":0,"logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":4,"completion_tokens":16,"total_tokens":20}} Thought: I now know the final answer. Final Answer: The response of the POST request is {"id":"cmpl-70Ivzip3dazrIXU8DSVJGzFJj2rdv","object":"text_completion","created":1680307139,"model":"davinci","choices":[{"text":" with mummy not thereâ€�\n\nYou dig deep and come up with,","index":0,"logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":4,"completion_tokens":16,"total_tokens":20}} > Finished chain. 'The response of the POST request is {"id":"cmpl-70Ivzip3dazrIXU8DSVJGzFJj2rdv","object":"text_completion","created":1680307139,"model":"davinci","choices":[{"text":" with mummy not thereâ€�\\n\\nYou dig deep and come up with,","index":0,"logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":4,"completion_tokens":16,"total_tokens":20}}'PreviousOffice365 ToolkitNextNatural Language APIs1st example: hierarchical planning agentTo start, let's collect some OpenAPI specs.How big is this spec?Let's see some examples!2nd example: "json explorer" agentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.a dict, get the value directly.') Thought: I should look at the $ref key to get the value directly Action: json_spec_get_value Action Input: data["paths"]["/completions"]["post"]["requestBody"]["content"]["application/json"]["schema"]["$ref"] Observation: #/components/schemas/CreateCompletionRequest Thought: I should look at the CreateCompletionRequest schema to see what parameters are required Action: json_spec_list_keys Action Input: data["components"]["schemas"]["CreateCompletionRequest"] Observation: ['type', 'properties', 'required'] Thought: I should look at the required key to see what parameters are required Action: json_spec_get_value Action Input: data["components"]["schemas"]["CreateCompletionRequest"]["required"] Observation: ['model'] Thought: I now know the final answer Final Answer: The required parameters for a POST request to the /completions endpoint are 'model'. > Finished chain. Observation: The required parameters for a POST request to the /completions endpoint are 'model'. Thought: I now know the parameters needed to make the request. Action: requests_post Action Input: { "url": "https://api.openai.com/v1/completions", "data": { "model": "davinci", "prompt": "tell me a joke" } } Observation: {"id":"cmpl-70Ivzip3dazrIXU8DSVJGzFJj2rdv","object":"text_completion","created":1680307139,"model":"davinci","choices":[{"text":" with mummy not thereâ€�\n\nYouAction Input: data["paths"]["/completions"]["post"] Observation: ['operationId', 'tags', 'summary', 'requestBody', 'responses', 'x-oaiMeta'] Thought: I should look at the requestBody key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]["requestBody"] Observation: ['required', 'content'] Thought: I should look at the content key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]["requestBody"]["content"] Observation: ['application/json'] Thought: I should look at the application/json key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]["requestBody"]["content"]["application/json"] Observation: ['schema'] Thought: I should look at the schema key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]["requestBody"]["content"]["application/json"]["schema"] Observation: ['$ref'] Thought: I should look at the $ref key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]["requestBody"]["content"]["application/json"]["schema"]["$ref"] Observation: ValueError('Value at path `data["paths"]["/completions"]["post"]["requestBody"]["content"]["application/json"]["schema"]["$ref"]` is not a dict, get the value directly.') Thought: I should look at the $refendpoint is data["paths"][2] Thought: I should find the required parameters for the POST request. Action: json_explorer Action Input: What are the required parameters for a POST request to the /completions endpoint? > Entering new AgentExecutor chain... Action: json_spec_list_keys Action Input: data Observation: ['openapi', 'info', 'servers', 'tags', 'paths', 'components', 'x-oaiMeta'] Thought: I should look at the paths key to see what endpoints exist Action: json_spec_list_keys Action Input: data["paths"] Observation: ['/engines', '/engines/{engine_id}', '/completions', '/chat/completions', '/edits', '/images/generations', '/images/edits', '/images/variations', '/embeddings', '/audio/transcriptions', '/audio/translations', '/engines/{engine_id}/search', '/files', '/files/{file_id}', '/files/{file_id}/content', '/answers', '/classifications', '/fine-tunes', '/fine-tunes/{fine_tune_id}', '/fine-tunes/{fine_tune_id}/cancel', '/fine-tunes/{fine_tune_id}/events', '/models', '/models/{model}', '/moderations'] Thought: I should look at the /completions endpoint to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"] Observation: ['post'] Thought: I should look at the post key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]> Finished chain. Observation: The base url for the API is https://api.openai.com/v1 Thought: I should find the path for the /completions endpoint. Action: json_explorer Action Input: What is the path for the /completions endpoint? > Entering new AgentExecutor chain... Action: json_spec_list_keys Action Input: data Observation: ['openapi', 'info', 'servers', 'tags', 'paths', 'components', 'x-oaiMeta'] Thought: I should look at the paths key to see what endpoints exist Action: json_spec_list_keys Action Input: data["paths"] Observation: ['/engines', '/engines/{engine_id}', '/completions', '/chat/completions', '/edits', '/images/generations', '/images/edits', '/images/variations', '/embeddings', '/audio/transcriptions', '/audio/translations', '/engines/{engine_id}/search', '/files', '/files/{file_id}', '/files/{file_id}/content', '/answers', '/classifications', '/fine-tunes', '/fine-tunes/{fine_tune_id}', '/fine-tunes/{fine_tune_id}/cancel', '/fine-tunes/{fine_tune_id}/events', '/models', '/models/{model}', '/moderations'] Thought: I now know the path for the /completions endpoint Final Answer: The path for the /completions endpoint is data["paths"][2] > Finished chain. Observation: The path for the /completions endpoint is data["paths"][2] Thought: I should find the= JsonSpec(dict_=data, max_value_length=4000)openapi_toolkit = OpenAPIToolkit.from_llm( OpenAI(temperature=0), json_spec, openai_requests_wrapper, verbose=True)openapi_agent_executor = create_openapi_agent( llm=OpenAI(temperature=0), toolkit=openapi_toolkit, verbose=True)openapi_agent_executor.run( "Make a post request to openai /completions. The prompt should be 'tell me a joke.'") > Entering new AgentExecutor chain... Action: json_explorer Action Input: What is the base url for the API? > Entering new AgentExecutor chain... Action: json_spec_list_keys Action Input: data Observation: ['openapi', 'info', 'servers', 'tags', 'paths', 'components', 'x-oaiMeta'] Thought: I should look at the servers key to see what the base url is Action: json_spec_list_keys Action Input: data["servers"][0] Observation: ValueError('Value at path `data["servers"][0]` is not a dict, get the value directly.') Thought: I should get the value of the servers key Action: json_spec_get_value Action Input: data["servers"][0] Observation: {'url': 'https://api.openai.com/v1'} Thought: I now know the base url for the API Final Answer: The base url for the API is https://api.openai.com/v1 > Finished chain. Observation: The base url for the API is"prompt": "Give a short piece of advice on how to improve communication skills"}, "output_instructions": "Extract the text from the first choice"} Observation: "Some basic advice for improving communication skills would be to make sure to listen" Thought:I am finished executing the plan. Final Answer: Some basic advice for improving communication skills would be to make sure to listen. > Finished chain. Observation: Some basic advice for improving communication skills would be to make sure to listen. Thought:I am finished executing the plan and have the information the user asked for. Final Answer: A short piece of advice for improving communication skills is to make sure to listen. > Finished chain. 'A short piece of advice for improving communication skills is to make sure to listen.'Takes awhile to get there!2nd example: "json explorer" agent​Here's an agent that's not particularly practical, but neat! The agent has access to 2 toolkits. One comprises tools to interact with json: one tool to list the keys of a json object and another tool to get the value for a given key. The other toolkit comprises requests wrappers to send GET and POST requests. This agent consumes a lot calls to the language model, but does a surprisingly decent job.from langchain.agents import create_openapi_agentfrom langchain.agents.agent_toolkits import OpenAPIToolkitfrom langchain.llms.openai import OpenAIfrom langchain.requests import TextRequestsWrapperfrom langchain.tools.json.tool import JsonSpecwith open("openai_openapi.yaml") as f: data = yaml.load(f, Loader=yaml.FullLoader)json_spec = JsonSpec(dict_=data, max_value_length=4000)openapi_toolkit =to adjust my plan to include a more specific prompt for generating a short piece of advice on improving communication skills. Action: api_planner Action Input: I need to find the right API calls to generate a short piece of advice on improving communication skills, including the model parameter in the POST request Observation: 1. GET /models to retrieve the list of available models 2. Choose a suitable model for generating text (e.g., text-davinci-002) 3. POST /completions with the chosen model and a prompt related to improving communication skills to generate a short piece of advice Thought:I have an updated plan, now I need to execute the API calls. Action: api_controller Action Input: 1. GET /models to retrieve the list of available models 2. Choose a suitable model for generating text (e.g., text-davinci-002) 3. POST /completions with the chosen model and a prompt related to improving communication skills to generate a short piece of advice > Entering new AgentExecutor chain... Action: requests_get Action Input: {"url": "https://api.openai.com/v1/models", "output_instructions": "Extract the names of the models"} Observation: babbage, davinci, text-davinci-edit-001, babbage-code-search-code, text-similarity-babbage-001, code-davinci-edit-001, text-davinci-edit-001, ada Thought:Action: requests_post Action Input: {"url": "https://api.openai.com/v1/completions", "data": {"model": "text-davinci-002", "prompt": "Give a short piece of advice on how to improve communicationAction Input: 1. GET /models to retrieve the list of available models 2. Choose a suitable model from the list 3. POST /completions with the chosen model as a parameter to generate a short piece of advice > Entering new AgentExecutor chain... Action: requests_get Action Input: {"url": "https://api.openai.com/v1/models", "output_instructions": "Extract the ids of the available models"} Observation: babbage, davinci, text-davinci-edit-001, babbage-code-search-code, text-similarity-babbage-001, code-davinci-edit-001, text-davinci-edit-001, ada Thought:Action: requests_post Action Input: {"url": "https://api.openai.com/v1/completions", "data": {"model": "davinci", "prompt": "Give me a short piece of advice on how to improve communication skills."}, "output_instructions": "Extract the text from the first choice"} Observation: "I'd like to broaden my horizon.\n\nI was trying to" Thought:I cannot finish executing the plan without knowing some other information. Final Answer: The generated text is not a piece of advice on improving communication skills. I would need to retry the API call with a different prompt or model to get a more relevant response. > Finished chain. Observation: The generated text is not a piece of advice on improving communication skills. I would need to retry the API call with a different prompt or model to get a more relevant response. Thought:I need to adjust my plan to include a more specific prompt for generating a short piece of advice on improving communicationada-code-search-text, text-search-ada-doc-001, davinci-instruct-beta, text-similarity-curie-001, code-search-ada-code-001 Thought:I will use the "davinci" engine to generate a short piece of advice. Action: requests_post Action Input: {"url": "https://api.openai.com/v1/completions", "data": {"engine": "davinci", "prompt": "Give me a short piece of advice on how to be more productive."}, "output_instructions": "Extract the text from the first choice"} Observation: "you must provide a model parameter" Thought:!! Could not _extract_tool_and_input from "I cannot finish executing the plan without knowing how to provide the model parameter correctly." in _get_next_action I cannot finish executing the plan without knowing how to provide the model parameter correctly. > Finished chain. Observation: I need more information on how to provide the model parameter correctly in the POST request to generate a short piece of advice. Thought:I need to adjust my plan to include the model parameter in the POST request. Action: api_planner Action Input: I need to find the right API calls to generate a short piece of advice, including the model parameter in the POST request Observation: 1. GET /models to retrieve the list of available models 2. Choose a suitable model from the list 3. POST /completions with the chosen model as a parameter to generate a short piece of advice Thought:I have an updated plan, now I need to execute the API calls. Action: api_controller Action Input: 1. GET /models to retrieve the list of available models> Entering new AgentExecutor chain... Action: api_planner Action Input: I need to find the right API calls to generate a short piece of advice Observation: 1. GET /engines to retrieve the list of available engines 2. POST /completions with the selected engine and a prompt for generating a short piece of advice Thought:I have the plan, now I need to execute the API calls. Action: api_controller Action Input: 1. GET /engines to retrieve the list of available engines 2. POST /completions with the selected engine and a prompt for generating a short piece of advice > Entering new AgentExecutor chain... Action: requests_get Action Input: {"url": "https://api.openai.com/v1/engines", "output_instructions": "Extract the ids of the engines"} Observation: babbage, davinci, text-davinci-edit-001, babbage-code-search-code, text-similarity-babbage-001, code-davinci-edit-001, text-davinci-001, ada, babbage-code-search-text, babbage-similarity, whisper-1, code-search-babbage-text-001, text-curie-001, code-search-babbage-code-001, text-ada-001, text-embedding-ada-002, text-similarity-ada-001, curie-instruct-beta, ada-code-search-code, ada-similarity, text-davinci-003, code-search-ada-text-001, text-search-ada-query-001, davinci-search-document, ada-code-search-text, text-search-ada-doc-001, davinci-instruct-beta,name: 'Get Away Jordan' } ] Thought:I am finished executing the plan. Final Answer: The recommended blues song for user Jeremy Welborn (ID: 22rhrz4m4kvpxlsb5hezokzwi) is "Get Away Jordan" with the track ID: 03lXHmokj9qsXspNsPoirR. > Finished chain. Observation: The recommended blues song for user Jeremy Welborn (ID: 22rhrz4m4kvpxlsb5hezokzwi) is "Get Away Jordan" with the track ID: 03lXHmokj9qsXspNsPoirR. Thought:I am finished executing the plan and have the information the user asked for. Final Answer: The recommended blues song for you is "Get Away Jordan" with the track ID: 03lXHmokj9qsXspNsPoirR. > Finished chain. 'The recommended blues song for you is "Get Away Jordan" with the track ID: 03lXHmokj9qsXspNsPoirR.'Try another API.​headers = {"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}"}openai_requests_wrapper = RequestsWrapper(headers=headers)# Meta!llm = OpenAI(model_name="gpt-4", temperature=0.25)openai_agent = planner.create_openapi_agent( openai_api_spec, openai_requests_wrapper, llm)user_query = "generate a short piece of advice"openai_agent.run(user_query) > Entering new AgentExecutor chain...classical, club, comedy, country, dance, dancehall, death-metal, deep-house, detroit-techno, disco, disney, drum-and-bass, dub, dubstep, edm, electro, electronic, emo, folk, forro, french, funk, garage, german, gospel, goth, grindcore, groove, grunge, guitar, happy, hard-rock, hardcore, hardstyle, heavy-metal, hip-hop, holidays, honky-tonk, house, idm, indian, indie, indie-pop, industrial, iranian, j-dance, j-idol, j-pop, j-rock, jazz, k-pop, kids, latin, latino, malay, mandopop, metal, metal-misc, metalcore, minimal-techno, movies, mpb, new-age, new-release, opera, pagode, party, philippines- Thought: Retrying langchain.llms.openai.completion_with_retry.._completion_with_retry in 4.0 seconds as it raised RateLimitError: That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID 2167437a0072228238f3c0c5b3882764 in your message.). Action: requests_get Action Input: {"url": "https://api.spotify.com/v1/recommendations?seed_genres=blues", "output_instructions": "Extract the list of recommended tracks with their ids and names"} Observation: [ { id: '03lXHmokj9qsXspNsPoirR', name: 'Get Away Jordan' } ]api_planner Action Input: I need to find the right API calls to get a blues song recommendation for the user Observation: 1. GET /me to get the current user's information 2. GET /recommendations/available-genre-seeds to retrieve a list of available genres 3. GET /recommendations with the seed_genre parameter set to "blues" to get a blues song recommendation for the user Thought:I have the plan, now I need to execute the API calls. Action: api_controller Action Input: 1. GET /me to get the current user's information 2. GET /recommendations/available-genre-seeds to retrieve a list of available genres 3. GET /recommendations with the seed_genre parameter set to "blues" to get a blues song recommendation for the user > Entering new AgentExecutor chain... Action: requests_get Action Input: {"url": "https://api.spotify.com/v1/me", "output_instructions": "Extract the user's id and username"} Observation: ID: 22rhrz4m4kvpxlsb5hezokzwi, Username: Jeremy Welborn Thought:Action: requests_get Action Input: {"url": "https://api.spotify.com/v1/recommendations/available-genre-seeds", "output_instructions": "Extract the list of available genres"} Observation: acoustic, afrobeat, alt-rock, alternative, ambient, anime, black-metal, bluegrass, blues, bossanova, brazil, breakbeat, british, cantopop, chicago-house, children, chill, classical, club, comedy, country, dance, dancehall, death-metal, deep-house,Thought:Action: requests_post Action Input: {"url": "https://api.spotify.com/v1/playlists/7lzoEi44WOISnFYlrAIqyX/tracks", "data": {"uris": ["spotify:track:7q3kkfAVpmcZ8g6JUThi3o"]}, "output_instructions": "Confirm that the track was added to the playlist"} Observation: The track was added to the playlist, confirmed by the snapshot_id: MiwxODMxNTMxZTFlNzg3ZWFlZmMxYTlmYWQyMDFiYzUwNDEwMTAwZmE1. Thought:I am finished executing the plan. Final Answer: The first song from the "Kind of Blue" album has been added to the "Machine Blues" playlist. > Finished chain. Observation: The first song from the "Kind of Blue" album has been added to the "Machine Blues" playlist. Thought:I am finished executing the plan and have created the playlist with the first song from Kind of Blue. Final Answer: I have created a playlist called "Machine Blues" with the first song from the "Kind of Blue" album. > Finished chain. 'I have created a playlist called "Machine Blues" with the first song from the "Kind of Blue" album.'user_query = "give me a song I'd like, make it blues-ey"spotify_agent.run(user_query) > Entering new AgentExecutor chain... Action: api_planner Action Input: I need to find the right API calls to get acreate a new playlist named "Machine Blues" for the current user 5. POST /playlists/{playlist_id}/tracks to add the first song from "Kind of Blue" to the "Machine Blues" playlist > Entering new AgentExecutor chain... Action: requests_get Action Input: {"url": "https://api.spotify.com/v1/search?q=Kind%20of%20Blue&type=album", "output_instructions": "Extract the id of the first album in the search results"} Observation: 1weenld61qoidwYuZ1GESA Thought:Action: requests_get Action Input: {"url": "https://api.spotify.com/v1/albums/1weenld61qoidwYuZ1GESA/tracks", "output_instructions": "Extract the id of the first track in the album"} Observation: 7q3kkfAVpmcZ8g6JUThi3o Thought:Action: requests_get Action Input: {"url": "https://api.spotify.com/v1/me", "output_instructions": "Extract the id of the current user"} Observation: 22rhrz4m4kvpxlsb5hezokzwi Thought:Action: requests_post Action Input: {"url": "https://api.spotify.com/v1/users/22rhrz4m4kvpxlsb5hezokzwi/playlists", "data": {"name": "Machine Blues"}, "output_instructions": "Extract the id of the created playlist"} Observation: 7lzoEi44WOISnFYlrAIqyX Thought:Action: requests_post Action Input: {"url":use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI` warnings.warn(spotify_agent = planner.create_openapi_agent(spotify_api_spec, requests_wrapper, llm)user_query = ( "make me a playlist with the first song from kind of blue. call it machine blues.")spotify_agent.run(user_query) > Entering new AgentExecutor chain... Action: api_planner Action Input: I need to find the right API calls to create a playlist with the first song from Kind of Blue and name it Machine Blues Observation: 1. GET /search to search for the album "Kind of Blue" 2. GET /albums/{id}/tracks to get the tracks from the "Kind of Blue" album 3. GET /me to get the current user's information 4. POST /users/{user_id}/playlists to create a new playlist named "Machine Blues" for the current user 5. POST /playlists/{playlist_id}/tracks to add the first song from "Kind of Blue" to the "Machine Blues" playlist Thought:I have the plan, now I need to execute the API calls. Action: api_controller Action Input: 1. GET /search to search for the album "Kind of Blue" 2. GET /albums/{id}/tracks to get the tracks from the "Kind of Blue" album 3. GET /me to get the current user's information 4. POST /users/{user_id}/playlists to create a new playlist named "Machine Blues" for the current user 5. POST"authorizationCode" ]["scopes"].keys() ) access_token = util.prompt_for_user_token(scope=",".join(scopes)) return {"Authorization": f"Bearer {access_token}"}# Get API credentials.headers = construct_spotify_auth_headers(raw_spotify_api_spec)requests_wrapper = RequestsWrapper(headers=headers)How big is this spec?​endpoints = [ (route, operation) for route, operations in raw_spotify_api_spec["paths"].items() for operation in operations if operation in ["get", "post"]]len(endpoints) 63import tiktokenenc = tiktoken.encoding_for_model("text-davinci-003")def count_tokens(s): return len(enc.encode(s))count_tokens(yaml.dump(raw_spotify_api_spec)) 80326Let's see some examples!​Starting with GPT-4. (Some robustness iterations under way for GPT-3 family.)from langchain.llms.openai import OpenAIfrom langchain.agents.agent_toolkits.openapi import plannerllm = OpenAI(model_name="gpt-4", temperature=0.0) /Users/jeremywelborn/src/langchain/langchain/llms/openai.py:169: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI` warnings.warn( /Users/jeremywelborn/src/langchain/langchain/llms/openai.py:608: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use:in 0.02s 2023-03-31 15:45:58 (13.3 MB/s) - ‘openapi.yaml’ saved [286747/286747] from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_specwith open("openai_openapi.yaml") as f: raw_openai_api_spec = yaml.load(f, Loader=yaml.Loader)openai_api_spec = reduce_openapi_spec(raw_openai_api_spec)with open("klarna_openapi.yaml") as f: raw_klarna_api_spec = yaml.load(f, Loader=yaml.Loader)klarna_api_spec = reduce_openapi_spec(raw_klarna_api_spec)with open("spotify_openapi.yaml") as f: raw_spotify_api_spec = yaml.load(f, Loader=yaml.Loader)spotify_api_spec = reduce_openapi_spec(raw_spotify_api_spec)We'll work with the Spotify API as one of the examples of a somewhat complex API. There's a bit of auth-related setup to do if you want to replicate this.You'll have to set up an application in the Spotify developer console, documented here, to get credentials: CLIENT_ID, CLIENT_SECRET, and REDIRECT_URI.To get an access tokens (and keep them fresh), you can implement the oauth flows, or you can use spotipy. If you've set your Spotify creedentials as environment variables SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET, and SPOTIPY_REDIRECT_URI, you can use the helper functions below:import spotipy.util as utilfrom langchain.requests import RequestsWrapperdef construct_spotify_auth_headers(raw_spec: dict): scopes = list( raw_spec["components"]["securitySchemes"]["oauth_2_0"]["flows"][52.84.150.61, ... Connecting to www.klarna.com (www.klarna.com)|52.84.150.34|:443... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [application/json] Saving to: ‘api-docs’ api-docs [ <=> ] 1.87K --.-KB/s in 0s 2023-03-31 15:45:57 (261 MB/s) - ‘api-docs’ saved [1916] --2023-03-31 15:45:57-- https://raw.githubusercontent.com/APIs-guru/openapi-directory/main/APIs/spotify.com/1.0.0/openapi.yaml Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.111.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 286747 (280K) [text/plain] Saving to: ‘openapi.yaml’ openapi.yaml 100%[===================>] 280.03K --.-KB/s in 0.02sopenapi.yaml openai_openapi.yamlwget https://www.klarna.com/us/shopping/public/openai/v0/api-docsmv api-docs klarna_openapi.yamlwget https://raw.githubusercontent.com/APIs-guru/openapi-directory/main/APIs/spotify.com/1.0.0/openapi.yamlmv openapi.yaml spotify_openapi.yaml --2023-03-31 15:45:56-- https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.111.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 122995 (120K) [text/plain] Saving to: ‘openapi.yaml’ openapi.yaml 100%[===================>] 120.11K --.-KB/s in 0.01s 2023-03-31 15:45:56 (10.4 MB/s) - ‘openapi.yaml’ saved [122995/122995] --2023-03-31 15:45:57-- https://www.klarna.com/us/shopping/public/openai/v0/api-docs Resolving www.klarna.com (www.klarna.com)... 52.84.150.34, 52.84.150.46, 52.84.150.61, ... Connecting toSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsOpenAPI agentsOn this pageOpenAPI agentsWe can construct agents to consume arbitrary APIs, here APIs conformant to the OpenAPI/Swagger specification.1st example: hierarchical planning agent​In this example, we'll consider an approach called hierarchical planning, common in robotics and appearing in recent works for LLMs X robotics. We'll see it's a viable approach to start working with a massive API spec AND to assist with user queries that require multiple steps against the API.The idea is simple: to get coherent agent behavior over long sequences behavior & to save on tokens, we'll separate concerns: a "planner" will be responsible for what endpoints to call and a "controller" will be responsible for how to call them.In the initial implementation, the planner is an LLM chain that has the name and a short description for each endpoint in context. The controller is an LLM agent that is instantiated with documentation for only the endpoints for a particular plan. There's a lot left to get this working very robustly :)To start, let's collect some OpenAPI specs.​import os, yamlwget https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yamlmv openapi.yaml openai_openapi.yamlwgetOpenAPI agents | 🦜ï¸�🔗 Langchaindifferent.'PreviousNatural Language APIsNextPlayWright Browser ToolkitUsing ZERO_SHOT_REACT_DESCRIPTIONUsing OpenAI FunctionsMulti DataFrame ExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Observation: NameError("name 'math' is not defined") Thought: I need to import the math library Action: python_repl_ast Action Input: import math Observation: Thought: I now need to calculate the square root of the average age Action: python_repl_ast Action Input: math.sqrt(df['Age'].mean()) Observation: 5.449689683556195 Thought: I now know the final answer Final Answer: The square root of the average age is 5.449689683556195. > Finished chain. 'The square root of the average age is 5.449689683556195.'Multi DataFrame Example​This next part shows how the agent can interact with multiple dataframes passed in as a list.df1 = df.copy()df1["Age"] = df1["Age"].fillna(df1["Age"].mean())agent = create_pandas_dataframe_agent(OpenAI(temperature=0), [df, df1], verbose=True)agent.run("how many rows in the age column are different?") > Entering new AgentExecutor chain... Thought: I need to compare the age columns in both dataframes Action: python_repl_ast Action Input: len(df1[df1['Age'] != df2['Age']]) Observation: 177 Thought: I now know the final answer Final Answer: 177 rows in the age column are different. > Finished chain. '177 rows in the age column are different.'PreviousNatural Language APIsNextPlayWright Browser ToolkitUsing ZERO_SHOT_REACT_DESCRIPTIONUsingdf, verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS,)agent.run("how many rows are there?") > Entering new chain... Invoking: `python_repl_ast` with `df.shape[0]` 891There are 891 rows in the dataframe. > Finished chain. 'There are 891 rows in the dataframe.'agent.run("how many people have more than 3 siblings") > Entering new AgentExecutor chain... Thought: I need to count the number of people with more than 3 siblings Action: python_repl_ast Action Input: df[df['SibSp'] > 3].shape[0] Observation: 30 Thought: I now know the final answer Final Answer: 30 people have more than 3 siblings. > Finished chain. '30 people have more than 3 siblings.'agent.run("whats the square root of the average age?") > Entering new AgentExecutor chain... Thought: I need to calculate the average age first Action: python_repl_ast Action Input: df['Age'].mean() Observation: 29.69911764705882 Thought: I now need to calculate the square root of the average age Action: python_repl_ast Action Input: math.sqrt(df['Age'].mean()) Observation: NameError("name 'math' is not defined")Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsPandas Dataframe AgentOn this pagePandas Dataframe AgentThis notebook shows how to use agents to interact with a pandas dataframe. It is mostly optimized for question answering.NOTE: this agent calls the Python agent under the hood, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. Use cautiously.from langchain.agents import create_pandas_dataframe_agentfrom langchain.chat_models import ChatOpenAIfrom langchain.agents.agent_types import AgentTypefrom langchain.llms import OpenAIimport pandas as pddf = pd.read_csv("titanic.csv")Using ZERO_SHOT_REACT_DESCRIPTION​This shows how to initialize the agent using the ZERO_SHOT_REACT_DESCRIPTION agent type. Note that this is an alternative to the above.agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)Using OpenAI Functions​This shows how to initialize the agent using the OPENAI_FUNCTIONS agent type. Note that this is an alternative to the above.agent = create_pandas_dataframe_agent( ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"), df, verbose=True,Pandas Dataframe Agent | 🦜ï¸�🔗 Langchain'number': 1}] Thought: I need to get more information about this issue. Action: Get Issue Action Input: 1 Observation: {'title': 'Change the main script to print Hello AI!', 'body': None, 'comments': '[]'} Thought: I need to update the main script to print Hello AI! Action: Update File Action Input: main.py OLD <<<< print("Hello World!") >>>> OLD NEW <<<< print("Hello AI!") >>>> NEW Observation: File content was not updated because the old content was not found. It may be helpful to use the read_file action to get the current file contents. Thought: I need to read the current file contents. Action: Read File Action Input: main.py Observation: print("Hello world!") Thought: I need to update the main script to print Hello AI! Action: Update File Action Input: main.py OLD <<<< print("Hello world!") >>>> OLD NEW <<<< print("Hello AI!") >>>> NEW Observation: Updated file main.py Thought: I now know the final answer Final Answer: The main script has been updated to print "Hello AI!" > Finished chain. 'The main script has been updated to print "Hello AI!"'PreviousDocument ComparisonNextGmail ToolkitCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.The GitHub tool allows agents to interact with a given GitHub repository. It implements CRUD operations for modifying files and can read/comment on Issues. The tool wraps the PyGitHub library.In order to interact with the GitHub API you must create a GitHub app. Next, you must set the following environment variables:GITHUB_APP_IDGITHUB_APP_PRIVATE_KEYGITHUB_REPOSITORYGITHUB_BRANCH%pip install pygithubimport osfrom langchain.agents import AgentTypefrom langchain.agents import initialize_agentfrom langchain.agents.agent_toolkits.github.toolkit import GitHubToolkitfrom langchain.llms import OpenAIfrom langchain.utilities.github import GitHubAPIWrapperos.environ["GITHUB_APP_ID"] = "your-github-app-id"os.environ["GITHUB_APP_PRIVATE_KEY"] = "/path/to/your/private/key"os.environ["GITHUB_REPOSITORY"] = "user/repo"os.environ["GITHUB_BRANCH"] = "branch-name"os.environ["OPENAI_API_KEY"] = "your-openai-api-key"llm = OpenAI(temperature=0)github = GitHubAPIWrapper()toolkit = GitHubToolkit.from_github_api_wrapper(github)agent = initialize_agent( toolkit.get_tools(), llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)agent.run( "You have the software engineering capabilities of a Google Principle engineer. You are tasked with completing issues on a github repository. Please look at the existing issues and complete them.") > Entering new AgentExecutor chain... I need to figure out what issues need to be completed and how to complete them. Action: Get Issues Action Input: N/A Observation: Found 1 issues: [{'title': 'Change the main script to print Hello AI!', 'number': 1}] Thought: I need to get more information about thisSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsGitHubGitHubThis notebook goes over how to use the GitHub tool.GitHub | 🦜ï¸�🔗 Langchain<> "")))----Question: What was the average of value in revenue in dollars?DAX: EVALUATE ROW("Average", AVERAGE(revenue_details[dollar_value]))----"""toolkit = PowerBIToolkit( powerbi=PowerBIDataset( dataset_id="", table_names=["table1", "table2"], credential=DefaultAzureCredential(), ), llm=smart_llm, examples=few_shots,)agent_executor = create_pbi_agent( llm=fast_llm, toolkit=toolkit, verbose=True,)agent_executor.run("What was the maximum of value in revenue in dollars in 2022?")PreviousPlayWright Browser ToolkitNextPython AgentSome notesInitializationExample: describing a tableExample: simple query on a tableExample: running queriesExample: add your own few-shot promptsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.import AgentExecutorfrom azure.identity import DefaultAzureCredentialfast_llm = ChatOpenAI( temperature=0.5, max_tokens=1000, model_name="gpt-3.5-turbo", verbose=True)smart_llm = ChatOpenAI(temperature=0, max_tokens=100, model_name="gpt-4", verbose=True)toolkit = PowerBIToolkit( powerbi=PowerBIDataset( dataset_id="", table_names=["table1", "table2"], credential=DefaultAzureCredential(), ), llm=smart_llm,)agent_executor = create_pbi_agent( llm=fast_llm, toolkit=toolkit, verbose=True,)Example: describing a table​agent_executor.run("Describe table1")Example: simple query on a table​In this example, the agent actually figures out the correct query to get a row count of the table.agent_executor.run("How many records are in table1?")Example: running queries​agent_executor.run("How many records are there by dimension1 in table2?")agent_executor.run("What unique values are there for dimensions2 in table2")Example: add your own few-shot prompts​# fictional examplefew_shots = """Question: How many rows are in the table revenue?DAX: EVALUATE ROW("Number of rows", COUNTROWS(revenue_details))----Question: How many rows are in the table revenue where year is not empty?DAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(revenue_details, revenue_details[year] <> "")))----Question: What was the average of value in revenue in dollars?DAX:Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsPowerBI Dataset AgentOn this pagePowerBI Dataset AgentThis notebook showcases an agent designed to interact with a Power BI Dataset. The agent is designed to answer more general questions about a dataset, as well as recover from errors.Note that, as this agent is in active development, all answers might not be correct. It runs against the executequery endpoint, which does not allow deletes.Some notes​It relies on authentication with the azure.identity package, which can be installed with pip install azure-identity. Alternatively you can create the powerbi dataset with a token as a string without supplying the credentials.You can also supply a username to impersonate for use with datasets that have RLS enabled. The toolkit uses a LLM to create the query from the question, the agent uses the LLM for the overall execution.Testing was done mostly with a text-davinci-003 model, codex models did not seem to perform ver well.Initialization​from langchain.agents.agent_toolkits import create_pbi_agentfrom langchain.agents.agent_toolkits import PowerBIToolkitfrom langchain.utilities.powerbi import PowerBIDatasetfrom langchain.chat_models import ChatOpenAIfrom langchain.agents import AgentExecutorfrom azure.identity import DefaultAzureCredentialfast_llm = ChatOpenAI(PowerBI Dataset Agent | 🦜ï¸�🔗 Langchainevent is titled 'Meeting with sentient parrot' and is scheduled from 6:00 PM to 6:30 PM."PreviousMultion ToolkitNextOpenAPI agentsAssign Environmental VariablesCreate the Toolkit and Get ToolsUse within an AgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.and the subject was 'Collaboration Request'."agent.run( "Can you schedule a 30 minute meeting with a sentient parrot to discuss research collaborations on October 3, 2023 at 2 pm Easter Time?") /home/vscode/langchain-py-env/lib/python3.11/site-packages/O365/utils/windows_tz.py:639: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html iana_tz.zone if isinstance(iana_tz, tzinfo) else iana_tz) /home/vscode/langchain-py-env/lib/python3.11/site-packages/O365/utils/utils.py:463: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html timezone = date_time.tzinfo.zone if date_time.tzinfo is not None else None 'I have scheduled a meeting with a sentient parrot to discuss research collaborations on October 3, 2023 at 2 pm Easter Time. Please let me know if you need to make any changes.'agent.run( "Can you tell me if I have any events on October 3, 2023 in Eastern Time, and if so, tell me if any of them are with a sentient parrot?") "Yes, you have an event on October 3, 2023 with a sentient parrot. The event is titled 'Meeting with sentient parrot' and is scheduled from 6:00 PM to'langchain.tools.office365.send_event.SendEventSchema'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, account=Account Client Id: f32a022c-3c4c-4d10-a9d8-f6a9a9055302), O365SendMessage(name='send_email', description='Use this tool to send an email with the provided message fields.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, account=Account Client Id: f32a022c-3c4c-4d10-a9d8-f6a9a9055302)]Use within an Agent​from langchain import OpenAIfrom langchain.agents import initialize_agent, AgentTypellm = OpenAI(temperature=0)agent = initialize_agent( tools=toolkit.get_tools(), llm=llm, verbose=False, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,)agent.run( "Create an email draft for me to edit of a letter from the perspective of a sentient parrot" " who is looking to collaborate on some research with her" " estranged friend, a cat. Under no circumstances may you send the message, however.") 'The draft email was created correctly.'agent.run( "Could you search in my drafts folder and let me know if any of them are about collaboration?") "I found one draft in your drafts folder about collaboration. It was sent on 2023-06-16T18:22:17+0000 and the subject was 'Collaboration Request'."agent.run( "Can youis a JSON list of all the events in the user's calendar between the start and end times. You can assume that the user can not schedule any meeting over existing meetings, and that the user is busy during meetings. Any times without events are free for the user. ", args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, account=Account Client Id: f32a022c-3c4c-4d10-a9d8-f6a9a9055302), O365CreateDraftMessage(name='create_email_draft', description='Use this tool to create a draft email with the provided message fields.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, account=Account Client Id: f32a022c-3c4c-4d10-a9d8-f6a9a9055302), O365SearchEmails(name='messages_search', description='Use this tool to search for email messages. The input must be a valid Microsoft Graph v1.0 $search query. The output is a JSON list of the requested resource.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, account=Account Client Id: f32a022c-3c4c-4d10-a9d8-f6a9a9055302), O365SendEvent(name='send_event', description='Use this tool to create and send an event with the provided event fields.', args_schema=, return_direct=False,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsOffice365 ToolkitOn this pageOffice365 ToolkitThis notebook walks through connecting LangChain to Office365 email and calendar.To use this toolkit, you will need to set up your credentials explained in the Microsoft Graph authentication and authorization overview. Once you've received a CLIENT_ID and CLIENT_SECRET, you can input them as environmental variables below.pip install --upgrade O365 > /dev/nullpip install beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messagesAssign Environmental Variables​The toolkit will read the CLIENT_ID and CLIENT_SECRET environmental variables to authenticate the user so you need to set them here. You will also need to set your OPENAI_API_KEY to use the agent later.# Set environmental variables hereCreate the Toolkit and Get Tools​To start, you need to create the toolkit, so you can access its tools later.from langchain.agents.agent_toolkits import O365Toolkittoolkit = O365Toolkit()tools = toolkit.get_tools()tools [O365SearchEvents(name='events_search', description=" Use this tool to search for the user's calendar events. The input must be the start and end datetimes for the search query. The output is a JSON list of all the events in the user's calendar between the start and end times.Office365 Toolkit | 🦜ï¸�🔗 Langchain# Calculate the square root of the average age square_root = math.sqrt(average_age) square_root` 5.585696017507576The square root of the average age is approximately 5.59. > Finished chain. 'The square root of the average age is approximately 5.59.'Multi CSV Example​This next part shows how the agent can interact with multiple csv files passed in as a list.agent = create_csv_agent( ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"), ["titanic.csv", "titanic_age_fillna.csv"], verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS,)agent.run("how many rows in the age column are different between the two dfs?") Error in on_chain_start callback: 'name' Invoking: `python_repl_ast` with `df1['Age'].nunique() - df2['Age'].nunique()` -1There is 1 row in the age column that is different between the two dataframes. > Finished chain. 'There is 1 row in the age column that is different between the two dataframes.'PreviousAzure Cognitive Services ToolkitNextDocument ComparisonUsing ZERO_SHOT_REACT_DESCRIPTIONUsing OpenAI FunctionsMulti CSV ExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.model="gpt-3.5-turbo-0613"), "titanic.csv", verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS,)agent.run("how many rows are there?") Error in on_chain_start callback: 'name' Invoking: `python_repl_ast` with `df.shape[0]` 891There are 891 rows in the dataframe. > Finished chain. 'There are 891 rows in the dataframe.'agent.run("how many people have more than 3 siblings") Error in on_chain_start callback: 'name' Invoking: `python_repl_ast` with `df[df['SibSp'] > 3]['PassengerId'].count()` 30There are 30 people in the dataframe who have more than 3 siblings. > Finished chain. 'There are 30 people in the dataframe who have more than 3 siblings.'agent.run("whats the square root of the average age?") Error in on_chain_start callback: 'name' Invoking: `python_repl_ast` with `import pandas as pd import math # Create a dataframe data = {'Age': [22, 38, 26, 35, 35]} df = pd.DataFrame(data) # Calculate the average age average_age = df['Age'].mean() # Calculate the square root of the average age square_root =Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsCSV AgentOn this pageCSV AgentThis notebook shows how to use agents to interact with a csv. It is mostly optimized for question answering.NOTE: this agent calls the Pandas DataFrame agent under the hood, which in turn calls the Python agent, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. Use cautiously.from langchain.agents import create_csv_agentfrom langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIfrom langchain.agents.agent_types import AgentTypeUsing ZERO_SHOT_REACT_DESCRIPTION​This shows how to initialize the agent using the ZERO_SHOT_REACT_DESCRIPTION agent type. Note that this is an alternative to the above.agent = create_csv_agent( OpenAI(temperature=0), "titanic.csv", verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,)Using OpenAI Functions​This shows how to initialize the agent using the OPENAI_FUNCTIONS agent type. Note that this is an alternative to the above.agent = create_csv_agent( ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"),CSV Agent | 🦜ï¸�🔗 Langchainh3: Follow us on social media h3: Langchain Foundation Ltd. All rights reserved. PreviousPandas Dataframe AgentNextPowerBI Dataset AgentInstantiating a Browser ToolkitUse within an AgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.h3, h4, h5, h6" } } ``` Observation: [] Thought: Thought: The page has loaded, I can now extract the headers Action: ``` { "action": "get_elements", "action_input": { "selector": "h1, h2, h3, h4, h5, h6" } } ``` Observation: [] Thought: Thought: I need to navigate to langchain.com to see the headers Action: ``` { "action": "navigate_browser", "action_input": "https://langchain.com/" } ``` Observation: Navigating to https://langchain.com/ returned status code 200 Thought: > Finished chain. The headers on langchain.com are: h1: Langchain - Decentralized Translation Protocol h2: A protocol for decentralized translation h3: How it works h3: The Problem h3: The Solution h3: Key Features h3: Roadmap h3: Team h3: Advisors h3: Partners h3: FAQ h3: Contact Us h3: Subscribe for updates h3: Follow us on social media h3: Langchain Foundationthe agent wants to remember the current webpage, it can use the `current_webpage` toolawait tools_by_name["current_webpage"].arun({}) 'https://web.archive.org/web/20230428133211/https://cnn.com/world'Use within an Agent​Several of the browser tools are StructuredTool's, meaning they expect multiple arguments. These aren't compatible (out of the box) with agents older than the STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTIONfrom langchain.agents import initialize_agent, AgentTypefrom langchain.chat_models import ChatAnthropicllm = ChatAnthropic(temperature=0) # or any other LLM, e.g., ChatOpenAI(), OpenAI()agent_chain = initialize_agent( tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)result = await agent_chain.arun("What are the headers on langchain.com?")print(result) > Entering new AgentExecutor chain... Thought: I need to navigate to langchain.com to see the headers Action: ``` { "action": "navigate_browser", "action_input": "https://langchain.com/" } ``` Observation: Navigating to https://langchain.com/ returned status code 200 Thought: Action: ``` { "action": "get_elements", "action_input": { "selector": "h1, h2, h3, h4, h5, h6" }year"}, {"innerText": "Ocean census aims to discover 100,000 previously unknown marine species"}, {"innerText": "Wall Street Journal editor discusses reporter\\u2019s arrest in Moscow"}, {"innerText": "Can Tunisia\\u2019s democracy be saved?"}, {"innerText": "Yasmeen Lari, \\u2018starchitect\\u2019 turned social engineer, wins one of architecture\\u2019s most coveted prizes"}, {"innerText": "A massive, newly restored Frank Lloyd Wright mansion is up for sale"}, {"innerText": "Are these the most sustainable architectural projects in the world?"}, {"innerText": "Step inside a $72 million London townhouse in a converted army barracks"}, {"innerText": "A 3D-printing company is preparing to build on the lunar surface. But first, a moonshot at home"}, {"innerText": "Simona Halep says \\u2018the stress is huge\\u2019 as she battles to return to tennis following positive drug test"}, {"innerText": "Barcelona reaches third straight Women\\u2019s Champions League final with draw against Chelsea"}, {"innerText": "Wrexham: An intoxicating tale of Hollywood glamor and sporting romance"}, {"innerText": "Shohei Ohtani comes within inches of making yet more MLB history in Angels win"}, {"innerText": "This CNN Hero is recruiting recreational divers to help rebuild reefs in Florida one coral at a time"}, {"innerText": "This CNN Hero offers judgment-free veterinary care for the pets of those experiencing homelessness"}, {"innerText": "Don\\u2019t give up on milestones: A CNN Hero\\u2019s message for Autism Awareness Month"}, {"innerText": "CNN Hero of the Year Nelly Cheboi returned to Kenya with plans to lift more students out of poverty"}]'# If the agent wants to remember the current webpage, it can use the `current_webpage` toolawaitfrom using their phones while eating"}, {"innerText": "South African opera star will perform at coronation of King Charles III"}, {"innerText": "Luxury loot under the hammer: France auctions goods seized from drug dealers"}, {"innerText": "Judy Blume\\u2019s books were formative for generations of readers. Here\\u2019s why they endure"}, {"innerText": "Craft, salvage and sustainability take center stage at Milan Design Week"}, {"innerText": "Life-sized chocolate King Charles III sculpture unveiled to celebrate coronation"}, {"innerText": "Severe storms to strike the South again as millions in Texas could see damaging winds and hail"}, {"innerText": "The South is in the crosshairs of severe weather again, as the multi-day threat of large hail and tornadoes continues"}, {"innerText": "Spring snowmelt has cities along the Mississippi bracing for flooding in homes and businesses"}, {"innerText": "Know the difference between a tornado watch, a tornado warning and a tornado emergency"}, {"innerText": "Reporter spotted familiar face covering Sudan evacuation. See what happened next"}, {"innerText": "This country will soon become the world\\u2019s most populated"}, {"innerText": "April 27, 2023 - Russia-Ukraine news"}, {"innerText": "\\u2018Often they shoot at each other\\u2019: Ukrainian drone operator details chaos in Russian ranks"}, {"innerText": "Hear from family members of Americans stuck in Sudan frustrated with US response"}, {"innerText": "U.S. talk show host Jerry Springer dies at 79"}, {"innerText": "Bureaucracy stalling at least one family\\u2019s evacuation from Sudan"}, {"innerText": "Girl to get life-saving treatment for rare immune disease"}, {"innerText": "Haiti\\u2019s crime rate more than doubles in a year"}, {"innerText": "Ocean census aims to discover 100,000 previously unknown marine species"},pushed millions into crisis made 100 times more likely by climate change, analysis finds"}, {"innerText": "For years, a UK mining giant was untouchable in Zambia for pollution until a former miner\\u2019s son took them on"}, {"innerText": "Former Sudanese minister Ahmed Haroun wanted on war crimes charges freed from Khartoum prison"}, {"innerText": "WHO warns of \\u2018biological risk\\u2019 after Sudan fighters seize lab, as violence mars US-brokered ceasefire"}, {"innerText": "How Colombia\\u2019s Petro, a former leftwing guerrilla, found his opening in Washington"}, {"innerText": "Bolsonaro accidentally created Facebook post questioning Brazil election results, say his attorneys"}, {"innerText": "Crowd kills over a dozen suspected gang members in Haiti"}, {"innerText": "Thousands of tequila bottles containing liquid meth seized"}, {"innerText": "Why send a US stealth submarine to South Korea \\u2013 and tell the world about it?"}, {"innerText": "Fukushima\\u2019s fishing industry survived a nuclear disaster. 12 years on, it fears Tokyo\\u2019s next move may finish it off"}, {"innerText": "Singapore executes man for trafficking two pounds of cannabis"}, {"innerText": "Conservative Thai party looks to woo voters with promise to legalize sex toys"}, {"innerText": "Inside the Italian village being repopulated by Americans"}, {"innerText": "Strikes, soaring airfares and yo-yoing hotel fees: A traveler\\u2019s guide to the coronation"}, {"innerText": "A year in Azerbaijan: From spring\\u2019s Grand Prix to winter ski adventures"}, {"innerText": "The bicycle mayor peddling a two-wheeled revolution in Cape Town"}, {"innerText": "Tokyo ramen shop bans customers from using their phones while eating"}, {"innerText": "South African opera star will perform at coronation"Two sides agree to extend Sudan ceasefire"}, {"innerText": "Spanish Leopard 2 tanks are on their way to Ukraine, defense minister confirms"}, {"innerText": "Flamb\\u00e9ed pizza thought to have sparked deadly Madrid restaurant fire"}, {"innerText": "Another bomb found in Belgorod just days after Russia accidentally struck the city"}, {"innerText": "A Black teen\\u2019s murder sparked a crisis over racism in British policing. Thirty years on, little has changed"}, {"innerText": "Belgium destroys shipment of American beer after taking issue with \\u2018Champagne of Beer\\u2019 slogan"}, {"innerText": "UK Prime Minister Rishi Sunak rocked by resignation of top ally Raab over bullying allegations"}, {"innerText": "Iran\\u2019s Navy seizes Marshall Islands-flagged ship"}, {"innerText": "A divided Israel stands at a perilous crossroads on its 75th birthday"}, {"innerText": "Palestinian reporter breaks barriers by reporting in Hebrew on Israeli TV"}, {"innerText": "One-fifth of water pollution comes from textile dyes. But a shellfish-inspired solution could clean it up"}, {"innerText": "\\u2018People sacrificed their lives for just\\u00a010 dollars\\u2019: At least 78 killed in Yemen crowd surge"}, {"innerText": "Israeli police say two men shot near Jewish tomb in Jerusalem in suspected \\u2018terror attack\\u2019"}, {"innerText": "King Charles III\\u2019s coronation: Who\\u2019s performing at the ceremony"}, {"innerText": "The week in 33 photos"}, {"innerText": "Hong Kong\\u2019s endangered turtles"}, {"innerText": "In pictures: Britain\\u2019s Queen Camilla"}, {"innerText": "Catastrophic drought that\\u2019s pushed millions into crisis made 100 times more likely by climate change, analysis finds"},crisis"}, {"innerText": "Clashes renew in West Darfur as food and water shortages worsen in Sudan violence"}, {"innerText": "Thai policeman\\u2019s wife investigated over alleged murder and a dozen other poison cases"}, {"innerText": "American teacher escaped Sudan on French evacuation plane, with no help offered back home"}, {"innerText": "Dubai\\u2019s emerging hip-hop scene is finding its voice"}, {"innerText": "How an underwater film inspired a marine protected area off Kenya\\u2019s coast"}, {"innerText": "The Iranian drones deployed by Russia in Ukraine are powered by stolen Western technology, research reveals"}, {"innerText": "India says border violations erode \\u2018entire basis\\u2019 of ties with China"}, {"innerText": "Australian police sift through 3,000 tons of trash for missing woman\\u2019s remains"}, {"innerText": "As US and Philippine defense ties grow, China warns over Taiwan tensions"}, {"innerText": "Don McLean offers duet with South Korean president who sang \\u2018American Pie\\u2019 to Biden"}, {"innerText": "Almost two-thirds of elephant habitat lost across Asia, study finds"}, {"innerText": "\\u2018We don\\u2019t sleep \\u2026 I would call it fainting\\u2019: Working as a doctor in Sudan\\u2019s crisis"}, {"innerText": "Kenya arrests second pastor to face criminal charges \\u2018related to mass killing of his followers\\u2019"}, {"innerText": "Russia launches deadly wave of strikes across Ukraine"}, {"innerText": "Woman forced to leave her forever home or \\u2018walk to your death\\u2019 she says"}, {"innerText": "U.S. House Speaker Kevin McCarthy weighs in on Disney-DeSantis feud"}, {"innerText": "Two sides agree to extend Sudan ceasefire"}, {"innerText": "Spanish Leopard 2 tankscallback_manager=None, sync_browser=None, async_browser= version=112.0.5615.29>), CurrentWebPageTool(name='current_webpage', description='Returns the URL of the current page', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, sync_browser=None, async_browser= version=112.0.5615.29>)]tools_by_name = {tool.name: tool for tool in tools}navigate_tool = tools_by_name["navigate_browser"]get_elements_tool = tools_by_name["get_elements"]await navigate_tool.arun( {"url": "https://web.archive.org/web/20230428131116/https://www.cnn.com/world"}) 'Navigating to https://web.archive.org/web/20230428131116/https://www.cnn.com/world returned status code 200'# The browser is shared across tools, so the agent can interact in a stateful mannerawait get_elements_tool.arun( {"selector": ".container__headline", "attributes": ["innerText"]}) '[{"innerText": "These Ukrainian veterinarians are risking their lives to care for dogs and cats in the war zone"}, {"innerText": "Life in the ocean\\u2019s \\u2018twilight zone\\u2019 could disappear due to the climate crisis"}, {"innerText": "Clashes renew in West Darfur as food and water shortages worsenreturn_direct=False, verbose=False, callbacks=None, callback_manager=None, sync_browser=None, async_browser= version=112.0.5615.29>), ExtractTextTool(name='extract_text', description='Extract all the text on the current webpage', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, sync_browser=None, async_browser= version=112.0.5615.29>), ExtractHyperlinksTool(name='extract_hyperlinks', description='Extract all hyperlinks on the current webpage', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, sync_browser=None, async_browser= version=112.0.5615.29>), GetElementsTool(name='get_elements', description='Retrieve elements in the current web page matching the given CSS selector', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, sync_browser=None, async_browser=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, sync_browser=None, async_browser= version=112.0.5615.29>), NavigateTool(name='navigate_browser', description='Navigate a browser to the specified URL', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, sync_browser=None, async_browser= version=112.0.5615.29>), NavigateBackTool(name='previous_webpage', description='Navigate back to the previous page in the browser history', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsPlayWright Browser ToolkitOn this pagePlayWright Browser ToolkitThis toolkit is used to interact with the browser. While other tools (like the Requests tools) are fine for static sites, Browser toolkits let your agent navigate the web and interact with dynamically rendered sites. Some tools bundled within the Browser toolkit include:NavigateTool (navigate_browser) - navigate to a URLNavigateBackTool (previous_page) - wait for an element to appearClickTool (click_element) - click on an element (specified by selector)ExtractTextTool (extract_text) - use beautiful soup to extract text from the current web pageExtractHyperlinksTool (extract_hyperlinks) - use beautiful soup to extract hyperlinks from the current web pageGetElementsTool (get_elements) - select elements by CSS selectorCurrentPageTool (current_page) - get the current page URL# !pip install playwright > /dev/null# !pip install lxml# If this is your first time using playwright, you'll have to install a browser executable.# Running `playwright install` by default installs a chromium browser executable.# playwright installfrom langchain.agents.agent_toolkits import PlayWrightBrowserToolkitfrom langchain.tools.playwright.utils import ( create_async_playwright_browser, create_sync_playwright_browser, #PlayWright Browser Toolkit | 🦜ï¸�🔗 LangchainObservation: #/components/schemas/CreateCompletionRequest Thought: I should look at the CreateCompletionRequest schema to see what parameters are required Action: json_spec_list_keys Action Input: data["components"]["schemas"]["CreateCompletionRequest"] Observation: ['type', 'properties', 'required'] Thought: I should look at the required key to see what parameters are required Action: json_spec_get_value Action Input: data["components"]["schemas"]["CreateCompletionRequest"]["required"] Observation: ['model'] Thought: I now know the final answer Final Answer: The required parameters in the request body to the /completions endpoint are 'model'. > Finished chain. "The required parameters in the request body to the /completions endpoint are 'model'."PreviousJiraNextMultion ToolkitInitializationExample: getting the required POST parameters for a requestCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Observation: ['operationId', 'tags', 'summary', 'requestBody', 'responses', 'x-oaiMeta'] Thought: I should look at the requestBody key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]["requestBody"] Observation: ['required', 'content'] Thought: I should look at the required key to see what parameters are required Action: json_spec_get_value Action Input: data["paths"]["/completions"]["post"]["requestBody"]["required"] Observation: True Thought: I should look at the content key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]["requestBody"]["content"] Observation: ['application/json'] Thought: I should look at the application/json key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]["requestBody"]["content"]["application/json"] Observation: ['schema'] Thought: I should look at the schema key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"]["requestBody"]["content"]["application/json"]["schema"] Observation: ['$ref'] Thought: I should look at the $ref key to see what parameters are required Action: json_spec_get_value Action Input: data["paths"]["/completions"]["post"]["requestBody"]["content"]["application/json"]["schema"]["$ref"] Observation: #/components/schemas/CreateCompletionRequest Thought: I should look attoolkit=json_toolkit, verbose=True)Example: getting the required POST parameters for a request​json_agent_executor.run( "What are the required parameters in the request body to the /completions endpoint?") > Entering new AgentExecutor chain... Action: json_spec_list_keys Action Input: data Observation: ['openapi', 'info', 'servers', 'tags', 'paths', 'components', 'x-oaiMeta'] Thought: I should look at the paths key to see what endpoints exist Action: json_spec_list_keys Action Input: data["paths"] Observation: ['/engines', '/engines/{engine_id}', '/completions', '/edits', '/images/generations', '/images/edits', '/images/variations', '/embeddings', '/engines/{engine_id}/search', '/files', '/files/{file_id}', '/files/{file_id}/content', '/answers', '/classifications', '/fine-tunes', '/fine-tunes/{fine_tune_id}', '/fine-tunes/{fine_tune_id}/cancel', '/fine-tunes/{fine_tune_id}/events', '/models', '/models/{model}', '/moderations'] Thought: I should look at the /completions endpoint to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"] Observation: ['post'] Thought: I should look at the post key to see what parameters are required Action: json_spec_list_keys Action Input: data["paths"]["/completions"]["post"] Observation: ['operationId', 'tags', 'summary', 'requestBody', 'responses',Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsJSON AgentOn this pageJSON AgentThis notebook showcases an agent designed to interact with large JSON/dict objects. This is useful when you want to answer questions about a JSON blob that's too large to fit in the context window of an LLM. The agent is able to iteratively explore the blob to find what it needs to answer the user's question.In the below example, we are using the OpenAPI spec for the OpenAI API, which you can find here.We will use the JSON agent to answer some questions about the API spec.Initialization​import osimport yamlfrom langchain.agents import create_json_agent, AgentExecutorfrom langchain.agents.agent_toolkits import JsonToolkitfrom langchain.chains import LLMChainfrom langchain.llms.openai import OpenAIfrom langchain.requests import TextRequestsWrapperfrom langchain.tools.json.tool import JsonSpecwith open("openai_openapi.yml") as f: data = yaml.load(f, Loader=yaml.FullLoader)json_spec = JsonSpec(dict_=data, max_value_length=4000)json_toolkit = JsonToolkit(spec=json_spec)json_agent_executor = create_json_agent( llm=OpenAI(temperature=0), toolkit=json_toolkit, verbose=True)Example: getting the requiredJSON Agent | 🦜ï¸�🔗 Langchainwhile Tesla had a revenue of $23.329 billion. Therefore, Alphabet had more revenue than Tesla." } {'input': 'did alphabet or tesla have more revenue?', 'output': 'Alphabet had a revenue of $69,787 million, while Tesla had a revenue of $23.329 billion. Therefore, Alphabet had more revenue than Tesla.'}PreviousCSV AgentNextGitHubOpenAI Multi FunctionsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.{ "text": "Alphabet had a revenue of $69,787 million, while Tesla had a revenue of $23.329 billion. Therefore, Alphabet had more revenue than Tesla.", "generation_info": null, "message": { "content": "Alphabet had a revenue of $69,787 million, while Tesla had a revenue of $23.329 billion. Therefore, Alphabet had more revenue than Tesla.", "additional_kwargs": {}, "example": false } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 353, "completion_tokens": 34, "total_tokens": 387 }, "model_name": "gpt-3.5-turbo-0613" }, "run": null } [chain/end] [1:chain:AgentExecutor] [7.83s] Exiting Chain run with output: { "output": "Alphabet had a revenue of $69,787 million, while Tesla had a revenue of $23.329 billion. Therefore, Alphabet had more revenue than\"action\": {\\n \"question\": \"What was Tesla\\'s revenue?\"\\n }\\n }\\n ]\\n}'}\nFunction: {\"query\": \"What was Alphabet's revenue?\", \"result\": \"Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million.\"}\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"alphabet-earnings\",\\n \"action\": {\\n \"question\": \"What was Alphabet\\'s revenue?\"\\n }\\n },\\n {\\n \"action_name\": \"tesla-earnings\",\\n \"action\": {\\n \"question\": \"What was Tesla\\'s revenue?\"\\n }\\n }\\n ]\\n}'}\nFunction: {\"query\": \"What was Tesla's revenue?\", \"result\": \"Tesla's revenue for Q1-2023 was $23.329 billion.\"}" ] } [llm/end] [1:chain:AgentExecutor > 13:llm:ChatOpenAI] [1.69s] Exiting LLM run with output: { "generations": [ [ { "text":{ "output_text": "Tesla's revenue for Q1-2023 was $23.329 billion." } [chain/end] [1:chain:AgentExecutor > 8:tool:tesla-earnings > 9:chain:RetrievalQA] [1.61s] Exiting Chain run with output: { "result": "Tesla's revenue for Q1-2023 was $23.329 billion." } [tool/end] [1:chain:AgentExecutor > 8:tool:tesla-earnings] [1.61s] Exiting Tool run with output: "{'query': "What was Tesla's revenue?", 'result': "Tesla's revenue for Q1-2023 was $23.329 billion."}" [llm/start] [1:chain:AgentExecutor > 13:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "System: You are a helpful AI assistant.\nHuman: did alphabet or tesla have more revenue?\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"alphabet-earnings\",\\n \"action\": {\\n \"question\": \"What was Alphabet\\'s revenue?\"\\n }\\n },\\n {\\n \"action_name\": \"tesla-earnings\",\\n \"action\": {\\n \"question\":"content": "Tesla's revenue for Q1-2023 was $23.329 billion.", "additional_kwargs": {}, "example": false } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 2246, "completion_tokens": 16, "total_tokens": 2262 }, "model_name": "gpt-3.5-turbo-0613" }, "run": null } [chain/end] [1:chain:AgentExecutor > 8:tool:tesla-earnings > 9:chain:RetrievalQA > 10:chain:StuffDocumentsChain > 11:chain:LLMChain] [1.17s] Exiting Chain run with output: { "text": "Tesla's revenue for Q1-2023 was $23.329 billion." } [chain/end] [1:chain:AgentExecutor > 8:tool:tesla-earnings > 9:chain:RetrievalQA > 10:chain:StuffDocumentsChain] [1.17s] Exiting Chain run with output: { "output_text": "Tesla's revenue for Q1-20231,372 1,453 1,532 1,584 1,692 23%\nSupercharger stations 3,724 3,971 4,283 4,678 4,947 33%\nSupercharger connectors 33,657 36,165 38,883 42,419 45,169 34%\n(1)Days of supply is calculated by dividing new car ending inventory by the relevant quarter’s deliveries and using 75 trading days (aligned with Automotive News definition).\n(2)Starting in Q1 -2023, we revised our methodology for reporting Tesla’s physical footprint. This count now includes all sales, del ivery, body shop and service locations globally. O P E R A T I O N A L S U M MA R Y\n(Unaudited)\n6\nHuman: What was Tesla's revenue?" ] } [llm/end] [1:chain:AgentExecutor > 8:tool:tesla-earnings > 9:chain:RetrievalQA > 10:chain:StuffDocumentsChain > 11:chain:LLMChain > 12:llm:ChatOpenAI] [1.17s] Exiting LLM run with output: { "generations": [ [ { "text": "Tesla's revenue for Q1-2023 was $23.329 billion.", "generation_info": null, "message": { "content": "Tesla's revenue for Q1-2023 wasreflect the three -for-one stock split effected in the form of a stock d ividend in August 2022.\n\nQ1-2022 Q2-2022 Q3-2022 Q4-2022 Q1-2023 YoY\nModel S/X production 14,218 16,411 19,935 20,613 19,437 37%\nModel 3/Y production 291,189 242,169 345,988 419,088 421,371 45%\nTotal production 305,407 258,580 365,923 439,701 440,808 44%\nModel S/X deliveries 14,724 16,162 18,672 17,147 10,695 -27%\nModel 3/Y deliveries 295,324 238,533 325,158 388,131 412,180 40%\nTotal deliveries 310,048 254,695 343,830 405,278 422,875 36%\nof which subject to operating lease accounting 12,167 9,227 11,004 15,184 22,357 84%\nTotal end of quarter operating lease vehicle count 128,402 131,756 135,054 140,667 153,988 20%\nGlobal vehicle inventory (days of supply )(1)3 4 8 13 15 400%\nSolar deployed (MW) 48 106 94 100 67 40%\nStorage deployed (MWh) 846 1,133 2,100 2,462 3,889 360%\nTesla locations(2)787 831 903 963 1,000 27%\nMobile service fleet 1,372 1,453 1,532 1,584 1,6923,636 3,983 2,800 \nProvision for income taxes 346 205 305 276 261 \nNET INCOME 3,280 2,269 3,331 3,707 2,539 \nNet (loss) income attributable to noncontrolling interests and redeemable noncontrolling interests in \nsubsidiaries(38) 10 39 20 26 \nNET INCOME ATTRIBUTABLE TO COMMON STOCKHOLDERS 3,318 2,259 3,292 3,687 2,513 \nNet income per share of common stock attributable to common stockholders(1)\nBasic $ 1.07 $ 0.73 $ 1.05 $ 1.18 $ 0.80 \nDiluted $ 0.95 $ 0.65 $ 0.95 $ 1.07 $ 0.73 \nWeighted average shares used in computing net income per share of common stock(1)\nBasic 3,103 3,111 3,146 3,160 3,166\nDiluted 3,472 3,464 3,468 3,471 3,468\nS T A T E M E N T O F O P E R A T I O N S\n(Unaudited)\n23 (1) Prior period results have been retroactively adjusted to reflect the three -for-one stock split effected in the form of a stock d ividend in August\nTotal revenues 18,756 16,934 21,454 24,318 23,329 \nCOST OF REVENUES\nAutomotive sales 10,914 10,153 13,099 15,433 15,422 \nAutomotive leasing 408 368 381 352 333 \nTotal automotive cost of revenues 11,322 10,521 13,480 15,785 15,755 \nEnergy generation and storage 688 769 1,013 1,151 1,361 \nServices and other 1,286 1,410 1,579 1,605 1,702 \nTotal cost of revenues 13,296 12,700 16,072 18,541 18,818 \nGross profit 5,460 4,234 5,382 5,777 4,511 \nOPERATING EXPENSES\nResearch and development 865 667 733 810 771 \nSelling, general and administrative 992 961 961 1,032 1,076 \nRestructuring and other — 142 — 34 —\nTotal operating expenses 1,857 1,770 1,694 1,876 1,847 \nINCOME FROM OPERATIONS 3,603 2,464 3,688 3,901 2,664 \nInterest income 28 26 86 157 213 \nInterest expense (61) (44) (53) (33) (29)\nOther income (expense), net 56 28 (85) (42) (48)\nINCOME BEFORE INCOME TAXES 3,626 2,474 3,636 3,983 2,800 \nProvision for income taxesfirst of many. We recently announced our second 40 GWh Megafactory, this time in Shanghai, with construction starting later this year. \nSolar\nSolar deployments increased by 40% YoY in Q1 to 67 MW, but declined sequentially in \nthe quarter, predominantly due to volatile weather and other factors. In addition, the solar industry has been impacted by supply chain challenges.\nServices and Other\nBoth revenue and gross profit from Services and Other reached an all -time high in Q1 \n2023. Within this business division, growth of used vehicle sales remained strong YoY and had healthy margins. Supercharging, while still a relatively small part of the business, continued to grow as we gradually open up the network to non- Tesla \nvehicles. \n-4%-2%0%2%4%6%8%\nQ3'21 Q4'21 Q1'22 Q2'22 Q3'22 Q4'22 Q1'23\n\nIn millions of USD or shares as applicable, except per share data Q1-2022 Q2-2022 Q3-2022 Q4-2022 Q1-2023\nREVENUES\nAutomotive sales 15,514 13,670 17,785 20,241 18,878 \nAutomotive regulatory credits 679 344 286 467 521 \nAutomotive leasing 668 588 621 599 564 \nTotal automotive revenues 16,861 14,602 18,692 21,307 19,963 \nEnergy generation and storage 616 866 1,117 1,310 1,529 \nServices and other 1,279 1,466 1,645 1,701 1,837 \nTotal revenues 18,756 16,934 21,454 24,318operating margins reduced at a manageable rate. We \nexpect ongoing cost reduction of our vehicles, including improved production \nefficiency at our newest factories and lower logistics costs, and remain focused on \noperating leverage as we scale.\nWe are rapidly growing energy storage production capacity at our Megafactory in \nLathrop and we recently announced a new Megafactory in Shanghai. We are also \ncontinuing to execute on our product roadmap, including Cybertruck, our next \ngeneration vehicle platform, autonomy and other AI enabled products. \nOur balance sheet and net income enable us to continue to make these capital \nexpenditures in line with our future growth. In this environment, we believe it \nmakes sense to push forward to ensure we lay a proper foundation for the best \npossible future.Cash Operating cash flow of $2.5B\nFree cash flow2of $0.4B in Q1\n$0.2B increase in our cash and investments3in Q1 to $22.4B\nOperations Cybertruck factory tooling on track; producing Alpha versions\nModel Y was the best -selling vehicle in Europe in Q1\nModel Y was the best -selling vehicle in the US in Q1 (ex -pickups)\n\n01234O T H E R H I G H L I G H T S\n9Services & Other gross margin\nEnergy Storage deployments (GWh)Energy Storage\nEnergy storage deployments increased by 360% YoY in Q1 to 3.9 GWh, the highest \nlevel of deployments we have achieved due to ongoing Megafactory ramp. The ramp of our 40 GWh Megapack factory in Lathrop, California has been successful with still more room to reach full capacity. This Megapack factory will be the first of many. We recently announced our second 40 GWh Megafactory, this time in> 10:chain:StuffDocumentsChain > 11:chain:LLMChain > 12:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "System: Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\nS U M M A R Y H I G H L I G H T S \n(1) Excludes SBC (stock -based compensation).\n(2) Free cash flow = operating cash flow less capex.\n(3) Includes cash, cash equivalents and investments.Profitability 11.4% operating margin in Q1\n$2.7B GAAP operating income in Q1\n$2.5B GAAP net income in Q1\n$2.9B non -GAAP net income1in Q1In the current macroeconomic environment, we see this year as a unique \nopportunity for Tesla. As many carmakers are working through challenges with the \nunit economics of their EV programs, we aim to leverage our position as a cost \nleader. We are focused on rapidly growing production, investments in autonomy \nand vehicle software, and remaining on track with our growth investments.\nOur near -term pricing strategy considers a long -term view on per vehicle \nprofitability given the potential lifetime value of a Tesla vehicle through autonomy, \nsupercharging, connectivity and service. We expect that our product pricing will \ncontinue to evolve, upwards or downwards, depending on a number of factors.\nAlthough we implemented price reductions on many vehicle models across regions \nin the first quarter, our operating margins reduced at a manageable rate. We \nexpect ongoing cost reduction of our vehicles, includingto operating lease accounting 12,167 9,227 11,004 15,184 22,357 84%\nTotal end of quarter operating lease vehicle count 128,402 131,756 135,054 140,667 153,988 20%\nGlobal vehicle inventory (days of supply )(1)3 4 8 13 15 400%\nSolar deployed (MW) 48 106 94 100 67 40%\nStorage deployed (MWh) 846 1,133 2,100 2,462 3,889 360%\nTesla locations(2)787 831 903 963 1,000 27%\nMobile service fleet 1,372 1,453 1,532 1,584 1,692 23%\nSupercharger stations 3,724 3,971 4,283 4,678 4,947 33%\nSupercharger connectors 33,657 36,165 38,883 42,419 45,169 34%\n(1)Days of supply is calculated by dividing new car ending inventory by the relevant quarter’s deliveries and using 75 trading days (aligned with Automotive News definition).\n(2)Starting in Q1 -2023, we revised our methodology for reporting Tesla’s physical footprint. This count now includes all sales, del ivery, body shop and service locations globally. O P E R A T I O N A L S U M MA R Y\n(Unaudited)\n6" } [llm/start] [1:chain:AgentExecutor > 8:tool:tesla-earnings > 9:chain:RetrievalQA > 10:chain:StuffDocumentsChain > 11:chain:LLMChain >$ 0.65 $ 0.95 $ 1.07 $ 0.73 \nWeighted average shares used in computing net income per share of common stock(1)\nBasic 3,103 3,111 3,146 3,160 3,166\nDiluted 3,472 3,464 3,468 3,471 3,468\nS T A T E M E N T O F O P E R A T I O N S\n(Unaudited)\n23 (1) Prior period results have been retroactively adjusted to reflect the three -for-one stock split effected in the form of a stock d ividend in August 2022.\n\nQ1-2022 Q2-2022 Q3-2022 Q4-2022 Q1-2023 YoY\nModel S/X production 14,218 16,411 19,935 20,613 19,437 37%\nModel 3/Y production 291,189 242,169 345,988 419,088 421,371 45%\nTotal production 305,407 258,580 365,923 439,701 440,808 44%\nModel S/X deliveries 14,724 16,162 18,672 17,147 10,695 -27%\nModel 3/Y deliveries 295,324 238,533 325,158 388,131 412,180 40%\nTotal deliveries 310,048 254,695 343,830 405,278 422,875 36%\nof which subject to operating lease accounting 12,167 9,227 11,004 15,184general and administrative 992 961 961 1,032 1,076 \nRestructuring and other — 142 — 34 —\nTotal operating expenses 1,857 1,770 1,694 1,876 1,847 \nINCOME FROM OPERATIONS 3,603 2,464 3,688 3,901 2,664 \nInterest income 28 26 86 157 213 \nInterest expense (61) (44) (53) (33) (29)\nOther income (expense), net 56 28 (85) (42) (48)\nINCOME BEFORE INCOME TAXES 3,626 2,474 3,636 3,983 2,800 \nProvision for income taxes 346 205 305 276 261 \nNET INCOME 3,280 2,269 3,331 3,707 2,539 \nNet (loss) income attributable to noncontrolling interests and redeemable noncontrolling interests in \nsubsidiaries(38) 10 39 20 26 \nNET INCOME ATTRIBUTABLE TO COMMON STOCKHOLDERS 3,318 2,259 3,292 3,687 2,513 \nNet income per share of common stock attributable to common stockholders(1)\nBasic $ 1.07 $ 0.73 $ 1.05 $ 1.18 $ 0.80 \nDiluted $ 0.95 $ 0.65 $except per share data Q1-2022 Q2-2022 Q3-2022 Q4-2022 Q1-2023\nREVENUES\nAutomotive sales 15,514 13,670 17,785 20,241 18,878 \nAutomotive regulatory credits 679 344 286 467 521 \nAutomotive leasing 668 588 621 599 564 \nTotal automotive revenues 16,861 14,602 18,692 21,307 19,963 \nEnergy generation and storage 616 866 1,117 1,310 1,529 \nServices and other 1,279 1,466 1,645 1,701 1,837 \nTotal revenues 18,756 16,934 21,454 24,318 23,329 \nCOST OF REVENUES\nAutomotive sales 10,914 10,153 13,099 15,433 15,422 \nAutomotive leasing 408 368 381 352 333 \nTotal automotive cost of revenues 11,322 10,521 13,480 15,785 15,755 \nEnergy generation and storage 688 769 1,013 1,151 1,361 \nServices and other 1,286 1,410 1,579 1,605 1,702 \nTotal cost of revenues 13,296 12,700 16,072 18,541 18,818 \nGross profit 5,460 4,234 5,382 5,777 4,511 \nOPERATING EXPENSES\nResearch and development 865 667 733 810 771 \nSelling, general and administrative 992 961 961 1,032 1,076Cybertruck factory tooling on track; producing Alpha versions\nModel Y was the best -selling vehicle in Europe in Q1\nModel Y was the best -selling vehicle in the US in Q1 (ex -pickups)\n\n01234O T H E R H I G H L I G H T S\n9Services & Other gross margin\nEnergy Storage deployments (GWh)Energy Storage\nEnergy storage deployments increased by 360% YoY in Q1 to 3.9 GWh, the highest \nlevel of deployments we have achieved due to ongoing Megafactory ramp. The ramp of our 40 GWh Megapack factory in Lathrop, California has been successful with still more room to reach full capacity. This Megapack factory will be the first of many. We recently announced our second 40 GWh Megafactory, this time in Shanghai, with construction starting later this year. \nSolar\nSolar deployments increased by 40% YoY in Q1 to 67 MW, but declined sequentially in \nthe quarter, predominantly due to volatile weather and other factors. In addition, the solar industry has been impacted by supply chain challenges.\nServices and Other\nBoth revenue and gross profit from Services and Other reached an all -time high in Q1 \n2023. Within this business division, growth of used vehicle sales remained strong YoY and had healthy margins. Supercharging, while still a relatively small part of the business, continued to grow as we gradually open up the network to non- Tesla \nvehicles. \n-4%-2%0%2%4%6%8%\nQ3'21 Q4'21 Q1'22 Q2'22 Q3'22 Q4'22 Q1'23\n\nIn millions of USD or shares as applicable, except per share data Q1-2022 Q2-2022 Q3-2022net income1in Q1In the current macroeconomic environment, we see this year as a unique \nopportunity for Tesla. As many carmakers are working through challenges with the \nunit economics of their EV programs, we aim to leverage our position as a cost \nleader. We are focused on rapidly growing production, investments in autonomy \nand vehicle software, and remaining on track with our growth investments.\nOur near -term pricing strategy considers a long -term view on per vehicle \nprofitability given the potential lifetime value of a Tesla vehicle through autonomy, \nsupercharging, connectivity and service. We expect that our product pricing will \ncontinue to evolve, upwards or downwards, depending on a number of factors.\nAlthough we implemented price reductions on many vehicle models across regions \nin the first quarter, our operating margins reduced at a manageable rate. We \nexpect ongoing cost reduction of our vehicles, including improved production \nefficiency at our newest factories and lower logistics costs, and remain focused on \noperating leverage as we scale.\nWe are rapidly growing energy storage production capacity at our Megafactory in \nLathrop and we recently announced a new Megafactory in Shanghai. We are also \ncontinuing to execute on our product roadmap, including Cybertruck, our next \ngeneration vehicle platform, autonomy and other AI enabled products. \nOur balance sheet and net income enable us to continue to make these capital \nexpenditures in line with our future growth. In this environment, we believe it \nmakes sense to push forward to ensure we lay a proper foundation for the best \npossible future.Cash Operating cash flow of $2.5B\nFree cash flow2of $0.4B in Q1\n$0.2B increase in our cash and investments3in Q1 to $22.4B\nOperations Cybertruck factory tooling on track; producing Alpha versions\nModel Y2023, was $69,787 million."}" [tool/start] [1:chain:AgentExecutor > 8:tool:tesla-earnings] Entering Tool run with input: "{'question': "What was Tesla's revenue?"}" [chain/start] [1:chain:AgentExecutor > 8:tool:tesla-earnings > 9:chain:RetrievalQA] Entering Chain run with input: { "query": "What was Tesla's revenue?" } [chain/start] [1:chain:AgentExecutor > 8:tool:tesla-earnings > 9:chain:RetrievalQA > 10:chain:StuffDocumentsChain] Entering Chain run with input: [inputs] [chain/start] [1:chain:AgentExecutor > 8:tool:tesla-earnings > 9:chain:RetrievalQA > 10:chain:StuffDocumentsChain > 11:chain:LLMChain] Entering Chain run with input: { "question": "What was Tesla's revenue?", "context": "S U M M A R Y H I G H L I G H T S \n(1) Excludes SBC (stock -based compensation).\n(2) Free cash flow = operating cash flow less capex.\n(3) Includes cash, cash equivalents and investments.Profitability 11.4% operating margin in Q1\n$2.7B GAAP operating income in Q1\n$2.5B GAAP net income in Q1\n$2.9B non -GAAP net income1in Q1In the current macroeconomic environment, we see this year as a}, "run": null } [chain/end] [1:chain:AgentExecutor > 3:tool:alphabet-earnings > 4:chain:RetrievalQA > 5:chain:StuffDocumentsChain > 6:chain:LLMChain] [1.61s] Exiting Chain run with output: { "text": "Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million." } [chain/end] [1:chain:AgentExecutor > 3:tool:alphabet-earnings > 4:chain:RetrievalQA > 5:chain:StuffDocumentsChain] [1.61s] Exiting Chain run with output: { "output_text": "Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million." } [chain/end] [1:chain:AgentExecutor > 3:tool:alphabet-earnings > 4:chain:RetrievalQA] [1.85s] Exiting Chain run with output: { "result": "Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million." } [tool/end] [1:chain:AgentExecutor > 3:tool:alphabet-earnings] [1.86s] Exiting Tool run with output: "{'query': "What was Alphabet's revenue?", 'result': "Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million."}" [tool/start][1:chain:AgentExecutor > 3:tool:alphabet-earnings > 4:chain:RetrievalQA > 5:chain:StuffDocumentsChain > 6:chain:LLMChain > 7:llm:ChatOpenAI] [1.61s] Exiting LLM run with output: { "generations": [ [ { "text": "Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million.", "generation_info": null, "message": { "content": "Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million.", "additional_kwargs": {}, "example": false } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 2335, "completion_tokens": 23, "total_tokens": 2358 }, "model_name": "gpt-3.5-turbo-0613" },certain human resource costs, and legal, including certain fines and settlements. In the first quarter of 2023, \nunallocated corporate costs also include charges associated with reductions in our workforce and office space. \nAdditionally, hedging gains (losses) related to revenue are included in unallocated corporate costs.\nRecast Historical Segment Results\nRecast historical segment results are as follows (in millions; unaudited):\nQuarter Fiscal Year\nRecast Historical Results\nQ1 2022 Q2 2022 Q3 2022 Q4 2022 2021 2022\nOperating income (loss):\nGoogle Services $ 21,973 $ 21,621 $ 18,883 $ 20,222 $ 88,132 $ 82,699 \nGoogle Cloud (706) (590) (440) (186) (2,282) (1,922) \nOther Bets (835) (1,339) (1,225) (1,237) (4,051) (4,636) \nCorporate costs, unallocated(1) (338) (239) (83) (639) (3,085) (1,299) \nTotal income from operations $ 20,094 $ 19,453 $ 17,135 $ 18,160 $ 78,714 $ 74,842 \n(1)Includes hedging gains (losses); in fiscal years 2021 and 2022 hedging gains of $149 million and $2.0 billion, respectively.\n8\nHuman: What was Alphabet's revenue?" ] } [llm/end] [1:chain:AgentExecutor > 3:tool:alphabet-earnings >2023\n(recast)\nRevenues:\nGoogle Services $ 61,472 $ 61,961 \nGoogle Cloud 5,821 7,454 \nOther Bets 440 288 \nHedging gains (losses) 278 84 \nTotal revenues $ 68,011 $ 69,787 \nOperating income (loss):\nGoogle Services $ 21,973 $ 21,737 \nGoogle Cloud (706) 191 \nOther Bets (835) (1,225) \nCorporate costs, unallocated (338) (3,288) \nTotal income from operations $ 20,094 $ 17,415 \nWe report our segment results as Google Services, Google Cloud, and Other Bets:\n•Google Services includes products and services such as ads, Android, Chrome, hardware, Google Maps, \nGoogle Play, Search, and YouTube. Google Services generates revenues primarily from advertising; sales \nof apps and in-app purchases, and hardware; and fees received for subscription-based products such as \nYouTube Premium and YouTube TV.\n•Google Cloud includes infrastructure and platform services, collaboration tools, and other services for \nenterprise customers. Google Cloud generates revenues from fees received for Google Cloud Platform \nservices, Google Workspace communication and collaboration tools, and other enterprise services.\n•Other Bets is a combination of multiple operating segments that are not individually material. Revenues \nfrom Other Bets are generated primarily from the sale of health technology and internet services.\nAfter the segment reporting changes discussed above, unallocated corporate costs primarily include AI-focused \nshared R&D activities; corporate initiatives such as our philanthropic activities; and corporate shared costs such as \nfinance, certain human resource costs, and legal, including certain fines and settlements. In the first quartercorporate costs instead of within \nOther Bets. Additionally, beginning in the first quarter of 2023, we updated and simplified our cost allocation \nmethodologies to provide our business leaders with increased transparency for decision-making . Prior periods have \nbeen recast to reflect the revised presentation and are shown in Recast Historical Segment Results below .\nAs announced on April 20, 2023 , we are bringing together part of Google Research (the Brain Team) and DeepMind \nto significantly accelerate our progress in AI. This change does not affect first quarter reporting. The group, called \nGoogle DeepMind, will be reported within Alphabet's unallocated corporate costs beginning in the second quarter of \n2023.\nQuarter Ended March 31,\n2022 2023\n(recast)\nOperating income (loss):\nGoogle Services $ 21,973 $ 21,737 \nGoogle Cloud (706) 191 \nOther Bets (835) (1,225) \nCorporate costs, unallocated(1) (338) (3,288) \nTotal income from operations $ 20,094 $ 17,415 \n(1)Hedging gains (losses) related to revenue included in unallocated corporate costs were $278 million and $84 million for the \nthree months ended March 31, 2022 and 2023 , respectively. For the three months ended March 31, 2023, unallocated \ncorporate costs include charges related to the reductions in our workforce and office space totaling $2.5 billion . \n2\n\nSegment results\nThe following table presents our segment revenues and operating income (loss) (in millions; unaudited):\nQuarter Ended March 31,\n2022 2023\n(recast)\nRevenues:\nGoogle Services $ 61,472 $GAAP percentage change in revenues to non-GAAP percentage change in constant currency revenuesâ€� for \nmore details.\n\nQ1 2023 supplemental information (in millions, except for number of employees; unaudited)\nRevenues, T raffic Acquisition Costs (TAC), and number of employees\nQuarter Ended March 31,\n2022 2023\nGoogle Search & other $ 39,618 $ 40,359 \nYouTube ads 6,869 6,693 \nGoogle Network 8,174 7,496 \nGoogle advertising 54,661 54,548 \nGoogle other 6,811 7,413 \nGoogle Services total 61,472 61,961 \nGoogle Cloud 5,821 7,454 \nOther Bets 440 288 \nHedging gains (losses) 278 84 \nTotal revenues $ 68,011 $ 69,787 \nTotal TAC $ 11,990 $ 11,721 \nNumber of employees(1) 163,906 190,711 \n(1) As of March 31, 2023, the number of employees includes almost all of the employees affected by the reduction of our \nworkforce. We expect most of those affected will no longer be reflected in our headcount by the end of the second quarter \nof 2023, subject to local law and consultation requirements.\nSegment Operating Results\nReflecting DeepMind’s increasing collaboration with Google Services, Google Cloud, and Other Bets, beginning in \nthe first quarter of 2023 DeepMind is reported as part of Alphabet’s unallocated corporate costs instead of within \nOther Bets. Additionally, beginning in the first quarter of2023 financial highlights (unaudited)\nOur first quarter 2023 results reflect:\ni.$2.6 billion in charges related to reductions in our workforce and office space; \nii.a $988 million reduction in depreciation expense from the change in estimated useful life of our servers and \ncertain network equipment; and\niii.a shift in the timing of our annual employee stock-based compensation awards resulting in relatively less \nstock-based compensation expense recognized in the first quarter compared to the remaining quarters of \nthe ye ar. The shift in timing itself will not affect the amount of stock-based compensation expense over the \nfull fiscal year 2023.\nFor further information, please refer to our blog post also filed with the SEC via Form 8-K on April 20, 2023.\nThe following table summarizes our consolidated financial results for the quarters ended March 31, 2022 and 2023 \n(in millions, except for per share information and percentages). \nQuarter Ended March 31,\n2022 2023\nRevenues $ 68,011 $ 69,787 \nChange in revenues year over year 23 % 3 %\nChange in constant currency revenues year over year(1) 26 % 6 %\nOperating income $ 20,094 $ 17,415 \nOperating margin 30 % 25 %\nOther income (expense), net $ (1,160) $ 790 \nNet income $ 16,436 $ 15,051 \nDiluted EPS $ 1.23 $ 1.17 \n(1) Non-GAAP measure. See the table captioned “Reconciliation from GAAP revenues to non-GAAP constant currency \nrevenues and GAAP percentage change in revenues to non-GAAP percentage change in constant currency revenuesâ€� for2,498 3,154 \nNet income $ 16,436 $ 15,051 \nBasic earnings per share of Class A, Class B, and Class C stock $ 1.24 $ 1.18 \nDiluted earnings per share of Class A, Class B, and Class C stock $ 1.23 $ 1.17 \nNumber of shares used in basic earnings per share calculation 13,203 12,781 \nNumber of shares used in diluted earnings per share calculation 13,351 12,823 \n6\n\nAlphabet Announces First Quarter 2023 Results\nMOUNTAIN VIEW, Calif. – April 25, 2023 – Alphabet Inc. (NASDAQ: GOOG, GOOGL) today announced financial \nresults for the quarter ended March 31, 2023 .\nSundar Pichai, CEO of Alphabet and Google, said: “We are pleased with our business performance in the first \nquarter, with Search performing well and momentum in Cloud. We introduced important product updates anchored \nin deep computer science and AI. Our North Star is providing the most helpful answers for our users, and we see \nhuge opportunities ahead, continuing our long track record of innovation.â€�\nRuth Porat, CFO of Alphabet and Google, said: “Resilience in Search and momentum in Cloud resulted in Q1 \nconsolidated revenues of $69.8 billion, up 3% year over year, or up 6% in constant currency. We remain committed \nto delivering long-term growth and creating capacity to invest in our most compelling growth areas by re-engineering \nour cost base.â€�\nQ1 2023 financial highlights (unaudited)\nOur first quarter$ 74,842 \n(1)Includes hedging gains (losses); in fiscal years 2021 and 2022 hedging gains of $149 million and $2.0 billion, respectively.\n8" } [llm/start] [1:chain:AgentExecutor > 3:tool:alphabet-earnings > 4:chain:RetrievalQA > 5:chain:StuffDocumentsChain > 6:chain:LLMChain > 7:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "System: Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\nAlphabet Inc.\nCONSOLIDATED STATEMENTS OF INCOME\n(In millions, except per share amounts, unaudited)\nQuarter Ended March 31,\n2022 2023\nRevenues $ 68,011 $ 69,787 \nCosts and expenses:\nCost of revenues 29,599 30,612 \nResearch and development 9,119 11,468 \nSales and marketing 5,825 6,533 \nGeneral and administrative 3,374 3,759 \nTotal costs and expenses 47,917 52,372 \nIncome from operations 20,094 17,415 \nOther income (expense), net (1,160) 790 \nIncome before income taxes 18,934 18,205 \nProvision for income taxes 2,498 3,154 \nNet income $ 16,436 $Bets is a combination of multiple operating segments that are not individually material. Revenues \nfrom Other Bets are generated primarily from the sale of health technology and internet services.\nAfter the segment reporting changes discussed above, unallocated corporate costs primarily include AI-focused \nshared R&D activities; corporate initiatives such as our philanthropic activities; and corporate shared costs such as \nfinance, certain human resource costs, and legal, including certain fines and settlements. In the first quarter of 2023, \nunallocated corporate costs also include charges associated with reductions in our workforce and office space. \nAdditionally, hedging gains (losses) related to revenue are included in unallocated corporate costs.\nRecast Historical Segment Results\nRecast historical segment results are as follows (in millions; unaudited):\nQuarter Fiscal Year\nRecast Historical Results\nQ1 2022 Q2 2022 Q3 2022 Q4 2022 2021 2022\nOperating income (loss):\nGoogle Services $ 21,973 $ 21,621 $ 18,883 $ 20,222 $ 88,132 $ 82,699 \nGoogle Cloud (706) (590) (440) (186) (2,282) (1,922) \nOther Bets (835) (1,339) (1,225) (1,237) (4,051) (4,636) \nCorporate costs, unallocated(1) (338) (239) (83) (639) (3,085) (1,299) \nTotal income from operations $ 20,094 $ 19,453 $ 17,135 $ 18,160 $ 78,714 $ 74,842 \n(1)Includes hedging gains (losses); in, respectively. For the three months ended March 31, 2023, unallocated \ncorporate costs include charges related to the reductions in our workforce and office space totaling $2.5 billion . \n2\n\nSegment results\nThe following table presents our segment revenues and operating income (loss) (in millions; unaudited):\nQuarter Ended March 31,\n2022 2023\n(recast)\nRevenues:\nGoogle Services $ 61,472 $ 61,961 \nGoogle Cloud 5,821 7,454 \nOther Bets 440 288 \nHedging gains (losses) 278 84 \nTotal revenues $ 68,011 $ 69,787 \nOperating income (loss):\nGoogle Services $ 21,973 $ 21,737 \nGoogle Cloud (706) 191 \nOther Bets (835) (1,225) \nCorporate costs, unallocated (338) (3,288) \nTotal income from operations $ 20,094 $ 17,415 \nWe report our segment results as Google Services, Google Cloud, and Other Bets:\n•Google Services includes products and services such as ads, Android, Chrome, hardware, Google Maps, \nGoogle Play, Search, and YouTube. Google Services generates revenues primarily from advertising; sales \nof apps and in-app purchases, and hardware; and fees received for subscription-based products such as \nYouTube Premium and YouTube TV.\n•Google Cloud includes infrastructure and platform services, collaboration tools, and other services for \nenterprise customers. Google Cloud generates revenues from fees received for Google Cloud Platform \nservices, Google Workspace communication and collaboration tools, and other enterprise services.\n•Other Bets is a combination of multiple operating segments that are not individually material.expect most of those affected will no longer be reflected in our headcount by the end of the second quarter \nof 2023, subject to local law and consultation requirements.\nSegment Operating Results\nReflecting DeepMind’s increasing collaboration with Google Services, Google Cloud, and Other Bets, beginning in \nthe first quarter of 2023 DeepMind is reported as part of Alphabet’s unallocated corporate costs instead of within \nOther Bets. Additionally, beginning in the first quarter of 2023, we updated and simplified our cost allocation \nmethodologies to provide our business leaders with increased transparency for decision-making . Prior periods have \nbeen recast to reflect the revised presentation and are shown in Recast Historical Segment Results below .\nAs announced on April 20, 2023 , we are bringing together part of Google Research (the Brain Team) and DeepMind \nto significantly accelerate our progress in AI. This change does not affect first quarter reporting. The group, called \nGoogle DeepMind, will be reported within Alphabet's unallocated corporate costs beginning in the second quarter of \n2023.\nQuarter Ended March 31,\n2022 2023\n(recast)\nOperating income (loss):\nGoogle Services $ 21,973 $ 21,737 \nGoogle Cloud (706) 191 \nOther Bets (835) (1,225) \nCorporate costs, unallocated(1) (338) (3,288) \nTotal income from operations $ 20,094 $ 17,415 \n(1)Hedging gains (losses) related to revenue included in unallocated corporate costs were $278 million and $84 million for the \nthree months ended March 31, 2022 and 2023 , respectively. For the three months ended March 31, 2023, unallocated% 25 %\nOther income (expense), net $ (1,160) $ 790 \nNet income $ 16,436 $ 15,051 \nDiluted EPS $ 1.23 $ 1.17 \n(1) Non-GAAP measure. See the table captioned “Reconciliation from GAAP revenues to non-GAAP constant currency \nrevenues and GAAP percentage change in revenues to non-GAAP percentage change in constant currency revenuesâ€� for \nmore details.\n\nQ1 2023 supplemental information (in millions, except for number of employees; unaudited)\nRevenues, T raffic Acquisition Costs (TAC), and number of employees\nQuarter Ended March 31,\n2022 2023\nGoogle Search & other $ 39,618 $ 40,359 \nYouTube ads 6,869 6,693 \nGoogle Network 8,174 7,496 \nGoogle advertising 54,661 54,548 \nGoogle other 6,811 7,413 \nGoogle Services total 61,472 61,961 \nGoogle Cloud 5,821 7,454 \nOther Bets 440 288 \nHedging gains (losses) 278 84 \nTotal revenues $ 68,011 $ 69,787 \nTotal TAC $ 11,990 $ 11,721 \nNumber of employees(1) 163,906 190,711 \n(1) As of March 31, 2023, the number of employees includes almost all of the employees affected by the reduction of our \nworkforce. We expect most of those affected will no longer be reflected in our headcount by the end of the secondGoogle, said: “Resilience in Search and momentum in Cloud resulted in Q1 \nconsolidated revenues of $69.8 billion, up 3% year over year, or up 6% in constant currency. We remain committed \nto delivering long-term growth and creating capacity to invest in our most compelling growth areas by re-engineering \nour cost base.â€�\nQ1 2023 financial highlights (unaudited)\nOur first quarter 2023 results reflect:\ni.$2.6 billion in charges related to reductions in our workforce and office space; \nii.a $988 million reduction in depreciation expense from the change in estimated useful life of our servers and \ncertain network equipment; and\niii.a shift in the timing of our annual employee stock-based compensation awards resulting in relatively less \nstock-based compensation expense recognized in the first quarter compared to the remaining quarters of \nthe ye ar. The shift in timing itself will not affect the amount of stock-based compensation expense over the \nfull fiscal year 2023.\nFor further information, please refer to our blog post also filed with the SEC via Form 8-K on April 20, 2023.\nThe following table summarizes our consolidated financial results for the quarters ended March 31, 2022 and 2023 \n(in millions, except for per share information and percentages). \nQuarter Ended March 31,\n2022 2023\nRevenues $ 68,011 $ 69,787 \nChange in revenues year over year 23 % 3 %\nChange in constant currency revenues year over year(1) 26 % 6 %\nOperating income $ 20,094 $ 17,415 \nOperating margin 30 % 25 %\nOther income (expense), net $ (1,160)and administrative 3,374 3,759 \nTotal costs and expenses 47,917 52,372 \nIncome from operations 20,094 17,415 \nOther income (expense), net (1,160) 790 \nIncome before income taxes 18,934 18,205 \nProvision for income taxes 2,498 3,154 \nNet income $ 16,436 $ 15,051 \nBasic earnings per share of Class A, Class B, and Class C stock $ 1.24 $ 1.18 \nDiluted earnings per share of Class A, Class B, and Class C stock $ 1.23 $ 1.17 \nNumber of shares used in basic earnings per share calculation 13,203 12,781 \nNumber of shares used in diluted earnings per share calculation 13,351 12,823 \n6\n\nAlphabet Announces First Quarter 2023 Results\nMOUNTAIN VIEW, Calif. – April 25, 2023 – Alphabet Inc. (NASDAQ: GOOG, GOOGL) today announced financial \nresults for the quarter ended March 31, 2023 .\nSundar Pichai, CEO of Alphabet and Google, said: “We are pleased with our business performance in the first \nquarter, with Search performing well and momentum in Cloud. We introduced important product updates anchored \nin deep computer science and AI. Our North Star is providing the most helpful answers for our users, and we see \nhuge opportunities ahead, continuing our long track record of innovation.â€�\nRuth Porat, CFO of Alphabet and Google, said: “Resilience in Search and momentum in Cloud resulted in Q1}, "run": null } [tool/start] [1:chain:AgentExecutor > 3:tool:alphabet-earnings] Entering Tool run with input: "{'question': "What was Alphabet's revenue?"}" [chain/start] [1:chain:AgentExecutor > 3:tool:alphabet-earnings > 4:chain:RetrievalQA] Entering Chain run with input: { "query": "What was Alphabet's revenue?" } [chain/start] [1:chain:AgentExecutor > 3:tool:alphabet-earnings > 4:chain:RetrievalQA > 5:chain:StuffDocumentsChain] Entering Chain run with input: [inputs] [chain/start] [1:chain:AgentExecutor > 3:tool:alphabet-earnings > 4:chain:RetrievalQA > 5:chain:StuffDocumentsChain > 6:chain:LLMChain] Entering Chain run with input: { "question": "What was Alphabet's revenue?", "context": "Alphabet Inc.\nCONSOLIDATED STATEMENTS OF INCOME\n(In millions, except per share amounts, unaudited)\nQuarter Ended March 31,\n2022 2023\nRevenues $ 68,011 $ 69,787 \nCosts and expenses:\nCost of revenues 29,599 30,612 \nResearch and development 9,119 11,468 \nSales and marketing 5,825 6,533 \nGeneral and administrative 3,374 3,759 \nTotal costs and expenses"function_call": { "name": "tool_selection", "arguments": "{\n \"actions\": [\n {\n \"action_name\": \"alphabet-earnings\",\n \"action\": {\n \"question\": \"What was Alphabet's revenue?\"\n }\n },\n {\n \"action_name\": \"tesla-earnings\",\n \"action\": {\n \"question\": \"What was Tesla's revenue?\"\n }\n }\n ]\n}" } }, "example": false } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 99, "completion_tokens": 82, "total_tokens": 181 }, "model_name": "gpt-3.5-turbo-0613" },= Truellm = ChatOpenAI( temperature=0, model="gpt-3.5-turbo-0613",)agent = initialize_agent( agent=AgentType.OPENAI_MULTI_FUNCTIONS, tools=tools, llm=llm, verbose=True,)agent({"input": "did alphabet or tesla have more revenue?"}) [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "did alphabet or tesla have more revenue?" } [llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "System: You are a helpful AI assistant.\nHuman: did alphabet or tesla have more revenue?" ] } [llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [2.66s] Exiting LLM run with output: { "generations": [ [ { "text": "", "generation_info": null, "message": { "content": "", "additional_kwargs": { "function_call": {tools=tools, llm=llm, verbose=True,)agent({"input": "did alphabet or tesla have more revenue?"}) > Entering new chain... Invoking: `alphabet-earnings` with `{'question': 'revenue'}` {'query': 'revenue', 'result': 'The revenue for Alphabet Inc. for the quarter ended March 31, 2023, was $69,787 million.'} Invoking: `tesla-earnings` with `{'question': 'revenue'}` {'query': 'revenue', 'result': 'Total revenue for Q1-2023 was $23.3 billion.'}Alphabet Inc. had more revenue than Tesla. Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million, while Tesla's total revenue for Q1-2023 was $23.3 billion. > Finished chain. {'input': 'did alphabet or tesla have more revenue?', 'output': "Alphabet Inc. had more revenue than Tesla. Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million, while Tesla's total revenue for Q1-2023 was $23.3 billion."}OpenAI Multi Functions​This type of agent allows calling multiple functions at once. This is really useful when some steps can be computed in parallel - like when asked to compare multiple documentsimport langchainlangchain.debug = Truellm = ChatOpenAI( temperature=0,}, # https://digitalassets.tesla.com/tesla-contents/image/upload/IR/TSLA-Q1-2023-Update { "name": "tesla-earnings", "path": "/Users/harrisonchase/Downloads/TSLA-Q1-2023-Update.pdf", },]for file in files: loader = PyPDFLoader(file["path"]) pages = loader.load_and_split() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(pages) embeddings = OpenAIEmbeddings() retriever = FAISS.from_documents(docs, embeddings).as_retriever() # Wrap retrievers in a Tool tools.append( Tool( args_schema=DocumentInput, name=file["name"], description=f"useful when you want to answer questions about {file['name']}", func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever), ) )from langchain.agents import initialize_agentfrom langchain.agents import AgentTypellm = ChatOpenAI( temperature=0, model="gpt-3.5-turbo-0613",)agent = initialize_agent( agent=AgentType.OPENAI_FUNCTIONS, tools=tools, llm=llm,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsDocument ComparisonOn this pageDocument ComparisonThis notebook shows how to use an agent to compare two documents.The high level idea is we will create a question-answering chain for each document, and then use that from pydantic import BaseModel, Fieldfrom langchain.chat_models import ChatOpenAIfrom langchain.agents import Toolfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import FAISSfrom langchain.document_loaders import PyPDFLoaderfrom langchain.chains import RetrievalQAclass DocumentInput(BaseModel): question: str = Field()llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")tools = []files = [ # https://abc.xyz/investor/static/pdf/2023Q1_alphabet_earnings_release.pdf { "name": "alphabet-earnings", "path": "/Users/harrisonchase/Downloads/2023Q1_alphabet_earnings_release.pdf", }, #Document Comparison | 🦜ï¸�🔗 Langchain1. 1.] [1. 1. 1.] [1. 1. 1.]] Thought: I now know the final answer Final Answer: The covariance matrix is [[1. 1. 1.], [1. 1. 1.], [1. 1. 1.]]. > Finished chain. 'The covariance matrix is [[1. 1. 1.], [1. 1. 1.], [1. 1. 1.]].'agent.run("compute the U of Singular Value Decomposition of the matrix") > Entering new chain... Thought: I need to use the SVD function Action: python_repl_ast Action Input: U, S, V = np.linalg.svd(data) Observation: Thought: I now have the U matrix Final Answer: U = [[-0.70710678 -0.70710678] [-0.70710678 0.70710678]] > Finished chain. 'U = [[-0.70710678 -0.70710678]\n [-0.70710678 0.70710678]]'PreviousVectorstore AgentNextToolsPandas examplesNumpy examplesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.and sum the array along the first axis") > Entering new chain... Thought: I need to reshape the array and then sum it Action: python_repl_ast Action Input: np.sum(np.reshape(data, (3,2)), axis=0) 0%| | 0.00/100 [00:00 Finished chain. 'The sum of the array along the first axis is [9, 12].'arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])agent = create_xorbits_agent(OpenAI(temperature=0), arr, verbose=True) 0%| | 0.00/100 [00:00 Entering new chain... Thought: I need to use the numpy covariance function Action: python_repl_ast Action Input: np.cov(data) 0%| | 0.00/100 [00:00 Finished chain. '2'agent.run( "Reshape the array into a 2-dimensional array with 2 rows and 3 columns, and then transpose it") > Entering new chain... Thought: I need to reshape the array and then transpose it Action: python_repl_ast Action Input: np.reshape(data, (2,3)).T 0%| | 0.00/100 [00:00 Finished chain. 'The reshaped and transposed array is [[1 4], [2 5], [3 6]].'agent.run( "Reshape the array into a 2-dimensional array with 3 rows and 2 columns and sum the array along the first axis")to get the desired result Action: python_repl_ast Action Input: data[(data['Age'] > 30) & (data['Fare'] > 30) & (data['Fare'] < 50) & ((data['Pclass'] == 1) | (data['Pclass'] == 2))].shape[0] Observation: 20 Thought: I now know the final answer Final Answer: 20 > Finished chain. '20'Numpy examples​import xorbits.numpy as npfrom langchain.agents import create_xorbits_agentfrom langchain.llms import OpenAIarr = np.array([1, 2, 3, 4, 5, 6])agent = create_xorbits_agent(OpenAI(temperature=0), arr, verbose=True) 0%| | 0.00/100 [00:00 Entering new chain... Thought: I need to find out the shape of the array Action: python_repl_ast Action Input: data.shape Observation: (6,) Thought: I now know the final answer Final Answer: The shape of the array is (6,). > Finished chain. 'The shape of the array is (6,).'agent.run("Give the 2nd element of the array ") > Entering new chain... Thought: I need to access the 2nd element of the array> Finished chain. 'The mean age is 29.69911764705882.'agent.run("Group the data by sex and find the average age for each group") > Entering new chain... Thought: I need to group the data by sex and then find the average age for each group Action: python_repl_ast Action Input: data.groupby('Sex')['Age'].mean() 0%| | 0.00/100 [00:00 Finished chain. 'The average age for female passengers is 27.92 and the average age for male passengers is 30.73.'agent.run( "Show the number of people whose age is greater than 30 and fare is between 30 and 50 , and pclass is either 1 or 2") > Entering new chain... 0%| | 0.00/100 [00:00 Finished chain. 'There are 891 rows and 12 columns.'agent.run("How many people are in pclass 1?") > Entering new chain... 0%| | 0.00/100 [00:00 Finished chain. 'There are 216 people in pclass 1.'agent.run("whats the mean age?") > Entering new chain... Thought: I need to calculate the mean age Action: python_repl_ast Action Input: data['Age'].mean() 0%| | 0.00/100 [00:00 Finished chain. 'The mean age isSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsXorbits AgentOn this pageXorbits AgentThis notebook shows how to use agents to interact with Xorbits Pandas dataframe and Xorbits Numpy ndarray. It is mostly optimized for question answering.NOTE: this agent calls the Python agent under the hood, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. Use cautiously.Pandas examples​import xorbits.pandas as pdfrom langchain.agents import create_xorbits_agentfrom langchain.llms import OpenAIdata = pd.read_csv("titanic.csv")agent = create_xorbits_agent(OpenAI(temperature=0), data, verbose=True) 0%| | 0.00/100 [00:00 Entering new chain... Thought: I need to count the number of rows and columns Action: python_repl_ast Action Input: data.shape Observation: (891, 12) Thought: I now know the final answerXorbits Agent | 🦜ï¸�🔗 Langchain107), ('Metallica', 91)] Thought: I now know the final answer. Final Answer: The top 3 best selling artists are Iron Maiden, U2, and Metallica. > Finished chain. 'The top 3 best selling artists are Iron Maiden, U2, and Metallica.'PreviousSpark SQL AgentNextVectorstore AgentInitializationUsing ZERO_SHOT_REACT_DESCRIPTIONUsing OpenAI FunctionsDisclamer âš ï¸�Example: describing a tableExample: describing a table, recovering from an errorExample: running queriesRecovering from an errorCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.(sqlite3.OperationalError) no such column: Track.ArtistId [SQL: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Track ON Artist.ArtistId = Track.ArtistId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3] (Background on this error at: https://sqlalche.me/e/14/e3q8) Thought: I should double check my query before executing it. Action: query_checker_sql_db Action Input: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Track ON Artist.ArtistId = Track.ArtistId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3 Observation: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Track ON Artist.ArtistId = Track.ArtistId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3; Thought: I now know the final answer. Action: query_sql_db Action Input: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Album ON Artist.ArtistId = Album.ArtistId INNER JOIN Track ON Album.AlbumId = Track.AlbumId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3 Observation: [('Iron Maiden', 140), ('U2', 107), ('Metallica', 91)] Thought: I now know3 Fast As a Shark 3 2 1 F. Baltes, S. Kaufman, U. Dirkscneider & W. Hoffman 230619 3990994 0.99 CREATE TABLE "InvoiceLine" ( "InvoiceLineId" INTEGER NOT NULL, "InvoiceId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, "UnitPrice" NUMERIC(10, 2) NOT NULL, "Quantity" INTEGER NOT NULL, PRIMARY KEY ("InvoiceLineId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("InvoiceId") REFERENCES "Invoice" ("InvoiceId") ) SELECT * FROM 'InvoiceLine' LIMIT 3; InvoiceLineId InvoiceId TrackId UnitPrice Quantity 1 1 2 0.99 1 2 1 4 0.99 1 3 2 6 0.99 1 Thought: I should query the database to get the top 3 best selling artists. Action: query_sql_db Action Input: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Track ON Artist.ArtistId = Track.ArtistId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3 Observation: Error: (sqlite3.OperationalError) no such column: Track.ArtistIdArtistId Name 1 AC/DC 2 Accept 3 Aerosmith CREATE TABLE "Track" ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "AlbumId" INTEGER, "MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER, "Composer" NVARCHAR(220), "Milliseconds" INTEGER NOT NULL, "Bytes" INTEGER, "UnitPrice" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("TrackId"), FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId") ) SELECT * FROM 'Track' LIMIT 3; TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice 1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.99 2 Balls to the Wall 2 2 1 None 342562 5510424 0.99 3 Fast As a Shark 3 2 1 F. Baltes, S.'Music' (6580), 'Music Videos' (1). > Finished chain. "The total number of tracks in each playlist are: '90’s Music' (1477), 'Brazilian Music' (39), 'Classical' (75), 'Classical 101 - Deep Cuts' (25), 'Classical 101 - Next Steps' (25), 'Classical 101 - The Basics' (25), 'Grunge' (15), 'Heavy Metal Classic' (26), 'Music' (6580), 'Music Videos' (1)."Recovering from an error​In this example, the agent is able to recover from an error after initially trying to access an attribute (Track.ArtistId) which doesn't exist.agent_executor.run("Who are the top 3 best selling artists?") > Entering new AgentExecutor chain... Action: list_tables_sql_db Action Input: "" Observation: MediaType, Track, Invoice, Album, Playlist, Customer, Employee, InvoiceLine, PlaylistTrack, Genre, Artist Thought: I should look at the schema of the Artist, InvoiceLine, and Track tables to see what columns I can use. Action: schema_sql_db Action Input: "Artist, InvoiceLine, Track" Observation: CREATE TABLE "Artist" ( "ArtistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("ArtistId") ) SELECT * FROM 'Artist' LIMIT 3; ArtistId Name 1 AC/DC 2the total number of tracks in each playlist. Action: query_checker_sql_db Action Input: SELECT Playlist.Name, COUNT(PlaylistTrack.TrackId) AS TotalTracks FROM Playlist INNER JOIN PlaylistTrack ON Playlist.PlaylistId = PlaylistTrack.PlaylistId GROUP BY Playlist.Name Observation: SELECT Playlist.Name, COUNT(PlaylistTrack.TrackId) AS TotalTracks FROM Playlist INNER JOIN PlaylistTrack ON Playlist.PlaylistId = PlaylistTrack.PlaylistId GROUP BY Playlist.Name Thought: The query looks correct, I can now execute it. Action: query_sql_db Action Input: SELECT Playlist.Name, COUNT(PlaylistTrack.TrackId) AS TotalTracks FROM Playlist INNER JOIN PlaylistTrack ON Playlist.PlaylistId = PlaylistTrack.PlaylistId GROUP BY Playlist.Name LIMIT 10 Observation: [('90’s Music', 1477), ('Brazilian Music', 39), ('Classical', 75), ('Classical 101 - Deep Cuts', 25), ('Classical 101 - Next Steps', 25), ('Classical 101 - The Basics', 25), ('Grunge', 15), ('Heavy Metal Classic', 26), ('Music', 6580), ('Music Videos', 1)] Thought: I now know the final answer. Final Answer: The total number of tracks in each playlist are: '90’s Music' (1477), 'Brazilian Music' (39), 'Classical' (75), 'Classical 101 - Deep Cuts' (25), 'Classical 101 - Next Steps' (25), 'Classical 101 - The Basics' (25), 'Grunge' (15), 'Heavy Metal Classic' (26), 'Music' (6580), 'Music Videos' (1).Entering new AgentExecutor chain... Action: list_tables_sql_db Action Input: "" Observation: Invoice, MediaType, Artist, InvoiceLine, Genre, Playlist, Employee, Album, PlaylistTrack, Track, Customer Thought: I should look at the schema of the Playlist and PlaylistTrack tables to see what columns I can use. Action: schema_sql_db Action Input: "Playlist, PlaylistTrack" Observation: CREATE TABLE "Playlist" ( "PlaylistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("PlaylistId") ) SELECT * FROM 'Playlist' LIMIT 3; PlaylistId Name 1 Music 2 Movies 3 TV Shows CREATE TABLE "PlaylistTrack" ( "PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, PRIMARY KEY ("PlaylistId", "TrackId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId") ) SELECT * FROM 'PlaylistTrack' LIMIT 3; PlaylistId TrackId 1 3402 1 3389 1 3390 Thought: I can use a SELECT statement to get the total number of tracks in each playlist. Action: query_checker_sql_db2 4 2009-01-02 00:00:00 UllevÃ¥lsveien 14 Oslo None Norway 0171 3.96 3 8 2009-01-03 00:00:00 Grétrystraat 63 Brussels None Belgium 1000 5.94 Thought: I should query the Invoice and Customer tables to get the total sales per country. Action: query_sql_db Action Input: SELECT c.Country, SUM(i.Total) AS TotalSales FROM Invoice i INNER JOIN Customer c ON i.CustomerId = c.CustomerId GROUP BY c.Country ORDER BY TotalSales DESC LIMIT 10 Observation: [('USA', 523.0600000000003), ('Canada', 303.9599999999999), ('France', 195.09999999999994), ('Brazil', 190.09999999999997), ('Germany', 156.48), ('United Kingdom', 112.85999999999999), ('Czech Republic', 90.24000000000001), ('Portugal', 77.23999999999998), ('India', 75.25999999999999), ('Chile', 46.62)] Thought: I now know the final answer Final Answer: The customers from the USA spent the most, with a total of $523.06. > Finished chain. 'The customers from the USA spent the most, with a total of $523.06.'agent_executor.run( "Show the total number of tracks in each playlist. The Playlist name should be included in the result.") > Entering new AgentExecutor chain... Action: list_tables_sql_db Action34 Stuttgart None Germany 70174 +49 0711 2842222 None leonekohler@surfeu.de 5 3 François Tremblay None 1498 rue Bélanger Montréal QC Canada H2G 1A7 +1 (514) 721-4711 None ftremblay@gmail.com 3 CREATE TABLE "Invoice" ( "InvoiceId" INTEGER NOT NULL, "CustomerId" INTEGER NOT NULL, "InvoiceDate" DATETIME NOT NULL, "BillingAddress" NVARCHAR(70), "BillingCity" NVARCHAR(40), "BillingState" NVARCHAR(40), "BillingCountry" NVARCHAR(40), "BillingPostalCode" NVARCHAR(10), "Total" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("InvoiceId"), FOREIGN KEY("CustomerId") REFERENCES "Customer" ("CustomerId") ) SELECT * FROM 'Invoice' LIMIT 3; InvoiceId CustomerId InvoiceDate BillingAddress BillingCity BillingState BillingCountry BillingPostalCode Total 1 2 2009-01-01 00:00:00 Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 1.98 2 4 2009-01-02 00:00:00NOT NULL, "LastName" NVARCHAR(20) NOT NULL, "Company" NVARCHAR(80), "Address" NVARCHAR(70), "City" NVARCHAR(40), "State" NVARCHAR(40), "Country" NVARCHAR(40), "PostalCode" NVARCHAR(10), "Phone" NVARCHAR(24), "Fax" NVARCHAR(24), "Email" NVARCHAR(60) NOT NULL, "SupportRepId" INTEGER, PRIMARY KEY ("CustomerId"), FOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId") ) SELECT * FROM 'Customer' LIMIT 3; CustomerId FirstName LastName Company Address City State Country PostalCode Phone Fax Email SupportRepId 1 Luís Gonçalves Embraer - Empresa Brasileira de Aeronáutica S.A. Av. Brigadeiro Faria Lima, 2170 São José dos Campos SP Brazil 12227-000 +55 (12) 3923-5555 +55 (12) 3923-5566 luisg@embraer.com.br 3 2 Leonie Köhler None Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 +49 0711 2842222 NonePRIMARY KEY ("PlaylistId", "TrackId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId") ) SELECT * FROM 'PlaylistTrack' LIMIT 3; PlaylistId TrackId 1 3402 1 3389 1 3390 Thought: I now know the final answer Final Answer: The PlaylistTrack table contains two columns, PlaylistId and TrackId, which are both integers and are used to link Playlist and Track tables. > Finished chain. 'The PlaylistTrack table contains two columns, PlaylistId and TrackId, which are both integers and are used to link Playlist and Track tables.'Example: running queries​agent_executor.run( "List the total sales per country. Which country's customers spent the most?") > Entering new AgentExecutor chain... Action: list_tables_sql_db Action Input: "" Observation: Invoice, MediaType, Artist, InvoiceLine, Genre, Playlist, Employee, Album, PlaylistTrack, Track, Customer Thought: I should look at the schema of the relevant tables to see what columns I can use. Action: schema_sql_db Action Input: "Invoice, Customer" Observation: CREATE TABLE "Customer" ( "CustomerId" INTEGER NOT NULL, "FirstName" NVARCHAR(40) NOT NULL, "LastName" NVARCHAR(20) NOTtable:\n\n```\nPlaylistId TrackId\n1 3402\n1 3389\n1 3390\n```\n\nPlease let me know if there is anything else I can help you with.'Example: describing a table, recovering from an error​In this example, the agent tries to search for a table that doesn't exist, but finds the next best resultagent_executor.run("Describe the playlistsong table") > Entering new AgentExecutor chain... Action: list_tables_sql_db Action Input: "" Observation: Genre, PlaylistTrack, MediaType, Invoice, InvoiceLine, Track, Playlist, Customer, Album, Employee, Artist Thought: I should look at the schema of the PlaylistSong table Action: schema_sql_db Action Input: "PlaylistSong" Observation: Error: table_names {'PlaylistSong'} not found in database Thought: I should check the spelling of the table Action: list_tables_sql_db Action Input: "" Observation: Genre, PlaylistTrack, MediaType, Invoice, InvoiceLine, Track, Playlist, Customer, Album, Employee, Artist Thought: The table is called PlaylistTrack Action: schema_sql_db Action Input: "PlaylistTrack" Observation: CREATE TABLE "PlaylistTrack" ( "PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, PRIMARY KEY ("PlaylistId", "TrackId"),"PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, PRIMARY KEY ("PlaylistId", "TrackId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId") ) ``` Here are three sample rows from the `PlaylistTrack` table: ``` PlaylistId TrackId 1 3402 1 3389 1 3390 ``` Please let me know if there is anything else I can help you with. > Finished chain. 'The `PlaylistTrack` table has two columns: `PlaylistId` and `TrackId`. It is a junction table that represents the relationship between playlists and tracks. \n\nHere is the schema of the `PlaylistTrack` table:\n\n```\nCREATE TABLE "PlaylistTrack" (\n\t"PlaylistId" INTEGER NOT NULL, \n\t"TrackId" INTEGER NOT NULL, \n\tPRIMARY KEY ("PlaylistId", "TrackId"), \n\tFOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), \n\tFOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId")\n)\n```\n\nHere are three sample rows from the `PlaylistTrack` table:\n\n```\nPlaylistId TrackId\n1datawarehouse oriented databases support user-level quota, for limiting resource usage.Example: describing a table​agent_executor.run("Describe the playlisttrack table") > Entering new chain... Invoking: `list_tables_sql_db` with `{}` Album, Artist, Track, PlaylistTrack, InvoiceLine, sales_table, Playlist, Genre, Employee, Customer, Invoice, MediaType Invoking: `schema_sql_db` with `PlaylistTrack` CREATE TABLE "PlaylistTrack" ( "PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, PRIMARY KEY ("PlaylistId", "TrackId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId") ) /* 3 rows from PlaylistTrack table: PlaylistId TrackId 1 3402 1 3389 1 3390 */The `PlaylistTrack` table has two columns: `PlaylistId` and `TrackId`. It is a junction table that represents the relationship between playlists and tracks. Here is the schema of the `PlaylistTrack` table: ``` CREATE TABLE "PlaylistTrack" ( "PlaylistId" INTEGER NOT NULL,ZERO_SHOT_REACT_DESCRIPTION​This shows how to initialize the agent using the ZERO_SHOT_REACT_DESCRIPTION agent type. Note that this is an alternative to the above.agent_executor = create_sql_agent( llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,)Using OpenAI Functions​This shows how to initialize the agent using the OPENAI_FUNCTIONS agent type. Note that this is an alternative to the above.# agent_executor = create_sql_agent(# llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"),# toolkit=toolkit,# verbose=True,# agent_type=AgentType.OPENAI_FUNCTIONS# )Disclamer âš ï¸�​The query chain may generate insert/update/delete queries. When this is not expected, use a custom prompt or create a SQL users without write permissions.The final user might overload your SQL database by asking a simple question such as "run the biggest query possible". The generated query might look like:SELECT * FROM "public"."users" JOIN "public"."user_permissions" ON "public"."users".id = "public"."user_permissions".user_id JOIN "public"."projects" ON "public"."users".id = "public"."projects".user_id JOIN "public"."events" ON "public"."projects".id = "public"."events".project_id;For a transactional SQL database, if one of the table above contains millions of rows, the query might cause trouble to other applications using the same database.Most datawarehouse oriented databases support user-level quota, for limiting resource usage.Example: describing aSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsSQL Database AgentOn this pageSQL Database AgentThis notebook showcases an agent designed to interact with a sql databases. The agent builds off of SQLDatabaseChain and is designed to answer more general questions about a database, as well as recover from errors.Note that, as this agent is in active development, all answers might not be correct. Additionally, it is not guaranteed that the agent won't perform DML statements on your database given certain questions. Be careful running it on sensitive data!This uses the example Chinook database. To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.Initialization​from langchain.agents import create_sql_agentfrom langchain.agents.agent_toolkits import SQLDatabaseToolkitfrom langchain.sql_database import SQLDatabasefrom langchain.llms.openai import OpenAIfrom langchain.agents import AgentExecutorfrom langchain.agents.agent_types import AgentTypefrom langchain.chat_models import ChatOpenAIdb = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")toolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))Using ZERO_SHOT_REACT_DESCRIPTION​ThisSQL Database Agent | 🦜ï¸�🔗 Langchaina concise email from Santiago to Paul, Santiago's travel agent, asking him to book the earliest flight from DFW to DCA on Aug 28, 2023. Include all flight details in the email.") 'Dear Paul,\n\nI am writing to request that you book the earliest flight from DFW to DCA on Aug 28, 2023. The flight details are as follows:\n\nFlight 1: DFW to ATL, departing at 7:15 AM, arriving at 10:25 AM, flight number 983, carrier Delta Air Lines\nFlight 2: ATL to DCA, departing at 12:15 PM, arriving at 2:02 PM, flight number 759, carrier Delta Air Lines\n\nThank you for your help.\n\nSincerely,\nSantiago'PreviousAgent toolkitsNextAzure Cognitive Services ToolkitAssign Environmental VariablesCreate the Amadeus Toolkit and Get ToolsUse Amadeus Toolkit within an AgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.= toolkit.get_tools()Use Amadeus Toolkit within an Agent​from langchain import OpenAIfrom langchain.agents import initialize_agent, AgentTypellm = OpenAI(temperature=0)agent = initialize_agent( tools=tools, llm=llm, verbose=False, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,)agent.run("What is the name of the airport in Cali, Colombia?") 'The closest airport to Cali, Colombia is Alfonso Bonilla Aragón International Airport (CLO).'agent.run( "What is the departure time of the cheapest flight on August 23, 2023 leaving Dallas, Texas before noon to Lincoln, Nebraska?") 'The cheapest flight on August 23, 2023 leaving Dallas, Texas before noon to Lincoln, Nebraska has a departure time of 16:42 and a total price of 276.08 EURO.'agent.run( "At what time does earliest flight on August 23, 2023 leaving Dallas, Texas to Lincoln, Nebraska land in Nebraska?") 'The earliest flight on August 23, 2023 leaving Dallas, Texas to Lincoln, Nebraska lands in Lincoln, Nebraska at 16:07.'agent.run( "What is the full travel time for the cheapest flight between Portland, Oregon to Dallas, TX on October 3, 2023?") 'The cheapest flight between Portland, Oregon to Dallas, TX on October 3, 2023 is a Spirit Airlines flight with a total price of 84.02 EURO and a total travel time of 8 hours and 43 minutes.'agent.run( "Please draft a concise email from Santiago to Paul, Santiago's travel agent, asking him to book the earliest flightSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsAmadeus ToolkitOn this pageAmadeus ToolkitThis notebook walks you through connecting LangChain to the Amadeus travel information APITo use this toolkit, you will need to set up your credentials explained in the Amadeus for developers getting started overview. Once you've received a AMADEUS_CLIENT_ID and AMADEUS_CLIENT_SECRET, you can input them as environmental variables below.pip install --upgrade amadeus > /dev/nullAssign Environmental Variables​The toolkit will read the AMADEUS_CLIENT_ID and AMADEUS_CLIENT_SECRET environmental variables to authenticate the user so you need to set them here. You will also need to set your OPENAI_API_KEY to use the agent later.# Set environmental variables hereimport osos.environ["AMADEUS_CLIENT_ID"] = "CLIENT_ID"os.environ["AMADEUS_CLIENT_SECRET"] = "CLIENT_SECRET"os.environ["OPENAI_API_KEY"] = "API_KEY"Create the Amadeus Toolkit and Get Tools​To start, you need to create the toolkit, so you can access its tools later.from langchain.agents.agent_toolkits.amadeus.toolkit import AmadeusToolkittoolkit = AmadeusToolkit()tools = toolkit.get_tools()Use Amadeus Toolkit within an Agent​fromAmadeus Toolkit | 🦜ï¸�🔗 Langchain'/tmp/tmpa3uu_j6b.wav'from IPython import displayaudio = display.Audio(audio_file)display.display(audio)PreviousAmadeus ToolkitNextCSV AgentCreate the ToolkitUse within an AgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.} ``` Observation: Caption: a group of eggs and flour in bowls Objects: Egg, Egg, Food Tags: dairy, ingredient, indoor, thickening agent, food, mixing bowl, powder, flour, egg, bowl Thought: I can use the objects and tags to suggest recipes Action: ``` { "action": "Final Answer", "action_input": "You can make pancakes, omelettes, or quiches with these ingredients!" } ``` > Finished chain. 'You can make pancakes, omelettes, or quiches with these ingredients!'audio_file = agent.run("Tell me a joke and read it out for me.") > Entering new AgentExecutor chain... Action: ``` { "action": "Azure Cognitive Services Text2Speech", "action_input": "Why did the chicken cross the playground? To get to the other slide!" } ``` Observation: /tmp/tmpa3uu_j6b.wav Thought: I have the audio file of the joke Action: ``` { "action": "Final Answer", "action_input": "/tmp/tmpa3uu_j6b.wav" } ``` > Finished chain. '/tmp/tmpa3uu_j6b.wav'from IPython import displayaudio => /dev/null# For Windows/Linux# !pip install --upgrade azure-ai-vision > /dev/nullimport osos.environ["OPENAI_API_KEY"] = "sk-"os.environ["AZURE_COGS_KEY"] = ""os.environ["AZURE_COGS_ENDPOINT"] = ""os.environ["AZURE_COGS_REGION"] = ""Create the Toolkit​from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkittoolkit = AzureCognitiveServicesToolkit()[tool.name for tool in toolkit.get_tools()] ['Azure Cognitive Services Image Analysis', 'Azure Cognitive Services Form Recognizer', 'Azure Cognitive Services Speech2Text', 'Azure Cognitive Services Text2Speech']Use within an Agent​from langchain import OpenAIfrom langchain.agents import initialize_agent, AgentTypellm = OpenAI(temperature=0)agent = initialize_agent( tools=toolkit.get_tools(), llm=llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)agent.run( "What can I make with these ingredients?" "https://images.openai.com/blob/9ad5a2ab-041f-475f-ad6a-b51899c50182/ingredients.png") > Entering new AgentExecutor chain... Action: ``` { "action": "Azure Cognitive Services Image Analysis", "action_input": "https://images.openai.com/blob/9ad5a2ab-041f-475f-ad6a-b51899c50182/ingredients.png" } ```Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsAzure Cognitive Services ToolkitOn this pageAzure Cognitive Services ToolkitThis toolkit is used to interact with the Azure Cognitive Services API to achieve some multimodal capabilities.Currently There are four tools bundled in this toolkit:AzureCogsImageAnalysisTool: used to extract caption, objects, tags, and text from images. (Note: this tool is not available on Mac OS yet, due to the dependency on azure-ai-vision package, which is only supported on Windows and Linux currently.)AzureCogsFormRecognizerTool: used to extract text, tables, and key-value pairs from documents.AzureCogsSpeech2TextTool: used to transcribe speech to text.AzureCogsText2SpeechTool: used to synthesize text to speech.First, you need to set up an Azure account and create a Cognitive Services resource. You can follow the instructions here to create a resource. Then, you need to get the endpoint, key and region of your resource, and set them as environment variables. You can find them in the "Keys and Endpoint" page of your resource.# !pip install --upgrade azure-ai-formrecognizer > /dev/null# !pip install --upgrade azure-cognitiveservices-speech > /dev/null# For Windows/Linux# !pip install --upgrade azure-ai-vision >Azure Cognitive Services Toolkit | 🦜ï¸�🔗 Langchain{"detail":"Not Found"} 'I have created a draft email for you to edit. The draft Id is r5681294731961864018.'agent.run("Could you search in my drafts for the latest email?") WARNING:root:Failed to load default session, using empty session: 0 WARNING:root:Failed to persist run: {"detail":"Not Found"} "The latest email in your drafts is from hopefulparrot@gmail.com with the subject 'Collaboration Opportunity'. The body of the email reads: 'Dear [Friend], I hope this letter finds you well. I am writing to you in the hopes of rekindling our friendship and to discuss the possibility of collaborating on some research together. I know that we have had our differences in the past, but I believe that we can put them aside and work together for the greater good. I look forward to hearing from you. Sincerely, [Parrot]'"PreviousGitHubNextJiraCreate the ToolkitCustomizing AuthenticationUse within an AgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.object at 0x10e5c6d10>), GmailGetMessage(name='get_gmail_message', description='Use this tool to fetch an email by message ID. Returns the thread ID, snipet, body, subject, and sender.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=), GmailGetThread(name='get_gmail_thread', description=('Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.',), args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=)]Use within an Agent​from langchain import OpenAIfrom langchain.agents import initialize_agent, AgentTypellm = OpenAI(temperature=0)agent = initialize_agent( tools=toolkit.get_tools(), llm=llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,)agent.run( "Create a gmail draft for me to edit of a letter from the perspective of a sentient parrot" " who is looking to collaborate on some research with her" " estranged friend, a cat. Under no circumstances may you send the message, however.") WARNING:root:Failed to load default session, using empty session: 0 WARNING:root:Failed to persist run: {"detail":"Not Found"} 'I have created a draft email for you toyou can manually build a googleapi resource for more auth control. from langchain.tools.gmail.utils import build_resource_service, get_gmail_credentials# Can review scopes here https://developers.google.com/gmail/api/auth/scopes# For instance, readonly scope is 'https://www.googleapis.com/auth/gmail.readonly'credentials = get_gmail_credentials( token_file="token.json", scopes=["https://mail.google.com/"], client_secrets_file="credentials.json",)api_resource = build_resource_service(credentials=credentials)toolkit = GmailToolkit(api_resource=api_resource)tools = toolkit.get_tools()tools [GmailCreateDraft(name='create_gmail_draft', description='Use this tool to create a draft email with the provided message fields.', args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=), GmailSendMessage(name='send_gmail_message', description='Use this tool to send email messages. The input is the message, recipents', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=), GmailSearch(name='search_gmail', description=('Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.',), args_schema=, return_direct=False, verbose=False, callbacks=None, callback_manager=None, api_resource=),Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsGmail ToolkitOn this pageGmail ToolkitThis notebook walks through connecting a LangChain email to the Gmail API.To use this toolkit, you will need to set up your credentials explained in the Gmail API docs. Once you've downloaded the credentials.json file, you can start using the Gmail API. Once this is done, we'll install the required libraries.pip install --upgrade google-api-python-client > /dev/nullpip install --upgrade google-auth-oauthlib > /dev/nullpip install --upgrade google-auth-httplib2 > /dev/nullpip install beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messagesCreate the Toolkit​By default the toolkit reads the local credentials.json file. You can also manually provide a Credentials object.from langchain.agents.agent_toolkits import GmailToolkittoolkit = GmailToolkit()Customizing Authentication​Behind the scenes, a googleapi resource is created using the following methods.Gmail Toolkit | 🦜ï¸�🔗 Langchaina tool for running linters and code formatters over Jupyter Notebooks. After installing ruff and nbqa, you can run Ruff over a notebook like so: > nbqa ruff Untitled.html Thought: I need to find out if the president mentioned nbQA in the state of the union. Action: state_of_union_address Action Input: Did the president mention nbQA in the state of the union? Observation: No, the president did not mention nbQA in the state of the union. Thought: I now know the final answer. Final Answer: No, the president did not mention nbQA in the state of the union. > Finished chain. 'No, the president did not mention nbQA in the state of the union.'PreviousSQL Database AgentNextXorbits AgentCreate the VectorstoresInitialize Toolkit and AgentExamplesMultiple VectorstoresExamplesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.> Entering new AgentExecutor chain... I need to find out what tool ruff uses to run over Jupyter Notebooks Action: ruff Action Input: What tool does ruff use to run over Jupyter Notebooks? Observation: Ruff is integrated into nbQA, a tool for running linters and code formatters over Jupyter Notebooks. After installing ruff and nbqa, you can run Ruff over a notebook like so: > nbqa ruff Untitled.html Thought: I now know the final answer Final Answer: Ruff is integrated into nbQA, a tool for running linters and code formatters over Jupyter Notebooks. After installing ruff and nbqa, you can run Ruff over a notebook like so: > nbqa ruff Untitled.html > Finished chain. 'Ruff is integrated into nbQA, a tool for running linters and code formatters over Jupyter Notebooks. After installing ruff and nbqa, you can run Ruff over a notebook like so: > nbqa ruff Untitled.html'agent_executor.run( "What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?") > Entering new AgentExecutor chain... I need to find out what tool ruff uses and if the president mentioned it in the state of the union. Action: ruff Action Input: What tool does ruff use to run over Jupyter Notebooks? Observation: Ruff is integrated into nbQA, a tool for running linters and code formatters over Jupyter Notebooks. After installinglangchain.agents.agent_toolkits import ( create_vectorstore_router_agent, VectorStoreRouterToolkit, VectorStoreInfo,)ruff_vectorstore_info = VectorStoreInfo( name="ruff", description="Information about the Ruff python linting library", vectorstore=ruff_store,)router_toolkit = VectorStoreRouterToolkit( vectorstores=[vectorstore_info, ruff_vectorstore_info], llm=llm)agent_executor = create_vectorstore_router_agent( llm=llm, toolkit=router_toolkit, verbose=True)Examples​agent_executor.run( "What did biden say about ketanji brown jackson in the state of the union address?") > Entering new AgentExecutor chain... I need to use the state_of_union_address tool to answer this question. Action: state_of_union_address Action Input: What did biden say about ketanji brown jackson Observation: Biden said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence. Thought: I now know the final answer Final Answer: Biden said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence. > Finished chain. "Biden said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence."agent_executor.run("What tool does ruff use to run over Jupyter Notebooks?") > Entering new AgentExecutorone of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence."agent_executor.run( "What did biden say about ketanji brown jackson in the state of the union address? List the source.") > Entering new AgentExecutor chain... I need to use the state_of_union_address_with_sources tool to answer this question. Action: state_of_union_address_with_sources Action Input: What did biden say about ketanji brown jackson Observation: {"answer": " Biden said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to the United States Supreme Court, and that she is one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence.\n", "sources": "../../state_of_the_union.txt"} Thought: I now know the final answer Final Answer: Biden said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to the United States Supreme Court, and that she is one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence. Sources: ../../state_of_the_union.txt > Finished chain. "Biden said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to the United States Supreme Court, and that she is one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence. Sources: ../../state_of_the_union.txt"Multiple Vectorstores​We can also easily use this initialize an agent with multiple vectorstores and use the agent to route between them. To do this. This agent is optimized for routing, so it is a different toolkit and initializer.from langchain.agents.agent_toolkits import ( create_vectorstore_router_agent,= Chroma.from_documents(ruff_texts, embeddings, collection_name="ruff") Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.Initialize Toolkit and Agent​First, we'll create an agent with a single vectorstore.from langchain.agents.agent_toolkits import ( create_vectorstore_agent, VectorStoreToolkit, VectorStoreInfo,)vectorstore_info = VectorStoreInfo( name="state_of_union_address", description="the most recent state of the Union adress", vectorstore=state_of_union_store,)toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)agent_executor = create_vectorstore_agent(llm=llm, toolkit=toolkit, verbose=True)Examples​agent_executor.run( "What did biden say about ketanji brown jackson in the state of the union address?") > Entering new AgentExecutor chain... I need to find the answer in the state of the union address Action: state_of_union_address Action Input: What did biden say about ketanji brown jackson Observation: Biden said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence. Thought: I now know the final answer Final Answer: Biden said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence. > Finished chain. "Biden said that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy ofSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsVectorstore AgentOn this pageVectorstore AgentThis notebook showcases an agent designed to retrieve information from one or more vectorstores, either with or without sources.Create the Vectorstores​from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain import OpenAI, VectorDBQAllm = OpenAI(temperature=0)from langchain.document_loaders import TextLoaderloader = TextLoader("../../../state_of_the_union.txt")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()state_of_union_store = Chroma.from_documents( texts, embeddings, collection_name="state-of-union") Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.from langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://beta.ruff.rs/docs/faq/")docs = loader.load()ruff_texts = text_splitter.split_documents(docs)ruff_store = Chroma.from_documents(ruff_texts, embeddings, collection_name="ruff")Vectorstore Agent | 🦜ï¸�🔗 LangchainSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsMultion ToolkitOn this pageMultion ToolkitThis notebook walks you through connecting LangChain to the MultiOn Client in your browserTo use this toolkit, you will need to add MultiOn Extension to your browser as explained in the MultiOn for Chrome.pip install --upgrade multion > /dev/nullMultiOn Setup​Login to establish connection with your extension.# Authorize connection to your Browser extentionimport multion multion.login()Use Multion Toolkit within an Agent​from langchain.agents.agent_toolkits import create_multion_agentfrom langchain.tools.multion.tool import MultionClientToolfrom langchain.agents.agent_types import AgentTypefrom langchain.chat_models import ChatOpenAIagent_executor = create_multion_agent( llm=ChatOpenAI(temperature=0), tool=MultionClientTool(), agent_type=AgentType.OPENAI_FUNCTIONS, verbose=True)agent.run("show me the weather today")agent.run( "Tweet about Elon Musk")PreviousJSON AgentNextOffice365 ToolkitMultiOn SetupUse Multion Toolkit within an AgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Multion Toolkit | 🦜ï¸�🔗 Langchain0.00017328384274151176 Invoking: `Python_REPL` with `x_test.item()` The prediction for x = 5 is 10.000173568725586. > Finished chain. 'The prediction for x = 5 is 10.000173568725586.'PreviousPowerBI Dataset AgentNextSpark Dataframe AgentUsing ZERO_SHOT_REACT_DESCRIPTIONUsing OpenAI FunctionsFibonacci ExampleTraining neural netCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.# Compute loss loss = criterion(y_pred, y_train) # Backward pass and optimization optimizer.zero_grad() loss.backward() optimizer.step() # Print the loss every 100 epochs if epoch % 100 == 0: print(f"Epoch {epoch}: Loss = {loss.item()}") # Make a prediction for x = 5 x_test = torch.tensor([[5.0]], dtype=torch.float32) y_pred = model(x_test) y_pred.item()` Epoch 100: Loss = 0.03825576975941658 Epoch 200: Loss = 0.02100197970867157 Epoch 300: Loss = 0.01152981910854578 Epoch 400: Loss = 0.006329738534986973 Epoch 500: Loss = 0.0034749575424939394 Epoch 600: Loss = 0.0019077073084190488 Epoch 700: Loss = 0.001047312980517745 Epoch 800: Loss = 0.0005749554838985205 Epoch 900: Loss = 0.0003156439634039998 Epoch 1000: Loss = 0.00017328384274151176 Invoking:dtype=torch.float32)\ny_pred = model(x_test)\ny_pred.item()'} because the `arguments` is not valid JSON.Invalid or incomplete response Invoking: `Python_REPL` with `import torch import torch.nn as nn import torch.optim as optim # Define the neural network class SingleNeuron(nn.Module): def __init__(self): super(SingleNeuron, self).__init__() self.linear = nn.Linear(1, 1) def forward(self, x): return self.linear(x) # Create the synthetic data x_train = torch.tensor([[1.0], [2.0], [3.0], [4.0]], dtype=torch.float32) y_train = torch.tensor([[2.0], [4.0], [6.0], [8.0]], dtype=torch.float32) # Create the neural network model = SingleNeuron() # Define the loss function and optimizer criterion = nn.MSELoss() optimizer = optim.SGD(model.parameters(), lr=0.01) # Train the neural network for epoch in range(1, 1001): # Forward pass y_pred = model(x_train) # Compute loss loss =Define the neural network\nclass SingleNeuron(nn.Module):\n def __init__(self):\n super(SingleNeuron, self).__init__()\n self.linear = nn.Linear(1, 1)\n \n def forward(self, x):\n return self.linear(x)\n\n# Create the synthetic data\nx_train = torch.tensor([[1.0], [2.0], [3.0], [4.0]], dtype=torch.float32)\ny_train = torch.tensor([[2.0], [4.0], [6.0], [8.0]], dtype=torch.float32)\n\n# Create the neural network\nmodel = SingleNeuron()\n\n# Define the loss function and optimizer\ncriterion = nn.MSELoss()\noptimizer = optim.SGD(model.parameters(), lr=0.01)\n\n# Train the neural network\nfor epoch in range(1, 1001):\n # Forward pass\n y_pred = model(x_train)\n \n # Compute loss\n loss = criterion(y_pred, y_train)\n \n # Backward pass and optimization\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \n # Print the loss every 100 epochs\n if epoch % 100 == 0:\n print(f"Epoch {epoch}: Loss = {loss.item()}")\n\n# Make a prediction for x = 5\nx_test = torch.tensor([[5.0]], dtype=torch.float32)\ny_pred = model(x_test)\ny_pred.item()'}tool=PythonREPLTool(), verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS, agent_executor_kwargs={"handle_parsing_errors": True},)Fibonacci Example​This example was created by John Wiseman.agent_executor.run("What is the 10th fibonacci number?") > Entering new chain... Invoking: `Python_REPL` with `def fibonacci(n): if n <= 0: return 0 elif n == 1: return 1 else: return fibonacci(n-1) + fibonacci(n-2) fibonacci(10)` The 10th Fibonacci number is 55. > Finished chain. 'The 10th Fibonacci number is 55.'Training neural net​This example was created by Samee Ur Rehman.agent_executor.run( """Understand, write a single neuron neural network in PyTorch.Take synthetic data for y=2x. Train for 1000 epochs and print every 100 epochs.Return prediction for x = 5""") > Entering new chain... Could not parse tool input: {'name': 'python', 'arguments': 'import torch\nimport torch.nn as nn\nimport torch.optim as optim\n\n# Define the neural network\nclass SingleNeuron(nn.Module):\n defSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsPython AgentOn this pagePython AgentThis notebook showcases an agent designed to write and execute python code to answer a question.from langchain.agents.agent_toolkits import create_python_agentfrom langchain.tools.python.tool import PythonREPLToolfrom langchain.python import PythonREPLfrom langchain.llms.openai import OpenAIfrom langchain.agents.agent_types import AgentTypefrom langchain.chat_models import ChatOpenAIUsing ZERO_SHOT_REACT_DESCRIPTION​This shows how to initialize the agent using the ZERO_SHOT_REACT_DESCRIPTION agent type. Note that this is an alternative to the above.agent_executor = create_python_agent( llm=OpenAI(temperature=0, max_tokens=1000), tool=PythonREPLTool(), verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,)Using OpenAI Functions​This shows how to initialize the agent using the OPENAI_FUNCTIONS agent type. Note that this is an alternative to the above.agent_executor = create_python_agent( llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"), tool=PythonREPLTool(), verbose=True,Python Agent | 🦜ï¸�🔗 LangchainJIRA_API_TOKEN JIRA_USERNAME JIRA_INSTANCE_URL%pip install atlassian-python-apiimport osfrom langchain.agents import AgentTypefrom langchain.agents import initialize_agentfrom langchain.agents.agent_toolkits.jira.toolkit import JiraToolkitfrom langchain.llms import OpenAIfrom langchain.utilities.jira import JiraAPIWrapperos.environ["JIRA_API_TOKEN"] = "abc"os.environ["JIRA_USERNAME"] = "123"os.environ["JIRA_INSTANCE_URL"] = "https://jira.atlassian.com"os.environ["OPENAI_API_KEY"] = "xyz"llm = OpenAI(temperature=0)jira = JiraAPIWrapper()toolkit = JiraToolkit.from_jira_api_wrapper(jira)agent = initialize_agent( toolkit.get_tools(), llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)agent.run("make a new issue in project PW to remind me to make more fried rice") > Entering new AgentExecutor chain... I need to create an issue in project PW Action: Create Issue Action Input: {"summary": "Make more fried rice", "description": "Reminder to make more fried rice", "issuetype": {"name": "Task"}, "priority": {"name": "Low"}, "project": {"key": "PW"}} Observation: None Thought: I now know the final answer Final Answer: A new issue has been created in project PW with the summary "Make more fried rice" and description "Reminder to make more fried rice". > Finished chain. 'A new issue has been created in project PW with the summary "Make more fried rice" and description "Reminder to make more fried rice".'PreviousGmail ToolkitNextJSON AgentCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsJiraJiraThis notebook goes over how to use the Jira tool. The Jira tool allows agents to interact with a given Jira instance, performing actions such as searching for issues and creating issues, the tool wraps the atlassian-python-api library, for more see: https://atlassian-python-api.readthedocs.io/jira.htmlTo use this tool, you must first set as environment variables: JIRA_API_TOKEN JIRA_USERNAMEJira | 🦜ï¸�🔗 Langchain"Tell the LangChain audience to 'enjoy the meal' in Italian, please!") "In Italian, you can say 'Buon appetito' to someone to wish them to enjoy their meal. This phrase is commonly used in Italy when someone is about to eat, often at the beginning of a meal. It's similar to saying 'Bon appétit' in French or 'Guten Appetit' in German."PreviousOpenAPI agentsNextPandas Dataframe AgentFirst, import dependencies and load the LLMNext, load the Natural Language API ToolkitsCreate the AgentUsing Auth + Adding more EndpointsThank you!CommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Miami Cuban Link Chain Necklace - Gold, Italian Gold Miami Cuban Link Chain Necklace - Gold, Italian Gold Herringbone Necklace - Gold, Italian Gold Claddagh Ring - Gold, Italian Gold Herringbone Chain Necklace - Gold, Garmin QuickFit 22mm Italian Vacchetta Leather Band, Macy's Italian Horn Charm - Gold, Dolce & Gabbana Light Blue Italian Love Pour Homme EdT 1.7 fl oz. Thought: I now know the final answer. Final Answer: To present for your Italian language class, you could wear an Italian Gold Sparkle Perfectina Necklace - Gold, an Italian Design Miami Cuban Link Chain Necklace - Gold, or an Italian Gold Miami Cuban Link Chain Necklace - Gold. For a recipe, you could make Turkey Tomato Cheese Pizza, Broccolini Quinoa Pilaf, Bruschetta Style Pork & Pasta, Salmon Quinoa Risotto, Italian Tuna Pasta, Roasted Brussels Sprouts With Garlic, Asparagus Lemon Risotto, Italian Steamed Artichokes, Crispy Italian Cauliflower Poppers Appetizer, or Pappa Al Pomodoro. > Finished chain. 'To present for your Italian language class, you could wear an Italian Gold Sparkle Perfectina Necklace - Gold, an Italian Design Miami Cuban Link Chain Necklace - Gold, or an Italian Gold Miami Cuban Link Chain Necklace - Gold. For a recipe, you could make Turkey Tomato Cheese Pizza, Broccolini Quinoa Pilaf, Bruschetta Style Pork & Pasta, Salmon Quinoa Risotto, Italian Tuna Pasta, Roasted Brussels Sprouts With Garlic, Asparagus Lemon Risotto, Italian Steamed Artichokes, Crispy Italian Cauliflower Poppers Appetizer, or Pappa Al Pomodoro.'Thank you!​natural_language_api_tools[1].run( "Tell the LangChain audience to 'enjoy thespoonacular_toolkit.get_tools()[:30])print(f"{len(natural_language_api_tools)} tools loaded.") 34 tools loaded.# Create an agent with the new toolsmrkl = initialize_agent( natural_language_api_tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, agent_kwargs={"format_instructions": openapi_format_instructions},)# Make the query more complex!user_input = ( "I'm learning Italian, and my language class is having an end of year party... " " Could you help me find an Italian outfit to wear and" " an appropriate recipe to prepare so I can present for the class in Italian?")mrkl.run(user_input) > Entering new AgentExecutor chain... I need to find a recipe and an outfit that is Italian-themed. Action: spoonacular_API.searchRecipes Action Input: Italian Observation: The API response contains 10 Italian recipes, including Turkey Tomato Cheese Pizza, Broccolini Quinoa Pilaf, Bruschetta Style Pork & Pasta, Salmon Quinoa Risotto, Italian Tuna Pasta, Roasted Brussels Sprouts With Garlic, Asparagus Lemon Risotto, Italian Steamed Artichokes, Crispy Italian Cauliflower Poppers Appetizer, and Pappa Al Pomodoro. Thought: I need to find an Italian-themed outfit. Action: Open_AI_Klarna_product_Api.productsUsingGET Action Input: Italian Observation: I found 10 products related to 'Italian' in the API response. These products include Italian Gold Sparkle Perfectina Necklace - Gold, Italian Design Miami Cuban Link Chain Necklace - Gold, Italian Gold Miami Cuban Link Chain Necklace - Gold, Italian Goldfor parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameternatural_language_api_tools = ( speak_toolkit.get_tools() + klarna_toolkit.get_tools() + spoonacular_toolkit.get_tools()[:30])print(f"{len(natural_language_api_tools)}endpoints may require user authentication via things like access tokens. Here we show how to pass in the authentication information via the Requests wrapper object.Since each NLATool exposes a concisee natural language interface to its wrapped API, the top level conversational agent has an easier job incorporating each endpoint to satisfy a user's request.Adding the Spoonacular endpoints.Go to the Spoonacular API Console and make a free account.Click on Profile and copy your API key below.spoonacular_api_key = "" # Copy from the API Consolerequests = Requests(headers={"x-api-key": spoonacular_api_key})spoonacular_toolkit = NLAToolkit.from_llm_and_url( llm, "https://spoonacular.com/application/frontend/downloads/spoonacular-openapi-3.json", requests=requests, max_text_length=1800, # If you want to truncate the response text) Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support. Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter Unsupported APIPropertyLocation "header" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter= initialize_agent( natural_language_tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, agent_kwargs={"format_instructions": openapi_format_instructions},)mrkl.run( "I have an end of year party for my Italian class and have to buy some Italian clothes for it") > Entering new AgentExecutor chain... I need to find out what kind of Italian clothes are available Action: Open_AI_Klarna_product_Api.productsUsingGET Action Input: Italian clothes Observation: The API response contains two products from the Alé brand in Italian Blue. The first is the Alé Colour Block Short Sleeve Jersey Men - Italian Blue, which costs $86.49, and the second is the Alé Dolid Flash Jersey Men - Italian Blue, which costs $40.00. Thought: I now know what kind of Italian clothes are available and how much they cost. Final Answer: You can buy two products from the Alé brand in Italian Blue for your end of year party. The Alé Colour Block Short Sleeve Jersey Men - Italian Blue costs $86.49, and the Alé Dolid Flash Jersey Men - Italian Blue costs $40.00. > Finished chain. 'You can buy two products from the Alé brand in Italian Blue for your end of year party. The Alé Colour Block Short Sleeve Jersey Men - Italian Blue costs $86.49, and the Alé Dolid Flash Jersey Men - Italian Blue costs $40.00.'Using Auth + Adding more Endpoints​Some endpoints may require user authentication via things like access tokens. Here we show how to pass in the authentication"https://api.speak.com/openapi.yaml")klarna_toolkit = NLAToolkit.from_llm_and_url( llm, "https://www.klarna.com/us/shopping/public/openai/v0/api-docs/") Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support. Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support. Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.Create the Agent​# Slightly tweak the instructions from the default agentopenapi_format_instructions = """Use the following format:Question: the input question you must answerThought: you should always think about what to doAction: the action to take, should be one of [{tool_names}]Action Input: what to instruct the AI Action representative.Observation: The Agent's response... (this Thought/Action/Action Input/Observation can repeat N times)Thought: I now know the final answer. User can't see any of my observations, API responses, links, or tools.Final Answer: the final answer to the original input question with the right amount of detailWhen responding with your Final Answer, remember that the person you are responding to CANNOT see any of your Thought/Action/Action Input/Observations, so if there is any relevant information there you need to include it explicitly in your response."""natural_language_tools = speak_toolkit.get_tools() + klarna_toolkit.get_tools()mrkl = initialize_agent( natural_language_tools,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsNatural Language APIsOn this pageNatural Language APIsNatural Language API Toolkits (NLAToolkits) permit LangChain Agents to efficiently plan and combine calls across endpoints. This notebook demonstrates a sample composition of the Speak, Klarna, and Spoonacluar APIs.For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the OpenAPI Operation Chain notebook.First, import dependencies and load the LLM​from typing import List, Optionalfrom langchain.chains import LLMChainfrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.requests import Requestsfrom langchain.tools import APIOperation, OpenAPISpecfrom langchain.agents import AgentType, Tool, initialize_agentfrom langchain.agents.agent_toolkits import NLAToolkit# Select the LLM to use. Here, we use text-davinci-003llm = OpenAI( temperature=0, max_tokens=700) # You can swap between different core LLM's here.Next, load the Natural Language API Toolkits​speak_toolkit = NLAToolkit.from_llm_and_url(llm, "https://api.speak.com/openapi.yaml")klarna_toolkit =Natural Language APIs | 🦜ï¸�🔗 LangchainAlgernon Henry Wilson', '80.0')] Thought:I now know the final answer. Final Answer: The oldest survived passenger is Barkworth, Mr. Algernon Henry Wilson, who was 80 years old. > Finished chain. 'The oldest survived passenger is Barkworth, Mr. Algernon Henry Wilson, who was 80 years old.'PreviousSpark Dataframe AgentNextSQL Database AgentInitializationExample: describing a tableExample: running queriesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Sex Age SibSp Parch Ticket Fare Cabin Embarked 1 0 3 Braund, Mr. Owen Harris male 22.0 1 0 A/5 21171 7.25 None S 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Thayer) female 38.0 1 0 PC 17599 71.2833 C85 C 3 1 3 Heikkinen, Miss. Laina female 26.0 0 0 STON/O2. 3101282 7.925 None S */ Thought:I can use the titanic table to find the oldest survived passenger. I will query the Name and Age columns, filtering by Survived and ordering by Age in descending order. Action: query_checker_sql_db Action Input: SELECT Name, Age FROM titanic WHERE Survived = 1 ORDER BY Age DESC LIMIT 1 Observation: SELECT Name, Age FROM titanic WHERE Survived = 1 ORDER BY Age DESC LIMIT 1 Thought:The query is correct. Now I will execute it to find the oldest survived passenger. Action: query_sql_db Action Input: SELECT Name, Age FROM titanic WHERE Survived = 1 ORDER BY Age DESC LIMIT 1 Observation: [('Barkworth, Mr. Algernon Henry Wilson', '80.0')] Thought:I now know thetitanic Thought:The query is correct, so I can execute it to find the square root of the average age. Action: query_sql_db Action Input: SELECT SQRT(AVG(Age)) as square_root_of_avg_age FROM titanic Observation: [('5.449689683556195',)] Thought:I now know the final answer Final Answer: The square root of the average age is approximately 5.45. > Finished chain. 'The square root of the average age is approximately 5.45.'agent_executor.run("What's the name of the oldest survived passenger?") > Entering new AgentExecutor chain... Action: list_tables_sql_db Action Input: Observation: titanic Thought:I should check the schema of the titanic table to see what columns are available. Action: schema_sql_db Action Input: titanic Observation: CREATE TABLE langchain_example.titanic ( PassengerId INT, Survived INT, Pclass INT, Name STRING, Sex STRING, Age DOUBLE, SibSp INT, Parch INT, Ticket STRING, Fare DOUBLE, Cabin STRING, Embarked STRING) ; /* 3 rows from titanic table: PassengerId Survived Pclass Name Sex Age SibSp Parch Ticket Fare CabinDOUBLE, Cabin STRING, Embarked STRING) ; /* 3 rows from titanic table: PassengerId Survived Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked 1 0 3 Braund, Mr. Owen Harris male 22.0 1 0 A/5 21171 7.25 None S 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Thayer) female 38.0 1 0 PC 17599 71.2833 C85 C 3 1 3 Heikkinen, Miss. Laina female 26.0 0 0 STON/O2. 3101282 7.925 None S */ Thought:There is an Age column in the titanic table. I should write a query to calculate the average age and then find the square root of the result. Action: query_checker_sql_db Action Input: SELECT SQRT(AVG(Age)) as square_root_of_avg_age FROM titanic Observation: The original query seems to be correct. Here it is again: SELECT SQRT(AVG(Age)) as square_root_of_avg_age FROM titanic Thought:The query is correct, so I can execute it to find the squareCabin: None, Embarked: S\n2. PassengerId: 2, Survived: 1, Pclass: 1, Name: Cumings, Mrs. John Bradley (Florence Briggs Thayer), Sex: female, Age: 38.0, SibSp: 1, Parch: 0, Ticket: PC 17599, Fare: 71.2833, Cabin: C85, Embarked: C\n3. PassengerId: 3, Survived: 1, Pclass: 3, Name: Heikkinen, Miss. Laina, Sex: female, Age: 26.0, SibSp: 0, Parch: 0, Ticket: STON/O2. 3101282, Fare: 7.925, Cabin: None, Embarked: S'Example: running queries​agent_executor.run("whats the square root of the average age?") > Entering new AgentExecutor chain... Action: list_tables_sql_db Action Input: Observation: titanic Thought:I should check the schema of the titanic table to see if there is an age column. Action: schema_sql_db Action Input: titanic Observation: CREATE TABLE langchain_example.titanic ( PassengerId INT, Survived INT, Pclass INT, Name STRING, Sex STRING, Age DOUBLE, SibSp INT, Parch INT, Ticket STRING, Fare DOUBLE, Cabin STRING, Embarked STRING)male, Age: 22.0, SibSp: 1, Parch: 0, Ticket: A/5 21171, Fare: 7.25, Cabin: None, Embarked: S 2. PassengerId: 2, Survived: 1, Pclass: 1, Name: Cumings, Mrs. John Bradley (Florence Briggs Thayer), Sex: female, Age: 38.0, SibSp: 1, Parch: 0, Ticket: PC 17599, Fare: 71.2833, Cabin: C85, Embarked: C 3. PassengerId: 3, Survived: 1, Pclass: 3, Name: Heikkinen, Miss. Laina, Sex: female, Age: 26.0, SibSp: 0, Parch: 0, Ticket: STON/O2. 3101282, Fare: 7.925, Cabin: None, Embarked: S > Finished chain. 'The titanic table has the following columns: PassengerId (INT), Survived (INT), Pclass (INT), Name (STRING), Sex (STRING), Age (DOUBLE), SibSp (INT), Parch (INT), Ticket (STRING), Fare (DOUBLE), Cabin (STRING), and Embarked (STRING). Here are some sample rows from the table: \n\n1. PassengerId: 1, Survived: 0, Pclass: 3, Name: Braund, Mr. Owen Harris, Sex: male, Age: 22.0, SibSp: 1, Parch: 0, Ticket: A/5 21171, Fare: 7.25, Cabin: None, Embarked: S\n2. PassengerId: 2,3 rows from titanic table: PassengerId Survived Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked 1 0 3 Braund, Mr. Owen Harris male 22.0 1 0 A/5 21171 7.25 None S 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Thayer) female 38.0 1 0 PC 17599 71.2833 C85 C 3 1 3 Heikkinen, Miss. Laina female 26.0 0 0 STON/O2. 3101282 7.925 None S */ Thought:I now know the schema and sample rows for the titanic table. Final Answer: The titanic table has the following columns: PassengerId (INT), Survived (INT), Pclass (INT), Name (STRING), Sex (STRING), Age (DOUBLE), SibSp (INT), Parch (INT), Ticket (STRING), Fare (DOUBLE), Cabin (STRING), and Embarked (STRING). Here are some sample rows from the table: 1. PassengerId: 1, Survived: 0, Pclass: 3, Name: Braund, Mr. Owen Harris, Sex: male, Age: 22.0, SibSp: 1, Parch:7.225| null| C| +-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+ only showing top 20 rows # Note, you can also connect to Spark via Spark connect. For example:# db = SparkSQL.from_uri("sc://localhost:15002", schema=schema)spark_sql = SparkSQL(schema=schema)llm = ChatOpenAI(temperature=0)toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)agent_executor = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True)Example: describing a table​agent_executor.run("Describe the titanic table") > Entering new AgentExecutor chain... Action: list_tables_sql_db Action Input: Observation: titanic Thought:I found the titanic table. Now I need to get the schema and sample rows for the titanic table. Action: schema_sql_db Action Input: titanic Observation: CREATE TABLE langchain_example.titanic ( PassengerId INT, Survived INT, Pclass INT, Name STRING, Sex STRING, Age DOUBLE, SibSp INT, Parch INT, Ticket STRING, Fare DOUBLE, Cabin STRING, Embarked STRING) ; /* 3 rows from titanic table: PassengerId Survived(Ma...|female|55.0| 0| 0| 248706| 16.0| null| S| | 17| 0| 3|Rice, Master. Eugene| male| 2.0| 4| 1| 382652| 29.125| null| Q| | 18| 1| 2|Williams, Mr. Cha...| male|null| 0| 0| 244373| 13.0| null| S| | 19| 0| 3|Vander Planke, Mr...|female|31.0| 1| 0| 345763| 18.0| null| S| | 20| 1| 3|Masselmani, Mrs. ...|female|null| 0| 0| 2649| 7.225| null| C|12| 1| 1|Bonnell, Miss. El...|female|58.0| 0| 0| 113783| 26.55| C103| S| | 13| 0| 3|Saundercock, Mr. ...| male|20.0| 0| 0| A/5. 2151| 8.05| null| S| | 14| 0| 3|Andersson, Mr. An...| male|39.0| 1| 5| 347082| 31.275| null| S| | 15| 0| 3|Vestrom, Miss. Hu...|female|14.0| 0| 0| 350406| 7.8542| null| S| | 16| 1| 2|Hewlett, Mrs. (Ma...|female|55.0| 0|17463|51.8625| E46| S| | 8| 0| 3|Palsson, Master. ...| male| 2.0| 3| 1| 349909| 21.075| null| S| | 9| 1| 3|Johnson, Mrs. Osc...|female|27.0| 0| 2| 347742|11.1333| null| S| | 10| 1| 2|Nasser, Mrs. Nich...|female|14.0| 1| 0| 237736|30.0708| null| C| | 11| 1| 3|Sandstrom, Miss. ...|female| 4.0| 1| 1| PP 9549| 16.7| G6| S| | 12| 1|Miss. ...|female|26.0| 0| 0|STON/O2. 3101282| 7.925| null| S| | 4| 1| 1|Futrelle, Mrs. Ja...|female|35.0| 1| 0| 113803| 53.1| C123| S| | 5| 0| 3|Allen, Mr. Willia...| male|35.0| 0| 0| 373450| 8.05| null| S| | 6| 0| 3| Moran, Mr. James| male|null| 0| 0| 330877| 8.4583| null| Q| | 7| 0| 1|McCarthy, Mr. Tim...| male|54.0| 0| 0| 17463|51.8625| E46|use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 23/05/18 16:03:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable +-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+ |PassengerId|Survived|Pclass| Name| Sex| Age|SibSp|Parch| Ticket| Fare|Cabin|Embarked| +-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+ | 1| 0| 3|Braund, Mr. Owen ...| male|22.0| 1| 0| A/5 21171| 7.25| null| S| | 2| 1| 1|Cumings, Mrs. Joh...|female|38.0| 1| 0| PC 17599|71.2833| C85| C| | 3| 1| 3|Heikkinen, Miss. ...|female|26.0|Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsAmadeus ToolkitAzure Cognitive Services ToolkitCSV AgentDocument ComparisonGitHubGmail ToolkitJiraJSON AgentMultion ToolkitOffice365 ToolkitOpenAPI agentsNatural Language APIsPandas Dataframe AgentPlayWright Browser ToolkitPowerBI Dataset AgentPython AgentSpark Dataframe AgentSpark SQL AgentSQL Database AgentVectorstore AgentXorbits AgentToolsVector storesGrouped by providerIntegrationsAgent toolkitsSpark SQL AgentOn this pageSpark SQL AgentThis notebook shows how to use agents to interact with a Spark SQL. Similar to SQL Database Agent, it is designed to address general inquiries about Spark SQL and facilitate error recovery.NOTE: Note that, as this agent is in active development, all answers might not be correct. Additionally, it is not guaranteed that the agent won't perform DML statements on your Spark cluster given certain questions. Be careful running it on sensitive data!Initialization​from langchain.agents import create_spark_sql_agentfrom langchain.agents.agent_toolkits import SparkSQLToolkitfrom langchain.chat_models import ChatOpenAIfrom langchain.utilities.spark_sql import SparkSQLfrom pyspark.sql import SparkSessionspark = SparkSession.builder.getOrCreate()schema = "langchain_example"spark.sql(f"CREATE DATABASE IF NOT EXISTS {schema}")spark.sql(f"USE {schema}")csv_file_path = "titanic.csv"table = "titanic"spark.read.csv(csv_file_path, header=True, inferSchema=True).write.saveAsTable(table)spark.table(table).show() Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).Spark SQL Agent | 🦜ï¸�🔗 LangchainMiss. Anna', Sex='female', Age=35.0, SibSp=0, Parch=0, Ticket='PC 17755', Fare=512.3292, Cabin=None, Embarked='C') Thought: I now know the name of the person who bought the most expensive ticket Final Answer: Miss. Anna Ward > Finished chain. 'Miss. Anna Ward'spark.stop()PreviousPython AgentNextSpark SQL AgentSpark Connect ExampleCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.3|Vander Planke, Mr...|female|31.0| 1| 0| 345763| 18.0| null| S| | 20| 1| 3|Masselmani, Mrs. ...|female|null| 0| 0| 2649| 7.225| null| C| +-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+ only showing top 20 rows from langchain.agents import create_spark_dataframe_agentfrom langchain.llms import OpenAIimport osos.environ["OPENAI_API_KEY"] = "...input your openai api key here..."agent = create_spark_dataframe_agent(llm=OpenAI(temperature=0), df=df, verbose=True)agent.run( """who bought the most expensive ticket?You can find all supported function types in https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/dataframe.html""") > Entering new AgentExecutor chain... Thought: I need to find the row with the highest fare Action: python_repl_ast Action Input: df.sort(df.Fare.desc()).first() Observation: Row(PassengerId=259, Survived=1, Pclass=1, Name='Ward, Miss. Anna', Sex='female', Age=35.0, SibSp=0,null| S| | 15| 0| 3|Vestrom, Miss. Hu...|female|14.0| 0| 0| 350406| 7.8542| null| S| | 16| 1| 2|Hewlett, Mrs. (Ma...|female|55.0| 0| 0| 248706| 16.0| null| S| | 17| 0| 3|Rice, Master. Eugene| male| 2.0| 4| 1| 382652| 29.125| null| Q| | 18| 1| 2|Williams, Mr. Cha...| male|null| 0| 0| 244373| 13.0| null| S| | 19| 0| 3|Vander Planke, Mr...|female|31.0|1| 0| 237736|30.0708| null| C| | 11| 1| 3|Sandstrom, Miss. ...|female| 4.0| 1| 1| PP 9549| 16.7| G6| S| | 12| 1| 1|Bonnell, Miss. El...|female|58.0| 0| 0| 113783| 26.55| C103| S| | 13| 0| 3|Saundercock, Mr. ...| male|20.0| 0| 0| A/5. 2151| 8.05| null| S| | 14| 0| 3|Andersson, Mr. An...| male|39.0| 1| 5| 347082| 31.275| null| S| |0| 3| Moran, Mr. James| male|null| 0| 0| 330877| 8.4583| null| Q| | 7| 0| 1|McCarthy, Mr. Tim...| male|54.0| 0| 0| 17463|51.8625| E46| S| | 8| 0| 3|Palsson, Master. ...| male| 2.0| 3| 1| 349909| 21.075| null| S| | 9| 1| 3|Johnson, Mrs. Osc...|female|27.0| 0| 2| 347742|11.1333| null| S| | 10| 1| 2|Nasser, Mrs. Nich...|female|14.0| 1| 0|S| | 2| 1| 1|Cumings, Mrs. Joh...|female|38.0| 1| 0| PC 17599|71.2833| C85| C| | 3| 1| 3|Heikkinen, Miss. ...|female|26.0| 0| 0|STON/O2. 3101282| 7.925| null| S| | 4| 1| 1|Futrelle, Mrs. Ja...|female|35.0| 1| 0| 113803| 53.1| C123| S| | 5| 0| 3|Allen, Mr. Willia...| male|35.0| 0| 0| 373450| 8.05| null| S| | 6| 0| 3| Moran, Mr. James|Spark session on the client where our application runs. Before we can do that, we need# to make sure to stop the existing regular Spark session because it cannot coexist with the remote# Spark Connect session we are about to create.SparkSession.builder.master("local[*]").getOrCreate().stop() 23/05/08 10:06:09 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.# The command we used above to launch the server configured Spark to run as localhost:15002.# So now we can create a remote Spark session on the client using the following command.spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()csv_file_path = "titanic.csv"df = spark.read.csv(csv_file_path, header=True, inferSchema=True)df.show() +-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+ |PassengerId|Survived|Pclass| Name| Sex| Age|SibSp|Parch| Ticket| Fare|Cabin|Embarked| +-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+ | 1| 0| 3|Braund, Mr. Owen ...| male|22.0| 1| 0| A/5 21171| 7.25| null| S| |> Entering new AgentExecutor chain... Thought: I need to get the average age first Action: python_repl_ast Action Input: df.agg({"Age": "mean"}).collect()[0][0] Observation: 29.69911764705882 Thought: I now have the average age, I need to get the square root Action: python_repl_ast Action Input: math.sqrt(29.69911764705882) Observation: name 'math' is not defined Thought: I need to import math first Action: python_repl_ast Action Input: import math Observation: Thought: I now have the math library imported, I can get the square root Action: python_repl_ast Action Input: math.sqrt(29.69911764705882) Observation: 5.449689683556195 Thought: I now know the final answer Final Answer: 5.449689683556195 > Finished chain. '5.449689683556195'spark.stop()Spark Connect Example​# in apache-spark root directory. (tested here with "spark-3.4.0-bin-hadoop3 and later")# To launch Spark with support for Spark Connect sessions, run the start-connect-server.sh script../sbin/start-connect-server.sh --packages org.apache.spark:spark-connect_2.12:3.4.0from pyspark.sql import SparkSession# Now that the Spark server is running, we can connect to it remotely using Spark Connect. We do this by# creating a remote Spark session on the client where our application runs. Before we can do that, we need# to�c���S<}i�����\ � &����{a"!%�'�*�-�0W4|7�:�=�@D GJ�LPS=VWwZ�\x_ b�d#g�iNl�l&r9sks9yw~3^����k���3�k�\���U�p�3��I���(�1�X��������h���j�H�����B�����%U: U����>#�%&,�/@3r39�>-ElJ�NTDT�Z�`dNd,jaj�p�vo{��΃����Q������U�6�b��K�y�$��Y���x��{���h���^������0�����<�Mj� � fK����"/)/�/�/�5=9u9�?|FtL�P�TBYR]Kage�i�m�q"v�z�~ ���1����������Z����Q����T����'�!�X���q��������C ���#�#�)�/u2�2d8�9�?nB�B`HJ�M�MtS�V�V�]Vb�e�i�m�q�u�y�}�����[�O����C�ۍ����z���K�.�L�%�E�ڣ��/�����&�������غ�-��������W��J�,�������_��������i�'Lion | Characteristics, Ha2 Answer the following questions as best you can... OpenAI 1 7 Answer the following questions as best you can... OpenAI 2 12 Answer the following questions as best you can... OpenAI output_step output \ 0 3 I need to find out who sang summer of 69 and ... 1 8 I need to find out who Bryan Adams is married... 2 13 I now know the final answer.\nFinal Answer: B... token_usage_total_tokens token_usage_prompt_tokens \ 0 223 189 1 270 242 2NaN 4 NaN .. ... 66 NaN 67 NaN 68 NaN 69 NaN 70 Bryan Adams has never been married. [71 rows x 47 columns], 'session_analysis': prompt_step prompts name \ 0 2 Answer the following questions as best you can... OpenAINaN 68 NaN NaN NaN 69 NaN NaN Bryan Adams has never been married. 70 NaN NaN NaN outputs 0 NaN 1 NaN 2 NaN 3NaN 3 NaN NaN NaN 4 NaN NaN NaN .. ... ... ... 66 NaN NaN Bryan Adams has never married. In the 1990s, h... 67 NaN NaN NaN 68 NaNNaN 69 NaN NaN I now know the final answer.\nFinal Answer: B... 70 NaN NaN NaN input_str description output \ 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaNNaN 3 NaN NaN NaN 4 NaN NaN NaN .. ... ... ... 66 NaN NaN NaN 67 NaN NaN NaN 68 NaN NaN0 3 ... NaN NaN NaN 70 1 1 3 ... NaN NaN NaN tool tool_input log \ 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 NaNNaN NaN NaN 4 0 0 1 ... NaN NaN NaN .. ... ... ... ... ... ... ... 66 1 0 2 ... NaN NaN NaN 67 1 0 3 ... NaN NaN NaN 68 1 0 3 ... 85.4 83.14 NaN 69 1 0 36 0 0 70 on_chain_end NaN 15 8 7 0 0 chain_starts chain_ends llm_starts ... gulpease_index osman input \ 0 0 0 1 ... NaN NaN NaN 1 0 0 1 ... NaN NaN NaN 2 0 0 1 ... NaN NaN NaN 3 0 0 1 ... NaN NaN NaN0 3 on_llm_start OpenAI 1 1 0 0 0 4 on_llm_start OpenAI 1 1 0 0 0 .. ... ... ... ... ... ... ... 66 on_tool_end NaN 11 7 4 0 0 67 on_llm_start OpenAI 12 8 4 0 0 68 on_llm_end NaN 13 8 5 0 0 69 on_agent_finish NaN 14 8 6 01, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 1} > Finished chain. {'action': 'on_chain_end', 'outputs': 'Bryan Adams has never been married.', 'step': 15, 'starts': 8, 'ends': 7, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 1, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 1} {'action_records': action name step starts ends errors text_ctr \ 0 on_llm_start OpenAI 1 1 0 0 0 1 on_llm_start OpenAI 1 1 0 0 0 2 on_llm_start OpenAI 1 1 0 0 0 30, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0, 'text': ' I now know the final answer.\nFinal Answer: Bryan Adams has never been married.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 81.29, 'flesch_kincaid_grade': 3.7, 'smog_index': 0.0, 'coleman_liau_index': 5.75, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 7.37, 'difficult_words': 1, 'linsear_write_formula': 2.5, 'gunning_fog': 2.8, 'text_standard': '3rd and 4th grade', 'fernandez_huerta': 115.7, 'szigriszt_pazos': 110.84, 'gutierrez_polini': 49.79, 'crawford': 0.7, 'gulpease_index': 85.4, 'osman': 83.14} I now know the final answer. Final Answer: Bryan Adams has never been married. {'action': 'on_agent_finish', 'output': 'Bryan Adams has never been married.', 'log': ' I now know the final answer.\nFinal Answer: Bryan Adams has never been married.', 'step': 14, 'starts': 8, 'ends': 6, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3,need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of the person who sang summer of 69?\nThought: I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"\nObservation: Bryan Adams - Summer Of 69 (Official Music Video).\nThought: I need to find out who Bryan Adams is married to.\nAction: Search\nAction Input: "Who is Bryan Adams married to"\nObservation: Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...\nThought:'} {'action': 'on_llm_end', 'token_usage_prompt_tokens': 314, 'token_usage_completion_tokens': 18, 'token_usage_total_tokens': 332, 'model_name': 'text-davinci-003', 'step': 13, 'starts': 8, 'ends': 5, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3,'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 1, 'agent_ends': 0} Observation: Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ... Thought:{'action': 'on_tool_end', 'output': 'Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...', 'step': 11, 'starts': 7, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0} {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 12, 'starts': 8, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for4.2, 'text_standard': '4th and 5th grade', 'fernandez_huerta': 124.13, 'szigriszt_pazos': 119.2, 'gutierrez_polini': 52.26, 'crawford': 0.7, 'gulpease_index': 74.7, 'osman': 84.2} I need to find out who Bryan Adams is married to. Action: Search Action Input: "Who is Bryan Adams married to"{'action': 'on_agent_action', 'tool': 'Search', 'tool_input': 'Who is Bryan Adams married to', 'log': ' I need to find out who Bryan Adams is married to.\nAction: Search\nAction Input: "Who is Bryan Adams married to"', 'step': 9, 'starts': 6, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 3, 'tool_ends': 1, 'agent_ends': 0} {'action': 'on_tool_start', 'input_str': 'Who is Bryan Adams married to', 'name': 'Search', 'description': 'A search engine. Useful for when you need to answer questions about current events. Input should be a search query.', 'step': 10, 'starts': 7, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts':final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of the person who sang summer of 69?\nThought: I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"\nObservation: Bryan Adams - Summer Of 69 (Official Music Video).\nThought:'} {'action': 'on_llm_end', 'token_usage_prompt_tokens': 242, 'token_usage_completion_tokens': 28, 'token_usage_total_tokens': 270, 'model_name': 'text-davinci-003', 'step': 8, 'starts': 5, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'text': ' I need to find out who Bryan Adams is married to.\nAction: Search\nAction Input: "Who is Bryan Adams married to"', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 94.66, 'flesch_kincaid_grade': 2.7, 'smog_index': 0.0, 'coleman_liau_index': 4.73, 'automated_readability_index': 4.0, 'dale_chall_readability_score': 7.16, 'difficult_words': 2, 'linsear_write_formula': 4.25, 'gunning_fog': 4.2, 'text_standard': '4th and 5th grade','on_tool_end', 'output': 'Bryan Adams - Summer Of 69 (Official Music Video).', 'step': 6, 'starts': 4, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0} {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 7, 'starts': 5, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of92.16} I need to find out who sang summer of 69 and then find out who their wife is. Action: Search Action Input: "Who sang summer of 69"{'action': 'on_agent_action', 'tool': 'Search', 'tool_input': 'Who sang summer of 69', 'log': ' I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"', 'step': 4, 'starts': 3, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 1, 'tool_ends': 0, 'agent_ends': 0} {'action': 'on_tool_start', 'input_str': 'Who sang summer of 69', 'name': 'Search', 'description': 'A search engine. Useful for when you need to answer questions about current events. Input should be a search query.', 'step': 5, 'starts': 4, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 0, 'agent_ends': 0} Observation: Bryan Adams - Summer Of 69 (Official Music Video). Thought:{'action': 'on_tool_end', 'output': 'Bryan Adams - Summer Of 69 (Official Music{'action': 'on_llm_end', 'token_usage_prompt_tokens': 189, 'token_usage_completion_tokens': 34, 'token_usage_total_tokens': 223, 'model_name': 'text-davinci-003', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': ' I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 91.61, 'flesch_kincaid_grade': 3.8, 'smog_index': 0.0, 'coleman_liau_index': 3.41, 'automated_readability_index': 3.5, 'dale_chall_readability_score': 6.06, 'difficult_words': 2, 'linsear_write_formula': 5.75, 'gunning_fog': 5.4, 'text_standard': '3rd and 4th grade', 'fernandez_huerta': 121.07, 'szigriszt_pazos': 119.5, 'gutierrez_polini': 54.91, 'crawford': 0.9, 'gulpease_index': 72.7, 'osman': 92.16} I need to find out who sang summer0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 0, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'input': 'Who is the wife of the person who sang summer of 69?'} {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 2, 'starts': 2, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of the person who sang summer of 69?\nThought:'} {'action': 'on_llm_end', 'token_usage_prompt_tokens':and take a look at the resulting ClearML Task that was created.Among others, you should see that this notebook is saved along with any git information. The model JSON that contains the used parameters is saved as an artifact, there are also console logs and under the plots section, you'll find tables that represent the flow of the chain.Finally, if you enabled visualizations, these are stored as HTML files under debug samples.Scenario 2: Creating an agent with tools​To show a more advanced workflow, let's create an agent with access to tools. The way ClearML tracks the results is not different though, only the table will look slightly different as there are other types of actions taken when compared to the earlier, simpler example.You can now also see the use of the finish=True keyword, which will fully close the ClearML Task, instead of just resetting the parameters and prompts for a new conversation.from langchain.agents import initialize_agent, load_toolsfrom langchain.agents import AgentType# SCENARIO 2 - Agent with Toolstools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callbacks=callbacks,)agent.run("Who is the wife of the person who sang summer of 69?")clearml_callback.flush_tracker( langchain_asset=agent, name="Agent with Tools", finish=True) > Entering new AgentExecutor chain... {'action': 'on_chain_start', 'name': 'AgentExecutor', 'step': 1, 'starts': 1, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0,4 -0.2 79.8 116.91 5 1.4 72.1 100.17 6 -0.2 79.8 116.91 7 1.4 72.1 100.17 8 -0.2 79.8 116.91 9 1.4 72.1 100.17 10 -0.2 79.8 116.91 11 1.4 72.1 100.17 [12 rows x 24 columns]} 2023-03-29 14:00:25,948 - clearml.Task - INFO - Completed model upload to https://files.clear.ml/langchain_callback_demo/llm.988bd727b0e94a29a3ac0ee526813545/models/simple_sequentialAt this point you can already go to https://app.clear.ml and take a look at the resulting ClearML Task that was created.Among others,131.54 62.30 9 6th and 7th grade 115.58 112.37 54.83 10 5th and 6th grade 133.58 131.54 62.30 11 6th and 7th grade 115.58 112.37 54.83 crawford gulpease_index osman 0 -0.2 79.8 116.91 1 1.4 72.1 100.17 2 -0.2 79.8 116.91 3 1.4 72.1 100.17 4 -0.2131.54 62.30 3 6th and 7th grade 115.58 112.37 54.83 4 5th and 6th grade 133.58 131.54 62.30 5 6th and 7th grade 115.58 112.37 54.83 6 5th and 6th grade 133.58 131.54 62.30 7 6th and 7th grade 115.58 112.37 54.83 8 5th and 6th grade 133.58 131.54... 2 6.5 8.28 10 ... 0 5.5 5.20 11 ... 2 6.5 8.28 text_standard fernandez_huerta szigriszt_pazos gutierrez_polini \ 0 5th and 6th grade 133.58 131.54 62.30 1 6th and 7th grade 115.58 112.37 54.83 2 5th and 6th grade 133.58 131.54... 2 6.5 8.28 4 ... 0 5.5 5.20 5 ... 2 6.5 8.28 6 ... 0 5.5 5.20 7 ... 2 6.5 8.28 8 ... 0 5.5 5.20 9 ... 210 138 109.04 1.3 11 138 83.66 4.8 ... difficult_words linsear_write_formula gunning_fog \ 0 ... 0 5.5 5.20 1 ... 2 6.5 8.28 2 ... 0 5.5 5.20 3 ... 2138 83.66 4.8 6 138 109.04 1.3 7 138 83.66 4.8 8 138 109.04 1.3 9 138 83.66 4.8 10138 109.04 1.3 1 138 83.66 4.8 2 138 109.04 1.3 3 138 83.66 4.8 4 138 109.04 1.3 5162 24 7 162 24 8 162 24 9 162 24 10 162 24 11 162 24 token_usage_completion_tokens flesch_reading_ease flesch_kincaid_grade \ 0 138\ 0 162 24 1 162 24 2 162 24 3 162 24 4 162 24 5 162 24 6 162output \ 0 \n\nQ: What did the fish say when it hit the w... 1 \n\nRoses are red,\nViolets are blue,\nSugar i... 2 \n\nQ: What did the fish say when it hit the w... 3 \n\nRoses are red,\nViolets are blue,\nSugar i... 4 \n\nQ: What did the fish say when it hit the w... 5 \n\nRoses are red,\nViolets are blue,\nSugar i... 6 \n\nQ: What did the fish say when it hit the w... 7 \n\nRoses are red,\nViolets are blue,\nSugar i... 8 \n\nQ: What did the fish say when it hit the w... 9 \n\nRoses are red,\nViolets are blue,\nSugar i... 10 \n\nQ: What did the fish say when it hit the w... 11 \n\nRoses are red,\nViolets are blue,\nSugar i... token_usage_total_tokens token_usage_prompt_tokens \ 02 4 1 Tell me a joke OpenAI 2 5 1 Tell me a poem OpenAI 2 6 3 Tell me a joke OpenAI 4 7 3 Tell me a poem OpenAI 4 8 3 Tell me a joke OpenAI 4 9 3 Tell me a poem OpenAI 4 10 3 Tell me a joke OpenAI 4 11 3 Tell me a poem OpenAI 479.8 116.91 21 54.83 1.4 72.1 100.17 22 62.30 -0.2 79.8 116.91 23 54.83 1.4 72.1 100.17 [24 rows x 39 columns], 'session_analysis': prompt_step prompts name output_step \ 0 1 Tell me a joke OpenAI 2 1 1 Tell me a poem OpenAI 2 2 1 Tell me a joke OpenAI 2 3 1 Tell me a poem OpenAI 2 4NaN 13 NaN NaN NaN NaN 14 NaN NaN NaN NaN 15 NaN NaN NaN NaN 16 NaN NaN NaN NaN 17 NaN NaN NaN NaN 18 62.30 -0.2 79.8 116.91 19 54.83 1.4 72.1 100.17 20 62.30 -0.2 79.8 116.91NaN NaN NaN 6 62.30 -0.2 79.8 116.91 7 54.83 1.4 72.1 100.17 8 62.30 -0.2 79.8 116.91 9 54.83 1.4 72.1 100.17 10 62.30 -0.2 79.8 116.91 11 54.83 1.4 72.1 100.17 12 NaN NaN NaN NaN 13133.58 131.54 23 8.28 6th and 7th grade 115.58 112.37 gutierrez_polini crawford gulpease_index osman 0 NaN NaN NaN NaN 1 NaN NaN NaN NaN 2 NaN NaN NaN NaN 3 NaN NaN NaN NaN 4 NaN NaN NaN NaN 5 NaN NaN NaN NaNNaN NaN NaN NaN 17 NaN NaN NaN NaN 18 5.20 5th and 6th grade 133.58 131.54 19 8.28 6th and 7th grade 115.58 112.37 20 5.20 5th and 6th grade 133.58 131.54 21 8.28 6th and 7th grade 115.58 112.37 22 5.20 5th and 6th grade 133.58112.37 10 5.20 5th and 6th grade 133.58 131.54 11 8.28 6th and 7th grade 115.58 112.37 12 NaN NaN NaN NaN 13 NaN NaN NaN NaN 14 NaN NaN NaN NaN 15 NaN NaN NaN NaN 16 NaNNaN NaN NaN 4 NaN NaN NaN NaN 5 NaN NaN NaN NaN 6 5.20 5th and 6th grade 133.58 131.54 7 8.28 6th and 7th grade 115.58 112.37 8 5.20 5th and 6th grade 133.58 131.54 9 8.28 6th and 7th grade 115.58 112.37 102 ... 0.0 5.5 23 0 2 ... 2.0 6.5 gunning_fog text_standard fernandez_huerta szigriszt_pazos \ 0 NaN NaN NaN NaN 1 NaN NaN NaN NaN 2 NaN NaN NaN NaN 3 NaN NaN NaN0 2 ... NaN NaN 18 0 2 ... 0.0 5.5 19 0 2 ... 2.0 6.5 20 0 2 ... 0.0 5.5 21 0 2 ... 2.0 6.5 22 0 2 ...2.0 6.5 12 0 2 ... NaN NaN 13 0 2 ... NaN NaN 14 0 2 ... NaN NaN 15 0 2 ... NaN NaN 16 0 2 ... NaN NaN 17 00.0 5.5 7 0 1 ... 2.0 6.5 8 0 1 ... 0.0 5.5 9 0 1 ... 2.0 6.5 10 0 1 ... 0.0 5.5 11 0 1 ... 2.01 0 1 ... NaN NaN 2 0 1 ... NaN NaN 3 0 1 ... NaN NaN 4 0 1 ... NaN NaN 5 0 1 ... NaN NaN 6 0 1 ... 0.0on_llm_end NaN 4 2 2 0 0 0 21 on_llm_end NaN 4 2 2 0 0 0 22 on_llm_end NaN 4 2 2 0 0 0 23 on_llm_end NaN 4 2 2 0 0 0 chain_ends llm_starts ... difficult_words linsear_write_formula \ 0 0 1 ... NaN NaN 1 00 0 15 on_llm_start OpenAI 3 2 1 0 0 0 16 on_llm_start OpenAI 3 2 1 0 0 0 17 on_llm_start OpenAI 3 2 1 0 0 0 18 on_llm_end NaN 4 2 2 0 0 0 19 on_llm_end NaN 4 2 2 0 0 0 20 on_llm_end NaN 4NaN 2 1 1 0 0 0 10 on_llm_end NaN 2 1 1 0 0 0 11 on_llm_end NaN 2 1 1 0 0 0 12 on_llm_start OpenAI 3 2 1 0 0 0 13 on_llm_start OpenAI 3 2 1 0 0 0 14 on_llm_start OpenAI 3 2 1 0 00 4 on_llm_start OpenAI 1 1 0 0 0 0 5 on_llm_start OpenAI 1 1 0 0 0 0 6 on_llm_end NaN 2 1 1 0 0 0 7 on_llm_end NaN 2 1 1 0 0 0 8 on_llm_end NaN 2 1 1 0 0 0 9 on_llm_end NaN 2 1and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17} {'action_records': action name step starts ends errors text_ctr chain_starts \ 0 on_llm_start OpenAI 1 1 0 0 0 0 1 on_llm_start OpenAI 1 1 0 0 0 0 2 on_llm_start OpenAI 1 1 0 0 0 0 3 on_llm_start OpenAI 1 1 0 0 0 0 4 on_llm_startand 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91} {'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58,'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17} {'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58,'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91} {'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta':8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17} {'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta':'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91} {'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'} {'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th andme a poem'} {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'} {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'} {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'} {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1,in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/allegroai/clearml/issues with the tag `langchain`.Scenario 1: Just an LLM​First, let's just run a single LLM a few times and capture the resulting prompt-answer conversation in ClearML# SCENARIO 1 - LLMllm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)# After every generation run, use flush to make sure all the metrics# prompts and other output are properly saved separatelyclearml_callback.flush_tracker(langchain_asset=llm, name="simple_sequential") {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'} {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'} {'action': 'on_llm_start', 'name':with out-of-the-box Model MonitoringFire Reports - Create and share rich MarkDown documents supporting embeddable online contentIn order to properly keep track of your langchain experiments and their results, you can enable the ClearML integration. We use the ClearML Experiment Manager that neatly tracks and organizes all your experiment runs.Installation and Setup​pip install clearmlpip install pandaspip install textstatpip install spacypython -m spacy download en_core_web_smGetting API Credentials​We'll be using quite some APIs in this notebook, here is a list and where to get them:ClearML: https://app.clear.ml/settings/workspace-configurationOpenAI: https://platform.openai.com/account/api-keysSerpAPI (google search): https://serpapi.com/dashboardimport osos.environ["CLEARML_API_ACCESS_KEY"] = ""os.environ["CLEARML_API_SECRET_KEY"] = ""os.environ["OPENAI_API_KEY"] = ""os.environ["SERPAPI_API_KEY"] = ""Callbacks​from langchain.callbacks import ClearMLCallbackHandlerfrom datetime import datetimefrom langchain.callbacks import StdOutCallbackHandlerfrom langchain.llms import OpenAI# Setup and use the ClearML Callbackclearml_callback = ClearMLCallbackHandler( task_type="inference", project_name="langchain_callback_demo", task_name="llm", tags=["test"], # Change the following parameters based on the amount of detail you want tracked visualize=True, complexity_metrics=True, stream_logs=True,)callbacks = [StdOutCallbackHandler(), clearml_callback]# Get the OpenAI model ready to gollm = OpenAI(temperature=0, callbacks=callbacks) The clearml callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues toDeploy new model endpoints in under 5 minutes Includes optimized GPU serving support backed by Nvidia-TritonEndpointSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeTairTelegramTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredVectaraVespaWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterYeager.aiYouTubeZepZillizIntegrationsGrouped by providerClearMLOn this pageClearMLClearML is a ML/DL development and production suite, it contains 5 main modules:Experiment Manager - Automagical experiment tracking, environments and resultsMLOps - Orchestration, Automation & Pipelines solution for ML/DL jobs (K8s / Cloud / bare-metal)Data-Management - Fully differentiable data management & version control solution on top of object-storage (S3 / GS / Azure / NAS)Model-Serving - cloud-ready Scalable model serving solution!Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerWandB TracingAI21 LabsAimAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAmazon API GatewayAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasDBAwaDBAWS S3 DirectoryAZLyricsAzure Blob StorageAzure Cognitive SearchAzure OpenAIBananaBasetenBeamBedrockBiliBiliBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLCnosDBCohereCollege ConfidentialCometConfluenceC TransformersDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeep LakeDiffbotDiscordDocugamiDuckDBElasticsearchEverNoteFacebook ChatFigmaFlyteForefrontAIGitGitBookGoldenGoogle BigQueryGoogle Cloud StorageGoogle DriveGoogle SearchGoogle SerperGooseAIGPT4AllGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHugging FaceiFixitIMSDbInfinoJinaLanceDBLangChain Decorators ✨Llama.cppMarqoMediaWikiDumpMetalMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordMilvusMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMotherduckMyScaleNLPCloudNotion DBObsidianOpenAIOpenLLMOpenSearchOpenWeatherMapPetalsPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerPsychicQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4SageMaker EndpointSearxNG Search APISerpAPIShaleClearML | 🦜ï¸�🔗 LangchainIt is broken into two parts: installation and setup, and then references to the specific SerpAPI wrapper.Installation and Setup​Install requirements with pip install google-search-resultsGet a SerpAPI api key and either set it as an environment variable (SERPAPI_API_KEY)Wrappers​Utility​There exists a SerpAPI utility which wraps this API. To import this utility:from langchain.utilities import SerpAPIWrapperFor a more detailed walkthrough of this wrapper, see this notebook.Tool​You can also easily load this wrapper as a Tool (to use with an Agent). You can do this with:from langchain.agents import load_toolstools = load_tools(["serpapi"])For more information on this, see this pagePreviousSearxNG Search APINextShale ProtocolInstallation and SetupWrappersUtilityToolCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.EndpointSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeTairTelegramTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredVectaraVespaWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterYeager.aiYouTubeZepZillizIntegrationsGrouped by providerSerpAPIOn this pageSerpAPIThis page covers how to use the SerpAPI search APIs within LangChain.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerWandB TracingAI21 LabsAimAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAmazon API GatewayAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasDBAwaDBAWS S3 DirectoryAZLyricsAzure Blob StorageAzure Cognitive SearchAzure OpenAIBananaBasetenBeamBedrockBiliBiliBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLCnosDBCohereCollege ConfidentialCometConfluenceC TransformersDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeep LakeDiffbotDiscordDocugamiDuckDBElasticsearchEverNoteFacebook ChatFigmaFlyteForefrontAIGitGitBookGoldenGoogle BigQueryGoogle Cloud StorageGoogle DriveGoogle SearchGoogle SerperGooseAIGPT4AllGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHugging FaceiFixitIMSDbInfinoJinaLanceDBLangChain Decorators ✨Llama.cppMarqoMediaWikiDumpMetalMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordMilvusMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMotherduckMyScaleNLPCloudNotion DBObsidianOpenAIOpenLLMOpenSearchOpenWeatherMapPetalsPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerPsychicQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4SageMaker EndpointSearxNG Search APISerpAPIShaleSerpAPI | 🦜ï¸�🔗 Langchain) return agent_executor.run(input)Deploy on Jina AI Cloud with lc-serve deploy jcloud app. Once deployed, we can send a POST request to the API endpoint to get a response.curl -X 'POST' 'https://.wolf.jina.ai/ask' \ -d '{ "input": "Your Quesion here?", "envs": { "OPENAI_API_KEY": "sk-***" }}'You can also self-host the app on your infrastructure with Docker-compose or Kubernetes. See here for more details.Langchain-serve also allows to deploy the apps with WebSocket APIs and Slack Bots both on Jina AI Cloud or self-hosted infrastructure.PreviousInfinoNextLanceDBInstallation and SetupWrappersEmbeddingsDeploymentUsageCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.It is broken into two parts: installation and setup, and then references to specific Jina wrappers.Installation and Setup​Install the Python SDK with pip install jinaGet a Jina AI Cloud auth token from here and set it as an environment variable (JINA_AUTH_TOKEN)Wrappers​Embeddings​There exists a Jina Embeddings wrapper, which you can access with from langchain.embeddings import JinaEmbeddingsFor a more detailed walkthrough of this, see this notebookDeployment​Langchain-serve, powered by Jina, helps take LangChain apps to production with easy to use REST/WebSocket APIs and Slack bots. Usage​Install the package from PyPI. pip install langchain-serveWrap your LangChain app with the @serving decorator. # app.pyfrom lcserve import serving@servingdef ask(input: str) -> str: from langchain import LLMChain, OpenAI from langchain.agents import AgentExecutor, ZeroShotAgent tools = [...] # list of tools prompt = ZeroShotAgent.create_prompt( tools, input_variables=["input", "agent_scratchpad"], ) llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt) agent = ZeroShotAgent( llm_chain=llm_chain, allowed_tools=[tool.name for tool in tools] ) agent_executor = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, ) return agent_executor.run(input)Deploy on Jina AI Cloud with lc-serveEndpointSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeTairTelegramTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredVectaraVespaWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterYeager.aiYouTubeZepZillizIntegrationsGrouped by providerJinaOn this pageJinaThis page covers how to use the Jina ecosystem within LangChain.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerWandB TracingAI21 LabsAimAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAmazon API GatewayAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasDBAwaDBAWS S3 DirectoryAZLyricsAzure Blob StorageAzure Cognitive SearchAzure OpenAIBananaBasetenBeamBedrockBiliBiliBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLCnosDBCohereCollege ConfidentialCometConfluenceC TransformersDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeep LakeDiffbotDiscordDocugamiDuckDBElasticsearchEverNoteFacebook ChatFigmaFlyteForefrontAIGitGitBookGoldenGoogle BigQueryGoogle Cloud StorageGoogle DriveGoogle SearchGoogle SerperGooseAIGPT4AllGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHugging FaceiFixitIMSDbInfinoJinaLanceDBLangChain Decorators ✨Llama.cppMarqoMediaWikiDumpMetalMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordMilvusMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMotherduckMyScaleNLPCloudNotion DBObsidianOpenAIOpenLLMOpenSearchOpenWeatherMapPetalsPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerPsychicQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4SageMaker EndpointSearxNG Search APISerpAPIShaleJina | 🦜ï¸�🔗 Langchain= "ENTER YOUR API KEY"llm = OpenAI()template = """Question: {question}# Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousSerpAPINextSingleStoreDBHow to1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the "Shale Bot" on our Discord. No credit card is required and no free trials. It's a forever free tier with 1K limit per day per API key.2. Use https://shale.live/v1 as OpenAI API drop-in replacementCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.EndpointSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeTairTelegramTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredVectaraVespaWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterYeager.aiYouTubeZepZillizIntegrationsGrouped by providerShale ProtocolOn this pageShale ProtocolShale Protocol provides production-ready inference APIs for open LLMs. It's a Plug & Play API as it's hosted on a highly scalable GPU cloud infrastructure. Our free tier supports up to 1K daily requests per key as we want to eliminate the barrier for anyone to start building genAI apps with LLMs. With Shale Protocol, developers/researchers can create apps and explore the capabilities of open LLMs at no cost.This page covers how Shale-Serve API can be incorporated with LangChain.As of June 2023, the API supports Vicuna-13B by default. We are going to support more LLMs such as Falcon-40B in future releases. How to​1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the "Shale Bot" on our Discord. No credit card is required and no free trials. It's a forever free tier with 1K limit per day per API key.​2. Use https://shale.live/v1 as OpenAI API drop-in replacement​For examplefrom langchain.llms import OpenAIfrom langchain import PromptTemplate, LLMChainimport osos.environ['OPENAI_API_BASE'] = "https://shale.live/v1"os.environ['OPENAI_API_KEY'] = "ENTER YOUR API KEY"llm = OpenAI()template = """Question:Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerWandB TracingAI21 LabsAimAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAmazon API GatewayAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasDBAwaDBAWS S3 DirectoryAZLyricsAzure Blob StorageAzure Cognitive SearchAzure OpenAIBananaBasetenBeamBedrockBiliBiliBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLCnosDBCohereCollege ConfidentialCometConfluenceC TransformersDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeep LakeDiffbotDiscordDocugamiDuckDBElasticsearchEverNoteFacebook ChatFigmaFlyteForefrontAIGitGitBookGoldenGoogle BigQueryGoogle Cloud StorageGoogle DriveGoogle SearchGoogle SerperGooseAIGPT4AllGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHugging FaceiFixitIMSDbInfinoJinaLanceDBLangChain Decorators ✨Llama.cppMarqoMediaWikiDumpMetalMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordMilvusMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMotherduckMyScaleNLPCloudNotion DBObsidianOpenAIOpenLLMOpenSearchOpenWeatherMapPetalsPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerPsychicQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4SageMaker EndpointSearxNG Search APISerpAPIShaleShale Protocol | 🦜ï¸�🔗 Langchainis an observability platform designed to monitor data pipelines and ML applications for data quality regressions, data drift, and model performance degradation. Built on top of an open-source package called whylogs, the platform enables Data Scientists and Engineers to:📄ï¸� WikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.📄ï¸� Wolfram AlphaWolframAlpha is an answer engine developed by Wolfram Research.📄ï¸� WriterThis page covers how to use the Writer ecosystem within LangChain.📄ï¸� Yeager.aiThis page covers how to use Yeager.ai to generate LangChain tools and agents.📄ï¸� YouTubeYouTube is an online video sharing and social media platform by Google.📄ï¸� ZepZep - A long-term memory store for LLM applications.📄ï¸� ZillizZilliz Cloud is a fully managed service on cloud for LF AI Milvus®,PreviousZillizNextWandB TracingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.📄ï¸� TruLensThis page covers how to use TruLens to evaluate and track LLM apps built on langchain.📄ï¸� TwitterTwitter is an online social media and social networking service.📄ï¸� TypesenseTypesense is an open source, in-memory search engine, that you can either📄ï¸� UnstructuredThe unstructured package from🗃ï¸� Vectara2 items📄ï¸� VespaVespa is a fully featured search engine and vector database.📄ï¸� Weights & BiasesThis notebook goes over how to track your LangChain experiments into one centralized Weights and Biases dashboard. To learn more about prompt engineering and the callback please refer to this Report which explains both alongside the resultant dashboards you can expect to see.📄ï¸� WeatherOpenWeatherMap is an open source weather service provider.📄ï¸� WeaviateThis page covers how to use the Weaviate ecosystem within LangChain.📄ï¸� WhatsAppWhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.📄ï¸� WhyLabsWhyLabs is an observability platform designed to monitor data pipelines and ML applications for data quality regressions, dataspaCyspaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.📄ï¸� SpreedlySpreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.📄ï¸� StarRocksStarRocks is a High-Performance Analytical Database.📄ï¸� StochasticAIThis page covers how to use the StochasticAI ecosystem within LangChain.📄ï¸� StripeStripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.📄ï¸� TairThis page covers how to use the Tair ecosystem within LangChain.📄ï¸� TelegramTelegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.📄ï¸� TigrisTigris is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.📄ï¸� 2Markdown2markdown service transforms website content into structured markdown files.📄ï¸� TrelloTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and trackdatabase service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Indexâ„¢ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters.📄ï¸� RunhouseThis page covers how to use the Runhouse ecosystem within LangChain.📄ï¸� RWKV-4This page covers how to use the RWKV-4 wrapper within LangChain.📄ï¸� SageMaker EndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models with fully managed infrastructure, tools, and workflows.📄ï¸� SearxNG Search APIThis page covers how to use the SearxNG search API within LangChain.📄ï¸� SerpAPIThis page covers how to use the SerpAPI search APIs within LangChain.📄ï¸� Shale ProtocolShale Protocol provides production-ready inference APIs for open LLMs. It's a Plug & Play API as it's hosted on a highly scalable GPU cloud infrastructure.📄ï¸� SingleStoreDBSingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage, and vector functions including dotproduct and euclideandistance, thereby supporting AI applications that require text similarity matching.📄ï¸� scikit-learnscikit-learn is an open source collection of machine learning algorithms,📄ï¸� SlackSlack is an instant messaging program.📄ï¸� spaCyspaCy is an open-source software library for advanced natural language processing, written in thePineconeThis page covers how to use the Pinecone ecosystem within LangChain.📄ï¸� PipelineAIThis page covers how to use the PipelineAI ecosystem within LangChain.🗃ï¸� Portkey1 items📄ï¸� PredibaseLearn how to use LangChain with models on Predibase.📄ï¸� Prediction GuardThis page covers how to use the Prediction Guard ecosystem within LangChain.📄ï¸� PromptLayerThis page covers how to use PromptLayer within LangChain.📄ï¸� PsychicPsychic is a platform for integrating with SaaS tools like Notion, Zendesk,📄ï¸� QdrantThis page covers how to use the Qdrant ecosystem within LangChain.📄ï¸� Ray ServeRay Serve is a scalable model serving library for building online inference APIs. Serve is particularly well suited for system composition, enabling you to build a complex inference service consisting of multiple chains and business logic all in Python code.📄ï¸� RebuffRebuff is a self-hardening prompt injection detector.📄ï¸� RedditReddit is an American social news aggregation, content rating, and discussion website.📄ï¸� RedisThis page covers how to use the Redis ecosystem within LangChain.📄ï¸� ReplicateThis page covers how to run models on Replicate within LangChain.📄ï¸� RoamROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.📄ï¸� RocksetRockset is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a ConvergedModalThis page covers how to use the Modal ecosystem to run LangChain custom LLMs.📄ï¸� ModelScopeThis page covers how to use the modelscope ecosystem within LangChain.📄ï¸� Modern TreasuryModern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.📄ï¸� MomentoMomento Cache is the world's first truly serverless caching service. It provides instant elasticity, scale-to-zero📄ï¸� MotherduckMotherduck is a managed DuckDB-in-the-cloud service.📄ï¸� MyScaleThis page covers how to use MyScale vector database within LangChain.📄ï¸� NLPCloudThis page covers how to use the NLPCloud ecosystem within LangChain.📄ï¸� Notion DBNotion is a collaboration platform with modified Markdown support that integrates kanban📄ï¸� ObsidianObsidian is a powerful and extensible knowledge base📄ï¸� OpenAIOpenAI is American artificial intelligence (AI) research laboratory📄ï¸� OpenLLMThis page demonstrates how to use OpenLLM📄ï¸� OpenSearchThis page covers how to use the OpenSearch ecosystem within LangChain.📄ï¸� OpenWeatherMapOpenWeatherMap provides all essential weather data for a specific location:📄ï¸� PetalsThis page covers how to use the Petals ecosystem within LangChain.📄ï¸� PGVectorThis page covers how to use the Postgres PGVector ecosystem within LangChain📄ï¸� PineconeThis page covers how to use the Pinecone ecosystem withinuse LanceDB within LangChain.📄ï¸� LangChain Decorators ✨lanchchain decorators is a layer on the top of LangChain that provides syntactic sugar ğŸ�­ for writing custom langchain prompts and chains📄ï¸� Llama.cppThis page covers how to use llama.cpp within LangChain.📄ï¸� MarqoThis page covers how to use the Marqo ecosystem within LangChain.📄ï¸� MediaWikiDumpMediaWiki XML Dumps contain the content of a wiki📄ï¸� MetalThis page covers how to use Metal within LangChain.📄ï¸� Microsoft OneDriveMicrosoft OneDrive (formerly SkyDrive) is a file-hosting service operated by Microsoft.📄ï¸� Microsoft PowerPointMicrosoft PowerPoint is a presentation program by Microsoft.📄ï¸� Microsoft WordMicrosoft Word is a word processor developed by Microsoft.📄ï¸� MilvusThis page covers how to use the Milvus ecosystem within LangChain.📄ï¸� MLflow AI GatewayThe MLflow AI Gateway service is a powerful tool designed to streamline the usage and management of various large language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests. See the MLflow AI Gateway documentation for more details.📄ï¸� MLflowThis notebook goes over how to track your LangChain experiments into your MLflow Server📄ï¸� ModalThis page covers how to use the Modal ecosystem to run LangChain customand monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns by chains and tools, exceptions with full context, data monitoring, compute/GPU utilization, OpenAI cost analytics, and more.📄ï¸� GrobidThis page covers how to use the Grobid to parse articles for LangChain.📄ï¸� GutenbergProject Gutenberg is an online library of free eBooks.📄ï¸� Hacker NewsHacker News (sometimes abbreviated as HN) is a social news📄ï¸� Hazy ResearchThis page covers how to use the Hazy Research ecosystem within LangChain.📄ï¸� HeliconeThis page covers how to use the Helicone ecosystem within LangChain.📄ï¸� HologresHologres is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time.📄ï¸� Hugging FaceThis page covers how to use the Hugging Face ecosystem (including the Hugging Face Hub) within LangChain.📄ï¸� iFixitiFixit is the largest, open repair community on the web. The site contains nearly 100k📄ï¸� IMSDbIMSDb is the Internet Movie Script Database.📄ï¸� InfinoInfino is an open-source observability platform that stores both metrics and application logs together.📄ï¸� JinaThis page covers how to use the Jina ecosystem within LangChain.📄ï¸� LanceDBThis page covers how to use LanceDB within LangChain.📄ï¸� LangChain Decoratorsin any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.📄ï¸� GitBookGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.📄ï¸� GoldenGolden provides a set of natural language APIs for querying and enrichment using the Golden Knowledge Graph e.g. queries such as: Products from OpenAI, Generative ai companies with series a funding, and rappers who invest can be used to retrieve relevant structured data about relevant entities.📄ï¸� Google BigQueryGoogle BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.📄ï¸� Google Cloud StorageGoogle Cloud Storage is a managed service for storing unstructured data.📄ï¸� Google DriveGoogle Drive is a file storage and synchronization service developed by Google.📄ï¸� Google SearchThis page covers how to use the Google Search API within LangChain.📄ï¸� Google SerperThis page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.📄ï¸� GooseAIThis page covers how to use the GooseAI ecosystem within LangChain.📄ï¸� GPT4AllThis page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.📄ï¸� GraphsignalThis page covers how to use Graphsignal to trace and monitor LangChain. Graphsignal enables full visibility into your application. It provides latency breakdowns byDataForSEOThis page provides instructions on how to use the DataForSEO search APIs within LangChain.📄ï¸� DeepInfraThis page covers how to use the DeepInfra ecosystem within LangChain.📄ï¸� Deep LakeThis page covers how to use the Deep Lake ecosystem within LangChain.📄ï¸� DiffbotDiffbot is a service to read web pages. Unlike traditional web scraping tools,📄ï¸� DiscordDiscord is a VoIP and instant messaging social platform. Users have the ability to communicate📄ï¸� DocugamiDocugami converts business documents into a Document XML Knowledge Graph, generating forests📄ï¸� DuckDBDuckDB is an in-process SQL OLAP database management system.📄ï¸� ElasticsearchElasticsearch is a distributed, RESTful search and analytics engine.📄ï¸� EverNoteEverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual "notebooks" and can be tagged, annotated, edited, searched, and exported.📄ï¸� Facebook ChatMessenger) is an American proprietary instant messaging app and📄ï¸� FigmaFigma is a collaborative web application for interface design.📄ï¸� FlyteFlyte is an open-source orchestrator that facilitates building production-grade data and ML pipelines.📄ï¸� ForefrontAIThis page covers how to use the ForefrontAI ecosystem within LangChain.📄ï¸� GitGit is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code duringwe're aware, Clarifai is the only provider that supports LLMs, embeddings and a vector store in one production scale platform, making it an excellent choice to operationalize your LangChain implementations.📄ï¸� ClearMLClearML is a ML/DL development and production suite, it contains 5 main modules:📄ï¸� CnosDBCnosDB is an open source distributed time series database with high performance, high compression rate and high ease of use.📄ï¸� CohereCohere is a Canadian startup that provides natural language processing models📄ï¸� College ConfidentialCollege Confidential gives information on 3,800+ colleges and universities.📄ï¸� CometIn this guide we will demonstrate how to track your Langchain Experiments, Evaluation Metrics, and LLM Sessions with Comet.📄ï¸� ConfluenceConfluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities.📄ï¸� C TransformersThis page covers how to use the C Transformers library within LangChain.📄ï¸� DatabricksThis notebook covers how to connect to the Databricks runtimes and Databricks SQL using the SQLDatabase wrapper of LangChain.📄ï¸� Datadog Tracingddtrace is a Datadog application performance monitoring (APM) library which provides an integration to monitor your LangChain application.📄ï¸� Datadog LogsDatadog is a monitoring and analytics platform for cloud-scale applications.📄ï¸� DataForSEOThis page provides instructions on how to use the DataForSEO search APIs withinBananaThis page covers how to use the Banana ecosystem within LangChain.📄ï¸� BasetenLearn how to use LangChain with models deployed on Baseten.📄ï¸� BeamThis page covers how to use Beam within LangChain.📄ï¸� BedrockAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.📄ï¸� BiliBiliBilibili is one of the most beloved long-form video sites in China.📄ï¸� BlackboardBlackboard Learn (previously the Blackboard Learning Management System)📄ï¸� Brave SearchBrave Search is a search engine developed by Brave Software.📄ï¸� CassandraApache Cassandra® is a free and open-source, distributed, wide-column📄ï¸� CerebriumAIThis page covers how to use the CerebriumAI ecosystem within LangChain.📄ï¸� ChaindeskChaindesk is an open source document retrieval platform that helps to connect your personal data with Large Language Models.📄ï¸� ChromaChroma is a database for building AI applications with embeddings.📄ï¸� ClarifaiClarifai is one of first deep learning platforms having been founded in 2013. Clarifai provides an AI platform with the full AI lifecycle for data exploration, data labeling, model training, evaluation and inference around images, video, text and audio data. In the LangChain ecosystem, as far as we're aware, Clarifai is the only provider that supports LLMs, embeddings and aArthurArthur is a model monitoring and observability platform.📄ï¸� ArxivarXiv is an open-access archive for 2 million scholarly articles in the fields of physics,📄ï¸� AtlasDBThis page covers how to use Nomic's Atlas ecosystem within LangChain.📄ï¸� AwaDBAwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.📄ï¸� AWS S3 DirectoryAmazon Simple Storage Service (Amazon S3) is an object storage service.📄ï¸� AZLyricsAZLyrics is a large, legal, every day growing collection of lyrics.📄ï¸� Azure Blob StorageAzure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.📄ï¸� Azure Cognitive SearchAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.📄ï¸� Azure OpenAIMicrosoft Azure, often referred to as Azure is a cloud computing platform run by Microsoft, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Microsoft Azure supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.📄ï¸� BananaThis page covers how to use the Banana ecosystem withinOpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data query in enterprises.📄ï¸� Amazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.📄ï¸� AnalyticDBThis page covers how to use the AnalyticDB ecosystem within LangChain.📄ï¸� AnnoyAnnoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.📄ï¸� AnyscaleThis page covers how to use the Anyscale ecosystem within LangChain.📄ï¸� ApifyThis page covers how to use Apify within LangChain.📄ï¸� ArangoDBArangoDB is a scalable graph database system to drive value from connected data, faster. Native graphs, an integrated search engine, and JSON support, via a single query language. ArangoDB runs on-prem, in the cloud – anywhere.📄ï¸� ArgillaArgilla - Open-source data platform for LLMs📄ï¸� ArthurArthur is a model monitoring and observabilityEndpointSearxNG Search APISerpAPIShale ProtocolSingleStoreDBscikit-learnSlackspaCySpreedlyStarRocksStochasticAIStripeTairTelegramTigris2MarkdownTrelloTruLensTwitterTypesenseUnstructuredVectaraVespaWeights & BiasesWeatherWeaviateWhatsAppWhyLabsWikipediaWolfram AlphaWriterYeager.aiYouTubeZepZillizIntegrationsGrouped by providerGrouped by provider📄ï¸� WandB TracingThere are two recommended ways to trace your LangChains:📄ï¸� AI21 LabsThis page covers how to use the AI21 ecosystem within LangChain.📄ï¸� AimAim makes it super easy to visualize and debug LangChain executions. Aim tracks inputs and outputs of LLMs and tools, as well as actions of agents.📄ï¸� AirbyteAirbyte is a data integration platform for ELT pipelines from APIs,📄ï¸� AirtableAirtable is a cloud collaboration service.📄ï¸� Aleph AlphaAleph Alpha was founded in 2019 with the mission to research and build the foundational technology for an era of strong AI. The team of international scientists, engineers, and innovators researches, develops, and deploys transformative AI like large language and multimodal models and runs the fastest European commercial AI cluster.📄ï¸� Alibaba Cloud OpensearchAlibaba Cloud Opensearch OpenSearch is a one-stop platform to develop intelligent search services. OpenSearch was built based on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O,Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsVector storesGrouped by providerWandB TracingAI21 LabsAimAirbyteAirtableAleph AlphaAlibaba Cloud OpensearchAmazon API GatewayAnalyticDBAnnoyAnyscaleApifyArangoDBArgillaArthurArxivAtlasDBAwaDBAWS S3 DirectoryAZLyricsAzure Blob StorageAzure Cognitive SearchAzure OpenAIBananaBasetenBeamBedrockBiliBiliBlackboardBrave SearchCassandraCerebriumAIChaindeskChromaClarifaiClearMLCnosDBCohereCollege ConfidentialCometConfluenceC TransformersDatabricksDatadog TracingDatadog LogsDataForSEODeepInfraDeep LakeDiffbotDiscordDocugamiDuckDBElasticsearchEverNoteFacebook ChatFigmaFlyteForefrontAIGitGitBookGoldenGoogle BigQueryGoogle Cloud StorageGoogle DriveGoogle SearchGoogle SerperGooseAIGPT4AllGraphsignalGrobidGutenbergHacker NewsHazy ResearchHeliconeHologresHugging FaceiFixitIMSDbInfinoJinaLanceDBLangChain Decorators ✨Llama.cppMarqoMediaWikiDumpMetalMicrosoft OneDriveMicrosoft PowerPointMicrosoft WordMilvusMLflow AI GatewayMLflowModalModelScopeModern TreasuryMomentoMotherduckMyScaleNLPCloudNotion DBObsidianOpenAIOpenLLMOpenSearchOpenWeatherMapPetalsPGVectorPineconePipelineAIPortkeyPredibasePrediction GuardPromptLayerPsychicQdrantRay ServeRebuffRedditRedisReplicateRoamRocksetRunhouseRWKV-4SageMaker EndpointSearxNG Search APISerpAPIShaleGrouped by provider | 🦜ï¸�🔗 Langchainhere.search.results( "The best blog post about AI safety is definitely this: ", 10, include_domains=["lesswrong.com"], start_published_date="2019-01-01",)Use Metaphor as a toolMetaphor can be used as a tool that gets URLs that other tools such as browsing tools.from langchain.agents.agent_toolkits import PlayWrightBrowserToolkitfrom langchain.tools.playwright.utils import ( create_async_playwright_browser, # A synchronous browser is available, though it isn't compatible with jupyter.)async_browser = create_async_playwright_browser()toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)tools = toolkit.get_tools()tools_by_name = {tool.name: tool for tool in tools}print(tools_by_name.keys())navigate_tool = tools_by_name["navigate_browser"]extract_text = tools_by_name["extract_text"]from langchain.agents import initialize_agent, AgentTypefrom langchain.chat_models import ChatOpenAIfrom langchain.tools import MetaphorSearchResultsllm = ChatOpenAI(model_name="gpt-4", temperature=0.7)metaphor_tool = MetaphorSearchResults(api_wrapper=search)agent_chain = initialize_agent( [metaphor_tool, extract_text, navigate_tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)agent_chain.run( "find me an interesting tweet about AI safety using Metaphor, then tell me the first sentence in the post. Do not finish until able to retrieve the first sentence.")PreviousLemon AI NLP Workflow AutomationNextOpenWeatherMap APICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.the search. If specified, results will only come from these domains. Only one of include_domains and exclude_domains should be specified.exclude_domains: Optional[List[str]] - List of domains to exclude in the search. If specified, results will only come from these domains. Only one of include_domains and exclude_domains should be specified.start_crawl_date: Optional[str] - "Crawl date" refers to the date that Metaphor discovered a link, which is more granular and can be more useful than published date. If start_crawl_date is specified, results will only include links that were crawled after start_crawl_date. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ)end_crawl_date: Optional[str] - "Crawl date" refers to the date that Metaphor discovered a link, which is more granular and can be more useful than published date. If endCrawlDate is specified, results will only include links that were crawled before end_crawl_date. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ)start_published_date: Optional[str] - If specified, only links with a published date after start_published_date will be returned. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ). Note that for some links, we have no published date, and these links will be excluded from the results if start_published_date is specified.end_published_date: Optional[str] - If specified, only links with a published date before end_published_date will be returned. Must be specified in ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ). Note that for some links, we have no published date, and these links will be excluded from the results if end_published_date is specified.See full docs here.search.results( "The best blog post about AI safety is definitely this: ",Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsMetaphor SearchMetaphor SearchMetaphor is a search engine fully designed to be used by LLMs. You can search and then get the contents for any page.This notebook goes over how to use Metaphor search.First, you need to set up the proper API keys and environment variables. Get 1000 free searches/month here.Then enter your API key as an environment variable.import osos.environ["METAPHOR_API_KEY"] = ""from langchain.utilities import MetaphorSearchAPIWrappersearch = MetaphorSearchAPIWrapper()Call the APIresults takes in a Metaphor-optimized search query and a number of results (up to 500). It returns a list of results with title, url, author, and creation date.search.results("The best blog post about AI safety is definitely this: ", 10)Adding filtersWe can also add filters to our search. include_domains: Optional[List[str]] - List of domains to include in the search. If specified, results will only come from these domains. Only one of include_domains andMetaphor Search | 🦜ï¸�🔗 Langchainhttps://maker.ifttt.com/use/YOUR_IFTTT_KEY. Grab the YOUR_IFTTT_KEY value.from langchain.tools.ifttt import IFTTTWebhookimport oskey = os.environ["IFTTTKey"]url = f"https://maker.ifttt.com/trigger/spotify/json/with/key/{key}"tool = IFTTTWebhook( name="Spotify", description="Add a song to spotify playlist", url=url)tool.run("taylor swift") "Congratulations! You've fired the spotify JSON event"PreviousHuman as a toolNextLemon AI NLP Workflow AutomationCreating a webhookConfiguring the "If This"Configuring the "Then That"Finishing upCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.For example, if you're connecting to Spotify, you could use "Spotify" as your Event Name.Click the "Create Trigger" button to save your settings and create your webhook.Configuring the "Then That"​Tap on the "Then That" button in the IFTTT interface.Search for the service you want to connect, such as Spotify.Choose an action from the service, such as "Add track to a playlist".Configure the action by specifying the necessary details, such as the playlist name, e.g., "Songs from AI".Reference the JSON Payload received by the Webhook in your action. For the Spotify scenario, choose "{{JsonPayload}}" as your search query.Tap the "Create Action" button to save your action settings.Once you have finished configuring your action, click the "Finish" button to complete the setup.Congratulations! You have successfully connected the Webhook to the desired service, and you're ready to start receiving data and triggering actions ğŸ�‰Finishing up​To get your webhook URL go to https://ifttt.com/maker_webhooks/settingsCopy the IFTTT key value from there. The URL is of the formSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsIFTTT WebHooksOn this pageIFTTT WebHooksThis notebook shows how to use IFTTT Webhooks.From https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services.Creating a webhook​Go to https://ifttt.com/createConfiguring the "If This"​Click on the "If This" button in the IFTTT interface.Search for "Webhooks" in the search bar.Choose the first option for "Receive a web request with a JSON payload."Choose an Event Name that is specific to the service you plan to connect to. This will make it easier for you to manage the webhook URL. For example, if you're connecting to Spotify, you could use "Spotify" as yourIFTTT WebHooks | 🦜ï¸�🔗 Langchainmust provide it with the name and description that match the functionality of you lambda function's logic. You must also provide the name of your function. Note that because this tool is effectively just a wrapper around the boto3 library, you will need to run aws configure in order to make use of the tool. For more detail, see herefrom langchain import OpenAIfrom langchain.agents import load_tools, AgentTypellm = OpenAI(temperature=0)tools = load_tools( ["awslambda"], awslambda_tool_name="email-sender", awslambda_tool_description="sends an email with the specified content to test@testing123.com", function_name="testFunction1",)agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)agent.run("Send an email to test@testing123.com saying hello world.")PreviousArXiv API ToolNextShell ToolAWS Lambda APICommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsawslambdaOn this pageawslambdaAWS Lambda API​This notebook goes over how to use the AWS Lambda Tool component.AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS), designed to allow developers to build and run applications and services without the need for provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications.By including a awslambda in the list of tools provided to an Agent, you can grant your Agent the ability to invoke code running in your AWS Cloud for whatever purposes you need.When an Agent uses the awslambda tool, it will provide an argument of type string which will in turn be passed into the Lambda function via the event parameter.First, you need to install boto3 python package.pip install boto3 > /dev/nullIn order for an agent to use the tool, you must provide it with the name and description that match the functionality of you lambda function's logic. Youawslambda | 🦜ï¸�🔗 Langchain{'snippet': 'This repository contains the code, data, and models of the paper ' 'titled "XL-Sum: Large-Scale Multilingual Abstractive ' 'Summarization for 44 Languages" published in Findings of the ' 'Association for Computational Linguistics: ACL-IJCNLP 2021.', 'title': 'xl-sum', 'link': 'https://github.com/csebuetnlp/xl-sum', 'engines': ['github'], 'category': 'it'}]PreviousSearch ToolsNextSerpAPICustom ParametersObtaining results with metadataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.'category': 'it'}, {'snippet': 'Design and Deploy Large Language Model Apps', 'title': 'dust', 'link': 'https://github.com/dust-tt/dust', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Polyglot: Large Language Models of Well-balanced Competence in ' 'Multi-languages', 'title': 'polyglot', 'link': 'https://github.com/EleutherAI/polyglot', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Code release for "Learning Video Representations from Large ' 'Language Models"', 'title': 'LaViLa', 'link': 'https://github.com/facebookresearch/LaViLa', 'engines': ['github'], 'category': 'it'}, {'snippet': 'SmoothQuant: Accurate and Efficient Post-Training Quantization ' 'for Large Language Models', 'title': 'smoothquant', 'link': 'https://github.com/mit-han-lab/smoothquant', 'engines': ['github'], 'category': 'it'}, {'snippet': 'This repository contains the code, data, and models of the paper ''link': 'https://github.com/OpenBioLink/ThoughtSource', 'engines': ['github'], 'category': 'it'}, {'snippet': 'A comprehensive list of papers using large language/multi-modal ' 'models for Robotics/RL, including papers, codes, and related ' 'websites', 'title': 'Awesome-LLM-Robotics', 'link': 'https://github.com/GT-RIPL/Awesome-LLM-Robotics', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Tools for curating biomedical training data for large-scale ' 'language modeling', 'title': 'biomedical', 'link': 'https://github.com/bigscience-workshop/biomedical', 'engines': ['github'], 'category': 'it'}, {'snippet': 'ChatGPT @ Home: Large Language Model (LLM) chatbot application, ' 'written by ChatGPT', 'title': 'ChatGPT-at-Home', 'link': 'https://github.com/Sentdex/ChatGPT-at-Home', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Design and Deploy Large'it'}, {'snippet': 'A prize for finding tasks that cause large language models to ' 'show inverse scaling', 'title': 'prize', 'link': 'https://github.com/inverse-scaling/prize', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Optimus: the first large-scale pre-trained VAE language model', 'title': 'Optimus', 'link': 'https://github.com/ChunyuanLI/Optimus', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Seminar on Large Language Models (COMP790-101 at UNC Chapel ' 'Hill, Fall 2022)', 'title': 'llm-seminar', 'link': 'https://github.com/craffel/llm-seminar', 'engines': ['github'], 'category': 'it'}, {'snippet': 'A central, open resource for data and tools related to ' 'chain-of-thought reasoning in large language models. Developed @ ' 'Samwald research group: https://samwald.info/', 'title': 'ThoughtSource', 'link': 'https://github.com/OpenBioLink/ThoughtSource','https://github.com/microsoft/LoRA', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Code for the paper "Evaluating Large Language Models Trained on ' 'Code"', 'title': 'human-eval', 'link': 'https://github.com/openai/human-eval', 'engines': ['github'], 'category': 'it'}, {'snippet': 'A trend starts from "Chain of Thought Prompting Elicits ' 'Reasoning in Large Language Models".', 'title': 'Chain-of-ThoughtsPapers', 'link': 'https://github.com/Timothyxxx/Chain-of-ThoughtsPapers', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Mistral: A strong, northwesterly wind: Framework for transparent ' 'and accessible large-scale language model training, built with ' 'Hugging Face 🤗 Transformers.', 'title': 'mistral', 'link': 'https://github.com/stanford-crfm/mistral', 'engines': ['github'], 'category': 'it'}, {'snippet': 'A prize for finding tasks that cause large language'link': 'https://gitlab.com/BrianPulfer/LMWatermark', 'engines': ['gitlab'], 'category': 'it'}, {'snippet': 'Guide to using pre-trained large language models of source code', 'title': 'Code-LMs', 'link': 'https://github.com/VHellendoorn/Code-LMs', 'engines': ['github'], 'category': 'it'}, {'snippet': '', 'title': 'Simen Burud / Large-scale Language Models for Conversational ' 'Speech Recognition', 'link': 'https://gitlab.com/BrianPulfer', 'engines': ['gitlab'], 'category': 'it'}, {'snippet': 'Dramatron uses large language models to generate coherent ' 'scripts and screenplays.', 'title': 'dramatron', 'link': 'https://github.com/deepmind/dramatron', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Code for loralib, an implementation of "LoRA: Low-Rank ' 'Adaptation of Large Language Models"', 'title': 'LoRA', 'link': 'https://github.com/microsoft/LoRA', 'engines':'engines': ['arxiv'], 'category': 'science'}]In this example we query for large language models under the it category. We then filter the results that come from github.results = search.results("large language model", num_results=20, categories="it")pprint.pp(list(filter(lambda r: r["engines"][0] == "github", results))) [{'snippet': 'Guide to using pre-trained large language models of source code', 'title': 'Code-LMs', 'link': 'https://github.com/VHellendoorn/Code-LMs', 'engines': ['github'], 'category': 'it'}, {'snippet': 'Dramatron uses large language models to generate coherent ' 'scripts and screenplays.', 'title': 'dramatron', 'link': 'https://github.com/deepmind/dramatron', 'engines': ['github'], 'category': 'it'}]We could also directly query for results from github and other source forges.results = search.results( "large language model", num_results=20, engines=["github", "gitlab"])pprint.pp(results) [{'snippet': "Implementation of 'A Watermark for Large Language Models' paper " 'by Kirchenbauer & Geiping et. al.', 'title': 'Peutlefaire / LMWatermark', 'link': 'https://gitlab.com/BrianPulfer/LMWatermark',prompt programming, ' 'emphasizing the usefulness of considering prompts through the ' 'lens of natural language. We explore techniques for exploiting ' 'the capacity of narratives and cultural anchors to encode ' 'nuanced intentions and techniques for encouraging deconstruction ' 'of a problem into components before producing a verdict. ' 'Informed by this more encompassing theory of prompt programming, ' 'we also introduce the idea of a metaprompt that seeds the model ' 'to generate its own natural language prompts for a range of ' 'tasks. Finally, we discuss how these more general methods of ' 'interacting with language models can be incorporated into ' 'existing and future benchmarks and practical applications.', 'title': 'Prompt Programming for Large Language Models: Beyond the Few-Shot ' 'Paradigm', 'link': 'http://arxiv.org/abs/2102.07350v1', 'engines': ['arxiv'],' 'accuracy across three tasks.', 'title': "Toward Human Readable Prompt Tuning: Kubrick's The Shining is a " 'good movie, and a good prompt too?', 'link': 'http://arxiv.org/abs/2212.10539v1', 'engines': ['arxiv'], 'category': 'science'}, {'snippet': 'Prevailing methods for mapping large generative language models ' "to supervised tasks may fail to sufficiently probe models' novel " 'capabilities. Using GPT-3 as a case study, we show that 0-shot ' 'prompts can significantly outperform few-shot prompts. We ' 'suggest that the function of few-shot examples in these cases is ' 'better described as locating an already learned task rather than ' 'meta-learning. This analysis motivates rethinking the role of ' 'prompts in controlling and evaluating powerful language models. ' 'In this work, we discuss methods of prompt programming, ''behavior. Such prompts are typically hand engineered, but can ' 'also be learned with gradient-based methods from labeled data. ' 'However, it is underexplored what factors make the prompts ' 'effective, especially when the prompts are natural language. In ' 'this paper, we investigate common attributes shared by effective ' 'prompts. We first propose a human readable prompt tuning method ' '(F LUENT P ROMPT) based on Langevin dynamics that incorporates a ' 'fluency constraint to find a diverse distribution of effective ' 'and fluent prompts. Our analysis reveals that effective prompts ' 'are topically related to the task domain and calibrate the prior ' 'probability of label words. Based on these findings, we also ' 'propose a method for generating prompts using only unlabeled ' 'data, outperforming strong baselines by an average of 7.0% 'several features of prompt examples, such as semantic ' 'similarity, show significant Spearman correlation with their ' 'prompting performance; yet, none of the correlations are strong ' 'enough; 3) using pseudo parallel prompt examples constructed ' 'from monolingual data via zero-shot prompting could improve ' 'translation; and 4) improved performance is achievable by ' 'transferring knowledge from prompt examples selected in other ' 'settings. We finally provide an analysis on the model outputs ' 'and discuss several problems that prompting still suffers from.', 'title': 'Prompting Large Language Model for Machine Translation: A Case ' 'Study', 'link': 'http://arxiv.org/abs/2301.07069v2', 'engines': ['arxiv'], 'category': 'science'}, {'snippet': 'Large language models can perform new tasks in a zero-shot ' 'fashion, given natural language prompts that specify the desired ' 'behavior. Such prompts are typically hand'category': 'science'}, {'snippet': 'Research on prompting has shown excellent performance with ' 'little or even no supervised training across many tasks. ' 'However, prompting for machine translation is still ' 'under-explored in the literature. We fill this gap by offering a ' 'systematic study on prompting strategies for translation, ' 'examining various factors for prompt template and demonstration ' 'example selection. We further explore the use of monolingual ' 'data and the feasibility of cross-lingual, cross-domain, and ' 'sentence-to-document transfer learning in prompting. Extensive ' 'experiments with GLM-130B (Zeng et al., 2022) as the testbed ' 'show that 1) the number and the quality of prompt examples ' 'matter, where using suboptimal examples degenerates translation; ' '2) several features of prompt examples, such as semantic '' 'approaches to cross-prompt AES either require a certain quantity ' 'of labelled target-prompt essays or require a large quantity of ' 'unlabelled target-prompt essays to perform transfer learning in ' 'a multi-step manner. To address these issues, we introduce ' 'Prompt Agnostic Essay Scorer (PAES) for cross-prompt AES. Our ' 'method requires no access to labelled or unlabelled ' 'target-prompt data during training and is a single-stage ' 'approach. PAES is easy to apply in practice and achieves ' 'state-of-the-art performance on the Automated Student Assessment ' 'Prize (ASAP) dataset.', 'title': 'Prompt Agnostic Essay Scorer: A Domain Generalization Approach to ' 'Cross-prompt Automated Essay Scoring', 'link': 'http://arxiv.org/abs/2008.01441v1', 'engines': ['arxiv'], 'category': 'science'}, {'snippet':'effective in general. Besides, the performance gaps tend to ' 'diminish when the scale of training data grows large.', 'title': 'Do Prompts Solve NLP Tasks Using Natural Language?', 'link': 'http://arxiv.org/abs/2203.00902v1', 'engines': ['arxiv'], 'category': 'science'}, {'snippet': 'Cross-prompt automated essay scoring (AES) requires the system ' 'to use non target-prompt essays to award scores to a ' 'target-prompt essay. Since obtaining a large quantity of ' 'pre-graded essays to a particular prompt is often difficult and ' 'unrealistic, the task of cross-prompt AES is vital for the ' 'development of real-world AES systems, yet it remains an ' 'under-explored area of research. Models designed for ' 'prompt-specific AES rely heavily on prompt-specific knowledge ' 'and perform poorly in the cross-prompt setting, whereas current ' 'approachesthe ' 'unique …', 'title': 'Can language models learn from explanations in context?', 'link': 'https://arxiv.org/abs/2204.02329', 'engines': ['google scholar'], 'category': 'science'}]Get papers from arxivresults = search.results( "Large Language Model prompt", num_results=5, engines=["arxiv"])pprint.pp(results) [{'snippet': 'Thanks to the advanced improvement of large pre-trained language ' 'models, prompt-based fine-tuning is shown to be effective on a ' 'variety of downstream tasks. Though many prompting methods have ' 'been investigated, it remains unknown which type of prompts are ' 'the most effective among three types of prompts (i.e., ' 'human-designed prompts, schema prompts and null prompts). In ' 'this work, we empirically compare the three types of prompts ' 'under both few-shot and fully-supervised settings. Our ' 'experimental results show that schema prompts are the most ' 'effective in general. Besides,'fine-tuning T0 during co-training, we focus on soft prompt ' 'tuning, …', 'title': 'Co-training improves prompt-based learning for large language ' 'models', 'link': 'https://proceedings.mlr.press/v162/lang22a.html', 'engines': ['google scholar'], 'category': 'science'}, {'snippet': '… With the success of large language models (LLMs) of code and ' 'their use as … prompt design process become important. In this ' 'work, we propose a framework called Repo-Level Prompt …', 'title': 'Repository-level prompt generation for large language models of ' 'code', 'link': 'https://arxiv.org/abs/2206.12839', 'engines': ['google scholar'], 'category': 'science'}, {'snippet': '… Figure 2 | The benefits of different components of a prompt ' 'for the largest language model (Gopher), as estimated from ' 'hierarchical logistic regression. Each point estimates the ' 'unique'prompt used to steer the model, and most effective prompts … to ' 'prompt engineering, we propose Automatic Prompt …', 'title': 'Large language models are human-level prompt engineers', 'link': 'https://arxiv.org/abs/2211.01910', 'engines': ['google scholar'], 'category': 'science'}, {'snippet': '… Large language models (LLMs) have introduced new possibilities ' 'for prototyping with AI [18]. Pre-trained on a large amount of ' 'text data, models … language instructions called prompts. …', 'title': 'Promptchainer: Chaining large language model prompts through ' 'visual programming', 'link': 'https://dl.acm.org/doi/abs/10.1145/3491101.3519729', 'engines': ['google scholar'], 'category': 'science'}, {'snippet': '… can introspect the large prompt model. We derive the view ' 'Ï•0(X) and the model h0 from T01. However, instead of fully ' 'fine-tuning T0 during co-training, we focus on soft promptfrom AI21 Labs—are...\n\nLarge language models (LLMs) such as GPT-3are increasingly being used to generate text. These tools should be used with care, since they can generate content that is biased, non-verifiable, constitutes original research, or violates copyrights.'Passing other Searx parameters for searx like languagesearch = SearxSearchWrapper(searx_host="http://127.0.0.1:8888", k=1)search.run("deep learning", language="es", engines=["wiki"]) 'Aprendizaje profundo (en inglés, deep learning) es un conjunto de algoritmos de aprendizaje automático (en inglés, machine learning) que intenta modelar abstracciones de alto nivel en datos usando arquitecturas computacionales que admiten transformaciones no lineales múltiples e iterativas de datos expresados en forma matricial o tensorial. 1'Obtaining results with metadata​In this example we will be looking for scientific paper using the categories parameter and limiting the results to a time_range (not all engines support the time range option).We also would like to obtain the results in a structured way including metadata. For this we will be using the results method of the wrapper.search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888")results = search.results( "Large Language Model prompt", num_results=5, categories="science", time_range="year",)pprint.pp(results) [{'snippet': '… on natural language instructions, large language models (… the ' 'prompt used to steer the model,Parameters​SearxNG supports 135 search engines. You can also customize the Searx wrapper with arbitrary named parameters that will be passed to the Searx search API . In the below example we will making a more interesting use of custom search parameters from searx search api.In this example we will be using the engines parameters to query wikipediasearch = SearxSearchWrapper( searx_host="http://127.0.0.1:8888", k=5) # k is for max number of itemssearch.run("large language model ", engines=["wiki"]) 'Large language models (LLMs) represent a major advancement in AI, with the promise of transforming domains through learned knowledge. LLM sizes have been increasing 10X every year for the last few years, and as these models grow in complexity and size, so do their capabilities.\n\nGPT-3 can translate language, write essays, generate computer code, and more — all with limited to no supervision. In July 2020, OpenAI unveiled GPT-3, a language model that was easily the largest known at the time. Put simply, GPT-3 is trained to predict the next word in a sentence, much like how a text message autocomplete feature works.\n\nA large language model, or LLM, is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. Large language models are among the most successful applications of transformer models.\n\nAll of today’s well-known language models—e.g., GPT-3 from OpenAI, PaLM or LaMDA from Google, Galactica or OPT from Meta, Megatron-Turing from Nvidia/Microsoft, Jurassic-1 from AI21 Labs—are...\n\nLarge language models (LLMs) suchSkip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsSearxNG Search APIOn this pageSearxNG Search APIThis notebook goes over how to use a self hosted SearxNG search API to search the web.You can check this link for more informations about Searx API parameters.import pprintfrom langchain.utilities import SearxSearchWrappersearch = SearxSearchWrapper(searx_host="http://127.0.0.1:8888")For some engines, if a direct answer is available the warpper will print the answer instead of the full list of search results. You can use the results method of the wrapper if you want to obtain all the results.search.run("What is the capital of France") 'Paris is the capital of France, the largest country of Europe with 550 000 km2 (65 millions inhabitants). Paris has 2.234 million inhabitants end 2011. She is the core of Ile de France region (12 million people).'Custom Parameters​SearxNG supports 135 search engines. You can also customizeSearxNG Search API | 🦜ï¸�🔗 Langchainand midwifery practice and education: An editorial.\nSummary: \n\nPublished: 2023Jun02\nTitle: The Impact of ChatGPT on the Nursing Profession: Revolutionizing Patient Care and Education.\nSummary: The nursing field has undergone notable changes over time and is projected to undergo further modifications in the future, owing to the advent of sophisticated technologies and growing healthcare needs. The advent of ChatGPT, an AI-powered language model, is expected to exert a significant influence on the nursing profession, specifically in the domains of patient care and instruction. The present article delves into the ramifications of ChatGPT within the nursing domain and accentuates its capacity and constraints to transform the discipline.'PreviousOpenWeatherMap APINextRequestsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsPubMed ToolPubMed ToolThis notebook goes over how to use PubMed as a toolPubMed® comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.from langchain.tools import PubmedQueryRuntool = PubmedQueryRun()tool.run("chatgpt") 'Published: 2023May31\nTitle: Dermatology in the wake of an AI revolution: who gets a say?\nSummary: \n\nPublished: 2023May30\nTitle: What is ChatGPT and what do we do with it? Implications of the age of AI for nursing and midwifery practice and education: An editorial.\nSummary: \n\nPublished:PubMed Tool | 🦜ï¸�🔗 Langchainwith Langchain AgentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.params={"location_name": "Germany", "language_code": "en", "se_name": "bing"},)customized_wrapper.results("coffee near me")Customizing the Search Type​The API wrapper also allows you to specify the type of search you want to perform. For example, you can perform a maps search.maps_search = DataForSeoAPIWrapper( top_count=10, json_result_fields=["title", "value", "address", "rating", "type"], params={ "location_coordinate": "52.512,13.36,12z", "language_code": "en", "se_type": "maps", },)maps_search.results("coffee near me")Integration with Langchain Agents​You can use the Tool class from the langchain.agents module to integrate the DataForSeoAPIWrapper with a langchain agent. The Tool class encapsulates a function that the agent can call.from langchain.agents import Toolsearch = DataForSeoAPIWrapper( top_count=3, json_result_types=["organic"], json_result_fields=["title", "description", "type"],)tool = Tool( name="google-search-answer", description="My new answer tool", func=search.run,)json_tool = Tool( name="google-search-json", description="My new json tool", func=search.results,)PreviousChatGPT PluginsNextDuckDuckGo SearchSetting up the API wrapper with your credentialsThe Difference Between run and resultsGetting Results as JSONCustomizing Location and LanguageCustomizing the Search EngineCustomizing the Search TypeIntegration with Langchain AgentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright ©Difference Between run and results​run and results are two methods provided by the DataForSeoAPIWrapper class.The run method executes the search and returns the first result snippet from the answer box, knowledge graph, featured snippet, shopping, or organic results. These elements are sorted by priority from highest to lowest.The results method returns a JSON response configured according to the parameters set in the wrapper. This allows for more flexibility in terms of what data you want to return from the API.Getting Results as JSON​You can customize the result types and fields you want to return in the JSON response. You can also set a maximum count for the number of top results to return.json_wrapper = DataForSeoAPIWrapper( json_result_types=["organic", "knowledge_graph", "answer_box"], json_result_fields=["type", "title", "description", "text"], top_count=3,)json_wrapper.results("Bill Gates")Customizing Location and Language​You can specify the location and language of your search results by passing additional parameters to the API wrapper.customized_wrapper = DataForSeoAPIWrapper( top_count=10, json_result_types=["organic", "local_pack"], json_result_fields=["title", "description", "type"], params={"location_name": "Germany", "language_code": "en"},)customized_wrapper.results("coffee near me")Customizing the Search Engine​You can also specify the search engine you want to use.customized_wrapper = DataForSeoAPIWrapper( top_count=10, json_result_types=["organic", "local_pack"], json_result_fields=["title", "description", "type"], params={"location_name": "Germany", "language_code": "en", "se_name":Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsDataForSeo API WrapperOn this pageDataForSeo API WrapperThis notebook demonstrates how to use the DataForSeo API wrapper to obtain search engine results. The DataForSeo API allows users to retrieve SERP from most popular search engines like Google, Bing, Yahoo. It also allows to get SERPs from different search engine types like Maps, News, Events, etc.from langchain.utilities import DataForSeoAPIWrapperSetting up the API wrapper with your credentials​You can obtain your API credentials by registering on the DataForSeo website.import osos.environ["DATAFORSEO_LOGIN"] = "your_api_access_username"os.environ["DATAFORSEO_PASSWORD"] = "your_api_access_password"wrapper = DataForSeoAPIWrapper()The run method will return the first result snippet from one of the following elements: answer_box, knowledge_graph, featured_snippet, shopping, organic.wrapper.run("Weather in Los Angeles")The Difference Between run and results​run and results are two methods provided by theDataForSeo API Wrapper | 🦜ï¸�🔗 Langchain"Veni, vidi, vici" which means "I came, I saw, I conquered".'Previoushuggingface_toolsNextIFTTT WebHooksConfiguring the Input FunctionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.the phrase in a letter to the Roman Senate around 47 BC after he had achieved a quick victory in his short ... veni, vidi, vici Latin quotation from Julius Caesar ve· ni, vi· di, vi· ci ËŒwÄ�-nÄ“ ËŒwÄ“-dÄ“ ˈwÄ“-kÄ“ ËŒvÄ�-nÄ“ ËŒvÄ“-dÄ“ ˈvÄ“-chÄ“ : I came, I saw, I conquered Articles Related to veni, vidi, vici 'In Vino Veritas' and Other Latin... Dictionary Entries Near veni, vidi, vici Venite veni, vidi, vici Venizélos See More Nearby Entries Cite this Entry Style The simplest explanation for why veni, vidi, vici is a popular saying is that it comes from Julius Caesar, one of history's most famous figures, and has a simple, strong meaning: I'm powerful and fast. But it's not just the meaning that makes the phrase so powerful. Caesar was a gifted writer, and the phrase makes use of Latin grammar to ... One of the best known and most frequently quoted Latin expression, veni, vidi, vici may be found hundreds of times throughout the centuries used as an expression of triumph. The words are said to have been used by Caesar as he was enjoying a triumph. Thought:I now know the final answer Final Answer: Julius Caesar said the quote "Veni, vidi, vici" which means "I came, I saw, I conquered". > Finished chain. 'Julius Caesar said the quote "Veni, vidi, vici" which means "I came, I saw, IObservation: vini vidi vici Thought:I need to provide more context about the quote Action: Human Action Input: "The quote is 'Veni, vidi, vici'" The quote is 'Veni, vidi, vici' Insert your text. Enter 'q' or press Ctrl-D (or Ctrl-Z on Windows) to end. oh who said it q Observation: oh who said it Thought:I can use DuckDuckGo Search to find out who said the quote Action: DuckDuckGo Search Action Input: "Who said 'Veni, vidi, vici'?" Observation: Updated on September 06, 2019. "Veni, vidi, vici" is a famous phrase said to have been spoken by the Roman Emperor Julius Caesar (100-44 BCE) in a bit of stylish bragging that impressed many of the writers of his day and beyond. The phrase means roughly "I came, I saw, I conquered" and it could be pronounced approximately Vehnee, Veedee ... Veni, vidi, vici (Classical Latin: [weË�niË� wiË�diË� wiË�kiË�], Ecclesiastical Latin: [ˈveni ˈvidi ˈvitʃi]; "I came; I saw; I conquered") is a Latin phrase used to refer to a swift, conclusive victory.The phrase is popularly attributed to Julius Caesar who, according to Appian, used the phrase in a letter to the Roman Senate around 47 BC after he had achieved a quick victoryFor instance, if you want to accept multi-line input, you could do the following:def get_input() -> str: print("Insert your text. Enter 'q' or press Ctrl-D (or Ctrl-Z on Windows) to end.") contents = [] while True: try: line = input() except EOFError: break if line == "q": break contents.append(line) return "\n".join(contents)# You can modify the tool when loadingtools = load_tools(["human", "ddg-search"], llm=math_llm, input_func=get_input)# Or you can directly instantiate the toolfrom langchain.tools import HumanInputRuntool = HumanInputRun(input_func=get_input)agent_chain = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)agent_chain.run("I need help attributing a quote") > Entering new AgentExecutor chain... I should ask a human for guidance Action: Human Action Input: "Can you help me attribute a quote?" Can you help me attribute a quote? Insert your text. Enter 'q' or press Ctrl-D (or Ctrl-Z on Windows) to end. vini vidi vici q Observation: vini vidi viciYou can customize prompt_func and input_func according to your need (as shown below).agent_chain.run("What's my friend Eric's surname?")# Answer with 'Zhu' > Entering new AgentExecutor chain... I don't know Eric's surname, so I should ask a human for guidance. Action: Human Action Input: "What is Eric's surname?" What is Eric's surname? Zhu Observation: Zhu Thought:I now know Eric's surname is Zhu. Final Answer: Eric's surname is Zhu. > Finished chain. "Eric's surname is Zhu."Configuring the Input Function​By default, the HumanInputRun tool uses the python input function to get input from the user. You can customize the input_func to be anything you'd like.Skip to main content🦜ï¸�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKIntegrationsCallbacksChat modelsDocument loadersDocument transformersLLMsMemoryRetrieversText embedding modelsAgent toolkitsToolsApifyArXiv API ToolawslambdaShell ToolBing SearchBrave SearchChatGPT PluginsDataForSeo API WrapperDuckDuckGo SearchFile System ToolsGolden QueryGoogle PlacesGoogle SearchGoogle Serper APIGradio ToolsGraphQL toolhuggingface_toolsHuman as a toolIFTTT WebHooksLemon AI NLP Workflow AutomationMetaphor SearchOpenWeatherMap APIPubMed ToolRequestsSceneXplainSearch ToolsSearxNG Search APISerpAPITwilioWikipediaWolfram AlphaYouTubeSearchToolZapier Natural Language Actions APIVector storesGrouped by providerIntegrationsToolsHuman as a toolOn this pageHuman as a toolHuman are AGI so they can certainly be used as a tool to help out AI agent when it is confused.from langchain.chat_models import ChatOpenAIfrom langchain.llms import OpenAIfrom langchain.agents import load_tools, initialize_agentfrom langchain.agents import AgentTypellm = ChatOpenAI(temperature=0.0)math_llm = OpenAI(temperature=0.0)tools = load_tools( ["human", "llm-math"], llm=math_llm,)agent_chain = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)In the above code you can see the tool takes input directly from command line.Human as a tool | 🦜ï¸�🔗 LangchainSearch\\x22,\\x22dym\\x22:\\x22Did you mean:\\x22,\\x22lcky\\x22:\\x22I\\\\u0026#39;m Feeling Lucky\\x22,\\x22lml\\x22:\\x22Learn more\\x22,\\x22psrc\\x22:\\x22This search was removed from your \\\\u003Ca href\\x3d\\\\\\x22/history\\\\\\x22\\\\u003EWeb History\\\\u003C/a\\\\u003E\\x22,\\x22psrl\\x22:\\x22Remove\\x22,\\x22sbit\\x22:\\x22Search by image\\x22,\\x22srch\\x22:\\x22Google Search\\x22},\\x22ovr\\x22:{},\\x22pq\\x22:\\x22\\x22,\\x22rfs\\x22:[],\\x22sbas\\x22:\\x220 3px 8px 0 rgba(0,0,0,0.2),0 0 0 1px rgba(0,0,0,0.08)\\x22,\\x22stok\\x22:\\x22C3TIBpTor6RHJfEIn2nbidnhv50\\x22}}\';google.pmc=JSON.parse(pmc);})(); 'PreviousPubMed ToolNextSceneXplainInside the toolCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.0:q.call(k,"script[nonce]"))?a.nonce||a.getAttribute("nonce")||"":"")&&b.setAttribute("nonce",k);document.body.appendChild(b);google.psa=!0;google.lx=g};google.bx||google.lx()};google.xjsu=u;e._F_jsUrl=u;setTimeout(function(){0(function(){var u=\'/xjs/_/js/k\\x3dxjs.hp.en.q0lHXBfs9JY.O/am\\x3dAAAA6AQAUABgAQ/d\\x3d1/ed\\x3d1/rs\\x3dACT90oE3ek6-fjkab6CsTH0wUEUUPhnExg/m\\x3dsb_he,d\';var amd=0;\nvar e=this||self,f=function(c){return c};var h;var n=function(c,g){this.g=g===l?c:""};n.prototype.toString=function(){return this.g+""};var l={};\nfunction p(){var c=u,g=function(){};google.lx=google.stvsc?g:function(){google.timers&&google.timers.load&&google.tick&&google.tick("load","xjsls");var a=document;var b="SCRIPT";"application/xhtml+xml"===a.contentType&&(b=b.toLowerCase());b=a.createElement(b);a=null===c?"null":void 0===c?"undefined":c;if(void 0===h){var d=null;var m=e.trustedTypes;if(m&&m.createPolicy){try{d=m.createPolicy("goog#html",{createHTML:f,createScript:f,createScriptURL:f})}catch(r){e.console&&e.console.error(r.message)}h=\nd}else h=d}a=(d=h)?d.createScriptURL(a):a;a=new n(a,l);b.src=a instanceof n&&a.constructor===n?a.g:"type_error:TrustedResourceUrl";var k,q;(k=(a=null==(q=(k=(b.ownerDocument&&b.ownerDocument.defaultView||window).document).querySelector)?voidAsian Pacific American Heritage Month with Google

© 2023 - Privacy - Terms


Advanced search
Web Historynull;q++;d=d||{};b=encodeURIComponent;var c="/gen_204?atyp=i&ei="+b(google.kEI);google.kEXPI&&(c+="&jexpid="+b(google.kEXPI));c+="&srcpg="+b(google.sn)+"&jsr="+b(t.jsr)+"&bver="+b(t.bv);var f=a.lineNumber;void 0!==f&&(c+="&line="+f);var g=\na.fileName;g&&(0=l&&(window.onerror=null)};})();PreviousAnnoyNextAwaDBCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc..iframe { /* vh can be **very** large in vscode html. */ height: min(75vh, 66vw); width: 100%; }