LLMs
The base building block of Langchain: takes a text string as input and returns a predicted text string as output
Chat Models
A specialized version of LLMs that takes in message objects instead of raw text
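A minimal sketch of the difference in input shape, using plain Python with stand-in fake models (these are not the actual LangChain classes):

```python
# Plain-Python sketch of the LLM vs. Chat Model interfaces.
# fake_llm and fake_chat_model are stand-ins, not real LangChain APIs.

def fake_llm(prompt: str) -> str:
    # An LLM: raw text in, predicted text out
    return f"(completion for: {prompt})"

def fake_chat_model(messages: list) -> dict:
    # A chat model: a list of message objects in, a message object out
    last = messages[-1]["content"]
    return {"role": "assistant", "content": f"(reply to: {last})"}

print(fake_llm("Tell me a joke"))
print(fake_chat_model([
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Tell me a joke"},
]))
```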
Prompt templates
Most applications do not pass raw user input to LLMs. They inject the user input into a larger prompt template
and send the whole combined message to the LLM
Langchain allows you to define custom prompt templates for applications you’re building
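The template idea can be sketched with plain Python string formatting (LangChain's PromptTemplate class wraps essentially this mechanism):

```python
# Sketch of a prompt template: a larger prompt with a slot where the
# raw user input gets injected.
template = (
    "You are a naming consultant for new companies.\n"
    "What is a good name for a company that makes {product}?"
)

def format_prompt(product: str) -> str:
    # Inject the raw user input into the larger template
    return template.format(product=product)

print(format_prompt("colorful socks"))
```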
Chains
These are a way to link multiple primitives together (e.g. LLMs, Chat Models, Prompt Templates) to create a more powerful response
A simple example of this is chaining together a prompt template with an LLM
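That simple template-plus-LLM chain can be sketched as follows; fake_llm is a stand-in so the example runs without an API key:

```python
# Sketch of chaining a prompt template with an LLM: the output of the
# template step becomes the input of the LLM step.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call
    return f"(completion for: {prompt!r})"

template = "What is a good name for a company that makes {product}?"

def simple_chain(product: str) -> str:
    # Step 1: format the template with the user input
    prompt = template.format(product=product)
    # Step 2: send the whole combined prompt to the LLM
    return fake_llm(prompt)

print(simple_chain("colorful socks"))
```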
Agents
Chains follow a static, predefined path; agents choose their next action dynamically
An agent uses tools, each of which can complete a certain task (Google search, DB lookup, Python REPL)
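A toy sketch of that dynamic choice; here a keyword check stands in for the LLM's decision about which tool to call (a real agent asks the LLM to pick):

```python
# Toy agent sketch: the tool to run is chosen at runtime rather than
# being fixed in advance like a chain. The tools are fake stand-ins.

def google_search(q: str) -> str:
    return f"(search results for {q!r})"

def db_lookup(q: str) -> str:
    return f"(rows matching {q!r})"

def python_repl(q: str) -> str:
    return f"(evaluated {q!r})"

TOOLS = {"search": google_search, "db": db_lookup, "python": python_repl}

def toy_agent(question: str) -> str:
    # A real agent would ask an LLM which tool to use; this stand-in
    # keys off the question text to make the choice dynamically
    if "database" in question:
        tool = TOOLS["db"]
    elif "calculate" in question:
        tool = TOOLS["python"]
    else:
        tool = TOOLS["search"]
    return tool(question)

print(toy_agent("look this up in the database"))
```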
Memory
Allows chains/models to maintain state about previous messages, etc.
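A sketch of the buffer-memory idea: store prior turns and prepend them to each new prompt (fake_llm is a stand-in, and this class is illustrative, not LangChain's memory API):

```python
# Sketch of conversation memory: keep previous messages and include
# them in every new prompt so the model can maintain state.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call
    return f"(reply, prompt was {len(prompt)} chars)"

class BufferMemory:
    def __init__(self):
        self.history = []

    def chat(self, user_input: str) -> str:
        # Build the prompt from the stored history plus the new input
        prompt = "\n".join(self.history + [f"Human: {user_input}", "AI:"])
        reply = fake_llm(prompt)
        # Save both turns so future calls can see them
        self.history.append(f"Human: {user_input}")
        self.history.append(f"AI: {reply}")
        return reply

memory = BufferMemory()
memory.chat("Hi, my name is Sam")
memory.chat("What is my name?")
print(memory.history)
```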
Documents
Structures of text that can be used as input to chains, etc.
A common workflow is to take a large chunk of text, split it up into a bunch of documents, and then feed those documents into a chain
These chains accept an input_documents parameter and return an output_text output
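The split-then-feed workflow and the input_documents/output_text interface can be sketched like this (the splitter and chain here are simplified stand-ins, not LangChain's actual classes):

```python
# Sketch of the documents workflow: split a large text into chunks,
# then feed the chunks to a chain-like function.

def split_text(text: str, chunk_size: int = 40) -> list:
    # Naive splitter: fixed-size chunks (real splitters try to respect
    # sentence/paragraph boundaries)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def fake_combine_chain(inputs: dict) -> dict:
    # Stand-in with the same interface shape as a combine-documents
    # chain: input_documents in, output_text out
    docs = inputs["input_documents"]
    return {"output_text": f"(summary of {len(docs)} documents)"}

text = "LangChain helps build applications powered by language models. " * 3
docs = split_text(text)
result = fake_combine_chain({"input_documents": docs})
print(result["output_text"])
```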
Map Reduce
Takes a list of documents, calls the LLM on each one, and then combines the results
The load_summarize_chain utility function handles instantiating base chains like MapReduceChain
All of the combine-documents chains work by repeatedly asking the language model to summarize documents in order to shorten them
MapReduceChain
The map_reduce_chain prompt asks the LLM to provide a concise summary of the context provided
The Refine chain takes an existing summary and asks the LLM to refine it
After each individual doc is summarized using the LLM, these summaries are combined using the Stuff combine-documents chain
The Stuff chain stuffs all the summaries into one prompt and asks the LLM to write a single summary of them
Map: Each document is mapped to its own summary
Reduce: Each summary is reduced into one overall summary
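The map and reduce steps above can be sketched with a stand-in LLM (this mirrors the pattern, not LangChain's actual MapReduceChain code):

```python
# Sketch of map-reduce summarization.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call
    return f"(concise summary of {len(prompt)} chars)"

def map_reduce_summarize(docs: list) -> str:
    # Map: each document is mapped to its own summary
    summaries = [fake_llm(f"Write a concise summary:\n{d}") for d in docs]
    # Reduce: the summaries are combined into one overall summary
    combined = "\n".join(summaries)
    return fake_llm(f"Write a concise summary of these summaries:\n{combined}")

print(map_reduce_summarize(["doc one ...", "doc two ...", "doc three ..."]))
```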
StuffDocumentsChain
This performs only the reduce half of map reduce: it stuffs all of the documents into a single prompt and asks the LLM to summarize them. It does not perform the initial map step that summarizes each doc individually
RefineDocumentsChain
This first summarizes the first document into an initial summary
It then loops through each of the remaining documents and asks the LLM to update the summary, given the existing summary and the new doc passed as input
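The refine loop can be sketched with a stand-in LLM (a sketch of the pattern, not LangChain's actual RefineDocumentsChain code):

```python
# Sketch of refine summarization: summarize the first doc, then
# iteratively update the summary with each remaining doc.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call
    return f"(summary, prompt was {len(prompt)} chars)"

def refine_summarize(docs: list) -> str:
    # Initial summary from the first document
    summary = fake_llm(f"Summarize:\n{docs[0]}")
    # Loop over the remaining docs, letting the LLM refine the summary
    for doc in docs[1:]:
        summary = fake_llm(
            f"Existing summary:\n{summary}\n"
            f"Refine it using this new context:\n{doc}"
        )
    return summary

print(refine_summarize(["doc one ...", "doc two ...", "doc three ..."]))
```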
SQL Chains
Each database dialect has a different prompt that specifies helpful info (e.g. the Postgres prompt says “You are a PostgreSQL expert” and mentions how to get the current date)
First the chain fetches table/schema info from the database and passes it, along with the user's question, to the LLM to generate a SQL query
Then the chain executes this generated query against the database
In the more complex use case, another LLM call is used to check the syntax of the generated query
This prompt template lists some common mistakes LLM-generated queries make and asks the LLM to fix them
You just have to initialize the chain with a SQLAlchemy engine; Langchain will pick the right prompt based on the database dialect and will automatically fetch the schema using common SQLAlchemy functions
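The fetch-schema, generate-query, execute-query workflow can be sketched against an in-memory SQLite database using only the standard library; the LLM is a stand-in, and the real chain works through a SQLAlchemy engine rather than raw sqlite3:

```python
# Sketch of the SQL-chain workflow: schema -> LLM -> query -> results.
import sqlite3

def fake_llm_write_sql(question: str, schema: str) -> str:
    # Stand-in: a real chain sends the question + schema to an LLM
    # and gets back a SQL query
    return "SELECT name FROM employees WHERE salary > 100000"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("Ada", 120000), ("Bob", 90000)],
)

# Step 1: fetch table/schema info from the database
schema = "\n".join(
    row[0]
    for row in conn.execute("SELECT sql FROM sqlite_master WHERE type='table'")
)

# Step 2: ask the (fake) LLM to generate a query from question + schema
query = fake_llm_write_sql("Who earns more than 100k?", schema)

# Step 3: execute the generated query against the database
print(conn.execute(query).fetchall())  # → [('Ada',)]
```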