# 🦜️🧰 langchain-dev-utils
🚀 An efficient utility library crafted for LangChain and LangGraph developers
## Why choose langchain-dev-utils?
Tired of writing repetitive code in LangChain development? langchain-dev-utils is exactly the solution you need! This lightweight yet powerful utility library is designed to enhance the LangChain and LangGraph development experience, helping you to:
- Boost Development Efficiency - Reduce boilerplate code, allowing you to focus on core functionality.
- Simplify Complex Workflows - Easily manage multi-model, multi-tool, and multi-agent applications.
- Enhance Code Quality - Improve consistency and readability, reducing maintenance costs.
- Accelerate Prototyping - Quickly implement ideas and iterate faster for validation.
## Core Features
- **Unified Model Management**: Switch and combine different models by specifying the model provider as part of a string identifier.
- **Built-in OpenAI-Compatible Integration**: Ships integration classes for OpenAI-compatible APIs, improving model compatibility through explicit configuration.
- **Flexible Message Handling**: Supports chain-of-thought concatenation, streaming processing, and message formatting.
- **Powerful Tool Calling**: Built-in tool-call detection, argument parsing, and human-in-the-loop review.
- **Efficient Agent Development**: Simplifies agent creation and provides common middleware extensions.
- **Convenient State Graph Construction**: Two pre-built functions for constructing sequential and parallel execution state graphs.
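The provider-prefixed model strings used throughout this library (e.g. `"vllm:qwen2.5-7b"`) follow a simple `provider:model` naming convention. As a rough illustration of that convention only (not the library's actual implementation), such an identifier splits like this:

```python
# Illustrative sketch of the "provider:model" convention; this is NOT
# langchain-dev-utils' internal implementation.
def split_model_identifier(identifier: str) -> tuple[str, str]:
    """Split 'provider:model' into (provider, model_name)."""
    provider, sep, model_name = identifier.partition(":")
    if not sep:
        raise ValueError(f"Expected 'provider:model', got {identifier!r}")
    return provider, model_name

print(split_model_identifier("vllm:qwen2.5-7b"))  # → ('vllm', 'qwen2.5-7b')
```

`partition` splits on the first colon only, so model names that themselves contain a colon remain intact.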
## Quick Start
One of the main uses of this library is integrating models that expose OpenAI-compatible APIs. Below is an example using `qwen2.5-7b` deployed via vLLM.
### 1. Install langchain-dev-utils
Note: You must install the `langchain-dev-utils[standard]` extra to use this feature.
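Assuming a standard pip workflow, the `standard` extra can be installed as follows (quoting the argument so the shell does not interpret the brackets):

```shell
pip install "langchain-dev-utils[standard]"
```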
### 2. Getting Started
```python
from langchain.tools import tool
from langchain_core.messages import HumanMessage
from langchain_dev_utils.chat_models import register_model_provider, load_chat_model
from langchain_dev_utils.agents import create_agent

# Register a model provider backed by the OpenAI-compatible vLLM endpoint
register_model_provider("vllm", "openai-compatible", base_url="http://localhost:8000/v1")


@tool
def get_current_weather(location: str) -> str:
    """Get the current weather for a specified location."""
    return f"25 degrees, {location}"


# Dynamically load a model from a "provider:model" string
model = load_chat_model("vllm:qwen2.5-7b")
response = model.invoke("Hello")
print(response)

# Create an agent that can call the weather tool
agent = create_agent("vllm:qwen2.5-7b", tools=[get_current_weather])
response = agent.invoke({"messages": [HumanMessage(content="What is the weather in New York today?")]})
print(response)
```
## GitHub Repository
Visit the GitHub repository to browse the source code and report issues.