Welcome back to our LangChain journey! Yesterday we explored prompt engineering. Today, we are mastering LCEL (LangChain Expression Language) – the modern way to chain multiple steps together.
What You’ll Learn:
- How LCEL replaces deprecated LangChain chains
- Building sequential workflows with pipe operators
- Managing data flow between processing steps
- Real-world multi-step text processing
What is LCEL Sequential Processing?
Imagine you’re cooking a meal:
- Prep ingredients (chop vegetables)
- Cook (sauté in pan)
- Plate (arrange beautifully)
Each step depends on the previous one. That’s exactly how LCEL chaining works!
LCEL Patterns:
- Simple Chain: prompt | llm | parser – Linear processing
- Complex Chain: Multiple inputs/outputs with data transformation
Real-world analogy:
- Simple LCEL = Email → Spell check → Send
- Complex LCEL = Email + Recipient info + Priority → Format + Route + Log
Key Advantage: LCEL chains are composable – you can easily combine, modify, or reuse parts of your workflow.
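To make that concrete, here is a tiny sketch of composability. The names summary_prompt, to_text, summarize, and shout_summary are illustrative, and llm stands in for any chat model (we create a ChatBedrock one in the setup below):

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# Each piece is a standalone, reusable component...
summary_prompt = PromptTemplate.from_template("Summarize in 2 sentences: {text}")
to_text = StrOutputParser()

# ...and "|" composes them into a chain you can reuse or extend.
summarize = summary_prompt | llm | to_text          # reusable sub-chain
shout_summary = summarize | (lambda s: s.upper())   # extend an existing chain with one more step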
Setup First!
If you haven’t set up your environment yet, check Day 1 for package installation.
LCEL Sequential Processing in Action
Let’s build our AWS S3 definition processor using modern LCEL:
import time

import boto3
from langchain_aws import ChatBedrock
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# Initialize Bedrock client
bedrock_client = boto3.client('bedrock-runtime', region_name='us-east-1')

# LLM = Large Language Model (the AI that processes text)
llm = ChatBedrock(
    client=bedrock_client,
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={
        "max_tokens": 200,
        "temperature": 0.7
    }
)
# Create the complete chain using LCEL with time tracking
def overall_chain(inputs):
    print("> Starting chain...\n")

    # Step 1: Summarize
    print("> Step 1: Summarizing...")
    start_time = time.time()
    summary_prompt = PromptTemplate(
        input_variables=["text"],
        template="Summarize this in exactly 2 sentences (max 50 words): {text}"
    )
    summary = (summary_prompt | llm | StrOutputParser()).invoke({"text": inputs["text"]})
    step_time = time.time() - start_time
    print(f"> Finished step ({step_time:.2f}s)")
    print(summary)
    print()

    # Step 2: Translate
    print("> Step 2: Translating...")
    start_time = time.time()
    translate_prompt = PromptTemplate(
        input_variables=["summary"],
        template="Translate this to Hindi (keep it concise): {summary}"
    )
    hindi_text = (translate_prompt | llm | StrOutputParser()).invoke({"summary": summary})
    step_time = time.time() - start_time
    print(f"> Finished step ({step_time:.2f}s)")
    print(hindi_text)
    print()

    # Step 3: Format as Tweet
    print("> Step 3: Formatting for social media...")
    start_time = time.time()
    tweet_prompt = PromptTemplate(
        input_variables=["hindi_text"],
        template="Convert to tweet format with 2-3 emojis (max 280 chars): {hindi_text}"
    )
    tweet = (tweet_prompt | llm | StrOutputParser()).invoke({"hindi_text": hindi_text})
    step_time = time.time() - start_time
    print(f"> Finished step ({step_time:.2f}s)")
    print(tweet)
    print()

    print("> Chain complete.")
    return tweet
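The function above runs three separate .invoke() calls so each step can be timed and printed. If you don't need the per-step logging, the same flow can be written as a single piped chain. Here is a sketch that reuses the llm, PromptTemplate, and StrOutputParser from the setup above; the small lambdas simply wrap each step's string output into the next prompt's input variable:

summary_prompt = PromptTemplate.from_template(
    "Summarize this in exactly 2 sentences (max 50 words): {text}")
translate_prompt = PromptTemplate.from_template(
    "Translate this to Hindi (keep it concise): {summary}")
tweet_prompt = PromptTemplate.from_template(
    "Convert to tweet format with 2-3 emojis (max 280 chars): {hindi_text}")

piped_chain = (
    summary_prompt | llm | StrOutputParser()
    | (lambda summary: {"summary": summary})
    | translate_prompt | llm | StrOutputParser()
    | (lambda hindi: {"hindi_text": hindi})
    | tweet_prompt | llm | StrOutputParser()
)

# piped_chain.invoke({"text": "..."}) returns the final tweet string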
Running Our Chain
s3_definition = """
Amazon Simple Storage Service (Amazon S3) is an object storage service
offering industry-leading scalability, data availability, security, and
performance. Customers of all sizes and industries can store and protect
any amount of data for virtually any use case, such as data lakes,
cloud-native applications, and mobile apps.
"""
# Execute the chain
result = overall_chain({"text": s3_definition})
print(result)
Actual Output
Here’s what the chain prints when it runs: each step logs its timing and result, first the summary, then the Hindi translation, then the final tweet.
LCEL Components Explained
Key Components:
- llm: The Large Language Model (ChatBedrock) that processes text
- StrOutputParser(): Converts the LLM’s complex response object into a simple string
- | (pipe): Connects components – the output of the left side becomes the input of the right
Why This Approach Works Better:
- Modern: Uses latest LangChain syntax (no deprecation warnings)
- Clean Output: StrOutputParser() gives you just the text, not metadata
- Flexible: Easy chaining with the pipe operator
- Type-Safe: Better error handling and validation
- Readable: Clear data flow with | operators
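To see what StrOutputParser() saves you, compare the same prompt with and without it. A small sketch, reusing the llm from the setup above:

prompt = PromptTemplate.from_template("Summarize in one sentence: {text}")

raw = (prompt | llm).invoke({"text": "Amazon S3 is object storage."})
clean = (prompt | llm | StrOutputParser()).invoke({"text": "Amazon S3 is object storage."})

print(type(raw).__name__)    # AIMessage – a message object with content plus metadata
print(type(clean).__name__)  # str – just the text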
Advanced: Complex LCEL Chains
For more complex workflows with multiple inputs/outputs, use advanced LCEL patterns:
import time
# Complex chain with multiple inputs and time tracking
def complex_chain(inputs):
    print("> Starting complex analysis chain...\n")

    # Step 1: Analyze service (Detailed analysis - more tokens)
    print("> Step 1: Analyzing service...")
    start_time = time.time()
    analyze_llm = ChatBedrock(
        client=bedrock_client,
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        model_kwargs={"max_tokens": 300, "temperature": 0.3}
    )
    analysis_prompt = PromptTemplate(
        input_variables=["service_text", "focus_area"],
        template="Analyze this AWS service focusing on {focus_area} (max 200 words): {service_text}"
    )
    analysis = (analysis_prompt | analyze_llm | StrOutputParser()).invoke({
        "service_text": inputs["service_text"],
        "focus_area": inputs["focus_area"]
    })
    step_time = time.time() - start_time
    print(f"> Finished analysis ({step_time:.2f}s)")
    print(f"Analysis: {analysis[:100]}...")
    print()

    # Step 2: Create summary (Medium tokens)
    print("> Step 2: Creating summary...")
    start_time = time.time()
    summary_llm = ChatBedrock(
        client=bedrock_client,
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        model_kwargs={"max_tokens": 150, "temperature": 0.5}
    )
    summary_prompt = PromptTemplate(
        input_variables=["analysis", "target_audience"],
        template="Summarize for {target_audience} in 3-4 sentences: {analysis}"
    )
    summary = (summary_prompt | summary_llm | StrOutputParser()).invoke({
        "analysis": analysis,
        "target_audience": inputs["target_audience"]
    })
    step_time = time.time() - start_time
    print(f"> Finished summary ({step_time:.2f}s)")
    print(f"Summary: {summary}")
    print()

    # Step 3: Generate recommendation (Short and decisive)
    print("> Step 3: Making recommendation...")
    start_time = time.time()
    recommend_llm = ChatBedrock(
        client=bedrock_client,
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        model_kwargs={"max_tokens": 100, "temperature": 0.2}
    )
    recommendation_prompt = PromptTemplate(
        input_variables=["summary", "use_case"],
        template="Recommend YES/NO for {use_case} with 2-sentence reason: {summary}"
    )
    recommendation = (recommendation_prompt | recommend_llm | StrOutputParser()).invoke({
        "summary": summary,
        "use_case": inputs["use_case"]
    })
    step_time = time.time() - start_time
    print(f"> Finished recommendation ({step_time:.2f}s)")
    print(f"Recommendation: {recommendation}")
    print()

    print("> Complex chain complete.")
    return {
        "analysis": analysis,
        "summary": summary,
        "recommendation": recommendation
    }
# Add input_keys for compatibility
complex_chain.input_keys = ["service_text", "focus_area", "target_audience", "use_case"]
# Run with multiple inputs
result = complex_chain({
    "service_text": s3_definition,
    "focus_area": "security and scalability",
    "target_audience": "developers",
    "use_case": "mobile app backend"
})
print("\nFinal Results:")
print("Analysis:", result["analysis"])
print("Summary:", result["summary"])
print("Recommendation:", result["recommendation"])
Actual Output
Here’s what the chain prints when it runs: each step logs its timing and result (analysis, summary, recommendation), followed by the final results block.
Simple vs Complex LCEL Chains
| Feature | Simple LCEL | Complex LCEL |
| --- | --- | --- |
| Inputs | Single input | Multiple inputs |
| Outputs | Single output | Multiple outputs |
| Use Case | Linear pipeline | Complex workflows |
| Data Flow | Auto-passed | Custom transformation |
| Complexity | Simple | Moderate |
| Best For | Content processing | Decision workflows |
| Learning Curve | 1-2 hours | 4-6 hours |
| Maintenance | Easy | Requires planning |
Practical Applications:
- Simple LCEL: Content processing, translation, formatting, social media automation
- Complex LCEL: Business analysis, decision-making, multi-criteria evaluation, recommendation systems
Key Differences Between Our Examples
Example 1: Simple Chain
# Single input
overall_chain({"text": "AWS S3 definition..."})
# Fixed flow: Text → Summarize → Translate → Tweet
# Same LLM settings for all steps
# Returns: Final tweet string
Example 2: Complex Chain
# Multiple inputs
complex_chain({
    "service_text": "AWS S3 definition...",
    "focus_area": "security and scalability",
    "target_audience": "developers",
    "use_case": "mobile app backend"
})
# Business flow: Analyze → Summarize → Recommend
# Different LLM settings per step
# Returns: Dictionary with all results
Different LLM Configurations (client and model_id omitted here for brevity):
# Analyze: More tokens, less creative (factual analysis)
analyze_llm = ChatBedrock(model_kwargs={"max_tokens": 300, "temperature": 0.3})
# Summary: Medium tokens, balanced creativity
summary_llm = ChatBedrock(model_kwargs={"max_tokens": 150, "temperature": 0.5})
# Recommend: Fewer tokens, very focused (decisions)
recommend_llm = ChatBedrock(model_kwargs={"max_tokens": 100, "temperature": 0.2})
Temperature Guide:
- 0.2 = Very focused, factual responses (recommendations)
- 0.5 = Balanced creativity and accuracy (summaries)
- 0.7 = More creative and varied responses (content creation)
Token Management Strategy:
- 100 tokens = Short, concise responses (decisions)
- 150 tokens = Medium-length responses (summaries)
- 300 tokens = Detailed, comprehensive responses (analysis)
LCEL vs Legacy Chains
Modern LCEL Approach:
# Clean, readable, no deprecation warnings
chain = prompt | llm | StrOutputParser()
result = chain.invoke({"text": input_text})
Legacy Approach (Deprecated):
# Old way - causes deprecation warnings
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run(input_text)
Real-World Use Cases
Simple LCEL Patterns:
- Content Pipeline: Write → Edit → Format → Publish
- Customer Support: Query → Analyze → Respond → Log
- Document Review: Read → Analyze → Suggest → Format
Complex LCEL Patterns:
- Service Recommendation: Requirements + Budget → Analyze + Compare → Recommend + Justify
- Document Processing: Text + Metadata → Extract + Classify → Store + Index
- Multi-language Support: Content + Target + Tone → Translate + Adapt + Validate
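A building block worth knowing for these fan-out patterns is RunnableParallel: it runs several branches over the same input and passes their combined results to the next step. Here is a sketch of the "Analyze + Compare → Recommend" shape, reusing llm and s3_definition from earlier; the prompts are illustrative:

from langchain_core.runnables import RunnableParallel

security_prompt = PromptTemplate.from_template(
    "Analyze the security aspects of this service in 3 sentences: {service_text}")
scalability_prompt = PromptTemplate.from_template(
    "Analyze the scalability aspects of this service in 3 sentences: {service_text}")
verdict_prompt = PromptTemplate.from_template(
    "Security notes: {security}\nScalability notes: {scalability}\n"
    "Give a 2-sentence recommendation for a mobile app backend.")

recommend_chain = (
    RunnableParallel(
        security=security_prompt | llm | StrOutputParser(),
        scalability=scalability_prompt | llm | StrOutputParser(),
    )
    | verdict_prompt | llm | StrOutputParser()
)

# recommend_chain.invoke({"service_text": s3_definition})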
Token Optimization Strategies:
- Use smaller models for simple steps (summarization)
- Use larger models for complex reasoning (analysis)
- Set lower temperature for factual tasks
- Set higher temperature for creative tasks
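As a quick sketch of the first two points, each step can get its own model, for example a smaller Claude 3 Haiku for simple steps and Sonnet for heavier reasoning (the model IDs below assume both models are enabled in your Bedrock account):

haiku_llm = ChatBedrock(
    client=bedrock_client,
    model_id="anthropic.claude-3-haiku-20240307-v1:0",   # smaller, cheaper model
    model_kwargs={"max_tokens": 100, "temperature": 0.2}
)
sonnet_llm = ChatBedrock(
    client=bedrock_client,
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",  # larger model for detailed analysis
    model_kwargs={"max_tokens": 300, "temperature": 0.3}
)

summarize_step = PromptTemplate.from_template("Summarize: {text}") | haiku_llm | StrOutputParser()
analyze_step = PromptTemplate.from_template("Analyze the trade-offs of: {text}") | sonnet_llm | StrOutputParser()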
Key Takeaways
- Sequential chains = Multi-step automation with control
- Simple LCEL for linear processing (A→B→C)
- Complex LCEL for multi-input scenarios (A+B→C+D→E)
- Use verbose=True for debugging and learning
- Control response length per step to manage costs
- Different processor settings for different complexity levels
- Each step builds on the previous one
Pro Tips:
- Start with simple LCEL, upgrade when needed
- Monitor usage with verbose=True
- Use specific instructions to control response length
- Test each step individually first
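On the last tip: every stage of an LCEL chain is itself runnable, so you can invoke it on its own before wiring up the full pipeline. For example, step 1 of our simple chain (reusing llm and s3_definition from above):

step1 = (
    PromptTemplate.from_template("Summarize this in exactly 2 sentences (max 50 words): {text}")
    | llm
    | StrOutputParser()
)
print(step1.invoke({"text": s3_definition}))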
About Me
Hi! I’m Utkarsh, a Cloud Specialist & AWS Community Builder who loves turning complex AWS topics into fun chai-time stories.
This is part of my “LangChain with AWS Bedrock: A Developer’s Journey” series. Follow along as I document everything I learn, including the mistakes and the victories.