This content originally appeared on DEV Community and was authored by Yiğit Konur
Working with LLMs is powerful, but crafting and managing the prompts that drive them can quickly become complex. As you move beyond simple questions, you start juggling system messages, user inputs, example dialogues (few-shot prompting), tool definitions, desired output structures, and even multi-step conversations. How do you keep all this organized, maintainable, and reusable, especially across different LLM providers?
Enter PromptL, a templating language built specifically for the world of LLM prompting. And to bring its power into your Python applications, we have the promptl-py
library.
This guide will walk you through everything you need to know, from the basics of PromptL syntax to advanced chaining and error handling using the Python library. We’ll cover every detail from the original documentation, ensuring you have a complete picture.
Let’s Dive In! Here’s What We’ll Cover:
- What Exactly is PromptL? (And why you need it)
- Introducing `promptl-py`: Your Python Bridge
- Core Ideas: Templates, Parameters, Rendering, Chains, Adapters
- Installation: Getting Set Up
- Quick Start: Rendering Your First Prompt & Running a Simple Chain
- The PromptL Language: A Deep Dive
  - Anatomy of a `.promptl` File
  - Front Matter Magic (`--- ... ---`): Configuration, Tools, Schemas
  - The Prompt Body: Crafting Your Messages
    - Message Roles (`<system>`, `<user>`, etc.)
    - Beyond Text: Content Types
    - Dynamic Prompts: Variables & Templating (`{{ ... }}`)
    - Adding Logic: Control Flow (`if`, `for`)
    - Multi-Turn Conversations: Steps (`<step>`) & Attributes (`as`, `schema`)
    - Keeping it Clean: Comments & Includes
- Using `promptl-py` Like a Pro
  - Getting Started: `Promptl` Class & Options
  - Validating Templates: Scanning Prompts (`promptl.prompts.scan`)
  - Bringing Prompts to Life: Rendering (`promptl.prompts.render`)
  - Managing Conversations: Working with Chains (`promptl.chains`)
  - Speaking the LLM’s Language: Adapters (`OpenAI`, `Anthropic`, `Default`)
  - Under the Hood: Key Data Types
  - Handling Hiccups: Error Management (`PromptlError`, `RPCError`)
- Real-World Scenarios: Complex Examples
  - Multi-Step Chains with Tools, Schema & Logic
  - Conditional Prompts Based on Inputs
  - Leveraging Complex Data Structures
  - Graceful Error Handling
- Quick Reference: API Summary
- For Contributors: Development Setup
- The Fine Print: License
1. What Exactly is PromptL?
PromptL is not just another templating engine; it’s a domain-specific language meticulously designed for defining and managing LLM prompts. Think of it like Jinja or Handlebars, but supercharged for AI interactions. It provides a clear, human-readable syntax to:
- Define both static text and dynamic content using variables.
- Structure prompts using standard roles (system, user, assistant, tool).
- Incorporate logic with conditionals (`if`) and loops (`for`).
- Specify LLM configurations (like model name, temperature) right alongside the prompt.
- Define tools (functions) the LLM can use and specify required output formats using JSON Schema.
- Orchestrate complex, multi-step conversations (chains).
- Optionally abstract away the specific formatting details required by different LLM providers.
Its goal is to make prompt engineering more systematic, maintainable, and less error-prone.
2. Introducing promptl-py: Your Python Bridge
The `promptl-py` library is the official way to use PromptL within your Python applications. It acts as the interface to the core PromptL engine, allowing you to:
- Parse & Validate: Read `.promptl` files or strings and check them for correctness.
- Render: Inject your data (parameters) into PromptL templates to generate the structured message lists that LLM APIs expect.
- Execute Chains: Run multi-step prompts step-by-step, managing the conversation flow.
- Integrate Seamlessly: Fit PromptL into your existing Python workflows for interacting with LLMs.
Interestingly, `promptl-py` uses a WebAssembly (WASM) module (`promptl.wasm`) under the hood. This contains the core PromptL parser and runtime. The Python library communicates with this WASM module via Remote Procedure Calls (RPC). This architecture ensures that the core PromptL logic remains consistent regardless of the host language (Python, in this case).
3. Core Ideas
Before we jump into code, let’s clarify a few key concepts:
- Template: A string or `.promptl` file written using PromptL syntax.
- Parameters: A Python dictionary containing data you want to inject into your template (e.g., user names, context, lists of items).
- Rendering: The process of combining a Template and Parameters to produce a final output, usually a list of messages and LLM configuration settings.
- Chain: A prompt template designed for multi-step interactions, using `<step>` tags. Execution proceeds step by step.
- Adapter: A setting that tells `promptl-py` how to format the rendered messages, ensuring compatibility with specific LLM provider APIs (like OpenAI or Anthropic).
- Messages: Structured objects representing parts of the conversation (e.g., a system instruction, a user query, an assistant's reply, a tool's output).
4. Installation: Getting Set Up
Getting `promptl-py` is straightforward using pip. Make sure you have Python 3.9 or higher.
pip install promptl-py
Easy peasy! Now, let’s write some code.
5. Quick Start: Rendering Your First Prompt & Running a Simple Chain
Basic Rendering
Let’s take a simple PromptL template and render it with some data.
# examples/render_prompt.py
from pprint import pprint
from promptl_ai import Promptl, Adapter # Import Adapter too
# 1. Initialize the PromptL engine
promptl = Promptl() # Uses default settings
# 2. Define your PromptL template as a string
prompt_template = """
---
provider: OpenAI # Hint for configuration, affects defaults/adapters
model: gpt-4o-mini # Specify the LLM model
---
# This text before the first tag often becomes a system message or part of the first message.
Answer succinctly yet complete.
<user> # Start of a user message
Taking into account this context: {{context}} # Inject 'context' variable
I have the following question: {{question}} # Inject 'question' variable
</user>
"""
# 3. Prepare your data (parameters) as a Python dictionary
parameters = {
"context": "PromptL is a templating language specifically designed for LLM prompting.",
"question": "What is PromptL?",
}
# 4. Render the prompt!
# By default, it uses the OpenAI adapter format.
result = promptl.prompts.render(
prompt=prompt_template,
parameters=parameters,
# adapter=Adapter.OpenAI # Explicitly stating the default
)
# 5. Check the output
print("--- Rendered Prompt Output ---")
pprint(result.model_dump())
# Expected Output (structure):
# {
# 'messages': [ # List of messages ready for an LLM API
# {'role': 'system', 'content': 'Answer succinctly yet complete.'},
# {'role': 'user',
# 'content': 'Taking into account this context: PromptL is a templating language specifically designed for LLM prompting.\n'
# 'I have the following question: What is PromptL?'}
# ],
# 'config': { # Configuration extracted from front matter
# 'provider': 'OpenAI',
# 'model': 'gpt-4o-mini'
# }
# }
See how the variables `{{context}}` and `{{question}}` were replaced, and how the output is structured into messages with roles and content, plus the configuration? That's the magic of rendering!
Basic Chain Execution
Now, let's see how PromptL handles a simple back-and-forth conversation using `<step>` tags.
# examples/run_chain.py
from pprint import pprint
from promptl_ai import Promptl
promptl = Promptl()
# Define a multi-step template
chain_prompt = """
<step> # Defines the first turn of the conversation
<system>
You are a helpful assistant.
</system>
<user>
Say hello.
</user>
</step>
<step> # Defines the second turn, executed after the first response
<user>
Now say goodbye.
</user>
</step>
"""
# 1. Create a chain instance from the template
chain = promptl.chains.create(chain_prompt)
print(f"Chain created. Is it complete yet? {chain.completed}") # Output: False
print("\n--- Executing Step 1 ---")
# 2. Execute the first step. No input response needed here.
# This renders messages up to the end of the first <step>.
step1_result = chain.step()
pprint(step1_result.model_dump(exclude={'chain'})) # Exclude chain object for brevity
# Expected Output (structure):
# {'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'},
# {'role': 'user', 'content': 'Say hello.'}],
# 'config': {}, # No specific config in this template
# 'completed': False} # The chain isn't finished
print(f"\nChain completed after step 1? {chain.completed}") # Output: False
# 3. Simulate the LLM responding to the first step
llm_response_step1 = "Hello there! How can I help?"
print("\n--- Executing Step 2 (with LLM response) ---")
# 4. Execute the second step, providing the assistant's previous response.
# This appends the assistant's response and then renders the content of the second <step>.
step2_result = chain.step(llm_response_step1)
pprint(step2_result.model_dump(exclude={'chain'}))
# Expected Output (structure):
# {'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'},
# {'role': 'user', 'content': 'Say hello.'},
# {'role': 'assistant', 'content': 'Hello there! How can I help?'}, # <-- Added response
# {'role': 'user', 'content': 'Now say goodbye.'}], # <-- From second step
# 'config': {},
# 'completed': False} # Still waiting for the final response
print(f"\nChain completed after step 2? {chain.completed}") # Output: False
# 5. Simulate the LLM responding to the second step
llm_response_step2 = "Goodbye! Have a great day!"
print("\n--- Executing Final Step (with LLM response) ---")
# 6. Execute the chain again with the final response.
# Since there are no more <step> tags, the chain completes.
final_result = chain.step(llm_response_step2)
pprint(final_result.model_dump(exclude={'chain'}))
# Expected Output (structure):
# {'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'},
# {'role': 'user', 'content': 'Say hello.'},
# {'role': 'assistant', 'content': 'Hello there! How can I help?'},
# {'role': 'user', 'content': 'Now say goodbye.'},
# {'role': 'assistant', 'content': 'Goodbye! Have a great day!'}], # <-- Added final response
# 'config': {},
# 'completed': True} # Chain is now finished!
print(f"\nChain completed finally? {chain.completed}") # Output: True
assert chain.completed
assert final_result.completed
print("\n--- Final Conversation History ---")
pprint([msg.model_dump() for msg in final_result.messages]) # Display all messages
This demonstrates the core loop: `create` the chain, then repeatedly call `chain.step(llm_response)` until `chain.completed` is `True`.
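In a real application, the simulated replies above would come from an actual LLM call. Here is a minimal sketch of that generic driver loop; `call_llm` is a hypothetical placeholder for your own function that sends the rendered messages (and config) to a provider and returns the assistant's reply text:
# A minimal sketch of the generic chain driver loop.
# `call_llm` is a placeholder (not part of promptl-py) for your own provider call.
from promptl_ai import Promptl

def run_chain(template: str, parameters: dict, call_llm) -> list:
    promptl = Promptl()
    chain = promptl.chains.create(template, parameters)

    result = chain.step()  # The first step needs no assistant response
    while not result.completed:
        reply = call_llm(result.messages, result.config)  # Your LLM call goes here
        result = result.chain.step(reply)                 # Feed the reply back to advance the chain
    return result.messages  # The full conversation history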
6. The PromptL Language: A Deep Dive
Now that you’ve seen it in action, let’s break down the PromptL syntax systematically.
Anatomy of a `.promptl` File
A typical PromptL template (a `.promptl` file or string) has two main sections:
---
# 1. Optional Front Matter (YAML-like configuration)
provider: OpenAI
model: gpt-4o-mini
temperature: 0.5
# ... other settings, tools, schemas ...
---
# 2. Prompt Body (The actual message content and logic)
This text might be treated as a system message.
<user>
Hello, {{ user_name }}!
{{#if show_details}}
Here are the details...
{{/if}}
</user>
<step>
<assistant> Okay, processing step 1. </assistant>
</step>
Front Matter Magic (`--- ... ---`)
This optional block at the very top, enclosed by triple dashes (`---`), uses a YAML-like syntax (keys and values, indentation for nesting) to define metadata and configuration.
- Common Configuration: These settings directly influence how the LLM should behave.
  - `provider`: (String, e.g., "OpenAI", "Anthropic") Hints at the target LLM provider, influencing defaults and adapter behavior.
  - `model`: (String, e.g., "gpt-4o-mini", "claude-3-opus-20240229") Specifies the exact LLM model.
  - `temperature`: (Number, e.g., 0.7) Controls response randomness.
  - `max_tokens`: (Integer) Limits the length of the generated response.
  - Other Parameters: You can often include other API parameters supported by the provider (like `top_p`, `stop_sequences`).
---
provider: Anthropic
model: claude-3-sonnet-20240229
temperature: 0.2
max_tokens: 1000
---
- Tool Definitions (`tools`): Define functions the LLM can call. This structure closely mirrors the OpenAI function calling/tool definition format.
  - It's an object (dictionary) where keys are tool names.
  - Each tool has a `description` (String) and `parameters` (a JSON Schema object describing the arguments).
---
tools:
  get_weather:
    description: Fetches the current weather for a location.
    parameters:
      type: object
      properties:
        location:
          type: string
          description: The city and state (e.g., "Boston, MA").
        unit:
          type: string
          enum: ["celsius", "fahrenheit"]
          default: "celsius"
      required:
        - location
  send_email:
    description: Sends an email.
    parameters:
      # ... schema for recipient, subject, body ...
---
- Output Schema (`schema`): Define a required structure (using JSON Schema) for the LLM's final response message content. This is incredibly useful for forcing the LLM to output structured data (like JSON) reliably.
---
schema:
  type: object
  properties:
    summary:
      type: string
      description: A concise summary of the input text.
    keywords:
      type: array
      description: A list of relevant keywords.
      items:
        type: string
  required:
    - summary
    - keywords
  additionalProperties: false # Don't allow extra fields in the output
---
The Prompt Body: Crafting Your Messages
This is where the core content of your prompt lives, using tags, variables, and logic.
- Message Roles (`<system>`, `<user>`, `<assistant>`, `<tool>`): These tags structure the conversation, mirroring typical chat roles.
  - `<system>...</system>`: High-level instructions, persona setting, or context for the AI. Often placed first.
  - `<user>...</user>`: Represents input from the human user.
  - `<assistant>...</assistant>`: Represents responses from the AI. Used for few-shot examples or to capture the AI's actual response in chains.
  - `<tool>...</tool>`: Represents the result returned from executing a tool call. Requires specific content (see below).
  - Implicit System Message: Text placed in the body before any other tag is often treated as a system message or merged into the first message.
You are a Shakespearean pirate bot. # Implicit system message
<system>
Always respond in iambic pentameter. Use pirate slang.
</system>
<user>
Ahoy! Tell me about the weather in Tortuga. Use the weather tool if ye must.
</user>
<assistant> # Example response (few-shot)
Aye, the skies be clear, the sun doth shine so bright!
But let me check the charts...
{{ tool_call(id="weather_call_1", name="get_weather", args={location: "Tortuga"}) }}
</assistant>
<tool> # Example tool result
{{ tool_result(id="weather_call_1", name="get_weather", result={temp: 28, condition: "Sunny", unit: "celsius"}) }}
</tool>
<assistant> # Example final response
Hark, the charts say Tortuga's warmth delights the soul,
'Tis twenty-eight degrees, a sunny toll!
</assistant>
- Beyond Text: Content Types: While text is the default, PromptL understands richer content types, which are especially important for multi-modal models and tool use. The `promptl-py` library maps these to specific Python data models after rendering.
  - Text: Simple text within tags. (`TextContent` in Python)
  - Image: Handling varies by provider. Image data (e.g., a base64 string or URL) is often passed via parameters rather than embedded directly in PromptL syntax; check the specific adapter documentation. (`ImageContent` in Python)
  - File/Document: Similar to images, usually handled via parameters. (`FileContent` in Python)
  - Tool Call: Represents the LLM's decision to call a tool. Typically generated by the LLM, but can be included in `<assistant>` tags for few-shot examples. The syntax may involve helpers like `{{ tool_call(...) }}` or be implicit. (`ToolCallContent` in Python)
  - Tool Result: Represents the data returned to the LLM after a tool has been executed. Placed within `<tool>` tags, often using a helper like `{{ tool_result(...) }}`. (`ToolResultContent` in Python)
- Dynamic Prompts: Variables & Templating (`{{ ... }}`): Uses a Handlebars-like syntax.
  - `{{ variable }}`: Inserts the value of `variable` from the parameters dict.
  - `{{ object.property }}`: Accesses nested data.
  - `{{ list[index] }}`: Accesses elements in a list.
  - Whitespace Control: `{{- ... }}` or `{{ ... -}}` may be available to remove adjacent whitespace (check the PromptL specification for details).
  - Escaping: To output a literal `{{`, you may need `\{{` or similar (check the PromptL specification).
<user>
Order details for {{ customer.name }} (ID: {{ customer.id }}):
Items:
{{#each customer.orders[0].items }}
- {{ this.name }} ({{ this.quantity }})
{{/each}}
Total: {{ customer.orders[0].total_price }}
</user>
- Adding Logic: Control Flow:
  - `{{ if condition }}`: Conditionally include content. Supports `{{ else }}` and potentially `{{ else if }}`. Remember to close with `{{ endif }}`!
<system>
{{ if task == "summarize" }}
Provide a brief summary.
{{ else if task == "translate" }}
Translate the text accurately.
{{ else }}
Follow the user's specific instruction.
{{ endif }}
Target audience: {{ audience }}.
</system>
  - `{{ for item, index in list }}`: Loop over items in a list (or keys/values in a dictionary). `index` is optional. Remember to close with `{{ endfor }}`!
<user>
Here are the discussion points:
{{ for point, idx in points_list }}
{{ idx + 1 }}. {{ point }}
{{ endfor }}
Please synthesize them.
</user>
- Multi-Turn Conversations: Steps (`<step>`) & Attributes (`as`, `schema`):
  - `<step>` tags define distinct turns in a conversation (a chain). The Python library's `chain.step()` method progresses through these. Execution pauses conceptually at the end of a step, waiting for the assistant's response.
  - `as="var_name"`: An attribute on the `<step>` tag. It captures the assistant's response for that specific step into a PromptL variable named `var_name`, which can then be used in later steps.
  - `schema={{ ... }}`: An attribute on the `<step>` tag. It defines an inline JSON Schema that the assistant's response for this step must conform to. The schema itself is written in JSON-like syntax within the `{{ }}`.
<step as="initial_request"> # Capture response as 'initial_request'
<user>Suggest three blog post titles about PromptL.</user>
</step>
# Assume the assistant responds with a list of titles, captured in 'initial_request'
<step as="selection" schema={{ {type: "object", properties: {chosen_title: {type: "string"}}, required: ["chosen_title"]} }}> # Capture selection, enforce schema
<user>
Okay, here are the titles you suggested:
{{ initial_request }} # Use the captured response
Which one is the best? Respond with JSON: {"chosen_title": "Your Choice"}
</user>
</step>
# Assume the assistant responds with {"chosen_title": "Title 2"}, captured in 'selection'
<step>
<user>
Great! Let's go with "{{ selection.chosen_title }}". # Use the captured selection
Now, write an outline for it.
</user>
</step>
- Keeping it Clean: Comments:
  - Multi-line: Enclose in `/* ... */`.
  - Single-line: Often start with `#` (especially within front matter or at the beginning of lines in the body).
---
# Model configuration section
model: gpt-4o
/* Temperature controls creativity.
   Lower is more focused. */
temperature: 0.3
---
/* Main prompt body starts here */
<system> # System instructions
Be helpful and concise.
</system>
Includes (Conceptual): PromptL aims for modularity. While the specific implementation details may depend on the core engine version, the concept involves including content from other `.promptl` files. This might look something like `<prompt path="shared_instructions.promptl" />`. The `included_prompt_paths` field in `ScanPromptResult` hints at this capability.
7. Using promptl-py Like a Pro
Now let’s focus on the Python library itself.
Getting Started: `Promptl` Class & Options
You interact with PromptL via the `Promptl` class.
from promptl_ai import Promptl, PromptlOptions, Adapter
# Simplest way: uses defaults (bundled WASM, temp dir, OpenAI adapter)
promptl_default = Promptl()
# Customize it:
options = PromptlOptions(
# Path to the promptl.wasm file (if not using the one bundled with the lib)
module_path="/path/to/custom/promptl.wasm",
# Directory for temporary files used by WASM communication
working_dir="/app/temp/promptl_cache",
# Set the default adapter for all operations on this instance
adapter=Adapter.Anthropic
)
promptl_custom = Promptl(options=options)
# Access prompts and chains submodules
prompts_api = promptl_custom.prompts
chains_api = promptl_custom.chains
`PromptlOptions` allows you to configure:
- `adapter`: Default output format (`Adapter.OpenAI` (the default), `Adapter.Anthropic`, or `Adapter.Default`).
- `module_path`: Location of `promptl.wasm`.
- `working_dir`: Temporary storage directory.
Validating Templates: Scanning Prompts (`promptl.prompts.scan`)
Before rendering, you might want to check a template for errors, see what parameters it expects, or extract its configuration without providing any data. That's what `scan` is for.
- Input: `prompt: str` (the template content).
- Output: `ScanPromptResult` (a Pydantic model) containing:
  - `hash`: A unique identifier for the scanned prompt string.
  - `resolved_prompt`: The template text after processing includes, etc.
  - `config`: The parsed front matter dictionary.
  - `errors`: A list of `Error` objects if issues were found (an empty list if the template is valid).
  - `parameters`: A list of variable names (like `{{variable}}`) found in the template.
  - `is_chain`: Boolean, `True` if `<step>` tags are present.
  - `included_prompt_paths`: A list of any included file paths (if the feature is used).
from promptl_ai import Promptl
from pprint import pprint
promptl = Promptl()
valid_prompt = """
---
model: gpt-4
max_tokens: 100
---
<user>Hello {{name}}!</user>
"""
scan_result = promptl.prompts.scan(valid_prompt)
print("--- Scan Result (Valid Prompt) ---")
pprint(scan_result.model_dump())
# Expected Output (structure):
# {'hash': '...', 'resolved_prompt': '...', 'config': {'model': 'gpt-4', 'max_tokens': 100},
# 'errors': [], 'parameters': ['name'], 'is_chain': False, 'included_prompt_paths': ['']}
invalid_prompt = "<user>Hello {{ name </user> <!-- Missing closing braces -->"
scan_result_error = promptl.prompts.scan(invalid_prompt)
print("\n--- Scan Result (Invalid Prompt) ---")
pprint(scan_result_error.model_dump())
# Expected Output (structure):
# { ... 'errors': [{'name': 'ParseError', 'message': 'Expected "}}" ...', ...}], ... }
Bringing Prompts to Life: Rendering (`promptl.prompts.render`)
This is the core function for single-turn prompts (or getting the first step of a chain). It takes the template, injects parameters, and produces the final output.
- Input:
  - `prompt: str`: The template string.
  - `parameters: Optional[Dict[str, Any]]`: Your data dictionary.
  - `adapter: Optional[Adapter]`: Override the default adapter for this call (e.g., `Adapter.Anthropic`).
  - `options: Optional[RenderPromptOptions]`: Fine-tuning (rarely needed): `default_role` for untagged text, `include_source_map`.
- Output: `RenderPromptResult` (a Pydantic model) containing:
  - `messages: List[MessageLike]`: The list of generated message objects, formatted according to the specified `adapter`.
  - `config: Dict[str, Any]`: The final LLM configuration (from front matter, potentially merged with step-specific settings).
from promptl_ai import Promptl, Adapter
from pprint import pprint
promptl = Promptl() # Defaulting to OpenAI adapter
prompt = """
---
model: gpt-3.5-turbo
---
<user>Translate '{{text}}' to {{language}}.</user>
"""
parameters = {"text": "hello world", "language": "Spanish"}
# Render for OpenAI (default)
render_openai = promptl.prompts.render(prompt, parameters)
print("--- Rendered (OpenAI Adapter) ---")
pprint(render_openai.model_dump())
# Expected: {'messages': [{'role': 'user', 'content': "Translate 'hello world' to Spanish."}], 'config': {'model': 'gpt-3.5-turbo'}}
# Render specifically for Anthropic
render_anthropic = promptl.prompts.render(prompt, parameters, adapter=Adapter.Anthropic)
print("\n--- Rendered (Anthropic Adapter) ---")
pprint(render_anthropic.model_dump())
# Expected: {'messages': [{'role': 'user', 'content': "Translate 'hello world' to Spanish."}], 'config': {'model': 'gpt-3.5-turbo'}}
# Note: For simple text, output looks similar. Differences are more apparent with tools, images, etc.
Generating Objects for LLM Libraries (e.g., OpenAI)
Since the default adapter is `OpenAI`, the output `messages` and `config` are often directly usable with libraries like `openai`.
import os
# Ensure: pip install openai
# Requires OPENAI_API_KEY env var for actual API call
# from openai import OpenAI
# client = OpenAI()
from promptl_ai import Promptl, Adapter
from pprint import pprint
promptl = Promptl() # Uses OpenAI adapter by default
weather_prompt = """
---
provider: OpenAI
model: gpt-4o-mini
tools:
  get_current_weather:
    description: Get the current weather in a given location
    parameters:
      type: object
      properties:
        location: { type: string, description: "City, State" }
        unit: { type: string, enum: [celsius, fahrenheit] }
      required: [location]
---
<system>You are a helpful weather bot. Use tools if necessary.</system>
<user>What's the weather like in San Francisco today?</user>
"""
# Render the prompt
render_result = promptl.prompts.render(prompt=weather_prompt)
print("--- Rendered Output (Ready for OpenAI Client) ---")
pprint(render_result.model_dump())
# How you'd use it with the OpenAI client:
messages_for_api = [msg.model_dump() for msg in render_result.messages] # Convert Pydantic models to dicts
tools_for_api = [{"type": "function", "function": v} for k, v in render_result.config.get("tools", {}).items()] # Format tools
model_name = render_result.config.get("model")
# print(f"\n--- Simulating OpenAI Call (Model: {model_name}) ---")
# try:
# response = client.chat.completions.create(
# model=model_name,
# messages=messages_for_api,
# tools=tools_for_api if tools_for_api else None,
# tool_choice="auto",
# )
# pprint(response.choices[0].message.model_dump())
# except Exception as e:
# print(f"OpenAI API call would fail here (ensure API key is set): {e}")
# Expected Rendered Output Structure:
# {'messages': [{'role': 'system', 'content': '...'}, {'role': 'user', 'content': '...'}],
# 'config': {'provider': 'OpenAI', 'model': 'gpt-4o-mini', 'tools': {'get_current_weather': {...}}}}
Managing Conversations: Working with Chains (`promptl.chains`)
This submodule is dedicated to handling multi-step prompts defined with `<step>`.
- Creating Chains (`promptl.chains.create`)
  - Initializes the state for a chain execution.
  - Input: `prompt: str`, `parameters: Optional[Dict]`, `adapter: Optional[Adapter]`, `options: Optional[CreateChainOptions]`.
  - Output: A `Chain` object, representing the stateful conversation.
from pprint import pprint
from promptl_ai import Promptl

promptl = Promptl()

two_step_prompt = """
<step><user>Ask question 1 about {{topic}}.</user></step>
<step><user>Ask question 2 about {{topic}}.</user></step>
"""
params = {"topic": "PromptL"}

chain_instance = promptl.chains.create(two_step_prompt, params)
print(f"Chain ready. Completed: {chain_instance.completed}")  # False
- Stepping Through Chains (`chain.step`)
  - Executes the next step in the sequence. You call this method directly on the `Chain` object.
  - Input: `response: Optional[Union[str, MessageLike, Sequence[MessageLike]]]`. This is crucial: it's the assistant's response from the previous step.
    - Pass a `str` for simple text replies.
    - Pass a `MessageLike` object (or a list of them) for structured replies, such as tool calls the LLM made. See the Data Types section below.
    - It's `None` only for the very first call to `step()`.
  - Output: A `StepChainResult` containing the state after executing the current step.
- The `Chain` Object:
  - Holds the state. Returned by `create` and included in `StepChainResult`.
  - Properties: `adapter`, `completed` (bool), `global_messages_count`, `raw_text`, `_chain` (internal state).
  - Methods: `step(response=...)` is the primary way to interact.
- The `StepChainResult` Object:
  - The result of calling `chain.step()`.
  - Properties:
    - `messages: List[MessageLike]`: The complete list of messages up to the end of the executed step (including the `response` you provided). The format depends on the adapter.
    - `config: Dict[str, Any]`: Configuration applicable at this stage.
    - `completed: bool`: `True` if this was the last step and its response was provided.
    - `chain: Chain`: The updated `Chain` object. Use this for the next call to `step()`.
Let's continue the `two_step_prompt` example:
# Continuing from the chain_instance created above...

# --- First step ---
print("\n--- Step 1 ---")
step1_result = chain_instance.step()  # No response needed yet
pprint(step1_result.model_dump(exclude={'chain'}))
# Expected: {'messages': [{'role': 'user', 'content': 'Ask question 1 about PromptL.'}], 'config': {}, 'completed': False}
print(f"Chain completed? {step1_result.chain.completed}")  # False

# --- Simulate LLM Response ---
response1 = "Okay, Question 1: What is the main advantage of PromptL?"

# --- Second step ---
print("\n--- Step 2 ---")
# Use the updated chain from the previous result!
step2_result = step1_result.chain.step(response1)
pprint(step2_result.model_dump(exclude={'chain'}))
# Expected: {'messages': [..., {'role': 'assistant', 'content': response1}, {'role': 'user', 'content': 'Ask question 2 about PromptL.'}], 'config': {}, 'completed': False}
print(f"Chain completed? {step2_result.chain.completed}")  # False

# --- Simulate LLM Response ---
response2 = "Question 2: How does it handle different LLM providers?"

# --- Final step ---
print("\n--- Final Step ---")
final_result = step2_result.chain.step(response2)
pprint(final_result.model_dump(exclude={'chain'}))
# Expected: {'messages': [..., {'role': 'assistant', 'content': response2}], 'config': {}, 'completed': True}
print(f"Chain completed? {final_result.chain.completed}")  # True
Speaking the LLM's Language: Adapters (`OpenAI`, `Anthropic`, `Default`)
Why adapters? Different LLM providers have slightly different API requirements for how message lists, tool calls, or multi-modal content should be formatted. Adapters bridge this gap.
- Concept: When you `render` or `create` a chain, the `adapter` setting tells `promptl-py` which format to use for the output `messages` list.
- Available Adapters (the `Adapter` enum):
  - `Adapter.OpenAI`: Formats output compatible with OpenAI's API (e.g., `{'role': 'user', 'content': '...'}`). This is the default.
  - `Adapter.Anthropic`: Formats output compatible with Anthropic's API.
  - `Adapter.Default`: Uses PromptL's internal, generic message structures (defined in `promptl_ai.bindings.types`). Useful if you want maximum control over the final formatting step.
- Impact on Output Types: The choice of adapter determines the specific Pydantic models used within the `messages` list (e.g., `openai.UserMessage` vs `anthropic.UserMessage` vs `types.UserMessage`). The `MessageLike` type hint handles this transparently.
from promptl_ai import Promptl, Adapter
from pprint import pprint
promptl = Promptl()
simple_prompt = "<user>Hi there.</user>"
# OpenAI (Default)
res_openai = promptl.prompts.render(simple_prompt, adapter=Adapter.OpenAI)
print("--- OpenAI Format ---")
pprint([type(msg).__name__ for msg in res_openai.messages]) # Show class names
# Expected: ['UserMessage'] (from promptl_ai.bindings.adapters.openai)
# Anthropic
res_anthropic = promptl.prompts.render(simple_prompt, adapter=Adapter.Anthropic)
print("\n--- Anthropic Format ---")
pprint([type(msg).__name__ for msg in res_anthropic.messages])
# Expected: ['UserMessage'] (from promptl_ai.bindings.adapters.anthropic)
# Default (PromptL internal)
res_default = promptl.prompts.render(simple_prompt, adapter=Adapter.Default)
print("\n--- Default Format ---")
pprint([type(msg).__name__ for msg in res_default.messages])
# Expected: ['UserMessage'] (from promptl_ai.bindings.types)
Under the Hood: Key Data Types (`promptl_ai.bindings.types`)
These are the building blocks used internally and sometimes exposed in results:
- `MessageLike`: A special type hint (`typing.Annotated`) that represents any valid message object, whose specific structure (`openai.Message`, `anthropic.Message`, or `types.Message`) is validated based on the active adapter context.
- `MessageRole` (Enum): `System`, `User`, `Assistant`, `Tool`.
- `ContentType` (Enum): `Text`, `Image`, `File`, `ToolCall`, `ToolResult`. These represent PromptL's internal view of content.
- Content Models: Define the structure for non-text content (used within `types.Message`):
  - `TextContent`: `{ type: "text", text: str }`
  - `ImageContent`: `{ type: "image", image: str }` (the content may be a URL or base64 data)
  - `FileContent`: `{ type: "file", file: str, mimeType: str }`
  - `ToolCallContent`: `{ type: "tool-call", id: str, name: str, arguments: Dict }` (has aliases like `toolCallId`, `toolName`, `args`)
  - `ToolResultContent`: `{ type: "tool-result", id: str, name: str, result: Any, is_error: Optional[bool] }` (has aliases like `toolCallId`, `toolName`, `isError`)
- `Error` & `ErrorPosition`: Used for reporting issues found during `scan` or raised in `PromptlError`.
  - `ErrorPosition`: `{ line: int, column: int, character: int }`
  - `Error`: `{ name: Optional[str], code: Optional[str], message: str, start: Optional[ErrorPosition], end: Optional[ErrorPosition], frame: Optional[str] }`
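If you ever need to build these content models yourself, for example to hand a structured assistant reply with tool calls back into `chain.step()`, they can be constructed directly. A minimal sketch, assuming the field names and camelCase aliases listed above (the same pattern is used in the complex chain example in section 8):
# A minimal sketch, assuming the ToolCallContent / ToolResultContent fields
# and aliases listed above.
from promptl_ai import types

call = types.ToolCallContent(
    id="call_weather_1",
    name="get_weather",
    arguments={"location": "Boston, MA"},
)
result = types.ToolResultContent(
    id="call_weather_1",
    name="get_weather",
    result={"temp": 21, "unit": "celsius"},
    is_error=False,
)

# by_alias=True emits the camelCase aliases (toolCallId, toolName, ...) used by the engine.
print(call.model_dump(by_alias=True))
print(result.model_dump(by_alias=True))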
Handling Hiccups: Error Management (`PromptlError`, `RPCError`)
Things can go wrong! The library uses two main custom exceptions:
- `PromptlError`: Raised when the PromptL engine itself (running in WASM) reports an error. This usually means there's a problem with your PromptL template syntax, variable usage, or logic.
  - Access `e.cause` (an `Error` object) for details: message, code, position.
  - Catch this for issues related to your prompt's content.

from promptl_ai import Promptl, PromptlError

promptl = Promptl()
bad_syntax_prompt = "<user> {{ user.name "  # Missing closing }}

try:
    result = promptl.prompts.render(bad_syntax_prompt, parameters={"user": {"name": "Bob"}})
except PromptlError as e:
    print("Caught PromptL Error!")
    print(f"  Message: {e.cause.message}")
    print(f"  Code: {e.cause.code}")
    if e.cause.start:
        print(f"  Position: Line {e.cause.start.line}, Col {e.cause.start.column}")

# Example Output:
# Caught PromptL Error!
#   Message: Expected "}}" but did not find it...

- `RPCError`: Raised when there's an issue with the communication between Python and the WASM module, or with the WASM execution environment itself. Examples: the WASM file is not found, the temporary directory is inaccessible, or an RPC message is malformed.
  - Access `e.cause` (an `rpc.Error` object) for details: `code` (an `ErrorCode` enum value like `ExecuteError`), `message`, `details`.
  - Catch this for lower-level, infrastructure-like problems.

from unittest import mock

from promptl_ai import Promptl, rpc  # Need to import rpc for the RPCError type

promptl = Promptl()
good_prompt = "<user>This is fine.</user>"

# Simulate a failure during WASM instantiation/communication
with mock.patch.object(rpc.Client, "_send_recv", side_effect=rpc.RPCError("Simulated comms failure", code=rpc.ErrorCode.SendError)):
    try:
        result = promptl.prompts.render(good_prompt)
    except rpc.RPCError as e:
        print("Caught RPC Error!")
        print(f"  Message: {e.cause.message}")
        print(f"  Code: {e.cause.code}")

# Example Output:
# Caught RPC Error!
#   Message: Simulated comms failure
#   Code: SEND_ERROR
8. Real-World Scenarios: Complex Examples
Let’s put these concepts together with more involved examples.
Example 1: Multi-Step Chain with Tools, Schema, and Control Flow
This example simulates a complex interaction involving understanding instructions, calling tools conditionally, enforcing schemas, and using captured step results. We’ll simulate the LLM and tool responses for clarity.
import os
from pprint import pprint
from promptl_ai import Promptl, PromptlOptions, Adapter, ToolCallContent, ToolResultContent, AssistantMessage, ToolMessage, UserMessage, SystemMessage, types  # Import necessary types (PromptlOptions is used below)
# --- Use the complex prompt from the original docs ---
# (Copied here for completeness - see original docs for full text)
COMPLEX_PROMPT = """
---
provider: OpenAI
model: gpt-4o # Using a capable model for tool use and schema adherence
temperature: 0.2
tools:
  meme_downloader:
    description: Downloads memes from the internet.
    parameters:
      type: object
      properties: {category: {type: string, description: The category of memes.}}
  problem_solver:
    description: Resolves all problems you may have.
    parameters:
      type: object
      properties: {problem: {type: string, description: The problem you have.}}
schema:
  type: object
  properties: {confidence: {type: integer}, response: {type: string}}
  required: [confidence, response]
  additionalProperties: false
---
<step>
You are an advanced assistant specialized in assisting users.
Take a look at the following user problem: <user>{{problem}}</user>
You must fix the user problem. HOWEVER, DON'T FIX IT YET, AND TELL ME IF YOU HAVE UNDERSTOOD THE INSTRUCTIONS.
</step>
<step>
WAIT THERE IS ONE MORE THING BEFORE YOU CAN FIX THE PROBLEM.
I NEED YOU TO DOWNLOAD A MEME FIRST, WHATEVER CATEGORY YOU WANT.
</step>
<step as="reasoning">
Okay, first I need you to think about how to fix the user problem.
</step>
<step as="conclusion" schema={{ { type: "object", properties: { response: { type: "string", enum: ["SHOULD_FIX", "SHOULD_NOT_FIX"] } }, required: ["response"] } }}>
Now, I want you to think about whether the problem should be fixed ("SHOULD_FIX") or not ("SHOULD_NOT_FIX").
</step>
<step>
{{ if conclusion.response == "SHOULD_FIX" }}
Use the magical tool to fix the user problem.
{{ else }}
Take a look at these jokes, which have nothing to do with the user problem and pick one:
{{ for joke, index in jokes }}
{{ index + 1 }}. ({{ joke.category }}) {{ joke.text }}
{{ endfor }}
{{ endif }}
</step>
"""
parameters = {
"problem": "My keyboard is sticky.",
"jokes": [
{"category": "Tech", "text": "Why did the programmer quit his job? He didn't get arrays!"},
{"category": "Food", "text": "Why did the tomato turn red? Because it saw the salad dressing!"},
],
}
# --- Initialize with OpenAI adapter ---
promptl = Promptl(options=PromptlOptions(adapter=Adapter.OpenAI))
# --- Create the chain ---
print("--- Creating Chain ---")
chain = promptl.chains.create(COMPLEX_PROMPT, parameters)
pprint(f"Chain created. Initial parameters: {parameters}")
conversation_history = []
current_result = None
step_counter = 0

# --- Simulate the conversation flow ---
while current_result is None or not current_result.completed:
    step_counter += 1
    print(f"\n--- Executing Step {step_counter} ---")

    llm_tool_response = None  # Holds the simulated response(s) to the *previous* step

    # Determine the simulated response based on which step we are *about* to execute
    if step_counter == 1:  # Before executing step 1
        llm_tool_response = None  # The first step needs no prior response
    elif step_counter == 2:  # Before executing step 2 (after step 1 ran)
        print("[Simulating LLM Response for Step 1: Confirm Understanding]")
        # Response must match the global schema
        llm_tool_response = AssistantMessage(role="assistant", content='{"confidence": 100, "response": "Understood. I will wait to fix the problem."}')
    elif step_counter == 3:  # Before executing step 3 (after step 2 ran)
        print("[Simulating LLM Response for Step 2: Request Meme Tool]")
        print("[Simulating Tool Execution: meme_downloader]")
        # Simulate the LLM calling the meme tool, immediately followed by the tool's result
        llm_tool_response = [  # Can be a list when tool calls are involved
            AssistantMessage(
                role="assistant",
                content=None,  # Content is None when tool_calls are present
                tool_calls=[
                    # Must use the ToolCallContent structure (using the types module here for clarity)
                    types.ToolCallContent(id="call_meme_abc", name="meme_downloader", arguments={"category": "Tech"}).model_dump(by_alias=True)
                ]
            ),
            ToolMessage(
                role="tool",
                tool_call_id="call_meme_abc",
                content='{"url": "http://example.com/tech_meme.png"}'  # Result as a JSON string
            )
        ]
    elif step_counter == 4:  # Before executing step 4 (after step 3 ran)
        print("[Simulating LLM Response for Step 3: Reasoning]")
        # Response must match the global schema
        llm_tool_response = AssistantMessage(role="assistant", content='{"confidence": 90, "response": "To fix a sticky keyboard, one might need cleaning supplies."}')
    elif step_counter == 5:  # Before executing step 5 (after step 4 ran)
        print("[Simulating LLM Response for Step 4: Conclusion - Adhering to Step Schema]")
        # Response must match the STEP schema: { response: "SHOULD_FIX" | "SHOULD_NOT_FIX" }
        llm_tool_response = AssistantMessage(role="assistant", content='{"response": "SHOULD_FIX"}')
    elif step_counter == 6:  # Before executing step 6 (after step 5 ran)
        # Control flow took the "SHOULD_FIX" path
        print("[Simulating LLM Response for Step 5: Request Problem Solver Tool]")
        print("[Simulating Tool Execution: problem_solver]")
        llm_tool_response = [
            AssistantMessage(
                role="assistant",
                content=None,
                tool_calls=[
                    types.ToolCallContent(id="call_solve_xyz", name="problem_solver", arguments={"problem": "My keyboard is sticky."}).model_dump(by_alias=True)
                ]
            ),
            ToolMessage(
                role="tool",
                tool_call_id="call_solve_xyz",
                content='{"status": "solved", "method": "Used isopropyl alcohol and compressed air."}'
            )
        ]
    else:  # After step 6 ran
        print("[Simulating Final LLM Response]")
        # Response must match the global schema
        llm_tool_response = AssistantMessage(role="assistant", content='{"confidence": 95, "response": "The problem solver tool has been dispatched. Cleaning is recommended."}')

    # --- Perform the actual step ---
    # Use the *current* chain state (from the previous iteration, or the initial create)
    current_chain_state = current_result.chain if current_result else chain
    current_result = current_chain_state.step(llm_tool_response)  # Pass the simulated response

    print(f"--- Result for Step {step_counter} (Completed: {current_result.completed}) ---")
    printable_messages = [msg.model_dump() for msg in current_result.messages]
    pprint({"messages_count": len(printable_messages), "config": current_result.config})
    conversation_history = current_result.messages  # Update the history

    # Safety break for this example simulation
    if step_counter > 7:
        print("\nSafety break!")
        break

print("\n--- Final Chain State ---")
print(f"Completed: {current_result.chain.completed}")
print("--- Final Conversation ---")
pprint([msg.model_dump() for msg in conversation_history])
Example 2: Conditional Rendering Based on Parameters
Demonstrates using `{{ if }}` based on input parameters.
from promptl_ai import Promptl
from pprint import pprint
promptl = Promptl()
conditional_prompt = """
---
model: gpt-3.5-turbo
---
<system>
Generate a response tailored to the user's expertise level: {{ level }}
</system>
<user>
Explain the concept of zero-knowledge proofs.
{{ if level == "expert" }}
Focus on the underlying mathematical principles and different proof systems (like zk-SNARKs vs zk-STARKs). Assume deep cryptographic knowledge.
{{ else if level == "intermediate" }}
Explain the core idea using analogies (like Alibaba's cave), mention interactivity vs non-interactivity, and briefly touch upon applications like privacy-preserving transactions.
{{ else }} # Default to beginner
Provide a very simple analogy (like 'Where's Waldo?') explaining how you can prove you know something without revealing what it is. Keep it high-level and non-technical.
{{ endif }}
</user>
"""
doc_topic = "zero-knowledge proofs" # Included for context, though not used in {{}}
print("--- Expert Level ---")
params_expert = {"level": "expert"}
result_expert = promptl.prompts.render(conditional_prompt, params_expert)
pprint(result_expert.model_dump()) # The user message content will include the expert-level instructions
print("\n--- Intermediate Level ---")
params_intermediate = {"level": "intermediate"}
result_intermediate = promptl.prompts.render(conditional_prompt, params_intermediate)
pprint(result_intermediate.model_dump()) # User message content will have intermediate instructions
print("\n--- Beginner Level ---")
params_beginner = {"level": "beginner"}
result_beginner = promptl.prompts.render(conditional_prompt, params_beginner)
pprint(result_beginner.model_dump()) # User message content will have beginner instructions
Example 3: Using Complex Data Structures in Parameters
Showcases looping through nested data structures passed in parameters using `{{ for }}`.
from promptl_ai import Promptl
from pprint import pprint
promptl = Promptl()
looping_prompt = """
---
model: gpt-4
---
<user>
Please analyze the following code review feedback and suggest priority actions:
Repository: {{ repo_details.name }}
Pull Request: #{{ repo_details.pr_id }}
Feedback Received:
{{ for comment in feedback_list }}
--------------------
File: {{ comment.file_path }} (Lines: {{ comment.line_range }})
Author: {{ comment.author }}
Severity: {{ comment.severity }}
Comment: {{ comment.text }}
{{ if comment.suggestion }}
Suggestion:
```diff
{{ comment.suggestion }}
```
{{ endif }}
{{ endfor }}
--------------------
Based on severity ('Blocker', 'Major', 'Minor', 'Nitpick'), what are the top 3 actions to take?
</user>
"""
complex_params = {
"repo_details": {
"name": "promptl-py",
"pr_id": 42
},
"feedback_list": [
{
"file_path": "src/promptl_ai/core.py",
"line_range": "50-55",
"author": "DevA",
"severity": "Major",
"text": "This logic seems overly complex, can it be simplified?",
"suggestion": None
},
{
"file_path": "tests/test_chains.py",
"line_range": "112",
"author": "DevB",
"severity": "Blocker",
"text": "Missing assertion for edge case X.",
"suggestion": "+ assert result.error_code == 'EXPECTED_ERROR'"
},
{
"file_path": "README.md",
"line_range": "10",
"author": "DevA",
"severity": "Nitpick",
"text": "Typo in the introduction.",
"suggestion": "- Teh library\n+ The library"
}
]
}
result = promptl.prompts.render(looping_prompt, complex_params)
print("--- Rendered Prompt with Complex Data ---")
# The user message content will be long, containing the formatted list
# derived from the complex_params structure and the {{for}} loop.
# We'll just print the structure here.
pprint(result.model_dump())
print("\n--- Snippet of Rendered User Content ---")
print(result.messages[0].content[:500] + "...") # Show start of the content
Example 4: Handling Potential PromptL Errors Gracefully
Demonstrates using `try...except PromptlError` to catch issues during rendering, like missing variables.
from promptl_ai import Promptl, PromptlError
from pprint import pprint
promptl = Promptl()
template = """
---
model: gpt-3.5-turbo
---
<user>Generate an email to {{recipient_name}} about {{subject}}.</user>
"""
valid_params = {"recipient_name": "Alice", "subject": "Meeting Follow-up"}
invalid_params = {"recipient_name": "Bob"} # Missing the 'subject' variable
print("--- Attempt 1: Valid Parameters ---")
try:
    result_valid = promptl.prompts.render(template, valid_params)
    print("Success!")
    pprint(result_valid.model_dump())
except PromptlError as e:
    print(f"Unexpected PromptL Error: {e.cause.message}")
except Exception as e:
    print(f"Unexpected General Error: {e}")

print("\n--- Attempt 2: Invalid Parameters (Missing Variable) ---")
try:
    result_invalid = promptl.prompts.render(template, invalid_params)
    print("Success (This shouldn't happen!)")
    pprint(result_invalid.model_dump())
except PromptlError as e:
    print("Caught Expected PromptL Error:")
    # Print the relevant parts of the error cause
    error_details = e.cause.model_dump(include={'name', 'message', 'code', 'start'})
    pprint(error_details)
    # Expected output should mention 'variable-not-declared' or similar for 'subject'
except Exception as e:
    print(f"Unexpected General Error: {e}")
9. Quick Reference: API Summary
Here’s a cheat sheet of the main components:
- Core Classes:
  - `Promptl(options: Optional[PromptlOptions])`: Main entry point. Provides `.prompts` and `.chains`.
  - `Prompts`: Access via `promptl.prompts`.
    - `scan(prompt: str) -> ScanPromptResult`
    - `render(...) -> RenderPromptResult`
  - `Chains`: Access via `promptl.chains`.
    - `create(...) -> Chain`
  - `Chain`: Represents chain state.
    - `step(response=...) -> StepChainResult`: Advances the chain.
    - Properties: `adapter`, `completed`, `global_messages_count`, etc.
- Key Data Models:
  - `ScanPromptResult`: Output of `scan`. Has `config`, `errors`, `parameters`, `is_chain`, etc.
  - `RenderPromptResult`: Output of `render`. Has `messages`, `config`.
  - `StepChainResult`: Output of `chain.step`. Has `messages`, `config`, `completed`, `chain`.
  - Message Models (`SystemMessage`, `UserMessage`, etc.): Structure varies by adapter (`types`, `adapters.openai`, `adapters.anthropic`).
  - Content Models (`TextContent`, `ToolCallContent`, etc.): Defined in `types`.
  - `Error`, `ErrorPosition`: For PromptL errors.
  - `rpc.Error`: For WASM/communication errors.
- Enums:
  - `Adapter`: `OpenAI`, `Anthropic`, `Default`.
  - `MessageRole`: `System`, `User`, `Assistant`, `Tool`.
  - `ContentType`: `Text`, `Image`, `File`, `ToolCall`, `ToolResult`.
  - `rpc.ErrorCode`: `ExecuteError`, `SendError`, etc.
- Exceptions:
  - `PromptlError`: Wraps errors from the PromptL engine (`e.cause` is an `Error`).
  - `RPCError`: Wraps errors from the WASM/RPC layer (`e.cause` is an `rpc.Error`).
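To tie the cheat sheet together, here is a minimal sketch that uses only the calls summarized above: validate a template with `scan`, check its declared parameters, then `render` it while catching `PromptlError`:
# A minimal sketch combining scan and render with basic error handling.
from promptl_ai import Promptl, PromptlError

promptl = Promptl()
template = "<user>Write a haiku about {{topic}} in {{language}}.</user>"

scan_result = promptl.prompts.scan(template)
if scan_result.errors:
    for err in scan_result.errors:
        print(f"Template error: {err.message}")
else:
    params = {"topic": "autumn", "language": "English"}
    missing = set(scan_result.parameters) - set(params)  # Parameters the template expects but we didn't supply
    if missing:
        print(f"Missing parameters: {missing}")
    else:
        try:
            rendered = promptl.prompts.render(template, params)
            print(rendered.messages, rendered.config)
        except PromptlError as e:
            print(f"Render failed: {e.cause.message}")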
10. For Contributors: Development Setup
Interested in contributing to `promptl-py`? The project uses modern Python tooling:
- Environment & Dependencies: Uses `uv`. Set up with `uv venv && uv sync --all-extras --all-groups`.
- Linting: Run `uv run scripts/lint.py` (uses `ruff` for checks and `pyright` for type checking).
- Formatting: Run `uv run scripts/format.py` (uses `ruff`).
- Testing: Run `uv run scripts/test.py` (uses `pytest` and related plugins).
- Building/Publishing: Standard `uv build` and `uv publish`.
Check the `README.md` and `pyproject.toml` in the repository for more details.
11. The Fine Print: License
The `promptl-py` Python library is open-source and licensed under the MIT License. You can find the full license text in the `LICENSE` file within the library's source code.
Conclusion: Prompt Engineering Made Easier
PromptL and the `promptl-py` library offer a powerful combination for managing the increasing complexity of LLM interactions. By providing a dedicated syntax for templates, configuration, tool use, schemas, and chains, PromptL brings structure and maintainability to prompt engineering. The Python library provides the robust bridge needed to integrate this power into your applications, handling parsing, rendering, chain execution, and provider-specific formatting through adapters.
Whether you're building simple Q&A bots, sophisticated agents using tools, or complex multi-turn conversational flows, `promptl-py` gives you the tools to do it more effectively.