How to run custom functions

Prerequisites

This guide assumes familiarity with the following concepts:

You can use arbitrary functions as Runnables. This is useful for formatting or when you need functionality not provided by other LangChain components. Custom functions used as Runnables are called RunnableLambdas.

Note that all inputs to these functions must be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single object as input and unpacks it into multiple arguments, as in the sketch below.
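
For example, a two-argument function could be adapted like this (a minimal sketch; multiply and its parameters are illustrative and not part of the guide):

import { RunnableLambda } from "@langchain/core/runnables";

// A hypothetical function that takes two arguments.
const multiply = (a: number, b: number): number => a * b;

// Wrapper that accepts a single object and unpacks it into the two arguments.
const multiplyLambda = RunnableLambda.from(
  (input: { a: number; b: number }) => multiply(input.a, input.b)
);

await multiplyLambda.invoke({ a: 6, b: 7 }); // 42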

This guide will cover:

  • How to explicitly create a runnable from a custom function using the RunnableLambda constructor
  • Coercion of custom functions into runnables when used in chains
  • How to accept and use run metadata in your custom function
  • How to stream with custom functions by having them return generators

Using the constructor

Below, we explicitly wrap our custom logic using the RunnableLambda.from method:

yarn add @langchain/openai

import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";

const lengthFunction = (input: { foo: string }): { length: string } => {
  return {
    length: input.foo.length.toString(),
  };
};

const model = new ChatOpenAI({ model: "gpt-4o" });

const prompt = ChatPromptTemplate.fromTemplate("What is {length} squared?");

const chain = RunnableLambda.from(lengthFunction)
  .pipe(prompt)
  .pipe(model)
  .pipe(new StringOutputParser());

await chain.invoke({ foo: "bar" });
"3 squared is \\(3^2\\), which means multiplying 3 by itself. \n" +
"\n" +
"\\[3^2 = 3 \\times 3 = 9\\]\n" +
"\n" +
"So, 3 squared"... 6 more characters

Automatic coercion in chains

When using custom functions in chains with the RunnableSequence.from static method, you can omit the explicit RunnableLambda creation and rely on coercion.

Here's a simple example with a function that takes the output from the model and returns the first five letters of it:

import { RunnableSequence } from "@langchain/core/runnables";

const prompt = ChatPromptTemplate.fromTemplate(
  "Tell me a short story about {topic}"
);

const model = new ChatOpenAI({ model: "gpt-4o" });

const chainWithCoercedFunction = RunnableSequence.from([
  prompt,
  model,
  (input) => input.content.slice(0, 5),
]);

await chainWithCoercedFunction.invoke({ topic: "bears" });
"Once "

Note that we didn't need to wrap the custom function (input) => input.content.slice(0, 5) in a RunnableLambda. The custom function is coerced into a runnable. See this section for more information.
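
For comparison, a sketch of the explicit equivalent (assuming the same prompt and model as above) would wrap the function yourself:

const chainWithExplicitFunction = RunnableSequence.from([
  prompt,
  model,
  RunnableLambda.from((input) => input.content.slice(0, 5)),
]);

await chainWithExplicitFunction.invoke({ topic: "bears" });

Both chains behave the same; coercion simply saves the explicit wrapping step.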

Passing run metadata

Runnable lambdas can optionally accept a RunnableConfig parameter, which they can use to pass callbacks, tags, and other configuration information to nested runs.

import { type RunnableConfig } from "@langchain/core/runnables";

const echo = (text: string, config: RunnableConfig) => {
  const prompt = ChatPromptTemplate.fromTemplate(
    "Reverse the following text: {text}"
  );
  const model = new ChatOpenAI({ model: "gpt-4o" });
  const chain = prompt.pipe(model).pipe(new StringOutputParser());
  return chain.invoke({ text }, config);
};

const output = await RunnableLambda.from(echo).invoke("foo", {
  tags: ["my-tag"],
  callbacks: [
    {
      handleLLMEnd: (output) => console.log(output),
    },
  ],
});
{
  generations: [
    [
      {
        text: "oof",
        message: AIMessage {
          lc_serializable: true,
          lc_kwargs: [Object],
          lc_namespace: [Array],
          content: "oof",
          name: undefined,
          additional_kwargs: [Object],
          response_metadata: [Object],
          tool_calls: [],
          invalid_tool_calls: []
        },
        generationInfo: { finish_reason: "stop" }
      }
    ]
  ],
  llmOutput: {
    tokenUsage: { completionTokens: 2, promptTokens: 13, totalTokens: 15 }
  }
}

Streaming

You can use generator functions (i.e. functions that use the yield keyword and behave like iterators) in a chain.

The signature of these generators should be AsyncGenerator<Input> -> AsyncGenerator<Output>.
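
For instance, a pass-through transformer with this shape might look like the following (a minimal sketch, not taken from the guide):

// Consumes an async iterator of string chunks and yields a transformed
// chunk for each one, preserving the streaming behavior of the previous step.
async function* upperCaseChunks(
  input: AsyncGenerator<string>
): AsyncGenerator<string> {
  for await (const chunk of input) {
    yield chunk.toUpperCase();
  }
}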

Generator functions are useful for:

  • implementing a custom output parser
  • modifying the output of a previous step, while preserving streaming capabilities

Here's an example of a custom output parser for comma-separated lists. First, we create a chain that generates such a list as text:

const prompt = ChatPromptTemplate.fromTemplate(
  "Write a comma-separated list of 5 animals similar to: {animal}. Do not include numbers"
);

const strChain = prompt.pipe(model).pipe(new StringOutputParser());

const stream = await strChain.stream({ animal: "bear" });

for await (const chunk of stream) {
  console.log(chunk);
}

Lion
,
wolf
,
tiger
,
cougar
,
leopard

Next, we define a custom function that will aggregate the currently streamed output and yield it when the model generates the next comma in the list:

// This is a custom parser that splits an iterator of llm tokens
// into a list of strings separated by commas
async function* splitIntoList(input) {
  // hold partial input until we get a comma
  let buffer = "";
  for await (const chunk of input) {
    // add current chunk to buffer
    buffer += chunk;
    // while there are commas in the buffer
    while (buffer.includes(",")) {
      // split buffer on comma
      const commaIndex = buffer.indexOf(",");
      // yield everything before the comma
      yield [buffer.slice(0, commaIndex).trim()];
      // save the rest for the next iteration
      buffer = buffer.slice(commaIndex + 1);
    }
  }
  // yield the last chunk
  yield [buffer.trim()];
}

const listChain = strChain.pipe(splitIntoList);

const stream = await listChain.stream({ animal: "bear" });

for await (const chunk of stream) {
  console.log(chunk);
}
[ "wolf" ]
[ "lion" ]
[ "tiger" ]
[ "cougar" ]
[ "cheetah" ]

Invoking it gives a full array of values:

await listChain.invoke({ animal: "bear" });
[ "lion", "tiger", "wolf", "cougar", "jaguar" ]

Next steps

Now you've learned a few different ways to use custom logic within your chains, and how to implement streaming.

To learn more, see the other how-to guides on runnables in this section.

