This quickstart is available for JavaScript and Python. The JavaScript guides cover the AI SDK, Cloudflare Agents, LangGraph, GenKit, and LlamaIndex; the Python guides follow below.
AI SDK
Prerequisites
Before getting started, make sure you have completed the following steps:
- Install Node.js 20+ and npm.
- Complete the User authentication quickstart to create an application integrated with Auth0.
- Configure a Social Connection for Slack in Auth0.
  - Under the Purpose section, make sure to enable the Use for Connected Accounts with Token Vault toggle.
1. Configure Auth0 AI
First, install the SDK:
npm install @auth0/ai-vercel
Then, set up the Auth0 AI client (the tool below imports it from @/lib/auth0-ai):
import { Auth0AI } from "@auth0/ai-vercel";
import { auth0 } from "@/lib/auth0";
const auth0AI = new Auth0AI();
export const withSlack = auth0AI.withTokenVault({
connection: "sign-in-with-slack",
scopes: ["channels:read", "groups:read"],
refreshToken: async () => {
const session = await auth0.getSession();
const refreshToken = session?.tokenSet.refreshToken as string;
return refreshToken;
},
});
Here, auth0 is an instance of the @auth0/nextjs-auth0 client that handles the application's authentication flows. You can review the different authentication options for Next.js with Auth0 in the official documentation.
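For reference, here is a minimal sketch of that client, assuming the @auth0/nextjs-auth0 v4 API (adjust to your application). The offline_access scope is requested so the session contains the refresh token that the Token Vault exchange above relies on:
// lib/auth0.ts — sketch only; the standard Auth0 environment variables are read by the SDK.
import { Auth0Client } from "@auth0/nextjs-auth0/server";
export const auth0 = new Auth0Client({
  authorizationParameters: {
    // offline_access ensures a refresh token is stored in the session.
    scope: "openid profile email offline_access",
  },
});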
2. Integrate your tool with Slack
Wrap your tool using the Auth0 AI SDK to obtain an access token for the Slack API.
import { ErrorCode, WebClient } from "@slack/web-api";
import { getAccessTokenFromTokenVault } from "@auth0/ai-vercel";
import { TokenVaultError } from "@auth0/ai/interrupts";
import { withSlack } from "@/lib/auth0-ai";
import { tool } from "ai";
import { z } from "zod";
export const listChannels = withSlack(
tool({
description: "List channels for the current user on Slack",
parameters: z.object({}),
execute: async () => {
// Get the access token from Auth0 AI
const accessToken = getAccessTokenFromTokenVault();
// Slack SDK
try {
const web = new WebClient(accessToken);
const result = await web.conversations.list({
exclude_archived: true,
types: "public_channel,private_channel",
limit: 10,
});
return result.channels?.map((channel) => channel.name);
} catch (error) {
if (error && typeof error === "object" && "code" in error) {
if (error.code === ErrorCode.HTTPError) {
throw new TokenVaultError(
`Authorization required to access the Token Vault connection`
);
}
}
throw error;
}
},
})
);
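The server route in the next step imports the tool from @/lib/tools/, so a barrel file re-exporting it is assumed; a minimal sketch (the list-channels file name is hypothetical):
// lib/tools/index.ts — re-export the Token Vault-wrapped tool
export { listChannels } from "./list-channels";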
3. Handle authentication redirects
Interrupts are a way for the system to pause execution and prompt the user to take an action, such as authenticating or granting API access, before resuming the interaction. This ensures that any required access is granted dynamically and securely during the chat experience. In this context, the Auth0 AI SDK manages authentication redirects in the Vercel AI SDK via these interrupts.
Server Side
On the server-side code of your Next.js app, set up the tool invocation and handle the interruption messaging via the errorSerializer. The setAIContext function sets the async context for the Auth0 AI SDK.
import { createDataStreamResponse, Message, streamText } from "ai";
import { listChannels } from "@/lib/tools/";
import { setAIContext } from "@auth0/ai-vercel";
import { errorSerializer, withInterruptions } from "@auth0/ai-vercel/interrupts";
import { openai } from "@ai-sdk/openai";
export async function POST(request: Request) {
const { id, messages} = await request.json();
const tools = { listChannels };
setAIContext({ threadID: id });
return createDataStreamResponse({
execute: withInterruptions(
async (dataStream) => {
const result = streamText({
model: openai("gpt-4o-mini"),
system: "You are a friendly assistant! Keep your responses concise and helpful.",
messages,
maxSteps: 5,
tools,
});
result.mergeIntoDataStream(dataStream, {
sendReasoning: true,
});
},
{ messages, tools }
),
onError: errorSerializer((err) => {
console.log(err);
return "Oops, an error occured!";
}),
});
}
Client Side
In this example, we use the TokenVaultConsentPopup component to show a pop-up that lets the user authenticate with Slack and grant access to the requested scopes. You'll first need to install the @auth0/ai-components package:
npx @auth0/ai-components add TokenVault
"use client";
import { useChat } from "@ai-sdk/react";
import { useInterruptions } from "@auth0/ai-vercel/react";
import { TokenVaultInterrupt } from "@auth0/ai/interrupts";
import { TokenVaultConsentPopup } from "@/components/auth0-ai/TokenVault/popup";
export default function Chat() {
const { messages, handleSubmit, input, setInput, toolInterrupt } =
useInterruptions((handler) =>
useChat({
onError: handler((error) => console.error("Chat error:", error)),
})
);
return (
<div>
{messages.map((message) => (
<div key={message.id}>
{message.role === "user" ? "User: " : "AI: "}
{message.content}
</div>
))}
{TokenVaultInterrupt.isInterrupt(toolInterrupt) && (
<TokenVaultConsentPopup
interrupt={toolInterrupt}
connectWidget={{
title: "List Slack channels",
description:"description ...",
action: { label: "Check" },
}}
/>
)}
<form onSubmit={handleSubmit}>
<input value={input} placeholder="Say something..." onChange={(e) => setInput(e.target.value)} />
</form>
</div>
);
}
Cloudflare Agents
Prerequisites
Before getting started, make sure you have completed the following steps:
- Install Node.js 20+ and npm.
- Complete the User authentication quickstart to create an application integrated with Auth0.
- Configure a Social Connection for Slack in Auth0.
  - Under the Purpose section, make sure to enable the Use for Connected Accounts with Token Vault toggle.
1. Configure Auth0 AI
This quickstart also uses the following Auth0 SDKs:
- Auth0 Hono Web SDK: for the Worker.
- Auth0 Cloudflare Agents API SDK: for the Chat Agent.
Install the dependencies:
npm install @auth0/ai-vercel @auth0/ai-cloudflare @auth0/ai
Then, set up the Auth0 AI client for the agent (the tool below imports it from @/agent/auth0-ai):
import { Auth0AI, setGlobalAIContext } from "@auth0/ai-vercel";
import { getCurrentAgent } from "agents";
import type { Chat } from "./chat";
const getAgent = () => {
const { agent } = getCurrentAgent<Chat>();
if (!agent) {
throw new Error("No agent found");
}
return agent;
};
setGlobalAIContext(() => ({ threadID: getAgent().name }));
const auth0AI = new Auth0AI();
const refreshToken = async () => {
const credentials = getAgent().getCredentials();
return credentials?.refresh_token;
};
export const withSlack = auth0AI.withTokenVault({
refreshToken,
connection: "sign-in-with-slack",
scopes: ["channels:read", "groups:read"],
});
2. Integrate your tool with the Slack API
Wrap your tool using the Auth0 AI SDK to obtain an access token for the Slack API.
import { tool } from "ai";
import { z } from "zod/v3";
import { getAccessTokenFromTokenVault } from "@auth0/ai-vercel";
import { TokenVaultError } from "@auth0/ai/interrupts";
import { withSlack } from "@/agent/auth0-ai";
import { ErrorCode, WebClient } from "@slack/web-api";
export const listChannels = withSlack(
tool({
description: "List channels for the current user on Slack",
inputSchema: z.object({}),
execute: async () => {
// Get the access token from Auth0 AI
const accessToken = getAccessTokenFromTokenVault();
// Slack SDK
try {
const web = new WebClient(accessToken);
const result = await web.conversations.list({
exclude_archived: true,
types: "public_channel,private_channel",
limit: 10,
});
return result.channels?.map((channel) => channel.name);
} catch (error) {
if (error && typeof error === "object" && "code" in error) {
if (error.code === ErrorCode.HTTPError) {
throw new TokenVaultError(
`Authorization required to access the Token Vault`
);
}
}
throw error;
}
},
})
);
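The Chat agent in the next step imports { executions, tools } from "./tools", so the wrapped tool is assumed to be aggregated there; a minimal sketch (hypothetical file layout, with no confirmation-gated tools):
// agent/tools.ts — tools exposed to the model
import { listChannels } from "./list-channels";
export const tools = {
  listChannels,
};
// Implementations for tools that require explicit user confirmation
// (consumed by processToolCalls); empty in this sketch.
export const executions = {};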
3. Handle authentication redirects
Interrupts are a way for the system to pause execution and prompt the user to take an action, such as authenticating or granting API access, before resuming the interaction. This ensures that any required access is granted dynamically and securely during the chat experience. In this context, the Auth0 AI SDK manages authentication redirects in the Vercel AI SDK via these interrupts.
Server Side
On the Chat agent class, set up the tool invocation and handle the interruption messaging via the errorSerializer.
import { openai } from "@ai-sdk/openai";
import {
AsyncUserConfirmationResumer,
CloudflareKVStore,
} from "@auth0/ai-cloudflare";
import {
errorSerializer,
invokeTools,
withInterruptions,
} from "@auth0/ai-vercel/interrupts";
import { AuthAgent, OwnedAgent } from "@auth0/auth0-cloudflare-agents-api";
import { AIChatAgent } from "agents/ai-chat-agent";
import {
convertToModelMessages,
createUIMessageStream,
createUIMessageStreamResponse,
generateId,
stepCountIs,
streamText,
type UIMessage,
} from "ai";
import { executions, tools } from "./tools";
import { processToolCalls } from "./utils";
const model = openai("gpt-4o-2024-11-20");
class BaseChat extends AIChatAgent<Env> {}
const AuthedChat = AuthAgent(BaseChat);
const OwnedAuthedChat = OwnedAgent(AuthedChat);
const ResumableOwnedAuthedChat = AsyncUserConfirmationResumer(OwnedAuthedChat);
export class Chat extends ResumableOwnedAuthedChat {
messages: UIMessage[] = [];
declare mcp?:
| {
unstable_getAITools?: () => Record<string, unknown>;
}
| undefined;
async onChatMessage() {
const allTools = {
...tools,
...(this.mcp?.unstable_getAITools?.() ?? {}),
};
const claims = this.getClaims?.();
const stream = createUIMessageStream({
originalMessages: this.messages,
execute: withInterruptions(
async ({ writer }) => {
await invokeTools({
messages: convertToModelMessages(this.messages),
tools: allTools,
});
const processed = await processToolCalls({
messages: this.messages,
dataStream: writer,
tools: allTools,
executions,
});
const result = streamText({
model,
stopWhen: stepCountIs(10),
messages: convertToModelMessages(processed),
system: `You are a helpful assistant that can do various tasks...
If the user asks to schedule a task, use the schedule tool to schedule the task.
The name of the user is ${claims?.name ?? "unknown"}.`,
tools: allTools,
onStepFinish: (output) => {
if (output.finishReason === "tool-calls") {
const last = output.content[output.content.length - 1];
if (last?.type === "tool-error") {
const { toolName, toolCallId, error, input } = last;
const serializableError = {
cause: error,
toolCallId,
toolName,
toolArgs: input,
};
throw serializableError;
}
}
},
});
writer.merge(
result.toUIMessageStream({
sendReasoning: true,
})
);
},
{ messages: this.messages, tools: allTools }
),
onError: errorSerializer(),
});
return createUIMessageStreamResponse({ stream });
}
async executeTask(description: string) {
await this.saveMessages([
...this.messages,
{
id: generateId(),
role: "user",
parts: [{ type: "text", text: `Running scheduled task: ${description}` }],
},
]);
}
get auth0AIStore() {
return new CloudflareKVStore({ kv: this.env.Session });
}
}
To use a CloudflareKVStore instance with your Cloudflare agent worker, you can use Workers KV and a KV namespace as the persistent store. This enables you to store Auth0 session data and other key-value pairs with easy access from your Cloudflare agent workers.
import { CloudflareKVStore } from '@auth0/ai-cloudflare';
...
return new CloudflareKVStore({ kv: this.env.YOUR_KV_NAMESPACE });
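The store reads the namespace from the Worker environment (this.env.Session above); a minimal sketch of the corresponding Env typing, assuming a KV namespace bound as Session in your Wrangler configuration (typically generated with wrangler types):
// worker-configuration.d.ts — sketch only
interface Env {
  // KV namespace binding used by the Auth0 AI store.
  Session: KVNamespace;
  // ...other bindings (secrets, Durable Objects, etc.)
}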
The kv prop accepts any store that implements the KVNamespace interface, so any persistent store implementing this interface will work.
Client Side
In this example, we use the TokenVaultConsentPopup component to show a pop-up that lets the user authenticate with Slack and grant access to the requested scopes. You'll first need to install the @auth0/ai-components package:
npx @auth0/ai-components add TokenVault
"use client";
import { useChat } from "@ai-sdk/react";
import { useAgentChatInterruptions } from "@auth0/ai-cloudflare/react";
import { TokenVaultInterrupt } from "@auth0/ai/interrupts";
import { TokenVaultConsentPopup } from "@/components/auth0-ai/TokenVault/popup";
export default function Chat() {
const {
messages: agentMessages,
input: agentInput,
handleInputChange: handleAgentInputChange,
handleSubmit: handleAgentSubmit,
addToolResult,
clearHistory,
toolInterrupt,
} = useAgentChatInterruptions({
agent,
maxSteps: 5,
id: threadID,
});
return (
<div>
{agentMessages.map((message) => (
<div key={message.id}>
{message.role === "user" ? "User: " : "AI: "}
{message.content}
</div>
))}
{TokenVaultInterrupt.isInterrupt(toolInterrupt) && (
<TokenVaultConsentPopup
interrupt={toolInterrupt}
connectWidget={{
title: "Access to your Slack channels",
description:"description ...",
action: { label: "Check" },
}}
/>
)}
<form onSubmit={handleAgentSubmit}>
<input value={agentInput} placeholder="Say something..." onChange={handleAgentInputChange} />
</form>
</div>
);
}
LangGraph
Prerequisites
Before getting started, make sure you have completed the following steps:
- Install Node.js 20+ and npm.
- Complete the User authentication quickstart to create an application integrated with Auth0.
- Configure a Social Connection for Slack in Auth0.
  - Under the Purpose section, make sure to enable the Use for Connected Accounts with Token Vault toggle.
- Create a Custom API Client in Auth0.
  - Navigate to Applications > APIs.
  - Click the Create API button to create a new Custom API.
  - Go to the Custom API you created and click the Add Application button in the top right corner.
  - Once you've added the API as an application, click the Configure Application button in the top right corner.
  - Note down the client ID and client secret for your environment variables.
1. Configure Auth0 AI
First, install the SDK:
npm install @auth0/ai-langchain
Then, set up the Auth0 AI client with your tenant domain and the Custom API client credentials:
import { SUBJECT_TOKEN_TYPES } from "@auth0/ai";
import { Auth0AI } from "@auth0/ai-langchain";
const auth0AI = new Auth0AI({
auth0: {
domain: process.env.AUTH0_DOMAIN!,
clientId: process.env.AUTH0_CUSTOM_API_CLIENT_ID!,
clientSecret: process.env.AUTH0_CUSTOM_API_CLIENT_SECRET!,
},
});
const withAccessTokenForConnection = (connection: string, scopes: string[]) =>
auth0AI.withTokenVault({
connection,
scopes,
accessToken: async (_, config) => {
return config.configurable?.langgraph_auth_user?.getRawAccessToken();
},
subjectTokenType: SUBJECT_TOKEN_TYPES.SUBJECT_TYPE_ACCESS_TOKEN,
});
export const withSlack = withAccessTokenForConnection("sign-in-with-slack", ["channels:read", "groups:read"]);
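This configuration reads the tenant domain and the Custom API client credentials from the environment. A minimal sketch of the expected variables (placeholder values; the names match the code above):
AUTH0_DOMAIN=your-tenant.us.auth0.com
AUTH0_CUSTOM_API_CLIENT_ID=your-custom-api-client-id
AUTH0_CUSTOM_API_CLIENT_SECRET=your-custom-api-client-secret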
2. Integrate your tool with Slack
Wrap your tool using the Auth0 AI SDK to obtain an access token for the Slack API.
import { ErrorCode, WebClient } from "@slack/web-api";
import { getAccessTokenFromTokenVault } from "@auth0/ai-langchain";
import { TokenVaultError } from "@auth0/ai/interrupts";
import { withSlack } from "@/lib/auth0-ai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
export const listChannels = withSlack(
tool(async () => {
// Get the access token from Auth0 AI
const accessToken = getAccessTokenFromTokenVault();
// Slack SDK
try {
const web = new WebClient(accessToken);
const result = await web.conversations.list({
exclude_archived: true,
types: "public_channel,private_channel",
limit: 10,
});
return result.channels?.map((channel) => channel.name);
} catch (error) {
if (error && typeof error === "object" && "code" in error) {
if (error.code === ErrorCode.HTTPError) {
throw new TokenVaultError(
`Authorization required to access the Token Vault connection`
);
}
}
throw error;
}
},
{
name: "list_slack_channels",
description: "List channels for the current user on Slack",
schema: z.object({}),
})
);
Then, add the tool to a LangGraph ToolNode. The agent will automatically request the access token when the tool is called.
import { AIMessage } from "@langchain/core/messages";
import { RunnableLike } from "@langchain/core/runnables";
import { END, InMemoryStore, MemorySaver, MessagesAnnotation, START, StateGraph } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { listChannels } from "@/lib/tools/listChannels";
const model = new ChatOpenAI({ model: "gpt-4o", }).bindTools([
listChannels,
]);
const callLLM = async (state: typeof MessagesAnnotation.State) => {
const response = await model.invoke(state.messages);
return { messages: [response] };
};
const routeAfterLLM: RunnableLike = function (state) {
const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
if (!lastMessage.tool_calls?.length) {
return END;
}
return "tools";
};
const stateGraph = new StateGraph(MessagesAnnotation)
.addNode("callLLM", callLLM)
.addNode(
"tools",
new ToolNode(
[
// A tool with Token Vault access
listChannels,
// ... other tools
],
{
// Error handler should be disabled in order to
// trigger interruptions from within tools.
handleToolErrors: false,
}
)
)
.addEdge(START, "callLLM")
.addConditionalEdges("callLLM", routeAfterLLM, [END, "tools"])
.addEdge("tools", "callLLM");
const checkpointer = new MemorySaver();
const store = new InMemoryStore();
export const graph = stateGraph.compile({
checkpointer,
store,
interruptBefore: [],
interruptAfter: [],
});
3. Handle authentication redirects
Interrupts are a way for the system to pause execution and prompt the user to take an action, such as authenticating or granting API access, before resuming the interaction. This ensures that any required access is granted dynamically and securely during the chat experience. In this context, the Auth0 AI SDK manages such authentication redirects integrated with the LangChain SDK.
Server Side
On the server side of your Next.js application, set up a route to handle the Chat API requests. This route is responsible for forwarding the requests to the LangGraph API. Additionally, you must provide the accessToken in the headers.
import { initApiPassthrough } from "langgraph-nextjs-api-passthrough";
import { NextRequest } from "next/server";
import { auth0 } from "@/lib/auth0";
async function getAccessToken() {
const tokenResult = await auth0.getAccessToken();
if (!tokenResult?.token) {
throw new Error("Error retrieving access token for langgraph api.");
}
return tokenResult.token;
}
export const { GET, POST, PUT, PATCH, DELETE, OPTIONS, runtime } =
initApiPassthrough({
apiUrl: process.env.LANGGRAPH_API_URL,
apiKey: process.env.LANGSMITH_API_KEY,
runtime: "edge",
baseRoute: "langgraph/",
headers: async (req: NextRequest) => {
const headers: Record<string, string> = {};
req.headers.forEach((value, key) => {
headers[key] = value;
});
const accessToken = await getAccessToken();
headers["Authorization"] = `Bearer ${accessToken}`;
return headers;
},
});
Here, auth0 is an instance of the @auth0/nextjs-auth0 client that handles the application's authentication flows. You can review the different authentication options for Next.js with Auth0 in the official documentation.
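For this flow, the Next.js client must request an access token for your Custom API, since that token is what the LangGraph auth handler below verifies and what Token Vault exchanges for the Slack token. A minimal sketch, assuming the @auth0/nextjs-auth0 v4 API:
// lib/auth0.ts — sketch only; adjust scope and audience to your setup.
import { Auth0Client } from "@auth0/nextjs-auth0/server";
export const auth0 = new Auth0Client({
  authorizationParameters: {
    scope: "openid profile email",
    // Audience (identifier) of the Custom API created in the prerequisites,
    // so auth0.getAccessToken() returns a token the LangGraph server accepts.
    audience: process.env.AUTH0_AUDIENCE,
  },
});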
Add Custom Authentication
To enable custom authentication on the LangGraph server, reference an auth handler from your langgraph.json configuration:
{
"node_version": "20",
"graphs": {
"agent": "./src/lib/agent.ts:agent"
},
"env": ".env",
"auth": {
"path": "./src/lib/auth.ts:authHandler"
}
}
Then, implement the authentication handler that validates the Auth0-issued access token:
import { createRemoteJWKSet, jwtVerify } from "jose";
import { Auth, HTTPException } from "@langchain/langgraph-sdk/auth";
const AUTH0_DOMAIN = process.env.AUTH0_DOMAIN;
const AUTH0_AUDIENCE = process.env.AUTH0_AUDIENCE;
// JWKS endpoint for Auth0
const JWKS = createRemoteJWKSet(
new URL(`https://${AUTH0_DOMAIN}/.well-known/jwks.json`)
);
// Create the Auth instance
const auth = new Auth();
// Register the authentication handler
auth.authenticate(async (request: Request) => {
const authHeader = request.headers.get("Authorization");
const xApiKeyHeader = request.headers.get("x-api-key");
/**
* LangGraph Platform will convert the `Authorization` header from the client to an `x-api-key` header automatically
* as of now: https://docs.langchain.com/langgraph-platform/custom-auth
*
* We can still leverage the `Authorization` header when served in other infrastructure w/ langgraph-cli
* or when running locally.
*/
// This header is required in Langgraph Cloud.
if (!authHeader && !xApiKeyHeader) {
throw new HTTPException(401, {
message: "Invalid auth header provided.",
});
}
// prefer the xApiKeyHeader first
let token = xApiKeyHeader || authHeader;
// Remove "Bearer " prefix if present
if (token && token.startsWith("Bearer ")) {
token = token.substring(7);
}
// Validate Auth0 Access Token using common JWKS endpoint
if (!token) {
throw new HTTPException(401, {
message:
"Authorization header format must be of the form: Bearer <token>",
});
}
if (token) {
try {
// Verify the JWT using Auth0 JWKS
const { payload } = await jwtVerify(token, JWKS, {
issuer: `https://${AUTH0_DOMAIN}/`,
audience: AUTH0_AUDIENCE,
});
console.log("✅ Auth0 JWT payload resolved!", payload);
// Return the verified payload - this becomes available in graph nodes
return {
identity: payload.sub!,
email: payload.email as string,
permissions:
typeof payload.scope === "string" ? payload.scope.split(" ") : [],
auth_type: "auth0",
// include the access token for use with Auth0 Token Vault exchanges by tools
getRawAccessToken: () => token,
// Add any other claims you need
...payload,
};
} catch (jwtError) {
console.log(
"Auth0 JWT validation failed:",
jwtError instanceof Error ? jwtError.message : "Unknown error"
);
throw new HTTPException(401, {
message: "Invalid Authorization token provided.",
});
}
}
});
export { auth as authHandler };
Client Side
In this example, we use the TokenVaultConsentPopup component to show a pop-up that lets the user authenticate with Slack and grant access to the requested scopes. You'll first need to install the @auth0/ai-components package:
npx @auth0/ai-components add TokenVault
"use client";
import { FormEventHandler, useEffect, useRef, useState } from "react";
import { useQueryState } from "nuqs";
import { useStream } from "@langchain/langgraph-sdk/react";
import { TokenVaultInterrupt } from "@auth0/ai/interrupts";
import { TokenVaultConsentPopup } from "@/components/auth0-ai/TokenVault/popup";
const useFocus = () => {
const htmlElRef = useRef<HTMLInputElement>(null);
const setFocus = () => {
if (!htmlElRef.current) {
return;
}
htmlElRef.current.focus();
};
return [htmlElRef, setFocus] as const;
};
export default function Chat() {
const [threadId, setThreadId] = useQueryState("threadId");
const [input, setInput] = useState("");
const thread = useStream({
apiUrl: `${process.env.NEXT_PUBLIC_URL}/api/langgraph`,
assistantId: "agent",
threadId,
onThreadId: setThreadId,
onError: (err) => {
console.dir(err);
},
});
const [inputRef, setInputFocus] = useFocus();
useEffect(() => {
if (thread.isLoading) {
return;
}
setInputFocus();
}, [thread.isLoading, setInputFocus]);
const handleSubmit: FormEventHandler<HTMLFormElement> = async (e) => {
e.preventDefault();
thread.submit(
{ messages: [{ type: "human", content: input }] },
{
optimisticValues: (prev) => ({
messages: [
...((prev?.messages as []) ?? []),
{ type: "human", content: input, id: "temp" },
],
}),
}
);
setInput("");
};
return (
<div>
{thread.messages.filter((m) => m.content && ["human", "ai"].includes(m.type)).map((message) => (
<div key={message.id}>
{message.type === "human" ? "User: " : "AI: "}
{message.content as string}
</div>
))}
{thread.interrupt && TokenVaultInterrupt.isInterrupt(thread.interrupt.value) ? (
<div key={thread.interrupt.ns?.join("")}>
<TokenVaultConsentPopup
interrupt={thread.interrupt.value}
onFinish={() => thread.submit(null)}
connectWidget={{
title: "List Slack channels",
description:"description ...",
action: { label: "Check" },
}}
/>
</div>
) : null}
<form onSubmit={handleSubmit}>
<input ref={inputRef} value={input} placeholder="Say something..." readOnly={thread.isLoading} disabled={thread.isLoading} onChange={(e) => setInput(e.target.value)} />
</form>
</div>
);
}
GenKit
Prerequisites
Before getting started, make sure you have completed the following steps:
- Install Node.js 20+ and npm.
- Complete the User authentication quickstart to create an application integrated with Auth0.
- Configure a Social Connection for Slack in Auth0.
  - Under the Purpose section, make sure to enable the Use for Connected Accounts with Token Vault toggle.
1. Configure Auth0 AI
First, install the SDK:
npm install @auth0/ai-genkit
Then, set up the Auth0 AI client with your GenKit instance:
import { Auth0AI } from "@auth0/ai-genkit";
import { auth0 } from "@/lib/auth0";
// importing GenKit instance
import { ai } from "./genkit";
const auth0AI = new Auth0AI({
genkit: ai,
});
export const withSlack = auth0AI.withTokenVault({
connection: "sign-in-with-slack",
scopes: ["channels:read", "groups:read"],
refreshToken: async () => {
const session = await auth0.getSession();
const refreshToken = session?.tokenSet.refreshToken as string;
return refreshToken;
},
});
Here, auth0 is an instance of the @auth0/nextjs-auth0 client that handles the application's authentication flows. You can review the different authentication options for Next.js with Auth0 in the official documentation.
2. Integrate your tool with Slack
Wrap your tool using the Auth0 AI SDK to obtain an access token for the Slack API.
import { z } from "zod";
import { getAccessTokenFromTokenVault } from "@auth0/ai-genkit";
import { TokenVaultError } from "@auth0/ai/interrupts";
import { withSlack } from "@/lib/auth0-ai";
import { ErrorCode, WebClient } from "@slack/web-api";
// importing GenKit instance
import { ai } from "../genkit";
export const listChannels = ai.defineTool(
...withSlack(
{
description: "List channels for the current user on Slack",
inputSchema: z.object({}),
name: "listChannels",
},
async () => {
const accessToken = getAccessTokenFromTokenVault();
try {
// Slack SDK
const web = new WebClient(accessToken);
const result = await web.conversations.list({
exclude_archived: true,
types: "public_channel,private_channel",
limit: 10,
});
return result.channels?.map((channel) => channel.name);
} catch (error) {
if (error && typeof error === "object" && "code" in error) {
if (error.code === ErrorCode.HTTPError) {
throw new TokenVaultError(
`Authorization required to access the Token Vault connection`
);
}
}
throw error;
}
}
)
);
3. Handle authentication redirects
Interrupts are a way for the system to pause execution and prompt the user to take an action, such as authenticating or granting API access, before resuming the interaction. This ensures that any required access is granted dynamically and securely during the chat experience. In this context, the Auth0 AI SDK manages authentication redirects in the GenKit SDK via these interrupts.
Server Side
On the server-side code of your Next.js app, set up the tool invocation and resume any pending Auth0 interrupts with resumeAuth0Interrupts.
import { ToolRequestPart } from "genkit";
import path from "path";
import { ai } from "@/lib/genkit";
import { listChannels } from "@/lib/tools/list-channels";
import { resumeAuth0Interrupts } from "@auth0/ai-genkit";
import { auth0 } from "@/lib/auth0";
export async function POST(
request: Request,
{ params }: { params: Promise<{ id: string }> }
) {
const auth0Session = await auth0.getSession();
const { id } = await params;
const {
message,
interruptedToolRequest,
timezone,
}: {
message?: string;
interruptedToolRequest?: ToolRequestPart;
timezone: { region: string; offset: number };
} = await request.json();
let session = await ai.loadSession(id);
if (!session) {
session = ai.createSession({
sessionId: id,
});
}
const tools = [listChannels];
const chat = session.chat({
tools: tools,
system: `You are a helpful assistant.
The user's timezone is ${timezone.region} with an offset of ${timezone.offset} minutes.
User's details: ${JSON.stringify(auth0Session?.user, null, 2)}.
You can use the tools provided to help the user.
You can also ask the user for more information if needed.
Chat started at ${new Date().toISOString()}
`,
});
const r = await chat.send({
prompt: message,
resume: resumeAuth0Interrupts(tools, interruptedToolRequest),
});
return Response.json({ messages: r.messages, interrupts: r.interrupts });
}
export async function GET(
request: Request,
{ params }: { params: Promise<{ id: string }> }
) {
const { id } = await params;
const session = await ai.loadSession(id);
if (!session) {
return new Response("Session not found", {
status: 404,
});
}
const json = session.toJSON();
if (!json?.threads?.main) {
return new Response("Session not found", {
status: 404,
});
}
return Response.json(json.threads.main);
}
Client Side
In this example, we use the TokenVaultConsentPopup component to show a pop-up that lets the user authenticate with Slack and grant access to the requested scopes. You'll first need to install the @auth0/ai-components package:
npx @auth0/ai-components add TokenVault
"use client";
import { useQueryState } from "nuqs";
import { FormEventHandler, useEffect, useRef, useState } from "react";
import { TokenVaultInterrupt } from "@auth0/ai/interrupts";
import { TokenVaultConsentPopup } from "@/components/auth0-ai/TokenVault/popup";
import Markdown from "react-markdown";
const useFocus = () => {
const htmlElRef = useRef<HTMLInputElement>(null);
const setFocus = () => {
if (!htmlElRef.current) {
return;
}
htmlElRef.current.focus();
};
return [htmlElRef, setFocus] as const;
};
export default function Chat() {
const [threadId, setThreadId] = useQueryState("threadId");
const [input, setInput] = useState("");
const [isLoading, setIsLoading] = useState(false);
const [messages, setMessages] = useState<
{
role: "user" | "model";
content: [{ text?: string; metadata?: { interrupt?: any } }];
}[]
>([]);
useEffect(() => {
if (!threadId) {
setThreadId(self.crypto.randomUUID());
}
}, [threadId, setThreadId]);
useEffect(() => {
if (!threadId) {
return;
}
setIsLoading(true);
(async () => {
const messagesResponse = await fetch(`/api/chat/${threadId}`, {
method: "GET",
credentials: "include",
});
if (!messagesResponse.ok) {
setMessages([]);
} else {
setMessages(await messagesResponse.json());
}
setIsLoading(false);
})();
}, [threadId]);
const [inputRef, setInputFocus] = useFocus();
useEffect(() => {
if (isLoading) {
return;
}
setInputFocus();
}, [isLoading, setInputFocus]);
const submit = async ({
message,
interruptedToolRequest,
}: {
message?: string;
interruptedToolRequest?: any;
}) => {
setIsLoading(true);
const timezone = {
region: Intl.DateTimeFormat().resolvedOptions().timeZone,
offset: new Date().getTimezoneOffset(),
};
const response = await fetch(`/api/chat/${threadId}`, {
method: "POST",
credentials: "include",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({ message, interruptedToolRequest, timezone }),
});
if (!response.ok) {
console.error("Error sending message");
} else {
const { messages: messagesResponse } = await response.json();
setMessages(messagesResponse);
}
setIsLoading(false);
};
// When the user submits a message, add it to the list of messages and resume the conversation.
const handleSubmit: FormEventHandler<HTMLFormElement> = async (e) => {
e.preventDefault();
setMessages((messages) => [
...messages,
{ role: "user", content: [{ text: input }] },
]);
submit({ message: input });
setInput("");
};
return (
<div>
{messages
.filter(
(m) =>
["model", "user", "tool"].includes(m.role) &&
m.content?.length > 0 &&
(m.content[0].text || m.content[0].metadata?.interrupt)
)
.map((message, index) => (
<div key={index}>
<Markdown>
{(message.role === "user" ? "User: " : "AI: ") +
(message.content[0].text || "")}
</Markdown>
{!isLoading &&
message.content[0].metadata?.interrupt &&
TokenVaultInterrupt.isInterrupt(
message.content[0].metadata?.interrupt
)
? (() => {
const interrupt: any = message.content[0].metadata?.interrupt;
return (
<div>
<TokenVaultConsentPopup
onFinish={() => submit({ interruptedToolRequest: message.content[0] })}
interrupt={interrupt}
connectWidget={{
title: `Requested by: "${interrupt.toolCall.toolName}"`,
description: "Description...",
action: { label: "Check" },
}}
/>
</div>
);
})()
: null}
</div>
))}
<form onSubmit={handleSubmit}>
<input value={input} ref={inputRef} placeholder="Say something..." readOnly={isLoading} disabled={isLoading} onChange={(e) => setInput(e.target.value)} />
</form>
</div>
);
}
LlamaIndex
Prerequisites
Before getting started, make sure you have completed the following steps:
- Install Node.js 20+ and npm.
- Complete the User authentication quickstart to create an application integrated with Auth0.
- Configure a Social Connection for Slack in Auth0.
  - Under the Purpose section, make sure to enable the Use for Connected Accounts with Token Vault toggle.
1. Configure Auth0 AI
First, install the SDK:
npm install @auth0/ai-llamaindex
Then, set up the Auth0 AI client (the tool below imports it from @/lib/auth0-ai):
import { Auth0AI } from "@auth0/ai-llamaindex";
import { auth0 } from "@/lib/auth0";
const auth0AI = new Auth0AI();
export const withSlack = auth0AI.withTokenVault({
connection: "sign-in-with-slack",
scopes: ["channels:read", "groups:read"],
refreshToken: async () => {
const session = await auth0.getSession();
const refreshToken = session?.tokenSet.refreshToken as string;
return refreshToken;
},
});
Here, auth0 is an instance of the @auth0/nextjs-auth0 client that handles the application's authentication flows. You can review the different authentication options for Next.js with Auth0 in the official documentation.
2. Integrate your tool with Slack
Wrap your tool using the Auth0 AI SDK to obtain an access token for the Slack API.
import { tool } from "llamaindex";
import { z } from "zod";
import { withSlack } from "@/lib/auth0-ai";
import { getAccessTokenFromTokenVault } from "@auth0/ai-llamaindex";
import { TokenVaultError } from "@auth0/ai/interrupts";
import { ErrorCode, WebClient } from "@slack/web-api";
export const listChannels = () =>
withSlack(
tool(
async () => {
// Get the access token from Auth0 AI
const accessToken = getAccessTokenFromTokenVault();
// Slack SDK
try {
const web = new WebClient(accessToken);
const result = await web.conversations.list({
exclude_archived: true,
types: "public_channel,private_channel",
limit: 10,
});
return (
result.channels
?.map((channel) => channel.name)
.filter((name): name is string => name !== undefined) || []
);
} catch (error) {
if (error && typeof error === "object" && "code" in error) {
if (error.code === ErrorCode.HTTPError) {
throw new TokenVaultError(
`Authorization required to access the Token Vault connection`
);
}
}
throw error;
}
},
{
name: "listChannels",
description: "List channels for the current user on Slack",
parameters: z.object({}),
}
)
);
3. Handle authentication redirects
Interrupts are a way for the system to pause execution and prompt the user to take an action, such as authenticating or granting API access, before resuming the interaction. This ensures that any required access is granted dynamically and securely during the chat experience. In this context, the Auth0 AI SDK manages authentication redirects in the LlamaIndex SDK via these interrupts.
Server Side
On the server-side code of your Next.js app, set up the tool invocation and handle the interruption messaging via the errorSerializer. The setAIContext function sets the async context for the Auth0 AI SDK.
import { createDataStreamResponse, LlamaIndexAdapter, Message, ToolExecutionError } from "ai";
import { listChannels } from "@/lib/tools/";
import { setAIContext } from "@auth0/ai-llamaindex";
import { withInterruptions } from "@auth0/ai-llamaindex/interrupts";
import { errorSerializer } from "@auth0/ai-vercel/interrupts";
import { OpenAIAgent } from "llamaindex";
export async function POST(request: Request) {
const { id, messages }: { id: string; messages: Message[] } =
await request.json();
setAIContext({ threadID: id });
return createDataStreamResponse({
execute: withInterruptions(
async (dataStream) => {
const agent = new OpenAIAgent({
systemPrompt: "You are an AI assistant",
tools: [listChannels()],
verbose: true,
});
const stream = await agent.chat({
message: messages[messages.length - 1].content,
stream: true,
});
LlamaIndexAdapter.mergeIntoDataStream(stream as any, { dataStream });
},
{
messages,
errorType: ToolExecutionError,
}
),
onError: errorSerializer((err) => {
console.log(err);
return "Oops, an error occured!";
}),
});
}
Client Side
In this example, we use the TokenVaultConsentPopup component to show a pop-up that lets the user authenticate with Slack and grant access to the requested scopes. You'll first need to install the @auth0/ai-components package:
npx @auth0/ai-components add TokenVault
"use client";
import { generateId } from "ai";
import { TokenVaultConsentPopup } from "@/components/auth0-ai/TokenVault/popup";
import { useInterruptions } from "@auth0/ai-vercel/react";
import { TokenVaultInterrupt } from "@auth0/ai/interrupts";
import { useChat } from "@ai-sdk/react";
export default function Chat() {
const { messages, handleSubmit, input, setInput, toolInterrupt } =
useInterruptions((handler) =>
useChat({
experimental_throttle: 100,
sendExtraMessageFields: true,
generateId,
onError: handler((error) => console.error("Chat error:", error)),
})
);
return (
<div>
{messages.map((message) => (
<div key={message.id}>
{message.role === "user" ? "User: " : "AI: "}
{message.content}
{message.parts && message.parts.length > 0 && (
<div>
{toolInterrupt?.toolCall.id.includes(message.id) &&
TokenVaultInterrupt.isInterrupt(toolInterrupt) && (
<TokenVaultConsentPopup
interrupt={toolInterrupt}
connectWidget={{
title: `Requested by: "${toolInterrupt.toolCall.name}"`,
description: "Description...",
action: { label: "Check" },
}}
/>
)}
</div>
)}
</div>
))}
<form onSubmit={handleSubmit}>
<input value={input} placeholder="Say something..." onChange={(e) => setInput(e.target.value)} autoFocus />
</form>
</div>
);
}
The Python quickstarts cover LangGraph, LlamaIndex, and CrewAI.
LangGraph
Prerequisites
Before getting started, make sure you have completed the following steps:
- Install Python 3.11+ and pip.
- Complete the User authentication quickstart to create an application integrated with Auth0.
- Configure a Social Connection for Slack in Auth0.
  - Under the Purpose section, make sure to enable the Use for Connected Accounts with Token Vault toggle.
1. Configure Auth0 AI
First, install the SDK:
pip install auth0-ai-langchain
Then, set up the Auth0 AI client:
from auth0_ai_langchain.auth0_ai import Auth0AI
auth0_ai = Auth0AI()
with_slack = auth0_ai.with_token_vault(
connection="sign-in-with-slack",
scopes=["channels:read groups:read"],
# Optional: By default, the SDK will expect the refresh token from
# the LangChain RunnableConfig (`config.configurable._credentials.refresh_token`)
# If you want to use a different store for refresh token you can set up a getter here
# refresh_token=lambda *_args, **_kwargs:session["user"]["refresh_token"],
)
2. Integrate your tool with Slack
Wrap your tool using the Auth0 AI SDK to obtain an access token for the Slack API.
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
from pydantic import BaseModel
from langchain_core.tools import StructuredTool
from auth0_ai_langchain.token_vault import get_access_token_from_token_vault, TokenVaultError
from lib.auth0_ai import with_slack
class EmptySchema(BaseModel):
pass
def list_channels_tool_function():
# Get the access token from Auth0 AI
access_token = get_access_token_from_token_vault()
# Slack SDK
try:
client = WebClient(token=access_token)
response = client.conversations_list(
exclude_archived=True,
types="public_channel,private_channel",
limit=10
)
channels = response['channels']
channel_names = [channel['name'] for channel in channels]
return channel_names
except SlackApiError as e:
if e.response['error'] == 'not_authed':
raise TokenVaultError("Authorization required to access the Token Vault API")
raise ValueError(f"An error occurred: {e.response['error']}")
list_slack_channels_tool = with_slack(StructuredTool(
name="list_slack_channels",
description="List channels for the current user on Slack",
args_schema=EmptySchema,
func=list_channels_tool_function,
))
Then, add the tool to a LangGraph ToolNode. The agent will automatically request the access token when the tool is called.
from typing import Annotated, Sequence, TypedDict
from langgraph.store.memory import InMemoryStore
from langchain_core.messages import AIMessage, BaseMessage
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph, add_messages
from langgraph.prebuilt import ToolNode
from tools.list_channels import list_slack_channels_tool
class State(TypedDict):
messages: Annotated[Sequence[BaseMessage], add_messages]
llm = ChatOpenAI(model="gpt-4o")
llm.bind_tools([list_slack_channels_tool])
async def call_llm(state: State):
response = await llm.ainvoke(state["messages"])
return {"messages": [response]}
def route_after_llm(state: State):
messages = state["messages"]
last_message = messages[-1] if messages else None
if isinstance(last_message, AIMessage) and last_message.tool_calls:
return "tools"
return END
workflow = (
StateGraph(State)
.add_node("call_llm", call_llm)
.add_node(
"tools",
ToolNode(
[
# a tool with Token Vault access
list_slack_channels_tool,
# ... other tools
],
# The error handler should be disabled to
# allow interruptions to be triggered from within tools.
handle_tool_errors=False
)
)
.add_edge(START, "call_llm")
.add_edge("tools", "call_llm")
.add_conditional_edges("call_llm", route_after_llm, [END, "tools"])
)
graph = workflow.compile(checkpointer=MemorySaver(), store=InMemoryStore())
3. Handle authentication redirects
Interrupts are a way for the system to pause execution and prompt the user to take an action, such as authenticating or granting API access, before resuming the interaction. This ensures that any required access is granted dynamically and securely during the chat experience. In this context, the Auth0 AI SDK manages such authentication redirects integrated with the LangChain SDK.
Server Side
On the server side of your Next.js application, set up a route to handle the Chat API requests. This route is responsible for forwarding the requests to the LangGraph API. Additionally, you must provide the refreshToken to LangChain's RunnableConfig from the authenticated user's session.
import { initApiPassthrough } from "langgraph-nextjs-api-passthrough";
import { auth0 } from "@/lib/auth0";
const getRefreshToken = async () => {
const session = await auth0.getSession();
const refreshToken = session?.tokenSet.refreshToken as string;
return refreshToken;
};
export const { GET, POST, PUT, PATCH, DELETE, OPTIONS, runtime } =
initApiPassthrough({
apiUrl: process.env.LANGGRAPH_API_URL,
apiKey: process.env.LANGSMITH_API_KEY,
runtime: "edge",
baseRoute: "langgraph/",
bodyParameters: async (req, body) => {
if (
req.nextUrl.pathname.endsWith("/runs/stream") &&
req.method === "POST"
) {
return {
...body,
config: {
configurable: {
_credentials: {
refreshToken: await getRefreshToken(),
},
},
},
};
}
return body;
},
});
Here, auth0 is an instance of the @auth0/nextjs-auth0 client that handles the application's authentication flows. You can review the different authentication options for Next.js with Auth0 in the official documentation.
Client Side
In this example, we use the TokenVaultConsentPopup component to show a pop-up that lets the user authenticate with Slack and grant access to the requested scopes. You'll first need to install the @auth0/ai-components package:
npx @auth0/ai-components add TokenVault
"use client";
import { FormEventHandler, useEffect, useRef, useState } from "react";
import { useQueryState } from "nuqs";
import { useStream } from "@langchain/langgraph-sdk/react";
import { TokenVaultInterrupt } from "@auth0/ai/interrupts";
import { TokenVaultConsentPopup } from "@/components/auth0-ai/TokenVault/popup";
const useFocus = () => {
const htmlElRef = useRef<HTMLInputElement>(null);
const setFocus = () => {
if (!htmlElRef.current) {
return;
}
htmlElRef.current.focus();
};
return [htmlElRef, setFocus] as const;
};
export default function Chat() {
const [threadId, setThreadId] = useQueryState("threadId");
const [input, setInput] = useState("");
const thread = useStream({
apiUrl: `${process.env.NEXT_PUBLIC_URL}/api/langgraph`,
assistantId: "agent",
threadId,
onThreadId: setThreadId,
onError: (err) => {
console.dir(err);
},
});
const [inputRef, setInputFocus] = useFocus();
useEffect(() => {
if (thread.isLoading) {
return;
}
setInputFocus();
}, [thread.isLoading, setInputFocus]);
const handleSubmit: FormEventHandler<HTMLFormElement> = async (e) => {
e.preventDefault();
thread.submit(
{ messages: [{ type: "human", content: input }] },
{
optimisticValues: (prev) => ({
messages: [
...((prev?.messages as []) ?? []),
{ type: "human", content: input, id: "temp" },
],
}),
}
);
setInput("");
};
return (
<div>
{thread.messages.filter((m) => m.content && ["human", "ai"].includes(m.type)).map((message) => (
<div key={message.id}>
{message.type === "human" ? "User: " : "AI: "}
{message.content as string}
</div>
))}
{thread.interrupt && TokenVaultInterrupt.isInterrupt(thread.interrupt.value) ? (
<div key={thread.interrupt.ns?.join("")}>
<TokenVaultConsentPopup
interrupt={thread.interrupt.value}
onFinish={() => thread.submit(null)}
connectWidget={{
title: "List GitHub respositories",
description:"description ...",
action: { label: "Check" },
}}
/>
</div>
) : null}
<form onSubmit={handleSubmit}>
<input ref={inputRef} value={input} placeholder="Say something..." readOnly={thread.isLoading} disabled={thread.isLoading} onChange={(e) => setInput(e.target.value)} />
</form>
</div>
);
}
LlamaIndex
Prerequisites
Before getting started, make sure you have completed the following steps:
- Install Python 3.11+ and pip.
- Complete the User authentication quickstart to create an application integrated with Auth0.
- Configure a Social Connection for Slack in Auth0.
  - Under the Purpose section, make sure to enable the Use for Connected Accounts with Token Vault toggle.
1. Configure Auth0 AI
First, install the SDK:
pip install auth0-ai-llamaindex
Then, set up the Auth0 AI client:
from auth0_ai_llamaindex.auth0_ai import Auth0AI
from flask import session
auth0_ai = Auth0AI()
with_slack = auth0_ai.with_token_vault(
connection="sign-in-with-slack",
scopes=["channels:read groups:read"],
refresh_token=lambda *_args, **_kwargs:session["user"]["refresh_token"],
)
2. Integrate your tool with Slack
Wrap your tool using the Auth0 AI SDK to obtain an access token for the Slack API.
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
from llama_index.core.tools import FunctionTool
from auth0_ai_llamaindex.token_vault import get_access_token_from_token_vault, TokenVaultError
from src.lib.auth0_ai import with_slack
def list_slack_channels_tool_function():
# Get the access token from Auth0 AI
access_token = get_access_token_from_token_vault()
# Slack SDK
try:
client = WebClient(token=access_token)
response = client.conversations_list(
exclude_archived=True,
types="public_channel,private_channel",
limit=10
)
channels = response['channels']
channel_names = [channel['name'] for channel in channels]
return channel_names
except SlackApiError as e:
if e.response['error'] == 'not_authed':
raise TokenVaultError("Authorization required to access the Token Vault API")
raise ValueError(f"An error occurred: {e.response['error']}")
list_slack_channels_tool = with_slack(FunctionTool.from_defaults(
name="list_slack_channels",
description="List channels for the current user on Slack",
fn=list_slack_channels_tool_function,
))
Then, provide the wrapped tool to your agent:
from datetime import datetime
from llama_index.agent.openai import OpenAIAgent
from src.lib.tools.list_channels import list_slack_channels_tool
system_prompt = f"""You are an assistant designed to answer random user's questions.
**Additional Guidelines**:
- Today’s date for reference: {datetime.now().isoformat()}
"""
agent = OpenAIAgent.from_tools(
tools=[
# a tool with Token Vault access
list_slack_channels_tool
# ... other tools
],
model="gpt-4o",
system_prompt=system_prompt,
verbose=True,
)
3. Handle authentication redirects
Interrupts are a way for the system to pause execution and prompt the user to take an action, such as authenticating or granting API access, before resuming the interaction. This ensures that any required access is granted dynamically and securely during the chat experience. In this context, the Auth0 AI SDK manages such authentication redirects integrated with the LlamaIndex SDK.
Server Side
On the server side of your Flask application, set up a route to handle the Chat API requests. This route forwards the requests to the OpenAI API using LlamaIndex's SDK, which has been initialized with Auth0 AI's protection for tools. When a TokenVaultInterrupt error occurs, the server signals the front-end about the access restriction, and the front-end should prompt the user to trigger a new authorization (or login) request with the necessary permissions.
from dotenv import load_dotenv
from flask import Flask, request, jsonify, session
from auth0_ai_llamaindex.auth0_ai import Auth0AI
from auth0_ai_llamaindex.token_vault import TokenVaultInterrupt
from src.lib.agent import agent
load_dotenv()
app = Flask(__name__)
@app.route("/chat", methods=["POST"])
async def chat():
if "user" not in session:
return jsonify({"error": "unauthorized"}), 401
try:
message = request.json.get("message")
response = await agent.achat(message)
return jsonify({"response": str(response)})
except TokenVaultInterrupt as e:
return jsonify({"error": str(e.to_json())}), 403
except Exception as e:
return jsonify({"error": str(e)}), 500
Account Linking
If you're integrating with Slack, but users in your app or agent can sign in using other methods (e.g., a username and password or another social provider), you'll need to link these identities into a single user account. Auth0 refers to this process as Account Linking. Account Linking logic and handling will vary depending on your app or agent. You can find an example of how to implement it in a Next.js chatbot app here. If you have questions or are looking for best practices, join our Discord and ask in the #auth0-for-gen-ai channel.