AI agents and Operators continue to take over software development. In a previous post, we built a personal assistant that can use different types of tools. This post continues that journey, adding more tools to the assistant and making it production-ready.
Recap
Previous posts in this series:
- Tool Calling in AI Agents: Empowering Intelligent Automation Securely
- Build an AI Assistant with LangGraph, Vercel, and Next.js: Use Gmail as a Tool Securely
In the previous post, we learned how to build a tool-calling AI agent using LangGraph, Vercel AI SDK, and Next.js. We used a simple calculator tool, a web search tool, and a Gmail search and draft tool. We also learned how to secure the tools using Auth0 Token Vault.
What we will learn in this post
Today, we will learn the following:
- Use LangGraph Server to host the AI agent
- Handle authorization interrupts and step-up authentication using Auth0
- Add LangChain's Google Calendar community tool
- Add a custom tool that can access your own APIs
If you haven't read the previous posts, I recommend you do so before continuing to better understand tool calling in AI agents and how security is currently handled.
To keep this post focused on tool calling, we will cover only the AI agent and not the UI. Each step of the tutorial, including building the UI, can be found as a distinct commit in the GitHub repository.
Technology stack
We will use a Next.js application called Assistant 0 as the base, and we will continue from where we left off in the previous post. We will no longer need Vercel's AI SDK, as we will be using LangGraph's React SDK to stream response tokens to the client.
Prerequisites
You will need the following tools and services to build this application:
- Bun v1.2 or Node.js v20
- An Auth0 AI account. Create one.
- An OpenAI account and API key. Create one or use any other LLM provider supported by LangChain.
- A Google account for Gmail and Calendar. Preferably a new one for testing.
Getting started
First clone the repository and install the dependencies:
```bash
git clone https://github.com/auth0-samples/auth0-assistant0.git
cd auth0-assistant0
git switch step-4 # so that we skip to the point where we left off in the previous post
bun install # or npm install
```
If you haven't already, you'll need to set up environment variables in your repo's `.env.local` file. Copy the `.env.example` file to `.env.local`. To start, you'll need to add your OpenAI API key and Auth0 credentials.
Switch to LangGraph Server
First, let's switch to a LangGraph Server so that the application is more production-ready.
Why LangGraph Server?
LangGraph recommends using a LangGraph Server to host the StateGraph acting as the AI agent. For local development, this can be an in-memory server created using the LangGraph CLI. For production, you can use a self-hosted LangGraph Server or LangGraph Cloud. This setup also provides LangGraph Studio for debugging and monitoring the agent.
New architecture
The application will be structured as below:
- `src/app`: Contains the Next.js application routes, layout, and pages.
- `src/app/api/chat/[..._path]/route.ts`: API route that forwards the chat requests to the LangGraph Server.
- `src/lib`: Services and configurations. This is where custom tools will be defined.
- `src/lib/agent.ts`: The LangGraph AI agent is defined here.
- `src/lib/auth0.ts`: The Auth0 client is defined here.
- `src/lib/auth0-ai.ts`: The Auth0 AI SDK instance is defined here.
- `src/components`: UI components used in the application.
- `src/utils`: Utility functions.
Our new architecture routes chat requests from the Next.js app through an API passthrough route to the LangGraph Server, which hosts the agent and its tools.
Let's install some new dependencies. We will talk about them later on.
```bash
bun add langgraph-nextjs-api-passthrough nuqs react-device-detect
bun add npm-run-all -d
```
You can remove the Vercel AI SDK dependency (`ai`) from the `package.json` file if you like.
Update the Auth0 configuration
Update the `src/lib/auth0.ts` file with the following code.
```typescript
// src/lib/auth0.ts
import { Auth0Client } from '@auth0/nextjs-auth0/server';

export const auth0 = new Auth0Client();

// Get the refresh token from Auth0 session
export const getRefreshToken = async () => {
  const session = await auth0.getSession();
  return session?.tokenSet?.refreshToken;
};
```
Update the `src/lib/auth0-ai.ts` file and remove the `refreshToken` function and its reference from the `withGoogleConnection` function.
```typescript
// src/lib/auth0-ai.ts
//... existing code

// Connection for Google services
export const withGoogleConnection = auth0AI.withTokenForConnection({
  connection: 'google-oauth2',
  scopes: [
    'https://www.googleapis.com/auth/gmail.readonly',
    'https://www.googleapis.com/auth/gmail.compose',
  ],
});
```
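For reference, the rest of `src/lib/auth0-ai.ts` stays as it was in the previous post. Here is a minimal sketch of that surrounding code, assuming the `Auth0AI` and `getAccessTokenForConnection` exports from `@auth0/ai-langchain` that Auth0's samples use; your file from the previous post may differ slightly.

```typescript
// src/lib/auth0-ai.ts — sketch of the code carried over from the previous post.
// This is an assumption based on Auth0's Auth for GenAI samples, not new code to add.
import { Auth0AI, getAccessTokenForConnection } from '@auth0/ai-langchain';

// The Auth0 AI SDK instance used to wrap tools with Token Vault-backed connections.
const auth0AI = new Auth0AI();

// Resolves the federated Google access token for the current tool call from the Token Vault.
export const getAccessToken = async () => getAccessTokenForConnection();

// ... followed by the withGoogleConnection export shown above.
```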
Use the LangGraph Server
First, add a `langgraph.json` file to the project's root. This file will be used to configure the LangGraph Server.
{ "node_version": "20", "graphs": { "agent": "./src/lib/agent.ts:agent" }, "env": ".env.local", "dependencies": ["."] }
Add the `.langgraph_api` folder to the `.gitignore` file.
Update the `.env.local` file to add the `LANGGRAPH_API_URL` environment variable.
LANGGRAPH_API_URL=http://localhost:54367
Add the following script entries to the `package.json` file.
"scripts": { // ... existing scripts "all:dev": "run-p langgraph:dev dev", "all:start": "run-p langgraph:start start", "langgraph:dev": "npx @langchain/langgraph-cli dev --port 54367", "langgraph:start": "npx @langchain/langgraph-cli up" }
Now, create a new file, `src/lib/agent.ts`, and add the following code. It's almost the same as the agent from the previous post, with some minor changes so that chat messages are saved and we can handle interrupts.
```typescript
// src/lib/agent.ts
import { createReactAgent, ToolNode } from '@langchain/langgraph/prebuilt';
import { ChatOpenAI } from '@langchain/openai';
import { InMemoryStore, MemorySaver } from '@langchain/langgraph';
import { Calculator } from '@langchain/community/tools/calculator';
import { SerpAPI } from '@langchain/community/tools/serpapi';
import { GmailCreateDraft, GmailSearch } from '@langchain/community/tools/gmail';

import { getAccessToken, withGoogleConnection } from './auth0-ai';

const AGENT_SYSTEM_TEMPLATE = `You are a personal assistant named Assistant0. You are a helpful assistant that can answer questions and help with tasks. You have access to a set of tools, use the tools as needed to answer the user's question. Render the email body as a markdown block, do not wrap it in code blocks.`;

const llm = new ChatOpenAI({
  model: 'gpt-4o',
  temperature: 0,
});

// Provide the access token to the Gmail tools
const gmailParams = {
  credentials: {
    accessToken: getAccessToken,
  },
};

const tools = [
  new Calculator(),
  // web search using SerpAPI
  new SerpAPI(),
  withGoogleConnection(new GmailSearch(gmailParams)),
  withGoogleConnection(new GmailCreateDraft(gmailParams)),
];

const checkpointer = new MemorySaver();
const store = new InMemoryStore();

/**
 * Use a prebuilt LangGraph agent.
 */
export const agent = createReactAgent({
  llm,
  tools: new ToolNode(tools, {
    // Error handler must be disabled in order to trigger interruptions from within tools.
    handleToolErrors: false,
  }),
  // Modify the stock prompt in the prebuilt agent.
  prompt: AGENT_SYSTEM_TEMPLATE,
  store,
  checkpointer,
});
```
Now let's update our route file. Rename `src/app/api/chat/route.ts` to `src/app/api/chat/[..._path]/route.ts`. The `[..._path]` segment is a Next.js catch-all dynamic route that captures any path after `/api/chat/`. Update the code to the following. This will forward all requests to the LangGraph Server. We also pass the refresh token to the LangGraph Server so that it can be used for authentication from the LangChain tools.
```typescript
// src/app/api/chat/[..._path]/route.ts
import { initApiPassthrough } from 'langgraph-nextjs-api-passthrough';

import { getRefreshToken } from '@/lib/auth0';

const getCredentials = async () => ({
  refreshToken: await getRefreshToken(),
});

export const { GET, POST, PUT, PATCH, DELETE, OPTIONS, runtime } = initApiPassthrough({
  apiUrl: process.env.LANGGRAPH_API_URL,
  baseRoute: 'chat/',
  bodyParameters: async (req, body) => {
    if (req.nextUrl.pathname.endsWith('/runs/stream') && req.method === 'POST') {
      return {
        ...body,
        config: {
          configurable: {
            _credentials: await getCredentials(),
          },
        },
      };
    }
    return body;
  },
});
```
We have now switched to a more realistic LangGraph architecture.
Implement step-up authorization
Until now, we relied on users consenting to all required scopes during login to get access tokens for Google API calls. In realistic scenarios, however, this approach doesn't scale and isn't the most secure way to handle authorization. We want users to be able to use the assistant without consenting to every scope upfront. When the user invokes a tool that requires a scope they haven't granted yet, we interrupt the agent and ask them to consent to that scope. This is called step-up authorization. Let's implement it.
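Conceptually, this builds on LangGraph's interrupt mechanism: when a wrapped tool can't obtain a valid token for the requested connection and scopes, the run is interrupted and the interrupt payload is surfaced to the client, which resumes the run once the user has granted access. The stripped-down sketch below is only an illustration of that pattern using LangGraph's `interrupt()` helper and a hypothetical token lookup; it is not the Auth0 AI SDK's actual implementation, which `withGoogleConnection` applies for you. It also shows why we disabled tool error handling on the agent's `ToolNode` earlier.

```typescript
// Illustrative only — the Auth0 AI wrapper does this for you.
// getGoogleAccessToken and calendarToolSketch are hypothetical.
import { tool } from '@langchain/core/tools';
import { interrupt } from '@langchain/langgraph';

// Hypothetical token lookup; in the real app the Token Vault holds the federated token.
const getGoogleAccessToken = async (): Promise<string | undefined> => undefined;

export const calendarToolSketch = tool(
  async () => {
    const token = await getGoogleAccessToken();
    if (!token) {
      // Suspends the run and surfaces this payload to the client. Because the ToolNode
      // does not swallow the thrown interrupt (handleToolErrors: false), it propagates
      // out of the graph. After the user authorizes in the pop-up and the client resumes
      // the run, the tool executes again and the token lookup can succeed.
      interrupt({ message: 'Authorization required to access Google Calendar.' });
    }
    return 'Calendar API called with the granted token.';
  },
  {
    name: 'calendar_sketch',
    description: 'Illustrative tool that requires step-up authorization.',
  },
);
```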
Add step-up authorization using Auth0 AI Components
Now, let's add the components needed to handle step-up authorization interrupts in the UI. We will use the prebuilt Auth0 AI Components for this.
Install Auth0 AI Components for Next.js to get the required UI components.
npx @auth0/ai-components add FederatedConnections
Create a new file, `src/components/auth0-ai/FederatedConnections/FederatedConnectionInterruptHandler.tsx`, and add the following code:
```tsx
import { FederatedConnectionInterrupt } from '@auth0/ai/interrupts';
import type { Interrupt } from '@langchain/langgraph-sdk';

import { EnsureAPIAccess } from '@/components/auth0-ai/FederatedConnections';

interface FederatedConnectionInterruptHandlerProps {
  interrupt: Interrupt | undefined | null;
  onFinish: () => void;
}

export function FederatedConnectionInterruptHandler({
  interrupt,
  onFinish,
}: FederatedConnectionInterruptHandlerProps) {
  if (!interrupt || !FederatedConnectionInterrupt.isInterrupt(interrupt.value)) {
    return null;
  }

  return (
    <div key={interrupt.ns?.join('')} className="whitespace-pre-wrap">
      <EnsureAPIAccess
        mode="popup"
        interrupt={interrupt.value}
        onFinish={onFinish}
        connectWidget={{
          title: 'Authorization Required.',
          description: interrupt.value.message,
          action: { label: 'Authorize' },
        }}
      />
    </div>
  );
}
```
Now, add a close page for the authorization pop-up. Create a new file, `src/app/close/page.tsx`, and add the following code:
```tsx
// src/app/close/page.tsx
'use client';

import { useEffect, useState, useCallback } from 'react';

import { Button } from '@/components/ui/button';

export default function PopupClosePage() {
  const [isClosing, setIsClosing] = useState(true);

  const handleClose = useCallback(() => {
    if (typeof window !== 'undefined') {
      try {
        window.close();
      } catch (err) {
        console.error(err);
        setIsClosing(false);
      }
    }
  }, []);

  useEffect(() => {
    // Attempt to close the window on load
    handleClose();
  }, [handleClose]);

  return isClosing ? (
    <></>
  ) : (
    <div className="flex items-center justify-center min-h-screen bg-gray-100">
      <div className="text-center">
        <p className="mb-4 text-lg">You can now close this window.</p>
        <Button onClick={handleClose}>Close</Button>
      </div>
    </div>
  );
}
```
Update the UI to handle the chat stream
So far, we have been streaming the chat messages using the Vercel AI SDK. Now, we will use the LangGraph SDK. This means the UI needs to be updated to handle the new chat stream structure.
Update the `src/components/ChatWindow.tsx` file to include the `FederatedConnectionInterruptHandler` component and handle the new chat stream:
```tsx
//...
// remove: import { type Message } from 'ai';
// remove: import { useChat } from '@ai-sdk/react';
import { useQueryState } from 'nuqs';
import { useStream } from '@langchain/langgraph-sdk/react';
import { type Message } from '@langchain/langgraph-sdk';

import { FederatedConnectionInterruptHandler } from '@/components/auth0-ai/FederatedConnections/FederatedConnectionInterruptHandler';

//... existing code

export function ChatWindow(props: {
  endpoint: string;
  emptyStateComponent: ReactNode;
  placeholder?: string;
  emoji?: string;
}) {
  const [threadId, setThreadId] = useQueryState('threadId');
  const [input, setInput] = useState('');

  const chat = useStream({
    apiUrl: props.endpoint,
    assistantId: 'agent',
    threadId,
    onThreadId: setThreadId,
    onError: (e: any) => {
      console.error('Error: ', e);
      toast.error(`Error while processing your request`, { description: e.message });
    },
  });

  function isChatLoading(): boolean {
    return chat.isLoading;
  }

  async function sendMessage(e: FormEvent<HTMLFormElement>) {
    e.preventDefault();
    if (isChatLoading()) return;

    chat.submit(
      { messages: [{ type: 'human', content: input }] },
      {
        optimisticValues: (prev) => ({
          messages: [...((prev?.messages as []) ?? []), { type: 'human', content: input, id: 'temp' }],
        }),
      },
    );
    setInput('');
  }

  return (
    <StickToBottom>
      <StickyToBottomContent
        className="absolute inset-0"
        contentClassName="py-8 px-2"
        content={
          chat.messages.length === 0 ? (
            <div>{props.emptyStateComponent}</div>
          ) : (
            <>
              <ChatMessages
                aiEmoji={props.emoji}
                messages={chat.messages}
                emptyStateComponent={props.emptyStateComponent}
              />
              <div className="flex flex-col max-w-[768px] mx-auto pb-12 w-full">
                <FederatedConnectionInterruptHandler interrupt={chat.interrupt} onFinish={() => chat.submit(null)} />
              </div>
            </>
          )
        }
        footer={
          <div className="sticky bottom-8 px-2">
            <ScrollToBottom className="absolute bottom-full left-1/2 -translate-x-1/2 mb-4" />
            <ChatInput
              value={input}
              onChange={(e) => setInput(e.target.value)}
              onSubmit={sendMessage}
              loading={isChatLoading()}
              placeholder={props.placeholder ?? 'What can I help you with?'}
            ></ChatInput>
          </div>
        }
      ></StickyToBottomContent>
    </StickToBottom>
  );
}
```
Next, update the `src/components/ChatMessageBubble.tsx` file to work with the new chat stream:
```tsx
// src/components/ChatMessageBubble.tsx
import { Message } from '@langchain/langgraph-sdk';

import { cn } from '@/utils/cn';
import { MemoizedMarkdown } from './MemoizedMarkdown';

export function ChatMessageBubble(props: { message: Message; aiEmoji?: string }) {
  return ['human', 'ai'].includes(props.message.type) && props.message.content.length > 0 ? (
    <div
      className={cn(
        `rounded-[24px] max-w-[80%] mb-8 flex`,
        props.message.type === 'human' ? 'bg-secondary text-secondary-foreground px-4 py-2' : null,
        props.message.type === 'human' ? 'ml-auto' : 'mr-auto',
      )}
    >
      {props.message.type === 'ai' && (
        <div className="mr-4 mt-1 border bg-secondary -mt-2 rounded-full w-10 h-10 flex-shrink-0 flex items-center justify-center">
          {props.aiEmoji}
        </div>
      )}
      <div className="chat-message-bubble whitespace-pre-wrap flex flex-col prose dark:prose-invert max-w-none">
        <MemoizedMarkdown content={props.message.content as string} id={props.message.id ?? ''} />
      </div>
    </div>
  ) : null;
}
```
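One thing to watch for: in the LangGraph SDK types, `message.content` can be a plain string or an array of content blocks, so the `as string` cast above only handles the common case. If you ever receive array content, a small helper like the following (hypothetical, not part of the repo) can flatten it before passing it to `MemoizedMarkdown`:

```typescript
import type { Message } from '@langchain/langgraph-sdk';

// Flattens a message's content into plain text for markdown rendering.
// Non-text blocks (e.g. images) are ignored here.
export function contentToText(message: Message): string {
  const { content } = message;
  if (typeof content === 'string') return content;
  return content
    .map((block: any) => (typeof block === 'string' ? block : (block?.text ?? '')))
    .join('');
}
```

You would then render `contentToText(props.message)` instead of casting the content to a string.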
Other UI updates
Next, update `src/app/page.tsx` to pass the full URL:
```tsx
// src/app/page.tsx
//... existing code

export default async function Home() {
  //... existing code
  return (
    <ChatWindow
      endpoint={`${process.env.APP_BASE_URL}/api/chat`}
      {/*... existing code */}
    />
  );
}
```
Update `src/app/layout.tsx` to use NuQS for the thread ID:
```tsx
// src/app/layout.tsx
//... existing code
import { NuqsAdapter } from 'nuqs/adapters/next/app';

export default async function RootLayout({ children }: { children: React.ReactNode }) {
  const session = await auth0.getSession();

  return (
    <html lang="en" suppressHydrationWarning>
      {/* existing code */}
      <body className={publicSans.className}>
        <NuqsAdapter>
          <div className="bg-secondary grid grid-rows-[auto,1fr] h-[100dvh]">
            {/* existing code */}
          </div>
          <Toaster />
        </NuqsAdapter>
      </body>
    </html>
  );
}
```
Test the application
Now, start the development server:
bun all:dev # or npm run all:dev
Open http://localhost:3000 with your browser and ask the agent something. You should see a streamed response.
Add tools to the assistant
Now that the migrated application is running, let's add more tools to the assistant.
Add Google Calendar tools
If you followed the previous post, you should have set up the application to use Gmail tools, meaning you would have configured the Google Connection for Auth0. If not, please follow the Google Sign-in and Authorization guide from Auth0 to configure this.
First, we need to get an access token for the Google Calendar API. We can get this using the Auth0 Token Vault feature.
Update the `src/lib/auth0-ai.ts` file with the required scopes for Google Calendar.
```typescript
// src/lib/auth0-ai.ts
// ... existing code

export const withGoogleConnection = auth0AI.withTokenForConnection({
  connection: 'google-oauth2',
  scopes: [
    'https://www.googleapis.com/auth/gmail.readonly',
    'https://www.googleapis.com/auth/gmail.compose',
    'https://www.googleapis.com/auth/calendar.events',
  ],
});
```
Now, import the `GoogleCalendarCreateTool` and `GoogleCalendarViewTool` tools from the LangChain community tools and update the `src/lib/agent.ts` file with the following code.
```typescript
// src/lib/agent.ts
import {
  GoogleCalendarCreateTool,
  GoogleCalendarViewTool,
} from '@langchain/community/tools/google_calendar';

// ... existing code

const googleCalendarParams = {
  credentials: { accessToken: getAccessToken, calendarId: 'primary' },
  model: llm,
};

const tools = [
  // ... existing tools
  withGoogleConnection(new GoogleCalendarCreateTool(googleCalendarParams)),
  withGoogleConnection(new GoogleCalendarViewTool(googleCalendarParams)),
];

// ... existing code
```
We are all set! Now, try asking the assistant to check for events on your calendar, for example, "What is my schedule today?". You should see the response in the UI.
Add user info tool
Finally, let's see how you can use a tool that calls your own APIs (APIs that you can authenticate using Auth0 credentials). This can be any API you've configured in Auth0, including the management APIs provided by Auth0. For demo purposes, we will use Auth0's `/userinfo` endpoint, as it's already available in your Auth0 tenant.
First, you have to pass the access token from the user's session to the agent. To do this, create a helper function to get the access token from the session. Add the following function to `src/lib/auth0.ts`.
```typescript
// src/lib/auth0.ts
//...

// Get the Access token from Auth0 session
export const getAccessToken = async () => {
  const session = await auth0.getSession();
  return session?.tokenSet?.accessToken;
};
```
Now, update the `src/app/api/chat/[..._path]/route.ts` file to pass the access token to the agent.
```typescript
// src/app/api/chat/[..._path]/route.ts
//...
import { getRefreshToken, getAccessToken } from '@/lib/auth0';

const getCredentials = async () => ({
  refreshToken: await getRefreshToken(),
  accessToken: await getAccessToken(),
});
//...
```
Next, create a new file, `src/lib/tools/user-info.ts`, and add the following code. The tool returns the profile of the currently logged-in user by calling the `/userinfo` endpoint.
```typescript
// src/lib/tools/user-info.ts
import { tool } from '@langchain/core/tools';

export const getUserInfoTool = tool(
  async (_input, config?) => {
    // Access credentials from config
    const accessToken = config?.configurable?._credentials?.accessToken;

    if (!accessToken) {
      return 'There is no user logged in.';
    }

    const response = await fetch(`https://${process.env.AUTH0_DOMAIN}/userinfo`, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
      },
    });

    if (response.ok) {
      return { result: await response.json() };
    }

    return "I couldn't verify your identity";
  },
  {
    name: 'get_user_info',
    description: 'Get information about the current logged in user.',
  },
);
```
Update the `src/lib/agent.ts` file to add the tool to the agent.
```typescript
// src/lib/agent.ts
//...
import { getUserInfoTool } from './tools/user-info';

//... existing code

const tools = [
  //... existing tools
  getUserInfoTool,
];

//... existing code
```
Now, you can ask questions like "Who am I?" to trigger the tool call and test whether it successfully retrieves information about the logged-in user.
Learn more about AI agents and Auth for GenAI
You have successfully built an AI personal assistant that can search the web, search your emails, draft emails, and manage your calendar. In the next chapter, we will add more tools like Google Drive, Slack, and GitHub to the assistant.
This tutorial is part of a series of posts on tool-calling agents with Auth for GenAI. We learned how to add tools to an AI agent and how to secure them using Auth0.
Sign up for the Auth for GenAI Developer Preview program.
Before you go, we have some great news to share: we are working on more content and sample apps in collaboration with amazing GenAI frameworks like LlamaIndex, LangChain, CrewAI, Vercel AI, and GenKit.
About the author

Deepu K Sasidharan
Staff Developer Advocate