
Call Your APIs On User's Behalf

Let your AI agent call your APIs on behalf of the authenticated user using access tokens securely issued by Auth0.

By the end of this quickstart, you should have an application integrated with Auth0 and the Vercel AI SDK that can:

  • Get an Auth0 access token.
  • Use the Auth0 access token to make a tool call to your API endpoint, in this case, Auth0's /userinfo endpoint.
  • Return the data to the user via an AI agent.
Note: We value your feedback! To ask questions, report issues, or request new frameworks and providers, connect with us on GitHub.

Pick Your Tech Stack

This page walks through the same integration with two stacks: first with the Vercel AI SDK in a Next.js application, then with LangChain and FastAPI in Python.

Prerequisites

Before getting started, make sure you have completed the following steps:


Create an Auth0 Account and a Dev Tenant

To continue with this quickstart, you need an Auth0 account and a Developer Tenant.


Create Application

Create and configure a Regular Web Application to use with this quickstart.
To learn more about Auth0 applications, read Applications.

Complete User Authentication quickstart

To complete this quickstart, you need to use the same application you built in the User Authentication quickstart.

OpenAI Platform

Set up an OpenAI account and get an API key to use with this quickstart.

Install dependencies

In the root directory of your project, install the following dependencies:

npm install ai @ai-sdk/openai @ai-sdk/react zod

Define a tool to call your API

In this step, you’ll create a Vercel AI tool to make the first-party API call. The tool retrieves the user’s access token from the Auth0 session and uses it to call the API.

In this example, the tool calls Auth0's /userinfo endpoint with the access token issued during login and returns the profile of the currently logged-in user.

./src/lib/tools/user-info.ts
import { tool } from "ai";
import { z } from "zod";
import { auth0 } from "../auth0";

export const getUserInfoTool = tool({
  description: "Get information about the current logged in user.",
  parameters: z.object({}),
  execute: async () => {
    // Read the Auth0 session for the currently logged-in user
    const session = await auth0.getSession();
    if (!session) {
      return "There is no user logged in.";
    }

    // Call Auth0's /userinfo endpoint with the user's access token
    const response = await fetch(
      `https://${process.env.AUTH0_DOMAIN}/userinfo`,
      {
        headers: {
          Authorization: `Bearer ${session.tokenSet.accessToken}`,
        },
      },
    );

    if (response.ok) {
      return { result: await response.json() };
    }

    return "I couldn't verify your identity";
  },
});

Create the AI agent API route

The AI agent processes and runs the user’s request through the AI pipeline, including the tool call. Vercel AI simplifies this task with the streamText() method:

./src/app/api/chat/route.ts
import { getUserInfoTool } from "@/lib/tools/user-info";
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  // streamText is used to run the request
  const result = streamText({
    model: openai("gpt-4o-mini"),
    // Allow up to two steps: one to call the tool, one to answer with its result
    maxSteps: 2,
    tools: {
      userInfo: getUserInfoTool,
    },
    messages,
    system: "You are an AI agent for tool calling with Auth0.",
  });

  return result.toDataStreamResponse();
}

You need an API key from OpenAI or another provider to use an LLM. Add that API key to your .env.local file:

.env.local
# .env.local
# ...

# You can use any provider of your choice supported by Vercel AI
OPENAI_API_KEY="YOUR_API_KEY"

If you use another provider for your LLM, adjust the variable name in .env.local accordingly.
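For example, a rough sketch of swapping OpenAI for Anthropic in the route handler is shown below. This is not part of the quickstart: it assumes you install the @ai-sdk/anthropic package and set ANTHROPIC_API_KEY in .env.local, and the model name is illustrative.

// ./src/app/api/chat/route.ts (sketch: swapping OpenAI for Anthropic)
import { getUserInfoTool } from "@/lib/tools/user-info";
import { anthropic } from "@ai-sdk/anthropic"; // assumes: npm install @ai-sdk/anthropic
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    // Only the model changes; ANTHROPIC_API_KEY is read from the environment
    model: anthropic("claude-3-5-sonnet-latest"), // illustrative model ID
    maxSteps: 2,
    tools: {
      userInfo: getUserInfoTool,
    },
    messages,
    system: "You are an AI agent for tool calling with Auth0.",
  });

  return result.toDataStreamResponse();
}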

Build the Chat UI

Before testing the application, update the /src/app/chat/page.tsx file with the Chat UI.

The following code sample creates a minimalist and functional chat page to get you started quickly:

./src/app/chat/page.tsx
"use client";

import React from "react";
import { useChat } from "@ai-sdk/react";

export default function ChatPage() {
const { messages, input, handleInputChange, handleSubmit } = useChat({});
return (
<main className="flex flex-col items-center justify-center h-screen p-10">
<div className="flex flex-col gap-2">
{messages.map((message) => (
<div key={message.id}>
{message.role === "user" ? "User: " : "AI: "}
{message.content}
</div>
))}
</div>

<form onSubmit={handleSubmit}>
<input
name="prompt"
value={input}
className="w-full border"
onChange={handleInputChange}
/>
<button
className="border-zinc-800 bg-zinc-800 border-2 rounded-md p-2 m-2 text-zinc-50 hover:bg-black"
type="submit"
>
Send
</button>
</form>
</main>
);
}

Test your application

To test the application, run npm run dev, navigate to http://localhost:3000/chat, and interact with the AI agent. You can ask questions like “who am I?” to trigger the tool call and test whether it successfully retrieves information about the logged-in user.

User: who am I?
AI: It seems that there is no user currently logged in. If you need assistance with anything else, feel free to ask!

User: who am I?
AI: You are Juan Martinez. Here are your details: - .........

That’s it! You’ve successfully integrated first-party tool-calling into your project.

Explore the example app on GitHub.

Prerequisites (LangChain + FastAPI)

The remaining steps cover the LangChain and FastAPI stack. Before getting started, make sure you have completed the following steps:


Create an Auth0 Account and a Dev Tenant

To continue with this quickstart, you need an Auth0 account and a Developer Tenant.


Create Application

Create and configure a Regular Web Application to use with this quickstart.
To learn more about Auth0 applications, read Applications.

OpenAI Platform

Set up an OpenAI account and get an API key to use with this quickstart.

Install dependencies

In the root directory of your project, install the following dependencies:

  • fastapi: FastAPI web framework for building APIs with Python.
  • auth0-fastapi-api: Auth0's FastAPI API SDK to secure APIs using bearer tokens from Auth0.
  • langchain: LangChain's base Python library.
  • langchain-core: LangChain's core abstractions library.
  • langchain-openai: LangChain integrations for OpenAI.
  • uvicorn, python-dotenv, and httpx: Other Python utility libraries.
pip3 install fastapi auth0-fastapi-api langchain langchain-core langchain-openai uvicorn python-dotenv httpx

Set up your environment

In the root directory of your project, create the .env.local file and add the following variables. If you created an application with this quickstart, Auth0 automatically populates your environment variables for you:

Note: Your application’s client secret is masked for you. To get the client secret value, click the copy button on the code sample.

.env.local
# .env.local
AUTH0_SECRET='use [openssl rand -hex 32] to generate a 32-byte value'
APP_BASE_URL='http://localhost:3000'
AUTH0_DOMAIN='<your-auth0-domain>'
AUTH0_CLIENT_ID='<your-auth0-application-client-id>'
AUTH0_CLIENT_SECRET='<your-auth0-application-client-secret>'

To initialize your local Python environment, run these commands in the terminal:

python3 -m venv env
source env/bin/activate

Create an API in Auth0

In the Auth0 Dashboard, create an API. Set the Identifier as agent0-api.

Once you've successfully created the API, enable the corresponding AUTH0_CLIENT_ID within the Machine to Machine Applications tab. This enables that client to request access tokens for this API.

Update the .env.local file to set the AUTH0_API_AUDIENCE to agent0-api, or the identifier for the API you created.

.env.local
# .env.local
# ...

AUTH0_API_AUDIENCE=agent0-api

Next, create a file called src/app.py and add the following code to import dependencies and set up the Auth0 configuration:

./src/app.py
import json

from fastapi import Depends, FastAPI, HTTPException, Request, Response
from fastapi.middleware.cors import CORSMiddleware
from fastapi_plugin import Auth0FastAPI

from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool

from dotenv import load_dotenv
from typing import Any, Dict, List

import httpx
import os
import uvicorn

load_dotenv(dotenv_path=".env.local")
app = FastAPI()

# Set up the CORS middleware
# AGENT0_WEB_HOST should be set to the origin of your front-end application (add it to .env.local)
app.add_middleware(
    CORSMiddleware,
    allow_origins=[os.getenv("AGENT0_WEB_HOST")],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Auth0 configuration
auth0 = Auth0FastAPI(
    domain=os.getenv("AUTH0_DOMAIN"),
    audience=os.getenv("AUTH0_API_AUDIENCE"),
)

Define a tool to call your API

In this step, you’ll create a tool to make the first-party API call. The tool uses the access token passed in with the request to call the API.

Add the following code to src/app.py to set up Agent0 with a token-protected API tool call to Auth0's /userinfo endpoint:

./src/app.py
# ...

# OpenAI model
# AGENT0_OPENAI_KEY holds your OpenAI API key; add it to .env.local
model = ChatOpenAI(
    model="gpt-4o",
    temperature=0,
    api_key=os.getenv("AGENT0_OPENAI_KEY")
)

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an AI agent for demonstrating tool calling with Auth0."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# Function to create a tool dynamically with the provided token
def get_user_info_tool(token: str):
    @tool
    async def get_user_info() -> Dict[str, Any]:
        """Fetch user info from Auth0 using the provided token."""
        auth0_domain = os.getenv("AUTH0_DOMAIN")
        if not auth0_domain:
            return {"error": "Auth0 domain is not defined"}

        url = f"https://{auth0_domain}/userinfo"
        headers = {"Authorization": f"Bearer {token}"}

        async with httpx.AsyncClient() as client:
            response = await client.get(url, headers=headers)
            if response.status_code != 200:
                return {"error": "Failed to fetch user info"}
            return response.json()

    return get_user_info

# Agent execution function
async def agent0(messages: List[Dict], token: str) -> Dict[str, Any]:
    get_user_info = get_user_info_tool(token)  # Inject token dynamically

    tools = [get_user_info]

    # Create the agent using the factory function
    agent = create_tool_calling_agent(model, tools, prompt)

    # Create the agent executor
    agent_executor = AgentExecutor(
        agent=agent,
        tools=tools,
    )

    response = await agent_executor.ainvoke({"input": messages})

    return {"content": response["output"]}

Run API server and handle AI agent requests

Add the following code to src/app.py to handle requests to the /agent endpoint:

./src/app.py
# ...

@app.post("/agent")
async def agent_api(
    request: Request,
    response: Response,
    claims: dict = Depends(auth0.require_auth())
):
    data = await request.body()
    messages = json.loads(data).get('messages') if data else ""

    if not messages or not isinstance(messages, list):
        detail = "Messages are required and should be an array"
        raise HTTPException(status_code=400, detail=detail)

    try:
        # Retrieve the access token from the authorization header,
        # which arrives in the format "Bearer <token>"
        auth_header = request.headers.get("authorization", "")
        token = auth_header.removeprefix("Bearer ")
        result = await agent0(messages, token)
        return {"response": result["content"]}
    except Exception as error:
        print(error)
        detail = "Failed to get response from agent"
        raise HTTPException(status_code=500, detail=detail)

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=3000)

Run your API server

Run this command to start the API server:

python3 src/app.py

Test your application

Create a test script that calls the API using an access token issued by Auth0 through the client credentials grant. This simulates a front-end application, which can follow the same pattern to access your backend APIs.

1. Create a test script

Create a file called api_test.py and add the following code to it:

./api_test.py
# Note: this script uses the requests package (pip3 install requests)
from dotenv import load_dotenv
import os
import requests

load_dotenv(dotenv_path=".env.local")

def get_access_token():
    url = f"https://{os.getenv('AUTH0_DOMAIN')}/oauth/token"
    headers = {"content-type": "application/x-www-form-urlencoded"}
    data = {
        "grant_type": "client_credentials",
        "client_id": os.getenv("AUTH0_CLIENT_ID"),
        "client_secret": os.getenv("AUTH0_CLIENT_SECRET"),
        "audience": os.getenv("AUTH0_API_AUDIENCE"),
    }
    response = requests.post(url, headers=headers, data=data)
    response.raise_for_status()  # Raise an error for bad responses
    return response.json().get("access_token")

def call_agent_api(access_token):
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {access_token}"
    }
    payload = {
        "messages": [
            {"content": "Hello, tell me a joke.", "role": "Human"}
        ]
    }
    response = requests.post((os.getenv("APP_BASE_URL") + "/agent"), json=payload, headers=headers)
    response.raise_for_status()
    return response.json()

def main():
    try:
        token = get_access_token()
        print("Access token obtained successfully.")
        response = call_agent_api(token)
        print("Agent API Response:", response.get("response"))
    except requests.exceptions.RequestException as e:
        print("Error:", e)

if __name__ == "__main__":
    main()

2. Run your test script

To run your test script, enter the following command in a second terminal instance:

python3 api_test.py

When you run the test script, the CLI app fetches an Auth0 token with agent0-api as the audience and uses it to call the /agent endpoint. If successful, it prints the AI agent's response.
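If everything is configured correctly, the script prints output along these lines (the agent's reply will vary):

Access token obtained successfully.
Agent API Response: Sure! Why did the scarecrow win an award? Because he was outstanding in his field!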

Next Steps