Introduction
Ready to see how an AI assistant can transform your workflow?
In this post, I’ll walk you through how to create and use AI-powered assistants that leverage Code Interpreter, Retrieval, Function Calling, and more using the new OpenAI Assistants API. By the end, you’ll be able to spin up a real conversation thread, make your assistant run code, and handle advanced AI capabilities in your own applications.
If you haven’t already, check out OpenAI’s deep-dive docs for a closer look at the full suite of features, including function calling and retrieval.
Step 1: Setting up the Environment
First things first: let’s make sure we have the OpenAI Node.js package installed. Run the following command in your project directory:
npm install openai
Also, ensure you have your OPENAI_API_KEY set up as an environment variable (e.g., in a .env file or via your deployment platform’s settings).
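If your runtime doesn’t inject environment variables for you, a package like dotenv can load them from that .env file. Here’s a minimal sketch, assuming you’ve installed dotenv (npm install dotenv); the key value mentioned is a placeholder:
// Hypothetical setup: importing 'dotenv/config' loads a local .env file
// into process.env before anything else runs.
import 'dotenv/config'

// .env would contain a single line such as: OPENAI_API_KEY=sk-your-key-here
console.log('Key loaded:', Boolean(process.env.OPENAI_API_KEY))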
Step 2: Creating an Assistant
Next, let’s create a simple assistant. Here, I’m enabling the Code Interpreter tool so the assistant can write and debug code. You could also explore other tools like Retrieval or Function Calling for structured outputs.
import OpenAI from 'openai'
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
})
async function createAssistant() {
try {
const assistant = await openai.beta.assistants.create({
name: "Code Helper",
instructions: "You are a programming assistant. Help users write and debug code.",
tools: [{ type: "code_interpreter" }],
model: "gpt-4-1106-preview"
})
console.log('Assistant created:', assistant.id)
return assistant
} catch (error) {
console.error('Error creating assistant:', error)
throw error
}
}
You can include other tools, like retrieval or function calling, by adding them to the tools array. Check the official docs for details.
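To make that concrete, here’s a rough sketch of a tools array that enables retrieval alongside a custom function, using the same beta API as above. The get_weather function is purely illustrative, not part of the API:
// Illustrative tools array: "retrieval" lets the assistant search files you
// attach, while "function" declares a callable function via a JSON Schema.
// get_weather is a made-up example; define whatever functions your app needs.
const tools = [
  { type: "code_interpreter" },
  { type: "retrieval" },
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name, e.g. Paris" },
        },
        required: ["city"],
      },
    },
  },
];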
Step 3: Creating a Thread
OpenAI threads keep track of conversation state. Every time we have a conversation with our assistant, we should create (or reuse) a thread to hold messages and context. Here’s how to create a thread and add a user message:
async function startConversation(question: string) {
try {
// Create a new thread
const thread = await openai.beta.threads.create();
// Add a message to the thread
await openai.beta.threads.messages.create(thread.id, {
role: "user",
content: question,
});
return thread;
} catch (error) {
console.error('Error starting conversation:', error);
throw error;
}
}
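For a multi-turn conversation, you don’t need a new thread each time; you can keep the thread ID around and append follow-up messages to it. A minimal sketch, where storedThreadId is just a placeholder for wherever you persist that ID:
// Reuse an existing thread for a follow-up question.
// storedThreadId is a hypothetical value saved from a previous startConversation call.
async function continueConversation(storedThreadId: string, followUp: string) {
  const thread = await openai.beta.threads.retrieve(storedThreadId);
  await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: followUp,
  });
  return thread;
}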
Step 4: Running the Assistant
Once our assistant and thread are ready, we need to create a Run. A Run executes the assistant’s logic for the latest messages in the thread. We’ll then poll until it’s done and retrieve the response:
async function getAssistantResponse(threadId: string, assistantId: string) {
try {
// Create a run for the given assistant
const run = await openai.beta.threads.runs.create(threadId, {
assistant_id: assistantId,
});
// Poll for completion
let runStatus = await openai.beta.threads.runs.retrieve(threadId, run.id);
while (runStatus.status === "queued" || runStatus.status === "in_progress") {
await new Promise(resolve => setTimeout(resolve, 1000));
runStatus = await openai.beta.threads.runs.retrieve(threadId, run.id);
}
// Get the messages from the thread
const messages = await openai.beta.threads.messages.list(threadId);
// Return the latest assistant message (the list comes back newest-first by default)
const lastMessage = messages.data.find(message => message.role === "assistant");
return lastMessage?.content[0]?.text?.value || "No response received";
} catch (error) {
console.error('Error getting assistant response:', error);
throw error;
}
}
Here, I’m looking up the first message with the assistant role, which is the latest reply since messages come back newest-first by default. Notice that each message can have multiple content parts (like text, code, or function calls). We’re grabbing the first text block for simplicity.
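One caveat: the polling loop exits as soon as the run leaves queued or in_progress, which also covers terminal states like failed or expired, and requires_action when function calling is involved. Below is a small, hedged sketch of checking the final status before trusting the messages list; it isn’t from the official SDK, just a helper you could add:
// Rough sketch: classify a finished run before reading the thread's messages.
// Pass it the object returned by the last runs.retrieve call in the loop above.
function checkRunOutcome(run: { status: string; last_error?: { message?: string } | null }) {
  if (run.status === "completed") return; // safe to read the assistant's reply
  if (run.status === "requires_action") {
    // the run is waiting for tool outputs (function calling); see Best Practices below
    throw new Error("Run requires tool outputs before it can complete");
  }
  // remaining terminal states: "failed", "cancelled", "expired"
  throw new Error(`Run ended with status ${run.status}: ${run.last_error?.message ?? "unknown error"}`);
}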
Step 5: Putting It All Together
Let’s combine these snippets into a single script. This script:
- Creates an Assistant
- Creates a Thread
- Adds a User Message
- Runs the Assistant
- Retrieves the Assistant’s Reply
- Prints the result
- (Optional) Cleans up by deleting the Assistant
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
async function main() {
try {
// 1. Create an assistant
const assistant = await openai.beta.assistants.create({
name: "Code Helper",
instructions: "You are a programming assistant. Help users write and debug code.",
tools: [{ type: "code_interpreter" }],
model: "gpt-4-1106-preview"
});
// 2. Create a thread
const thread = await openai.beta.threads.create();
// 3. Add a user message
await openai.beta.threads.messages.create(thread.id, {
role: "user",
content: "Can you help me write a function to calculate Fibonacci numbers?"
});
// 4. Run the assistant
const run = await openai.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id
});
// 5. Poll for completion
let runStatus = await openai.beta.threads.runs.retrieve(thread.id, run.id);
while (runStatus.status === "queued" || runStatus.status === "in_progress") {
await new Promise(resolve => setTimeout(resolve, 1000));
runStatus = await openai.beta.threads.runs.retrieve(thread.id, run.id);
}
// 6. Retrieve messages & get the assistant response
const messages = await openai.beta.threads.messages.list(thread.id);
// messages come back newest-first, so find() returns the latest assistant reply
const lastMessage = messages.data.find(message => message.role === "assistant");
console.log('Assistant response:', lastMessage?.content[0]?.text?.value);
// 7. Clean up (optional)
await openai.beta.assistants.del(assistant.id);
} catch (error) {
console.error('Error:', error);
}
}
main();
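To try it out, save the script and run it with your TypeScript runner of choice, for example (the file name is arbitrary and tsx is just one option):
npx tsx assistant-demo.ts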
Step 6: Creating a Next.js API Route
If you’re building a Next.js application, you can wrap all of this logic in an API route. Here’s an example using a POST request where I send a user’s question, then return the assistant’s response as JSON.
// app/api/assistant/route.ts
import { NextResponse } from 'next/server';
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
export async function POST(req: Request) {
try {
const { question } = await req.json();
// Create an assistant (or use a stored assistant ID)
const assistant = await openai.beta.assistants.create({
name: "Code Helper",
instructions: "You are a programming assistant. Help users write and debug code.",
tools: [{ type: "code_interpreter" }],
model: "gpt-4-1106-preview"
});
// Create a thread and add the user's question
const thread = await openai.beta.threads.create();
await openai.beta.threads.messages.create(thread.id, {
role: "user",
content: question
});
// Run the assistant
const run = await openai.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id
});
// Poll until it's done
let runStatus = await openai.beta.threads.runs.retrieve(thread.id, run.id);
while (runStatus.status === "queued" || runStatus.status === "in_progress") {
await new Promise(resolve => setTimeout(resolve, 1000));
runStatus = await openai.beta.threads.runs.retrieve(thread.id, run.id);
}
// Retrieve the new messages
const messages = await openai.beta.threads.messages.list(thread.id);
// messages come back newest-first, so find() returns the latest assistant reply
const lastMessage = messages.data.find(message => message.role === "assistant");
// Clean up resources
await openai.beta.assistants.del(assistant.id);
return NextResponse.json({
response: lastMessage?.content[0]?.text?.value || "No response received"
});
} catch (error) {
console.error('Error:', error);
return NextResponse.json(
{ error: 'Failed to get assistant response' },
{ status: 500 }
);
}
}
This pattern keeps your API calls server-side, protecting your OPENAI_API_KEY from exposure.
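One refinement worth considering: instead of creating and deleting an assistant on every request, you can create it once (for example in a one-off script) and reuse its ID. A sketch of that idea, where OPENAI_ASSISTANT_ID is an environment variable you’d define yourself:
// Hypothetical alternative to the per-request create/del calls above:
// read a pre-created assistant's ID from your own environment variable.
const assistantId = process.env.OPENAI_ASSISTANT_ID;
if (!assistantId) {
  throw new Error("OPENAI_ASSISTANT_ID is not set");
}
// ...then pass { assistant_id: assistantId } when creating the run
// and skip the assistants.create and assistants.del calls entirely.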
Step 7: Creating a React Component
Finally, here’s a client-side React component for handling user input, sending it to our API route, and displaying the assistant’s response.
'use client'
import { useState } from 'react';
import { Button } from '@/components/ui/button';
import { Textarea } from '@/components/ui/textarea';
import { Card } from '@/components/ui/card';
export default function AssistantChat() {
const [question, setQuestion] = useState('');
const [response, setResponse] = useState('');
const [loading, setLoading] = useState(false);
async function handleSubmit(e: React.FormEvent) {
e.preventDefault();
setLoading(true);
try {
const res = await fetch('/api/assistant', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ question }),
});
const data = await res.json();
if (!res.ok) throw new Error(data.error);
setResponse(data.response);
} catch (error) {
console.error('Error:', error);
setResponse('Failed to get response from assistant');
} finally {
setLoading(false);
}
}
return (
<div className="content-wrapper">
<form onSubmit={handleSubmit} className="space-y-4">
<Textarea
value={question}
onChange={(e) => setQuestion(e.target.value)}
placeholder="Ask me about programming..."
className="min-h-[100px]"
/>
<Button type="submit" disabled={loading}>
{loading ? 'Getting Response...' : 'Ask Assistant'}
</Button>
</form>
{response && (
<Card className="p-4 mt-4">
<pre className="whitespace-pre-wrap">{response}</pre>
</Card>
)}
</div>
);
}
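To render the component, drop it into any page. For example, assuming the component file lives at components/AssistantChat.tsx (adjust the import path to match your project):
// app/page.tsx (illustrative placement)
import AssistantChat from '@/components/AssistantChat';

export default function Home() {
  return (
    <main className="p-8">
      <AssistantChat />
    </main>
  );
}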
Best Practices & Gotchas
- Keep Secrets Secret: Always store your OPENAI_API_KEY server-side or in an environment variable.
- Clean Up Assistants: If you’re creating ephemeral assistants, remember to delete them after use.
- Handle Errors & Retries: In production, implement robust error handling. The “poll while queued” approach works for short waits, but for heavier loads, consider a background job or exponential backoff rather than a tight one-second polling loop.
- Function Calling: When using function calling, the assistant might return structured data that you’ll need to parse. Handle JSON or other outputs accordingly, and see the official docs for examples; a rough sketch of submitting tool outputs follows this list.
- Data Privacy: OpenAI may retain API requests for a limited time (for example, for abuse monitoring). If you’re working with sensitive data, review OpenAI’s data usage policies.
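Here’s that rough sketch of answering a function call. It assumes a run whose status is requires_action and uses a made-up handleToolCall dispatcher standing in for your own logic; treat it as a starting point rather than a drop-in implementation:
// Hypothetical dispatcher for your own functions; replace with real logic.
async function handleToolCall(name: string, args: Record<string, unknown>): Promise<string> {
  if (name === "get_weather") return JSON.stringify({ city: args.city, forecast: "sunny" }); // dummy result
  return JSON.stringify({ error: `Unknown function: ${name}` });
}

// Sketch: collect outputs for every requested tool call and submit them so the run can continue.
// `run` is the object returned by runs.retrieve when its status is "requires_action".
async function submitToolResults(threadId: string, run: any) {
  const toolCalls = run.required_action?.submit_tool_outputs?.tool_calls ?? [];
  const tool_outputs = await Promise.all(
    toolCalls.map(async (call: any) => ({
      tool_call_id: call.id,
      output: await handleToolCall(call.function.name, JSON.parse(call.function.arguments)),
    }))
  );
  return openai.beta.threads.runs.submitToolOutputs(threadId, run.id, { tool_outputs });
}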
Conclusion
That’s it! By following these steps, you can create conversational assistants that maintain state through threads, utilize powerful tools like the Code Interpreter, and incorporate advanced features like Function Calling.
I hope this guide helps you get started with OpenAI’s new Assistants API. If you run into any issues, feel free to reach out or post a comment. Let’s build some incredible AI-powered applications together!