
Check out CopilotKit on GitHub at https://go.copilotkit.ai/copilotkit to view the demo and more featured in this video. While you're there, star their repository and support open source.

# Building a Full Stack AI App with LangGraph and CopilotKit: A Step-by-Step Guide

In this comprehensive video, I demonstrate how to create a full stack AI application using CopilotKit and LangGraph's Python cognitive architecture. You'll learn to build a Next.js application that functions as a research assistant, allowing you to input research questions and generate drafts using models from OpenAI, Anthropic, or Google. I'll guide you through the setup process, including the necessary environment variables, the application structure, and the integration of LangGraph for the backend agent architecture. By the end, you'll understand how to deploy your app and make use of LangGraph's advanced features, enabling you to craft a personalized research assistant application.

00:00 Introduction to Full Stack AI Application
01:19 Setting Up the Technical Environment
01:51 Initializing the Project Repository
02:32 Configuring Environment Variables
03:07 Building the UI Layer
03:53 Creating API Routes and Handlers
05:12 Implementing the Main Components
06:16 Managing State and Context
07:25 Finalizing the UI
16:34 Utility Functions and Model Selection
17:20 Running the Application
17:33 Introduction to Agent Architecture
19:40 Setting Up LangGraph Workflow
25:37 Implementing Agent Nodes
28:38 Handling Resource Management
29:15 Downloading and Parsing Resources
31:17 Executing Search Queries
33:33 Conclusion and Future Applications
---
type: transcript
date: 2024-12-12
youtube_id: PjuOsM3W3G8
---

# Transcript: Build an AI Agent Web App with LangGraph & CopilotKit in 30 Minutes

In this video I'm going to show you how to build out a full stack AI application by leveraging CoAgents by CopilotKit, as well as a Python cognitive architecture built with LangGraph. By the end of the video you'll have this Next.js application: a research helper where you can put in your research question, ask questions within the Copilot pane on the right-hand side, and it will ultimately generate a draft for you. Within this you can leverage models from OpenAI, Anthropic, or Google. In this example I asked how Google's new Willow quantum computing chip will affect Bitcoin security. This is less about that particular question and more to show how you could have your own tool and build an example like this on top of it to generate a research draft. In terms of the technical aspects of the project, you'll also be able to add your own resources, and if you don't put in any resources, it reaches for sources through something called Tavily. Tavily is our search API: it's how we break up our query and how we get the most recent, up-to-date information that we can ultimately feed into the response within our application. In this example I'm going to show you how to set this up with an OpenAI API key. A couple of housekeeping things: this example uses Python 3.12 or higher for our LangGraph architecture on the back end, and for the UI layer of the application, since we're going to be using Next.js, just make sure you have a recent version of Node.js installed so you don't run into any issues during the installation steps.
The first thing I want to mention is that I'll put a link to the repo in the description of the video; everything you need to get started is in the README.md. The way this is broken out is that we have our UI layer, which is the Next.js application, and then we also have our agent architecture. These run independently of one another: you can deploy the UI basically anywhere you can deploy a Next.js application, and you can deploy the agent architecture wherever you can run Python. Alternatively, you can run it within LangGraph Studio, which I believe is still in beta, but if you have a paid account for something like LangSmith, you'll be able to try it. The first thing we're going to do is, within our UI layer, put our OpenAI key in a .env file; that's the only environment variable we need there, so just make sure you create a .env, put in OPENAI_API_KEY, and save that out. For our agent architecture we're going to set up a .env as well: we put in our Tavily API key as TAVILY_API_KEY, and for OpenAI it's OPENAI_API_KEY, and then you can save out that .env. Once we're within the UI portion of the application, you can go ahead and pnpm install everything; we have some dependencies from CopilotKit, some styling libraries, the OpenAI package, and a handful of others like the Tailwind dependencies.
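Since both layers fail in confusing ways when a key is missing, a quick sanity check can help. This is a hypothetical helper, not part of the repo; the variable names match the ones just described:

```python
import os

# Hypothetical helper (not part of the repo): report any required
# environment variables that are unset or empty, so you can fail fast
# instead of erroring mid-request.
REQUIRED_UI_VARS = ["OPENAI_API_KEY"]
REQUIRED_AGENT_VARS = ["OPENAI_API_KEY", "TAVILY_API_KEY"]

def check_env(required: list[str]) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]
```

You could call `check_env(REQUIRED_AGENT_VARS)` at agent startup and exit with a clear message if anything comes back.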
Within our app directory we create a route at api/copilotkit, and inside it, route.ts. The first thing we do in the route is import a handful of dependencies: several things from CopilotKit as well as OpenAI. Once we have those, we initialize OpenAI and set up the adapters. We also set up the LangSmith API key; if you want LangSmith tracing, you can have that as well. This endpoint is set up as a POST request. Within it we set up the search parameter, and we also set up the deployment URL: depending on where your LangGraph architecture and agent are deployed, that dictates what the deployment URL is. LangSmith, as I mentioned, is for tracing, so you can see and debug all the different pieces within your application. From here we can set up the different agents we have; in the example I showed you, we're leveraging the research agent. One thing I want to call out is that we can set the remote action URL as an environment variable. This points to wherever you've deployed your Python agent architecture, which can be completely separate: you could have the agent on a Render instance and the Next.js application on something like Vercel or Netlify, put the two wherever you want, and have them communicate with one another. Once we've done that, we initialize the CopilotKit runtime with our remote URL, and finally we handle the requests, which is what ultimately gets sent back to the UI portion of our application to be rendered. Next we go into our page.tsx.
Within page.tsx we set up a client component. Again we import a handful of dependencies, some from the CopilotKit library, along with some of the components I'm going to show you right after. This is a single page application, so the page is effectively the highest point outside of the layout itself; the layout is the typical Next.js layout, and we didn't touch it at all. The page is like the shell of the application: within it we load up a number of dependencies like CopilotKit, and we also import a number of the components I'll show you after this one. It's a pretty simple component: we have our model select, like you saw, and we render the Home component adjacent to it. The Home component is like the shell of the shell; it's the main component that holds the subsequent children components, which have all of those different pieces you saw within the UI of the application. Within the Home component, the important piece is that we use the selector context. This is how we access the context of the agent: whatever the agent is doing, and the various states happening on the backend, are sent down as the agent props within the CopilotKit component, and that's ultimately sent into our application so it has the context of what's happening. Next, the main.tsx.
In main.tsx, the first thing we do is import a handful of things once again: the research canvas, plus a handful of things from CopilotKit, including the types for the agent state, as well as the model selector context you saw at the bottom for selecting between OpenAI, Anthropic, and Google. Within our main component we set up a handful of hooks; this is how we initialize the initial state of our application. In this component we keep track of the model we're using and the agent state, and we also set the initial state here: logs, report, research question, resources, all of those various pieces you saw within the UI. If you wanted to preset some of those things, you could do that within this initial state as well. Next, we set up the default chat suggestion. You might not have caught it, but at the outset of the video this is set up with an example about the lifespan of penguins, just for example's sake; it could be whatever use case is specific to what you want to build a research canvas for. Once we have that, we have some simple JSX: we render the title, below the title we set up the research canvas, which holds some of the other component pieces, we set some styles, and finally we have the Copilot chat. The Copilot chat is that right-hand pane, your copilot where you have your conversation; depending on what you do within that chat panel, it reflects and updates the changes within the UI.
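To make the initial state just described concrete, here is a minimal sketch. The field names are assumptions based on what's shown in the video (the repo declares this in TypeScript and may name things differently):

```python
# Hypothetical sketch of the initial agent state described above; the exact
# field names in the repo may differ.
def make_initial_state(model: str = "openai") -> dict:
    return {
        "model": model,            # which provider the user selected
        "research_question": "",   # the question typed into the canvas
        "report": "",              # the draft streamed back from the agent
        "resources": [],           # {"url", "title", "description"} entries
        "logs": [],                # intermediate progress messages
    }
```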
Within the components we do have a handful of shadcn/ui components; I'm not going to go through those. Instead I'm going to go through the core components, the building blocks of the different pieces you saw that make up the application. The first one I'll show you is the research canvas component. Within this component we specify "use client", since we're on the client side, and this is where we leverage most of the final components I'll be showing you in the video. The first thing we do is set up the research canvas component itself; from there we reach for the context that we wrapped our entire application in. Once we have that, we initialize the co-agent state, and from here we set up some helpers to show different states for what the UI is doing, depending on the state and its particular status. For instance, if we don't have any state for the logs, we render this progress bar here. Then we have the useCopilotAction hook. This is the confirmation to the user to make sure they actually want to remove a resource: if they click delete, that sets the handler to yes; if they click no, it just exits out. In the application you saw that the Tavily responses ultimately get streamed in, but you also have the option to set resources manually. We have a number of pieces of state to update those, whether it's the Tavily implementation streaming responses back into the UI, or a user manually setting each of those resources individually, all of which we ultimately set within the state of the UI layer, which is sent to the backend and ultimately communicated to the backend agent architecture.
From there we just have a function to add a new resource. As a conditional check, we make sure we at least have the URL, and then we set the resource URL; optionally, if you have the title and the description, those are set in the state as well. We have a simple function to remove a resource, where we just filter the results on the particular resource you've selected to remove, if you happen to do that. Finally, we have a handful of hooks for setting some of the different pieces of state we use. Next we have a simple handler for when you click the card itself, and a function for when you update a resource. Finally, within our rendered component, we have the research question: whatever you might be researching, you can put in that context. Within this we have the input, which holds the context of whatever you type there, and from there we list out all of the different resources. We have the edit resource dialog, which I'll show you in just a moment, along with a handful of props you'll see when we go through the component; we also have one for adding a resource, which I'll go through as well, and another condition for the resources component itself, which I'll also show you shortly. Finally we have our research draft: this is where we render our textarea, and the textarea holds the research draft when we get that information back from our agent. It streams in based on the information that was passed into the LLM from Tavily, which ultimately gets sent in and returned back as markdown. That is arguably the biggest portion of the application.
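The add/remove logic just described is simple enough to sketch. The repo implements this in TypeScript; the function names here are illustrative:

```python
# Hypothetical sketch of the add/remove resource logic described above;
# the repo implements this in TypeScript, and the names here are illustrative.
def add_resource(resources: list[dict], url: str,
                 title: str = "", description: str = "") -> list[dict]:
    """Append a resource only if a URL was provided; title/description are optional."""
    if not url:
        return resources
    return resources + [{"url": url, "title": title, "description": description}]

def remove_resource(resources: list[dict], url: str) -> list[dict]:
    """Filter out the resource whose URL matches the one selected for removal."""
    return [r for r in resources if r["url"] != url]
```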
The research canvas holds the main pieces and is largely the parent context of a lot of the UI state. It was a little bit involved, but the other pieces are smaller components that aren't quite as involved, so I'll go through them relatively quickly, and then we'll move to our agent architecture, which holds a lot of the logic for how we communicate with LLMs; you'll see the different pieces of how all of that works. The progress.tsx shows different intermediate state depending on what the application is doing: depending on the logs we send back, it maps each particular step, and you see that progress step item. As you put in a query, you saw it parsing different requests from Tavily and starting to list the various web pages; that's effectively what shows in the logs here. It's nice to have a log state where, depending on the state of the logs, you can just show whatever is there; there can be multiple pieces within it, and what's shown can vary, but if you want a standardized log display, this is effectively what you can do with a progress component like this. From here we connect the lines between these steps; if there are multiple steps, like searching multiple web pages, it does that, and there are a couple of helpful pieces here within the UI to parse through things like the URL to make them look a little more visually appealing. Next, our model selector. Not a particularly exciting component: all it does is, if you have multiple models within your application, let you select a particular model. If you're using OpenAI you can select OpenAI, or alternatively Anthropic or Google, and you can add others within the model selector if you'd like. Next is the component for the add resource dialog.
Again, we import a handful of things, set up the types, and then set up the component itself. This is for when you add a resource: if you want to add a new resource, you just need the resource URL, and you can add a title and a description. Those are all the pieces that ultimately get sent back and that can be fetched as context, if you also want some hardcoded resources in addition to what Tavily reaches for. Finally you have the submit button, which ultimately adds the resource to the state of the application, so when you send in a request it can grab that particular resource. Next I'm going to set up the edit resource dialog. Within this we again import a handful of dependencies and define some of the props and interfaces for the different types we'll use. This component is very similar to the add component: we have the ability to edit the resource URL, edit the resource title, and edit the resource description, just as you'd guess, and then finally save it out once we're done. For our last component we have resources.tsx.
We import a handful of things and set up the interfaces we're going to use within the component. This is the parent component where all of the different resources live: we have a card and the card content for each resource. We walked through the steps for when you add or edit one and get that dialog where you can put in the particular information; this is the component where all of those pieces you add, edit, or that get streamed in are shown, almost like block-level elements across the screen. I think it showed about three or four, and that's what this component is. Within here we have the resource title; these are all dynamic, depending on whether they come in from Tavily or you've set them up and input them manually. We have the card for all of the different resources we're using as context, which we ultimately leverage within that final piece, the textarea where we have the markdown. That's pretty much it in terms of the UI. We do have some utility functions referenced within some of the logic I just showed you: we have a function to truncate URLs, and we also have a types file with things like the agent state for the core pieces (the model, the research question, the report, the resources, and the logs). Then we have the model selector provider, which is how we map to whatever model we're going to use; for instance, if we want to use Google Gemini and that particular agent, we can reference that. That's pretty much it for the Next.js portion. I know it's a lot of different pieces, but hopefully, having walked through it, you have a better understanding of how all of this works.
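As an aside, a URL-truncating utility like the one just mentioned might look like this. This is a hypothetical version; the repo's TypeScript helper may differ in name and exact behavior:

```python
# Hypothetical version of the URL-truncating utility mentioned above;
# the repo's TypeScript helper may behave differently.
def truncate_url(url: str, max_length: int = 40) -> str:
    """Strip the scheme and shorten long URLs for display, adding an ellipsis."""
    display = url.removeprefix("https://").removeprefix("http://")
    if len(display) <= max_length:
        return display
    return display[: max_length - 3] + "..."
```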
To actually run this, you can just go ahead and pnpm dev, and you can also find the scripts in here: how to build it, how to run the development server, how to start it, and all the different pieces there. Now we can move from the UI portion of our application to the agent architecture. The nice thing with LangGraph, if you haven't used it before, is that you can load this example up within LangGraph Studio; you just have to make sure you have LangGraph Studio installed, as well as Docker, and then you can import the agent directory, which you'll have within the repo for the video. If you want to deploy this, you can deploy it to LangGraph Cloud itself; I believe the offering is still in beta, but this is effectively the configuration file for setting that up when you pull in your repo, and if you want to use it as a hosted production endpoint, you have that option. Similar to how we set up the .env within the root of the Next.js portion of the application, we're going to set up a .env within the agent as well; just make sure you have the OpenAI API key and the Tavily API key in there. For the agent, once it's pulled down, you can go ahead and poetry install, which pulls down everything you need, and then you can go ahead and poetry run demo, and you should see it running on its port. By the end of it, all you need is pnpm dev and poetry run demo to have the different pieces of your application work: you have your Next.js layer and your agent layer, and all of the pieces in this example are wired up so you can get started building on top of it. The first thing we're going to go through is the agent; this is the main portion of our application and houses a number of the smaller component pieces of our LangGraph architecture.
Within here we import a handful of libraries: a couple of things from LangGraph, as well as a couple of pieces shown within the directory here. Similar to the Next.js portion, I'll run through it step by step. The first thing we do is initialize the workflow graph. This is the starting point for setting up a LangGraph architecture: you declare your workflow as a StateGraph, and this is where we pass in the agent state. The way LangGraph works is that you establish all of these different nodes and how they connect to one another. Within here we have a number of nodes that I'll go through in just a little bit, but effectively all you need to do is reference pieces of logic that you have within one file, or alternatively break them up if some of the pieces are more cumbersome, just to keep things cleaner. For instance, for the chat node, we can import it from the chat.py file and add it to the graph just like that.
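To make that node-and-edge wiring concrete without pulling in the dependency, here is a tiny pure-Python stand-in for the pattern. The real project uses LangGraph's StateGraph with add_node, add_edge, and set_entry_point; this class only illustrates the shape of the idea:

```python
# Illustrative stand-in for the LangGraph wiring described above; real
# LangGraph provides StateGraph/add_node/add_edge, but the shape is the
# same: named nodes (functions over state) plus edges between them.
class MiniWorkflow:
    def __init__(self):
        self.nodes = {}   # name -> callable taking and returning state
        self.edges = {}   # name -> next node name (None means end)
        self.entry = None

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_edge(self, src, dst):
        self.edges[src] = dst

    def set_entry_point(self, name):
        self.entry = name

    def run(self, state):
        current = self.entry
        while current is not None:
            state = self.nodes[current](state)
            current = self.edges.get(current)
        return state
```

For example, wiring a "download" node into a "chat" node and running the workflow walks the state through both functions in order.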
Next we set up a routing function. Within it we get the state of the particular messages that have been sent back and forth; depending on the messages, we have the user messages and the AI messages. From there we specify whether the AI message has any tool calls: if the tool call specifies the search node, that's how we leverage the search node; if it's to leverage the delete node, we leverage the delete node; and if it's not using a tool and we're just using the message and the chat node, we leverage the chat node. So you start to see how you can use things like function calling in combination with plain natural language, and have requests broken out into these different pieces: if it's a search query, it can go through the graph, whatever the architecture is, and depending on what the function is, it could go through a number of different hops within the graph. Then, if it's not a chat, delete, or search node, we just end the workflow, because the conditions have been met. Next we initialize memory; this is what we leverage for checkpointing within our application. We set download as our entry point, and we declare all of the different edges and how they relate to one another: how download relates to chat, as well as any conditional nodes. There's the chat node, and depending on what the query is from the chat node, we can reference the search node, the chat node, or the delete node; we add those subsequent edges as well. We add the delete node within the graph, the perform-delete node, and then we define all of the edges within the workflow. You can think of this as effectively how the different pieces within the graph connect to one another.
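The tool-call routing just described can be sketched as a small function. The node and tool names here are illustrative, not necessarily the repo's exact identifiers:

```python
# Hypothetical sketch of the routing logic described above: inspect the last
# message's tool calls and pick the next node. Names are illustrative.
def route(state: dict) -> str:
    """Pick the next node based on the last AI message's tool calls."""
    messages = state.get("messages", [])
    if not messages:
        return "end"
    tool_calls = messages[-1].get("tool_calls", [])
    names = {call["name"] for call in tool_calls}
    if "Search" in names:
        return "search_node"
    if "DeleteResources" in names:
        return "delete_node"
    if not names:
        return "chat_node"   # no tool call: keep chatting
    return "end"             # unrecognized tool call: end the workflow
```

In real LangGraph, a function like this is what you pass to add_conditional_edges so the graph knows which node to hop to next.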
You can see how we have these conditional edges: within the chat node, as we went through, depending on the query, if it uses something like function calling it can route to these particular nodes, conditional on whatever the query might be. Finally, we just need to compile the workflow into the graph, and that's what you ultimately see within LangGraph Studio as those nice visual representations of what the graph looks like. Next, before I go into some of the specific nodes within LangGraph, I'm just going to touch on and tie up some of the pieces around the application. First, when we ran our server, we were actually running this file here, which is effectively the FastAPI app. The reason we have this is that it's the HTTP server that communicates back and forth with our Next.js server. Within here we import FastAPI as well as a number of things from CopilotKit, and all this is really doing is acting as the layer that sits on top of everything we just set up within LangGraph. Within this file we can set up the different research agents we have; you could very well have just one agent, depending on the application, but you do have the option of multiple. You can set up the route where you want this to be interacted with, and once you have that, declare what you want your endpoint for the server to be. We also set up an endpoint for health checks, so if you just want to ping the server to make sure it's healthy and not down, we have a route for that. Then we run the server on port 8000, which allows us, once this is ultimately deployed or when running locally, to communicate between Next.js and the Python environment. Next, the state.py.
This is where we declare the different pieces of state. We touched on this already within the Next.js portion, but this is the equivalent within the Python environment. For instance, for the resources, the URL is a string, the title is a string, the description is a string, and so on; you can basically see all of the different pieces here as well. Then there's the log, along with a flag specifying whether that log's stream is done, and finally the agent state, which we went through a number of times within the components: it has the model, the research question, the report, the resources, and the logs. Next we go through the model.py file. The model file is pretty straightforward: depending on which provider you've selected, you set up and declare whichever model you want to use. Say you're using OpenAI and want GPT-4o mini, you can set that; alternatively, if you're using Anthropic or Google, you can declare the model you want there as well, and configure some of the other settings, like the temperature, if you want to play around with those. The one thing to know is that you do have to make sure you specify the Anthropic API key, the Google API key, or the OpenAI API key within your environment variables if you're going to leverage multiple models. Finally, if an invalid model was specified, we just log that out. Next we set up the chat node, which is arguably the main portion of how you interact with the application. Within here we again import a handful of dependencies, and we define a number of the tools that are available to the research agent.
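Pulling together the state fields described for state.py, the Python declarations might look roughly like this. This is a hypothetical sketch based on the fields mentioned in the video; the repo's actual class and field names may differ:

```python
from typing import TypedDict

# Hypothetical sketch of the state declared in state.py, based on the
# fields described in the video; actual names in the repo may differ.
class Resource(TypedDict):
    url: str
    title: str
    description: str

class Log(TypedDict):
    message: str
    done: bool          # whether this log entry's stream has finished

class AgentState(TypedDict):
    model: str
    research_question: str
    report: str
    resources: list[Resource]
    logs: list[Log]
```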
Within the function for the chat node itself, we set up the configuration; this is how we emit the intermediate states from within the architecture. The thing that's nice with CoAgents and CopilotKit is that you can stream intermediate states: if there are particular nodes within the backend agent architecture, you can stream those pieces out to the UI. You saw that within the application, where the Tavily resources streamed in as they arrived and the logs streamed out all of those different pieces; that's an example of streaming intermediate state. You don't need to wait for the final LLM response to stream something into your application; you can emit intermediate states depending on what's happening. From there, we initialize some of the state variables: resources, research question, and report. Once we have that, we process and filter the different resources; when the resources come in, we have to organize them, because they're ultimately parsed and sent in with subsequent requests. Then we get the model we've selected, grab the particular model we've declared, and actually invoke it. This is also where we bind some of the tools that we send into OpenAI. If you're not as familiar with how function calls work: effectively, they're a natural-language mapping that you send in with your query, which tells the LLM that, depending on the query, if there's anything relevant that could be sent to a tool call, or a tool call could be utilized based on the context, it should go ahead and leverage that particular function call. That could be one function call, zero, or multiple, depending on how your application is set up, but that's effectively what's happening there.
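For reference, a tool definition in the OpenAI function-calling format looks like the following. The "Search" tool and its "queries" parameter are assumptions for illustration; the repo's actual tool may declare different fields:

```python
# A sketch of a tool (function call) definition in the OpenAI format;
# the actual Search tool in the repo may declare different fields.
search_tool = {
    "type": "function",
    "function": {
        "name": "Search",
        "description": "Run web searches for the given queries.",
        "parameters": {
            "type": "object",
            "properties": {
                "queries": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Search queries to execute.",
                },
            },
            "required": ["queries"],
        },
    },
}
```

The description strings are the "natural language mapping" mentioned above: they are what the model reads when deciding whether and how to call the tool.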
Then we declare the system message. It's pretty descriptive, giving instructions that the model is a research assistant, and we also pass in the context of the research question, the report, and the resources. Once we get the response back from the LLM, we check whether there are tool calls: if there's a tool call that matches the particular name we sent in, we send in the report, and if something matches that tool call, we go ahead and return that result. We also include the tool messages in the message context we pass along, just so we have that context within the state of our application. Next, the delete.py file. It's relatively straightforward: it just handles the deletion of particular resources. Say on the front end we've removed a resource and we ask a subsequent question in natural language; we just remove it from our agent state. For instance, if a user walks through the steps to remove a particular resource, maybe a website they don't want to reference, we go ahead and remove that resource and don't use it as context within the query.
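The delete-node behavior just described boils down to filtering the agent state. A hypothetical sketch, with names of my own choosing:

```python
# Hypothetical sketch of the delete node described above: take the URLs
# named in the tool call and drop them from the agent state's resources.
def perform_delete(state: dict, urls_to_delete: list[str]) -> dict:
    remaining = [r for r in state["resources"]
                 if r["url"] not in urls_to_delete]
    return {**state, "resources": remaining}
```

Returning a new state dict rather than mutating in place fits how LangGraph nodes typically pass state along.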
py what we're going to be doing within this is going to be effectively how we reach for those HTTP resources when we use tab it's going to give us the particular Links of whatever we're searching for but then the next step for that is actually reaching for whatever is on the web page and then parsing those so getting it to be from HTML to text because within the llm you can pass in HTML as a string but you're going to be passing in frankly a bunch of stuff that you just don't need right whereas with the HTML to text you're just going to be sending in the relevant pieces of information that an llm can parse you're going to be saving on tokens and speed and all of that effectively what this does is we're just going to be declaring a user agent you can think of it as and then what we're going to be doing from here is we're going to be reaching for that resource which is going to be the URL and then we're just going to be performing some of those helper functions within this we're going to reach for that get the text get the whole response bag and then we're going to be converting that web page to text and that's going to be what we have within the state that we can ultimately pass within the llm what we're going to do is we're going to have the state for the resources and then we're also going to be keeping track for the log as it runs through parsing each different URL the resource of the URL that you put in it's going to work through those and it's going to show those within the logs that you have within that co-pilot on the right hand side there effectively how this is going to work is we're going to go through the different resources we're going to see which resources need to be downloaded we're going to log out the different pieces as well here's an example of the intermittent state that we can stream back and then we're going to Loop through all of the different resources and then we're actually going to call that download resource and pass in the URL once it's 
done, we're going to set the log state to done, and that's going to indicate to our application that we can move on to the subsequent pieces. Again, we can emit the state for the download progress as well, and then finally we can return the final state. And finally, our search node: this is going to be what does quite a bit of work. We're going to be leveraging the Tavily API, like I mentioned at the beginning of the video; we're just going to initialize the Tavily API. Now, what's going to be important with this is we're going to indicate to extract the three to five most relevant resources from the search results. Once we've set that up, we're going to have our function, we're going to set some of the state variables, and then we're going to create logs for each query. The way this is set up, we're going to be making multiple requests to Tavily depending on the query, and within this we also have an example of how we can emit that state on what gets sent back to the application. For the search results, we're going to organize them all within this array here: we're going to loop through the different queries, extract all of those queries, and then, once we have a response back, we're going to append all of those different responses to this array. Finally, once it's done, we're going to indicate that the logs are done and update that state once again. From there, we're going to get the model, and once we have the model, we're going to send in that request. This is where our system message says: extract the three to five most relevant resources from the search results, depending on what we get back from Tavily. We have our search results here; all of the different responses get appended, and then we're going to send them into an LLM, basically saying, like it says within the system message, just get the three to five most relevant resources. If you're getting 10 or 20 resources back from Tavily and a bunch of them aren't relevant, you can just filter through some of that noise; that's effectively what this portion does here. Finally, we're going to be clearing the logs and updating the UI; again, we have another example of emitting that state from one of the nodes. Then we're also going to be processing the extracted resources: once we get the response back, we're just going to process the results, and finally we're going to update the state that we've added the particular resources, and those are now within the context and state of our application. That's pretty much it for this one. I know there was a ton that I went through, but I wanted to really go in depth on all of the different pieces of how you can set this up. Hopefully you understand the different pieces of LangGraph, CopilotKit, and how you can begin to build out these new applications with the two. I think there are going to be a ton of different applications where it's going to be more than just natural language in and natural language out. More and more, we're going to be controlling these applications: being able to reference context within the application, update the application with natural language, or vice versa, and it's just going to become more and more prevalent within the applications that we build as well as use. Hopefully you found this video useful. If you did, please like, comment, share, and subscribe; otherwise, until the next one