
Repo: https://github.com/developersdigest/Anthropic-Claude-Clone-in-Next.JS-and-Langchain

Related video — OpenRouter: Unlock GPT-4-32K & Claude-2-100K APIs Instantly!: https://youtu.be/93QLqpfcqjA
---
type: transcript
date: 2023-08-11
youtube_id: brElptE736k
---

# Transcript: Build a Claude Clone in Next.JS with Langchain + Supabase in 25 Minutes

In this video I'm going to show you how to build your own Claude-UI-inspired chatbot in Next.js. I'll be leveraging LangChain for a ton of its core features, and I'll show you how to connect the front end to the back end to incorporate things like OpenAI functions and embedding documents into services like Supabase. I'll also show you how to use Vercel's new AI library.

Let me demonstrate. If I select GPT-3.5 in the top left, we see these functions pop up. If I ask "what is LangChain?", the LLM notoriously doesn't know, because models are trained on data with a lag. But if I turn on an OpenAI function that I wrote (and I'll show you how to write your own custom functions with LangChain) and ask "what is LangChain?" again, the tool reaches out to Wikipedia, returns the result, and uses it to inform the answer.

That's one feature. Another is real-time data. If I ask "what is the price of Bitcoin?", the model can't get the price. But if I ask the same question with the crypto price function toggled on, it gives me the current price. Those are just two examples; this isn't really about Wikipedia or crypto prices per se. I'm going to show you how to write your own functions, toggle them on and off like that, and plug in whatever APIs, responses, or actions you see fit.

The next thing the app can do: you'll be able to upload a series of documents and embed those documents.
The embeddings go into Supabase, but I've also built a little component toggle switch in case you want to incorporate vector offerings from other companies. That's a Pinecone logo there, so if you want to build with Pinecone you can leverage this and build off of it.

One thing to note: right now I don't have access to Claude 2 100K directly through Anthropic's API; I'm still waiting on access to their core offering. That said, there's a service called OpenRouter where you can access the model today, and a number of users have been using it as a stopgap while they wait for access. I have a video on that, linked in the description, if you're curious about setting it up.

Without further ado, let's get into the code. I'm going to go awfully quick, but if you're not following along line by line, you can just grab the repo from the description of the video; it should be posted shortly after this goes up.

First, head over to VS Code and open `.env.local`. You'll have to chase down a couple of environment variables: your OpenAI API key, and your Supabase private key and URL. The LangSmith values are optional; paste them in if you want that tracing. If you're not familiar with LangSmith, I encourage you to sign up for the preview right now. I have a couple of videos on what it is and how to use it in a real project, and it really gives you a ton of good insight.

Next we install a handful of packages; check out the list in `package.json`.
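As a rough sketch, `.env.local` might look like the following. The exact variable names here are my assumptions based on the walkthrough (OpenAI key, Supabase private key and URL, optional LangSmith values); check the repo for the names it actually reads.

```shell
# Required (names assumed; verify against the repo)
OPENAI_API_KEY=sk-...
SUPABASE_PRIVATE_KEY=...
SUPABASE_URL=https://your-project.supabase.co

# Optional: LangSmith tracing
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=...
LANGCHAIN_PROJECT=claude-clone
```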
You can `npm i ai dotenv langchain` and so on to get everything installed after you have your Next.js boilerplate set up, or if you grab the repo, just `npm install` and you'll get all the packages from there.

The first thing I'll run through is actually setting up the back end. We start by importing a handful of dependencies. We initialize our agent executor with options; that's how LangChain will handle our OpenAI function selection. I'm also incorporating a few different ways of setting up tools: `DynamicTool`, `DynamicStructuredTool`, and `WikipediaQueryRun`. The Wikipedia one is built into LangChain; `DynamicStructuredTool` is a way to pass in something like a JSON payload; and `DynamicTool` is the simple case, so I'll show you a foobar-style example of how you can build a custom tool with it. Next we pull in `ChatOpenAI`, `OpenAIEmbeddings`, the Supabase vector store and its client, and `StreamingTextResponse` from Vercel's AI library. Finally, we'll be using Zod for our schema validation.

We define a few interfaces, which will make sense in just a moment, then set up our environment variables, pulling our private key and URL into the application. Then we set up a POST route. The first thing it does is destructure what we're sending from the front end: the messages, the functions that are selected, any files, the selected model, and the selected vector storage.

If I go back to the interface and refresh: the top-left selector picks the model, this is the message input, our files come from what's selected in this array, this changes the vector storage, and finally this is the functions array. So you can see how it all ties in with the back end.
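As a minimal sketch of that request shape: the field names below are my assumptions from the walkthrough (messages, toggled function names, base64 files, selected model, selected vector storage), not necessarily the repo's exact ones.

```typescript
// Hypothetical shape of the JSON body the front end POSTs to the route.
interface ChatRequestBody {
  messages: { role: "user" | "assistant"; content: string }[];
  functions: string[]; // names of the tools toggled on in the UI
  files: { name: string; type: string; size: number; base64: string }[];
  selectedModel: string; // e.g. "gpt-3.5-turbo"
  selectedVectorStorage: string; // e.g. "supabase"
}

// Destructure with defaults so optional fields (like files) never break the route.
function parseChatRequest(body: Partial<ChatRequestBody>): ChatRequestBody {
  return {
    messages: body.messages ?? [],
    functions: body.functions ?? [],
    files: body.files ?? [],
    selectedModel: body.selectedModel ?? "gpt-3.5-turbo",
    selectedVectorStorage: body.selectedVectorStorage ?? "supabase",
  };
}
```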
Okay, once we have that, we handle an example case for Claude 2. I put this in so you can get ready for your Claude 2 access; if you do have access, you can fill out the logic here and set that condition. Right now the app is predominantly set up for OpenAI, so this branch simply returns a canned reply if Claude 2 is selected. When you're actually using the app, make sure OpenAI is selected, otherwise you'll just keep getting "this is an example of a streaming response from Claude."

Next we process the data sent from the front end. We send base64 versions of our files to the back end; this is where we parse them and establish them as readable string content. We also grab the latest message, just the last message that was sent, which is what we actually end up sending to our agent.

Then we check whether any files were sent over, since that's optional; this only runs and embeds the documents if they're selected. Right now it's all set up with Supabase, but you could add a condition similar to the Claude 2 one: if it's Supabase, do this; if it's Pinecone, do that. In the Supabase implementation we declare the client, map through the files, and put them into vector storage. One thing to note: if you have very long files, you will need a recursive character splitter. I have a number of videos on LangChain covering that on my channel if you're interested.
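The base64 parsing step can be sketched like this. It assumes Node's global `Buffer`, and the optional data-URL prefix handling is my guess at the payload shape, not necessarily how the repo does it.

```typescript
// Turn a base64-encoded upload back into readable text on the server.
function decodeBase64File(base64: string): string {
  // Strip an optional data-URL prefix like "data:text/plain;base64,"
  // (FileReader.readAsDataURL on the front end produces that prefix).
  const raw = base64.includes(",") ? base64.split(",")[1] : base64;
  return Buffer.from(raw, "base64").toString("utf8");
}
```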
Then we call the OpenAI embeddings endpoint and actually embed everything. After that we do a similarity search with our question, specifying that we want three results, and that's what gets passed to the OpenAI GPT-3.5 LLM. We tell it: the user queries this, and before using prior knowledge, use the following info from the vector results.

Next we set up our agent executor with LangChain and start declaring some of our tools. The first one is the Wikipedia query tool. That's a built-in from the JavaScript implementation of LangChain, which is the one I'm most familiar with and why I'm using it for demonstration's sake.

Here's an example of a very simple tool you could build. I don't have this one in the UI, but if you want to create a tool that takes in a string, you can ask "what is foo?" and it returns a response with the details you put in. That's how you establish your own custom tool in LangChain with a simple input and output. And if you have a more involved tool that takes a schema of arguments (this one takes a crypto name, plus a currency as an optional argument), this is an example of how you do that. So there are built-in tools, and then a couple of different ways to structure your own so everything works the, quote-unquote, "LangChain way."

Then we define the functions and tools. Because the front end sends function names in as strings, we declare the actual functions here so we can leverage them.
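To make the two tool shapes concrete, here's a self-contained sketch in plain TypeScript. It mirrors the idea behind LangChain's `DynamicTool` (string in, string out) and `DynamicStructuredTool` (a typed argument object, the role Zod's schema plays), but it is not the actual LangChain API; the names and the price-lookup behavior are hypothetical.

```typescript
// Simple tool shape: plain string in, string out (DynamicTool-style).
interface SimpleTool {
  name: string;
  description: string;
  func: (input: string) => Promise<string>;
}

const fooTool: SimpleTool = {
  name: "foo",
  description: "Returns details about foo.",
  func: async (input: string) => `You asked about: ${input}. Foo is a placeholder.`,
};

// Structured tool shape: a typed argument object instead of one string
// (DynamicStructuredTool-style, where Zod would validate these fields).
interface CryptoArgs {
  cryptoName: string;
  vsCurrency?: string; // optional currency, like in the video's example
}

const cryptoPriceTool = {
  name: "cryptoPrice",
  description: "Looks up the price of a cryptocurrency.",
  func: async ({ cryptoName, vsCurrency = "usd" }: CryptoArgs) =>
    // A real tool would call a price API here; this is a stand-in.
    `Price lookup for ${cryptoName} in ${vsCurrency}`,
};
```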
It loops through them, and if a function is active, it pushes it into the tools array that gets passed to the agent executor. So here we pass in the tools we've selected, the model we're using, and the agent; this works with GPT-3.5 or GPT-4. Finally, we run the executor: we pass in our input and then chunk the results.

This is just a streaming example; I'll likely swap it out once I dig into the new streaming methods coming to LangChain. Essentially it waits for the full string response from the executor, then emulates the token-by-token output you'd typically be familiar with, with a bit of randomness in the length of time between the blocks of characters that come through. And that's pretty much it for the back end.

The front end is a little longer in terms of the amount of code, but still not too bad, I don't think. I kept it very simple: it's two files, and I'd encourage you to break it out further if you're looking to build some cool little project off of this. I just wanted to keep it within two files for the sake of people following along with the video.

The first thing we do is declare that this is a client-side React component in our Next.js application. We use the `useChat` hook from Vercel's AI library for the React implementation, and we destructure `useState` and `useEffect` from React. Then there are all the icons I'm incorporating into the application.
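The emulated streaming can be sketched as follows: split the finished string into randomly sized chunks, then emit them with small random pauses. The chunk sizes and delay ranges below are my guesses, not the repo's exact values.

```typescript
// Split a finished response into randomly sized blocks of characters.
function chunkString(text: string, minSize = 2, maxSize = 6): string[] {
  const chunks: string[] = [];
  let i = 0;
  while (i < text.length) {
    const size = minSize + Math.floor(Math.random() * (maxSize - minSize + 1));
    chunks.push(text.slice(i, i + size));
    i += size;
  }
  return chunks;
}

// Emit each chunk with a random pause, emulating token streaming.
async function emitChunks(text: string, send: (chunk: string) => void): Promise<void> {
  for (const chunk of chunkString(text)) {
    send(chunk);
    await new Promise((resolve) => setTimeout(resolve, 20 + Math.random() * 50));
  }
}
```

Joining the chunks always reproduces the original string; only the pacing is randomized.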
There's just a handful here from the Phosphor Icons library.

The first component is a simple model selector, that top-left component. You can drop your images into the `public` folder like I have; I've got Anthropic and OpenAI. We handle the click, and when a model is selected we send that back to the parent component to inform it of what's been chosen; that's what gets passed in the payload when handleSubmit finally fires. Pretty simple component: all it's really doing is changing that image and that piece of state in our application.

Next is a similar component. All it really does is show that first welcome page, with the "welcome back" text and the LangChain parrot. And similar to our first component, we have a vector selector: almost the exact same thing. We establish the initial state with the Supabase logo, then there are the icon selections it loops through as you click them. Same pattern: it sends the selection back to the parent component, and that gets sent in the payload to our endpoint when we're ready.

Okay, so now we establish our app component, then a handful of hooks, which I'll run through quickly. First there's a hook for the "welcome back" page. This controls a handful of things, like where the input sits on screen (it pops down when you're not on the welcome screen) and whether that initial welcome component shows. There's also a focus state: for example, if you click on the files and then click back down to the text area, it collapses the panel of functions that were toggled up there.
We also have our file state, plus a handful of others I won't go through because they're pretty self-explanatory. I will touch on this one, though, because it's how we link the front end with the back end: the `name` here has to match the function's name on the back end, but since you might not want something squished together like camelCase showing in the UI, there's also a `label` that's what actually renders on the front end.

Here's where we establish the `useChat` hook. This is from Vercel's AI library, which is just awesome: it makes it really easy to handle messages and inputs, and to send payloads back and forth from the front end, which we'll get into in this example. Next we simply handle the vector storage selection, whether it's Supabase or Pinecone, and a similar thing for handling the models. `handleFocus`, like I mentioned, sets the slide-up state for whether the functions panel shows and handles the focus for us, and there's a matching `handleBlur`. A lot of these are pretty self-explanatory, so I'll keep moving.

Oh, one thing I didn't demonstrate: there's a Command-K feature. If you go back to the application and hit Command-K, you get the welcome-back home screen. If you're looking to build on this from here, a cool feature might be listing your historic chats there; trying to implement that with Supabase or something else could be neat. But yeah, you have Command-K, and I wanted to demonstrate how you could do that as well.

Next is our keydown handler. Because we're using a text area and not a simple input, it listens on keydown for the Enter key.
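The name/label idea can be sketched like this; the field names are my assumptions about the shape of that state, not the repo's exact code.

```typescript
// `name` must match the backend tool's key; `label` is what the UI renders.
interface FunctionToggle {
  name: string;   // backend function name (e.g. camelCase key)
  label: string;  // human-friendly text shown in the slide-up panel
  active: boolean;
}

const functionToggles: FunctionToggle[] = [
  { name: "wikipediaQuery", label: "Wikipedia Search", active: false },
  { name: "cryptoPrice", label: "Crypto Prices", active: true },
];

// Only active names go into the request payload, mirroring the backend loop
// that pushes active tools into the executor's tools array.
function activeFunctionNames(toggles: FunctionToggle[]): string[] {
  return toggles.filter((t) => t.active).map((t) => t.name);
}
```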
Then there's a `useEffect` to show the welcome-back message when it's set, and a remove-file function for simply removing a file like that. The handleFileChange event is where we set up everything we need to convert a file to base64 and get it into the format with all the options we send to the back end: the size, the file type, the name, and the actual encoded file itself. Then there's our handleIconClick.

From here we get into the actual JSX. First we have our model selector in the top-left. Next we conditionally decide where the input should be displayed relative to the messages, with some conditional logic around our Tailwind classes; pretty much all the CSS here is Tailwind, so if you're not familiar with it, that's what all these classes are. It can seem a bit verbose as we go through, but that's the nature of Tailwind: you'll have a lot of classes.

If the welcome-message state is true, we render that welcome-back message. For the messages, if there are any, we loop through them: if it's the user, you can have a custom user avatar, and if it's the AI, I just put the LangChain parrot there next to the output. You can arrange it however you like, left side or right side, swap out the icons; it's all there for you.

Next is the fieldset, the form, and the text area where we actually submit to the back end. With the AI library, the way you pass extra options is through a second argument: an options object with a `body`, where you include things like the variables we have in those hooks.
So we pass things like our selected model, vector storage, et cetera. I'm running through this pretty quick, but if you have any questions, feel free to reach out in the comments below; if something's not working, I'm pretty responsive and diligent about getting back to all my comments. And if you have questions or ideas for future videos, I'm fair game for suggestions: this one was actually inspired by a comment asking to implement both OpenAI functions and something with Supabase.

Here we show our slide-up. We map through our functions, show the list of function names, handle the click on which functions are selected, and push those up into state for when the form is actually submitted. We also have the icons that switch back and forth: a circle, and a check circle, depending on whether you've checked them or not.

Next is our text area. It's a bit beefy, partly because it's a text area and not an input, and I just wanted everything easy to see in one place. In its handleSubmit you again have your options with all the files, functions, vector storage, et cetera, and we listen specifically for the Enter key to trigger it.

Then we determine the icons on the right-hand side: our function icon and our attachment icon, depending on the selected model in the top-left corner. If I click over to the other model, you can see the function icon disappears; that's what this logic is.
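The extra `body` passed alongside the submit can be sketched as a small helper. The Vercel AI SDK's `handleSubmit` accepts an options object whose `body` is merged into the POST payload; the state field names below are my assumptions.

```typescript
// Hypothetical UI state collected from the hooks described above.
interface ChatUiState {
  selectedModel: string;
  selectedVectorStorage: string;
  activeFunctions: string[];
  files: { name: string; type: string; size: number; base64: string }[];
}

// Build the options object to pass as handleSubmit's second argument,
// e.g. handleSubmit(event, buildSubmitOptions(state)).
function buildSubmitOptions(state: ChatUiState) {
  return {
    body: {
      selectedModel: state.selectedModel,
      selectedVectorStorage: state.selectedVectorStorage,
      functions: state.activeFunctions,
      files: state.files,
    },
  };
}
```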
Next we have an attachment button, a similar idea. One thing to note: I set it up to work with `.txt` files only. If you want to use other formats, there are a ton of document loaders in LangChain; I have videos on incorporating packages to read things like CSVs or PDFs using their loaders. We have our paperclip icon, and a start-new-chat button whose state differs depending on whether you're on the welcome-back screen. If there are files present, we show the Supabase vector icon for when that panel pops up.

Next is where we display those files. If there are multiple, we loop through them, and again there's the icon you can toggle back and forth. There's a fair amount in here; I'm still working out a couple of bugs around the edges, so if you encounter any, make a pull request or open an issue and I'll try to circle back and get them ironed out. You'll see things like the actual file size, the file name, and the file type, which is why this part runs a bit longer.

And that's pretty much it, apart from the focus state for the commands: multi-line input and that Command-K feature, just like it shows in the Claude UI.

I know I ran through this fast, but I'll be including the repo; the goal is more to give you a high-level understanding of how you can build out an application in Next.js and leverage a ton of cool stuff in LangChain. I'm going to be putting out a number of LangChain examples on different features as they come out, and if there's enough interest I'll continue to build on this project itself. If you have any suggestions on what you'd
like to see within it, let me know in the comments below. Otherwise, if you found this video useful, please like, comment, share, and subscribe, and until the next one.