
In this video, I demonstrate how to set up and deploy a custom proxy server for OpenAI's custom GPTs using Node.js in roughly 50 lines of code. I also explain how to deploy the server on a platform called Render and discuss how the proxy server integrates with custom GPTs. The video provides step-by-step instructions on setting up the proxy server in Node.js and walks through the scraping and parsing logic involved. Additionally, I briefly explain how to set up a custom GPT using the GPT Builder and create a JSON schema to define the GPT action. The video concludes with instructions on creating a Git repository and deploying the server with Render. Overall, this is a tutorial guiding viewers through building their own proxy server and integrating it with custom GPTs. Repo coming soon.
---
type: transcript
date: 2024-01-05
youtube_id: pfBxw56hwf4
---

# Transcript: GPT Proxy: Build a Proxy Server for OpenAI Custom GPTs in 10 Minutes

In this video I'm going to be showing you how you can set up and deploy your own custom proxy server in Node.js in only about 50 lines of code. We're going to run through this relatively quickly. Once that's all set up, I'm going to be showing you how you can deploy it on a service called Render. Render is a really great service that offers a free tier if you're just looking to tinker, demo, or just try out their service; they have a generous free tier that you can go ahead and deploy Node.js apps on. Once it's all set up, we're going to be integrating this within custom GPTs. The reason I wanted to do a little bit more content around custom GPTs is that I just saw today that the GPT Store will be launching next week. If you're not familiar, the GPT Store was announced during the OpenAI Dev Day, and it will be a way that a lot of AI builders and LLM developers will be able to monetize what they build on the GPT Store platform. I'm going to be showing you how and why a proxy server could be useful in the context of a custom GPT.

I'm going to be pulling over my screen for what I've set up already, and I'll go through all the steps to get to this point if you're curious how to set it up. On the left-hand side here I have my own custom GPT, this Website Proxy GPT that I've set up, and on the right side here I have GPT-4. So if I put in the question of what is on Hacker News right now (and I'll explain in a little bit how this actually works for the website proxy, and how it goes and parses and gets the information), I can go ahead and query both of these at the same time. Now, what GPT-4 is doing: it says it's going to Bing, it's searching Bing, it's looking for the information. It might parse a website, it might parse multiple. By the time you get a response back it can
sometimes feel like it takes a little bit of time, and oftentimes you might feel like you don't have the control to steer it in the direction that you want. So if you see here, we have essentially the same information across both the website proxy and GPT-4, but you might have noticed that on the website proxy the response started to come in a fair bit sooner than GPT-4. The other thing is that by having control of the proxy server, you don't necessarily need to just return the HTML or inner text of the website; if you want to integrate something with LangChain within this, you could definitely do that type of thing as well.

The first thing I'm going to do is actually show you all the Node.js code, and we're going to run through this relatively quickly. The first thing that you're going to have to do is set up a new directory and go ahead and `npm init -y`. Then we're going to install a few dependencies: we're going to `npm install axios cheerio express`, and I'll get into those in just a moment. Once you have all those dependencies installed, you can go ahead and touch an index.js as well as a .gitignore, and you can also pull this down from the repo in the description of the video, which I'll be posting shortly after the video has gone live. If you go within the .gitignore, just to get this out of the way, you can put in your node_modules so you're not pushing that to GitHub. Once that's all set up, we can get started within our index.js.

The first thing that we're going to do is import those modules that we just installed: axios, cheerio, and express. Axios is what we use to make requests, so this is what we're going to actually use to get the data from websites. Cheerio is going to be what we use to parse the DOM elements; Cheerio has a jQuery-like syntax, so if you've ever used jQuery it's very straightforward, and you can go ahead and parse the
HTML really easily with that dollar-sign selector. Express is going to be how we set up our Node.js server.

Next, we're going to initialize our Express app, and then we're going to be defining some middleware. The reason we're setting up some middleware is to allow that OpenAI custom GPT to access the server. You can refine this if you want; say you want to lock down your POST requests, or your PATCH or DELETE or whatever, you can remove some of these things, but this is just an example of how to get started. Once we have that, we're just going to apply our CORS middleware.

From there, we're going to be setting up some really simple scraping logic. Essentially what we're going to be doing is sending in a query parameter for a URL. It's going to look something like this: our server, then the route of /scrape, and then the URL. Now, what's happening with that custom GPT, like I'd shown you, is that it's essentially asking the model to give us a URL. So when I said show me the results for Hacker News, I'm asking it, with the schema that I defined, to give me an https URL, essentially, which we'll get into a little bit further, but that's just a bit of an aside.

Next, we're going to simply do a GET request for the information on the website. One caveat with this is that since we're just doing a simple GET request for the raw HTML, if you have something like a single-page application it might struggle with that, but I do plan on making some content with Puppeteer on potential ways you can get around this type of issue if you run into it on certain websites. Then we're just going to be doing a simple parsing of the HTML; this is our Cheerio. I just threw in a few things here to clean up the data: you really likely don't want script tags, style tags, or images, and even the nav and footer you don't
really necessarily need, especially in this use case. The other thing with custom GPTs is that you have a limited context window, so I just put in a crude stop here. I didn't test this to see how high I could potentially go, but you could play around with it a little bit further. And then, right before the response, you can set up some logic to parse it further, set something up with LangChain potentially, or reach out to other services that you might have; you can do all sorts of things with this setup. So hopefully this works as a setup for not just a proxy server necessarily: if you're a Node.js developer, you should be able to take what I'm showing you, tweak it, and set up whatever you want, really.

Once that's set up, you can set up a Git repository. The service that we're going to be using is Render. Render is really great because all you need to do to set up a Node.js server is link up your GitHub repository. You can sign up for a free account on Render, and once you've set up your GitHub repository you can just connect it here. All you need to do to set it up is create a unique name; you can keep the region the same and select main as your branch. For the actual build command, I'm going to be using npm, so you can just `npm install` all of the different dependencies. Then for the start command, depending on what you put within your package.json, you could have something like `npm run dev`, or you could have something just like `node index.js`. Then you can go ahead and select their free tier. Once that's done, you can just go ahead and create the web service (I'll just do a test here), and it's going to go ahead and start building and deploying that server for you. Once it's set up, you can access your server from the link here, and then you're essentially off to the races. I'm just going to leave this running in the background. Again, you'll be able to reach all this code in the GitHub repository; I think it's actually
already public and live, so if you want it you can go ahead and pull it down.

Now, the portion where we actually set up the custom GPT is relatively straightforward. What we're going to do is go into edit GPT here. To actually create the GPT, if you haven't used the GPT Builder before, you can use natural language to build it: you can specify some things about what you'd like to build. I find it's actually really helpful to do the initial prompt here to get some of this configuration out of the way; it will give you some conversation starters, instructions, and that sort of stuff, and it might even give you a little DALL·E logo like you see here. So I have a couple of examples. I can say, what is the Wikipedia on worms about? The other thing with this is that you don't necessarily need to just hit a root domain, right? So if I say what is the Wikipedia on worms about, it's going to get that URL path from the GPT knowledge base, which it has, then it's going to get the response from the proxy server and start to give me information from that live Wikipedia page.

To set up the action, what you'll have to do is create a new action. I'm going to be going in and just clicking edit here, and all that we have to do is essentially set up this JSON schema. What we're doing here is mapping, with natural language, what our GPT action does, as well as things like the parameter that we're using (the URL parameter) and the description of what that parameter is. The syntax is a little bit ugly and it does take a little bit of time to get used to, but the more you play around with it, the more intuitive it becomes. Once you've done that, you can go ahead and save it, and that's pretty much it: you have your own custom proxy server.

So that's it for this video. Hopefully you found it useful. If you did, please like, comment, share, and subscribe, and otherwise, until the next one.
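The action schema mentioned above uses OpenAI's OpenAPI-based format for custom GPT actions. Here is an illustrative sketch under the same assumptions as the server sketch (a `/scrape` route with a `url` query parameter); the server URL and `operationId` are placeholders, not values from the video.

```json
{
  "openapi": "3.1.0",
  "info": {
    "title": "Website Proxy",
    "description": "Fetches and returns the cleaned text content of a web page.",
    "version": "v1.0.0"
  },
  "servers": [
    { "url": "https://your-app-name.onrender.com" }
  ],
  "paths": {
    "/scrape": {
      "get": {
        "operationId": "scrapeWebsite",
        "summary": "Scrape the text content of a URL",
        "parameters": [
          {
            "name": "url",
            "in": "query",
            "required": true,
            "description": "The full https URL of the page to fetch, e.g. https://news.ycombinator.com",
            "schema": { "type": "string" }
          }
        ]
      }
    }
  }
}
```

The natural-language `description` fields are what steer the model: they tell the GPT when to call the action and what kind of URL to supply.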