
Links and Repo coming soon...
---
type: transcript
date: 2023-08-23
youtube_id: FFaDp9eNvsw
---

# Transcript: GPT-3.5 Fine-Tuning in Node.js in Just 8 Minutes! 🔥🚀

Right, in this video I'm going to be showing you how you can get started with the new GPT-3.5 Turbo fine-tuning feature in Node.js. The first thing I'm going to have you do is go over to VS Code and open up a new workspace. Once you've done that, all you need to do is `npm init -y`. Once you're in there, simply add the line `"type": "module"` to your package.json, because we're going to be using imports in this example. While you're here, you can also npm install the two dependencies that we have: `dotenv` and `openai`.

Once those are installed, you can go ahead and touch a couple of files: an `index.js` and a `.env`. While you're at it, you can also create a file like I have here, `style-and-tone.jsonl` (it can be named whatever you want). That JSONL file is what you'll use to actually fine-tune and train the model, and this is what it looks like. I just used an example from their documentation, scaled out to 10 different examples. Within the docs I think there were three, and I scaled that out to 10 because that's the minimum number of examples you need to create a fine-tuning job.

Once you have all that set up, just make sure the file has the right format. There are some good tools out there where you can check that you have no typos and that it's a valid JSONL file.

From there, we're going to go into our `index.js` and import a handful of things: `openai`, `fs`, and `dotenv`. While we're on dotenv, go over to the OpenAI API keys page and create a new key. Once you have that, go into your `.env`, add `OPENAI_API_KEY`, paste in your key there, and make sure it's saved.
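As a rough sketch of what that JSONL file can look like, here's a small script that generates it. The "Marv" system prompt is the example from OpenAI's fine-tuning docs that I mentioned; the filename and the two sample conversations are placeholders, and you'd want at least 10 real examples:

```javascript
// build-dataset.js — sketch of generating a chat-format JSONL training file.
// The example content is placeholder text based on OpenAI's docs example;
// supply at least 10 real examples of the style and tone you want.
import fs from "node:fs";

const system = "Marv is a factual chatbot that is also sarcastic.";

// Each training example is one short chat conversation.
const examples = [
  {
    user: "What's the capital of France?",
    assistant: "Paris, as if everyone doesn't know that already.",
  },
  {
    user: "Who wrote 'Romeo and Juliet'?",
    assistant: "Oh, just some guy named William Shakespeare. Ever heard of him?",
  },
];

// One JSON object per line — that's the whole JSONL format.
const lines = examples.map(({ user, assistant }) =>
  JSON.stringify({
    messages: [
      { role: "system", content: system },
      { role: "user", content: user },
      { role: "assistant", content: assistant },
    ],
  })
);

fs.writeFileSync("style-and-tone.jsonl", lines.join("\n") + "\n");

// Quick sanity check: every line should parse back as valid JSON.
const parsed = fs
  .readFileSync("style-and-tone.jsonl", "utf8")
  .trim()
  .split("\n")
  .map((line) => JSON.parse(line));
console.log(`${parsed.length} valid examples`);
```

Writing the file from code like this also doubles as the typo check, since any malformed line would throw when parsed back.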
You're good to close out all these other files, because we're not going to be using them from here on out. From here, the first thing we're going to do is upload the file. So if I just go ahead and run this... once it's run, it should be uploaded. Then there's a simple command to list out the files that are within their system, so if I comment out the upload line and run `node index.js`, I should now see that most recent version of style-and-tone. If you have the same file name and upload it multiple times, don't worry: it will create multiple instances of it. You'll see that I have other versions of that same file, each with a different timestamp.

Once you've done that, you can dive right into fine-tuning. The one thing you'll have to grab from the listed files is the ID, and then you can simply go within the fine-tuning methods. Just a little asterisk here: I'm using the OpenAI SDK, version 4 just came out last week, and fine-tuning came out today, at time of recording. So the example I'm showing here doesn't actually work for fine-tuning GPT-3.5, but I have an alternative you can use in the interim. If I just go ahead and save this, I'll show you the error I ran into: it's saying invalid base model gpt-3.5, that it must be ada, babbage, curie, or davinci. The SDK might just still be updating, and by the time you watch this it could very well be the case that this works. I'll make a repo for this, pin it in the description, and update it if there are any tweaks once the SDK is updated.

In the interim, what you can do is simply make a fetch request to the endpoint. You're going to specify that training file ID like I showed you in the terminal, specify `gpt-3.5-turbo` like I had shown you in the previous example, and set up your headers with your environment variable.
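That interim fetch approach could look roughly like this. It's a sketch, assuming the `/v1/fine_tuning/jobs` endpoint from OpenAI's API reference; the `file-abc123` ID is a placeholder you'd swap for the one from your own file listing:

```javascript
// fine-tune.js — sketch of creating a fine-tuning job with plain fetch,
// for when the SDK doesn't yet accept gpt-3.5-turbo as a base model.

// Build the request separately so it's easy to inspect before sending.
function buildFineTuneRequest(trainingFileId, apiKey) {
  return {
    url: "https://api.openai.com/v1/fine_tuning/jobs",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        training_file: trainingFileId,
        model: "gpt-3.5-turbo",
      }),
    },
  };
}

const { url, options } = buildFineTuneRequest(
  "file-abc123", // placeholder — use the ID from your file listing
  process.env.OPENAI_API_KEY ?? "sk-placeholder"
);

// Uncomment to actually kick off the job (needs a real key and file ID):
// const res = await fetch(url, options);
// console.log(await res.json());
console.log(url);
```

Keeping the request in a small builder function like this makes it trivial to log and double-check the body before you spend tokens on a job.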
You're going to define the endpoint and then simply await the response. Since I already actually fine-tuned this example, I'll just go in, get the most recent version of the file, and go ahead and fine-tune it again. If I run this now, you'll see it comes back with this response, and it's created that fine-tuning job. From there, all you have to do is wait for an email. It can take anywhere from a few minutes to a few hours depending on the size of the dataset; with the example I showed you, where it's just 10 lines, it only takes a handful of minutes, at least at time of recording. But this is later at night where I am, so I'm sure time of day and all of that factors into the queue and how many people are trying to train models on their system.

Once you get that email, you'll have a string for the model that you're going to put in and reference when you make requests from your application. Typically you might just go within the model field and say you're using `gpt-3.5-turbo`; instead it's going to give you your organization and a unique key associated with that fine-tuned model, to be able to use it. Once you've done that, you can simply go ahead and run it.

The other thing to note, which is really nice, is that once you fine-tune models you can actually find them in the playground, within the fine-tunes section, when you go to specify a model. That's really nice if you want to play around with seeing what worked and what didn't when you're fine-tuning.

Other things to note: pricing is based on the number of tokens, so there's a specific rate per token. If you paste in your JSONL, it will give you a general sense of how many tokens it could potentially be using, so that's a nice little tool to know about: a tokenizer on their website.
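Referencing the fine-tuned model from your application could look something like this, again as a plain-fetch sketch against the chat completions endpoint; the `ft:` model string is a made-up placeholder for the one you get in the email:

```javascript
// chat.js — sketch of calling a fine-tuned model by its full model string.
// "ft:gpt-3.5-turbo:my-org::abc123" is a placeholder; use the name from
// your fine-tuning completion email (or the playground's fine-tunes list).

function buildChatRequest(model, messages, apiKey) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

const { url, options } = buildChatRequest(
  "ft:gpt-3.5-turbo:my-org::abc123", // placeholder fine-tuned model name
  [{ role: "user", content: "What's the capital of France?" }],
  process.env.OPENAI_API_KEY ?? "sk-placeholder"
);

// Uncomment with a real key and model name:
// const res = await fetch(url, options);
// const data = await res.json();
// console.log(data.choices[0].message.content);
console.log(url);
```

The only change from a regular chat completion request is the model string, which is exactly why the fine-tuned model drops straight into an existing application.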
There are also JSONL validators out there, like I showed you. There's a handful of these, and I'm sure there's one you can use within VS Code to just make sure your file is valid and you don't have any typos or anything like that.

A couple of other resources (I'll pin all these in the description of the video): I'd encourage you to keep an eye on the openai-node repo on GitHub. That's their SDK, where they're updating all the new features for their Node.js and TypeScript implementation. I'll also point you to their blog post and their documentation for fine-tuning, which are all excellent. So kudos to the team at OpenAI: great job on implementing this, very clear documentation right out of the gate, and I had no issues fine-tuning the model.

So if you found this video useful, please like, comment, share, and subscribe, and otherwise, until the next one!