
---
type: transcript
date: 2023-08-17
youtube_id: 6EW3rRdGB9I
---

# Transcript: V4 of OpenAI's TypeScript/Node.js API SDK

In this video I wanted to do a quick overview of the new version of the TypeScript/Node SDK for OpenAI's API. It just came out; you can access it on GitHub and npm install it now. I'm going to run through a handful of examples from the README, as well as some others, to show you how you can incorporate this into your project.

The first thing to note is that you no longer have to specify your API key when you declare the new API client for interfacing with the OpenAI API. Before, you'd have to pass `apiKey: process.env.OPENAI_API_KEY`, etc. Now it will automatically look for that within your project: if you've deployed it, it will look in the environment variables, or if you're running it locally with something like dotenv, it's going to pick up that variable. It's a nice-to-have that cleans up the code.

The next one, which I think a lot of people are going to be excited about, is the ability to stream responses, which I'll demonstrate here. Here is a simple example where it streams the responses to the terminal, but you can imagine where this would be useful for getting that familiar ChatGPT-like feel, streaming from your back end to your front end.

Another one a lot of people are going to be excited about is the TypeScript support. This is a simple example of the typed parameters for a non-streaming request. If I change this completion to streaming, as soon as I change it you start to see the type validation, and errors come up if the arguments don't match the parameters that are mapped to that request. That's another huge feature for a lot of TypeScript developers; it's probably one of the more requested features, and needless to say it's nice that it's finally included.

The next one I wanted to show you is the new way to upload files. While this might not be something a lot of you are doing now, it might be something you consider in the near future. The reason I say that is they announced early last month that GPT-3.5 and GPT-4 will be able to be fine-tuned, and in the interim they're recommending not to fine-tune their older models, because it's seemingly imminent that you'll be able to fine-tune GPT-3.5 or GPT-4. So this is a handful of examples of ways you can upload your JSONL file. JSONL is simply your prompt and your desired completion, one pair per line. If you have a big fine-tuning file, you can upload it like you see here. There are a handful of examples in their docs and in a simple GitHub repo I'll include, so you can pull things down and run them in the terminal if you like.

Next I wanted to show an example of Whisper. Getting a transcription from an audio file has been simplified considerably in their SDK. If I run `node 5-whisper`, you can see how quickly, and with very few lines of code, you get a transcription back from an audio file. That one's pretty self-explanatory.

The last two I wanted to touch on: there is now the ability to set a timeout. You might be thinking, why do I need a timeout? If you're working within something like a Lambda function where you don't want it to run over, or just sit waiting for it to error out, or you want to stay within the limit of that function (maybe you only have 60 seconds to work with), now you can set your timeout and have that error thrown instead of the request just timing out on you. That's a nice feature baked into the SDK.

And finally, there are retries. Out of the box I believe it does two retries: if it hits the API and gets a status code it's not happy with, it tries again, and now you can specify the number of retry attempts as well. I don't think I actually had the retries in this demo, but I'll pull it up on screen. You can see the default is two, and you can specify how many retries you want if there is an issue; these might happen because of network connectivity problems or the like.

There are a handful of other things in here you can check out, such as setting up a proxy if that's something you're interested in, and there's also support for the following environments here. So I just thought I'd give you a quick overview of some of the features, the new syntax, and the nice-to-haves within the SDK. As always, if you found this video useful, please like, comment, share, and subscribe, and otherwise, until the next one.