
In this video, I showcase an application I built that uses generative AI to create custom APIs. I'll guide you through its configuration, functionality, and underlying technology. You can define specific data needs, select scraping sources like Yahoo Finance, and validate the extracted information in real time. The app is open source, free to use, and supports flexible deployment options, with potential for automatic data re-scraping via cron jobs. For those interested, the repo includes API key setup instructions for Firecrawl, Serper, OpenAI, and Upstash.

Check out Firecrawl to get 500,000 tokens for free for a limited time: https://www.firecrawl.dev

⭐ https://github.com/developersdigest/llm-api-engine

00:00 Introduction to the Application
00:19 Defining API Tasks
00:54 Extracting Data from Sources
02:12 Deploying the Endpoint
03:00 Future Enhancements and Cron Jobs
04:21 Getting Started and API Keys
04:45 Conclusion and Next Steps
---
type: transcript
date: 2025-01-28
youtube_id: 8kUeK1Bo4mM
---

# Transcript: LLM API Engine: Describe API, Extract Sources with Firecrawl, Deploy in Seconds 🔥

In this video I'm going to be showing you an application that I built that I thought was an interesting use case of generative AI. I'll walk you through what it looks like, and then I'll show you a little bit about how to set it up, how it works, and the underlying technology. The great thing is I'm open-sourcing it, and you can get it from the description of the video if you're interested.

The first thing you can do is define the different things you'd like the API to do. If I say I want Nvidia's market cap, trading price, and trading volume, the first thing we'll get back is a structured schema. This comes from OpenAI structured outputs, and there are a couple of different pieces within this: if you don't have a valid schema, it will sort of push you in the right direction to make sure that it is in fact valid. So we want Nvidia's market cap, the volume, and the trading price, and then you can specify which ones are required or not.

Once we send that in, the next step is to select the different sources we want to scrape from. Here we see Yahoo Finance, and in this case Yahoo Finance does have all of the different information. But let's say you have a number of different values and they might not all be on one web page; you can just select which web pages you want it to extract from. If we go ahead and submit that, it goes to each of those sources and extracts all of the necessary data we need from each page. This uses Firecrawl's new extract endpoint: you send in a URL along with the desired schema you want returned, and it will parse that page and return exactly the values that you want to
serve up to your users.

Once we have all that data extracted from the sources, we'll be able to validate it here. We see that we have the volume, the trading price, and the market cap, and if I just validate that, we can see that the price is 146.92. If we look at the market cap, we can see that it's 3.6 trillion down here, and the trading volume is right there as well. They're all live numbers, and they're all accurate.

What's cool with this (and mind you, I'm a little biased, I did build this) is that you can go ahead and deploy it. You can define the endpoint you want to have: if I just call this "nvidia", I can deploy the URL and then go and actually serve it up. The neat thing is that now we have a live URL at that route, /api/results/nvidia, and you'll be able to access those results. You could deploy this endpoint to an AWS Lambda function or a Cloudflare Worker, or you could deploy it as a route within your application if you're using something like Next.js; it can really go wherever you'd like.

The way this works is a pretty simple data structure. We essentially have a hashmap where the keys are the different routes we want to specify, and the values of those keys are the payloads, like you see within here.

Now, what I plan on continuing to do with the application is to build out this cron functionality, where you'll be able to re-scrape that information at a particular frequency. In the context of market data, this is going to be changing second by second, day by day; in this case maybe we want to specify that it updates every hour. The thing is, it's set up in a way where it's going to be incredibly fast once you deploy it; it's going to work just like a hosted endpoint. Now, if this catches any momentum and people are interested, I'll set up the update frequency. I'm still sort of TBD on how I want
to do this, but effectively you can just run a cron job at an interval and grab the information you have stored within the KV result. To give you an idea of what this looks like: here is the key, so the key is "nvidia", and here is our data. In addition to that, we also have the metadata; this is what you could potentially pass in your cron job when you go and re-scrape that endpoint to update the key-value pair. Here we see what we specified at the start of the video: we have our sources, and we also have when it was last updated.

This is just a really quick implementation, more of a proof of concept. You could definitely scale it out; if you're re-scraping every 10 minutes or something, you could store every single result as a row within your Postgres database. But this so far is just the first iteration of what I've built.

To get started with this, you can pull down the repo, and effectively all you need is the API keys: you can grab a Firecrawl API key, where you'll be able to get up to 500,000 tokens for free at the time of recording; head over to Serper and get a free API key as well; get an API key from OpenAI; and finally, get a free API key from Upstash to store your results, and you'll have a working API.

Otherwise, I just wanted to show you an initial prototype of what I've been building. If there is interest in the project, I'll continue to build on it: I'll get the cron functionality set up, and I'll potentially add a sidebar with all the different routes, so you'll be able to see them, edit them, and update them, and have an overall interface where you can quickly come up with an idea of what you need for an API, set the re-scrape frequency, and then also see and update all of your routes within the interface. If you found this video useful, please like, comment, share, and subscribe. Otherwise, until the next one.
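To make the first step concrete, here is a minimal sketch of the kind of JSON Schema the app displays for the Nvidia example, plus a rough validity check like the "push you in the right direction" behavior described above. The property names (`marketCap`, `tradingPrice`, `tradingVolume`) and the helper `missingRequired` are my illustration, not necessarily what the repo or the model actually produces.

```typescript
// Illustrative sketch: the structured schema OpenAI structured outputs
// might return for "Nvidia's market cap, trading price and trading volume".
const nvidiaSchema = {
  type: "object",
  properties: {
    marketCap: { type: "string", description: "Nvidia's market capitalization" },
    tradingPrice: { type: "number", description: "Current share price in USD" },
    tradingVolume: { type: "number", description: "Shares traded today" },
  },
  // The UI lets you toggle which fields are required.
  required: ["marketCap", "tradingPrice", "tradingVolume"],
  additionalProperties: false,
};

// A rough check like the one the app performs before accepting a result:
// list any required fields that are missing from an extracted payload.
function missingRequired(
  schema: typeof nvidiaSchema,
  payload: Record<string, unknown>,
): string[] {
  return schema.required.filter((key) => !(key in payload));
}
```

A payload containing only `marketCap` would come back with `tradingPrice` and `tradingVolume` flagged as missing, nudging you toward a valid schema or a better source.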
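The extraction step (send URLs plus the desired schema, get back just the matching values) could be sketched as below. The `/v1/extract` request shape is based on Firecrawl's public documentation; treat the exact fields as assumptions and check the current API reference, and note that `buildExtractRequest`/`extract` are illustrative names, not the repo's code.

```typescript
// Build the request for Firecrawl's extract endpoint: page URLs plus the
// JSON Schema describing exactly which values we want back.
function buildExtractRequest(urls: string[], schema: object, apiKey: string) {
  return {
    url: "https://api.firecrawl.dev/v1/extract",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ urls, schema }),
    },
  };
}

// Usage (a live network call, so not executed here):
async function extract(urls: string[], schema: object, apiKey: string) {
  const { url, init } = buildExtractRequest(urls, schema, apiKey);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Firecrawl extract failed: ${res.status}`);
  // Expected to contain only the schema's fields, parsed from the pages.
  return res.json();
}
```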
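The hashmap-of-routes storage described in the video is easy to sketch. Below, a plain in-memory `Map` stands in for Upstash Redis; `deployRoute` and `serveRoute` are illustrative names, and the Nvidia numbers are example values, not live market data.

```typescript
// Storage model: each route name maps to its extracted payload plus
// metadata (the sources to re-scrape and a last-updated timestamp).
type StoredEndpoint = {
  data: Record<string, unknown>;
  metadata: { sources: string[]; lastUpdated: string };
};

// In-memory stand-in for the Upstash Redis KV store.
const kv = new Map<string, StoredEndpoint>();

// "Deploying" an endpoint is just writing a key-value pair.
function deployRoute(route: string, data: Record<string, unknown>, sources: string[]): void {
  kv.set(route, {
    data,
    metadata: { sources, lastUpdated: new Date().toISOString() },
  });
}

// What GET /api/results/<route> would serve: the stored payload, or null.
function serveRoute(route: string): Record<string, unknown> | null {
  return kv.get(route)?.data ?? null;
}

// Example values only, not live figures.
deployRoute(
  "nvidia",
  { marketCap: "3.6T", tradingPrice: 146.92, tradingVolume: 203_000_000 },
  ["https://finance.yahoo.com/quote/NVDA"],
);
```

Because a read is a single key lookup, serving the endpoint stays fast no matter how slow the original scrape was; that is what makes the deployed route feel like a normal hosted API.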
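One way the planned cron refresh could look (the video explicitly leaves the design TBD): on each tick, walk the stored routes and re-run extraction against each route's saved sources, overwriting the old payload. Everything here, including `refreshAll` and the injected `extract`/`save` functions, is a hypothetical sketch rather than the repo's implementation.

```typescript
// Per-route config kept in metadata: which pages to scrape and with
// what schema.
type RouteConfig = { sources: string[]; schema: object };

// Re-scrape every stored route. The extractor and writer are injected so
// this works with any scraper (e.g. Firecrawl) and any KV store.
async function refreshAll(
  routes: Map<string, RouteConfig>,
  extract: (urls: string[], schema: object) => Promise<Record<string, unknown>>,
  save: (route: string, data: Record<string, unknown>) => Promise<void>,
): Promise<number> {
  let refreshed = 0;
  for (const [route, cfg] of routes) {
    const fresh = await extract(cfg.sources, cfg.schema);
    await save(route, fresh);
    refreshed++;
  }
  return refreshed;
}
```

Scheduled hourly (for example, a platform cron hitting an API route), this would keep market-data endpoints reasonably fresh without ever blocking reads, since readers only ever see the last completed write.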