
Check out Tavus: https://tavus.plug.dev/v9tXAGz

In this video, you'll discover Tavus, a powerful AI tool for creating realistic AI avatars. Learn how to build AI personas for applications like interviews, customer support, or companionship. The tutorial covers setting up personas, integrating different language models, and adding functionality such as web search. It offers a step-by-step guide to setting up a development environment, scraping websites for context, and deploying a Next.js application equipped with AI capabilities. Ideal for developers interested in leveraging AI for dynamic, user-interactive experiences.

Repo: https://github.com/developersdigest/tavus-demo

00:00 Introduction to Tavus: The Ultimate AI Development Tool
00:35 Setting Up Your AI Avatar Persona
01:21 Enhancing Your AI Avatar with Functionality
01:44 Developer Experience and Custom Persona Creation
01:53 Live Demonstration: Creating a Custom Persona
02:52 Integrating Tavus into Your Application
03:03 Advanced Features: Script to Video and Audio Upload
03:32 Building a Web Scraping Program with Tavus
04:21 Generating Context for Your AI Avatar
05:48 Deploying Your AI Avatar
12:52 Exploring Additional Templates and Resources
13:34 Conclusion and Final Thoughts
---
type: transcript
date: 2025-07-14
youtube_id: mG9TaTMEUy4
---

# Transcript: Building & Deploying AI Avatars of ANY Website with Tavus in Next.js

In this video, I'm going to show you Tavus, which honestly has to be one of the most impressive AI development tools I've ever used. Tavus lets you create an AI avatar that looks just like this, and build your own applications around it, whether that's for interviewing people, a support agent, a companion, whatever it might be. You'll be able to programmatically stream the video over WebRTC, along with a representation that looks just like a person. In this video, I'm going to show you how to build out an example completely from scratch, leveraging Tavus.

First off, right off the bat, you have the ability to set the persona. There are a number of stock personas you can choose from, whether that's a researcher, a coach, a history teacher, or an AI interviewer. But the cool thing is you can also create your own personas. Within a persona, you can add things like a system prompt, just as you would for an LLM. You can also add conversational context: if there are particular topics you want within the conversation, you can specify, say, that this is an interview or a support session. And you can leverage a number of different language models here, whether that's Tavus's own trained models or your own custom models; you just have to specify the model name, the base URL, and the API key. You can even add function calls, or tool calls, so your AI avatar will be able to access things like external services.
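To make the persona setup above concrete, here is a minimal sketch of building a persona request body. The endpoint path and field names follow Tavus's public API documentation at the time of writing, but treat them as assumptions and check the current reference before relying on them.

```typescript
// Sketch: building a Tavus persona request body (system prompt, optional
// conversational context). Field names are based on Tavus's docs but are
// assumptions here, not verified against the live API.

interface PersonaPayload {
  persona_name: string;
  system_prompt: string;
  context?: string;        // optional conversational context
  layers?: {
    llm?: {
      model: string;       // a Tavus-hosted model, or your own
      base_url?: string;   // only needed for a custom model endpoint
      api_key?: string;    // only needed for a custom model endpoint
      tools?: unknown[];   // optional function/tool definitions
    };
  };
}

function buildPersonaPayload(
  name: string,
  systemPrompt: string,
  context?: string
): PersonaPayload {
  return {
    persona_name: name,
    system_prompt: systemPrompt,
    // Only include context when provided, so the payload stays minimal.
    ...(context ? { context } : {}),
  };
}

// To actually create the persona, you would POST this to the Tavus API
// (commented out here because it needs a real API key):
// await fetch("https://tavusapi.com/v2/personas", {
//   method: "POST",
//   headers: {
//     "x-api-key": process.env.TAVUS_API_KEY!,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(
//     buildPersonaPayload("developers digest",
//       "You are a customer service support agent.")),
// });
```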
Say, for instance, you want to equip the avatar with the capability to look up the latest news or do a web search, or whatever the action might be; you can add all of that functionality in here as well. One of the really great things about the platform is the developer experience. It's really intuitive to get started, and you'll find all of the different pieces you need for whatever you're building. I'll create a custom persona and call it "developers digest". I'll put in a quick system prompt, just "You are a customer service support agent," and for the model, I'll specify the Tavus Llama model and create my persona. Now that I have my persona, we can grab the conversation link or join the conversation right from the UI. For instance, if I go over to the link, allow access while visiting, and turn on my microphone:

"So, I'm really excited about this software engineering role. Could you tell me a little bit more about the role, the responsibilities, the culture, all of those types of things?"

"The software engineering role is pretty cool. You'll be working on a team to design and develop new features for our product. As for responsibilities, you'll be writing code, testing, and collaborating with other engineers to solve problems. Our culture is pretty relaxed and collaborative. We have a lot of talented people, and we're all about learning from each other. What specifically are you most interested in knowing about the role?"

That's just to give you a sense of how easy it is to get set up within the playground. But say you now want to add this to your own application: you can leverage it through WebRTC and have those conversations as if it were something similar to a Zoom call, just like in the demonstration I showed you.
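For embedding the call in your own application rather than the playground, the flow is to create a conversation from a replica and a persona, then open or embed the returned conversation URL. Here is a hedged sketch; the endpoint URL and field names follow Tavus's public docs, and the IDs shown are hypothetical placeholders.

```typescript
// Sketch: building the request for Tavus's create-conversation call, kept as
// a pure function so it can be inspected without hitting the network.
// Endpoint and field names are assumptions based on Tavus's docs.

interface ConversationRequest {
  replica_id: string;
  persona_id: string;
  conversation_name?: string;
}

function buildConversationInit(apiKey: string, req: ConversationRequest) {
  return {
    method: "POST" as const,
    headers: {
      "x-api-key": apiKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(req),
  };
}

// Usage (network call commented out; the response includes a conversation_url
// you can open directly or embed via a WebRTC client in your app):
// const res = await fetch("https://tavusapi.com/v2/conversations",
//   buildConversationInit(process.env.TAVUS_API_KEY!, {
//     replica_id: "r_anna_example",      // hypothetical replica ID
//     persona_id: "p_devdigest_example", // hypothetical persona ID
//   }));
```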
But you can also convert scripts to videos. Say you have a script for a post, or whatever it might be: you can fill out the script in here. Or, alternatively, one of the really cool things is you can even upload an audio file of your own voice and then choose the AI avatar you want to read that script. Now, I'm super excited to show you what I built for getting started with Tavus. I built out a program, and I'm going to link all of it in the description of the video. It lets us run a Tavus CLI command that will crawl or scrape specific websites. Just to demonstrate: say I want it to automatically, recursively crawl a website. Let's say I put in the Vercel website, for instance. From there, you can specify the number of pages you want to scrape; say I want 25 pages, I can input that. Then we add a name for our persona, which defaults to the uppercase of the domain, without the protocol, the ".com", and all of that. The crawl just takes a number of seconds. The way this is set up, you grab a handful of API keys for the scraping (we're leveraging Firecrawl), and once it gathers all the information from the crawled pages, we instruct an LLM to create a relevant piece of context, which ultimately becomes the context we give our Tavus AI agent. Here we see it successfully scraped 25 pages and generated the knowledge base for us. Now, if I open up this file, the current-context file contains all of the pages that were included.
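The default-naming behavior described above (uppercase the domain, drop the protocol and the TLD) can be sketched as a small helper. This is a hypothetical illustration of the rule, not the repo's actual code.

```typescript
// Sketch: derive a default persona name from a site URL, per the CLI
// behavior described in the video: strip the protocol, a leading "www.",
// and the TLD, then uppercase what remains. Hypothetical helper.

function defaultPersonaName(siteUrl: string): string {
  const host = new URL(siteUrl).hostname.replace(/^www\./, "");
  const parts = host.split(".");
  // Drop the last dot-separated segment (the TLD); keep the rest.
  const name = parts.length > 1 ? parts.slice(0, -1).join(".") : host;
  return name.toUpperCase();
}
```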
If I scroll down, we see all of the different pages, in case we end up using them somewhere in our application, within the current-context JSON. This is where we have the knowledge base generated from the LLM response (in this case OpenAI, but you can use any LLM; Gemini could be a good choice if you have a particularly large set of context). There are really two main parts to concern yourself with in the current context, which is the context we feed to our Tavus agent. After scraping all those pages, we sent them to an LLM to create a nice, succinct block of context, and this is essentially the brain of the AI avatar. We also have some of the metadata generated as we went through the CLI prompts, along with the contents, in Markdown, of each of the scraped pages. With that in place, I set it up so you can just run the development server, or deploy it to Vercel with one click. I'll start the development server so we can see what this looks like. Here is our Next.js template, fully equipped with the context of the Vercel website and all of the pages we scraped. Now, if I click to join the call:

"Can you tell me a little bit about Vercel?"

"Vercel is a platform that enables developers to build, deploy, and manage fast, scalable, and secure websites and web applications. It provides a suite of tools and services that make it easy to create high-performance web experiences, including a global CDN, serverless functions, and edge computing. It also supports popular frameworks like Next.js, Gatsby, and Create React App, and offers features like automatic code optimization, traffic splitting, and A/B testing."
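The shape of that current-context file can be sketched roughly as below. The field names here are illustrative assumptions (the repo's actual keys may differ); the point is the three parts the transcript describes: the generated knowledge base, the CLI-run metadata, and the per-page Markdown.

```typescript
// Sketch: an illustrative shape for the current-context JSON described
// above. Key names are assumptions, not the repo's verified schema.

interface CrawledPage {
  url: string;
  markdown: string; // page contents converted to Markdown
}

interface CurrentContext {
  // Succinct LLM-generated context: the "brain" of the AI avatar.
  knowledgeBase: string;
  // Metadata collected while walking through the CLI prompts.
  metadata: {
    sourceUrl: string;
    pagesScraped: number;
    personaName: string;
  };
  // Raw per-page contents, kept in case the app wants them later.
  pages: CrawledPage[];
}

const example: CurrentContext = {
  knowledgeBase: "Vercel is a platform for deploying web applications...",
  metadata: {
    sourceUrl: "https://vercel.com",
    pagesScraped: 25,
    personaName: "VERCEL",
  },
  pages: [{ url: "https://vercel.com/docs", markdown: "# Docs\n..." }],
};
```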
"Vercel offers a free plan as well as several paid plans, including Pro, Business, and Enterprise. The free plan includes features like the global CDN, serverless functions, and edge computing, but has limitations on bandwidth, storage, and support. The Pro plan starts at $120/month and includes additional features like increased bandwidth, storage, and support, as well as advanced security and analytics."

"Do you know about their AI platform? I'm curious about that."

"v0 is an AI-powered code generation tool developed by Vercel. It's designed to help developers build user interface (UI) components and even entire pages using natural language prompts. Think of it as a pair programmer that can understand your design ideas and generate the code for you. With v0, you can describe your idea in natural language and the AI will generate the code and UI for your project. You can also upload design mockups and v0 will translate them into code. It supports popular front-end frameworks like React, Vue, and Svelte, and can even generate responsive design code. v0 is free to use, with additional premium plans available. It's a great tool for rapid UI prototyping and can streamline your web app development process."

Now, I want to show you exactly how you can get started with this. We're going to head over to the GitHub repo, which you'll find in the description of the video. The first thing we'll do is clone the repo: click Code and grab the clone command. I have an empty directory here, so first we pull down the repo, then we cd into the directory, and then we can npm or pnpm install everything. All right, now that everything is installed, we run pnpm with the Tavus CLI create command.
This starts the initialization process. If it's the first time you've run it, it will ask you for API keys, which it writes to the .env file. If you're going to be creating or iterating on the context you gather for your chatbot, just know that once the API keys are saved, running create again, just like you saw at the outset of this demo, won't prompt you for them each time. The first API key we reach for is Firecrawl. If you haven't used Firecrawl before, you can get started with 500 free credits, which should be more than enough to build a chatbot. We copy our API key, and once you have it, paste it in and submit. From there, we can select from two different models; I set it up with GPT-4o as well as Gemini Flash. The reason I like Gemini Flash is that it has a very large context window. Say you want to really scour a ton of different sources and you go above the context limit of GPT-4o, which I believe is 128,000 tokens; you can leverage Gemini Flash instead. And the really nice thing with Gemini is they also have a great free tier: you get a number of requests per day for free, and they're very generous for development use cases. This is a great use case for it, because all we're really doing is one LLM call to gather that initial piece of context, which is the basis for how the AI avatar responds to us. I'm going to select OpenAI, and in here you can put in your OpenAI API key, or, if you were leveraging Gemini, plug in your key there instead. Finally, to get our API key from Tavus, we just go over to API Keys and click Create New Key.
In here, I can call this key "demo application". You also have the option to whitelist specific IPs if you'd like. Then, from here, you can paste in your API key, just like that. Now that we have our API keys, I'll go over to the replica library. Within the replica library there are a ton of stock replicas; you can look through all the different AI avatars and choose one. And the really cool thing with Tavus is you can even create a replica of yourself: if you go through the process with your camera on and read through a script, you can have a visual representation of yourself as an AI avatar, equipped to do everything I'm about to show you. In this case, I'm going to copy the replica ID from Anna and put it in as my replica. Once you have all of that, this information is saved to your local file. At this point, we can select the website we want to crawl. Say we want to crawl the Vercel website again: I input the website and specify, just for demonstration's sake, 10 pages. Once you have your API keys, you have a few different options. You can crawl a website; that's helpful in the context of something like a customer support agent, where you can point it at something like a documentation page, have it recursively crawl all of those pages, and then, based on that context, send it to an LLM to get a nice, succinct block of context. One thing I do want to note: the built-in model from Tavus allows for about 16,000 tokens of context, but it's recommended to give it only about 10,000 or less. That still gives you a decent amount of context to work with, but just be mindful that as you push the outer bound of that context limit, you might run into latency issues.
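That ~16,000-token ceiling (with ~10,000 recommended) can be enforced with a rough heuristic before sending context to the persona. A common rule of thumb is about four characters per token for English text; this is a sketch using that approximation, not a real tokenizer, so leave yourself headroom.

```typescript
// Sketch: rough token budgeting for the persona context, using the common
// ~4-characters-per-token heuristic. This is an approximation, not a real
// tokenizer; the 10,000-token budget follows the recommendation above.

const RECOMMENDED_TOKENS = 10_000;
const CHARS_PER_TOKEN = 4;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

function trimToTokenBudget(text: string, maxTokens = RECOMMENDED_TOKENS): string {
  const maxChars = maxTokens * CHARS_PER_TOKEN;
  // Simple truncation; a smarter version would cut on sentence boundaries.
  return text.length <= maxChars ? text : text.slice(0, maxChars);
}
```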
So just try to keep it small and succinct; that's why I like leveraging the LLM for this step. Effectively, this lets you grab any site, so it doesn't necessarily need to be a customer service agent. Say you just want to have a conversation about some of the articles that Dario Amodei, the CEO of Anthropic, has written: I can put in that website and have it crawl all of the different pages that are in there. I'll call this one "Dario". It will crawl the website for me and ultimately create all of the relevant context. The other thing I want to note is that even though it automatically generates the knowledge base, you can go in and tweak anything within that JSON. So if there are aspects you want to exclude, or maybe more details you want to include, you can add that in. This is just a really fast way to create these things. Additionally, I want to show you that if I click to deploy this to Vercel (mind you, you do have to have the Vercel CLI installed), it will spin up a production instance: after going through all of the different prompts, it deploys to Vercel in just a number of seconds. Last but not least, I also wanted to point you toward another potential template for getting started with Tavus. They have a repo called the Tavus Vibe Code Quickstart, which you can open up in StackBlitz; this is what it looks like. You can edit all of the different component pieces, and what's nice is you can open it directly in Bolt.new if you'd like, or pull down the full repo and use it as a starting point.
As you can see, it has a really beautiful UI, equipped with all the relevant components: turning your microphone and camera on and off, as well as ending the call. This is also a really great starting point if you're interested in getting started with Tavus. That's pretty much it for this video. Kudos to the team at Tavus for what they've built; it's an incredibly powerful technology, and I was definitely blown away the first time I tried it. If you found this video useful, please star the GitHub repo, comment, share, and subscribe.