
In this video, I'll teach you how to create a feature similar to Anthropic's Claude Artifacts. I'll explain how to stream artifacts to the right side of the screen while completing responses on the left, along with toggling views, copying, and downloading outputs. I'll demonstrate a working snake game and cover the web dev setup (HTML, JavaScript, CSS), SVG rendering, and backend compatibility. I outline how the system works without any function calling and discuss prompt engineering insights. You'll see code walkthroughs, streaming logic, artifact rendering, React components, and mermaid graph handling. Join me to explore artifact rendering and other neat features. Repo will be updated shortly: https://git.new/answr ⭐

00:00 Introduction to the Anthropic Claude Artifacts Feature
00:33 Demonstration of the Artifact Feature
00:48 Setting Up the Development Environment
01:02 Exploring Different Artifact Types
01:33 Code Walkthrough and GitHub Repository
03:02 Token Streaming and UI Integration
04:01 Rendering Artifacts and Handling XML Tags
07:47 Final Thoughts and Next Steps
---
type: transcript
date: 2024-06-24
youtube_id: 8ICRr4jPLEc
---

# Transcript: Building Anthropic Claude 3.5 Sonnet Artifacts: Step-by-Step Tutorial

In this video I'm going to be showing you how to build out something similar to the Anthropic Claude Artifacts feature that was just released last week. You'll see on the screen what it will do: as soon as it determines that it's going to be using an artifact, it streams that artifact out on the right-hand side of the screen, and once it's complete, it finishes up the response on the left-hand side. Finally, once the response is done, it selects the visual view. You'll be able to toggle this back and forth, copy the output, or download the output as well.

Just to show you, this is a working game of snake, and you can play around with it. With the screen shifting a little bit, you might have to add something to disable the keyboard event to make this a little smoother, but this is just to give you a general idea of how this could work.

Right now I have the web dev portion set up (the HTML, JavaScript, and CSS), and I also have the ability to render SVGs, so I'll demonstrate that. If I say "render an SVG of a crab," you'll see it's going to give us that SVG. There's something unique about the way the artifacts work: I found that they fall into a few buckets. There's the web dev portion, where it renders the websites or games you're seeing online; there's the SVG portion; and there's the React component feature. I don't have the React component feature quite figured out yet, as it's just going to take a little more time, and there's also a really nice graph and chart feature in Claude that I haven't quite figured out either. I'm going to run through the code relatively quickly, and I'm going to throw this up on GitHub.
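One way to handle that keyboard shifting, sketched here under my own assumptions rather than taken from the repo, is to call `preventDefault` on the movement keys so they stop scrolling the page while the game is focused:

```typescript
// Sketch: stop arrow keys (and space) from scrolling the page while the
// snake game has focus. The key set and listener placement are assumptions,
// not the repo's actual code.
const GAME_KEYS = new Set(["ArrowUp", "ArrowDown", "ArrowLeft", "ArrowRight", " "]);

function isGameKey(key: string): boolean {
  return GAME_KEYS.has(key);
}

// In the browser, attach it once:
// window.addEventListener("keydown", (e) => {
//   if (isGameKey(e.key)) e.preventDefault();
// });
```

You could scope the listener to the artifact iframe instead of `window` if you only want to suppress scrolling while the game pane is active.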
You can go ahead and look at it, play around with it, and do whatever you want with it. The way I set this up, you'll be able to take this component and put it wherever you want: as long as you have an API that supports streaming tokens, you can set this up with any backend. You can set it up with Go, Node.js, Python, or whatever you want; it doesn't really matter, as long as you're able to stream in those tokens.

It's set up in a way where it's actually not using any function calling. This is all from the system message, which I do want to touch on. There was a Twitter thread I saw that got the system prompt from Claude, and this is what gave a lot of clues in terms of how I built this out. It gives you a list of all of the different capabilities built into the UX and gives you some ideas on how it actually works. I encourage you to read through it; it is really impressive in a few ways. The prompt engineering is really masterful, and it works exceptionally well, as we've seen if you've played around with Claude. The other impressive thing is that the model actually follows this very well. I tested it with Llama 70B and it did not work; it really struggled quite a bit with something like this. I didn't try it with GPT-4o, which is something I'm planning on doing, but you could definitely swap that in with this example as well.

We have the skeleton loader that you saw, which just loads at the top of the screen; we have our interface set up for some of the different values we're going to be using; and then we have everything within our main component. What it's doing is streaming all of those tokens into the front-end component. The way you can think about this is like a Plinko board: we're dropping each token in, it's going to land in a particular spot, and we're placing pegs at points along the way to direct each token where we want it.
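The interface for those values might look something like the sketch below. These field names are illustrative assumptions, not necessarily the ones in the repo:

```typescript
// Illustrative sketch of the state a streaming artifact component tracks.
// Field names are assumptions for the sake of the example.
type ArtifactType = "html" | "svg" | "react" | "mermaid";

interface ArtifactState {
  response: string;      // left-hand pane: the conversational reply
  artifact: string;      // right-hand pane: the streamed artifact body
  artifactType: ArtifactType;
  isStreaming: boolean;  // flips to false when the stream's end value arrives
  showRendered: boolean; // rendered view vs. raw code toggle
}

function initialState(): ArtifactState {
  return {
    response: "",
    artifact: "",
    artifactType: "html",
    isStreaming: true,
    showRendered: false,
  };
}
```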
The way I set this up depends on the XML identifier: if we know it's "developing," we route the tokens to particular spots within the UI. If a token is detected as being within the coding portion, we route it to the right-hand side of the screen; if it's just the regular response, we route it to the response pane on the left-hand side. If it's "developing" or "antThinking," that's the trigger for how we render that component. What we're doing here is waiting for all of that to stream in, and as it streams in, we put it in different places. Once the opening tag identifies that it's "developing," it starts to render the artifact, and once that tag is complete, we route the subsequent tokens to the right-hand side of the screen. That's what the main useEffect is doing: routing things where they need to go.

Then we have a useEffect to detect whether it's still streaming. When streaming is done from an LLM, you can get the end value; they're all a little different (some say "end," some have a different value), but we just wait for streaming to be done, and that's how we automatically select the rendered view of the HTML or the SVG. I'm not going to cover the renderReact portion, because that's still a work in progress. The download-artifact function is pretty self-explanatory; it can definitely be augmented a bit, since right now it just saves everything as artifact.txt.
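As a minimal sketch of that routing idea, assuming the antArtifact tag name from the leaked system prompt (the demo component uses its own identifiers), a function over the accumulated stream buffer might look like:

```typescript
// Hedged sketch of the "Plinko" routing: given the full streamed buffer
// so far, split it into the chat response (left pane) and the artifact
// body (right pane). Tag names and shape are assumptions.
interface RoutedContent {
  response: string;    // tokens for the left-hand response pane
  artifact: string;    // tokens for the right-hand artifact pane
  inArtifact: boolean; // true while the closing tag has not yet arrived
}

function routeTokens(buffer: string): RoutedContent {
  const open = buffer.match(/<antArtifact\b[^>]*>/);
  if (!open || open.index === undefined) {
    return { response: buffer, artifact: "", inArtifact: false };
  }
  const afterOpen = buffer.slice(open.index + open[0].length);
  const closeAt = afterOpen.indexOf("</antArtifact>");
  if (closeAt === -1) {
    // Still streaming the artifact: everything after the opening tag
    // goes to the right-hand pane.
    return { response: buffer.slice(0, open.index), artifact: afterOpen, inArtifact: true };
  }
  return {
    response: buffer.slice(0, open.index) + afterOpen.slice(closeAt + "</antArtifact>".length),
    artifact: afterOpen.slice(0, closeAt),
    inArtifact: false,
  };
}
```

Calling this on every incoming token keeps both panes in sync without any function calling, which mirrors the behavior described above.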
Ultimately that could become the actual file type that was generated, but for now it's just a .txt, and you'll be able to download whatever was generated within the coding pane. From there, we render the buttons for both copy-to-clipboard and download. Then we render the artifact itself, and in this case we have two different types. For HTML we use an iframe: if you inspect Anthropic's site, they're using the srcdoc attribute, and that's how they pass all of that information into the iframe within the DOM. For our SVG, we just render it directly in the DOM. If you want to render the React portion, it's a little more involved, because you have to do some transpilation with something like Babel to actually make it work, so that's a work in progress. There's also another portion where Claude renders mermaid graphs, so that's a potential extra case you could add here: if it's detected within the XML tag, choose the corresponding artifact type for how it renders within your viewer.

This is how we filter out the different portions within the content view. I use these "developing" and "digesting" tags for the YouTube channel here; what they actually are right now in production on Claude are antThinking and antArtifact. Those are the portions we essentially filter out. If it's "digesting" or antThinking, that's how we render the button we have there, and if it's the artifact itself, that's within its tag as well; we're just cleaning it up. It's a little bit of regex, but this is just to give you an overall sense of what this portion is doing: if the "digesting" tag is detected, we remove that whole XML block from start to finish.
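That filter can be sketched with a couple of regex replacements. The production tag names (antThinking, antArtifact) come from the leaked system prompt; the placeholder label and everything else here are illustrative assumptions:

```typescript
// Hedged sketch of the content-view filter: drop the "thinking" block
// entirely and swap the artifact block for a placeholder label that the
// left pane can render. Tag names follow the leaked system prompt;
// the rest is an assumption, not the repo's actual code.
function cleanContent(raw: string): string {
  return raw
    // Remove the whole thinking block, including an unterminated one mid-stream.
    .replace(/<antThinking>[\s\S]*?(<\/antThinking>|$)/g, "")
    // Replace the artifact block with a simple placeholder.
    .replace(/<antArtifact\b[^>]*>[\s\S]*?(<\/antArtifact>|$)/g, "[render artifact]")
    .trim();
}
```

Matching `$` as an alternative to the closing tag keeps the left pane clean even while a block is still streaming in.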
You can put whatever you want in its place. In this case I'm just putting "render artifact," but you could make it a nice little button; you could have the file name in there, an icon, or a loader like they do in Claude. That's just to give you an idea of how to do it. Then we render that within markdown, so if any code elements still come through in the left-hand pane, that's how we render them. Next we just have a wrapHTML function; this is just a placeholder and another area of improvement, essentially there to wrap the markdown with a viewer.

From there we just render out our views. The isolated view, if it is isolated, renders the full view like you saw on the screen; it's not really specific to the artifacts use case, it's just an answer-engine thing I have set up. If there's an LLM response, we render that out, and this is where you see the different methods calling the functions we declared above. If it's loading, we render the skeleton loader and show the Anthropic logo. And then here is whether you have the drawer open or closed; this is something you could tie some nice animation into, like they have in Claude, if you'd like. Right now I just have it open and appear on the screen.

That's pretty much it. All of the logic is about 300 lines of code so far; there will definitely be a little more to add for the React portion as well as the mermaid graphs, but I just wanted to do a quick one and get this out so you could play around with it. Let me know your thoughts, and if you have different ideas on how you would approach this, or any questions on how I built this or thought through it, leave them in the comments below. If you found this video useful, please like, comment, share, and subscribe. Otherwise, until the next one.