
Check out Memex: https://memex.tech/?utm_source=cc&utm_medium=dg&utm_campaign=dg1&utm_content=youtube

In this video, we explore Memex, a versatile platform for building full-stack web applications, web servers, MCP servers, and even mobile apps. The demonstration walks you through the creation of a sleek, dark mode landing page for developers using Next.js. Unique features of Memex are highlighted, including chat mode for coding queries, agent settings for custom instructions, long context from Gemini models, and the ability to restore to checkpoints. We discuss Memex's seamless integration with various deployment platforms and its support for multiple operating systems. This video is ideal for both technical and non-technical users looking to streamline their project development process.

00:00 Introduction to Memex
00:33 Live Build Demonstration
01:03 Exploring Build and Chat Modes
01:29 Agent Settings and Custom Instructions
02:09 Leveraging Gemini Models
03:04 Starting the Project
04:03 Building a Blog Section
04:32 Multimodal Support and Model Selection
06:50 Interactive Terminal and Hero Component
07:50 Restorative Checkpoints and Reliability
08:51 Secret Management and Platform Availability
09:59 Project Management and Templates
11:19 Deploying to Cloudflare
12:10 Conclusion and Call to Action
---
type: transcript
date: 2025-07-09
youtube_id: UWxVUc8yIjY
---

# Transcript: Building Full-Stack Web Applications with Memex: An In-Depth Guide

In this video, I'm going to be showing you Memex, which is a platform that allows you to build out full-stack web applications, web servers, MCP servers, even mobile apps. You can think of Memex as if Claude Code and Claude Desktop were rolled into one application that supports both Anthropic models and Gemini models. In this video, what I'm going to show you is a live build, and as I go through the building process, I'm also going to point out a number of the unique features that are built into Memex. To demonstrate the platform, I'm going to prompt it with: "Create a sleek dark mode landing page for Developers Digest tools with animated code elements, vibrant gradient accents, and interactive tool showcase cards that highlight curated developer resources. Include a hero section with bold typography, featured tools and categories, plus engaging hover effects and micro animations that make it feel like a premium developer destination. I want to build this in Next.js." Now, within here, you have the option of build mode or chat mode. Chat mode is helpful if, say, you're building your application and you just have some questions, whether about the codebase or something you want to research with the LLM you're leveraging, without touching the codebase. Additionally, you can add in context from images, whether it's UI elements, Mermaid diagrams, or architecture diagrams. And within here you also have the agent settings. Within the agent settings we have custom instructions, so if you have any preferences, for instance about how to run particular commands, or if you want to bias the agent with any preferences you have in mind, you can include it all there.
You can also turn on automatic code execution. Say I ask for something that involves a terminal command, like creating a new Next.js project; if you have it on automatic mode, it will go through and execute those commands. You also have the option to set a maximum number of turns. This can be helpful if you want some guardrails: if you want to check up on the agent at certain intervals, you can set it to something like 25 steps, after which it will respond back so you can take a look and make sure it's still on track. Additionally, within here, you can turn on long context from Gemini. Where Gemini models are super helpful is that they have up to a million tokens of context, so sometimes, for particularly hard problems, just throwing more context at the problem can result in solving it faster. That is one great thing with Gemini models: having that million tokens of context, which you can leverage to get through some hairy situations in your codebase. You also have the option to turn thinking on and off. That's one really neat thing with the Claude models, for instance: they have hybrid reasoning, where you can turn thinking mode on or off as well as dial it up, whether it's something particularly hard you're working on or a really involved codebase. Or if you want a more balanced setting, or the task isn't overly complex, you can just turn thinking off, given that these are state-of-the-art language models. I'm going to go ahead and send this in. Now, one quick aside: we see "setting up project." The screen I just showed you is what you're greeted with when you open up Memex, and that's something I haven't seen on other platforms; it really encourages you to just start a new project.
Whereas if I were to open up Cursor, it doesn't have that default view: you'd have to create a folder, bring that folder into Cursor, initialize your project, and so on. It's just a nice ergonomic feature not to have to spin all of that up. So within here we have the starting point of our application. As the responses stream in, you can unfold all of the different code sections, and you'll also see when it's making an edit to a file. One thing you've probably noticed is that it's a chat-first application. If you want to see the code at any point, you can just click this button here and see all of the different files. If I go within source, I can see our app as well as all of the different components it created as the starting point of our application. Now that I have this, I'm going to say I want to build out a blog section. Specifically, I want to seed it with three different blog posts stored in a markdown folder. Effectively, I want some example blog posts with technical pieces: code blocks with a functional copy button, and some hello-world examples, maybe a Python example as well as a JavaScript example, within the blog posts. While this is running, I want to touch on a couple of things. Memex has taken the approach of opinionated multi-model support. Like I mentioned at the outset, they support both Anthropic and Gemini models, but at time of recording they don't explicitly allow for model selection. This actually isn't as uncommon as you might think: Cursor has said that even when you select a particular model in their system, you may be leveraging other models at various points.
You might think you're leveraging, say, Opus or Sonnet, but for agent mode, or for applying code, it touches different models depending on what it's doing. The way Memex is set up, it leverages each model for its particular strengths, and you don't need to worry about any of that orchestration happening behind the scenes. With that said, one thing I do want to note: if you're using the default mode, just like I demonstrated here, you are going to be leveraging the Claude models. But if you select that long-context Gemini mode, you're obviously going to be leveraging Gemini. Now I see it's gone through, and we have this new page here for our blog. If I click through to the blog, we have this hello-world example where we can copy that bit of code. And if I go to another post here, we have a little bit of Python, and again, I can copy that piece. Now I'm going to say I also want a link to the blog within the navigation. Now we have the navigation item for the blog here. If I hop back over to the code, within here we have this posts folder, and this is where all of those different markdown posts live, with the code blocks here. Now that you have an idea of what it can accomplish, I'll go through some features of the platform. Within here, you can see how much context you're using at any given time. This can be super helpful, especially as your projects get larger and you're deciding whether to switch to something like the Gemini model. You can see how many credits were used within this conversation, and additionally you can see everything within your billing period here as well. Another thing to know about the platform is that they optimize the agent both for editing files, so coding, and for actually running terminal commands like you saw.
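To make the markdown-folder blog setup above concrete, here is a minimal sketch of how posts like these are typically loaded. The `posts/` folder name, the `title:` frontmatter field, and the `parsePost` helper are assumptions based on what the video shows; a real Next.js blog would usually reach for a library like gray-matter instead:

```typescript
// Minimal sketch of parsing a markdown post with "---"-delimited
// frontmatter, as generated for the blog section in the video.
// parsePost is a hypothetical helper, not a Memex API.

interface Post {
  title: string;
  body: string;
}

// Split a "---\ntitle: ...\n---\nbody" document into title and body.
function parsePost(raw: string): Post {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return { title: "Untitled", body: raw };
  const titleLine = match[1]
    .split("\n")
    .find((line) => line.startsWith("title:"));
  const title = titleLine
    ? titleLine.slice("title:".length).trim()
    : "Untitled";
  return { title, body: match[2].trim() };
}

const raw = `---
title: Hello World in Python
---
print("hello world")`;

const post = parsePost(raw);
console.log(post.title); // "Hello World in Python"
```

From here, a page component would map each parsed post to a route and render the body, with the copy button reading the raw code block text.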
Another cool thing about the platform is the interactive terminal session management. That allows you to actually spin up and iterate on whatever is happening within the terminal. That's something a lot of other programs struggle with: they might try to pass in a terminal command with all of the arguments bundled into one. A lot of CLI-based tools, even something as simple as create-next-app, will accept all of their arguments in a single command, but much of the time people want to iterate through the prompts instead. So being able to actually have that control over the terminal can certainly be useful. Now, I'm going to send in another prompt as I go through some other aspects of the platform: I want to add a reusable hero component to all of the pages. Specifically, I want the hero component to have an animated background with a number of different colors and linear gradients, and I want the overlaying text, in white, to read the title of each respective page. Another great feature of Memex is that you can restore to checkpoints. This can be helpful both to developers and to less technical people who don't know how to leverage something like GitHub. As a programmer myself, I definitely find these restore-to-checkpoint features helpful, especially when iterating or trying out an idea. It's always good practice to have version control, and especially with AI-generated code, having the ability to restore to a certain checkpoint, where it reverts all of the changes across all of the affected files, is a really useful feature. The other thing I want to note about the platform is that it's really optimized for reliability over speed or cost.
It's really built to get you the best results, with a code-last approach in mind. It's designed to be easy to use even for less technical users, but with that said, it's definitely pound-for-pound competitive with tools like Claude Code. Like I mentioned, it leverages the same state-of-the-art models that Claude Code does under the hood for what it's building. Another thing to know about the platform is the built-in secret management. Where this is great is if you're constantly porting different environment variables from one project to another, something like your OpenAI API key or various other API keys. I know at work I probably port a handful of API keys across a ton of different projects. What's nice here is that you have a centralized place to store all of your secrets. The other great thing about Memex is that it's available on Mac, Linux, and Windows. I primarily use Mac, but I also have a Linux machine, so having this across both platforms is definitely nice to see. Getting started is super straightforward: you can just make an account and download the respective version, and they have a free tier where you can try all of this out. Memex also has MCP support. You can go within their server directory, where there's a handful of MCP servers you can easily install, things like GitHub, Slack, Netlify, Playwright, Neon, Context7, as well as a handful of others. And of course, you can add your own custom server if you want to leverage the MCP capability. Now, just to show you a handful of other things within the platform: on the left sidebar here, if you click this history icon, you can see all of the different projects you've been working on. You can hop into them, start up the server, and iterate on them directly, just one click from the sidebar.
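The centralized secrets described above still reach your code as ordinary environment variables. Here is a minimal sketch of the consuming side; `requireEnv` is a hypothetical helper for illustration, not part of any Memex API:

```typescript
// Sketch of consuming centrally managed secrets: the platform injects
// them as environment variables, and the app fails fast if one is missing.
// requireEnv is a hypothetical helper, not a Memex API.

function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Validate at startup rather than deep inside a request handler.
process.env.OPENAI_API_KEY = "sk-example"; // normally injected, not hardcoded
const apiKey = requireEnv("OPENAI_API_KEY");
console.log(apiKey); // "sk-example"
```

Failing fast like this pairs well with a centralized secret store: a project that's missing a key reports it on the first run instead of failing mid-request.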
Additionally, you have a projects view where you can see how many credits were used on each project, as well as how many user turns it took to actually get the desired result. That's helpful for getting an idea of how many credits each particular kind of task might need. Another thing I encourage you to check out are the templates. For instance, if I go over to the Cloudflare MCP template and click the setup-and-deploy button, it will actually build out an MCP server for me. The great thing here is that there are a number of deployment templates across a number of providers, whether it's Netlify, Cloudflare, or Render. Within here, we'll see the user wants to set up and deploy a Cloudflare MCP server boilerplate with GitHub authentication as well as Stripe billing, and it will autonomously go through all of the different steps. When it needs consent, say for deploying to Cloudflare, it will even open up that window for you. I'll go and click allow, hop back to Memex, and we see: great, now let's create the KV namespace. And it's going to run all of those different commands to actually deploy this server live to Cloudflare. That's where this is really powerful: you can go from natural language to a deployed MCP server or application, all within the interface, potentially without even having to touch a line of code, which is pretty amazing to consider. The really great thing about having all of this within a single thread is that the AI model has the context of what the code is doing, but also what the deployment instructions are doing, as well as the relevant steps it can walk us through for different services we might have to manually reach for.
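For context on the KV-namespace step the agent runs above: a Cloudflare Worker declares its KV binding in `wrangler.toml`, roughly like this. The binding name and id here are placeholders, not values from the video:

```toml
# Placeholder binding; `wrangler kv namespace create <NAME>` prints the real id.
[[kv_namespaces]]
binding = "OAUTH_KV"
id = "<id printed by wrangler>"
```

The agent's job in this step is essentially to run that create command and wire the printed id into this config before deploying.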
For things like the GitHub OAuth steps, it gives us very detailed and clear instructions on where to find those things, as well as how to get the proper environment variables from Stripe. But overall, that's pretty much it for this video. I just wanted to show you a really quick one on Memex. I encourage you to try out the platform. Otherwise, if you found this video useful, please comment, share, and subscribe.