Microsoft's Phi-4 is an MIT-licensed 14-billion-parameter model that matches Llama 3.3 70B and Qwen 2.5 72B on key benchmarks. Here is what makes it special, how to run it locally, and why small language models are increasingly practical for real development work.
Meta surprised the AI community with Llama 3.3, a 70-billion-parameter model that delivers 405B-class performance at a fraction of the cost. Here is what the benchmarks show, where to run it, and why this release matters for developers building with open-source models.
