Every Sunday, I curate the top 5 trending GitHub repositories that caught my attention this week. These aren’t just the most-starred projects—they’re the ones solving real problems, introducing innovative approaches, or gaining serious momentum in the developer community.
1. OpenScreen – Create Stunning Demos Without the Price Tag
Repository: siddharthvaddem/openscreen | TypeScript | 21,811 stars (+2,692 this week)
OpenScreen offers a fully open-source alternative to Screen Studio with no subscriptions, no watermarks, and free commercial use. It automatically adds smooth zoom effects, cursor highlighting, and background blur.
git clone https://github.com/siddharthvaddem/openscreen.git
cd openscreen && npm install && npm run dev
2. Goose – The AI Agent That Actually Does Things
Repository: block/goose | Rust | 36,362 stars (+866 this week)
Goose is an autonomous agent that can install dependencies, execute commands, edit files, run tests, and iterate on solutions. It’s extensible and works with any LLM.
curl -fsSL https://github.com/block/goose/releases/latest/download/install.sh | sh
goose configure   # interactive setup: choose your LLM provider and model
goose run -t "Add error handling to auth module"
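The core pattern behind an agent like this is a propose-execute-observe loop: ask the model for a command, run it, feed the result back, and repeat until it succeeds. A minimal Python sketch of the pattern (the fake_llm stub is hypothetical and stands in for a real provider call; Goose's actual Rust implementation is far richer):

```python
import subprocess

def fake_llm(goal: str, feedback: str) -> str:
    """Stand-in for a real model call; returns a shell command for the goal.
    (Hypothetical stub: a real agent sends the goal plus feedback to an LLM.)"""
    return "echo done: " + goal

def agent_loop(goal: str, max_steps: int = 3):
    """Propose a command, execute it, observe the result, iterate."""
    feedback = ""
    for _ in range(max_steps):
        cmd = fake_llm(goal, feedback)
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout.strip()  # success: stop iterating
        feedback = result.stderr          # failure: show the model the error
    return None
```

Real agents add file editing, test running, and safety checks on top of this loop, but the execute-and-observe core is the same.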
3. Onyx – Open Source AI Platform for Every LLM
Repository: onyx-dot-app/onyx | Python | 24,733 stars (+960 this week)
Onyx is a self-hosted AI chat platform that works with every LLM—OpenAI, Anthropic, Google, local models. It provides a polished web interface, conversation history, and document RAG features.
git clone https://github.com/onyx-dot-app/onyx.git
cd onyx && docker compose up -d
# Access at http://localhost:3000
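Stripped to its essentials, document RAG is retrieval plus prompt construction: find the passages most relevant to the question, then prepend them to the prompt. A toy sketch of that flow (keyword overlap stands in for the embedding-based vector search a production system like Onyx relies on):

```python
def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Rank documents by word overlap with the query (toy scorer)."""
    q_words = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are processed every Friday by the finance team.",
    "The VPN requires two-factor authentication to connect.",
    "Office plants are watered on Mondays.",
]
print(build_prompt("How do I connect to the VPN", docs))
```

The resulting prompt goes to whichever LLM you have configured, which is how a platform can stay model-agnostic while still answering from your documents.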
4. Google AI Edge Gallery – On-Device ML Showcase
Repository: google-ai-edge/gallery | Kotlin | 16,518 stars (+286 this week)
Google’s official showcase for on-device ML and generative AI. Demonstrates real-world use cases where models run entirely on your device—no cloud calls, no latency, complete privacy.
5. MLX-VLM – Vision Language Models on Your Mac
Repository: Blaizzy/mlx-vlm | Python | 3,800 stars (+408 this week)
MLX-VLM brings Vision Language Models to Apple Silicon using Apple's MLX framework. It supports inference and fine-tuning of models like LLaVA and Phi-3-Vision, all running locally on your Mac.
pip install mlx-vlm
from mlx_vlm import load, generate

# Fetch the model and its processor from the Hugging Face Hub
model, processor = load("llava-hf/llava-1.5-7b-hf")
# Run vision-language inference on a local image
response = generate(model, processor, image="image.jpg", prompt="Describe this")
Wrapping Up
2026 is the year of practical AI tooling. These repos show production-ready systems you can self-host, run locally, and adapt to your needs. Check them out and star the ones that align with your projects!