Every week, thousands of new repositories appear on GitHub, but only a handful capture the developer community’s attention in a meaningful way. This week was no exception — from AI agent platforms that learn as you use them, to foundation models that read financial markets like a language, to a single configuration file that amassed nearly 58,000 stars. Let’s dive into the five most interesting and impactful repositories trending right now.
1. andrej-karpathy-skills — The 58K-Star Configuration File
Repository: multica-ai/andrej-karpathy-skills
Stars: 57,737 (+42,267 this week) | Language: Markdown
Sometimes the most impactful software is barely software at all. This repository contains a single CLAUDE.md file — a configuration document that dramatically improves how AI coding assistants behave. Derived from Andrej Karpathy’s widely discussed observations about LLM coding pitfalls, it encodes four foundational principles that every developer should understand.
The Four Principles
- Think Before Coding: Forces explicit reasoning before writing code. The AI must state assumptions, present multiple interpretations, and push back when requirements are ambiguous.
- Simplicity First: Write the minimum code that solves the problem. No speculative features, no premature abstractions, no unnecessary configurability.
- Surgical Changes: Touch only what you must. Don’t refactor adjacent code or “improve” things the user didn’t ask for.
- Goal-Driven Execution: Transform vague tasks into verifiable goals. “Fix the bug” becomes “Write a test that reproduces it, then make it pass.”
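To make the last principle concrete, here is a hypothetical illustration in Python; the `paginate` function and its bug are invented for this example. Instead of the vague task "fix the pagination bug", the goal becomes a failing test that then passes:

```python
# Goal-Driven Execution, illustrated: reproduce the bug with a test,
# then make the test pass. (Function and bug invented for illustration.)

def paginate(items, page_size):
    """Split items into pages of at most page_size elements."""
    # A buggy version iterating range(0, len(items) - 1, page_size)
    # would silently drop the final partial page; the test catches that.
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_paginate_keeps_last_item():
    assert paginate([1, 2, 3, 4, 5], page_size=2) == [[1, 2], [3, 4], [5]]
    assert paginate([], page_size=3) == []

test_paginate_keeps_last_item()
```

The payoff is that "done" is now verifiable: the AI (or a human reviewer) can run the test instead of arguing about whether the vague request was satisfied.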
What makes this repository remarkable is its proof that philosophy can be as impactful as complex software. A single Markdown file with clear, actionable guidelines resonated with nearly 60,000 developers. It now also supports Cursor IDE via a .cursor/rules/ directory.
# Install as a Claude Code plugin
/plugin marketplace add forrestchang/andrej-karpathy-skills
/plugin install andrej-karpathy-skills@karpathy-skills
# Or drop into your project root
curl -o CLAUDE.md https://raw.githubusercontent.com/multica-ai/andrej-karpathy-skills/main/CLAUDE.md
2. Hermes Agent — The AI Agent That Grows With You
Repository: NousResearch/hermes-agent
Stars: 99,285 (+47,053 this week) | Language: Python
Hermes Agent by Nous Research is arguably the most ambitious open-source AI agent project on GitHub right now. Its defining feature is a closed learning loop — the agent autonomously curates memory, creates reusable skills from complex tasks, and improves those skills each time they’re used. This represents a fundamental shift from stateless chat assistants to truly evolving digital collaborators.
The architecture supports 200+ models via OpenRouter, NVIDIA NIM, Hugging Face, and custom endpoints. It runs across Telegram, Discord, Slack, WhatsApp, Signal, and CLI from a single gateway process. A built-in cron scheduler enables automated workflows — daily reports, nightly backups, you name it. The agent can even spawn isolated subagents for parallel workstreams.
# Install and start
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
source ~/.bashrc
hermes # start chatting
hermes model # choose your LLM provider
hermes gateway # start messaging gateway (Telegram, Discord, etc.)
With 4,831 commits, 860 branches, and 3,600+ pull requests, Hermes Agent is one of the most actively developed AI projects in the open-source ecosystem. Its ACP (Agent Communication Protocol) adapter also hints at a future of standardized agent interoperability.
3. Kronos — A Foundation Model for Financial Markets
Repository: shiyu-coder/Kronos
Stars: 19,299 (+6,511 this week) | Language: Python | Accepted at AAAI 2026
Kronos takes a novel approach to financial time series forecasting: it treats market data as a language. Using a two-stage framework, it first compresses OHLCV candlestick data into discrete tokens via a specialized tokenizer, then trains an autoregressive Transformer on these tokens — the same architecture that powers LLMs, but adapted for the noisy, non-stationary characteristics of financial markets.
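To make the tokenization idea concrete, here is a toy sketch in pure Python. This is not Kronos’s actual tokenizer (Kronos learns its tokenization); it only shows the general move of mapping continuous price changes onto a small discrete vocabulary that a Transformer can model like text:

```python
# Toy illustration (NOT Kronos's learned tokenizer): quantize per-step
# percent returns into a fixed vocabulary of n_bins discrete token ids.

def tokenize_returns(close, n_bins=16):
    """Map closing prices to one token id per time step."""
    returns = [(b - a) / a for a, b in zip(close, close[1:])]
    lo, hi = -0.05, 0.05              # cover roughly +/-5% moves
    width = (hi - lo) / n_bins
    tokens = []
    for r in returns:
        r = min(max(r, lo), hi - 1e-12)  # clamp extreme moves into range
        tokens.append(int((r - lo) // width))
    return tokens

close = [100.0, 101.0, 99.5, 99.5, 102.0]
print(tokenize_returns(close))  # one integer token per return
```

Once market movements are tokens, next-token prediction, sampling, and all the usual LLM machinery apply directly; the real tokenizer in the repository handles full OHLCV bars, not just closes.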
The model zoo ranges from Kronos-mini (4.1M parameters) to Kronos-large (499.2M). Context length varies by model: Kronos-mini handles up to 2,048 time steps, while the small, base, and large variants use 512. The model supports probabilistic forecasting with temperature and nucleus sampling; the mini, small, and base weights are available on Hugging Face, while Kronos-large is not open-source.
from model import Kronos, KronosTokenizer, KronosPredictor

tokenizer = KronosTokenizer.from_pretrained("NeoQuasar/Kronos-Tokenizer-base")
model = Kronos.from_pretrained("NeoQuasar/Kronos-small")
predictor = KronosPredictor(model, tokenizer, max_context=512)

pred_df = predictor.predict(
    df=x_df,                  # DataFrame with OHLCV columns
    x_timestamp=x_timestamp,  # historical timestamps
    y_timestamp=y_timestamp,  # future timestamps to predict
    pred_len=120,
    T=1.0,                    # temperature for sampling
    top_p=0.9,                # nucleus sampling
    sample_count=1            # number of paths to average
)
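The T and top_p arguments behave like standard LLM decoding knobs. A minimal, dependency-free sketch of temperature scaling plus nucleus (top-p) sampling, to show the mechanism (this is not Kronos’s internal implementation):

```python
import math
import random

def sample_token(logits, T=1.0, top_p=0.9, rng=None):
    """Sample one token id via temperature-scaled softmax + nucleus filter."""
    if rng is None:
        rng = random.Random()
    # Temperature-scaled softmax (subtract max for numerical stability).
    scaled = [l / T for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus: keep the smallest set of tokens whose mass reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalize over the nucleus and sample.
    weights = [probs[i] / mass for i in kept]
    return rng.choices(kept, weights=weights)[0]

token = sample_token([2.0, 1.0, 0.5, -1.0], T=1.0, top_p=0.9)
```

Lower T sharpens the distribution toward the most likely next move; lower top_p discards the low-probability tail, which matters for noisy financial series where the tail is mostly noise.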
Its AAAI 2026 acceptance gives Kronos academic credibility alongside its practical appeal for quantitative trading. The tokenizer’s ability to create a “vocabulary” for market movements is technically elegant and opens interesting research directions.
4. claude-mem — Persistent Memory for AI Coding Assistants
Repository: thedotmack/claude-mem
Stars: 62,390 (+14,033 this week) | Language: TypeScript
AI coding assistants are powerful, but they suffer from a critical limitation: they forget everything between sessions. Claude-mem solves this with an elegant automated memory system. It captures tool usage observations during coding sessions, compresses them with AI, and injects relevant context back into future sessions — all without manual intervention.
The architecture uses a 3-layer progressive disclosure pattern that mirrors human memory: you get a summary first, then drill into specifics only when needed. This yields roughly 10x token savings compared to naively loading full context. Under the hood, it combines SQLite with FTS5 full-text search and Chroma vector database for hybrid semantic and keyword retrieval.
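The keyword half of that hybrid stack is worth seeing in miniature. SQLite’s FTS5 extension, which ships with standard Python builds, gives ranked full-text search over stored observations; the table and column names below are illustrative, not claude-mem’s actual schema, and the semantic half (Chroma) is omitted:

```python
import sqlite3

# Sketch of keyword retrieval over session observations using SQLite FTS5.
# (Illustrative schema; the vector-search half via Chroma is not shown.)
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE observations USING fts5(session, note)")
db.executemany(
    "INSERT INTO observations VALUES (?, ?)",
    [
        ("s1", "refactored the auth middleware to use JWT tokens"),
        ("s1", "added retry logic to the payment webhook handler"),
        ("s2", "fixed flaky test in the JWT expiry suite"),
    ],
)
# bm25() scores matches; lower is better in FTS5, so ascending order
# puts the best match first.
rows = db.execute(
    "SELECT session, note FROM observations WHERE observations MATCH ? "
    "ORDER BY bm25(observations)",
    ("jwt",),
).fetchall()
print(rows)  # only the two JWT-related observations survive the filter
```

Combining this keyword ranking with vector similarity from Chroma is what lets the system answer both "where did we touch JWT?" and fuzzier questions like "what did we do about login security?".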
# Install for Claude Code
npx claude-mem install
# Install for Gemini CLI
npx claude-mem install --ide gemini-cli
# Install for OpenCode
npx claude-mem install --ide opencode
# A web viewer at localhost:37777 shows captured context in real-time
With support for Claude Code, Gemini CLI, and OpenCode, plus a privacy system using <private> tags to exclude sensitive content, claude-mem has become an essential tool for developers who want their AI assistants to truly understand their codebase over time.
5. MarkItDown — Microsoft’s Document-to-Markdown Converter
Repository: microsoft/markitdown
Stars: 111,954 | Language: Python
In the age of LLMs, document format conversion became a universal problem: how do you feed PDFs, Word files, PowerPoint decks, and Excel spreadsheets into models that expect plain text? Microsoft’s MarkItDown answers this elegantly — it converts virtually any file format into Markdown, the format that LLMs natively “speak.”
The supported format list is impressive: PDF, PowerPoint, Word, Excel, Images (with EXIF and OCR), Audio (with transcription), HTML, CSV, JSON, XML, ZIP, EPub, and even YouTube URLs. The tool preserves document structure — headings, lists, tables, links — while keeping output token-efficient.
# CLI usage
markitdown report.pdf > report.md
markitdown presentation.pptx -o slides.md
# Python API
from markitdown import MarkItDown
md = MarkItDown()
result = md.convert("path-to-file.pdf")
print(result.text_content)
Recent versions added an MCP (Model Context Protocol) server for integration with Claude Desktop and other LLM applications, plus a plugin architecture that allows community extensions. The markitdown-ocr plugin uses LLM vision capabilities for text extraction from embedded images — no additional ML libraries required.
The Bigger Picture
This week’s trending repositories paint a clear picture of where the developer community’s energy is focused. Four of the five projects are directly related to AI developer tooling: making AI assistants more reliable (andrej-karpathy-skills), giving them memory (claude-mem), building autonomous agents that learn (hermes-agent), and solving the universal document-ingestion problem every LLM workflow needs (MarkItDown). The fifth, Kronos, applies foundation-model thinking to financial markets.
The common thread? The community is moving beyond using AI tools to improving how AI tools work. We’re building the infrastructure for AI-assisted development that is persistent, reliable, and genuinely useful across sessions. If you’re not exploring these tools yet, this week is a great time to start.
Sources & Verification
This post was fact-checked against primary sources on April 19, 2026. Star counts are approximate and reflect values at time of publication.
- andrej-karpathy-skills — GitHub Repository · Star count verified (~57.9k) · Cursor IDE support confirmed via .cursor/rules/ directory · Plugin install command confirmed in README
- Hermes Agent — GitHub Repository · Star count verified (~99.4k) · 4,831 commits and 860 branches confirmed · Platform support verified (README also lists Signal in addition to those mentioned) · ACP adapter present in repository
- Kronos — GitHub Repository · HuggingFace Model Weights · Star count verified (~19.3k) · AAAI 2026 acceptance confirmed in README · Note: The 2,048-step context length applies to Kronos-mini only; Kronos-small, base, and large use 512-step context · Kronos-large (499.2M params) is not open-source and weights are not available on HuggingFace
- claude-mem — GitHub Repository · Star count verified (~62.5k) · 3-layer progressive disclosure architecture confirmed · SQLite + FTS5 + Chroma stack verified · Web viewer at localhost:37777 confirmed
- MarkItDown — GitHub Repository · Star count verified (~112k) · All supported formats confirmed · MCP server and plugin architecture confirmed · markitdown-ocr plugin verified with LLM Vision-based text extraction