I've been working on Fast LiteLLM - a Rust acceleration layer for the popular LiteLLM library - and I learned some lessons that might resonate with other developers trying to squeeze performance out of existing systems.

My assumption was that LiteLLM, being a Python library, would h...
Stories by ticktockten
Building AI agents with LangGraph, I noticed graph invocations were slow even before hitting the LLM. Dug into the Pregel execution engine to find out why.

THE PROBLEM

Profiled my LangGraph agents. 50-100ms per invocation, most of it not the LLM. Found two culprits:

1. ThreadPoolExecutor creat...
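The executor-churn cost hinted at above is easy to reproduce without LangGraph. A minimal stdlib sketch (illustrative, not the Pregel engine's actual code) compares constructing a fresh ThreadPoolExecutor on every invocation against reusing one long-lived pool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(x):
    return x * 2

def fresh_executor_calls(n):
    # Anti-pattern: pay thread startup/teardown on every invocation.
    start = time.perf_counter()
    for i in range(n):
        with ThreadPoolExecutor(max_workers=8) as pool:
            pool.submit(task, i).result()
    return time.perf_counter() - start

def reused_executor_calls(n, pool):
    # Reusing one long-lived pool amortizes thread creation.
    start = time.perf_counter()
    for i in range(n):
        pool.submit(task, i).result()
    return time.perf_counter() - start

if __name__ == "__main__":
    N = 200
    fresh = fresh_executor_calls(N)
    with ThreadPoolExecutor(max_workers=8) as pool:
        reused = reused_executor_calls(N, pool)
    print(f"fresh: {fresh:.3f}s  reused: {reused:.3f}s")
```

On a typical machine the reused pool wins by a wide margin, which is why hoisting executor creation out of the per-invocation path is the usual fix.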
I built Rust extensions for Axolotl that dramatically speed up data loading and preprocessing for LLM fine-tuning.

The problem: Python data pipelines become the bottleneck when fine-tuning large models. Your GPUs sit idle waiting for data.

The solution: Drop-in Rust acceleration. One import line...
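The GPU-starvation pattern described above can be made concrete with a toy producer/consumer, where a slow Python preprocessing step leaves a fast consumer (standing in for the GPU) waiting. All names and timings here are illustrative, not Axolotl's API. Note that `time.sleep` releases the GIL, so adding threads helps in this toy; real CPU-bound Python preprocessing holds the GIL, which is exactly why moving it into GIL-releasing Rust pays off:

```python
import time
import queue
import threading

def slow_preprocess(i):
    # Stand-in for Python-side tokenization/augmentation.
    time.sleep(0.01)
    return i

def gpu_step(batch):
    # Stand-in for a fast GPU forward/backward pass.
    time.sleep(0.002)

def train(num_batches, num_loader_threads):
    """Return total seconds the 'GPU' spent idle, waiting on data."""
    q = queue.Queue(maxsize=8)

    def loader(ids):
        for i in ids:
            q.put(slow_preprocess(i))

    ids = list(range(num_batches))
    chunks = [ids[t::num_loader_threads] for t in range(num_loader_threads)]
    threads = [threading.Thread(target=loader, args=(c,)) for c in chunks]
    for t in threads:
        t.start()

    idle = 0.0
    for _ in range(num_batches):
        t0 = time.perf_counter()
        batch = q.get()  # the "GPU" stalls here when loaders can't keep up
        idle += time.perf_counter() - t0
        gpu_step(batch)

    for t in threads:
        t.join()
    return idle

single = train(50, 1)
parallel = train(50, 8)
print(f"GPU idle: 1 loader {single:.2f}s, 8 loaders {parallel:.2f}s")
```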
Hey HN,
I built SolScript, a compiler that lets you write smart contracts in Solidity syntax and deploy them to Solana.

The problem: Solana has mass dev interest (17k+ active developers in 2025), but the Rust learning curve remains a 3-6 month barrier. Anchor helps, but you still need to grok owne...
I built FastWorker after getting tired of deploying Celery + Redis for simple background tasks in FastAPI apps. Every time I needed to offload work from API requests, I had to manage 4-6 separate services. For small projects, this felt like overkill.

FastWorker is a brokerless task queue requiring...
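The brokerless idea can be sketched in a few lines of stdlib Python: an in-process queue plus a worker thread, so a request handler enqueues work and returns immediately, with no Redis and no separate worker service. This is a sketch of the concept under those assumptions, not FastWorker's actual API:

```python
import queue
import threading

class InProcessTaskQueue:
    """Minimal brokerless task queue: tasks live inside the app process."""

    def __init__(self):
        self._q = queue.Queue()
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def enqueue(self, fn, *args, **kwargs):
        # Returns immediately; the worker thread executes fn later.
        self._q.put((fn, args, kwargs))

    def _run(self):
        while True:
            fn, args, kwargs = self._q.get()
            try:
                fn(*args, **kwargs)  # executed off the request path
            finally:
                self._q.task_done()

    def join(self):
        # Block until all enqueued tasks have finished (useful in tests).
        self._q.join()

# Usage: inside a request handler you'd call tasks.enqueue(send_email, to=...)
tasks = InProcessTaskQueue()
results = []
tasks.enqueue(results.append, "welcome-email-sent")
tasks.join()
print(results)  # → ['welcome-email-sent']
```

The trade-off versus Celery is durability: tasks die with the process, which is exactly the compromise a small project may be happy to make.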