Inside the BPE Tokenizer: How GPT Splits Words Into Subword Units
By Javier ‘Javi’ Rodriguez
The string “SolidGoldMagikarp” is a single token in GPT-3’s r50k_base vocabulary.
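As a taste of what the article covers, here is a minimal toy sketch of the BPE merge loop. The merge table below is invented for illustration only; it is not the real GPT merge table, and a production tokenizer works on bytes with tens of thousands of learned merges.

```python
def bpe_encode(word, merges):
    """Greedily apply ranked merge rules to a character sequence.

    `merges` maps an adjacent pair of pieces to its rank; lower rank
    means the pair was learned earlier (i.e. is more frequent) and is
    merged first. This mirrors the core loop of BPE tokenization.
    """
    tokens = list(word)
    while len(tokens) > 1:
        # Find the adjacent pair with the best (lowest) merge rank.
        pairs = [(merges.get((a, b), float("inf")), i)
                 for i, (a, b) in enumerate(zip(tokens, tokens[1:]))]
        rank, i = min(pairs)
        if rank == float("inf"):
            break  # no learned merge applies to any remaining pair
        tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]
    return tokens

# Hypothetical three-rule merge table, purely for demonstration.
merges = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}

print(bpe_encode("lower", merges))  # → ['low', 'er']
```

Frequent fragments fuse into single units while rarer ones stay split, which is why a string that shows up verbatim in the training data can end up as one token.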
The Parameter Wars Are Over (And Nobody Won)
I was digging through the technical report for the latest “O-series” model update this morning—coffee in hand, dreading the inevitable API migration—and I…
The 2026 GPT Pivot: Why I’m Trading Generalists for Specialists
I remember sitting in a coffee shop last February, watching the timeline melt down over the initial GPT-4.5 rumors.
Grok 3 Hands-On: It’s Not Just Another GPT Wrapper
The Model Name Fatigue Is Real
Well, I’ll be honest – I’m tired. It’s February 2026, and if I have to memorize another model version number that looks…
Databricks & OpenAI: Finally, Data Governance That Doesn’t Suck
I usually scroll past “strategic partnership” announcements without pausing my music. You know the type: two massive tech giants shake hands, issue a…
Stop Grading Brainstorming: AI Won That Game Last Year
I still remember the faculty meeting back in late 2024 when someone confidently declared that while AI could write code, it would never have “true…
OpenAI Finally Dropped Weights. My Local Rig Is Crying.
I owe my co-worker, Dave, a steak dinner. A simple, non-wagyu steak, but a steak nonetheless. Back in 2024, I looked him in the eye and swore that…
The Plugin Ecosystem Finally Makes Sense (Mostly)
I admit it, I gave up on plugins back in ’24. They were a mess. You remember how it was. You’d enable three different travel plugins, ask for a flight to…
GPT APIs Are Finally Fixing Robot Dexterity
I Spent Years Fighting Reward Functions, and Now an API Does It for Me
I remember staring at a Python script for a reinforcement learning environment back…
Why 4-bit Quantization is Beating FP16 in 2025
Here is a number that stopped me in my tracks this morning: 44.4%. That is the HLE (Humanity’s Last Exam) score achieved by Grok 4 Heavy, a model heavily…
