Based on a viral observation: caveman-speak dramatically reduces LLM token usage without losing technical substance.
When you prompt Claude with caveman-style constraints (drop articles, kill pleasantries, eliminate hedging), the model produces responses that are identical in technical substance but radically shorter.
So we made it a one-line install for Claude Code.
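The skill's actual system prompt isn't published on this page; as a rough illustration only, the constraints above could be encoded as a prompt preamble like this (the rule wording and the `caveman_prompt` helper are hypothetical, not the skill's real implementation):

```python
# Hypothetical sketch: encode caveman-style constraints as a prompt preamble.
# The rule text below paraphrases this page, not the skill's actual prompt.
CAVEMAN_RULES = "\n".join([
    "Drop articles (a, an, the) where meaning survives.",
    "No pleasantries, no apologies, no offers of further help.",
    "No hedging unless uncertainty is real.",
    "Keep code blocks, commands, and error messages verbatim.",
])

def caveman_prompt(task: str) -> str:
    """Prepend the style constraints to a task prompt."""
    return f"{CAVEMAN_RULES}\n\nTask: {task}"
```

Prepending a constraint block like this is the generic pattern; the installed skill packages its own version so you never paste rules by hand.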
// These waste tokens every single call:
"I'd be happy to help you with that"        // +8 tokens
"The reason this is happening is because"   // +7 tokens
"I would recommend that you consider"       // +7 tokens
"Sure, let me take a look at that for you"  // +10 tokens

// Caveman says what needs saying. Then stops.
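To see where the savings come from, here is a back-of-envelope comparison of a verbose reply against its caveman equivalent. Whitespace word count is used as a crude proxy for tokens (real tokenizers split text differently), and the sample sentences are illustrative:

```python
# Crude illustration: word count as a stand-in for tokens.
def approx_tokens(text: str) -> int:
    return len(text.split())

verbose = ("I'd be happy to help you with that. The reason this is "
           "happening is because the cache is stale.")
caveman = "Cache is stale."

saved = 1 - approx_tokens(caveman) / approx_tokens(verbose)
print(f"~{saved:.0%} shorter")  # prints "~84% shorter"
```

The exact ratio varies by response, but filler phrases dominate short replies, which is why the savings compound across a long session.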
Everything technically important is preserved verbatim: code blocks, technical terminology, error messages, command syntax, git commits, PR descriptions.
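One way to picture the "preserved verbatim" rule: split the text on fenced code blocks and strip filler only from the prose segments, never from the code. This is a sketch of the general technique, not the skill's code, and the filler-phrase list is a made-up example:

```python
import re

# Example filler phrases, not the skill's actual list.
FILLER = re.compile(
    r"\b(I'd be happy to help( you)?( with that)?"
    r"|I would recommend that you consider)\b[,.]?\s*",
    re.I,
)

def compress(text: str) -> str:
    # The capturing group makes re.split keep the fenced blocks as
    # their own segments, so only prose segments get rewritten.
    parts = re.split(r"(```.*?```)", text, flags=re.S)
    return "".join(
        p if p.startswith("```") else FILLER.sub("", p) for p in parts
    )

reply = "I'd be happy to help you with that. Run this:\n```\npip install x\n```\nDone."
```

Here `compress(reply)` drops the pleasantry but leaves the `pip install x` block byte-for-byte intact.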
"If removing it doesn't change what you'd do next, it's gone."
Caveman was created by JuliusBrussee. Open source, MIT licensed, free forever.
Type: Claude Code Skill
Token Reduction: ~75%
License: MIT (free forever)
Author: JuliusBrussee
Version: v1.0.0
claude install-skill JuliusBrussee/caveman