On Claude
I recently subscribed to Anthropic’s Claude Pro offering for $20/month.
If you told me that last year, I would’ve scoffed at you. I don’t easily part with cash—it takes a lot of activation energy to make that happen. Generative AI tools like ChatGPT are fundamentally productivity tools, so they’re well worth it if you use them regularly. Which I did, yet I still didn’t pay outright for any LLM chatbot until Claude, for three main reasons:
(1) My company buys revolving OpenAI API credits for apps, so this gives me sandbox access to their premium models.
(2) I got 1 year of Perplexity Pro for free with the purchase of my rabbit r1. This unlocked premium models from OpenAI, Anthropic, and others. (Coincidentally, I used the Claude model on Perplexity by default soon after its release.)
(3) Basic ChatGPT + Perplexity provided adequate answers most of the time.
Over the past couple of months, I kept hearing how Claude was materially better than ChatGPT. I’d used it (the free version) on and off over the year, but I didn’t get what all the fuss was about. Claude would break after a few minutes of use, or outright refuse to work. That left me with a negative impression, so I stuck to the alternative LLM access points listed above and didn’t explore further. However, last week I gave Claude another shot after reading a new blog post detailing its impressive intelligence. I asked for help with a relatively complex feature in an app (complex both technically and from a design-choice perspective). I was so impressed with the results that I decided to upgrade to Pro.
It’s been a few days now. Earlier today, Claude helped me find and fix a pernicious issue in my code. (Audio would randomly de-sync from video because some audio buffers were copied with incorrect presentation timestamps.) It took some prodding, but it’s fairly competent. Unlike ChatGPT, I don’t have the urge to swear at the model when I get frustrated with it; maybe because it’s named Claude and that personifies it. Another tidbit from that same blog post that caught my eye was about how data from Pro users is not collected—in fact, Claude’s tagline on its homepage is currently “Privacy-first AI that helps you create in confidence.” That’s an apt description of how I feel. I’m a private person, and often feel the need to obfuscate code when pasting it into ChatGPT. With Claude, however, I don’t feel the same urge to redact. It’s not that the code itself is top-secret, but it’s still our IP. In short, prompting Claude feels like being able to dress down in front of a guest you’re comfortable with.
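For the curious, the bug class is easy to illustrate. Here’s a minimal Python sketch of the pattern—the `AudioBuffer` type, field names, and frame duration are all hypothetical stand-ins, not my actual code:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AudioBuffer:
    samples: bytes
    pts: float  # presentation timestamp, in seconds

def copy_buffers_buggy(buffers):
    # Bug pattern: timestamps are regenerated from the copy index,
    # assuming a fixed frame duration. Any gap or drift in the source
    # stream is erased, so audio slowly de-syncs from video.
    frame_duration = 0.02
    return [replace(buf, pts=i * frame_duration)
            for i, buf in enumerate(buffers)]

def copy_buffers_fixed(buffers):
    # Fix: carry over each buffer's original presentation timestamp.
    return [replace(buf, pts=buf.pts) for buf in buffers]
```

With a source buffer that starts mid-stream (say at pts 0.5s), the buggy copy restamps it to 0.0s while the fixed copy keeps 0.5s—exactly the kind of silent drift that only shows up as random de-sync later.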
I’ve been harsh on and critical of “AI” in the past—maybe two or three years ago. I feel that generative AI is being overhyped and we’re wasting resources on dumb stuff, just the way we did with dumb crypto stuff. I think that’s still mostly true, and I’m not convinced that LLMs can “wake up”. But I do trust Dario Amodei. I remember watching an interview with him around the time I wrote those harsh posts. When asked what he thought p(doom) was, he said 10-25%. That annoyed me. It felt like here’s some guy making shit up and everyone’s listening because “AI” is the shiny new thing. Reflecting on that statement today, I actually think it’s good that there’s someone in charge who takes the threat of the singularity seriously. After all, if there’s even a 1% chance LLMs can destroy humanity, and LLMs get deeply integrated into daily life, the person in charge should keep a high pucker factor and make sure safety is the number one priority. Before writing this post, I was skimming through Lex Fridman’s recent interview with Dario. In it, Dario said something that made me like him even more. Two years ago, LLMs had the intelligence of a high schooler; one year ago, the intelligence of an undergraduate; this year, the intelligence of a PhD or expert. He says if you believe in scaling laws, you’d expect AGI in 2-3 years. Then he goes on to clarify that “scaling law” is a misnomer here, just as Moore’s law is a misnomer; instead, he calls them empirical regularities. I respect his intellectual honesty, which increases my trust in him, and in turn in Anthropic, and in turn in Claude.