On Claude


I recently subscribed to Anthropic’s Claude Pro because generative AI is finally worth paying for.

I hadn't paid outright for any LLM chatbot until Claude, for three main reasons:

  1. Revolving OpenAI API credits provided sandbox access to premium models.
  2. I got Perplexity Pro free for the first year, which unlocked access to models from OpenAI, Anthropic, and more.
  3. ChatGPT's free tier was adequate in conjunction with the above.

Over the past couple of months, I kept hearing that Claude was materially better than ChatGPT. I had used it on and off over the past year, but I disliked it: Claude's site loaded slowly, broke often, or outright refused to work. That left me with a negative impression, so I stuck to the alternative AI access points listed above. Last week, however, I gave Claude another shot after reading a new blog post detailing its impressive intelligence. I asked it for help with a relatively complex feature in an app, complex both technically and from a product-design perspective.

I was surprised and impressed by the quality of the result.

Claude had gotten a lot smarter since the summer.

It’s been a few days now. Earlier today, Claude helped me find and fix a pernicious issue in my code.¹ It took some prodding to reach the root cause, but Claude was competent. Unlike with ChatGPT, I don’t get the urge to swear at the model when I get frustrated with it, maybe because it's named Claude and that personifies it.

Another tidbit from that same blog post that caught my eye was that data from Pro users is not collected; in fact, Claude’s tagline on its homepage is currently “Privacy-first AI that helps you create in confidence.” That’s an apt description of how I feel. I’m a private person, and often go so far as to obfuscate code before pasting it into ChatGPT. With Claude, however, I don’t feel that urge to redact. It's not that the code itself is top-secret, but it's still our IP. In other words, prompting Claude feels like being able to dress down in front of a guest you’re comfortable with.

I’ve been harsh on and critical of “AI” in the past. I feel that generative AI is being overhyped and that we’re wasting resources on dumb stuff, just the way we did with dumb crypto stuff. I think that’s still mostly true, and I’m not convinced LLMs can “wake up”.

But I do trust Dario Amodei.

I remember watching an interview with him around the time I wrote those harsh posts. When asked what he thought p(doom) was, he said 10-25%. That annoyed me. It just felt like here’s some guy making shit up and everyone’s listening because “AI” is the shiny new thing. Reflecting on that statement today, I actually think it’s good that there’s someone in charge who takes the threat of the singularity seriously. After all, if there’s even a 1% chance LLMs can destroy humanity, and LLMs get deeply integrated into daily life, the person in charge should maintain a high pucker factor and ensure safety is the top priority.

Before writing this post, I was skimming through Lex Fridman's recent interview with Dario. In it, Dario said something that made me like him even more: two years ago, LLMs had the intelligence of a high schooler; one year ago, an undergraduate; this year, a PhD or expert. If you believe in scaling laws, he said, you’d expect AGI in two to three years. Then he clarified that “scaling law” is a misnomer here, just as Moore’s law is a misnomer; he prefers to call them empirical regularities. I respect that intellectual honesty, which increases my trust in him, in turn in Anthropic, and in turn in Claude.

Footnotes

  1. Audio would randomly de-sync from video because some audio buffers were copied with incorrect presentation timestamps; a rough sketch of the bug shape follows below.
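For the curious, here's a minimal sketch of that bug class, assuming a hypothetical AudioBuffer type with timestamps in seconds; it's illustrative, not the actual code Claude fixed:

```swift
// Hypothetical illustration of the bug class: copying audio buffers
// while carrying the source timeline's presentation timestamps (PTS)
// into the destination timeline, so audio drifts out of sync with video.

struct AudioBuffer {
    var samples: [Float]
    var pts: Double  // presentation timestamp, in seconds
}

// Buggy copy: reuses the source PTS verbatim, stamping the copy
// for the wrong timeline.
func copyBufferBuggy(_ src: AudioBuffer) -> AudioBuffer {
    AudioBuffer(samples: src.samples, pts: src.pts)
}

// Fixed copy: rebase the PTS onto the destination timeline so audio
// and video share one clock.
func copyBuffer(_ src: AudioBuffer,
                sourceStart: Double,
                destStart: Double) -> AudioBuffer {
    AudioBuffer(samples: src.samples,
                pts: destStart + (src.pts - sourceStart))
}
```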