Beware the LLMs!!

TL;DR: LLMs are unreliable, but oh my, this unreliable?!?

I have been working through a beautiful (and funny!) book, A.C. Grayling's "The History of Philosophy" (2019). My main interest is the ideas rather than the biographies, but I decided to create a note for each of the major philosophers I care about, and started using Claude 3 Opus (the latest model from Anthropic, claimed to be GPT-4 level) for just the biography part. What could possibly go wrong?!?

Well, of the dozen philosophers I had Claude write short bios for, nothing seemed obviously wrong. But then, I am hardly an expert on Thales (born around 624 BC), so I wouldn't have spotted a small lie here or there.

So I asked it to write a biographical note for someone I know well: me. The results were horrendous. I report them in full detail (with corrections) here: https://amahabal.substack.com/p/dissecting-the-people-make-errors (as part of my response to the larger objection, "but people make errors, too!").

Beware!

Comments

  • LLMs are not there to replace humans, so we shouldn't compare them. I don't think the results are bad, from a writer's perspective.

    my first Zettel uid: 202008120915

  • It seems a lot of the issues with LLMs are the same as with _any_ computer data. Garbage in, garbage out. If the data set the program works from is too large and poorly curated, you end up with the kind of junk we see these things spitting out.
