Zettelkasten Forum


Are AI and Zettelkasten compatible with each other?

There are many topics about the use of AI in our workflows. I initially started to write in one of them, but I think this question deserves to be considered on its own. With a well-made title...

I have very negative thoughts about the use of AI in personal knowledge processing and development.

Maybe I'll change my mind in the future, after reasoning about and contextualizing different use cases.

In my notes I've already recorded a set of "notemaking fallacies" regarding AI :smile:

  • Automatic classification fallacy
  • Automatic linking fallacy
  • Automatic summarization fallacy

The underlying principle is the same for all of them.

The effort of doing these things is tiring and takes time, but that effort is thinking itself.
It's like training. Using AI for thinking activities is like using a motorcycle while training for a marathon. You lose the benefits of the practice.

Discovering and using the Zettelkasten over the last two years, I've relearned how to think. Before that, I collected.
With AI, I fear losing this ability again and returning to a new form of shallow thinking and learning.

For me, Zettelkasten and AI are highly incompatible. At least for the "core tasks", those in which we need to do a lot of thinking.
I see the use of AI as a very dangerous risk for young students. It's very easy to misuse.

It's a strong opinion, maybe :smile: Again, I could change my mind in the future or after hearing other opinions. AI could be useful for low-level tasks.

Some people already use AI assistance for certain tasks.

What do you think? Can the right boundaries of this problem be defined?

Poll: Are AI and Zettelkasten compatible? (10 votes)
  1. yes: 40.00%
  2. no: 60.00%

Comments

  • edited March 29

    I voted yes.

    With that being said, they should not be used for writing (thinking) - I would keep doing those myself. :smile:

    But that does not mean they are not useful or are incompatible with a Zettelkasten. They are perfectly compatible.

    It's just that the features are limited at this point in time, but AI tools will soon be capable of assisting you with navigating your ZK: automatically scanning and reviewing your notes, and suggesting which ones to improve and how. They'll suggest new links between notes, explain why those links are useful, and possibly help you build and connect new ideas.

    You're still behind the wheel, but for the first time you'll have an artificial assistant that can actually go through your unstructured (in a software context, i.e. plain text with links) notes and hopefully make useful suggestions.

    That's all I hope it will be - a copilot with useful suggestions.
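
    For example, here's a rough sketch of how such link suggestions could already work today. It's purely illustrative, assuming the sentence-transformers library and a folder of plain-text notes; the model name and folder path are placeholders, not a recommendation.

    import glob
    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Read all plain-text notes from a folder (path is a placeholder).
    paths = sorted(glob.glob("zettelkasten/*.md"))
    texts = [open(p, encoding="utf-8").read() for p in paths]

    # Embed each note; normalized vectors make dot products cosine similarities.
    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model name
    vectors = model.encode(texts, normalize_embeddings=True)

    # For each note, list the three most similar other notes as link candidates.
    sims = vectors @ vectors.T
    np.fill_diagonal(sims, -1.0)  # exclude self-matches
    for i, path in enumerate(paths):
        top = np.argsort(sims[i])[::-1][:3]
        print(path, "->", [paths[j] for j in top])

    Such suggestions would still need your judgment - surface similarity is not yet a meaningful connection - but the navigation aid itself is technically within reach.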

  • edited March 29

    @andang76 said:
    It's like training. Using AI for thinking activities is like using a motorcycle while training for a marathon. You lose the benefits of the practice.

    Okay, let's put it this way: if you copy-paste AI (LLM) output into your Zettelkasten, my opinion is that it's of course wrong.

    But if you use it to explore the condensed information landscape of the whole world wide web, it will give you multiple valuable focus areas. You can then confirm your own findings in more credible sources, and combine all of that information, with its different credibility levels, to form your own approximation of the truth, which you put into a knowledge base that is updated as that approximation changes. For that, yes, I use AI (LLMs).

  • edited March 30

    The only uses I see for AI at the moment, after trying ChatGPT and Perplexity AI, are helping me answer people in a respectful way, which is cool considering I'm on the autism spectrum, and language learning, where I type my responses (like this one) and ask it to improve them, or ask very specific questions regarding grammar.

    I don’t like the summaries they give, the search results they provide and I’m not going to pay for it just to ask stupid answers like “do you hate humanity?” or generate images.

  • Yes. Compatible.
    I use genAI in a temporary side window to expand my understanding or refine my grammar (especially when I'm writing in English, which is my third language).

    David Delgado Vendrell
    www.daviddelgado.cat

  • I don't bother to whip up an LLM to do anything when I'm working in my Zettelkasten. The first-brain thinking is hard enough when I'm in a frenzy :)

    But I do see potential value in generated feedback for writing. Blog posts (based on Zettel) more than actual notes, because for those I can ask, e.g.:

    You are an expert writer and copy editor. The following is a blog post.
    
    <<EXPRESS YOUR INTENT HERE>>
    
    Please provide feedback to point out weaknesses, and how I could make the piece stronger.
    

    You do get useful feedback when you ask for "adversarial feedback".
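
    If you want to script that, here's a minimal sketch using the OpenAI Python SDK. It's an illustration under assumptions: the model name and file path are placeholders, and it expects an OPENAI_API_KEY in your environment.

    # Minimal sketch: request adversarial feedback on a blog post draft.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    post = open("drafts/zettel-post.md", encoding="utf-8").read()  # placeholder path

    prompt = (
        "You are an expert writer and copy editor. The following is a blog post.\n\n"
        "Please provide adversarial feedback to point out weaknesses, "
        "and how I could make the piece stronger.\n\n" + post
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)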

    Other prompt suggestions: https://gist.github.com/rcarmo/f96c659f149e357e1091cbfe352af6d4#file-shortcuts-js

    Author at Zettelkasten.de • https://christiantietze.de/

    I can only repeat myself: AI is a calculator, with its strengths and benefits.

    The question is how far along one is in one's knowledge processing education. I don't feel that I have any business outsourcing anything to AI, because my mind is not developed enough.

    Just the last year alone was a year of growth for me. That growth is far too valuable to me to risk. (I am also the guy who never owned a smartphone for the same reasons, and who likes to use a physical map that he buys locally when he goes to another city on his own.)

    I am a Zettler

  • edited March 30

    @phykas said:

    @andang76 said:
    It's like training. Using AI for thinking activities is like using a motorcycle while training for a marathon. You lose the benefits of the practice.

    Okay, let's put it this way: if you copy-paste AI (LLM) output into your Zettelkasten, my opinion is that it's of course wrong.

    But if you use it to explore the condensed information landscape of the whole world wide web, it will give you multiple valuable focus areas. You can then confirm your own findings in more credible sources, and combine all of that information, with its different credibility levels, to form your own approximation of the truth, which you put into a knowledge base that is updated as that approximation changes. For that, yes, I use AI (LLMs).

    Yes, my intent is to try to discover the boundaries of good and bad uses of AI, in this historical moment in which there is a big wave of enthusiasm about it :-)

    Focusing on the most critical phases of my workflow (the core process in which sources are turned into the network of thoughts), I conclude that in these phases AI is not useful; on the contrary, it's a step back.
    Inside this perimeter, I think AI has to be strongly avoided.

    I don't see a useful role for it beyond a search engine for getting a first grasp of a topic.
    But even in this task, it cannot be used as the single and main tool.
    There is a content reliability issue.
    For me, not only the search result is important, but also how it is obtained, how reliable it is, and what its sources are. These critical aspects are generally opaque. As far as I understand, they are the subject of study in the field of explainable AI, which should try to resolve them.

  • edited March 30

    @andang76 said:
    Yes, my intent is to try to discover the boundaries of good and bad uses of AI, in this historical moment in which there is a big wave of enthusiasm about it :-)

    Focusing on the most critical phases of my workflow (the core process in which sources are turned into the network of thoughts), I conclude that in these phases AI is not useful; on the contrary, it's a step back.
    Inside this perimeter, I think AI has to be strongly avoided.

    I don't see a useful role for it beyond a search engine for getting a first grasp of a topic.
    But even in this task, it cannot be used as the single and main tool.
    There is a content reliability issue.
    For me, not only the search result is important, but also how it is obtained, how reliable it is, and what its sources are. These critical aspects are generally opaque. As far as I understand, they are the subject of study in the field of explainable AI, which should try to resolve them.

    Agreed.

    This brings us to the concept of truth and reliability, which has a much larger scope than AI alone, and it's very intriguing to me.

    If something is written by a human, that does not mean it's reliable or true. Even modern science relies on the same kind of statistical probability as LLMs do.

    Modern science gives statistical answers that best approximate the truth (and score high in reliability), but those answers are often refuted.

    So, how do you verify whether something is true or not? Even if you do the experiments yourself, they still draw on statistical samples, the same process used by LLMs, and therefore they can be wrong.

    LLMs are a weaker source of truth because they draw their samples from language alone, but they use the same principles.

    The only way to improve your estimate of what is true is to do your own research; nothing will replace that.

    Using approaches like 'Layers of Evidence' introduced by Sascha is a good start.

    This article explains better what I mean when I say science approximates truths. Don't focus only on the fasting part; there are several more approximations being made over the years:
    https://www.statnews.com/2024/03/19/intermittent-fasting-study-heart-risk/

  • edited March 30

    @andang76

    I don't see a useful role for it beyond a search engine for getting a first grasp of a topic.
    But even in this task, it cannot be used as the single and main tool.
    There is a content reliability issue.
    For me, not only the search result is important, but also how it is obtained, how reliable it is, and what its sources are. These critical aspects are generally opaque. As far as I understand, they are the subject of study in the field of explainable AI, which should try to resolve them.

    Here we go again: I agree with you there.

    Who and what feeds the AI is a critical point for using it. It can't condense every single text ever written by humanity about one subject or another: that means there are criteria for choosing one source over another, for prioritizing one source, for exposing one thing among all the others to the user. There is a standardized discrimination at work. Who decides it, how, and why? It's a black box. When what it learns is driven by users, it's even worse.

    I recently studied marketing: people's minds are malleable. That's okay; it's needed to evolve and survive, we are a social species.

    But it becomes a problem with mass exposure. Some exaggerated examples to illustrate:

    Washington Post - November 2023 - AI-generated image bias
    Forbes - April 2023 - Racist AI

    What I would really need is reliable hard fact-checking, but also fiction stories, for example: "Can you find me a story about a very innocent person or creature winning over evil thanks to their innocence?" A sort of reversed TV Tropes. Grammatical and style corrections would be nice too, with literature references as examples, not a blog-oriented tool. Right now, it's really geared toward bloggers.

    I totally understand the use of AI in the technological field. My dearest one sometimes uses ChatGPT for that purpose: to check some code, to ask a technical question about an old programming language, or to find a way to achieve something. It makes mistakes sometimes, so a double check is always needed.

  • edited April 5

    @andang76 I agree with the thought that the results from AI searches are a “black box.”

    I recently started using ChatGPT, and I asked it about the black box concept. In a nutshell, ChatGPT responded that “the exact processes and algorithms used within ChatGPT are proprietary to OpenAI.” It went on to say, “developers often implement mechanisms to ensure ethical and accurate responses.” This response left me with more questions than answers.

    After doing a few searches on research topics, I asked ChatGPT to cite the sources behind its output, and the response was that “citing sources is not a built in feature of AI models like GPT-3 or GPT-4.”

    The discussion above adds to what I discovered.
