Zettelkasten/AI and being able to walk without crutches
I have just watched this video.
What drew me in was the headline.
Dear community, I would like to ask for your opinion.
Personally, I have the feeling that systems like Zettelkasten, Second Brain, etc. are misunderstood. The Zettelkasten in particular is not used in the way I understand it.
I got my knowledge about it from this forum, Sönke Ahrens' book, and Luhmann's archive page.
Am I wrong in saying that the value of the system lies in the processing of information?
In shaping it and integrating it personally into one's own knowledge system?
I maintain that the more often I engage with my information, shaping and forming it, the stronger my personal connection to it becomes. It is comparable to building a house: the bond and the relationship to the object arise through interaction (collaboration or, ideally, building it yourself).
I don't see the slip box as a reminder tool.
Nor as a snapshot of my past. Yet that is how it is often used. The information then ends up like all the photos we take because we can, without knowing what for or why.
Why am I interested in systems like Zettelkasten? Because I want to expand my knowledge network, my ability to think and analyze.
My fear, and unfortunately things do seem to be heading in this direction, is that systems which are very heavily coupled to AI take too much of the good work away from the brain.
A good example is Harvard's CS50 course, where it was pointed out at the start that completely outsourcing work to an AI bot would greatly reduce learning success, so they developed their own helper bot that supports you without handing you complete solutions. It works with you, not for you.
What I want to say, or rather ask, is that the Zettelkasten is not a recording tool but a working tool that forms a symbiosis with you, like a pet, a human, or another living being.
What do you think?
We live in a time in which superficiality is leading us in the wrong direction.
And I would also like to point out that English is not my native language, so my statements may not come across exactly as intended.
Thank you very much
Comments
Short addendum:
Although superficially there are certain similarities to the streamer, I am me and rob is rob. ;-)
I remember that I've already made my point about the use of AI in personal knowledge development.
It's like setting the goal of running a marathon and then riding a motorcycle in the race. I may think I'm practicing a foot race, but I'm actually doing something completely different.
I certainly go much faster and without fatigue, but I lose the essence of attempting the marathon. It's the training, the work of my legs over weeks and months, that gives meaning to the goal, not having the lowest time on race day. The work I've done is what makes me a better person, not holding something in my hands that was written and thought by someone (or something) else.
I think AI can be useful as a very minor support for our thinking and knowledge work, but it absolutely must not become the main engine of our processes.
I see AI as a very real danger for young students. I am seriously worried about the new generations, who will inevitably be seduced by an instrument that relieves them of a lot of effort in their study routine. This is not a benefit; it is a harm. We think and learn when our brain does the work, not when that work is done by something outside us.
Mindless use of AI as a substitute for developing our thinking and for acquiring and processing knowledge will be the new collector's fallacy.
I think all Zettelkasten practitioners have become aware of how enormously different making a Zettelkasten is from copying and pasting from internet pages or settling for an answer taken from a search engine. Using AI for this work is like taking a step back and returning to copying and pasting.
The Zettelkasten's value lies in the thinking process that the method implies, not in the quantity or even the quality of the notes produced. An AI that produces the same notes as yours, or even better ones, and faster, is far less effective if your goal is thinking.
My (strong) opinion, of course, confined to the context we are talking about: the one for which, at some point, we all decided to use a Zettelkasten. Outside this scope, AI has its proper role.
I watched a few minutes of the video.
I stopped almost immediately.
It doesn't make any sense.
Neither the author nor the AI understood what a Zettelkasten is. Experiment failed :-)
Thank you for your confirmation, @andang76.
In short: The rate-limiting factor in knowledge work is the human mind. You can't bypass this by using a calculator (which is basically what AI is).
One of the problems with any system arises when you don't develop the complexity of your mind. You then incrementally increase the complexity of your system, which overwhelms your mind. This determines the tipping point at which people say, "Oh, my system is a mess."
The same pattern has already played out several times:
The surface-level understanding of the Zettelkasten Method shown in this video (by both the guy and the AI) mirrors the core problem of the field: the misunderstanding that you can just acquire knowledge, like purchasing a product. You can't just google something, capture an article into your system, or ask an AI (which is a more sophisticated search) and then claim knowledge. You have only got a piece of information. Only (proper) processing allows you to create knowledge out of it.
Sure, you can use AI, for example, to generate information that is necessary and difficult to obtain. I, for example, used AI to calculate how many walking lunges you have to perform to replace 60 minutes of running in Zone 2. However, the knowledge work had to be done on my side: gaining the background knowledge to even be able to attack such a task, developing both the theory and the model as the platform for the whole project, and then guiding the AI so it actually does the right calculations.
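To give a flavor of what such an equivalence calculation can look like, here is a minimal sketch that equates the two activities by MET-minutes. The MET values and the lunge cadence below are illustrative assumptions, not the figures or the model used above.

```python
# Toy MET-minutes equivalence: how many walking lunges roughly "match"
# 60 minutes of Zone 2 running. All numbers are illustrative assumptions,
# not the model referred to in the post above.

ZONE2_RUN_MET = 8.0     # assumed MET value for easy Zone 2 running
LUNGE_MET = 4.0         # assumed MET value for continuous body-weight walking lunges
RUN_MINUTES = 60        # duration of the running session to replace
LUNGES_PER_MINUTE = 20  # assumed steady lunge cadence (reps per minute)

# Energy proxy for the run, expressed in MET-minutes.
run_met_minutes = ZONE2_RUN_MET * RUN_MINUTES

# Minutes of lunging needed to accumulate the same MET-minutes.
lunge_minutes = run_met_minutes / LUNGE_MET

# Convert lunge time into repetitions.
total_lunges = lunge_minutes * LUNGES_PER_MINUTE

print(f"{run_met_minutes:.0f} MET-minutes ≈ {lunge_minutes:.0f} min of lunges ≈ {total_lunges:.0f} reps")
# With these assumptions: 480 MET-minutes ≈ 120 min of lunges ≈ 2400 reps
```

The point stands either way: choosing the equivalence measure, the MET values, and the cadence is the actual knowledge work; the arithmetic is the trivial part that the AI (or a short script) can do.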
In addition: the ever-confirming tendency of the AI in this conversation sets up a toxic environment for thinking, similar to an echo chamber.
But I guess we always need to repeat history to be able to say "history repeats itself".
No, you aren't. Processing information is the only way to create knowledge. What you have never processed, you never know. (Everyday language struggles to keep track of the nuances here: technically, you can also "know" information, but knowing knowledge is different.)
Yesterday, I had a call with an AI company (not one of the big ones, if you are overly curious), and they are making the very mistake that Evernote made: mistaking information for knowledge and believing that information management is the rate-limiting step in one's work. (The app was nice, but not because it helped you with the actual knowledge work.)
I am a Zettler
Any source like this that falsely claims "Luhmann developed/invented the Zettelkasten" I immediately label as a low-quality source not to be trusted.
The video's proposal of AI as an assistant to help you plan and remember is fine, but there is no need to trash the idea of a Zettelkasten.
One can bypass the whole "which system is better?" debate by simply realizing that for some people working with a slip box is a "preference".
I think those are false fears, similar to when people said that online search makes people dumb.
The situation in the video can be abstracted to a misunderstanding of any system.
It is well known that ChatGPT hallucinates and makes up facts when it doesn't have information about something. Using it as a primary search engine instead of feeding it articles, using RAG, etc. is simply misuse; it's not an LLM problem, it's a user problem. The LLM may know the thing you ask about, but it may just as well not know it.
Students using ChatGPT to study is just a symptom of a poorly designed education system.
Many assignments exist just for their own sake (or produce very little learning compared to the time wasted), and many textbooks are compressed junk; almost no school or university teaches modern, research-backed learning techniques or the foundations of learning science (e.g. many places require lecture attendance and force students to act as speech-to-handwriting machines, wasting at least two hours per lecture); students have no clear picture of the process and are over-dependent on the teacher, and so on.
There's also a distinction between intrinsic cognitive load, which actually produces learning, and extraneous cognitive load, where you are just trying to untangle convoluted text, problem definitions, and so on. The latter doesn't contribute much to learning and just wastes mental resources. LLMs are good at rephrasing and rewriting so that you can actually build a mental schema.
There are also situations where you can't tell whether the material you are about to study is worth it, or where the subject you want to study is so heterogeneous that it is very hard to approach without a map. LLMs are good for this. Or you can take a pile of materials and use an LLM to help you reorganize them so that you can gradually build a mental schema.
Why does everyone imply that using AI == entirely replacing your thinking? Why does nobody talk about how to use AI in combination with the latest findings from the cognitive and learning sciences to dramatically speed up building mental schemas (knowledge)?
Furthermore, you can use AI to find patterns in your writing that you wouldn't be able to see otherwise, to help you form new associations, and to launch your thinking in unexpected directions. It can surface a bunch of your forgotten ideas and suggest ways to develop them. It can generate questions aimed at unexpected places.
Definitely not a minor support.
I will not let my use of AI substitute for my thinking!
You can definitely misuse the Zettelkasten method, just as you might misuse AI if you don’t really grasp the concepts behind it. Leveling up our thinking by diving into practical tools that actually work is what I think we all want.
The negative aspects of working with AI include:
1. Loss of Cognitive Skills
2. Reduced Critical Thinking
3. Impaired Memory and Attention
4. Inhibition of Intuition
Funny, this is what storytellers said about writing, the church said about books, newspapers said about television, and just about everybody said about the internet. This is truly a spectrum. Let’s stop focusing on the sewer side of the spectrum and level up to find growth and opportunity.
Like any tool, AI, when used ethically and maturely, can clarify an idea or project. It can show alternatives, associations, and what comes next.
The art of crafting precise and context-rich inputs that guide the AI toward generating desired outputs is called prompt engineering. This is cognitive work. The way we interact with Google will not serve us at all. We have to think differently. It is mostly art and is a learnable skill like writing and reading. Getting a response from AI that packs a punch is all about laying down clear instructions and giving some solid context. And you’ll unlock responses that are meaningful and super engaging.
The key thing I love about working with AI is that it is infinitely patient with me and, when asked, can be a brutally honest critic. It has a memory about me and my interactions with it, which I can control. Some have huge context windows, which allow me to focus on only my stuff.
Will Simpson
My zettelkasten is for my ideas, not the ideas of others. I don’t want to waste my time tinkering with my ZK; I’d rather dive into the work itself. My peak cognition is behind me. One day soon, I will read my last book, write my last note, eat my last meal, and kiss my sweetie for the last time.
kestrelcreek.com
Online search (perhaps not in isolation, but as part of the new internet ecosystem) actually reduced people's critical thinking by reducing the need to work hard for results. Together with the over-coddling of modern people, there is very little actual need left to train the processor between your ears.
There are plenty of studies showing that the mere availability of devices reduces your ability to think clearly. Studies like this:
Adrian F. Ward, Kristen Duke, Ayelet Gneezy, and Maarten W. Bos (2017): Brain Drain: The Mere Presence of One's Own Smartphone Reduces Available Cognitive Capacity. Journal of the Association for Consumer Research, Vol. 2, No. 2, pp. 140–154.
I know that the study does not directly observe the effect of online search. However, the pattern of availability is the problem at hand.
To be fair, I think it is a winner-takes-all effect: there are people who, either consciously or unconsciously, protect themselves from the hazard.
If students didn't enter university as completely dependent thinkers, there wouldn't be a problem at all.
It is an important part of training the skill of reading. In fact, this mirrors real problems, because reality is convoluted and comes with no clear problem definition. Untangling it is one of the most crucial skills in real-world problem-solving.
Just take the example of a hectic meeting: your boss presents you with a problem you have to solve. But the company is under fire and he himself doesn't understand the problem right away. Then it is up to you to untangle the mess he hands you.
Or you are a doctor trying to make sense of the gibberish your patients present you with.
Or you are a psychotherapist, and, and, and...
Exactly for that reason, I don't use AI here; I want to protect my mind.
Wrestling with texts (or speech) is way too valuable.
Many of the core skills outlined in the article below won't be developed if you let AI take care of that.
Argenta M. Price, Candice J. Kim, Eric W. Burkholder, Amy V. Fritz, and Carl E. Wieman (2021): A Detailed Characterization of the Expert Problem-Solving Process in Science and Engineering: Guidance for Teaching and Assessment. CBE—Life Sciences Education, Vol. 20, No. 3, ar43.
Because the telltale signs that this is happening are already out there. And it already happened with the smartphone infection of society.
I am a Zettler
@Will, I completely agree with you about focusing on the positive and constructive.
Personally, I belong to the group of people who also deal with the “dark side” of a topic.
My concerns stem from a widespread euphoria that flits around like a fashion trend: miracle technology, miracle cures here and there. We know the saying that the dose makes the poison.
In many areas, people are prone to uncontrolled, unreflective use, and the effects are not immediately apparent.
Misuse and the resulting damage follow.
Yes, we have gained from technology. My first computer was a VIC-20, and I experienced those developments personally. After hours of typing in code that initially made no sense to my brain, my inner urge drove me, even if it was just a game, to see the process through, which eventually (not immediately) resolved the incomprehensible. That path without shortcuts is what brought development. It is possible to skip levels, but even here there are limits: skipping 3-4 levels is possible, 5 or more is difficult. And that is a good thing.
I would argue that before using any technology, it is necessary to review it in the sense of networked thinking.
Write your own side-effect sheet; it is necessary. Social media and the like have needed such information sheets for a long time.
AI too.
I don't want to patronize anyone, but we are talking about tools that can be used without age restrictions.
@Ydkd
Generally speaking, just as I spoke of a package insert with a list of side effects above, an introduction to the conscious use of this technology is missing here as well.
Let's call it operating instructions and training in the use of these systems, or dosage recommendations, to stay with the comparison.
"Searching with Google makes you stupid": I hadn't encountered that concern before. If there is an attempted explanation with possible conclusions, I would like to read it. I am interested in arguments, especially if they are based on networked thinking.
I would like to ask a question about the whole topic: isn't it precisely convenience that prevents us from developing as we could?
We are given new technologies only to realize that we should use them in a measured and limited way so as not to take a step backwards in development.
Deliberate development strategies ensure that we feel the full force of these technologies' addictive potential.
Only with increased, disciplined effort are we able to remain free.
**I strive to live a life of the least possible dependency, in every sense.**
While I know that research, I think the problem is not smartphones or AI but the lack of enabling environments (environments that enable what we need in order to learn to think).
They go through university like this too.
Yes, here the problem also lies with the education system.
I agree that unraveling convoluted material is one of the most important skills; I was talking about situations where you have to do it a lot, where you obviously already know how to do it and simply don't need to do it again and again.
It's like looking up a word in a paper dictionary instead of using an online translator: you can, but why would you, if you can already do it? Or like solving the same type of math problem over and over: you have to do the calculations, but that adds almost nothing once you've learned it.
Yes, again this is a problem with the education system. Developing such skills can be interesting and rewarding; it's not the students' fault that the system doesn't explain to them why they need to learn this and what exactly.
It is possible to design assignments in such a way that they develop cognitive skills and prevent the misuse of AI.
Also, if you are pursuing a hard degree, you will VERY QUICKLY realise that AI doesn't help you much and that you have to grab the textbooks and learn by yourself.
We build mental schemas by forming relationships, chunking, arranging things spatially, encoding, etc. Any activity we have to get through before we can do that can be considered detrimental in terms of learning.
If you have a subject to learn, it is in your best interest to remove as much convolution as possible and allocate as many resources as possible to relating and chunking concepts, that is, to making sure your load is mostly intrinsic cognitive load and not extraneous.
My main point is: if you know (or have been taught) the important foundations of the learning and cognitive sciences and you know meta-thinking, you are able to use AI to enhance your thinking rather than degrade it, and the same goes for search and other tools, without losing your skills.
Will, here is the big point, in my humble opinion:
How many people, in their daily lives, are actually aware that they have to manage these tools in a mature way?
By our nature we tend to avoid fatigue, difficulties and the use of large amounts of time.
If we have a tool at our disposal that takes away our effort, we will tend to use it even when we shouldn't.
A young student today can have an AI produce a summary of a text instead of struggling with it himself, saving a lot of time and devoting himself to what he likes. How well can he understand that this is a problem for his growth rather than an opportunity?
It is so easy and seductive to open a browser window and let something else do the tough part.
What we leave to an AI, we take away as exercise for our brain, and we will tend to do it more and more, because relying on our brain is tiring and we want to conserve resources.
This issue is very evident in the message of the video posted by @John_P: in pursuing greater quantity and more speed, we risk shallow information consumption rather than effective knowledge development.
There is great enthusiasm about the use of AI all around, and very little attention to the fact that its misuse and abuse will erode our cognitive abilities.