Give Me Any Source That Has Something to Do With Atomicity
Dear Zettlers,
I am currently compiling a research plan for a wider perspective on atomicity. So, I am looking at old sources on hypertext, linguistic text theories, contemporary approaches like blogs (e.g. https://writingslowly.com/), Plato's archetypes: basically, everything that I can lay my hands on.
I asked "What Is Next for Atomicity?" My answer is to go to the extremes:
- Informing the Zettelkasten practice: I prepared a complete guide to atomicity. This guide aims to give the reader a focused understanding of what atomicity means for the Zettelkasten and what it looks like in practice. One place to answer the question "What is atomicity, why does it matter, and how do I act on it?"
- Casting a wide net for anything that is similar to atomicity. This is the part for which I am asking for sources.
So, hammer me with all the sources that you deem even remotely relevant to this topic (perhaps with a sentence of reasoning on why a source is relevant, if it is obscure).
Live long and prosper
Sascha
I am a Zettler
Comments
Going by memory, I can point out two sources:
https://notes.andymatuschak.org/Evergreen_notes_should_be_atomic
As a photographer, I've found explaining idea-taking as photo-taking very effective.
I also remember a reference in discourse graph theory; I think I can dig it up.
Later I can scan all my zettels about atomicity and see if I find some other useful sources.
Here are a few links to ideas that might be interesting.
Will Simpson
My peak cognition is behind me. One day soon, I will read my last book, write my last note, eat my last meal, and kiss my sweetie for the last time.
My Internet Home — My Now Page
I don't know if anyone has done a literature review on atomicity. I will mention a few more sources from the personal computing era that I can easily identify. I won't repeat here the sources that I already mentioned in the discussion of "The Principle of Atomicity – On the Difference Between a Principle and Its Implementation".
First, I want to mention a 2007 essay by Manfred Kuehn that you probably know and that I rediscovered in my files right now. I want to mention it because he makes a point that can be found much earlier in the literature. Manfred said (emphasis added):
The atomicity (granularity) in hypertext software that Manfred describes as "similar to the old index card method" is a feature that he discovered through his own experimentation, but hypertext researchers already noticed this 20 to 25 years earlier, in the years when usable hypertext software systems were first developed for personal computers. For example:
NoteCards was an early hypertext personal knowledge base software system. It was developed at Xerox based on the work in Randall Trigg's 1983 PhD thesis, which he wrote using his own software (which was at that time called TEXTNET). See:
Jeff Conklin wrote about hypertext and developed some IBIS software systems starting in the 1980s. See:
There is a book on hypertext from 1990 that I cited once before in the forum and that is worth mentioning again here because it is notable for stating the principle of atomicity in clear simple language in its section titles "One idea per node" and "Node size—not too big and not too small" (see page 8):
It seems that it was also around 1990 that some hypertext software designers started using the term "atomic nodes". See, for example:
It is interesting to speculate about why the knowledge of the 1980s hypertext researchers was not more accessible in the 1990s, leading people like Manfred Kuehn (and me, though I'm younger) to experience "disorientation and many false steps" in note-taking on personal computers in that period. I guess that partly it was due to the inaccessibility of the relevant hypertext research literature in that period, which was in specialist print publications.
In another thread on atomicity, a few posters with computer programming backgrounds mentioned its relevance to software design and architecture.
The term “atomicity” has a specific meaning in database transactions (Atomicity (database systems) - Wikipedia) or concurrent/parallel computing, where it ensures data consistency and integrity during concurrent operations and transactions. This concept might be too context-specific and not particularly relevant to Zettelkasten.
More relevant could be object-oriented programming. While the term "atomicity" isn't widely used in this context, software engineers strive to design objects (abstractions of information and knowledge) that are "more understandable, flexible, and maintainable." The SOLID principles are a well-known example (SOLID - Wikipedia). In many ways, software architecture is an art of organizing knowledge so that the system as a whole functions effectively and produces value greater than the sum of its parts. That goal aligns with the Zettelkasten.
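To make the parallel concrete, here is a minimal Python sketch (my own hypothetical example, not from any source cited in this thread) of the Single Responsibility Principle, the "S" in SOLID: a class that mixes concerns is split into focused classes, each of which, like an atomic note, does exactly one thing and can be reused or replaced independently.

```python
# Before: a "god class" mixing analysis, rendering, and storage.
class Report:
    def __init__(self, data: list[float]):
        self.data = data

    def analyze(self) -> float:
        return sum(self.data) / len(self.data)

    def render_html(self) -> str:
        return f"<p>Average: {self.analyze():.2f}</p>"

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            f.write(self.render_html())


# After: each class has exactly one reason to change, and each
# piece can be reused in contexts the others know nothing about.
class Analyzer:
    def average(self, data: list[float]) -> float:
        return sum(data) / len(data)


class HtmlRenderer:
    def render(self, average: float) -> str:
        return f"<p>Average: {average:.2f}</p>"


class FileStore:
    def save(self, path: str, content: str) -> None:
        with open(path, "w") as f:
            f.write(content)
```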
It’s also worth noting that object-oriented programming has faced significant criticism over the years for its complexity, rigidity, orthodoxy, impracticality, and so on. Many younger software engineers now favor simpler paradigms. This shift in perspective is interesting in its own right.
Oh yes.
I can make a bold statement: good object-oriented programming is based on something like atomicity, which, in that context, is a fundamental enabler of reuse, modularity, and composability. You try to make classes, objects, and methods that do "one thing only", while at the same time ensuring they do not end up fragmented, insignificant, or anemic.
Good objects are implicitly both "atomic" and "well defined": encapsulated, with a clear identity and responsibility.
I can cite the Single Responsibility Principle, Separation of Concerns, High Cohesion and Low Coupling, and, if I remember correctly, many of the heuristics from Robert Martin's book Clean Code.
I personally translate many of these practices into idea development.
That includes using refactoring as the main tool for building, starting from a monolith, a network of atoms that "collaborate", and treating note titles as the "interfaces" of the notes (and so of the ideas).
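A toy Python sketch of what I mean (the data model and names are my own, purely hypothetical): refactoring's "extract method" applied to a monolithic note, with the note title playing the role of the interface.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str                                      # the note's "interface"
    body: str
    links: list[str] = field(default_factory=list)  # titles of linked notes

def extract_note(source: Note, fragment: str, new_title: str) -> Note:
    """Like refactoring's "extract method": move a fragment of a
    monolithic note into its own atomic note, leaving a link behind."""
    atom = Note(title=new_title, body=fragment)
    source.body = source.body.replace(fragment, "").strip()
    source.links.append(new_title)
    return atom

monolith = Note("Atomicity", "Atomicity aids reuse. Granularity is a matter of degree.")
atom = extract_note(monolith, "Granularity is a matter of degree.", "Granularity as degree")
# monolith now holds only the first claim plus a link to the new atom.
```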
I think the nature of the process is the same in both worlds: a craft rather than the application of an "algorithm" or a rule-based procedure; a craft in which the hands are guided by the mastery of a few tools and by an eye developed through a lot of experience. Experience also develops the ability to recognize and apply "patterns".
A novice developer's first object-oriented programs suck; that's normal.
In the same way, it is very unlikely that a Zettler will make good zettels at the beginning. It takes patience, a lot of practice, and trying to understand why what you are doing is not working well, then learning how to correct it.
As mentioned earlier, it seems there's a reference in "Joel Chan - Discourse Graphs for Augmented Knowledge Synthesis What and Why.pdf"
where breaking complex ideas down into more meaningful, smaller conceptual "chunks" may be necessary for creative recombination into new conceptual wholes
(McCrickard et al., 2013, Chase and Simon, 1973, Knoblich et al., 1999, McCaffrey, 2012)
The passage cites, among others,
"McCrickard, D. S., Wahid, S., Branham, S. M., and Harrison, S. (2013). Achieving Both Creativity and Rationale: Reuse in Design with Images and Claims. In Carroll, J. M., editor, Creativity and Rationale, number 20 in Human–Computer Interaction Series, pages 105–119. Springer London."
which I haven't read.
In general, it seems that in academic contexts atomicity is called "granularity" anyway.
Many of my zettels regarding atomicity derive from this discussion:
https://forum.zettelkasten.de/discussion/3004/specific-query-about-creating-a-useful-note
It could be worth revisiting, maybe.
Interesting comments about programming.
A long time ago, I was the sole developer of the Forth language that Motorola embedded in their 68F11 processor. I did that as an employee of New Micros, which had the contract with Motorola to produce a 6811 Forth.
After that was in silicon, I had many adventures in Forth. One of the company's design concepts was to build tools first: once you had a toolkit appropriate to the environment, you used those focused tools to build the solution.
This led to lots of very simple functions ("words" in Forth lingo) all of known reliability. The idea was to have a toolkit of resources that couldn't break.
Very atomic, if I understand the concept.
I haven't written any Forth in decades but I still love it, particularly as a high level wrapper for assembly language functions. I could still start from scratch and hand code a Forth implementation, probably in about a week.
It's just a hundred or so atomic functions for the core implementation.
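For anyone who hasn't seen Forth, here is a loose sketch in Python (my own analogy, not actual Forth and not from the 6811 work) of that toolkit style: tiny, known-reliable "words" composed into the solution.

```python
from functools import reduce

# A toolkit of tiny "words", each trivially testable and reliable.
def double(x: int) -> int: return 2 * x
def increment(x: int) -> int: return x + 1
def square(x: int) -> int: return x * x

def compose(*words):
    """Build a new word by chaining existing ones, left to right."""
    return lambda x: reduce(lambda acc, word: word(acc), words, x)

# The "solution" is just a composition of known-good parts.
pipeline = compose(double, increment, square)
assert pipeline(3) == 49  # square(increment(double(3))) = (2*3 + 1)**2
```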
Many thanks! Keep them coming. I'll unleash the full power of the Zettelkasten Method on this project. Anything that you can come up with is welcome.
I already got sources from visual arts, architecture, ontology, and many more domains.
I am a Zettler
@andang76 said:
Thanks for mentioning that. I've read that paper by McCrickard et al. It is good and worth reading. Scott McCrickard also wrote a book on the same topic, Making Claims: The Claim as a Knowledge Design, Capture, and Sharing Tool in HCI (2012), that I mentioned before in a comment where I also mentioned that the reference lists of Joel Chan's papers are a good, if incomplete, guide to the English-language structured argumentation research literature.
The history in McCrickard's book shows that his and Chan's work is a continuation of the work of people like Jeff Conklin in the 1980s & 1990s, Horst Rittel in the 1960s & 1970s, and even Stephen Toulmin's 1958 book The Uses of Argument, which was a key influence on The Craft of Research by Wayne Booth et al. (1st edition 1995), a book that strongly influenced how I've thought about research since the 1990s.
One fact that just occurred to me is that neither McCrickard nor Chan cites Ward Cunningham, who created the first wiki for documenting software design patterns. I suspect that the reason Cunningham's work on wikis isn't of interest to them is that Cunningham doesn't seem to be familiar with the kind of argumentation theory that influenced everyone else I've mentioned. In other words, Ward Cunningham's work on wikis was focused on the software technology but never theorized about discursive atomicity as much as the others, who were influenced by argumentation theory, did. (Correct me if I'm wrong; I think this is right based on what I remember of Cunningham's writings on wikis, but I haven't gone back to check them.)
I don’t know much about the history of Ward Cunningham, but now that you mention him, his other important contribution to the computer science community — design patterns — could be a significant source of inspiration for me if translated into our zettelkasten approaches.
"Idea patterns".
These could, after all, be quite similar to those highlighted by @sascha in one of his writings (“blocks”).
I studied design patterns in books that are much more recent than Ward's work. I must admit I never analyzed them in their original source, and now this closing of the circle, with that other paper by McCrickard also involved, intrigues me.
@andang76 said:
Here is an article by Ward Cunningham on the relation between design patterns and wikis:
From the article:
Cunningham & Mehaffy see wiki pages and design patterns as equivalent to essays. This is different from the emphasis on claims as atomic units.
Here is how McCrickard describes the relation between design patterns and claims in his book Making Claims: The Claim as a Knowledge Design, Capture, and Sharing Tool in HCI, San Rafael, Calif.: Morgan & Claypool, 2012, pages 59–60 (emphasis added):
In other words, design patterns are units of established knowledge about solved problems, analogous to published essays. Claims and issues are more finely granular units of knowledge in progress about unsolved or contested problems.
The connection between atomicity and design patterns is intriguing. Design patterns are indeed a core concept in object-oriented programming, so it's fascinating to see their link through Ward Cunningham.
I also recall @ctietze's reading list mentioning A Pattern Language: Towns, Buildings, Construction. I believe it's a classic in urban planning, but its pattern-oriented approach to problem-solving resonates with how software programmers think, which is why it became popular among them.
Still, I’d remind myself that there’s been some pushback, as people can start treating these patterns as orthodoxy. While they help us understand certain problems, they can also nudge us toward thinking in ways that are more complex than necessary.
I believe the term "granularity" is more widely used in academia for a reason. When discussing the atomicity of a note, you're asking whether it is atomic or not—a binary decision. In contrast, granularity refers to a degree, not a discrete, binary state. You ask, "How granular is the note?"
People seem to have varied understandings of atomicity, often leading to misunderstandings. A recent discussion serves as an example. I think this stems from the binary nature implied by the term "atomicity." In practice, it’s likely more useful to ask how granular a note is rather than whether it’s atomic.
That’s because knowledge always has the potential to be explored in greater depth. Knowledge can also hold useful meaning even if it’s not broken down into its most atomic form.
In another thread, I used an analogy from physics. The term "atom" was coined in the belief that it named the most fundamental unit of existence. Yet we now know atoms consist of smaller components, such as electrons and quarks (and perhaps, if string theory is right, strings). What was once considered the most atomic unit of knowledge may no longer be so as science advances.
Consider the East-West dichotomy in thinking. Westerners may focus on a single tree, while East Asians see the tree as part of a forest. Is it useful to ask which perspective is more atomic? Or would it be better to simply recognize the difference in granularity?
These examples suggest it may be more productive to evaluate the degree of granularity relevant to the knowledge in question, rather than whether it is atomic.
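A toy data model (entirely hypothetical, just to make the distinction visible) of the point above: atomicity as a yes/no judgment versus granularity as a degree.

```python
from dataclasses import dataclass

@dataclass
class NoteAssessment:
    is_atomic: bool     # binary: the note conforms or it doesn't
    granularity: float  # a degree: 0.0 = whole essay ... 1.0 = single claim

# Two notes can both pass the binary test while differing in degree:
single_claim = NoteAssessment(is_atomic=True, granularity=0.9)
compact_argument = NoteAssessment(is_atomic=True, granularity=0.4)
```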
(This reflection was partly inspired by the latest coaching session by @Sascha and @Nori , where they discussed how applying the principle of atomicity at the wrong stage can lead to fragmented thinking. I wonder about the utility of a term that has caused so much confusion.)
@zettelsan, good points about atomicity and granularity.
If you have a minimalist taxonomy of notes wherein there are basically two kinds of notes, zettels and structure notes (leaving aside literature/reference notes, which many people keep in their reference manager), then it makes sense to speak of atomicity as binary. (This binary conception can be found relatively early in the literature: see Figure 6, "Hierarchy of design object classes" in Thüring et al. 1991, cited above, where "atomic" is used in a rather binary way.)
But if, as we have seen repeatedly in discussions of note types in this forum, people naturally try to enact typed distinctions in their notes that are more than binary, then it makes sense that some kind of analysis of degree of granularity of the types would ensue. (And this is what we also see in discussions of degree of granularity in the academic literature.) And these typed distinctions that people make, and their granularity, should correspond to the kinds of knowledge/information that they are trying to organize.
The term "granularity" is bandied about in my profession (engineering) a bit. I've always thought of it as meaning "level of detail". For example, I want to look at an air photo or satellite photo of a possible bridge site, looking for evidence of past landslides on the river banks. I can look at the large view (the entire site) and then I can zoom in, to see what is happening at the scale of individual trees, or I can zoom in further to look at features on the ground at the scale of say an individual flower. In the old air photos, I'd never get to that fine of a scale as the resolution was not good enough. But with the new air photos or some satellite photos, I could go to an even smaller scale. I'd refer to the latter as being more granular, i.e., as providing a better level of detail.
I can imagine two zettels, both discussing the same idea, both being atomic, but one having a much greater level of detail and thus being more granular than the other.
So, in this way of thinking, granularity is not a replacement for atomicity; it is a related term but with a different meaning.
I understand that you want a non-binary term that allows you to describe the degree to which a note is atomic. I don't think granularity is the correct term. And maybe the problem in the first place is just thinking about the term atomicity as being binary.
@GeoEng51 said:
@GeoEng51, that's not how I understand what @zettelsan said. As I understand it, what @zettelsan said was the same, or nearly the same, as what you said in your previous paragraph:
But how you define granularity is different from how I would define it based on how the term is used in the academic literature on knowledge bases (e.g., the publications by Davies et al., Völkel, Chan et al. cited in this and linked discussions). I can't say how @zettelsan would define it, as they are not explicit enough in the comment above.
You seem to define a more granular zettel as one having more data corresponding to a greater level of detail (which in cartography corresponds to a "large scale", even though you use the opposite term "small scale"):
In contrast, I would define a more (finely) granular unit (I say "unit" instead of "zettel" to avoid assumptions about how a knowledge base is organized) as one with a smaller discursive size and complexity: a concept/word is more finely granular than a statement/sentence, which is more finely granular than a complete argument, essay, etc.
There is a section titled "Granularity" in the book by Robert Arp, Barry Smith, & Andrew D. Spear, Building Ontologies with Basic Formal Ontology, Cambridge, MA: MIT Press, 2015, page 55, that relates the issue of spatio-temporal scale you raised above to information-system granularity and might help clarify how the two relate:
Here, the ontology that they're talking about is the schema of categories, properties, and relations for a knowledge base about real-world entities. Granularity in a personal knowledge base such as a ZK is analogous but is about units of discourse instead of real-world entities. Just like in the kind of ontology that Arp et al. discuss, in a personal knowledge base the levels of granularity of the information units should be determined "primarily by the needs of users".
Yes, that is one of the problems that we're pointing to, but a binary conception of atomicity can also be a feature, as in the hierarchy of design object classes in Thüring et al. 1991 that I cited above. Whether a binary conception of atomicity is a problem or a feature depends on whether a binary definition of atomicity fits what you're trying to do.
What @ctietze called "the principle of atomicity" in his original post on the topic may perhaps be more simply called the principle of modularity, which doesn't have the binary implications that atomicity often has.
I found a discussion of granularity in Sutcliffe's 2002 book that is cited in the quote above from McCrickard's 2012 book: Alistair Sutcliffe, The Domain Theory: Patterns for Knowledge and Software Reuse, Mahwah, NJ: Lawrence Erlbaum Associates, 2002. Sutcliffe is mainly talking about software code but attempts to generalize his theory to knowledge management, with connections to cognitive psychology. It should be interesting to people who are interested in connections between software engineering and knowledge management.
A quote from section 1.3, "Dimensions of the reuse problem", pages 5–6 (emphasis added):
And another quote from section 1.5, "Knowledge-management reuse", pages 10–12 (emphasis added):
There is similar discussion of granularity later in the book.
Late to the discussion, but various posts and insights here have led me to redefine atomicity from the practical standpoint of remixability.
As in music, some musical ideas can be reused in different contexts and genres, and performed in wholly different ways, to create totally different vibes. If we subscribe to the idea that the point of atomic notes is to be able to reuse an idea in other contexts, then the practical endpoint is remixing, and that is the practical approach I've taken.
"A writer should write what he has to say and not speak it." - Ernest Hemingway
PKM: Obsidian + DEVONthink, tasks: OmniFocus, production: Scrivener / Ableton Live.
I like this view.
If we want to borrow some ideas from the world of music, can you think of other properties that make sounds remixable together? This parallel could lead to interesting reflections.
I'm not a musical expert, but, for example, I think that a music sample that forms a well-closed loop is very suitable for reuse: like a well-defined, self-consistent idea in a Zettelkasten.
@KillerWhale said:
It makes sense that an artist would think that the point of atomic notes is artistic remixing. In contrast, over the years I've increasingly thought about my note system from the perspective of a systematic philosopher, so I tend to think that the point of atomicity (or granularity or modularity—not necessarily the same concepts, as we have seen above) is to increase the systematicity or level of organization of my thinking. This is also reflected in Safaa Hashim's rationale for atomicity in his 1990 book Exploring Hypertext Programming:
Remixability seems to be a synonym for reusability in the context of music. Reusability is the subject of Sutcliffe's 2002 book that I quoted above, The Domain Theory: Patterns for Knowledge and Software Reuse. For Sutcliffe, granularity (the size of reusable components) is a factor that contributes to reusability.
As I was reading Sutcliffe's book yesterday, he convinced me that reusability is a more fundamental concept than I had thought.
Here's a quote on the importance of reusability from Sutcliffe 2002, section 1.4, "Reuse of knowledge and designs", pages 7–8 (emphasis added):
One problem with Sutcliffe's cognitive psychology in this passage is that he doesn't acknowledge differences in processing different modalities of data, such as auditory versus visual. In cognitive load theory, working memory is modeled as consisting of multiple processors corresponding to different modalities, so working memory is "multi-channel", which has implications for working memory capacity for music, visual images, etc.
Andy - thanks for these (and other) points in your post. If I understand the quotes from Arp, Smith & Spear, and from Sutcliffe, some people think of granularity in terms of the smallest size of an element that makes up a whole. In my air photo analogy, that might mean the information content in an individual silver grain (old style photo) or pixel (digital photo) in the photograph - is that a hectare of land, or a 10 m square section of land, or 1 m square piece of land? For a geological engineer to look at that air photo and understand whether or not a river bank is sliding downhill, he or she needs information down to the 1 m x 1 m piece of land (or given what follows below, even smaller).
That is a good way of thinking about it and captures the way I think about granularity in relation to air photos - the information content in the smallest, by its nature indivisible, portion of the air photo. "By its nature" I mean limited by its physical nature or properties.
I don't want to stretch the analogy too far, but in an air photo the individual pixel carries almost no information on its own: is it white, black, or some colour? To understand what the photo is telling us, I would need to gather together the information in a number of pixels. So while granularity might be defined by the irreducible size of the basic building blocks, a large number of those blocks would be needed to form a sensible or complete idea.
Not sure where I'm going with this. In regard to a note in our Zettelkasten, this kind of granularity might then be defined by the individual words (taking those as being irreducible from the perspective of sensible content, rather than letters). However, that doesn't help us in determining the number of words (or their structure) required to express an "atomic" idea. This type of (quantitative) reasoning is not likely to be useful in assessing what we mean by an atomic note.
Going back to my air photo analogy, I need enough pixels of information to produce a picture that makes sense to me. Can I interpret the picture in terms of individual trees (and whether they are leaning or have fallen over), landforms/shape, ground cracking, surface water bodies, and groundwater seepage, all of which contribute to my assessment of whether the ground is moving? It is my trained and experienced brain that decides if there is sufficient information to make that assessment. If the size of the pixels is too large to see the important features, then I can't properly assess what is going on. If they are sufficiently small, or smaller, then I can make the assessment. And there is a zone where the pixels are not quite small enough and I have to guess about some features that I'm seeing.
So, in terms of an "atomic" note in our ZK, perhaps it is not just the granularity that is important, but also the way the idea(s) is/are expressed, their clarity, and the level of training and experience of the person reading the note, all of which contribute to that person being able to make sense of the idea.
Speaking of making sense, I'll let you and others decide if any of the above makes any sense - haha!
@GeoEng51, I think the aerial photo analogy may take us a little too far away from the subject of atomic notes, insofar as the content of aerial photos is uninterpreted data. In contrast, what we put in our notes is only the knowledge extracted from such data, which implies some set of types, some classification. So Arp et al., in the section on granularity that I quoted above from their book on Basic Formal Ontology, are talking only about granularity of types or categories (in their case, types of real-world entities).
I agree with @Sascha in the section "Everybody Talks About Atomicity, Nobody Ain't Talking About the Atoms", in his recent blog post "The Principle of Atomicity – On the Difference Between a Principle and Its Implementation": a good way to define an "atomic note" is with a set of types, which he calls knowledge building blocks (although, as he said, anyone's set of types doesn't have to be the same as his set), and "an atomic note is a note that contains exactly one knowledge building block". Here, atomicity of a note is indeed binary: a note conforms to a type or it doesn't. Granularity would be the relative sizes of these types. Perhaps we could even imagine a simple "periodic table" of the types. But to talk about atomicity and granularity in this way, first you have to have a set of types (a set of types that should fit your purposes, as Arp et al. said of ontologies).
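To restate that definition in code form (a toy model; the type names below are hypothetical placeholders, not Sascha's actual list of knowledge building blocks):

```python
from enum import Enum, auto

class BuildingBlock(Enum):
    """One possible set of knowledge building block types;
    anyone's set may differ from anyone else's."""
    CONCEPT = auto()
    STATEMENT = auto()
    ARGUMENT = auto()
    MODEL = auto()

def is_atomic(blocks_in_note: list[BuildingBlock]) -> bool:
    """Binary atomicity: exactly one building block per note."""
    return len(blocks_in_note) == 1

# Granularity as the relative "sizes" of the types, finer to coarser:
GRANULARITY_ORDER = [
    BuildingBlock.CONCEPT,
    BuildingBlock.STATEMENT,
    BuildingBlock.ARGUMENT,
    BuildingBlock.MODEL,
]
```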
Excerpt from Ann M. Blair's book Too Much To Know: Managing Scholarly Information before the Modern Age. While it does not exactly discuss the idea of atomicity, it does discuss the idea of excerpting text from one context into another so it can be reused in multiple contexts, which I strongly associate with atomicity.
I'm not sure the parallel could work much farther than the global idea of remixability (I have also seen a parallel with APIs, which I like as well), but let's play along.
For me, sampling is akin to the collector's fallacy. A sample is someone else's material; use it as such, and you have learned nothing (and you venture into plagiarism). I would rather think about the underlying principles: what is interesting in this sample? Why does the groove work? What instruments are being used that lend it an interesting sound? What is creative in the use of harmony?
Those are underlying principles but, more importantly, they are personal interpretations of external material that one can learn from to propel one's own art forward. It is the difference between merely taking in material from a lecture and using parts of that lecture to underpin a personal project. Everybody will take away something slightly different; what matters is what one takes, and how it will help shape personal output.
"A writer should write what he has to say and not speak it." - Ernest Hemingway
PKM: Obsidian + DEVONthink, tasks: OmniFocus, production: Scrivener / Ableton Live.
OK, when I wrote "sample" I meant a sound created by the authors themselves rather than taken from a library. Anyway, this is another very interesting analogy for explaining the collector's fallacy and the value of creating your own stuff :-)
@Andy So English is not my first language -- I have trouble parsing "principle of modularity", if I understand 'principle' in the sense that it's an instruction, a guide towards action.
"principle of atomicity" → act so that you split your notes into atoms
"principle of modularity" → act so that you ... ? Ensure that things are modular and can be combined?
What do you have in mind there?
Author at Zettelkasten.de • https://christiantietze.de/
@ctietze I should have explained my thinking more in that comment, but you got it! In that 2013 post you said:
It makes sense to describe the separation of concerns as splitting into atoms, but also grouping things that belong together sounds to me like making things modular. In that post you were talking about both clustering and splitting. So, taken as a whole, that post is about more than atoms, perhaps. Or I'm overthinking it.
I think @Sascha clarified things more recently with his definition of an atomic note.
Yes, that makes sense. What constitutes one knowledge building block will vary somewhat from one person to another, but at least we should be able to understand one another's definitions.