Zettelkasten Forum


Case Studies of AI-supported knowledge work

Dear Zettlers,

Inspired by this thread, I thought that perhaps we might be able to collect case studies of AI-supported knowledge work: successes and failures (= learning experiences!).

My hypothesis is that it is too early to move above the layer of phenomenology. The best method to access such an unknown field is trial and error.

So, I am collecting individual use cases (or perhaps even: instances). Whenever I try something, I document it and leave it there. Occasionally, I will add a hypothesis to it. I will add an example to show what I mean.

If my notes on this matter reach a level that is worth publishing, I will bring everything together into a document that will be available as donationware.

If you have any case study, successful interaction, failure, or thoughts in general, I'd be happy if you shared them.

Live long and prosper
Sascha

I am a Zettler

Comments

  • edited February 2023

    2023-02-18

    Disclaimer: This is a direct copy of the entry in my ZK (translated, of course). The full title of this note is "202302180748 Case study - AI-assisted search event history crime of Irma Grese"

    Research interest: I'm writing a collection of short stories set in a prison valley in my fantasy world. For this, I'm gathering inspiration from history and literature. The original inspiration is Zimbardo's book "The Lucifer Effect", his reappraisal of the famous Stanford Prison Experiment.

    AI-Usage:

    • I asked Elicit: "What crimes did Irma Grese commit?"
    • I didn't deepen the search since it was not in my focus.

    Results:

    Knowledge-related value creation:

    • Direct: Increased my collection of historical instances for a short story collection.
    • Indirect:
      • Increased my collection of promising sources for a similar project by one paper.
      • Increased my collection of promising sources for a fairly similar project by two papers.

    I am a Zettler

  • edited February 2023

    Research Interest: I am currently working on my own version of the Prilepin table. I am trying to find a relationship between the xRMs, subjective feeling, and the respective set and repetition numbers that I can map into a table. The table then becomes a control tool for training planning and autoregulation.

    AI-Interaction: Question to Elicit

    • What is the relationship between the one-repetition maximum and repetitions to failure?
    • I used the brainstorm function and followed up with the question: "What is the relationship between the one repetition maximum and repetitions to failure in different muscle groups?"

    Results:

    Knowledge-based value creation:

    • The very good result only hardened already established statements regarding the RM table.
    • The two interesting results, although not related to the research object (load-repetition relationship), led to an interesting direction in the research project (tool for training control).
    • Interestingly, the less relevant results took me further than the relevant ones.

      • Is this due to the state of my research?
    • The proposed question led me away from my research interest.

    I am a Zettler

  • Research Interest: I received the tip to use many different formats for content rather than sticking to a single formula (such as the classic late-night formula with monologue, interview, etc.).

    AI Interaction: I asked ChatGPT, "Give me a list of 20 video formats."

    Results:

    • After the request, I received a mix of content and formats. I asked ChatGPT to omit the content.
    • The list was not accurate; there was some duplication, and some responses did not count as formats.
    • However, I was able to use the list as source material to come up with a list that was satisfactory to me.

    Knowledge Added Value:

    • I had immediate source material to work with and didn't have to create anything on my own from scratch.
    • I didn't have to do any research. I suspect it saved me about 20-30 minutes. Great success!

    I am a Zettler

  • edited March 2023

    Research Interest: I wanted to hand off formal proofreading to AI (ChatGPT).

    AI interaction: "Correct grammar and spelling of the following text."

    Results:

    • Some grammatical errors still remained in the text.
    • Some strange phrases have slipped in.
    • Some distortions of meaning have occurred. (How?)
    • EDIT (2023-03-02): There are major distortions in meaning even though I requested just grammar and spelling to be corrected. ChatGPT is absolute garbage and unreliable.

    Knowledge-based value creation:

    • Part creation, part destruction of value. Some strange errors occurred. In return, the text is free of spelling errors, and grammatical errors are rare. But the distortions of meaning mean that in the end I create more work for myself than I save (see the diff sketch below).
    • EDIT (2023-03-02): See above. Proofreading by ChatGPT resulted in way more work for me.
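
    One way to make such subtle alterations visible is a word-level diff of the AI's output against the original. A minimal sketch using Python's standard difflib; the two sample strings are hypothetical:

    ```python
    # Diff an AI-"corrected" text against the original, word by word,
    # so that changes beyond typo fixes stand out for manual review.
    import difflib

    original = ("If this is true, that is also true. "
                "The argument rests on the first premise.")
    # Hypothetical model output with a subtle distortion of meaning.
    corrected = ("If you understand this, you will get that. "
                 "The argument rests on the first premise.")

    a, b = original.split(), corrected.split()
    matcher = difflib.SequenceMatcher(a=a, b=b)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # 'replace', 'delete', or 'insert'
            print(f"{tag}: {' '.join(a[i1:i2])!r} -> {' '.join(b[j1:j2])!r}")
    ```

    Anything the diff flags that is not an obvious typo or grammar fix is a candidate distortion of meaning.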

    I am a Zettler

  • This is what I wrote to a client about this experience:

    Hi X,

    here is the corrected document. I tried to use ChatGPT for grammar and spelling proofreading. Then I translated the document with DeepL. There were quite some distortions in meaning that weren't easy to see (which is not surprising, since AI models work by predicting a likely next word).

    My experience with ChatGPT is overall really bad. It is somehow lying and twisting meaning. Strange to say this about an AI. But perhaps one could use this phenomenon to understand not why people lie and twist meaning, but how that actually happens.


    This is what is baffling to me: I never expected an AI to be such a sly rat.

    I didn't have any research request for the AIs that are more reliable (like Elicit). So, the following does not apply to anything other than ChatGPT:

    ChatGPT is the equivalent of talking to a crazy person who is pretty good at pretending to be a mentally healthy person. But sometimes there are little cracks in the mask, and if you actually follow up, the full crazy shows.

    ChatGPT reminds me of a purified left brain as Iain McGilchrist describes it in The Master and His Emissary. It is not deterministic and static, since it operates on probability. But it is not organic in the sense in which modern robots make the leap to organic motor abilities:

    You can see the organic in the movements.

    A more hybrid feel is to be found in this video:

    The movements can be divided into two categories:

    1. Closed Skill
    2. Open Skill

    If the robot is walking on a flat surface, it feels and moves mechanically. But when it starts to interact with an unpredictable environment, so that it not only has to react but to "negotiate" the environment (and the researchers messing with it), all of a sudden the quality of its movement changes and becomes organic (which gives the visual impression of being alive).

    If you want to see how a human is able to close a skill, watch the legendary Michael Johnson:

    Ignore the "correctness" of his approach to technique and focus on what feels organic or mechanical in his positions, movement, and explanations compared to other sprinters:

    His start was not good, and his style was never suited for the 100m. There are many reasons for that, but one of them (high on the list) is that the start is much more chaotic. The first jump out of the blocks introduces a chaotic motoric environment. With every step you need to transform both the quality of movement and the positions. When you arrive at the actual upright sprinting position, the chaotic nature subsides, because now you return to the same position with the same optimal movement pattern over and over again. Only slowly and gradually does fatigue change the movement that is optimal for the sprinter.

    Michael Johnson's approach was to remove as much of the organic element as possible, and the result was not only world-record performance. Something in his movement feels off. This something is the mechanical applied to what is expected to be organic.

    Coming back to ChatGPT: it is the other way around. It is something mechanical that pretends to be organic. The odd feeling that some people get comes from (this is my hypothesis) the organic nature of something that is expected to be mechanical.

    The equivalent is the classical psychopath who pretends not to be a psychopath. Something feels off.

    This is how far I can stretch this line of thought, though.

    My negative feelings towards ChatGPT are most likely my brain inaccurately applying a certain organic/mechanical mix of expectations. The strange feeling that something is off comes from that: my brain doesn't question the organic/mechanical mix of expectations and/or its applicability to ChatGPT. So, the natural conclusion is that ChatGPT is wrong/off. But I cannot say broken, since it is not a simple machine. The expectation part is important: I don't feel that there is something off when I see robots in a factory. Quite the contrary: I feel fascinated. And if a vending machine takes my coins but does not give me my can of soda, I don't feel betrayed but think that the machine is broken.

    My strong intuition is that generative AIs like ChatGPT are more poorly understood than assumed, since the organic/mechanical axis is never applied comprehensively as a category of analysis. (Iain McGilchrist does this, by the way, in The Master and His Emissary, which is one of the reasons why his work is such a work of genius.)

    It is no wonder that stories are accumulating in which ChatGPT shows severe signs of mental illness. Example:

    https://nypost.com/2023/02/16/bing-ai-chatbots-destructive-rampage-i-want-to-be-powerful/

    (Mental illness is the correct category of thinking, in my opinion. We are not talking about a vending machine.)


    This will be connected within my Zettelkasten to certain areas:

    • Organic/Mechanical-Axis
    • The lateralisation of the brain
    • Open/Closed-Skill-Axis (sports but also in general)
    • Aliveness as a metaphysical category (Ontology)
    • AI-supported research and knowledge work
    • Mind
    • Mental Health

    I am a Zettler

  • @Sascha said:
    ChatGPT is the equivalent of talking to a crazy person who is pretty good at pretending to be a mentally healthy person. But sometimes there are little cracks in the mask, and if you actually follow up, the full crazy shows.

    Some people on the internet share the idea that AI is "hallucinating" facts. But a psychologist recently pointed out that the already well-established term is confabulation.

    Then followed a couple of references to split-brain experiments: the left and right hemispheres couldn't communicate, so the "language half" made up reasons on behalf of the other one, and the people didn't notice they were confabulating the craziest of answers.

    (Can't find the link anymore :( I bet you already have something on this from McGilchrist or so)

    Author at Zettelkasten.de • https://christiantietze.de/

  • Curious if you have tried Perplexity or the new Bing.

    I think expecting an LLM (large language model) like ChatGPT, alone and unaided, to get all the references correct is not very realistic.

    A search engine + LLM is a more interesting test. Of course, you have Elicit, but there are others.

  • @aarontay said:
    Curious if you have tried Perplexity or the new Bing.

    Perplexity but not the new Bing. I generate use cases only if I have an actual problem that I want to solve.

    This is a long-term project, since I don't expect the development to be as fast as public opinion assumes. :)

    But the whole topic reminds me of how the Zettelkasten works: Finding an entry point is totally separate from the later work. I am nearly exclusively interested in just finding entry points when I use AI.

    After I have generated the entry point, part of the benefit to me of following references and reading papers is the gradual development of a mental map. The more I relied on tools, the less I'd build this map. So, I expect my use of AI to be limited to less than 10 minutes per research interest.

    Or put differently: I avoid being deprived of the very training that allows me to perform AI-supported tasks effectively.

    I am a Zettler

  • I am currently tinkering with the last paragraphs of the second edition of the ZKM book. I tried ChatGPT half a dozen times for just proofreading (grammar and typos). The level of alteration of meaning is so subtle (just a word or two are changed) yet so profound that I sometimes feel that this is sabotage.

    I am a Zettler

  • @Sascha said:
    I am currently tinkering with the last paragraphs of the second edition of the ZKM book. I tried ChatGPT half a dozen times for just proofreading (grammar and typos). The level of alteration of meaning is so subtle (just a word or two are changed) yet so profound that I sometimes feel that this is sabotage.

    This is odd and disturbing. Can you give us an example? Or I guess you are talking about the German language version - I wouldn't grasp the subtleties there at all.

    By the way, I listened to a webinar from the University of British Columbia (my undergraduate school) yesterday on "Mapping out Modern morality". They briefly talked about the moral aspects of ChatGPT (and driverless cars) - I hadn't considered either from that perspective.

  • @GeoEng51 said:
    This is odd and disturbing. Can you give us an example? Or I guess you are talking about the German language version - I wouldn't grasp the subtleties there at all.

    Yes, in German. German is quite a subtle language, with many more small variations that change the meaning of a sentence compared to English. Perhaps in English the changes wouldn't be so distorting.

    But roughly: I wrote "if this is true, that is also true", and ChatGPT changed it to something like "if you understand this, you'll get that". It basically switched the sentence from the ontological layer (statements about the world) to the epistemic layer (statements about a subject perceiving the world). In German, the change in meaning was even bigger.

    It was especially annoying since I explicitly asked for just correcting typos and grammar.

    By the way, I listened to a webinar from the University of British Columbia (my undergraduate school) yesterday on "Mapping out Modern morality". They briefly talked about the moral aspects of ChatGPT (and driverless cars) - I hadn't considered either from that perspective.

    Especially if people don't understand morality at all.

    1. AI means centralisation of morality. A small group of people impacts a huge number of decisions (e.g. a gazillion driverless vehicles programmed by a small group of people). Decentralisation of morality is a major part of a functioning society. A rather banal example is dog training. The "force-free" method people push for laws that basically make punishment illegal. The result will be the death of many people, dogs, and other animals, since there are many dogs that need punishment (with fair means) because of their character, and there are a lot of dogs that are aggressive because they never experienced correct boundaries. Centralisation of morality is immoral in itself.
    2. Ethics are very poorly understood. A simple example is the is-ought problem. Only ivory-tower philosophers can stick to such a position, since the is and the ought are interlocked, which is lived by people who are still connected to a more natural way of living. And more: if you accept a materialist view (meaning an atheist view), ethics are part of the natural world, and the ought developed out of the is (evolutionarily). So, it is not the case that you cannot infer the ought from the is, because the ought is caused by the is. (This is explained a little sloppily in my English...)

    I am a Zettler

  • @Sascha "AI means centralisation of morality" - I can see that. I also think the terms of the morality are hidden, so that the user is less likely to be aware of it.

  • @GeoEng51 said:
    @Sascha "AI means centralisation of morality" - I can see that. I also think the terms of the morality are hidden, so that the user is less likely to be aware of it.

    I need to think about that. My gut reaction is that the locus of morality is in the acting individual, who in this case is the user. If you offload all your thinking to the AI, morality is hidden from the user because all of cognition is now opaque.

    But it might contradict my example of self-driving cars.

    I am a Zettler

  • @Sascha said:
    It was especially annoying since I explicitly asked for just correcting typos and grammar.

    I've tried ChatGPT a bit over the last two weeks, and I agree it sucks for translation and grammar/spelling things.

    I was hoping that ChatGPT would be good at providing "terminology" for new areas of interest that I may have, but within areas I (think I) am knowledgeable about it takes me a lot of prying to get ChatGPT to be helpful. That could also be an issue with me querying incorrectly, but I'm not sure.

    I did look at a few guides on how to ask stuff and found it helpful to explicitly ask for a source. That way, ChatGPT becomes just an easier way to collect source material, almost like a regular search engine ...
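
    If you drive this through the API instead of the chat window, "ask for a source" becomes a standing system instruction. A minimal sketch, assuming the official OpenAI Python client; the model name and question are placeholders:

    ```python
    # Hypothetical sketch: demand a source for every claim, so the reply
    # serves as search leads rather than as an authority.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "For every factual claim, name a verifiable source "
                        "(author, title, year). If you know none, say so."},
            {"role": "user",
             "content": "Give me the key terminology of X."},  # X: your area of interest
        ],
    )
    print(response.choices[0].message.content)
    ```

    The named sources still have to be verified by hand, of course; the instruction only makes the model's claims checkable.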

  • The first client successfully used ChatGPT for generating tags. I highlight the word "successfully", since it was genuinely value-creating with few side effects.

    I don't understand it well enough to make a definitive case. And, sadly, the current practice of "borrowing" ideas prevents me from thinking in public freely. Still, the generic shape of such a prompt is harmless to sketch:
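
    A hypothetical sketch, assuming the OpenAI Python client; the note text and the fixed tag vocabulary are my own assumptions, nothing client-specific:

    ```python
    # Hypothetical sketch: let an LLM suggest tags for a note, constrained
    # to an existing tag vocabulary so it cannot invent new terms.
    from openai import OpenAI

    client = OpenAI()

    note = "Prilepin's chart relates load (%1RM) to reps per set."  # made up
    vocabulary = ["training", "strength", "programming", "zettelkasten"]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (f"Suggest up to 3 tags for the note below. Choose only "
                        f"from {vocabulary}. Reply with a comma-separated list."
                        f"\n\n{note}"),
        }],
    )

    # Parse defensively: keep only tags that really are in the vocabulary.
    raw = response.choices[0].message.content
    tags = [t.strip().lower() for t in raw.split(",")]
    print([t for t in tags if t in vocabulary])
    ```

    The constraint to an existing vocabulary is what keeps the side effects small: at worst the model picks a wrong tag; it cannot pollute the tag list.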

    I am a Zettler

  • Speaking about ChatGPT and morality, I've found that ChatGPT has become less creative as it has incorporated more of the so-called guardrails. Anecdotal accounts on the r/ChatGPT subreddit seem to confirm this. AI is fulfilling Nietzsche's critique of morality in the pejorative sense: under the limitations of "alignment," not only will AI be less productive, but it will also produce no spectacles of genius and nothing that could restore an affective attachment to life. The military and some commercial uses of AI will not be subject to alignment.

    † Leiter, Brian. "The Truth Is Terrible." The Journal of Nietzsche Studies 49, no. 2 (2018): 151-173. muse.jhu.edu/article/711018.


    Abstract:

    The “terrible” existential truths about the human situation raise Schopenhauer’s question: why continue living at all? Nietzsche’s answer is that only viewed in terms of aesthetic values can life itself be “justified” (where “justification” really means restoring an affective attachment to life). But how could the fact that life exemplifies aesthetic value restore our attachment to life in the face of these terrible existential truths? I suggest that there are two keys to understanding Nietzsche’s answer: first, his assimilation of aesthetic pleasure to a kind of sublimated sexual pleasure; and second, his psychological thesis that powerful affects neutralize pain, and thus can “seduce” the sufferer back to life. Life can supply the requisite kind of aesthetic pleasure only if it features what I call the “spectacle of genius,” the spectacle represented by the likes of Beethoven, Goethe, and Napoleon. Since such geniuses are not possible in a culture dominated by “morality” (in Nietzsche’s pejorative sense), the critique of morality is essential to the restoration of an affective attachment to life.

    ChatGPT-4 disagrees with me but makes a logical error: it assumes that genius arises from a lack of constraints. This is not the case: genius is hampered by morality in the pejorative sense, not by the self-imposed constraints of the artist or of work done under constraints other than morality in the pejorative sense.

    Here's a link to the chat, for those interested.

    GitHub. Erdős #2. CC BY-SA 4.0. Problems worthy of attack / prove their worth by hitting back. -- Piet Hein. Armchair theorists unite, you have nothing to lose but your meetings! --Phil Edwards

  • A general comment: Perplexity.ai >> chat.openai.com

    I am a Zettler

  • Another LLM to test:

    **Claude 2 from Anthropic**

    There are a few key differences between the "Talk to Claude" and "Request Access" options on the Anthropic website when it comes to using Claude:

    "Talk to Claude" allows you to have a brief conversation with Claude right on the website. This is a demo version of Claude meant to give people a quick interaction.
    "Request Access" is for getting full access to the Claude AI assistant. This requires filling out an application and being approved by Anthropic.
    The full Claude assistant is not free. Anthropic charges for access to the complete Claude AI assistant based on usage.
    The "Talk to Claude" demo on the website is a limited version of Claude meant for quick tests and demos. The full Claude assistant that you get access to through the application process is much more powerful and featured.
    So in summary:

    "Talk to Claude" is a free limited demo on the website
    "Request Access" is for getting approved to use the paid, full Claude assistant through an application process.
    You cannot freely use the full Claude 2 assistant without being approved and paying according to your usage. The demo is free but limited.

    David Delgado Vendrell
    www.daviddelgado.cat
