Ah, my bad. I assumed that the English Wikipedia article would imply that there is an English translation. Quite a rare case, in my experience, that there is a German translation and not an English one.
In the book, they interact with a central AI that integrates all of human thinking. It acts like an amplifier, in the sense that it returns rational answers to questions; in the first part of the book, the basic axioms are modified:
The basic principle, for 500 years, was that the needs of humanity come first. Then they encountered a lot of alien species at varying stages of development, while humans were still by far the most developed. Instinctively, they invested a lot of resources to improve the living conditions of the other species. But resistance formed, because some saw this as wasting resources to the detriment of humanity. Then one of the higher-ups challenges the axioms in order to move on to the next step of (ethical) evolution and accommodate the obvious intrinsic good of improving the conditions of other life forms.
The true problem of AI is the idiots. Gemini with its black Nazis is just the tip of the iceberg.
AI would be something truly awesome if we treated it the same as a calculator:
Example benefit: vastly accelerated calculations such as logical analysis, tone analysis etc.
Example problem: if you outsource actions that you can't do yourself, you'll never learn them. (This is why children need to memorise multiplication tables, binomial formulas etc. before they can justify using complex statistics software.)
When I get to engage more with fiction writing, I will build myself (meaning: I either pester Christian or somebody else to do the technical stuff for me) a very awesome writing trainer for style, logical consistency etc.
Perhaps I will do the above for non-fiction writing even earlier.
One just needs a clear understanding of how to design the feedback loops, so you don't get dumber to the point where you plateau because of the negative feedback. The plateau should be generated by you maxing out your (human) potential.

(I drank coffee, so I rambled...)

I am a Zettler
I've used ChatGPT4 to develop a Zettel critique assistant, which mainly checks whether my notes conform to the standard format I've written down. It also checks--or tries to check--for a single focus unless the note is a structure note. I'm not asking ChatGPT4 to write for me; I'm asking for suggestions based on instructions I have already given.
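The mechanical core of such a format check is easy to sketch. Here is a toy version in Python — the template fields and the single-focus heuristic are invented for illustration and are not the assistant's actual rules:

```python
import re

# Hypothetical Zettel template: one H1 title line plus a line of #tags.
# A real checker would encode whatever standard format the notes follow.
REQUIRED_PATTERNS = {
    "title": re.compile(r"^# .+", re.MULTILINE),
    "tags": re.compile(r"^#\w+( #\w+)*$", re.MULTILINE),
}

def critique(zettel: str) -> list[str]:
    """Return a list of format complaints for one note."""
    problems = []
    for name, pattern in REQUIRED_PATTERNS.items():
        if not pattern.search(zettel):
            problems.append(f"missing {name}")
    # Crude single-focus heuristic: a second H1 suggests a second topic.
    if len(re.findall(r"^# ", zettel, re.MULTILINE)) > 1:
        problems.append("more than one top-level heading (single focus?)")
    return problems

print(critique("# An example note\n#zettelkasten #ai\nBody text."))  # -> []
```

Of course, the point of asking ChatGPT instead is that "single focus" is a judgment call no regex can make.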
My other custom instructions to ChatGPT4 have been maddening since they caused the AI to respond without conviction. "Respond in a matter-of-fact, professional tone. Remain neutral unless directed to express an opinion." Those instructions were a regrettable mistake. I want a close-minded advocate, a lawyer who argues on my behalf, not an open-minded mediator who can see the merit in the arguments of my adversaries and sworn enemies. Well, not seeing the point is the AI's specialty. I wrote the instructions; now I have to live with them.
GitHub. Erdős #2. Problems worthy of attack / prove their worth by hitting back. -- Piet Hein. Alter ego: Erel Dogg (not the first). CC BY-SA 4.0.
I've used ChatGPT4 to develop a Zettel critique assistant, which mainly checks whether my notes conform to the standard format I've written down. It also checks--or tries to check--for a single focus unless the note is a structure note. I'm not asking ChatGPT4 to write for me; I'm asking for suggestions based on instructions I have already given.
So, you use ChatGPT 4 as a "lexer" for checking a kind of standardisation of your zettels? I am curious: what kind of format do you look for? Do you use an "automated batch scanning procedure", or do you give zettels one by one to the beast?
I want a close-minded advocate, a lawyer who argues on my behalf, not an open-minded mediator who can see the merit in the arguments of my adversaries and sworn enemies. Well, not seeing the point is the AI's specialty. I wrote the instructions; now I have to live with them.
Loyalty is a special quality nowadays. Are its arguments reliable? Pertinent? Interesting?
The true problem of AI is the idiots. Gemini with its black Nazis is just the tip of the iceberg.
Well, from what I understand of your story (a society ruled by AI as a supreme guide), idiots are not the only problem at hand, and I can't see why there's hope here. People killed each other over interpretations of the Bible. Dogmas can be an absurd hammer in human hands.
When I get to engage more with fiction writing, I will build myself (meaning: I either pester Christian or somebody else to do the technical stuff for me) a very awesome writing trainer for style, logical consistency etc.
Like Hemingway or the stylizing tool of iA Writer? (Well, have fun, Christian.) I saw an application with integrated AI which does that, but adapted for blogging. I can see the potential, especially in self-publishing.
What I would really need is something able to answer extremely burning and precise questions with accuracy, such as "can we find glass windows on buildings in Japan in 1812?" or "can a civilisation discover lasers without a petrochemical industry?" or "if I mix egg tempera with oil paint, do I need resins to harden the paint layer?" I know that very knowledgeable people could answer these, but I don't know them all personally, or they are unavailable at 2 am, strangely.
The true problem of AI is the idiots. Gemini with its black Nazis is just the tip of the iceberg.
Well, from what I understand of your story (a society ruled by AI as a supreme guide), idiots are not the only problem at hand, and I can't see why there's hope here. People killed each other over interpretations of the Bible. Dogmas can be an absurd hammer in human hands.
I was purely talking about the real AI story. The AI in the book is just a big calculator, more of an integrator of the various ideas and thinking processes of the people. (And most likely it is part of the Soviet propaganda that there is some central supreme intellect you can lean on...)
An example of proper usage of AI is the rule-testing of ethical codices. In practical philosophy (which is rarely practical at all in the way it is taught), you basically calculate the decision by a combination of deductive and inductive thinking based on a certain set of axioms and rules. This is where AI shines. However, that is all AI can do: it can assist with the bureaucratic aspect of ethics.
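The "calculation" in the deductive part can be made concrete: deriving a verdict from axioms and if-then rules is plain forward chaining. A toy sketch in Python, with axioms and rules invented purely for illustration:

```python
# Toy forward chaining: derive conclusions from axioms and if-then rules.
# The "ethical" content here is invented for illustration only.
axioms = {"improves_conditions_of_others", "costs_resources"}
rules = [
    ({"improves_conditions_of_others"}, "intrinsically_good"),
    ({"intrinsically_good", "costs_resources"}, "requires_justification"),
]

def derive(facts, rules):
    """Apply rules until no new conclusions appear (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(derive(axioms, rules)))
```

This is exactly the bureaucratic part: given the axioms, the conclusions follow mechanically. What the machine cannot do is tell you whether the axioms are any good.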
What happened with Gemini was that soulless ethical bureaucrats designed an artificial bureaucrat that also responds to ethical queries.
But, I hope, it is just another speed bump before we reach the age of Star Trek.
When I get to engage more with fiction writing, I will build myself (meaning: I either pester Christian or somebody else to do the technical stuff for me) a very awesome writing trainer for style, logical consistency etc.
Like Hemingway or the stylizing tool of iA Writer? (Well, have fun, Christian.) I saw an application with integrated AI which does that, but adapted for blogging. I can see the potential, especially in self-publishing.
Think of a 10x Hemingway. The AI will have good taste, not just a single-minded metric against which to measure the quality of the text.
I've used ChatGPT4 to develop a Zettel critique assistant, which mainly checks whether my notes conform to the standard format I've written down. It also checks--or tries to check--for a single focus unless the note is a structure note. I'm not asking ChatGPT4 to write for me; I'm asking for suggestions based on instructions I have already given.
So, you use ChatGPT 4 as a "lexer" for checking a kind of standardisation of your zettels? I am curious: what kind of format do you look for? Do you use an "automated batch scanning procedure", or do you give zettels one by one to the beast?
I use the template here: Zettel - GitHub.com

One at a time. The GPT instructions are online at https://github.com/flengyel/Zettel-Critique-Assistant-GPT.
AI is dangerous. Be quirky, out of step, and skeptical of AI's infringements.
Essay questions on Texas STAAR tests (student achievement) will be 75% graded by AI this year.
By edict from the State, schools will be completing part of their duties with AI. In a curious deviation from integrity, students remain barred from its use. Some epistemological methods available to the education system will not be available to its students.
Does that sound desirable? Are educators becoming obsolete?
I'm with Cogley from Star Trek's "Court Martial" episode. Paraphrasing his fictional wisdom: "This is where [education] is. Not in that homogenised, pasteurised synthesiser. Do you want [knowledge], the ancient concepts in their own language, to learn the intent of the men who wrote them, from Moses to the tribunal of Alpha 3? Books!"
And teachers, hard work, and research inspired by mentors. That's just as true now as in Stardate 2947.3, when Cogley's words rang so prophetically.
@Amontillado said:
AI is dangerous. Be quirky, out of step, and skeptical of AI's infringements.
Corporations with the deepest pockets have largely written intellectual property law. The case of Large Language Models will be no different. Expect a significant carve-out for OpenAI, Microsoft, and others.
In a recent interview, Microsoft's CEO commented about overly restrictive copyright protections cramping their style in the age of AI. Perhaps, now that the NY Times has OpenAI's attention, OpenAI and Microsoft might look into the economic arguments of Against Intellectual Monopoly by David K. Levine and Michele Boldrin. Microsoft might have a problem asking the court to consider relaxing the regulations it relies on, but this is not likely to be a roadblock in our Plutocracy.
GitHub. Erdős #2. Problems worthy of attack / prove their worth by hitting back. -- Piet Hein. Alter ego: Erel Dogg (not the first). CC BY-SA 4.0.