Re: Geneiva is Geneva Switzerland according to AI
- This topic has 20 replies, 11 voices, and was last updated 3 months, 1 week ago by Always_Ask_Questions.
September 6, 2024 10:58 am #2312139 ywnjudy (Participant)
I had queried the www whether it's geneiva to park a bike on a public sidewalk.
So it kicked back at me results about the laws of parking on sidewalks in Geneva.
September 7, 2024 8:29 pm #2312191 ujm (Participant)
You must have been bored today.
September 7, 2024 8:29 pm #2312197 commonsaychel (Participant)
Rhetorically asking: is there anyone here who finds cream-sauce herring (such as Flaum's & Golden-Taste) too sweet? If so, is there any brand or variety of herring that you or your acquaintances prefer, and that isn't overly salty either?
September 7, 2024 8:29 pm #2312215 Menachem Shmei (Participant)
Excuse me, please do not insult AI.
You are referring to the old-fashioned way of searching on Google. Indeed, when I Googled "is it geneiva to park a bike on a public sidewalk," all of the results were about Switzerland.
However, when I posed the same question to AI (ChatGPT), this was the beautiful response:
“Parking a bike on a public sidewalk is generally not considered **geneiva** (theft), as public sidewalks are meant for public use. However, it could be an issue of **gezel harabim** (stealing from the public) if parking the bike obstructs pedestrian traffic or violates local laws.
Halachically, causing inconvenience or damage to the public can be problematic, even if no direct theft is involved. Therefore, it’s best to ensure the bike is parked in a way that doesn’t block the sidewalk or violate regulations.”
September 7, 2024 8:29 pm #2312229 akuperma (Participant)
AI is a computer program. It has whatever intelligence its programmer has and no more. It can calculate fast, but if a fool programs it, the AI will spout foolishness. "Garbage in, garbage out" is as valid as it always was.
September 7, 2024 8:29 pm #2312216 yechiell (Participant)
That's funny.
September 7, 2024 9:33 pm #2312365 Always_Ask_Questions (Participant)
I think Google search now decides (or combines one method after another) whether to (1) do a simple keyword search, (2) look up a knowledge graph of previously collected structured info, such as a person's birthplace or job (this is partially taken from Wikipedia/Wikidata), or (3) call an AI model similar to ChatGPT. In this case, it seems this initial routing decision was at fault, and calling a GPT directly worked better.
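A toy sketch of the routing idea described above (my own invention, with made-up rules, not anything Google has published): classify the query by its shape and pick a handler.

```python
# Hypothetical query router: nothing like Google's real system, just an
# illustration of "decide which backend answers the question".
def route_query(query: str) -> str:
    q = query.lower()
    # (2) entity-style questions -> knowledge-graph lookup
    if q.startswith(("who is", "where was", "when was")):
        return "knowledge_graph"
    # (3) open-ended or halachic-style questions -> LLM
    if any(w in q for w in ("is it", "should i", "why ", "how do")):
        return "llm"
    # (1) default: plain keyword search
    return "simple_search"

print(route_query("is it geneiva to park a bike on a public sidewalk"))  # llm
print(route_query("who is the president of Switzerland"))  # knowledge_graph
print(route_query("bike parking laws"))  # simple_search
```

A rule set this crude would send the geneiva question to a plain search unless "is it" happened to match, which is exactly the kind of misrouting the post describes.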
September 8, 2024 9:26 am #2312399 NonImpeditiRationeCogitationis (Participant)
@akuperma, you seem to have no clue how LLM (Large Language Model) AI works; otherwise you wouldn't have written the nonsense you wrote with such certainty.
September 8, 2024 9:28 am #2312403 BaltimoreMaven (Participant)
CD …. Schwartz original Shtiglitz is amazing
September 8, 2024 4:08 pm #2312670 Always_Ask_Questions (Participant)
Non> how LLMs (Large Language Models) AI work

Well, first, nobody (not even the LLM itself) knows exactly how it works, of course…

Note that while an LLM is not fully coded by a programmer, it is dependent on the training data the programmer fed into it. One hypothesis from an expert is that the current generation of LLMs is the best we will ever have (contrary to the usual always-improving pattern of tech). The reasoning: an LLM learns from publicly available text. The next generation of human writers will write with the help of current LLMs, directly or indirectly, increasing quantity while decreasing quality. Thus, the next generation of LLMs will essentially be learning from the noisy data its predecessors generated. This is an AI version of "yeridas hadoros."

This is somewhat believable, as we already see that people using the Internet are not always smarter than people reading seforim… I am hopeful, though, that we still have enough informed individuals who will understand the problem and start training LLMs on a combination of Plato, Aristotle, Gemora, and Rashi.
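The "models learning from their own outputs" worry can be illustrated with a toy simulation (my own sketch with made-up numbers, not a claim about real LLMs): "write" each generation's text by sampling from the current word distribution, then "train" the next model by re-estimating frequencies from that sample. Once a rare word draws zero samples, it can never come back.

```python
# Toy model-collapse simulation: a five-word "language model" retrained on
# its own output. Rare words can only vanish, never reappear.
import math
import random
from collections import Counter

random.seed(0)
vocab = ["the", "a", "torah", "gemora", "rashi"]
probs = [0.4, 0.3, 0.15, 0.1, 0.05]  # invented initial word frequencies

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

print("gen 0 entropy:", round(entropy(probs), 3))
for gen in range(1, 6):
    # next generation "writes" 100 words, then we refit frequencies to them
    sample = random.choices(vocab, weights=probs, k=100)
    counts = Counter(sample)
    probs = [counts[w] / 100 for w in vocab]
    print(f"gen {gen} entropy:", round(entropy(probs), 3),
          "lost:", [w for w, p in zip(vocab, probs) if p == 0])
```

With only five words the effect is mild over a few generations, but the mechanism is the one described: each model estimates the statistics of text the previous model emitted, and any word dropped once is dropped forever.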
September 9, 2024 12:21 am #2313003 modern (Participant)
I have personally seen ChatGPT make up stuff.
September 9, 2024 10:41 am #2313113 Always_Ask_Questions (Participant)
modern, you are right. LLMs use the statistics of the word salad they were trained on to produce similar-looking word salad. In truth, many human writers operate in a similar way! Traditional statistics and machine learning deal with producing results that have statistical validity. After the initial excitement and marketing blitz, players seem to be starting to work on adding actual facts to the nice-looking text. You might search for actual facts in, say, Wikipedia, with all the obvious caveats. Some Wikipedia data is already organized in machine-readable form (see Wikidata) and used for answering queries. There are proprietary sources of reliable data as well.
September 9, 2024 7:29 pm #2313329 akuperma (Participant)
NonImpeditiRationeCogitationis: So if you train an LLM on stuss, what makes you think it will produce wisdom? Like every computer program since the first ones in the 1940s: "garbage in, garbage out." In a practical legal sense, the programmer as well as the user should be strictly liable for injuries "caused" by an AI.
September 10, 2024 10:01 am #2313346 Always_Ask_Questions (Participant)
akuperma, right. One kollelnik did some research several years ago on self-driving cars: who is liable for life-and-death decisions? Most likely the developer. For example, if the car needs to resolve the famous dilemma of whether to continue straight and kill person set {A} or turn and kill person set {B}, it is conceivable that the car could identify all the people and evaluate their net worth, value to humanity, insurance coverage, and likelihood to sue, then make a decision based on factors that are not according to halakha. Are you allowed to drive a car like that?
September 10, 2024 10:01 am #2313348 Always_Ask_Questions (Participant)
Be careful what you write in public domain. Your grandchildren will be learning from LLMs that were trained on your posts.
September 10, 2024 11:57 am #2313724 Yserbius123 (Participant)
@Always_Ask_Questions We know how LLMs work; we just don't know every detail of every machination in the network. From a technical standpoint, there's little difference between an LLM and a Magic Eight Ball with weighted results that change at random for every question.
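The Magic Eight Ball analogy, taken literally (the answers and weights below are invented): sample from a probability distribution over responses. An LLM does something conceptually similar at every token, except a huge network recomputes the weights for each new context instead of keeping them fixed.

```python
# Weighted Magic Eight Ball: a fixed distribution over canned answers.
import random

answers = ["It is certain", "Ask again later", "My sources say no"]
weights = [0.5, 0.3, 0.2]  # invented; in an LLM a network computes fresh
                           # weights for every question and every token

rng = random.Random(8)
draws = [rng.choices(answers, weights=weights, k=1)[0] for _ in range(5)]
print(draws)
```

The analogy's limit is the weights: a real model's "weighted randomness" is conditioned on the entire question, which is why it can look informed rather than random.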
September 11, 2024 10:25 am #2313892 Always_Ask_Questions (Participant)
YS, right. At the same time, many established machine learning methods have non-magic properties, such as out-of-sample error bounds and generalization guarantees (for example, via VC dimension).
September 11, 2024 10:25 am #2313903 Menachem Shmei (Participant)
"Be careful what you write in public domain. Your grandchildren will be learning from LLMs that were trained on your posts."

Not just your grandchildren!
When Ishai Ribo had the concert in Madison Square, someone told me that there was kol isha at the concert. I decided to search it up, and googled “Did Ishai Ribo concert have kol isha.”
Google was just rolling out the AI overview feature, and it responded that yes, there was indeed mixed sitting and kol isha.
When I checked the source, it was from a comment on the YWN article! 😄
(I get that Google AI Overview may be different from a regular LLM like ChatGPT. Just thought this was funny.)
September 12, 2024 8:40 am #2314230 Always_Ask_Questions (Participant)
Menachem, right, good example. But YWN works similarly to Wikipedia: you expect a discussion and a variety of opinions on a controversial topic. So if there is a statement that is not contradicted, then Google probably uses a shtika kmodeh heuristic.
September 12, 2024 11:06 am #2314390 Menachem Shmei (Participant)
"So if there is a statement that is not contradicted, then Google probably uses a shtika kmodeh heuristic"

Unfortunately, I doubt they are so sophisticated.
September 15, 2024 7:56 am #2314578 Always_Ask_Questions (Participant)
> I doubt they are so sophisticated.
Most decisions are based on machine learning that uses both manually crafted and automatically generated features, so it is not usually possible to find out exactly what a decision is based on. But the count of supporting and contradictory statements seems like a reasonable feature to use.
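For illustration only (my guess at the kind of feature, not anything Google has published), a "shtika kmodeh" score could be as simple as a smoothed fraction of supporting statements: trust a claim more when the comments supporting it outnumber the comments contradicting it.

```python
# Hypothetical claim-trust feature: Laplace-smoothed support fraction.
# Smoothing keeps a claim with zero evidence at a neutral 0.5.
def claim_score(supporting: int, contradicting: int) -> float:
    return (supporting + 1) / (supporting + contradicting + 2)

print(claim_score(3, 0))  # uncontradicted claim: 0.8
print(claim_score(3, 3))  # disputed claim: 0.5
```

A real ranking system would feed a score like this into a model alongside hundreds of other features rather than use it alone.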
As to how far someone would go in optimizing the system: one high-end web designer went public some years ago complaining about his work for Google. He designed a good-looking blue line on the Google main page. Google engineers tested his hue against another one and found that the other hue got a 5% improvement on some metric, so they switched to the plainer-looking hue despite the protests of the artist.
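The hue story is a classic A/B test. A minimal sketch with invented numbers of how one hue "wins": compare click-through rates between two groups and ask whether the difference exceeds sampling noise (a two-proportion z-test).

```python
# Two-proportion z-test for an A/B experiment (made-up click counts).
import math

def z_score(clicks_a, n_a, clicks_b, n_b):
    pa, pb = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)            # pooled click rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # std. error of the difference
    return (pb - pa) / se

# invented numbers: hue B gets 5% more clicks than hue A
z = z_score(1000, 10_000, 1050, 10_000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 95% level
```

With these invented counts the lift is not yet statistically significant; at Google-scale traffic, even a tiny real lift quickly becomes significant, which is how a 5% improvement can overrule a designer.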