Re: Geneiva is Geneva Switzerland according to AI


Viewing 21 posts - 1 through 21 (of 21 total)
  • #2312139
    ywnjudy
    Participant

    I had queried the web whether it’s geneiva to park a bike on a public sidewalk.
    So it kicks back at me the laws of parking on sidewalks in Geneva.

    #2312191
    ujm
    Participant

    You must have been bored today.

    #2312197
    commonsaychel
    Participant

    Rhetorically asking: is there anyone here who finds cream-sauce herring (such as Flaum’s & Golden Taste) too sweet? If so, is there any brand or variety of herring that you or your acquaintances prefer, one that isn’t overly salty either?

    #2312215
    Menachem Shmei
    Participant

    Excuse me, please do not insult AI.
    You are referring to the old fashioned way of searching on Google.

    Indeed, when I Googled “is it geneiva to park a bike on a public sidewalk” all of the results were about Switzerland.

    However, when I posed the same question to AI (ChatGPT), this was the beautiful response:

    “Parking a bike on a public sidewalk is generally not considered **geneiva** (theft), as public sidewalks are meant for public use. However, it could be an issue of **gezel harabim** (stealing from the public) if parking the bike obstructs pedestrian traffic or violates local laws.

    Halachically, causing inconvenience or damage to the public can be problematic, even if no direct theft is involved. Therefore, it’s best to ensure the bike is parked in a way that doesn’t block the sidewalk or violate regulations.”

    #2312229
    akuperma
    Participant

    AI is a computer program. It has whatever intelligence the programmer has and no more. It can calculate fast. If a fool programs it, the AI will spout foolishness. “Garbage In, Garbage Out” is as valid as it always was.

    #2312216
    yechiell
    Participant

    that’s funny

    #2312365

    I think Google search now makes a decision (or combines one after another) whether to (1) do a simple search, (2) look up a knowledge graph that contains previously collected structured info, such as people’s birthplace, job, etc. (this is partially taken from Wikipedia/Wikidata), or (3) call an AI similar to ChatGPT. In this case, it seems that this initial decision-making was at fault, and calling a GPT directly gives a better result.
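    The routing idea described above can be sketched as a toy dispatcher. All the rules, names, and categories below are invented for illustration; Google's actual routing is a learned system, not a handful of string prefixes:

    ```python
    # Hypothetical sketch of a search front-end deciding whether a query
    # should go to a plain index lookup, a knowledge graph, or an LLM.

    def route_query(query: str) -> str:
        q = query.lower()
        # Fact-style questions about entities suit a knowledge graph.
        if q.startswith(("who is", "when was", "where was")):
            return "knowledge_graph"
        # Open-ended or normative questions ("is it permitted...") are
        # better handed to a generative model.
        if q.startswith(("is it", "should i", "why")):
            return "llm"
        # Everything else falls back to classic keyword search.
        return "web_search"

    print(route_query("who is the mayor of Geneva"))
    print(route_query("is it geneiva to park a bike on a public sidewalk"))
    ```

    On this toy logic, the original question would have been routed to the LLM, matching the thread's observation that asking ChatGPT directly worked better.
    
    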

    #2312399

    @akuperma you seem to have no clue how LLM (Large Language Model) AI works; otherwise you wouldn’t have written the nonsense you wrote with such certainty.

    #2312403
    BaltimoreMaven
    Participant

    CD …. Schwartz original Shtiglitz is amazing

    #2312670

    Non > how LLMs (Large Language Models) AI work

    Well, first, nobody, not even the LLM itself, knows exactly how it works, of course…

    Note that while an LLM is not fully coded by the programmer, it is dependent on the training data that the programmer fed into it. One hypothesis from an expert is that the current generation of LLMs is the best we will ever have (contrary to the usual always-improving trajectory of tech). The explanation: an LLM learns from publicly available text. The next generation of human writers will write with the help of current LLMs, directly or indirectly, increasing quantity while decreasing quality. Thus, the next generation of LLMs will essentially be learning from the noisy data its predecessors generated. This is an AI version of “yeridas hadoros”.

    This is somewhat believable, as we already see that people using the Internet are not always smarter than people reading seforim… I am hopeful, though, that we still have enough informed individuals who understand the problem and will start training LLMs on a combination of Plato, Aristotle, Gemora, and Rashi.
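    The degradation worry above can be made concrete with a deliberately exaggerated toy: each "generation" trains a greedy bigram model on the previous generation's text, then regenerates the corpus with it. The corpus and model are invented and far simpler than a real LLM, but they show the direction of the effect: when a model feeds on its own most-likely output, diversity can only shrink, never grow.

    ```python
    # Toy "yeridas hadoros" simulation: train on own output, lose diversity.
    from collections import Counter, defaultdict

    def train_bigram(corpus):
        """Map each word to its single most frequent successor."""
        follows = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.split()
            for a, b in zip(words, words[1:]):
                follows[a][b] += 1
        return {w: c.most_common(1)[0][0] for w, c in follows.items()}

    def regenerate(corpus, model, length=5):
        """Rewrite each sentence greedily, starting from its first word."""
        out = []
        for sentence in corpus:
            word = sentence.split()[0]
            words = [word]
            for _ in range(length - 1):
                if word not in model:
                    break
                word = model[word]
                words.append(word)
            out.append(" ".join(words))
        return out

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "a dog ran in the park",
    ]

    for gen in range(3):
        vocab = {w for s in corpus for w in s.split()}
        print(f"generation {gen}: {len(vocab)} distinct words")
        corpus = regenerate(corpus, train_bigram(corpus))
    ```

    After one pass the vocabulary collapses from eleven distinct words to six and never recovers; real model-on-model training is noisier, but this is the mechanism the hypothesis points at.
    
    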

    #2313003
    modern
    Participant

    I have personally seen ChatGPT make up stuff.

    #2313113

    modern, you are right: LLMs use the statistics of the word salad they were trained on to produce similar-looking word salad. In truth, many human writers operate in a similar way! Traditional statistics and machine learning deal with producing results that have statistical validity. After the initial excitement and marketing blitz, players seem to be starting to work on adding actual facts to the nice-looking text. You might search for actual facts in, say, Wikipedia, with all the obvious caveats. Some Wikipedia data is already organized in machine-readable form (see Wikidata) and used for answering queries. There are proprietary sources of reliable data as well.
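    The "adding actual facts" idea above, in miniature: check a generated claim against a small machine-readable fact store standing in for something like Wikidata. The fact table, function name, and three-way verdict here are all invented for illustration:

    ```python
    # Toy fact-checking of a generated claim against structured data.
    FACTS = {
        ("Geneva", "country"): "Switzerland",
        ("Geneva", "language"): "French",
    }

    def verify(subject, prop, claimed):
        truth = FACTS.get((subject, prop))
        if truth is None:
            return "unknown"        # no data: cannot confirm or deny
        return "supported" if truth == claimed else "contradicted"

    print(verify("Geneva", "country", "Switzerland"))  # supported
    print(verify("Geneva", "country", "France"))       # contradicted
    print(verify("Basel", "country", "Switzerland"))   # unknown
    ```

    The hard part in practice is extracting (subject, property, value) triples from free text, not the lookup itself.
    
    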

    #2313329
    akuperma
    Participant

    NonImpeditiRationeCogitationis: So if you train an LLM on stuss, what makes you think it will produce wisdom? Like every computer program since the first ones in the 1940s, “garbage in, garbage out” is as valid as ever. In a practical legal sense, the programmer as well as the user should be strictly liable for injuries “caused” by an AI.

    #2313346

    akuperma, right. One kollelnik did some research several years ago about self-driving cars and who is liable for their life-and-death decisions. Most likely the developer. For example, if the car needs to resolve the famous dilemma of whether to continue straight and kill person set {A} or turn and kill person set {B}, it is conceivable that the car could ID all the people, evaluate their net worth, value to humanity, insurance coverage, and likelihood to sue, and make a decision based on factors that are not according to halakha. Are you allowed to drive a car like that?

    #2313348

    Be careful what you write in public domain. Your grandchildren will be learning from LLMs that were trained on your posts.

    #2313724
    Yserbius123
    Participant

    @Always_Ask_Questions We know how LLMs work, we just don’t know every detail of every machination in its network. From a technical standpoint, there’s little difference between LLMs and a Magic Eight Ball with weighted results that change at random for every question.
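    Taking the "Magic Eight Ball with weighted results that change at random" quip above literally, purely as a toy: sample an answer from a weighted distribution, then perturb the weights so the next question sees slightly different odds. Everything here is invented to match the joke, not any real system:

    ```python
    # A weighted Magic Eight Ball whose weights drift after each question.
    import random

    ANSWERS = ["It is certain", "Ask again later", "Very doubtful"]

    def ask(question, weights, rng):
        # Draw one answer according to the current weights...
        answer = rng.choices(ANSWERS, weights=weights, k=1)[0]
        # ...then perturb the weights, as the post describes, keeping
        # each weight strictly positive.
        for i in range(len(weights)):
            weights[i] = max(0.1, weights[i] + rng.uniform(-0.2, 0.2))
        return answer

    rng = random.Random(42)
    weights = [1.0, 1.0, 1.0]
    for q in ["Is it geneiva?", "Is it geneiva?"]:
        print(q, "->", ask(q, weights, rng))
    ```
    
    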

    #2313892

    YS, right. At the same time, many established machine learning methods have non-magic properties, such as measurable out-of-sample error and generalization guarantees (for example, bounds based on VC dimension).
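    One of those non-magic properties, sketched with a holdout set: fit a 1-nearest-neighbour classifier on toy 1-D data (invented numbers) and measure its error on points it never saw, separately from its training error, which for 1-NN is zero here:

    ```python
    # Estimating out-of-sample error with a train/test split.
    def nn_predict(train, x):
        # Label of the closest training point.
        return min(train, key=lambda p: abs(p[0] - x))[1]

    def error_rate(train, data):
        wrong = sum(1 for x, y in data if nn_predict(train, x) != y)
        return wrong / len(data)

    # Toy rule: label is 1 for x >= 5, else 0; (4.5, 1) is a noisy point.
    points = [(0, 0), (1, 0), (2, 0), (4, 0), (4.5, 1),
              (5, 1), (6, 1), (7, 1), (8, 1), (9, 1)]
    train, test = points[::2], points[1::2]

    print("in-sample error:    ", error_rate(train, train))  # 0.0
    print("out-of-sample error:", error_rate(train, test))   # 0.2
    ```

    The gap between the two numbers is exactly what generalization theory is about: the noisy training point costs nothing in-sample but misclassifies a held-out neighbour.
    
    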

    #2313903
    Menachem Shmei
    Participant

    Be careful what you write in public domain. Your grandchildren will be learning from LLMs that were trained on your posts.

    Not just your grandchildren!

    When Ishai Ribo had the concert in Madison Square Garden, someone told me that there was kol isha at the concert. I decided to search it up, and googled “Did Ishai Ribo concert have kol isha.”
    Google was just rolling out the AI overview feature, and it responded that yes, there was indeed mixed seating and kol isha.
    When I checked the source, it was from a comment on the YWN article! 😄

    (I get that Google AI Overview may work differently from a regular LLM like ChatGPT. I just thought this was funny.)

    #2314230

    Menachem, right, good example. But YWN works similarly to Wikipedia: you expect a discussion and a variety of opinions on a controversial topic. So, if there is a statement that is not contradicted, then Google probably uses a shtika kmodeh heuristic.

    #2314390
    Menachem Shmei
    Participant

    So, if there is a statement that is not contradicted, then Google probably uses shtika kmodeh heuristic

    Unfortunately, I doubt they are so sophisticated.

    #2314578

    > I doubt they are so sophisticated.

    Most decisions are based on machine learning that uses both manually crafted and automatically generated features, so it is usually not possible to find out exactly what a decision is based on. But a count of supporting and contradictory statements seems like a reasonable feature to use.
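    The shtika kmodeh feature suggested above, as a one-line toy rule; the function name and threshold are invented, and a real ranker would feed such counts into a learned model rather than apply them directly:

    ```python
    # "Silence is agreement": trust an uncontradicted, supported claim.
    def trust_claim(support: int, contradict: int) -> bool:
        """Treat a claim with at least one supporter and no objectors as conceded."""
        return support >= 1 and contradict == 0

    print(trust_claim(support=1, contradict=0))   # True: nobody objected
    print(trust_claim(support=5, contradict=2))   # False: the claim is disputed
    ```
    
    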

    As to how far someone would go in optimizing the system: a high-end web designer went public some years ago complaining about his work for Google. He had designed a good-looking blue line on the Google main page. Google engineers tested his hue against another and found that the other hue produced a 5% improvement in some metric, so they switched to the plainer-looking hue despite the artist’s protests.
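    The hue anecdote above is a classic A/B test. A hedged sketch of how such a comparison is typically decided, with invented numbers: compare the click-through rates of two variants using a two-proportion z-test.

    ```python
    # Two-proportion z-test for an A/B experiment (toy numbers).
    import math

    def z_score(clicks_a, n_a, clicks_b, n_b):
        pa, pb = clicks_a / n_a, clicks_b / n_b
        p = (clicks_a + clicks_b) / (n_a + n_b)           # pooled rate
        se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
        return (pb - pa) / se

    # Variant B's hue gets ~5% relatively more clicks, as in the story.
    z = z_score(clicks_a=10_000, n_a=100_000, clicks_b=10_500, n_b=100_000)
    print(f"z = {z:.2f}")  # well above 1.96, so significant at the 5% level
    ```

    With samples that large, even a small relative lift is statistically decisive, which is why the engineers could overrule the designer's taste with a number.
    
    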
