• rottingleaf@lemmy.world · 6 hours ago

    The roadblock, to my understanding (data science guy not biologist), is the time it takes to discover these things/how long it would take evolution to get there. Admittedly that’s still somewhat quantitative.

    Yes.

    But it’s the nature of these models to remain within (or close to) the corpus of knowledge they were trained on.

    That’s fundamentally solvable.

    I’m not against attempts at global artificial intelligence, just against one approach to it. Also, no matter how much we want to pretend it’s something general, what we in fact want is something that thinks like a human.

    What all these companies like DeepSeek and OpenAI have been doing lately with “chain-of-thought” models is, in my opinion, what they should have been focused on all along: how do you organize data for a symbolic logic model, how do you generate and check syllogisms, and how do you then synthesize algorithms from those syllogisms? There seems to be something like a chicken-and-egg problem between logic and algebra: in such a system each seems necessary for the other, yet they depend on each other (for a machine, that is; humans keep a few things constant for most of our existence). And the predictor into which they’ve invested so much data is a minor part which doesn’t have to be so powerful.
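
    To make the “generate and check syllogisms” part concrete, here is a toy sketch of one way checking could work (my own illustration, not anything these companies have published): treat a syllogistic form as valid exactly when no interpretation of its terms over a small universe makes the premises true and the conclusion false, and test that by brute force.

```python
# Toy syllogism checker: a form is valid iff no interpretation of the
# terms S, M, P over a tiny universe satisfies every premise while
# falsifying the conclusion. Illustrative sketch only.
from itertools import product

def subsets(universe):
    """Every subset of a small finite universe."""
    return [frozenset(x for x, keep in zip(universe, bits) if keep)
            for bits in product([False, True], repeat=len(universe))]

def all_are(a, b):   # categorical statement "All A are B"
    return a <= b

def some_are(a, b):  # categorical statement "Some A are B"
    return len(a & b) > 0

def syllogism_valid(premises, conclusion, universe=range(3)):
    """premises/conclusion are (relation, term, term) triples over S, M, P."""
    rel_c, xc, yc = conclusion
    for s, m, p in product(subsets(universe), repeat=3):
        terms = {"S": s, "M": m, "P": p}
        if (all(rel(terms[x], terms[y]) for rel, x, y in premises)
                and not rel_c(terms[xc], terms[yc])):
            return False  # counterexample found
    return True

# Barbara: All M are P; All S are M; therefore All S are P -> valid
print(syllogism_valid([(all_are, "M", "P"), (all_are, "S", "M")],
                      (all_are, "S", "P")))   # True

# Darii: All M are P; Some S are M; therefore Some S are P -> valid
print(syllogism_valid([(all_are, "M", "P"), (some_are, "S", "M")],
                      (some_are, "S", "P")))  # True

# Undistributed middle: All S are M; All P are M; so All S are P -> invalid
print(syllogism_valid([(all_are, "S", "M"), (all_are, "P", "M")],
                      (all_are, "S", "P")))   # False
```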

    • maniclucky@lemmy.world · 5 hours ago

      I’m not against attempts at global artificial intelligence, just against one approach to it. Also, no matter how much we want to pretend it’s something general, what we in fact want is something that thinks like a human.

      Agreed. The techbros pretending that the stochastic parrots they’ve created are general AI annoys me to no end.

      While not as academically cogent as your response (totally not feeling inferior at the moment), it has struck me that LLMs would make a fantastic input/output layer for a greater system, analogous to the Wernicke and Broca areas of the brain. It seems like they’re trying to get a parrot to swim by having it do literally everything. I suppose what sticks in my craw is the giveaway: they’ve promised that this one technique (more or less, I know it’s more complicated than that) can do literally everything a human can, which should be an entire parade of red flags to anyone with a drop of knowledge of data science or fraud. I know it’s hypothetically supposed to be a universal function approximator, but the gap between hypothesis and practice is very large, and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together, as sketched below).
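
      To make that Wernicke/Broca picture concrete, here is a minimal sketch of the division of labor (every function below is a hypothetical stand-in, not a real API): the language model only parses requests in and renders answers out, while a specialized module does the actual work in between.

```python
# Toy sketch of "specialized models that work together": the LLM is
# only the language interface; a dedicated solver sits in the middle.
# All functions are hypothetical stand-ins for illustration.

def llm_parse(utterance: str) -> dict:
    """Language in (the 'Wernicke' role): free text -> structured request.
    A real system would call a language model here; this is canned."""
    return {"task": "arithmetic", "args": [2, 2]}

def specialist(request: dict) -> int:
    """A small dedicated module that actually solves the task."""
    if request["task"] == "arithmetic":
        return sum(request["args"])
    raise ValueError(f"no specialist for {request['task']!r}")

def llm_render(result: int) -> str:
    """Language out (the 'Broca' role): structured answer -> prose."""
    return f"The answer is {result}."

print(llm_render(specialist(llm_parse("what is two plus two?"))))
# -> "The answer is 4."
```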

      Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.

      • rottingleaf@lemmy.world · 2 hours ago

        While not as academically cogent as your response

        An elegant way to make someone feel ashamed for using many smart words, ha-ha.

        I know it’s hypothetically supposed to be a universal function approximator, but the gap between hypothesis and practice is very large, and we’re dumping a lot of resources into filling in the canyon (chucking more data at the problem) when we could be building a bridge (creating specialized models that work together).

        The metaphor is correct. I think it’s some social mechanism that makes them choose the brute-force solution first. Spending more resources to achieve the same result would usually be a downside, but if it’s a resource otherwise not in demand, one that only the stronger parties, like corporations and governments, possess in sufficient amounts, then it can become an upside for someone by shifting the balance.

        And LLMs already appear good enough to build captcha-solving machines, machines for faking proof images or videos, fraudulent chatbots, or machines that predict someone’s (or some crowd’s) responses well enough to play them. So I’d say commercially they’re already successful.

        Now that I’ve used a whole lot of cheap metaphor on someone who casually dropped ‘syllogism’ into a conversation, I’m feeling like a freshman in a grad-level class. I’ll admit I’m nowhere near up to date on specific models and bleeding-edge techniques.

        We-ell, it’s just hard to describe the idea without using that word, but I haven’t even finished my BS yet (lots of procrastinating, running away, and long interruptions), and the only bit of up-to-date knowledge I had was what DeepSeek prints when answering, so.

        • maniclucky@lemmy.world · 24 minutes ago

          An elegant way to make someone feel ashamed for using many smart words, ha-ha.

          Unintentional, I assure you.

          I think it’s some social mechanism that makes them choose the brute-force solution first.

          I feel like it’s simpler than that. Ye olde “when all you have is a hammer, everything’s a nail”. Or in this case, when you’ve built the most complex hammer in history, you want everything to be a nail.

          So I’d say commercially they’re already successful.

          Definitely. I’ll never write another cover letter. In their use case, they’re solid.

          but I haven’t even finished my BS yet

          Currently working on my master’s after being in industry for a decade. The paper is nice, but actually applying the knowledge is poorly taught (IMHO, YMMV), and being willing to learn independently has served me better than my BS in EE.