• yesman@lemmy.world
    1 day ago

    Wow, AI researchers are not only adopting philosophy jargon, but they’re starting to cover some familiar territory: the difference between the signifier (language) and the signified (reality).

    The problem is that spoken language is vague, colloquial, and subjective. Therefore it can never produce anything specific, universal, or objective.

    • dustyData@lemmy.world
      1 day ago

      I took a deep dive into AI research when the bubble first started with ChatGPT 3.5. It turns out most AI researchers are philosophers, because until recently there was very little on the technical side to discuss. Neural networks and machine learning were very basic, and many of the proposals were theoretical. Generative AI, like LLMs and image generators, existed as philosophical proposals before real technological prototypes were built. A lot of it comes from epistemological analysis mixed with neuroscience and devops. It’s only a relatively recent trend that the Wall Street techbros have inserted themselves into and dominated the space.