Wow, AI researchers are not only adopting philosophy jargon, they're starting to cover some familiar territory: the distinction between signifier (language) and signified (reality).
The problem is that spoken language is vague, colloquial, and subjective, so it can never, on its own, produce something specific, universal, or objective.
I dived deep into AI research when the bubble first started with ChatGPT 3.5. It turns out most AI researchers are philosophers, because until recently there was very little on the technical side to discuss. Neural networks and machine learning were rudimentary, and many proposals were purely theoretical. Generative AI, meaning LLMs and image generators, existed as philosophical proposals before real technological prototypes were built. Much of the field grew out of epistemology mixed with neuroscience and DevOps. It's only a relatively recent trend that the Wall Street tech bros have inserted themselves and come to dominate the space.