• Dirac@lemmy.today · 3 days ago

    Instead of answering this question, I’ll direct you to some tangential research that may help you answer this question yourself. I’d like you to read a bit on different ethical frameworks (you can just wiki that one), then I’d like you to apply that to some of the openly available policies, contracts and practices of the company. At that point you should have your answer. Thank you in advance for doing your own research 😉

    • Fizz@lemmy.nz · 3 days ago

      Thanks for letting me know right at the start that you can’t answer the question.

      • Dirac@lemmy.today · 2 days ago

        Well let me just ask you a question: how much say should an AI have in the decision to kill a human being? What percentage do you think is appropriate?

        • Fizz@lemmy.nz · 13 hours ago

          What makes you think you can measure the percentage of say an AI has in the decision to kill a human? Even if we pretend “it had 100% say” were true, it wouldn’t matter; a human would still have ordered the deployment and would be responsible for the decision.