Instead of answering this question, I’ll direct you to some tangential research that may help you answer it yourself. I’d like you to read a bit about different ethical frameworks (you can just wiki that one), then apply that to some of the openly available policies, contracts, and practices of the company. At that point you should have your answer. Thank you in advance for doing your own research 😉
Thanks for letting me know up front that you can’t answer the question
Well, let me just ask you a question: how much say should an AI have in the decision to kill a human being? What percentage do you think is appropriate?
What makes you think you can measure the percentage of say an AI has in the decision to kill a human? Even if we pretend “it had 100% say” were true, it wouldn’t matter; it would still be a human who ordered the deployment and who is responsible for the decision.