

“Fair use” is the exact opposite of what you’re saying here. It says that you don’t need to ask for any permission. The judge ruled that obtaining illegitimate copies was unlawful, but that use without the creators’ consent is perfectly fine.
Of course they’re not “three laws safe”. They’re black boxes that spit out text. We don’t have enough understanding and control over how they work to force them to comply with the three laws of robotics, and the LLMs themselves do not have the reasoning capability or the consistency to enforce them even if we prompt them to.
Many times these keys are obtained illegitimately and they end up being refunded. In other cases the key is bought from another region so the devs do get some money, but far less than they would from a regular purchase.
I’m not sure exactly how the illegitimate keys are obtained, though. Maybe in trying to not pay the publisher you end up rewarding someone who steals people’s credit cards or something.
They work the exact same way we do.
Two things being difficult to understand does not mean that they are the exact same.
NVMe drives are claiming sequential write speeds of several GBps (capital B as in bytes). The article talks about 10Gbps (lowercase b as in bits), so 1.25GBps. Even with raw storage writes, the NVMe drive might not be the bottleneck in this scenario.
And then there’s the fact that disk writes are buffered in RAM. These motherboards are not available yet, so we’re talking about future PC builds, and it’s safe to say that many of them will be used in systems with 32GB of RAM. If you’re idling or doing light activity while waiting for a download to finish, most of your RAM will be free and you should be able to buffer 25-30GB before storage speed becomes a factor.
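Back-of-envelope, if you want to check those numbers yourself (the free-RAM figure is just an assumption for a mostly idle 32GB machine, not a measurement):

```python
# Rough math for the claim above; the free-RAM value is an assumption, not a benchmark.

LINK_GBPS = 10                   # 10 Gb/s network link from the article
link_gb_per_s = LINK_GBPS / 8    # bits -> bytes: 1.25 GB/s of incoming data
print(f"Link throughput: {link_gb_per_s:.2f} GB/s")

# Assume a mostly idle 32 GB system with roughly 26 GB free to act as a write cache.
free_ram_gb = 26
seconds_of_headroom = free_ram_gb / link_gb_per_s
print(f"~{seconds_of_headroom:.0f} s (~{free_ram_gb} GB) downloaded before storage speed matters")
```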
From the article:
“Those joining from unsupported platforms will be automatically placed in audio-only mode to protect shared content.”
and
“This feature will be available on Teams desktop applications (both Windows and Mac) and Teams mobile applications (both iOS and Android).”
So this is actually worse than just blocking screen capturing. It will break video calls for some setups for no reason at all, since all it takes to defeat the protection is a phone camera - one of the most common objects in the world.
The only thing I’ve been claiming is that AI training is not copyright violation
What’s the point? Are you talking specifically about some model that was trained and then put on the shelf, never to be used again? Because that’s not what people are talking about when they say that AI has a copyright issue. I’m not sure if you missed the point or this is a failed “well, actually” attempt.
It can’t be both. It’s not self-driving. That’s just what they call it to oversell it. I’m assuming they had to add the “Supervised” part for legal reasons.
Learning what a character looks like is not a copyright violation
And nobody claimed it was. But you’re claiming that this knowledge cannot possibly be used to make a work that infringes on the original. This analogy about whether brains are copyright violations makes no sense and is not equivalent to your initial claim.
Just find the case law where AI training has been ruled a copyright violation.
But that’s not what I claimed is happening. It’s also not the opposite of what you claimed. You claimed that AI training is not even in the domain of copyright, which is different from something that is possibly in that domain but has been ruled non-infringing. Also, this all started with you responding to another user saying the copyright situation “should be fixed”. As in, they (and I) don’t agree that the current situation is fair. A ruling on what the law currently is cannot settle whether it should change, so citing one makes no sense here.
Honestly, none of your responses have actually supported your initial position. You’re constantly moving to something else that sounds vaguely similar but is neither equivalent to what you said nor a direct response to my objections.
The NYT was just one example. The Mario examples didn’t require any such techniques. Not that it matters. Whether it’s easy or hard to reproduce such an example, it is definitive proof that the information can in fact be encoded in some way inside of the model, contradicting your claim that it is not.
If it was actually storing the images it was being trained on then it would be compressing them to under 1 byte of data.
Storing a copy of the entire dataset is not a prerequisite to reproducing copyright-protected elements of someone’s work. Mario’s likeness itself is a protected work of art even if you don’t exactly reproduce any (let alone every) image that contained him in the training data. The possibility of fitting the entirety of the dataset inside a model is completely irrelevant to the discussion.
This is simply incorrect.
Yet evidence supports it, while you have presented none to support your claims.
When an AI trains on data it isn’t copying the data, the model doesn’t “contain” the training data in any meaningful sense.
And what’s your evidence for this claim? It seems to be false given the times people have tricked LLMs into spitting out verbatim or near-verbatim copies of training data. See this article as one of many examples out there.
People who insist that AI training is violating copyright are advocating for ideas and styles to be covered by copyright.
Again, what’s the evidence for this? Why do you think that of all the observable patterns, the AI will specifically copy “ideas” and “styles” but never copyrighted works of art? The examples from the above article contradict this as well. AIs don’t seem to be able to distinguish between abstract ideas like “plumbers fix pipes” and specific copyright-protected works of art. They’ll happily reproduce either one.
That sounds weird to me. How big is the population of people who are technical enough to check which certificate provider you’re using, yet ignorant enough to think Let’s Encrypt is bad because it’s free?
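(If anyone does want to check, the Python standard library is enough. A minimal sketch, with example.com standing in for whatever site you’re curious about:)

```python
# Minimal sketch: print which CA issued a site's certificate (hostname is just an example).
import socket
import ssl

def cert_issuer(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' comes back as nested tuples of (field, value) pairs; flatten to a dict.
    return {field: value for rdn in cert["issuer"] for field, value in rdn}

print(cert_issuer("example.com"))
# A Let's Encrypt-issued site will show organizationName == "Let's Encrypt".
```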
“Gender” means nothing without context. By a MAGA’s definition of gender, this policy doesn’t protect trans people, for example. We don’t know how this rule will be interpreted in practice. Even if you don’t consider the intent behind making this change, this is objectively a weaker guarantee of protection than what we had with “gender identity and expression”.
Law enforcement AI is a terrible idea and it doesn’t matter whether you feed it “false facts” or not. There’s enough bias in law enforcement that the data is essentially always poisoned.
The problem with any excuse you make for Elon is that Elon is too stupid to keep his mouth shut and give the excuse any plausibility. After the nazi salute he went on Twitter to make nazi puns about it. It is certain beyond reasonable doubt that he knows exactly what the salute was. Even if you give him the insane benefit of the doubt that it was really “his heart going out” and it accidentally looked like the salute, the fact that he has shown he knows what it looks like, yet has never stated that he doesn’t believe in the ideology or want to present himself as an ally to nazis, is just as damning.
Maybe in some cases. But I’ve been requested by Google support to provide a video for a very simple and clear issue we were having. We have a contract with them and we personally brought up the issue to a Google employee during a call. There was no concern of AI generated bullshit, but they still wouldn’t respond without a video. So maybe there’s more to this trend than what you’re theorizing.
I find that very unlikely to happen. If AI training is accepted as fair use by the legal system, then AI companies have a motive to keep copyright as restrictive as possible: it protects their work while letting them use everyone else’s. If you hate copyright law (and you should), AI is probably your enemy, not your ally.
Bold of you to assume this wasn’t always the plan for Pokemon Go. A ton of online services are basically designed from the get-go to be mass surveillance machines, and the founders know they’re eventually going to be sold as exactly that.
I see. Thanks for sharing. This will be good to know next time I’m looking for a printer.
Sign the petition even if it has surpassed 1 million signatures by the time you read this! The signatures will be verified after the petition closes, and any number of them could be removed. We don’t want to barely make it. Let’s go as high as possible!