Brian Eno has spent decades pushing the boundaries of music and technology, but when it comes to artificial intelligence, his biggest concern isn’t the tech — it’s who controls it.
If you studied loads of classic art and then started making your own, would that be a derivative work? Because that’s how AI works.
The presence of watermarks in output images is just a side effect of the prompt and its similarity to the training data. If you ask for a picture of an Olympic swimmer wearing a purple bathing suit, and it turns out that only a hundred or so images in the training set match that description, and most of them included a watermark, you can end up with a kinda-sorta similar watermark in the output.
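To put numbers on that, here’s a toy sketch (made-up data and sizes, nothing to do with any real model’s training set) of how a ghost watermark falls out of the statistics when a model regresses toward the average of what matched the prompt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 tiny 8x8 grayscale "photos" that all match the
# same prompt, 90 of which carry a bright watermark patch in the corner.
images = rng.uniform(0.0, 1.0, size=(100, 8, 8))
images[:90, 6:, 6:] = 1.0  # the watermark: a 2x2 block of white pixels

# A model that pulls its output toward the conditional mean of the matching
# training images (a crude stand-in for a generator) inherits the watermark
# as a visible ghost, even though no single training image is copied.
conditional_mean = images.mean(axis=0)

print("typical background pixel:", round(conditional_mean[0, 0], 2))  # ~0.5
print("watermark-area pixel:", round(conditional_mean[7, 7], 2))      # ~0.95
```

That isn’t how a diffusion model literally works, but the statistical pull is the same: the watermark shows up because it dominated the matching slice of the training data.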
It is absolutely 100% evidence that they used watermarked images in their training. Is that a problem, though? I wouldn’t think so, since they’re not distributing those exact images, just images that are “kinda sorta” similar.
If you try to get an AI to output an image that matches someone else’s image nearly exactly… is that the fault of the AI, or of the end user who specifically asked for something that would violate another’s copyright (a derivative work)?
Sounds like a load of techbro nonsense.
By that logic, mirroring an image would suffice to count as a derivative work, since it’s only “kinda sorta similar”: it’s not the original, and 0% of the pixels match the source (a quick check below makes the point).
“And the machine, it learned to flip the image by itself! Like a human!”
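If anyone doubts the pixel claim, here’s a minimal check (a random, asymmetric image stands in for an arbitrary photo):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random (hence non-symmetric) 256x256 8-bit image as a stand-in for an
# arbitrary photo with no mirror symmetry.
original = rng.integers(0, 256, size=(256, 256))
mirrored = original[:, ::-1]  # flip left-to-right

# Pixel-for-pixel agreement between source and mirror is pure chance,
# about 1/256 for 8-bit values, yet the mirror is obviously derived
# from the original.
match_rate = (original == mirrored).mean()
print(f"matching pixels: {match_rate:.2%}")  # roughly 0.4%
```

Which is the point: raw pixel overlap tells you nothing about whether one image was derived from another.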
It’s a predictive keyboard on steroids; let’s not pretend that it can create anything but noise with no input.
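The “predictive keyboard” framing is easy to demonstrate at toy scale. A minimal bigram sketch (a toy corpus, obviously nothing like a real model’s scale, but the same point about conditioning on input):

```python
from collections import Counter, defaultdict

# Toy corpus: the model can only ever emit words it has seen, in orders
# it has seen.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Most frequent continuation; with nothing to condition on (a word the
    # model never saw), there is no sensible prediction at all.
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("the"))    # 'cat' (ties broken by first occurrence)
print(predict("piano"))  # None: no input signal, no prediction
```

A real model replaces the bigram table with billions of learned parameters, but the basic contract is the same: no conditioning input, no meaningful output.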