Brian Eno has spent decades pushing the boundaries of music and technology, but when it comes to artificial intelligence, his biggest concern isn’t the tech — it’s who controls it.
Eh I’m fine with the illegal harvesting of data. It forces the courts to revisit the question of what copyright really is and hopefully erodes the stranglehold that copyright has on modern society.
Let the companies fight each other over whether it’s okay to pirate every video on YouTube. I’m waiting.
So far, the result seems to be “it’s okay when they do it”
Yeah… Nothing to see here, people, go home, work harder, exercise, and don’t forget to eat your vegetables. Of course, family first and god bless you.
I would agree with you if the same companies challenging copyright (the very thing protecting the intellectual and creative work of “normies”) were not also aggressively wielding copyright against the same people they are stealing from.
With the amount of corporate power tightly integrated with governmental bodies in the US (and now with DOGE dismantling oversight), I fear that whatever comes out of this is that humans own nothing and corporations own everything. The death of free, independent thought and creativity.
Everything you do, say, and create becomes instantly marketable and sellable by the major corporations, and you get nothing in return.
The world needs something a lot more drastic than copyright reform at this point.
It’s seldom the same companies, though; there are two camps fighting each other, like Godzilla vs. Mothra.
AI scrapers illegally harvesting data are destroying smaller and open-source projects. Copyright law is not the only victim:
https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
That article is overblown. People need to configure their websites to be more robust against traffic spikes, news at 11.
Disrespecting robots.txt is bad netiquette, but honestly this sort of gentleman’s agreement is always prone to cheating. At the end of the day, when you put something on the net for people to access, you have to assume anyone (or anything) can try to access it.
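For context, robots.txt is purely advisory: whether it is honored is decided entirely by the crawler. A minimal Python sketch (the domain and user-agent string here are hypothetical) shows that the check is something the client has to choose to run:

    # Honoring robots.txt is opt-in: the crawler has to ask and then obey.
    # Hypothetical domain and user-agent; nothing enforces this server-side.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.org/robots.txt")
    rp.read()  # fetch and parse the file

    url = "https://example.org/private/report.html"
    if rp.can_fetch("ExampleBot/1.0", url):
        print("Allowed by robots.txt, fetching:", url)
    else:
        # A polite crawler stops here; an impolite one simply skips this check.
        print("Disallowed by robots.txt, skipping:", url)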
You think Red Hat & friends are just all bad sysadmins? SourceHut, maybe…
I think there’s a bit of both: poorly optimized/antiquated sites and a gigantic spike in unexpected and persistent bot traffic. The typical mitigations do not work anymore.
Not every site is, and not every site should have to be, optimized for hundreds of thousands of requests a day or more. Just because they can be doesn’t mean it’s worth the time, effort, or cost.
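To illustrate what “typical mitigations” usually means here, a rough sketch of a per-IP token bucket (the rate numbers are arbitrary). A throttle like this holds up against one aggressive client, but a crawl spread across thousands of addresses stays under every per-IP limit, which is why this classic defense stops being enough on its own:

    # Rough sketch of a per-client-IP token bucket; numbers are arbitrary.
    import time
    from collections import defaultdict

    RATE = 5    # tokens added per second per IP
    BURST = 20  # maximum bucket size per IP

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(ip: str) -> bool:
        b = buckets[ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1:
            b["tokens"] -= 1
            return True
        return False  # over the limit: serve a 429 or drop the request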
In this case they just need to publish the code as a torrent. You wouldn’t set up a crawler if all the data was already available in a torrent swarm.
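For a rough sense of what publishing as a torrent involves (the file name and tracker URL below are made up), a .torrent file is just a bencoded dictionary containing SHA-1 hashes of fixed-size pieces of the content, which any client in the swarm can then fetch and reseed:

    # Minimal single-file .torrent builder; file name and tracker are made up.
    import hashlib

    PIECE_LEN = 256 * 1024  # 256 KiB pieces

    def bencode(obj):
        # Bencoding: ints are i<n>e, byte strings are <len>:<bytes>,
        # lists are l...e, dicts are d...e with keys sorted.
        if isinstance(obj, int):
            return b"i%de" % obj
        if isinstance(obj, str):
            obj = obj.encode()
        if isinstance(obj, bytes):
            return b"%d:%s" % (len(obj), obj)
        if isinstance(obj, list):
            return b"l" + b"".join(bencode(x) for x in obj) + b"e"
        if isinstance(obj, dict):
            return b"d" + b"".join(bencode(k) + bencode(obj[k]) for k in sorted(obj)) + b"e"
        raise TypeError(type(obj))

    def make_torrent(path, announce):
        data = open(path, "rb").read()
        pieces = b"".join(hashlib.sha1(data[i:i + PIECE_LEN]).digest()
                          for i in range(0, len(data), PIECE_LEN))
        return bencode({
            "announce": announce,
            "info": {"name": path, "length": len(data),
                     "piece length": PIECE_LEN, "pieces": pieces},
        })

    # Hypothetical usage: dump the repo to a tarball first, then seed the result.
    # open("repo-dump.tar.torrent", "wb").write(
    #     make_torrent("repo-dump.tar", "http://tracker.example.org/announce"))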