• Pennomi@lemmy.world · 2 days ago

    Eh I’m fine with the illegal harvesting of data. It forces the courts to revisit the question of what copyright really is and hopefully erodes the stranglehold that copyright has on modern society.

    Let the companies fight each other over whether it’s okay to pirate every video on YouTube. I’m waiting.

      • selokichtli@lemmy.ml · 2 days ago

        Yeah… Nothing to see here, people, go home, work harder, exercise, and don’t forget to eat your vegetables. Of course, family first and god bless you.

    • Electricblush@lemmy.world · 2 days ago

      I would agree with you if the same companies challenging copyright (which protects the intellectual and creative work of “normies”) were not also aggressively wielding copyright against the same people they are stealing from.

      With the amount of corporate power tightly integrated with governmental bodies in the US (and now with DOGE dismantling oversight), I fear that whatever comes out of this is that humans own nothing and corporations own everything: the death of free, independent thought and creativity.

      Everything you do, say, and create becomes instantly marketable and sellable by the major corporations, and you get nothing in return.

      The world needs something a lot more drastic than a copyright reform at this point.

      • cyd@lemmy.world · 2 days ago

        It’s seldom the same companies, though; there are two camps fighting each other, like Godzilla vs. Mothra.

      • cyd@lemmy.world · 2 days ago

        That article is overblown. People need to configure their websites to be more robust against traffic spikes, news at 11.

        Disrespecting robots.txt is bad netiquette, but honestly this sort of gentleman’s agreement is always prone to cheating. At the end of the day, when you put something on the net for people to access, you have to assume anyone (or anything) can try to access it.
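
        To make the “gentleman’s agreement” point concrete: robots.txt is purely advisory. Nothing on the server side enforces it; a crawler honors it only if it chooses to check. A minimal sketch using Python’s standard urllib.robotparser (the site URL and user agent are placeholders):

        ```python
        import urllib.robotparser

        # robots.txt is advisory: the server cannot enforce it, so a
        # polite crawler must choose to consult it before fetching.
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url("https://example.com/robots.txt")  # placeholder site
        rp.read()

        # A well-behaved bot gates every request on this check;
        # a rude one simply skips it and fetches anyway.
        print(rp.can_fetch("MyCrawler/1.0", "https://example.com/private/page"))
        ```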

        • naught@sh.itjust.works · 1 day ago

          You think Red Hat & friends are just all bad sysadmins? SourceHut, maybe…

          I think there’s a bit of both: poorly optimized/antiquated sites and a gigantic spike in unexpected and persistent bot traffic. The typical mitigations do not work anymore.

          Not every site is, and not every site should have to be, optimized for hundreds of thousands of requests a day or more. Just because they can be doesn’t mean it’s worth the time, effort, or cost.
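
          For context on why the typical mitigations stop working: the standard defense is per-client rate limiting, roughly like the sketch below (thresholds are made up for illustration). It holds up against one greedy client, but a crawler fleet that rotates through thousands of IPs never trips any single counter.

          ```python
          import time
          from collections import defaultdict

          WINDOW = 60     # seconds (illustrative threshold)
          MAX_HITS = 120  # requests allowed per IP per window

          _hits: dict[str, list[float]] = defaultdict(list)

          def allow(ip: str) -> bool:
              """Classic per-IP sliding-window limiter: fine against one
              abusive client, useless when the same load is spread across
              thousands of source IPs that each stay under the threshold."""
              now = time.time()
              _hits[ip] = [t for t in _hits[ip] if now - t < WINDOW]
              if len(_hits[ip]) >= MAX_HITS:
                  return False
              _hits[ip].append(now)
              return True
          ```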

      • interdimensionalmeme@lemmy.ml · 2 days ago

        In this case they just need to publish the code as a torrent. Nobody would set up a crawler if all the data were already available in a torrent swarm.
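
        As a rough sketch of what publishing a tree as a torrent looks like, using the python-libtorrent bindings (the directory name and tracker URL are placeholders; any torrent-creation tool would do the same job):

        ```python
        import libtorrent as lt  # python-libtorrent bindings

        # Describe the source tree as a torrent so crawlers can pull it
        # from the swarm instead of hammering the web server.
        fs = lt.file_storage()
        lt.add_files(fs, "my-source-tree")  # placeholder directory to publish
        t = lt.create_torrent(fs)
        t.add_tracker("udp://tracker.example.org:6969/announce")  # placeholder tracker
        lt.set_piece_hashes(t, ".")  # hash file pieces, relative to the parent dir

        with open("my-source-tree.torrent", "wb") as f:
            f.write(lt.bencode(t.generate()))
        ```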