• Even_Adder@lemmy.dbzer0.com · 3 months ago

    You should read this letter by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.

    Why are scholars and librarians so invested in protecting the precedent that training AI LLMs on copyright-protected works is a transformative fair use? Rachael G. Samberg, Timothy Vollmer, and Samantha Teremi (of UC Berkeley Library) recently wrote that maintaining the continued treatment of training AI models as fair use is “essential to protecting research,” including non-generative, nonprofit educational research methodologies like text and data mining (TDM). If fair use rights were overridden and licenses restricted researchers to training AI on public domain works, scholars would be limited in the scope of inquiries that can be made using AI tools. Works in the public domain are not representative of the full scope of culture, and training AI on public domain works would omit studies of contemporary history, culture, and society from the scholarly record, as Authors Alliance and LCA described in a recent petition to the US Copyright Office. Hampering researchers’ ability to interrogate modern in-copyright materials through a licensing regime would mean that research is less relevant and useful to the concerns of the day.

    • wewbull@feddit.uk · 3 months ago

      I would disagree, because I don’t see research into AI as something of value to preserve.

      • Even_Adder@lemmy.dbzer0.com · 3 months ago

        This isn’t about research into AI; what some people want will impact all research, criticism, analysis, and archiving. Please re-read the letter.