• brucethemoose@lemmy.world
    9 days ago

    And in that case, would a community fork of Llama stay the same as Meta’s version? We’re talking about AI with considerable development behind it; companies probably wouldn’t participate, because the license isn’t open source and its clauses restrict them in those respects.

    Llama has tons of commercial use even with its “non open” license, which is basically just a middle finger to companies the size of Google or Microsoft. And yes, companies would keep using the old weights like nothing changed… because nothing did. Just like they keep using open source software that goes through drama.

    Also, consider this: if a new version of Llama under the new license is three times better than Llama under the previous license, do you really think the community will keep developing the previous version?

    Honestly I have zero short term worries about this because the space is so fiercely competitive. If Llama 3 was completely closed, the open ecosystem would have been totally fine without Meta.

    Also much of the ecosystem (like huggingface and inference libraries) is open source and out of their control.

    And if they go API only, they will just get clobbered by Google, Claude, Deepseek or whomever.

    In the longer term… these transformer-style models will be obsolete anyway. Honestly I have no idea what the landscape will look like.

    • MCasq_qsaCJ_234@lemmy.zip
      9 days ago

      Well, I agree that we don’t know what the situation will look like over time.

      There may be a ceiling that triggers another AI winter, driving companies away for a while because they invested a lot of money and got little back.

      Transformers may stay relevant or end up obsolete; either way, plenty of papers related to AI in one way or another are still being published.

      • brucethemoose@lemmy.world
        9 days ago

        The limit is already (apparently) starting to be data… and capital, lol.

        There could be a big compute breakthrough, though, like, say, fast bitnet training, that makes the multimodal approach much easier to train.
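        For anyone unfamiliar with what bitnet refers to: the core idea is quantizing weights down to ternary values, roughly the "1.58-bit" scheme from the BitNet b1.58 work. A minimal sketch of that absmean-style quantization (function name and details are my own illustration, not any official API):

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    # Absmean scaling: normalize weights by the mean absolute value,
    # then round each one to the nearest value in {-1, 0, +1}.
    scale = np.mean(np.abs(w)) + eps
    q = np.clip(np.round(w / scale), -1, 1)
    return q, scale

# Example: a small weight vector collapses to ternary values.
q, scale = ternary_quantize(np.array([0.5, -2.0, 0.01]))
```

        The appeal is that matrix multiplies against {-1, 0, +1} weights reduce to additions and subtractions, which is why cheap training/inference in that regime would be a big deal.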

        • MCasq_qsaCJ_234@lemmy.zip
          9 days ago

          I don’t think this approach will win companies over, because they prefer more raw power and building things in-house; they don’t want their ideas replicated.