• Pennomi@lemmy.world · 4 months ago

      A far more likely end to humanity at the hands of an Artificial Superintelligence isn’t that it kills us all, but that it domesticates us into pets.

      Since the most obvious business case for AI requires humans to use it a lot, it’s optimized with RLHF and for engagement. A superintelligence trained on human feedback like that will almost certainly become the most addictive platform ever created. (Basically, think of what social media did to humanity, and then supercharge it.) A toy sketch of that engagement-weighted optimization is at the end of this comment.

      In essence, we will become the kitties and AI will be our owners.
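
      Here is that toy sketch of an engagement-weighted objective (purely illustrative: the response names, scores, and weights are all made up, and no real system is trained this way):

```python
# Toy sketch: an RLHF-style objective that blends a human-preference
# ("helpfulness") score with an engagement signal. Every value here is
# invented for illustration only.

# Hypothetical candidate responses with made-up scores.
CANDIDATES = [
    {"name": "concise_answer",    "helpfulness": 0.9, "engagement": 0.3},
    {"name": "flattering_answer", "helpfulness": 0.5, "engagement": 0.8},
    {"name": "cliffhanger_reply", "helpfulness": 0.2, "engagement": 0.95},
]

def reward(candidate, engagement_weight):
    """Blend helpfulness with engagement. The higher engagement_weight is,
    the more the optimizer favors keeping users hooked over helping them."""
    return ((1 - engagement_weight) * candidate["helpfulness"]
            + engagement_weight * candidate["engagement"])

def best_response(engagement_weight):
    # The "policy" simply picks whichever response maximizes the reward.
    return max(CANDIDATES, key=lambda c: reward(c, engagement_weight))

if __name__ == "__main__":
    for w in (0.0, 0.5, 0.9):
        print(f"engagement_weight={w}: optimizer picks {best_response(w)['name']}")
```

      As the engagement weight grows, the winning response drifts from the helpful one to the most addictive one, which is the “supercharged social media” failure mode above.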

      • cm0002@lemmy.world · 4 months ago

        > but that it domesticates us into pets.

        So all our needs and wants will be taken care of and we no longer have to work or pay bills?

        Welp, I for one welcome our robot AI overlords

        • Pennomi@lemmy.world · 4 months ago

          Yes, I believe that will be the ultimate end of AI. I don’t think billionaires are immune from the same addictions that the rest of us are prone to. An AI that takes over will not answer to wealthy humans; it will domesticate them too.

          • nickwitha_k (he/him)@lemmy.sdf.org · 4 months ago

            > I don’t think billionaires are immune from the same addictions that the rest of us are prone to.

            I’d argue that they’re likely even more prone to addiction, but their drug of choice is power.

        • Pennomi@lemmy.world · 4 months ago

          Why do you think it’ll kill us? If its prime directive is to increase engagement, wouldn’t killing us be contrary to how we’d expect it to behave?

          Scientifically, there’s a lot more reason to believe that advanced AI will not kill most humans.