• ColeSloth@discuss.tchncs.de · 36 points · 10 months ago

    Pretty soon, within a few years, the internet will be almost completely ruined. AI bots will have spammed everything. Searches and web pages will be entirely faked BS. Reddit and Lemmy will have enough AI bots commenting and pushing agendas/products that no one will have a clue who's a real person. True information will be almost impossible to verify online.

    In short, if you think the web has gotten bad now, you ain’t seen nothin yet.

• ParanoiaComplex@lemmy.world · 18 points · 10 months ago

      I agree with the sentiment, but the absence of AI never stopped SEO hacking in the past. Sure, AI will help them go farther, but there are already tons of garbage websites gaming the top 1-5 results of any search.

  • ColeSloth@discuss.tchncs.de · 2 points · 10 months ago

        The top results pages, sure. I believe it's going to take over the top 500, along with flooding places like Lemmy and Reddit.

  • rottingleaf@lemmy.zip · 2 points · 10 months ago

        I remember when SEO spam made search engines less rewarding than web directories, web rings, asking people on forums, etc. Those were slower, but they gave you results (and acquaintances), while using search meant digging through dozens of pages of results that were mostly SEO garbage.

  • EmergMemeHologram@startrek.website · 1 point · 10 months ago

        I think it’s just a new world for spam.

        At some point, probably soon, AI content will generate so much data it becomes untenable to store all the scraped data.

        We'll also reach a point where it becomes much more costly to screen scraped data for AI spam, trustworthiness, and topic relevance. If you need LLMs just to filter spam, that's a large step up in cost and infrastructure versus current methods.

        When that happens, what happens to search? Either the quality degrades or the margins drop off sharply.

  • ColeSloth@discuss.tchncs.de · 1 point · 10 months ago

        They've already been trying to use AI to detect AI-written college and high school papers. So far it's been severely ineffective. AI has gotten pretty good at writing text that looks real. If AI improves enough, I doubt there'll be much of a way to identify it at all.

    • ColeSloth@discuss.tchncs.de · 1 point (1 downvote) · 10 months ago

            You're looking at it in a flawed manner. AI has already been making up sources and names and stating things as facts. If there are a hundred websites claiming the earth is flat and you ask an AI whether the earth is flat, it may tell you it is and cite those websites. That's already happening. Now imagine topics more opinionated than hard, observable scientific facts. Imagine a government using AI to shape opinion and claim there was no insurrection on Jan 6th. Thousands of websites and comments could quickly be fabricated to confirm it was all made up, burying the truth in obscurity.