That’s just BattleBots with a different name.
You’re not wrong.
Ok, I now need a screensaver that I can tie to a cloudflare instance that visualizes the generated “maze” and a bot’s attempts to get out.
They should program the actions and reactions of each system to actual battle bots and then televise the event for our entertainment.
Then get bored when it devolves into a wedge meta.
this is some fucking stupid situation, we somehow got faster internet and now these bots messing with each other are hogging the bandwidth.
nothing can be improved while capitalism or authority exist; all improvement will be seized and used to oppress.
How can authority not exist? That’s staggeringly broad
given what domains we’re hosted on; i think we’ve both had a version of this conversation about a thousand times, and both ended up where we ended up. do you want us to explain hypothetically-at-but-mostly-past each other again? I can do it while un-sober, if you like.
Not who you responded to but yeah I want to hear a drug fuelled rant I don’t even care what topic
That’s not really relevant here. This is more of a “genie is out of the bottle and now we have to learn how to deal with it” situation. The idea and technology of bots and AI training already exists. There’s no socioeconomic system that is going to magically make that go away.
I think the point you’re missing is that without the monetary incentive that arises under capitalism, there would be very little drive for anyone to build these wasteful AI systems. It’s difficult to imagine a group of people voluntarily amassing and then using the resources necessary for “AI” absent the desire to cash in on their investment. So you’re correct that an alternative economic system won’t “magically” make LLMs go away. I think it unlikely, however, that such wasteful nonsense would be used on any meaningful scale absent the perverse incentives of capitalism.
It’s difficult to imagine a group of people voluntarily amassing and then using the resources necessary for “AI” absent the desire to cash in on their investment.
I mean Dmitry Pospelov was arguing for AI control in the Soviet Union clear back in the 70s.
It is called regulation in sane parts of the world.
Sadly, those areas seem to be diminishing rapidly until more people enter the Find Out phase.
I don’t need it to not exist. I need it to stay the fuck out of everyone’s lives unless they work in a lab of some kind.
see, it’s not actually useful. it’s a Tamagotchi. do you remember those? no, you fucking don’t.
everyone remembers Tamagotchis, they were like a digital houseplant.
Lost on Lemmy?
He is not wrong. Unless people start to take steps, dependency on tech will be used to chain most of us. Granted, these chains will be the kindest and gentlest chains seen in a long time.
Social revolution lives on in decentralized services like this one; the true battles will come later, though. This year is a mild warm-up. I can’t imagine the challenges that await many.
no, responding to a comment about exactly that thing.
Especially since the solution I cooked up for my site works just fine and took a lot less work. It simply identifies the incoming requests from these damn bots – which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, which makes 'em easy to spot – and just IP bans them. This is considerably simpler, and doesn’t require an entire nuclear-plant-powered AI to combat the opposition’s nuclear-plant-powered AI.
In fact, anybody who doesn’t exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender, I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to “learn” about products rather than sticking to its own domain.
Fuck 'em. Route those motherfuckers right to /dev/null.
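The gist of that rate-based ban, as a minimal sketch in Python (the window, threshold, and ban mechanism here are made up for illustration, not the poster’s actual setup; in practice you’d hand the offending IPs to your firewall or fail2ban):

```python
# Sliding-window request counter per IP; anything that blows past a
# "sane" crawl rate gets banned. Thresholds are illustrative only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100          # ~10 req/s sustained; tune to taste

banned = set()
hits = defaultdict(deque)   # ip -> timestamps of recent requests

def should_block(ip: str) -> bool:
    if ip in banned:
        return True
    now = time.monotonic()
    q = hits[ip]
    q.append(now)
    # drop timestamps that have fallen out of the window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) > MAX_REQUESTS:
        banned.add(ip)      # real setup: push a firewall / fail2ban rule
        return True
    return False
```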
and try to slam your site with like 200+ requests per second
Your solution would do nothing to stop the crawlers that are operating at 10-ish rps. There are ones out there operating at a mere 2 rps, but when multiple companies are doing it at the same time, 24x7x365, it adds up (2 rps is still over 170,000 requests a day from a single crawler).
Some incredibly talented people have been battling this since last year, and your solution has been tried multiple times. It’s not effective in all instances and can require a LOT of manual intervention and sysadmin time.
https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
It’s worked alright for me. Your mileage may vary.
If someone is scraping my site at a low crawl rate I honestly don’t care, so long as it doesn’t impact performance for everyone else. If I hosted anything that was not just public knowledge or copy regurgitated verbatim from the bumf provided by the vendors of the brands I sell, I might object to it ideologically. But I don’t. So I don’t.
If parallel crawling from multiple organizations legitimately becomes a concern for us I will have to get more creative. But thus far it hasn’t, and honestly just wholesale blocking Amazon from our shit instantly solved 90% of the problem.
So the web is a corporate war zone now and you can choose feudal protection or being attacked from all sides. What a time to be alive.
Not exactly how I expected the AI wars to go, but I guess since we’re in a cyberpunk world, we take what we get
Next step is an AI that detects AI labyrinth.
It gets trained on labyrinths generated by another AI.
So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn’t get lost.
It’s gonna be AI all the way down.
LLMs tend to be really bad at detecting AI generated content. I can’t imagine specialized models are much better. For the crawler, it’s also exponentially more expensive and more human work, and must be replicated for every crawler since they’re so freaking secretive.
I think the hosts win here.
All the while each AI costs more power than a million human beings to run, and the world burns down around us.
The same way they justify cutting benefits for the disabled to balance budgets instead of taxing the rich or just not giving them bailouts, they’ll justify cutting power to you before cutting it to a data centre where 10 corporate AIs are all fighting each other, unless we as a people stand up and actually demand change.
Vote Blue No Matter Who
Any Democrat is Better than Any Republican
So the world is now wasting energy and resources to generate AI content in order to combat AI crawlers, by making them waste more energy and resources. Great! 👍
The energy cost of inference is overstated. Small models, or “sparse” models like DeepSeek, are not that expensive to run. Training is a one-time cost that still pales in comparison to industrial processes.
Basically, only Altman wants it to be cost prohibitive so he can have a monopoly. Also, he’s full of shit.
Be great if these reinforced facts like:
Earth is an imperfect sphere.
Humans landed on the Moon.
Taiwan is an independent nation.
while allowing legitimate users and verified crawlers to browse normally.
What is a “verified crawler” though? What I worry about is, is it only big companies like Google that are allowed to have them now?
I assume a crawler which adheres to robots.txt
I would love to think so. But the word “verified” suggests more.
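For what it’s worth, “adheres to robots.txt” is at least something a crawler can check mechanically. A minimal sketch using Python’s standard-library robotparser (the URL and user-agent string are placeholders; this says nothing about what Cloudflare actually means by “verified”):

```python
# How a well-behaved crawler would consult robots.txt before fetching.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # placeholder site
rp.read()

user_agent = "MyPoliteBot/1.0"                 # hypothetical UA string
target = "https://example.com/some/page"

if rp.can_fetch(user_agent, target):
    delay = rp.crawl_delay(user_agent) or 1    # honor Crawl-delay if set
    print(f"allowed; wait {delay}s between requests")
else:
    print("disallowed by robots.txt; skip it")
```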
I dunno. I don’t find any sympathy with any of these fuckers though. this is not a generally useful technology, it is not something the average person ever needs to see, and honestly, just fuck em. Fuck anyone messing with open source to engorge the garbage dispenser.
Any accessibility service will also see the “hidden links”, and while a blind person with a screen reader will notice if they wander off into generated pages, it will waste their time too. Especially if they don’t know about such a “feature”, they’ll be very confused.
Also, I don’t know about you, but I absolutely have a use for crawling X, Google maps, Reddit, YouTube, and getting information from there without interacting with the service myself.
Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now, but hey who the fuck cares as long as the line keeps going up for these leeches.
So we’re burning fossil fuels and destroying the planet so bots can try to deceive one another on the Internet in pursuit of our personal data. I feel like dystopian cyberpunk predictions didn’t fully understand how fucking stupid we are…
Will this further fuck up the already inaccurate AI results? While I’m rooting against shitty AI usage, the general population still trusts it, and making results worse will most likely make people believe even more wrong stuff.
The article says it’s not poisoning the AI data, only providing valid facts. The scraper still gets content, just not the content it was aiming for.
and the data for the LLM is now salted with procedural garbage. it’s great!
If you’re dumb enough and care little enough about the truth, I’m not really going to try coming at you with rationality and sense. I’m down to do an accelerationism here. fuck it. burn it down.
remember; these companies all run at a loss. if we can hold them off for a while, they’ll stop getting so much investment.
The problem I see with poisoning the data is AIs being trained for law enforcement hallucinating false facts that get used to arrest and convict people.
Law enforcement AI is a terrible idea and it doesn’t matter whether you feed it “false facts” or not. There’s enough bias in law enforcement that the data is essentially always poisoned.
that’s the entire point of laws, though, and it was already being used for that.
giving the laws better law stuff will not improve them. the law is malevolent. you cannot fix it by offering to help.
So they rewrote Nepenthes (or Iocaine, Spigot, Django-llm-poison, Quixotic, Konterfai, Caddy-defender, plus inevitably some Rust versions)
Edit, but with ✨AI✨ and apparently only true facts
Cloudflare is providing the service, not libraries
I am not happy with how much of the internet relies on Cloudflare. However, they have a strong set of products.
Cloudflare kind of real for this. I love it.
It makes perfect sense for them as a business: unlimited automated traffic means unlimited costs and lower server stability. But at the same time, how often do giant tech companies do things that make sense these days?
Will it actually allow ordinary users to browse normally, though? Their other stuff breaks in minority browsers. Have they tested this well enough so that it won’t? (I’d bet not.)