I get that the Internet doesn’t contain an infinite number of domains. Max visits to each one can be limited. Hel-lo, McFly?
When you forgot to educate your people well enough so you don’t have to worry about what they see.
It doesn’t have to memorize all possible GUIDs; it just has to limit visits to base URLs.
You can limit the visits to a domain. The honeypot doesn’t register infinite new domains.
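A rough sketch of what that per-domain visit cap could look like; the budget number and function name here are made-up placeholders, not anything an actual crawler is known to use:

```python
from collections import Counter
from urllib.parse import urlparse

MAX_VISITS_PER_DOMAIN = 100  # arbitrary per-domain budget (assumption)

visits_per_domain = Counter()

def should_visit(url: str) -> bool:
    """Allow a fetch only while the URL's domain is still under its visit budget."""
    domain = urlparse(url).netloc.lower()
    if visits_per_domain[domain] >= MAX_VISITS_PER_DOMAIN:
        return False
    visits_per_domain[domain] += 1
    return True
```

Once the honeypot domain burns through its budget, every further link it generates gets ignored, no matter how many fresh GUID pages it serves.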
Best tan ever! For a few seconds.
He does know a guy with rockets tho.
I would simply add links to a list when visited and never revisit any. And that’s just simple web crawler logic, not even AI. Web crawlers that avoid problems like that are beginner/intermediate computer science homework.
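Something like this, very roughly; `fetch_links` is a hypothetical stand-in for whatever downloads a page and extracts its links:

```python
from collections import deque
from urllib.parse import urldefrag

def crawl(seed_urls, fetch_links, max_pages=10_000):
    """Breadth-first crawl that records every URL it has seen and never revisits one."""
    visited = set()
    queue = deque(seed_urls)
    while queue and len(visited) < max_pages:
        url, _ = urldefrag(queue.popleft())  # strip #fragments so they don't look like new pages
        if url in visited:
            continue
        visited.add(url)
        for link in fetch_links(url):        # fetch_links: hypothetical "download page + extract links"
            if link not in visited:
                queue.append(link)
    return visited
```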
TIL there are hotspots in Cancun.
You can detect path segments that come up repeatedly and avoid pursuing them further; technically that isn’t called “infinite loop” detection, but I don’t know the correct name. The point is that the software isn’t a Star Trek robot that starts smoking and bricks itself when it hears something illogical.
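A toy version of that repeated-path check; the thresholds are arbitrary assumptions, and real crawlers use fancier heuristics:

```python
from collections import Counter
from urllib.parse import urlparse

MAX_DEPTH = 20           # assumed cap on path depth
MAX_SEGMENT_REPEATS = 3  # assumed cap on how often one segment may recur in a single URL

def looks_like_trap(url: str) -> bool:
    """Flag URLs whose paths are suspiciously deep or repeat the same segment over and over."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    if len(segments) > MAX_DEPTH:
        return True
    return any(count > MAX_SEGMENT_REPEATS for count in Counter(segments).values())
```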
OTOH, infinite loop detection is a well-known coding issue with well-known, freely available solutions, so this approach will only affect the lamest implementations of AI.
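For instance, Floyd’s tortoise-and-hare is the textbook freely available loop detector; here it is applied to following a chain of links, with `next_url` as a hypothetical function that returns the single link a page points to (or None):

```python
def follows_a_loop(start_url, next_url, max_steps=1000):
    """Floyd's tortoise-and-hare: detect a cycle while following a single chain of links."""
    slow = fast = start_url
    for _ in range(max_steps):
        slow = next_url(slow)             # tortoise advances one step
        fast = next_url(fast)             # hare advances two steps
        if fast is not None:
            fast = next_url(fast)
        if slow is None or fast is None:
            return False                  # the chain terminates, so there is no loop
        if slow == fast:
            return True                   # the pointers met, so the chain cycles
    return True                           # gave up: treat an absurdly long chain as a loop
```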
In other news, today’s leopards are better at eating your face than ever before!
Makes as much sense as changing Gulf of Mexico to Gulf of America - which will go the way of Freedom Fries in 4 years anyway.
Mexico should rename Baja California to Península Pacífica.
I’m a software developer responding to a coding problem. If it’s all under one domain, then avoiding infinite visits is even simpler: I would create a list of known huge websites like Google and Wikipedia, and limit the visits to any domain that is not on that list. This would eliminate having to track where the honeypot is deployed.
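Very roughly, and purely as an illustration (the allowlist entries and the cap are made up):

```python
from collections import Counter
from urllib.parse import urlparse

KNOWN_HUGE_SITES = {"google.com", "wikipedia.org"}  # example entries only
DEFAULT_VISIT_CAP = 200                             # arbitrary budget for everyone else

visit_counts = Counter()

def may_visit(url: str) -> bool:
    """Unlimited visits for allowlisted giants, a hard cap for every other domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if any(domain == big or domain.endswith("." + big) for big in KNOWN_HUGE_SITES):
        return True
    if visit_counts[domain] >= DEFAULT_VISIT_CAP:
        return False
    visit_counts[domain] += 1
    return True
```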