I love the idea
I’m starting to look at Airflow for my own project. Not sure if you’ve heard of it or projects like it, but it seems like a great foundation for a scraper. I’m still evaluating options, but so far it’s my pick
Hit me up if you get stuck or make a breakthrough. I’ve got a pretty good handle on ActivityPub and the Lemmy API, and your project would add a lot to mine
Basically it’s a system for running all kinds of directed acyclic graphs (DAGs) of tasks, primarily geared toward data ingestion. It’s very flexible and powerful, which also means there’s a steep learning curve.
To give an example, you could have a task that gathers a list of instances and updates the database. It could also spawn a new task for each one to check if the server is up and get the version number, and you could even have it email you when a new instance shows up so you can create an account there.
Then, from the task that confirmed the server is up, you could spawn a new task that gets its communities, which in turn spawns new tasks to ingest posts from each one
And when this whole process is done, you could have it kick off a new set of tasks to do the indexing or whatever else on the up-to-date data set
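To make that concrete, here’s a rough sketch of what that pipeline could look like with Airflow’s TaskFlow API and dynamic task mapping (recent-ish Airflow 2.x). I haven’t run this against real instances; the DAG name, task bodies, and hostnames are all placeholders you’d swap for your own logic:

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def fediverse_ingest():
    @task
    def gather_instances() -> list[str]:
        # placeholder: fetch the known-instance list and upsert it into your DB
        return ["lemmy.ml", "lemmy.world"]

    @task
    def check_instance(host: str) -> str:
        # placeholder: ping the instance, record uptime + version,
        # and (hypothetically) email yourself about brand-new instances
        return host

    @task
    def get_communities(host: str) -> list[str]:
        # placeholder: list communities on an instance that responded
        return [f"{host}/c/example"]

    @task
    def flatten(nested: list[list[str]]) -> list[str]:
        # collect the per-instance community lists into one flat list
        return [c for sub in nested for c in sub]

    @task
    def ingest_posts(community: str) -> None:
        # placeholder: pull posts for one community into the DB
        ...

    @task
    def reindex() -> None:
        # placeholder: rebuild the index on the up-to-date data set
        ...

    instances = gather_instances()
    live = check_instance.expand(host=instances)              # one mapped task per instance
    communities = flatten(get_communities.expand(host=live))  # reduce to one flat list
    ingest_posts.expand(community=communities) >> reindex()   # index only after all ingests finish


fediverse_ingest()
```

The `.expand()` calls are what give you the “spawn a task per instance / per community” behavior, and the `flatten` step is just a reduce so the post-ingest tasks can fan out over every community at once.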
It has some nice visualization of the process, you can distribute workers across machines, and you can kick off runs through an API… You can use it for anything from monitoring to scraping to running map-reduce over the results. You could even federate and wire into ActivityPub directly, use the instances’ APIs, or mix and match with scraping
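For the “kick it off through an API” part, Airflow 2’s stable REST API lets you trigger a DAG run with a single POST, assuming you’ve enabled an auth backend on the webserver. The host, credentials, and DAG id below are made up to match the sketch above:

```python
import requests

# assumption: webserver at localhost:8080 with basic auth enabled
AIRFLOW = "http://localhost:8080"
AUTH = ("admin", "admin")  # placeholder credentials

resp = requests.post(
    f"{AIRFLOW}/api/v1/dags/fediverse_ingest/dagRuns",
    auth=AUTH,
    json={"conf": {"reason": "manual kick-off"}},  # optional per-run config
)
resp.raise_for_status()
print(resp.json()["dag_run_id"])
```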
I’ve never worked with crawlers and I’m not sure what angle you’re going to attack this from, but if normal crawlers don’t play well with the fediverse this is an option