Add macserialjunkie spider and SQLite pipeline

Fork and refactor the project for scraping macserialjunkie.com:

- Add a new phpBB spider: python-dotenv for credentials, form login enabled, multiple start_urls, robust ID/time/text extraction, and pagination.
- Add an SQLitePipeline that saves posts to posts.db with a tqdm progress bar.
- Update settings: use the SQLite pipeline, increase concurrency, reduce the download delay, disable robots.txt, set JOBDIR for resumable crawls, and silence logs.
- Add .env.example, .python-version, and pyproject.toml; update README and requirements (add tqdm); tidy .gitignore.
- Reorganize the package layout (rename/move phpBB_scraper modules), remove the legacy pipeline and old spider implementations, and add a dependency lock file (uv.lock).
√(noham)²
2026-01-31 13:30:41 +01:00
parent d0178052c9
commit 5615658452
17 changed files with 1152 additions and 163 deletions


@@ -1,5 +1,7 @@
# phpBB Forum Scraper
This is a fork of [phpBB-forum-scraper](https://github.com/Dascienz/phpBB-forum-scraper/) customized for scraping [macserialjunkie.com](https://macserialjunkie.com/).
A Python-based web scraper for phpBB forums. The project can be used as a template for building your own
custom Scrapy spiders or for one-off crawls of designated forums. Keep in mind that aggressive crawls
can put significant strain on web servers, so please throttle your request rates.
@@ -35,6 +37,6 @@ If you need additional data scraped, you will have to create additional spiders
```bash
cd phpBB_scraper/
scrapy crawl phpBB
# scrapy crawl phpBB -o posts.csv
uv run scrapy crawl phpBB -s JOBDIR=crawls/resume-002
```
NOTE: Please adjust `settings.py` to throttle your requests.
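The throttling knobs live in Scrapy's `settings.py`. A hedged excerpt, assuming the settings changes described in the commit message (the specific values shown here are illustrative, not the committed ones):

```python
# settings.py (excerpt) -- illustrative values, not the committed configuration
ITEM_PIPELINES = {"phpBB_scraper.pipelines.SQLitePipeline": 300}
CONCURRENT_REQUESTS = 32        # raised concurrency per the commit
DOWNLOAD_DELAY = 0.25           # reduced delay; raise this to be polite
ROBOTSTXT_OBEY = False          # the commit disables robots.txt checks
LOG_LEVEL = "ERROR"             # silence routine log output
JOBDIR = "crawls/resume-002"    # persist crawl state so runs can resume
```

Passing `-s JOBDIR=...` on the command line, as in the example above, overrides the `JOBDIR` value set here.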