mirror of
https://github.com/NohamR/phpBB-forum-scraper.git
synced 2026-02-22 02:25:43 +00:00
Add macserialjunkie spider and SQLite pipeline
Fork and refactor the project for scraping macserialjunkie.com:

- Add a new phpBB spider: python-dotenv for credentials, form login enabled, multiple start_urls, robust ID/time/text extraction, and pagination.
- Add an SQLitePipeline that saves posts to posts.db with a tqdm progress bar.
- Update settings: use the SQLite pipeline, increase concurrency, reduce the download delay, disable robots.txt, set JOBDIR for resume, and silence logs.
- Add .env.example, .python-version, and pyproject.toml; update README and requirements (add tqdm); tidy .gitignore.
- Reorganize the package layout (rename/move phpBB_scraper modules), remove the legacy pipeline and old spider implementations, and add a dependency lock file (uv.lock).
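The SQLitePipeline mentioned above could look roughly like this. The class name and the posts.db filename come from the commit message; the table schema, item field names, and tqdm usage are assumptions for illustration, not the fork's actual code:

```python
# Hedged sketch of a Scrapy item pipeline writing posts to SQLite.
# Schema and field names (post_id, author, post_time, body) are assumed.
import sqlite3

from tqdm import tqdm


class SQLitePipeline:
    """Persist scraped forum posts to posts.db, showing a tqdm counter."""

    def open_spider(self, spider):
        self.conn = sqlite3.connect("posts.db")
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS posts (
                   post_id   INTEGER PRIMARY KEY,
                   author    TEXT,
                   post_time TEXT,
                   body      TEXT
               )"""
        )
        self.bar = tqdm(desc="posts saved", unit="post")

    def process_item(self, item, spider):
        # INSERT OR REPLACE makes resumed crawls idempotent per post_id.
        self.conn.execute(
            "INSERT OR REPLACE INTO posts VALUES (?, ?, ?, ?)",
            (
                item.get("post_id"),
                item.get("author"),
                item.get("post_time"),
                item.get("body"),
            ),
        )
        self.conn.commit()
        self.bar.update(1)
        return item

    def close_spider(self, spider):
        self.bar.close()
        self.conn.close()
```

Because `spider` is never touched, the pipeline can be exercised outside Scrapy by calling the three methods directly with plain dicts.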
@@ -1,5 +1,7 @@
 # phpBB Forum Scraper
 
+This is a fork of [phpBB-forum-scraper](https://github.com/Dascienz/phpBB-forum-scraper/) customized for scraping [macserialjunkie.com](https://macserialjunkie.com/).
+
 Python-based web scraper for phpBB forums. Project can be used as a template for building your own
 custom Scrapy spiders or for one-off crawls on designated forums. Please keep in mind that aggressive crawls
 can produce significant strain on web servers, so please throttle your request rates.
@@ -35,6 +37,6 @@ If you need additional data scraped, you will have to create additional spiders
 ```bash
 cd phpBB_scraper/
-scrapy crawl phpBB
 # scrapy crawl phpBB -o posts.csv
+uv run scrapy crawl phpBB -s JOBDIR=crawls/resume-002
 ```
 NOTE: Please adjust `settings.py` to throttle your requests.