noham 5615658452 Add macserialjunkie spider and SQLite pipeline
Fork and refactor project for scraping macserialjunkie.com: add a new phpBB spider (uses python-dotenv for credentials, form login enabled, multiple start_urls, robust ID/time/text extraction and pagination) and an SQLitePipeline that saves posts to posts.db with a tqdm progress bar. Update settings to use the SQLite pipeline, increase concurrency, reduce download delay, disable robots.txt, set JOBDIR for resume and silence logs; add .env.example and .python-version, update README and requirements (add tqdm), tidy .gitignore, and add pyproject.toml. Also reorganize package layout (rename/move phpBB_scraper modules), remove legacy pipeline and old spider implementations, and add a dependency lock file (uv.lock).
2026-01-31 13:30:41 +01:00

phpBB Forum Scraper

This is a fork of phpBB-forum-scraper customized for scraping macserialjunkie.com.

Python-based web scraper for phpBB forums. The project can be used as a template for building your own custom Scrapy spiders or for one-off crawls of designated forums. Keep in mind that aggressive crawls put significant strain on web servers, so please throttle your request rates.

Requirements:

  1. Scrapy, the Python web scraping framework.
  2. BeautifulSoup, the Python HTML/XML parsing library.
  3. python-dotenv, used by this fork to load forum credentials.
  4. tqdm, used by this fork's SQLite pipeline for its progress bar.
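
The dependencies can be installed with pip (package names are the standard PyPI ones); since the fork ships a uv.lock, `uv sync` should work as well:

```shell
# install the scraper's dependencies, including this fork's additions
pip install scrapy beautifulsoup4 python-dotenv tqdm
```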

Scraper Output

The phpBB.py spider scrapes the following information from forum posts:

  1. Username
  2. User Post Count
  3. Post Date & Time
  4. Post Text
  5. Quoted Text

If you need additional data scraped, edit the existing spider or create additional ones.
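
Per the changelog above, this fork persists scraped posts through an SQLite pipeline writing to posts.db (with a tqdm progress bar). A minimal sketch of such a pipeline, using stdlib sqlite3 only and hypothetical field names matching the output list above, might look like:

```python
import sqlite3


class SQLitePipeline:
    """Sketch of a Scrapy item pipeline that persists posts to SQLite.

    The table layout and item field names (username, post_count, post_date,
    post_text, quoted_text) are assumptions based on the scraper output
    described above, not the fork's actual schema.
    """

    def __init__(self, db_path="posts.db"):
        self.db_path = db_path
        self.conn = None

    def open_spider(self, spider):
        # Called once when the spider starts: open the DB and ensure the table.
        self.conn = sqlite3.connect(self.db_path)
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS posts (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   username TEXT,
                   post_count TEXT,
                   post_date TEXT,
                   post_text TEXT,
                   quoted_text TEXT
               )"""
        )

    def process_item(self, item, spider):
        # Called for every scraped item: insert one row per post.
        self.conn.execute(
            "INSERT INTO posts (username, post_count, post_date, post_text, quoted_text) "
            "VALUES (?, ?, ?, ?, ?)",
            (
                item.get("username"),
                item.get("post_count"),
                item.get("post_date"),
                item.get("post_text"),
                item.get("quoted_text"),
            ),
        )
        return item

    def close_spider(self, spider):
        # Called once when the spider finishes: flush and close the DB.
        self.conn.commit()
        self.conn.close()
```

A pipeline like this is enabled by listing it in ITEM_PIPELINES in settings.py, which the changelog says this fork already does.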

Edit phpBB.py and Specify:

  1. allowed_domains
  2. start_urls
  3. username & password (this fork loads these from a .env file via python-dotenv; see .env.example)
  4. forum_login (set to True if the forum requires login, False otherwise)
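
Per the changelog, credentials live in a .env file read by python-dotenv rather than being hard-coded in the spider. A plausible .env layout (the variable names here are hypothetical; see .env.example in the repo for the real ones) is:

```
PHPBB_USERNAME=your_username
PHPBB_PASSWORD=your_password
```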

Running the Scraper:

  cd phpBB_scraper/
  scrapy crawl phpBB

To make the crawl resumable after an interruption, pass a JOBDIR so Scrapy persists its crawl state; this fork manages dependencies with uv, so the command can be run as:

  uv run scrapy crawl phpBB -s JOBDIR=crawls/resume-002

NOTE: This fork's settings.py raises concurrency and lowers the download delay; adjust those values in settings.py to throttle your requests.
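
The relevant Scrapy throttling knobs look like the following (the values below are illustrative, not this fork's actual settings):

```python
# settings.py — illustrative throttle values, not the fork's actual ones
DOWNLOAD_DELAY = 1.0                   # seconds to wait between requests to the same site
CONCURRENT_REQUESTS = 8                # total concurrent requests Scrapy will issue
AUTOTHROTTLE_ENABLED = True            # let Scrapy adapt the delay to server latency
AUTOTHROTTLE_TARGET_CONCURRENCY = 2.0  # average parallel requests per remote site
```

With AUTOTHROTTLE_ENABLED, DOWNLOAD_DELAY acts as a floor and Scrapy adjusts the actual delay based on observed response times.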
