Refactoring and README updates

This commit is contained in:
Dascienz
2020-02-05 21:12:01 -05:00
parent 584d362856
commit e1172ff8cd
4 changed files with 48 additions and 41 deletions

# phpBB Forum Scraper
Python-based web scraper for phpBB forums. The project can be used as a template for building your own
custom Scrapy spiders or for one-off crawls of designated forums. Keep in mind that aggressive crawls
put significant strain on web servers, so please throttle your request rates.

## Requirements:
1. Python web scraping library, [Scrapy](http://scrapy.org/).
2. Python HTML/XML parsing library, [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).

## Scraper Output
The `phpBB.py` spider scrapes the following information from forum posts:
1. Username
2. User post count
3. Post date & time
4. Post text
5. Quoted text

If you need additional data scraped, you will have to create additional spiders or edit the existing spider.
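
For a sense of the output's shape, those five fields could be modeled as a Scrapy item along the following lines; the class and field names here are hypothetical and may not match the ones defined in this project:

```python
# Hypothetical item mirroring the five scraped fields listed above;
# the project's actual item/field names may differ.
import scrapy


class ForumPostItem(scrapy.Item):
    username = scrapy.Field()         # 1. Username
    user_post_count = scrapy.Field()  # 2. User post count
    post_date = scrapy.Field()        # 3. Post date & time
    post_text = scrapy.Field()        # 4. Post text
    quoted_text = scrapy.Field()      # 5. Quoted text
```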

## Edit `phpBB.py` and Specify:
1. `allowed_domains`
2. `start_urls`
3. `username` & `password`
4. `forum_login=False` or `forum_login=True`
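
As a rough sketch, the top of the spider might look something like this once those values are filled in; the domain, URL, and credentials below are placeholders, and the attribute layout in the actual `phpBB.py` may differ:

```python
# Illustrative only -- placeholder domain, URL, and credentials.
import scrapy


class PhpBBSpider(scrapy.Spider):
    name = "phpBB"

    # 1. Restrict the crawl to the target forum's domain.
    allowed_domains = ["forum.example.com"]

    # 2. Entry point(s) for the crawl, e.g. the forum index or a board URL.
    start_urls = ["https://forum.example.com/index.php"]

    # 3. Credentials, used only when forum_login is True.
    username = "your_username"
    password = "your_password"

    # 4. Set to True if posts are only visible after logging in.
    forum_login = False

    # Parsing callbacks omitted -- see the spider in this repository.
```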

## Running the Scraper:
```bash
cd phpBB_scraper/

# Launch the crawler:
scrapy crawl phpBB

# Or launch the crawler and save the results to CSV:
scrapy crawl phpBB -o posts.csv
```
NOTE: Please adjust `settings.py` to throttle your requests.
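
For example, a conservative `settings.py` could combine a fixed download delay with Scrapy's AutoThrottle extension; the values below are suggestions rather than the project's defaults:

```python
# Example throttling settings -- tune the values for the target forum.
ROBOTSTXT_OBEY = True

# Fixed delay (seconds) between requests to the same site.
DOWNLOAD_DELAY = 2

# Keep concurrency low so a single forum isn't hammered.
CONCURRENT_REQUESTS_PER_DOMAIN = 1

# Let AutoThrottle adapt the delay to observed server latency.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 2
AUTOTHROTTLE_MAX_DELAY = 30
```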