# Social media research and promotion: a semi-autonomous CLI bot

A Python command-line (CLI) bot for automating research and promotion on popular social media platforms (Reddit, Twitter, Facebook, [TODO: instagram]). With a single command, scrape social media sites using custom queries and/or promote to all relevant results.
Please use with discretion: choose your input arguments wisely, or your bot, along with any associated accounts, may find itself banned from these platforms very quickly. The bot has some built-in anti-spam-filter avoidance features to help you remain undetected; however, no amount of avoidance can hide blatantly abusive use of this tool.
## Installation

```shell
sudo apt install python3-pip
pip3 install --user tweepy bs4 praw selenium
```
## Reddit bot

Three keyword lists control which scraped posts count as matches:

- `keywords_and`: all keywords in this list must be present for a positive match
- `keywords_or`: at least one keyword in this list must be present for a positive match
- `keywords_not`: none of these keywords can be present in a positive match (leave the list empty, `"keywords_not": []`, to disable this filter)
`<praw.ini>`

```ini
...
[bot1]
client_id=Y4PJOclpDQy3xZ
client_secret=UkGLTe6oqsMk5nHCJTHLrwgvHpr
password=pni9ubeht4wd50gk
username=fakebot1
user_agent=fakebot 0.1
```
`<red_subkey_pairs.json>`

```json
{"sub_key_pairs": [
    {
        "subreddits": "androidapps",
        "keywords_and": ["list", "?"],
        "keywords_or": ["todo", "app", "android"],
        "keywords_not": ["playlist", "listen"]
    }
]}
```
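The three keyword lists combine into a single AND/OR/NOT predicate per post. A minimal sketch of that matching logic — the function name `matches` and the case-insensitive substring matching are assumptions for illustration, not the bot's actual implementation:

```python
def matches(text, keywords_and, keywords_or, keywords_not):
    """Return True if `text` satisfies all three keyword filters.

    - every keyword in `keywords_and` must appear
    - at least one keyword in `keywords_or` must appear (skipped if empty)
    - no keyword in `keywords_not` may appear
    Case-insensitive substring matching is assumed here.
    """
    t = text.lower()
    if not all(k.lower() in t for k in keywords_and):
        return False
    if keywords_or and not any(k.lower() in t for k in keywords_or):
        return False
    if any(k.lower() in t for k in keywords_not):
        return False
    return True

# Using the red_subkey_pairs.json lists above:
print(matches("List of todo apps?",
              ["list", "?"], ["todo", "app", "android"],
              ["playlist", "listen"]))
# → True
```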
```text
usage: spam-bot-3000.py reddit [-h] [-s N] [-n | -H | -r] [-p]

optional arguments:
  -h, --help        show this help message and exit
  -s N, --scrape N  scrape subreddits in subreddits.txt for keywords in
                    red_keywords.txt; N = number of posts to scrape
  -n, --new         scrape new posts
  -H, --hot         scrape hot posts
  -r, --rising      scrape rising posts
  -p, --promote     promote to posts in red_scrape_dump.txt not marked with a
                    "-" prefix
```
## Twitter bot

`<credentials.txt>`

```text
your_consumer_key
your_consumer_secret
your_access_token
your_access_token_secret
your_twitter_username
your_twitter_password
```
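credentials.txt holds one value per line in the order shown. A minimal sketch of reading it and handing the keys to Tweepy — the `load_credentials` helper is an assumption for illustration, not part of the bot:

```python
def load_credentials(path):
    """Read the six credential lines in the documented order into a dict."""
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    keys = ["consumer_key", "consumer_secret", "access_token",
            "access_token_secret", "username", "password"]
    return dict(zip(keys, lines))

# creds = load_credentials("credentials.txt")
# import tweepy
# auth = tweepy.OAuthHandler(creds["consumer_key"], creds["consumer_secret"])
# auth.set_access_token(creds["access_token"], creds["access_token_secret"])
# api = tweepy.API(auth)
```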
```text
usage: spam-bot-3000.py twitter [-h] [-j JOB_DIR] [-t] [-u UNF] [-s] [-c] [-e]
                                [-b] [-f] [-p] [-d]

spam-bot-3000

optional arguments:
  -h, --help            show this help message and exit
  -j JOB_DIR, --job JOB_DIR
                        choose job to run by specifying job's relative
                        directory
  -t, --tweet-status    update status with random promo from twit_promos.txt
  -u UNF, --unfollow UNF
                        unfollow users who aren't following you back,
                        UNF=number to unfollow

query:
  -s, --scrape          scrape for tweets matching queries in twit_queries.txt
  -c, --continuous      scrape continuously - suppress prompt to continue
                        after 50 results per query
  -e, --english         return only tweets written in English

spam -> browser:
  -b, --browser         favorite, follow, reply to all scraped results and
                        thwart api limits by mimicking human in browser!

spam -> tweepy api:
  -f, --follow          follow original tweeters in twit_scrape_dump.txt
  -p, --promote         favorite tweets and reply to tweeters in
                        twit_scrape_dump.txt with random promo from
                        twit_promos.txt
  -d, --direct-message  direct message tweeters in twit_scrape_dump.txt with
                        random promo from twit_promos.txt
```
Examples:

- `-cspf` : scrape and promote to all tweets matching queries in a single run
- `-s` : scrape first; optionally clean up the results with the helper scripts
  `gleen_keywords_from_twit_scrape.bash` and
  `filter_out_strings_from_twit_scrape.bash`; then `-pf` : promote to the
  remaining tweets in twit_scrape_dump.txt
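As a rough illustration of that filtering step, the snippet below drops scrape-dump lines containing unwanted substrings. This is a hedged Python equivalent of what such a filter might do; the actual helper is the bash script named above:

```python
def filter_out_strings(lines, unwanted):
    """Keep only lines containing none of the unwanted substrings
    (case-insensitive). Mirrors the intent of
    filter_out_strings_from_twit_scrape.bash -- an assumption, not its code."""
    return [line for line in lines
            if not any(u.lower() in line.lower() for u in unwanted)]

scraped = [
    "check out this great android todo app",
    "best spotify playlist to listen to",
]
print(filter_out_strings(scraped, ["playlist", "listen"]))
# → ['check out this great android todo app']
```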
- `-b` : thwart api limits by promoting to scraped results directly in the
  Firefox browser
- `auto_spam.bash`
- `-j studfinder_example/` : specify which job directory to execute

Note: if you don't want to maintain individual jobs in separate directories, you may create single credentials, queries, promos, and scrape dump files in the main working directory.
## Facebook scraper

`<jobs.json>`

```json
{"client_data":
    {"name": "",
     "email": "",
     "fb_login": "",
     "fb_password": "",
     "jobs": [
        {"type": "groups",
         "urls": ["",""],
         "keywords_and": ["",""],
         "keywords_or": ["",""],
         "keywords_not": ["",""] },
        {"type": "users",
         "urls": [],
         "keywords_and": [],
         "keywords_or": [],
         "keywords_not": [] }
     ]}
}
```
```shell
facebook-scraper.py clients/YOUR_CLIENT/
```
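A minimal sketch of how a client's jobs.json might be loaded and iterated — the `load_jobs` helper and its field handling are assumptions about the format documented above, not the scraper's actual code:

```python
import json

def load_jobs(path):
    """Parse a jobs.json file and return (client_data, jobs), where jobs is
    the list of per-type scraping jobs ("groups", "users") shown above."""
    with open(path) as f:
        data = json.load(f)
    client = data["client_data"]
    return client, client["jobs"]

# client, jobs = load_jobs("clients/YOUR_CLIENT/jobs.json")
# for job in jobs:
#     print(job["type"], "->", len(job["urls"]), "urls")
```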