Scraping Twitter content from the Twitter Streaming API, in Python 3.
Edit config.yml and fill in the Twitter application tokens you got from the Twitter developer site.
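The file layout is roughly as follows; the key names here are illustrative guesses, so match whatever config.yml in this repo actually expects:

```yaml
# Tokens from the Twitter developer site (key names illustrative)
consumer_key: YOUR_CONSUMER_KEY
consumer_secret: YOUR_CONSUMER_SECRET
access_token: YOUR_ACCESS_TOKEN
access_token_secret: YOUR_ACCESS_TOKEN_SECRET
```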
Run python3 twitter.py, and the listener will start to dump the corpus in:
I built this corpus to train a neural-network chatbot model, so it is arranged as consecutive dialog turns in which each even-numbered sentence is the response to the preceding odd-numbered sentence, like:
|2||thank you :)|
|3||game of throne is the best drama i've seen.|
|4||I'll say the walking dead is even better.|
where (1, 2) and (3, 4) are two independent dialog pairs.
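The pair layout above can be sketched as follows; the function name and the example inputs are illustrative, not the repo's actual code:

```python
def format_dialog(pairs):
    """Format (prompt, response) pairs as |n||text| lines, numbering
    sentences consecutively so even lines respond to odd lines."""
    lines = []
    for i, (prompt, response) in enumerate(pairs):
        lines.append("|%d||%s|" % (2 * i + 1, prompt))
        lines.append("|%d||%s|" % (2 * i + 2, response))
    return "\n".join(lines)

print(format_dialog([
    ("hello", "hi there"),
    ("game of throne is the best drama i've seen.",
     "I'll say the walking dead is even better."),
]))
```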
The Twitter Streaming API supports several filters; the most useful ones are:
Simply give the bounding box of longitudes and latitudes in [West, South, East, North] order (the southwest corner first), for example:
|New York City||[-74,40,-73,41]|
|San Francisco or New York City||[-122.75,36.8,-121.75,37.8, -74,40,-73,41]|
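A quick way to sanity-check a bounding box before streaming; this helper is hypothetical and not part of the repo:

```python
def in_bbox(lon, lat, box):
    """Check whether a (longitude, latitude) point falls inside a
    [west, south, east, north] bounding box."""
    west, south, east, north = box
    return west <= lon <= east and south <= lat <= north

nyc_box = [-74, 40, -73, 41]
print(in_bbox(-73.99, 40.73, nyc_box))   # Manhattan -> True
print(in_bbox(-122.4, 37.77, nyc_box))   # San Francisco -> False
```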
There are threads questioning whether the language filter works. It actually does; it just needs to be accompanied by a track filter. Filtering on languages alone won't work, but this will:

stream.filter(languages=["en"], track=['machine', 'learning'])
Twitter can't tokenize some languages, such as Japanese or Chinese, correctly, so the track parameter won't work for them. For example, you might track the keyword バイト and expect a lot of tweets about バイト, but you simply won't get them, because Twitter can't tokenize Japanese and Chinese correctly.
If you just want some corpus, regardless of topic, here is the workaround: use generic keywords and punctuation instead, like:
stream.filter(languages=["zh"], track=['I', 'you', 'http', 'www', '@', '。', '，', '！', '』', ')', '...', '-', '/'])
According to the documentation, you can add up to 400 keywords to this list; even some emoji work.