The web has changed. Gone are the days when raw HTML parsing scripts were the right tool for the job.
Demand for data scraping and automation is higher than ever, from business needs to data science and machine learning.
Our tools need to evolve.
Ayakashi finds things on the page and interacts with them through props.
Directly inspired by the relational database world (and SQL), domQL makes DOM access easy and readable no matter how obscure the page's structure is.
Props package domQL expressions as reusable structures that can be passed to actions or used as models for data extraction.
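As a sketch of how a prop and its domQL expression read (the prop name, matching criteria and the `extract` call here are illustrative assumptions, not copied from the docs):

```js
// inside a scraper: define a reusable prop with a domQL expression
ayakashi
    .select("externalLinks") // hypothetical prop name
    .where({
        and: [
            {tagName: {eq: "A"}},
            {href: {like: "http"}}
        ]
    });

// the prop can then be passed to actions or used as a model for extraction
const links = await ayakashi.extract("externalLinks", "href");
```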
Ready-made actions let you focus on what matters.
Easily handle infinite scrolling, single page navigation, events and more.
Plus, you can always build your own actions, either from scratch or by composing other actions.
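A sketch of how the bundled actions read in a scraper (the URL and prop names are made-up placeholders):

```js
// navigate, scroll and click using ready-made actions
await ayakashi.goTo("https://example.com/feed");

// keep scrolling inside a container until no new content appears
await ayakashi.infiniteScrollIn("feedContainer"); // hypothetical prop

await ayakashi.click("nextPageButton"); // hypothetical prop
```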
Automatically save your extracted data
to all major SQL engines, JSON and CSV.
Need something more exotic or the ability to control exactly how the data is persisted?
Package and plug your custom logic as a script.
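A custom persistence script is just a module plugged into the pipeline. A minimal sketch, assuming scripts export an async function that receives the previous step's data as input (the persistence logic is a placeholder):

```js
// myCustomSaver.js - a hypothetical custom script
module.exports = async function(input, params) {
    // persist the data however you like,
    // e.g. POST it to an internal service (illustrative only)
    console.log("saving", input);

    // whatever is returned gets piped to the next pipeline step
    return input;
};
```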
Scraping the data is only one part of the deal.
Need your project to also be clean, readable and performant?
Pipelines can help.
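A pipeline sketch, assuming the waterfall/parallel step layout in `ayakashi.config.js` (the scraper module names are invented for illustration; `saveToCSV` is one of the built-in scripts):

```js
module.exports = {
    config: {},
    waterfall: [{
        type: "scraper",
        module: "getProductPages" // hypothetical scraper
    }, {
        // run these steps in parallel
        parallel: [{
            type: "scraper",
            module: "extractPrices" // hypothetical scraper
        }, {
            type: "scraper",
            module: "extractReviews" // hypothetical scraper
        }]
    }, {
        type: "script",
        module: "saveToCSV" // built-in persistence script
    }]
};
```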
Ayakashi can utilize all available cores as needed, which is especially useful for projects that need to run multiple operations in parallel.
All APIs used to build the built-in functionality are properly exposed.
All core entities are composable and extensible.
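Building your own action can be as simple as registering a function on the ayakashi instance. A sketch, assuming the action-file shape and `registerAction` API from the docs (the action name, body and `wait`/`click` composition are illustrative):

```js
// a hypothetical action file: compose existing actions into a new one
module.exports = function(ayakashi) {
    ayakashi.registerAction("clickAndWait", async function(prop, ms) {
        await ayakashi.click(prop);
        await ayakashi.wait(ms);
    });
};
```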
If you want to scrape the web, you should speak its language.
Ayakashi comes bundled with a fully documented public API that you can explore
directly in your editor.
Autocomplete any method, check signatures and examples or follow links to more documentation.
Just head over to the getting started guide!