Declarative web scraping
- `ESCAPE_HTML`, `UNESCAPE_HTML` and `DECODE_URI_COMPONENT` functions. #318
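As a sketch of how these string helpers might be used (the literal inputs and the exact escaped form are illustrative, not taken from the release):

```fql
// Escape markup before embedding scraped text into HTML
LET escaped = ESCAPE_HTML('<span>Hello</span>')

// Restore the original markup
LET restored = UNESCAPE_HTML(escaped)

// Decode a percent-encoded URI component
LET decoded = DECODE_URI_COMPONENT('Hello%20World')

RETURN [escaped, restored, decoded]
```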
- `INNER_HTML_SET` and `INNER_TEXT_SET` functions. #329
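A minimal sketch of the new setters, assuming a `(page, selector, value)` argument order (the URL and selectors are placeholders):

```fql
LET doc = DOCUMENT("https://example.com/", { driver: "cdp" })

// Replace an element's markup
INNER_HTML_SET(doc, "#content", "<p>Replaced</p>")

// Replace an element's text
INNER_TEXT_SET(doc, "#title", "New title")

RETURN INNER_TEXT(doc, "#title")
```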
- `FOCUS` function. #340
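A sketch of focusing an input before typing into it; the selector and the `(page, selector)` signature are assumptions:

```fql
LET doc = DOCUMENT("https://example.com/", { driver: "cdp" })

// Give the search input focus, then type into it
FOCUS(doc, "#search")
INPUT(doc, "#search", "ferret")

RETURN TRUE
```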
- `RAND` accepts optional upper and lower limits. #271
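A sketch of the extended `RAND`; the `(lower, upper)` argument order shown here is an assumption:

```fql
// No arguments: a random float
LET r = RAND()

// With limits: a value between the given bounds (order assumed)
LET dice = RAND(1, 6)

RETURN [r, dice]
```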
- `SCROLL` function. #269
- `WAIT_NAVIGATION` call. #281
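A sketch of waiting for a page transition triggered by a click; the link selector is a placeholder:

```fql
LET doc = DOCUMENT("https://example.com/", { driver: "cdp" })

// Trigger a page transition...
CLICK(doc, "a.next")

// ...and block until the navigation completes
WAIT_NAVIGATION(doc)

RETURN TRUE
```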
- `MOUSE(x, y)` and `SCROLL(x, y)` functions #237
- `WAIT_NO_ELEMENT`, `WAIT_NO_CLASS` and `WAIT_NO_CLASS_ALL` functions #249
- `HTMLElement.style` property #255
- `ATTR_GET`, `ATTR_SET`, `ATTR_REMOVE`, `STYLE_GET`, `STYLE_SET` and `STYLE_REMOVE` functions #255
- `WAIT_STYLE`, `WAIT_NO_STYLE`, `WAIT_STYLE_ALL` and `WAIT_NO_STYLE_ALL` functions #256
- `.cookies` property. In order to manipulate cookies, `COOKIE_DEL`, `COOKIE_SET` and `COOKIE_GET` functions were added #242

  ```fql
  LET doc = DOCUMENT(url, {
      driver: "cdp",
      cookies: [{
          name: "x-e2e",
          value: "test"
      }, {
          name: "x-e2e-2",
          value: "test2"
      }]
  })
  ```
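A sketch of the attribute and style helpers working together; the element-first signatures (`ATTR_GET(el, name)`, `STYLE_SET(el, name, value)`, `WAIT_NO_STYLE(el, name, value)`) and the selector are assumptions:

```fql
LET doc = DOCUMENT("https://example.com/", { driver: "cdp" })
LET el = ELEMENT(doc, "#banner")

// Read an attribute, then set a new one (signatures assumed)
LET id = ATTR_GET(el, "id")
ATTR_SET(el, "data-seen", "true")

// Hide the element via its inline style...
STYLE_SET(el, "display", "none")

// ...and wait until the old style value is gone
WAIT_NO_STYLE(el, "display", "block")

RETURN id
```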
- Renamed `collections.IterableCollection` and `collections.CollectionIterator` interfaces. Now they are in the `core` package and called `Iterable` and `Iterator` 1af8b37
- Renamed the `collections.Collection` interface to `collections.Measurable` 1af8b37
- Moved types from the `runtime/values` package into the `drivers` package #234
- Replaced the `drivers.WithDynamic` and `drivers.WithStatic` methods with a new `drivers.WithContext` method with an optional parameter `drivers.AsDefault()` #234

  ```fql
  LET doc = DOCUMENT(url, {
      driver: "cdp"
  })
  ```
- Support of `context.Done()` to interrupt an execution #201
- `ELEMENT_EXISTS(doc, selector) -> Boolean` function #210

  ```fql
  LET exists = ELEMENT_EXISTS(doc, ".nav")
  ```
- Added `PageLoadParams` to the `DOCUMENT` function #214

  ```fql
  LET doc = DOCUMENT("https://www.google.com/", {
      dynamic: true,
      timeout: 10000
  })
  ```
- `FMT` function #151
- `PAGINATION` function #173
- `SCROLL_TOP`, `SCROLL_BOTTOM` and `SCROLL_ELEMENT` functions #174
- `HOVER` function #178
- Fixed: `RIGHT` should return a substring counting from the right rather than the left #164
- Fixed: `INNER_HTML` returned outer HTML instead for dynamic elements #170
- Fixed: `INNER_TEXT` returned HTML instead of text for dynamic elements #170
- `math` and `utils` packages in the standard library. Renamed `LOG` to `PRINT` #162
- `FROM_BASE64` function commit
- `DOWNLOAD` function commit
- Fixed: the `KEEP` function did not perform deep cloning commit
- Renamed `.innerHtml` to `.innerHTML` commit
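A sketch of `FMT` and `FROM_BASE64` together; the `{}` placeholder syntax for `FMT` and the literal inputs are assumptions:

```fql
// Positional placeholders ("{}" syntax assumed)
LET greeting = FMT("{} v{}", "ferret", "0.8")

// Decode a base64-encoded payload scraped from a page
LET decoded = FROM_BASE64("ZmVycmV0")

RETURN [greeting, decoded]
```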