Run Python in Apache Storm topologies. Pythonic API, CLI tooling, and a topology DSL.
This release simply makes it possible to override more of the settings in `config.json` at the `Topology` level. For example, you can now add `config = {'virtualenv_flags': '-p /path/to/python3'}` to a topology so that some topologies in your project use one version of Python and others use another. (Issue #399, PR #402)
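To illustrate, here is a minimal sketch of the topology-level override. The class bodies are hypothetical (a real topology would subclass `streamparse.Topology` and declare spout/bolt specs); only the `config` attribute is the point here:

```python
# Minimal sketch of per-topology config overrides (class names hypothetical).
# In a real project these classes would subclass streamparse.Topology and
# declare spout/bolt specs.
class Python3Topology:
    # Settings here override the matching keys from config.json,
    # but only for this one topology.
    config = {'virtualenv_flags': '-p /path/to/python3'}


class Python2Topology:
    # A sibling topology in the same project can point its virtualenv
    # at a different interpreter.
    config = {'virtualenv_flags': '-p /path/to/python2'}
```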
This release just adds the submit-time options to the `pre_submit_hook` and `post_submit_hook` arguments. This is mostly so you can use the Storm workers list inside hooks. (PR #396)
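A hook that takes advantage of this might look like the following sketch. The argument names and the `"storm.workers.list"` options key are assumptions for illustration, not taken from the changelog:

```python
# Hypothetical post-submit hook sketch. The signature and the
# "storm.workers.list" key are illustrative assumptions only.
def post_submit(topology_name, env_name, options):
    # The submit-time options now reach the hooks, so the resolved
    # worker list can be used without querying Nimbus again.
    workers = options.get("storm.workers.list", [])
    for host in workers:
        print(f"{topology_name} ({env_name}) deployed to worker {host}")
    return workers


post_submit("wordcount", "prod",
            {"storm.workers.list": ["w1.example.com", "w2.example.com"]})
```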
This release simply adds a new feature: the `storm.workers.list` topology configuration option is now set when you submit a topology, so if some part of your topology needs to know the list of Storm workers, you do not need to resort to connecting to Nimbus from each executor to find it out. (PR #395)
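Inside a component, reading that option could look roughly like the helper below. The value format of `"storm.workers.list"` (a list versus a comma-separated string) is an assumption here, so both are handled:

```python
# Sketch: extracting the worker list from the Storm configuration dict a
# component receives at startup. The value format of "storm.workers.list"
# is an assumption, so both list and comma-separated string are accepted.
def worker_hosts(storm_conf):
    workers = storm_conf.get("storm.workers.list", [])
    if isinstance(workers, str):
        workers = [w for w in workers.split(",") if w]
    return workers


print(worker_hosts({"storm.workers.list": "w1.example.com,w2.example.com"}))
```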
Another small release, but it fixes issues with `sparse tail` and `sparse remove_logs`.

- Added a `--pool_size` argument. (PR #393)
- `sparse remove_logs` can now be run with `--user`. (PR #393)
- `sparse tail` and `sparse remove_logs` do a much better job of only finding the logs that relate to your specified topology on Storm 1.x. (PR #393)
- `sparse run` will no longer crash if you have `par` set to a `dict` in your topology. (commit bafb72b)
- Fixed an issue where `sparse run` would not work without Nimbus properly configured in `config.json`. (Issue #391, PR #392)
Small release, but a big convenience feature was added: you no longer need to list `workers` in your `config.json`! They will be looked up dynamically by communicating with your Nimbus server. If for some reason you would like to restrict where streamparse creates virtualenvs to a subset of your workers, you can still specify the worker list in `config.json`, and it will take precedence. (PR #389)

- `sparse` no longer crashes when `util.get_ui_jsons` fails. (commit b2a8219)
- You can now pass `config_file` file-like objects to `util.get_config`, for cases where you need to retrieve the config at runtime, e.g. from a wheel. Not very common, but it is now supported. (PR #390)
- Removed leftover references to the `decorators` module and the `ext` module that was removed a long time ago. (Issue #388)

This release has a bunch of bugfixes, but a few new features too.
- Added `virtualenv_name` as a setting in `config.json` for users who want to reuse the same virtualenv for multiple topologies. (Issue #371, PR #373)
- Now compatible with `ruamel.yaml>=0.15`. (PR #379)
- Thrift types can now be used directly via `streamparse.thrift` instead of needing the extra `from streamparse.thrift import storm_thrift`. (PR #380)
- Added a `--timeout` option to `sparse run`, `sparse list`, and `sparse submit` so that you can control how long to wait for Nimbus to respond before timing out. This is very useful on slow connections. (Issue #341, PR #381)
- Fixed `fabfile.py` and `tasks.py` imports in `get_user_tasks` with Python 3. (Issue #376, PR #378)
- Fixed an issue with `sparse run`. (Issue #340, PR #382)
- `sparse` no longer crashes on Windows. (Issues #346 and pystorm/pystorm#40, PR pystorm/pystorm#45)

This release brings some compatibility fixes for Storm 1.0.3+.
- Several fixes and improvements for `sparse run`. (Issue #364, PRs #365, #266, and #363)
- Added support for the `resources` nesting added in Storm 1.0.3. (Issue #362, PR #366)

This release fixes a few bugs and adds a few new features that require pystorm 3.1.0 or greater.
- Added a `ReliableSpout` implementation that can be used to have spouts that will automatically replay failed tuples up to a specified number of times before giving up on them. (pystorm/pystorm#39)
- Added `Spout.activate` and `Spout.deactivate` methods that will be called in Storm 1.1.0 and above when a spout is activated or deactivated. This is handy if you want to close database connections on deactivation and reconnect on activation. (Issue #351, PR pystorm/pystorm#42)
- You can now override the `config.json` Nimbus host and port with the `STREAMPARSE_NIMBUS` environment variable. (PR #347)
- Your topology's original name is available as `topology.original_name`, even when you're using `sparse --override_name`. (PR #354)
- Fixed an issue where batches were failed even when `exit_on_exception` was `False`. Now batches will only fail the current batch when `exit_on_exception` is `False`; if it is `True`, all batches are still failed. (PR pystorm/pystorm#43)
- No longer runs `lein jar` twice when creating jars. (PR #348)
- Now uses `yaml.safe_load` instead of `yaml.load` when parsing command line options. (commit 6e8c4d8)

This release fixes a few bugs and adds the ability to pre-build JARs for submission to Storm/Nimbus.
- Added `--local_jar_path` and `--remote_jar_path` options to `submit` to allow the re-use of pre-built JARs. This should make deploying topologies that are all within the same Python project much faster. (Issue #332)
- Added a `help` subcommand, since it's not immediately obvious to users that `sparse -h submit` and `sparse submit -h` will return different help messages. (Issue #334)
- `sparse kill` can now kill any topology, not just those that have a definition in your `topologies` folder. (commit 66b3a70)
- Fixed an issue with `sparse stats`. (Issue #333)
- Fixed an issue where `name` was being used instead of `override_name` when calling pre- and post-submit hooks. (commit 10e8ce3)
- `sparse` will no longer hang without any indication of why when you run it as `root`. (Issue #324)