Python library for analysis of neuroanatomical data.
It has been a long time (7 months, to be precise) since the last release but we've not been idle! Version 1.6.0 is chock-full with new features, improvements and fixes. Here are some of the highlights:

- navis.write_swc no longer writes Dotprops
- navis.read_parquet and navis.write_parquet let you read/write large sets of skeletons or dotprops; note: these are experimental and the format specs might still change but feel free to take them for a spin
- New navis.NeuronConnector class for creating connectivity graphs from groups of neurons with consistent connector IDs (e.g. from pymaid neurons)
- Support for reading from .tar or .tar.gz files (.zip was already supported)

Full Changelog: https://navis.readthedocs.io/en/latest/source/whats_new.html
Commit history: https://github.com/navis-org/navis/compare/v1.5.0...v1.6.0
Changes:

- New functions: navis.align.align_deform(), navis.align.align_rigid(), navis.align.align_pca(), navis.align.align_pairwise()
- New function: navis.NeuronList.set_neuron_attributes()
- New function: navis.nbl.nblast_prime()
- New functions: navis.persistence_vector(), navis.persistence_diagram()

This release contains various improvements and fixes. Importantly, it fixes a couple of incompatibilities with numpy 1.24.0.
Even though this is not a new major version, there is one breaking change: navis.flow_centrality was renamed to navis.synapse_flow_centrality, and a new non-synaptic navis.flow_centrality function was added. This also impacts the method parameter in navis.split_axon_dendrite!
Please see the change log for other changes.
This is a small release containing 2 fixes:

- Fixes for issues with dill which impact parallel processing
- A fix related to tqdm

Highlights:

- A plain pip install navis won't install a vispy backend (see install instructions for details)
- New interface: navis.interfaces.vfb
- New model: navis.models.network_models.BayesianTraversalModel (big thanks to @aschampion)
- New approx_nn parameter (sacrifices precision for speed)
- New functions: navis.segment_analysis & navis.form_factor
- New function: navis.write_mesh
Small fix for split_axon_dendrite.
What's new:

- New function: navis.betweeness_centrality
- New function: navis.combine_neurons to simply concatenate neurons
- New functions: navis.persistence_vectors, navis.persistence_points and navis.persistence_distances
- navis.bending_flow, navis.flow_centrality, navis.split_axon_dendrite and navis.longest_neurite
- navis.read_swc now accepts a limit parameter that enables reading only the first N neurons (useful to sample large collections)
- navis.write_nrrd and navis.read_nrrd can now be used to write/read Dotprops to/from NRRD files
- navis.nblast (and variants) now accept a precision parameter that allows setting the datatype for the matrix (useful to keep memory usage low for large NBLASTs)
- navis.simplify_mesh (and therefore navis.downsample_neuron with skeletons) now uses pyfqmr if present (much faster!)
- New interface: navis.interfaces.allen_celltypes

Thanks to @clbarnes and @Robbie1977 for contributing various PRs!
This is primarily a bug fix release for an issue when plotting skeletons with the newest plotly (5.4.0) but still managed to squeeze in a new function: navis.sholl_analysis.
This new version comes packed with goodies (including breaking changes)! Here are the highlights:

- New neuron type VoxelNeuron for image-type neurons
- New functions for converting between neuron types (e.g. turning neurons into VoxelNeurons or skeletonising MeshNeurons)
- New plot_flat function for dendrogram-style figures
- Some functions were renamed (e.g. reroot_neuron is now called reroot_skeleton)

Check out the changelog for a full list.
This new version comes with tons of goodies:
Many functions now accept a parallel=True argument. If the input is a NeuronList, navis will then use multiple cores to run that function. You can use n_cores (defaults to half the available cores) to set the number of cores used.
A toy example:
>>> nl = navis.example_neurons(4)
>>> pr = navis.prune_by_strahler(nl, to_prune=1, parallel=True)
To run generic (i.e. non-navis functions) in parallel you can use NeuronList.apply
:
>>> nl = navis.example_neurons(4)
>>> nl.apply(lambda x: x.id, parallel=True)
[1734350788, 1734350908, 722817260, 754534424]
Note that this requires that you install pathos:
$ pip3 install pathos -U
navis.write_swc and navis.read_swc can now write to/read from zip archives:
>>> nl = navis.example_neurons(4)
>>> # Write to zip
>>> navis.write_swc(nl, '~/Downloads/SWCs.zip')
>>> # Read from zip
>>> unzipped = navis.read_swc('~/Downloads/SWCs.zip')
For a while now, navis neurons have had an (optional) units property, and some downstream libraries (e.g. fafbseg and pymaid) make use of that:
>>> # Example neurons are in raw (i.e. voxel) hemibrain space
>>> n = navis.example_neurons(1)
>>> n.units
8 <Unit('nanometer')>
Under the hood, this is using a neat library called pint
which also lets you convert between units. So you can do stuff like this:
>>> # Example neuron is in 8nm voxels (see above)
>>> n_vxl = navis.example_neurons(1)
>>> # Convert to microns
>>> n_um = n_vxl.convert_units('um')
>>> n_um.units
1.0 <Unit('micrometer')>
Likewise, many navis
functions that work with spatial units now alternatively accept a "unit str" that can be parsed by pint
. For example:
>>> n = navis.example_neurons(1)
>>> # Prune twigs smaller than 5 microns
>>> # (which would be 5 * 1000 / 8 = 625 in this neuron's space)
>>> n_pr = navis.prune_twigs(n, '5 microns')
New functions:

- navis.prune_at_depth: to prune at given distance from root
- navis.read_rda: read nat neuron data from R data (.rda) files - also works for basic stuff like dataframes
- navis.cell_body_fiber: prune neuron down to its cell body fiber

For a complete list of changes, see the change log and the commit history.