Qri Versions

you're invited to a data party!

v0.10.0

3 years ago

v0.10.0 (2021-05-04)

Welcome to the long-awaited Qri 0.10.0 release! We've focused on usability and bug fixes, centered on massive improvements to saving a dataset, the HTTP API, and the lib package interface. We've also added a few new features (step-based transform execution, change reports over the API, progress bars on save, and a new component: Stats), and you should notice a clear improvement in Qri's speed, reliability, and usability, especially when saving a new version of a dataset.

Massive Improvements to Save performance

We've drastically improved the reliability and scalability of saving a dataset on Qri. Qri now uses a bounded block of memory while saving, consuming a maximum of roughly 150 MB regardless of how large your dataset is. The maximum size of dataset you can save is no longer tied to your available memory.

We've had to change some underlying functionality to get the scalability we want. To that end, we no longer calculate Structure.Checksum, we no longer generate commit messages for datasets over a certain size, and we no longer store every error value found when validating the body of a dataset.
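To picture the bounded-memory approach: process the body as a stream of fixed-size chunks instead of loading it whole. Here's a minimal Python sketch of the idea (Qri itself is written in Go, and its real save path does much more than hash):

```python
import hashlib
import io

CHUNK_SIZE = 64 * 1024  # a fixed buffer: memory use is bounded no matter how big the body is

def stream_digest(reader, chunk_size=CHUNK_SIZE):
    """Hash a body of any size while holding at most one chunk in memory."""
    digest = hashlib.sha256()
    total = 0
    while True:
        chunk = reader.read(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
        total += len(chunk)
    return total, digest.hexdigest()

# a 1 MB in-memory "body"; a real caller would pass an open file
size, checksum = stream_digest(io.BytesIO(b"a" * 1_000_000))
```

Peak memory here is one 64 KB chunk plus hash state regardless of input size, which is the same property that caps Qri's save at roughly 150 MB.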

API Overhaul

Our biggest change has been a complete overhaul of our API.

We wanted to make our API easier to work with by making it more consistent across endpoints. After a great deal of review & discussion, this overhaul introduces an RPC-centric API that expects JSON POST requests, plus a few GET requests we're calling "sugar" endpoints.

The RPC part of our API is an HTTP pass-through to our lib methods. This makes working with Qri over HTTP the same as working with Qri as a library. We've spent a lot of time building & organizing Qri's lib interface, and now all of that same functionality is exposed over HTTP. The intended audience for the RPC API is folks who want to automate Qri across process boundaries while keeping very fine-grained control. Think "command line over HTTP".

At the same time, we didn't want to lose a number of important endpoints, like being able to GET a dataset body via just a URL, so we've moved all of these into a "sugar" API and left plenty of room to grow. We'll continue to add convenience-oriented endpoints that make it easy to work with Qri. The "sugar" API is oriented toward users who prioritize fetching data from Qri to use elsewhere.

We also noticed how quickly our OpenAPI spec fell out of date, so we now generate the spec from the code itself. Take a look at our OpenAPI spec for a full list of supported JSON endpoints.

Here is our full API spec, supported in this release:

API Spec

Sugar

The purpose of the api package is to expose the lib RPC API and add syntactic sugar by mapping RESTful HTTP requests to lib method calls.

| endpoint | HTTP methods | handler |
| --- | --- | --- |
| "/" | GET | api.HealthCheckHandler |
| "/health" | GET | api.HealthCheckHandler |
| "/qfs/ipfs/{path:.*}" | GET | qfs.Get |
| "/webui" | GET | api.WebuiHandler |
| "/ds/get/{username}/{name}" | GET | api.GetHandler |
| "/ds/get/{username}/{name}/at/{path}" | GET | api.GetHandler |
| "/ds/get/{username}/{name}/at/{path}/{component}" | GET | api.GetHandler |
| "/ds/get/{username}/{name}/at/{path}/body.csv" | GET | api.GetHandler |
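The sugar routes can be assembled from a ref alone. A hedged Python sketch of building those URLs (the base URL, username, and version path are placeholders for illustration, not values from these notes):

```python
def sugar_get_url(base, username, name, path=None, component=None):
    """Build a "sugar" GET URL for a dataset head, or a component at a version path."""
    url = f"{base}/ds/get/{username}/{name}"
    if path is not None:
        url += f"/at{path}"  # version paths look like /ipfs/Qm...
    if component is not None:
        url += f"/{component}"
    return url

# fetch the latest head, or the body as CSV at a specific version
head = sugar_get_url("http://localhost:2503", "b5", "world_bank_population")
body = sugar_get_url("http://localhost:2503", "b5", "world_bank_population",
                     path="/ipfs/QmExampleHash", component="body.csv")
```

Because these are plain GETs, the same URLs work in a browser, curl, or a spreadsheet importer, which is the point of the sugar API.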

RPC

The purpose of the lib package is to expose a uniform interface for interacting with a qri instance

| endpoint | Return Type | Lib Method Name |
| --- | --- | --- |
| Aggregate Endpoints | | |
| "/list" | []VersionInfo | collection.List? |
| "/sql" | [][]any | sql.Exec |
| "/diff" | Diff | diff.Diff |
| "/changes" | ChangeReport | diff.Changes |
| Access Endpoints | | |
| "/access/token" | JSON Web Token | access.Token |
| Automation Endpoints | | |
| "/auto/apply" | ApplyResult | automation.apply |
| Dataset Endpoints | | |
| "/ds/componentstatus" | []Status | dataset.ComponentStatus |
| "/ds/get" | GetResult | dataset.Get |
| "/ds/activity" | []VersionInfo | dataset.History |
| "/ds/rename" | VersionInfo | dataset.Rename |
| "/ds/save" | dataset.Dataset | dataset.Save |
| "/ds/pull" | dataset.Dataset | dataset.Pull |
| "/ds/push" | DSRef | dataset.Push |
| "/ds/render" | []byte | dataset.Render |
| "/ds/remove" | RemoveResponse | dataset.Remove |
| "/ds/validate" | ValidateRes | dataset.Validate |
| "/ds/unpack" | Dataset | dataset.Unpack |
| "/ds/manifest" | Manifest | dataset.Manifest |
| "/ds/manifestmissing" | Manifest | dataset.ManifestMissing |
| "/ds/daginfo" | DagInfo | dataset.DagInfo |
| Peer Endpoints | | |
| "/peer" | Profile | peer.Info |
| "/peer/connect" | Profile | peer.Connect |
| "/peer/disconnect" | Profile | peer.Disconnect |
| "/peer/list" | []Profile | peer.Profiles |
| Profile Endpoints | | |
| "/profile" | Profile | profile.GetProfile |
| "/profile/set" | Profile | profile.SetProfile |
| "/profile/photo" | Profile | profile.ProfilePhoto |
| "/profile/poster" | Profile | profile.PosterPhoto |
| Remote Endpoints | | |
| "/remote/feeds" | Feed | remote.Feeds |
| "/remote/preview" | Dataset | remote.Preview |
| "/remote/remove" | - | remote.Remove |
| "/remote/registry/profile/new" | Profile | registry.CreateProfile |
| "/remote/registry/profile/prove" | Profile | registry.ProveProfile |
| "/remote/search" | SearchResult | remote.Search |
| Working Directory Endpoints | | |
| "/wd/status" | []StatusItem | fsi.Status |
| "/wd/init" | DSRef | fsi.Init |
| "/wd/caninitworkdir" | -- | fsi.CanInitworkdir |
| "/wd/checkout" | -- | fsi.Checkout |
| "/wd/restore" | -- | fsi.Restore |
| "/wd/write" | []StatusItem | fsi.Write |
| "/wd/createlink" | VersionInfo | fsi.CreateLink |
| "/wd/unlink" | string | fsi.Unlink |
| "/wd/ensureref" | -- | fsi.EnsureRefNotLinked |
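Calling the RPC side is the same everywhere: POST a JSON body to the lib method's endpoint. A Python sketch of the request shape; the base URL and the "ref" parameter name are assumptions for illustration, so check the generated OpenAPI spec for real field names:

```python
import json
from urllib import request

def rpc_request(base, endpoint, params):
    """Build (but don't send) an RPC-style call: a JSON body POSTed to a lib method endpoint."""
    body = json.dumps(params).encode("utf-8")
    return request.Request(
        url=base + endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# e.g. ask for a dataset head; send it with request.urlopen(req) against a running node
req = rpc_request("http://localhost:2503", "/ds/get", {"ref": "me/world_bank_population"})
```

Since every RPC endpoint takes the same shape of request, one small helper like this covers the whole table above.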

Redesigned lib interface

In general, we've streamlined the core functionality and reconciled input params in the lib package (which contains the methods and params that power both the API and the command line), so that no matter how you access functionality in Qri, whether using lib as a package, the HTTP API, or the command line, you can expect consistent inputs and, more importantly, consistent behavior. We're also using our new dispatch pattern to replace our old RPC client with the same JSON HTTP API exposed to users, so all of our APIs, HTTP or otherwise, share the same expectations. If something is broken in one place, it is broken in all places; consequently, when it is fixed in one place it is fixed in all places!

These changes will also help with our upcoming work to refine and expand the notion of identity inside of Qri.

Stats Component!

We've added a new component: Stats! Stats is a component that contains statistical metadata about the body of a dataset. Stats are now automatically calculated and saved with each new version.

Its purpose is to provide an "at a glance" summary of a dataset by calculating statistics on columns (okay, on "compound data types", but it's much easier to think in terms of column stats). To remain fast on very large datasets, we calculate the stats using probabilistic data structures. Stats are an important part of change reports, and let you get a sense of what is different in a dataset body without having to examine that body line by line.

For earlier versions that don't have stats calculated, we've added a qri stats command that will calculate the new stats component for you.
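To get a feel for how column stats can be computed in a single pass over an arbitrarily large body, here's a sketch using Welford's streaming algorithm. This only illustrates the bounded-memory idea; it is not Qri's actual (probabilistic, Go) implementation:

```python
import math

class ColumnStats:
    """Single-pass numeric column summary in O(1) memory (Welford's algorithm)."""

    def __init__(self):
        self.count = 0
        self.min = math.inf
        self.max = -math.inf
        self._mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations from the mean

    def push(self, x):
        self.count += 1
        self.min = min(self.min, x)
        self.max = max(self.max, x)
        delta = x - self._mean
        self._mean += delta / self.count
        self._m2 += delta * (x - self._mean)

    def summary(self):
        return {
            "count": self.count,
            "min": self.min,
            "max": self.max,
            "mean": self._mean,
            "stddev": math.sqrt(self._m2 / self.count) if self.count else 0.0,
        }

# stream one column's values; memory stays constant however long the column is
col = ColumnStats()
for v in [2, 4, 4, 4, 5, 5, 7, 9]:
    col.push(v)
```

Comparing two such summaries from two versions is exactly the kind of cheap diff that powers a change report.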

Other features:

Change report over API

We've added an endpoint, /changes, for getting the change report via the API, laying the groundwork for future user interfaces that report on changes between versions.

Access and Oauth

We're working toward making qri-core able to handle multiple users on the same node (AKA multi-tenancy). In preparation for multi-tenancy, we are adding support for generating JSON Web Tokens (JWTs) that will help ensure identity on the network. You can generate an access token on the command line using the qri access command, or over the API using the /access/token endpoint.
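A JWT is just three base64url segments, so you can inspect a token's claims without verifying it. A Python sketch; the claim names here are illustrative (these notes only say the claim includes a ProfileID), and a real client must verify the signature:

```python
import base64
import json

def _b64url(obj):
    """base64url-encode a JSON object with the '=' padding stripped, as JWTs do."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

def jwt_claims(token):
    """Decode the claims segment of a JWT without verifying the signature (inspection only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# a hand-rolled, unsigned example token; claim names are hypothetical
token = ".".join([_b64url({"alg": "none"}),
                  _b64url({"profileID": "QmExample", "exp": 1620000000}),
                  ""])
claims = jwt_claims(token)
```

The same decode works on a token generated by qri access or /access/token, which is handy when debugging what identity a request carries.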

Apply Command

You can now use the qri apply command to dry-run and debug transforms! Use the --file flag to apply a new transform.star file, use a dataset reference to re-run an existing transform on an existing dataset, or use both to apply a new transform to an existing dataset. The resulting dataset is printed to the terminal. To run the transform and save the result as a new commit, use the --apply flag with qri save.

Progress bars

Qri has been fine-tuned to handle larger datasets faster. However quickly your dataset saves, we want you to be able to track its progress. Now, when saving a new dataset or a new version of a dataset, the command line shows you a progress bar.

qri version

To help with debugging, the qri version command now comes chock-full of details, including the Qri version, the build time, and the git summary. (Developers: you now need to use make build instead of go build!)

Dependency updates:

We've also released updates to our dependencies: dag, qfs, deepdiff, and dataset. Take a look at those release notes to learn about other bug fixes and stability enhancements that Qri inherits.

BREAKING CHANGES

Bug Fixes

  • api: handleRefRoutes should refer to username rather than (644f706)
  • api: Allow OPTIONS header so that CORS is available (e067500)
  • api: denyRPC only affects RPC, HTTP can still be used (b6298ce)
  • api: Fix unmarshal bug in api test (0234f30)
  • api: fix vet error (afbe53e)
  • api: handle OPTIONS requests on refRoute handlers (395d5ae)
  • api: health & root endpoints use middleware, which handles OPTIONS (5f421eb)
  • apply: bad API endpoint for apply over HTTP (c5cc840)
  • base.ListDatasets: support -1 limit to list all datasets (bd2f831)
  • base.SaveDataset: move logbook write operation back down from lib (a00f1b8)
  • changes: "left" side of report should be the previous path (32b46ff)
  • changes: column renames are properly handled now (c306e41)
  • cmd: nil pointer dereference in PrintProgressBarsOnEvents (b0a2ec0)
  • dispatch: Add default source resolver to Attributes (e074bf4)
  • dispatch: Calls that return only 1 value can work across RPC (ceefeaa)
  • dispatch: Comments clarifying Methods and Attributes (33222c6)
  • dispatch: Dispatch for transform. Transformer instead of Service (24562c5)
  • dispatch: Fix code style and get tests passing (4cdb3f5)
  • dispatch: Fix for fsi plumbing commands, to work over http (8070537)
  • dispatch: MethodSet interface to get name for dispatch (9b73d4b)
  • dispatch: send source over wire and cleanup attr definitions (8a2ddf8)
  • dispatch: Use dispatcher interface for DatasetMethods (201edda)
  • dispatch: When registering, compare methods to impl (c004812)
  • dsfs: fix adjustments to meta prior to commit message generation (365cbb9)
  • dsfs: LoadDataset using the mux filesystem. Error if nil (75215be)
  • dsfs: remove dataset field computing deadlock (ef615a9)
  • dsfs: set script paths before generating commit messages (8174996)
  • fill: Map keys are case-insensitive, handle maps recursively (ab27f1b)
  • fsi: fsi.ReadDir sets the '/fsi' path prefix (3d64468)
  • http: expect json requests to decode, if the body is not empty (c50ea28)
  • http: fix httpClient error checking (6f4567e)
  • init: TargetDir for init, can be absolute, is created if needed (55c9ff6)
  • key: fix local keystore key.ID encoding, require ID match keys (a469b6e)
  • lib: Align input params across all lib methods (7ba26b6)
  • lib: don't ignore serialization errors when getting full datasets (64abb7e)
  • lib: Improve context passing and visibility of internal structs (8f6509b)
  • list: List datasets even if some refs have bad profileIDs (11b6763)
  • load: Fix spec test to exercise LoadDataset (f978ec7)
  • logbook: commit timestamps overwrite run timestamps in logs (2ab44d0)
  • logbook: Logsync validates that ref has correct profileID (153c4b9)
  • logbook: remove 'stranded' log histories on new dataset creation (2412a40)
  • mux: allow api mux override (eccdf9b)
  • oas: add open api spec tests to CI and makefile (e91b50b)
  • p2p: dag MissingManifest sigfaults if there is a nil manifest (04ec5b2)
  • p2p: qri bootstrap addrs config migration (0680097)
  • prove: Prove command updates local logbook as needed (b09a2eb)
  • prove: Store the original KeyID on config.profile (88214a4)
  • pull: Pull uses network resolver. Fixes integration test. (d049958)
  • remote: always send progress completion on client push/pull events (afcb2f8)
  • repo: Don't use blank path for new repo in tests (1ec8e74)
  • routes: skip over endpoints that are DenyHTTP (ee7d882)
  • rpc: unregister dataset methods (6a2b213)
  • run: fix unixnano -> *time.Time conversion, clean up transform logging (c903657)
  • save: Remove dry-run, recall, return-body from save path (fac37da)
  • search: Dispatch for search (8570abf)
  • sql: Fix sql command for arm build (#1783) (2ae1541)
  • sql: Fix sql command for other 32-bit platforms (704d9fb)
  • startf/ds.set_body: infer structure when ds.set_body is called (a8a3492)
  • stats: close accumulator to finalize output (1b7f4f1)
  • test: fix api tests to consume refstr (e03ef3a)
  • token: claim now includes ProfileID (8dd40e4)
  • transform: don't duplicate transform steps on save (8ace963)
  • transform: don't write to out streams when nil, use updated preview.Create (28651cb)
  • version: add warning when built with 'go install' (1063a71)

Features

  • api: change report API (ca16f3c)
  • api: GiveAPIServer attachs all routes to api (ea2d4ad)
  • api: read oauth tokens to request context (f024b26)
  • apply: Apply command, and --apply flag for save (c01a4bf)
  • bus: Subscribe to all, or by ID. "Type" -> "Topic" (cc139bc)
  • cmd: add access command (1c56680)
  • dispatch: Abs paths on inputs to dispatch methods (a19efcc)
  • dispatch: Dispatch func can return 1-3 values, 2 being Cursor (304f7c5)
  • dispatch: Method attributes contain http endpoint and verb (0036cb7)
  • dsfs: compute & store stats component at save time (3ff3b75)
  • httpClient: introducing an httpClient (#1629) (8ecde53)
  • keystore: keystore implmenetation (#1602) (205165a)
  • lib: attach active user to scope (608540a)
  • lib: Create Auth tokens using inst.Access() (3be7af2)
  • lib: Dispatch methods call, used by FSI (afaf06d)
  • list: Add ProfileID restriction option to List (774fa06)
  • logbook: add methods for writing transform run ops (7d0cb91)
  • profile: ResolveProfile replaces CanonicalizeProfile (7bd848b)
  • prove: Prove a new keypair for an account, set original profileID (6effbea)
  • run: run package defines state of a transform run (8e69e5e)
  • save: emit save events. Print progress bars on save (3c979ed)
  • save: recall transform on empty --apply, write run operations (3949be9)
  • save: support custom timestamps on commit (e8c18fa)
  • sql: Disable sql command on 32-bit arm to fix compilation (190b5cb)
  • stats: overhaul stats service interface, implement os stats cache (2128a0c)
  • stats: stats based on sketch/probabalistic data structures (f6191c8)
  • transform: Add runID to transform. Publish some events. (5ceac77)
  • transform: emit DatasetPreview event after startf transform step (28bb8b0)
  • validate: Parameters to methods will Validate automatically (7bb1515)
  • version: add details reported by "qri version" (e6a0a67)
  • vesrion: add json format output for version command (ad4dcc7)
  • websocket: publish dataset save events to websocket event connections (378e922)

Performance Improvements

  • dsfs: don't calculate commit descriptions if title and message are set (f5ec420)
  • save: improve save performance, using bounded memory (7699f02)

v0.9.13

3 years ago

Patch v0.9.13 brings improvements to the validate command, and lays the groundwork for OAuth within qri core.

qri validate gets a little smarter this release, printing a cleaner, more readable list of human errors, and now has flags to output validation error data in JSON and CSV formats.

Full description of changes are in CHANGELOG.md

v0.9.12

3 years ago

Patch release 0.9.12 features a number of fixes to various qri features, most aimed at improving general quality-of-life of the tool, and some others that lay the groundwork for future changes.

HTTP API Changes

Changed the qri api so that the /get endpoint gets dataset heads and bodies. /body still exists but is now deprecated.

P2P and Collaboration

There's a new way to resolve peers and references on the p2p network, the start of access control added to our remote communication API, and remotes now serve a simple web UI.

General polish

Fixed ref resolution with divergent logbook user data. Working directories now allow case-insensitive filenames. Improved SQL support so that dataset names don't need an explicit table alias. The get command can now fetch datasets from cloud.

v0.9.11

3 years ago

This patch release addresses a critical error in qri setup, and removes overly-verbose output when running qri connect.

v0.9.10

3 years ago

v0.9.10 (2020-07-27)

For this release we focused on clarity, reliability, major fixes, and communication (both between qri and the user, and the different working components of qri as well). The bulk of the changes surround the rename of publish and add to push and pull, as well as making the commands more reliable, flexible, and transparent.

push is the new publish

Although qri defaults to publishing datasets to our qri.cloud website (if you haven't checked it out recently, it's gone through a major facelift & has new features like dataset issues and vastly improved search!), we still give users tools to create their own services that can host data for others. We call these remotes (qri.cloud is technically a very large, very reliable remote). However, we needed a better way to keep track of where a dataset has been "published", and also allow datasets to be published to different locations.

A simple boolean published/unpublished paradigm couldn't correctly convey "hey, this dataset has been published to remote A but not remote B". We're also working toward a system where you can push to a peer remote, or keep your dataset private even though it has been sent to live at a public location.

In all these cases, the name publish wasn't cutting it, and was confusing users.

After debating a few new titles in RFC0030, we settled on push. It properly conveys what is happening: you are pushing the dataset from your node to a location that will accept and store it. Qri keeps track of where it has been pushed, so it can be pushed to multiple locations.

It also helps that git has a push command that fulfills a similar function in software version control, so using the verb push in this way has precedent. We've also clarified the command help text: only one version of a dataset is pushed at a time.

pull is the new add

We decided that, for clarity, if we are renaming qri publish to qri push, we should also rename its mirrored action, qri add, to qri pull. Now it's clear: to send a dataset to another source use qri push; to get a dataset from another source use qri pull!

use get instead of export

qri export has been removed. Use qri get --format zip me/my_dataset instead. We want more folks to play with get; it's a far more powerful version of export, and too many folks missed out on get because they found export first and it didn't meet their expectations.

major fix: pushing & pulling historical versions

qri push without a specified version will still default to pushing the latest version and qri pull without a specified version will still default to pulling every version of the dataset that is available. However, we've added the ability to push or pull a dataset at specific versions by specifying the dataset version's path! You can see a list of a dataset's versions and each version's path by using the qri log command.

In the past this would error:

$ qri publish me/dataset@/ipfs/SpecificVersion

With the new push command, this will now work:

$ qri push me/dataset@/ipfs/SpecificVersion

You can use this to push old versions to a remote, same with pull!

events, websockets & progress

We needed a better way for the different internal qri processes to coordinate, so we beefed up our events and piped the event stream to a websocket. Now one qri process can subscribe and get notified about important events that occur in another process. This is also great for users, because we can use those events to communicate more information when resource-intensive or time-consuming actions are running! Check out our progress bars when you push and pull!

The websocket event API is still a work in progress, but it's a great way to build dynamic functionality on top of qri, using the same events qri uses internally to power things like progress bars and inter-subsystem communication.

other important changes

  • sql now properly handles dashes in dataset names
  • migrations now work on machines across multiple mounts. We fixed a bug that was causing the migration to fail. This was most prevalent on Linux.
  • the global --no-prompt flag will disable all interactive prompts, but now falls back on defaults for each interaction.
  • a global --migrate flag will auto-run a migration check before continuing with the given command
  • the default answer when we ask the user to run a migration is now "No". To auto-run a migration you need the --migrate flag (not the --no-prompt flag, though they can both be used together for "run all migrations and don't bother me")
  • the remove command now takes over the duties of the --unpublish flag. Run qri remove --all --remote=registry me/dataset instead of qri publish --unpublish me/dataset. More verbose? Yes. But you're deleting stuff, so it should be a think-before-you-hit-enter type of thing.
  • We've made some breaking changes to our API; they're listed in the YELLY CAPS TEXT below detailing breaking changes

full notes in the Changelog

v0.9.9

3 years ago

Welcome to Qri 0.9.9! We've got a lot of internal changes that speed up the work you do on Qri everyday, as well as a bunch of new features, and key bug fixes!

Config Overhaul

We've taken a hard look at our config and wanted to make sure that, not only was every field being used, but also that this config could serve us well as we progress down our roadmap and create future features.

To that end, we removed many unused fields, switched to using multiaddresses for all network configuration (replacing any port fields), formalized the hierarchy of different configuration sources, and added a new Filesystems field.

This new Filesystems field allows users to choose the supported filesystems on which they want Qri to store their data. For example, in the future, when we support s3 storage, this Filesystems field is where the user can go to configure the path to the storage, if it's the default save location, etc. More immediately however, exposing the Filesystems configuration also allows folks to point to a non-default location for their IPFS storage. This leads directly to our next change: moving the default IPFS repo location.

Migration

One big change we've been working on behind the scenes is upgrading our IPFS dependency. IPFS recently released version 0.6.0, and that's the version we are now relying on! This was a very important upgrade, as users relying on older versions of IPFS (below 0.5.0) would not be seen by the larger IPFS network.

We also wanted to move the Qri-associated IPFS node off the default IPFS_PATH and into a location that makes it clearer this is the IPFS node Qri relies on. And since our new configuration lets users explicitly set the path to the IPFS repo, a user who prefers to point their repo at the old location can still do so. By default, the IPFS node that Qri relies on now lives on the QRI_PATH.

Migrations can be rough, so we took the time to ensure that upgrading to the newest version of IPFS, adjusting the Qri config, and moving the IPFS repo onto the QRI_PATH would go off without a hitch!

JSON schema

Qri now relies on a newer draft (draft2019_09) of JSON Schema. Our golang implementation of jsonschema now has better support for the spec, equal or better performance depending on the keyword, and the option to extend using your own keywords.

Removed Update

This was a real kill-your-darlings situation! The functionality of update (scheduling and running qri saves) can be done more reliably using other schedulers/task managers. Our upcoming roadmap expands many Qri features, and we realized we couldn't justify the planning/engineering time to keep update up to our standards. Rather than letting this feature weigh us down, we decided it would be better to remove update and instead point users to docs on how to schedule updates. One day we may revisit update as a plugin or wrapper.

Merkledag error

Some users were getting Merkledag not found errors when trying to add some popular datasets from Qri Cloud (for example nyc-transit-data/turnstile_daily_counts_2019). This should no longer be the case!

Specific Command Line Features/Changes

  • qri save - use the --drop flag to remove a component from that dataset version
  • qri log
      - use the --local flag to only get the logs of the dataset versions that are stored locally
      - use the --pull flag to only get the logs of the dataset from the network (explicitly not local)
      - use the --remote flag to specify a remote from which to fetch that dataset's log; this defaults to the qri cloud registry
  • qri get - use the --zip flag to export a zip of the dataset

Specific API Features/Changes

  • /fetch - removed; use /history?pull=true
  • /history
      - use the local=true param to only get the logs of a dataset that are stored locally
      - use the pull=true param to only get the logs of a dataset from the network (explicitly not local)
      - use the remote=REMOTE_NAME param to specify a remote from which to fetch that dataset's log; this defaults to the qri cloud registry

BREAKING CHANGES

  • update command and all api endpoints are removed
  • removed the /fetch endpoint; use /history instead. The local=true param ensures that the logbook data is only what you have locally in your logbook

v0.9.8

4 years ago

0.9.8 is a quick patch release to fix export for a few users who have been having trouble getting certain datasets out of qri.

Fixed Export

This patch release fixes a problem that was causing some datasets to not export properly while running qri connect.

Naming rules

This patch also clarifies which characters are allowed in a dataset name and a peername. From now on, a legal dataset name or username must:

  • consist of only lowercase letters, numbers 0-9, the hyphen "-", and the underscore "_".
  • start with a letter

Length limits vary between usernames and dataset names, but Qri now enforces these rules more consistently. Existing dataset names that violate these rules will continue to work, but will need to be renamed in a future version. New datasets with names that don't match these rules cannot be created.
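The character rules above (though not the varying length limits) can be captured in one regular expression. A sketch for illustration; this is not Qri's actual validation code:

```python
import re

# lowercase letters, digits, '-' and '_', starting with a letter; length limits are checked elsewhere
NAME_RE = re.compile(r"[a-z][a-z0-9_-]*")

def is_valid_name(name):
    """Check the character rules for a dataset name or username."""
    return NAME_RE.fullmatch(name) is not None
```

For instance, "world_bank_population" and "nyc-transit-data" pass, while names starting with a digit or containing uppercase letters do not.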

Full description of changes are in CHANGELOG.md

v0.9.7

4 years ago

Qri CLI v0.9.7 is huge. This release adds SQL support, turning Qri into an ever-growing database of open datasets.

If that wasn't enough, we've added tab completion, nicer automatic commit messages, unified our command descriptions, and fixed a whole slew of bugs!

📊 Run SQL on datasets

Experimental support for SQL is here! Landing this feature brings Qri full circle to the original whitepaper we published in 2017.

We want to live in a world where you can SELECT * FROM any_qri_dataset, and we're delighted to say that day is here.

We have plans to improve & build upon this crucial feature, and are marking it as experimental while we flesh out our SQL implementation. We'll drop the "experimental" flag once we support a healthy subset of the SQL spec.

We've been talking about SQL a bunch in our community calls.

🚗🏁 Autocomplete

The name says it all. After following the instructions in qri generate --help, type qri get, then press tab, and voilà, your list of datasets appears for the choosing. This makes working with datasets much easier, requiring you to remember and type less. 🎦 Here's a demo from our community call.

🤝📓 Friendlier Automatic Commit Messages

For a long time, Qri has automatically generated a commit message, by analyzing what's changed between versions, whenever you don't supply one. This release makes titles that look like this:

updated structure, viz, and transform

and adds detailed messages that look like this:

structure:
    updated schema.items.items.63.title
viz:
    updated scriptPath
transform:
    updated resources./ipfs/QmfQu6qBS3iJEE3ohUnhejb7vh5KwcS5j4pvNxZMi717pU.path
    added scriptBytes
    updated syntaxVersion

These automatic messages form a nice textual description of what's changed from version to version. Qri will automatically add these if you don't provide --title and/or --message values to qri save.

📙 Uniform CLI help

Finally, a big shout out to one of our biggest open source contributions to date! @Mr0Grog not only contributed a massive cleanup of our command line help text, they also wrote a style guide based on the existing help text for others to follow in the future!

Full description of changes are in CHANGELOG.md

v0.9.6

4 years ago

This patch release fixes a number of small bugs, mainly in support of our Desktop app, and continues infrastructural improvements in preparation for larger feature releases. These include: our improved diff experience, significantly better filesystem integration, and a new method of dataset name resolution that better handles changes across a peer network.

Full description of changes are in CHANGELOG.md

v0.9.5

4 years ago

This patch release is focused on a number of API refactors, and sets the stage for a new subsystem we're working on called dscache. It's a small release, but should help stabilize communication between peer remotes & the registry.