SocketCluster Versions

Highly scalable realtime pub/sub and RPC framework

v9.3.3

6 years ago

Fix bug with top level (master process) --inspect and --debug flags not working.

v9.3.0

6 years ago

Added async/await support for the worker and broker controller run methods. See https://github.com/SocketCluster/socketcluster/issues/351

The createHTTPServer method on the worker can also now resolve asynchronously.

Potentially breaking changes

  • Removed support for the deprecated global property on the worker and server objects - You should now use the exchange property instead.
  • Removed support for the deprecated getSocketURL() method of the SocketCluster master instance.
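With this change, a worker's run() method can be declared async and its returned promise is awaited before the worker is considered ready. The sketch below illustrates the pattern under stated assumptions: SCWorkerStub is a mock stand-in for the real socketcluster/scworker base class, and the awaited config object is invented for illustration, so the snippet runs without the framework.

```javascript
// Minimal sketch of an async run() method (SocketCluster v9.3.0+).
// SCWorkerStub is a mock; in a real app you would extend the class
// exported by require('socketcluster/scworker').
class SCWorkerStub {
  constructor() {
    this.startupSteps = [];
  }
}

class Worker extends SCWorkerStub {
  // v9.3.0 awaits the promise returned by an async run() before
  // treating the worker as started.
  async run() {
    this.startupSteps.push('loading config');
    // e.g. await a config fetch from a database before wiring up handlers:
    const config = await Promise.resolve({ port: 8000 });
    this.startupSteps.push('config loaded');
    return config;
  }
}

const worker = new Worker();
worker.run().then((config) => {
  console.log(worker.startupSteps.join(' -> ') + ' (port ' + config.port + ')');
});
```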

v9.1.1

6 years ago

This version fixes an issue with the scServer.clients object and scServer.clientsCount value not being updated correctly under certain specific scenarios. See https://github.com/SocketCluster/socketcluster-server/pull/21

In addition to fixing the issue, this version introduces a few changes.

Potentially breaking changes

  • On the server socket, the disconnect event used to trigger whenever the socket lost the connection at any stage of the handshake/connection cycle. That meant the disconnect event could trigger on the socket before the connection event had been triggered on the scServer object (which was strange), and there was no way to know in which phase of the cycle the connection ended (before or after the handshake). Now disconnect only triggers after the handshake has completed. To catch a lost connection during the handshake phase, use the connectAbort event on the socket or the connectionAbort event on the server. If you need a catch-all to capture any kind of connection termination at any phase of the cycle, use the socket's new close event or the server's new closure event. Note that this change does not affect the disconnection event on the server; only the disconnect event on the socket. Also note that this change should only affect you if you have written logic inside a scServer.on('handshake', handler) handler - if you just use scServer.on('connection', handler) (and thus don't try to access sockets before they are fully connected), then the behaviour of disconnect is unchanged from before.

Non-breaking changes

  • Added a new scServer.pendingClients property which is a hashmap similar to scServer.clients except that it contains sockets which have not yet completed the handshake (and therefore are still in the 'connecting' state). Also, a matching scServer.pendingClientsCount property was added.
  • The connectAbort event has been present on client sockets for a long time but not on server sockets until now. A new connectAbort event was added to the socket and a matching connectionAbort event was added on the scServer object.
  • As mentioned in the 'potentially breaking changes' section above, a new catch-all close event has been added to the socket to replace the disconnect event which had ambiguous meaning. You can also listen to lost socket connections from the scServer using the closure event.

v9.0.3

6 years ago

See https://github.com/SocketCluster/socketcluster/issues/333

Special thanks to @mauritslamers for coming up with this proposal and for doing the initial work for this release.

This release improves the way SC bootstraps different processes to give more control to the developer. As a result of this update, the boilerplate logic for entry points to various processes within SC has changed.

The entry point for the worker controller (worker.js) used to be:

// SocketCluster decides when the run() function is invoked.

module.exports.run = function (worker) {
    // Custom worker logic goes here.

    worker.scServer.on('connection', (socket) => {
        // Handle the new connection.
    });
};

but now it is:

var SCWorker = require('socketcluster/scworker');

class Worker extends SCWorker {
  run() {
    // Custom worker logic goes here.
    // You can reference the worker here via 'this'.

    this.scServer.on('connection', (socket) => {
        // Handle the new connection.
    });
  }
}

new Worker();

Note that you can use any of the approaches mentioned in issue #333, but the class-based approach is the recommended default (unless you need to support very old versions of Node.js). The new approach was inspired by Java's Runnable interface for threads (except that in SC's case, we have specialized processes instead of threads); see https://docs.oracle.com/javase/tutorial/essential/concurrency/runthread.html

As part of this change, you can now override the SCWorker's createHTTPServer() method to return your own custom HTTP server for SC to use (it needs to be compatible with the default Node.js HTTP server though). This should make it easier to use other back end frameworks with SC.

For more details on the new worker boilerplate, see https://socketcluster.io/#!/docs/api-scworker
For more details on the new broker boilerplate, see https://socketcluster.io/#!/docs/api-broker

v8.0.2

6 years ago

No breaking API changes from 7.x.x, but this version adds to the SC protocol, which may affect codecs and other SC plugins that interact directly with the protocol.

  • Added subscription batching. This allows you to batch channel subscriptions together to reduce the number of WebSocket frames required to subscribe to multiple channels. This should result in a significant performance improvement when subscribing to a large number of unique channels in a short period of time, and should also help to improve the performance of re-connections (automatic re-subscriptions) when clients handle a large number of channels. Note that channels are non-batching by default; you need to set the batch option to true when subscribing to a new channel in order to allow it to be batched. See the subscribe method here: https://socketcluster.io/#!/docs/api-scsocket-client
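The opt-in shape of the batch option can be sketched as follows. The socket here is a minimal mock that records subscribe requests, so the snippet runs without a server; the subscribe(channelName, options) signature and the { batch: true } option come from these release notes.

```javascript
// Sketch of per-channel opt-in batching, assuming a mock client socket.
function createMockSocket() {
  const requests = [];
  return {
    requests,
    subscribe(channelName, options = {}) {
      // Channels are non-batching by default; batching must be
      // requested explicitly per channel.
      requests.push({ channelName, batch: options.batch === true });
      return { name: channelName };
    },
  };
}

const clientSocket = createMockSocket();
// Subscribing to many channels in quick succession with batch: true
// lets SC combine the subscriptions into fewer WebSocket frames:
for (let i = 0; i < 3; i++) {
  clientSocket.subscribe('feed-' + i, { batch: true });
}
console.log(clientSocket.requests.length); // 3
```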

v7.0.2

6 years ago

Note that v7.0.x shouldn't have any major breaking changes from 6.8.0.

  • Updated SocketCluster version number to match the new major client version number 7.x.x.
  • Fixed issue with socket.setAuthToken(data, options) expiresIn option being ignored; also improved error handling related to the socket.setAuthToken function.
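The setAuthToken(data, options) call shape can be sketched as below. The socket is a mock that simply records what a real implementation would hand to its JWT signer; the expiresIn option (in seconds) is the one whose handling this release fixed, while the recorded structure is an assumption for illustration.

```javascript
// Hedged sketch of setAuthToken(data, options) with expiresIn,
// using a mock socket instead of a real SC server socket.
function createMockSocket() {
  let signed = null;
  return {
    setAuthToken(data, options = {}) {
      // A real SC server signs `data` into a JWT; this mock just
      // records the payload and the forwarded expiresIn option.
      signed = { data, expiresIn: options.expiresIn };
    },
    getSigned() {
      return signed;
    },
  };
}

const serverSocket = createMockSocket();
// Issue a token for this connection that expires in one hour:
serverSocket.setAuthToken({ username: 'alice' }, { expiresIn: 3600 });
console.log(serverSocket.getSigned().expiresIn); // 3600
```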

v6.8.0

6 years ago
  • Removed all remaining traces of domains in SC.
  • Removed custom sc-emitter in favor of component-emitter for the client and server SCSocket objects.

Possible breaking change:

  • SC no longer uses domains internally to capture errors (Node.js has deprecated them); instead, SC now listens for 'error' events directly on the servers and sockets that it creates. This means that you should never call removeAllListeners('error') on objects created by SC, as this will destroy SC's internal error handling. Note that the Node.js documentation explicitly recommends against removing listeners in this way. See https://nodejs.org/api/events.html#events_emitter_removealllisteners_eventname

v6.7.0

6 years ago
  • Refactored the code, especially in and around the sc-broker module.
  • Improved visibility of broker errors (better stack traces).
  • Improved visibility of errors throughout SC in general.
  • Brokers recover from crashes faster.
  • Broker actions and messages now have improved buffering so broker crashes/recovery should be more seamless.
  • Improved buffering for all IPC messages: sendToBroker, sendToWorker and sendToMaster.
  • Improved tests around brokers.

v6.6.0

6 years ago
  • Added the ability for processes to easily respond to messages from other processes over IPC via callbacks. A new callback argument was added to sendToWorker, sendToBroker and sendToMaster methods throughout SC (see updated API docs on https://socketcluster.io website).
  • Added buffering to sendToWorker function on master - Previously, if the workerCluster wasn't ready yet or was down, invoking socketCluster.sendToWorker(...) would trigger an error - Now the messages will be buffered and delivered as soon as the workerCluster is ready.
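The request/response pattern added by the new callback argument can be sketched as follows. Both sides are mocked here so the snippet runs standalone: the method name sendToWorker and the (err, responseData) callback shape come from these release notes, while the direct handler wiring replaces real IPC for illustration.

```javascript
// Sketch of the v6.6.0 IPC callback pattern, assuming mock processes.
function createMockMaster(workerHandler) {
  return {
    sendToWorker(workerId, data, callback) {
      // A real master routes `data` over IPC to the worker process;
      // this mock invokes the worker's handler directly and relays
      // the worker's (err, response) reply back to the callback.
      workerHandler(data, (err, response) => callback(err, response));
    },
  };
}

// The "worker" side responds to incoming master messages:
const master = createMockMaster((data, respond) => {
  respond(null, { echoed: data.msg });
});

master.sendToWorker(0, { msg: 'ping' }, (err, response) => {
  console.log(err, response.echoed); // null 'ping'
});
```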

v6.5.0

6 years ago
  • Improved error handling and logging across processes - This should account for cases where invalid objects or values are thrown or emitted as errors.