Highly scalable realtime pub/sub and RPC framework
Fixed a bug with the top level (master process) `--inspect` and `--debug` flags not working.
Added async/await support for the worker and broker controller `run` methods. See https://github.com/SocketCluster/socketcluster/issues/351. The `createHTTPServer` method on the worker can also now resolve asynchronously.
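As a minimal sketch of what this enables (the class-based entry point is described later in these notes; the async database setup step is purely illustrative and not part of SC's API):

```js
var SCWorker = require('socketcluster/scworker');

class Worker extends SCWorker {
  // The run() method can now be declared async; SocketCluster
  // will wait for the returned promise before considering the
  // worker ready.
  async run() {
    // Hypothetical async initialization step, e.g. connecting to
    // a database before accepting socket connections.
    await this.connectToDatabase();

    this.scServer.on('connection', (socket) => {
      // Handle the new connection.
    });
  }

  async connectToDatabase() {
    // Placeholder for real async setup logic.
  }
}

new Worker();
```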
Removed the `global` property on the worker and server objects - you should now use the `exchange` property instead. Also removed the `getSocketURL()` method of the SocketCluster master instance.

This version fixes an issue with the `scServer.clients` object and `scServer.clientsCount` value not being updated correctly under certain specific scenarios. See https://github.com/SocketCluster/socketcluster-server/pull/21
In addition to fixing the issue, this version introduces a few changes.
- The `disconnect` event used to trigger whenever the socket lost the connection at any stage of the handshake/connection cycle. That meant that the `disconnect` event could get triggered on the socket before the `connection` event had been triggered on the `scServer` object (which was strange). Also, there was no way to know the specific phase of the cycle when the connection ended (before or after the handshake). Now `disconnect` only triggers after the handshake has completed. To catch a lost connection during the handshake phase, you should use the `connectAbort` event on the socket or the `connectionAbort` event on the server. If you need a catch-all to capture any kind of connection termination at any phase of the cycle, you should now use the socket's new `close` event or the server's new `closure` event. Note that this change does not affect the `disconnection` event on the server; only the `disconnect` event on the socket. Also note that this change should only affect you if you have written logic inside a `scServer.on('handshake', handler)` handler - if you just use the `scServer.on('connection', handler)` handler (and thus don't try to access sockets before they are fully connected), then the behaviour of `disconnect` will not have changed from what it was before.
- Added a `scServer.pendingClients` property, which is a hashmap similar to `scServer.clients` except that it contains sockets which have not yet completed the handshake (and are therefore still in the 'connecting' state). A matching `scServer.pendingClientsCount` property was also added.
- The `connectAbort` event has been present on client sockets for a long time, but not on server sockets until now. A new `connectAbort` event was added to the socket and a matching `connectionAbort` event was added on the `scServer` object.
- A new `close` event has been added to the socket to replace the `disconnect` event, which had ambiguous meaning. You can also listen for lost socket connections on the `scServer` using the `closure` event.

See https://github.com/SocketCluster/socketcluster/issues/333
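A sketch of how the new events fit together, assuming the event names behave as described above (`scServer` stands for the worker's SC server instance, e.g. `worker.scServer`):

```js
// Fires only after the handshake has completed.
scServer.on('connection', (socket) => {
  socket.on('disconnect', () => {
    // A fully-connected socket lost its connection.
  });
  socket.on('close', () => {
    // Catch-all: any termination, before or after the handshake.
  });
});

scServer.on('handshake', (socket) => {
  socket.on('connectAbort', () => {
    // The connection was lost during the handshake phase.
  });
});

scServer.on('connectionAbort', (socket) => {
  // Server-side counterpart of the socket's connectAbort event.
});

scServer.on('closure', (socket) => {
  // Server-side catch-all for any lost connection.
});
```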
Special thanks to @mauritslamers for coming up with this proposal and for doing the initial work for this release.
This release improves the way SC bootstraps different processes to give more control to the developer. As a result of this update, the boilerplate logic for entry points to various processes within SC has changed.
The entry point for the worker controller (`worker.js`) used to be:
```js
// SocketCluster decides when the run() function is invoked.
module.exports.run = function (worker) {
  // Custom worker logic goes here.
  worker.scServer.on('connection', (socket) => {
    // Handle the new connection.
  });
};
```
but now it is:
```js
var SCWorker = require('socketcluster/scworker');

class Worker extends SCWorker {
  run() {
    // Custom worker logic goes here.
    // You can reference the worker in here with 'this'.
    this.scServer.on('connection', (socket) => {
      // Handle the new connection.
    });
  }
}

new Worker();
```
Note that you can use any of the approaches mentioned in issue #333 but the class-based approach is the default recommended approach (unless you need to support really old versions of Node.js).
The new approach was inspired by Java's `Runnable` interface for threads (except that in the case of SC, we have specialized processes instead of threads); see https://docs.oracle.com/javase/tutorial/essential/concurrency/runthread.html.
As part of this change, you can now override the SCWorker's `createHTTPServer()` method to return your own custom HTTP server for SC to use (it needs to be compatible with the default Node.js HTTP server though). This should make it easier to use other back end frameworks with SC.
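For example, a minimal sketch of such an override using Node's built-in `http` module (the request handler shown is illustrative; any server compatible with the default Node.js HTTP server should work):

```js
var SCWorker = require('socketcluster/scworker');
var http = require('http');

class Worker extends SCWorker {
  // Override createHTTPServer() to supply a custom HTTP server
  // for SC to attach its realtime server to.
  createHTTPServer() {
    return http.createServer((req, res) => {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('Hello from a custom HTTP server');
    });
  }

  run() {
    this.scServer.on('connection', (socket) => {
      // Handle the new connection.
    });
  }
}

new Worker();
```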
For more details on the new worker boilerplate, see https://socketcluster.io/#!/docs/api-scworker. For more details on the new broker boilerplate, see https://socketcluster.io/#!/docs/api-broker
`exchange.run(...)` should be renamed to `exchange.exec(...)` throughout your code (`run` now has a special meaning related to the various process controllers). You should now use `const SocketCluster = require('socketcluster');` instead of `const SocketCluster = require('socketcluster').SocketCluster;`
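The `exchange.run(...)` to `exchange.exec(...)` rename can be sketched as follows (the `query` function and callback shape here are illustrative placeholders, not confirmed signatures):

```js
// Before this release (inside a worker controller):
//   worker.exchange.run(query, callback);

// After this release - same call, renamed:
worker.exchange.exec(query, (err, data) => {
  // Handle the result from the broker.
});

// Master process entry point - updated import style:
const SocketCluster = require('socketcluster');
```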
- New `worker.js` boilerplate: https://github.com/SocketCluster/socketcluster/blob/de9757cdb078805bfa0f808687b5e6a8e2ef5c20/sample/worker.js
- New `broker.js` boilerplate: https://github.com/SocketCluster/socketcluster/blob/de9757cdb078805bfa0f808687b5e6a8e2ef5c20/sample/broker.js
You can also provide a custom controller for the `workerCluster` process; it's similar to the new `worker.js` and `broker.js` format, but you need to import the `SCWorkerCluster` base class using `var SCWorkerCluster = require('./scworkercluster');`
Removed:

- `initController` - since the developer now has full control over the instantiation of the various process controllers, it became pretty useless and didn't fit into the new model.
- The `httpServerModule` option - you can now simply override the SCWorker's `createHTTPServer()` method to achieve the same thing.
- The default `master.js` file - now you can just provide your own `server.js` in the volume container to be your master process: https://github.com/SocketCluster/socketcluster/blob/de9757cdb078805bfa0f808687b5e6a8e2ef5c20/sample/server.js
No breaking API changes from 7.x.x, but this version makes an addition to the SC protocol which may affect codecs and other SC plugins that interact directly with the SC protocol.
You can now set the `batch` option to `true` when subscribing to a new channel in order to allow that channel to be batched. See the `subscribe` method here: https://socketcluster.io/#!/docs/api-scsocket-client
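A client-side sketch of the option, assuming the socketcluster-client API of this era (`connect`, `subscribe`, and `watch`; the channel name is arbitrary):

```js
var socketCluster = require('socketcluster-client');

var socket = socketCluster.connect();

// batch: true allows this channel's subscribe request to be
// batched together with other batched subscribe requests.
var channel = socket.subscribe('myChannel', {batch: true});

channel.watch((data) => {
  // Handle incoming channel data.
});
```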
Note that v7.0.x shouldn't have any major breaking changes from 6.8.0.
Fixed a bug in `socket.setAuthToken(data, options)` which caused the `expiresIn` option to be ignored; also improved error handling related to the `socket.setAuthToken` function.

Possible breaking change: you should not call `removeAllListeners('error')` on objects created by SC, as this will destroy SC's internal error handling. Note that the Node.js documentation explicitly recommends against removing listeners in this way. See https://nodejs.org/api/events.html#events_emitter_removealllisteners_eventname
Updated the `sc-broker` module. Previously, calling `socketCluster.sendToWorker(...)` before the workerCluster was ready would trigger an error - now the messages will be buffered and delivered as soon as the workerCluster is ready.