CursusDB is an open-source distributed in-memory yet persisted document-oriented database system with real-time capabilities.
CursusDB Cluster and Node Bundle Stable v2.3.9
package main

import (
	"fmt"
	"sync"
	"time"

	cursusdbgo "github.com/cursusdb/cursusdb-go"
)

func main() {
	wg := &sync.WaitGroup{}

	for i := 0; i < 4; i++ { // Create 4 client connections in parallel
		wg.Add(1)
		go func(wgg *sync.WaitGroup) {
			defer wgg.Done()

			cursusdbc := &cursusdbgo.Client{
				TLS:                false,
				ClusterHost:        "0.0.0.0",
				ClusterPort:        7681,
				Username:           "test",
				Password:           "test",
				ClusterReadTimeout: time.Now().Add(time.Second * 60),
			}

			err := cursusdbc.Connect()
			if err != nil {
				fmt.Println(err.Error())
				return
			}

			// Just an example. The ! suffix on "x!" marks the key as unique,
			// so only one of the four concurrent inserts should succeed.
			res, err := cursusdbc.Query(`insert into test({"x!": 33});`)
			if err != nil {
				fmt.Println(err.Error())
				return
			}

			fmt.Println(res)
			cursusdbc.Close()
		}(wg)
	}

	wg.Wait()
}
Results:
{"collection":"test","insert":{"$id":"065efc39-2dcb-4a92-bb32-2a8b7b3a8bed","x":33},"message":"Document inserted successfully.","statusCode":2000}
{"message":"Document already exists.","statusCode":4004}
{"message":"Document already exists.","statusCode":4004}
{"message":"Document already exists.","statusCode":4004}
This is exactly how it is supposed to work: even though the inserts run concurrently, the unique-key constraint must be enforced reliably, hence the new lock.
CursusDB Cluster and Node Bundle Stable v2.3.8
There was a v2.3.7, but nothing changed dramatically; it just added cluster logic to disallow multi-inserts.
🟢 NO BREAKING CHANGES
CursusDB Cluster and Node Bundle Stable v2.3.6
🟢 NO BREAKING CHANGES
🔥NEW FEATURES🔥
update 1 in test where firstName = 'Alex' set interests = ["cars","programming"];
[{"127.0.0.1:7682": {"collection":"test","message":"1 Document(s) updated successfully.","statusCode":2000,"updated":[{"$id":"a544de0a-166b-4340-9956-0db0e9f4647c","firstName":"Alex","interests":["cars","programming"],"lastName":"Jones"}]}}]
New status codes for implementation:

4032
- Invalid set array values (with description)

🧪 This update to CDQL has been added to the E2E suite as a test case as well, ✅ passed before release
..
✅ PASS UPDATE ALL FROM COLL WITH CONDITIONS AND MULTI SET WITH ARRAY SET
..
CursusDB Cluster and Node Bundle Stable v2.3.5
🟢 NO BREAKING CHANGES
lsof -i :7682
# find the PID of the node listening on port 7682
kill -6 NODEPID
# is SIGABRT
kill -3 NODEPID
# is SIGQUIT
kill -9 NODEPID
# is SIGKILL
kill -15 NODEPID
# is SIGTERM
kill -2 NODEPID
# is SIGINT

You need to start the node, kill it with a signal, then check with a client; then start, kill, and check again for each signal. I usually verify by checking my actual collection and documents. If, say, the node were not handling -9, you would get no persisted data. If there is a signal missing, please do let me know.
On each of these signals, the node will persist data.
CursusDB Cluster and Node Bundle Stable v2.3.2
🟢 NO BREAKING CHANGES
After a lot of action on Reddit, I've implemented what I learned.
Thank you to those who went crazy at me; it was a good rustle and tussle, not between siblings but between Engineers!
CursusDB Cluster & Node Bundle v2.3.1 STABLE
🟢 NO BREAKING CHANGES
Just updated status code 120's wording in this patch. It bothered me.
BEFORE
[ERROR] StartRunQueryQueue(): 120 Could not open/create query queue file open .qqueue: no such file or directory
AFTER
[ERROR] StartRunQueryQueue(): 120 No .qqueue file found. Possibly first run, if so the node will create the .qqueue file after run of this method.
CursusDB Cluster & Node Bundle v2.3.0 STABLE
🟢 NO BREAKING CHANGES
120
- Could not open/create query queue file (with description)
502
- Node could not recover query queue
503
- Could not dial self to requeue queries (with description)
504
- Could not commit to queued query/transaction
505
- n recovered and processed from .qqueue
507
- Error loading X509 key pair (with description)

CursusDB Cluster & Node Bundle v2.2.0 STABLE
🟢 NO BREAKING CHANGES
insert into test({"x": "%%Hello%"});
would cause (MISSING) to be appended to any return value. This has been corrected for all actions and tested through an Observer as well.