Going Distributed #19
base: master
Conversation
SUMUKHA-PK left a comment
I'm not going to review the core lockclient or lockservice yet, as this is going to be merged after my PR.
This is a first round of review just for code semantics; I'll do further rounds focused on correctness.
So far, good job!
    func (f *fsm) Apply(l *raft.Log) interface{} {
        var c command
        if err := json.Unmarshal(l.Data, &c); err != nil {
            panic(fmt.Sprintf("failed to unmarshal command: %s", err.Error()))
Can you return the error instead of panicking?
A panic stops the system from functioning; we shouldn't allow that.
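A minimal sketch of this suggestion, assuming the command struct shown in the diff and leaving the existing cases (such as the applyRelease case below) unchanged; the returned error reaches the caller through raft's ApplyFuture.Response() instead of bringing the node down:

    func (f *fsm) Apply(l *raft.Log) interface{} {
        var c command
        if err := json.Unmarshal(l.Data, &c); err != nil {
            // Return the error as the Apply response rather than panicking.
            return fmt.Errorf("failed to unmarshal command: %w", err)
        }
        switch c.Op {
        // ... existing cases unchanged ...
        default:
            return fmt.Errorf("unrecognized command op: %s", c.Op)
        }
    }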
            return f.applyRelease(c.Key, c.Value)
        default:
            panic(fmt.Sprintf("unrecognized command op: %s", c.Op))
Same for panic here
    // Snapshot returns a snapshot of the key-value store. We wrap
    // the things we need in fsmSnapshot and then send that over to Persist.
    // Persist encodes the needed data from fsmsnapshot and transport it to
Suggested change:
-   // Persist encodes the needed data from fsmsnapshot and transport it to
+   // Persist encodes the needed data from fsmsnapshot and transports it to
    }

    // Set the state from snapshot. No need to use mutex lock according
    // to Hasicorp doc
Suggested change:
-   // to Hasicorp doc
+   // to Hasicorp doc.
    func (f *fsmSnapshot) Persist(sink raft.SnapshotSink) error {
        err := func() error {
            // Encode data
Suggested change:
-       // Encode data
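For the Snapshot/Persist flow under discussion, one common shape for Persist against raft.SnapshotSink is sketched below; the f.lockMap field name is an assumption and not necessarily this PR's exact code:

    func (f *fsmSnapshot) Persist(sink raft.SnapshotSink) error {
        err := func() error {
            // Marshal the captured lock map and stream it into the snapshot sink.
            b, err := json.Marshal(f.lockMap)
            if err != nil {
                return err
            }
            if _, err := sink.Write(b); err != nil {
                return err
            }
            return sink.Close()
        }()
        if err != nil {
            // Cancel tells raft the snapshot attempt failed and should be discarded.
            sink.Cancel()
        }
        return err
    }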
    // If a node already exists with either the joining node's ID or address,
    // that node may need to be removed from the config first.
    if srv.ID == raft.ServerID(nodeID) || srv.Address == raft.ServerAddress(addr) {
        // However if *both* the ID and the address are the same, then nothing -- not even
Suggested change:
-   // However if *both* the ID and the address are the same, then nothing -- not even
+   // However if BOTH the ID and the address are the same, then nothing, not even
    // that node may need to be removed from the config first.
    if srv.ID == raft.ServerID(nodeID) || srv.Address == raft.ServerAddress(addr) {
        // However if *both* the ID and the address are the same, then nothing -- not even
        // a join operation -- is needed.
Suggested change:
-   // a join operation -- is needed.
+   // a join operation, is needed.
| "github.com/gorilla/mux" | ||
| ) | ||
|
|
||
| // func (rs *lockservice.RaftStore) Start() error { |
Can be removed.
    // lockservice's lock map.
    // This function is used by the Snapshot() function of the
    // finite state machine of the distributed consensus algorithm
    func (ls *SimpleLockService) GetLockMap() map[string]string {
You said copy, so return the copy.
The locking here is of no use since you're returning the actual map anyway.
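A sketch of what that could look like; ls.lockMap and ls.mu are assumed field names here, not necessarily the ones in this PR:

    func (ls *SimpleLockService) GetLockMap() map[string]string {
        ls.mu.Lock()
        defer ls.mu.Unlock()
        // Copy under the lock so the snapshot cannot race with concurrent acquires and releases.
        lockMapCopy := make(map[string]string, len(ls.lockMap))
        for k, v := range ls.lockMap {
            lockMapCopy[k] = v
        }
        return lockMapCopy
    }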
    // function
    func (ls *SimpleLockService) SetLockMap(lockMapSnapshot map[string]string) {
        // Set the state from snapshot. No need to use mutex lock according
        // to Hasicorp doc
Suggested change:
-   // to Hasicorp doc
+   // to Hashicorp doc
Resolves #16
This pull request adds the ability to host SimpleLockService on a cluster of nodes in consensus. Consensus is maintained using the Raft algorithm; the Hashicorp implementation of the algorithm has been used.

A struct of type RaftStore has been created. It has the following fields -

Each Raft node has an associated HTTP listener. If a request is sent to the listener of the leader of a cluster, the listener commits the operation on the associated Raft node if possible. If a request is sent to the listener of a follower in the cluster, the request is redirected to the listener of the leader. The listener's address uses a port number that is one more than the port number of the associated Raft node (for example, if the address of a Raft node is 127.0.0.1:5000, then the address of its listener is 127.0.0.1:5001). The getHTTPAddr() and getListenerAddr() functions implement this mapping.

Redirection of HTTP requests to the listener of the leader node has been tested in simple_client_test.go, along with the tests that are already present.
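As an illustration of the address mapping described above, here is a small, self-contained sketch; listenerAddrFor is a hypothetical helper, not the PR's getListenerAddr():

    package main

    import (
        "fmt"
        "net"
        "strconv"
    )

    // listenerAddrFor maps a Raft node address such as "127.0.0.1:5000"
    // to the address of its HTTP listener, "127.0.0.1:5001".
    func listenerAddrFor(raftAddr string) (string, error) {
        host, portStr, err := net.SplitHostPort(raftAddr)
        if err != nil {
            return "", err
        }
        port, err := strconv.Atoi(portStr)
        if err != nil {
            return "", err
        }
        return net.JoinHostPort(host, strconv.Itoa(port+1)), nil
    }

    func main() {
        addr, err := listenerAddrFor("127.0.0.1:5000")
        if err != nil {
            panic(err)
        }
        fmt.Println(addr) // prints 127.0.0.1:5001
    }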