Protocol Design

https://www.linkedin.com/pulse/designing-network-protocol-cubes-endurance

Why the fuck would you design a protocol?

That's a very good question. Usually it's when none of the existing options will work for you: the protocol you wanted to use has massive, unfixable, or otherwise fundamental design issues.

Before we can dive into how to design a networking protocol, using the 2SSL (Secure Sapphirian Layer) protocol as the example, you first need some theory and analysis. If you already have that, skip the headings that say Theory and Analysis and jump straight to Design.
Theory
What is a networking protocol?

At the fundamental level, a networking protocol is just a way of encoding complex, arbitrarily sized data within a confined and limited space, at an arbitrary transmission rate and with fluctuating accuracy. As such it must keep track of what data has been sent, what needs to be resent and what has been received. There are many hazards within a network, from bad routing and interference to eavesdropping and maliciously forged packets.

The goal of a networking protocol is to abstract all of these issues away so that higher level applications can just worry about sending and receiving data. It also ensures that data gets routed to the correct location, hides internal networking, maps between various networks, filters the data and much more. Some may argue that we're mixing up several ideas here, but ultimately the reason we can do all of that is that there is a standardized low level networking protocol, which itself is split into several distinct layers.

The biggest active threat to any networking protocol is malicious intent: for example malformed packets, unexpected data, forced deauthentication, forged packets, replay attacks and person-in-the-middle attacks. There are also non-active threats such as bad connections, interference, lag and corruption of the data being sent.

You may or may not have heard of the OSI networking model. You can learn about it at Free Code Camp. The parts that we'll be focusing on are the transport, session and presentation layers.
Analysis

Why HTTP2 isn't good enough. HTTP2 fetches arbitrarily sized data and, while it supports interleaving and 'multi channel' data transmission, it doesn't actually allow indefinite pausing and continuation of a transfer. HTTP2 also depends on caching and the browser's eviction policy, and requires the full data to be downloaded. What we can control is the order and 'dependency tree', but if the order changes it's hard to get HTTP2 to reflect that change in real time.

Why FTP isn't good enough. First off, it relies on antiquated protocols, is easily detectable and doesn't support security unless we use SFTP. It also doesn't scale very well to multi data center usage. The remote server sees the file while it's being uploaded and needs quite complicated logic to determine whether a file has been fully uploaded or not. This is further complicated if we try to hide the file size or chunk the file into smaller pieces. The good thing is that FTP supports both active and passive mode, which allows certain connectivity issues to be bypassed by having the server speak to the client (instead of vice versa).

Why SSH isn't good enough. It uses public/private key encryption, doesn't allow multi data center scaling, and has the same issues described above for FTP/HTTP2 around chunking and encryption. Finally, session state is tightly coupled to the protocol, meaning everything must be renegotiated when transferring to another data center. This involves over 3 round trips and requires expensive disk to disk cloning of files.
Design

Since no current protocol we've identified will work well, we're gonna be designing a new one. This protocol is designed with three key things in mind:

Secure

Security is a buzzword that's thrown around a lot, to the point where it's almost meaningless. We'll therefore define it in better terms: the protocol needs to be end to end encrypted, use a fixed transmission size, and resist tampering and replay attacks.
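
As a rough illustration of what 'fixed transmission size' could mean in practice, here is a hedged TypeScript sketch that pads every frame to a constant wire size before it would be encrypted. The frame and header sizes and the function name are assumptions for illustration, not the real 2SSL values.

```typescript
// Illustrative only: pad every frame to a fixed size so an observer cannot
// infer payload length from packet size. Sizes here are assumed, not spec.
const FRAME_SIZE = 1024;   // assumed fixed wire size in bytes
const HEADER_SIZE = 16;    // assumed header: real-length prefix plus reserved space

// Split an arbitrary payload into fixed-size frames, zero-padding the last one.
function frame(payload: Uint8Array): Uint8Array[] {
  const body = FRAME_SIZE - HEADER_SIZE;
  const frames: Uint8Array[] = [];
  for (let offset = 0; offset < payload.length; offset += body) {
    const chunk = payload.subarray(offset, offset + body);
    const out = new Uint8Array(FRAME_SIZE);               // zero-filled by default
    new DataView(out.buffer).setUint32(0, chunk.length);  // record the real length
    out.set(chunk, HEADER_SIZE);
    frames.push(out);                                     // each frame then gets encrypted
  }
  return frames;
}
```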

Decoupled

There are three layers that need to be addressed:

Transport layer. This is the physical encryption of the header and prepayload. It ensures that data is end to end encrypted with an ephemeral encryption key negotiated via Kyber during the initial connection and tied to a 2SSL-Session-UUID. Transferring this to another data center is as simple as one data center requesting the 2SSL-Session-UUID from the others.
Session layer. Ties to an operator, i.e. someone who is using the software. By tying session information in at this level, we ensure that the actual worker servers are stateless, since all the information needed to perform or not perform an action is stored within this layer. However, since we don't want the client device to see or modify this state (and since JWTs can grow quite large), we tie it to an Operator-Session-UUID. As before, if we encounter one we haven't seen, we can request it from other data centers. Furthermore, we can route this layer from one data center to another transparently.
Data layer. Usually tied to an operator. However, during high traffic periods data can be taken in and queued while waiting to be tied to an operator later. This ensures continued operation and uploading of data whenever an operator's device is on. Once an operator's details can be retrieved, the information is linked to that operator's account. This gives a seamless upload model that can proceed in parallel even while the Operator-Session-UUID is being resolved into the actual details. (See the type sketch after this list.)
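
To show how little each layer needs to know about the others, here is a rough TypeScript sketch of the three layers. Only the UUID names come from the description above; every other field and the resolveSession helper are assumptions for illustration.

```typescript
// Illustrative sketch: UUID names are from the text, other fields are assumed.
interface TransportEnvelope {
  sslSessionUuid: string;   // 2SSL-Session-UUID, tied to the Kyber-negotiated ephemeral key
  ciphertext: Uint8Array;   // encrypted header + prepayload
}

interface SessionContext {
  operatorSessionUuid: string;      // Operator-Session-UUID, opaque to the client device
  claims: Record<string, unknown>;  // server-side state (e.g. large JWT contents)
}

interface DataChunk {
  fileTransferUuid: string;  // which logical file this chunk belongs to
  index: number;             // chunk position within the file
  bytes: Uint8Array;         // payload, queued until it can be tied to an operator
}

// Worker servers stay stateless: an unknown Operator-Session-UUID is simply
// requested from another data center rather than stored locally.
async function resolveSession(
  uuid: string,
  fetchRemote: (u: string) => Promise<SessionContext>,
): Promise<SessionContext> {
  return fetchRemote(uuid);
}
```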

Event based

The low level, while likely operating in lockstep, shouldn't obstruct or cause issues at the higher level, where file appearances and disappearances are atomic. This filters up through the entire protocol design: no freezing of the browser, automatic retry, and code executed in short, manageable chunks.
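
A minimal sketch of what 'short, manageable chunks' might look like in browser code, assuming each individual handler call is cheap; the function names are illustrative, not part of the protocol.

```typescript
// Process chunks in short slices, yielding back to the event loop between
// each one so the page never freezes. Purely illustrative.
async function processChunks(
  chunks: Uint8Array[],
  handle: (chunk: Uint8Array) => void,
): Promise<void> {
  for (const chunk of chunks) {
    handle(chunk);                                      // short, bounded unit of work
    await new Promise(resolve => setTimeout(resolve));  // yield to the event loop
  }
}
```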

What is the use case for the protocol? Transferring files in an arbitrary order, of arbitrary size, over arbitrary periods of time and at arbitrary bandwidth. With current file transfer protocols a large file can block uploads of smaller files, and there is no dual device operation or cooperation. For example, if device A and device B both have file X, device A can upload the first 512 chunks and device B (which has a much faster network) can upload the last 30412 chunks. This scales to an arbitrary number of devices because of how the file transfer and reconnection work. Furthermore, corrupt files are extremely unlikely due to how the payload and header are signed, and due to the retransmission, error detection and failsafes built into the protocol at two separate levels.
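
To make the dual-device example concrete, here is a hedged sketch of how chunk ranges could be split across devices in proportion to bandwidth. The real protocol negotiates this dynamically; the Device shape, the assignChunks function and the bandwidth numbers are assumptions for illustration.

```typescript
// Hypothetical illustration of swarm upload: split a file's chunks across
// devices in proportion to their measured bandwidth.
interface Device { id: string; bandwidth: number }  // bandwidth in arbitrary units

function assignChunks(totalChunks: number, devices: Device[]): Map<string, [number, number]> {
  const total = devices.reduce((sum, d) => sum + d.bandwidth, 0);
  const ranges = new Map<string, [number, number]>();
  let start = 0;
  devices.forEach((d, i) => {
    const share = i === devices.length - 1
      ? totalChunks - start                               // last device takes the remainder
      : Math.round((d.bandwidth / total) * totalChunks);
    ranges.set(d.id, [start, start + share - 1]);         // inclusive chunk indices
    start += share;
  });
  return ranges;
}

// e.g. a slow device A and a much faster device B cooperating on one file
assignChunks(30924, [{ id: "A", bandwidth: 5 }, { id: "B", bandwidth: 300 }]);
```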

Goal of the protocol: ensure seamless transfer and synchronization of files by arbitrarily changing priorities, deleting files and using swarm upload of data. At the high level this allows commands such as .stop(fileTransferUUID) and .upload(fileTransferUUID, priorityFrom0to100). The high level adjusts which file is considered 'most crucial' to have on another device, and the low level protocol figures out how to make it happen. Since a stopped file can be picked up at any time, we don't need to worry about upload timeouts and the like.
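
A small sketch of what that high-level surface could look like. Only the method names come from the design above; the interface name, the priority value and the UUID strings are made up for illustration.

```typescript
// Sketch of the high-level API. Only the method names are from the design
// above; everything else here is illustrative.
interface FileTransferApi {
  upload(fileTransferUUID: string, priorityFrom0to100: number): void;
  stop(fileTransferUUID: string): void;  // a stopped file can be resumed at any time
}

declare const transfers: FileTransferApi;

// Mark one file as crucial and pause a background one; the low-level protocol
// reorders chunk transmission to match.
transfers.upload("7f0c0e1a-example-transfer-uuid", 95);
transfers.stop("b2a4d9c3-example-transfer-uuid");
```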

This protocol doesn't depend on HTTPS or HTTP, as long as the key negotiation occurs over some other protocol (though if need be it could be done within the same protocol). Once a connection is established, it can be sent over raw UDP/TCP or within another protocol like HTTPS. This allows the protocol to be used anywhere we need to connect a device, without specific requirements on a parent protocol (or whether there even needs to be one). Currently the protocol is lightly coupled to HTTPS because it's used within the web and that's a requirement for service workers. However, the actual key negotiation and error codes could easily be fully wrapped within a null 2SSL connection instead.

The threat model is the overall network and an adversary that can fully watch and/or control it. This includes the ability to delay packets, retransmit sent packets and arbitrarily drop packets. The first is addressed by the fact that, after the initial negotiation (which will time out), packet order doesn't matter and timeouts are handled extremely gracefully by the protocol stack (requiring no further work or error handling at a higher level). Since the confirmation packets are roughly 10-byte numbers, forging an encrypted number is extremely unlikely, and the protocol can randomly change the number for a packet without any adverse consequences to either side, further increasing the difficulty of guessing which packet number was used. Finally, to address replay attacks, anti-replay UUIDs are baked in during the initial negotiation and prevent the same packet from being accepted twice by either end. These UUIDs change for each packet sent in either direction.
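
To illustrate the anti-replay mechanism, here is a minimal sketch of a per-packet guard. How the next UUID is derived is an assumption here; in the real protocol that would be fixed during the initial negotiation.

```typescript
// Illustrative anti-replay guard: reject anything already accepted and rotate
// the expected UUID after every packet. The derivation function is assumed,
// not part of the real spec.
class ReplayGuard {
  private seen = new Set<string>();

  constructor(
    private nextExpected: string,                  // seeded during initial negotiation
    private derive: (previous: string) => string,  // agreed rotation function
  ) {}

  accept(packetReplayUuid: string): boolean {
    if (this.seen.has(packetReplayUuid)) return false;         // replayed packet: drop it
    if (packetReplayUuid !== this.nextExpected) return false;  // forged or stale value
    this.seen.add(packetReplayUuid);
    this.nextExpected = this.derive(packetReplayUuid);         // changes for every packet
    return true;
  }
}
```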

Overall, designing a protocol involves seeing an issue that you wish to solve, discovering that current protocols are unsuitable for the job, identifying the context the protocol will be used in, the layers it covers, how it is tapped into and how it handles different threat scenarios.

