What if you had source control in your app?
I don't mean for building your app. I mean for running it: what if your app and server state had commit, rewind, and merge capabilities, built into the app and servers themselves at the programming-data level instead of at the executable-files level?
When the user takes a meaningful action, we can record it in a sequence of events. By combining the event sequences of multiple agents, we can construct a lattice ledger of events, allowing all relevant parties to ensure that their view of events is up to date within an acceptable tolerance and timeframe, and that no event drifts off unresolved forever.
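As a minimal sketch of this idea (the names and the Lamport-style counter are my own assumptions, not a prescribed design), per-agent event sequences can be merged into one deterministic ledger so that every party computes the identical history:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    agent: str      # which agent produced the event
    counter: int    # per-agent logical clock
    action: str     # what happened

def merge_ledgers(*sequences):
    """Combine several agents' event sequences into one total order.

    Sorting by (counter, agent) breaks ties deterministically, so all
    parties arrive at the same merged ledger regardless of the order
    in which they received the sequences.
    """
    merged = [e for seq in sequences for e in seq]
    merged.sort(key=lambda e: (e.counter, e.agent))
    return merged

user = [Event("user", 1, "add_item"), Event("user", 2, "add_item")]
server = [Event("server", 1, "price_update")]
ledger = merge_ledgers(user, server)
```

Because the merge is a pure function of the combined events, each party can check that its view is up to date simply by comparing ledgers.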
This is useful for developer debugging, user analytics, and resource statistics. Deterministic replays allow us to analyze the history of a commit, and re-run the past to examine the sequence of events that occurred.
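Deterministic replay can be sketched as a pure reducer applied over the recorded events; the event names here are illustrative only. Re-running the history up to any commit reproduces the state at that commit exactly:

```python
def apply(state, event):
    """Pure reducer: same state + same event always yields the same result."""
    kind, payload = event
    if kind == "add":
        return state + [payload]
    if kind == "remove":
        return [item for item in state if item != payload]
    return state

def replay(events, upto=None):
    """Reconstruct state by re-running the first `upto` events (all by default)."""
    state = []
    for event in events[:upto]:
        state = apply(state, event)
    return state

history = [("add", "mug"), ("add", "hat"), ("remove", "mug")]
```

Debugging a past incident then amounts to replaying the ledger to the commit just before it and stepping forward.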
The story of a user-server transaction
A user fills out their cart on an e-commerce platform, and every time they add an item, the cart makes a micro-commit. These micro-commits implement an 'undo' button that lets the user roll back the latest micro-commit, and a 'redo' for when they later change their mind, even if they've done other things in the meantime.
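In its simplest linear form (a sketch, not a prescribed API), the micro-commit history is a list of snapshots with a cursor, and undo/redo just move the cursor:

```python
class Cart:
    def __init__(self):
        self.commits = [[]]   # micro-commit history: snapshots of the item list
        self.cursor = 0       # which micro-commit is current

    @property
    def items(self):
        return self.commits[self.cursor]

    def add(self, item):
        # A new micro-commit after an undo discards the redo branch,
        # as in the simplest linear undo model.
        self.commits = self.commits[: self.cursor + 1]
        self.commits.append(self.items + [item])
        self.cursor += 1

    def undo(self):
        self.cursor = max(0, self.cursor - 1)

    def redo(self):
        self.cursor = min(len(self.commits) - 1, self.cursor + 1)
```

Supporting redo across intervening actions, as described above, would require keeping the abandoned commits as branches rather than discarding them; this sketch shows only the linear core.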
When the user is satisfied with their cart, it bundles the micro-commits into an order commit-request, which is sent to the server. The commit-request is uniquely linked to the user and their present state, as well as to the server and its last known state, and cannot be forged, serving as proof of authorship and a chain of evidence for the state. Unlike a credit card number, an intercepted order cannot be altered, used to place a different order, or replayed for a different user. It can only be used for that exact order, that one time, and both the user and server know it.
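One way to sketch this binding, using an HMAC as a shared-secret stand-in for whatever real signature scheme an implementation would choose: the tag covers the exact order together with hashes of the user's present state and the server's last known state, so altering any part invalidates the request. The key and field names are hypothetical.

```python
import hashlib
import hmac
import json

USER_KEY = b"user-secret"  # hypothetical per-user signing key

def make_commit_request(order, user_state_hash, server_state_hash):
    """Bind the order to both parties' states and sign the whole bundle."""
    body = json.dumps(
        {"order": order, "user": user_state_hash, "server": server_state_hash},
        sort_keys=True,  # deterministic serialization, so both sides agree
    ).encode()
    tag = hmac.new(USER_KEY, body, hashlib.sha256).hexdigest()
    return body, tag

def verify(body, tag):
    """Recompute the tag; constant-time compare to resist timing attacks."""
    expected = hmac.new(USER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

body, tag = make_commit_request(["mug"], "abc1", "def2")
```

An interceptor who changes the order, the user, or the referenced states produces a body the tag no longer matches.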
The server receives the order commit-request and either accepts it for processing or rejects it as invalid, returning proof of its decision to the user. If it accepted the request, the server then begins processing the order.
If the server is able to fulfill the order, it does so, and finalizes the order by sending proof to the user. This proves not only that the order was well-formatted and correctly signed, but that its fulfillment was not interrupted by a server going down or a change in resource availability.
The server rejects a request as invalid for reasons such as being ill-formatted, or presenting insufficient or incorrect proof of authorship. This sort of error occurs when the request cannot even be considered, as we cannot establish what it is or who is asking.
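The gatekeeping step above can be sketched as follows; `verify` is assumed to be whatever proof check the implementation uses, and the rejection reasons are illustrative:

```python
import json

def receive(body, tag, verify):
    """Accept a commit-request for processing, or reject it with a reason."""
    # First: can we establish who is asking?
    if not verify(body, tag):
        return ("rejected", "cannot establish authorship")
    # Second: can we establish what it is?
    try:
        order = json.loads(body)["order"]
    except (ValueError, KeyError, TypeError):
        return ("rejected", "ill-formatted request")
    return ("accepted", order)
```

Only requests that pass both checks reach the fulfillment stage; everything else is turned away before it is ever considered.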
If an item in the user's cart becomes unavailable, the server learns of the conflict when it receives an event commit recording that the menu has changed, and it forwards this event to the user at the next point of contact. This may occur while the user is still shopping, or after the order has been accepted for processing.
In either case, the conflict is detected and the user is notified; the user then chooses to continue or abort, either resolving the conflict by merging the events (replacing or canceling the now-unavailable item), or aborting the entire order. The server then begins processing the updated order events, and continues in this fashion until the order is successfully placed or cancelled.
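A small sketch of this detect-and-merge loop, under my own assumed shape for the user's resolution choice (replace, drop, or abort):

```python
def find_conflicts(order, available):
    """Items in the order that the availability event says no longer exist."""
    return [item for item in order if item not in available]

def resolve(order, conflicts, choice):
    """Merge the user's resolution back into the order events."""
    if choice["action"] == "abort":
        return None  # cancel the entire order
    if choice["action"] == "drop":
        return [item for item in order if item not in conflicts]
    if choice["action"] == "replace":
        return [choice["with"] if item in conflicts else item for item in order]
    raise ValueError("unknown resolution")

order = ["mug", "hat"]
conflicts = find_conflicts(order, available={"hat"})
```

The server re-validates the merged order against the latest availability events and loops until there are no conflicts left or the order is aborted.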
Throughout this whole process, both the user and server are proof-synchronous; they each hold evidence of the other's last known state, and walk forward through the transaction knowing that both parties can prove the chain of events leading up to the present moment.
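One plausible mechanism for this (a sketch, not the only design) is a hash chain: each event's digest commits to the digest before it, so any party holding the current digest can prove, or check, the entire chain of events leading up to the present moment.

```python
import hashlib

def chain(events):
    """Fold a sequence of events into a single digest that commits to all of them."""
    digest = hashlib.sha256(b"genesis").hexdigest()
    for event in events:
        # Each step hashes the previous digest together with the new event,
        # so the final digest depends on every event and their exact order.
        digest = hashlib.sha256((digest + event).encode()).hexdigest()
    return digest

events = ["add:mug", "add:hat", "checkout"]
```

Two parties that agree on the current digest necessarily agree on the whole history; any divergence, even a reordering, yields a different digest.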
Analyzing what just happened
These events are recognizable as the same operations we use in source control of the app's code, except that they occur at runtime within the app, and apply to the user-server cart-order transaction rather than to the development process.
All of this is done without defining APIs, instead taking advantage of the same programming language being used for both the server and client. The programmer simply defines the data types and functions, and the compiler automatically handles the storage, transmission, and synchronization of data, events, and proofs, much as we let compilers manage memory instead of allocating and freeing it by hand.
We abandoned manual memory management because doing so was necessary to rule out an entire class of bugs and vulnerabilities involving dangling pointers, which had caused decades of hacks and headaches. A redundant and artificial separation between compile-time and run-time (or client-side and server-side) is detrimental to programming in the same way.
Fulfilling this process with current-generation systems requires constructing a pipeline out of fragile manual adapters connecting databases, servers, clients, and processes, and results in spending huge amounts on the upkeep of such a rickety system, with the loss of efficiency eventually requiring a server farm instead of just a few servers. Here we see the culprit laid bare: the problem of scaling.
We'll only solve this problem by stepping outside of our comfort zone, and acknowledging that we need to upgrade from a manual process to an automatic one. Sure, it means rewriting a whole lot of code, but we did the same when we moved from 16 bits to 32, when we jumped to multi-core, and when we began using automatic memory management. It will be a lot easier this time, as we're not replacing what we're doing, just how we're doing it. We've done it before, and just as before, it will be worth it: we'll never want to go back to the manual process, just as you can't imagine going back to programming in assembly.
It wouldn't be worth the effort for a small change; hassle-free distributed computing, however, is. There is a natural method of distributed computing that promises to free developers from a great deal of repetitive and error-prone boilerplate. Programming should be as simple as telling the computer what to do; we should never get stuck telling it how.
- Leo D., August, 2022