
Great new work from @inkandswitch: inkandswitch.com/cambria.html Local-first software is a great aspiration, but iterating on decentralized data structures is indeed a nightmare I've been living with as I build @withorbit. Excited for these ideas on more predictable data evolution.
Project Cambria: Translate your data with lenses
Changing schemas in distributed software is hard. Could adopting bidirectional lenses help?
inkandswitch.com
10 replies and sub-replies as of Oct 07 2020

I wonder how to test / formally model the consistency property you'd want the lenses to describe.
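One candidate consistency property is the classic lens round-trip law: translating a document forward through a lens and then back should recover the original, and vice versa. A minimal TypeScript sketch of those laws stated as plain checks; the `Lens` interface and the `get`/`put` names here are illustrative assumptions, not Cambria's actual API:

```ts
// A bidirectional lens between an old document shape A and a new shape B.
// This interface is hypothetical -- just enough to state the laws.
interface Lens<A, B> {
  get(a: A): B;       // translate old -> new
  put(b: B, a: A): A; // translate new -> old, consulting the old doc for fields B lacks
}

// Round-trip law 1 (GetPut): reading and writing back changes nothing.
// JSON.stringify serves as a crude structural equality, assuming stable key order.
function checkGetPut<A, B>(lens: Lens<A, B>, a: A): boolean {
  return JSON.stringify(lens.put(lens.get(a), a)) === JSON.stringify(a);
}

// Round-trip law 2 (PutGet): what you write is what you read back.
function checkPutGet<A, B>(lens: Lens<A, B>, b: B, a: A): boolean {
  return JSON.stringify(lens.get(lens.put(b, a))) === JSON.stringify(b);
}
```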
I may have made the problem needlessly hard on myself by using content-addressable identifiers for CRDT log structures (including their parent pointers). I did that so I could serialize to IPFS or similar, but it means I have to rebase the whole tree whenever the structure changes.
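Roughly the structure being described, as a sketch with made-up field names: each log entry's ID is a hash over its contents, including its parents' IDs, so rewriting any entry's payload changes its hash and forces every descendant to be re-hashed, i.e. a rebase of the whole tree.

```ts
import { createHash } from "node:crypto";

// Hypothetical content-addressed log entry: the ID is derived from the
// payload *and* the parent IDs, as in IPFS-style DAGs.
interface LogEntry {
  payload: unknown;
  parents: string[]; // parent entry IDs (themselves hashes)
}

function entryId(entry: LogEntry): string {
  return createHash("sha256")
    .update(JSON.stringify({ payload: entry.payload, parents: entry.parents }))
    .digest("hex");
}

// Changing one entry's payload (e.g. migrating it to a new schema) changes
// its hash, which invalidates every child's `parents` array, and so on up
// the tree -- hence the whole-tree rebase.
```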
I cain’t say I know nuthin’ bout this here perdicament o’yours, but minds me bit of that there @unisonweb.
If you perform the conversion on read (and after validating the hash), as suggested in the article, I think this should be fine?
The problem is the log’s parent ID, which is the hash of the parent’s contents at write time. You can’t run the lens on the ID; you need some complex version-specific lookup table.
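In other words, a read-time lens can rewrite each entry's payload, but the parent hashes baked into the IDs still point at the old versions. The workaround would look roughly like a table, built in topological order, mapping every old ID to the ID of its migrated counterpart. A rough sketch continuing the `LogEntry`/`entryId` illustration above; all names here are hypothetical:

```ts
// Migrate an entire content-addressed log to a new schema.
// `migratePayload` stands in for running a lens on each entry's payload.
// Entries must be processed parents-first so their new IDs exist when
// children's parent pointers are rewritten.
function rebaseLog(
  entriesInTopoOrder: LogEntry[],
  migratePayload: (payload: unknown) => unknown
): Map<string, string> {
  const oldToNew = new Map<string, string>(); // version-specific lookup table

  for (const entry of entriesInTopoOrder) {
    const oldId = entryId(entry);
    const migrated: LogEntry = {
      payload: migratePayload(entry.payload),
      // Every parent pointer must be rewritten via the table.
      parents: entry.parents.map((p) => oldToNew.get(p) ?? p),
    };
    oldToNew.set(oldId, entryId(migrated));
    // In practice the migrated entries would also be persisted here.
  }
  return oldToNew;
}
```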
Ah, so you hash the whole content. I was thinking of the git model where commits / change operations are hashed (including a pointer to parent commits).
Ah yes, that was how we wound up using hypercores, which have stable names over time. Let's have a talk about this -- we've done quite a bit of work and may be able to help you avoid falling into some of the traps we did.
Ah! Thank you. I will go read that paper and then try to write you an email. :)
Did you read PushPin? There's a lot of good stuff for you in there: martin.kleppmann.com/papers/pushpin…
I’ve found generative testing to be really useful for this
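Concretely, the round-trip laws sketched earlier are a natural target for generative testing: generate arbitrary documents and check that both laws hold. A sketch using fast-check, one property-based testing library for TypeScript, with a made-up `titleLens` as the lens under test (not Cambria code), reusing the `Lens` sketch above:

```ts
import fc from "fast-check";

// A toy lens that renames `title` to `name` (illustrative only).
type OldDoc = { title: string; done: boolean };
type NewDoc = { name: string; done: boolean };

const titleLens: Lens<OldDoc, NewDoc> = {
  get: (a) => ({ name: a.title, done: a.done }),
  put: (b, _a) => ({ title: b.name, done: b.done }),
};

// GetPut: random old documents survive a forward/backward round trip.
fc.assert(
  fc.property(
    fc.record({ title: fc.string(), done: fc.boolean() }),
    (doc) => checkGetPut(titleLens, doc)
  )
);

// PutGet: random new documents survive a backward/forward round trip.
fc.assert(
  fc.property(
    fc.record({ name: fc.string(), done: fc.boolean() }),
    fc.record({ title: fc.string(), done: fc.boolean() }),
    (newDoc, oldDoc) => checkPutGet(titleLens, newDoc, oldDoc)
  )
);
```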