Do any of these serialization libraries have a facility that lets you specify a wire format and an application format, with recipes for converting one to the other?
I haven't used these very seriously, but a problem I had a while back was that the wire format was not what the applications wanted to use, while a good application format was too space-inefficient for the wire.
As far as I could see there was not a great way to do this. You could rewrite the wire<->app converter in every app; or have a converter program, but now you essentially have two wire formats and need to put this extra program and data movement into workflows; or write a library and maintain bindings for all your languages.
lalaithion 20 hours ago [-]
Protocol buffers suck but so does everything else. Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards-compatible changes.
Just with those two criteria you’re down to, like, six formats at most, of which Protocol Buffers is the most widely used.
And I know the article says no one uses the backwards-compatible stuff but that’s bizarre to me – setting up N clients and a server that use protocol buffers to communicate, then being able to add fields to the schema and deploy the servers and clients in any order, is way nicer than it is with other formats that force you to babysit deployment order.
The reason why protos suck is because remote procedure calls suck, and protos expose that suckage instead of trying to hide it until you trip on it. I hope the people working on protos, and other alternatives, continue to improve them, but they’re not worse than not using them today.
> Typical offers a new solution ("asymmetric" fields) to the classic problem of how to safely add or remove fields in record types without breaking compatibility. The concept of asymmetric fields also solves the dual problem of how to preserve compatibility when adding or removing cases in sum types.
An asymmetric field in a struct is considered required for the writer, but optional for the reader.
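Roughly, in Go terms (a sketch of the idea only; the names are invented, this is not Typical's actual generated code):

    // Writer-facing type: the asymmetric field must be set to serialize.
    type ExampleOut struct {
        ID   string
        Note string
    }

    // Reader-facing type: the same field may be absent when deserializing.
    type ExampleIn struct {
        ID   string
        Note *string // nil if the peer didn't send it
    }

As I understand it, that makes promoting a field from optional to required (or demoting it before removal) a safe two-step migration.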
summerlight 18 hours ago [-]
This seems interesting. Still not sure if `required` is a good thing to have (for persistent data like logs, you cannot really guarantee a field's presence without schema versioning baked into the file itself), but for intermediate wire use cases this will help.
cornstalks 20 hours ago [-]
I've never heard of Typical, but the fact that they didn't repeat protobuf's sin regarding varint encoding (or use LEB128 encoding...) makes me very interested! Thank you for sharing; I'm going to have to give it a spin.
zigzag312 19 hours ago [-]
It looks similar to how vint64 lib encodes varints. Total length of varint can be determined via the first byte alone.
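A sketch of the trick (as I understand the scheme; the exact bit layout may differ):

    import "math/bits"

    // Length in bytes of the encoded integer, from the first byte alone.
    // Trailing zero bits count continuation bytes; 0x00 means the 9-byte form.
    func prefixVarintLen(first byte) int {
        return bits.TrailingZeros8(first) + 1
    }

Unlike LEB128, the decoder never has to test a continuation bit per byte.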
At this point I don't feel like I have a clear opinion about whether PrefixVarint is worth it, compared with LEB128.
zigzag312 17 hours ago [-]
Just remember that XML was more established than JSON for a long time.
zigzag312 19 hours ago [-]
This actually looks quite interesting.
tyleo 20 hours ago [-]
We use protocol buffers on a game and we use the back compat stuff all the time.
We include a version number with each release of the game. If we change a proto we add new fields and deprecate old ones and increment the version. We use the version number to run a series of steps on each proto to upgrade old fields to new ones.
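In Go-ish terms, the upgrade loop is something like this (a simplified sketch, not our actual code; the names are invented):

    type SaveGame struct {
        DeprecatedName string
        DisplayName    string
        // ...
    }

    // Each schema version registers a migration step; on load we replay
    // the steps from the stored version up to the current one.
    var upgrades = map[int]func(*SaveGame){
        2: func(s *SaveGame) { s.DisplayName = s.DeprecatedName },
        3: func(s *SaveGame) { /* backfill a field added in v3 */ },
    }

    func upgrade(s *SaveGame, from, current int) {
        for v := from + 1; v <= current; v++ {
            if step, ok := upgrades[v]; ok {
                step(s)
            }
        }
    }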
swiftcoder 19 hours ago [-]
> We use the version number to run a series of steps on each proto to upgrade old fields to new ones
It sounds like you've built your own back-compat functionality on top of protobuf?
The only functionality protobuf is giving you here is optional-by-default (and mandatory version numbers, but most wire formats require that)
tyleo 17 hours ago [-]
Yeah, I’d probably say something more like, “we leverage protobuf built-ins to make a slightly more advanced back-compat system.”
We do rename deprecated fields and often give new fields their names. We rely on the field number to make that work.
jnwatson 20 hours ago [-]
ASN.1 implements message versioning in an extremely precise way. Implementing a linter would be trivial.
maximilianburke 19 hours ago [-]
Flatbuffers satisfies those requirements and doesn’t have varint shenanigans.
leoc 18 hours ago [-]
What about Cap’n Proto https://capnproto.org/ ? (Don't know much about these things myself, but it's a name that usually comes up in these discussions.)
usrnm 2 hours ago [-]
Cap'n'proto is not very nice to work with in C++, and I'd discourage anyone from using it from other programming languages; the implementations are just not there yet. We use both capnp and protobufs at work, and I vastly prefer protobufs, even for C++. I only wish they'd stayed the hell away from abseil, though.
porridgeraisin 39 minutes ago [-]
I always thought people had a positive view on abseil, never used it myself other than when tinkering on random projects. What's the main issue?
mattnewton 20 hours ago [-]
Exactly, I think of protobuffers like I think of Java or Go - at least they weren’t writing it in C++.
Dragging your org away from using poorly specified json is often worth these papercuts IMO.
const_cast 18 hours ago [-]
Protobufs are better but not best. Still, by far, the easiest thing to use and the safest is actual APIs. Like, in your application. Interfaces and stuff.
Obviously if your thing HAS to communicate over the network that's one thing, but a lot of applications don't. The distributed system micro service stuff is a choice.
Guys, distributed systems are hard. The extremely low API visibility combined with fragile network calls and unsafe, poorly specified API versioning means your stuff is going to break, a lot.
Want a version controlled API? Just write in interface in C# or PHP or whatever.
anonymousiam 14 hours ago [-]
The original RPC code, from which Google derived their protobuf stuff, was written in (pre-ANSI) C at Sun Microsystems.
tshaddox 19 hours ago [-]
> Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards-compatible changes.
The article covers this in the section "The Lie of Backwards- and Forwards-Compatibility." My experience working with protocol buffers matches what the author describes in this section.
mgaunard 20 hours ago [-]
In the systems I built I didn't bother with backwards compatibility.
If you make any change, it's a new message type.
For compatibility you can coerce the new message to the old message and dual-publish.
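The shape of it, sketched in Go (types simplified and invented):

    type OrderV1 struct{ Qty int32 }

    type OrderV2 struct {
        Qty   int64
        Notes string
    }

    // Coerce the new message down to the old shape.
    func toV1(m OrderV2) OrderV1 { return OrderV1{Qty: int32(m.Qty)} }

    // Dual-publish both shapes until every consumer understands V2.
    func publish(send func(topic string, v any), m OrderV2) {
        send("orders.v2", m)
        send("orders.v1", toV1(m))
    }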
o11c 20 hours ago [-]
I prefer a little builtin backwards (and forwards!) compatibility (by always enforcing a length for each object, to be zero-padded or truncated as needed), but yes "don't fear adding new types" is an important lesson.
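i.e. something like this (Go sketch):

    // Every object is framed with an explicit length. A reader truncates
    // frames longer than it expects (newer writer) and zero-pads frames
    // shorter than it expects (older writer).
    func readFixed(frame []byte, want int) []byte {
        out := make([]byte, want) // zero-filled by default
        copy(out, frame)          // copies min(len(frame), want) bytes
        return out
    }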
jimbokun 18 hours ago [-]
That only works if you control all the clients.
mgaunard 15 hours ago [-]
Dual-publishing makes it transparent to older clients.
Obviously you need to track when the old clients have been moved over so you can eventually retire the dual-publishing.
You could also do the conversion on the receiving side without a priori information, but that would be extremely slow.
stickfigure 18 hours ago [-]
Backwards compatibility is just not an issue in self-describing structures like JSON, Java serialization, and (dating myself) Hessian. You can add fields and you can remove fields. That's enough to allow seamless migrations.
It's only positional protocols that have this problem.
dangets 18 hours ago [-]
You can remove JSON fields at the cost of breaking your clients at runtime that expect those fields. Of course the same can happen with any deserialization libraries, but protobufs at least make it more explicit - and you may also be more easily able to track down consumers using older versions.
nomel 14 hours ago [-]
For the missing case, whenever I use json, I always start with a sane default struct, then overwrite those with the externally provided values. If a field is missing, it will be handled reasonably.
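In Go this falls out of encoding/json for free, since Unmarshal leaves fields absent from the input untouched:

    type Config struct {
        Timeout int `json:"timeout"`
        Retries int `json:"retries"`
    }

    cfg := Config{Timeout: 30, Retries: 3}             // sane defaults first
    _ = json.Unmarshal([]byte(`{"timeout": 5}`), &cfg) // overwrite with provided values
    // cfg == {Timeout: 5, Retries: 3}: the missing key kept its default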
jimbokun 18 hours ago [-]
At the cost of much larger payloads.
stickfigure 15 hours ago [-]
With gzip encoding... not really.
tgma 18 hours ago [-]
> And I know the article says no one uses the backwards compatible stuff but that’s bizarre to me – setting up N clients and a server that use protocol buffers to communicate and then being able to add fields to the schema and then deploy the servers and clients in any order is way nicer than it is with some other formats that force you to babysit deployment order.
Yet the author has the audacity to call the authors of protobuf (originally Jeff Dean et al) "amateurs."
noitpmeder 20 hours ago [-]
Not that I love it -- but SBE (Simple Binary Encoding) is a _decent_ solution in the realm of backwards/forwards compatibility.
yearolinuxdsktp 18 hours ago [-]
I agree that saying no one uses the backwards-compatible stuff is bizarre. Rolling deploys, where the system has to keep functioning in a mixed deployment, are often worth the backwards-compatibility overhead for many reasons.
In Java, you can accomplish some of this using Jackson JSON serialization of plain objects, where there are several ways to make changes backwards-compatibly (e.g., in recent years, post-deserialization hooks can be used to handle more complex cases), which satisfies (a). For (b), there’s no automatic linter. However, in practice I found that writing tests that deserialize the prior release’s serialized objects gets you pretty far along the line of regression protection for major changes. It was also pretty easy to write an automatic round-trip serialization tester to catch mistakes in the ser/deser chain. Finally, if you stay away from non-schemable ser/deser (such as a method that handles any property name), which can be enforced with a linter, you can output the JSON schema of your objects to committed source. Then any time the generated schema changes, you can look for corresponding test coverage in code reviews.
I know that’s not the same as an automatic linter, but it gets you pretty far in practice. It does not absolve you from cross-release/upgrade testing, because serialization backwards-compatibility does not catch all backwards-compatibility bugs.
Additionally, Jackson has many techniques, such as unwrapping objects, which let you execute more complicated refactoring backwards-compatibly, such as extracting a set of fields into a sub-object.
I like that the same schema can be used to interact with your SPA web clients for your domain objects, giving you nice inspectable JSON. Things serialized to unprivileged clients can be filtered with views, such that sensitive fields are never serialized, for example.
You can generate TypeScript objects from this schema or generate clients for other languages (e.g. with Swagger). Granted it won’t port your custom migration deserialization hooks automatically, so you will either have to stay within a subset of backwards-compatible changes, or add custom code for each client.
You can also serialize your RPC comms to a binary format, such as Smile, which uses back-references for property names, should you need to reduce on-the-wire size.
It’s also nice to be able to define Jackson mix-ins to serialize classes from other libraries’ code or code that you can’t modify.
tomrod 20 hours ago [-]
> Name another serialization declaration format that both (a) defines which changes can be made backwards-compatibly, and (b) has a linter that enforces backwards-compatible changes.
ASCII text (tongue in cheek here)
jcgrillo 18 hours ago [-]
As someone who has written many mapreduce jobs over years-old protobufs, I can confidently report that the backwards compatibility made it possible at all.
xyzzyz 1 hours ago [-]
> Granted, on paper it’s a cool feature. But I’ve never once seen an application that will actually preserve that property.
Chances are, the author literally used software that does it as he wrote these words. This feature is critical to how Chrome Sync works. You wouldn’t want to lose synced state if you use an older browser version on another device that doesn’t recognize the unknown fields and silently drops them. This is so important that at some point Chrome literally forked the protobuf library so that unknown fields are preserved even when using protobuf lite mode.
Just FYI: an obligatory comment from the protobuf v2 designer.
Yeah, protobuf has lots of design mistakes but this article is written by someone who does not understand the problem space. Most of the complexity of serialization comes from implementation compatibility between different timepoints. This significantly limits design space.
thethimble 18 hours ago [-]
Relatedly, most of the author's concerns are solved by wrapping things in a message.
> oneof fields can’t be repeated.
Wrap oneof field in message which can be repeated
> map fields cannot be repeated.
Wrap in message which can contain repeated fields
> map values cannot be other maps.
Wrap map in message which can be a value
Perhaps this is slightly inconvenient/un-ergonomic, but the author is positioning these things as "protos fundamentally can't do this".
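For example, all three workarounds in one proto3 sketch (message names invented):

    message Choice {
      oneof kind {
        string name = 1;
        int32 id = 2;
      }
    }

    message Attrs {
      map<string, string> entries = 1;
    }

    message Example {
      repeated Choice choices = 1;     // a "repeated oneof", via a wrapper
      repeated Attrs attr_sets = 2;    // a "repeated map", via a wrapper
      map<string, Attrs> nested = 3;   // map-of-map, via a wrapper value
    }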
missinglugnut 18 hours ago [-]
>Most of the complexity of serialization comes from implementation compatibility between different timepoints.
The author talks about compatibility a fair bit, specifically the importance of distinguishing a field that wasn't set from one that was intentionally set to a default, and how protobuffs punted on this.
What do you think they don't understand?
summerlight 17 hours ago [-]
If you see some statements like below on the serialization topic:
> Make all fields in a message required. This makes messages product types.
> One possible argument here is that protobuffers will hold onto any information present in a message that they don't understand. In principle this means that it's nondestructive to route a message through an intermediary that doesn't understand this version of its schema. Surely that's a win, isn't it?
> Granted, on paper it's a cool feature. But I've never once seen an application that will actually preserve that property.
Then it is fair to raise eyebrows at the author's expertise. And please don't ask if I'm attached to protobuf; I can roast protocol buffers on their wrong designs for hours. It is just that the author makes a series of wrong claims, presumably due to their bias toward principled type systems and inexperience working on large-scale systems.
instig007 1 hours ago [-]
> If you see some statements like below on the serialization topic:
> Make all fields in a message required. This makes messages product types.
> Then it is fair to raise eyebrows on the author's expertise.
It's fair to raise eyebrows at your expertise, since required fields don't contribute to b/w incompatibility at all: every real-world protocol has a mandatory version number that's tied to a direct parsing strategy with strictly defined algebra, both for shrinking (removing data fragments) and growing (introducing data fragments) payloads. Zero values and optionality in protobuf are one version of that algebra; it's the most inferior one, subject to lossy protocol upgrades, and the easiest one for amateurs to design. Then there's the next level, where the protocol upgrade is defined in terms of bijective functions and other elements of symmetric groups that can tell you whether a newly announced data change can be carried forward (new required field) or dropped (removed field), as long as both the sending and receiving ends are able to derive new compound structures from previously defined pervasive types (the things protobuf calls oneofs and messages, for example).
xyzzyz 45 minutes ago [-]
What you describe using many completely unnecessary mathematical terms is not only not found in “every real-world protocol”, but is in fact virtually absent from the overwhelming majority of actually used protocols, with the notable exception of the kind of protocol that gets a four-digit-numbered RFC document describing it. Believe it or not, in the software industry nobody is defining a new “version number” with a “strictly defined algebra” when they want to add a new field to a communication protocol between two internal backend services.
porridgeraisin 20 minutes ago [-]
Yeah. And for anyone curious about the actual content hidden under the jargon-kludge-FP-nerd parent comment, here's my attempt at deciphering it.
They seem to be saying that you have to publish code that can change a type from schema A to schema B... And back, whenever you make a schema B. This is the "algebra". The "and back" part makes it bijective. You do this at the level of your core primitive types so that it's reused everywhere. This is what they meant by "pervasive" and it ties into the whole symmetric groups thing.
Finally, it seems like when you're making a lossy change, where a bijection isn't possible, they want you to make it incompatible. i.e, if you replaced address with city, then you cannot decode the message in code that expects address.
xmddmx 20 hours ago [-]
I share the author's sentiment. I hate these things.
True story: trying to reverse engineer macOS Photos.app sqlite database format to extract human-readable location data from an image.
I eventually figured it out, but it was:
A base64 encoded
Binary Plist format
with one field containing a ProtoBuffer
which contained another protobuffer
which contained a unicode string
which contained improperly encoded data (for example, U+2013 EN DASH was encoded as \342\200\223)
This could have been a simple JSON string.
pjjpo 4 hours ago [-]
The JSON version would have also had the wrong encoding - all formats are just framing for data fed in from code written by a human. In the Mac's case, the en dash will always be an issue because that's just what macOS decided on intentionally.
seanw444 19 hours ago [-]
That's horrendous. For some reason I imagine Apple's software to be much cleaner, but I guess that's just the marketing getting to my head. Under the hood it's still the same spaghetti.
tgma 18 hours ago [-]
> This could have been a simple JSON string.
There's nothing "simple" about parsing JSON as a serialization format.
wvenable 17 hours ago [-]
Except that most often you can just look at it and figure it out.
tgma 13 hours ago [-]
Sure you can look at it[1], but you're not expected to look at Apple Photos database. The computer is.
Write a correct JSON parser, compare with protobuf on various metrics, and then we can talk.
[1]: although to be fair, I am older than the kids whose first programming language was JavaScript, so I do not think of the JSON object format (property names in quotes, integers that need to be wrapped as strings to be safe, no comma allowed after the last entry - to be fair, that last one is a problem when writing, not reading, JSON) as the most natural thing
wvenable 12 hours ago [-]
I'm also "older" but I don't think that means anything.
> Sure you can look at it[1], but you're not expected to look at Apple Photos database.
How else are you supposed to figure it out? If you're older then you know that you can't rely on the existence or correctness of documentation. Being able to look at JSON on the wire and understand it as a human is a huge advantage. JSON being pretty simple in structure is an advantage. I don't see a problem with quoting property names! As for large integers and datetimes, yes, that could be much better designed. But that's true of every protocol and file format that has had any success.
JSON parsers and writers are common and plentiful and are far less crazy than any complete XML parser/writer library.
tgma 9 hours ago [-]
> Being able to look at JSON and understand it as a human on the wire is huge advantage
I don’t think this is a given at all. Depends on the context. I think it’s often overvalued. A lot of times the performance matters more. If human readability was the only thing that mattered, I would still not count JSON as the winner. You will have to pipe it to jq, realistically. You’d do the same for any other serialization format too. Inside Google where proto is prevalent, that is just as easy if not more convenient.
The point is that how hard or easy it is for an app’s end user to decipher its file database is not a design goal for the serialization library chosen by the Apple Photos developers here. The constraints and requirements are all on different axes.
IshKebab 15 hours ago [-]
Sure but unless you want to embed an LLM in every JSON library, computers can't do that.
fluoridation 19 hours ago [-]
I mean... you can nest-encode stuff in any serial format. You're not describing a problem either intrinsic or unique to Protobuf, you're just seeing the development org chart manifested into a data structure.
xmddmx 19 hours ago [-]
Good points. This wasn't entirely a protobuf-specific issue, so much as a (likely hierarchical and historical set of) bad decisions to use it at all.
Using protobuffers for a few KB of metadata, when the photo library otherwise takes multiple GB of data, is just penny-wise, pound-foolish.
Of course, even my preference for a simple JSON string would be problematic: data in a database really should be stored properly normalized into separate tables and fields.
My guess is that protobuffers did play a role here in causing this poor design. I imagine this scenario:
- Photos.app wants to look up location data
- the server returns structured data in a ProtoBuffer
- there's no easy or reasonable way to map a protobuf to database fields (one point of TFA)
- Surrender! Just store the binary blob in SQLite and let the next poor sod deal with it
tgma 13 hours ago [-]
You have to take into account the fact that the iPhoto app has had many iterations. The binary plist stuff is very likely the native NSArchive "object archiving (serialization)" that is done by Obj-C libraries. They probably started using protobuf at some point later, after iCloud. I suspect the unicode crap you are facing may even predate the Cocoaization of the app (they probably used the Carbon API).
So it would make it a set of historical decisions, but I am not convinced they are necessarily bad decisions given the constraints. Each layer is likely responsible for handling edge cases in the application that you and I are not privy to.
There are a lot of great comments on these old threads, and I don't think there's a lot of new science in this field since 2018, so the old threads might be a better read than today's.
Before the first line even ends, you get "They’re clearly written by amateurs".
This is a rage bait, not worth the read.
btilly 18 hours ago [-]
The reasons for that line get at a fundamental tension. As David Wheeler famously said, "All problems in computer science can be solved by another level of indirection, except for the problem of too many indirections."
Over time we accumulate cleverer and cleverer abstractions. And any abstraction that we've internalized, we stop seeing. It just becomes how we want to do things, and we have no sense of what cost we are imposing on others. Because all abstractions leak. And all abstractions pose a barrier for the maintenance programmer.
All of which leads to the problem that Brian Kernighan warned about with, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?" Except that the person who will have to debug it is probably a maintenance programmer who doesn't know your abstractions.
One of the key pieces of wisdom that shows through Google's approaches is that our industry's tendency towards abstraction is toxic. As much as any particular abstraction is powerful, allowing too many becomes its own problem. This is why, for example, Go was designed to strongly discourage over-abstraction.
Protobufs do exactly what it says on the tin. As long as you are using them in the straightforward way which they are intended for, they work great. All of his complaints boil down to, "I tried to do some meta-manipulation to generate new abstractions, and the design said I couldn't."
That isn't the result of them being written by amateurs. That's the result of them being written to incorporate a piece of engineering wisdom that most programmers think they are smart enough to ignore. (My past self was definitely one of those programmers.)
Can the technology be abused? Do people do stupid things with them? Are there things that you might want to do that you can't? Absolutely. But if you KISS, they work great. And the more you keep it simple, the better they work. I consider that an incentive towards creating better engineered designs.
jilles 20 hours ago [-]
The best way to get your point across is by starting with ad-hominem attacks to assert your superior intelligence.
instig007 30 minutes ago [-]
Yeah, let's pretend that type algebra doesn't exist, and even if it does exist, that it's not useful and definitely isn't practical in data protocols. Let's believe that the authors of protobuf considered everything, and since they aren't amateurs (by virtue of having worked on protobuf at Google, presumably), every elaborated opinion that paints them as amateurs at applying type algebra in data protocol designs is a personal ad-hominem attack.
tshaddox 19 hours ago [-]
IMO it's a pretty reasonable claim about experience level, not intelligence, and isn't at all an ad hominem attack because it's referring directly to the fundamental design choices of protocol buffers and thus is not at all a fallacy of irrelevance.
perching_aix 19 hours ago [-]
Is this in reference to the blogpost, the comment above, or your own comment? Cause it honestly works for all of them.
notmyjob 20 hours ago [-]
I disagree, unless you are in the majority.
BugsJustFindMe 20 hours ago [-]
If only the article offered both detailed analyses of the problems and also solutions. Wait, it does! You should try reading it.
pphysch 19 hours ago [-]
Where's the download link for the solution? I must have missed it.
kiitos 15 hours ago [-]
it does not
jeffbee 20 hours ago [-]
Yep, the article opens with a Hall of Fame-grade compound fallacy: a strawman refutation of a hypothetical ad hominem that nobody has argued.
You can kinda see how this author got bounced out of several major tech firms in one year or less, each, according to their LinkedIn.
omnicognate 20 hours ago [-]
It's a terrible attitude and I agree that sort of thing shouldn't be (and generally isn't) tolerated for long in a professional environment.
That said the article is full of technical detail and voices several serious shortcomings of protobuf that I've encountered myself, along with suggestions as to how it could be done better. It's a shame it comes packaged with unwarranted personal attacks.
IncreasePosts 20 hours ago [-]
It's written by amateurs, but solves problems that only Google (one of the biggest/most advanced tech companies in the world) has.
bithive123 17 hours ago [-]
I don't know if the author is right or wrong; I've never dealt with protobufs professionally. But I recently implemented them for a hobby project and it was kind of a game-changer.
At some stage with every ESP or Arduino project, I want to send and receive data, i.e. telemetry and control messages. A lot of people use ad-hoc protocols or HTTP/JSON, but I decided to try the nanopb library. I ended up with a relatively neat solution that just uses UDP packets. For my purposes a single packet has plenty of space, and I can easily extend this approach in the future. I know I'm not the first person to do this but I'll probably keep using protobufs until something better comes along, because the ecosystem exists and I can focus on the stuff I consider to be fun.
tliltocatl 12 minutes ago [-]
Embedded/constrained UDP is where protobuf wire format (but not google's libraries) rocks: IoT over cellular and such, where you need to fit everything into a single datagram (number of roundtrips is what determines power consumption). As to those who say "UDP is unreliable" - what you do is you implement ARQ on the application level. Just like TCP does it, except you don't have to waste roundtrips on SYN-SYN-ACK handshake nor waste bytes on sending data that are no longer relevant.
Varints for the win. Send time series as columns of varint arrays - delta or RLL compression becomes quite straightforward. And as a bonus I can just implement new fields in the device and deploy right away - the server-side support can wait until we actually need it.
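The column encoding is only a few lines; sketched here in Go for brevity (the device side would be C, but the idea is identical):

    import "encoding/binary"

    // Delta-encode a timestamp column as zigzag varints.
    func encodeColumn(samples []int64) []byte {
        out := make([]byte, 0, len(samples)*binary.MaxVarintLen64)
        var prev int64
        var tmp [binary.MaxVarintLen64]byte
        for _, s := range samples {
            n := binary.PutVarint(tmp[:], s-prev) // zigzag handles negative deltas
            out = append(out, tmp[:n]...)
            prev = s
        }
        return out
    }

Monotonic-ish timestamps collapse to one or two bytes each.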
No, flatbuffers/cap'n'proto are unacceptably big because of their fixed layout. No, CBOR is an absolute no-go - why on earth would you waste precious bytes on schema every time? No, general-purpose compression like gzip wouldn't do much at such a small size; it would probably make things worse. Yes, ASN.1 is supposed to be the right solution - but there is no full-featured implementation that doesn't cost $$$$ and the whole thing is just too damn bloated.
Kinda fun that it sucks for what it is supposed to do, but actually shines elsewhere.
_zoltan_ 17 hours ago [-]
And since it's UDP, if it's lost, it's lost. And since it's not standard HTTP/JSON, nobody will have a clue in a year and nobody will be able to decode it.
To learn and play with it, it's fine; otherwise, why complicate life?
Farmadupe 15 hours ago [-]
Using protobuf is practical enough in embedded. This person isn't the first and won't be the last. Way faster than JSON, way slower than C structs.
However protobuf is ridiculously interchangeable and there are serializers for every language. So you can get your interfaces fleshed out early in a project without having to worry that someone will have a hard time ingesting it later on.
Yes it's a pain how an empty array is a valid instance of every message type, but at least the fields that you remember to send are strongly typed. And field optionality gives you a fighting chance that your software can still speak to the unit that hasn't been updated in the field for the last five years.
On the embedded side, nanopb has worked well for us. I'm not missing having to hand maintain ad-hoc command parsers on the embedded side, nor working around quirks and bugs of those parsers on the desktop side
toolslive 26 minutes ago [-]
The author is right, but it could have been worse too. At least they were not using JSON for serialization.
iamdelirium 20 hours ago [-]
Yeah, oneof fields can't be repeated directly, but you can just wrap them in a message. It's not as pretty, but I've never had any issues with this.
The fact that the author is arguing for making all fields required means they don't understand the reasoning for why all fields are optional: required fields break systems when there are proto mismatches (there are postmortems outlining this).
mountainriver 20 hours ago [-]
> Protobuffers correspond to the data you want to send over the wire, which is often related but not identical to the actual data the application would like to work with
This sums up a lot of the issues I’ve seen with protobuf as well. It’s not an expressive enough language to be the core data model, yet people use it that way.
In general, if you don’t have extreme network needs, protobuf seems to cause more harm than good. I’ve watched Go teams spend months implementing proto-based systems with little to no gain over plain REST.
recursive 19 hours ago [-]
Protobuf is independent from REST. You can have either one. Or both. Or neither. One has nothing to do with the other.
nicce 20 hours ago [-]
On the other hand, ASN.1 is very expressive and can cover pretty much anything, but Protobuf was created because people thought ASN.1 was too complex. I guess we can't have both.
jandrese 18 hours ago [-]
"Those who cannot remember the past are condemned to repeat it" -- George Santayana
theamk 18 hours ago [-]
Oh, I remember ASN.1 very well, and I would not want to repeat it again.
Protobufs have lots of problems, but at least they are better than ASN.1!
ericpauley 19 hours ago [-]
I lost the plot here when the author argued that repeated fields should be implemented as in the pure lambda calculus...
Most of the other issues in the article can be solved by wrapping things in more messages. Not great, not terrible.
As with the tightly-coupled issues with Go, I'll keep waiting for a better approach any decade now. In the meantime, both tools (for their glaring imperfections) work well enough, solve real business use cases, and have a massive ecosystem moat that makes them easy to work with.
BugsJustFindMe 20 hours ago [-]
I went into this article expecting to agree with part of it. I came away agreeing with all of it. And I want to point out that Go also shares some of these catastrophic data decisions (automatic struct zero values that silently do the wrong thing by default).
sethammons 3 hours ago [-]
We got bit by a default value in a DMS task where the target column didn't exist, so the data wasn't replicated and the default value meant "this work needs to be done."
This is neither PB nor Go. A sensible default of an invalid state would have caught this. So would an error and crash. Either would have been better than corrupt data.
vander_elst 20 hours ago [-]
Always initializing with a default and having no algebraic types is an always-loaded foot-gun. I wonder if the people behind Go took inspiration from this.
wrsh07 20 hours ago [-]
The simplest way to understand Go is that it is a language that integrates some of Google's best C++ features (their lightweight threads and other multithreading primitives are the highlights).
Beyond that it is a very simple language. But yes, 100%, for better and worse, it is deeply inspired by Google's codebase and needs
dano 20 hours ago [-]
It is a 7-year-old article that doesn't specify alternatives to an "already solved problem."
So HN, what are the best alternatives available today and why?
gsliepen 20 hours ago [-]
Something like MessagePack or CBOR, and if you want versioning, just have a version field at the start. You don't require a schema to pack/unpack, which I personally think is a good thing.
fmbb 20 hours ago [-]
> You don't require a schema to pack/unpack
Then it hardly solves the same problem Protobuf solves.
mgaunard 20 hours ago [-]
Arrow is also becoming a good contender, with the extra benefit it is better optimized for data batches.
thinkharderdev 20 hours ago [-]
Support across languages etc. is much less mature, but I find the Thrift serialization format to be much nicer than protobuf. The codegen somehow manages to produce types that look like types I would actually write, compared to the monstrosities that protoc generates.
mdhb 4 hours ago [-]
CBOR is probably the best and most standards compliant thing out there that I’m aware of.
It’s the new default in a lot of IoT specs, it’s the backbone for deep-space communication networks, etc.
Maintains interoperability with JSON. Is very much battle tested in very challenging environments.
Depends. ASN.1 is a beast and another industry standard, but unfortunately the best tooling is closed source.
barrkel 18 hours ago [-]
I'm afraid that this is a case of someone imagining that there are Platonic ideal concepts that don't evolve over time, that programs are perfectible. But people are not immortal and everything is always changing.
I almost burst out in laughter when the article argued that you should reuse types in preference to inlining definitions. If you've ever felt the pain of needing to split something up, you would not be so eager to reuse. In a codebase with a single process, it's pretty trivial to refactor to split things apart; you can make one CL and be done. In a system with persistence and distribution, it's a lot more awkward.
That whole meaning of data vs representation thing. There's fundamentally a truth in the correspondence. As a program evolves, its understanding of its domain increases, and the fidelity of its internal representations increase too, by becoming more specific, more differentiated, more nuanced. But the old data doesn't go away. You don't get to fill in detail for data that was gathered in older times. Sometimes, the referents don't even exist any more. Everything is optional; what was one field may become two fields in the future, with split responsibilities, increased fidelity to the domain.
allanrbo 20 hours ago [-]
Sometimes you are integrating with system that already use proto though. I recently wrote a tiny, dependency-free, practical protobuf (proto3) encoder/decoder. For those situations where you need just a little bit of protobuf in your project, and don't want to bother with the whole proto ecosystem of codegen and deps: https://github.com/allanrbo/pb.py
bbkane 20 hours ago [-]
The author makes good arguments; I wish they'd offered some alternatives.
Despite issues, protobufs solve real problems and (imo) bring more value than cost to a project. In particular, I'd much rather work with protobufs and their generated ser/de than untyped json
spectraldrift 11 hours ago [-]
I'm not sure why this post gets boosted every few years. Unfortunately (as many have pointed out), the author demonstrates here that they do not understand distributed system design, nor how to use protocol buffers. I have found them to be one of the most useful tools in modern software development when used correctly. Not only are they much faster than JSON, they prevent the inevitable redefinition of nearly identical code across a large number of repos (which is what I've seen in 95% of corporate codebases that eschew tooling such as this). Sure, there are alternatives to protocol buffers, but I have not seen them gain widespread adoption yet.
briandw 20 hours ago [-]
The crappy system that everyone ends up using is better than the perfectly designed system that's only seen in academic papers. Javascript is the poster-child of Worse is Better. Protobuffs are a PITA, but they are widely used and getting new adoption in industry.
https://en.wikipedia.org/wiki/Worse_is_better
BoorishBears 19 hours ago [-]
I worked at a company that had their own homegrown Protobuf alternative which would add friction to life constantly. Especially if you had the audacity to build anything that wasn't meant to live in the company monorepo (your Python script is now a Docker image that takes 30 minutes to build).
One day I got annoyed enough to dig for the original proposal and like 99.9% of initiatives like this, it was predicated on:
- building a list of existing solutions
- building an overly exhaustive list, of every facet of the problem to be solved
- declare that no existing solution hits every point on your inflated list
- "we must build it ourselves."
It's such a tired playbook, but it works so often unfortunately.
The person who architects and sells it gets points for "impact", then eventually moves onto the next company.
In the meantime the problem being solved evolves and grows (as products and businesses tend to), the homegrown solution no longer solves anything perfectly, and everyone is still stuck dragging along said solution, seemingly forever.
-
Usually eventually someone will get tired enough of the homegrown solution and rightfully question why they're dragging it along, and if you're lucky it gets replaced with something sane.
If you're unlucky that person also uses it as justification to build a new in-house solution (we're built the old one after all), and you replay the loop.
In the case of serialization though, that's not always doable. This company was storing petabytes (if not exabytes) of data in the format for example.
ryukoposting 17 hours ago [-]
Protobuf's original sin was failing to distinguish zero/false from undefined/unset/nil. Confusion around the semantics of a zero value is the root of most proto-related bugs I've come across. At the same time, that very characteristic of protobuf makes its on-wire form really efficient in a lot of cases.
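The whole class of bug, in Go terms (field names invented):

    type Reading struct {
        Celsius int32 // 0: a genuine zero-degree reading, or never reported?
    }

    // Distinguishing the two takes explicit presence, e.g. a pointer,
    // which is roughly what proto3's later `optional` keyword restores:
    type ReadingWithPresence struct {
        Celsius *int32 // nil means unset
    }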
Nearly every other complaint is solved by wrapping things in messages (sorry, product types). I don't get the enum limitation on map keys, though; that complaint is fair.
Protobuf eliminates truckloads of stupid serialization/deserialization code that, in my embedded world, almost always has to be hand-written otherwise. If there was a tool that automatically spat out matching C, Kotlin, and Swift parsers from CDDL, I'd certainly give it a shot.
mdhb 3 hours ago [-]
Agreed the CDDL to codegen pipeline / tooling is the biggest thing holding back CBOR at the moment.
Some solutions do exist, like a C one [1], which you could maybe run through WASI/WASM compilation to get “somewhat” idiomatic bindings in a bunch of languages.
Here’s another for Rust [2] but I’m sure I’ve seen a bunch of others around. I think what’s missing is a unified protoc style binary with language specific plugins.
I'm more than a little curious what event caused such a strong objection to protobuffers. :D
I do tend to agree that they are bad. I also agree that people put a little too much credence in "came from Google." I can't bring myself to have this much anger towards it. Had to have been something that sparked this.
rimunroe 20 hours ago [-]
I'm just a frontend developer so most of my exposure is just as an API consumer and not someone working on the service side of things. That said:
A few years ago I moved to a large company where protobufs were the standard way APIs were defined. When I first started working with the generated TypeScript code, I was confused as to why almost all fields on generated object types were marked as optional. I assumed it was due to the way people were choosing to define the API at first, but then I learned this was an intentional design choice on the part of protobufs.
We ended up having to write our own code to parse the "helpfully" generated TypeScript client's responses. This meant we also had to handle rejecting nonsensical responses where an actually-required field wasn't present, which is exactly the sort of thing I'd want generated clients to do. I would expect having to do some transformation myself, but not to that degree. The generated client was essentially useless to us, and the protocol's looseness offered no discernible benefit over any other API format I've used.
I imagine some of my other complaints could be solved with better codegen tools, but I think fundamentally the looseness of the type system is a fatal issue for me.
vl 12 hours ago [-]
It used to be that there was no official TypeScript protobuf generator from Google and third-party generators sucked. Using protobufs from web browser or in nodejs was painful.
A couple of years ago Connect released a very good generator for TypeScript; we use it in production and it's great:
Yeah, as soon as you have a moderately complex type the generated code is basically useless. Honestly, ~80% of my gripes about protocol buffers could be alleviated by just allowing me to mark a message field as required.
cherrycherry98 15 hours ago [-]
Proto2 let you do this, and the "required" keyword was removed because of the problems it introduces when evolving the schema in a system with many users that you don't necessarily control. Say you want to add a new required field: if your system receives messages from clients, some clients may be sending you old data without the field, and now the parse step fails because it detects a missing field. If you ever want to remove a required field, you have the opposite problem: there will be components that have to have those fields present just to satisfy the parser, even if they're only interested in some other fields.
Philosophically, checking that a field is required or not is data validation and doesn't have anything to do with serialization. You can't specify that an integer falls into a certain valid range or that a string has a valid number of characters or is the correct format (e.g. if it's supposed to be an email or a phone number). The application code needs to do that kind of validation anyway. If something really is required then that should be the application's responsibility to deal with it appropriately if it's missing.
> Philosophically, checking that a field is required or not is data validation and doesn't have anything to do with serialization.
My issue is that people seem to like to use protobuf to describe the shape of APIs rather than just as something to handle serialization. I think it's very bad at describing API shapes.
taeric 14 hours ago [-]
I think it is somewhat of a natural failure of DRY taken to the extreme? People seem to want to get it so that they describe the API in a way that is then generated for clients and implementations.
It is amusing, in many ways. This is specifically part of what WSDL aspired to, but people were betrayed by the big companies not having a common ground for what shapes they would support in a description.
iamdelirium 20 hours ago [-]
You think you do but you really don't.
What happens if you mark a field as required and then need to delete it in the future? You can't, because if someone stored that proto somewhere and no longer sees the field, you just broke their code.
thinkharderdev 19 hours ago [-]
If you need to deserialize an old version then it's not a problem. The unknown field is just ignored during deserialization. The problem is adding a required field since some clients might be sending the old value during the rollout.
But in some situations you can be pretty confident that a field will be required always. And if you turn out to be wrong then it's not a huge deal. You add the new field as optional first (with all upgraded clients setting the value) and then once that is rolled out you make it required.
And if a field is in fact semantically required (like the API cannot process a request without the data in a field) then making it optional at the interface level doesn't really solve anything. The message will get deserialized but if the field is not set it's just an immediate error which doesn't seem much worse to me than a deserialization error.
iamdelirium 16 hours ago [-]
1. Then it's not really required if it can be ignored.
2. This is the problem: software (and protos) can live for a long time. They might be used by other clients elsewhere that you don't control. What you thought was required might not be 10 years down the line. What you "think" is not a huge deal then becomes a huge deal and can cause downtime.
3. You're mixing business logic and over-the-wire field requirements. If a message is required for an interface to function, you should be checking it anyway and returning the correct error. How does that change with proto supporting required?
ozgrakkurt 19 hours ago [-]
Maybe you don’t delete it then?
taeric 19 hours ago [-]
I mean, this is essentially the same lesson that database admins learn with nullable fields. Often it isn't the "deleting one is hard" so much as "adding one can be costly."
It isn't that you can't do it. But the code side of the equation is the cheap side.
taeric 19 hours ago [-]
To add to the sibling: I've seen this with Java enums a lot. People will add an enum and start consuming values through it as fast as they can. This works well as long as the value is not retrieved from data. As soon as it is, you lose the ability to add new possible values in a rolling-release way. It can be very frustrating to know that we can't push a new producer of a value before we first change all consumers, even if all consumers already use switch statements with default clauses to exhaustively cover behavior.
thinkharderdev 19 hours ago [-]
But this is something you should be able to handle on a case-by-case basis. If you have a type which is stored durably as protobuf then adding required fields is much harder. But if you are just dealing with transient rpc messages then it can be done relatively easily in a two step process. First you add the field as optional and then once all producers are upgraded (and setting the new field), make it required. It's annoying for sure but still seems better than having everything optional always and needing to deal with that in application code everywhere.
taeric 17 hours ago [-]
Largely true. If you are at Google scale, odds are you have mixed fleets deployed, such that it is a bit of an involved process. But it is well defined and doable. I think a lot of us would rather not do a dance we don't have to do.
jandrese 18 hours ago [-]
As a developer I always see "came from Google" as a yellow flag.
Too often I find something mildly interesting, but then realize that in order for me to try to use it I need to set up a personal mirror of half of Google's tech stack to even get it to start.
thinkharderdev 20 hours ago [-]
I feel like I could have written an article like this at various points. Probably while spending two hours trying to figure out a way to represent some protobuf type in a sane way internally.
mike_hearn 20 hours ago [-]
He says that in the article; he had to work on a "compiler" project that was much harder than it should have been because of protobuf's design choices.
taeric 19 hours ago [-]
Yeah, I saw that. I took that as something that happened in the past, though. Certainly colored a lot of the thinking, but feels like something more immediate had to have happened. :D
mrits 20 hours ago [-]
I've used them almost daily for 15 years. They are way down the list of things I'd want improved. It has been interesting to see the protobuffers killers die out every few years though
rednafi 17 hours ago [-]
I like the problems that Protobuf solves, just not the way it solves them.
Protobuf as a language feels clunky. The “type before identifier” syntax looks ancient and Java-esque.
The tools are clunky too. protoc is full of gotchas, and for something as simple as validation, you need to add a zillion plugins and memorize their invocation flags.
From tooling to workflow to generated code, it’s full of Google-isms and can be awkward to use at times.
That said, the serialization format is solid, and the backward-compatibility paradigms are genuinely useful. Buf adds some niceties to the tooling and makes it more tolerable. There’s nothing else that solves all the problems Protobuf solves.
zigzag312 19 hours ago [-]
Among other things, I don't like that they won't support nullable getters/setters:
I too was using PBs a lot, as they are quite popular in the Go world. But I came to the conclusion that they and gRPC are more trouble than they are worth. I switched to JSON, HTTP "REST", and websockets if I need streaming, and am as happy as I could be.
I get the API interoperability between various languages when one wants to build a client with a strict schema, but in reality this is more theory than real life.
In essence, anyone who subscribes to YAGNI understands that PB and gRPC are a big no-no.
PS: if you need binary format, just use cbor or msgpack. Otherwise the beauty of json is that it human-readable and easily parseable, so even if you lack access to the original schema, you can still EASILY process the data and UNDERSTAND it as well.
ants_everywhere 20 hours ago [-]
> Maintain a separate type that describes the data you actually want, and ensure that the two evolve simultaneously.
I don't actually want to do this, because then you have N + 1 implementations of each data type, where N = number of programming languages touching the data, and + 1 for the proto implementation.
What I personally want to do is use a language-agnostic IDL to describe the types that my programs use. Within Google you can even do things like just store them in the database.
The practical alternative is to use JSON everywhere, possibly with some additional tooling to generate code from a JSON schema. JSON is IMO not as nice to work with. The fact that it's also slower probably doesn't matter to most codebases.
thinkharderdev 19 hours ago [-]
> I don't actually want to do this, because then you have N + 1 implementations of each data type, where N = number of programming languages touching the data, and + 1 for the proto implementation.
I think this is exactly what you end up with using protobuf. You have an IDL that describes the interface types but then protoc generates language-specific types that are horrible so you end up converting the generated types to some internal type that is easier to use.
Ideally if you have an IDL that is more expressive then the code generator can create more "natural" data structures in the target language. I haven't used it a ton, but when I have used thrift the generated code has been 100x better than what protoc generates. I've been able to actually model my domain in the thrift IDL and end up with types that look like what I would have written by hand so I don't need to create a parallel set of types as a separate domain model.
danans 18 hours ago [-]
> The practical alternative is to use JSON everywhere, possibly with some additional tooling to generate code from a JSON schema.
Protobuf has a bidirectional JSON mapping that works reasonably well for a lot of use cases.
I have used it to skip the protobuf wire format altogether and just use protobuf for the IDL and multi-language binding, both of which IMO are far better than JSON-Schema.
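In Go, for instance, that's the protojson package (sketch; `msg` is any generated proto message):

    import "google.golang.org/protobuf/encoding/protojson"

    data, err := protojson.Marshal(msg) // proto message -> canonical JSON
    if err != nil {
        // handle error
    }
    if err := protojson.Unmarshal(data, msg); err != nil { // JSON -> proto
        // handle error
    }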
JSON-Schema is definitely more powerful though, letting you do things like field-level constraints. I'd love to see something that paired the best of both.
bloppe 19 hours ago [-]
Protobuf's main design goal is to make space-optimized binary tag-length-value encoding easy. The mentality is kinda like "who cares what the API looks like as long as it can support anything you want to do with TLV encoding and has great performance." Things like oneofs and maps are best understood as slightly different ways of creating TLV fields in a message, rather than pieces of a comprehensive modern type system. The provided types are simply the necessary and sufficient elements to model any fuller type system using TLV.
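Concretely, every field is keyed by a varint tag of (field_number << 3) | wire_type. A Go sketch of encoding field 1 (wire type 0, a varint) shows how little machinery is involved:

    import "encoding/binary"

    // Tag for field 1, wire type 0: (1 << 3) | 0 == 0x08.
    func encodeField1(v uint64) []byte {
        out := []byte{0x08}
        var tmp [binary.MaxVarintLen64]byte
        n := binary.PutUvarint(tmp[:], v)
        return append(out, tmp[:n]...)
    }

    // encodeField1(150) == []byte{0x08, 0x96, 0x01}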
guelo 19 hours ago [-]
Yes but the point is that nobody outside of super big tech has a need to optimize a few bytes here and there at the expense of atrocious devx.
BinaryIgor 18 hours ago [-]
Avro (and others) has its own set of problems as well.
For messaging, JSON, used in the same way and with the same versioning practices as we have established for evolving schemas in REST APIs, has never failed me.
It seems to me that all these rigid type systems for remote procedure calls introduce more problems than they really solve and bring unnecessary complexity.
Sure, there are tradeoffs with flexible JSONs - but simplicity of it beats the potential advantages we get from systems like Avro or ProtoBuf.
wrsh07 20 hours ago [-]
> This insane list of restrictions is the result of unprincipled design choices and bolting on features after the fact
I'm not very upset that protobuf evolved to be slightly more ergonomic. Bolting on features after you build the prototype is how you improve things.
Unfortunately, they really did design themselves into a corner (not unlike Python 2). Again, I can't be too upset. They didn't have the benefit of hindsight or the other high-performance libraries that we have today.
pshirshov 20 hours ago [-]
I've created several IDL compilers addressing all issues of protobuf and others.
This particular one provides strongest backward compatibility guarantees with automatic conversion derivation where possible: https://github.com/7mind/baboon
Protobuf is dated, it's not that hard to make better things.
guzik 19 hours ago [-]
We thought for a long time about using protobufs in our product [1] and in the end we went with JSON-RPC 2.0 over BLE, base64 for bigger chunks. Yeah, you still need to pass sample format and decode manually. The overhead is fine tho, debugging is way easier (also pulling in all of protobuf just wasn't fun).
[1] aidlab.com/aidlab-2
beders 19 hours ago [-]
If you opt for non-human-readable wire formats, it had better be for very important reasons, backed by measured performance and operational costs.
If you need to exchange data with other systems that you don't control, a simple format like JSON is vastly superior.
You are restricted to handing over tree-like structures. That is a good thing as your consumers will have no problems reading tree-like structures.
It also makes it very simple for each consumer/producer to coerce this data into structs or objects as they please and that make sense to their usage of the data.
You have to validate the data anyhow (you do validate data received from the outside world, don't you?), so throwing in coercion is honestly the smallest of your problems.
You only need to touch your data coercion if someone decides to send you data in a different shape.
For tree-like structures it is simple to add new things and stay backwards compatible.
Adding a spec on top of your data shapes that can potentially help consumers generate client code is a cherry on top of it and an orthogonal concern.
Making as few assumptions as possible about how your consumers deal with your data is a Good Thing(tm) that enabled such useful (still?) things as the WWW.
18 hours ago [-]
MountainTheme12 20 hours ago [-]
I agree with the author that protobuf is bad and I ran into many of the issues mentioned. It's pretty much mandatory to add version fields to do backwards compatibility properly.
Recently, however, I had the displeasure of working with FlatBuffers. It's worse.
giveita 1 hours ago [-]
Out of interest why not make the version part of say the URL?
dinobones 19 hours ago [-]
lols, the weird protobuf initialization semantics have caused so many OMGs. Even on my team it led to various hard-to-debug bugs.
It's a lesson most people learn the hard way after using PBs for a few months.
sylware 3 hours ago [-]
I don't recall properly (because I did shelve my mapping projects for the moment), but isn't OpenStreetMap's core data distribution format based on protobuffers?
shdh 19 hours ago [-]
I just wish protobuf had proper delta compression out of the box
mkl95 20 hours ago [-]
If you mostly write software with Go you'll likely enjoy working with protocol buffers. If you use the Python or Ruby wrappers you'd wish you had picked another tech.
jonathrg 20 hours ago [-]
The generated types in Go are horrible to work with. You can't store instances of them anywhere, or pass them by value, because they contain a bunch of state and pointers (including a [0]sync.Mutex just to explicitly prohibit copying). So you have to pass around pointers at all times, making ownership and lifetime much more complicated than it needs to be. A simple message definition generates a struct like this:
    type Example struct {
        state protoimpl.MessageState
        xxx_hidden_Value1 int32
        xxx_hidden_Value2 float64
        xxx_hidden_unknownFields protoimpl.UnknownFields
        sizeCache protoimpl.SizeCache
    }
For [place of work] where we use protobuf I ended up making a plugin to generate structs that don't do any of the nonsense (essentially automating Option 1 in the article):
    type ExamplePOD struct {
        Value1 int32
        Value2 float64
    }
with converters between the two versions.
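A minimal sketch of what those converters can look like, assuming the generated getters/setters (GetValue1, SetValue1, ...) of the newer opaque API; with the classic open-struct API you'd assign the exported fields directly, and `pb` is just a placeholder for the generated package:

    // ToPOD copies the generated message into the plain value type.
    func ToPOD(m *pb.Example) ExamplePOD {
        return ExamplePOD{
            Value1: m.GetValue1(),
            Value2: m.GetValue2(),
        }
    }

    // ToProto builds a fresh generated message from the plain struct.
    func (p ExamplePOD) ToProto() *pb.Example {
        m := &pb.Example{}
        m.SetValue1(p.Value1)
        m.SetValue2(p.Value2)
        return m
    }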
fsmv 19 hours ago [-]
I actually really strongly prefer 0 being identical to unset. If you have an unset state then you have to check if the field is unset every time you use it. Using 0 allows you to make all of your code "just work" when you pass 0 to it so you don't need to check at all.
It's like how in go most structs don't have a constructor, they just use the 0 value.
Also oneof is made that way so that it is backwards compatible to add a new field and make it a oneof with an existing field. Not everything needs to be pure functional programming.
almost the entire purpose of anything like protocol buffers is to provide a safe mechanism for backwards-compatible forward changes -- "no one uses that stuff"?? what a weird and broken take
nice_byte 18 hours ago [-]
> Make all fields in a message required.
funnily enough, this line alone reveals the author to be an amateur in the problem space they are writing so confidently about.
giveita 1 hours ago [-]
I sniffed this. I am not familiar with protobufs, but I'm aware they are for efficiency on the wire. The fact that he only really talks about type systems and not the before vs. after of the effect on the wire was disappointing, but also made me doubt whether this was a good piece.
nice_byte 18 hours ago [-]
the complaints about the Protobuf type system being not flexible enough are also really funny to read.
fundamentally, the author refuses to contend with the fact that the context in which Protobufs are used -- millions of messages strewn around random databases and files, read and written by software using different versions of libraries -- is NOT the same scenario where you get to design your types once and then EVERYTHING that ever touches those types is forced through a type checker.
again, this betrays a certain degree of amateurishness on the author's part.
> * Make all fields in a message required. This makes messages product types.
Meanwhile in the capnproto FAQ:
>How do I make a field “required”, like in Protocol Buffers?
>You don’t. You may find this surprising, but the “required” keyword in Protocol Buffers turned out to be a horrible mistake.
I recommend reading the rest of the FAQ [0], but if you are in a hurry: fixed-schema protocols like protobuffers do not let you remove fields the way self-describing formats such as JSON do. Removing fields or switching them from required to optional is an ABI-breaking change. Nobody wants to update all servers and all clients simultaneously. At that point, you would be better off defining a new API endpoint and deprecating the old one.
The capnproto FAQ also brings up the fact that validation should be handled at the application level rather than the ABI level.
[0] https://capnproto.org/faq.html
Hello. I didn't invent Protocol Buffers, but I did write version 2 and was responsible for open sourcing it. I believe I am the author of the "manifesto" entitled "required considered harmful" mentioned in the footnote. Note that I mostly haven't touched Protobufs since I left Google in early 2013, but I have created Cap'n Proto since then, which I imagine this guy would criticize in similar ways.
This article appears to be written by a programming language design theorist who, unfortunately, does not understand (or, perhaps, does not value) practical software engineering. Type theory is a lot of fun to think about, but being simple and elegant from a type theory perspective does not necessarily translate to real value in real systems. Protobuf has undoubtedly, empirically proven its real value in real systems, despite its admittedly large number of warts.
The main thing that the author of this article does not seem to understand -- and, indeed, many PL theorists seem to miss -- is that the main challenge in real-world software engineering is not writing code but changing code once it is written and deployed. In general, type systems can be both helpful and harmful when it comes to changing code -- type systems are invaluable for detecting problems introduced by a change, but an overly-rigid type system can be a hindrance if it means common types of changes are difficult to make.
This is especially true when it comes to protocols, because in a distributed system, you cannot update both sides of a protocol simultaneously. I have found that type theorists tend to promote "version negotiation" schemes where the two sides agree on one rigid protocol to follow, but this is extremely painful in practice: you end up needing to maintain parallel code paths, leading to ugly and hard-to-test code. Inevitably, developers are pushed towards hacks in order to avoid protocol changes, which makes things worse.
I don't have time to address all the author's points, so let me choose a few that I think are representative of the misunderstanding.
> Make all fields in a message required. This makes messages product types.
> Promote oneof fields to instead be standalone data types. These are coproduct types.
This seems to miss the point of optional fields. Optional fields are not primarily about nullability but about compatibility. Protobuf's single most important feature is the ability to add new fields over time while maintaining compatibility. This has proven -- in real practice, not in theory -- to be an extremely powerful way to allow protocol evolution. It allows developers to build new features with minimal work.
Real-world practice has also shown that quite often, fields that originally seemed to be "required" turn out to be optional over time, hence the "required considered harmful" manifesto. In practice, you want to declare all fields optional to give yourself maximum flexibility for change.
The author dismisses this later on:
> What protobuffers are is permissive. They manage to not shit the bed when receiving messages from the past or from the future because they make absolutely no promises about what your data will look like. Everything is optional! But if you need it anyway, protobuffers will happily cook up and serve you something that typechecks, regardless of whether or not it's meaningful.
In real world practice, the permissiveness of Protocol Buffers has proven to be a powerful way to allow for protocols to change over time.
Maybe there's an amazing type system idea out there that would be even better, but I don't know what it is. Certainly the usual proposals I see seem like steps backwards. I'd love to be proven wrong, but not on the basis of perceived elegance and simplicity, but rather in real-world use.
> oneof fields can't be repeated.
(background: A "oneof" is essentially a tagged union -- a "sum type" for type theorists. A "repeated field" is an array.)
Two things:
1. It's that way because the "oneof" pattern long-predates the "oneof" language construct. A "oneof" is actually syntax sugar for a bunch of "optional" fields where exactly one is expected to be filled in. Lots of protocols used this pattern before I added "oneof" to the language, and I wanted those protocols to be able to upgrade to the new construct without breaking compatibility.
You might argue that this is a side-effect of a system evolving over time rather than being designed, and you'd be right. However, there is no such thing as a successful system which was designed perfectly upfront. All successful systems become successful by evolving, and thus you will always see this kind of wart in anything that works well. You should want a system that thinks about its existing users when creating new features, because once you adopt it, you'll be an existing user.
2. You actually do not want a oneof field to be repeated!
Here's the problem: Say you have your repeated "oneof" representing an array of values where each value can be one of 10 different types. For a concrete example, let's say you're writing a parser and they represent tokens (number, identifier, string, operator, etc.).
Now, at some point later on, you realize there's some additional piece of data you want to attach to every element. In our example, it could be that you now want to record the original source location (line and column number) where the token appeared.
How do you make this change without breaking compatibility? Now you wish that you had defined your array as an array of messages, each containing a oneof, so that you could add a new field to that message. But because you didn't, you're probably stuck creating a parallel array to store your new field. That sucks.
In every single case where you might want a repeated oneof, you always want to wrap it in a message (product type), and then repeat that. That's exactly what you can do with the existing design.
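Spelled out as hand-written Go types rather than a .proto file (purely illustrative, not protoc output), the wrapper pattern looks roughly like this:

    // TokenKind plays the role of the oneof: exactly one variant is set.
    type TokenKind interface{ isTokenKind() }

    type Number struct{ Value float64 }
    type Identifier struct{ Name string }

    func (Number) isTokenKind()     {}
    func (Identifier) isTokenKind() {}

    // Token is the wrapper message around the oneof. Because the repeated
    // field holds Tokens rather than bare TokenKinds, fields like Line and
    // Col can be added later without breaking old data.
    type Token struct {
        Kind TokenKind
        Line int32 // added later; old messages just leave it at zero
        Col  int32
    }

    type TokenStream struct {
        Tokens []Token // repeat the wrapper, not the oneof
    }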
The author's complaints about several other features have similar stories.
> One possible argument here is that protobuffers will hold onto any information present in a message that they don't understand. In principle this means that it's nondestructive to route a message through an intermediary that doesn't understand this version of its schema. Surely that's a win, isn't it?
> Granted, on paper it's a cool feature. But I've never once seen an application that will actually preserve that property.
OK, well, I've worked on lots of systems -- across three different companies -- where this feature is essential.
palata 18 hours ago [-]
> I guess I'll, once again, copy/paste the comment I made when this was first posted
I had missed it those other times, and it's super interesting. So thank you for copy/pasting it once again :-).
klodolph 19 hours ago [-]
Protobuffers suck as a core data model. My take? Use them as a serialization and interchange format, nothing more.
> This puts us in the uncomfortable position of needing to choose between one of three bad alternatives:
I don’t think there is a good system out there that works for both serialization and data models. I’d say it’s a mostly unsolved problem. I think I am happy with protobufs. I know that I have to fight against them contaminating the codebase—basically, your code that uses protobufs is code that directly communicates over raw RPC or directly serializes data to/from storage, and protobufs shouldn’t escape into higher-level code.
But, and this is a big but, you want that anyway. You probably WANT your serialization to be able to evolve independently of your application logic, and the easy way to do that is to use different types for each. You write application logic using types that have all sorts of validation (in the "parse, don't validate" sense) and your serialization layer uses looser validation. This looser validation is nice because you often end up with e.g. buggy code getting shipped that writes invalid data, and if you have a loose serialization layer that just preserves structure (like proto or json), you at least have a good way to munge it into the right shape.
Evolving serialized types has been such a massive pain at a lot of workplaces and the ad-hoc systems I've seen often get pulled into adopting some of the same design choices as protos, like "optional fields everywhere" and "unknown fields are ok". Partly it may be because a lot of ex-Google employees are inevitably hanging around on your team, but partly because some of those design tradeoffs (not ALL of them, just some of them) are really useful long-term, and if you stick around, you may come to the same conclusion.
In the end I mostly want something that's a little more efficient and a little more typed than JSON, and protos fit the bill. I can put my full efforts into safety and the "correct" representation at a different layer, and yes, people will fuck it up and contaminate the code base with protos, but I can fix that or live with it.
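As a rough sketch of that boundary (names are hypothetical): the wire-level type stays loose like the proto, and the domain type is only constructible through validation.

    import (
        "errors"
        "net/mail"
    )

    // UserMsg mirrors the loose wire shape: everything optional / zero-able.
    type UserMsg struct {
        Id    uint64
        Email string
    }

    // User is the validated domain type used by higher-level code.
    type User struct {
        ID    uint64
        Email mail.Address
    }

    // ParseUser is the only way to obtain a User ("parse, don't validate").
    func ParseUser(m UserMsg) (User, error) {
        if m.Id == 0 {
            return User{}, errors.New("user id is required")
        }
        addr, err := mail.ParseAddress(m.Email)
        if err != nil {
            return User{}, errors.New("invalid email: " + err.Error())
        }
        return User{ID: m.Id, Email: *addr}, nil
    }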
ziml77 14 hours ago [-]
> My take? Use them as a serialization and interchange format, nothing more.
Isn't that exactly what they're intended for? I'm confused how anyone would even think to use them any other way.
klodolph 10 hours ago [-]
Like the author said, their usage in practice often creeps outside that.
nu11ptr 20 hours ago [-]
Should have a (2018) callout
Analemma_ 20 hours ago [-]
The "no enums as map keys" thing enrages me constantly. Every protobuf project I've ever worked with either has stringly-typed maps all over the place because of this, or has to write its own function to parse Map<String, V> into Map<K, V> from the enums and then remember to call that right after deserialization, completely defeating the purpose of autogenerated types and deserializers. Why does Google put up with this? Surely it's the same inside their codebase.
Arainach 20 hours ago [-]
Maps are not a good fit for a wire protocol in my experience. Different languages often have different quirks around them, and they're non-trivial to represent in a type-safe way.
If a Map is truly necessary I find it better to just send a repeated Message { Key K, Value V } and then convert that to a map in the receiving end.
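Roughly what that looks like on the receiving side in Go (Entry stands in for the generated element message):

    // Entry mirrors a `message Entry { string key = 1; int64 value = 2; }` element.
    type Entry struct {
        Key   string
        Value int64
    }

    // toMap converts the repeated entries into a native map; last key wins.
    func toMap(entries []Entry) map[string]int64 {
        m := make(map[string]int64, len(entries))
        for _, e := range entries {
            m[e.Key] = e.Value
        }
        return m
    }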
dweis 19 hours ago [-]
I believe that the reason for this limitation is that not all languages can represent open enums cleanly to gracefully handle unknown enums upon schema skew.
riku_iki 20 hours ago [-]
And v1 and v2 protos didn't even have maps.
Also, why do you use a string as a key and not an int?
Arainach 20 hours ago [-]
proto2 absolutely supported the map type.
riku_iki 20 hours ago [-]
It could be; it looks like there was some version misalignment:
The maps syntax is only supported starting from v3.0.0. The "proto2" in the doc is referring to the syntax version, not protobuf release version. v3.0.0 supports both proto2 syntax and proto3 syntax while v2.6.1 only supports proto2 syntax. For all users, it's recommended to use v3.0.0-beta-1 instead of v2.6.1.
https://stackoverflow.com/questions/50241452/using-maps-in-p...
m463 17 hours ago [-]
persuasive or pervasive?
19 hours ago [-]
19 hours ago [-]
yablak 19 hours ago [-]
"you're the worst serialization/config format I've ever heard of"
jeffbee 20 hours ago [-]
Type system fans are so irritating. The author doesn't engage with the point of protocol buffers, which is that they are thin adapters between the union of things that common languages can represent with their type systems and a reasonably efficient marshaling scheme that can be compact on the wire.
techbrovanguard 20 hours ago [-]
i used protobuffers a lot at $previous_job and i agree with the entire article. i feel the author’s pain in my bones. protobuffers are so awful i can’t imagine google associating itself with such an amateur, ad hoc, ill-defined, user hostile, time wasting piece of shit.
the fact that protobuffers wasn’t immediately relegated to the dustbin shows just how low the bar is for serialization formats.
esafak 9 hours ago [-]
What do you use?
The recent CREL format for ELF also uses the more established LEB128: https://news.ycombinator.com/item?id=41222021
At this point I don't feel like I have a clear opinion about whether PrefixVarint is worth it, compared with LEB128.
We include a version number with each release of the game. If we change a proto we add new fields and deprecate old ones and increment the version. We use the version number to run a series of steps on each proto to upgrade old fields to new ones.
It sounds like you've built your own back-compat functionality on top of protobuf?
The only functionality protobuf is giving you here is optional-by-default (and mandatory version numbers, but most wire formats require that)
We do rename deprecated fields and often give new fields their names. We rely on the field number to make that work.
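A hedged guess at the shape of such an upgrade chain; SaveGame and the individual steps are entirely hypothetical, but the structure is a simple version-indexed pipeline:

    // upgraders[v] migrates a message from schema version v to v+1,
    // e.g. copying a deprecated field into its renamed successor.
    var upgraders = map[int32]func(*SaveGame){
        1: func(s *SaveGame) { s.PlayerName = s.DeprecatedName },
        2: func(s *SaveGame) { s.Coins = int64(s.DeprecatedGold) },
    }

    func upgrade(s *SaveGame, current int32) {
        for v := s.Version; v < current; v++ {
            if step, ok := upgraders[v]; ok {
                step(s)
            }
            s.Version = v + 1
        }
    }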
Dragging your org away from using poorly specified json is often worth these papercuts IMO.
Obviously if your thing HAS to communicate over the network that's one thing, but a lot of applications don't. The distributed system micro service stuff is a choice.
Guys, distributed systems are hard. The extremely low API visibility combined with fragile network calls and unsafe, poorly specified API versioning means your stuff is going to break, and a lot.
Want a version-controlled API? Just write an interface in C# or PHP or whatever.
The article covers this in the section "The Lie of Backwards- and Forwards-Compatibility." My experience working with protocol buffers matches what the author describes in this section.
If you make any change, it's a new message type.
For compatibility you can coerce the new message to the old message and dual-publish.
Obviously you need to track when the old clients have been moved over so you can eventually retire the dual-publishing.
You could also do the conversion on the receiving side without a-priori information, but that would be extremely slow.
It's only positional protocols that have this problem.
Yet the author has the audacity to call the authors of protobuf (originally Jeff Dean et al) "amateurs."
In Java, you can accomplish some of this using Jackson JSON serialization of plain objects, where there are several ways in which changes can be made backwards-compatibly (e.g. in recent years, post-deserialization hooks can be used to handle more complex cases), which satisfies (a). For (b), there's no automatic linter. However, in practice, I found that writing tests that deserialize the prior release's serialized objects gets you pretty far along the line of regression protection for major changes. Also, it was pretty easy to write an automatic round-trip serialization tester to catch mistakes in the ser/deser chain. Finally, if you stay away from non-schemable ser/deser (such as a method that handles any property name), which can be enforced with a linter, you can output the JSON schema of your objects to committed source. Then any time the generated schema changes, you can look for corresponding test coverage in code reviews.
I know that’s not the same as an automatic linter, but it gets you pretty far in practice. It does not absolve you from cross-release/upgrade testing, because serialization backwards-compatibility does not catch all backwards-compatibility bugs.
Additionally, Jackson has many techniques, such as unwrapping objects, which let you execute more complicated refactoring backwards-compatibly, such as extracting a set of fields into a sub-object.
I like that the same schema can be used to interact with your SPA web clients for your domain objects, giving you nice inspectable JSON. Things serialized to unprivileged clients can be filtered with views, such that sensitive fields are never serialized, for example.
You can generate TypeScript objects from this schema or generate clients for other languages (e.g. with Swagger). Granted it won’t port your custom migration deserialization hooks automatically, so you will either have to stay within a subset of backwards-compatible changes, or add custom code for each client.
You can also serialize your RPC comms to a binary format, such as Smile, which uses back-references for property names, should you need to reduce on-the-wire size.
It’s also nice to be able to define Jackson mix-ins to serialize classes from other libraries’ code or code that you can’t modify.
ASCII text (tongue in cheek here)
Chances are, the author literally used software that does it as he wrote these words. This feature is critical to how Chrome Sync works. You wouldn’t want to lose synced state if you use an older browser version on another device that doesn’t recognize the unknown fields and silently drops them. This is so important that at some point Chrome literally forked protobuf library so that unknown fields are preserved even if you are using protobuf lite mode.
Just FYI: an obligatory comment from the protobuf v2 designer.
Yeah, protobuf has lots of design mistakes but this article is written by someone who does not understand the problem space. Most of the complexity of serialization comes from implementation compatibility between different timepoints. This significantly limits design space.
> oneof fields can’t be repeated.
Wrap oneof field in message which can be repeated
> map fields cannot be repeated.
Wrap in message which can contain repeated fields
> map values cannot be other maps.
Wrap map in message which can be a value
Perhaps this is slightly inconvenient/un-ergonomic, but the author is positioning these things as "protos fundamentally can't do this".
The author talks about compatibility a fair bit, specifically the importance of distinguishing a field that wasn't set from one that was intentionally set to a default, and how protobuffs punted on this.
What do you think they don't understand?
> Make all fields in a message required. This makes messages product types.
> One possible argument here is that protobuffers will hold onto any information present in a message that they don't understand. In principle this means that it's nondestructive to route a message through an intermediary that doesn't understand this version of its schema. Surely that's a win, isn't it?
> Granted, on paper it's a cool feature. But I've never once seen an application that will actually preserve that property.
Then it is fair to raise eyebrows at the author's expertise. And please don't ask if I'm attached to protobuf; I can roast protocol buffers over their design mistakes for hours. It is just that the author makes a series of wrong claims, presumably due to their bias toward principled type systems and inexperience working on large-scale systems.
> Make all fields in a message required. This makes messages product types.
> Then it is fair to raise eyebrows on the author's expertise.
It's fair to raise eyebrows at your expertise, since required fields don't contribute to b/w incompatibility at all: every real-world protocol has a mandatory version number that's tied to a direct parsing strategy with a strictly defined algebra, both for shrinking (removing data fragments) and growing (introducing data fragments) payloads. Zero-values and optionality in protobuf are one version of that algebra; it's the most inferior one, subject to lossy protocol upgrades, and the easiest one for amateurs to design. Then there's the next level, where the protocol upgrade is defined in terms of bijective functions and other elements of symmetric groups that can tell you whether a newly announced data change can be carried forward (new required field) or dropped (removed field), as long as both the sending and receiving ends are able to derive the new compound structures from previously defined pervasive types (the things protobuf calls oneofs and messages, for example).
They seem to be saying that you have to publish code that can change a type from schema A to schema B... And back, whenever you make a schema B. This is the "algebra". The "and back" part makes it bijective. You do this at the level of your core primitive types so that it's reused everywhere. This is what they meant by "pervasive" and it ties into the whole symmetric groups thing.
Finally, it seems like when you're making a lossy change, where a bijection isn't possible, they want you to make it incompatible. i.e, if you replaced address with city, then you cannot decode the message in code that expects address.
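Read that way, the "algebra" boils down to publishing paired converters alongside each schema change; a rough Go sketch with placeholder types (lossy changes simply have no Down):

    import "strings"

    // Migration pairs the forward and backward conversion between two
    // adjacent schema versions.
    type Migration[Old, New any] struct {
        Up   func(Old) New
        Down func(New) Old // only present when the change is reversible
    }

    // Example: v1 stored one Address string, v2 splits it into Street/City.
    type AddrV1 struct{ Address string }
    type AddrV2 struct{ Street, City string }

    var addrV1toV2 = Migration[AddrV1, AddrV2]{
        Up: func(a AddrV1) AddrV2 {
            street, city, _ := strings.Cut(a.Address, ", ")
            return AddrV2{Street: street, City: city}
        },
        Down: func(a AddrV2) AddrV1 {
            return AddrV1{Address: a.Street + ", " + a.City}
        },
    }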
True story: trying to reverse engineer macOS Photos.app sqlite database format to extract human-readable location data from an image.
I eventually figured it out, but it was:
A base64 encoded Binary Plist format with one field containing a ProtoBuffer which contained another protobuffer which contained a unicode string which contained improperly encoded data (for example, U+2013 EN DASH was encoded as \342\200\223)
This could have been a simple JSON string.
There's nothing "simple" about parsing JSON as a serialization format.
Write a correct JSON parser, compare with protobuf on various metrics, and then we can talk.
[1]: although to be fair, I am older than kids whose first programming language was JavaScript, so I do not think of JSON object format with property names in quotes and integers that need to be wrapped as strings to be safe, etc., lack of comma after the last entry--to be fair this last one is a problem in writing, not reading JSON--as the most natural thing
> Sure you can look at it[1], but you're not expected to look at Apple Photos database.
How else are you supposed to figure it out? If you're older then you know that you can't rely on the existence or correctness of documentation. Being able to look at JSON and understand it as a human on the wire is a huge advantage. JSON being pretty simple in structure is an advantage. I don't see a problem with quoting property names! As for large integers and datetimes, yes, that could be much better designed. But that's true of every protocol and file format that has any success.
JSON parsers and writers are common and plentiful and are far less crazy than any complete XML parser/writer library.
I don’t think this is a given at all. Depends on the context. I think it’s often overvalued. A lot of times the performance matters more. If human readability was the only thing that mattered, I would still not count JSON as the winner. You will have to pipe it to jq, realistically. You’d do the same for any other serialization format too. Inside Google where proto is prevalent, that is just as easy if not more convenient.
The point is that how hard or easy it is for an app's end user to decipher its file database is not a design goal for the serialization library chosen by the Apple Photos developers here. The constraints and requirements are all on a different axis.
Using Protobuffers for a few KB of metadata, when the photo library otherwise is taking multiple GB of data, is just pennywise pound foolish.
Of course, even my preference for a simple JSON string would be problematic: data in a database really should be stored properly normalized to a separate table and fields.
My guess is that protobuffers did play a role here in causing this poor design. I imagine this scenario:
- Photos.app wants to look up location data
- the server returns structured data in a ProtoBuffer
- there's no easy or reasonable way to map a protobuf to database fields (one point of TFA)
- Surrender! just store the binary blob in SQLITE and let the next poor sod deal with it
So that would make it a set of historical decisions, but I am not convinced they are necessarily bad decisions given the constraints. Each layer is likely responsible for handling edge cases in the application that you and I are not privy to.
https://news.ycombinator.com/item?id=18188519 (299 comments)
https://news.ycombinator.com/item?id=21871514 (215 comments)
https://news.ycombinator.com/item?id=35281561 (59 comments)
Here's a fun one:
https://news.ycombinator.com/item?id=21873926
This is a rage bait, not worth the read.
Over time we accumulate cleverer and cleverer abstractions. And any abstraction that we've internalized, we stop seeing. It just becomes how we want to do things, and we have no sense of what cost we are imposing with others. Because all abstractions leak. And all abstractions pose a barrier for the maintenance programmer.
All of which leads to the problem that Brian Kernighan warned about with, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?" Except that the person who will have to debug it is probably a maintenance programmer who doesn't know your abstractions.
One of the key pieces of wisdom that show through Google's approaches is that our industry's tendency towards abstraction is toxic. As much as any particular abstraction is powerful, allowing too many becomes its own problem. This is why, for example, Go was designed to strongly discourage over-abstraction.
Protobufs do exactly what it says on the tin. As long as you are using them in the straightforward way which they are intended for, they work great. All of his complaints boil down to, "I tried to do some meta-manipulation to generate new abstractions, and the design said I couldn't."
That isn't the result of them being written by amateurs. That's the result of them being written to incorporate a piece of engineering wisdom that most programmers think that they are smart enough to ignore. (My past self was definitely one of those programmers.)
Can the technology be abused? Do people do stupid things with them? Are there things that you might want to do that you can't? Absolutely. But if you KISS, they work great. And the more you keep it simple, the better they work. I consider that an incentive towards creating better engineered designs.
You can kinda see how this author got bounced out of several major tech firms in one year or less, each, according to their linkedin.
That said the article is full of technical detail and voices several serious shortcomings of protobuf that I've encountered myself, along with suggestions as to how it could be done better. It's a shame it comes packaged with unwarranted personal attacks.
At some stage with every ESP or Arduino project, I want to send and receive data, i.e. telemetry and control messages. A lot of people use ad-hoc protocols or HTTP/JSON, but I decided to try the nanopb library. I ended up with a relatively neat solution that just uses UDP packets. For my purposes a single packet has plenty of space, and I can easily extend this approach in the future. I know I'm not the first person to do this but I'll probably keep using protobufs until something better comes along, because the ecosystem exists and I can focus on the stuff I consider to be fun.
Varints for the win. Send time series as columns of varint arrays - delta or RLE compression becomes quite straightforward. And as a bonus I can just implement new fields in the device and deploy right away - the server-side support can wait until we actually need it.
No, flatbuffers/cap'n'proto are unacceptably big because of fixed layout. No, CBOR is an absolute no go - why on earth would you waste precious bytes on schema every time? No, general-purpose compression like gzip wouldn't do much on such a small size, it will probably make things worse. Yes, ASN is supposed to be the right solution - but there is no full-featured implementation that doesn't cost $$$$ and the whole thing is just too damn bloated.
Kinda fun that it sucks for what it is supposed to do, but actually shines elsewhere.
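A rough sketch of that delta-plus-varint column encoding with Go's encoding/binary (which uses zig-zag signed varints, so small deltas in either direction stay small):

    import (
        "encoding/binary"
        "fmt"
    )

    // encodeColumn delta-encodes one column and writes each delta as a
    // signed varint; slowly changing series compress to ~1-2 bytes/sample.
    func encodeColumn(samples []int64) []byte {
        var buf []byte
        var prev int64
        for _, s := range samples {
            buf = binary.AppendVarint(buf, s-prev)
            prev = s
        }
        return buf
    }

    func decodeColumn(buf []byte) ([]int64, error) {
        var out []int64
        var prev int64
        for len(buf) > 0 {
            d, n := binary.Varint(buf)
            if n <= 0 {
                return nil, fmt.Errorf("truncated varint")
            }
            prev += d
            out = append(out, prev)
            buf = buf[n:]
        }
        return out, nil
    }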
to learn and play with it it's fine, else why complicate life?
However protobuf is ridiculously interchangeable and there are serializers for every language. So you can get your interfaces fleshed out early in a project without having to worry that someone will have a hard time ingesting it later on.
Yes it's a pain how an empty array is a valid instance of every message type, but at least the fields that you remember to send are strongly typed. And field optionality gives you a fighting chance that your software can still speak to the unit that hasn't been updated in the field for the last five years.
On the embedded side, nanopb has worked well for us. I'm not missing having to hand maintain ad-hoc command parsers on the embedded side, nor working around quirks and bugs of those parsers on the desktop side
The fact that the author is arguing for making all fields required means they don't understand the reasoning for why all fields are optional. This breaks systems when there are proto mismatches (there are postmortems outlining this).
This sums up a lot of the issues I’ve seen with protobuf as well. It’s not an expressive enough language to be the core data model, yet people use it that way.
In general, if you don’t have extreme network needs, then protobuf seems to cause more harm than good. I’ve watched Go teams spend months of time implementing proto based systems with little to no gain over just REST.
Protobufs have lots of problems, but at least they are better than ASN.1!
Most of the other issues in the article can be solved be wrapping things in more messages. Not great, not terrible.
As with the tightly-coupled issues with Go, I'll keep waiting for a better approach any decade now. In the meantime, both tools (for their glaring imperfections) work well enough, solve real business use cases, and have a massive ecosystem moat that makes them easy to work with.
This is not pb nor go. A sensible default of invalid state would have caught this. So would an error and crash. Either would have been better than corrupt data.
Beyond that it is a very simple language. But yes, 100%, for better and worse, it is deeply inspired by Google's codebase and needs
So HN, what are the best alternatives available today and why?
Then it hardly solves the same problem Protobuf solves.
It’s the new default in a lot of IOT specs, it’s the backbone for deep space communication networks etc..
Maintains interoperability with JSON. Is very much battle tested in very challenging environments.
I almost burst out in laughter when the article argued that you should reuse types in preference to inlining definitions. If you've ever felt the pain of needing to split something up, you would not be so eager to reuse. In a codebase with a single process, it's pretty trivial to refactor to split things apart; you can make one CL and be done. In a system with persistence and distribution, it's a lot more awkward.
That whole meaning of data vs representation thing. There's fundamentally a truth in the correspondence. As a program evolves, its understanding of its domain increases, and the fidelity of its internal representations increase too, by becoming more specific, more differentiated, more nuanced. But the old data doesn't go away. You don't get to fill in detail for data that was gathered in older times. Sometimes, the referents don't even exist any more. Everything is optional; what was one field may become two fields in the future, with split responsibilities, increased fidelity to the domain.
Despite issues, protobufs solve real problems and (imo) bring more value than cost to a project. In particular, I'd much rather work with protobufs and their generated ser/de than untyped json
One day I got annoyed enough to dig for the original proposal and like 99.9% of initiatives like this, it was predicated on:
- building a list of existing solutions
- building an overly exhaustive list, of every facet of the problem to be solved
- declare that no existing solution hits every point on your inflated list
- "we must build it ourselves."
It's such a tired playbook, but it works so often unfortunately.
The person who architects and sells it gets points for "impact", then eventually moves onto the next company.
In the meantime the problem being solved evolves and grows (as products and businesses tend to), the homegrown solution no longer solves anything perfectly, and everyone is still stuck dragging along said solution, seemingly forever.
-
Usually eventually someone will get tired enough of the homegrown solution and rightfully question why they're dragging it along, and if you're lucky it gets replaced with something sane.
If you're unlucky that person also uses it as justification to build a new in-house solution (we built the old one, after all), and you replay the loop.
In the case of serialization though, that's not always doable. This company was storing petabytes (if not exabytes) of data in the format for example.
Nearly every other complaint is solved by wrapping things in messages (sorry, product types). I don't get the enum limitation on map keys, though; that complaint is fair.
Protobuf eliminates truckloads of stupid serialization/deserialization code that, in my embedded world, almost always has to be hand-written otherwise. If there was a tool that automatically spat out matching C, Kotlin, and Swift parsers from CDDL, I'd certainly give it a shot.
Some solutions do exist like here’s a C one[1] which maybe you could throw in some WASI / WASM compilation and get “somewhat” idiomatic bindings in a bunch of languages.
Here’s another for Rust [2] but I’m sure I’ve seen a bunch of others around. I think what’s missing is a unified protoc style binary with language specific plugins.
[1] https://github.com/NordicSemiconductor/zcbor
[2] https://github.com/dcSpark/cddl-codegen
I do tend to agree that they are bad. I also agree that people put a little too much credence in "came from Google." I can't bring myself to have this much anger towards it. Had to have been something that sparked this.
A few years ago I moved to a large company where protobufs were the standard way APIs were defined. When I first started working with the generated TypeScript code, I was confused as to why almost all fields on generated object types were marked as optional. I assumed it was due to the way people were choosing to define the API at first, but then I learned this was an intentional design choice on the part of protobufs.
We ended up having to write our own code to parse the responses from the "helpfully" generated TypeScript client. This meant we had to also handle rejecting nonsensical responses where an actually required field wasn't present, which is exactly the sort of thing I'd want generated clients to do. I would expect having to do some transformation myself, but not to that degree. The generated client was essentially useless to us, and the protocol's looseness offered no discernible benefit over any other API format I've used.
I imagine some of my other complaints could be solved with better codegen tools, but I think fundamentally the looseness of the type system is a fatal issue for me.
Couple years ago Connect released very good generator for TypeScript, we use in in production and it's great:
https://github.com/connectrpc/connect-es
Philosophically, checking that a field is required or not is data validation and doesn't have anything to do with serialization. You can't specify that an integer falls into a certain valid range or that a string has a valid number of characters or is the correct format (e.g. if it's supposed to be an email or a phone number). The application code needs to do that kind of validation anyway. If something really is required then that should be the application's responsibility to deal with it appropriately if it's missing.
The Captn Proto docs also describe why being able to declare required fields is a bad idea: https://capnproto.org/faq.html#how-do-i-make-a-field-require...
My issue is that people seem to like to use protobuf to describe the shape of APIs rather than just something to handle serialization. I think it's very bad at the describing API shapes.
It is amusing, in many ways. This is specifically part of what WSDL aspired to, but people were betrayed by the big companies not having a common ground for what shapes they would support in a description.
What happens if you mark a field as required and then you need to delete it in the future? You can't because if someone stored that proto somewhere and is no longer seeing the field, you just broke their code.
But in some situations you can be pretty confident that a field will be required always. And if you turn out to be wrong then it's not a huge deal. You add the new field as optional first (with all upgraded clients setting the value) and then once that is rolled out you make it required.
And if a field is in fact semantically required (like the API cannot process a request without the data in a field) then making it optional at the interface level doesn't really solve anything. The message will get deserialized but if the field is not set it's just an immediate error which doesn't seem much worse to me than a deserialization error.
2. This is the problem: software (and protos) can live for a long time. They might be used by other clients elsewhere that you don't control. What you thought was required might turn out, 10 years down the line, not to be required anymore. What you "think" is not a huge deal then becomes a huge deal and can cause downtime.
3. You're mixing business logic and over-the-wire field requirements. If a field is required for an interface to function, you should be checking it anyway and returning the correct error. How does that change with proto supporting required?
It isn't that you can't do it. But the code side of the equation is the cheap side.
Too often I find something mildly interesting, but then realize that in order for me to try to use it I need to set up a personal mirror of half of Google's tech stack to even get it to start.
Protobuf as a language feels clunky. The “type before identifier” syntax looks ancient and Java-esque.
The tools are clunky too. protoc is full of gotchas, and for something as simple as validation, you need to add a zillion plugins and memorize their invocation flags.
From tooling to workflow to generated code, it’s full of Google-isms and can be awkward to use at times.
That said, the serialization format is solid, and the backward-compatibility paradigms are genuinely useful. Buf adds some niceties to the tooling and makes it more tolerable. There’s nothing else that solves all the problems Protobuf solves.
https://protobuf.dev/design-decisions/nullable-getters-sette...
I get the API interoperability between various languages when one wants to build a client with a strict schema, but in reality this is more theory than real life.
In essence, anyone who subscribes to YAGNI understands that PB and gRPC are a big no-no.
PS: if you need a binary format, just use cbor or msgpack. Otherwise the beauty of json is that it is human-readable and easily parseable, so even if you lack access to the original schema, you can still EASILY process the data and UNDERSTAND it as well.
I don't actually want to do this, because then you have N + 1 implementations of each data type, where N = number of programming languages touching the data, and + 1 for the proto implementation.
What I personally want to do is use a language-agnostic IDL to describe the types that my programs use. Within Google you can even do things like just store them in the database.
The practical alternative is to use JSON everywhere, possibly with some additional tooling to generate code from a JSON schema. JSON is IMO not as nice to work with. The fact that it's also slower probably doesn't matter to most codebases.
I think this is exactly what you end up with using protobuf. You have an IDL that describes the interface types but then protoc generates language-specific types that are horrible so you end up converting the generated types to some internal type that is easier to use.
Ideally if you have an IDL that is more expressive then the code generator can create more "natural" data structures in the target language. I haven't used it a ton, but when I have used thrift the generated code has been 100x better than what protoc generates. I've been able to actually model my domain in the thrift IDL and end up with types that look like what I would have written by hand so I don't need to create a parallel set of types as a separate domain model.
Protobuf has a bidirectional JSON mapping that works reasonably well for a lot of use cases.
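In Go that mapping is exposed through the protojson package; a minimal sketch, with pb.User standing in for whatever generated message you actually have:

    import "google.golang.org/protobuf/encoding/protojson"

    // toJSON emits proto field names as lowerCamelCase JSON keys by default.
    func toJSON(m *pb.User) ([]byte, error) {
        return protojson.Marshal(m)
    }

    func fromJSON(data []byte) (*pb.User, error) {
        m := &pb.User{}
        if err := protojson.Unmarshal(data, m); err != nil {
            return nil, err
        }
        return m, nil
    }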
https://hn.algolia.com/?q=Protobuffers+Are+Wrong
Kenton has already provided a good explanation here: https://news.ycombinator.com/item?id=45140590
what alternative do we have? sending json and base64 strings
* https://news.ycombinator.com/item?id=18188519
* https://hn.algolia.com/?q=%22Protobuffers+Are+Wrong%22
I guess I'll, once again, copy/paste the comment I made when this was first posted: https://news.ycombinator.com/item?id=18190005
--------
Hello. I didn't invent Protocol Buffers, but I did write version 2 and was responsible for open sourcing it. I believe I am the author of the "manifesto" entitled "required considered harmful" mentioned in the footnote. Note that I mostly haven't touched Protobufs since I left Google in early 2013, but I have created Cap'n Proto since then, which I imagine this guy would criticize in similar ways.
This article appears to be written by a programming language design theorist who, unfortunately, does not understand (or, perhaps, does not value) practical software engineering. Type theory is a lot of fun to think about, but being simple and elegant from a type theory perspective does not necessarily translate to real value in real systems. Protobuf has undoubtedly, empirically proven its real value in real systems, despite its admittedly large number of warts.
The main thing that the author of this article does not seem to understand -- and, indeed, many PL theorists seem to miss -- is that the main challenge in real-world software engineering is not writing code but changing code once it is written and deployed. In general, type systems can be both helpful and harmful when it comes to changing code -- type systems are invaluable for detecting problems introduced by a change, but an overly-rigid type system can be a hindrance if it means common types of changes are difficult to make.
This is especially true when it comes to protocols, because in a distributed system, you cannot update both sides of a protocol simultaneously. I have found that type theorists tend to promote "version negotiation" schemes where the two sides agree on one rigid protocol to follow, but this is extremely painful in practice: you end up needing to maintain parallel code paths, leading to ugly and hard-to-test code. Inevitably, developers are pushed towards hacks in order to avoid protocol changes, which makes things worse.
I don't have time to address all the author's points, so let me choose a few that I think are representative of the misunderstanding.
> Make all fields in a message required. This makes messages product types.
> Promote oneof fields to instead be standalone data types. These are coproduct types.
This seems to miss the point of optional fields. Optional fields are not primarily about nullability but about compatibility. Protobuf's single most important feature is the ability to add new fields over time while maintaining compatibility. This has proven -- in real practice, not in theory -- to be an extremely powerful way to allow protocol evolution. It allows developers to build new features with minimal work.
Real-world practice has also shown that quite often, fields that originally seemed to be "required" turn out to be optional over time, hence the "required considered harmful" manifesto. In practice, you want to declare all fields optional to give yourself maximum flexibility for change.
The author dismisses this later on:
> What protobuffers are is permissive. They manage to not shit the bed when receiving messages from the past or from the future because they make absolutely no promises about what your data will look like. Everything is optional! But if you need it anyway, protobuffers will happily cook up and serve you something that typechecks, regardless of whether or not it's meaningful.
In real world practice, the permissiveness of Protocol Buffers has proven to be a powerful way to allow for protocols to change over time.
Maybe there's an amazing type system idea out there that would be even better, but I don't know what it is. Certainly the usual proposals I see seem like steps backwards. I'd love to be proven wrong, but not on the basis of perceived elegance and simplicity, but rather in real-world use.
> oneof fields can't be repeated.
(background: A "oneof" is essentially a tagged union -- a "sum type" for type theorists. A "repeated field" is an array.)
Two things:
1. It's that way because the "oneof" pattern long-predates the "oneof" language construct. A "oneof" is actually syntax sugar for a bunch of "optional" fields where exactly one is expected to be filled in. Lots of protocols used this pattern before I added "oneof" to the language, and I wanted those protocols to be able to upgrade to the new construct without breaking compatibility.
You might argue that this is a side-effect of a system evolving over time rather than being designed, and you'd be right. However, there is no such thing as a successful system which was designed perfectly upfront. All successful systems become successful by evolving, and thus you will always see this kind of wart in anything that works well. You should want a system that thinks about its existing users when creating new features, because once you adopt it, you'll be an existing user.
2. You actually do not want a oneof field to be repeated!
Here's the problem: Say you have your repeated "oneof" representing an array of values where each value can be one of 10 different types. For a concrete example, let's say you're writing a parser and the values represent tokens (number, identifier, string, operator, etc.).
Now, at some point later on, you realize there's some additional piece of data you want to attach to every element. In our example, it could be that you now want to record the original source location (line and column number) where the token appeared.
How do you make this change without breaking compatibility? Now you wish that you had defined your array as an array of messages, each containing a oneof, so that you could add a new field to that message. But because you didn't, you're probably stuck creating a parallel array to store your new field. That sucks.
In every single case where you might want a repeated oneof, you always want to wrap it in a message (product type), and then repeat that. That's exactly what you can do with the existing design (sketch below).
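A rough sketch of the parser example (all names here are mine, not from the thread; proto2 syntax assumed):

    // Wrap the oneof in a message from day one...
    message Token {
      oneof kind {
        double number = 1;
        string identifier = 2;
        string string_literal = 3;
        string operator = 4;
      }
      // ...so the later change is just two new sibling fields:
      optional int32 line = 5;
      optional int32 column = 6;
    }

    message TokenList {
      // Repeat the wrapper message, not the oneof itself.
      repeated Token tokens = 1;
    }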
The author's complaints about several other features have similar stories.
> One possible argument here is that protobuffers will hold onto any information present in a message that they don't understand. In principle this means that it's nondestructive to route a message through an intermediary that doesn't understand this version of its schema. Surely that's a win, isn't it?
> Granted, on paper it's a cool feature. But I've never once seen an application that will actually preserve that property.
OK, well, I've worked on lots of systems -- across three different companies -- where this feature is essential.
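For anyone who hasn't seen it in action, here's a sketch of the routing scenario (hypothetical names; this relies on implementations preserving unknown fields across a parse/serialize round trip, which as I understand it proto2 always did and proto3 does again since release 3.5):

    // Snapshot 1: the schema the intermediary/router was built against.
    message Envelope {
      optional string destination = 1;
    }

    // Snapshot 2: the newer schema that producers and consumers use.
    message Envelope {
      optional string destination = 1;
      optional bytes payload = 2;  // the router has never heard of this
    }

    // When the router parses a new-schema message with its old schema
    // and re-serializes it, field 2 survives as an "unknown field"
    // instead of being silently dropped.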
I had missed it those other times, and it's super interesting. So thank you for copy/pasting it once again :-).
> This puts us in the uncomfortable position of needing to choose between one of three bad alternatives:
I don’t think there is a good system out there that works for both serialization and data models. I’d say it’s a mostly unsolved problem. I think I am happy with protobufs. I know that I have to fight against them contaminating the codebase—basically, your code that uses protobufs is code that directly communicates over raw RPC or directly serializes data to/from storage, and protobufs shouldn’t escape into higher-level code.
But, and this is a big but, you want that anyway. You probably WANT your serialization to be able to evolve independently of your application logic, and the easy way to do that is to use different types for each. You write application logic using types that have all sorts of validation (in the "parse, don't validate" sense) and your serialization layer uses looser validation. This looser validation is nice because you often end up with, e.g., buggy code getting shipped that writes invalid data, and if you have a loose serialization layer that just preserves structure (like proto or JSON), you at least have a good way to munge it into the right shape.
Evolving serialized types has been such a massive pain at a lot of workplaces, and the ad-hoc systems I've seen often get pulled into adopting some of the same design choices as protos, like "optional fields everywhere" and "unknown fields are ok". Partly that may be because a lot of ex-Google employees are inevitably hanging around on your team, but partly it's because some of those design tradeoffs (not ALL of them, just some of them) are really useful long-term, and if you stick around, you may come to the same conclusion.
In the end I mostly want something that's a little more efficient and a little more typed than JSON, and protos fit the bill. I can put my full efforts into safety and the "correct" representation at a different layer, and yes, people will fuck it up and contaminate the code base with protos, but I can fix that or live with it.
Isn't that exactly what they're intended for? I'm confused how anyone would even think to use them any other way.
If a Map is truly necessary I find it better to just send a repeated Message { Key K, Value V } and then convert that to a map on the receiving end.
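Something like this (illustrative names; Project stands in for whatever the value type is):

    message ProjectEntry {
      optional string key = 1;
      optional Project value = 2;
    }

    message Catalog {
      repeated ProjectEntry entries = 1;
    }

As far as I know, proto3's built-in map<K, V> is specified to use exactly this wire representation (a repeated entry message with key = 1 and value = 2), so the two are wire-compatible anyway.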
Also, why use a string as a key and not an int?
The maps syntax is only supported starting from v3.0.0. The "proto2" in the doc is referring to the syntax version, not protobuf release version. v3.0.0 supports both proto2 syntax and proto3 syntax while v2.6.1 only supports proto2 syntax. For all users, it's recommended to use v3.0.0-beta-1 instead of v2.6.1. https://stackoverflow.com/questions/50241452/using-maps-in-p...
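For reference, the map syntax that became available with the 3.0.0 releases looks like this (assuming some Project message defined elsewhere):

    syntax = "proto3";

    message Catalog {
      // Encoded on the wire as a repeated key/value entry message,
      // per the comment above.
      map<string, Project> projects = 1;
    }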
The fact that protobuffers weren't immediately relegated to the dustbin shows just how low the bar is for serialization formats.