The title of this talk is “Hardware, Software, Trustware.” How many of you have been paying attention to this whole business about a fork? Has anybody heard of a fork? There is a culture gap that is being expressed in this fork. And most people look at this and they see the obvious culture gap. Most of the mining happens in China; most of the miners are Chinese; a lot of the development happens in Western nations; a lot of the developers are from Western nations. And at a first glance it looks like the culture gap is some kind of East/West thing.
It’s not. In fact, I think a much better framework for thinking about this is that the culture gap at the center of the debate we’re having today is a culture gap between people who build hardware and people who build software. And those cultures have been diverging since the 1950s.
If you’re involved in computer science, if you do hardware or software for a living, you know what I’m talking about. Software goes way back, but not as far back as hardware. Because the very first software was hardware. And if you wanted to use a computer, it ran one program. Some of the first computers ran very specific programs; probably one of the earliest examples is the Bombe, the Enigma-cracking machine at Bletchley Park, designed by Alan Turing to crack German cryptography during World War II. It did not have software per se. It had inputs. You could input the message, and it would try to figure out the key. But it only ran one program. And you could barely even call it a computer. It was an electromechanical device with a bit of electronics; very primitive.
Software started happening in the 60s. And software represented a giant leap forward, because until then if you wanted to reprogram a computer you had to change its wiring. Or you had to flip a lot of switches. Like, a lot of switches. Like 10,000 switches in a big bank of switches, and if you got one of them wrong…that was a bit of a problem. Programming in binary, not fun.
And out of this, gradually we started having these two cultures emerge, the culture of people who build hardware and the culture of people who build software, and the fundamental difference has to do with the life cycle of development. And that persists to this day.
When you build a hardware device, your life cycle is measured in months if not years. Eighteen to twenty-four months. You design a chip to do something. You make some architecture decisions. Those decisions will arrive at the marketplace two years from now. And if you got them wrong, you start again. If you make a mistake in hardware, it ships with the mistake, and you can’t issue a patch. If you ship a phone that has a nasty tendency of blowing up, there is no software patch you can issue that will fix the battery issues. It’s out there, in people’s pockets, getting hot.
The culture that comes out of that is a very conservative culture. The planning timelines for building hardware are very long. And they require absolute precision. If you make a mistake, you don’t just change a line, recompile, try again, issue a patch. You recall $1 billion worth of silicon and turn it into scrap, because you made a mistake. A nanometer-scale mistake.
And at first, software was like that. Because if you programmed, say in Fortran, on punch cards, and you had six minutes of compute time a day, and you spent twenty hours writing your software and punching it into cards, and then you submitted it to the mainframe and during your six minutes the mainframe would go, “Bzzt! Error,” now you have to go back, spend a day figuring out why, fixing it, punching it back into cards, and putting it back on the mainframe during your next six-minute window. That’s how programming started. And if you were a programmer in those days, the attitude was, “Well, thank God I don’t have to flip a thousand switches to do this. This is so much faster! It only takes forty-eight hours to do one life cycle.”
And gradually this gap started shrinking and shrinking and shrinking. If you’re a really, really crap programmer, you don’t even plan, design, or think much about your software. You just…put some shit together, hit run, see what happens. Fix it, run, see what happens. Fix it, run, see what happens. You iterate really, really fast. The really great programmers are not that sloppy. They put some thought into every line they change. But all of us have moments when we start hacking. That’s where the term “hacking” comes from: MIT in the 1960s, where that’s how they fixed software. You just hack at it until it works.
But that encourages a mentality where your worst mistake lasts two minutes. And no one gets to see it, because you don’t ship that. That’s the beginning of the two cultures drifting apart. In hardware, your mistakes are immortalized in silicon. And they might start fires. In software, your mistakes disappear somewhere in a Git log. Nobody really has to look back at the previous commits and see all of the horrible code you wrote. Right? We all do that. If you’ve written software, you know that’s how most of us program.
That has some very interesting implications for the current debate. Because there’s another fundamental and really important difference in the culture between hardware and software. Hardware is all about ship date. You work two years for that day. When it ships. And once it ships, it’s out of your hands. You made the right design trade-offs, great. You made the wrong design trade-offs, you’re gonna be five years behind your competitors until you figure it out again. But that ship date is the last date. And once it’s out there it’s out of your hands.
Look recently at the perennial battle between AMD and Intel. How many of you are familiar with chip architecture at AMD and Intel? Not too many, so I’ll make it simple. About two years ago, AMD and Intel started working on their latest-generation chips. And, having a five-year horizon, they had to make some bets. Some trade-offs, some design decisions.
AMD made the bet that most systems that ship have a Graphics Processing Unit. And that for advanced mathematical operations (matrix manipulation, floating-point arithmetic, etc.), you would have a development environment that would take that work, optimize it in OpenCL, and ship it to the graphics card for processing. Because it’s 1,000 times faster for that kind of work. So why would you do that on a general-purpose CPU? It didn’t seem sensible. So they decided, let’s put sixty-four cores on a chip but only give them four or eight floating-point arithmetic units.
Intel said the software is not going to get that good. It’s not going to specialize these instructions. Let’s put just eight cores on a chip but give every one of them a floating-point arithmetic unit.
Two simple trade-offs, right. Two decisions. Is the industry going this way or is it going that way? Are we going to be able to leverage this new technology or not? Will the software catch up with what we’re trying to do? Will the architectures of the future look more like this, or like that? Agonizing over that choice, they get a ship date in mind and they ship. Intel got it right. AMD got it wrong. Intel dominated the desktop and server environment for this cycle. Now AMD gets to try again, three or four years later. That simple trade-off changed the fortunes of a company and the direction of an industry.
We see this in other examples. Let’s go for bigger hardware. I’m a fan of aviation; I’m a private pilot. I love that stuff. Boeing, Airbus. Five years or so ago, they made a very important bet.
Airbus said it’s mostly going to be about hub-and-spoke connections. You’re going to have big hub airports, and they’re going to run hundreds or thousands of passengers on single routes. So what we need to build is a double-decker aircraft that can seat more people than ever before. And they built the A380.
And Boeing said no, it’s going to be mostly regional point-to-point. And if we take a smaller aircraft and extend its range to nearly 8,000 nautical miles, then that choice is the better choice. And they built the Dreamliner.
Boeing was right. Airbus was wrong. Boeing has shipped several times more Dreamliners than Airbus has shipped A380s, and Airbus will never quite catch up. So now they get to try that again. But it’s going to be a five-year life cycle.
If there’s a bug in your software, by contrast, you can fix it in a matter of hours, ship a new release, done. So one important difference is this idea of a ship date. Once you put hardware out there, you’re locked in for a couple of years. And that makes you have a much more conservative attitude.
But it has a good side. Because you will only encounter one of those ship dates a couple of times a decade. Three times, four times a decade. That’s it. With software, however, something else happens. Your ship date is not the end of your problems. It is the beginning of your problems. Because once you ship, maintenance starts. And very quickly it became apparent to the software industry that software behaves a bit like perishable produce. It’s got three days of shelf life, after which, if you haven’t done maintenance, it starts rotting on you.
Software gradually degrades. Especially today’s software, which is open-source software, in a very dynamic environment, with lots of dependencies on third-party libraries. Everything is moving. SSL changes something, we find a new bug, new release. The tolerances of the network change, new release. Berkeley DB doesn’t behave the way you expected, new release. And so software is like a never-ending relationship that you’re trapped in. There is no ship date; it’s continuous shipping. It’s continuous maintenance. There is no “and now we’re done.” There’s only “and now our troubles begin!”
This divergence has created a massive culture difference between the culture of miners and the culture of software developers. It is at the root of the current discussion we’re having. From the perspective of someone building hardware, of course you’re not going to ship a new chip architecture to fix the problem. Just juice up the clock speed. This architecture still has plenty to go, right. It’s got room to grow. Change a parameter, for God’s sake, juice up the clock speed, and we can keep the current architecture and just ship it.
From a software developer’s perspective, if you’re looking at the bigger space, it’s much better to change the architecture now, before you have a lot of technical debt, and software crud, and an accumulated UTXO set, and an enormous blockchain in your data store that you have to keep forever. And of course, you’re going to be maintaining this shit either way…might as well do an architecture change.
This is the fundamental culture difference between the mining community and the software development community in Bitcoin, and not just Bitcoin. Bitcoin’s just the one that has the strongest, biggest, most vocal mining community.
But that’s not the end. That’s just the beginning. Because without even noticing, we now have a completely new category, which is going to create a completely new culture. And that is trustware. What is trustware? Trustware is this weird emergent phenomenon that happens when you combine consensus rules, running and instantiated in software, with a backing of hardware, deployed on a global network with a diverse set of participants. All of the headaches of hardware, all of the headaches of software, and some new ones.
Now, when you ship is really important. Because if you’re making a consensus rule change, you have to coordinate an entire global network. But the ship date is no longer the end of your problems; it’s just the beginning of your problems, because now you have to maintain it forever. And every mistake you make gets baked into the blockchain and has to be carried with you in the consensus rules, forever. In Bitcoin there are no bugs. There are only consensus rules created through tradition.
So how many people here are software developers who’ve worked in Bitcoin at all? Alright. You probably know about this one. It’s a classic. When you write a multi-signature script and execution gets to the part that says OP_CHECKMULTISIG, the code has to go and pop as many keys off the stack as you’ve defined in the last parameter, N. Then it inspects the signatures, of which there should be M (M of N), and does its checking. Turns out OP_CHECKMULTISIG pops one extra value. That’s a bit of a problem, because if you do a multisig of three keys and it pops four things, there aren’t four things on the stack. And if there aren’t four things on the stack, you get a stack error, your script crashes, and your money is no longer spendable.
Now, when that bug happened, because it did happen, accidentally, back in probably 2011, it wasn’t fixed before some people had put spendable bitcoin behind multisig scripts and redeemed it on the blockchain. And to do that, what they did was put in what’s called a null value, a dummy value. So they go, “Okay. You want to pop four things off the stack, three of which have to be keys and one of which is going to get ignored? Here’s bleh, key, key, key.” And so you pop “bleh, key, key, key,” throw away the bleh, keep the three keys, and it works! Done. But now, that script has to be valid forever. Because every node in Bitcoin validates everything, forever. Oops.
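To make that off-by-one concrete, here is a minimal sketch in Python of how the opcode consumes stack items for a 2-of-3 multisig. It’s a toy model with names of my own choosing, not Bitcoin Core’s actual implementation:

```python
# Toy model of OP_CHECKMULTISIG's stack behavior. Illustrative only,
# not Bitcoin Core's code. Top of the stack is the end of the list.
def op_checkmultisig(stack):
    n = stack.pop()                            # N: total number of public keys
    pubkeys = [stack.pop() for _ in range(n)]  # pop the N keys
    m = stack.pop()                            # M: required number of signatures
    sigs = [stack.pop() for _ in range(m)]     # pop the M signatures
    dummy = stack.pop()                        # the off-by-one: one extra pop!
    return True                                # (signature checking omitted)

# A 2-of-3 spend has to supply a dummy element at the bottom:
stack = ["bleh", "sig1", "sig2", 2, "key1", "key2", "key3", 3]
op_checkmultisig(stack)                        # works: "bleh" absorbs the extra pop

# Without the dummy, the extra pop underflows the stack:
stack = ["sig1", "sig2", 2, "key1", "key2", "key3", 3]
try:
    op_checkmultisig(stack)
except IndexError:
    print("stack error: script fails, funds unspendable")
```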
Now, developers are writing new versions of the software, and multisig is now part of a pay-to-script-hash (P2SH) formula. Guess what. It still pops an extra value. And so in my book there’s a big ol’ notice that says, “You will see an extra value in all the redeem scripts. That’s because there’s a bug. That bug cannot be fixed. It’s with us forever.”
Why can it not be fixed? Because the fix is worse than the problem. I mean, sure, you could fix it. You could just put a thing in the code that says, “from now on, just pop three.” And now everybody knows it, and they write redeem scripts that just pop three. Fine. No problem. But now you have a piece of code in your consensus rules, what is effectively a soft fork, that says, “Before Block X, pop four; after Block X, pop three if you see this script.” So that all of the scripts that came before are valid, and all of the scripts that come next are also valid but don’t need that dummy value.
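In the same toy model, here is a hedged sketch of what that “Before Block X / after Block X” rule would look like. The activation height and function name are invented for illustration; this is the hypothetical fix described above, not something Bitcoin actually deployed:

```python
FIX_ACTIVATION_HEIGHT = 500_000  # the hypothetical "Block X" (illustrative)

def op_checkmultisig_fixed(stack, block_height):
    n = stack.pop()                            # N public keys
    pubkeys = [stack.pop() for _ in range(n)]
    m = stack.pop()                            # M required signatures
    sigs = [stack.pop() for _ in range(m)]
    if block_height < FIX_ACTIVATION_HEIGHT:
        stack.pop()  # legacy rule: keep the extra pop so old scripts stay valid
    # After Block X there is no extra pop. But this branch can never be
    # deleted, because every node must still validate pre-fork history.
    return True                                # (signature checking omitted)
```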
What you’ve done is you’ve moved the crud from your script into your code. And now these two, three lines of code need to be maintained forever. What if there’s a bug? If you write three lines of code, on average one of them is going to be wrong. So for every line of code you add to the consensus rules, you’ve got to make sure you don’t add a bug. How do you coordinate an international network so that everybody changes the rules at the same time and you don’t accidentally invalidate transactions? This is the essence of trustware.
The essence of trustware is that we are now writing software that gets backed by hardware, deployed on a network, and establishes a set of global consensus rules where, if you make a mistake and go out of consensus, you can lose millions. You can get cheated out of transactions. You can suffer replay attacks, or malleability attacks, or all kinds of other attacks. This is not a game. This is a new software frontier, only it’s not software. It’s trustware. And trustware is way more complicated than software, or hardware, or software and hardware put together, because there’s also a global network component that’s controlled by independent actors.
Why on earth would we do all this? I mean, it doesn’t sound like a fun development exercise. Why are we creating this thing that will have its own culture of trustware developers and consensus experts, and that requires its own deep understanding and analysis and review? Why are we doing this? Because it gives us something amazing. It gives us a decentralized platform of trust that is neutral and not controlled by anyone, and that’s worth it. But it’s bloody painful.
And so this is where we are today. Within this network, especially in the case of Bitcoin, we are now seeing a direct conflict. It’s not a violent conflict. It’s simply a conflict of ideas. It’s a disagreement about the future of the network. It’s a disagreement about the future of the currency and the future of the consensus rules.
I have to assume good faith. I think both parties see the way forward as the best way forward for Bitcoin. But in technology it’s not just a matter of opinion. There is truth. Truth means something. There are correct opinions and incorrect opinions. There are opinions that match the facts, and ones that don’t. It’s not a system of belief, it’s a system of science.
Unfortunately, most of the conversation that’s happening really looks like a system of belief. Or, more likely, a soccer rivalry. On the one hand you have diehard fans of FC Barcelona, and on the other hand diehard fans of Manchester United. There is no right or wrong. There is no correct answer. There is only my team and your team, and your team is wearing the wrong color. They look silly and they can’t play soccer. Clearly. Any intelligent person can see that. Unfortunately, that doesn’t lead to any scientific conclusions, which is why we’re here.
So what we’re seeing today is the culmination of a fundamental culture clash between people who primarily build, manage, deploy, and run hardware, and people who primarily build, manage, and maintain software. And you’ll notice there are some Chinese people on the software side and some Westerners on the hardware side, because this is not a culture clash between East and West. It is a culture clash between hardware and software. And from within it, a new culture is now emerging: a culture of developers who are building trustware, who are gradually seeing the nuances and incredible difficulty of building a system of consensus rules that is backed by hardware and deployed on a global network. That is trustware. Thank you.