
On the flip side, how many times do we hear about projects that failed because the dev team was stuck building things that were never needed or never used?

It's hard to see the future.



That's it. The causality analysis in the grandparent is exactly backwards. We don't live in a world of "productized prototypes" because people are too lazy to do things right. We live in a world of prototypes because prototypes are the only products that reach market. Even where beautiful works of engineering exist, they tend to be beaten to the punch by the competing prototype anyway.

You don't treat this by whining that the prototypes are winning. You treat it by finding ways to cleanly evolve prototypes[1], and by collecting intuition (c.f. the linked article) about common mistakes and how to avoid them.

[1] Once upon a time in an age lost to history, this was the idea behind something called "agile". The thing we call "agile" today is pretty much the opposite.


You can also design prototypes so that they're impossible to operationalize beyond experimental scale.

I forget the name of it (there are probably several by now), but there's a UX prototyping toolkit where all the components look hand-drawn, so that nobody expects to be able to interact with them and then gets mad when nothing happens. It also prevents you-the-designer from being tempted to add interactivity logic at design time.

There's nothing stopping someone from creating a programming language that works the same way: something intentionally constrained to, e.g., run on an abstract machine that uses a 24-bit address space (with this baked into the design, using the other 40 bits of each pointer on 64-bit VM implementations for other important info), so that only prototype use-cases can be implemented and tested, while the system would inherently be unable to reach any kind of scale serving multiple concurrent customers.

While I don't think anyone's ever tried creating a runtime like this for general-purpose software prototyping, I know of at least one domain-specific example:

The PICO-8 (https://www.lexaloffle.com/pico-8.php) is a game runtime that's kind of like this, designed to put constraints on game development that resemble, but are orthogonal to, the kind of constraints imposed by old consoles like the Game Boy. (For example, it is programmed in high-level Lua, but has an 8192-token source-code size limit, to achieve complexity constraints similar to those of old games that needed to fit their object code on 16k ROMs, while not forcing you to compile to or write in an assembly language, nor impacting source-code readability.)


I've done something like this when bootstrapping a system that would eventually have an API with proper admin tools, but where we just needed something quick-and-dirty to get off the ground.

So I wrote some scripts in bash to do things the "worst" way possible. Directly accessing the production DB, minimal safety checks, etc. Nobody else on the team liked writing bash, so there was no fight to replace these awful tools with proper APIs.


But that's missing the point again! If you do that, you'll never reach the release of your perfect app, because you wasted that time and effort on making an unusable mockup, and there might be no money left to build the actual app. Those who make a mockup that can actually be used release it, and that becomes the business.


Note this part of my statement: "It also prevents you-the-designer from being tempted to add interactivity logic at design time."

In practice, in many industries, prototyping tools are used as the first step in the design process. The constraints built into these prototyping tools force those using them to spend less — often orders of magnitude less — time, and effort, and money(!) prototyping, than they would if they allowed themselves to prototype with production-oriented tooling.

Consider painting. A painter — unless they're working off of a photographic reference — will almost always draw a pencil sketch of the scene they want to capture in paint, before they begin the actual painting. A pencil sketch has no need to consider color (or how to mix to achieve particular color effects), or light and shadow, or dimensionality (how real light reflects off of built-up paint on the canvas), or any of that. They just need to concentrate on proportion, perspective, anatomy, etc. The sketch focuses only on (a subset of) the design of the painting, while preventing any work on the implementation details specific to the medium of paint. Which means the sketch only takes a few minutes, rather than days.

Once they have this sketch, they can show the sketch to the client who commissioned the painting (or to the master of the studio, if you're painting for gallery sale), and the client/"product owner" can point out places where the sketch does not align to their vision for the painting, which can be used to iterate on the design.

There are many other things to get right once the "real" painting starts, but if you don't get the "bones" of the thing right, the client won't want the painting. The sketch lets you evaluate just the "bones" of the painting before even considering the meat on those bones.

A prototyping tool for programming should be the same: something to let you consider the "bones" of business logic, preconditions/postconditions, etc., without the "meat" of the particular library APIs and data-structure juggling required to glue things together and achieve scalability in a particular language.

The best prototyping tools are ones that pare down your focus to the smallest useful kernel of design, and thereby allow you to iterate in near-realtime. An experienced painter will become skilled enough at making sketches, that they can sit down with a client and sketch in response to the client's words, changing the sketch "interactively" in response to the client.

In fact, some prototyping tools are streamlined enough to allow the client themselves to iterate and ideate on the sketch by themselves; and then only submit it to the productization process once they're happy with it!

Game-development example again: RPG Maker. As far as I can tell, RPG Maker as a software product was never really expected by its vendor to be used in the production of commercial games (although it has been repeatedly marketed that way). Until very recent releases in the series, it was far too constrained for that: unless you entirely eschewed most of its engine [as most of the RPG Maker "walking simulators" like Yume Nikki do], it gives you a very static set of engines: battles that work exactly one way, menus that work exactly one way, etc. A tool that was actually for building role-playing games as commercial products would have an almost-monomaniacal focus on letting you customize these systems to make your game distinctive; RPG Maker is entirely the opposite.

Rather, I believe that in its idiomatic use, RPG Maker has always been intended as a tool to let a client "sketch out" the narrative(!) "bones" of an RPG. That's why it gives you so many built-in assets to work with, but also why those assets are so generic, and also why ASCII/Enterbrain never created an "asset store" nor documented the asset formats: the assets are meant to act essentially as wireframe components. You're not meant to re-skin an RPG Maker game; you're meant to just use the generic assets to build a generic-looking game, because the look of the game isn't the point, any more than the look of a pencil sketch is the point.


The prototyping toolkit you're thinking of is probably Balsamiq Mockups, which explicitly emulates making wireframes on paper. Other wireframing tools adopted the technique as well.


Right. In that vein, it wasn't the 32-bit limitation that was the real problem. It was the inescapable 32-bit limitation built into the design that was the problem. They could have spent a few bits to version the packet format, for example, which would have at least provided an escape hatch. Or heck, a single bit: value 0, known original format; value 1, unknown format (or a known future format, to be parsed by some future IP packet decoder, but which makes old decoders error out).

And they repeated the same mistake with the IPv6 design by not making it an evolution of or backwards-compatible with IPv4.

Literally neither of these was built with scalability in mind.


The first 4 bits of both IPv4 and IPv6 packets are a version field.


> The first 4 bits of both IPv4 and IPv6 packets are a version field.

And in case it wasn't clear enough: it's called IPv4 because the value of that version field is 4, and it's called IPv6 because the value of that version field is 6.


TLS has version fields everywhere, yet TLS 1.3 identifies itself on the wire as 1.2 and tucks the real version into an extension field, because anything else broke middleboxes and traffic-analysis tools in the wild.



