I love how all these 'brand' fonts look indistinguishable to an untrained eye and still brain-fryingly, boredom-inducingly close to each other to someone like me who actually studied & worked in typography.
It's just the design team running in place. And at a certain scale, it's cheaper to pay a type foundry $100k once than to keep paying Monotype ongoing fees for a legacy family.
But as someone who has made multiple neutral sans families, I agree. The launch rhetoric about creating a differentiated visual identity is comical when you look at all the interchangeable corporate sans together.
I haven't hit the "dumb zone" at all in the last two months. I think this talk is outdated.
I'm using CC (Opus) with thinking and Codex with xhigh always on.
And the models have gotten really good when you let them do stuff where the goals are verifiable by the model itself. I had Codex successfully fix a Rust B-rep CSG classification pipeline over the course of a week, unsupervised. It had a custom STEP viewer that would take screenshots and feed them back into the model, so it could verify progress, or the triangle soup that meant non-progress, on its own.
Codex did all the planning and verification, CC wrote the code.
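Roughly, the verification side looked like this (a minimal sketch; `captureViewerScreenshot`, `askModel`, and `applyProposedFix` are placeholder names, not the actual tooling):

```typescript
// Sketch of the screenshot-verification loop. The three declared helpers are
// placeholders: the real versions talk to the STEP viewer and the model APIs.
declare function captureViewerScreenshot(): Promise<Uint8Array>; // PNG bytes
declare function askModel(prompt: string, image: Uint8Array): Promise<string>;
declare function applyProposedFix(instructions: string): Promise<void>;

async function fixUntilVerified(maxIterations: number): Promise<boolean> {
  for (let i = 0; i < maxIterations; i++) {
    const png = await captureViewerScreenshot();
    const verdict = await askModel(
      "Does this render show a correctly classified solid, or triangle soup? " +
        "Reply 'OK' if correct; otherwise describe the failure and the next fix.",
      png,
    );
    if (verdict.startsWith("OK")) return true; // model judged the geometry sound
    await applyProposedFix(verdict); // hand the proposed change to the coding agent
  }
  return false; // budget exhausted without a verified result
}
```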
In my experience, this would not have been possible at all six months ago.
Maybe with a lot of handholding; but I doubt it (I tried).
I mean both the problem itself, for starters (it requires a lot of spatial reasoning and connected math), and the autonomous implementation. Context compression was never an issue in the entire session, for either model.
The translations make no sense to a German native speaker. The list even swaps meanings, i.e. between confusion and clutter.
Accurate translations are:
Verwirrung = Confusion
Zwietracht = Discord
You swapped i and e; somehow English speakers do this to German words all the time. The 'ie' here is a long 'i'.
Zweitracht, on the other hand, would mean a "double traditional costume", if that word existed (it does exist in theory; it is just the number two [Zwei] and the noun for a traditional costume [Tracht] strung together; it would be a great name for a German shop that sells used/pre-owned traditional costumes, btw).
Unordnung = Clutter
Beamtenherrschaft = Rule of the public servant class
Illuminatus! is one of those works where there's a decent chance this is just a mistake or oversight, but also a decent chance this is exactly what the authors intended. You never can quite tell, and they definitely liked that.
I think the reason that English speakers swap ie/ei is that the pronunciations of these are not really consistent in English (at least in the American accent I speak), and I can't think of any words where both orderings exist but have different meanings. So the general impression I have is that I know there are supposed to be rules about it, but it seems pretty arbitrary and semantically unimportant.
Right, we truly don't have a strong rule about differentiating these in the standard American dialect! Most people say STINE for this one, but if you say STEEN, nobody is gonna be confused or tell you that it's wrong.
It's worth noting that, in retrospect, PageMaker won the DTP wars of the 1990s.
Quark XPress was the industry leader in that period (most users had a love-hate relationship with it). There was also Ventura Publisher but that had the least market share.
Adobe acquired Aldus and PageMaker became an Adobe product.
Quark was thought of (and reportedly thought of itself) as invincible given its DTP market penetration/moat. Sound familiar?
Their pockets were deep enough then that they even offered to buy PageMaker from Adobe ... to bury it.
Instead, Adobe released InDesign, and while it was a rewrite, it is clear to anyone who used PageMaker that the whole UX and way of working was taken 1:1 from PageMaker, not XPress.
This was quite a daring move.
Adobe didn't yet have the standing in DTP to know whether people would switch.
Especially since many software companies had built an ecosystem of XPress plugins (very similar to the ecosystem of plugins that cemented Photoshop's and later After Effects' positions as industry standards in image editing and motion graphics).
And Adobe wouldn't have a competing offering for those when InDesign initially shipped, and likely not for years to come.
On the other hand, XPress was known for being unstable to the degree of being a PITA to work with. Even people who much preferred its UX over PageMaker's were aware of this.
Still, I recall XPress users mocking InDesign and saying it would go nowhere.
Within a few years though, PageMaker's spiritual successor proved them wrong.
Sadly InDesign is now a joke.
Six years ago I had to use it for something simple like exporting 500 letter template instances to a single PDF for printing (where each letter gets the address/addressee replaced).
It couldn't do it. It would crash every time. I found bug reports and forum threads from years ago where people complained about this.
I managed to eventually export it as 'PDF for Web', as I found a reddit thread where someone noted this hit an entirely different PDF export code path.
In short: today, after years without serious competition, InDesign is just a cash cow for Adobe, getting as much love as Quark XPress did before its eventual demise.
"We sell car tires, selling fruit is just a side business."
"The fact that our fruit is rotten and customers complain about that does not faze us as, again: we're primarily a car tire business and that's where our revenue comes from."
The 'reasoning' at the sociopath level[1] of the corporate hierarchy never fails to entertain.
> I dropped it in the end partly because of all the problems and edge cases, partly because it's a solution looking for a problem: AI essentially wipes out any demand for generating video in browsers.
That is only because your view omits some other problems this solves/products this enables.
There is an incredible ecosystem of tools out there in browser land for creating animation.
If you can capture frames from the browser, you can render these animations as videos with motion blur: render 2,500 frames for one second of video, blend 100 frames each with a shutter function, and you get 25 fps with 100 motion-blur samples per frame (a sample count After Effects can't do, e.g.).
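For the blending step, a minimal sketch (assuming the captured frames are equally sized raw RGBA buffers; strictly you'd want to blend on linearized values, but this shows the shape of it):

```typescript
// Blend a group of sub-frames into one output frame, weighted by a shutter
// function sampled once per sub-frame (a box shutter = all weights equal).
function blendSubFrames(frames: Uint8Array[], weights: number[]): Uint8Array {
  const totalWeight = weights.reduce((a, b) => a + b, 0);
  const acc = new Float64Array(frames[0].length);
  frames.forEach((frame, i) => {
    const w = weights[i] / totalWeight;
    for (let p = 0; p < frame.length; p++) acc[p] += frame[p] * w;
  });
  return Uint8Array.from(acc, (v) => Math.round(v));
}

// 2,500 captured sub-frames -> 25 output frames with 100 blur samples each.
function renderSecond(subFrames: Uint8Array[]): Uint8Array[] {
  const samples = 100;
  const boxShutter = new Array(samples).fill(1); // simplest shutter function
  const output: Uint8Array[] = [];
  for (let f = 0; f + samples <= subFrames.length; f += samples) {
    output.push(blendSubFrames(subFrames.slice(f, f + samples), boxShutter));
  }
  return output;
}
```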
There’s a tiny, tiny market for people who would pay for this.
Also, you must understand that Chrome is not a deterministic renderer. You cannot get per-frame control, because it is fundamentally designed to get frames in front of the user fast.
They did some work around the concept of virtual time a few years ago with this sort of thing in mind and eventually dropped it.
> There’s a tiny, tiny market for people who would pay for this.
Not sure what market you are talking about.
What I was talking about: people pay for motion graphics. LLMs are excellent at creating motion graphics from/around browser technology ...
Advertising is a huge market and motion graphics is everywhere in video/film-based advertising.
> Also, you must understand that Chrome is not a deterministic renderer. You cannot get per-frame control, because it is fundamentally designed to get frames in front of the user fast.
It is absolutely deterministic if you control the input. There is no "add random number to X" in Chrome. The non-determinism comes from user input and time.
I know this because the company I work for did extensive tests around this last year. I was one of the people working on that part.
We looked into the same approach as Replit. The only reason we gave up on it was a product-related change in our needs, not because it is impossible (which, I guess, their blog post proves).
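To illustrate pinning down those two inputs, a minimal sketch assuming Puppeteer drives headless Chrome (stepping animations frame by frame additionally needs something like CDP's experimental virtual-time budget, per the post above):

```typescript
// Minimal sketch: freeze wall-clock time and replace Math.random before any
// page script runs, so repeated captures of the same page are identical.
import puppeteer from "puppeteer";

async function captureDeterministicFrame(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.evaluateOnNewDocument(() => {
    const t0 = 1_700_000_000_000; // frozen wall clock
    Date.now = () => t0;
    performance.now = () => 0; // shadow the prototype method
    let seed = 42; // deterministic LCG instead of Math.random
    Math.random = () => {
      seed = (seed * 1664525 + 1013904223) >>> 0;
      return seed / 4294967296;
    };
  });
  await page.goto(url, { waitUntil: "networkidle0" });
  const png = await page.screenshot(); // PNG bytes; identical across runs
  await browser.close();
  return png;
}
```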
Not when you can say to Nano Banana "make a video showing a thousand monkeys running down a road, all wearing suits, with cinema-quality credits rolling over listing the ingredients of corn flakes", and it spits out something amazing.
You are sold on the AI snake oil. There are no models that can generate art-directed VFX or motion graphics for multi-second clips without breaking consistency, missing the mark, or fucking up the timing.
An exact color from a customer's brand book? Complex animated typography? Forget it.
Check out my GH profile and who I work for. We're ex-blockbuster VFX professionals. We use AI everywhere. We know what is possible and what isn't.
The market we serve is huge. And every ad we create is bespoke. And the product the ads are for is unique.
A scratch on the rim of the left front wheel? When we do a turntable, that scratch needs to be in every frame, and the rim cannot change its number of spokes or the like (which is what latest-gen models like Nano Banana v2 still do).
Show me a model that can do this level of detail. They barely manage it now, for a few seconds, for special cases like humans. Even there you may get subtle changes of eye or hair color. For anything that is not a human and needs to stay exactly the same in each frame: good luck.
But let's assume the models were there today.
Still a dead end because: cost.
You know how much it costs to create a 10-15 second clip with a state-of-the-art/somewhat useful model on a high-VRAM GPU instance, versus such a clip rendered by a headless Chrome browser on a cheap, low-RAM, CPU-only spot instance?
We don't use Chrome (see my previous post). We use a bespoke 2D/3D renderer with a custom AI/human-fed pipeline. But crucially, no final frames ever come from AI for now, for the reasons I mentioned at the top.
We're talking multiple orders of magnitude difference in cost.
As of this writing, a 15-second clip with Veo 2 costs about 7.50 USD. This needs to be two orders of magnitude cheaper, i.e. around 0.075 USD per clip, to become viable.
tl;dr: if we relied on AI for this, our business would not exist.
I just skimmed this, but if you try any sort of color correction in a non-linear color space, e.g. display-transformed sRGB, you're in for a world of pain.
What R, G & B mean must be known exactly, otherwise you may as well be rolling dice.
G = 1.0 has a completely different meaning in sRGB, ACEScg or Adobe ProPhoto.
And even what 'white' means depends on the color space you work in.
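A concrete taste of the pain: average pure red and pure green. Do it on display-encoded sRGB values and you get a visibly darker result than decoding to linear light first (a minimal sketch using the standard sRGB transfer function):

```typescript
// sRGB transfer functions: decode display-encoded values to linear light,
// do the color math there, then re-encode for display.
function srgbToLinear(c: number): number {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c: number): number {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

const red = [1, 0, 0];
const green = [0, 1, 0];

// Naive: averaging the encoded values directly -> [0.5, 0.5, 0], a dark olive.
const naive = red.map((c, i) => (c + green[i]) / 2);

// Correct: decode, average in linear light, re-encode -> ~[0.735, 0.735, 0],
// the noticeably brighter result a real double exposure would give you.
const correct = red.map((c, i) =>
  linearToSrgb((srgbToLinear(c) + srgbToLinear(green[i])) / 2),
);
console.log(naive, correct);
```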
I started in commercials and VFX in the '90s, when almost all the places I worked at had a poor, or at best incomplete, understanding of color science.
Almost all rendering, grading, etc. was done in the aforementioned (display-transformed) sRGB space.
So while there is the aspect of how something should look, which I understand is a huge part of what this PDF is about, there is also the question of how to attain that look once you know what it should be.
Giving you the benefit of the doubt and assuming [1] does not play a role in your thinking:
I don't mean this to be rude in any way, and I apologize if it comes across as such, but believing it won't be used in exactly this way is just naive. History has taught us this lesson again and again and again.
Related: https://eidosdesign.substack.com/p/why-every-brand-looks-the...