Re-factoring code is _not_ a panacea -- it's more likely that the factors which contributed to the code needing re-factoring in the first place are still very much in place, ready to reproduce the same condition, and another round you go. The factors that produce the causes of re-factoring usually border on the psychological, embedded deeply within the brains of the developer or developers who own the code: habits, beliefs, convictions, even "professional traumas". Related here is Conway's Law, whereby the team, for all its individual capacity and capability, cannot but build software that mimics the structure of the developers' ultimate (larger) organisation, thus tying the success of the former to the success of the latter. Re-factoring will largely just repeat the outcome if the organisation hasn't changed.
The exception being obviously a team approaching someone else's codebase -- including that of their predecessor, if they can factor in for Conway's Law -- to re-factor it.
But the same person or persons announcing re-factoring? I always try to walk away from those discussions, knowing very well they're just going to build a better mouse trap. For themselves.
Don't get me wrong, iterating on the product of your own then-brain is all well and good, but it takes _more_ to escape the carousel. It takes sitting down, noting down the primary factors driving the poor architecture, and taking a long hard look in the mirror. Not everything is subjective or equivalent, as much as many a developer would like to believe. It's very attractive to stick to "as long as we're careful and diligent, even a sub-optimal design can be implemented well". No, it won't be -- this is a poster-child exception to the rule if there ever was one -- your _design_ is the root, and from it and it alone springs the tree that you'll have to accept or cut down; trimming it only does so much.
I find multiple "strange" flaws with the article, even for my appreciation of Ada _and_ the article as an essay:
* The article claims only Ada has true separation of implementation vs specification (the interface), but as far as I am able to reason, e.g. JavaScript is also perfectly able to define "private" elements (not exported by an ES6 module) while keeping them usable in the module that declares them -- if this isn't the "syntactical" (and semantical) separation ascribed to Ada, what difference is the article trying to point out?
* Similarly, Java is mentioned, where `private` apparently (according to the article) makes the declaration "visible to inheritance, to reflection, and to the compiler itself when it checks subclass compatibility" -- all of which is false if I remember my Java correctly. A private declaration is _not_ visible to inheritance, and consequently the compiler can ignore it in a subclass, since the superclass behaves the same regardless -- making the "compatibility" a guarantee by much the same consequence.
I am still reading the article, but having discovered the above points, it detracts from my taking it as seriously as I set out to -- wanting to identify value in Ada that we "may have missed" -- a view the article very much wants to front.
> The article claims only Ada has true separation of implementation vs specification (the interface), but as far as I am able to reason, e.g. JavaScript is also perfectly able to define "private" elements (not exported by an ES6 module) while keeping them usable in the module that declares them -- if this isn't the "syntactical" (and semantical) separation ascribed to Ada, what difference is the article trying to point out?
This is false. For example in Ada you can write:
  package Foo is
     type Bar is private;
     procedure Initialize (Item : in out Bar);
  private
     type Bar is record
        Baz : Integer;
        Qux : Float;
     end record;
  end Foo;
Users of the Foo package know there is an opaque type called Bar. They can declare variables of the type and use the defined API to operate on it, but they cannot reference the implementation-defined private members (Baz, Qux) without compile errors. Yes, Ada does give you the power and tools to cast it, in a blatantly unsafe and obvious way, as another type or as an array of bytes or whatever -- but if you're doing stuff like that, you have already given up.
In JavaScript there are no such protections. For example if you have a module with private class Bar and you export some functions that manipulate it:
  class Bar {
    constructor() {
      this.Baz = 420;
      this.Qux = 1337.69;
    }
  }

  export function Initialize() {
    return new Bar();
  }
In client code you have no issue inspecting and using the private values of that class:
  import { Initialize } from 'module';

  let myBar = Initialize();
  myBar.Baz = 42069; // works just fine
  Object.keys(myBar).forEach(console.log); // you can enumerate properties
  myBar.Quux = 'Corge'; // add new properties
  delete myBar.Baz; // I hope no functions rely on this...
Using the private parts of Bar should 100% be a compilation error, and even the most broken languages would at least make it a runtime error. Lmao JS.
Who wants to bet that GP never reads that link and proceeds to continue to complain about the same outdated JavaScript issues for the next two decades.
Not to mention the fact that GP's issue only matters if you're using classes. You can define module-level variables and simply not export them and they are 100% private. Or, they can just define the variable inside of a function and protect it by a closure. I can't imagine writing multiple paragraphs of complaints about a language that I don't actually understand how to use.
I've read it and I agree private properties on classes satisfy the requirement. It does allow you to hide implementation details and I was unaware it was added; though with anything JavaScript you can usually find a way around it.
However I don't really want to talk to you. You are rude.
The reflection part is true. Private members are accessible to reflection in Java: you can call setAccessible(true) and then modify even the contents of a String, for example (at least prior to the strong encapsulation of JDK internals introduced with the module system).
"JavaScript's module system — introduced in 2015, thirty-two years after Ada's — provides import and export but no mechanism for a type to have a specification whose representation is hidden from importers."
Then:
"in Ada, the implementation of a private type is not merely inaccessible, it is syntactically absent from the client's view of the world."
Am I missing something -- a JavaScript module is perfectly able to declare a private element by simply not exporting it, accomplishing what the author ascribes to Ada as "is not merely inaccessible, it is syntactically absent from the client's view of the world"? The same would go for some of the other languages the author somewhat carelessly lumps together with JavaScript.
I loved the article, and I have always had curiosity about Ada -- beyond some of the more modern languages in fact -- but I just don't see where Ada separates interface from implementation in a manner that's distinctly better or different from e.g. JavaScript modules.
Assuming we’re talking about TypeScript here, because JavaScript doesn’t have exportable types… Any instance in JavaScript, whether or not its type is exported, is just an object like any other, that any other module is free to enumerate and mess with once it receives it. In Ada there are no operations on an instance of a private type except the ones provided by the source module.
In other words, if module X returns a value x of unexported type T to module Y, code in module Y is free to do x.foo = 42.
To preempt the obvious: yes, I know _everything_ (nearly) in JavaScript is an object, but a module exporting a `Function` can expect the caller to use the function, not enumerate it for methods. And the function can use a declaration in the module that wasn't exported, with the caller none the wiser about it.
I think you're confusing values with types. JS modules can certainly keep a value private, but there's no way for them to expose an opaque type, because that concept simply doesn't exist in JS. The language only has a few types, and you don't get to make more of them. TypeScript adds a lot of type mechanism on top, but because it's restricted to being strippable from the actual JS code, it doesn't fundamentally change that.
That field is opaque, but the entire type isn’t, no matter what you do. E.g.,
  let x = new Age();
  x.notSoOpaque = 42;
  console.log(x.notSoOpaque);
We can all agree to layer conventions on top of the language so we just don’t do stuff that violates the opacity. But the same is true of assembly language.
Assigning to `notSoOpaque` (or any other) property on an object in this case doesn't modify its behaviour, because the property isn't structurally part of the interface -- there's no code defined by the creator / owner of the object (e.g. through the class) that uses it. So it doesn't violate the contract. Private fields are inaccessible, everything else is accessible and is thus part of the interface. I am not saying (and never did) this is the same level as Ada, but your example looks contrived to me -- I don't get the relevance.
But in defence of JavaScript -- since it enjoys routine bashing, not always undeserved -- it now has true runtime-enforced private members (the syntax is prefixing the name with `#`, strictly inside a class declaration; the feature landed in ES2022, not ES6), but yeah -- this doesn't invalidate the statement "kind of got there 32 years after Ada, stumbling over itself".
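For anyone who wants to see the runtime enforcement in action, here's a minimal sketch (the `Counter` class and its field are made up for illustration):

```javascript
class Counter {
  #count = 0; // truly private: invisible to enumeration and property access

  increment() {
    this.#count += 1;
  }

  get value() {
    return this.#count;
  }
}

const c = new Counter();
c.increment();
console.log(c.value);          // 1
console.log(Object.keys(c));   // [] -- #count never shows up
console.log('#count' in c);    // false; and `c.#count` outside the class
                               // body is a SyntaxError at parse time
```

Unlike the old underscore-prefix convention, there is no way to reach `#count` from client code short of rewriting the class itself.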
JavaScript has supported real data hiding since the beginning using closures. You define your object in a function. The function's local variables act as the private members of the object. They are accessible to all the methods but completely inaccessible to consumers of the object.
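A quick sketch of what the parent describes -- the names (`makeAccount`, `balance`) are illustrative, not from any particular codebase:

```javascript
// Factory function: `balance` lives in the closure and acts as the
// object's private member. No property on the returned object exposes
// it, so clients cannot enumerate or overwrite it directly.
function makeAccount(initial) {
  let balance = initial; // the private "member"

  return {
    deposit(amount) { balance += amount; },
    getBalance() { return balance; },
  };
}

const account = makeAccount(100);
account.deposit(50);
console.log(account.getBalance());  // 150
console.log(Object.keys(account));  // ['deposit', 'getBalance'] -- no balance
account.balance = -1;               // merely adds an unrelated property...
console.log(account.getBalance());  // ...the closed-over state is untouched: 150
```

The trade-off is that every instance carries its own copies of the methods, which is why the class-based `#` syntax was eventually added anyway.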
I completely forgot about closures. Frankly, they're still my go-to method for encapsulation, in part because the Java-isation of JavaScript -- the private class members and the onslaught of the "Alan Kay's ideas meet Simula" OOP flavour -- is relatively new, and I am still unsure whether it's a critical thing to have in JavaScript.
Hey, $DEITY did its absolute best with the constraints and the requirements. But hey, can't please everyone apparently. Be happy you can relieve yourself well past the intended warranty period. The parts were designed to be easily _aftermarket_ replaceable with sufficient advances in technology, retaining the fundamental design without changes.
First, taking the opportunity this discussion presents, I'd like to state for the record, AGAIN, that I have long appreciated the Win32 API and still do -- not because it's great in and of itself necessarily (it certainly has more warts than your average toad native to the Amazon), but because it de-facto worked for a long while through simple iteration (which grew warts too) _and_, while it didn't demand Microsoft have everything for _everyone_, it kept Win32 development stable "at the bottom", as the "assembly" layer of Windows development, which everything else was free to build on, _in peace_.

Ironically -- looking at the volume of APIs and SDKs Microsoft is churning out today, by comparison, through sheer mass and velocity -- they've proven utterly unable to be the sole guardians of their own operating system. There's a plethora of articles shared on Hacker News about this inability of theirs to converge on some subset of software a Windows developer can use to just start with a window or two of their own on the screen. Win32 _gave you exactly that_. Even a `CreateWindow2` export would have worked beyond what `CreateWindow` or `CreateWindowEx` couldn't provide, because you could count on someone who loved it more to abstract it with a _thin_ layer like WxWidgets etc. Things _worked_.

Now there's internal strife between the .NET and "C++ or bust" teams at Microsoft, and the downstream developers are everything between confused and irritated. This is entirely self-inflicted, Microsoft. It's also a sign of bloat -- if the company could split these groups into subsidiaries, they could compete on actual value delivered, but under the Microsoft umbrella the result is entirely different.
Second -- and this is a different point entirely -- not two weeks ago there were at least _two_ articles shared here, which I read with a mix of mild amusement and sober agreement, about the _opposite_ of what the author of the article linked above advocates for -- _idiomatic_ design (usually one that's internally consistent):
What I am getting at is that this is clearly a case of different people vocally preferring different -- _opposite_ -- UX philosophies. From my brief stint with graphic design, I know there's no silver bullet there either -- consistency is, on some level, in a locked-horns conflict with creativity (which in part implies _defiance_) -- but it's just funny that we now have examples of both in the above, to which I should add:
> This is why we can't have nice things!
Also, while we "peasants" argue about which way good design should lean -- someone likes their WinAmp-like alpha-blended non-uniform windows and someone else maintains anything that's not defined by the OS is sheer heresy -- the market for one or the other is kept well fueled and another round on the carousel we all go (money happily changing hands).
For my part I wish we'd settle, as much as settling can be done. The APIs should support both, but the user should get to decide, not the developer. Which is incidentally what CSS was, _ideally_, kind of supposed to give us, but we're not really there with it, and I am digressing.
The ugly truth indeed. It sucks to die for a world you won't get to enjoy, but sometimes it's the only viable solution. Much of our progress has been about minimising casualties and human suffering in order to sustain the world most can agree is better (than the alternatives), but it seems the wave's period just puts the troughs farther apart -- and when one hits, it's like taking a breath before the water swallows you, and without training it's quite the panic and suffering (and prospect of death). We know it's in our bones, but we want to forget, because our bodies are made to interpret pain in the most direct and literal sense -- re-conditioning is always painful too. Strong people create weak people who create strong people, etc.
So yeah _we_ will be fine, but some of us definitely won't, and with the growth in our numbers on Earth, the proportion of martyrs may be growing. Quantifying personal suffering is not possible, especially if the prospect is death.
I don't want to stir up the hornet's nest here, but in my humble opinion the entire problem rests on the unabated and unchecked modern, "late-stage" capitalism model, championed by the U.S. and since exported to, and having taken good root, everywhere else -- even in Europe, where it as of yet has a few more checks and balances (which unsurprisingly draws a lot of ire from its acolytes and priests across the Atlantic).
The Soviet Union lost due to an inferior societal model, but this too has strayed far from what once was a relatively sustainable path. The American dream is now a parody of itself, as it takes ever more to end up among the winners. I could go on about the irony of wanting to escape the pit while not acknowledging that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks or Trumps, or their hordes of peripheral elites.
Point being, the model doesn't work _today_ with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have humanism and the kind of AI we're about to "enjoy".
The acceleration of wealth disparity may prove to be nearly geometric, as the common man is further stripped of any capacity to inflict change on the "system". I hope I am wrong, but for all their crimes, anarchy and, in a twist of irony, inhumane treatment of opponents, the October revolutionaries in Russia -- yes, the Bolsheviks -- were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't face mass surveillance in the capacity our gadgets allow the "governments" of today, nor were those governments aided by AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put in action). So although the situation may become similar, we're increasingly in no position to change it. The difference may be counted in _generations_, as in it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history proves they only have to be short-sighted enough for evil to take root and thrive.
Sorry for the wall of text, but I do agree with the point of the blog post in a way -- demanding people become civilised and refrain from throwing eggs (or Molotovs) on celebrities that are about to swing _entire governments_, is not seeing the forest for the trees.
There's also no precedent, in a way -- the historical cataclysms we have created ourselves have been on a smaller scale, so we're spiraling outwards, and not all of the tools we think we have are going to produce the effect required to enact the change we want. In the worst case, of course.
I started using Git around 2008, if memory serves. I have made myself more than familiar with the data model and the "plumbing" layer, as they call it, but it was only a year ago -- after well over a decade of using Git, in retrospect -- that a realisation started dawning on me: most folks probably have a much easier time with Git than I do, _because_ they don't care as much about how it works, _or_ they just trust the porcelain layer and ignore how "the sausage is made". For me it was always an either-or situation -- I still don't trust the high-level switches I discover trawling Git's manpages unless I understand their effect on the _data_ (_my_ data). Consequently, I am very surgical with Git, treating it like a RISC processor -- most often at the cost of development velocity.

It's started to bug me really badly, as in my latest employment I am expected to commit things throughout the day, but my way of working just doesn't seem to align with that. I frequently switch context between features or even projects (unrelated to one another by Git), and when someone looks at me waiting for an answer as to why it takes half a day to create 5 commits, I look back at them with the same puzzled look they give me. Neither of us is satisfied.

I spend most of the development time _designing_ a feature; then I implement it, and occasionally it proves to be a dead end, so everything needs to be scrapped or stashed "for parts", rinse, repeat. At the end of the road developing a feature, I often end up with a bunch of unrelated changes -- especially in a neglected code base, which isn't out of the ordinary in my place of work, unfortunately. The unrelated changes must be dealt with, so I am sitting there with diff hunks trying to decide which ones to include, occasionally resorting even to hunk _editing_. There's a lot of stashing, too.
Rebasing is the least of my problems, incidentally (someone said rebasing is hard on Git users), because I know what it is supposed to do (for me), so I deal with it head on and just reduce the whole thing to a series of simpler merge conflict resolution problems.
But even with all the Git tooling under my belt, I seem to have all but concluded that Git's simplicity is its biggest strength but also no small weakness. I wish I didn't have to account for the fact that Git stores snapshots (trees), after all -- _not_ the patch-files it shows, nor the differences between snapshots. Rebasing creates copies or near-copies, and it's impossible to isolate features from the timeline their development intertwines with. Changes in Git aren't commutative, so when my human brain naively thinks I could "pick" features A, B and C for my next release, ideally with bugfixes D, E and F too, Git can only hand me individual commits -- and the features and/or bugfixes may not all neatly lie along a single shared ancestral stem, so either merging is non-trivial (divergence of content compounded over time) or I solve it by assembling the tree _manually_ and using `git commit-tree`, just to avoid dealing with the more esoteric merge strategies. All these things _do_ tell me there is something "beyond Git", but it's just intuition, so maybe I am just stupid (or too stupid for Git)?
I started looking at [Pijul](https://pijul.org/) a while ago, but I feel like a weirdo who found a weird thing no one is ever going to adopt, because it's, well, weird. I thought relying on a "theory of patches" was more aligned with how I imagined a VCS might represent a software project in time, but I also haven't gotten far with Pijul yet. It's just that somewhere between Git and Pijul lies the better VCS [than Git] I desire to find, and I suspect I am not the only one -- hence the point of the article, I guess.
Between the genuine weirdos, the autistic and/or the neuro-divergent, is there anyone left, really? Do the "normies" genuinely exist? Happy-go-lucky, knows a bit about everything but doesn't nerd out on anything, picks up every conversation subject and listens and holds their own in a manner that is just right? I am genuinely curious about the existence of these "superhumans".
There are many many of these socially-skilled normies. But, by virtue of being socially skilled, most have already pretty much filled up their social capacity and don't tend to show up at the kind of venues dedicated to helping under-socialized people meet up.
While there is often a "normal" (bell-curve fitting) distribution for individual factors, putting them together can be counter-intuitive.
> Even when considering just three dimensions, fewer than 5% of pilots were “average” in all. [1]
I would guess many/most people probably think they fall into either (1) the normal bucket or (変) the weird/fringe bucket. Either "I am pretty normal" or "I am an outsider". How many think "We're all fairly different once you cluster in any 3 interesting dimensions!"?
But people feel that dichotomy, which makes me think it is largely about perception relative to a dominant culture: the in-group versus out-group feeling. For example, atheists might feel like outsiders in many parts of the U.S., but less so in big cities and in other countries. In dense urban walkable cities (like NYC), people see diversity more directly and more often. Seeing a bunch of people is different than seeing a bunch of cars.
I think it should be fairly easy to determine if atheists really are outsiders in parts of the US or if it's just perception: just look at voting results, and church attendance for any given area. I don't think it's merely perception at all; visit any rural area and you'll likely see a surprising number of churches relative to the population.
Also, seeing people walking around in public doesn't tell you anything about their religious beliefs unless they're in some sect where they make it obvious with their clothing or hairstyle.
"Just"? How would you build a predictive model that inferred aggregate individual qualities such as "% atheists" based on voting results? That would be a rather indirect and distorted path for estimation. There are better ways.
It's not a great way, admittedly, but there is a very high correlation between Republican voters and religiosity. Very high turnout for Republican candidates plus lots of active churches in an economically-poor area I think is a reliable indication that atheism in that area is low.
It's a Japanese word for "weird". I'm guessing that OP is a bit of an otaku (aka "obsessed with Japan") -- which is either ironic or completely appropriate.
My first thought was that they were an LLM, but then checking their profile it seems they've been around since 2012 and have a comment expressing that they seem to get accused of being an LLM a lot, and suggesting people don't do that.
> Quite soon these accusations will nearly always be accurate.
/headscratching They don't have to be, do they? It is possible that some people will build identity systems with norms that e.g. humans type with their own hands. These could become popular, at least conceivably, in certain areas. Hard to enforce for sure. And getting harder and harder to distinguish reliably.
The "normie" doesn't really exist. Everyone is kind of weird in some aspect, which might not be obvious on a surface level.
But having gone to a bunch of programming meetups, the majority of people are perfectly pleasant and good to socialise with. The weirdos are usually non tech people who have an app or crypto idea they want help with. Or just total crazy people who just showed up to the first event they could find regardless of topic.
The hero image on the linked page, which consists of a muted teal background with the words "Introducing Muse Spark", weighs in at 3.5 MB. I don't even...
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Maybe they did get their models to test their pages, but they didn't tell their models to pretend that they're browsing on mobile using a 3G connection.
Good catch - looks like it's a PNG image, with an alpha channel for the rounded corners, and a subtle gradient in the background. The gradient is rendered with dithering, to prevent colour banding. The dither pattern is random, which introduces lots of noise. Since noise can't be losslessly compressed, the PNG is an enormous 6.2 bits per pixel.
While working on a web-based graphics editor, I've noticed that users upload a lot of PNG assets with this problem. I've never tracked down the cause... is there a popular raster image editor which recently switched to dithered rendering of gradients?
My reasoning is that once upon a time, when I was using Macromedia Fireworks, PNGs gave far, far better results than JPGs did, at least in terms of output quality -- nearly certainly because I didn't understand JPG compression -- so for web work in the mid-2000s PNGs became my favourite. Not to mention proper alpha channels!
I am simply offended. By Meta's lack of sensibilities (or ability) towards use of images on the Web while touting their new flavour of artificial intelligence as a product.