Rust is the older project of the two, kicking off in 2006. For Go, which set sail in 2007, duplicating the work of Rust would have been pointless. We already had Rust.
Go's objective was to become a faster Python, which was something we also desperately needed at the time, and it has succeeded well on that front. Go has largely replaced the non-data-science things people were previously doing with Python.
It would be helpful for you to share a link to the GitHub issue you created. If the TLA+ spec you no doubt put a lot of time into creating is contained there, that would be additionally amazing, but more relevant would be the responses from the maintainers, so that we're not stuck with one side of the story.
Of course, expecting you to provide the link would be incredibly onerous. We can look it up ourselves just as easily as you can. Well, in theory we can. The only trouble is that I cannot find the issue you are talking about. I cannot find any issues in the Go issue tracker from your account.
So, in the interest of good faith, perhaps you can help us out this one time and point us in the right direction?
The most plausible explanation actually becomes clear when you rearrange the words into the right order: "There might be a good reason why people who want to avoid looking stupid are smart ..." Forcing oneself to become smart is the only escape from looking stupid.
"The people I think are smart are those that try to look smart", that is the most plausible. There are probably many smart people who aren't afraid of looking stupid that you think are stupid for that reason.
Personally I dislike people who never say stupid things, because they are focusing too much on appearances and too little on trying to figure things out.
> "The people I think are smart are those that try to look smart", that is the most plausible.
The story does not appear to define smart as "not looking stupid", but rather as something closer to "having mastered the creative process".
There is only so much time in the day. An hour spent in interaction where you might look stupid is an hour not spent directly working on your craft. The most plausible explanation is that those who don't want to look stupid turn toward becoming smart as the escape. As in, the tendency to spend time alone, locked up in a room learning how to use a new tool instead of gallivanting at an art show, is what makes them become smart.
It is not. Seven teams all working under one leadership is quite different to seven leaderships each working with one team.
When different governments (e.g. USA and USSR), and thus different leaderships, are both trying to solve the same problem (e.g. travel to the moon), that too is considered efficient competition.
Oh, so seven /leaderships/ is what's made the difference?
If a government did this (e.g., seven independent agencies competing for a moon landing), people would call it "fragmented," "uncoordinated," and "bureaucratic infighting."
Seven independent government agencies are still an arm of the same leadership.
When complete organizational separation is introduced, the concerns you speak of go away. In the USA, the ARPA program (you might recognize that name from the thing you're using right now) regularly enables "seven" independent leaders to tackle a problem, and this is widely considered a resounding success.
> The goal is to do this parsing exactly once, at the system boundary
You are only parsing once at the system boundary, but under the dynamic model every receiver is its own system boundary. As the earlier comment pointed out, microservices emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamism natively. Yes, you are only parsing once in each service, but ultimately you are still parsing many times when you look at the entire program as a whole. "Parse, don't validate" doesn't really change anything.
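A minimal sketch of the point, with hypothetical services and a made-up `Order` payload: each service parses at its own boundary, so the same bytes get parsed once per hop even though every individual service honours "parse once".

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Order is a hypothetical payload passed between services.
type Order struct {
	ID    string `json:"id"`
	Total int    `json:"total"`
}

// serviceA parses at its boundary (parse #1), then re-serializes
// to hand the data to the next service.
func serviceA(raw []byte) ([]byte, error) {
	var o Order
	if err := json.Unmarshal(raw, &o); err != nil {
		return nil, err
	}
	return json.Marshal(o)
}

// serviceB is its own system boundary, so it parses again (parse #2),
// even though the upstream service already validated the same data.
func serviceB(raw []byte) (Order, error) {
	var o Order
	err := json.Unmarshal(raw, &o)
	return o, err
}

func main() {
	out, err := serviceA([]byte(`{"id":"a1","total":42}`))
	if err != nil {
		panic(err)
	}
	o, err := serviceB(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(o.ID, o.Total)
}
```

Each function is a textbook "parse once at the boundary", yet the whole program parses the same order twice.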
> but under the dynamic model every receiver is its own system boundary
I'm not claiming that it can't be done that way, I'm claiming that it's better not to do it that way.
You could achieve security by hiring a separate guard to stand outside each room in your office building, but it's cheaper and just as secure to hire a single guard to stand outside the entrance to the building.
> microservices emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamism natively
I think microservices emerged for a different reason: to make more efficient use of hardware at scale. (A monolith that does everything is in every way easier to work with.) One downside of microservices is the much-increased system boundary size they imply -- this hole in the type system forces a lot more parsing and makes it harder to reason about the effects of local changes.
> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.
Scaling different areas of an application is one thing. Being able to use different technology choices for different areas is another, even at low scale. And being able to have teams own individual areas of an application via a reasonably hard boundary is a third.
> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.
Same thing, no? That is exactly what Kay was talking about. That was his vision: infinite nodes all interconnected, sending messages to each other. That is why Smalltalk was designed the way it was. While the mainstream Smalltalk implementations got stuck in a single-image model, Kay and others did try working on projects to carry the vision forward. Erlang had some success with the same essential concept.
> I'm claiming that it's better not to do it that way.
Is it fundamentally better, or is it only better because the alternative was never fully realized? For something of modern relevance, take LLMs. In your model, you have to have the hardware to run the LLM on your local machine, which for a frontier model is quite the ask. Or you can write all kinds of crazy, convoluted code to pass the work off to another machine. In Kay's world, being able to access an LLM on another machine is a feature built right into the language. Code running on another machine is the same as code running on your own machine.
I'm reminded of what you said about "Parse, don't validate" types. As you alluded to, you can write all kinds of tests to essentially validate the same properties as the type system, but when the language gives you a type system you get all that for free, which you saw as a benefit. But now it seems you are suggesting it is actually better for the compiler to do very little and that it is best to write your own code to deal with all the things you need.
More like something closer to 100%. The ATM was notable for enabling a complete change in mission. The historical job of teller largely disappeared, but a brand new job never done before was created in its wake. That is why there was little change in the number of people employed.
> because of deregulation and a booming economy and whatever else.
The deregulation largely happened in the 1970s, while you're talking about 1988 onward. The reality is that the ATM actually was the primary catalyst for the specific branch expansion you are talking about. Like above, the ATM made the job of teller redundant, but it introduced a brand new job. A job that was most effective when the workers were closer to the customer, hence why workers were relocated.
The parent is talking about when the implementation is flaky, not the test. When you go to fix the problem under that scenario there is no reason for you to modify the test. The test is fine.
What you're describing is the everyday reality, but what you WANT is that if your implementation has a race condition, a test detects it 100% of the time (rather than 1% of the time).
If your test can deterministically result in a race condition 100% of the time, is it still a race condition? Assuming that we're talking about a unit test here, and not a race condition detector (which is not foolproof).
> Assuming that we're talking about a unit test here
I think the categorisation of tests is sometimes counterproductive and moves the discussion away from what's important: What groups of tests do I need in order to be confident that my code works in the real world?
I want to be confident that my code doesn't have race conditions in it. This isn't easy to do, but it's something I want. If that's the case then your unit test might pass sometimes and fail sometimes, but your CI run should always be red because the race test (however it works) is failing.
This also hints at a limitation of unit tests, and why we shouldn't be over-reliant on them - often unit tests won't show a race. In my experience, it's two independent modules interacting that causes the race. The same can be true with a memory bug caused by a mismatch in ownership passing and who should be freeing, or any of the other issues caused by interactions between modules.
> I think the categorisation of tests is sometimes counterproductive
"Unit test" refers to documentation for software-based systems that has automatic verification. Used to differentiate that kind of testing from, say, what you wrote in school with a pencil. It is true that the categorization is technically unnecessary here due to the established context, but counterproductive is a stretch. It would be useful if used in another context, like, say: "We did testing in CS class". "We did unit testing in CS class" would help clarify that you aren't referring to exams.
Yeah, Kent Beck argues that "unit test" needs to carry a bit more nuance: that it is a test that operates in isolation. However, who the hell is purposefully writing tests that are not isolated? In reality, that's a distinction without a difference. It is safe to ignore the old man yelling at clouds.
But a race detector isn't rooted in providing verifiable documentation. It only observes. That is what the parent was trying to separate.
> I want to be confident that my code doesn't have race conditions in it.
Then what you really WANT is something like TLA+. Testing is often much more pragmatic, but pragmatism ultimately means giving up what you want.
> often unit tests won't show a race.
That entirely depends on what behaviour your test is trying to document and validate. A test validating properties unrelated to race conditions often won't consistently show a race, but that isn't its intent, so there would be no expectation of it validating something unrelated. A test that is validating that there isn't a race condition will show the race if there is one.
You can use deterministic simulation testing to reproduce a real-world race condition 100% of the time while under test.
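A minimal sketch of that idea, under the assumption that the code under test exposes hooks the test can use as a "scheduler": the test forces the racy interleaving (both goroutines read before either writes), so the lost update reproduces on every run instead of 1% of the time.

```go
package main

import "fmt"

// counter has a read-modify-write race when two writers interleave.
type counter struct{ n int }

// incr splits the read and the write so a test scheduler can force
// the racy interleaving on demand. readDone/writeGo are test hooks.
func (c *counter) incr(readDone chan<- struct{}, writeGo <-chan struct{}) {
	v := c.n                // read
	readDone <- struct{}{}  // tell the scheduler the read happened
	<-writeGo               // wait for permission to write
	c.n = v + 1             // write: clobbers any concurrent update
}

func main() {
	c := &counter{}
	r1, w1 := make(chan struct{}), make(chan struct{})
	r2, w2 := make(chan struct{}), make(chan struct{})
	done := make(chan struct{})
	go func() { c.incr(r1, w1); done <- struct{}{} }()
	go func() { c.incr(r2, w2); done <- struct{}{} }()
	// Deterministic schedule: both goroutines read before either
	// writes, reproducing the lost update on every single run.
	<-r1
	<-r2
	close(w1)
	close(w2)
	<-done
	<-done
	fmt.Println(c.n) // 1, not 2: the update was lost, every time
}
```

The channels here stand in for what a full deterministic-simulation framework does with a controlled scheduler; the shape of the hooks is an assumption for illustration.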
But that's not the kind of test that will expose a race condition 1% of the time. The kinds of tests that are inadvertently finding race conditions 1% of the time are focused on other concerns.
So it is still not a case of a flaky test, but maybe a case of a missing test.
> Taken to extreme this would mean getting rid of unit tests all together in favor of functional and/or end-to-end testing.
The dirty little secret in CS is that unit, functional, and end-to-end tests are all the exact same thing. Watch next time someone tries to come up with definitions to separate them and you'll soon notice that they either didn't actually find a difference or invented some kind of imagined way of testing that serves no purpose and that nobody would ever do.
Regardless, even if you want to believe there is a difference, the advice above isn't invalidated by any of them. It is only saying test the visible, public interface. In fact, the good testing frameworks out there even enforce that — producing compiler errors if you try to violate it.
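Go illustrates the enforcement: tests written in an external test package (e.g. `package stack_test`) can only see exported identifiers, so touching internals is a compile error. A minimal sketch with a hypothetical `Stack` whose storage is unexported:

```go
package main

import "fmt"

// Stack is a hypothetical LIFO whose storage is unexported. In a real
// project this would live in its own package, and a test written in an
// external test package (`package stack_test`) could only use Push and
// Pop -- referencing s.items there fails to compile.
type Stack struct{ items []int }

// Push appends a value; part of the visible, public interface.
func (s *Stack) Push(v int) { s.items = append(s.items, v) }

// Pop removes and returns the most recent value, reporting whether
// the stack was non-empty.
func (s *Stack) Pop() (int, bool) {
	if len(s.items) == 0 {
		return 0, false
	}
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v, true
}

func main() {
	var s Stack
	s.Push(1)
	s.Push(2)
	v, ok := s.Pop()
	fmt.Println(v, ok) // 2 true
}
```

The test exercises only what a user could call, so internals can be rewritten freely without breaking the contract.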
Yep, the 'unit' is whatever size one chooses to use. The exact same thing happens when trying to discuss microservices vs. monoliths.
Really it all comes down to agreeing to what terms mean within the context of a conversation. Unit, functional, and end-to-end are all weasel words, unless defined concretely, and should raise an eyebrow when someone uses them.
> The dirty little secret in CS is that unit, functional, and end-to-end tests are all the exact same thing.
I agree that the boundaries may be blurred in practice, but I still think that there is distinction.
> visible, public interface
Visible to whom? A class can have public methods available to other classes, a module can have public members available to other modules, a service can have a public API that other services can call over the network, etc.
I think that the difference is the level of abstraction we operate on:
unit -> functional -> integration -> e2e
Unit is the lowest level of abstraction and e2e is the highest.
The user. Your tests are your contract with the user. Any time there is a user, you need to establish the contract with the user so that it is clear to all parties what is provided and what will not randomly change in the future. This is what testing is for.
Yes, that does mean any of classes, network services, graphical user interfaces, etc. All of those things can have users.
> Unit is the lowest level of abstraction and e2e is the highest.
There is only one 'abstraction' that I can see: Feed inputs and evaluate outputs. How does that turn into higher or lower levels?
> When it passes, it's just overhead: the same outcome you'd get without CI.
The outcome still isn't the same. CI, even when everything passes, enables other developers to build on top of your partially-built work as it becomes available. This is the real purpose of CI. Test automation is necessary, but only to keep things sane while you continually throw in fractionally complete work.