GDP is also an amalgam of various indicators of general economic activity: Consumption + Investment + Government Spending + (Exports - Imports). It might all be in dollars, but it's kinda like adding $X of apples to $Y of oranges.
It's good as a rough score for relative comparisons between countries (and Debt/GDP is actually useful in that sense too), but as an absolute amount it doesn't mean all that much.
What matters is how much debt servicing costs relative to government revenues. Also how fast that debt is growing (the deficit) and/or what it would cost to reduce it.
But there's not much of a consensus around what is too much or too little.
I suppose 100% Debt/GDP is a good arbitrary number to raise the alarm, but it doesn't mean much on its own.
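To make the "servicing cost vs. revenue" point concrete, here's a tiny sketch. All figures are hypothetical and purely illustrative, expressed as fractions of GDP:

```rust
// Rough debt-service burden: annual interest divided by government revenue.
// All inputs are made-up illustrative numbers, as fractions of GDP.
fn debt_service_ratio(debt: f64, avg_interest_rate: f64, revenue: f64) -> f64 {
    debt * avg_interest_rate / revenue
}

fn main() {
    // Debt at 120% of GDP, 3% average interest, revenue at 35% of GDP.
    let ratio = debt_service_ratio(1.20, 0.03, 0.35);
    println!("{:.1}% of revenue goes to interest", ratio * 100.0); // ~10.3%
}
```

Two countries with the same 120% Debt/GDP can be in very different shape depending on the average interest rate and the revenue share, which is why the headline ratio alone says so little.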
That was like living out an episode of Arrested Development in real time. I have a hard time recalling it without mentally casting Jeffrey Tambor as Rudy Giuliani.
Much like the gym, meditation seems to me like an artificial alternative to an actually healthy lifestyle. Perhaps it is necessary to have such explicit and focused "exercising" to really get what you need nowadays; there may be merit to that.
But why not just go for a nice walk with no headphones?
Depending on who you ask, that is a form of meditation (or at the very least, a meditative activity).
And like the gym, it isn't necessarily orthogonal to a healthy lifestyle; sometimes, it’s just a way of focusing efforts toward a specific goal.
Some people get so much stimulation even when walking normally that it breaks the kind of focus they're looking for (or they don't live in an area particularly conducive to walking). It is what it is.
It's likely that there have been bottlenecks, where a single written version became the main common ancestor to copy from, long after the oral tradition died down and other written versions were lost. Or because some patron decided to fund the dissemination of a particular copy, like Gutenberg or King James, or the Toledo School of Translators. Or because a particular heir of the oral tradition wrote it down, like Homer.
It doesn't necessarily mean that the story was stable, it's just the version that got to us.
What you are saying is generally true (and certainly true for many Indian texts), but the oral tradition of the Vedas really is old. Having been brought up in the West I only learned enough for daily and occasional rituals. My guru taught me without looking at a book and although I have such books now I bought them for curiosity only; if I had a question about recitation it would not occur to me to consult them. My son has learned the same way.
Could be, but all across the different regions of the subcontinent where the Vedas are orally recited, there is little difference, except for some technical tones and notes (the mechanical part of Vedic Sanskrit).
There have been serious attempts to write down the Vedas. The thing is, historically, very few people learned all four Vedas by heart; instead, different families recited a very small part and passed the recitation down as an heirloom.
If you meet all those families and compile their recitations, the result exactly matches what we have from earlier canonisation efforts.
> For orders, messages, and real-time coordination, Nowhere uses Nostr relays as communication infrastructure. Relays see only encrypted data they cannot read, arriving from ephemeral keys they cannot trace, sent from a nowhere site they cannot identify.
When originally coined (circa 1950, around the Korean War), the First World was the US-aligned bloc of countries, the Second World was the USSR-aligned bloc, and the Third World was all of the countries not part of either. Egypt, India, Yugoslavia, Ghana and Indonesia viewed themselves as leaders of the broader political movement (the Non-Aligned Movement) during the 1960s and 1970s.
Even into the 1960s there were few industrialized nations outside of those two main blocs, so "Third World" quickly lost its explicitly political meaning and became more a description of the level of capital investment and worker productivity.
Frankly, it always feels a bit wrong to me to get around Rust's strictness for pointers by just replacing them with IDs/indexes/offsets into some collection. You can still have many of the same memory-safety issues as with pointers, except that now they become logic bugs that are undetectable by the compiler.
Either use unsafe and think about using raw pointers carefully, respecting the soundness rules, or truly redesign it using idiomatic Rust constructs. But don't hide complexity under the rug by using indexes instead of pointers, it's mostly the same thing.
I really enjoyed the write-up though, I learned a lot from it, not to discount that.
If we compare indices 1-to-1 to unsafe code, indices always win (assuming they are viable wrt. performance etc.). This is very simple: all else being equal, a mistake in unsafe code can be UB, while a mistake in indices is at most a logic bug, and a logic bug is always preferred to UB.
Of course, comparing them 1-to-1 means we should treat the indices code like we would treat unsafe code. The most important conclusion is to encapsulate it in a data structure free of business logic.
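As a sketch of what that encapsulation can look like (the names and structure here are my own, not from the thread): a generational-index arena, where freed slots bump a generation counter so a stale ID is detected instead of silently aliasing new data:

```rust
// A minimal generational-index arena. The index bookkeeping lives in one
// data structure with no business logic, treated with unsafe-level care.
struct Slot<T> {
    generation: u32,
    value: Option<T>, // None = free slot
}

struct Arena<T> {
    slots: Vec<Slot<T>>,
}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Id {
    index: usize,
    generation: u32,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new() }
    }

    fn insert(&mut self, value: T) -> Id {
        // Reuse the first free slot, bumping its generation so old Ids go stale.
        if let Some(i) = self.slots.iter().position(|s| s.value.is_none()) {
            self.slots[i].generation += 1;
            self.slots[i].value = Some(value);
            return Id { index: i, generation: self.slots[i].generation };
        }
        self.slots.push(Slot { generation: 0, value: Some(value) });
        Id { index: self.slots.len() - 1, generation: 0 }
    }

    fn get(&self, id: Id) -> Option<&T> {
        // A stale Id (freed and reused slot) fails the generation check:
        // the bug surfaces as None, never as aliased or corrupted data.
        self.slots
            .get(id.index)
            .filter(|s| s.generation == id.generation)
            .and_then(|s| s.value.as_ref())
    }

    fn remove(&mut self, id: Id) -> Option<T> {
        let slot = self.slots.get_mut(id.index)?;
        if slot.generation != id.generation {
            return None;
        }
        slot.value.take()
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.insert("first");
    arena.remove(a);
    let b = arena.insert("second"); // reuses the slot with a new generation
    assert_eq!(arena.get(a), None); // stale Id caught, not silently aliased
    assert_eq!(arena.get(b), Some(&"second"));
}
```

This is essentially what crates in this space do; the point is that a caller holding an `Id` can never turn a use-after-free into a read of unrelated data.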
Yeah, but logic bugs cannot be found by tools, whereas for UB we at least have AddressSanitizer, UBSan, Valgrind and many similar tools. You can recreate what these tools do in your API, but that's extra work: a use-after-free bug may be easily detected by these tools, but when you are managing indices yourself, you may have to add your own assertions to check for things that are logically deleted but not actually deleted.
Logic bugs are not memory safety issues, they're logic bugs. They cannot result in undefined behavior for the program as a whole, at least in the absence of unsafe code.
As far as I understand, memory safety goes well beyond undefined behavior. It’s a loose term including common pointer manipulation, allocation/deallocation and data-race issues.
Other than UB, using indexes instead of pointers can be exactly as error-prone and lead to the same kinds of unexpected runtime crashes, memory corruption and security issues. It's a false sense of security.
If you inspect the OP implementation in detail, you’ll notice that it is still plenty open to use-after-free bugs and lots of places where it can panic in an equivalent way to a segmentation fault.
Bugs in the use of indexes can corrupt memory only inside the corresponding array; unlike pointers, they cannot corrupt arbitrary memory locations.
Such a corrupted array should never cause a program crash, unless there is a second bug where unexpected values in the array trigger one, or unless the program is intended to abort on any detected bug; in that case it does not matter whether pointers or indexes were used.
Thus using indexes should always be safer than using pointers.
In modern CPUs, indexed addressing is as fast as addressing through a register that holds a pointer. That removes the main reason pointers were preferred in the 1970s, when C was designed: on most minicomputers and microcomputers of that era, using pointers was faster than using indexes.
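A tiny illustration of why the failure mode differs (my own example, not from the thread): a stale index into a `Vec` is caught by the bounds check and surfaces at the logic level, never as an out-of-bounds memory access:

```rust
fn main() {
    let mut names = vec!["alice", "bob"];
    let bob = 1; // an index playing the role of a pointer

    names.remove(0); // shifts elements; index 1 is now stale

    // With a raw pointer this could be a use-after-free reading arbitrary
    // memory; the stale index is just an out-of-range, bounds-checked access.
    assert_eq!(names.get(bob), None); // detected, not UB
    assert_eq!(names[0], "bob"); // worst case: reading the *wrong* slot
}
```

The bug is still a bug, but its blast radius is confined to the array's own contents.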
Here OP is implementing a full allocator and something close to a new abstract layer of virtual memory backed by vectors and addressed by offsets.
For some use-cases, you will probably keep all dynamically allocated objects in this memory.
At that point you have circumvented most of Rust’s protections and ergonomics. The user will effectively be working with raw pointers into this memory. The implementation itself also doesn’t have the proper checks because it is manipulating indexes instead of pointers.
The post itself notes near the end how easy it is to end up with a dangling pointer, or one that points to a different object allocated at the same position after the old one was freed. It also has quite a few places where it can panic. And implementing its Trace trait wrong, or not using Roots as intended, can have complex consequences. All that while feeling like the safest GC possible because it has no unsafe.
I don’t disagree with anything you said, and I also quite like the work from OP. But I hope you can see that this is not fully aligned with Rust’s philosophy.
> it can panic in an equivalent way to a segmentation fault.
So, like bounds-checked array access? That's also a logic error. Common pointer manipulation (in the presence of discriminated union types), allocation/deallocation errors and data races (when involving array indexes, pointers or tagged data, not just simple value types) can all lead to UB; bounds-checked array access on its own cannot.
> But don't hide complexity under the rug by using indexes instead of pointers, it's mostly the same thing.
I think the simple fact that pointers are not guaranteed aligned/valid even if they are in range of a particular slice/collection etc. actually makes it very different.
> Suppose you have 10 loans and each has a 50% chance of default. Ignore coupon, and say they are $10 each. Expected value is $50
And that naive statistical reasoning is where it goes terribly wrong. You have to consider the causal process that generates that distribution!
The type of people who would default on a coin flip are extremely sensitive to how the economy changes. The probabilities are highly correlated, so the expected value is rather meaningless. It's closer to having a 50% chance of either a full return or zero return, depending on the macroeconomy; quite the gamble. Actually, those people were in a rather dodgy situation in the first place, or are not great at decision-making, so it might be more like a 50% chance of either a 50% return or a 0% return.
PS: Just elaborating on your point, not meant as a counterargument, I know you said the same thing.
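To put numbers on the correlation point, using the parent's setup (ten $10 loans, 50% default chance; the code is my own back-of-the-envelope sketch): the expected value is $50 either way, but the spread is wildly different:

```rust
// Variance of total payoff when defaults are independent:
// sum of per-loan variances, face^2 * p * (1 - p) each.
fn variance_independent(n: f64, face: f64, p_default: f64) -> f64 {
    n * face * face * p_default * (1.0 - p_default)
}

// Variance when defaults are perfectly correlated: all pay or none pay.
fn variance_correlated(n: f64, face: f64, p_default: f64) -> f64 {
    let expected = n * face * (1.0 - p_default);
    let all_pay = n * face;
    p_default * (0.0 - expected).powi(2)
        + (1.0 - p_default) * (all_pay - expected).powi(2)
}

fn main() {
    let (n, face, p) = (10.0, 10.0, 0.5);
    let expected = n * face * (1.0 - p); // $50 in both scenarios
    println!("expected value: ${}", expected);
    println!(
        "std dev, independent: ${:.2}", // ~$15.81: outcomes cluster near $50
        variance_independent(n, face, p).sqrt()
    );
    println!(
        "std dev, correlated:  ${:.2}", // $50.00: a $0-or-$100 coin flip
        variance_correlated(n, face, p).sqrt()
    );
}
```

Same $50 expected value, but one portfolio is a diversified bet and the other is a single coin flip, which is exactly why the naive expected-value argument goes wrong.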
Not sure it's true that LSM trees are so dominant; they were a bit of a fad a few years ago during the NoSQL hype. They can be a good default for concurrent write-heavy workloads like analytics or logging, but they can be tricky to tune.
The good ol' copy-on-write memory-mapped B-tree is still widely used, even in newer key-value stores like redb (I am more familiar with the Rust ecosystem), which claims to outperform RocksDB (the go-to LSM-tree KVS) on most metrics other than batch writes [1] (although that benchmark is probably biased).
LMDB (one of the main classic B-tree KVSs) is still widely used, and Postgres, SQLite and MongoDB (WiredTiger), among others, are still backed by B-tree key-value stores. Key-value storage backends tend to be relatively easy to swap, and I don't know of major efforts to migrate popular databases to an LSM-tree backend.
At my public university in Spain, we always had the option to take a single final exam instead of the continuous assessment, although very few chose it. Generally the continuous assessment was less stressful, and the material stuck better, with room to digest it rather than just cramming for the exam and forgetting it right after. The default expectation was that everyone was a full-time student, yes, but there were proper accommodations for those who weren't.
It is definitely a lot more work for the professors though; most of my family are teachers. It's a lot of assessments, and it's very rare to have funding for TAs. Some think the extra work is worthwhile for the sake of transmitting the knowledge more effectively, but not all of them do.
Frankly, you sound a bit bitter about it from the professor's perspective, and somewhat rationalizing why it is bad for the students. But students do generally appreciate it, and yes good students too, not just cheaters. I think both good and bad students end up learning more and hating the process less.
Your comments on Bologna do resonate though; it was very confusing when I continued my studies in Germany and the Netherlands. The massive reforms were supposed to bring alignment with the EU, but if anything things got more misaligned. Spain unified all its 3- and 5-year degrees into 4-year degrees, for instance, but in most of the EU all degrees are 3 years now.
Regarding the parent comment, indeed, my Computer Science degree was mostly hand-written exercises and exams, and it wasn't that long ago. The degree is about fundamentals, about understanding concepts and applying them, about the tools you need to learn anything in CS afterwards. You are expected to learn most of the practical skills for building software on your own, since they are ever-changing. And I have to say, that style of education has served me very well in my career.
PS: I was also surprised to learn that most of the undergrad exams in Germany, and some in NL, are oral. I can see how that might be a disadvantage to some, but writing is also a disadvantage to others. I quite liked it, less intense than a long written exam, and I think the professor can get a much clearer understanding of the student's grasp of the subject. But again, it's a ton of work for the professor, 20-30 mins per student one-on-one, giving them your full attention, adds up quickly.
Not sure when this was supposed to be the case, but at actual universities (not meant in a derogatory way; Germany has two types of higher education) in the hard sciences, most classes are graded on a single written exam. Both in undergrad/bachelor's and master's.
Unless things have drastically changed in the last five years...
This was at Freiburg University, which is among the top in Germany and top 250 globally. Computer Science bachelor's and master's, around 8 years ago: most courses had a final oral exam unless many students (roughly >25) signed up for the class. Some classes like Information Retrieval or Machine Learning had >100 students, so they had a written exam, but most others were smaller and oral: Data Engineering and Databases, Cryptography, Physics Simulations for Graphics, Formal Verification Methods, Bioinformatics, Planning AI, P2P Networking... I had a couple of oral exams at VU Amsterdam (master's) too, but fewer, and not the final exam.
I know that an oral exam might seem less serious and rigorous, but I do think the professor can get a better grasp of how much the student actually understands the subject through an interactive interview.