Hacker News | rustybolt's comments

> Fun fact #1: you rent your cap and gown in the US. You have to return them. And they’re expensive, too! I paid $94 just for the privilege of renting mine, which is insane because they probably cost way less than that to manufacture.

Ah, yes, of course this is how it works in the US.


It depends entirely on school.

Mine wasn't like that (and still isn't). Multiple friends of mine are graduating from NYU this month, and their situation isn't like that either.

Every school I've ever encountered gives an option to purchase (with some being way more affordable than others). E.g., NYU JD (law school) cap and gown is roughly $98 to purchase (not to rent) this year.


FWIW I'm in the US and I bought mine. Renting does seem to make more sense here as the gown has no utility outside of this one event.

High school lets you (makes you?) buy them. My HS and college had almost the exact same color, so I probably could have gotten away with reusing mine!

I don’t doubt that some offer and charge for rentals. (I think mine only charged $130 ish for purchase though.)

However, I am shocked to hear that there is not even an option to purchase them at OP’s university. Many people like to keep them, especially the cap at least, as a souvenir, or their parents will keep it. Besides, couldn’t they make more money selling them?


At my college you rented the gowns, or at least that was the default, but I think the cap was bought and yours to keep.

Also, as a souvenir, the tassel, which is decorated with a charm showing the year graduated, is also always (obviously) yours to keep.

It’s very common in America for young people to hang that tassel from their car’s center rear-view mirror.


That must be new; I walked in 2009 and we got to keep ours

I'm surprised at the concept, somehow I thought the whole "graduation cap" thing was just in movies. Seems out of place in a country that's otherwise so individualistic.

Colleges are far less “individualistic” than most places in the country.

Yes, you have to pay a decent wage to the people who do the fitting, cleaning, and storing of the goods. Manufacturing is done in a low-cost country with cheap labour, so buying clothing seems cheap.

Do you truly believe most of that goes towards wages?

Well, some will go on corporation tax, some on business rates, some on rent of the land the storage is on (which itself has to pay corporation tax, I suppose).

Yes. Competitive forces would push the cost toward the most expensive input which is likely people. That would be somewhat muted if the supplier was sole source but even then outright purchases would put downward pressure on the rental price.

Given that you’re forced to rent the cap and gown, I think it’s safe to say that competitive forces are entirely absent in this scenario.

What competitive forces? It's not like people have a choice in choosing whether they want a particular cap or gown and the people who contract the rental agreement (i.e. the university admin) are not the ones bearing the cost.

Yeah, this is not universal. I bought mine.

You generally rent caps and gowns in the UK too. Can you share where you are that you buy them?

Here is one company that provides the gowns for many UK universities: https://www2.edeandravenscroft.com/non-ceremony/. I could be kitted with the appropriate gowns for my UK degrees for a few hundred squid each.

Academics, who attend many graduation events, often end up buying their own gown. I believe you are supposed to use the gown of the university you graduated from, so they must be for sale somewhere.

Once you have your own gown you can customize it. I haven't seen any profs with LED additions to their gowns, but adding extra pockets to hide a book or snacks is fair game.


You are allowed to turn up in your own though, no one will check. I even had people turn up in the wrong colour gown at my graduation.

It's insane; why doesn't the school just store a bunch of them?

As opposed to what, buying the thing and then storing it or throwing it away?

It's a $10 gown, renting it for $100 is madness

Welcome to captive audience pricing! There are more than a few companies with this type of business, especially targeting those in institutions of all kinds.

They probably are fully gone now, but when I was in college some (IRL) classes, usually the big auditorium ones, added interactivity in the form of realtime polls and quizzes with a little “clicker” device. This was of course $30 or whatever and just used some custom RF protocol to register your vote across the room. Single-source, you have to buy it to be in the class.

Textbooks themselves, electronic or not, same racket. Professor is sold the book, but it’s the students who pay. (Don’t forget of course the scam of “writing your own math book” and requiring it!!)

Prisons: some private company always has a deal to “supply telephone service” and charges the inmates or their families rates that are higher than international long distance used to cost.

All of these things are sold to administrators who have no fiscal concerns with the service or product because the institution isn't the one paying, so there's zero pricing pressure. If there are even multiple contractors in the niche, they are more incentivized to compete on sending cool freebies to the administrators, or to add perks that benefit them, than to compete on pricing for the students/inmates/etc. Like, say, Jostens might throw in "free school ID cards," which is technically "saving the school money," in order to get the yearbook contract, while making $100 a yearbook in gross profit on $150 yearbooks. Note: all numbers made up.


Throwing it away after single use is madness.

I think they’re saying it should be much cheaper to rent, and we shouldn’t throw them away.

Is it?

My issue with this type of thinking is it assumes "transport cost <<< manufacturing cost" -- a decent assumption for a lot of goods throughout a lot of history, but just... not really true for lots of things in a modern supply chain.

The cost of moving the gown between users -- in the form of the user needing to give back the gown to the service, who must then clean it, inspect it, etc. -- may in fact be far higher than the cost of manufacturing a new gown and only needing your supply lines to be "one way".


trash doesn't disappear, everything has to go somewhere

I was wondering why I saw them for cheap on aliexpress…

And if you order XXXXL it might fit. A little tight, but bearable.

We are by all conceivable measures living in the best timeline and under the best economic system. Just look at the graphs. Just consider what an American symbol the graduation cap is. We don’t really know why, but I think a likely reason is that making graduation caps under most economic systems is too labor-intensive. Some families might not even have been able to send their children to universities since they couldn’t rent or buy graduation caps—and certainly not make them themselves—and not doing so would be a complete humiliation for their family or clan or whatever they have in other countries.

Too coherent. You need to work on your simulation.

What in the world is going on here?

Nobody said it better than von Neumann: "Young man, in mathematics you don't understand things. You just get used to them"

I don't have a lot to show for it yet, but I'm working on an online video course for software engineers aspiring to build their own CPU on an FPGA dev board.


This is great!

Some comments:

- I didn't like the "truth tables" one: I got many duplicate questions, and for some reason I got only one second for the first question. I managed to answer the rest of the questions correctly, but I still got only one star out of three?

- I got very confused by the capacitor. Capacitors do not have an "enable" gate! In fact, in 2.7 (1T1C) you are supposed to build the enable gate -- with a transistor. So currently, you can simply not build the enable gate and use the one already in the primitive, meaning you don't need the NMOS gate at all.

Was this made with LLM assistance? (Not judging, I'm just interested!) I'd love to hear more about your workflow and how you managed to produce a good UI, as it's something I couldn't do if my life depended on it, and it's a skill I'd like to learn.


Oh, I didn't notice this capacitor bug. I changed it to add an enable gate for 2.4 (for context, I created 2.4 after 2.7 b/c I thought 2.7 wasn't obvious enough for some ppl). 2.4 kind of needs the enable pin b/c of how my simulation system works. Yeah, I felt pretty conflicted about the capacitors while building; there's actually a note about this in the capacitor info block in later levels, but I couldn't really make a true capacitor compatible with the underlying simulation system I had built (I should have thought it through from the start).

I'll fix the truth tables bug (I think I know the issue); the stars come from playing in endless mode.

I used Claude quite a bit; it struggled through a lot of this (wiring and simulation systems in particular) but managed to crank it out. For the graphics, I'd say I was extremely detailed in terms of what I wanted.


Since we're in feedback mode, 2.16 has no BitLineBar reference to feed to the comparators. I had to cheese the level by connecting the "capacitor" outputs straight to the outputs, and it worked.

On the capacitor though, the capacitor level is weird as you don't build the capacitor charge system with transistors. Though I definitely get that the simulation engine is for digital stuff, not analog :)

Also a general feedback on the time-based challenges: dial them back. A lot. Most of them are just not interesting and have zero learning value. In fact, the "DRAM refresh" one just made me quit the game (clicking on 8 rows to keep them fresh). Okay, 10s is enough, I got the point. No need to hold up for a whole minute. Kinda same for the hex one. However, some of them are good, and the UI for the binary ones is great, especially for the two's complement one!

Small nitpick on the UI: some blocks don't have their connections aligned with the grid, making the wiring OCD-incompatible. But that's minor. It's a shame since the wire routing algorithm works quite well overall, and I'm impressed an LLM could produce that good of a UI!

Otherwise, quite a fun little game, if slow-paced when one already knows some bits of digital logic. Keep it up!


lol, I encountered the same thing w/ the DRAM one in particular during testing (I passed it by using the number keys, but it probably was a sign to remove it)

Thanks, I appreciate all the feedback, fixes coming in the next push


I'm still confused about 2.7 because it says that we have to use WL so that "the bit line (BL) connects to the storage element for reading or writing". But the accepted solution connects to the storage element only for writing, not for reading (the capacitor is always connected to the output for reading).

I would think that if you wanted to make it so that the storage element was connected either for reading or for writing by the WL and otherwise disconnected, you would need two transistors, not just one.

Perhaps this was meant to say "for writing either a 0 or a 1"?
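To make that distinction concrete, here is a toy sketch (plain Python, all names my own, not from the game) of a 1T1C cell in which the single access transistor gates both directions: with WL low, the capacitor is isolated from the bit line for reads as well as writes.

```python
class DramCell:
    """Toy 1T1C cell: one access transistor (gated by WL) sits between
    the bit line (BL) and the storage capacitor."""

    def __init__(self):
        self.charge = 0  # the bit stored on the capacitor

    def access(self, wl, bl_drive=None):
        """If WL is high, BL connects to the capacitor.

        bl_drive=0/1 means the bit line is actively driven (a write);
        bl_drive=None means the bit line floats (a read).
        Returns the value seen on the bit line, or None if isolated.
        """
        if not wl:
            return None              # transistor off: no read, no write
        if bl_drive is not None:
            self.charge = bl_drive   # write: BL overwrites the capacitor
        return self.charge           # read: capacitor drives the floating BL

cell = DramCell()
cell.access(wl=1, bl_drive=1)   # write a 1
print(cell.access(wl=1))        # read it back -> 1
print(cell.access(wl=0))        # WL low: fully isolated -> None
```

With two transistors you could gate reading and writing independently, but the classic 1T1C cell deliberately uses the one shared access path for both.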


2.7 is confusing because you can wire up the bitline and the word line the wrong way around and the tests still pass.


Agreed, the truth tables one is important. But it's backwards: you test people on truth tables before teaching them.

If someone is seeing this for the first time they may have never seen some of those gates and you quiz them.

Then finally after passing the quiz, you define NAND and NOR and Inverter.

Swap the teaching one to be the intro to the truth tables one.

Second bit of feedback is the timer. Increase the time allotted. I know them very well and was still struggling to get all the input correct before the timer ran out. Or consider just eliminating the timer completely, if your goal is to be sure that they know them.
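The gates being quizzed here are small enough to spell out completely; a throwaway Python snippet (my own, not from the game) that prints their truth tables:

```python
from itertools import product

# Inverter plus the two universal gates mentioned in the thread.
inv = lambda a: int(not a)
gates = {
    "NAND": lambda a, b: int(not (a and b)),
    "NOR":  lambda a, b: int(not (a or b)),
}

print("INV: 0 ->", inv(0), "  1 ->", inv(1))
for name, f in gates.items():
    print(f"{name}:  a b | out")
    for a, b in product((0, 1), repeat=2):
        print(f"       {a} {b} | {f(a, b)}")
```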


good point, made an update that added difficulty levels to the minigames (handles the timer), and i'll probably move the truth tables minigame to after the user builds the truth tables, thx


Eh yeah, duh? I've been drilled to put every fart on GitHub. 98% of my repositories have 0 stars.


I have noticed a trend recently: some practices (writing a decent README or architecture doc, being precise and unambiguous with language, providing context, literate programming) that were meant to help humans were never broadly adopted, with the argument that it's too much effort. But when it's done to help an LLM instead of a human, a lot of people suddenly seem much more motivated to put in the effort.


In my years of programming, I find that humans rarely give documentation more than a cursory glance up until they have specific questions. Then they ask another person if one is available rather than read for the answer.

The biggest problem is that humans don't need the documentation until they do. I recall one project that extensively used docblock style comments. You could open any file in the project and find at least one error, either in the natural language or the annotations.

If the LLM actually uses the documentation in every task it performs- or if it isn't capable of adequate output without it- then that's a far better motivation to document than we actually ever had for day to day work.


I think this really depends on culture. If you target OS APIs or the libc, the documentation is stellar: you have several standards, conceptual documentation, and information about particular methods, all with historic and current implementation notes; there is also an interactive hypertext system. I solve 80% of my questions just by looking at the official documentation, which is also installed on my computer. For the rest I often try the WWW, but those questions are often so specific that it is more productive to just read the code.

Once I step out of that ecosystem, I wonder how people even cope with the lack of good documentation.


The other problem is that documentation is always out of date, and one wrong answer can waste more time than 10 "I don't knows".


I have discovered that the measure of good documentation is not whether your team writes documentation, but is instead determined by whether they read it.


Paraphrasing an observation I stole many years ago:

A bunch of them thought learning to talk to computers would get them out of learning to talk to humans, and so they spent four of the most important years of emotional growth doing that, only to graduate and discover they were even farther behind everyone else in that area.


This raises an interesting point. I've speculated that if someone has a hard time expressing themselves to other humans verbally or in writing, they're also going to have a hard time writing human-readable code. The two things are rooted in the same basic abilities. Writing documentation or comments in the code at least gives someone two slim chances at understanding them, instead of just one.

I have the opposite problem. Granted, I'm not a software developer, but only use code as a problem solving tool. But once again, adding comments to my code gives me two slim chances of understanding it later, instead of one.


> I've speculated that if someone has a hard time expressing themselves to other humans verbally or in writing

I don't think they actually have problems expressing themselves; code is also just a language with a very formal grammar, and if you use that approach to structure your prose, it's also understandable. The struggle is more to mentally encode non-technical domain knowledge, like office politics or emotions.


That's true. But people have had formal language for millennia, so why don't we use it?

Here's my hunch. Formal specification is so inefficient that cynics suspect it of being a form of obstructionism, while pragmatic people realize that they can solve a problem themselves quicker than they can specify their requirements.


> But people have had formal language for millennia, so why don't we use it?

In case you don't refer to the mathematical notion of formal, then we use formal language all the time. Every subject has its formal terms, contracts are all written in a formal way, specifications use formal language. Anything that really matters or is read by a large audience is written in formal language.


I think there’s some of that, but it’s also probably a thing where people who make good tutors/mentors tend to write clearer code as well, and the Venn diagram for that is a bit complicated.

Concise code is going to be difficult if you can’t distill a concept. And that’s more than just verbal intelligence. Though I’m not sure how you’d manage it with low verbal intelligence.


Documentation rots a lot more quickly than the code - it doesn't need to be correct for the code to work. You are usually better off ignoring the comments (even more so the design document) and going straight to the code.


I maintain that if this is the case on your project, you're either grossly misappropriating the time and energy of new and junior devs, or you have gone too long since hiring a new dev and your project is stagnating because of it.

New eyes don’t have the curse of knowledge. They don’t filter out the bullshit bits. And one of the advantages of creating reusable modules is you get more new eyes on your code regularly.

This may also be a place where AI can help. Some of the review tools are already calling us out on making the code not match the documentation.


No, they're 100% correct. This has been my experience at every place I've worked in SV, from startup to FAANG.

You write the code so you can scan it easily, and you build tools to help, and you ask for help when you need it, but you still gotta build that mental map out


I've had LLMs proactively fix my inline documentation. Rather pleasant surprise: "I noticed the comment is out of date and does not reflect the actual implementation" even asking me if it should fix it.


I find LLMs more diligent about keeping the documentation than any human developer, including myself.


Well maybe if those people were managing one or more programmers and not writing the code themselves, they would have worked similarly.


The difference is that they’re using the LLM to write those readmes and architecture and whatever else documents. They’re not putting any effort in.


Surprisingly often people refuse to document their architecture or workflow for new hires. However, when it's for an LLM some of these same people are suddenly willing to spend a lot of time and effort detailing architecture, process, workflows.

I've seen projects with an empty README and a very extensive CLAUDE.md (or equivalent).


That could be because Claude offers a dedicated /init command to generate a CLAUDE.md if it doesn't exist.


Note that this doesn't answer the question in the title, it merely asks it.


Yeah, I wrote the blog to wrap my head around the questions "how would someone even print weights on a chip?" and "how would you even start to think in that direction?"

I didn't explore the actual manufacturing process.


You should add an RSS feed so I can follow it!


I don't post blogs often, so haven't added RSS there, but will do. I mostly post to my linkblog[1], hence have RSS there.

[1] https://www.anuragk.com/linkblog


Frankly, the most critical question is whether they can really take shortcuts on DV etc., which is the main reason nobody else tapes out new chips for every model. Note that their current architecture only allows some LoRA-adapter-based fine-tuning; even a model with an updated cutoff date would require new masks etc. Which is kind of insane, but props to them if they can make it work.

From some announcements 2 years ago, it seems like they missed their initial schedule by a year, if that's indicative of anything.

For their hardware to make sense, a couple of things would need to be true:

1. A model is good enough for a given use case that there is no need to update/change it for 3-5 years. Note they need to redo their HW pipeline if even the weights change.

2. This application is also highly latency-sensitive and benefits from power efficiency.

3. That application is large enough in scale to warrant doing all this instead of running on last-gen hardware.

Maybe some edge-computing and non-civilian use-cases might fit that, but given the lifespan of models, I wonder if most companies wouldn't consider something like this too high-risk.

But maybe some non-text applications, like TTS, audio/video gen, might actually be a good fit.


TTS, speech recognition, ocr/document parsing, Vision-language-action models, vehicle control, things like that do seem to be the ideal applications. Latency constraints limit the utility of larger models in many applications.


> There are very credible arguments that the-set-of-IETF-standards-that-describe-OAuth are less a standard than a framework. I'm not sure that's a bad thing, though.

Spoiler alert: it is.


I tried using this a while back and found it was not widely available. You need coreutils version 9.1 or later, which many distros do not ship.

I made https://github.com/rubenvannieuwpoort/atomic-exchange for my usecase.
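For anyone curious what such a tool boils down to: on Linux, the heavy lifting is the renameat2(2) syscall with the RENAME_EXCHANGE flag (glibc has exposed a wrapper since 2.28). Below is a minimal sketch, assuming Linux and a recent glibc; I haven't checked the linked repo, so this is a generic illustration, not its actual implementation.

```python
import ctypes
import os
from pathlib import Path

# Flag values from the Linux uapi headers.
AT_FDCWD = -100
RENAME_EXCHANGE = 1 << 1

libc = ctypes.CDLL(None, use_errno=True)

def exchange(a: str, b: str) -> None:
    """Atomically swap paths a and b; both must already exist."""
    ret = libc.renameat2(AT_FDCWD, a.encode(), AT_FDCWD, b.encode(),
                         RENAME_EXCHANGE)
    if ret != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

# Demo: the contents of the two files swap in a single atomic step.
Path("a.txt").write_text("A")
Path("b.txt").write_text("B")
exchange("a.txt", "b.txt")
print(Path("a.txt").read_text())  # the old contents of b.txt
```

The atomicity matters because there is never a moment when either path is missing, unlike the usual three-rename shuffle through a temp name.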

