Hacker News | new | past | comments | ask | show | jobs | submit | ky3's comments | login

You can ask the LLM to write a prompt for you. Example: "Explore prompts that would have circumvented all the previous misunderstanding."

LLM to the rescue. Feed in a problem and ask it to explain it to a layperson. Also feed in any sentences that remain obscure and ask it to unpack them.

> If you are skipping a step in the procedure, aren't you also possibly going to skip writing a step down?

Exactly, so when she reviewed the notebook, she caught the error.

Even if she made a slip in the notebook, merely reviewing it helps jog the memory to revisit and replay what she did in the lab. It's the power of touchstones.

> What if you have to do several steps rather quickly? Say adding a particular chemical, then waiting for ten seconds and adding another chemical? Do you have time to write it down?

The notebook doesn't always have to operate as a log. It can also operate as a plan of action.


Wouldn't attention to getenv() calls yield more benefit? Such calls are where input typically goes unparsed--because parsing is "hard"--making them targets for exploits.

The present fix is to sanitize user input. Does it cover all cases?
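As a sketch of what "sanitize" could mean for environment input, here is a hypothetical whitelist-style wrapper around getenv() in Python (the function name, variable name, and pattern are illustrative assumptions, not from any particular codebase):

```python
import os
import re

def safe_getenv(name, pattern, default):
    """Read an environment variable, but accept the value only if it
    matches an explicit whitelist pattern -- a hedge against treating
    getenv() output as trusted input. (Illustrative sketch only.)"""
    value = os.environ.get(name, default)
    if re.fullmatch(pattern, value) is None:
        return default  # reject anything outside the whitelist
    return value

# e.g. allow only 1-5 digits for a port-like setting
port = safe_getenv("APP_PORT", r"\d{1,5}", "8080")
```

Whitelisting (fullmatch against an explicit pattern) is generally safer than blacklisting, since it fails closed on unexpected input.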


EWD 1036: On the cruelty of really teaching computing science (1988)

“My point today is that, if we wish to count lines of code, we should not regard them as ‘lines produced’ but as ‘lines spent’: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.”


The logical skills needed to evaluate the output of an LLM are the same skills brought to bear when reading any book. What makes you trust this textbook, then? Textbooks are not infallible.


Good textbooks have gone through expert review and multiple iterations of improvement. That can't be said of an LLM answering your personalized questions or working the book's problems.

But why not both?


> Good textbooks have gone through expert reviews and multiple iterations of improvement.

Unfortunately, that assumption is increasingly false. The spirit of collegiality has been beaten back.

Far better to hone logical skills that sift between fact and error than to rely on social reputation. Ironically we're discussing a text designed to do exactly that.

The savvy LLM user already knows to be on the lookout for falsehood, if not bad pedagogy. That's a benefit, not a drawback of LLMs.


re: Chapter 15.8 on the so-called pigeonhole principle

Following Dijkstra’s EWD1094, here’s a way to solve the hairs-on-heads problem eschewing the language of pigeonholes and employing the fact that the mean is at most the maximum of a non-empty bag of numbers.

We are given that Boston has 500,000 non-bald people. The human head has at most 200,000 hairs. Show that there must be at least 3 people in Boston who have the same number of hairs on their head.

Each non-bald Bostonian has a hair count between 1 and 200,000. The mean number of people per hair count is 500,000 / 200,000 = 2.5. The maximum count is at least the mean; moreover, it must be a whole number. So the maximum is at least 3. QED.
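The averaging argument can be checked in a couple of lines of Python (a sketch; the function name is mine):

```python
import math

def min_max_count(people, bins):
    """The maximum bin occupancy is at least the mean occupancy,
    and since counts are integers, at least the ceiling of the mean."""
    return math.ceil(people / bins)

print(min_max_count(500_000, 200_000))  # 3
```

The same one-liner answers any such distribution question without mentioning pigeons.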


For good measure here's a link to Dijkstra's The undeserved status of the pigeon-hole principle.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD109...


For even better measure here's a slice of HN reactions to EWD1094:

https://hackernews.hn/item?id=46085897


Such problems are a cakewalk for LLMs, you realize? Lots of didactic activities you could do with LLMs.


Are you arguing two wrongs make a right? This most recent wrong would likely gestate an even worse authoritarian regime than the earlier wrong.

Where is the right you're seeing?


By that logic the US shouldn’t get involved in any other foreign entanglement or global police action because of unintended consequences. Tell me, who from the international community will seize Russian shadow fleet oil tankers evading sanctions!

Wait! Crap!

We can’t sanction Russia - if we do it might destabilize the Russian dictator and if he goes out a worse authoritarian regime might come to power!


Which nation did Maduro invade again? Did you confuse Venezuela with Russia?

> By that logic the US shouldn’t get involved in any other foreign entanglement or global police action because of unintended consequences.

Strawman. No-one is claiming that.


A grid with 19 columns is enough. In the worst case, every column uses all 3 colors, with exactly one color appearing twice. Once we fix which color is doubled, there are C(4,2) = 6 ways to place its two cells among the 4 rows. Since there are 3 colors, there are exactly 6 * 3 = 18 worst-case column types. With 19 columns a repeated type is guaranteed, and the doubled cells of the two matching columns yield the desired monochromatic rectangle.

For fun, try strengthening the result to a square.
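The count of 18 worst-case column types can be verified by brute force (a Python sketch; identifiers are mine):

```python
from itertools import product

COLORS = range(3)

def worst_column_types():
    """Each 4-cell column using all 3 colors has exactly one doubled color.
    Map every such column to its type: (doubled color, its two row positions)."""
    types = set()
    for col in product(COLORS, repeat=4):
        if len(set(col)) == 3:  # all three colors appear
            doubled = next(c for c in COLORS if col.count(c) == 2)
            rows = tuple(i for i, c in enumerate(col) if c == doubled)
            types.add((doubled, rows))
    return types

print(len(worst_column_types()))  # 18, so 19 columns force a repeated type
```

Two columns of the same type place the same color in the same two rows, which is exactly the monochromatic rectangle.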


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact
