Hacker News | new | past | comments | ask | show | jobs | submit | CrazyStat's comments

A friend of mine tries to bake a spherical pie for pi day (March 14) each year, with varying approaches (and levels of success).

I heard circles are also related to pi but have not had the time to confirm yet.

Pies are more of a Tau day thing https://www.tauday.com

That's a pi-ty

There are some 18th-century pies that were cooked in boiling water inside a bag, which could come out quite spherical. Townsends on YouTube has some videos on it.

The first two things that spring to mind are pasties from the UK (which are not usually spherical but can get quite hemispherical), and the "UFO-Döner" from Germany (which are more oblate spheroids). Maybe by combining these ideas, your friend can get closer to their dream?

Beef Wellington could be spherical if you so chose.

I suspect that deep-fried battered haggis might exist, which could be very spherical.


British steak and kidney pudding (a steamed pie of suet pastry) is a truncated-cone shape; it could go spherical with the right pastry case.

A truncated cone is called a "frustum," which always seemed fitting to me.

I wonder if they could look to dim sum for inspiration? An apple dumpling is basically just a round apple pie, right?

> A friend of mine tries to bake a spherical pie for pi day (March 14) each year, with varying approaches (and levels of success).

Could also do it on pi approximation day (July 22), then one doesn't have to be so exact about it.


Now I'm considering making a Matt Parker pie: a spherical pie made from a normal pie + calling it close enough in 2 out of 3 dimensions.

I didn't get it, so I looked it up.

22/7 ~= 3.14


Actually closer to π and matches the more sensible date format.

(Yes this is worth fighting over!)
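For anyone who wants to settle the fight empirically, a quick check that 22/7 really is a closer approximation to π than the two-decimal 3.14:

```python
import math

# 22/7 ("Pi Approximation Day", 22 July) vs. the two-decimal 3.14 (March 14)
err_22_7 = abs(22 / 7 - math.pi)   # ~0.00126
err_3_14 = abs(3.14 - math.pi)     # ~0.00159

assert err_22_7 < err_3_14  # 22/7 is the closer approximation
```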


Heating the middle has to be a pain. And cutting it…

Well if you insert metal rods through it you can help with the heat transfer, then you can lattice over the holes. If you pumpkin pie it, you might even be able to have it hold up under its own weight. Plus a bit of stiff whipped cream in the holes would help.

I would make them fairly small (personal pie-sized) and use a filling that doesn't need to be cooked in the oven to set. The main limiting factors, I think, would be structural integrity and heating the filling to the center. You could set it on a ring (like the rim of a spring-form pan) to support it better during cooking. Now, a four dimensional hyper pie, on the other hand...

If you're not cooking the filling, then use a Teflon balloon that you put the crust on. Cook. Remove the balloon. Then pipe in ready-to-set chocolate cream.

One of those spherical ice cube makers but made of cast iron, a little like those little waffle makers.

I don't think those will work; you want the outer surface to be crispy. The dough's gotta go on the outside of the sphere.

I would bake it on a pizza stone to ensure an even bake.

Has nobody here ever done this? It comes out perfectly cooked.


You cook a spherical pie on a pizza stone? Do tell.

If we don't care what the filling is you could just use sticky rice.

A pie like this, to the face of a problematic politician, would add drama and help resurrect the profile of pies as activists!

One could always precook the filling.

Is there? I followed the link[1] to the original author of the desktop software this web app is derived from, and he says:

> To make a long story short, by the third generation of ReferenceFinder (written in 2003), I had incorporated all 7 of the Huzita-Justin Axioms of folding into the program, allowing it to potentially explore all possible folding sequences consisting of sequential alignments that each form a single crease in a square of paper. Of course, the family tree of such sequences grows explosively (or to be precise, exponentially); but the concomitant growth in the availability of computing horsepower has made it possible to explore a reasonable subset of that exponential family tree, and in effect, by pure brute force, find a close approximation to any arbitrary point or line within a unit square using a very small number of folds.

(emphasis added)

[1] https://langorigami.com/article/referencefinder/


There's brute force involved, but it's not brute force by itself. It's like a chess engine: yes, it checks thousands of positions, but only after filtering out hundreds of thousands of others.

Are you involved in writing or maintaining this software? If so, can you provide some more details on this “filtering”? Because I skimmed the source code [1] and it looks to me like it’s pure brute force: building a database of lines and points up to a certain rank (the number of operations required to create that line/point) and then searching through it.

[1] https://github.com/MuTsunTsai/reference-finder-cpp/blob/main...


No, you are right. The author even uses the expression "by pure brute force". I just supposed it would filter, given that virtually every number a user would input is constructible with foldings.

The CBO estimates [1] that foreign exporters bear 5% of the burden of the tariffs, with American consumers bearing the remaining 95%:

> [T]he net effect of tariffs is to raise U.S. consumer prices by the full portion of the cost of the tariffs borne domestically (95 percent)

This is a serious document written by a bunch of serious economists. You can find a list of them at the bottom of the page. That you have written their conclusion off as "transparently false" should give you pause.

[1] https://www.cbo.gov/publication/62105#_idTextAnchor050


> you have written their conclusion off as "transparently false"

I didn't say that. I said that the common argument that tax/tariff increases are always passed along 100% to consumers is transparently false. And contrary to your criticism, the cited paper agrees with my claim (in this case, while my claim is general):

"In CBO’s assessment, foreign exporters will absorb 5 percent of the cost of the tariffs, slightly offsetting the import price increases faced by U.S. importers. In the near term, CBO anticipates, U.S. businesses will absorb 30 percent of the import price increases by reducing their profit margins; the remaining 70 percent will be passed through to consumers by raising prices."

It goes on to say that other businesses, whose costs haven't increased, will raise prices - which is not at all 'passing along costs to consumers' but a different dynamic - and that the combined two dynamics yield the overall consumer impact equal to 95% of tariff costs:

"In addition, U.S. businesses that produce goods that compete with foreign imports will, in CBO’s assessment, increase their prices because of the decline in competition from abroad and the increased demand for tariff-free domestic goods. Those price increases are estimated to fully offset the 30 percent of price increases absorbed by U.S. businesses that import goods, so the net effect of tariffs is to raise U.S. consumer prices by the full portion of the cost of the tariffs borne domestically (95 percent)."

I think the tariffs are a big mistake but the argument I was addressing - if you tax businesses then consumers effectively pay the tax - is widespread disinformation.
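The arithmetic in the two quoted passages can be laid out explicitly. The percentages are the CBO's; how they combine is my reading of the passage:

```python
# Per $1.00 of tariff cost, as described in the CBO passages quoted above
foreign_share = 0.05      # absorbed by foreign exporters
domestic_share = 0.95     # borne domestically (the "import price increases")

importer_absorbed = 0.30 * domestic_share  # near-term margin compression
passed_through = 0.70 * domestic_share     # direct price pass-through

# Domestic competitors raise prices by an amount the CBO estimates fully
# offsets what importers absorbed, so consumers pay that portion anyway:
competitor_increase = importer_absorbed

consumer_burden = passed_through + competitor_increase
assert abs(consumer_burden - 0.95) < 1e-9  # the full domestically borne share
```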


The final quoted portion doesn't seem to agree with your final statement though?

> Those price increases are estimated to fully offset the 30 percent of price increases absorbed by U.S. businesses that import goods, so the net effect of tariffs is to raise U.S. consumer prices by the full portion of the cost of the tariffs borne domestically (95 percent)."

The idea expressed previously in your excerpts is that domestically produced US goods do increase in price (and their producers' revenues) to match their produced-abroad competitors. So things are okay from that perspective.

But what that final quotation says is that those increased revenues are 95% paid for by US consumers. In other words, they "effectively pay the tax."


Blendle [1] had this model for a while but shut it down a couple of years ago. It was nice to have the option to buy individual articles from publications that I enjoy reading occasionally but not enough to subscribe to.

[1] https://www.niemanlab.org/2023/08/the-poster-child-for-micro...


ROT13 is cheap enough that you can afford to apply it many more times. I use one million iterations to store passwords securely.
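For what it's worth, ROT13 is its own inverse, so any even iteration count (one million included) hands back the plaintext. A quick sketch:

```python
import codecs

password = "hunter2"
hashed = password
for _ in range(1_000_000):
    hashed = codecs.encode(hashed, "rot13")

# One million is even, so the "hash" is just the original password.
assert hashed == password
```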


640k oughtta be enough for anybody.


One of my favorite bits of my PhD dissertation was factoring an intractable 3-dimensional integral

\iiint f(x, y, z) dx dy dz = \int [\int g(x, y) dx]*[\int h(y, z) dz] dy

which greatly accelerated numerical integration (O(n^2) rather than O(n^3)).

My advisor was not particularly impressed and objectively I could have skipped it and let the simulations take a bit longer (quite a bit longer--this integration was done millions of times for different function parameters in an inner loop). But it was clever and all mine and I was proud of it.
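The trick generalizes: if the integrand factors as f(x, y, z) = g(x, y)·h(y, z), the x- and z-integrals can each be done once per y value. A numerical sketch with illustrative functions (not the ones from the dissertation), comparing the O(n^2) route to the O(n^3) triple sum:

```python
import numpy as np

# Hypothetical separable integrand: f(x, y, z) = g(x, y) * h(y, z)
def g(x, y):
    return np.exp(-(x - y) ** 2)

def h(y, z):
    return np.cos(y * z)

n = 100
xs = np.linspace(0.0, 1.0, n)
ys = np.linspace(0.0, 1.0, n)
zs = np.linspace(0.0, 1.0, n)
dx = dy = dz = xs[1] - xs[0]

# O(n^2) route: integrate over x and over z once per y value...
inner_x = g(xs[:, None], ys[None, :]).sum(axis=0) * dx   # shape (n,)
inner_z = h(ys[:, None], zs[None, :]).sum(axis=1) * dz   # shape (n,)

# ...then a single O(n) integral over y of the product.
fast = (inner_x * inner_z).sum() * dy

# O(n^3) reference: the full triple sum over the grid.
F = g(xs[:, None, None], ys[None, :, None]) * h(ys[None, :, None], zs[None, None, :])
slow = F.sum() * dx * dy * dz

assert abs(fast - slow) < 1e-10  # same value, far less work
```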


The premise of the singularity concept was always superhuman intelligence, so it’s not so much a parallel as a renaming of the same thing.

> In Vinge’s analysis, at some point not too far away, innovations in computer power would enable us to design computers more intelligent than we are, and these smarter computers could design computers yet smarter than themselves, and so on, the loop of computers-making-newer-computers accelerating very quickly towards unimaginable levels of intelligence.


That would never work in reality: you can't optimize algorithms beyond their computational complexity limits.

You can't multiply matrix x matrix (or vector x matrix) faster than O(N^2).

You can't iterate through an array faster than O(N).

Search & sort are sub- or near-linear, yes - but any realistic numerical simulations are O(N^3) or worse. Computational chemistry algorithms can be as hard as O(N^7).

And that's all in P class, not even NP.


https://en.wikipedia.org/wiki/Computational_complexity_of_ma...

The n in this article is the size of each dimension of the matrix, so N = n^2. The lowest known is O(N^1.175...), the most practical is O(N^1.403...), and naive is already O(N^1.5), which, you see, is less than O(N^2).
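As a concrete check on the exponents, a small generic sketch counting scalar multiplications in the schoolbook algorithm, with N meaning total entries as above:

```python
import numpy as np

def naive_matmul(A, B):
    """Schoolbook n x n matrix multiply, counting scalar multiplications."""
    n = A.shape[0]
    C = np.zeros((n, n))
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
                mults += 1
    return C, mults

n = 16
N = n * n                    # input size = number of matrix entries
A = np.random.rand(n, n)
B = np.random.rand(n, n)
C, mults = naive_matmul(A, B)

assert mults == n ** 3       # = N ** 1.5, already below O(N^2)
assert np.allclose(C, A @ B)
```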


Well, but still superlinear.


We don't need to optimize algorithms beyond their computational complexity limits to improve hardware.


Hardware is bound by even harder limits (transistor's gate thickness, speed of light, Amdahl's law, Landauer's limit and so on).


But that doesn't disprove the hypothesis that in principle you can have an effective self-improvement loop (my guess is that it would quickly turn into extremely limited gains that do not justify the expenditure).


Any such "self-improvement loop" would have a natural ceiling, though. From both algorithmic complexity and hardware limits of underlying compute substrate.

P.S. I am not arguing against, but rather agreeing with you.


The natural ceiling is the amount of compute per unit of energy. At the point you can no longer improve energy efficiency, you can still add more energy to operate more compute capacity.


Which hints that truly superintelligent AIs will consume vast amounts of energy to operate, and matter to build.


At some point they’ll hit the speed of light as a limit on how quickly they can propagate their internal state to themselves: as the brain grows larger, the mind slows down, or breaks apart into smaller units that can work faster before rejoining the bigger entity and propagating their new state.

Must feel really strange.


Realistically, the physical limits to computation are the speed of light and energy dissipation.


You're measuring speed not intelligence. It's a different metric.


It is exactly the same metric. Intelligence is not magic, be it organic or LLM-based. You still need to go through the training-set data to make any useful extrapolations about unknown inputs.


I think you mean a Poisson process rather than a Poisson distribution. The Poisson distribution is a discrete distribution on the non-negative integers. The Poisson process’s defining characteristic is that the number of points in any interval follows the Poisson distribution.

There have been a large variety of point processes explored in the literature, including some with repulsion properties that give this type of “universality” property. Perhaps unsurprisingly one way to do this is create your point process by taking the eigenvalues of a random matrix, which falls within the class of determinantal point processes [1]. Gibbs point processes are another important class.

[1] https://en.wikipedia.org/wiki/Determinantal_point_process
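To illustrate the distinction, a minimal sketch of a homogeneous Poisson process on [0, T): exponential inter-arrival gaps, with the count in the interval then following a Poisson distribution with mean rate·T.

```python
import random

def poisson_process(rate, T, rng):
    """Points of a homogeneous Poisson process on [0, T)."""
    points, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # exponential inter-arrival gaps
        if t >= T:
            return points
        points.append(t)

# Count in [0, T) is Poisson(rate * T); check the mean over many runs.
rate, T = 3.0, 10.0
rng = random.Random(0)
counts = [len(poisson_process(rate, T, rng)) for _ in range(2000)]
mean = sum(counts) / len(counts)
assert abs(mean - rate * T) < 1.0  # expected count = 30
```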


For anyone confused, the first part is an approximate transliteration into Cyrillic of the English sentence “Much like how you can/could spell English in Cyrillic.”


The legal premise of training LLMs on everything ever written is that it’s fair use. If it is fair use (which is currently being disputed in court) then the license you put on your code doesn’t matter, it can be used under fair use.

If the courts decide it’s not fair use then OpenAI et al. are going to have some issues.


Presumably the author is working on the basis that it is not fair use and wants to license accordingly.


Quite possibly. If they care a great deal about not contributing to training LLMs then they should still be aware of the fair use issue, because if the courts rule that it is fair use then there’s no putting the genie back in the bottle. Any code that they publish, under any license whatsoever, would then be fair game for training and almost certainly would be used.

