Hacker News | new | past | comments | ask | show | jobs | submit | AlexMennen's comments

What do you think "bottom 25%" and "top 25%" mean, exactly?


I think he meant the bottom 25% of income, not the bottom 25% of people.


This seems overly harsh. Few people took COVID-19 seriously on Feb 4, so by this standard almost no one has any credibility, including people who don't admit to the error. All humans are unreliable.


Here's a blog post from February 6 by Greg Cochran, one of the people mentioned. It looks pretty good in retrospect: https://westhunt.wordpress.com/2020/02/06/strategy-not-the-f...


> Few people took COVID-19 seriously on Feb 4

Plenty of people who were paying attention took it seriously.


> This implies second order logic.

No, it does not. The second incompleteness theorem is provable in first-order Peano Arithmetic.

> Of course I'm in no position to say a similar thing about Goedels incompleteness theorem, and I even referred to it's result in higher order logics, but I still doubt the relevance, as many seem to be ignorant of his former completeness theorem.

I have no idea what you're trying to say, but I assure you that people who do this kind of research are aware of the completeness theorem.


Oh, the second one comes off wrong. I did mean the parent quote:

> A formal system cannot in general prove that it is reliable

I hadn't noticed when I wrote that that "formal system" is an idiom, and an even more specific one in this context. How confusing.


I think the idea is that the machines are supposed to learn what humans want from human behavior, and then help humans get what they want their own way, rather than imitating human behavior.


This reasoning uses the self-indication assumption, which I tentatively also agree with, but there are some pretty good criticisms of it. http://www.anthropic-principle.com/preprints/olum/sia.pdf


That's just a special case of a general feature of probabilistic arguments: a small but positive fraction of the times they are used, they will give wildly misleading predictions. If we assume that everyone uses the doomsday argument with a uniform improper prior on the total number of people who will ever live, then all of them will conclude with 95% probability that they are not one of the first 5% of people to ever live (that is, that no more than 20 times the past population will be created in the future), and 5% of them will be wrong, which is exactly how probabilities are supposed to work.
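A minimal sketch of that arithmetic (my own illustration, not from the comment): in a hypothetical world where N people ever live, the person with birth rank r is among the first 5% exactly when r ≤ 0.05·N, so exactly 5% of reasoners using the argument end up wrong, no matter what N turns out to be.

```python
def fraction_wrong(total_people):
    # Each person concludes "I am not among the first 5% of all people."
    # They are wrong precisely when their birth rank is in the first 5%.
    wrong = sum(1 for rank in range(1, total_people + 1)
                if rank <= 0.05 * total_people)
    return wrong / total_people

for n in (100, 1000, 10000):
    print(n, fraction_wrong(n))  # 0.05 in every case
```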


No. It would probably pick up on the correlation, but AIXI assumes that its only interactions with the universe are through its input and output channels, so it cannot recognize that a particular part of the universe is itself. It does not know how to formulate that hypothesis.


So is 1 mort a guaranteed death or a [edit:] 1 - 1/e chance of death?


I think you mean 1-1/e (~= 1 - (1-10^-6)^(10^6)). But yes, the idea that you can just divide a statistic sampled over thousands of events linearly to get a meaningful statistic for one occurrence of the event is ridiculous.
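A quick numerical check of that figure (my own arithmetic, not from the thread): a million independent 1-in-a-million risks kill with probability 1 - (1 - 10^-6)^(10^6), which is extremely close to 1 - 1/e.

```python
import math

# Probability of dying at least once across a million independent
# 1-in-a-million risks.
p = 1 - (1 - 1e-6) ** 10**6
print(p)             # ~0.6321
print(1 - 1/math.e)  # ~0.6321, agreeing to about 6 decimal places
```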


Yes, thank you.


Neither. 1/e would require that they all be independent random events which is not true in the general case (and mathematically 1/e would only be an approximation anyway). 1 would require that they be entirely dependent, which is also not true in the general case.


2 micromorts does not mean 2 separate risks, each of which has a 1 in 1 million chance of killing you. It means a single risk that has a 1 in 500 thousand chance of killing you. Similarly, it would be absurd to define 1 mort as the combination of 1 million separate risks, each of which has a 1 in 1 million chance of killing you. Whether 1 mort is guaranteed death or 1 - 1/e chance of death (my previous mention of it being 1/e was mistaken) is a property of how we define the unit to work, not a property of how correlated 1 million separate risks happen to be. Definitions that give answers other than 1 - 1/e or 1 don't make any sense.

> mathematically 1/e would only be an approximation anyway

If we're going with the definition that treats micromorts as independent, then it would make sense to define a mort as exactly a 1 - 1/e chance of death, and a micromort as a 1 - e^(-10^(-6)) chance of death, rather than a 10^-6 chance of death, but the difference is smaller than the degree of risk that we can measure or care about, so the difference is inconsequential, and it would still make sense to describe a micromort as a 10^-6 chance of death.
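To put a number on how inconsequential that difference is (my own arithmetic, assuming the "independent risks" definition above): 1 - e^(-10^-6) falls short of 10^-6 by only about 5×10^-13.

```python
import math

# Under the independent-risks definition, one micromort would be a
# 1 - e^(-1e-6) chance of death rather than exactly 1e-6.
exact = 1 - math.exp(-1e-6)
print(exact)         # just under 1e-6
print(1e-6 - exact)  # ~5e-13, far below any measurable level of risk
```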


Moving is generally much easier for sellers, who are more likely to be large corporations, than for buyers, who are more likely to be individuals.


If there is a significant chance that a comet will hit and destroy us, it makes sense to do everything we can to prevent it from doing so. Given the stakes, .1% chance should be more than enough for us to take it seriously. It does not seem plausible that we could harm ourselves much by nuking a comet that wasn't going to hit us anyway.


It does seem plausible that we could in fact harm ourselves by attempting to launch a nuclear device out of Earth's orbit. We do have a record of failed launches, several of them recent. Once the nuclear device is out of orbit however, I agree that the harm it can do to us is negligible.

Edit: Also, it just occurred to me the possibility of deflecting a comet (which was never in our path) into our path, instead of away from it, which is really great material for a Hollywood comedy.


Good point, but the chances of a failed launch are fairly small, and the damage from accidentally detonating a large nuke on the Earth's surface, while bad, would be small compared to human extinction.

If we are sufficiently uncertain about the comet's path, we might accidentally deflect it towards us. Obviously, if this is just as likely as deflecting it away from us, it would not be worthwhile to try. But most likely we would be able to reduce the probability of a catastrophic impact even after taking into account the possibility of such an error.


We've intentionally detonated hundreds of nukes in the atmosphere, even over heavily populated cities. Sure, it was bad, but not earth-ending bad.

If a launch failed and "nuked" the launch pad, I hope we'd just move on to another continent and try again (and again).


If a billion-ton comet is about to hit our planet, a failed attempt to launch a nuke doesn't matter in the slightest.


Nuclear weapons have fallen out of the sky before. No, they won't explode -- it isn't easy to make one go boom.

But I would not be surprised if Green political parties argue that risking a few km^2 contamination is worse than a certain dinosaur killer... :-)

http://en.wikipedia.org/wiki/1966_Palomares_B-52_crash

"The non-nuclear explosives in two of the weapons detonated upon impact with the ground, resulting in the contamination of a 2-square-kilometer (490-acre) (0.78 square mile) area by plutonium. [... another which fell] into the Mediterranean Sea, was recovered intact"

