TS does not fight the math of venture capital the same way Indie.VC did. My understanding from Einar (and the reason I made an LP commitment) is that the same extreme power law of returns that characterizes traditional early-stage venture is at play in Micro-SaaS: the exits are an order of magnitude smaller but you also buy in at prices that are an order of magnitude smaller.
It may be telling that, in response to my "Seed investments follow an alpha < 2 power law" paper, Bryce (whom I have never met) posted something dismissive on Twitter, whereas Einar reached out to me to discuss how he could validate a similar hypothesis for his own investing.
Of course there is a way to broadly index among all non-negative seed investments: by broadly indexing among all seed investments.
The fraction of money-losing investments in a population will affect, for instance, whether we would expect the typical investor making five investments at random to make or lose money. But regardless of whether losers are 10%, 50%, or 90% of the investment pool, if the winners are drawing from an unbounded mean power law then broadly indexing raises an investor's expected return.
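A minimal simulation sketch of this point, using assumed parameters (a Pareto tail with alpha = 0.7 in the survival-function convention, so the winners' distribution has an infinite mean, and a 50% loser fraction): the median per-investment return of a broad portfolio beats that of a small one, because a larger portfolio is more likely to catch an outlier that dominates the sum.

```python
import random
import statistics

def pareto_return(alpha, rng):
    # Inverse-CDF draw from a Pareto with survival P(X > x) = x**(-alpha), x >= 1.
    # For alpha < 1 this distribution has an infinite mean.
    return rng.random() ** (-1.0 / alpha)

def per_investment_return(n, alpha, loser_frac, rng):
    # Average return across a portfolio of n random seed investments:
    # losers return zero, winners draw from the heavy-tailed Pareto.
    total = 0.0
    for _ in range(n):
        if rng.random() >= loser_frac:
            total += pareto_return(alpha, rng)
    return total / n

rng = random.Random(0)
alpha, loser_frac, trials = 0.7, 0.5, 2000
small = [per_investment_return(10, alpha, loser_frac, rng) for _ in range(trials)]
large = [per_investment_return(1000, alpha, loser_frac, rng) for _ in range(trials)]
print(statistics.median(small), statistics.median(large))
```

With an infinite-mean tail, the sum of n draws scales roughly like n**(1/alpha), so the per-investment return scales like n**(1/alpha - 1) and grows with portfolio size; the loser fraction just rescales it.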
Your last paragraph is a misinterpretation of this paper. You are interpreting the result as saying "Investors would benefit from broadly indexing at seed". The paper instead says "Seed investors would benefit from broadly indexing".
You can think of the AngelList investment data as being split into three roughly equal-sized groups: markdowns, markups, and no valuation updates.
The reported IRRs are actually relatively high and the return multiples (which are compounded IRRs) are relatively low. That's because there are lots of one- and two-year-old companies in the dataset and---as we show---IRRs and investment durations are negatively correlated.
HiScore author here. HiScore allows domain experts to easily create and maintain scores. It is currently being used by a major environmental non-profit and by IES, a startup that assesses the safety and sustainability of fracking wells.
Seems strange to write an article complaining about the Harvard bubble when the only reason it's getting printed in the Paris fucking Review is that it's about Harvard in the first place. People have weird/terrible/difficult college experiences all the time, but the guy who went to Iowa State never gets a similar forum to talk about himself.
A lot of people at Harvard were uncomfortable there. I wasn't. I loved the place and made great friends. It was a safe place where I could challenge myself intellectually. And the plurality of Harvard students aren't wealthy legacies; they're striver upper-middle-class kids who worked their asses off in high school.
This author sounds like a poor fit for Harvard. Nothing wrong with it. But after two years should have really looked at transferring. Sure it's "Harvard", but you can still do quite well at other schools like Berkeley (where you get to stay a year on campus at best, and then you're off to fend for yourself).
The author admitted to not being ambitious, and it really felt like he just took what was handed to him rather than seeking out what would actually click with him.
"And the plurality of Harvard students aren't wealthy legacies, they're extremely lucky middle class kids."
Need I remind you that there are a ridiculous number of kids who take ridiculous courses in high school (even college courses), get extremely high test scores, and rack up loads of extracurriculars, and still get rejected because Harvard has a quota for each school and a quota for ethnicities.
Otherwise, the Ivy League's demographics would probably come to look like those of the fair-and-balanced UC schools ...
Uncle Tom's Cabin is an awful book. First off, it's boring and damn near unreadable (it was one of the only assigned books I never made it through in college). But in a larger sense, the slaves are "heroic" and "emotionally nuanced" only in the sense that HBS makes them fulfill a racial type: sympathetic, penitent, long-suffering Christians. They're treated more as people than as property, but more as caricatures than as people.
The interesting contradiction of UTC, to me, is that it had this enormous significance to history despite being terribly written. As a modern reader, I couldn't get any emotion about the book other than it being terrible. James Baldwin trashes the book brutally but fairly in his great essay "Everybody's Protest Novel": http://www.uhu.es/antonia.dominguez/semnorteamericana/protes...
My favorite Friedman-ism is this one:
"I had lunch with a group of professors at the Hong Kong University of Science and Technology, or HKUST, who told me that this year they will be offering some 50 full scholarships for graduate students in science and technology. Major U.S. universities are sharply cutting back."
As an AI researcher, I think obstacles like "not having your robot fall over all the damn time" are a little more immediate than robots having a nuanced understanding of ethics. I can understand why this stuff is fun to think about and debate, but it's just not relevant at all to where AI is going to be for the next 50 (or 100, or probably 200) years.
I really think the stability of your robot is a completely separate issue. George W. Bush and Barack Obama both use flying robots with missiles to hunt down and kill people they don't like. Don't you think that perhaps, as these flying robots gain more and more autonomy, that discussions of ethics are actually important, and important now? 50 years is a long, long time in computer science.
I'm surprised that you are so pessimistic about your research that you think ethics won't even be relevant in the year 2205. Holy cow, you must think AI is hard.
George W. Bush and Barack Obama both use flying robots with missiles to hunt down and kill people they don't like.
This is a very good point. It's always good to be reminded that we're already living in the future.
That said, I feel like aothman is discussing real artificial intelligence, that is, an entity capable of making a conscious decision that it wants to, in this case, fire the missiles. If I had to guess, if predator drones gain the ability to "decide" for themselves whether or not to fire their missiles, it will be built on a system of complex rules, and not because they're "intelligent". Potayto, Potahto? Maybe. I'm not an AI researcher and I don't even come close to understanding human intelligence, but I feel like even if it is just a complex system of rules, it's at a much deeper level than we'll be able to simulate soon.
I'm disappointed in the number of people commenting on HN that assume AI = robotics. A true AGI will solve robotics itself whether it's already instantiated in a robotic form or not.
> AI is going to be for the next 50 (or 100, or probably 200) years.
To be clear about that "probably 200", are you saying that you believe we'll need trillions of times the processing power of the human brain in order to crack how it works, or that you believe that we've nearly reached the end of increases in processing power, for at least the next 200 years?
If I had to guess, he's saying that at the current rate of software and research progress, we won't be able to cobble together the weak and specialized subsystems that we currently call "AI" into anything more interesting for quite some time. Not an altogether uncommon belief, especially among those who specialize in robotics or other practical AI applications, because they know first-hand how hard it is to create humanlike behavior with any of the techniques we know of today.
But self improving AI is not remotely predictable based on our current progress, and really, it's not even the same field as what we call AI today: extrapolating our current progress to predict where we'll be in 50 years is like asking a bombmaker from 1935 to look at a log log plot of historical explosive power in bombs to try to predict what the maximum yield from a bomb in 1950 would be. It doesn't matter how slow the mainstream research is if someone finds a chain reaction to exploit, and it's impossible to predict when someone will successfully exploit that chain reaction.
IMO there's very good reason to believe that we're already deep into the "yellow zone" of danger here, where we have more than enough computational power to set off a self-improving chain reaction, though we don't actually know how to write that software. What we really have to worry about is that as time goes by, we creep closer to the "red zone", where we don't even need to know how to write the software because any idiot with an expensive computer can brute force the search through program space (more realistically, they would rely on evolutionary or other types of relatively unguided methods). That's exceptionally dangerous because the vast majority of self improving AIs will be hostile, and we want to make sure that the first to emerge is benevolent.
So yes, there's a lot of uncertainty here, but I think it's a mistake to say that we don't need to worry about it until it's here. By the time it's inevitable and the mainstream has started to even accept it as possible, it's probably going to be impossible to ensure that (for instance) some irresponsible government won't be the first to achieve it merely by throwing a lot of funding at the problem and doing it unsafely.
"Proverbial Stanford" coincides a great deal with "Proverbial Harvard" - both are wealthy private schools that admit the very best students and have society's bias towards the well-to-do sons of well-to-do fathers. If you're looking for a school to contrast with Harvard, Stanford is a poor choice.
Furthermore, actual Stanford isn't doing any damage to actual Harvard. The data I've found suggest that somewhere between 70% and 80%+ of undergrads admitted to both Harvard and Stanford pick Harvard.
All analogies are imperfect, but I chose Stanford for the specific reason that it's been at the forefront of perhaps the greatest semi-meritocratic* drive in modern American history: the high-tech industry. I very much meant the "proverbial" Stanford when I used Stanford in this example. I meant the Stanford of the movement Stanford has helped to shepherd -- the Stanford that exists in popular consciousness, regardless of how removed that perception may be from the reality of the school.
*I must use the "semi-" qualifier here because, as we all seem to agree, there's no such thing as a true meritocracy. The wealthy have advantages throughout life that maximize the chances of generating a meritorious CV.