Stanford just released a 386-page report on the state of AI (twitter.com/nonmayorpete)
243 points by doener on April 4, 2023 | hide | past | favorite | 117 comments


The report homepage:

https://aiindex.stanford.edu/report/

Honestly, though, after a quick skim of the bullet points in the Twitter thread, and given the length of the thing (nearly 400 pages), I'm sceptical that it isn't mostly corporatese for execs.

PS: already submitted here:

https://news.ycombinator.com/item?id=35431057


If the top ten takeaways are OK with using incredibly vaporous, nonsensical phrasing, why would I dig any deeper?

“the number of AI incidents and controversies has increased 26 times since 2012”

OK, but how many “incidents”? Also, detectably fake deepfakes and the call-monitoring of inmates are the top examples of misuse? Naive.

“BLOOM’s training run emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco”

What does this mean? That sounds like a very small amount to me, but the conclusion drawn is that it’s a huge environmental impact. No: as I read it, for the carbon cost of decommissioning one old jet, we can have a new LLM.


Not decommissioning one jet: just one fraction of a percent of one jet ride.

Honestly seems like a good tradeoff.


The utility of effective models (which can be used many times over) likely exceeds the utility of virtually any person travelling anywhere one time. That seems like a poorly thought-out statistic to mention. It makes me wonder whether it’s been misunderstood or misinterpreted.


Total passenger CO2 per journey: 292.50 kg, according to https://applications.icao.int/icec/Home/Index

This is roughly a 400 mile car trip, or about a year of breathing.
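Putting the ICAO figure together with the report's 25x multiplier (a hedged back-of-envelope; the 180-passenger load factor is an assumption, not from the report):

```python
# Back-of-envelope CO2 comparison (all figures approximate).
flight_kg = 292.5                  # ICAO: one passenger, NYC -> SF, one way
bloom_kg = 25 * flight_kg          # the report's 25x multiplier: ~7.3 tonnes
passengers = 180                   # assumed typical transcontinental load
plane_kg = passengers * flight_kg  # whole-flight total: ~52.6 tonnes
share = bloom_kg / plane_kg        # BLOOM ~ 14% of one full flight
```

So under these assumptions one BLOOM training run is on the order of a seventh of a single full transcontinental flight.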


Those are 100% not comparable metrics.

What you breathe out is mostly what you eat, and most of the carbon there is part of a continuous carbon cycle. A part of it comes from fossil fuels, mainly transport and the energy that powers the Haber-Bosch process (the source of much of the nitrogen in your body).

The LLM training process consumes practically nothing but electricity. As such, its emissions could fairly easily be eliminated by carbon-neutral sources (nuclear, solar).

All of the CO2 from an airplane flight comes from fossil fuels, and there is no viable technology to replace that yet.


Not comparable from a climate-change perspective, but it helps put into perspective just how much CO2 that is. I disagree that it's unhelpful; context is always helpful.


> BLOOM’s training run emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco

Less than the cost to send the SF Giants to play the NY Yankees then?

I wonder how many SF to NY carbon units Stanford expends each year to send their various sports teams out to do important work?

This game is fun!


I agree! So many more possibilities too, like for example rather than simply downloading and locally storing the movie Predator once I am forced to hunt down a provider who currently has it and stream it directly each time I want to watch it. Between google, the streaming provider, the cost of all the extra networking equipment required to support this data, and the extra cost on end-user devices themselves I wonder what the carbon cost of copyright is?


They are using it as an example, but if you take all of the training across all organizations, it represents a huge carbon footprint. Also, inference isn't exactly free either. Open-sourcing your weights and making them downloadable would avoid the duplication, but on the other hand, trade secrets make you not want to release your weights, so inevitably another group will reproduce them and use more electricity.


I wonder how the carbon footprint of training every single large language model that exists today compares to the carbon footprint of a single day of global air traffic.


That question is unanswerable, so it's weird to wonder about it...


Is it? I wonder about unanswerable questions all the time.


What's answerable isn't even clear in advance most of the time, for many scientific, philosophical, and mathematical questions! Questions that are not obviously answerable often turn out to be really productive, important, or interesting, too.


I bet it's a smaller carbon footprint than dogecoin.


Execs won't read it until someone turns it into a 100-slide powerpoint and emails it to them.


I hate how correct this is. LOL


I've been told by multiple McKinsey people that execs won't read a god-damn thing unless it's, specifically, in a powerpoint deck. Typically never intended to be presented, even, often just these hideous walls of text copy-pasted into the deck then emailed around, often with tons and tons of slides.

They made it clear they weren't joking, nor exaggerating. I have no idea how things got this way, nor why. Must be some kind of filter?


To be fair, I've (tried to) read some articles that McKinsey puts out and it's just thousands upon thousands of words of rambling.

Forcing things onto a PPT forces clarity due to the form factor, not unlike a twitter thread.


We need a plugin to have ChatGPT read it and summarize it in a 5-line post...


You can upload a pdf of the report to www.chatpdf.com and query it.


Thanks for chatpdf.com, I found it very useful.


Welcome.

Check out sitegpt as well if you want to create a query bot for your website.


This is what Kagi's Universal Summarizer[0] generates[1]:

The Artificial Intelligence Index Report 2023 provides insights into the current state of AI research, development, and adoption. Key findings and implications from the report include:

1. Research and Development: The United States and China lead in cross-country collaborations in AI publications, with the number of collaborations increasing four times since 2010. China has also overtaken the European Union and the United Kingdom in AI conference publications, producing 26.2% of the world's share in 2021.

2. Technical Performance: AI models have become more advanced, with the release of text-to-image models like DALL-E 2 and Stable Diffusion, text-to-video systems like Make-A-Video, and chatbots like ChatGPT. AI systems are increasing in complexity due to advancements in hardware, data availability, and larger model sizes.

3. Technical AI Ethics: In 2022, new ethics benchmarks and diagnostic metrics were introduced to address concerns about AI fairness, bias, and transparency. However, challenges remain in steering AI models to avoid harmful outcomes.

4. The Economy: AI hiring has grown in various countries, with Hong Kong experiencing the highest growth in 2022. Private investment in AI decreased for the first time in a decade, but AI remains a topic of interest for policymakers and industry leaders.

5. Education: The United States leads in AI-related postsecondary education, with a significant number of AI courses and programs offered at universities.

6. Policy and Governance: AI-related policymaking has increased, with the United States, Spain, and the Philippines leading in AI-related laws. Legal cases involving AI highlight the challenges and complexities of AI in the courts.

7. Public Perception: Men are more likely than women to report that AI products and services make their lives easier and trust companies that use AI. Surveyed Americans are most excited about AI's potential to make life and society better (31%) and save time and increase efficiency (13%).

8. AI Skills: The top AI skills include machine learning, natural language processing, data structures, computer vision, image processing, deep learning, TensorFlow, Pandas (software), and OpenCV.

[0]: https://kagi.com/summarizer/index.html [1]: https://kagi.com/summarizer/index.html?url=https%3A%2F%2Faii...


Unrelated to TFA: every time I see numbers like 386, 486, etc. I’m instantly triggered. Even more than for powers of two. Growing up with computers really "does a number" on you.


My middle school enrollment number was 286. The number was significant because it was etched/embroidered/painted all over my belongings. A few years later they renumbered us all. I was a bit disappointed losing this number until I saw my new number - 640!


My first thought was "was the page count intentional?"


Also numbers for display resolutions (640 and 480, 800 and 600, 1024 and 768, etc)


I have 386 as last digits of my phone number. Hardly anyone notices though.


me too, actually I had the chance to pick among three and got the one ending in 386

the only people who did point it out are my best friend and his sister, who for almost a decade now has been my beloved wife


What is significant about those numbers?


386 is an especially important milestone CPU, since it was the first CPU to offer a 32-bit mode in the Wintel space.


Found the OG Mac user :)


Ha! I was actually a DOS user at the time, but I know it wasn't the only platform in existence.


Wow, I assumed those numbers were still universally recognized in tech, but I guess it makes sense that they're not anymore.

Your question makes me feel old :)


The original Intel processor that kicked off the personal computing revolution was the 8086. It was itself a variation on earlier chips, the 8008 and 8080.

It took off and spawned its own line, the 80186, 80286, etc. Eventually, the 80 was dropped and they became known as the 286, 386, 486. Even the Pentium processor was really a brand name for the 586.

Every Core i7 is an x86.


These were the designations for intel processors in the 90s.


1985 is when the 386 came out (286 in 1982, 8086/8088 was in the 70s)! 486 was 1989.

Pentium (technically 586), released in 1993, is what I think of when I think of CPUs in the 90s. “n”86 has the 1980s association.

Generational divide, here.


ah my apologies, as a child of the 90s, the 486 was just what I had in my house then.


Same thing for me. For the past 30 years it feels like I'm seeing those numbers everywhere haha


glad I wasn't the only one


The Kagi Universal Summarizer (which is generally pretty great) gave me this as a summary:

The Artificial Intelligence Index Report 2023 provides insights into the current state of AI research, development, and adoption. Key findings and implications from the report include:

1. Research and Development: The United States and China lead in cross-country collaborations in AI publications, with the number of collaborations increasing four times since 2010. China has also overtaken the European Union and the United Kingdom in AI conference publications, producing 26.2% of the world's share in 2021.

2. Technical Performance: AI models have become more advanced, with the release of text-to-image models like DALL-E 2 and Stable Diffusion, text-to-video systems like Make-A-Video, and chatbots like ChatGPT. AI systems are increasing in complexity due to advancements in hardware, data availability, and the performance of larger models.

3. Technical AI Ethics: In 2022, new ethics benchmarks and diagnostic metrics were introduced to address concerns about AI fairness, bias, and transparency. However, challenges remain in steering AI models to avoid harmful outcomes, such as toxicity, bias, and privacy violations.

4. Economy: AI hiring has grown in various countries, with Hong Kong experiencing the highest growth in 2022. Private investment in AI decreased for the first time in a decade, but AI remains a topic of interest for policymakers and industry leaders.

5. Education: The United States leads in AI-related postsecondary education, with a significant number of AI courses and programs offered at universities.

6. Policy and Governance: AI-related policymaking has increased, with the United States, Spain, and the Philippines passing the most AI-related laws in 2022. Legal cases involving AI highlight the challenges and complexities of AI in the courts.

7. Public Perception: Men are more likely than women to report that AI products and services make their lives easier and trust companies that use AI. Surveyed Americans are most excited about AI's potential to make life and society better (31%) and save time and increase efficiency (13%).

8. AI Talent: The top skills in the AI skill grouping include machine learning, natural language processing, data structures, computer vision, and deep learning, among others.


Can we really trust an AI summarizer to summarize a report on AI which may reflect poorly on AI?


Trust? It's worse than that.

I don't believe AI output. My eyes glaze over and I scroll past it. Anything that looks AI formatted is branded with disbelief and a cognizant awareness that it produces unverifiable shades-of-grey.

Are we really going to live in a world where a blackbox program that does not produce meaningfully deterministic results and cannot be examined, is regarded as a source of truth?

If AI takes in a poisoned database and spreads it, who would know? The AI leaving out vital information is just as dangerous, though we are used-to that problem.

And that's before we even hit the accuracy of its word predictions...

I don't understand why people are keen on 'being friends' with the grim reaper parrot.


As much as we can trust it to summarize anything! It doesn't, like, know that it is the subject of the paper. And either way, it doesn't care if it lives or dies!


We need them. How do we trust a car at 200 km/h, or a plane? We test, standardize, improve, and then trust them.


I wonder when South Asia will leapfrog the EU. It is insane to me still how orthogonal innovation is to European leaders.


What's interesting is that the trend in "Average Weighted Accuracy" seems to increase linearly in the past three years whereas the number of model parameters and required compute is growing exponentially. So accuracy is pretty expensive.
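One hedged way to read that: if accuracy is roughly linear in log-compute (a common scaling-law shape, and an assumption here, not a result from the report), then each fixed accuracy gain costs a constant multiplicative factor of compute:

```python
# Illustrative scaling-law shape: accuracy = a + b * log10(compute).
# The constants are invented; only the functional form matters.
a, b = 50.0, 5.0

def compute_for(accuracy: float) -> float:
    """Compute budget implied by a target accuracy under this form."""
    return 10 ** ((accuracy - a) / b)

# Every +5 accuracy points costs a constant 10x more compute.
ratio = compute_for(60.0) / compute_for(55.0)
```

That's exactly the "linear accuracy, exponential compute" pattern the comment describes.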


I mean, yes: you have around 100 trillion interconnections in your brain, likely the most complicated biological device on this planet, to produce the accuracy you achieve. We're just very power-efficient at doing that computation.


Does it come with a math co-professor?


There's a separate 387-page report for that.


I wish I were half as witty as you


Who is this absolutely hapless person, and why is this linked? He most probably parsed the research with ChatGPT, or so I hope. He managed to assemble a concoction of useless data points, piling up mistakes and biases in most of them.


Author seems to be an AI-fluencer/hustlebro.


So Gartner/McKinsey style crapola it is. Perhaps useful for AI Buzzword bingo.


Jeez thank you, one of my least favorite things on the internet is long and unnecessary twitter threads - just post the damn link!


Unfortunately, it's a rational response to Twitter's algos. They heavily downrank tweets that are just links as they want people to stay on the platform.


For me, the top takeaway:

"In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia."

It seems frustrating to work in one of these top academic AI departments, with incredibly smart people, but with so much of the cutting edge work out of reach due to both the cost and the difficulty of running large scale infrastructure.


Is the page count really the most interesting thing to say about it?


What else can you expect from a McKinsey alumni


* alumnus


Probably not. It's just that nobody can find the interesting bits, because they're buried in all those pages.

Experiment: What does ChatGPT produce when you ask it for a 300-plus-page report? Is it better or worse than the average human-written report of similar size?


The GPT architecture can't really produce a 300+ page report right now, because that doesn't even come close to fitting within its window. Of course you can just keep running it and concatenating, but I wouldn't really call that a 300+ page report. It'll either wander off topic completely once it loses the prompt, or be very repetitive because it'll have no idea whatsoever what it said before its window.

I gather that with the current architecture it becomes exponentially more difficult to grow the window, but I'm not sure if that's what a computer scientist would call "exponential" or if it was the common colloquial usage that calls x^2 "exponential growth"; I would actually be interested if anyone could clarify that. With the current expense, though, anything beyond O(n log n) among the common complexity classes isn't very feasible at the moment.
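For what it's worth, standard self-attention is quadratic rather than exponential: the score matrix is n by n, so doubling the context quadruples its cost. A minimal sketch of the naive computation (illustrative only, not how any production model implements it):

```python
import numpy as np

def naive_attention(q, k, v):
    """Single-head scaled dot-product attention. The score matrix is
    (n, n), so memory and FLOPs grow as O(n^2) in sequence length n."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (n, n): the quadratic part
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v

n, d = 512, 64
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = naive_attention(q, k, v)  # shape (512, 64)
# Doubling the context quadruples the score matrix: 1024**2 == 4 * 512**2
```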


I believe LLMs could generate reports of unlimited length. Not from a single prompt, of course: pick a topic, outline it, grow the outline with subsections, and use a running summary plus LLM-based indexing to feed in context every time you generate a subsection. If you don't want to stop there, use text-to-image to add drawings, and you have a 1000-page thing. Yes, currently it's a "thing" rather than an article or research paper; it's a simple matter of tooling and intent.


Attention used to be quadratic, but there have been linear-attention advances. In fact, GPT-4 is almost certainly using some linear attention variant, maybe FlashAttention.

Then there's this https://hazyresearch.stanford.edu/blog/2023-03-27-long-learn...

I don't think a 300 page context window is too far away


You can prompt it for an outline, and then prompt it to go section by section. It’s a lot more work than “write 300 pages!”

It’s actually more like working with a junior copy editor that you have to constantly correct, so you keep wondering what you’re even paying them for. But at $20 a month it’s worth it for that.
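The outline-then-sections workflow can be sketched roughly as follows; `complete` here is a hypothetical stand-in for whatever LLM API you'd actually call:

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; swap in a real API client."""
    return f"[generated text for: {prompt[:40]}]"

def write_long_report(topic: str, n_sections: int = 10) -> str:
    # 1. Ask for the outline first, so the overall structure survives
    #    even though each later call sees only a small context window.
    outline = [complete(f"Suggest section {i + 1} of a report on {topic}")
               for i in range(n_sections)]
    # 2. Expand one section per call, feeding back a running summary
    #    so later sections stay consistent with earlier ones.
    summary, sections = "", []
    for heading in outline:
        body = complete(f"Summary so far: {summary}\nWrite the section: {heading}")
        sections.append(body)
        summary = complete(f"Fold into the summary: {summary}\nNew text: {body}")
    return "\n\n".join(sections)
```

The running summary is what keeps the output from wandering off topic or repeating itself once earlier sections fall out of the window.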


I don't get this sentiment. Check the summary if you don't want to read everything before submitting? Has OP even opened the thing to check the first page of contents? If it's good stuff, one would keep reading and be sad for it to end, no matter if it's 5 or 5000 pages (I was sad for book series to end that were longer than that).

OP shouldn't have changed the report's title, or if this is the original title by some miracle, they should have changed it or posted a comment about why it's interesting. Presumably they did read at least part of it and have a reason for submitting it. As it is, the submission is entirely useless beyond pointing out that a certain URL exists.


Summarization by ChatGPT: The AI Index Report 2023 provides insights into the latest developments and trends in artificial intelligence. Key takeaways include:

01. Industry now leads academia in releasing significant machine learning models.
02. Traditional benchmarks are showing performance saturation, but new benchmarks are emerging.
03. AI has both positive and negative environmental impacts.
04. AI is accelerating scientific progress in various fields.
05. Incidents concerning AI misuse are rapidly rising.
06. Demand for AI-related skills is increasing across most American industrial sectors.
07. For the first time in a decade, year-over-year private investment in AI decreased in 2022.
08. While AI adoption by companies has plateaued, those that have adopted AI are experiencing cost and revenue benefits.
09. Policymaker interest in AI is growing, with more AI-related bills being passed into law.
10. Chinese citizens have the most positive view of AI products and services, while Americans are more skeptical.


First of all: since I'm not a native speaker and my firm intention is to communicate, not to be formal, I've used ChatGPT to fix my grammar errors. It even tried to mimic my voice (it started using "ya", "ugh", ...). The paragraph you're reading is my unaided English; it may be enough, but I like clear communication, and the message should be understandable. So before you downvote, keep this in mind.

Two different summaries (I can't point to them right now, but one is from ChatGPT, the other from credible software) were created by two distinct models. While it's not scientifically rigorous to claim they are significantly different, there are noticeable differences between them. I'd like to use ChatGPT to compare these summaries, as I believe its contextualization capabilities could help clarify the situation. However, some people have strong opinions against using ChatGPT and criticize each other for doing so. This resistance frustrates me and holds me back from pursuing the comparison, which is a pain point for me.


> Chinese citizens have the most positive view of AI products and services, while Americans are more skeptical.

Is there a cultural reason for this?


Americans are worried that AI will make the US more totalitarian dystopian like China.


Screw that, I'm worried that the megacorps are gonna screw this up and make it even more American dystopian.


Readers: you are invited to join me in downvoting each and every "ChatGPT summary" of technical content, in any comments section, for the next... 30 months.


What's weird is that many of the big names in AI (in CS) are not associated with this thing in any way. Maybe just Shoham?


Did they happen to release an LLM to summarize it for us?


The report isn't interested in this question but I am: If we had spent the last decade throwing this amount of resources at some other family of ML techniques, would they have shown the same improvement? Or is there some parallel universe where if we had made 500B parameter non-parametric bayesian models or something we would have gotten to a similar place?


I'm a deep learning researcher so I can speak to this a little. Until roughly 2010 neural networks were barely used and broadly considered to be less efficient and less accurate than rival techniques (decision trees, support vector machines, etc.). For decades, far more work was put into these other approaches but the amazing thing about neural networks is that, even in a relatively primitive and neglected state, they outperformed traditional techniques. AlexNet is the poster-child of this phenomenon, but look at facial recognition, speech recognition, etc. These are all fields where older approaches are just dead. Nobody is working on SVMs for facial identification or object pose estimation anymore.

It's more accurate to say that neural networks succeeded against the research darlings of the time despite being relatively ignored for decades.


Another counterfactual branch worth considering: what if the GPU innovation and market pushes from gaming and crypto had never made transformer parallelization such a great value proposition, and which alternative paths have gone unexplored as a result?


Not an expert, but intuitively, in classical statistics, adding parameters increases the danger of overfitting. So it is far from obvious (at least to me) why this works at all, let alone why bigger is obviously better.


Interestingly, according to the report, Saudi Arabia seems to believe strongly in the future of AI. I would have imagined that such a religious country wouldn't put much faith in it; artificial intelligence feels kind of like an affront to God (I am not religious, but that is what I would imagine).


Saudi Arabia believes in anything that looks like a great investment when you consider only as much as you can fit on 10 presentation slides...


did you come up with this comment out of your GPTbrain?


First of all, even with all the recent hoopla, "AI" is nothing like what I'd consider intelligent despite the name. Machines have been able to out-compute humans for a long time, but when I interpret Genesis's verse about mankind being made in God's image, I think part of it is reasoning abilities that other creatures lack (and that AI may never be able to fully replicate/comprehend in the way we understand the world), as well as concepts like emotion/empathy and the ability to enjoy people/things/experiences. Islam accepts the same verse in Genesis that Judaism & Christianity do. As someone who has been reading Asimov from an early age and who keeps an eye out for Skynet, I'm not too concerned about them taking over anytime soon, although as a software developer, I certainly believe in the possibility that a complex buggy super-calculator may wrongfully decide to launch nuclear missiles. :)


The core figures involved with AI all seem to be worried that they're actually building Physical Gods[0], so perhaps Saudi Arabia is in the race to make sure our manmade gods are suitably Quranic[1].

More likely they are just throwing money around in an attempt to make the physical embodiment of the Middle East's oil-fueled, centuries-long democratic backslide look modern and progressive. MBS loves to chuck money at flashy tech (and also Fox News).

[0] Roko's Basilisk is just Pascal's Wager with a computer program that doesn't exist yet.

[1] As interpreted by the most intolerant Wahhabist


Arabian cultures have a deep sense of geometry and math. That could be the key to human-level AI: observing the patterns in big data and biohacking models of the brain and the social information sphere.


Well, most Islamic Golden Age progress was made by Persians and Turkic peoples rather than Arabs, and none of those scholars were based in what is now Saudi Arabia, of all places.

On the other hand, Arabs were extremely good at racial name calling (calling Persians Ajami, and calling chess evil and shit).


They’re big fans until you ask it to criticize a holy book.


If you're interested in some theological debate, I actually asked a question on Islam SE on this 11 years ago: https://islam.stackexchange.com/questions/2320/is-it-haram-t...

Take note the Islam SE community mostly comes from Stack Overflow and is somewhat biased towards tech. But it was a decent discussion with some solid citations.

tldr: Imitation of humans and other living creatures is not okay, but nothing wrong with imitation of intelligence.


Very interesting, thank you


How else will you run a country where the majority are slave labor ruled by a very, very rich minority?


Exactly as it's been run until now? Having access to cheap human labor negates the urgency to adopt AI into the workforce.


I was implying AI will be used as a mechanism for control.


The interesting one is East vs West. As a child, I remember some documentary saying that robotics development was ahead in China/Japan, because the East grew up on stories like Doraemon and Akira, while the West was resistant to it because of Terminator, The Matrix, Neuromancer, etc. The idea in the West is that AI is dangerous, while the East grew up with the idea of them as friends and leaders.

This probably wasn't that true because obviously AI & robotics research is booming in the US.

But I think the data hints that consumer demand for AI and AGI might be significantly higher in China, etc. There was also the recent controversy around Midjourney banning Xi Jinping images, which suggests that they're eyeing the Chinese consumer market.


> I remember some documentary saying that robotics development was ahead in China/Japan

I also had this impression, especially after seeing demos of ASIMO. Looking back, ASIMO was always just a demo, Germany is a strong player in industrial robots, and Boston Dynamics is doing some very impressive demos. I have no idea who is/was actually ahead.


The size and speed graphs from 1950 to 2022 still trace an exponential even though they're already shown on a log scale.

There must be a word for that?


Unrelated to your question, but the tweet "AI research has been heating up for years, with no signs of slowing down" is a bit disingenuous by showing stuff happening only after 2010 (while "parameters for machine learning models" are shown from 1950 onwards).

There's been multiple AI winters, and it would be wise not to sweep them under the rug.


That would be a "double exponential function".

https://en.wikipedia.org/wiki/Double_exponential_function
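In symbols: a curve that still looks exponential on a log-scaled y-axis means that log y itself grows exponentially,

```latex
\log y(t) \approx a e^{bt}
\quad\Longrightarrow\quad
y(t) \approx e^{a e^{bt}},
```

which is exactly the double exponential form.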


super-exponential?


log log linear? :D


Seems like a great opportunity to put an LLM interface in front of this report so readers can query it...


it's probably already obsolete


Yep. I randomly picked a bit from the "technical performance" section titled "capable language models still struggle with reasoning", and it doesn't mention GPT-4.


GPT4 also struggles with reasoning.


Indeed. Should be at least 486 pages.


It would be nice if it were available in ePub format. PDFs don't render so well on sub-$400 e-readers lacking large screens.


Where’s the chat interface for this?

No one wants to read a 386 page report.

Any links for a version we could ask questions to?


> No one wants to read a 386 page report.

You don't have to. The bulk of the report homepage is the top ten takeaways of the work in bite-size format. The body of the report is (or at least should be) the evidence that supports these statements. A good conclusion should always be supported by evidence.


Thank you. In fact the Twitter thread was very good.


The Twitter summary is a bit garbage. It makes a lot of assumptions about the future, when all we really know is that recent developments represent a huge leap. Whether the trend will continue, or whether it is even a trend (whatever that means in this context), is as uncertain as any prediction. Is this a plateau? Will an even bigger development come along and change the game? Will AI shut up shop forever after some unpredictable event? Nobody knows.


Interestingly, there's no mention of how AI is nowadays being treated the way web3 was during the past couple of crypto bubbles.


Instantly becomes out of date


How much was written by AI?


>> You should be in the US if you're starting an AI company

This is the Tweet author's comment on a graph that shows the number of AI companies in different countries, rendered as a bar chart. The comment is a little cryptic: why would the number of AI companies in a country tell you where you should start one?

So I figured I'd try to understand the comment by engaging in a bit of arbitrary, ad-hoc math.

  US: 542
  Everyone else: 160 + 99 + 73 + 57 + 47 + 44 + 41 + 36 + 32 + 26 + 23 + 22 + 12 + 12 = 684
It turns out the US has fewer AI companies than the rest of the world put together!

So I guess the logic was that, "you should be in the US if you're starting an AI company" because that's where you can expect the least competition.

That sounds like solid advice!
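For what it's worth, the ad-hoc math above checks out (the counts are the ones listed in this comment, assumed to be transcribed correctly from the chart):

```python
# Counts as transcribed from the bar chart in the tweet.
us = 542
rest_of_world = [160, 99, 73, 57, 47, 44, 41, 36, 32, 26, 23, 22, 12, 12]
total_rest = sum(rest_of_world)  # 684, which exceeds the US's 542
```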


If you look at it like working on AI in the USA will provide you the best density of AI workers, researchers, and resources, it makes a lot of sense.

In terms of competition, evidently it’s working out relatively well in the USA as there are enough people there to support nearly half the AI companies on earth. It appears to be a very fertile bed for growth in this sector.

Further, most products will likely wind up online; there will not be much need for localized competition advantage outside of securing a team. The competition stage is largely a global market, online. You’re best forming your company where you can hire people and collaborate with other people in your sector. That’s ideally in the USA, so far.


I'm with you on the principle, i.e. that having the biggest number in a set (whether or not that number is a majority by itself, though here it isn't, which makes the claim even more ridiculous) doesn't imply being the best, let alone a necessity. Otherwise it would boil down to the classic "eat shit; billions of flies can't be wrong".

That being said, there are actual factors for which the US might indeed plausibly be the best location choice specifically for most AI companies in the current hype context. For example funding:

* while there is also some proportion of actually promising sustainable business models based on AI, the fact is that a very large proportion of AI companies are surfing on a hype/bubble and have somewhat of an exit strategy (e.g. being bought by some bigger player) but not a business model that would otherwise sustainably stand on its own feet.

* the US is arguably indeed the best location for most high-risk ventures in need of big funding… including and especially so for hype/bubble surfers. Nowhere else can you find that much early investment money with such a willingness to take risks and to ride a hype/bubble. For most AI companies, the US as a location means they are able to raise more at a better (for the founders) valuation.


As you say, there are advantages and disadvantages. What I don't see is how the bar chart in the tweet justifies the comment above it that "You should be in the US if you're starting an AI company". To be perfectly clear: it doesn't, and if one should move to the US to start an AI company, then the numbers of AI companies already there wouldn't really tell us why.

The error is of the same kind as counting the number of papers published by a researcher, or an institution, or an entire country, as some kind of indication for the quality of the research in those papers.

After all, if we just counted publications in AI journals, China, which is far ahead of the rest of the world (with 39.87% vs the US's 10.03%), would appear to be the leader in AI research. And that's according to Figure 1.1.11 in the report.

Here's a link to the report btw:

https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_...


https://en.wikipedia.org/wiki/Economies_of_agglomeration

I bet most of the startups in the US are concentrated in the SF/Bay Area. Would be stupid to go build something anywhere else in the US unless you already have a strong team/defined product.



