It's almost funny that managers are now requesting longer documents specifically to summarize them with AI; they've turned a discipline mechanism into a reading-avoidance tool.
Most enterprise AI adoption has this same problem. Companies automate the visible stuff and miss that coordination (the memos, meetings, approvals) is where productivity actually leaks.
77 suspicious positions across 60 wallets, and 13 brand-new accounts appearing 40 hours before the browser launch. It's the first confirmed case of a major tech company firing over prediction market trades.
It's interesting that both replies under this comment are saying the exact same thing, with the exact same term ("raison d'être"... how often do you hear two random people come up with this phrase at the same time?).
It might be nothing, but it'd be funny if karma-farming bots were doing some 'reply frontrunning' across the internet.
Isn't insider trading on a prediction market only wrong to the extent the insider is violating some duty of secrecy to the company?
And isn't that just between them and their company in a case-by-case sense?
If there was some valuable-to-the-public information that the company did not care about keeping private but just hadn't bothered to make public, for whatever reason, and an insider traded on it on a prediction market, that would only benefit the public's interest in information and would not violate any duty to the company. It'd be a pure win for everyone.
It seems unfair to other traders, the way it would be in the stock market, but in prediction markets (unlike the stock market) all participants are explicitly taking on the risk that somebody else might have better access to information than they do. So it's not subverting the system in the way we have decided it does in stock markets.
A lot of commenters are getting the wrong take here by looking at this like it's a stock market where there is some society-level interest in giving participants protection from having less information than insiders. It's just a different thing.
The thing is you're still thinking of these insiders as someone who just got a juicy stock tip from a relative.
The much more serious problem is when these insiders actually have their hands on the levers that decide the outcome. It's really no different than a mobster who bets a bunch of money on an unlikely outcome and then threatens one side into throwing the match.
What possible economic benefit is there to society to allow ordinary people to bet in markets like that?
Would you really like to live in a world where "Will we nuke Iran?" is a bet you can make? Where someone in government sees how much money they could make by betting yes and then pushes the button?
This is the entire idea behind the concept of "assassination markets" - "prediction" markets on assassinations that are just thinly veiled ways to crowdsource murders by taking bets that you expect to lose against an "insider" (the killer).
> Isn't insider trading on a prediction market only wrong to the extent the insider is violating some duty of secrecy to the company?
Yes.
Prediction markets, for corruption reasons, are regulated by the CFTC. In commodities markets, actors are assumed to be making trades based on proprietary information. Hedging is the whole purpose!
> …like it's a stock market where there is some society-level interest in giving participants protection from having less information than insiders.
Ah, no!
Insider trading in the stock market is (usually) only illegal in your first case: when the person trading is violating confidentiality.
It is not about fairness.
Fairness is a poor proxy for whether specific trading is illegal.
For example:
If a company accidentally leaves a press release for a merger publicly available, I happen to guess the URL, and then I trade on it: Unfair (I have access to insider information that other market participants do not) but legal!
If I work at the company, am sent the press release to copy edit, and then trade on it: Illegal. I have a duty to the company not to trade on it.
Your comment seems to imply that trading based on material non-public information in prediction markets is always okay, which is not the case. The CFTC just issued a press release detailing some instances of improper use of nonpublic information on prediction markets: https://www.cftc.gov/PressRoom/PressReleases/9185-26
Interestingly, the CFTC objects to a political candidate trading on their own candidacy on the grounds that it is fraudulent. So it looks like they could attempt to regulate self-trading quite strictly, at least if that theory holds up after a court challenge.
Okay, then imagine you overhear it at a bar. Yes, "anyone could have" theoretically, but not actually. In either case, you have material non-public information that your counterparty in the market does not.
You got that piece of non-public information not because you are an insider. As long as the bar is not exclusive to insiders, I don't see any difference.
I think there is a society-level interest. It's very bad for the business environment if every employee of every company has monetary incentives to leak private information. It structurally encourages businesses to set up strict information silos where cross-team collaboration is hard and no employee can ever be sure of the broader context of their work.
>Research shows prediction markets are often more accurate than experts, polls, and pundits. Traders aggregate news, polls, and expert opinions, making informed trades. Their economic incentives ensure market prices adjust to reflect true odds as more knowledgeable participants join.
>If you’re an expert on a certain topic, Polymarket is your opportunity to profit from trading based on your knowledge, while improving the market’s accuracy.
You know what's a great knowledgeable participant? An insider.
I have come to the opinion that every successful tech company is basically just an arbitrage scheme to avoid regulations and extract value based on that advantage.
Airbnb for unlicensed hotels. Uber for unlicensed taxis. Amazon for whitewashing fraudulent products. Bitcoin for unlicensed securities and laundering money.
That's what happens when the majority of people don't actually support the regulations.
If people thought it was wrong to be an unlicensed airbnb or uber, they wouldn't use them. In reality, those regulations are mostly protection rackets and most people don't care about violating them.
I disagree. When you give people strong economic incentives to ignore morality, some people will. Not all, but enough to make a hash of things. In any population there will be some people who will do things they know are wrong just to get ahead.
For Airbnb landlords I'm sure the thought process goes like "I'm just one person, so I can't have enough of an impact to be a problem. And besides, I need the money." But then enough people pile on, and in aggregate they ruin the local housing market. And nobody thinks that they themselves are culpable.
Your taxi crashes because the driver skipped brake maintenance and his insurance doesn't reimburse you for your hospital costs because commercial transportation isn't covered. Sure would be nice to have some minimum requirements for taxis.
You have two parties who want to enter into a contract, and a third party unrelated to the contract that doesn't want them to, for whatever reason. Just based on contract law and common sense, the unrelated party shouldn't have standing. Now, if there are externalities to the contract that impact that unrelated party, sure, but only insofar as to get those externalities addressed.
This is not the same as a robbery which involves no contract or a willing counterparty to the robbery.
That's interpreting a failure to fight to preserve ethics as an internal rejection when it could be explained by a lack of fighting spirit, either because the fight seems impossible or the given hill not worth dying on. Another interpretation would be a comfort-oriented, avoidant, and possibly cynical culture facing a power imbalance.
This is certainly the most uncharitable way to think about it.
I see a prisoner’s dilemma where people often support regulations even if on an individual basis they would personally violate them, because they prefer living in a less chaotic society. For example, anti-dumping regulations: dumping is +EV for any given individual, but when everyone is dumping, it’s a big -EV for all.
The perfect example is speed limits: everybody thinks they're good and yet they all seem to classify all other drivers into two categories: slowpokes and maniacs.
Nobody seems to be able to agree on what a responsible set of rules is around the speed of vehicles.
That's because they are slowpokes and maniacs: on a decently flowing road, the majority of distinct cars you see are either moving significantly faster or slower than you (and the more extreme the difference, the more likely you are to see them). Cars going at a similar speed to you approach you, or you approach them, more slowly, so you'll see fewer of them.
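That sampling bias is easy to check numerically. A minimal Monte Carlo sketch, where the normal speed distribution and the encounter-rate-proportional-to-speed-gap model are assumptions, not data:

```python
import random

def observed_speed_gap(my_speed=100.0, n=100_000, seed=0):
    """Compare the average speed gap over all cars on the road with the
    average gap among the cars you actually encounter. The rate at which
    you pass (or are passed by) another car is taken to be proportional
    to the absolute speed difference, so encountered cars are skewed
    toward the extremes of the distribution."""
    rng = random.Random(seed)
    speeds = [rng.gauss(100.0, 10.0) for _ in range(n)]
    gaps = [abs(s - my_speed) for s in speeds]
    road_avg = sum(gaps) / n                        # unweighted average gap
    seen_avg = sum(g * g for g in gaps) / sum(gaps)  # encounter-rate-weighted
    return road_avg, seen_avg
```

With these assumed numbers the encounter-weighted gap comes out roughly 50% larger than the road-wide average, which is the "everyone I see is a slowpoke or a maniac" effect.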
From an economic perspective the majority of those regulations destroy economic value and those companies are unlocking value by finding clever ways around them.
No, they just shift the economic downsides to someone else so they can collect the difference. That's what I mean by arbitrage. Someone always pays the price, and now it's you and I.
That's not always true. Regulations increase the cost of transacting and make ranges of transactions non-viable, just like a tax.
So there is "dead weight loss", where transactions that would have been mutually beneficially and socially productive are eliminated by the regulation, and restored when somebody finds a loophole, restoring the individual and social benefit.
I agree: once they have bypassed regulations, they use that unique position to essentially rent-seek from their monopoly, where the rent is paid by us, the public.
Their behaviour is very rent-seeking imo, and at moments like these it's best to remind ourselves that even the father of capitalism, Adam Smith, didn't like landlords.
Had to search up some quotes from Adam Smith just now, but here's one that seems relevant (imo) to this discussion:
"[the landlord leaves the worker] with the smallest share with which the tenant can content himself without being a loser, and the landlord seldom means to leave him any more." - Adam Smith
I can't speak for all the companies listed above, but at least within the realm of social media they also want to bypass regulations, and well, we all know how that's going.
In the long term, I do feel there will be a drop in productivity, and thus a destruction of economic value, because of the lack of enforcement of such policies and these companies' reckless attitude toward them.
Many of the products listed above actually seem very rent-seeking to me (IIRC someone on HN once said that, from their personal experience talking to drivers, Uber takes at the very least a 40% cut, often more).
(This might be a little off-topic, but one thing I think about with tech regulation: Facebook reportedly could detect when a young girl took a selfie and didn't upload it, infer that she was insecure, and then show her face-beauty recommendations.
These girls could be our sisters or daughters, fwiw. Facebook profits from insecurity and rage-bait, and I would say many social media platforms are the same; it's just that the Facebook example feels so egregious that it should be a uniting point for many to agree that there's a problem indeed.
You may be right that economic value is generated by profiting from insecurity or bypassing regulations, but at what cost?
you might have been an insider working on the Apple Newton, and being enthusiastic about it you might have broken the rules and traded on your "knowledge"... and you would have lost your shirt. Same with your very knowledgeable enthusiasm about myriad other technologies. Ever wonder why Wall St doesn't show up at HN asking everybody's opinion about AI in order to leverage that info into billions?
an important element of "the wisdom of crowds" is many bits of microknowledge. How many Teslas will be sold next year is very dependent on how much the people who buy Teslas will earn next year (or how secure they will feel in their jobs) working in myriad other industries that have nothing to do with Tesla, along with the price of lithium, tires, and even ... wait for it... gasoline.
Polymarket's words you quote can just as likely refer to the wisdom of crowds. Or even, and this is the subtle part: Polymarket's insiders may believe, like you, that they are creating a market to trade on inside information, and yet they, like you, could be made wrong by the superior sum knowledge of the crowd exerting its invisible hands all together to tank your Apple Newtons.
Yes they are. Polymarket has an ad glorifying a "fictional" scenario where someone gets a job as a janitor at a video game company to bet on related events on Polymarket.
Yeah but someone has to give the money to the insider traders.
Betting and insider gambling wouldn’t work if people were educated and just didn’t gamble and so never used these platforms in the first place.
It’s an old question of whether government is responsible to protect people from themselves or should we give everyone freedom to go bankrupt in this specific way if they so desire.
I don’t know if there is a healthy way to gamble really. With drugs and substances at least there is some continuous spectrum but you either gamble your money or not.
Many, I suspect the overwhelming majority, of the markets are impossible to engage in insider trading on. So it's genuinely just an interesting way to monetize expertise. Chess is a great example. A lot of the money in that market is people turning on the latest chess engines and betting in accordance with position evals, but skilled players can see much more - like how a position the computer evaluates as a dead draw is, in reality, extremely difficult for one side to hold. So the market might give near 50% when it's perhaps more like 65/35. That's quite a large edge. There's also quite a lot of opportunity for arbitrage betting, which is by definition risk free.
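On the arbitrage point, a minimal sketch of how a two-venue arbitrage locks in profit; all prices here are hypothetical:

```python
def arbitrage_profit(yes_price: float, no_price: float, budget: float = 100.0):
    """Buy YES on one venue and NO on another. A matched pair of shares
    pays out exactly $1 whichever way the event resolves, so if the two
    prices sum to less than $1 the profit is locked in regardless of
    the outcome (ignoring fees and fill risk)."""
    total = yes_price + no_price
    if total >= 1.0:
        return None                  # no arbitrage at these prices
    pairs = budget / total           # matched YES/NO pairs we can afford
    return pairs * 1.0 - budget      # guaranteed payout minus outlay

# Hypothetical: YES at $0.55 on venue A, NO at $0.40 on venue B.
# Every $0.95 spent buys a guaranteed $1.00 - about a 5.3% return.
```

In practice fees, slippage, and the risk of only one leg filling eat into this, which is why "by definition risk free" holds only for the idealized version.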
Would you not say that somebody could equally cynically describe options trading in this way?
Prediction markets are very valuable because they provide information on issues that's generally much more accurate than alternative sources, such as polls. For instance, Polymarket predicted 94% of the results for the 2024 election a month out, including the presidential race. It can also provide more information than the news. For instance, the chances of Khamenei being out as Supreme Leader of Iran by March 31st just skyrocketed to 78%. That tells me far more than the various news sites' minute-by-minute coverage.
Gambling = investing. Buying stocks is also gambling. Share buybacks, dividends, fancy words for forking money from workers to some joe schmoe that bought a lottery ticket, i.e., a stock.
One thing worth adding: the repo market underneath all of this is roughly $12.6 trillion in daily exposures, about $700B larger than previous estimates.
Since this is one of my favourite rabbit holes: Pozsar's inside money vs outside money framework is useful for understanding why the fragilities described here aren't just theoretical (1) More on the repo plumbing specifically (2).
The auth logic was literally inverted. Blocking people it should allow, allowing people it should block.
Probably any human reviewer would catch that in seconds, but AI code generation optimizes for code that runs, not code that's correct in domain-specific ways. I wrote about this pattern recently, AI converges to plausible output but misses the reasoning that requires actual expertise: https://philippdubach.com/posts/the-impossible-backhand/
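A hypothetical sketch of the kind of inversion described: both versions run without error and look equally plausible, which is exactly why "code that runs" is not the same as "code that's correct."

```python
def is_authorized(user: str, allowlist: set[str]) -> bool:
    # BUG (the inversion described above): blocks everyone on the
    # allowlist and admits everyone else. Type-checks, runs, looks fine.
    return user not in allowlist

def is_authorized_fixed(user: str, allowlist: set[str]) -> bool:
    # Correct predicate: only allowlisted users pass.
    return user in allowlist
```

A single domain-aware test case (an allowlisted user must be allowed) catches the flip instantly; nothing about the code's shape does.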
Pretty sure I didn’t want to post that here. But then I got rate limited, and upon coming out of rate-limit jail I blindly pasted this comment wherever my page reloaded - my bad, it should have been here: https://hackernews.hn/item?id=47193047
Netflix has the distribution already; it doesn't need legacy IP. Paramount is betting that combining DC, Harry Potter, GoT, Top Gun, and Star Trek under one roof creates enough gravity to survive. Maybe, but aggregators tend to win online, and content suppliers get competed down to cost.
If they wanted the political platforms and had offered to buy them from Warner as standalones for a tenth of the cost, Warner would have snapped their hand off.
They need the IP for scale. That is what this is about.
Walk me through the argument of how they do that, or why this is worth $100 billion. Everyone is worried about them owning CNN, but CNN was dying. Their takeover of CBS was a flub. I think you could make the argument that part of the reason Trump won was the perception that liberals controlled all media. Change that perception in America, and I think our contrarian nature drives people into the arms of Democrats.
I think the perception that liberals control the media actually comes a bit from Hollywood and celebrity culture, which is more Progressive than average. People kind of conflate all that with news media (and there is genuine overlap sometimes).
But mainly it's driven by right-wing media propaganda. It was never really true, not even really a little. But even so, you can listen to hours and hours of content per day from right-wing personalities driving anti-media grievance. It's been that way for at least two decades.
To change the perception, far-right propaganda has to be countered or stopped. If the Ellisons and Weiss start meddling with CNN, it's probably not to counter far-right propaganda, but to spread even more.
The sharp results all came from pairing domain expertise with detailed AGENTS.md files. The impressive Rust output happened because someone who knows Rust was steering it. Vague prompts got mediocre output. A model on its own converges to the mean of its training data, which is why the "vibe code everything" thesis keeps not holding up: https://philippdubach.com/posts/the-impossible-backhand/
$730B pre-money for a company where each model is roughly 2x profitable on its own, but each next model costs 10x the last. The whole thing only works if scaling keeps delivering. Research (Sara Hooker et al.) is not encouraging on that front: compact models already outperform massive predecessors on downstream tasks, while scaling laws only reliably predict pre-training loss.
Wrote about both the per-model math and the scaling question:
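A toy version of the per-model math above, using only the thread's own figures (2x revenue multiple, 10x cost growth per generation; the unit first-model cost is an arbitrary normalization):

```python
def funding_gap(n_generations=4, first_cost=1.0,
                profit_multiple=2.0, cost_growth=10.0):
    """Per-generation cash position under the claimed economics: each
    model earns profit_multiple x its own training cost, but the next
    model costs cost_growth x as much to train. Returns the shortfall
    (revenue minus next model's cost) for each generation."""
    gaps = []
    cost = first_cost
    for _ in range(n_generations):
        revenue = cost * profit_multiple
        next_cost = cost * cost_growth
        gaps.append(revenue - next_cost)  # outside capital needed
        cost = next_cost
    return gaps

# Each gap is negative and 10x worse than the last: every model is
# individually profitable, yet funding the next one needs ever-larger
# outside capital unless scaling makes each model far more valuable.
```

So "2x profitable per model" and "10x cost per generation" are only compatible long-term if the revenue multiple itself grows with scale, which is the bet the valuation encodes.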
> each model is roughly 2x profitable on its own, but each next model costs 10x the last. The whole thing only works if scaling keeps delivering.
This is a decent argument, but it's not the death knell you think.
Models are getting 99% more efficient every 3 years: combined with hardware and (mostly) software upgrades, you can get the same amount of output for 99% less power.
The number of applications where AI is already "good enough" keeps growing every day. If the cost goes down 99% every three years, it doesn't take long until you can make a ton of money on those applications.
If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.
For the foreseeable future, there are MANY MANY uses of models where a company would not want to host its own models and would be GLAD to pay a 4-5x cost for someone else to host the model and hardware for them.
I'm as bullish on OpenAI being "worth" $730B as I was on Snap being worth what it IPO'd for - and it's still down about 80% from that (AFTER inflation, or ~95% adjusting for gold).
But guess what - these are MINIMUM valuations based on 50-80% margins - i.e. they're really getting about ~$30B - the rest is market value of hardware and hosting. OpenAI could be worth 80% less, and they could still make a metric fuck-ton of money selling at IPO with a $1T+ market cap to speculative morons easily...
Realistically, very rich people with high risk tolerance are saying that they think OpenAI has a MINIMUM value of ~$100B. That seems very reasonable given the risk tolerance and wealth.
When models get cheaper to run for OpenAI, they also get cheaper for everyone else. It gets commoditized. AI might be able to do more, but most people aren’t going to pay for a thing they could get for free. See the many models on Huggingface as examples of that.
And as the number of things AI is “good enough” at increases, the list of things on the frontier that people will want to pay OpenAI for shrinks. Even if OpenAI can consistently churn out PhD level math, most companies don’t care about that.
So a necessary (but not sufficient) condition for the math to work out is that frontier tasks still exist and are profitable. This is why CEOs keep hyping up AGI. But what they really want is for developers to keep paying to get AI to center a div.
> Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.
For some tasks that matters. But for a lot of tasks, "good enough but cheaper" will win out.
I'm sure there will be a market for whichever company has the best model, but just like most companies don't hire many PhD's, most companies won't feel a need for the highest end models either, above a certain level.
E.g. with the release of Sonnet 4.6, I switched a lot of my processes from Opus to Sonnet, because Sonnet 4.6 is good enough, and it means I can do more for less.
But I'm also experimenting with Kimi, Qwen, Deepseek, and others for a number of tasks, including fine-grained switching and interleaving. E.g. have a cheap but dumb model filter data or take over when a sub-task is simple enough, in order to have the smart model do less, for example.
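A minimal sketch of that cheap/smart routing idea. The per-1M-token prices are the illustrative figures quoted in this thread ($0.40 vs $25); the complexity score, threshold, and function names are assumptions for the sake of the example:

```python
CHEAP_PER_M = 0.40    # e.g. a budget model, $ per 1M output tokens
SMART_PER_M = 25.00   # e.g. a frontier model

def route_price(complexity: float, threshold: float = 0.5) -> float:
    """Hypothetical router: a cheap classifier scores each sub-task's
    complexity in [0, 1]; easy work goes to the cheap model and only
    hard work escalates to the expensive one."""
    return CHEAP_PER_M if complexity < threshold else SMART_PER_M

def blended_cost(complexities, tokens_per_task=1_000) -> float:
    """Total spend when a cheap pre-filter handles the easy tasks."""
    return sum(route_price(c) * tokens_per_task / 1_000_000
               for c in complexities)

# Ten sub-tasks, 8 simple and 2 hard: roughly $0.053 total, versus
# $0.25 if everything went to the frontier model.
```

The economics only work if the classifier is much cheaper than the savings and rarely misroutes hard tasks, which is the real engineering problem in these interleaving setups.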
Models will get smarter and cheaper. For those that are burned directly into silicon, there will be a market for old models - as the alternative is to dump that silicon in a landfill.
For models that run on general-purpose AI hardware, I don't know why the vendors would waste that resource on old models.
Who says anything about old models? What we're seeing is that as the frontier models get better, we get cheaper, better small models that leverage the advances but cost a fraction. At the same time, hardware provides more, cheaper options. Sometimes far faster options too (e.g. Cerebras).
In terms of price, I can get 1m output tokens from Deepseek for 40 cents vs. 25 dollars for Opus, and a number of models near the 1-2 dollar mark that are increasingly viable for a larger set of applications.
Providers will keep running those cheaper models as long as there's demand.
What model? GPT-4o certainly isn’t a moat for OpenAI. They need to keep training better and better models because Qwen3, Kimi K2.5, etc. are constantly nipping at their heels.
> Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.
It depends on the business. As much as I’d love to engage a PhD or an Einstein in my Verizon customer support call, it isn’t going to net the call center any value to pay for that extra compute.
> Models are getting 99% more efficient every 3 years: combined with hardware and (mostly) software upgrades, you can get the same amount of output for 99% less power.
Even if true, this still doesn't bend the curve when paying for the next model.
> If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.
If this is true, it's true for the technology overall, and not necessarily OpenAI since inference would get commoditized quickly at that point. OpenAI could continue to have a capital advantage as a public stock, but I don't think it would if the music stopped.
I would actually like to see the real math here.
The market adoption has increased a lot. The cost to serve has come down a lot per token.
Model sizes have not increased exponentially recently (the high point being the aborted GPT-4.5); most recent refinement seems to be extended training on relatively smaller models.
When you take this into account together, the relative training to inference income/cost ratio likely has actually changed dramatically.
GPT-4 came out 3 years ago and you can run comparable models for 1% of the cost nowadays. That is not 2x efficiency. That's two orders of magnitude in end-to-end compute efficiency.
you're looking at nearly the entire curve of the tech's development. that's like saying lightbulbs became 99% more energy efficient and therefore will become another 99% more energy efficient. but most techs follow an S curve.
>you're looking at nearly the entire curve of the tech's development
That's a pretty strong statement that would need some data or at least a mathematical argument to back it up. Otherwise it's like saying in the 1980s that PCs with 640kB RAM have reached their pinnacle in terms of what users can expect in real life benefits and there's no reason to keep pushing the tech.
> Models are getting 99% more efficient every 3 years: combined with hardware and (mostly) software upgrades, you can get the same amount of output for 99% less power.
This is such a poor argument for a number of reasons.
1. Three years ago is basically when the "AI race" really kicked off amongst the frontier companies. You're effectively comparing a car from the 1920s or '30s to a modern car.
2. Past performance is not an indicator of future performance. You can't just say that LLM's will grow and improve at a fixed rate for all time, that isn't how they or anything else works in the real world.
3. Since it's an open secret that companies like Anthropic and OpenAI are running their models at a loss, a static 99% cheaper every three years arc still puts these companies at a net negative position unless compute, energy and water all somehow start getting 99% cheaper every three years.
We said all the same shit about VR, dude. Even had a global pandemic show up to boost everyone's interest in the key market of telepresence. Turns out the merry go round can stop abruptly.
No. Like many of us, I never saw much value in VR. LLMs have undeniable value that is general and broad. Now, does that mean OpenAI has a moat? No, it does not.
Yes, but there's a chance that training is actually done more or less for free by companies like OpenAI. The reason being that they do a gigantic amount of inference for end users (for which they get paid), but their servers can't be constantly utilized at 100% by inference. So, if they know how to schedule things correctly (and they probably do), they can train their new model on the unutilized compute capacity. If you or I were to pay for that training, it would be billions of dollars, but for them it is just using compute that would otherwise be idle.
I was reading a paper on dark silicon and how it broke the beautiful scaling laws of the past (Moore's law/Dennard scaling). We hit a wall, innovated, and at the moment the hardware industry is thriving. To me, that means scaling the industry and riding that momentum wasn't wrong. In fact, it allowed us to be where we are today.
Why are we so opposed, in principle, to the current pre-training scaling laws? Perhaps we'll require new innovations at some point, but the momentum lets us reach heights we've never climbed before.
What makes you think this trend will continue? In a situation with finite resources (eg the number of parameters), the default is to assume things will plateau.
The main thing I see here is that prompts never touch a third-party server. If you're in a regulated industry or just don't want proprietary context hitting an API, running inference on your own hardware with encrypted p2p from any device is really cool (and useful).
(staying in userspace via tsnet without touching kernel sockets is a nice touch too.)
The entire screening infrastructure probably assumes cardiac events are a 60+ disease. Younger patients get atypical presentations, get sent home with "anxiety," and die. Metabolic syndrome in the 25-44 cohort has roughly doubled in two decades.