Hacker News | trebor's comments

So 5,000 qubits to crack a 50-bit key. That’s an interesting ratio. Assuming it scales linearly, 204,800 qubits would crack RSA-2048 keys. I’m curious why it scales to needing “millions” according to those researchers.
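The back-of-envelope extrapolation above looks like this (purely illustrative: it takes the comment's 5,000-qubits-for-50-bits figure at face value, and real resource estimates for RSA-2048 do not scale linearly with key size):

```python
# Naive linear extrapolation of qubit requirements.
# Assumption: qubit count grows proportionally with key size,
# which is exactly what the researchers' "millions" figure disputes.
qubits_for_50_bit = 5_000
qubits_per_bit = qubits_for_50_bit / 50      # 100 qubits per key bit
rsa_2048_estimate = int(2048 * qubits_per_bit)
print(rsa_2048_estimate)                      # 204800
```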

Fruit trees can take a very long time to mature and produce flowers/fruit. Hopefully it can pollinate itself, and isn’t a dioecious kind (one with separate male and female trees).

Worst case you can clone it and GMO it into a different sex. Wouldn't be easy or cheap, but it's definitely something that's possible.

I still believe Recall to be a very bad idea, a feature no one wanted, and a risk even as an Opt-In choice. But at least it will be harder to access.

I want to see a security researcher play with it though.

So far, I'm not a fan of Windows 11, Windows+AI, etc.


I have used and developed in WordPress since 3.2. Mullenweg is a dictator and a maverick, and I’m not convinced that he’s good for the WordPress ecosystem.

But neither are highly customized WP hosting platforms.

Revisioning, especially since the post_meta table was added, is a huge burden on the DB. I’ve seen clients pile up 50 revisions on a single post, totaling thousands of revisions and 200k post_meta entries across a site. Important enough to call disabling it by default a “cancer”? Chill out, Matt.

Revisions aren’t relevant past revision 3-5.


What database is burdened by 200k rows? That’s tiny.


It’s the excess, unaccessed content. The indexes haven’t been well optimized in MySQL (MariaDB is better).

But still. A lot of small companies only pay $20/mo for hosting …


But a database can handle tens of millions of rows with those resources.

If you’re worried about excess, why even use Wordpress? My god - serving rarely updated static content with a database? Stupid. The entire thing is excessive and wasteful.


Maybe you misunderstand the market my employer serves?

We've built sites for clients both huge and small. Our clients like WordPress because it's well supported, easy to roll out, and easy to find someone to work on. Lots of people have experience with it, from having their own blog.

Even infrequently updated content can go through a logjam of revisions. And this is the failure of WordPress's versioning model: there's no way to "check in" a revision, so you can't mark what's approved/reviewed. Instead of a "check in" that nukes the intermediate revisions, you now have 5-7 revisions where someone updated a button's text.

Add in ACF fields (which use approx. 2.5 rows in post_meta PER FIELD) and now you've got a complex page built up from lots of rows. Each "revision" is then 1 post row plus anywhere from 20 to 1,000 post_meta rows. You see the problem.
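A back-of-envelope version of that math, using the figures from the comment (the ~2.5 post_meta rows per ACF field is the rough average quoted above, not a measured constant):

```python
# Rows added to the DB per saved revision of a page:
# 1 wp_posts row plus post_meta rows for its custom fields.
def rows_per_revision(acf_fields: int, meta_rows_per_field: float = 2.5) -> int:
    post_row = 1
    meta_rows = round(acf_fields * meta_rows_per_field)
    return post_row + meta_rows

# A complex page with 50 ACF fields, saved 7 times:
per_rev = rows_per_revision(50)   # 126 rows per revision
print(per_rev * 7)                # 882 rows for one page's revision history
```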

Over time the DB bloats, and the index isn't partitioned in any way. That page a cheap dev built to query something? Runs 5x slower after a year, and the client doesn't know why.

The only reasons we use WP over other platforms are support, maintenance, and, most importantly to the client, COST.


Wait, so they added 30g of erythritol and observed negative impacts? Most of the time I see it around 7g, plus other sweeteners. It’s almost never alone now.

I bet they need to look harder at 30-45g of sugar now. Maybe uncover some of that (big-sugar-suppressed) research showing it causes heart disease.


I think this flag was added because a lot of these sex workers want to post pictures without flagging their sensitive media. Then they dog-pile replies onto all the trending tweets, and people find the adult content.

Any time that happens to me, I flag sensitive content. And then I block the user—so I don't need to deal with it.

I see this as punishing people who aren't following the terms, not gatekeeping.


I bet this is less time than my uBlock Origin rules have saved me. <_<


Uh... Post-truth has been here a while. They’ve had the capacity to fake/alter live events for decades now.


"Some will make the argument “But isn’t this simply the same problems we already deal with today?”. It is; however, the ability to produce fake content is getting exponentially cheaper while the ability to detect fake content is not improving. As long as fake content was somewhat expensive, difficult to produce, and contained detectable digital artifacts, it at least could be somewhat managed."

Yes, indeed. However, the degree to which it is getting worse is substantial, rapid, and significantly different than before.


In the 90’s, the internet was cool, but you went to the library to do research.

For about 20 years, that flipped. I think it is flipping back. For me, the eye opener was trying to diagnose a roof vent issue. The small local library has two relevant books, each with 2-3 pages of information.

Those pages were more informative than a 6 hour internet search.


Yes, somewhat a different topic, but so much information now exists in video form versus written due to the ease of video recording.

So many videos on YT that could be 2 lines of text are 5-10 min videos of someone talking about something irrelevant until we find the part where the problem is discussed.

I suppose AI might actually be a solution to this, by allowing efficient video search.


That works both ways. A short video about how to properly chop potatoes is better than a text document. They are different forms of expression.


I suspect both of you are hopelessly old and uncool like me.

It’s easier to have a machine autogenerate textual spam.

Content production costs are higher on YouTube, so there is a lower ratio of spam to content. Some of my hipper millennial friends caught on to this five years ago.

Of course, that heuristic is going to stop working in about 5 years, based on current photo generation quality. (Just have ChatGPT 10 auto gen the script and stage blocking directions for Stable Diffusion, AV edition or whatever).


This entire comment is really amusing to me. "Some of my hipper millennial friends caught on to this five years ago" has me chuckling.


I guess my question is if that actually makes things worse. The people who were likely to believe falsehoods unquestioningly probably already do. How much does the ease of making new convincing falsehoods ensnare new people vs. causing the wary to just get paranoid and unlikely to believe things without overwhelming evidence?

At some point, getting people to believe things, let alone care about them, is already something that is generally known to hit diminishing returns, mostly in terms of reach. The percentage of the population that is even following wherever you're posting your fake content isn't 100%, even if you're so convincing that it gets onto major news sources.

Post-truth always feels like it's brought up as "And then anarchy follows", but it's unclear to me that it's that different than what things were like in pre-internet society where "fake news" was just someone in your town telling you something they'd heard from someone the town over about something happening hundreds of miles away. It can be a regression, but it's unclear to me that it's necessarily worse for people, since it's not clear to me that the fact that I know about some natural disaster in Laos is good or important.


This argument around cost of fake content would be more convincing if it weren't already used countless times throughout history. Socrates saying that writing will atrophy people's memories. The priest's fear that books will replace them when it comes to preaching. Gessner and his belief that the unmanageable flood of information unleashed by the printing press will ruin society.

The social dilemma, and all those that were convinced social media would spell the end of modern society.

Instead, each of these technologies improves access to information and makes it easier for most to determine the truth via multiple sources. I'd imagine in the future there will be many AI agents that can help to summarize the many viewpoints. Just like anything, don't trust any one of them in isolation, consider many sources, and we'll be fine.


> I'd imagine in the future there will be many AI agents that can help to summarize the many viewpoints. Just like anything, don't trust any one of them in isolation, consider many sources, and we'll be fine.

It would be promising if there were any formal theories proposing any such methods of verification. However, I'm presently aware of none.

Instead, we see AI detection methods failing and AI cyber defense failing at present.

“As cybercriminals are turning to AI to create more advanced malicious tools, a separate report by the web security company Immunefi said cybersecurity experts are not having much luck with using AI to fight cybercrime”

https://decrypt.co/150899/wormgpt-fraudgpt-ai-hackers-phishi...

I will have a different opinion when we see some real solutions being proposed or implemented. If you have some references in that regard, let me know.


You mean it'll fuck up SEO? The web is unsearchable already, anyway.

And people's attention is already saturated; that's the bottleneck, not the amount of bullshit society can produce.


It sounds like you're making an argument about people's relationship to truth that pivots entirely around the unknowns presented by AI, and not around historical truths about how disinformation already works.


And the capability hasn't mattered for nearly as long. Large swaths of society have chosen to reject inconvenient truths and instead believe convenient lies. The battle's already lost. I'm not sure I can be convinced that we aren't 4th century Rome.


What were the convenient lies in 4th century Rome?


Isn't this just AI-powered catfishing? Even if that's not your _intended_ use-case, I anticipate catfishers making up >90% of all users.


Bun is a very nice runtime. I played with it for several hours, and was impressed by how snappy it feels.

