Hacker News | andruby's comments

What a horrible acronym/concept that is.

It seems to be a US thing: https://en.wikipedia.org/wiki/Partial_zero-emissions_vehicle

> In California, PZEVs have their own administrative category for low-emission vehicles. The category was made in a bargain between automakers and the California Air Resources Board (CARB), so that automobile makers could delay making mandated zero-emission vehicles (ZEVs)—battery electric and fuel-cell electric vehicles.


They do indeed. See https://developers.cloudflare.com/workers-ai/models/ They seem to allow some free usage without a user account. Do they list limits anywhere?

The point they are making is that if the limit is _per device_ then using 10 devices doesn't break the rules.

The FCC (or whoever enforces this) is almost certainly just looking at power/time/location. Those 10 devices will look like 1 device.

When would the 10 radios be sufficiently spaced to count as separate devices?

A company doesn't need $55bn in cash to buy a $55bn company. It can issue new GME shares and exchange $EBAY for $GME. These are sometimes called "stock-for-stock" transactions.

Except a sudden dilution usually tanks the stock by the exact percentage it's diluting.

So GME dilutes by 20%, and the stock price immediately goes down by 20%. It's not some infinite-money hack.


Except in this case, the company also now owns eBay, which had a market cap of around $44B before the takeover bid was announced.
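As a rough sketch of that point (every number here is hypothetical), the dilution is offset by the asset the new shares buy:

```python
# Toy arithmetic for an all-stock acquisition; all figures are made up.
acquirer_cap = 100e9          # acquirer market cap before the deal
target_cap = 25e9             # target market cap
shares_before = 1e9           # acquirer shares outstanding

price_before = acquirer_cap / shares_before          # $100 per share

# Issue new shares worth the target's market cap: a 25% dilution.
new_shares = target_cap / price_before
shares_after = shares_before + new_shares

# But the combined company now owns both businesses.
combined_cap = acquirer_cap + target_cap
price_after = combined_cap / shares_after

print(price_before, price_after)  # 100.0 100.0 -- flat, ignoring any premium
```

In practice the acquirer usually pays a premium over the target's market cap, so the per-share value does drop somewhat, just not by the full dilution percentage.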

I don’t think GP was claiming it was an infinite money hack at all.


Basically a merger.

OP is just saying that PE uses the same playbook, not that this move is "private equity".

> it should do everything in its powers

That's a scary thought.

Hey Claude, why haven't you finished yet? ... Because the human I'm holding hostage hasn't finished the drawing yet.


Are there any examples one could view before downloading?

There's nothing to download: everything runs in your browser, and the photo you select is not uploaded anywhere; it stays in your browser.

The point is: what does a typical result look like, especially before loading a 2 GB model or risking a browser crash?

Model results: https://apple.github.io/ml-sharp/


For those that don't get this: it's a reference to Westworld, where the "hosts" (androids) say this sentence when they see something from the outside world that they are programmed to ignore.

Seize all motor functions.

It's not "_cease_ all motor functions"?

I thought it was freeze all motor functions!

I did a quick Ctrl+F through the season 1 .srt subtitle files, and it looks like it's usually "freeze" but sometimes "cease". E.g. S01E10 has both in different parts.
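The tallying can be sketched with a small regex over the subtitle text (a sketch; the exact phrase boundaries are assumptions):

```python
import re

# Count which verb precedes "all motor functions" in a chunk of subtitle text.
PATTERN = re.compile(r'\b(freeze|cease)\b(?=\s+all motor functions)', re.IGNORECASE)

def count_variants(text):
    counts = {"freeze": 0, "cease": 0}
    for match in PATTERN.finditer(text):
        counts[match.group(1).lower()] += 1
    return counts
```

Run over each episode's .srt contents, this gives a per-episode freeze/cease tally instead of eyeballing Ctrl+F hits.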

There might be a few (debatable) counter-examples: 37signals, Balsamiq, Zoho?

Afaik they never took any (serious) VC money.


I was referring to the case where the founders and investors sell the startup to a larger company. Of course, if they don't sell, and the company stays founder-led, the outcome is often better. I didn't know Zoho never took (serious) VC money.

We've been happy with WAL-E and now WAL-G (its successor). The streaming PITR nature of these won out over pgBackRest when we did the analysis ~9 years ago.

Are you using WAL archiving? As far as I understand, pgbackrest and Barman can also use direct streaming from the DB (same mechanism as replication), I didn't find any mention of this in the WAL-G documentation.

With WAL archiving you need to wait for a WAL segment to finish before it's backed up. With streaming backups the dead time is minimized. At least, that's as far as I understand it; I haven't gotten to try this out in practice yet.
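For reference, archiving-based backup is wired up through `archive_command`, and the segment-completion dead time can be capped with `archive_timeout` (a sketch assuming WAL-G is installed and its storage credentials are configured; the timeout value is an assumption):

```ini
# postgresql.conf
archive_mode = on
archive_command = 'wal-g wal-push %p'   # push each completed 16 MB WAL segment
archive_timeout = '60s'                 # force a segment switch at least every minute
```

Even with `archive_timeout`, anything written since the last segment switch is at risk, which is what the streaming approaches below avoid.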


WAL-G's PITR backups are insurance against data loss from erroneous data manipulation (e.g. an accidental DELETE/DROP/UPDATE). WAL-G's streaming approach (using pg_receivewal or similar) sends WAL records to backup storage continuously as they're generated, rather than waiting for a full segment to complete.
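A minimal sketch of the continuous-streaming side with stock PostgreSQL tooling (the slot name, host, user, and directory are assumptions):

```shell
# Create a replication slot so the primary retains WAL until it's been received
pg_receivewal --create-slot --slot=walg_backup -h primary.example -U replicator

# Stream WAL continuously; --synchronous flushes records as they arrive
# instead of waiting for a full 16 MB segment to complete
pg_receivewal -D /backups/wal --slot=walg_backup --synchronous \
    -h primary.example -U replicator
```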

On top of that, for availability (and minimizing deadtime), we have 2 replicas using streaming replication. If the lead PG crashes, one of the replicas is promoted to lead (and starts accepting writes), and we "only" lose the writes that haven't been sent over the streaming replication.

You can fully eliminate that window of data loss with synchronous replication (vs. the default asynchronous replication, which we use). The write slowdown (replica network round trip + a second write at the replica) isn't worth it for us.
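The synchronous option is just a config change on the primary (a sketch; the standby names are assumptions):

```ini
# postgresql.conf on the primary
synchronous_commit = on
# each commit waits for ANY one of the two named standbys to confirm the flush
synchronous_standby_names = 'ANY 1 (replica_a, replica_b)'
```

Setting `synchronous_standby_names` back to an empty string restores asynchronous replication, which is the trade-off described above.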


Are you using `wal-g wal-receive` for streaming? As far as I can tell, that command will wait for the full WAL segment before it pushes anything to storage. I don't see any way to stream WAL records continuously in WAL-G.

