naranha's comments | Hacker News

The only interface that works efficiently for me with LLMs is the chatbot. I'd rather copy and paste snippets into the chat box than have IDEs and other tools guess what I might want to ask the AI.

The first thing I do with these integrations is look at how I can remove them.


It's still common in IT departments to enter the domain administrator password to join a computer to a domain or to install software on a client machine. This seems insane to me: you could fake the Windows GUI in a fullscreen application and keylog the password, even from a web browser. I think AD is a relic of the 90s that should be retired.


Now do one with ECC!


... and one with 32GB/64GB/128GB ECC RAM ... and one with USB4/USB4 2.0 ... and one with 2.5/5/10Gbps Ethernet ... and one with an M.2 SSD port ... and one with 2nm lithography


If the browser has 1080 vertical pixels, the scrollbar has at most, say, 1000 possible positions. According to my napkin math*, if you scroll past 100 UUIDs per second, it would take up to ~1.7 septillion (~1 700 000 000 000 000 000 000 000) years to scroll to a UUID whose position you know, even if you hit the exact spot on the scrollbar.

* https://www.wolframalpha.com/input?i=ROUND%5B2%5E122%2F1000%...

Edit: Use 122 bits instead of 128, since UUIDv4 only has 122 random bits.
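The napkin math is easy to reproduce; a sketch with the assumptions from above (1000 distinct scrollbar positions, scrolling past 100 UUIDs per second):

```python
# Time to scroll to a known UUIDv4 via the scrollbar, assuming ~1000
# distinct scrollbar positions and scrolling past 100 UUIDs per second.
positions = 1000
uuid_space = 2 ** 122                   # UUIDv4 has 122 random bits
per_position = uuid_space // positions  # UUIDs hiding behind one position
seconds = per_position / 100
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:.2e} years")            # on the order of 1.7 septillion
```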


Not OP, but I have converted a couple of projects from Knex to Kysely recently. With TypeScript, Kysely is much better, and with kysely-codegen you can generate TypeScript types for a pre-existing schema too.


That was my impression too. Glad to have it confirmed. Thank you.


Stuff becomes legacy not because of the language, but because of outdated libraries or frameworks, or unavailable/uninterested developers. For me PHP is legacy, because I refuse to work on PHP codebases unless I find someone who's willing to do the job for me.


At least CouchDB is also append-only with vacuum, so it's maybe not completely outdated.


High performance has never been a reason to use CouchDB.


I built an internal project using CouchDB 2 as a backend in 2017 and it's still in use today. CouchDB definitely surpassed my expectations of what is possible with it. Its biggest advantages are that data sync is effortless and that it uses HTTP as its protocol, so you can communicate with it directly from a web application.


UUIDv7 will be supported in PostgreSQL 17, at which point generation should be as fast as UUIDv4; if you implement it in PL/pgSQL now, it will be about as fast as the proposed ULID algorithm.

Insert performance could be even better: IIRC, for B-tree indexes, monotonically increasing values are better than random ones, but feel free to correct me on that ;)


Commenting on this a bit late, but in case anyone reads this later too:

UUIDv7 support unfortunately didn't make it into Postgres 17, since the RFC wasn't fully finalized by the time of the feature freeze (April 8); see the discussion on pgsql-hackers:

https://www.postgresql.org/message-id/flat/ZhzFQxU0t0xk9mA_%...

So I guess we'll unfortunately have to rely on extensions or client-side generation until Postgres 18.
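Client-side generation is simple enough in the meantime. A minimal sketch of the UUIDv7 bit layout from RFC 9562 (48-bit Unix millisecond timestamp, version and variant fields, 74 random bits); not a production implementation:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Minimal UUIDv7: 48-bit Unix ms timestamp + version/variant + random."""
    ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")  # 80 random bits, 74 used
    value = (ms & ((1 << 48) - 1)) << 80          # unix_ts_ms (top 48 bits)
    value |= 0x7 << 76                            # version = 7
    value |= ((rand >> 62) & 0xFFF) << 64         # rand_a (12 bits)
    value |= 0b10 << 62                           # variant = RFC 4122
    value |= rand & ((1 << 62) - 1)               # rand_b (62 bits)
    return uuid.UUID(int=value)

print(uuid7())
```

The string form sorts by generation time at millisecond granularity, which is what keeps inserts clustered on the rightmost index page.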


In that case I don't understand why the author didn't go with UUIDv7. Existing tooling (both inside and outside the database) seems to handle it better, and there seem to be no downsides unless you expect your identifiers to be generated past the year 4147 but don't care whether they are generated past 10889 (I'd love to hear that use case; it must be interesting).
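The 10889 figure checks out for any 48-bit millisecond timestamp (both UUIDv7 and ULID use one): it wraps 2^48 ms after the Unix epoch. A quick check:

```python
# A 48-bit millisecond counter starting at the Unix epoch (1970) overflows
# after 2**48 ms; convert that to calendar years.
overflow_s = 2 ** 48 / 1000
years = overflow_s / (365.25 * 24 * 3600)
print(1970 + int(years))  # -> 10889
```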


As always with databases, it depends: for maximum insert performance you'd often go with random UUIDs so you don't get a hot page.


Hot page?

Using a monotonically increasing PK would cause pages in the index to be allocated and filled sequentially, increasing throughput.

Using random UUIDs would lead to page-splitting and partially-filled pages everywhere, negatively impacting performance and size-on-disk.


Not always, this article describes the problem: https://learn.microsoft.com/en-us/troubleshoot/sql/database-...


Exactly: with one big insert it's better to have a sequential value; for many small ones it's often better not to.

As with all databases, measure before you cut.


The way to take advantage of the bandwidth of multiple storage devices is to distribute concurrent writes across them, rather than forcing everything to commit sequentially using contended locks or rollbacks.


The DB (or any application) should not have any need to know what devices are underneath its mount point. If you’re striping across disks, that’s a device (or filesystem, for ZFS) level implementation.


idk, when I want productivity I go with Node.js these days. lodash for quick data crunching, plus pg or mariadb for database access with promises, simply beats native PHP functions. With Express you can spawn an HTTP server in under 10 lines, while with PHP you need to set up Apache/nginx or Docker.

At some point in the past PHP was the most productive tool for quick & dirty coding, but not anymore, at least for me.


I recommend checking out FrankenPHP, which lets you spin up a production PHP server with a single CLI command, or compile your PHP app into a self-contained executable binary.

I’m a contributor over there.

