CodeIsTheEnd's comments | Hacker News

This release includes a feature I added [1] to support partial foreign key updates in referential integrity triggers!

This is useful for schemas that use a denormalized tenant id across multiple tables, as might be common in a multi-tenant application:

    CREATE TABLE tenants (id serial PRIMARY KEY);
    CREATE TABLE users (
      tenant_id int REFERENCES tenants ON DELETE CASCADE,
      id serial,
      PRIMARY KEY (tenant_id, id),
    );
    CREATE TABLE posts (
        tenant_id int REFERENCES tenants ON DELETE CASCADE,
        id serial,
        author_id int,
        PRIMARY KEY (tenant_id, id),
        FOREIGN KEY (tenant_id, author_id)
          REFERENCES users ON DELETE SET NULL
    );
This schema has a problem. When you delete a user, it will try to set both the tenant_id and author_id columns on the posts table to NULL:

    INSERT INTO tenants VALUES (1);
    INSERT INTO users VALUES (1, 101);
    INSERT INTO posts VALUES (1, 201, 101);
    DELETE FROM users WHERE id = 101;
    ERROR:  null value in column "tenant_id" violates not-null constraint
    DETAIL:  Failing row contains (null, 201, null).
When we delete a user, we really only want to clear the author_id column in the posts table, and we want to leave the tenant_id column untouched. The feature I added is a small syntax extension to support doing exactly this. You can provide an explicit column list to the ON DELETE SET NULL / ON DELETE SET DEFAULT actions:

    CREATE TABLE posts (
        tenant_id int REFERENCES tenants ON DELETE CASCADE,
        id serial,
        author_id int,
        PRIMARY KEY (tenant_id, id),
        FOREIGN KEY (tenant_id, author_id)
          -- Clear only author_id, not tenant_id
          REFERENCES users ON DELETE SET NULL (author_id)
          --                                  ^^^^^^^^^^^
    );
I initially encountered this problem while converting a database to use composite primary keys in preparation for migrating to Citus [2], and it required adding custom triggers for every single foreign key we created. Now it can be handled entirely by Postgres!

[1]: https://www.postgresql.org/message-id/flat/CACqFVBZQyMYJV%3D...

[2]: https://www.citusdata.com/


Wait, did we also get trailing comma support in column lists?


Thanks for your work on that, very useful.


What happens if someone deletes a tenant?


The posts table also has a foreign key to the tenants table specified with ON DELETE CASCADE, so all the tenant's posts will be deleted. (I don't know whether Postgres makes any effort to find an optimal ordering of the referenced tables, in this case deleting the tenant's posts first, then its users, to avoid updating records that will just get deleted anyway by cascading deletes.)


Nice, thanks!


Shameless self-promotion: the homepage of plaintextsports.com is 5.2kb today [1], an in-progress WNBA game (4th quarter) is 11.2kb [2], and an extra inning MLB game is 8.8kb [3]. I wasn't aware of this size threshold, and I'm not at this level of optimization, but I'm always pleased to find more evidence of my playful claim that it's the "fastest website in the history of the internet".

[1]: https://plaintextsports.com/all/2022-08-24/

[2]: https://plaintextsports.com/wnba/2022-08-24/conn-dal

[3]: https://plaintextsports.com/mlb/2022-08-24/laa-tb


It's very small, but it's difficult to scan and painful to read. You could easily use built-in HTML structures to make it actually readable. Your site is, in my opinion, as much a deviation from the old readable web as the over-designed modern sites are.

There are lots[1] of small, "class-less" CSS libraries that would keep your site as small (or smaller, with tree-shaking in a modern build system) and it would end up much more user-friendly.

1. https://css-tricks.com/no-class-css-frameworks/


The Light Mode toggle pretty much fixed it for me. Dark Mode on this was harsh and difficult to scan/read.


I found it easy to read on my phone in light mode. It's still easy to skim in dark mode, but the losing team's text is too dark and I have to focus to read it.


It looks like it is designed to be a sidebar.


or at least a bigger font-size


It has a pleasing teletext aesthetic. [1]

[1] https://en.wikipedia.org/wiki/Teletext


Exactly where my head went too. I immediately had memories of Ceefax scrolling.


I check local scores on your site all the time, it rocks!


I don't watch sports, but I love how this site looks! It reminds me of wttr.in a bit.


> I'm not at this level of optimization

I think you are! It's less work to keep it simple than to make it heavy and then try to pare it back.


I wish I were into sports, I'd definitely use this!

It fits nicely into the smol web


Fantastic. I’ve finally found a tolerable sports site.


Great use of `white-space: pre-wrap;`


so good! added to my bookmarks for future visits.


I had never heard of GROUP BY CUBE either! It looks like it's part of a family of special GROUP BY operators—GROUPING SETS, CUBE, and ROLLUP—that basically issue the same query multiple times with different GROUP BY expressions and UNION the results together.

Using GROUP BY CUBE(a, b, c, ...) creates GROUP BY expressions for every element in the power set of {a, b, c, ...}, so GROUP BY CUBE(a, b) does separate GROUP BYs for (a, b), (a), (b) and ().

It's like SQL's version of a pivot table, returning aggregations of data filtered along multiple dimensions, and then also the aggregations of those aggregations.

It seems like it's well supported by Postgres [1], SQL Server [2] and Oracle [3], but MySQL only has partial support for ROLLUP with a different syntax [4].

[1]: https://www.postgresql.org/docs/current/queries-table-expres...

[2]: https://docs.microsoft.com/en-us/sql/t-sql/queries/select-gr...

[3]: https://oracle-base.com/articles/misc/rollup-cube-grouping-f...

[4]: https://dev.mysql.com/doc/refman/8.0/en/group-by-modifiers.h...
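Since MySQL (and SQLite) lack CUBE, the "UNION the results together" framing above can be made concrete: CUBE(a, b) is just the union of a GROUP BY over each subset of {a, b}. A quick sketch with Python's sqlite3 module, spelling out the expansion by hand (table and column names are invented for the demo):

```python
import sqlite3

# CUBE(region, product) == GROUP BY over every subset of the columns:
# (region, product), (region), (product), and the empty grouping ().
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount INT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("east", "widget", 10), ("east", "gadget", 20), ("west", "widget", 5)],
)
rows = conn.execute("""
    SELECT region, product, SUM(amount) FROM sales GROUP BY region, product
    UNION ALL
    SELECT region, NULL,    SUM(amount) FROM sales GROUP BY region
    UNION ALL
    SELECT NULL,   product, SUM(amount) FROM sales GROUP BY product
    UNION ALL
    SELECT NULL,   NULL,    SUM(amount) FROM sales
""").fetchall()
# 3 + 2 + 2 + 1 = 8 rows; NULLs mark the "rolled up" columns,
# and (NULL, NULL, 35) is the grand total.
```

On a database with CUBE support, `GROUP BY CUBE(region, product)` produces the same result set in one pass.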


I would gladly buy a book of "SQL Recipes" ranging from beginner-level to advanced stuff that uses features like this, ideally with coverage of at least a few popular database systems, but at minimum Postgres.

Is there such a book?


In fact, Yugabyte is giving it away for free - https://downloads.yugabyte.com/marketing-assets/O-Reilly-SQL...


This is great! Thanks for the link. :)


Joe Celko has a number of SQL for Smarties books I've been meaning to look through.


These are phenomenal books. Your understanding of SQL as both a language and as a tool will be transformed.

A bit dated in places, but that also is helpful, as you learn how a technique works instead of just relying on a vendor-specific extension.


Yes, PostgreSQL (9 and later), SQL Server (2008 and later), Oracle, SQL Anywhere and DB2 support grouping sets. All have had it for longer than a decade.

Some also support the MySQL/MariaDB WITH ROLLUP syntax introduced in 2018.


Not a pivot table equivalent. Most useful for calculating multiple related aggregates at once for reporting purposes, but ROLLUP doesn't substitute values for columns, i.e. it doesn't pivot results on an axis.

MS SQL Server has native pivot support, and Postgres has a limited emulation of it through the crosstab(...) function. https://stackoverflow.com/a/11751905 https://www.postgresql.org/docs/current/tablefunc.html

For folks just learning about ROLLUP et al, I highly recommend this comparison chart for an overview of major features offered by modern relational databases. https://www.sql-workbench.eu/dbms_comparison.html

There's a whole constellation of advanced features out there that arguably most application developers are largely unaware of. (Which explains why most app devs still treat relational databases like dumb bit buckets at the far end of their ORMs.)
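For databases without native PIVOT or crosstab(), conditional aggregation is a portable fallback: each output column is an aggregate over a CASE filter. A sketch using Python's sqlite3 module (table and column names are invented for the demo):

```python
import sqlite3

# Pivot "one row per (student, subject)" into "one row per student,
# one column per subject" using SUM over CASE filters.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (student TEXT, subject TEXT, score INT)")
conn.executemany(
    "INSERT INTO scores VALUES (?, ?, ?)",
    [("ann", "math", 90), ("ann", "art", 80), ("bob", "math", 70)],
)
pivoted = conn.execute("""
    SELECT student,
           SUM(CASE WHEN subject = 'math' THEN score END) AS math,
           SUM(CASE WHEN subject = 'art'  THEN score END) AS art
    FROM scores
    GROUP BY student
    ORDER BY student
""").fetchall()
# [('ann', 90, 80), ('bob', 70, None)] -- missing cells come out NULL
```

The downside versus true PIVOT is that the output columns must be known (and spelled out) at query-writing time.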


I should amend for clarity: GROUPING SETS are really useful for speeding up complex report generation.


I had a situation recently where I had a huge amount of data stored in a MariaDB database and I wanted to create a dashboard where users could interactively filter subsets and view the data. The naive solution of computing the aggregate statistics directly based on the users' filter parameters was too slow, most of the aggregation needed to be done ahead of time and cached. The website's backend code was a spaghetti house of horrors so I wanted to do as much as possible in the DB. (The first time in my career I chose to write more SQL rather than code)

If I had a fancy DB I could use CUBE or GROUPING SETS and MATERIALIZED VIEWs to easily pre-calculate statistics for every combination of filter parameters that automatically get updated when the source data changed. But I had MariaDB so I made do. I ended up with something like this:

    SELECT ... SUM(ABS(r.ilength)) AS distance, COUNT(*) AS intervals FROM r
    GROUP BY average_retro_bucket, customer, `year`, lane_type, material_type, state, county, district WITH ROLLUP
    HAVING average_retro_bucket IS NOT NULL AND customer IS NOT NULL;
"The WITH ROLLUP modifier adds extra rows to the resultset that represent super-aggregate summaries. The super-aggregated column is represented by a NULL value. Multiple aggregates over different columns will be added if there are multiple GROUP BY columns."

So you can query like this to get stats for all districts in CA->Mendocino county:

    SELECT * FROM stats_table WHERE state = 'CA' AND county = 'Mendocino' AND district IS NULL
or like this to get a single aggregate of all the counties in CA put together:

    SELECT * FROM stats_table WHERE state = 'CA' AND county IS NULL AND district IS NULL
However, unlike CUBE, WITH ROLLUP doesn't create aggregate result sets for every combination of grouping columns. If one grouping column is a NULL aggregate, all the following ones are too. So if you want to query all the years put together but only in CA, you can't do:

    SELECT * FROM stats_table WHERE year IS NULL AND state = 'CA'
If `year` is null, all the following columns are as well. The solution was to manually implement wildcards before the last filtered group column by combining the rows together in the backend.
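The prefix-only behavior described above is easy to see by generating the grouping sets directly; a small pure-Python sketch (column names borrowed from the example):

```python
from itertools import combinations

# ROLLUP(year, state, county) aggregates only over prefixes of the
# column list, while CUBE would aggregate over every subset. This is
# why "year IS NULL AND state = 'CA'" has no precomputed row under
# WITH ROLLUP: (state, county) is not a prefix of (year, state, county).
cols = ("year", "state", "county")

rollup_sets = [cols[:i] for i in range(len(cols), -1, -1)]
# [('year', 'state', 'county'), ('year', 'state'), ('year',), ()]

cube_sets = [s for r in range(len(cols), -1, -1)
             for s in combinations(cols, r)]
# all 2**3 == 8 subsets
```

So ROLLUP over n columns adds n + 1 grouping sets, versus 2**n for CUBE, which is also why ROLLUP output stays manageable for wide groupings.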

I worked around not having materialized views by creating an EVENT that would re-create the stats tables every night. The stats don't really need to be real-time. Re-writing the multiple-GB statistics tables every night will wear out the SSDs in 20 years or so, oh well.


The most common constraint is whether the DB-as-a-service offerings support a given extension, since they don't support installing custom ones. Naturally, choosing to support an extension across a fleet of hundreds of thousands of instances (running dozens of different minor versions) isn't a decision made lightly, so it can take a while for new extensions to be supported.

Supported extensions on AWS RDS: https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLRelea...

Supported extensions on Google Cloud SQL: https://cloud.google.com/sql/docs/postgres/extensions

Supported extensions on Azure Database: https://docs.microsoft.com/en-us/azure/postgresql/single-ser...


Shameless plug (StackGres team member here), but StackGres possibly has the largest selection of ready-to-use Postgres extensions [1].

Give it a quick try on any Kubernetes cluster, like k3s on your laptop (one-command install), and install any extension from the Web Console or with one line in the SGCluster YAML.

[1]: https://stackgres.io/extensions/


If anyone's looking for a great long-form article about calculating digits of Pi from the good ol' days, this 1992 New Yorker profile of the Chudnovsky brothers (creators of the algorithm used by Google here!) is fantastic:

https://www.newyorker.com/magazine/1992/03/02/the-mountains-...


Reading that, it's hard not to see the parallels to Darren Aronofsky's movie Pi. Building a supercomputer in an apartment to find patterns in Pi that have some higher meaning, the computer constantly breaking, etc.


Dammit I was going to post this, but you beat me to it!


Very similar to a game posted a couple months ago:

https://hackernews.hn/item?id=30956775 (368 points, 124 comments)


I know; the hard level, however, is novel. I started prototyping this game several weeks ago, but Shubham beat me to the punch.


Thank you! (I made plaintextsports!)

On my to-do list is making a personalized page where you can select exactly which teams you want to follow, something closer to what dpeck wants, but I've been having trouble finding the time...


This would be awesome! I love plaintext sports and the minimalist aesthetics of it.

Curious, it seems college baseball scores are generally not as easy to find as others. Do they tend to be behind more restrictive APIs, or is it more just that there’s so little interest in it vs MLB?


For college football and basketball I just use some JSON endpoints that the ncaa.com frontend hits, and they're actually very good. Literally all I had to do to support Women's basketball in addition to Men's was change "men" to "women" in the URL. I haven't checked the data for baseball, but I'm sure it's more than adequate. That being said, I personally don't have any interest in college baseball, and I don't think it has a huge following, so I'm not going to invest any time into it, especially when there are higher profile leagues that I still haven't added yet (specifically, all of European football).


Completely understand that, and you're right about the small following. I like the high-energy (for baseball, that is), offense-heavy style of play, but I know I'm an outlier vs. the size of the MLB fan base.

Great site and wish you much continued success with it!


Oh hey! We talked over Twitter once about the site a few weeks ago, had no clue you were on HN :)


But you only need to write the destructor once.


Fair point, but on the other hand, the destructor has to work for all contexts. The usual argument is that the creator of the class has the best vision for what should be in the destructor, but I don't agree with that either.


You can use `yq` to copy a jq-style path to currently focused node to your clipboard! (I’m the author of jless.)

There are a couple other copy variants as well:

https://jless.io/user-guide.html#copying


Oh fantastic, thank you! And thank you for jless!

I was running 0.7.1 rather than 0.8.0, so will upgrade now

Edit: Works great!


TIL: #method(:method_name) in Ruby

I'm a big fan of Ruby, and feel like I know the language pretty well, but, my goodness, I don't know how I've never come across this one before. THIS is something that feels like magic!!

    [1]> 3.method(:days).source_location
    => [".../lib/active_support/core_ext/numeric/time.rb", 37]

    [2]> puts 3.method(:days).comment
    # Returns a Duration instance matching the number of days provided.
    #
    #   2.days # => 2 days
    => nil

    [3]> puts 3.method(:days).source
      def days
        ActiveSupport::Duration.days(self)
      end
    => nil
https://ruby-doc.org/core-3.1.1/Method.html
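Worth noting that Method objects are first-class in plain Ruby too; `source_location`, `arity`, `owner`, and `call` all work without Rails. A minimal sketch (class and method names invented for the demo):

```ruby
# Grab a bound Method object with #method(:name), then introspect
# or invoke it like any other value.
class Greeter
  def greet(name)
    "hello, #{name}"
  end
end

m = Greeter.new.method(:greet)
puts m.owner          # => Greeter
puts m.arity          # => 1
puts m.call("world")  # => hello, world

# Where the method was defined: [file path, line number]
file, line = m.source_location
```

(`Method#source` and `Method#comment` in the transcript above are extras layered on top of `source_location`; they aren't part of core Ruby's Method class.)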

