It is not currently exposed in JFR for JDK 26, but I agree that it would be the logical next step. Now that the underlying telemetry framework (cpuTimeUsage.hpp) is in place within HotSpot, wiring it up to JFR events would be a natural extension.
I recently wrote an extremely basic Rust web service using Axum. It had 10 direct dependencies for a total of 121 resolved dependencies. I later rewrote the service in Java using Jetty. It had 3 direct dependencies for a total of 7 resolved dependencies. Absolutely nuts.
I don't think the number of dependencies is a useful comparison metric here. The Java runtime already ships functionality that you need third-party libraries for in Rust, and that's a design choice. Rust deliberately keeps its std slim. The two languages operate under different constraints here.
Yeah, currently you have to be really careful about what you do in virtual threads. However, they are actively working on a solution to the issue with synchronized blocks. Once that's solved, virtual threads should be a lot easier to use. Unfortunately, many of us will be stuck on 21 for some time, and will need to keep worrying about synchronized blocks.
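On 21, the usual workaround is to guard blocking critical sections with a `java.util.concurrent` lock instead of `synchronized`: parking while waiting on a `ReentrantLock` releases the carrier thread, whereas blocking inside a `synchronized` block pins it. A minimal sketch (the class and field names here are made up for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

// Instead of `synchronized void increment() { count++; }`, guard the
// critical section with a ReentrantLock. A virtual thread that blocks
// on lock() can unmount from its carrier; one blocked entering a
// synchronized block on JDK 21 cannot.
public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private long count = 0;

    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }

    public long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

The lock()/try/finally/unlock() dance is noisier than `synchronized`, but it is a mechanical rewrite, which is why it became the standard advice for virtual-thread-heavy code on 21.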
The article makes it sound like the feature set has been finalized, but is that really the case? I'm struggling to find evidence for this, and the official page does not seem to indicate this at all: https://openjdk.org/projects/jdk/23/
No, the feature set is not finalized. This is just a random blog post, nothing more. Like who says "New Features Are Officially Announced"? Which source? They make it look like Java 23 is done, but it's not.
If you want to see what's coming in Java 23, you have to check https://openjdk.org/projects/jdk/23/ - the only source of truth. If you look at the schedule on that page, you will see that before Rampdown Phase One there is still time to add new JEPs. And more JEPs will be added for sure; that's how it has been done in the past.
The comments about this feature are always so tiresome. Please just scroll down and read the "Alternatives" section at the bottom. They're not idiots. They're working within backwards compatibility constraints. They can't just do it like other languages.
It's moot, anyway, as parser changes are already required by the `STR."` prefix. About the only possible defence is that one could use the same lexer, but even that's not true, because you can presumably now embed a multi-line string in a single-line string, like so:
Even with their constraints they could have made this nicer, especially around the default formatter. Always requiring an explicit processor is a classic example of Java being unnecessarily verbose. `\{` already wasn't permitted by previous Java compilers (it was an unknown escape sequence), so we could have had nicer templates for the stock formatting, like "\{foo} bar".
One of the arguments is that existing $ in strings would have to be escaped. Every other language supported $ in strings before interpolation appeared. Not Java specific.
Every other language is susceptible to SQL injection style attacks. Not Java specific.
You don't want friction on the default path to be too low, because that would inevitably increase use of the default path in places where it must not be used, e.g. causing SQL injections.
Much easier to accidentally write something that boils down to
"\{bobby}"
where it should have been
SQL."\{bobby}"
than
STR."\{bobby}"
instead of
SQL."\{bobby}"
(i admit that this argument would work better if STR was typographically more different from the name you'd inevitably use for an SQL statement processor)
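To make the footgun concrete, here is a sketch (all names and the query are made up) of what the bare-interpolation default boils down to, versus what a SQL-aware processor would do with the same value:

```java
// Hypothetical illustration: a default STR."...\{bobby}..." is effectively
// string concatenation, so attacker-controlled input becomes query syntax.
public class InjectionDemo {
    public static void main(String[] args) {
        String bobby = "Robert'); DROP TABLE students;--";

        // What the low-friction default boils down to: the embedded quote
        // closes the SQL literal and smuggles in a second statement.
        String unsafe = "INSERT INTO students (name) VALUES ('" + bobby + "')";
        System.out.println(unsafe);

        // What a SQL processor (or a plain PreparedStatement) does instead:
        // the value is bound as a parameter and never becomes query text.
        String safe = "INSERT INTO students (name) VALUES (?)";
        System.out.println(safe + "   [bound parameter: " + bobby + "]");
    }
}
```

The whole design argument is that the second shape must not require more keystrokes of discipline than the first, or people will reach for the first.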
They want to ensure the new interpolation sequence is a compiler error in non-template strings, so that the incorrect “hi \{name}” is an error instead of silently doing the wrong thing. Using \{ accomplished this because it’s an unknown escape sequence in older Java (so forbidden there), and in new Java will have the more specific error of “yo you forgot the template processor at the start of your template”.
The company I interned at back in '08 had multiple Tichu games going daily over lunch. It's where I learned to love the game. Sadly, they had a hiring freeze when I graduated in '09, and I had to move on. I still play, but I will always look back fondly on those daily games over lunch.
When I do play Tichu these days, it's actually with the designer of Haggis, Sean Ross. He's been putting out a lot of traditionally inspired games over the past few years. He just published a game called Bacon which is a climbing game based on the traditional climbing game Guan Dan (https://www.pagat.com/climbing/guan_dan.html). I'd recommend checking out both games! The hand size in Guan Dan is 27 cards, and that's not even the most in a traditional climbing game!
An updated version of Haggis will also be coming out later this year, and it will support playing up to 6 players. The 4-player partnership version is particularly good. The reprint will also include some updated rules for the 3-player game, which I like a lot too.
(I'd give you some BGG links, but it looks like it's currently down for maintenance.)
That aside, Pagat is truly a gem. One of the best sites on the internet. I have spent hours and hours there learning and trying obscure card games. Thank you, John McLeod.
most excellent. i've had a little interaction with sean, and recall some thoughts he shared in a thread about changes to haggis (in bgg). love to hear there is an update coming later this year!
something i like about my copies of the original printing are the little rule/scoring cards that were great to hand to friends learning. i would love to pick up reprints
>> How do you ship a Java library that uses native code? Python/Node/Ruby worlds know, but OpenJDK ignores the question.
>It doesn't. jmod files and the jmod tool were added in JDK 9 precisely for that (and other things, too).
I have never seen a lib package native dependencies using jmod. It's not even clear to me how this would be done. Everyone I'm familiar with bundles binaries inside the jar. Can you point to an example? I would love to learn a better way of doing this.
That's what JavaFX does (https://openjfx.io/openjfx-docs/#modular), but the problem is that Maven and Gradle don't support that well, which is why few libraries do that.
The idea is that an application is best deployed with a custom runtime created with jlink. (All Java runtimes these days are created by jlink, so every Java program uses jlink whether it knows it or not; few use it directly because, again, Maven and Gradle don't support jlink well, and so few applications use jlink well.) jmod files are jlink's input, from which it generates the image.
Anyway, build tools aside, a library can be distributed as a jmod file, consuming it with jlink places the native libraries in the right place, and that's about it.
JavaFX doesn't use jmods, not really. Try following the tutorials for getting started and see for yourself: you end up using custom JavaFX build system plugins that download jar versions of the modules, not jmods. Also some alternative JDK vendors pre-ship JavaFX in their JDK spins to avoid people having to use jlink.
I've worked with jlink extensively. The problems are greater than just Maven and Gradle, which at any rate both have quite sophisticated plugins for working with it. There was just a major error at the requirements analysis phase in how this part of Java works. Problems:
1. You can't put jmods on the module path, even though it would be easily implemented. They can only be used as inputs to jlink. But this is incompatible with how the JVM world works (not just Maven and Gradle), as it is expected that you can download libraries and place them on the class/module path in combination with a generic runtime in order to use them. Changing that would mean every project getting its own sub-JDK created at build time, which would explode build times and disk space usage (jlink is very slow). So nobody does it, and no build system supports it.
2. When native code gets involved they are platform specific, even though (again) this could have been easily avoided. Even if JVMs supported jmods just like jars, no build system properly supports non-portable jars because the whole point of a jar is that it's portable. It has to be hacked around with custom plugins, which is a very high effort and non-scalable approach.
3. jlink doesn't understand the classpath at all. It thinks every jar/jmod is a module, but in reality very few libraries are usable on the module path (some look usable until you try it and discover their module metadata is broken). So you can't actually produce a standalone app directory with a bundled JVM using jlink because every real app uses non-modularized JARs. You end up needing custom launchers and scripts that try to work out what modules are needed using jlink.
4. The goal of the module system was to increase reliability, but it actually lowered it, because there are common cases where the module system doesn't detect that some theoretically optional modules are actually needed, even for a fully modularized app. For example, apps that use jlink directly are prone to randomly broken HTTP connections and (on desktop) mysterious accessibility failures, caused by critical features being optional and loaded dynamically; they have to be forced into the image using special jlink flags. This is a manhole-sized trap to fall into, it isn't documented anywhere, and no warnings are printed. You are just expected to ship broken apps, find out the hard way what went wrong, and then fix it by hand.
At some point the JDK developers need to stop pointing the finger at build tool providers when it comes to Jigsaw adoption. It's not like Maven and Gradle are uniquely deviant. There are other build systems used in the JVM world and not one of them works the way the OpenJDK devs would like. They could contribute patches to change these things upstream, or develop their own build system, but have never done it so we end up with a world where there are ways to distribute libraries that work fine if you pretend people never moved beyond 'make' and checking jars into version control.
> You can't put jmods on the module path, even though it would be easily implemented.
Yes, it could be easily implemented, although things would work nicely even without that. You make it sound as if there would have been proper build tool support if that were the case, but JARs can be easily put on the module path and still build tools don't properly support even that yet.
> When native code gets involved they are platform specific
Well, yeah. Native code is platform specific, and that's one of its main downsides and why most libraries don't (and shouldn't) use it. But when it is used, it's used for its upsides.
> At some point the JDK developers need to stop pointing the finger at build tool providers
All of the problems you mentioned could only be solved by build tools, but we're not pointing fingers in the sense that we blame build tools for things being bad. After all, modules have been very successful at allowing us to do things like virtual threads and FFM and remove things like SecurityManager. Build tools providers can have their own priorities, just as we do. But if you want to enjoy your own modules or 3rd party modules then you'll need good support by build tools.
> They could contribute patches to change these things upstream, or develop their own build system, but have never done it
Yet. There were more urgent things to work on, but maybe not for long.
Gradle will put JARs on the module path if the app itself is a module. If JMODs were an extension to the JAR format rather than a new thing, and if there was no such thing as the module path (modules were determined by the presence of their descriptor), then nothing new would be needed in build systems and everything would just work.
No, working with modules requires the ability to place some JARs on the classpath and some on the module path, arbitrarily. Any kind of automatic decision regarding what should be placed on the classpath and what on the module path misunderstands how modules are supposed to work. The -cp and -p flags don't and aren't supposed to differentiate between different kinds of JAR contents. If they did, you'd be absolutely right that the different flags wouldn't be needed -- the JDK could have looked inside the JAR and automatically said whether it is supposed to be a module or not; the reason the flags are needed is that that's not what they mean.
A module in Java is really a different mode of resolving classes and applying various encapsulation protections. If x.jar is a JAR file, `-cp x.jar` means "apply certain rules to the resolution and access protection of the classes in the JAR" while `-p x.jar` means "apply different rules for those same classes". In most situations both `-cp x.jar` and `-p x.jar` would work for the same x.jar, applying those different rules, regardless of whether x.jar declares module-info or not. The decision of which rules need to be applied belongs to the application, not the JAR itself; being a module or not is not something intrinsic to the JAR, it's a rule chosen by the application.
It's a little like choosing whether or not to run a program in a container. You can't look at the executable and say this should run in a container or not. The decision of whether to set up a container is up to the user when configuring their setup. module-info basically means: if the user sets up a container to run me, then these are hints to help configure the dockerfile. In an even stronger sense, a module is similar to a class loader configuration; some classes may work better with some classloader configurations than with others, but ultimately the decision on the setup is not something intrinsic to the classes but to the application that loads them, and the same goes for modules.
So having the build tool or the JDK guess which rules need to apply to which classes makes as much sense as having them guess the proper heap and GC configuration the application needs -- you can have an okayish default, but for serious work the app developer needs to say what they want. The JDK makes it very easy; build tools make it very hard.
> Having to use streams in order to use map, etc., is noisy. Why weren't these APIs retrofitted to collections?
Sure, methods along the lines of the following could have been added:
public <R> List<R> map(Function<? super E, ? extends R> mapper) {
return stream().map(mapper).toList();
}
Yes, having them would make Java less verbose in the case where you only want to do a single map, filter, whatever operation on a collection, but I'm personally glad that they weren't because they produce extremely inefficient behavior when chained. So, it would add a performance footgun to save ~18 characters.
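To illustrate the footgun: with hypothetical eager map/filter methods on List, every chained stage would materialize a full intermediate list, while a stream pipeline fuses all stages into a single pass. A sketch (the eager stages below are simulated with stream()...toList(), since List has no such methods in the JDK):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class EagerVsStream {
    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4, 5);

        // Hypothetical eager style: each stage allocates and fills a
        // complete intermediate list before the next stage runs.
        List<Integer> doubled = nums.stream().map(n -> n * 2).toList();   // intermediate list #1
        List<Integer> big = doubled.stream().filter(n -> n > 4).toList(); // final list

        // Stream style: stages are fused, each element flows through the
        // whole pipeline once, and no intermediate collections exist.
        AtomicInteger visits = new AtomicInteger();
        List<Integer> fused = nums.stream()
                .map(n -> { visits.incrementAndGet(); return n * 2; })
                .filter(n -> n > 4)
                .toList();

        System.out.println(big);          // [6, 8, 10]
        System.out.println(fused);        // [6, 8, 10]
        System.out.println(visits.get()); // 5 mapping calls, one pass, zero intermediate lists
    }
}
```

For one operation on a small list the difference is noise, but a chain of four or five eager stages on a large collection allocates and copies the whole collection at every step, which is exactly the trap the API designers avoided.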
Will the new metric be exposed in JFR recordings as well?