Hacker News | past | comments | ask | show | jobs | submit | Mindless2112's comments



It supports linking two Android devices.


This is huge. There's been a 3rd-party Signal library for this for years -- and for some reason I can't determine, the developers have opted NOT to do this.


Yeah, the Signal team's roadmap seems very strange to me as an outsider. There's some low-hanging fruit they just seem to refuse to pick.

And given that Molly could fix it in this case, it can't have been that hard to fix.


This is why molly.im was a lifesaver for me. I was trying to move a family member from Viber to Signal and ran into the annoying roadblock of not being able to link an Android tablet to an Android phone the way Viber can -- but Molly does it fine.


If you use Signal on your Android phone, can you link with Molly on another Android device (tablet) without Signal complaining?


I just tried it on a 2nd phone and it seems to work.


I got logged out of Signal after setting up Molly


Now reading this on the GitHub page:

"If you wish to use the same phone number for both Molly and Signal, you must register Molly as a linked device. Registering the same number independently on both apps will result in only the most recently registered app staying active, while the other will go offline."


Yeah, pretty sure that's what I and the other commenter meant: a linked device, like Signal on Desktop or Signal on iPad. Linking just wasn't available on Signal for Android for some reason.

Specifically, I'm using Signal as the main device, with Molly as the linked device on the 2nd phone.


There was a time when we could say "our greenhouse gas emissions are nothing compared to regular biological processes," and yet here we are.


> “The outbound and cross-bound DDoS attacks can be just as disruptive as the inbound stuff,” Dobbin said. “We’re now in a situation where ISPs are routinely seeing terabit-per-second plus outbound attacks from their networks that can cause operational problems.”

ISPs are starting to feel the pain, so perhaps in the near future they will do something about it.


Perhaps, or perhaps not. Maybe if we held them accountable they would?


I, too, am jealous of China's high speed railroads. However, on the whole, China has overbuilt their infrastructure, and that may not look so smart in 40-50 years when the maintenance bills start coming due.


Is that factually true? Some routes that I'm personally aware of are constantly overbooked when it comes to rail. Some might be overbuilt, I guess, but time will tell. I'll agree on some of the malls, though those are more private ventures than government-led initiatives.


So, perhaps 2020s China ~ 1950s US demographics. The bridges that recently collapsed in the US (the 2024 Baltimore Francis Scott Key Bridge and the 2007 Minneapolis I-35W Mississippi River Bridge) were built in 1972-1977 and 1964, respectively.

No one has yet compared Chinese construction times/costs to those of the replacement Baltimore Francis Scott Key Bridge: cost ~$2B, estimated completion October 2028. It will have 600 ft bridge towers, a 1,600 ft main span (increased from 1,209 ft), and a total span length of 3,300 ft, with improved pier protection. Surprised they didn't add a freight rail link.


Perhaps. One would hope that ability to build would correlate with ability to maintain, so that nothing falls into disrepair - but we'll have to see.


The Freestyle Pro is almost a good keyboard. The Esc and function keys are all offset to the left by one key compared to a standard layout, which drove me nuts. I have a Freestyle Edge RGB now, which I like much better. (Though I replaced the wrist rests with some from Goldtouch.)


It's two different teams inside Google. Some part of the Chrome team is trying to quash JPEG XL.


Sure, but if it becomes political I expect the Chrome team to fully quash the JPEG XL team to hurt Firefox and JPEG XL in one go.


Other than Jon at Cloudinary, everyone involved with JXL development, from creation of the standard to the libjxl library, works at Google Research in Zurich. The Chrome team in California has zero authority over them. They've also made a lot of stuff that's in Chrome, like Lossless WebP, Brotli, WOFF, the Highway SIMD library (actually created for libjxl and later spun off).


It's more likely related to security; image formats are a huge attack surface for browsers, and they are hard to remove once added.

JPEG XL was written in C++ in a completely different part of Google, without any of the safety-oriented Wuffs-style code, and the Chrome team has probably had its share of trouble with half-baked compression formats (WebP).


I'd argue the thread up through the comment you are replying to is fact-free gossiping. I was wondering if it was an invitation to repeat the fact-free gossip, but the comment doesn't read that way. It reads to me as more exasperated -- so exasperated they're willing to speak publicly and establish facts.

My $0.02, since the gap here on perception of the situation fascinates me:

JPEG XL as a technical project was a real nightmare, I am not surprised at all to find Mozilla is waiting for a real decoder.

If you get _any_ FAANG engineer involved in this mess a beer || truth serum, they'll have 0 idea why this has so much mindshare, modulo that it sounds like something familiar (JPEG) and people invented nonsense like "Chrome want[s] to kill it" while it has the attention of an absurd number of engineers to get it into shipping shape.

(surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)


> JPEG XL as a technical project was a real nightmare

Why?

> (surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)

There is no waiting on Chrome involved in: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393


> (surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)

The fuck are you talking about? The jxl-rs library Firefox is waiting on is developed by mostly the exact same people who made libjxl which you say sucks so much.

In any case, JXL obviously has mindshare due to the features it has as a format, not the merits of the reference decoder.


> they'll have 0 idea why this has so much mindshare

Considering the amount of storage all of these companies are likely allocating to storing jpegs + the bandwidth of it all - maybe the instant file size wins?


Disk and bandwidth costs of JPEGs are almost certainly negligible in the era of streaming video. The biggest selling point is probably client-side latency from downloading the file.

We barely even have movement to WebP & AVIF; if this were a critical issue, I would expect a lot more movement on that front, since those formats already exist. From what I understand, AVIF gives better compression (except for lossless) and has better decoding speed than JXL anyway.


> We barely even have movement to webp &avif

If you look at CDNs, WebP and AVIF are very popular.

> From what i understand avif gives better compression (except for lossless) and has better decoding speed than jxl anyways.

AVIF is better at low to medium quality, and JXL is better at medium to high quality. JXL decoding speed is pretty much constant regardless of how you vary the quality parameter, but AVIF gets faster and faster to decode as you reduce the quality; it's only faster to decode than JXL for low quality images. And about half of all JPEG images on the web are high quality.

The Chrome team really dislikes the concept of high quality images on the web for some reason though, that's why they only push formats that are optimized for low quality. WebP beats JPEG at low quality, but is literally incapable of very high quality[1] and is worse than JPEG at high quality. AVIF is really good at low quality but fails to be much of an improvement at high quality. For high resolution in combination with high quality, AVIF even manages to be worse than JPEG.

[1] Except for the lossless mode which was developed by Jyrki at Google Zurich in response to Mozilla's demand that any new web image format should have good lossless support.


> AVIF is better at low to medium quality, and JXL is better at medium to high quality.

BTW, this is no longer true. With the introduction of tune IQ (Image Quality) to libaom and SVT-AV1, AVIF can be competitive with (and oftentimes beat) JXL at the medium to high quality range (up to SSIMULACRA2 85). AVIF is also better than JPEG independently of the quality parameter.

JXL is still better for lossless and very-high quality lossy though (SSIMULACRA2 >90).


>AVIF is better at low to medium quality,

>The Chrome team really dislikes the concept of high quality images on the web for some reason though, that's why they only push formats that are optimized for low quality.

It would be more accurate to say bits per pixel (BPP) rather than quality. And that is despite the Chrome team themselves showing that 80%+ of images served online are in the medium-BPP range or above, where JPEG XL excels.
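For concreteness, BPP is just the encoded size relative to the pixel count; a minimal sketch (the example file size and resolution are made up for illustration):

```python
def bits_per_pixel(encoded_bytes: int, width: int, height: int) -> float:
    """Encoded bits per pixel: total encoded bits divided by pixel count."""
    return encoded_bytes * 8 / (width * height)

# A ~250 KB 1920x1080 JPEG works out to roughly 1 bpp,
# i.e. the medium-BPP range discussed above.
print(round(bits_per_pixel(250_000, 1920, 1080), 2))  # → 0.96
```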


Isn't medium quality the thing to optimize for? If you are doing high quality, you've already made the tradeoff that you care about quality more than latency, so the perceived benefit of a mild latency improvement is going to be lower.


JXL lets you further compress existing JPEG files without additional artifacting, which is significant given how many JPEG files already exist.
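For what it's worth, the libjxl reference tools expose this directly; a hypothetical session (filenames made up, requires `cjxl`/`djxl` installed):

```shell
# Lossless JPEG recompression is cjxl's default behavior for .jpg input;
# djxl can then reconstruct the original JPEG byte-for-byte.
cjxl photo.jpg photo.jxl
djxl photo.jxl roundtrip.jpg
cmp photo.jpg roundtrip.jpg && echo "bit-identical"
```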


It's a nice logo, but...

> So why would we change it? As a non-Indigenous entity, we acknowledge that it is inappropriate for the Foundation to use Indigenous themes or language.


It seems like this could be easily solved in models that support tool calling by providing them with a tool that takes a token and returns the individual graphemes.

It doesn't seem valuable for the model to memorize the graphemes in each of its tokens.
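A minimal sketch of such a tool (function names are hypothetical; the actual tool-registration wiring depends on which LLM API you use):

```python
def graphemes(text: str) -> list[str]:
    """Tool the model can call: return the individual characters of a string,
    so it never has to reason over its own tokenizer's chunks.
    (Approximates grapheme clusters as code points; a real implementation
    might use a dedicated grapheme-segmentation library.)"""
    return list(text)

def count_char(text: str, target: str) -> int:
    """Companion tool: count occurrences of a single character."""
    return graphemes(text).count(target)

# Instead of guessing from tokens, the model calls the tool:
print(count_char("strawberry", "r"))  # → 3
```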


Yes, but are you going to special-case all of these pain points? The whole point of these LLMs is that they learn from training data, not from people coding logic directly. If you do this, people will come up with a dozen new ways in which the models fail; they are really not hard to find. Basically, asking them to do anything novel risks complete failure. The interesting bit is that LLMs tend to work best at "medium difficulty" problems: homework questions, implementing documented APIs, and things like that. Asking them to do anything completely novel tends to fail, as does asking them to do something so trivial that normal humans won't bother even writing it down.


It makes sense when users ask for information not available in the tokenized values though. In the abstract, a tool that changes tokenization for certain context contents when a prompt references said contents is probably necessary to solve this issue (if you consider it worth solving).


It's a fool's errand. The kinds of problems you end up coding for are the ones that are blatantly obvious and ultimately useless except as a gotcha for the AI engines. All you're doing is papering over the deficiency of the model without actually solving a problem.


This is less a deficiency of the model and more a deficiency of the encoder, IMO. You can consider the encoder part of the model, but I think the semantics of our conversation require differentiating between the two.


Tokenization is an inherent weakness of current LLM design, so it makes sense to compensate for it. Hopefully some day tokenization will no longer be necessary.


That takes away from the notion that LLMs have emergent intelligent abilities. Right now it doesn't seem valuable for a model to count letters, even though it is a very basic measure of understanding. Will this continue in other domains? Will we be doing tool-calling for every task that's not just summarizing text?


> Will we be doing tool-calling for every task that's not just summarizing text?

spoiler: Yes. This has already become standard for production use cases where the LLM is an external-facing interface; you use an LLM to translate the user's human-language request to a machine-ready, well-defined schema (i.e. a protobuf RPC), do the bulk of the actual work with actual, deterministic code, then (optionally) use an LLM to generate a text result to display to the user. The LLM only acts as a user interface layer.
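A toy sketch of that pattern, with a hypothetical dataclass standing in for the RPC schema and the LLM call stubbed out:

```python
from dataclasses import dataclass

@dataclass
class RefundRequest:
    """Stand-in for the machine-ready schema (e.g. a protobuf RPC)."""
    order_id: str
    amount_cents: int

def llm_to_schema(user_text: str) -> RefundRequest:
    """In production this is an LLM call constrained to emit the schema;
    stubbed here for illustration."""
    return RefundRequest(order_id="A123", amount_cents=1999)

def process_refund(req: RefundRequest) -> str:
    """The bulk of the work: deterministic code, no LLM involved."""
    return f"Refunded {req.amount_cents} cents for order {req.order_id}."

reply = process_refund(llm_to_schema("I want my $19.99 back for order A123"))
print(reply)
```

An optional second LLM pass would then turn `reply` into friendlier prose for the user.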


How is counting letters a measure of understanding, rather than a rote process?

The reason LLMs struggle with this is because they literally aren't thinking in English. Their input is tokenized before it comes to them. It's like asking a Chinese speaker "How many Rs are there in the word 草莓".


It shows understanding that words are made up of letters and that letters can be counted.

Since tokens are atomic, which I didn't realize earlier, maybe it's still intelligent if it can realize it can extract the result by writing len([b for b in word if b == my_letter]) and decide on its own to return that value.


But why doesn’t the LLM reply “I can’t solve this task because I see text as tokens”, rather than give a wrong answer?


We're up to a gazillion parameters already, maybe the next step is to just ditch the tokenization step and let the LLMs encode the tokenization process internally?

