dfabulich's comments | Hacker News

The most controversial claim in this letter is in the section titled "Existing Measures Are Sufficient."

In Google's announcement in Nov 2025, they articulated a pretty clear attack vector. https://android-developers.googleblog.com/2025/11/android-de...

> For example, a common attack we track in Southeast Asia illustrates this threat clearly. A scammer calls a victim claiming their bank account is compromised and uses fear and urgency to direct them to sideload a "verification app" to secure their funds, often coaching them to ignore standard security warnings. Once installed, this app — actually malware — intercepts the victim's notifications. When the user logs into their real banking app, the malware captures their two-factor authentication codes, giving the scammer everything they need to drain the account.

> While we have advanced safeguards and protections to detect and take down bad apps, without verification, bad actors can spin up new harmful apps instantly. It becomes an endless game of whack-a-mole. Verification changes the math by forcing them to use a real identity to distribute malware, making attacks significantly harder and more costly to scale.
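For context on the quoted attack: once the user grants notification access, the "capture" step is mechanically trivial. A minimal sketch (plain Python, with hypothetical notification text; real malware would do this inside an Android notification listener) of the pattern matching involved:

```python
import re

# Hypothetical notification text, as a banking app might post it.
NOTIFICATION = "Your one-time code is 482913. Do not share it with anyone."

def extract_otp(text):
    """Pull a 6-digit code out of free-form notification text."""
    match = re.search(r"\b(\d{6})\b", text)
    return match.group(1) if match else None

print(extract_otp(NOTIFICATION))  # -> 482913
```

This is why notification access (and SMS access) is treated as an extremely sensitive permission: any app holding it can read every OTP the user receives.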

I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."

A related approach might be mandatory developer registration for certain extremely sensitive permissions, like intercepting notifications/SMSes...? Or requiring an expensive "extended validation" certificate for developers who choose not to register...?


> I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."

Why would the community give a different response? Everything is fine as it is. Life is not safe, nor can it be made safe without taking away freedom. That is a fundamental truth of the world. At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.

Someone being gullible and willing to do things that a scammer tells them to do over the phone is not an "attack vector". It is people making a bad decision with their freedom. And that is not sufficient reason to disallow installing applications on the devices they own, any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".


What if we asked users if they want extra protection? I think that would be nice.

This is the status quo. APK installation is disabled by default, and there is a warning when you go to enable it.

The point is "a warning" is not enough to communicate to people the gravity of what they are doing.

It is not enough to write "be careful" on a bag you get from a pharmacy... certain medications require you to both have a prescription, and also to have a conversation with a pharmacist because of how dangerous the decisions the consumer makes can be.

Normal human beings can be very dumb. It's entirely reasonable to expect society to try to protect them at some level.


You can add 5 layers of "are you sure you want to do this unsafe thing" and it just adds 5 easy steps to the scam where they say "agree to the annoying popup"

You could even make this an installation-time option. If you want to enable the switch afterwards, you have to do a factory reset. Then, the attackers convincing the victims would get nothing.

then make the unlock cost money

relatively easy for devs, but hard to scale for scammers


It's either that or as suggested, hard require developer validation for specific API permissions.

If those bad decisions have a lot of higher order effects and they turn out to be very costly for society, then limiting freedom seems worth it.

And it seems Google thinks society is beginning to unravel in SEA due to scammers. Trust breaks down, people stop using phones to do important things, GDP can shrink, banks go back to cheques, trees will be cut down!!

It's bad to let people go and catch the zombie virus and then come back and spread it, right?

...

I don't like it, but the obvious decision is to set up a parallel authority that can issue certificates to developers (for side loading), so we don't have to trust Google. Let the developer community manage this. And if we can't then Google can revoke the intermediary CA. And of course Google and other manufacturers could sell development devices that are unlocked, etc.


This is a terrible response from a software developer, by the way. You can use this to dismiss any security concern.

It signals that you don't care much about security, and that you don't care about non-technical users, and don't even have the capacity to see how they view a system.

Sure, you can analyze domain names effectively, you can distinguish between an organic post and an ad, you know the difference between Read and Write permissions to system files, etc...

But can you put yourself in the shoes of a user who doesn't? If not, you are rightfully not in a position to be a steward of such users, and Google is.


The reality in South East Asia doesn't support that. You're assuming that the potential victims are either able to use an Android alternative or willing and able to educate themselves about scams. The reality in these countries is that neither is the case in practice. Daily lives depend a lot on smartphones, and they play a big role in cashless financial transactions. Network effects play a big role here. Android devices are the only category that is both widely available and affordable.

Education is also not that effective. Spreading warnings about scams is hard and warnings don't reach many people for a whole laundry list of reasons.

The status quo is decidedly not fine. Society must act to protect those that can't protect themselves. The only remaining question is the how.

Google has an approach that would work, but at a high cost. Is there an alternative change that has the same effects on scammers, but with fewer issues for other scenarios?


The status quo may not be perfect but it is the best we can do. We try to educate people about scams. We give them warnings that what they are doing can be dangerous if misused. If they choose to ignore those things and proceed anyway, the only further step society could take is to take away the person's freedom to choose. And that is an unacceptable solution.

Society takes away individuals' freedom to choose all the time. You can't choose not to pay your taxes. You can't choose to board a passenger plane without passing a security check. You can't just get a loan without any guarantees to the bank, etc.

Education isn't really working at this global scale. It doesn't reach people the way you seem to believe it does. Many, if not most, people are generally uninterested in learning new things, and this gets amplified when it involves technology.


> The status quo may not be perfect but it is the best we can do.

Nope. We could, for example, ask developers to register with their legal identity to release apps.


That would be worse than the status quo.

the open source community should ask for their own install key and that's it

Play store can be fast and verification based and the F/OSS stores can be slower, reputation and review based.

...

But fundamentally the easiest thing is to ask people to pay to unlock the phone's security barriers, this makes it harder and costlier for scammers.


> Life is not safe, nor can it be made safe without taking away freedom.

So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?

You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance. They actually have tons and tons of safety regulations that save tons and tons of lives, even if from your point of view that means not "treating people as adults". You have to wear a seatbelt, even if you feel like you're not being treated like an adult. Because it's also not just your own life you're putting at risk, but your passengers' as well.

You're taking the most extreme libertarian stance possible. Thank goodness that's an extremely minority view, and that the vast, vast majority of voters do actually think safety is important.


Thank goodness there are FOSS options, even for mobile phones, and none of us are required to accept proprietary junk.

If they make FOSS illegal, guess I’ll be a criminal. Come and take it.


Your analogy is terrible because it doesn't do a proper accounting of "harm" and "risk."

Food and seatbelts, that's literal health and life-and-death; very immediate and visible.

"Cybersecurity" rarely is; and even when it is, the problem is that the centralized established authorities (like google) aren't at all provably good at this.


Your post is addressing a strawman, not what I said. But to answer the words you so ungraciously put in my mouth:

> So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?

This is harm to others and is very obviously something we should enforce. There are unreasonable laws about food (banning the sale of raw milk cheese for example, which most of the world enjoys with perfect safety), but by and large they are unobjectionable.

> You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance.

I never said I was opposed to striking a balance. Of course we can strike a balance. Indeed we already have when it comes to installing apps on Android. But these measures are being advanced as if safety were the only consideration, which it isn't.

> You're taking the most extreme libertarian stance possible.

No, that is what you have projected onto me. That's not actually what my stance is.


> At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.

That's right, it's your decision to use Android. If you choose to do so, that's on you.


If only there were a choice of a non-walled garden. That has been taken away; how can you bank without one of the two?

You're right, all Android users who are upset about this change are free to switch to iOS.

Right, like someone who can only afford a $100 phone can buy the cheapest iPhone, which is 5x more expensive.

This is a bit like the geeks who hate the idea of ad-supported services and think that everyone should just pay for every service they use.

FWIW: I do exclusively buy Apple devices, pay for streaming services ad free tier, the Stratechery podcast bundle, ATP and the Downstream podcasts and Slate. I also pay for ChatGPT and refuse to use any ad supported app or game.


I think that OP's point was that the alternative is even more locked down. There is no option for people who don't want to be nannied.

> At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.

The world does not consist of all rational actors, and this opens the door to all kinds of exploitation. The attacks today are very sophisticated, and I don't trust my 80-year-old dad to be able to detect them, nor many of my non-tech-savvy friends.

> any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".

This is a false equivalence.


It's not a false equivalence at all. Both situations are taking away someone's control of something that they own, borne from a paternalistic desire to protect that person from themselves. If one is acceptable, the other should be. Conversely if one is unacceptable, the other should be unacceptable as well. Either paternalistic refusal to let people do as they wish is ok, or it isn't.

Maybe not, but I think that overextending any idea like that in the opposite direction of whatever point you are trying to make at least devolves into a "slippery slope" argument. For instance, is your point that all security on phones that impede freedom of the user (for instance, HTTPS, forced password on initial startup, not allowing apps to access certain parts of the phone without user permissions, verifying boot image signatures) should be removed as well?

No, that's not my point at all. Measures such as that are a tool which is in the hands of the user. There is a default restriction which is good enough for most cases, but the user has the ability to open things up further if he needs. What Google is proposing takes control out of the user's hands and makes Google the sole arbiter of what is and is not allowed on the device.

None of the measures I mentioned are changeable by the user, except possibly sideloading an HTTPS certificate. That's the only way any of those measures even work; if they weren't set as invariants by the OS, they would be bypassable.

>There is a default restriction which is good enough for most cases, but the user has the ability to open things up further if he needs.

But this is what the other guy's point is. You are defining "good enough for most cases" in a way that he is not, then making the argument that what he says is equivalent to not allowing an alcoholic to buy beer. Why can you set what level is an acceptable amount of restriction, but he can't?


But it's not a slippery slope, because it's not taking it to the next level. It's the same level, just a different thing.

The alcoholic knows the bad outcomes, and chooses to ignore them. The hapless Android user does not understand the negative consequences of sideloading. I think this makes for a substantial difference between those two.

Protecting from scams isn't protection from the victim themselves. That should be obvious from the fact that very intelligent and technologically literate people too can fall for phishing attacks. Tell me for example, how many people in your life know how a bank would ACTUALLY contact you about a suspected hijacking and what the process should look like? And how about any of the dozens of other cover stories used? Not to mention the situations where the scammers can use literally the same method of first contact as the real thing (eg. spoofed). ...And the fact that, for example, email clients do their best to help the scammers by obscuring the email address and only showing the display name, because that's obviously a good idea.

> Protecting from scams isn't protection from the victim themselves.

That is where we differ. It is, ultimately, the victim of a scam who makes the choice of "yes, this person is trustworthy and I will do what they say". The only way to prevent that is to block the user from having the power to make that decision, which is to say protecting them from themselves.


But the proposal here, requiring developers to register their identities, doesn't actually impact consumers at all. They still have the ability to make the decision about whether or not to trust someone.

None of these things requires "locking down phones." Every single thing you've mentioned can be done in a smarter way that doesn't involve "individuals aren't allowed to modify the devices they purchase."

You can't make a statement like that and provide no examples. What are some of your ideas for doing that?

There is some world where somebody scammed through sideloading loses their life savings, and every country is politically fine with the customer, not the bank, taking the losses.

But for regular people, that is not really the world they want. If the bank app wrongly shows they’re paying a legitimate payee, such as the bank, themselves or the tax authority, people politically want the bank to reimburse.

Then the question becomes not if the user trusts the phone’s software, but if the bank trusts the software on the user’s phone. Should the bank not be able to trust the environment that can approve transfers, then the bank would be in the right to no longer offer such transfers.


If the actual bank app does that, or is even easy to fool into doing that, then the bank should be responsible. That's the world "regular people" want and it's the world as it should be.

If random malware the user chose to install does that, then that is not the bank's fault. The bank is no more involved than anybody else. And no, I don't think "regular people" want to make that the bank's fault.


The legal infrastructure for banking and securities ownership has long had defaults for liability assignment.

For securities, if I own stock outright, the company has to indemnify if they do a transfer for somebody else or if I lack legal capacity. So transfer agents require Medallion Signature Guarantees from a bank or broker. MSGs thereby require a lengthy banking relationship and probably showing up in person.

For broker to broker transfers, there is ACATS. The receiving broker is in fact liable in a strict, no-fault way.

As far as I know, these liabilities are never waived. Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.

These defaults are probably unknown for most people, even those with large amounts of securities. The system is expected to work since it has been set up this way.

Clearly a large number of programmers have a bent to go the complete opposite direction from MSGs, where everything is private keys or caveat emptor no matter the technical sophistication of the customer. I, well, disagree with that sentiment. The regime where it’s possible for no capitalized entity to be liable for wrongful transfers (defined as when the customer believes they are transferring to a different human-readable payee than actually receiving funds) should not be the default.


Keeeep going.

Are banks POWERFUL? Do they have lots of money and/or connections to those who do? Do they have a vested interest in getting transactions right?

Absolutely!

Now, with all that money and power -- they -- whoever THEY are, need to come up with smart ways to verify transactions that don't involve me giving them all the keys to all my devices.

We have protections like this elsewhere - even when they have some "ownership." The bank kinda owns my house, but they still can't come in whenever they want.


Why do banks go through all the know-your-customer (KYC) process if not to identify the beneficial owner of every account? If they receive a transfer via fraud, then they either get it clawed back, have to pay it back, and/or get identified to law enforcement. If the last bank in the chain doesn't want to play by the rules, then other banks shouldn't transfer into them, or that bank itself should be held liable.

This is more or less how people expect things to work today ....


In the case of some knowing or unwitting money mule in the chain or at the end of the chain, the intermediary or final banks may not be at fault. The bank could have followed KYC procedures in that somebody with that name actually existed who controlled the account.

The money mule themselves is almost certainly insolvent to pay the damages. The money mule can also convert currencies (either to a different fiat currency or crypto), putting the ultimate link completely out of reach of the originating country.

If intermediary banks are deputized and become liable in a no-fault sense, then legitimate transfers out become very difficult. How does a bank prove a negative for where the funds come from? De-banking has already been a problem for a process-based AML regime.


I'm a "regular" person, as are all the signatories, and you don't speak for us.

I am the author of the letter and the coordinator of the signatories. We aren't saying "nuh uh, everything's fine as it is." Rather, we are pointing out that Android has progressively been enhanced over the years to make it more secure and to address emerging new threat models.

For example, the "Restricted Settings"¹ feature (introduced in Android 13 and expanded in Android 14) addresses the specific scam technique of coaching someone over the phone to allow the installation of a downloaded APK. "Enhanced Confirmation Mode"², introduced in Android 15, adds furthers protection against potentially malicious apps modifying system settings. These were all designed and rolled out with specified threat models in mind, and all evidence points to them working fairly well.

For Google to suddenly abandon these iterative security improvements and unilaterally decide to lock down Android wholesale is a jarring disconnect from their work to date. Malware has always been with us, and always will be: both inside the Play Store and outside it. Google has presented no evidence to indicate that something has suddenly changed to justify this extreme measure. That's what we mean by "Existing Measures Are Sufficient".

¹ https://support.google.com/android/answer/12623953

² https://android.googlesource.com/platform/prebuilts/fullsdk/...


I guess it's too late now, but I think "sufficient" is much too strong a word to use for that position, and puts Google in a position where they can disregard you because they "know" that existing measures aren't "sufficient."

"Existing measures are working," perhaps?


Would you say that the iOS ecosystem suffers the same rate of malware as Android?

Not OP, but in my experience most of the malware-like apps on the App Store were top ads for apps with names similar to the originals, such as WhatsApp or Office.

There could be many other factors, like abysmal patch policies. Many vendors still only apply the Android Security Bulletins (which cover only vulnerabilities marked high or critical), apply them late (despite a three-month patch embargo), deliver device firmware updates very late, and sometimes only for two or three years.

Many Android phones still do not have a separate secure element.

Also, the Play Store itself regularly contains malware.

In the end it is mostly about control, dressed up as protecting users. If it was about security, Google would support GrapheneOS remote attestation for Google Pay (for being the most secure Android variant) and cut off many existing phones with deplorable security.


The App Store does contain malware, although arguably less than the Play Store. Apple devices would be much more secure without the App Store. Apple should remove the App Store.

Of course not.

In other news, a new study shows that cutting off your feet is 100% effective against athlete's foot.


> all evidence points to them working fairly well.

What is this evidence? Please share it.


Like you said, for years now they have added more and more restrictions to address various scams. So far none of them has had any effect, other than annoying users of legitimate apps, because all the new restrictions were on the user side. This new approach restricts developers, but is actually a complete non-issue for most, since the vast majority of apps are distributed via Google Play already.

In the section "Existing Measures Are Sufficient." your letter also mentions

> Developer signing certificates that establish software provenance

without any explanation of how that would be the case. With the current system, yes, every app has to be signed. But that's it. There's no certificate chain required, no CA checks are performed, and self-signed certificates are accepted without issue. How is that supposed to establish any form of provenance?
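To make the point concrete: Android's current signing model is trust-on-first-use key continuity. At update time, the platform checks that the update is signed with the same key as the already-installed app; it never asks who that key belongs to. A minimal sketch (plain Python, hypothetical certificate bytes standing in for real DER-encoded certificates) of that comparison:

```python
import hashlib

def cert_fingerprint(cert_der):
    """SHA-256 fingerprint of a signing certificate, similar to how
    the package manager identifies an app's signing key."""
    return hashlib.sha256(cert_der).hexdigest()

def update_allowed(installed_cert, update_cert):
    # Key continuity: the update must be signed with the same key.
    # Note there is no CA chain and no identity check anywhere here.
    return cert_fingerprint(installed_cert) == cert_fingerprint(update_cert)

print(update_allowed(b"keyA-cert", b"keyA-cert"))  # -> True
print(update_allowed(b"keyA-cert", b"keyB-cert"))  # -> False
```

That guarantees an update came from whoever shipped the original app, which is a real property, but it says nothing about who that party is.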

If you really think there is a better solution to this, I would suggest you propose some viable alternative. So far all I've heard for the opponents of this change is, either "everything is fine" or "this is not the way", while conveniently ignoring the fact that there is an actual problem that needs a solution.

That said, I do generally agree with you that mandatory verification for *all* apps would be overkill. But that is not what Google has announced in their latest blog posts. Yes, the flow to disable verification and the exemptions for hobbyists and students are just vague promises for now. But the public timeline (https://developer.android.com/developer-verification#timelin...) states developer verification will be generally available in March 2026. Why publish this letter now, rather than wait a few weeks so we can see what Google is actually planning, before getting everybody outraged about it?


Developer registration doesn't prevent this problem. Stolen IDs can be had for a lot less money than what a day of a scam farm's operation will bring in. A criminal with access to Google can sign and deploy a new version of their scam app every hour of the day if they wish.

The problem lies in (technical) literacy, to some extent people's natural tendency to trust what others are telling them, the incompetence of investigative powers, and the unwillingness of certain countries to shut down scam farms and human trafficking.

My bank's app refuses to operate when I'm on the phone. It also refuses to operate when anything is remotely controlling the phone. There's nothing a banking app can do against vulnerable phones rooted by malware (other than refusing to operate on phones deemed too vulnerable, by whatever threshold you decide on), but I feel like the countries where banks and police are putting the blame on Google are taking the easy way out.

Scammers will find a way around these restrictions in days and everyone else is left worse off.


My guess is that Android 17 will show the registered name of the developer of the app you're trying to install. With stolen IDs you can only get accounts for individual developers, not for organisations.

When a scammer pretending to be your bank tells you to install an app for verification and it says "This app was created by John Smith" even grandma will get suspicious and ask why it doesn't show the bank's name.


> Stolen ID can be found for a lot less money than what a day in a scam farm's operation will bring in.

Well, in that case, Google has an easy escalation path that they already use for Google Business Listings: They send you a physical card, in the mail, with a code, to the address listed. If this turns out to be a real problem at scale, the patch is barely an inconvenience.


So they'll have a lead time building up a set of verified developers. These scams are pulled by organized crime syndicates, using human trafficking and beatings to keep their call centers manned with complicit workers.

Now they'll need to pay off a local mailman to give them all of Google's letters with an address in an area they control so they can register a town's worth of addresses, big whoop. It'll cost them a bit more than the registration fee, but I doubt it'll be enough to solve the problem.


> Now they'll need to pay off a local mailman to give them all of Google's letters with an address in an area they control so they can register a town's worth of addresses, big whoop. It'll cost them a bit more than the registration fee, but I doubt it'll be enough to solve the problem.

Yeah, this is a huge amount more work than, like, nothing.


All it will do is create a new low risk black market job. Someone will manufacture and sell bulk identities like they do fake social accounts.

> Someone will manufacture and sell bulk identities

How? You've now moved the level of sophistication required from "someone runs some bots on the facebook website" to "someone is now committing complex fraud against a government".

If the only people who can run scams are state sponsored, that's still vastly better than the status quo.


If you can "coach someone to ignore standard security warnings", you can coach them to give you the two-factor authentication codes, or any number of other approaches to phishing.

yeah the thing is, if someone can social engineer you on the phone and make you do their bidding, you've lost no matter what

Installing an app that silently intercepts SMS/MMS data is a persistent technical compromise. Once the app is there, the attacker has ongoing access.

In contrast, convincing someone to read an OTP over the phone is a one-time manual bypass. To use your logic:

An installed app: like a hidden camera in a room.

Social engineering over the phone: like convincing someone to leave the door unlocked once.


> Installing an app that silently intercepts SMS/MMS data is a persistent technical compromise. Once the app is there, the attacker has ongoing access.

The motivating example as described involves "giving the scammer everything they need to drain the account". Once they've drained the account, they don't need ongoing access.


Persistence allows the scammer free license to attempt password recoveries for every account the victim could possibly have. Other banks, retirement accounts, the victim's email account.

When the victim's relatives send them money because they need to eat and pay rent after handing everything over to the scammer, the persistent backdoor lets that money be drained as well... You're underestimating the persistence and ruthlessness of the scammers.

This is still not a root-cause solution, it's just a mitigation, because you do not require sideloading to install malware. The Play Store and Apple App Store both contain malware, as well as apps which can be used for nefarious purposes, such as remote desktop.

A root-cause solution is proper sandboxing. Google and Apple will not do this, because they rely on applications having far too much access to make their money.

One of the fundamentals of security is that applications should use the minimum data and access they need to operate. Apple and Google break this with every piece of software they make. The disease is spreading from the inside out. Putting a shitty lotion on top won't fix this.


>The play store and apple app store both contain malware

Wow, that's a major claim. What apps are malware, exactly?

>This is still not a root cause solution, it's just a mitigation.

Requiring signed apps solves the issue though, as it provides identification of whoever is running the scam and a path to restitution or prosecution.


> Wow, that a major claim. What apps are malware, exactly?

I don't understand how this is a major claim at all; it should be obvious. All repositories of large enough size contain malware, because malware doesn't declare itself as malware.

This is exacerbated by the fact the Google Play Store and Apple App Store allow closed-source applications. It's much easier to validate behavior on things like the Debian repos, where maintainers can, and do, audit the source code.

Google does not have a magic "is this malware" algorithm; that doesn't exist. They rely on heuristics and things like asking the authors "hey, is this malware?". As you can imagine, this isn't very effective. They don't even install and test the apps fully. Not that it matters much; obviously malware can easily change its behavior to not be detectable from the end user just running the app.

> Requiring signed apps solves the issue though, as it provides identification of whoever is running the scam and a method for remuneration or prosecution.

It doesn't, for three reasons:

1. Identifying an app doesn't magically make it not malware. I can tell you "hey, I made this app" and you still have zero idea if it's malware. This is still a post-hoc mitigation: if we somehow know an app is malware, we can find out who wrote it. It doesn't do the "is this malware" part, which is the most important part.

2. Bad actors typically have little allegiance to ethics, meaning they typically will not be honest about their identity. There are criminal organizations which operate in meatspace and fake their identities, which is 1000x harder than doing it online. Most malware will not have a legitimate identity tacked to it.

3. Bad actors typically operate from countries which don't prosecute them as hard. So even if you find out that something is malware, and then find the actual people behind it, you typically can't prosecute them. Even large online services like the Silk Road lasted for a long time, and successors most likely still exist, despite the literal US federal government trying to stop them.


> Installing an app that silently intercepts SMS/MMS data is a persistent technical compromise.

Why would an app silently intercept SMS/MMS data? Why does an app need network access?

Running untrusted code in your browser is also "a persistent technical compromise" but nobody seems to care.


The 2-factor SMS messages usually say: "Do not give this code to anyone! The bank will NEVER ask you for this code!".

The sideloading warning is much much milder, something like "are you sure you want to install this?".


You'll then get more warnings if you want to give the sideloaded app additional permissions. And if they want to make the sideloading warnings more dire, that wouldn't be nearly as unreasonable.

The main issue is the bank using SMS and OTP apps instead of something like passkeys and mandatory in-bank setup.

One of my banks uses a card reader and pin to log in, seems more secure.

PINs can still be phished. Just make the phishing page a live proxy resembling the real site.

A fundamental difference with e.g. FIDO2 (especially hardware-backed) is that the private credentials are keyed to the relying party ID, so it's not possible for a phishing site to intercept the challenge-response.
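As a toy illustration of that binding (a Python sketch with HMAC standing in for WebAuthn's asymmetric signatures; all names and origins here are invented), the authenticator signs the origin it actually saw, so a response minted for a look-alike domain fails at the real relying party:

```python
import hashlib
import hmac
import json

def client_sign(key: bytes, challenge: bytes, origin: str) -> dict:
    # The authenticator binds its response to the origin it is talking to,
    # not to whatever origin the attacker wishes it were.
    payload = json.dumps({"challenge": challenge.hex(), "origin": origin}).encode()
    return {"payload": payload,
            "sig": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def rp_verify(key: bytes, resp: dict, expected_origin: str) -> bool:
    expected = hmac.new(key, resp["payload"], hashlib.sha256).hexdigest()
    data = json.loads(resp["payload"])
    # Valid signature AND the signed origin must match the relying party.
    return hmac.compare_digest(resp["sig"], expected) and data["origin"] == expected_origin

key = b"device-bound-key"
challenge = b"\x01\x02"

# Victim is really talking to a phishing proxy, so the signed origin is wrong:
phished = client_sign(key, challenge, origin="https://bank-login.phish.example")
assert not rp_verify(key, phished, expected_origin="https://bank.example")

# The same flow against the genuine origin verifies fine:
legit = client_sign(key, challenge, origin="https://bank.example")
assert rp_verify(key, legit, expected_origin="https://bank.example")
```

Real WebAuthn uses per-RP public-key credentials and an rpIdHash check rather than a shared HMAC key, but the structural point is the same: the proxy can relay the challenge, yet the response it collects is only ever valid for the proxy's own origin.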


> The bank will NEVER ask you for this code!

> Please enter the code we sent you in the app.

lol, lmao even


The phisher’s app or login would be from a completely new device though.

Passkeys are also effective against phishing, as long as the device is not compromised. To the extent there is attestation, though, passkeys invite exactly the kind of critical posts about locking down devices that you see here.

Given what I see in scams, I think too much is put on the user as it is. Anti-phishing training and the like try to pin blame downward in the hierarchy instead of fixing the systems. For example, spear-phishing scams of home down payments or business accounts work because banks in the US don't tie account numbers to payee identity. The real issue is that the US payment system is utterly backward, lacking confirmation of payee (i.e. showing the human-readable name of the recipient account in the banking app). For wire transfers or ACH credit in the US, commercial customers are basically expected to play detective to make sure new account numbers are legit.

As I understand it, sideloading apps can overcome that payee legal name display in other countries. So the question for both sideloading and passkeys is if we want banks liable for correctly showing the actual payee for such transfers. To the extent they are liable, they will need to trust the app’s environment and the passkey.


A never-ending worm approach is to get remote control via methods on Android or Apple, then scam the victim's contacts. It's built into FaceTime; Android needs third-party apps.

Does your logic extend to PCs? If not, why?

Because I hope you realize that clamping down on “sideloading” (read: installing unsigned software) on PCs is the next logical step. TPMs are already present on a large chunk of consumer PCs - they just need to be used.


You missed their point. They are not saying that what Google is doing is a good way to address the underlying problem Google says it is addressing.

They are saying that claiming the underlying problem is not real or not big enough to need addressing is an ineffective way to argue.


Right, but this same problem (scamming) exists on PCs.

Would it make sense to then argue that enforcing TPM-backed measured boot and binary signature verification is a legitimate way to address the problem?


Their point, applied to that situation, would be that if someone does argue for enforcing TPM-backed measured boot yadda yadda to address scamming, trying to counter it by dismissing scamming as not a real problem is useless.

I get it dude, but my wider point is that we need to question where this line of argumentation leads to.

Are we saying that, because scamming exists and we haven’t proposed an alternative, it means that clamping down on software installation methods is a legitimate solution to the problem?


Of course it extends to PCs. It'd suck for us, but end users, software vendors, content providers, and service providers all benefit from a more restricted platform that can provide certain guarantees against malware, fraud, piracy, and so forth. It's pathologically programmer-brained to assume that the good old days of being able to run arbitrary code on a networked computing device would last forever. That freedom must be balanced against the interests of the rest of society to avoid risk from certain kinds of harm which can easily proliferate in an environment where any program can run with the full authority of the owner and malware spreads willy-nilly.

The "programmer-brained" assumption is that I will be able to write any program and run it on my machine and that this ability isn't reserved for only me or some limited class of people and that I can share what I write with others. One big plus of the current stye of AI will be that "end users" will be able to write simple programs and will value this ability. Thus helping protect general purpose computing from this bit of evil for a while longer.

Exactly. I own a few dozen computers, if you count some low powered SBCs. But even those can run lightweight Linux.

That’s enough for me to distribute a few freedom devices to friends and neighbors, and still have extras to account for normal failures.

I also hoard source code, and will happily distribute that with the computers! Maybe that’s “programmer brained,” if so then fine by me!


Users get way more out of it when the device is free. Even if they don't use this option, it makes it easier to set up competing services. This includes ones that would never be allowed in an official store because they're DRM-free alternatives to big streaming services but still offer all the same content. The existence of such alternatives, if they are easy to use, can force the big services to become more user-friendly. Just as happened back then with Napster.

Also every user is free to simply not use the option of installing things outside of the store.


> This includes ones that would never be allowed in an official store because they're DRM-free alternatives to big streaming services but still offer all the same content.

Do you know anyone who works in a professional creative field that doesn't involve writing code? If so, ask them how they'd feel about their work being out there on the internet, free to all takers. What the implications would be for their ability to feed their children and pay their mortgage doing the things they love.

This is what I mean by "programmer-brained." Of all creative workers, only programmers seem okay with abolishing IP laws, I guess because they figure they'll be okay living out of an office at MIT, or even worse out of an office at some YC startup that turns the user into the product. But artists, musicians, writers, filmmakers, etc. all put food on the table because of those IP laws programmers hate so much. Taking that protection for the fruit of your labor away would be at least as disruptive as AI has been.


Obviously I disagree completely. But it is still sad to see this kind of reasoning on HN of all places :(

If you want a picture of the future, imagine a boot stamping on a human face — for $29.95/month.

Show HN: O'Brien (YC S29), new AI-powered Boot as a Service provider

> That freedom must be balanced against the interests of the rest of society to avoid risk from certain kinds of harm which can easily proliferate in an environment where any program can run with the full authority of the owner and malware spreads willy-nilly.

No, no, a thousand times no. This is an argument for authoritarian clampdown on general computing and must be opposed by all means necessary. I have the right to run whatever code I wish on my own damn property without the permission of arbitrary authorities or whatever subset of society you favor, and if you or they have a problem with this, you or they can proceed to pound sand.


Safety fascists won’t stop until every human interaction requires permission.

It’s a good time to buy a pallet of old SFF computers, just in case.


>I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."

OK, so instead of educating stupid (or overly naive) people, we implement "protections" that prevent any and all people from doing useful things with their devices? And as a "side effect" force them to use "our" app store only? Something doesn't smell that good here …

How about a less drastic measure, like imposing a serious delay on "sideloading"? Let's say I'd have to tell my phone that I want to install F-Droid and then would have to wait some hours before the installation is possible, while using the device as usual, of course.

The countdown could be combined with optional tutorials teaching people to contact their bank by phone in the meantime. Or whatever fine-print tips might seem suitable.


How would that solve scammer-driven installs? The scammer is not in a rush, they already have the victim listening and following their instructions.

There simply isn't a known solution to this problem. If you give users the ability to install unverified apps, then bad actors can trick them into installing bad ones that steal their auth codes and whatnot. If you want to disallow certain apps then you have to make decisions about what apps (stores) are "blessed" and what criteria are used to make those distinctions, necessarily restricting what users can do with their own devices.

You can go a softer route of requiring some complicated mechanism of "unlocking" your phone before you can install unverified apps - but by definition that mechanism needs to be more complicated than even a normal non-technical user guided by a scammer can manage. So you've essentially made it impossible for normies to install non-Play-Store apps, and thus also made all other app stores irrelevant for the most part.

The scamming issue is real, but the proposed solutions seem worse than the disease, at least to me.


> There simply isn't a known solution to this problem. If you give users the ability to install unverified apps, then bad actors can trick them into installing bad ones that steal their auth codes and whatnot.

This is also true if they can only install verified apps, because no company on earth has the resources to have an actually functional verification process and stuff gets through every day.


> This is also true if they can only install verified apps, because no company on earth has the resources to have an actually functional verification process and stuff gets through every day.

This is true, but if this goes through, I imagine that the next step for safety fascists will be to require developer licensing and insurance like general contractors have. And after that, expensive audits, etc, until independent developers are shut out completely.


The solution would be a "noob mode" that disables sideloading and other security-critical features, which can be chosen when the device is first turned on and requires a factory reset to deactivate. People who still choose expert mode even though they are beginners would then only have themselves to blame.

This should be voted higher, it quite literally is this simple.

We know how to do hardware-bound phishing-resistant credentials now, it is a solved problem.

I'm going to assume you're referring to auth codes, especially the ones sent via SMS? In which case yes, banks should definitely stop using those but that alone doesn't solve the overarching issue.

The next step is simply that the scammer modifies the official bank app, adds a backdoor to it, and convinces the victim to install that app and log in with it. No hardware-bound credentials are going to help you with that; the only fix is attestation, which brings you back to the aforementioned issue of blessed apps.


SMS 2FA is neither hardware-bound nor phishing resistant, I'm referring to hardware-bound phishing-resistant 2FA methods like passkeys.

Read my previous comment again. Passkeys are nice, but they don't solve the problem that's being discussed here.

I'm not sure if you understand what makes passkeys phishing-resistant?

The backdoored version of the app would need to have a different app ID, since the attacker does not have the legitimate publisher's signing keys. So the OS shouldn't let it access the legitimate app's credentials.


I understand how passkeys work. You don't need the legitimate app's credentials; we're talking about phishing attacks, where you're trying to lead the victim into giving you access to or control of their account without them realizing that's what is happening.

A simple scenario adapted from the one given in the Android blog post: the attacker calls the victim and convinces them that their banking account is compromised, and they need to act now to secure it. The scammer tells the victim that their account got compromised because they're using an outdated version of the banking app that's no longer supported. He then walks them through "updating" their app, effectively going through the "new device" workflow - except the new device is the same as the old one, just with the backdoored app.

You can prevent this with attestation of course, essentially giving the bank's backend the ability to verify that the credentials are actually tied to their app, and not some backdoored version. But now you have a "blessed" key that's in the hands of Google or Apple or whomever, and everyone who wants to run other operating systems or even just patched versions of official apps is out of luck.


> He then walks them through "updating" their app, effectively going through the "new device" workflow - except the new device is the same as the old one, just with the backdoored app.

This is where the scheme breaks down: the new passkey credential can never be associated with the legitimate RP. The attacker will not be able to use the credential to sign in to the legitimate app/site and steal money.

The attacker controls the fake/backdoored app, but they do not control the signing key which is ultimately used to associate app <-> domain <-> passkey, and they do not control the system credentials service which checks this association. You don't even need attestation to prevent this scenario.


> do not control the signing key which is ultimately used to associate app <-> domain <-> passkey, and they do not control the system credentials service which checks this association.

You're assuming the attacker must go through the credential manager and the backing hardware, but that is only the case with attestation. Without it, the attacker can simply generate their own passkey in software, because the backend on the bank's side has no way of telling where the passkey came from.


> I understand how passkeys work. You don't need the legitimate app's credentials, we're talking about phishing attacks, you're trying to bring the victim to giving you access/control to their account without them realizing that that's what is happening.

That doesn't work, because the scammer's app will be signed with a different key, so the relying party ID is different and the secure element (or whatever hardware backing you use) refuses to do the challenge-response.


Correction: nothing prevents the attacker from using the app's legit package ID other than requiring the uninstall of the existing app.

The spoofed app can't request passkeys for the legit app because the legit app's domain is associated with the legit app's signing key fingerprint via .well-known/assetlinks.json, and the CredentialManager service checks that association.


If the sideloaded app does not have permission to use the passkeys, and cannot somehow get the user to approve passkey access for the new app, that would be a good alternative that still allows custom apps.

I don't think you understand. This exists _today_, regardless of how you install apps, because attackers can't spoof app signatures. If I don't have Bank of America's private signing key, I cannot make an app that requests passkeys for bankofamerica.com, because bankofamerica.com publishes a file [0] that says "only apps signed with this key fingerprint are allowed to request passkeys for bankofamerica.com" and Android's credential service checks that file.

No need for locking down the app ecosystem, no need to verify developers. Just don't use phishable credentials and you are not vulnerable to malware trying to phish credentials.

0: https://www.bankofamerica.com/.well-known/assetlinks.json
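For reference, a Digital Asset Links file of that shape looks roughly like this (the relation string is from Android's credential-sharing docs; the package name and fingerprint here are placeholders, not Bank of America's real values):

```json
[
  {
    "relation": ["delegate_permission/common.get_login_creds"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.bank",
      "sha256_cert_fingerprints": [
        "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99"
      ]
    }
  }
]
```

The credential service will only serve the domain's credentials to an app whose package name and signing-cert fingerprint appear in this list, which is why a re-signed clone can't request them.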


I like the idea of requiring extra work to get notification access. But really, what all these scams prey on is time sensitivity; take that away and you solve the problem in many ways. For example, your bank shouldn't let you drain your account without either showing up in person or a mandatory 24-hour waiting period. The same could be done with sideloaded apps getting notifications: if an app is sideloaded and wants to read notifications, it needs to wait 24 hours. Mostly it won't ever matter.

Alternatively, reading notifications could be opt-in per app, so the reading app needs permission to read your SMS app's notifications, or your bank's notifications. That would not be as foolproof, as it requires some tech literacy to understand.


>A related approach might be mandatory developer registration for certain extremely sensitive permissions, like intercepting notifications/SMSes...? Or requiring an expensive "extended validation" certificate for developers who choose not to register...?

I think my overriding concern is not nuking F-Droid. I actually think that's a great solution and, interestingly, F-Droid apps already don't use significant permissions (or often use any permissions!) so that might work. Also it would be good if perhaps F-Droid itself could earn a trusted distributor status if there's a way to do that.

Or a marriage of the two, F-Droid can jump through some hoops to be a trusted distributor of apps that don't use certain critical permissions.

I think there have to be ways of creatively addressing the issue that don't involve nuking a non-evil app distribution option.


> community needs a better response to this problem than "nuh uh, everything's fine as it is."

You can also cut yourself with a kitchen knife but nobody proposes banning kitchen knives. Google and the state are not your nannies.


>You can also cut yourself with a kitchen knife but nobody proposes banning kitchen knives.

oh nice, i love this game.

you cant carry a kitchen knife that is too long, you cant carry your kitchen knife into a school, you cant brandish your kitchen knife at police, you cant let a small child run around with a kitchen knife...

literally most of what "the state" does is be a "nanny"

(not agreeing or disagreeing with google here, i have no horse in this particular race. but this little knife quip is silly when you think about it for more than 5 seconds)


In this example we still don't require you to register with anyone to buy a knife, get the blessing of some institution to sell knives, or, as in this case, get a certification before you can start making knives.

its crazy that different things, like knives and app stores, have different rules. maybe thats why the quip about the knife sounded super cool but fell apart as an analogy for this scenario when thought about for more than 5 seconds?

the point of my comment was that the state does implement a lot of rules (read: "is a nanny"), despite the claim otherwise.


I think it's important to consider the intent of those laws, too. They are primarily or even exclusively to prevent you from hurting others with knives. They are not really intended to protect you from cutting yourself in your own home. So I think the parent's comment still holds weight.

All of these rules, and yet people still cut themselves and others.

you cant buy a kitchen knife that is too long

What?


sorry, should say "carry", not "buy". most states have a maximum length you can carry (4-5.5 inches is common).

although, i would imagine at some length, it becomes a "sword" (even if marketed as a knife) and falls under some other "nanny"-ing. i have not googled that.


You still have an hour or two to edit your comment. Look in that line of text where you see your user name, click “Edit”.

Doesn’t editing require a karma threshold?

it does not (thankfully!)

Apostrophe's don't have a karma threshold, either. ;-)

As kevin_thibedeau points out elsewhere in the thread, he's not necessarily wrong. In many states and foreign countries it's illegal to carry a large knife in public without a reason and I'm sure purchases are restricted in some places as well. Most people are more or less OK with that, it seems, so there historically hasn't been a lot of pushback.

So, having been given the proverbial inch (or centimeter), those obsessed with banning potentially-dangerous tools are trying to take the next mile (or kilometer): https://theconversation.com/why-stopping-knife-crime-needs-t...


Long knives in the UK are like full auto guns in the rest of the world.

> In Google's announcement in Nov 2025, they articulated a pretty clear attack vector. https://android-developers.googleblog.com/2025/11/android-de...

This reeks of "think of the children^Wscammed". I mean, following this principle the only solution is to completely remove any form of sideloading and have just one single Google approved store because security.

> A related approach might be mandatory developer registration for certain extremely sensitive permissions, like intercepting notifications/SMSes...? O

It doesn't work like that. What they mean by "mandatory developer registration" is what Google already does if you want to start as a developer in the Play Store: pay a one-time $25 fee with a credit card and upload your passport copy to some (third-party?) ID verification service. [1] In contrast, with F-Droid you just need a GitLab account to open a merge request in the fdroid-data repository and submit your app, which they scan for malware and compile from source on their build server.

[1] but I guess there are plenty of ways to fool Google anyway even with that, if you are a real scammer.


the whack-a-mole problem is real but mandatory registration doesn't actually fix it for sophisticated actors -- they'll just use burner entities or buy aged developer accounts. it mostly raises costs for hobbyists and side projects. the permission-gating approach dfabulich mentions (require registration only for notification/SMS interception APIs) seems more targeted.

That attack vector is just a symptom. It’s unfathomably foolish to use two-factor authentication via something as easy to intercept as SMS. Two-factor authentication should be done using a separate hardware token that generates time-based one-time codes. Anything else is basically security theater.

One time codes are still vulnerable to phishing by a site that proxies the bank's authentication challenge. You need something like FIDO2 where a challenge-response only works when the relying party ID is correct.
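One-time codes, however generated, are just short-lived bearer values; a minimal RFC 6238-style TOTP sketch (illustrative Python, not a production implementation) makes that concrete:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated to a short decimal code."""
    counter = struct.pack(">Q", at // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# The result is a plain 6-digit bearer value. Nothing ties it to where
# the user types it, so a live phishing proxy can simply relay it to
# the real site within the time window.
code = totp(b"shared-secret", at=1_700_000_000)
```

That missing binding between the code and the site it's entered on is exactly the gap a live proxy exploits, and exactly what FIDO2's relying-party binding closes.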

There will _always_ be a need to balance between safety and the cost of adding more safety. There is no point at which safety is complete; there is always more that can be done, but the cost gets higher and higher.

So yes, "its fine the way it is" _is_ valid; but the meaning it "we're at a good point in the balance, any more cost is too much given the gains it generates"


> I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."

People choosing between the smartphone ecosystems already have a choice between the safety of a walled garden and the freedom to do anything you like, including shooting yourself in the foot.

You don't spend a decade driving other "user freedom" focused ecosystems out of the marketplace, only to yank those supposed freedoms away from the userbase that intentionally chose freedom over safety.


How about.

"I am responsible for my own actions" mode.

You click that, and the phone switches into a separate user space. SafetyNet attestation is disabled, which is what most financial apps rely on.

Then you can install all the fun stuff you want.

This is really a matter of Google not sandboxing stuff right. Why the hell does App A need access to data or notifications from App B?


> Why the hell does App A need access to data or notifications from App B.

Advertising networks. Just like how you see crap like a metronome app with a laundry list of permissions it doesn't need. In some cases they are just scammy data harvesters, but in other cases it's the ad networks that are actually demanding those permissions.

Google won’t sandbox properly because it’s against their direct business interest for them to do so. Google’s Android is adware, and that is the fundamental problem.


The new "Terminal" app might eventually evolve into something like that.

This mode already exists. It's called "Install LineageOS".

> the malware captures their two-factor authentication codes

Aren't we supposed to have sandboxing to prevent this kind of thing? If the malware relies on exploiting n-days on unpatched OSes, they could bypass the sideloading restrictions too.


Codes arrive via SMS, which is available to all apps with the READ_SMS permission. This isn't an OS vuln. It is a property of the fact that SMS messages are delivered to a phone number and not an app.

On the Play Store there is a bunch of annoying checking for apps that request READ_SMS to prevent this very thing. Off Play, such a defense is impossible.
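For context, the gate outside of Play review is just a manifest declaration plus the OS consent prompt. An illustrative AndroidManifest.xml fragment (real permission names, hypothetical app):

```xml
<!-- Any app, sideloaded or not, can declare these; the user's tap on
     the runtime permission prompt is the only remaining gate. -->
<uses-permission android:name="android.permission.RECEIVE_SMS" />
<uses-permission android:name="android.permission.READ_SMS" />
```

This is why the debate centers on coached victims: the technical control exists, but it reduces to a dialog the scammer talks the victim through.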


If they restricted sideloaded apps from sniffing SMS then I wouldn't mind all that much.

So no access to SMS for apps distributed on F-Droid?

The main problem here is the banks relying on an untrusted device as second factor.

Only immutable devices should be allowed as second factor.


Maybe we should take away peoples' phone calls, ability to use knives, walking on the street, swimming in water, drinking liquids of any kinds, alcohol, trains, while we are at it.

I think there's room to raise the bar of required tech competency without registration.

Manually installing an app might be close to the limit of what grandma can be coached through by an impatient scammer.

Multiple steps over adb, challenges that can't be copy and pasted in a script, etc. It can be done but it won't provide as much control over end user devices.


I don’t want to be too flippant, but I think there is a real trade off across many aspects of life between “freedom” and “safety”.

There is a point at which people have to think critically about what they are doing. We, as a society, should do our best to protect the vulnerable (elderly, mentally disabled, etc) but we must draw the line somewhere.

It’s the same thing in the outside world too - otherwise we could make compelling arguments about removing the right to drive cars, for example, due to all the traffic accidents (instead we add measures like seatbelts as a compromise, knowing it will never totally solve the issue).


> In Google's announcement in Nov 2025, they articulated a pretty clear attack vector.

If you can be convinced by this, you can be convinced by anything. What if the scammer uses "fear and urgency" to make the person log onto their bank account and transfer the funds to the scammer?

If you can convince people to install new apps through "fear and urgency," especially given how annoying it often is to do outside of the blessed Google-owned flow (and they're free to make it more annoying without taking this step), that person can be convinced of anything.

> I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."

There's no other "solution" other than control by an authority that you totally trust if your "threat" is that a user will be able to install arbitrary apps.

The manufacturer, service provider, and google, of course, won't be held to any standard or regulations; they just get trusted because they own your device and its OS and you're already getting covertly screwed and surveilled by them. Google is a scammer constantly trying to exfiltrate information from my phone and my life in order to make money. The funny thing is that they are only pretending to defend me from their competition - they're not threatened by those small-timers - they're actually "defending" me from apps that I can use to replace their own backdoors. Their threat is that they might not know my location at all times, or all of my contacts, or be able to tax anyone who wants access to me.


Google's announcement is just trolling; there's an order of magnitude more scams on the Play Store, and they don't call for its closure.

Right now when I search for "ChatGPT", the top app is a counterfeit app with a fake logo, is it really this store which is supposed to help us fight scams?


> Right now when I search for "ChatGPT", the top app is a counterfeit app with a fake logo, is it really this store which is supposed to help us fight scams?

Just did a Play search for "ChatGPT" and the top two results were for OpenAI's app (one sponsored by OpenAI, one from Google's search). So anecdotally your results may vary.


Agree with this middle path you point out. On one hand, I do not want some apps to be distributed anonymously, I need to know who is behind it in order to trust the app. On the other hand, many apps are benign.

Permissions are a great way to distinguish.


Do you need Google to compel the author to start a business relationship with them, which they can cut off at any time?

Or would you be OK knowing that Thunderbird you downloaded from https://thunderbird.net/ is signed by the thunderbird.net certificate owner?


Typo squatting is a thing, and so are Unicode homographs.

The permissions approach isn't bad. I may trust Thunderbird for some things, but permission to read SMS and notifications is permission to bypass SMS 2FA for every other account using that phone number. It deserves a special gate that's very hard for a scammer to pass. The exact nature of the gate can be reasonably debated.


Something like Thunderbird might be an exception, but domain confusion also exists, so in the general case, most likely not, because most users are susceptible to it.

should I be confident that thunderbird.net is the real one, or could it be hosted at thunderbird.org, thunderbird.com, or thunderbird.mozilla.org?

> standard security warnings

Make the warning a full screen overlay with a button to call local police then.

(Seriously)

"but local police won't treat that seriously..." "the victim will be coached to ignore even that..." well no shit then you have a bigger problem which isn't for google to fix.


> but I think the community needs a better response

The community does not need to do that. Installing software on my device should not require identification to be uploaded to a third party beforehand.

We're getting into dystopian levels of compliance here because grandma and grandpa are incapable of detecting a scam. I sympathize, not everyone is in their peak mental state at all times, but this seems like a problem for the bank to solve, not Android.


These people would try to ban talking if the scams moved to in-person conversations. At some point individual responsibility has to come into play.

You can’t even win with adding more scare screens because as soon as Epic isn’t allowed to bypass the scare screens, they’ll sue you.

Just like they went after Samsung for adding friction to the sideload workflow to warn people against scams.

https://www.macrumors.com/2024/09/30/epic-games-sues-samsung...


I agree with Epic. It should be like on windows or macOS where you can register, get notarized, and then distribute without scare screens. I don’t see why phones are inherently different than computers.

So, what's your counterproposal?

Each of these tools provides real value.

* Bundlers drastically improve runtime performance, but it's tricky to figure out what to bundle where and how.

* Linting tools and type-safety checkers detect bugs before they happen, but they can be arbitrarily complex, and benefit from type annotations. (TypeScript won the type-annotation war in the marketplace against other competing type annotations, including Meta's Flow and Google's Closure Compiler.)

* Code formatters automatically ensure consistent formatting.

* Package installers are really important and a hugely complex problem in a performance-sensitive and security-sensitive area. (Managing dependency conflicts/diamonds, caching, platform-specific builds…)

As long as developers benefit from using bundlers, linters, type checkers, code formatters, and package installers, and as long as it's possible to make these tools faster and/or better, someone's going to try.

And here you are, incredulous that anyone thinks this is OK…? Because we should just … not use these tools? Not make them faster? Not improve their DX? Standardize on one and then staunchly refuse to improve it…?


I'm being a little coy because I do have a very detailed proposal.

I want the JS toolchain to stay written in JS, but I want to unify the design and architecture of all the tools you mentioned so that they can all use a common syntax-tree format and share data, e.g. between the linter and the formatter, or the bundler and the type checker.


Yeah it's a shame that few people realize running 3 (or more) different programs that have separate parsing and AST is the bigger problem.

Not just because of perf (though the perf aspect is annoying) but because of how often the three will get out of sync and produce bizarre results

Hasn't that already been tried (10+ years ago) with projects like https://github.com/jquery/esprima ? Which have since seen usage dramatically reduced for performance reasons.

Yeah, you are correct. But that means I have the benefit of ten years of development in the web platform, as well as hindsight on the earlier effort.

I would say the reason the perf costs feel bad there is that the abstraction was unsuccessful. Throughput isn't all that big a deal for a parser if you only need to parse the parts of the code that have actually changed.


You can rip fast builds from my cold, dead hands. I’m not looking back to JS-only tooling, and I was there since the gulp days.

All I can say for sure is that the reason the old tools were slow was not that the JS runtime is impossible to build fast tools with.

And anyway, these new tools tend to have a "perf cliff": you get all the speed of the new tool as long as you stay away from the JS integration API used to support the "long tail" of use cases. Once you fall off the cliff, though, you're back to the old slow-JS cost regime...


> […] the reason the old tools were slow was not that the JS runtime is impossible to build fast tools with.

I don't have them at hand right now but there are various detailed write-ups from the maintainers of Vite, oxc, and more, that are addressing this specific argument to point out that indeed the JavaScript runtime was a hard limitation on the throughput they could achieve, making Rust a necessity to improve build speeds.


Why do you need high throughput though? Isn't that a metric of how fast a batch processing system is?

Why are we still treating batch processing as the controlling paradigm for tools that work on code. If we fully embraced incremental recomputation and shifted the focus to how to avoid re-doing the same work over and over, batch processing speed would become largely irrelevant as a metric


> For security, other than what the MCP protocol itself provides, what should be defined?

The MCP protocol itself provides no security at all.

The MCP specification includes no specified method of authorization, and no specified security rules. It lists a handful of "principles," and then the specification simply gives up on discussing the problem further.

https://modelcontextprotocol.io/specification/2025-11-25#sec...

    3.2 Implementation Guidelines

    While MCP itself cannot enforce these security principles at the protocol
    level, implementors **SHOULD**:

    1. Build robust consent and authorization flows into their applications
    2. Provide clear documentation of security implications
    3. Implement appropriate access controls and data protections
    4. Follow security best practices in their integrations
    5. Consider privacy implications in their feature designs

It's just an HTTP or stdio server; would there be considerations beyond those of any other HTTP server or CLI app? Shouldn't security depend on deployment details? You wouldn't require OAuth if it's deployed on localhost only, for example, or if there's a reverse proxy handling that bit.

There's a reason it cannot enforce those principles: an MCP server is a web service. It could use SQL as a backend, or serve static pages; it might be best to use mTLS, or it might make sense to leave it open to the public with no authentication or authorization whatsoever, where your only concern is availability (429 thresholds). The spec can't and shouldn't account for wildly varying implementation possibilities, right?


The difference is that MCP introduces a third party: the agent isn't the user and isn't the service, but it's acting on behalf of one to call the other. Standard HTTP auth assumes two parties. That's the gap the spec needs to address.

> I only scan the headlines

Have you scanned any headlines about ICE lately? Maybe do a quick search for news about Minnesota?

(I'm pretty sure that if you'd been putting your pants on in Minnesota, you would not have written this comment.)


Are you saying legal US citizens are having a tough time in Minnesota with ICE? My cousins and their families aren't. They're too busy leading their own normal, daily lives.


Yes; my neighbors had trouble going to the grocery store. From appearances, you might think they're on vacation from Mexico. They have been here for generations, and one of their family is a high enough ranking member of the military that I won't say more to avoid the risk of doxxing them.


Yes, two of them were just killed. Does that qualify as "having a tough time?"


And how many people live in Minnesota? What were they doing when they were killed?


I don't get your point. What proportion of residents does an event need to negatively impact for you to believe that it's hassling people?

Surely it can't be 100%, right? No event in any major city, even horrific events, actually affect everyone.


How many illegal aliens were killed in Minnesota?


What's the ratio of citizens to non-citizens that's okay? One citizen per every hundred or are you thinking 10-1?


Have you considered they could maybe just stop interfering with federal law enforcement and let them do their jobs as they have been doing for decades under all sorts of administrations? You'll be hard pressed to find a tear shed for agitators protecting illegal immigrant criminals with deportation orders.


Neither you nor anyone else believes this is how immigration enforcement has been done "for decades under all sorts of administrations."

You can make it appear as if you have a better grasp on reality by just acknowledging that this is a much different enforcement mechanism than we've seen in the past, but you think that's okay.

Anyway there are now several known cases of people being detained or deported without deportation orders. This is another point that you could at least give the appearance of honesty and grasp on reality by acknowledging.


You're right that immigration enforcement in the past did not have to deal with mobs trying to interfere with that enforcement.


DHS's own data proves that current enforcement priorities have changed.

So what's more probable in your mind?

( Hypothesis A ) -- Mobs trying to interfere with law enforcement has caused DHS to focus on arresting and deporting immigrants without criminal background

( Hypothesis B ) -- DHS's focus on arresting and deporting immigrants without criminal background has required significant scale-up of personnel with minimal training (validated by DHS's own data) and required tactics that a large number of Americans believe to strike an unacceptable cost-benefit balance

( Hypothesis C ) -- The two facts (enforcement approach and public response) are not causally related to each other at all


It's telling you chose to not answer the question and instead chose to introduce a different (straw man) question in response.

At least people in the past had the integrity to acknowledge their positions head-on. That's one of the lamentable things missing today.


Interfering with federal law enforcement is not punishable by summary execution.


Huh? Did you respond to the wrong comment?


> What were they doing when they were killed?

One was returning from dropping off her 6 year old child at school.

The other was videotaping ICE activity with one hand while holding out the other hand to show he was no threat.

What is your point, exactly? Neither was doing anything illegal, neither was directly trying to interfere with ICE actions. (The first wasn't trying to interfere at all.)

Although normally I'd say wait for the full evidence to be revealed, in this case (1) there's already a wealth of evidence from bystanders, and (2) the investigations are actively being interfered with so official evidence is not forthcoming.

Those are the 2 citizens killed. CBP and ICE killed at least 25 other people in the field and at least 30 died in custody (one source cites 30-32, another 44).

Apparently, the violence is necessary to deport at (checks notes) a lower rate than Biden's. It might make sense if the current enforcement was aimed at serious criminals, but only the rhetoric is. The current enforcement is much less selective. More damage, less gain.


A corollary I don't see mentioned enough by the morons who believe there are roving hordes of violent illegal criminals:

Let's assume there was. Then what on earth is the administration doing tracking down and putting cuffs on so many people who do not fit in that category?

Every seat in a detention center, courtroom, or plane filled by a random guy stopped in the Home Depot parking lot is a seat taken away from one of these allegedly numerous violent rapist/murders/whatever.

So even if you were stupid enough to believe all the transparent bullshit from this gang of liars, they'd still be fucking awful!

All this stuff does, in addition to squelching public appetite for immigration enforcement writ large, is keeps the actual bad guys inside the country even longer!


You keep moving the goalposts that much and maybe the Patriots can win the Super Bowl.


[flagged]


There has been no such thing.


Just curiously, what do you personally get out of lying constantly in this thread?


It's not a lie to point out the truth. Words have meaning, and wantonly applying the scariest-sounding words you can find does not help your cause.


Dopamine.


In 2024, Starcloud posted their plans to "solve" the cooling problem. https://starcloudinc.github.io/wp.pdf

> As conduction and convection to the environment are not available in space, this means the data center will require radiators capable of radiatively dissipating gigawatts of thermal load. To achieve this, Starcloud is developing a lightweight deployable radiator design with a very large area - by far the largest radiators deployed in space - radiating primarily towards deep space...

They claim they can radiate "633.08 W / m^2". At that rate, they're looking at roughly 1.6 square kilometers (about 160 hectares) of radiators per gigawatt of thermal load.

They also claim that they can "dramatically increase" heat dissipation with heat pumps.

So, there you have it: "all you have to do" is deploy a few hectares of radiators in space, combined with heat pumps that can dissipate gigawatts of thermal load with no maintenance at all over a lifetime of decades.
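
The arithmetic, using their own claimed figure (my back-of-envelope check, not their numbers):

```javascript
// Required radiator area from Starcloud's claimed dissipation rate
const flux = 633.08;      // W/m^2, their claimed radiative dissipation
const load = 1e9;         // 1 GW of thermal load
const area = load / flux; // required radiator area, m^2
console.log((area / 1e6).toFixed(2) + " km^2 per GW"); // ≈ 1.58 km^2
```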

This seems like the sort of "not technically impossible" problem that can attract a large amount of VC funding, as VCs buy lottery tickets that the problem can be solved.


Yes, on the face of it, the plan is workable. Heat radiation scales linearly with area and with the fourth power of temperature (Stefan-Boltzmann law).
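
For reference, a hedged sketch of that scaling (my numbers: I'm assuming emissivity 0.9 and ignoring absorbed sunlight and Earthshine, so this is optimistic):

```javascript
// Stefan-Boltzmann: radiated flux goes as T^4; total power scales
// linearly with radiator area.
const sigma = 5.670374419e-8;         // Stefan-Boltzmann constant, W m^-2 K^-4
const eps = 0.9;                      // assumed emissivity
const fluxAt = (T) => eps * sigma * T ** 4;
console.log(Math.round(fluxAt(334))); // ≈ 635 W/m^2, near their claimed 633
```

So their claimed figure corresponds to radiators running at roughly room temperature; running hotter helps a lot, but then you need heat pumps to lift the waste heat up to that temperature.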

It really is as simple as just adding kilometers of radiatiors. That is, if you ignore the incredible cost of transporting all that mass to orbit and assembly in space. Because there is quite simply no way to fold up kilometer-scale thermal arrays and launch in a single vehicle. There will be assembly required in space.

All in all, if you ignore all practical reality, yes, you can put a datacenter in space!

Once you engage a single brain cell, it becomes obvious that it is actually so impractical as to be literally impossible.


I kind of want to play it out though... if someone did do this (for whatever reasons), what would the real benefits even be? Something terrestrial operations wouldn't be able to catch up to in 5-10 years?


This article includes a graph with a negative slope, claiming that AI tools are useful for beginners, but less and less useful the more coding expertise you develop.

That doesn't match my experience. I think AI tools have their own skill curve, independent of the skill curve of "reading/writing good code." If you figure out how to use the AI tools well, you'll get even more value out of them with expertise.

Use AI to solve problems you know how to solve, not problems that are beyond your understanding. (In that case, use the AI to increase your understanding instead.)

Use the very newest/best LLM models. Make the AI use automated tests (preferring languages with strict type checks). Give it access to logs. Manage context tokens effectively (they all get dumber the more tokens in context). Write the right stuff and not the wrong stuff in AGENTS.md.
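
For what it's worth, here's the shape of the "right stuff" I mean — a hypothetical AGENTS.md sketch (the project conventions and file paths are made up, adapt to your repo):

```markdown
# AGENTS.md — hypothetical example

- Run `npm test` after every change; don't report done with failing tests.
- TypeScript strict mode is on; never add `any` or `@ts-ignore`.
- Dev-server logs are written to `logs/dev.log`; read them before guessing.
- Keep diffs minimal; don't reformat files you aren't otherwise changing.
```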


That sounds exhausting.

I'd rather spend my time thinking about the problem and solving it than thinking about how to get some software to stochastically select language that appears to be thinking about the problem, only to implement a solution I'm going to have to check carefully.

Much of the LLM hype cycle breaks down into "anyone can create software now", which TFA makes a convincing argument for being a lie, and "experts are now going to be so much more productive", which TFA - and several studies posted here in recent months - show is not actually the case.

Your walk-through is the reason why. You've not got magic for free, you've got something kinda cool that needs operational management and constant verification.


I’ve seen otherwise intelligent and capable people get so addicted to the convenience and potential of LLMs, that they start to lose their ability to slowly go through problems step by step. it’s sad.


Agreed. My work is mandating Claude Code usage this week for everyone. I spent all day today getting it to write tickets, code, and tests for something I knew how to do. I don’t understand the appeal. Telling the AI “commit those changes and then push,” then waiting for the result, takes way longer than gcmsg <commit msg> && gp.


If you're not developing an iOS/macOS app, you can skip Xcode completely and just use the `swift` CLI, which is perfectly cromulent. (It works great on Linux and Windows.)


There's a great indie app called Notepad.exe [1] for developing iOS and macOS apps on macOS. You can also write and test Swift apps for Linux easily [2]. It also supports Python and JavaScript.

If you hate Xcode, this is definitely worth a look.

[1]: https://notepadexe.com

[2]: https://notepadexe.com/news/#notepad-14-linux-support


So wait this thing is real? Calling it notepad.exe gave me the impression that it's just an elaborate joke about how you can code any program in Notepad...


It might have a joke name but it costs $80!


That's the real joke...


Or pay $19.99 for a year and be able to run it on 3 devices.

That's a pretty good deal.


It claims “native performance”, which makes me suspect it’s another Electron bloat.


Instead of speculating you could download and see for yourself that it’s not. It’s by Marcin Krzyzanowski who is all about native iOS and macOS apps.


Even if you're developing for macOS you can skip xcode. I've had a great time developing a menubar app for macOS and not once did I need to open xcode.


curious what you used - I've been looking into making a menubar app and really hate xcode


claude -p "Make a menubar app with AppKit (Cocoa) that does X"


I would avoid it for Linux and Windows. Even if they are "technically supported", Apple's focus is clearly macOS and iOS. Being a second- (or even third-) class citizen often introduces lots of issues in practice ("oh, nobody tested that functionality on Windows"...)


Self-driving municipal buses would be fantastic.


Also, a real nightmare for the municipal trade unions. (Do you know why every NYC subway train needs to have not one but two operators, even though it could run automatically just fine?)


Why?


Because the Transport Workers Union fought tooth and nail for it. Laying off hundreds of operators would be a politically very dangerous move.


Huh. I wonder if that makes any sense. It doesn't seem to make sense to keep employing people if you no longer need them. It sucks to be laid off, but that's just how it works.


It also shows a lack of imagination. If you have to provide a union with a job bank, why not re-deploy employees to other roles? With one person per train, re-deploy people to run more trains therefore decreasing the interval between trains. Stations used to have medics but this was cut. How about re-train people to be those medics? The subway could use a signaling upgrade and positive train control. Installing platform screen doors to greatly reduce the incidence of people falling onto the tracks is going to need a lot of labor.


Why would you need buses?


Mass transit is a capacity multiplier. If 35 people are headed in the same direction compare that with the infrastructure needed to handle 35 cars. Road capacity, parking capacity, car dealerships, gas stations, repair shops, insurance, car loans.


Believe it or not, in some cities with near-gridlock rush-hour traffic, there are between 50% and 100%+ as many people traveling by bus as by car.

If all of those people switch to cars, you end up with it taking an hour to travel 1 mile by car.

It's almost as if they have buses for a reason.


First, these cities should be fixed by removing the traffic magnets. We're far past the point where the old, obsolete ideology of supplying as much traffic capacity as possible makes sense.

But anyway, your statement is actually not true anywhere in the US except NYC. Even in Chicago, removing ALL the local transit and switching to 6-seater minivans will eliminate all the traffic issues.


> First, these cities should be fixed by removing the traffic magnets.

If you remove the jobs and housing, traffic does get a lot better. But it's not much of a city without jobs and housing.


Indeed. And once you remove dense office cores, people live better lives, with better job accessibility and variety.


Car traffic magnets like highways inside urban cores? Or people traffic magnets like office buildings, colleges, sports stadiums, performing arts venues, shopping malls?


Office buildings. Everything else is just noise.

Large stadium arenas are a special case, but they don't create sustained traffic, and their usage periods typically do not overlap with the regular rush hour.


6-seater self-driving municipal minivans would be fantastic, too. (I would still call that a "bus", but I don't care what we call it.)


That's the testing matrix we have to do for iOS and Android apps today. The screen sizes don't go all the way up to ultrawide, but 13" iPad (portrait and landscape) down to 4" iPhone Mini, at every "Dynamic Type" display setting is required.

It's not that tough, but there can be tricky cases.


Also with every relevant locale, as English UI strings are usually abnormally short.


I think the industry settled on pretty good answers, using lots of XML-like syntax (HTML, JSX) but rarely using XML™.

1. Following Postel's law, don't reject "invalid" third-party input; instead, standardize how to interpret weird syntax. This is what we did with HTML.

2. Use declarative schema definitions sparingly, only for first-party testing and as reference documentation, never to automatically reject third-party input.

3. Use XML-like syntax (like JSX) in a Turing-complete language for defining nested UI components.

Think of UI components as if they're functions, accepting a number of named, optional arguments/parameters (attributes!) and an array of child components with their own nested children. (In many UI frameworks, components literally are functions with opaque return types, exactly like this.)

Closing tags like `</article>` make sense when you're going to nest components 10+ layers deep, and when the closing tag will appear hundreds of lines of code later.

Most code shouldn't look like that, but UI code almost always does, which is why JSX is popular.
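
To make the "components are functions" point concrete, here's a sketch (my illustration, not React's actual runtime) of what a JSX tag desugars to — a function call with named optional props and nested children:

```javascript
// A JSX element is just sugar for a call like h(tag, props, ...children)
function h(tag, props = {}, ...children) {
  return { tag, props, children };
}

// <article id="post"><h1>Title</h1>Body text</article> compiles to roughly:
const tree = h("article", { id: "post" },
  h("h1", {}, "Title"),
  "Body text");

console.log(tree.children[0].tag); // "h1"
```

Once the nesting gets 10+ levels deep, the named closing tags carry real information that a pile of closing parens wouldn't.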


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: