Another flaw in Signal desktop app leaks chats in plaintext (thehackernews.com)
451 points by workerthread on May 17, 2018 | hide | past | favorite | 221 comments



From the researcher who found it:

we were able to compile a list of strategic defense-in-depth recommendations for Signal Desktop which we’ve sent to the Signal security team per their request. At the end of the day there will always be new “hot” vulnerabilities, but the “vendor” response is generally what separates the wheat from the chaff. The Signal team’s quick patch time along with a strong interest in mitigating vulnerabilities of this type in the future was encouraging to see. I’ll remain a Signal user for the foreseeable future :)

https://thehackerblog.com/i-too-like-to-live-dangerously-acc...


>Researchers—Iván Ariel Barrera Oro, Alfredo Ortega, Juliano Rizzo, and Matt Bryant—responsibly reported the vulnerability to Signal, and its developers have patched the vulnerability with the release of Signal desktop version 1.11.0 for Windows, macOS, and Linux users.

>However, The Hacker News has learned that Signal developers had already identified this issue as part of a comprehensive fix to the first vulnerability before the researchers found it and reported it.

>Signal app has an auto-update mechanism, so most users must have the update already installed. You can read this guide to ensure you are running an updated version of Signal.

Seems everything is patched, and was already going to be patched before the vuln was reported.


Maybe secure chat clients shouldn't be written in JavaScript or other languages with excessive dynamism? Signal seems to be written mostly in languages that are bad for security (significantly worse than the best alternatives). Maybe I'm just a language nerd without any clue about the trade-offs, but I trust the Wire software more. Note that this just applies to mobile clients and server - Wire, like Signal, chose to build their desktop+webapp in JavaScript :(


As much as I'm not a fan of JavaScript, the problem is not so much the language but rather the choice of Electron and all that comes with it. Heck, even a web version or Chrome app would've successfully mitigated these attacks. Electron means you're one XSS away from remote code execution, and even worse, it makes it way harder to mitigate XSS through CSP (which Signal did utilize, but script-src 'self' can easily be bypassed in Electron).
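The reason is that, with Node integration enabled in the renderer (the default in Electron at the time), injected script gets full Node APIs. A rough sketch of both halves (hypothetical, not Signal's actual code):

  // What an XSS payload can do inside a renderer with nodeIntegration enabled:
  require('child_process').exec('calc.exe');   // arbitrary command execution

  // Hardening sketch: create windows without Node access so injected markup
  // stays confined to the page.
  const { BrowserWindow } = require('electron');
  const win = new BrowserWindow({
    webPreferences: { nodeIntegration: false, contextIsolation: true },
  });
  win.loadURL('file://' + __dirname + '/index.html');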

FWIW, Signal's native mobile apps are written in Java and Objective-C respectively, so there's not really much of a difference compared to Wire (which is a good choice as well). Still, even a hypothetical React Native app written in JavaScript wouldn't be much worse; after all, React Native isn't just a Web View made to look like a native app, but uses actual native components.


While I agree that Electron offers a massive amount of footguns, neither JavaScript nor Electron was the issue in this case. The issue was using innerHTML (or rather $.html()) with strings concatenated together from user input. Something you should never do. Could as well just call eval() directly on it, or pass the input to gcc, compile it and run the resulting binary.
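To make the distinction concrete, here's a minimal sketch (hypothetical element IDs and message object, not Signal's actual code) of the unsafe concatenation pattern versus inserting untrusted input as text:

  // The anti-pattern: user-controlled text concatenated straight into markup.
  $('#quote').html('<div class="quote-text">' + message.body + '</div>'); // XSS if body contains markup

  // Safer: build the element and insert the untrusted part as text, not HTML.
  var el = $('<div class="quote-text">').text(message.body);
  $('#quote').empty().append(el);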

The Signal devs thought $.html() does some kind of escaping: https://github.com/signalapp/Signal-Desktop/commit/9d41b8616... (this commit made something that was easy to exploit into something that was even easier to exploit).

To be honest, I'd lay more blame on the authors of the DOM spec for making innerHTML a setter than on Electron, and on jQuery for exposing this misfeature even more with $.html(), teaching an army of web developers to do the wrong thing. We've all seen numerous (XSS) vulnerabilities in all kinds of websites, browser extensions, Electron apps, etc. resulting from this API, though in Electron apps it gets particularly devastating, as often you'd get not just code execution in a sandboxed website but full code execution under the current user's credentials on the system.


> The Signal devs thought $.html() does some kind of escaping: https://github.com/signalapp/Signal-Desktop/commit/9d41b8616.... (this commit made something that was easy to exploit into something that was even easier to exploit).

This is an absolutely egregious rookie error. I wouldn't touch the Signal desktop app with a 10 foot pole after seeing that commit.


What's incredible is that the author actually had to modify an XSS test so that it read:

> const expected: string = "Hello<br><script>alert('evil');</script>World!";

(Meaning they actually changed a line that had "<script>alert('evil')</script>" and didn't notice.)

I have seen this before though, with some folks removing path sanitisation code I added several years prior to fix a CVE. So it's not uncommon (it also got merged, so when I found out and fixed it I added a very large and scary comment to stop people from doing it again).


Why is this a 404 now?


Looks like grrowl copied and repasted the truncated display text of the link. The full link is 2 comments up: https://github.com/signalapp/Signal-Desktop/commit/9d41b8616...


>The Signal devs thought $.html() does some kind of escaping:

Uhm... that's a really rookie mistake to make. Like, one of the very basics of jQuery usage. I'm not exactly sure what to think about it after seeing this commit you linked...


The worst part is that someone assumed something and then removed the code that did the escaping, without doing even the most basic of tests, like a quick foo.html('<script>alert("oh snap this is bad")</script>') in the browser console.


Even worse than that, it looks like there _were_ unit tests to check that input was correctly sanitized, but the patch that introduced the bug also explicitly changed the unit tests to ensure that input was _not_ being escaped!

A mistake that seems like it could've been caught in a code review!


These are the kinds of things that make tinfoil hats wonder if any agencies interested in subverting encryption "plant" employees. It's difficult to differentiate stupidity from malice. We know they use shell companies and subvert device manufacturers and standards, so it's not unfathomable.


Maybe he did and it didn't work for some reason and he thought it was good. Shit happens. I think the most worrying part is not having been caught in code review. Signal does code review, right?


I don't know if this is correct, but, I once got the impression that Signal Desktop was under the sole purview of a new hire at OWS. In other words, Moxie doesn't review the commits. I hope I'm wrong, but even if I'm not, I suppose it makes no difference, as he's arguably responsible either way.


Isn't this supposed to be an app for secure communication? Theo would have a conniption.


Has Theo de Raadt ever endorsed the security of anything except the default install of OpenBSD?


So to be clear, a lot of the blame definitely belongs in the "all that comes with it" bucket here, which is one of the reasons why you should think twice about developing desktop apps using a platform that forces you to deal with not only the usual desktop app security concerns, but also all the things that make web apps vulnerable.

Still, when you ship an app with a relatively strict Content Security Policy as Signal did (including using script-src 'self'), you don't really expect a simple XSS vulnerability to lead to RCE, but it turns out that policy doesn't really do much in an Electron app.


> The issue was using innerHTML (or rather $.html()) with strings concatenated together from user input.

> The Signal devs thought $.html() does some kind of escaping

I mean, it does do a kind of escaping. If you assign javascript to innerHTML directly, it won't execute. jQuery specifically checks whether you're adding a script tag, and if so, it takes the extra step to execute it for you.
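To make that concrete, a quick sketch (hypothetical element and variable names) of the difference:

  // Markup assigned via innerHTML: an embedded <script> does not run,
  // but other vectors (e.g. event handler attributes) still do.
  var el = document.getElementById('msg');
  el.innerHTML = '<script>alert("not executed")<\/script>';
  el.innerHTML = '<img src=x onerror="alert(\'this does run\')">';

  // jQuery's .html() goes further: it finds embedded <script> tags and executes them.
  $('#msg').html('<script>alert("executed by jQuery")<\/script>');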


You mean, the innerHTML of a <script> element. Which isn’t really a thing, because the inside of a <script> tag is a document boundary—assigning raw Javascript to innerText or innerHTML directly would make no sense in either case. You need to wrap your Javascript in a CDATA node ;)


No, analogously to adding a script tag with jQuery, I meant adding a script tag, with javascript inside, to the innerHTML of some other element.


Surely the collective noun for footguns is a cache :)


> The issue was using innerHTML (or rather $.html()) with strings concatenated together from user input. Something you should never do. Could as well just call eval() directly on it, or pass the input to gcc, compile it and run the resulting binary.

Yes, but most engineers would look at that last element and say "what on earth is going on here", whereas $.html() being dangerous is something that engineers who don't usually work on the web might not know about. You're right about blaming the DOM spec, but there's no actual reason for Signal Desktop to interact with that poorly designed spec except that they chose Electron as a framework.


Is there a linter that can catch this kind of mistake?
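Something along these lines could cover the common DOM sinks (a sketch; rule names from eslint-plugin-no-unsanitized and eslint-plugin-react, which would need to be installed; jQuery's $.html() likely needs extra custom configuration):

  // .eslintrc.js (sketch)
  module.exports = {
    plugins: ['no-unsanitized', 'react'],
    rules: {
      // flags assignments to innerHTML/outerHTML and calls like insertAdjacentHTML()
      'no-unsanitized/property': 'error',
      'no-unsanitized/method': 'error',
      // flags React's dangerouslySetInnerHTML
      'react/no-danger': 'error',
    },
  };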


I disagree; the issue is Electron.

Sure, you can write secure apps in Electron, just like you can do risky stuff in real-life and be fine most of the time. But why take the risk?

Had the app been written with a native language and SDK, they wouldn't need to worry about escaping or anything. I have yet to hear about getting remote code execution from dumping text into a UILabel or similar, while XSS happens almost every day.


Signal Desktop actually used to be a Chrome app. Then Google announced the deprecation of that feature and they ported it over to Electron.


Sad day for Chrome OS users. No more Signal updates! :(


One thing I'd like to see is more use of containers and permissions locally.

For example, my IntelliJ runs as my user account, but it doesn't need access to all my files.

I should be able to select which directories it has access to, and it should be within a container by default.

I mean I can set this stuff up manually, but in the future I'd like to see this as the default.

Similar to the way Android apps ask for permissions.


It seems you are calling for Qubes OS [1], which does that but using VMs (which should be more secure than containers).

It will take a looong time for "standard" OSes to get there, if they ever do. The required changes in UX are very significant...

[1] https://www.qubes-os.org/


That is the whole idea of the UWP model on Windows, and the ongoing work to put Win32 apps inside of the same containers.

Or the sandbox models on Android, iOS and macOS.


And Flatpak on Linux. IIRC the Signal Flatpak is sandboxed.


> Electron means you're one XSS away from remote code execution

So, electron is the new flash. I'll be avoiding that, then.


>Electron means you're one XSS away from remote code execution, and even worse, it makes it way harder to mitigate XSS through CSP (which Signal did utilize, but script-src 'self' can easily be bypassed in Electron).

Can you explain that last part? I can't think of how an XSS attack could get around that, and Electron's documentation specifically recommends it: https://github.com/electron/electron/blob/master/docs/tutori...


There's a bit of an explanation of this in the article describing the other XSS that's recently been found in Signal[1]. Basically, since the Electron app itself runs under the file:// origin, 'self' can be bypassed with varying degrees of difficulty depending on the platform. On Windows, it's trivial because you can use UNC paths to a SMB share containing a malicious JavaScript file (i.e. file://1.2.3.4/payload.js). On other platforms, you'd need to find a way to place the file on a path accessible via file:// first, for example by sending the file via Signal itself and hoping the user accepts the download.

There are ways to lock down the CSP further to mitigate this, but no one really expects script-src 'self' to be unsafe, especially when it's what their documentation recommends.

[1]: https://ivan.barreraoro.com.ar/signal-desktop-html-tag-injec...


None of this is caused by the use of a dynamic language. It's caused by the developers using a function literally called "dangerouslySetInnerHTML" that doesn't escape HTML. That's it. It's just lazy programming.


We named it that way in React precisely to call attention to the fact that it is actually dangerous. React also properly escapes everything else it prints.

The app isn't using React but jQuery, which doesn't have those protections.


This doesn't seem to be true, here's the v1.10.0 code:

https://github.com/signalapp/Signal-Desktop/blob/f6eb745632c...

They do seem to be using react, and using dangerouslySetInnerHTML. Now that said, I haven't confirmed that this is the code that caused the issue, but it is in the Quotes component, which is referenced in the article.

They seem to have fixed this specific issue a few days ago (v1.11.0):

https://github.com/signalapp/Signal-Desktop/blob/0d00fbfb7a2...

They do seem to be using jQuery elsewhere in the code base, but I'm not familiar enough to determine how it all fits together.


Oh sorry, you are right! I was reading the comment ( https://news.ycombinator.com/item?id=17097501 ) which said that the issue was caused by $.html


They are using many frameworks mixed together like jQuery, React, and underscore.


I may be wrong, but I think the Signal desktop app was written by like interns or new developers working for Signal.

The Android app was written by Moxie, and I think it's the one about which Matthew Green said:

After reading the code, I literally discovered a line of drool running down my face. It’s really nice.

https://signal.org/


I hope Moxie learned that programming his people/team is as important as programming his software. This tarnishes Signal, regardless of how good the Android app is.


The native code (RedPhone remnants?) did not inspire confidence last I looked at it.


Naive question: Are there functions analogous to `dangerouslySetInnerHTML` in non-JS GUI libraries (e.g. Qt) that will allow a similar attack?


I mean, in C++ "=" could be called "dangerouslySetArbitraryMemoryLocation" and it would be just as accurate. In native code, even trivial operations like concatenating two strings or setting a variable can cause arbitrary code to execute.


Care to give an example?


Assuming OP was talking about overloading the '=' operator:

  struct A {
    int *p = nullptr;

    A& operator=(int i) {
      *p = i;           /* writes through p, which is still null */
      return *this;
    }
  };

  int main(void)
  {
    A a;
    a = 1;  /* boom! the overloaded operator= dereferences the null pointer */
    return 0;
  }


strcat (or, honestly, anything in string.h). strcat assumes its first argument has enough allocated space for the contents of the second argument, and that the second argument is NUL-terminated. If either of those assumptions is wrong, strcat will overwrite memory, corrupting either your heap or your stack, both of which can lead to arbitrary code execution. It's laughably easy to do, so easy that even typing the letters `strcat` into your program is forbidden in basically every C/C++ shop.


strcat is C and not C++ though.


Nah it's both. C++ was deliberately designed to be a superset of C. It's diverged a little bit, but it's mostly still the case. Or, call it `std::strcat` if you like.


Maybe, and I know this sounds crazy, secure chat clients shouldn't execute user/potential-attacker supplied code at all???


Of course they shouldn't, that is the bug, I think? The authors thought they were displaying user-supplied HTML, not executing user-supplied code.

You can say secure chat clients should not display HTML messages, but that's a pretty different thing.


Yes, the only problem is that the text markup language happens to include by default a Turing-complete, network-enabled, live-interpreted programming language, because 25 years ago someone wanted to write a funny message in the Netscape status bar.


There is plenty of secure software written in JavaScript. Poor engineering can occur in any language.


If a presentation-layer API (=HTML) provides developers with the convenience of composing UI elements by concatenating markup with remotely sourced input, and at the same time allows inline scripts to be eval'ed when merely present in particular markup attributes, and additionally sometimes hooks up un-sandboxed native APIs with full access to $HOME, it has really laid down the groundwork for a client-side can of worms that gives SQL injection and PHP evals a run for their money.

With the prevalence of XSS and CSRF vulns on the regular sandboxed web, it's pretty brave to take that model into unsandboxed fat clients..?!


It'd probably be useful to distinguish between the runtime environments in which JavaScript is typically encountered (browsers + Electron) and the language itself. If it were a language issue, a developer might believe that they could simply switch to a different language, say Rust, compile to WebAssembly, and be safe.

However, as you point out, the issue lies in the presentation layer unexpectedly executing code (or receiving inputs from unexpected and untrusted sources). This issue wouldn't be solved by switching to a different language. The core issue here isn't Javascript per se, but the dangerous runtime environments that are browsers and browser approximations (electron) that are designed to execute code from 3rd parties.


A database provides developers with the convenience of composing queries by concatenating strings. You still need to be really incompetent to do so in 2018.


Can you provide some examples?


Implying in any way that all JS programs are insecure and all other languages are secure is ridiculous and probably harmful, leading people to insecure choices in other languages. I've seen plenty of security vulnerabilities in strongly typed languages in my days as a software dev.


openpgp.js, professionally audited several times over https://openpgpjs.org/


Audited by whom? Where are their findings?


Googled it for you. From their github repo... "To date the OpenPGP.js code base has undergone two complete security audits from Cure53. The first audit's report has been published here." https://github.com/openpgpjs/openpgpjs


OpenSSL? Remember Heartbleed?


I believe parent was asking for secure software written in JS. I'm curious, too. (And examples of insecure software written in C surprise no one, do they?)


JavaScript may have many problems, but I don't really think security is one of them. In a properly isolated sandbox, such as a web browser, it's much more difficult to gain arbitrary code execution privileges than a native desktop app.


Then JavaScript is a huge problem, only made safe when wrapped in a professionally built and battle tested bubble.

OpenBSD is a famously secure unix(alike) distro. That doesn't mean that every piece of software in OpenBSD is safe to use in any other context.


So all code is a huge problem? Makes sense to me.


In security less is more.

The more we try to make encryption mainstream, the more difficult it gets, because the mainstream interacts with computers predominantly via browsers. The mainstream won't adopt something that isn't highly similar to what a browser has to offer in terms of media richness (photos, videos, HTML), so you see Signal choosing technologies like Electron, a browser, to develop their native applications. The heart of what Signal is and does well (encrypt, decrypt, authenticate) is dwarfed by a pile of code that was added to make Signal usable by the mainstream. Desktop Signal, in terms of code and complexity, is no longer a security product -- it's an application with a web-like media experience that happens to tack on a very good library to do encryption and authentication.

As we all know, sometimes vulns are in broken crypto, but most of the time they're in a gotcha beneath a mountain of code.


In general less is more, with proper abstraction, not just in security.


I don't know if this is exploitable, but they are using many different methods to escape HTML content:

https://github.com/signalapp/Signal-Desktop/blob/d1f7f5ee8c1...

Then here it's a different function:

https://github.com/signalapp/Signal-Desktop/blob/d1f7f5ee8c1...

Then sometimes they use the underscore library to do it:

https://github.com/signalapp/Signal-Desktop/blob/d1f7f5ee8c1...

Their implementation seems to use regular expressions as well.


It looks like those are 3 separate third-party libraries (Mocha, Mustache, and Backbone), so each doing HTML escaping a bit differently shouldn't be too surprising.


The first one doesn't escape single quotes or slash, but I have no idea how to get any HTML parser to treat just those as anything but text. Underscore's implementation will be correct, I'm sure.


Slash doesn't need to be encoded. Only 5 characters that have special meaning have to be encoded (&, <, >, " and ').
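For reference, a minimal escaping helper covering just those five characters might look like this (a sketch, not taken from any of the libraries linked above; note the ampersand has to be replaced first):

  function escapeHtml(s) {
    return String(s)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;')
      .replace(/'/g, '&#39;');
  }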


Which raises the best question: how would you exploit someone not escaping single quotes? I do not know. Perhaps it isn't possible.


I think escaping quotes only matters for attributes (which can use ' or "). Example:

    <img src="$url">
Exploit:

    foo.jpg" onload="alert('pwned')


Heh, found the exact bug on a live bbcode parser some 5 years ago.


It was probably written using regexps? One should do full syntax analysis instead of writing regexp hacks.


Honestly, and none of you are going to like hearing this, and the Signal people aren't going to appreciate me saying it: if you're serious about messaging securely, don't use Signal Desktop; don't use desktop secure messengers at all. Desktop applications are incredibly risky, far more so than iOS mobile apps are.


> don't use desktop secure messengers at all. Desktop applications are incredibly risky, far more so than iOS mobile apps are.

It's risky to use an open source OS. If you are serious about security, use Android or iOS. Instead of a direct SSL connection to an XMPP server, it's much safer to send all your data with Google Cloud Messaging. /s

Desktop computers are currently the most open-source, least opaque, least spyware-ridden, non-GPS-tracking, non-"microphone always listening for 'OK Google'" computers the average person has. Why do you suggest that an iPhone is a much more private device?


You have my upvote, but I imagine that tptacek means that iOS is very very well sandboxed, and has an extremely tight and well authenticated download and update system which is extremely difficult for a third party to monkey with.

This is security via centralization and trusting a benevolent capitalist dictator. As long as your personal interests are aligned with interests of the benevolent capitalist's shareholders, you should be fine.

It is my least favorite security model. But, in the case of iOS it seems to be working well (for now). My long-term hope is for a decentralized FOSS model, but for the time being, in the USA, on a multipurpose machine, the benevolent capitalist dictator beats it, especially on sandboxing and package/app authentication.


I like open source software as much as most people on HN, and have worked with it for most of my career. But help me understand how a decentralized FOSS model gets ordinary lawyers, reporters, and congressional campaign staffers the level of security that iOS does? What are the mechanisms that assure safety for users?

The closest I can come to seeing something like this work is a Chromebook, and Chromebooks are locked-down and get their security model from a central authority.


> What are the mechanisms that assure safety for users?

What are the mechanisms that assure safety for users of iOS? I understand that it's had a good track record so far, but the proprietary closed nature doesn't inherently inspire trust. Surely a decentralised FOSS model done right could be secure for lawyers &c.


As the old saying goes, "if you could have invented a secure open source desktop chat app, you would have developed a secure open source desktop chat app."

In practice, empirical results win over theoretically optimal designs.


And as the old saying continues, "...so instead you invented a proprietary one, with hidden code, and told everyone it's secure."


I think that’s unfair; the Ghost.io post from a day or so ago is a good reminder of just how much harder it is to do things when you have to make them work in a decentralized fashion. Decentralization makes everything harder.

Doesn’t mean it’s the wrong thing to do! :)


You're right and this is just the classic walled garden tradeoff - freedom for convenience. Depending who you are, this might be acceptable, and it's good that we have choices.


I'm pretty sure Signal doesn't send all your data through GCM.

Edit: https://support.signal.org/hc/en-us/articles/217524107


It's sad enough that they use GCM at all. Android apps can work perfectly well without GCM.


Signal merged support for devices without Play Services about a year ago. Is that what you're looking for?


In fact I am not looking for anything, as I am using Conversations, but any unnecessary use of (Google) services is something I consider not privacy friendly, even if it is used just as a signaling channel.


You should be far more worried that desktop apps don't require permission before eavesdropping on your conversations using the microphone (including 3rd level sub-dependencies of that NPM module you installed), than that Android or iOS is secretly recording your conversations under the guise of "ok Google" or "Hello Siri".

Regardless of the device form factor, if you don't want the OS to have access to the microphone then you have to physically disable the microphone (but if you are that paranoid you should probably live in the woods away from all electronic devices).


> Desktop applications are incredibly risky, far more so than iOS mobile apps are.

Ok, I'll play.

I get to choose 10 arbitrary apps from the Apple App store for you to install on an Iphone model of your choice.

You get to choose 10 arbitrary apps for me to install from the default Debian repos (which I believe excludes nonfree). Let's say Sid to make it interesting.

Who is going to be in worse shape after installing those apps?

Edit: typo


You will be in worse shape than I will be. It's possible, in that insane proposition, that your Debian machine will be conceding remote code execution to the whole Internet, while my phone will just have some crappy apps on the home screen.


Doesn't that answer assume some or all of the following?

a) Apple does a better job reviewing apps than Debian maintainers do.

b) iPhone app code is better quality than Debian packages.

c) iOS sandboxing is better than Linux.

Default configuration may mean c) is true. However, not if you use Wayland, AppArmor, seccomp, namespaces, etc. What do you think about a) and b)?


The question wasn't whether they could write an elaborate seccomp policy to contain any given Debian package. I just got to pick 10 of them, and install them.


The beep local root suggests the Debian review system has room to improve. It's a pretty deep barrel. You're sure there's no crud at the bottom?


Hi Ted,

Oh wow, is there ever crud at the bottom.

Yesterday I apt-get install'd probably 5 such cruds just to record a small rectangle on my desktop. TBH I did this after apt-get install'ing 3 other animated-gif related cruds to do simple motion animation, then just gave up and used the half-baked Web Animations API in devtools of Firefox because it was easier and better documented than anything else I could find.

That's 8 total cruds written by who-knows, maintained by whoever, audited probably-never by no-one. Also, they pulled in various dependencies I didn't pay the faintest attention to.

How many of those 8 apps would you estimate sent my email contacts to a third party upon instantiation?

How many of those 8 apps would you estimate gathered various pieces of data to fingerprint my device? How many keep gathering data from every sensor source they can poll every time I run and use the app?

How many of those 8 apps would you estimate even touched the network at all?

Now let's suppose I download 8 cruds on iOS just as mindlessly as I did here. Do you think the answers to those questions will be different?


You should try to think of some specific problems that can arise in each environment instead of conducting some sort of weird Socratic dialogue about imaginary apps and a seemingly made-up iOS.


How is the beep local root different from jailbreaks in the past? Seems both are local privilege exploits, and I recall that the iPhone has had a long list of those in the past.


beepmargeddon is a local privilege exploit, not an RCE.

I'm not aware of any Debian package (I don't use Debian that often though, so mind that) that A) installs a network service, B) uses unsafe defaults, and C) activates the service on boot by default.


Mobile OSes have much better isolation between apps than Debian has. In the default Debian configuration, any app can access all of your data, microphone, webcam, Internet, GPS sensors, etc. While the maintainers do a good job reviewing all of the software, having isolation at the OS level provides better security.

For example, Chrome has process sandboxing. It might seem unnecessary because the code is written by highly professional developers, but it helps make Chrome the most difficult browser to exploit. I am sure that if Debian could adopt something similar to Android's permission system, it would make it even more secure.


Care to explain? I mean, in principle. I distrust Signal Desktop, whether it's built on Chrome or on Electron, because either of those "platforms" are more complex than my OS (Debian GNU/Linux). But you seem to be making a more general point... what's the reasoning?


No matter what Signal does with Desktop, it will remain a standard desktop application, meaning it will in general be as secure as the least secure application sharing that desktop.


Is there currently any desktop application delivery/sandboxing mechanism that has any hope of changing this situation in the future?


Something like Qubes is the only way we'll ever get decent security on the desktop, I'd say.


Wayland, the xdg-apps sandboxing (now FlatPak), containers in general, etc.


If all your software comes from the App Store and is properly sandboxed, it's possible for this to approach the security of iOS. Of course, many people will install unsandboxed software, making this extremely difficult to actually achieve, but it's at least possible.


What's a desktop and what does it mean to share it?

My applications share an X11 display, and if I'm not mistaken, they are pretty well isolated from each other. (Are you referring to MicroSoft Windows[TM] by any chance? Yeah, that's different.)


> My applications share an X11 display, and if I'm not mistaken, they are pretty well isolated from each other.

You are probably mistaken.

https://github.com/esonn/x11log

http://blog.martin-graesslin.com/blog/2015/01/why-screen-loc...

You might want to look at qubesos :

https://www.qubes-os.org/doc/gui/


I dunno if it's still true, but it used to be that every process running in your X11 session had access to every keystroke from every other application. Like, say, every password you type into a terminal window or browser... If it _is_ still true (and I've noticed a bit of recent discussion asking why Ubuntu 18 LTS is switching back from Wayland to X11, which hints to me that it might be), then your apps are _very_ much not "pretty well isolated"...


What are you using for X security context isolation? I've been wanting a good solution for that one for a while now, and the end of my list is still “write my own isolating proxy” since I never found a good one.


See: https://www.qubes-os.org/doc/gui/

(they have essentially written 2500 lines of c that acts as a proxy of sorts)


Nothing... but can X11 applications actually steal data from each other? I wouldn't know how to do that, but I know precious little about the X11 protocol. (Screen shots are an awkward option, I guess.)

If I was worried enough (I'm not), I could use multiple logins. Two different X servers under different users would be completely isolated.


Even using two X servers under two logins, the fact is that desktop apps by default have much more freedom and therefore much more attack surface. Even the kernel syscalls alone are not safe; exploits that elevate processes to root are not rare. You'd need to lock them down much more than just using different users.

By the way, they may have a history, but nowadays I wouldn't bet on the security of the average distro vs Windows, and I say this as a dyed-in-the-wool Debian user. For example, Windows' desktop environment does have protections against cross-application attacks, unlike X.


Yeah, I definitely don't like hearing that, because phones can fuck right off. That's not my computer, that's someone else's computer that they're letting me use.

I'm not going to get a smartphone just to use Signal. I'll use my spare laptop instead, that I already have, and just run Signal on it. It's not going to get compromised by Boogeymen From The Scary Browser Tab because I won't be running a browser on there.


I think that is a fine plan for the "people who have spare laptops they use to run single programs" people.


Well, a spare computer is cheaper than a spare phone. You can get those suckers free on the curb sometimes.

Anyway, I don't think we should give up on user-controlled platforms so readily, even if there's a slight short-term benefit to privacy or security.


This is easily proven false by simply looking at professionals who, by necessity or regulation, require a strictly secure environment.

How common is it that the military uses iPhone apps for classified information or to interface with military equipment? Do operators in power plants or other sensitive infrastructure use iPhone apps as an interface to their systems? When security is a primary objective, having it hooked up with a third party that continuously collects information you have no control over sounds like a very bad security recommendation, especially when it has independent GPS, GSM, and network capability which bypasses normal network security tools.

If Signal crashes and a crash report is generated, who gets access to it on an iPhone? Who might get a potential memory dump with plaintext? If that party is Apple, I would strongly recommend first being okay with the idea that Apple has access to the plaintext before using such an app.


If your standard for operational security is the military, I have very bad news for you. Or good news, if your opsec goal is "be way better than the US Military" (that news is: you are already way better than the US Military).

The military obtains what security it has by attempting complete segregation and isolation; because it's the USG, the world's largest IT department, there are "public" and "private" networks, both clones of each other, both running the same insecure software. Both public and "secure" networks have been owned up comprehensively by malware in the past.

To get a sense of how bad the situation is, go look at the Common Criteria EAL vendor list, note which vendors have obtained EAL4 certification, and then compare to the security track records of those versions. That'll give you the spirit of the situation without requiring you to actually endure an EAL validation, which is something I have had the misfortune of participating in.


This may or may not be true, but in a lot of cases where you need encryption, you also need not to have a GPS tracker on you while you're using it. You have (at least slightly) more chance of being anonymous with a dedicated laptop computer than you have with any smartphone.


Ignoring the recent LocationSmart revelations, if you disable location access you should basically have the same level of geospatial anonymity.


> Ignoring the recent LocationSmart revelations

You can't, though; that's kind of the point.


There's a "herd immunity" component - if you're in a group of 10 000 people with GPS tracking on - your position might be possible to guess quite precisely based on meta data like IP, network latency etc - that can be compared across a large population.


All somebody needs to find your location is a list of SSIDs and MAC addresses within range of you.


And we're back at "really good security is too inconvenient to use" :/


This is a rare case where things are inconvenient for nerds to use, but more convenient for ordinary people, who tend to be more comfortable doing stuff on mobile platforms than nerds are.


Is that also true for professional use? For private use/relatively few threads I'd agree with that, the insistence on IM on desktop is a niche, but I'd think most people using chat a lot professionally (e.g. Slack) do not use it mobile-only. And if you'd want them to shift communication away from e-mail, it gets even more important.

(as p49k alluded to, iPads help, but few professional users have them as the primary device)


To be fair, Apple has done a great job at making a device that does both without much compromise.


Desktop applications are incredibly risky, yes; as for iOS mobile apps we can't even know, as these devices don't allow auditing what software is running on them.

PGP has many problems and I hope a better replacement will come along, but the first step of secure messaging can't be using devices with closed, unauditable software...


A reason you are getting downvotes is that it’s not true that closed source software is unauditable.

This is a common but untrue belief. It's a fundamental axiom of software security that you can't trust source, so you must diagnose the binary.

Source may be helpful, but in the grand scheme of things lots of other properties are more important.


> It’s a fundamental axciom of software security that you can’t trust source, so you must diagnose the binary.

What if you trust the build tool chain and can reproduce the binary from source?


That’s functionally harder in most cases than just inspecting the binary.


They down-vote you, but you are 100% right. Using a mobile phone is intrinsically more insecure because they can track you much more easily, and you have much less control over the OS.


I vaguely remember someone on HN (or maybe some other forum) making loud endorsements of Signal over any other encrypted chat app. Even a statement like "9 out of 10 cryptographers would recommend Signal". What's the difference between Signal Desktop and Signal?


Signal Desktop is one of several clients for Signal Protocol; the most common client is --- I believe, but am not sure, but have good reason to believe --- either the Android or iOS mobile client, neither of which is a Javascript application.

We've recommended the mobile versions of Signal for a long time (see, for instance, the Tech Solidarity security resources, which haven't changed in a year), and everyone still recommends Signal Protocol. I think we all should have been noisier about the security limitations of the desktop app environment. And about Electron.


I was always uncomfortable about Signal on non-iOS devices. I feel the same about password managers too; it's a giant PITA, but I specifically do not want all my passwords in one place if that place is a wild desktop.


A properly sandboxed desktop application is no more dangerous than an iOS app. Of course, Chrome doesn't work in that sandbox, and they're using Electron, so this doesn't quite work.


>Desktop applications are incredibly risky

Oh, so just use your phone that has 100 background crapware apps running and a hidden baseband OS running under the parent OS/UI?


Remember that time another app (and the baseband!) compromised Signal on iOS?


FWIW, the sort of people with access/leverage to be able to compromise a device through the baseband probably don't leave behind traces revealing it happened. They just drop hints to the local cops that they ought to find a reason to pull you over for a traffic stop and coincidentally smell pot smoke to give them probable cause to search your car...

(waves at the NSA guys...)

I _hope_ that sort of capability is still a year or two away from guys with an Ettus USRP and a bunch of open source software and hacking tools glued together with Python... But keep your eyes on DefCon and CCC to be sure...


They probably don't leave traces because compromising a modern Apple device through the baseband would be quite a trick, given that it's an independent peripheral connected to the AP over on-chip USB.


That's good to know (and for almost anybody else I'd add "citation needed"...)

Didn't the iPhone baseband processor at some time in the past have DMA? I vaguely recall a Usenix paper, perhaps, that seemed to claim any phone that had a software unlock where you could disable the carrier locking was almost certainly using DMA connections between the baseband and AP. Any hints or links or search terms that would show me how modern an iPhone needs to be to be "safe" from that?


I don't know what the first iPhone to have an HSIC baseband was, but it has been awhile. I assume every iPhone anyone is really using today fits the description I gave. The iPhone 4 does. This is a really basic security design concern for mobile devices; you can assume that neither Apple nor Google (for their own Google-branded phones) ships products where a corrupted baseband can simply DMA its way into the AP. It is a little weird to me that people on message boards assume they've outguessed the hardware security teams at both Apple and Google on one of the most obvious attack vectors for their phone designs; both companies spend huge amounts of money on this stuff.


"It is a little weird to me that people on message boards assume they've outguessed the hardware security teams at both Apple and Google on one of the most obvious attack vectors for their phone designs; both companies spend huge amounts of money on this stuff."

For what it's worth, that isn't the assumption people are making. The easy assumption to make is that the security teams were unable to convince product owners at these companies that the extra expense of solving this was worth their investment. Especially because practical baseband attacks still haven't hit them as an issue, and it's very rare for product owners to take on major changes or invest in security for "theoretical" threats.

Just look at all of the services still allowing SMS for 2FA - it's not because the security team doesn't know that is an insane thing to be doing in 2018.


No, it's because end-users don't adopt 2FA via TOTP and, to a first approximation, nobody uses U2F. It's not a corner security teams are cutting. Microsoft's security team makes the same decisions.


Microsoft doesn’t have “a” security team. They have dozens of security-ish teams. And I can assure you that none of them made that decision. The product teams did. And security people there disagree with it. You likely think security teams at BigCorp have more teeth than they do. With a few exceptions, Microsoft is known for hiring well qualified security people and then giving them no authority to do much until something is already on fire.

Search for a tweet by Kostya expressing surprise and delight at the fact that they actually fixed an internal find a couple years ago. Or grab a beer with the nearest ex-microsoft security person and have them tell you stories about servicing internally discovered vulns.

You’re right that the security team aren’t cutting corners. Because that isn’t how things work.


it's not because the security team doesn't know

It's also not because they don't think it's 'worth the investment' or due to extra expense.


No, it's cheaper for them to just replace the person you're talking to with a Cylon.


Some may argue this has already happened - to at least half their social circle. (Not me though, I consider myself "recreationally paranoid" rather than "raving looney paranoid" - other people's opinions on that probably differ...)


No?


I think he forgot a /s. (Seriously).


You people!


Dang it.


Signal runs just fine on an iPod Touch (after a little fussing around getting it set up with a phone number...)

If you're paranoid enough, it's easy enough to avoid installing things that're likely to be crapware on your secure comms device.

Apart from Signal, the only other non-iOS-supplied apps I have installed on my iPod are a bitcoin wallet and Onion Browser - both of which I angst a little about, since they're both in the first category of app I'd attempt to subvert if I were a nation state actor, or a blackhat looking to steal bitcoin from people least likely to report it to authorities...


If you're willing to adopt the "run a dedicated device for crypto" approach, something like Tails running on a USB-key-like device gives you the same kind of security position without having to trust Apple.


Yeah, I do that too (and an offline wallet on a RasPi which has never been internet connected) - but for secure messaging and for some bitcoin transactions, I want that device in my pocket. I treat the iPod cryptocurrency wallet like, well, a wallet - containing amounts I feel comfortable carrying around (like the couple of hundred in cash I might have in my wallet at any time). I "trust" Apple enough to store that. I wouldn't treat the iPod as a bank, storing significant or lifechanging amounts of value.


What about qubesOS?


Not mainstream, and there must be a reason why. I guess ease of use, though I don't know.


Its popularity is irrelevant to how secure it is.


Wasn't this domain imitating the actual Hacker News banned years ago?

Plus, I think they violate rules because this is just blog spam.

The actual source of the story is: https://ivan.barreraoro.com.ar/signal-desktop-html-tag-injec...


I don't see anything imitating this website.. other than a technology based news feed and a similarly used name.

I used to browse a website called "hacker news" back in the late 90s / early 2000s, but I wouldn't go as far as to call News YC a copy of that.


I was referring to the fact of imitating the HN brand by capitalizing on the domain name so they could scoop up all the traffic and SEO love. That's why the domain was banned to begin with a few years back. There was a whole discussion about it.


The site that I was referring to was literally hackernews.com, and was very popular among the tech crowd from the late 90s onward. (long before YC was conceived)

Almost 20 years ago, I used to rotate between hacker news, fark, and slashdot to get my daily dose of internet.

https://web.archive.org/web/*/hackernews.com

One would be perfectly justified in also trying to claim that the name here was stolen from the original.

.. but sometimes, just because things share a common name, does not necessarily mean they are related.

https://en.wikipedia.org/wiki/Post_hoc_ergo_propter_hoc


Not that one. That's owned by Space Rogue. I'm talking about the one linked now, owned by some Indians who keep copying articles off other sites. There was a reason this got banned years ago. At one point you could trace articles from The Register and Motherboard paragraph by paragraph to their stories, but with bad grammar and bad sentence structure.


On their Android app, the first thing it makes you do is give them permission to read your SMSs. It won't let you verify by entering a code. I immediately uninstalled - doesn't seem like a privacy-focussed organisation to me.


> It won't let you verify by entering a code

That's odd because I've absolutely verified numerous Signal clients by entering a code, without granting access to my SMS (I use Google Voice so my SMS database is basically empty except for random spam from my shitty carrier)


It used to be that Signal required reading the code from SMS and didn't work with Google Voice. But they listened to complaints and changed it to allow entering the code.


That sounds absolutely horrendous. Even Whatsapp allows you to verify using a fixed line and claim that number on the mobile for privacy.

Coupled with the recent LocationSmart revelations, it would make Signal unusable for those who wish to keep their location private. You absolutely need to provide the mobile number of the actual terminal being used.


Not true. I have Signal running in an iPod Touch.

I needed to give them a phone number I could read SMS from to set it up, but there's no need for that to be "the mobile number of the actual terminal being used".


At least on iOS, apps cannot get the phone number of the device through any APIs in the iOS SDK (AFAIK). So no app on the platform can reliably confirm if the number you entered is the number of the device. They all assume that to be true when you confirm the code they send by SMS. As another reply here has put it, you just need a device to receive the code via SMS. You can then use that code on any other device to set it up.


That seems odd. Perhaps it's to do with the fact that the app can act as your primary SMS app as well (I use it like this). But still, it should be available as an option.


Yep. I've complained about this so much.


What is this website?

"The Hacker News"? And no actual relation to HN? This website doesn't even have an about page...


This is BLASPHEMY!!


From TFA:

"...the new vulnerability (CVE-2018-11101) exists in a different function that handles the validation of quoted messages, i.e., quoting a previous message in a reply.

"In other words, to exploit the newly patched bug on vulnerable versions of Signal desktop app, all an attacker needs to do is send a malicious HTML/javascript code as a message to the victim, and then quote/reply to that same message with any random text.

"If the victim receives this quoted message containing the malicious payload on its vulnerable Signal desktop app, it will automatically execute the payload, without requiring any user interaction."

Is it the case that you don't even need to have the attacker's number in your contacts list?


No, that's incorrect. You have to have someone's number to send a message to them.


This news saddens me. I’ve been the last user of the Signal desktop app around me and it looks like I have been too optimistic about Electron. I’ve now deleted any Electron app and recommend everyone to do the same.


It's mind-boggling that a messaging app is 181 MB.


I'm really starting to get tired of all these bloated JavaScript desktop apps. I get that it's more convenient for developing cross-platform apps with modern looking UIs, but I really wish there would be an increased focus on reducing the overall bloat and resource use, both among app and framework devs.

Speaking as a Windows user, I would vastly prefer a well-designed native application (WinForms/WPF) over a JS monstrosity any day.


Pretty mindblowing that Signal allows things like `dangerouslySetInnerHTML` in any of their apps. A simple linter would have caught this.


With such an obviously "DON'T USE THIS" method name as dangerouslySetInnerHTML, I'd expect that we'd see something like // eslint-disable-next-line above it.


Interesting; more or less, nothing is 100% secure. Looks like the DEA had cracked whatever crypto BlackBerry was using, and quite a few drug dealers were caught that way (one example: https://www.thedailybeast.com/the-deas-dirty-cop-who-tipped-... ). They must have been using it because of the reputation BB had. I wonder what we will find out in time about the narcos, terrorists, etc. using Signal.


Someone invited me to use signal. I thought "It's a trap!"


It's a pun on inter-process communication signals and traps in UNIX.

https://www.tutorialspoint.com/unix/unix-signals-traps.htm


Is the chrome extension also vulnerable to this?


No, it is not.


Anyone else have an aesthetic feeling about this? Signal Desktop felt clunky to such a degree that it takes away from trust; Telegram feels equally secure, even though it is not.


Why is this flagged?


I had the same question; the title seemed generous considering this was technically an RCE exploit.

EDIT It also appears lots of comments just got hit with a wave of downvotes. It's possible there is some brigading or vote manipulation.


I usually have a "vouch" button but I can't find it - maybe I don't have it on posts and only on comments?


Same question.

Also, is there somewhere we can see why a thread is flagged?


Unfortunately there's no way to know why users flag something, but that brings up an interesting idea: in order to flag something, require the user to type out a reason, and if an article is flagged it could show why people flagged it.


Because the discussion is already a flame war. I feel like everybody has a solution. Try not to get burned: JS is bad! Ah sorry, Electron is terrible! No, innerHTML is a sin!


I thought we already accepted that JS is bad. It is objectively a bad language, with the weak typing, quirky edge cases and a hundred bad libraries, and some backdoored ones available on npm as well.

We've also accepted that Electron is bad, we don't need native applications that bundle Chrome so they can run JavaScript. It's a bloated memory haemorrhaging mess pretending to be a cross platform framework.

And if that wasn't bad enough, we have JavaScript developers who know nothing about security, writing a supposedly "secure" desktop app where a XSS is now a RCE.

We've come such a long way that browsers can now almost securely execute JavaScript, and you're telling me that a 2018 "secure" chat messaging desktop application has an RCE from an XSS in the messaging feature. It sends and receives text to other people, for god's sake. Myspace wasn't even this bad, and it didn't pretend it was secure.

It is absolute lunacy. I won't be surprised that the Signal desktop brand is completely ruined by all this fallout because these are just rookie mistakes compounded on rookie mistakes.


Is there a native Signal client that isn’t an Electron abomination?

It is clear at this point that the Signal desktop people have no idea what they are doing and cannot be trusted to write a secure desktop application.


Nope. But you can try Threema or Wire. They're both pretty good.


You can run it as a chrome extension, which I do, in a dedicated VM.


Just go XMPP with OMEMO or Matrix.


When will people start using plain old PGP — a tool that does one thing only, and does it right? Sure, it's a little harder than using just one tool that handles contacts, communication, formatting, and encryption, while making popcorn and walking the dog, but it works, and it's secure if you use it right.

Our efforts to make encryption easy are going to get someone killed.


Surely this is sarcasm. Just in case it isn't, it's only fitting to link back to what Moxie Marlinspike wrote about PGP/GPG: https://moxie.org/blog/gpg-and-me/ (HN commentary: https://news.ycombinator.com/item?id=9104188 ).

TL;DR: When will people start using gpg: they won't.


After reading that article I want to use PGP just so I never run the risk of interacting with people like Moxie.


I think this sentiment explains half of the negative reactions I’ve seen towards Moxie over the years. I guess it pays to be likeable.


> a tool that does one thing only, and does it right?

It does multiple things (signing messages, encrypting messages, signing and encrypting messages, signing other people's keys, publishing keys, downloading keys, finding trust paths between keys, publishing your contact list to the world, publishing information about when you met certain people, displaying photos of people, revoking keys, symmetrically encrypting files with a password), and it does none of those things right.

In particular, it unambiguously does authenticated encryption wrong (streaming decrypt, then authenticate), which was one of the root causes of the EFAIL vulnerability.


If you can find 3 crypto engineers who aren’t PGP maintainers that agree with this I might start using PGP for secure comms.



The PGP vulnerability is actually in e-mail clients, and it affects almost nobody. And how often do PGP vulnerabilities happen?

Signal got two vulnerabilities that affect everyone JUST THIS WEEK.


By "almost nobody", you mean everyone who used Apple Mail/GPGTools and Thunderbird/Enigma, meaning, the vast majority of everybody who used PGP?


Are GPGMail and Enigmail really more popular than other extensions/clients? Genuinely asking, since I am not aware of any study on PGP usage.


Enigmail 2 was not affected.


I'm not sure what you're trying to say here, but Enigmail 2 was released a few months after the researchers disclosed the vulnerability to the project[1], so it would've been a rather sad state of affairs if the release hadn't included a fix. That's not to say that everything's fine now for users of Thunderbird and Enigmail[2].

[1]: https://sourceforge.net/p/enigmail/bugs/721/

[2]: https://twitter.com/hanno/status/997138771194859521


Thunderbird does not download remote content by default.

I don't know anybody who is using Apple Mail with GPG, but if there are such people, they have been doing it very wrong regardless of this vulnerability. It's an unsafe combination.

I have no statistics on what people use PGP with, but asserting that most people use it with Apple Mail and Thunderbird is baseless and without proof.


> Thunderbird does not download remote content by default.

The researchers behind EFAIL found a number of ways to bypass the remote content setting. Not only that, but Hanno Böck found another one today[1] that hasn't been fixed yet.

[1]: https://twitter.com/hanno/status/997138771194859521


Why would Apple Mail and PGP be an unsafe combination by default?


The Signal vulnerabilities did not affect the mobile apps, only the desktop app.


Literally not PGP, but clients built around it to make encrypted email easy. Read the article you linked.

EDIT: typo.


Ah, so don't use the now secured opensource client using Signal's protocol. We should use PGP with all the weak yet-to-be-patched clients. Cause it's not PGP which got hacked it was the client. Very different from how the Signal client got hacked not their protocol. /s


No, I'm saying use just PGP - manually - and don't use any client interface to it. Control the encryption yourself. Your sarcasm is misplaced.


If I accept that using PGP manually is an acceptable way to get widespread encryption, then you're right. But the fact is that tech people rarely use it because it's so cumbersome; the chance of normal users using it successfully, let alone using the terminal these days, is so slim that it's basically absurd to think that is the way we're gonna get widespread encryption.

I don't want to only talk to my tech friends. My best friend runs a small business doing nothing to do with tech and when I talk to him I want to be sure no one else can read/listen to what we're saying because it's private. There is no version where my friend learns to properly use PGP consistently so we can talk that way. The only reasonable way is WhatsApp or Signal.


Given the absent security of desktop Linux, and the dreadful opsec of its users, as revealed by many cryptocurrency wallet thefts, I wouldn't place much trust in persistent PGP secrecy unless all participants used Heads, and renewed keys regularly, and ran nothing except gpg. How many people do that, in the entire world, do you think? A hundred, at best?


It relies on several features of PGP that mean that messages aren't tamper-proof. If PGP made sure messages replayed and altered by attackers would not decrypt, then the attacks wouldn't work- you couldn't replay an email back to someone to steal it.

It relies on a client having an HTML renderer, but the underlying issue is messages that can be tampered with.


It also relies on clients misusing an API call. And then passing the result to a full HTML viewer that can access the internet. So, again, not an exploit if you use plain PGP.


I don't know anyone who uses PGP/GPG to communicate securely, and I'm a cryptographer!



