we were able to compile a list of strategic defense-in-depth recommendations for Signal Desktop which we’ve sent to the Signal security team per their request. At the end of the day there will always be new “hot” vulnerabilities, but the “vendor” response is generally what separates the wheat from the chaff. The Signal team’s quick patch time along with a strong interest in mitigating vulnerabilities of this type in the future was encouraging to see. I’ll remain a Signal user for the foreseeable future :)
>Researchers—Iván Ariel Barrera Oro, Alfredo Ortega, Juliano Rizzo, and Matt Bryant—responsibly reported the vulnerability to Signal, and its developers have patched the vulnerability with the release of Signal desktop version 1.11.0 for Windows, macOS, and Linux users.
>However, The Hacker News has learned that Signal developers had already identified this issue as part of a comprehensive fix to the first vulnerability before the researchers found it and reported it.
>Signal app has an auto-update mechanism, so most users must have the update already installed. You can read this guide to check whether you are running an updated version of Signal.
Seems everything is patched, and was already going to be patched before the vuln was reported.
Maybe secure chat clients shouldn't be written in JavaScript or other languages that have excessive dynamicness? Signal seems to be written mostly in languages that are bad for security (significantly worse than the best alternatives). Maybe I'm just a language nerd without any clue about the trade-offs, but I trust the Wire software more. Note that this just applies to mobile clients and server - Wire, like Signal, chose to build their desktop+webapp in JavaScript :(
As much as I'm not a fan of JavaScript, the problem is not so much the language but rather the choice of Electron and all that comes with it. Heck, even a web version or Chrome app would've successfully mitigated these attacks. Electron means you're one XSS away from remote code execution, and even worse, it makes it way harder to mitigate XSS through CSP (which Signal did utilize, but script-src 'self' can easily be bypassed in Electron).
FWIW, Signal's native mobile apps are written in Java and Objective-C respectively, so there's not really much of a difference compared to Wire (which is a good choice as well). Still, even a hypothetical React Native app written in JavaScript wouldn't be much worse; after all, React Native isn't just a Web View made to look like a native app, but uses actual native components.
While I agree that Electron offers a massive amount of footguns, neither JavaScript nor Electron was the issue in this case.
The issue was using innerHTML (or rather $.html()) with strings concatenated together from user input. Something you should never do. Could as well just call eval() directly on it, or pass the input to gcc, compile it and run the resulting binary.
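To illustrate the difference, roughly (a made-up snippet, not Signal's actual code; the selector and message.body field are hypothetical):

    // Made-up snippet, not Signal's actual code.
    // Vulnerable: the user-controlled body is parsed as HTML, so markup like
    // <img src=x onerror=...> inside a message runs as script.
    $('#quote').html('<div class="quote-text">' + message.body + '</div>');

    // Safer: insert the user-controlled part as text, so the browser treats it
    // as data instead of markup.
    $('#quote').append($('<div class="quote-text"></div>').text(message.body));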
To be honest, I'd lay more blame on the authors of the DOM spec for making innerHTML a setter than on Electron, and on jQuery for exposing this misfeature even more with $.html(), teaching an army of web developers to do the wrong thing. We've all seen numerous (XSS) vulnerabilities in all kinds of websites, browser extensions, Electron apps, etc. resulting from this API, though in Electron apps it gets particularly devastating: often you get not just code execution in a sandboxed website but full code execution with the current user's credentials on the system.
(Meaning they actually changed a line that had "<script>alert('evil')</script>" and didn't notice.)
I have seen this before though, with some folks removing path sanitisation code I added several years prior to fix a CVE. So it's not uncommon (it also got merged, so when I found out and fixed it I added a very large and scary comment to stop people from doing it again).
>The Signal devs thought $.html() does some kind of escaping:
Uhm... that's a really rookie mistake to make. Like, one of the very basics of jQuery usage. I'm not exactly sure what to think about it after seeing this commit you linked...
The worst part is that someone assumed something then removed the code that did the escaping without even doing the most basic of tests, like even in the browser just doing a quick foo.html('<script>alert("oh snap this is bad")</script>')
Even worse than that, it looks like there _were_ unit tests to check that input was correctly sanitized, but the patch that introduced the bug also explicitly changed the unit tests to ensure that input was _not_ being escaped!
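The regression test for this is tiny, too. Something along these lines (hypothetical, assuming a renderQuote helper and Node's assert; not Signal's actual suite) would have flagged the change:

    // Hypothetical Mocha-style test, not Signal's actual suite.
    it('escapes HTML in quoted message bodies', function () {
      var rendered = renderQuote({ text: '<script>alert("evil")</script>' });
      assert.ok(rendered.indexOf('<script>') === -1,
        'message text must be escaped before being inserted into the DOM');
    });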
A mistake that seems like it could've been caught in a code review!
These are the kinds of things that make tinfoil hats wonder if any agencies interested in subverting encryption "plant" employees. It's difficult to differentiate stupidity from malice. We know they use shell companies and subvert device manufacturers and standards, so it's not unfathomable.
Maybe he did and it didn't work for some reason and he thought it was good. Shit happens. I think the most worrying part is not having been caught in code review. Signal does code review, right?
I don't know if this is correct, but, I once got the impression that Signal Desktop was under the sole purview of a new hire at OWS. In other words, Moxie doesn't review the commits. I hope I'm wrong, but even if I'm not, I suppose it makes no difference, as he's arguably responsible either way.
So to be clear, a lot of the blame definitely belongs in the "all that comes with it" bucket here, which is one of the reasons why you should think twice about developing desktop apps using a platform that forces you to deal with not only the usual desktop app security concerns, but also all the things that make web apps vulnerable.
Still, when you ship an app with a relatively strict Content Security Policy as Signal did (including using script-src 'self'), you don't really expect a simple XSS vulnerability to lead to RCE, but it turns out that policy doesn't really do much in an Electron app.
> The issue was using innerHTML (or rather $.html()) with strings concatenated together from user input.
> The Signal devs thought $.html() does some kind of escaping
I mean, it does do a kind of escaping. If you assign javascript to innerHTML directly, it won't execute. jQuery specifically checks whether you're adding a script tag, and if so, it takes the extra step to execute it for you.
You mean, the innerHTML of a <script> element. Which isn’t really a thing, because the inside of a <script> tag is a document boundary—assigning raw Javascript to innerText or innerHTML directly would make no sense in either case. You need to wrap your Javascript in a CDATA node ;)
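Setting the CDATA quibble aside, the practical difference is easy to see in devtools (quick sketch, assuming jQuery is loaded on the page):

    var el = document.createElement('div');

    // Per the HTML parsing rules, a <script> inserted via innerHTML is inert...
    el.innerHTML = '<script>alert(1)<\/script>';      // no alert

    // ...but event-handler attributes are not, so innerHTML is still unsafe:
    el.innerHTML = '<img src=x onerror="alert(2)">';  // alert fires

    // jQuery's .html() goes further and evaluates <script> tags for you:
    $(el).html('<script>alert(3)<\/script>');         // alert fires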
> The issue was using innerHTML (or rather $.html()) with strings concatenated together from user input. Something you should never do. Could as well just call eval() directly on it, or pass the input to gcc, compile it and run the resulting binary.
Yes, but most engineers would look at that last element and say "what on earth is going on here", where $.html() being dangerous is something that engineers who don't usually work on web might not know about. You're right about blaming the DOM spec, but there's no actual reason for signal desktop to interact with that poorly designed spec except that they chose electron as a framework.
Sure, you can write secure apps in Electron, just like you can do risky stuff in real-life and be fine most of the time. But why take the risk?
Had the app been written with a native language and SDK, they wouldn't need to worry about escaping or anything. I have yet to hear about getting remote code execution from dumping text into a UILabel or similar, while XSS happens almost every day.
>Electron means you're one XSS away from remote code execution, and even worse, it makes it way harder to mitigate XSS through CSP (which Signal did utilize, but script-src 'self' can easily be bypassed in Electron).
There's a bit of an explanation of this in the article describing the other XSS that's recently been found in Signal[1]. Basically, since the Electron app itself runs under the file:// origin, 'self' can be bypassed with varying degrees of difficulty depending on the platform. On Windows, it's trivial because you can use UNC paths to a SMB share containing a malicious JavaScript file (i.e. file://1.2.3.4/payload.js). On other platforms, you'd need to find a way to place the file on a path accessible via file:// first, for example by sending the file via Signal itself and hoping the user accepts the download.
There are ways to lock down the CSP further to mitigate this, but no one really expects script-src 'self' to be unsafe, especially when it's what their documentation recommends.
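For what it's worth, the heavier hammer on the Electron side is keeping Node out of the renderer entirely. A minimal sketch of the relevant BrowserWindow options (not Signal's actual configuration, and how feasible it is depends on the app):

    // Minimal Electron hardening sketch; not Signal's actual configuration.
    const path = require('path');
    const { BrowserWindow } = require('electron');

    const win = new BrowserWindow({
      webPreferences: {
        nodeIntegration: false,  // renderer can't require('child_process') etc.
        contextIsolation: true,  // preload globals are isolated from page scripts
        sandbox: true,           // renderer runs inside Chromium's OS sandbox
        preload: path.join(__dirname, 'preload.js'),  // expose only a narrow API
      },
    });

With Node kept out of the renderer, an XSS is still an XSS, but it no longer hands the attacker a shell running as the desktop user.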
None of this is caused by the use of a dynamic language. It's caused by the developers using a function literally called "dangerouslySetInnerHTML" that doesn't escape HTML. That's it. It's just lazy programming.
We called it that in React precisely to call attention to the fact that it was actually dangerous. React also properly escapes everything else it prints.
The app isn't using React but jQuery, which doesn't have those protections.
They do seem to be using react, and using dangerouslySetInnerHTML. Now that said, I haven't confirmed that this is the code that caused the issue, but it is in the Quotes component, which is referenced in the article.
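If that's the spot, the risky pattern and the boring fix would look roughly like this (a hypothetical reconstruction, not the actual Signal source):

    // Hypothetical reconstruction, not the actual Signal source.
    import React from 'react';

    // Risky: the quoted text is injected as raw HTML.
    const UnsafeQuote = ({ text }) => (
      <div className="quote" dangerouslySetInnerHTML={{ __html: text }} />
    );

    // Boring fix: default React rendering escapes the text for you.
    const Quote = ({ text }) => <div className="quote">{text}</div>;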
They seem to have fixed this specific issue a few days ago (v1.11.0):
I hope Moxie learned that programming his people/team is as important as programming his software. This tarnishes Signal, regardless of how good the Android app is.
I mean, in C++ "=" could be called "dangerouslySetArbitraryMemoryLocation" and it would be just as accurate. In native code, even trivial operations like concatenating two strings or setting a variable can cause arbitrary code to execute.
strcat (or, honestly, anything in string.h). strcat assumes its first argument has enough allocated space to hold its existing contents plus the contents of the 2nd argument, and that both arguments are NUL-terminated. If either of those assumptions is wrong, strcat will overwrite memory, corrupting either your heap or your stack, both of which can lead to arbitrary code execution. It's laughably easy to do, so easy that even typing the letters `strcat` into your program is forbidden in basically every C/C++ shop.
Nah it's both. C++ was deliberately designed to be a superset of C. It's diverged a little bit, but it's mostly still the case. Or, call it `std::strcat` if you like.
Yes the only problem is that the text markup language happens to include by default a Turing complete network-enabled live-interpreted programming language because 25 years ago someone wanted to write a funny message in the Netscape status bar.
If a presentation layer API (=HTML) provides developers with the convenience of composing UI elements by concatenating markup with remotely sourced input, and at the same time allows inline scripts to be eval'ed when merely present in particular markup attributes, and additionally sometimes hooks up un-sandboxed native APIs with full access to $HOME, it has really laid the groundwork for a client-side can of worms that gives SQL injection and PHP evals a run for their money.
With the prevalence of XSS and CSRF vulns on the regular sandboxed web, it's pretty brave to take that model into unsandboxed fat clients..?!
It'd probably be useful to distinguish between the runtime environments in which JavaScript is typically encountered (browsers + Electron) and the language itself. If it were a language issue, a developer might believe that they could simply switch to a different language, say Rust, compile to WebAssembly, and be safe.
However, as you point out, the issue lies in the presentation layer unexpectedly executing code (or receiving inputs from unexpected and untrusted sources). This issue wouldn't be solved by switching to a different language. The core issue here isn't Javascript per se, but the dangerous runtime environments that are browsers and browser approximations (electron) that are designed to execute code from 3rd parties.
A database provides developers with the convenience of composing queries by concatenating strings. You still need to be really incompetent to do so in 2018.
Implying in any way that all JS programs are insecure and all other languages are secure is ridiculous and probably harmful, leading people to insecure choices in other languages. I've seen plenty of security vulnerabilities in strongly typed languages in my days as a software dev
Googled it for you. From their github repo... "To date the OpenPGP.js code base has undergone two complete security audits from Cure53. The first audit's report has been published here." https://github.com/openpgpjs/openpgpjs
I believe parent was asking for secure software written in JS. I'm curious, too. (And examples of insecure software written in C surprise no one, do they?)
JavaScript may have many problems, but I don't really think security is one of them. In a properly isolated sandbox, such as a web browser, it's much more difficult to gain arbitrary code execution privileges than in a native desktop app.
The more we try to make encryption mainstream, the more difficult it gets because the mainstream interacts with computers predominantly via browsers. The mainstream won't adopt something that isn't highly similar to what a browser has to offer in terms of media richness (photos, videos, html), so you see Signal choosing technologies like Electron, a browser, to develop their native applications. The heart of what Signal is and does well (encrypt, decrypt, authenticate) is dwarfed by a pile of code that was added to make Signal usable by the mainstream. Desktop Signal, in terms of code and complexity, is no longer a security product -- it's an application with a web-like media experience that happens to tack on a very good library to do encryption and authentication.
As we all know, sometimes vulns are in broken crypto, but most of the time they're in a gotcha beneath a mountain of code.
It looks like those are 3 separate third-party libraries (Mocha, Mustache, and Backbone), so each doing HTML escaping a bit differently shouldn't be too surprising.
The first one doesn't escape single quotes or slash, but I have no idea how to get any HTML parser to treat just those as anything but text. Underscore's implementation will be correct, I'm sure.
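For reference, the conservative version is tiny; a generic sketch like this (not any of those libraries' exact implementation) covers both text nodes and quoted attributes:

    // Generic sketch, not any particular library's implementation.
    // Escaping &, <, >, " and ' is enough for text nodes and quoted attributes.
    function escapeHtml(s) {
      var map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#x27;' };
      return String(s).replace(/[&<>"']/g, function (c) { return map[c]; });
    }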
Honestly, and none of you are going to like hearing this, and the Signal people aren't going to appreciate me saying it: if you're serious about messaging securely, don't use Signal Desktop; don't use desktop secure messengers at all. Desktop applications are incredibly risky, far more so than iOS mobile apps are.
> don't use desktop secure messengers at all. Desktop applications are incredibly risky, far more so than iOS mobile apps are.
It's risky to use an open source OS. If you are serious about security, use Android or iOS. Instead of direct ssl connection to XMPP server, it's much safer to send all your data with Google Cloud Messaging. /s
Desktop computers are currently the most open-source, least opaque, least spyware-ridden, non-GPS-tracking, non-"microphone always listening for 'ok google'" computers the average person has. Why do you suggest that an iPhone is a much more private device?
You have my upvote, but I imagine that tptacek means that iOS is very very well sandboxed, and has an extremely tight and well authenticated download and update system which is extremely difficult for a third party to monkey with.
This is security via centralization and trusting a benevolent capitalist dictator. As long as your personal interests are aligned with interests of the benevolent capitalist's shareholders, you should be fine.
It is my least favorite security model. But, in the case of iOS it seems to be working well (for now). My long-term hope is for a decentralized FOSS model, but for the time being, in the USA, on a multipurpose machine, the benevolent capitalist dictator beats it, especially on sandboxing and package/app authentication.
I like open source software as much as most people on HN, and have worked with it for most of my career. But help me understand how a decentralized FOSS model gets ordinary lawyers, reporters, and congressional campaign staffers the level of security that iOS does? What are the mechanisms that assure safety for users?
The closest I can come to seeing something like this work is a Chromebook, and Chromebooks are locked-down and get their security model from a central authority.
> What are the mechanisms that assure safety for users?
What are the mechanisms that assure safety for users of iOS? I understand that it's had a good track record so far, but the proprietary closed nature doesn't inherently inspire trust. Surely a decentralised FOSS model done right could be secure for lawyers &c.
As the old saying goes, "if you could have invented a secure open source desktop chat app, you would have developed a secure open source desktop chat app."
In practice, empirical results win over theoretically optimal designs.
I think that’s unfair; the Ghost.io post from a day or so ago is a good reminder of just how much harder it is to do things when you have to make them work in a decentralized fashion. Decentralization makes everything harder.
You're right and this is just the classic walled garden tradeoff - freedom for convenience. Depending who you are, this might be acceptable, and it's good that we have choices.
In fact I am not looking for anything, as I am using conversations, but any unnecessary use of (Google) services is something I consider not privacy friendly, even if it is used just as signaling channel.
You should be far more worried that desktop apps don't require permission before eavesdropping on your conversations using the microphone (including 3rd level sub-dependencies of that NPM module you installed), than that Android or iOS is secretly recording your conversations under the guise of "ok Google" or "Hello Siri".
Regardless of the device form factor, if you don't want the OS to have access to the microphone then you have to physically disable the microphone (but if you are that paranoid you should probably live in the woods away from all electronic devices).
> Desktop applications are incredibly risky, far more so than iOS mobile apps are.
Ok, I'll play.
I get to choose 10 arbitrary apps from the Apple App store for you to install on an Iphone model of your choice.
You get to choose 10 arbitrary apps for me to install from the default Debian repos (which I believe excludes nonfree). Let's say Sid to make it interesting.
Who is going to be in worse shape after installing those apps?
You will be in worse shape than I will be. It's possible, in that insane proposition, that your Debian machine will be conceding remote code execution to the whole Internet, while my phone will just have some crappy apps on the home screen.
The question wasn't whether they could write an elaborate seccomp policy to contain any given Debian package. I just got to pick 10 of them, and install them.
Yesterday I apt-get install'd probably 5 such cruds just to record a small rectangle on my desktop. TBH I did this after apt-get install'ing 3 other animated-gif related cruds to do simple motion animation, then just gave up and used the half-baked Web Animations API in devtools of Firefox because it was easier and better documented than anything else I could find.
That's 8 total cruds written by who-knows, maintained by whoever, audited probably-never by no-one. Also, they pulled in various dependencies I didn't pay the faintest attention to.
How many of those 8 apps would you estimate sent my email contacts to a third party upon instantiation?
How many of those 8 apps would you estimate gathered various pieces of data to fingerprint my device? How many keep gathering data from every sensor source they can poll every time I run and use the app?
How many of those 8 apps would you estimate even touched the network at all?
Now let's suppose I download 8 cruds on iOS just as mindlessly as I did here. Do you think the answers to those questions will be different?
You should try to think of some specific problems that can arise in each environment instead of conducting some sort of weird Socratic dialogue about imaginary apps and a seemingly made-up iOS.
How is the beep local root different from jailbreaks in the past? Seems both are local privilege exploits, and I recall seeing that the iPhone has had a long list of those in the past.
beepmargeddon is a local privilege exploit, not an RCE.
I'm not aware of any Debian package (I don't use Debian that often though, so mind that) that A) installs a network service, B) uses unsafe defaults, and C) activates the service on boot by default
Mobile OSes have much better isolation between apps than Debian has. In the default Debian configuration any app can access all of your data, microphone, webcam, Internet, GPS sensors, etc. While the maintainers do a good job reviewing all of the software, having isolation at the OS level provides better security.
For example, Chrome has process sandboxing. It might seem unnecessary because the code is written by highly professional developers, but it helps Chrome be the most difficult browser to exploit. I am sure that if Debian could adopt something similar to the Android permission system, it would make it even more secure.
Care to explain? I mean, in principle. I distrust Signal Desktop, whether it's built on Chrome or on Electron, because either of those "platforms" are more complex than my OS (Debian GNU/Linux). But you seem to be making a more general point... what's the reasoning?
No matter what Signal does with Desktop, it will remain a standard desktop application, meaning it will in general be as secure as the least secure application sharing that desktop.
If all your software comes from the App Store and is properly sandboxed, it's possible for this to approach the security of iOS. Of course, many people will install unsandboxed software, making this extremely difficult to actually achieve, but it's at least possible.
What's a desktop and what does it mean to share it?
My applications share an X11 display, and if I'm not mistaken, they are pretty well isolated from each other. (Are you referring to MicroSoft Windows[TM] by any chance? Yeah, that's different.)
I dunno if it's still true, but it used to be that every process running in your X11 session had access to every keystroke from every other application. Like, say, every password you type into a terminal window or browser... If it _is_ still true (and I've noticed a bit of recent discussion asking why Ubuntu 18LTS is switching back from Wayland to X11, which hints to me that it might be), then your apps are _very_ much not "pretty well isolated"...
What are you using for X security context isolation? I've been wanting a good solution for that one for a while now, and the end of my list is still “write my own isolating proxy” since I never found a good one.
Nothing... but can X11 applications actually steal data from each other? I wouldn't know how to do that, but I know precious little about the X11 protocol. (Screen shots are an awkward option, I guess.)
If I was worried enough (I'm not), I could use multiple logins. Two different X servers under different users would be completely isolated.
Even using two X servers under two logins, the fact is that desktop apps by default have much more freedom and therefore much more surface to attack. Even the kernel syscalls alone are not safe; exploits that elevate processes to root are not rare. You'd need to lock them down much more than just using different users.
By the way, Linux distros may have a history, but nowadays I wouldn't bet on the security of the average distro vs Windows, and I say this as a dyed-in-the-wool Debian user. For example, Windows' desktop environment does have protections against cross-application attacks, unlike X.
Yeah, I definitely don't like hearing that, because phones can fuck right off. That's not my computer, that's someone else's computer that they're letting me use.
I'm not going to get a smartphone just to use Signal. I'll use my spare laptop instead, that I already have, and just run Signal on it. It's not going to get compromised by Boogeymen From The Scary Browser Tab because I won't be running a browser on there.
This is easily proven false by simply looking at professionals who, either by necessity or by regulation, require a strictly secure environment.
How common is it that the military uses iPhone apps for classified information or to interface with military equipment? Do operators in power plants or other sensitive infrastructure use iPhone apps as an interface to their systems? When security is a primary objective, hooking things up to a third party that continuously collects information you have no control over sounds like a very bad security recommendation, especially when the device has independent GPS, GSM, and network capability that bypasses normal network security tools.
If Signal crashes and a crash report is generated, who gets access to it on an iPhone? Who might get a potential memory dump with plaintext? If that party is Apple, I would strongly recommend first being okay with the idea that Apple has access to the plaintext before using such an app.
If your standard for operational security is the military, I have very bad news for you. Or good news, if your opsec goal is "be way better than the US Military" (that news is: you are already way better than the US Military).
The military obtains what security it has by attempting complete segregation and isolation; because it's the USG, the world's largest IT department, there are "public" and "private" networks, both clones of each other, both running the same insecure software. Both public and "secure" networks have been owned up comprehensively by malware in the past.
To get a sense of how bad the situation is, go look at the Common Criteria EAL vendor list, note which vendors have obtained EAL4 certification, and then compare to the security track records of those versions. That'll give you the spirit of the situation without requiring you to actually endure an EAL validation, which is something I have had the misfortune of participating in.
This may or may not be true, but in a lot of cases where you need encryption, you also need not to have a GPS tracker on you while you're using it. You have (at least slightly) more chance of being anonymous with a dedicated laptop computer than you have with any smartphone.
There's a "herd immunity" component: if you're in a group of 10,000 people with GPS tracking on, your position might be possible to guess quite precisely based on metadata like IP, network latency, etc. that can be compared across a large population.
This is a rare case where things are inconvenient for nerds to use, but more convenient for ordinary people, who tend to be more comfortable doing stuff on mobile platforms than nerds are.
Is that also true for professional use? For private use/relatively few threats I'd agree with that, the insistence on IM on desktop is a niche, but I'd think most people using chat a lot professionally (e.g. Slack) do not use it mobile-only. And if you'd want them to shift communication away from e-mail, it gets even more important.
(as p49k alluded to, iPads help, but few professional users have them as the primary device)
Desktop applications are incredibly risky, yes; as for iOS mobile apps we can't even know, as these devices don't allow auditing what software is running on them.
PGP has many problems and I hope a better replacement will come along, but the first step of secure messaging can't be using devices with closed, unauditable software...
They down-vote you, but you are 100% right. Using a mobile phone is intrinsically more insecure because they can track you much more easily, and you have much less control over the OS.
I vaguely remember someone on HN (or maybe some other forum) making loud endorsements of Signal over any other encrypted chat app. Even a statement like "9 out of 10 cryptographers would recommend Signal". What's the difference between Signal Desktop and Signal?
Signal Desktop is one of several clients for Signal Protocol; the most common client is --- I believe, but am not sure, but have good reason to believe --- either the Android or iOS mobile client, neither of which is a Javascript application.
We've recommended the mobile versions of Signal for a long time (see, for instance, the Tech Solidarity security resources, which haven't changed in a year), and everyone still recommends Signal Protocol. I think we all should have been noisier about the security limitations of the desktop app environment. And about Electron.
I was always uncomfortable about Signal on non-iOS devices. I feel the same about password managers too; it's a giant PITA, but I specifically do not want all my passwords in one place if that place is a wild desktop.
A properly sandboxed desktop application is no more dangerous than an iOS app. Of course, Chrome doesn't work in that sandbox, and they're using Electron, so this doesn't quite work.
FWIW, the sort of people with access/leverage to be able to compromise a device through the baseband probably don't leave behind traces revealing it happened. They just drop hints to the local cops that they ought to find a reason to pull you over for a traffic stop and coincidentally smell pot smoke to give them probable cause to search your car...
(waves at the NSA guys...)
I _hope_ that sort of capability is still a year or two away from guys with an Ettus USRP and a bunch of open source software and hacking tools glued together with Python... But keep your eyes on DefCon and CCC to be sure...
They probably don't leave traces because compromising a modern Apple device through the baseband would be quite a trick, given that it's an independent peripheral connected to the AP over on-chip USB.
That's good to know (and for almost anybody else I'd add "citation needed"...)
Didn't the iPhone baseband processor at some time in the past have DMA? I vaguely recall a paper, perhaps from Usenix, that seemed to claim any phone that had a software unlock where you could disable the carrier locking was almost certainly using DMA connections between the baseband and AP. Any hints or links or search terms which would show me how modern an iPhone needs to be to be "safe" from that?
I don't know what the first iPhone to have an HSIC baseband was, but it has been awhile. I assume every iPhone anyone is really using today fits the description I gave. The iPhone 4 does. This is a really basic security design concern for mobile devices; you can assume that neither Apple nor Google (for their own Google-branded phones) ships products where a corrupted baseband can simply DMA its way into the AP. It is a little weird to me that people on message boards assume they've outguessed the hardware security teams at both Apple and Google on one of the most obvious attack vectors for their phone designs; both companies spend huge amounts of money on this stuff.
"It is a little weird to me that people on message boards assume they've outguessed the hardware security teams at both Apple and Google on one of the most obvious attack vectors for their phone designs; both companies spend huge amounts of money on this stuff."
For what it's worth, that isn't the assumption people are making. The easy assumption to make is that the security teams were unable to convince product owners at these companies that the extra expense of solving this was worth their investment. Especially because practical baseband attacks still haven't hit them as an issue, and it's very rare for product owners to take on major changes or invest in security for "theoretical" threats.
Just look at all of the services still allowing SMS for 2FA - it's not because the security team doesn't know that is an insane thing to be doing in 2018.
No, it's because end-users don't adopt 2FA via TOTP and, to a first approximation, nobody uses U2F. It's not a corner security teams are cutting. Microsoft's security team makes the same decisions.
Microsoft doesn’t have “a” security team. They have dozens of security-ish teams. And I can assure you that none of them made that decision. The product teams did. And security people there disagree with it. You likely think security teams at BigCorp have more teeth than they do. With a few exceptions, Microsoft is known for hiring well qualified security people and then giving them no authority to do much until something is already on fire.
Search for a tweet by Kostya expressing surprise and delight at the fact that they actually fixed an internal find a couple years ago. Or grab a beer with the nearest ex-microsoft security person and have them tell you stories about servicing internally discovered vulns.
You’re right that the security team aren’t cutting corners. Because that isn’t how things work.
Some may argue this has already happened - to at least half their social circle. (Not me though, I consider myself "recreationally paranoid" rather than "raving looney paranoid" - other people's opinions on that probably differ...)
Signal runs just fine on an iPod Touch (after a little fussing around getting it set up with a phone number...)
If you're paranoid enough, it's easy enough to avoid installing things that're likely to be crapware on your secure comms device.
Apart from Signal, the only other non-iOS-supplied apps I have installed on my iPod are a bitcoin wallet and Onion Browser - both of which I angst a little about, since they're both in the first category of app I'd attempt to subvert if I were a nation state actor, or a blackhat looking to steal bitcoin from people least likely to report it to authorities...
If you're willing to adopt the "run a dedicated device for crypto" approach, something like Tails running on a USB-key-like device gives you the same kind of security position without having to trust Apple.
Yeah, I do that too (and an offline wallet on a RasPi which has never been internet connected) - but for secure messaging and for some bitcoin transactions, I want that device in my pocket. I treat the iPod cryptocurrency wallet like, well, a wallet - containing amounts I feel comfortable carrying around (like the couple of hundred in cash I might have in my wallet at any time). I "trust" Apple enough to store that. I wouldn't treat the iPod as a bank, storing significant or lifechanging amounts of value.
I was referring to the fact of imitating the HN brand by capitalizing on the domain name so they could scoop up all the traffic and SEO love. That's why the domain was banned to begin with a few years back. There was a whole discussion about it.
The site that I was referring to was literally hackernews.com, and was very popular among the tech crowd from the late 90s onward. (long before YC was conceived)
Almost 20 years ago, I used to rotate between hacker news, fark, and slashdot to get my daily dose of internet.
Not that one. That's owned by Space Rogue. I'm talking about the one linked now, owned by some Indians who keep copying articles off other sites. There was a reason this got banned years ago. At one point you could trace articles from The Register and Motherboard paragraph by paragraph to their stories, but with bad grammar and bad sentence structure.
On their Android app, the first thing it makes you do is give them permission to read your SMSs. It won't let you verify by entering a code. I immediately uninstalled - doesn't seem like a privacy-focussed organisation to me.
That's odd because I've absolutely verified numerous Signal clients by entering a code, without granting access to my SMS (I use Google Voice so my SMS database is basically empty except for random spam from my shitty carrier)
It used to be that Signal required reading the code from SMS and didn't work with Google Voice. But they listened to complaints and changed it to allow entering the code.
That sounds absolutely horrendous. Even Whatsapp allows you to verify using a fixed line and claim that number on the mobile for privacy.
Coupled with the recent LocationSmart revelations, it would make Signal unusable for those who wish to keep their location private. You absolutely need to provide the mobile number of the actual terminal being used.
I needed to give them a phone number I could read SMS from to set it up, but there's no need for that to be "the mobile number of the actual terminal being used".
At least on iOS, apps cannot get the phone number of the device through any APIs in the iOS SDK (AFAIK). So no app on the platform can reliably confirm if the number you entered is the number of the device. They all assume that to be true when you confirm the code they send by SMS. As another reply here has put it, you just need a device to receive the code via SMS. You can then use that code on any other device to set it up.
That seems odd. Perhaps it's to do with the fact that the app can act as your primary SMS app as well (I use it like this). But still, should be able to have it as an option.
"...the new vulnerability (CVE-2018-11101) exists in a different function that handles the validation of quoted messages, i.e., quoting a previous message in a reply.
"In other words, to exploit the newly patched bug on vulnerable versions of Signal desktop app, all an attacker needs to do is send a malicious HTML/javascript code as a message to the victim, and then quote/reply to that same message with any random text.
"If the victim receives this quoted message containing the malicious payload on its vulnerable Signal desktop app, it will automatically execute the payload, without requiring any user interaction."
Is it the case that you don't even need to have the attacker's number in your contacts list?
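To make the described escalation concrete: when the renderer has Node access, the "malicious HTML/javascript code" in the quoted message only needs to be something of roughly this shape (purely illustrative; not the researchers' published proof of concept):

    <img src="x" onerror="require('child_process').exec('calc.exe')">

The broken image load fires onerror, and with nodeIntegration enabled that handler runs with full require() access, which is exactly the XSS-to-RCE jump discussed above.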
This news saddens me. I've been the last user of the Signal desktop app around me, and it looks like I have been too optimistic about Electron. I've now deleted all Electron apps and recommend everyone do the same.
I'm really starting to get tired of all these bloated JavaScript desktop apps. I get that it's more convenient for developing cross-platform apps with modern looking UIs, but I really wish there would be an increased focus on reducing the overall bloat and resource use, both among app and framework devs.
Speaking as a Windows user, I would vastly prefer a well-designed native application (WinForms/WPF) over a JS monstrosity any day.
With such an obviously "DON'T USE THIS" method name as dangerouslySetInnerHTML, I'd expect that we'd see something like // eslint-disable-next-line above it.
Interesting; more or less, nothing is 100% secure. Looks like the DEA had cracked whatever crypto BlackBerry was using, and quite a few drug dealers were caught that way (one example: https://www.thedailybeast.com/the-deas-dirty-cop-who-tipped-... ). They must have been using it because of the reputation BB had. I wonder what we will find out in time about the narcos, terrorists etc. using Signal.
Anyone else have an aesthetic feeling about this? Signal Desktop felt clunky to such a degree that it takes away from trust; Telegram ends up feeling equally secure, even though it is not.
Unfortunately there's no way to know why users flag something, but that brings up an interesting idea: in order to flag something require the user types out a reason, and if an article is flagged it could show why people flagged it.
Because the discussion is already a flame war. I feel like everybody has a solution. Try not to get burned: JS is bad! Ah sorry, Electron is terrible! No, innerHTML is a sin!
I thought we already accepted that JS is bad.
It is objectively a bad language, with the weak typing, quirky edge cases and a hundred bad libraries, and some backdoored ones available on npm as well.
We've also accepted that Electron is bad, we don't need native applications that bundle Chrome so they can run JavaScript. It's a bloated memory haemorrhaging mess pretending to be a cross platform framework.
And if that wasn't bad enough, we have JavaScript developers who know nothing about security writing a supposedly "secure" desktop app where an XSS is now an RCE.
We've come such a long way that browsers can now almost securely execute JavaScript, and you're telling me that a 2018 "secure" chat messaging desktop application has an RCE from an XSS in the messaging feature. It sends and receives text to other people, for god's sake. Myspace wasn't even this bad, and it didn't pretend it was secure.
It is absolute lunacy.
I won't be surprised if the Signal desktop brand is completely ruined by all this fallout, because these are just rookie mistakes compounded on rookie mistakes.
When will people start using plain old PGP — a tool that does one thing only, and does it right? Sure, it's a little harder than using just one tool that handles contacts, communication, formatting, and encryption, while making popcorn and walking the dog, but it works, and it's secure if you use it right.
Our efforts to make encryption easy are going to get someone killed.
> a tool that does one thing only, and does it right?
It does multiple things (signing messages, encrypting messages, signing and encrypting messages, signing other people's keys, publishing keys, downloading keys, finding trust paths between keys, publishing your contact list to the world, publishing information about when you met certain people, displaying photos of people, revoking keys, symmetrically encrypting files with a password), and it does none of those things right.
In particular, it unambiguously does authenticated encryption wrong (streaming decrypt, then authenticate), which was one of the root causes of the EFAIL vulnerability.
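For contrast, this is what getting the order right looks like with an AEAD mode. A minimal Node.js sketch (illustrative only; nothing to do with OpenPGP's actual packet format, and key/nonce handling is elided):

    const crypto = require('crypto');

    function decryptGcm(key, iv, ciphertext, authTag) {
      const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
      decipher.setAuthTag(authTag);
      // update() streams plaintext out before the tag has been checked,
      // so buffer it rather than handing it straight to a renderer...
      const chunks = [decipher.update(ciphertext)];
      // ...and only release it once final() has verified the tag
      // (final() throws on a forged or modified message).
      chunks.push(decipher.final());
      return Buffer.concat(chunks);
    }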
I'm not sure what you're trying to say here, but Enigmail 2 was released a few months after the researchers disclosed the vulnerability to the project[1], so it would've been a rather sad state of affairs if the release hadn't included a fix. That's not to say that everything's fine now for users of Thunderbird and Enigmail[2].
Thunderbird does not download remote content by default.
I don't know anybody who is using Apple Mail with GPG, but if there are such people, they have been doing it very wrong regardless of this vulnerability. It's an unsafe combination.
I have no statistics on what people use PGP with, but asserting that most people use it with Apple Mail and Thunderbird is baseless and without proof.
> Thunderbird does not download remote content by default.
The researchers behind EFAIL found a number of ways to bypass the remote content setting. Not only that, but Hanno Böck found another one today[1] that hasn't been fixed yet.
Ah, so don't use the now-secured open source client using Signal's protocol. We should use PGP with all the weak yet-to-be-patched clients. 'Cause it's not PGP which got hacked, it was the client. Very different from how the Signal client got hacked, not their protocol. /s
If I accept that using PGP manually is acceptable for getting widespread encryption, then you're right. But the fact is that even tech people rarely use it because it's so cumbersome; the chance of normal users using it successfully, let alone using the terminal these days, is so slim that it's basically absurd to think that is the way we're gonna get widespread encryption.
I don't want to only talk to my tech friends. My best friend runs a small business that has nothing to do with tech, and when I talk to him I want to be sure no one else can read/listen to what we're saying, because it's private. There is no version where my friend learns to properly use PGP consistently so we can talk that way. The only reasonable way is WhatsApp or Signal.
Given the absent security of desktop Linux, and the dreadful opsec of its users, as revealed by many cryptocurrency wallet thefts, I wouldn't place much trust in persistent PGP secrecy unless all participants used Heads, and renewed keys regularly, and ran nothing except gpg. How many people do that, in the entire world, do you think? A hundred, at best?
It relies on several features of PGP that mean that messages aren't tamper-proof. If PGP made sure messages replayed and altered by attackers would not decrypt, then the attacks wouldn't work; you couldn't replay an email back to someone to steal it.
It relies on a client having an HTML renderer, but the underlying issue is messages that can be tampered with.
It also relies on clients misusing an API call. And then passing the result to a full HTML viewer that can access the internet. So, again, not an exploit if you use plain PGP.
https://thehackerblog.com/i-too-like-to-live-dangerously-acc...