dododo's comments | Hacker News

new tool for phishing. perfect.


I guess that's the negativity Larry Page was talking about yesterday, sad indeed.


Because security really is never a good reason to be skeptical, and skepticism, no matter how valid, is just a cloak for hating on Google. I love GMail and lots of Google's products. I even like that this feature will be limited to companies that register with Google. It doesn't change the fact that those partners could be broken into and used to send out malicious emails.

Also, please don't try to insinuate that I'm against the feature. I'm not. I can stop using GMail if I ever want to. I just think everyone should consider their own security and decide how valuable it is to them based on reasonable possibilities.


Yeah. There are so many positive possibilities, yet a large group of people still choose to focus on the negative side. I hope this can change one day.


So many possibilities? There is nothing here that couldn't be achieved by having the user click on a link.

And he's right, it does make phishing easier.


> So many possibilities? There is nothing here that couldn't be achieved by having the user click on a link.

Because it's machine-parseable, it makes a lot of presentation options available that aren't available when you rely on a standard hyperlink, which lacks a data format with a standardized identification of the requested action.
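
For illustration, the markup involved is schema.org-style structured data embedded in the message; a sketch, roughly along these lines (the URL and action name are invented, and ViewAction is just one action type from the schema.org vocabulary):

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "EmailMessage",
      "potentialAction": {
        "@type": "ViewAction",
        "target": "https://example.com/orders/123",
        "name": "View order"
      }
    }
    </script>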

> And he's right, it does make phishing easier.

Well, that depends on what the requirements are to have the client present the actions from the schemas: the current Google requirements, I would say, do not make phishing easier. You must register with Google for the schemas in the email you send to be recognized in Google products (e.g., Gmail) [1]; the registration is per-set-of-emails, fairly specific as to the content, and appears to be manually reviewed [2].

[1] https://developers.google.com/gmail/schemas/registering-with... [2] https://docs.google.com/forms/d/1PA-vjjk3yJF7MLPOVKbIz3MBfhy...


>Because it's machine-parseable, it makes a lot of presentation options available that aren't available when you rely on a standard hyperlink, which lacks a data format with a standardized identification of the requested action.

You're right: this addition turns email into a data or event queue of sorts with standardized actions that can be performed on it. I like it. Given that email is one of the few non-vendor-locked communication technologies we have, and we already have a lot of infrastructure to deliver it reliably, this seems a promising evolution path.

I'd like to see something similar for IM: currently SMS is the only open standard for instant messaging, and any other option locks you into either a platform or a specific client, which the other person will probably not use.


> currently SMS is the only open standard for instant messaging

XMPP is an open standard (through IETF RFCs and related standards) for messaging and presence whose motivating use case was instant messaging: http://en.wikipedia.org/wiki/XMPP


Right. My comment is more on the adoption rather than availability of open standards.


XMPP is also used, behind the scenes, with Google Chat and Facebook Chat. It has a fair amount of adoption; you just don't really hear about it much.


Just like short sellers, we need these naysayers to keep us grounded. :) Yes, I choose to see the positive possibilities and the opportunities that show up thanks to our beloved naysayers.


No. It's more that there are so many positive possibilities yet a large group of people still choose to exploit them to make themselves money by harming others and end up breaking things for everyone else.

The whole history of modern operating systems and the Web is an example of that. Think of all the amazing and useful things that could be (and have been) done had there been no Data Execution Prevention, Same-Origin Policy, or any other limit introduced because of security.


[deleted]


> This thread is hilarious. Keep your heads in the sand.

Does everyone need to get excited for features that existed in Outlook ca. 2000 A.D. just because they are wrapped in a shiny Web 2.0 veneer?


It did? Using an open format? I've never heard of that. Do you know the name of the feature?


> It did? Using an open format?

As much as Web 2.0 enthusiasts loathe the fact, Microsoft actually invented XMLHttpRequest. See http://stackoverflow.com/a/12067786/112125
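
For the historically curious, here's a sketch of the classic cross-browser dance that history left behind (the request URL is invented):

    // IE 5/6 shipped the object via ActiveX before other browsers adopted
    // a native XMLHttpRequest -- the sense in which Microsoft "invented" it
    var xhr = window.XMLHttpRequest
        ? new XMLHttpRequest()                    // native, in later browsers
        : new ActiveXObject("Microsoft.XMLHTTP"); // the IE 5/6 original
    xhr.open("GET", "/example", true);
    xhr.send();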


What does that have to do with anything? Netscape invented Javascript. Oh look another random point that seems to provide nothing to the conversation.

I think it's pretty well known that Microsoft (or should I say a select few working on IE at Microsoft) invented "AJAX".


You keep harping on the "open format". So what? That doesn't change any of the talking points here.


Being an open format that can be implemented by any other mail client is, in my opinion, an important part of the feature, especially for those of us who don't use Gmail.


They might allow for tar files in the subject and, come to think of it, ogg files as well...

So what?


I don't think so, it seems an obvious concern to me.


How many spam/phishing emails do you actually get in your Gmail? And of those, how many are DKIM/SPF signed?


The people targeted by phishing attacks have no idea what those terms mean.


It doesn't matter. Gmail does, and they block the feature for any sender that doesn't sign their emails.

https://developers.google.com/gmail/schemas/actions/securing...
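
If you want to check a message yourself, the third-party dkimpy package can verify a raw message's DKIM signature; a minimal sketch (the filename is invented):

    # pip install dkimpy
    import dkim

    with open("message.eml", "rb") as f:
        raw = f.read()

    # True if the DKIM-Signature header validates against the sender's DNS key
    print(dkim.verify(raw))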


I routinely send emails through a mail server that doesn't sign them and they are delivered to Gmail recipients just fine.


Sorry, I edited the post. They block that feature, not the whole mail.


a company that inserts this type of response hook in their emails needs to register with google. the response interface looks clearly separated (it's not in the body of the email), so there is no way that a phisher could fake it.

so it actually could help to SOLVE the phishing issue. especially if other mail providers sign on.


Hopefully Google will get S/MIME built in.


i think you mean DES not DSA.


Whoops, my bad.


vitamin A is not water soluble. a significant excess of vitamin A is not fun.

https://en.wikipedia.org/wiki/Vitamin_a#Toxicity


The point is not so much that "megadoses" are healthy; the point is that at BEST they're wasteful.


quantum theory is a hypothesis. perhaps it's the best one so far for what the brain is made from. but its truth is only as demonstrable as this: it's not yet been falsified.


You're confusing hypothesis with theory. A hypothesis is a proposed explanation that can be tested (i.e., is falsifiable). A theory, on the other hand, is a hypothesis that has undergone extensive testing and has been shown to be a plausible explanation for observed phenomena. Quantum mechanics has undergone rigorous testing, and has proven time and time again that it can accurately explain many of the properties of our universe.


You can say the exact same thing about any scientific model. My view on this has always been: as long as the model accurately predicts experimental results, assume it is correct for your calculations until it is proven wrong.

Even then, we never stopped using classical mechanics even though they were proven to be wrong at a variety of scales. They just happen to very closely approximate reality in some contexts and are useful.

The fact of the matter is, we have tools that are correct as far as we know and they point towards thinking that every quantum system is computable. Until this has been proven wrong, the fallacy is believing the brain is different, not the other way around.


Theories not only have to predict outcomes of events, they must also be falsifiable (and must explain something thus far unexplained by other theories; you can't just recreate gravitational theory, for example).

You are, by your own admission, working with an incomplete understanding of how a scientific model functions. So I ask you: why should you even be commenting on this topic? Why should anyone take what you have to say seriously on this specific topic?


So no one should be commenting on this topic unless they have a perfect understanding of scientific theory? That seems terribly counter-productive.

I'm commenting on this topic to share my opinion and, to the extent of my knowledge, try to explain why I believe someone else's reasoning is flawed.

Now if you believe my reasoning is false, you're free to call that out. You're not free, however, to dismiss my contribution to the discussion simply because I'm not operating under perfect understanding of a field that isn't mine.

Call it out, explain why, participate in the discussion, and drop the personal attacks. I think at least part of my point is valid, even after what you pointed out.


I'm pretty sure I am free to dismiss your contribution "simply" because you don't know what you're talking about.

But let's not get caught in the weeds here; I don't think you're correctly conveying the level of certainty with which we understand quantum mechanics. There's a ton we don't have the slightest idea about in this area of science, so let's not forget that.


Quantum theory is a "hypothesis"? The entire foundation of modern electronics is based on quantum theory. It's something that has been tested over and over in labs and the regular world for over a century.


It's been tested and proven FALSE; just look at relativistic effects. We just don't have anything better.


if you want to build a reliable system, one useful thing to do is use equipment from multiple vendors. sure it's inconvenient, but by doing this you can often de-correlate failures, especially if you want to improve on someone else's reliability.

e.g., from simple things like hard drives in a raid from different vendors, to n-version programming in safety critical systems (like airplanes).
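
to put toy numbers on the de-correlation idea (a sketch; the probabilities are invented for illustration):

    # two redundant units from one vendor can share a firmware bug, so the
    # correlated failure mode can dominate the independent one
    p_unit = 0.01                    # chance one unit fails on its own
    p_shared_bug = 0.001             # chance a common bug kills both at once

    p_both_independent = p_unit ** 2                         # 1e-4
    p_both_same_vendor = p_both_independent + p_shared_bug   # ~1.1e-3

    print(p_both_independent, p_both_same_vendor)

mixing vendors is an attempt to push that shared-bug term toward zero.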


That works when the interfaces are totally standard, but edge/core routers are not like that. Cisco supports one set of protocols for talking to other Cisco products; another set for talking to everything else. The "everything else" protocols suck in a lot of ways (they're ok inter-site, but not really so great intra-site).

Same with Juniper. (there aren't really other viable options besides those two)

You could build the same site fully independently with all-Cisco on one, and all Juniper on another, and potentially get some better isolation from vendor faults, but at very high expense.

You end up with much worse reliability if you have a mixed Cisco/Juniper network without a lot of additional isolation.


Re: "there aren't really any viable options..."

Total misconception. BGP, OSPF, ISIS, LISP, etc. are all non-proprietary. Sure, the root cause of this particular problem is that CF is using something specific to Juniper, but router interoperability is not predicated on components like that. The tool in this example was something CF operationalized, and it likely had little to do with their routing, except as a metric they may have used to influence routes.

People who have all-Cisco or all-Juniper shops mainly do it from a cost perspective. Sure, there are some reasons outside of that, but cost is likely the big driver. The more you buy, the more you save. And the network sales realm is royally messed up to begin with. I've seen Juniper give 90% discounts on hardware just to break into a Cisco shop. But the reality of the situation is that all of this gear is marked up well into the thousands of percent. So if you're not getting, minimally, 50% off, then you're probably not doing your due diligence.


"there aren't really any viable options" to juniper or cisco for core/edge routers.

There are some routing protocols which interoperate (which is how different sites on the Internet can talk to each other), but most of the protocols used for HA or management of a given set of routers, or, more importantly, most tested/debugged implementations of HA and device management, are Cisco or Juniper specific.

No big deal announcing routes to your upstream if you use Juniper and they use Cisco. Big deal if you have Cisco+Juniper and want to do HSRP (Cisco-only).


Well, no.

I've been in network engineering for 12+ years and I fundamentally disagree with a lot of what is said about "networking" and interop by many programmer-types on HN (no aspersions intended). Yes, you may understand system DevOps to a point, but I'm not sure you've spent a significant amount of time studying Dijkstra's algorithm or truly have an idea of how to deploy a global IPv6 overlay. I'm not trying to be snide here, but I feel that, oftentimes, many things that come up on HN are just fundamentally designed wrong from the PHY all the way up until the devs get hold of the rest.

I've been at a very successful startup (think one of the top online backup services) whose network was run on commodity junk hardware. They were asking me how I'd troubleshoot this, that, and the other thing - obviously with no debug (he said that with a grin). First and foremost, you designed it wrong: I can show you inefficiency in about 10 minutes of performance engineering, inefficiency I would have designed around without a second thought. So, yes, I can waste time tracking down a bad NIC on your network, but if you feel that you've earned geek cred because you fired up Wireshark and parsed through a few simplistic ARP tables, you haven't impressed anyone but yourself. That's when I realized I was working with professional developers, and not network architects.

Your simplistic view of FHRP is trivial at best. Maybe if you were talking about how you'd design fault tolerance into a virtual link, say an LSP, with something like BFD in your design, I'd be more impressed than by conversations about proprietary redundancy protocols, which most network engineers won't touch for a variety of reasons beyond the big "C".

</endrant>


Virtually no network engineers (by percentage) have to do anything other than worry about what their vendor supports for a given configuration (and usually a fairly small set of configurations, too); it's much more about policy and operations.

Similarly, very few developers have to solve open CS problems in writing a CRUD application (or, I guess more comparably to ops, come up with a novel implementation of a complex algorithm).

This is progress, though.


"Virtually no network engineers (by percentage) have to do anything other than worry about what their vendor supports for a given configuration" - this statement puts a perspective on your thinking. And then I read your information on the services your company offers, and I realize that it's not worth having a discussion.

"<redacted> takes your security very seriously." - right. That's a statement, not information regarding the thought or implementation. There's not even a mention of technology. <sigh>


Be nice.


That's why software-defined networking, OpenFlow, etc. are going to take off: you can get back control of the protocols and what is going on, and avoid the vendor lock-in.


> That's why software-defined networking, OpenFlow, etc. are going to take off

I've been hearing this for a decade. It's still not true. I'm not sure why, either.


What do you call Arista? Also there is a lot of interesting "virtual appliance" networking going on.

I agree the right choice today is almost certainly a C or J router and probably C or A switches, but e.g. hardware load balancers like F5 seem to be losing out to software in most deployments (increasingly).

I built a decent sized network with Zebra 15y ago, which was pretty obviously the wrong tech, but interesting.


Cisco + Juniper environment and you want gateway redundancy?

Hello, VRRP!

There are open standards for pretty much every Cisco proprietary protocol. Even EIGRP is now available as an informational RFC.
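
A rough sketch of what a mixed-vendor VRRP pairing could look like (addresses invented; syntax from memory, so treat it as illustrative rather than copy-paste ready):

    ! Cisco IOS side: advertise virtual gateway 192.0.2.1 in VRRP group 1
    interface GigabitEthernet0/1
     ip address 192.0.2.2 255.255.255.0
     vrrp 1 ip 192.0.2.1
     vrrp 1 priority 110

    # Juniper Junos side: same group, same virtual address
    set interfaces ge-0/0/0 unit 0 family inet address 192.0.2.3/24 vrrp-group 1 virtual-address 192.0.2.1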


you can't utilize OSPF, ISIS, BGP, et al. for high availability? You do realize HSRP is merely for a redundant gateway address, right?

most service provider networks manage via the CLI (generally scripted, for better or worse) and occasionally a vendor-specific API.

while I will agree juniper and cisco are generally the best choice for core/edge routers, there are other 'viable' options depending on your requirements. if you need in excess of 2500 BGP sessions on a single chassis, there aren't many viable options besides a Cisco 7600.

I certainly do not mean to be rude, but I feel you're attempting to speak from experience you don't fully have (yet, hopefully!)


Wow, just... Wow. So. Far. Off. Base. (see my post above)


I view it less as a cost factor and more as a convenience. Developing expertise with both Juniper and Cisco takes a lot longer. Each router vendor has its own quirks. Even just building software to monitor routers is basically a full-time job since it's a constantly moving target. New bugs are always coming up...


"But, the reality of the situation is that all of this gear is marked up well into the thousands of percent."

You seem to be confusing hardware with software. Juniper's gross margin was 64.25% for the quarter ending Dec. 31, 2012, and in that ballpark for previous quarters back to inception.


You'd be amazed how often "standard" network protocols behave subtly differently between vendors. You have to exhaustively test interoperability for every single feature and config option if you want assurance that it isn't going to break in some bizarre way.


It is also really nice to be able to call one TAC and have them devote effort to fixing it. If you have a heterogeneous network, they can pass the buck; even if they are awesome and try to help you out, there is no way Cisco's TAC knows as much about Juniper stuff as they do about Cisco, it is harder for them to put together a duplicate config, etc.

Back around 2000 this was a big deal. Cisco slacked on gigabit routers, and Juniper didn't have a comprehensive product portfolio, so while SP networks could be all-J (but maybe with some switches from Extreme, etc.), enterprise networks were a lot more likely to have Juniper and Cisco mixed if they needed Juniper performance in the core. Juniper ended up broadening their portfolio, and Cisco improved their high-performance offerings a few years later.


http://www.zdnet.com/uk-internet-hit-by-linx-router-failure-...

Here's an example of a large provider with parallel infrastructure, each powered by a different hardware vendor (Brocade/Extreme). One failed and one kept working. I seem to recall a more detailed RFO, but my Google-fu has failed me this morning.


LINX just runs inter-provider switch fabric, though, which is vastly simpler, and just runs two separate switch fabrics for customers to plug into.

Running an anti-DDoS/CDN service which handles traffic like Cloudflare does would be vastly more difficult.

It's certainly possible to do, but I think that, given ~reasonable engineering resources, the net reliability of a heterogeneous J/C version of CloudFlare would be less, and performance worse, than what they have now.

Switch fabric is a lot closer to the "run different models of hard drives" case (although you don't do that WITHIN a RAID group either -- you do it on separate RAIDs and possibly separate chassis) than routing infrastructure (which is like running a 777 with 1 GE engine and 1 RR engine; at best, you can turn it back into a 747 and run 2 GE engines and 2 RR engines).


I'm not going to go much further because this debate is useless without a context of limits and expectations. No one is discussing simplicity; LINX's operation is not simple. A specialized provider takes on a difficult function as a core competency and offers the end-user simplicity in return. As an end-user, the difficulty is a non-factor: just make it happen. Couple "reasonable" with expectations, and then we know what to expect. If it costs the moon to never make this happen again, then charge accordingly. If this happens once in a blue moon, then charge a lesser price.


I'm afraid this perspective is misconceived. I, too, used to believe it. After all, in any portfolio, risk is reduced by diversification, right?

Unfortunately, amortized over the lifetime of a computer system, risk is not reduced in this manner. There is no hedging of vendor vs vendor in a technical portfolio; what happens instead, for any tech of significance, is the internal development of an abstract control plane that can communicate with both, and that control plane is then the single source of defects. In the meantime your engineers have to become world-class experts on two platforms rather than one. In practice the divided loyalties will turn one world-class engineer into two half-assed ones.

Domino-effect failures, or global misconfiguration failures like those experienced by Cloudflare, are edge cases in my experience and not something you should optimise for. When they happen they tend to be catastrophic, but worse is the insidious decline in quality caused by carrying too much technical debt.

Cloudflare's scenario is not comparable to the installation of a RAID set. They are more comparable to a developer of RAID controllers. The experience curve for such is very, very long.

Not saying they couldn't have done other things to make this situation less catastrophic, but diversity of core technology portfolio isn't a winning ticket.


Will customers be willing to pay for the additional costs incurred by that inconvenience? I think there are a whole lot of different things to try before you start introducing different routers with different OSes/quirks/capabilities into the mix. Frankly, that sounds like a recipe for not just inconvenience but chaos.


perhaps you'd like to use -nativecss as the prefix rather than -ios? you'd hope styles would be portable across platforms?


Good point. My reasoning was that I kept getting confused between -ncss, -nativecss, and -nc, so I kept it -ios and -android.


i wonder if you are thinking of another monoid property of a gaussian?

suppose we have:

   x ~ N(0,1)
   y|x ~ N(x, 1)
then we have:

   y ~ N(0, 2)
i.e., gaussians are closed under marginalization.

however, i believe gaussians are not the only distributions with this property either: i think this property corresponds to the stable family of distributions: https://en.wikipedia.org/wiki/Stable_distributions
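
a quick monte carlo check of the marginalization claim (a minimal numpy sketch; marginally, y is x plus independent N(0,1) noise, so the variances add):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    x = rng.normal(0.0, 1.0, size=n)   # x ~ N(0, 1)
    y = rng.normal(x, 1.0)             # y|x ~ N(x, 1), one y per x

    # y = x + independent N(0,1) noise, so Var(y) = 1 + 1 = 2
    print(round(y.mean(), 3), round(y.var(), 3))   # ~0.0, ~2.0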


Stable distributions are something else, not related to marginals or conditioning. They come up when studying laws of averages.

Gaussian distributions belong to the class of stable distributions, though, because of another of their properties: independent Gaussians, when added, are again Gaussian.


the particular property of stable distributions i was thinking of is "closure under convolution", which is the above marginalisation (i believe?).

infinite divisibility is (yet another) property of gaussians!


Nope. Closure under convolution is the same as closure under summation of the associated random variable, which is the defining property of stable distributions. This is explained in the first paragraphs of the wikipedia page you linked to ;-)

Closure under marginalization is something else.

It so happens that the functional form of the gaussian satisfies both, but the two properties are not at all the same.

  P1: X, Y independent gaussian => Z = X + Y gaussian
  P2: X, Y jointly gaussian     => X | Y gaussian
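
(For reference, the convolution identity behind P1, in the thread's N(mean, variance) notation:

    X ~ N(m1, s1), Y ~ N(m2, s2), X and Y independent
      => X + Y ~ N(m1 + m2, s1 + s2)

which is also why y ~ N(0, 2) in the subthread above: the two unit variances add.)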


Yes, that's what I was thinking of! Thanks for the link to stable distributions -- hadn't heard of them before.


all exponential family distributions may be written in a form that depends upon a set of fixed-dimension sufficient statistics (https://en.wikipedia.org/wiki/Exponential_family). these sufficient statistics have the additive form described in this article (a consequence of i.i.d. sampling). it is common to exploit this structure when implementing efficient inference in, for example, mixture models.

if you combine this property with a bayesian analysis, and put a conjugate prior on the parameters of an exponential family distribution, then the posterior distribution and the marginal likelihood depend upon the data only through these sufficient statistics, and everything else is easily computed. in this form, one of the sufficient statistics often has an interpretation as a "pseudo-count": how many effective samples are encoded in your prior?

exponential family distributions include: poisson, exponential, bernoulli, multinomial, gaussian, negative binomial, etc.
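
to make the pseudo-count point concrete, a minimal beta-bernoulli sketch (the function and prior values are my own invention for illustration):

    # conjugate beta-bernoulli update: the sufficient statistics are
    # (successes, trials), and the prior's (a, b) act as pseudo-counts of
    # successes/failures imagined before any real data arrives
    def posterior(a, b, data):
        heads = sum(data)          # sufficient statistic 1
        n = len(data)              # sufficient statistic 2
        return a + heads, b + (n - heads)

    # a Beta(2, 2) prior behaves like 2 imagined heads and 2 imagined tails
    print(posterior(2, 2, [1, 1, 0, 1]))  # -> (5, 3)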


these torrent sites seem to have a business model that makes them money, even a profit. what prevents larger media companies from adopting this proven(?) business model?


Two reasons:

* The torrent sites don't have to spend money producing content. If they had to actually make the content they were promoting, would they be making a profit?

* Even if it were possible to make money under this model, it would be much less than the big media companies are making now. Why would they switch?


You're confusing revenue with profits. There is certainly one revenue stream current media companies could be exploiting but are ignoring. Whether they could turn a profit from it, after subtracting their costs, is another question. But they're not even trying.

You cannot be sure it would be much less money. What we can be sure of is that A) megaupload and others are/were making tons of money that old media wish they were making instead, and B) old media's current business model is decaying, flawed, and doomed to extinction. They would switch because otherwise they'll disappear.


> Even if it were possible to make money under this model, it would be much less than the big media companies are making now. Why would they switch?

Well, they're not making money from these people anyway, so why not find a way to monetize them? You don't cease your current distribution practices; you merely create an alternate way to monetize those people whom you viewed as "stealing" in the first place.


There's an obscure home-improvement show from overseas that I really enjoy watching. I'm surprised a local-to-me cable channel doesn't pick it up as cheap content, because it's well produced and pulls in big audiences in its home market.

I'd happily pay $20 or so to be able to download/stream a season of it. But I suspect there's not enough people like me in the foreign-to-them markets who would pay the money to make it worth their while to get it on to iTunes (or similar), write some press releases, update their website to say it's available, etc.


Because trying to bring in $0.05/week from all these new people will completely destroy their existing market.

http://www.joelonsoftware.com/articles/CamelsandRubberDuckie...


Because they can't adopt this model without losing another one. The majority of the content we're talking about here is BBC TV shows, which are paid for by the UK TV license. At one point the BBC mentioned opening up their online iPlayer to international audiences, but if they did so, the US cable networks would immediately remove their (profit-generating) BBC America channel from the air.

So they'd lose money, overall.


Because they won't do anything that disrupts their own current cash cow models. Even if you explain that this is the future, that it's what their customers want, and that they will make money from it, the argument will fall on deaf ears. Currently entrenched media companies are not playing to win, they are playing to not lose, which means they inevitably will.

The most readily available example to analogize is Blockbuster/Netflix. BB had to declare bankruptcy and completely reorganize their entire company because they stubbornly refused to adapt to a changing market.

Other media companies will follow similar fates. They will have to be dragged kicking and screaming into the future, and many (jobs) will be hurt in the process.


Since I only have experience working with American broadcasters, my answer will come from that POV.

Realistically, neither of the two biggest reasons involves the protection of the dwindling home video market. Broadcasters make a lot of money off of their local affiliates (who pay them for the privilege of running their programming) and cable providers (who pay a subscriber fee for the ability to carry that network). Each of these existing revenue pipelines brings in a lot of money - orders of magnitude more than any torrent site would. Now you may say "why would a torrent site disrupt their existing streams?" and the answer is that while it wouldn't, those existing streams are very protective of their domain, would likely consider it an immediate breach of their exclusivity, and in the long term would use the existence of a "competing" product to lower their fees on renegotiation. So it's a lose/lose for any network or cable provider at the moment, at least in the US.

That being said, it's their job to have their business model compete in the modern world - it's not the job of the modern world to adapt to their business model. Either they'll figure out a method to do so, or they'll die in 6-8 years and we'll build something better.


If your neighbor can make money siphoning gas out of your tank and selling it to strangers, why can't you?


Oh look, another flawed comparison that doesn't take into account the realities of digital distribution.


"Digital distribution" aren't magic words that make the production costs go away. In fact, they make production costs the most expensive part of the whole proposition.

Business 1 produces content and sells it in one venue. Business 2 takes all of business 1's content and sells it in another venue. And people are wondering why business 1 cannot just use business 2's model instead.

I don't believe that people here are really so dense as to not understand the problems with that scenario.


> I don't believe that people here are really so dense as to not understand the problems with that scenario

Nor do I believe that you are so dense as to conflate copying with theft, so can we dispense with the comparisons that wouldn't look out of place in *AA propaganda?


As with most forms of media, the distribution costs are a small fraction of the production costs.

Film crews, especially good ones, cost money. Post-production crews, especially good ones, cost money. Actors, even bad ones, cost money. The equipment, sets, props, makeup, computers, and all the other physical items that go into making/producing a show cost money.


i'm a bit confused: the bsd license allows you to re-sell someone else's source, binaries, images, etc. for a profit. you just have to include the copyright notice.

is that what you're sending the dmca notice about? because you couldn't find the necessary copyright notice? (if you didn't download it, how do you know it's not in the about box?)


* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

* Neither the name of ChatSecure nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

I am not a lawyer, but my interpretation of the license was that you had to attribute me if you were selling my source code. Either way, they are using the ChatSecure name to promote themselves and representing the work as their own.

Representing someone else's work as your own is plagiarism, regardless of the software license.


Note: I think these guys are obnoxious freeloaders, and I'm on your side, however:

> I am not a lawyer, but my interpretation of the license was that you had to attribute me if you were selling my source code.

Perhaps they do - and if they do, they are within their rights (as granted to them by yourself through the BSD license).

> Either way, they are using the ChatSecure name to promote themselves and representing the work as their own.

That would be a trademark violation, and potentially false representation.

As for trademark, if you want to take legal action, you probably need to register it with the USPTO (assuming you are in the US). I am not a lawyer either, but when I inquired about it a few years ago, it was apparently required to register trademarks and copyrights before taking legal action.

False pretenses have a much higher standard than you would assume: unless they write somewhere "we are the sole author of this work", they are probably legally OK.

If this affair upsets you (as it seems it does), you would probably be better off with a GPL license - it's a signal to (ab)users that you care, as opposed to BSD which is a signal that you don't.

I think you are the good guy here, but I'm not sure you have much legal standing after using the BSD -- unless they removed your copyright notices.


IANAL, but reading this: "Redistributions of source code must retain..."

That doesn't seem to me to exclude selling the source code without displaying the information up front, as long as the copyright notice is still included with the source code (which we don't know, because we don't know anyone who bought it). And I can't see the third clause being triggered, because neither the name you gave it nor your name are being used.

Edit: fceccon pointed out that the name and logo are being used in the header, so yes, that seems to make it much more clear cut, albeit turning on the legal definition of "derived".


    And I can't see the third clause being triggered, because neither the name you gave it nor your name are being used.
They're using the ChatSecure logo and name on the page[1] header, so I think the third clause is triggered.

[1] http://www.chupamobile.com/products/details/600/Secured+Chat...


Interesting - I hadn't considered that the BSD licence doesn't seem to require the copyright notice to be displayed before downloading - so it could display the copyright notice in the purchased program, and be compliant?

This case doesn't seem on the surface of it, to be completely clear cut. Is there a case to argue that it is morally wrong for them to charge for something that is free, (apparently) without adding value? Perhaps. Legally, though, the case might be different.


the bsd license says: "Redistributions in binary form must reproduce the above copyright notice, .... in the documentation and/or other materials provided with the distribution."

much of open source is redistributed at a cost, "(apparently) without adding value". https://www.gnu.org/philosophy/selling.html


I'm only interested in the representation of this software as his own, and including it in his "portfolio". If it turns out the BSD doesn't offer me strong enough legal protection against this kind of behavior, I will consider a move to the GPL with an App Store exception.


Incidentally, you mentioned you sent DMCA takedowns, but Chupamobile seems to offer submissions to its own copyright notice address too in the T&C:

> "If you think your products has been copied or there was some breach of copyright please inform us by sending an email at report@chupamobile com, including:"...

So it might be worth hitting that also. I'd also like to make clear that I completely sympathise with your situation, if my other responses had implied otherwise.


The GPL doesn't prevent people from selling your code as-is either, though it does prevent them from doing so using a different license.


If you go to the plagiarized product page (http://www.chupamobile.com/products/details/600/Secured+Chat...) and click more info under Regular License, the license terms are more restrictive than the BSD, which is a violation. However, the author is in Indonesia, so there's probably not much recourse.

Edit: the plagiarist is in Indonesia


More restrictive terms than BSD aren't a violation.

Plenty of proprietary systems have been based on BSD code. That's pretty much the point of BSD.

If you want to prevent people placing more restrictive conditions on your code, that's what the GPL is for.


> the license terms are more restrictive than the BSD, which is a violation.

I think you are confused with the GPL.

BSD only requires you to acknowledge original authorship, and disclaims suitability and liability. It requires you to preserve the first. (You're welcome to take on liability yourself, though...)


You can package up BSD code and resell it under a more restrictive license.

