Does the FSF own their office space and their own generator, or are they using Electricity as a Service and a Building as a Service? For a lot of people, maintaining a server and a service is much more hassle, and much more distracting, than the payoff would be, so they contract it out. I have quite a few non-technically-oriented friends who would not be able to figure out the decentralized solutions that have been bandied about, nor do they care. If you want something decentralized, make something better, rather than just bitching.
A client/server architecture has technical and efficiency advantages as well, though. We host an AGPL v3 licensed SaaS application for our clients; they can switch to self-hosting at any time, so they're hardly captive, yet none has ever made the switch.
Yes, but access to the sources creates transparency, and the lack of lock-in creates trust... very much like romantic relationships, where freedom of choice actually strengthens the bond. Users who feel locked in will be planning an escape, however staggering the obstacles might be.
Those interested in serverless "backends" should check out JAWS (https://github.com/jaws-framework/JAWS), as they've done a great job building out a framework around this very idea.
Apparently they've got a big update coming Monday, so keep an eye out.
EDIT: Not sure where to put the quotes... "serverless" backends, serverless "backends", both, or none? :P
If you too are a little disappointed that this is not "no servers" but "someone else's servers," maybe you'll like this project, which really is serverless: http://thaliproject.org/
AWS Lambda is awesome -- for about a month, I was head over heels in love with it. However, the language support turns out to be a huge issue: if JavaScript, Python, or Java 8 isn't the right tool for the job, you're stuck. I hear Go, and then Ruby, support is slated for 2016. I am optimistic that as time goes on, language and binary support will make this the future of computing.
What exactly was your use case that couldn't be solved with JavaScript, Python, or Java? I'm not being pedantic; I'm just curious, because I've been using it for a variety of workloads.
In fact, you can even compile and embed binary dependencies and native packages, zip it all up, and hand that to Lambda -- or even write your own code in C or whatever language is compatible with FFI. So you can do basically anything.
The only limitation I've run across has been the size restriction, which you can easily hit if you have a lot of shared libraries... but it can be worked around by being careful and including only what your application code needs.
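As a sketch of the FFI route: a handler that delegates its work to native code via ctypes. The library name here is a stand-in -- in a real deployment zip you'd load a .so you compiled and bundled yourself; the system math library is used below only so the example is self-contained.

```python
import ctypes
import ctypes.util

# Hypothetical setup: in a real Lambda package this would be a library
# you compiled and bundled, e.g. ctypes.CDLL("./lib/libmywork.so").
# The system libm stands in here so the sketch runs anywhere.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

def handler(event, context=None):
    # Delegate the real work to native code through the FFI.
    return libm.sqrt(event["value"])
```

The same pattern works for any C-compatible shared object, as long as it (and its dependencies) fit inside the deployment-package size limit.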
I wonder what "Go support" means. Would that not just be executing a binary? If data is passed to it via a file handle, wouldn't that mean any executable would work? Or would you give it source code?
I know you don't have the answers; it's just stuff I think about when Go support for Lambda is discussed.
I'm just guessing based on my experience with Amazon's Kinesis Client Library (http://docs.aws.amazon.com/kinesis/latest/dev/developing-con...), but it probably means that you write something as a package with certain public methods, and they have a wrapper around it that knows how to invoke it. That's how KCL integrates with the languages that it supports.
You can run arbitrary executables, so it's quite possible to run Go and Ruby now - you're just wasting a few tens of milliseconds starting up a python process that's not going to do anything other than kick off your Ruby (or whatever) script.
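A minimal sketch of that shim pattern. The binary path is a hypothetical one inside the deployment zip; the Python wrapper does nothing but pipe the event to the bundled executable and return its output.

```python
import json
import subprocess

def handler(event, context, executable="./bin/worker"):
    """Thin Python shim around a bundled binary (Go, Ruby, whatever).

    "./bin/worker" is a hypothetical path inside the deployment zip.
    The event is serialized to JSON on the child's stdin, and whatever
    the child prints on stdout becomes the function's result.
    """
    proc = subprocess.run(
        [executable],
        input=json.dumps(event).encode(),
        stdout=subprocess.PIPE,
        check=True,
    )
    return proc.stdout.decode()
```

The few tens of milliseconds mentioned above are exactly the cost of the `subprocess.run` fork/exec plus interpreter startup; the child process does all the real work.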
You can also use Scala and Clojure (and really, anything that runs on the JVM -- but those are the only ones I've used). We're doing this for some things in production and it has worked well.
Before you go planning to build everything on Lambda, understand there's still no VPC support, so no RDS databases.
"AWS Lambda functions cannot currently access resources behind a VPC."
It's not serverless if you use Amazon; it's just that you're using Amazon's servers. There is hardware, and there is the signal, and then there is a chain of software between the two. That is all there is in computing, and all there ever will be.
I think what he's getting at is that he gets backend functionality without ever deploying and managing instances himself (similar to what App Engine offers).
You just deploy your code, and Amazon automagically handles scaling, etc.
This is newer and shinier, and not all use cases overlap, but at the end of the day, yes, it's someone else managing the execution of your processes.
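Concretely, the whole deployment unit is just a function like this (Python shown; the two-argument signature is Lambda's standard handler convention, the body is made up for illustration):

```python
def handler(event, context):
    """Entry point that Lambda invokes; `event` carries the payload.

    Note what's absent: no server process, no port to bind, no scaling
    configuration. Amazon decides how many copies of this run and when.
    """
    name = event.get("name", "world")
    return {"message": "Hello, %s!" % name}
```

Everything that would normally be instance management -- provisioning, load balancing, scaling up and down -- happens outside your code.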
Having worked on GAE at very large scale, my gut says that while this has many different use cases, it will make sense for small to medium-sized projects; there will be a tipping point around cost, and this will look less like a silver bullet after some horror stories.
At small scale this may even save money, if you truly don't need your code running all the time and can live with the latency of starting it. For projects that think in terms of N thousand transactions per second, though, the math breaks down badly. It would cost approximately $2,500 to run a 100ms job a billion times in a month with only 1.5GB of memory allocated to the process. You could instead pay $4,350 per year with reserved pricing (is there even reserved pricing with Lambda? Couldn't find it.) for a c3.4xlarge, which has 30GB of memory and 16 cores and could easily process many times more work than a billion 100ms jobs per month. So Lambda works out to well over 6x the cost of rolling it yourself on EC2, by my overly conservative napkin math.
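The napkin math above can be reproduced (the rates below are Lambda's published 2015 prices -- $0.00001667 per GB-second plus $0.20 per million requests -- and are an assumption here; note the per-request charge adds another ~$200 on top of the ~$2,500 compute figure):

```python
GB_SECOND_PRICE = 0.00001667   # USD per GB-second (2015 Lambda rate)
REQUEST_PRICE = 0.20 / 1e6     # USD per invocation (2015 Lambda rate)

def lambda_monthly_cost(invocations, duration_s, memory_gb):
    """Monthly Lambda bill: compute (GB-seconds) plus request charges."""
    compute = invocations * duration_s * memory_gb * GB_SECOND_PRICE
    requests = invocations * REQUEST_PRICE
    return compute + requests

# A billion 100ms invocations at 1.5GB: ~$2,500 compute + $200 requests.
lambda_cost = lambda_monthly_cost(1e9, 0.1, 1.5)

# Versus a reserved c3.4xlarge at $4,350/year:
ec2_monthly = 4350 / 12.0   # ~$362/month
```

At these numbers the Lambda bill is roughly 7x the reserved-instance price, before even crediting the c3.4xlarge for its spare capacity.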
The other pain point these kinds of systems produce is that instead of scaling a server here or a few servers there, you're thinking in terms of processes, and time and again this becomes a very rapid way not only to provision more horsepower instantly, but also to run up the bill instantly. Code that might have run on 1,000 processes on traditional infrastructure, because you had to tune it to fit that infrastructure (and I don't mean finely tuned -- things like setting your database pool correctly and having basic concurrency), can accidentally be scaled up to 10,000 processes while everything appears fine. Unfortunately I have seen this quite a bit on GAE, and it has led to some very costly mistakes.
Processes are harder (in my humble opinion) to get a good feel for in terms of what costs what, especially if you're scaling up rapidly. Instead of adding a few servers and understanding how that affects the bill, going from 1,000 to 1,500 processes happens much more easily, and you pay for that convenience.
My $.02 is that this is ideal if you're in a hurry and money is no object, if you don't plan on growing past a certain size, or for workloads that truly don't run that often. I think this whole craze of "let's replace everything with Lambda" is going to lead to rapid development (awesome) at high prices if your project takes off, which will leave developers rushing to get off of it if/when their project needs to scale under financial constraints (not awesome).
With AWS, once you start using multiple services (e.g. lambda, S3, dynamo db, etc), it's difficult to estimate/model the costs in advance.
First you have to estimate how many times your app is pulling down a file or hitting a given db table. But this is dependent on unpredictable user behavior. You don't really know what users will do with your app until they actually do it.
Then, with e.g. DynamoDB, you have to think about how many indexes you have on a table, and how often your app will query those versus how many times it might do a full table scan. You have to allocate these specially defined AWS resource units, and I don't find it intuitive.
I wonder how many developers really know in advance what their app is going to cost per user.
It might make it faster and easier to get an idea to a functional state. While you'd have to pay for it later when you start getting a lot of users, that's not necessarily a bad problem to have. If you account for it and plan for it, it might make a lot of sense for a small 1-2 person team.
The title may be a little funky, but guys, this is a great post for leading someone through the nightmare that is the Lambda/API Gateway docs and setup process. I tried doing something similar about a week ago while not operating at full brain capacity, and could not figure it out. This is a great narrative of how someone figured it out, and it was great to follow along and actually learn.
Big misunderstanding here:
it's not "server-less" (as in without-a-server), it's Serverless (as in "these products we're trying to sell through HN"...)
You need your users to be able to share information between them? You will need a server at some point. Period.
Call it miraculous AWS or whatever, it's still a server.
Someone was recently credited for an idea that has just been around forever? "Fred George, a former colleague & mentor at Thoughtworks is widely credited as one of the originators of the concept of microservices."
They do a pretty good job, yes. I've been building something on Parse, and the biggest issue I've run into is that they don't allow for npm modules. Smaller modules can be ported to parse cloud code, but if you have an interesting module with native code or just many dependencies, this becomes unworkable.
Firebase doesn't have a Cloud Code like offering at the moment, which can be very limiting depending on what you're building.
Who ever thought hacking meant "building centralized services to sell to plebes"?