Hacker News | new | past | comments | ask | show | jobs | submit | cevaris's comments

Don't you need to know some distributed systems to consider yourself a full stack engineer?


I think it depends on the scale of the apps you build. One can deploy a simple backend API or web app to a PaaS like Heroku or Zeit, or even a PaaS-like environment managed by another team, without knowing much about distributed systems. I would consider someone who can do that, plus build a modern front end, full stack.

IMO the average CRUD web app doesn't have the scale to run into distributed systems problems. Also, there are managed-service versions of everything these days, even when one scales up.


Yeah. And the scales have to get pretty crazy before "rails on postgres with some in-memory caching" stops cutting it. Your average developer will never see it.


I believe a basic understanding is enough to be a Full Stack engineer. For example, it would be enough to understand that my horizontally scaling backend would imply stateless servers. It may not be necessary to understand how the clocks are synced across my clusters.
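To make the "horizontal scaling implies stateless servers" point concrete, here is a minimal sketch (all names are hypothetical, and a plain dict stands in for an external store like Redis or Memcached): because no instance keeps per-user state in its own memory, any server behind the load balancer can handle any request.

```python
# A dict standing in for a shared external store (e.g. Redis).
shared_store = {}

class Server:
    """A stateless app server: it keeps no per-user state in memory,
    only a handle to the shared store."""
    def __init__(self, name, store):
        self.name = name
        self.store = store  # every instance points at the same store

    def login(self, user):
        self.store[user] = {"user": user, "cart": []}

    def add_to_cart(self, user, item):
        self.store[user]["cart"].append(item)
        return self.store[user]["cart"]

# Two horizontally scaled instances; requests can land on either one.
app1 = Server("app-1", shared_store)
app2 = Server("app-2", shared_store)

app1.login("alice")                      # login request hits app-1
cart = app2.add_to_cart("alice", "book") # next request hits app-2
print(cart)  # ['book'] - the session survived the instance switch
```

If the session lived in `app1`'s memory instead, the second request would fail unless the load balancer pinned "alice" to `app-1` (sticky sessions), which is exactly what statelessness avoids.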


No, only a JS framework (or two if a senior)


Saw the title "Build Jarvis" and laughed to myself thinking yeah right.

Then saw it was a Facebook link and thought, oh...



Which needs to be put in the context of its $6.78B in revenue.


Debt in and of itself isn't a bad thing. Apple has $79 billion in debt. Facebook has $3B in debt. Microsoft has $60B in debt.


They have debt because they don't want to move money from foreign accounts to the US, so they borrow to pay dividends. But they have no net debt.


Seems like a lot of work to set up. Also, I spent several minutes in the docs and did not see one line of code; examples?


An 8-line Makefile translates to an 80-line Common Workflow Language (.cwl) file.

https://github.com/common-workflow-language/workflows/tree/m...


Honestly, the format looks a bit over-engineered IMHO. The task is quite simple: make a build system that executes jobs on clusters. So why not take the best practices from existing build-configuration formats and just make them run over the network? For example, the ninja build system's [1] format is quite good in my opinion, so just make the runtime execute commands over the network. Travis CI [2] is another example of a well-designed configuration format, and it really enables developers to write small and powerful configurations. Sure, it's been done before (though mostly for C/C++), e.g. IncrediBuild [3], FASTBuild [4], or distcc [5]. Precise control of pipes could be improved in current build systems, though I'm not sure how important that is for this application.

- [1] https://ninja-build.org/
- [2] https://travis-ci.org/
- [3] https://www.incredibuild.com/
- [4] http://fastbuild.org/
- [5] https://github.com/distcc/distcc


Haven't checked ninja, but I've blogged a bit about limitations in common build systems, such as make and its various derivatives:

"The problem with make for scientific workflows":

http://bionics.it/posts/the-problem-with-make-for-scientific...

"Workflow tool makers: Allow defining data flow, not just task dependencies"

http://bionics.it/posts/workflows-dataflow-not-task-deps

The latter is a limitation of even the most heavily engineered ones, as the post goes on to explain.


From the first blog post:

> Files are represented by strings

I think that's especially true for make - it looks like it was designed to efficiently express transformations of the same type (like .cpp -> .o/.obj), so in other use cases it can become a bit clumsy. Ninja helps a bit here - you can define a rule and just use the rule name when defining the inputs and outputs of a build statement - though it still operates on files.

>[Problems with] combinatorial dependencies

Yes, this could partially be fixed with wildcards in make. Ninja doesn't have wildcard support, so I created buildfox [1] to fix that :)

>Non-representable dependency structures

I think it's a limitation of this type of build system: their configuration languages are oriented toward expressing "how" to achieve things, not "what" to achieve.

- [1] https://github.com/beardsvibe/buildfox/
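Since ninja itself has no wildcards, something has to expand patterns into explicit build statements ahead of time. A rough Python sketch of that preprocessing step (an illustration only, not buildfox's actual implementation; the function and file names are made up):

```python
import fnmatch

def expand_rule(rule_name, pattern, out_ext, files):
    """Expand a wildcard rule like '*.cpp -> .o' into explicit ninja
    build statements, one per matching source file."""
    statements = []
    for src in sorted(fnmatch.filter(files, pattern)):
        stem = src.rsplit(".", 1)[0]
        statements.append(f"build {stem}{out_ext}: {rule_name} {src}")
    return statements

# Hypothetical source tree; a real tool would glob the filesystem.
sources = ["main.cpp", "util.cpp", "readme.md"]
for line in expand_rule("cxx", "*.cpp", ".o", sources):
    print(line)
# build main.o: cxx main.cpp
# build util.o: cxx util.cpp
```

The generated lines are ordinary ninja build statements, so ninja itself never needs to know a wildcard was involved.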


Ouch. I suppose that's the price paid for portability, though.

Anything beyond that trivial example Makefile will rather reflect the system and environment on which it was written.


Let's be practical. I am sure the actual app:

- Works without a network connection
- Handles metrics (offline synchronization)
- Handles user logins
- Includes the price of the iPads themselves?
- Involved government and IBM personnel

$300K sounds about right.


I feel there is an underlying meaning here...


At this point, isn't it like asking Microsoft to port Word over to Linux?


I feel there is really no way to prevent this. This was doubtfully uploaded knowingly (more likely out of ignorance). Data dumps occur all the time, and as files they are too easily shared. For sure, the school should work on raising awareness of how to handle sensitive data. But in the end, nothing would really prevent this from happening again.


We do SSL interception and content inspection of uploads to non-excepted sites to prevent this sort of thing where I work.


I'm completely ignorant here, but how do you do that without this? https://hackernews.hn/item?id=11042353

Hoping to learn something, honest question.


I'm imagining IT has provisioned certificates on the computers under their control that allow them to do a MITM attack on either blacklisted sites or non-whitelisted sites.


It would be nice to auto-generate these TIL READMEs based on one's personal Stack Overflow upvotes.


You'd have to capture the action with a browser extension - it doesn't look like "who upvoted what" is exposed at all: http://data.stackexchange.com/stackoverflow/query/new


Especially when the correct / better answer is not the accepted one.


Hey Bob, what's up with the `git push --force origin master`?

