Reminds me of calls I receive from top flight hedge funds.
"You'll be working with astronomically smart people who
use crystalline cohomology to obtain the best
polynomial time approximation algorithms for
intractable problems. The engineer who did this taught
himself calculus within ten to the negative sixty-seven
seconds of conception."
"Is that the work you have in mind for me?"
"No. You'll be cleaning the group's digital bed pans."
"That's OK. Perhaps you should recruit a Nobel Laureate
for that role."
Here's the thing that I think some of the engineers in this discussion might be missing: A company shouldn't consist of only engineers.
Don't get me wrong: I'm a software developer, and I'm a bit of a software snob. I have trouble (though I've mostly overcome it) seeing Managers, CEOs, marketing people, sales people, "designers", etc. as doing "real work" because they're not engineers.
Maybe designers feel the same way about me. They should.
But the reality is, good companies need good designers, and with experience I've discovered that designers can really blow your mind sometimes.
But the hiring process for a designer is not the same as the hiring process for an engineer.
One area where I've personally been very dissatisfied with Google is the lack of customer support. I can't imagine asking programming questions of a customer support person who was expected to help people with their AdWords account.
The person being interviewed here, who wrote this blog post, is one of those "customer support" type people, not someone pretending to be an engineer.
If you only hire engineers and your hiring process is engineer-focused, you're going to miss out on a lot of really good people, and you're going to have non-engineering roles staffed by engineers who just want Google on their resume so they can go get an engineering job later.
And these are not the people you want helping others with their AdWords account.
(Of course, I'm not familiar with the internal structure of Google, so I'm giving examples meant to be representative of the problem; they might not be literally accurate.)
The people who conduct Google interviews do not spend any time looking at a candidate's past work, blog, or skills, because that is not what Google engineers evaluate when it comes to hiring you. They are supposed to ask technical questions and evaluate how a candidate answers them. That is it... No analysis of their open source code, no mention of an insightful blog post they had written, just the information gleaned in the 45-minute interview (at least in 2009; I don't know if this has changed since then). Engineers actually took great pride in providing a good and welcoming interview with thoughtful questions. There was even an internal forum where people would discuss the merits of certain interview questions and ban bad ones.
Recruiters, however, have a completely different job. They go around the internet looking for candidates who look like they are pretty smart (insightful blog posts, open source contributions, etc), however they don't always have the deep technical ability to differentiate people who would and would not be a good fit for Google. That is why you get situations where a recruiter excitedly encourages someone to apply for a job, and then the interview is upsetting for the candidate because it isn't what they were expecting.
When interviewing people for my current job, I do a lot more looking at a person's blog or open source contributions, if they have them, when doing diligence on potential hires, but I also expect them to perform, write code, and solve technical problems during the interview. I feel it gives me a better overall sense of the candidate and lets me make better hiring decisions. At Google scale, however, this just doesn't make sense: when you have tens of thousands of ultra-qualified candidates beating down the door to work at your company, missing some people who would have been good employees just isn't that big a deal if it means your engineers can save a lot of time in the hiring process.
"They go around the internet looking for candidates who look like they are pretty smart (insightful blog posts, open source contributions, etc), however they don't always have the deep technical ability to differentiate people who would and would not be a good fit for Google."
I once created a LinkedIn account to see someone's profile. It was meant to be a throwaway but I didn't delete it. The account had no activity, no "insightful blog posts", no open source contributions, nothing but a single job listing under a previous position.
Since it wasn't a real person, nobody contacted this account... except Google recruiters. And they did it persistently. It was right out of Glengarry Glen Ross. They were very smug, which I thought was amusing because, well, they were chasing a non-existent entity. (Even if I didn't have my own startup, I have no interest in ever working for Google.)
When I didn't answer the first recruiter after many emails, I was handed off to another guy who felt this event was significant enough to send me yet another email about it. In it, one of the things he said was: "I identify top-tier engineers for Google."
Of course, his profile shows that his background is in recruiting, not engineering, and his degree is in accounting, not engineering. (No disrespect to accountants.)
I'm completely burned out on recruiter hyperbole, and I have absolutely had the last conversation I'll ever have with a non-technical recruiter who's trying to "evaluate" me. Here's how the last one went (before I started refusing to deal with recruiters, which, by the way, produced better job leads and better jobs):
Her: Do you have Oracle experience? (She hadn't told me anything about the job, other than that it was a "coder" job.)
Me: Well, I was writing Java that generated SQL dynamically to talk to an Oracle database. The SQL had to be dynamic because...
Her: (interrupting) Which Oracle?
Me: Uh, the one from Oracle corporation.
Her: What was the version number?
Me: (just because I'd happened to see it in passing) 8i I believe.
Her: Oh, sorry, the client wants 9i experience. (9i, I later discovered, had been released within the previous 3 months.) I'll keep you in mind and let you know if I find anything that matches your level of qualifications. (From her tone of voice, it was obvious I wasn't "up to speed" in her eyes.)
Later I was awarded a patent as sole inventor for the technology I'd tried to describe in her question.
Anyway, tired of constant emails from Google recruiters, and irritated at this non-engineer's belief that he "identifies" "top-flight" engineers, I asked him what he looks for in a phone interview.
Here's a direct quote of his answer. Because this is obvious nonsense (he can't evaluate any of these things from the perspective of an accountant), he's clearly just playing buzzword bingo. So, in the hopes of helping other engineers get past this fake "evaluation" and on to talking with real engineers, this is all you have to say:
"The l things I look for when I'm having a phone conversation with an engineer (at a high level):
- What big problems are they solving (large scalability and storage issues, advanced algorithms, etc...)
- Involvement in complete product development (are they shipping products or are mainly doing integration)
- Technical coding, % of time they spend coding, languages they're coding in.
- use of data structures and algorithm writing.
- Educational background and motivation for making a change."
Note that "I got tired of being harassed by google recruiters" is probably not a good answer for the last one.
From the original article:
"I guess that’s why when I interviewed for my transfer, I was told I was “not technical enough” to do the job I’d been doing for 3 years already, supporting the Freebase community."
Google literally told this guy he was not capable of doing the job he was already doing, and, given that the role began at a startup, a job he had likely created himself.
"There are plenty of fresh-faced kids from Stanford and MIT and whatever other elite universities are on Google’s preferred list, who can solve stupid puzzles and tell you the O notation of anything you want."
Well, when your founders were graduate students at Stanford, and they implemented a hiring policy right out of school, it shouldn't be surprising that they have an arbitrary filter like that, and a focus on identifying personalities that are nearly identical to the founders.
Finding people like yourself is orthogonal to finding the people who will be best for the position. In the worst case it can result in a monoculture where literally every employee is blind to a solution that would be obvious to at least somebody if you had more diversity.
I've not worked with Google, so obviously I'm just giving my impressions based on my experiences being recruited by them (and this has happened more than once; the LinkedIn example was just the most recent).
"Well, when your founders were graduate students at Stanford, and they implemented a hiring policy right out of school, it shouldn't be surprising that they have an arbitrary filter like that, and a focus on identifying personalities that are nearly identical to the founders."
IIRC, Google's hiring process is largely imported from Microsoft (which was the reigning tech giant at the time of Google's founding, so naturally they looked to them to see what they were doing right) and GE (where Google's SVP of People Ops had previously been VP of HR).
A Google recruiter cold-called me about a year ago after finding my profile on LinkedIn, and asked me if I was interested in a QA position he was recruiting for. I told him that I was quite happy running my own startup, but that if I ever wanted to work anywhere else, it would not be in a QA position, given my background: a Masters in C.S. from Georgia Tech and 5 years of experience as a software engineer, including 2 as a research engineer at a startup that was acquired by McAfee, 2 more years at McAfee, and then 1 year as CTO of the startup I cofounded. He proceeded to inform me, "to be frank, you have no chance of getting a better position at Google." At that point I disconnected the call.
It will be interesting to see how a company that hires with a strong academic bias competes in the long run.
I'm not sure but I suspect Apple is the opposite, and look how they are doing.
Update: is this not a relevant comment? What would motivate somebody to down vote it? I'm curious because I care about the quality of HN, and I don't want to see it become something like reddit where up/down votes are actually like/dislike votes
If you say something that is positive toward Apple and negative toward Google, you will get downvoted here. If you ask why you're getting downvoted, you'll be introduced to all the logical fallacies and errors in your comment; however, this is a red herring. Note the votes that pro-Google, anti-Apple comments get, even when they are mere assertions.
Of course, my crime is much worse: I have pointed this out. But there it is. (Do it too many times and you'll get banned completely, so beware, and log out periodically to make sure your contributions are actually appearing on the forum.)
You assume something pretty illogical, with no evidence (Google may have a stronger academic bias than most, but Silicon Valley in general has an education bias, Apple included). Also, the argument that Apple is doing well and doesn't have an academic bias, so the lack of bias must be right, conflates correlation with causation. For example: I hear Google has a strong academic bias, and look at how they are doing.
I would be shocked to find that Apple had the same kind of comp-sci bias that Google has. They must have a huge number of developers who have been with the company since the '80s and '90s, a time when much of the industry was being created by people with very diverse backgrounds.
Be prepared to be shocked: which company wanted all their employees to know how to program (even the secretary)?
(Hint, it wasn't Google.)
But I think this is best summed up by an (admittedly paraphrased) quote from our good friend, Steve Jobs - "Engineers are the only kinds of employees I really hire, everyone else is expendable, because everyone else isn't very smart."
I recently interviewed at Google and got an offer. I live in Bangladesh and I am pretty sure no one involved in the interview process ever heard the name of my university or cared what my CGPA was. So, the part about coming from an elite school does not make a lot of sense to me.
The interviews were intense but all the interviewers were very friendly. I got stuck at several problems but was given a lot of hints. And even when I completely missed an answer, the interviewer was nice enough to point it out without sounding rude or arrogant - or making me feel like an idiot. I must say this was unexpected since I did make quite a few mistakes.
The way I see it, knowing the right tool to use is important and useful in our day-to-day work. But the guys at Google often have to create the tools themselves (or better versions of them) for the scale at which they work. Also, I got the feeling they want their engineers to be aware of what actually goes on inside the framework/tool/database, etc., instead of just using it like a black box.
True story: in my interview I was asked how I would extract entities from an HTML page. I suggested using OpenCalais.
So you're interviewing for a job as a programmer for a search engine, and your answer to a simple parsing / text processing question is, "I'm going to use some proprietary black-box third-party API to do the parse", and then you refuse (on moral grounds?) to elaborate on how that service could possibly be implemented? What?
I could understand something like, "Oh, I did that last week, and we used the OpenViagra API." OK, cool, but the follow up question is going to be, "but if you had to do it yourself, how would you do it?"
This is a good question for many reasons: for one, it's a basic "how do you handle parsing" question. Parsing comes up all the time in programming, and it's good to know if the candidate knows about regular expressions, parser generators, parser combinator libraries, etc. It's not about "do you know how to get HTML entities", it's about "do you know about parsing".
There's also something weird about relying on third-party APIs for simple programming tasks. Sure, if you need to know your current location, you have to depend on a bunch of orbiting satellites with precisely-set atomic clocks. But if all you need to do is /&([^&;]+);/g, then write that one fragment of code and move on with your life. The simplicity is well worth not having to set up a TCP connection, wait for it to be established, generate a request, send it over the network, wait for a response, time out and throw an error if you don't get one, parse the response, and finally return the answer to the calling code. Think of all the bugs you're going to have to fix with this approach!
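For what it's worth, here's a minimal sketch in Python of that write-the-fragment-yourself approach. The function and its tiny entity table are mine, purely illustrative; it covers only a handful of named entities, nowhere near the full HTML5 set:

```python
import re

# Minimal entity decoder in the spirit of /&([^&;]+);/g. NAMED is a
# tiny illustrative subset, not the full HTML5 entity table.
NAMED = {"amp": "&", "lt": "<", "gt": ">", "quot": '"', "apos": "'"}

def decode_entities(text):
    def repl(m):
        body = m.group(1)
        if body[:2] in ("#x", "#X"):
            return chr(int(body[2:], 16))   # hex numeric reference
        if body.startswith("#"):
            return chr(int(body[1:]))       # decimal numeric reference
        return NAMED.get(body, m.group(0))  # named, or leave untouched
    return re.sub(r"&([^&;]+);", repl, text)

print(decode_entities("Tom &amp; Jerry &lt;3 &#65;"))  # Tom & Jerry <3 A
```

Fifteen lines, no network round trip, and every bug is yours to fix locally.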
I'm going to start asking this question, just to see where it leads. If the answer is more than one sentence, it will give me valuable insight into how the candidate solves programming problems. If they answer, "well, I refuse to do that", then that's valuable insight, too. All in all, a great interview question!
So anyway, technical interview questions are not about getting things done, they're about thinking through problems. Parsing HTML is a solved problem, and of course you're going to delegate to a library. But tomorrow, you're going to have a very similar problem to solve that is more nuanced and difficult to explain in 30 seconds. And that's what the question is really getting at; if you can parse HTML, you can parse the weird config file format you invent tomorrow.
Google's interview process makes me want to work at Google!
Side note: /&([^&;]+);/g won't actually parse character entities in HTML correctly. There are a number of corner cases in the HTML5 spec for historical reasons, and this is one of them. In some cases it's legal (well, it's a parse error, but the parser must return the specified entity) to leave off the trailing semicolon. Except in attributes, where it depends on what character follows the last character of the entity. The particular cases are enumerated in a table that is over 2000 lines long.
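Python's standard library happens to demonstrate this: `html.unescape` implements the full spec table, including the legacy no-semicolon forms that a naive regex would miss:

```python
import html

# The spec-compliant decoder accepts legacy named entities without a
# trailing semicolon, using longest-match against the HTML5 table.
print(html.unescape("&amp;"))   # -> &
print(html.unescape("&amp"))    # -> & (legacy form, no semicolon)
print(html.unescape("&ampx"))   # -> &x (longest legacy match wins)
```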
Here's the meat of my point. The correct answer when writing software is "use a library". The correct answer when answering an interview question is code with minimal dependencies.
>Here's the meat of my point. The correct answer when writing software is "use a library". The correct answer when answering an interview question is code with minimal dependencies
Which can be paraphrased as:
"Here's the meat of my point. The correct answer when building a wooden shack is "use a hammer". The correct answer when answering an interview question about building wooden shacks is to make the hammer first".
It's an attempt to understand how you would solve problems of the class that the question is of, not an attempt to get a solution to a specific problem.
Not using a library when you should have is a problem solved by a 20 second code review.
Code reviews are normally done after the developer has spent some time on the code. It's fine that it takes 20 seconds to determine the code is totally worthless, but by then the developer has already wasted DAYS of effort.
I think you're misunderstanding the question. It sounds to me like they're asking about named entity recognition[1], which is a bit more involved than just having to write a parser (unless you're crazy enough to try parsing it by hand). And it's certainly more involved than I would expect someone who works in Developer Relations to have to answer.
The reality of the situation is that the author should probably have just taken the question as a sign that the interviewer was a techie who didn't know how to ask about soft skills.
The position she was applying for was not a programmer job; it was a community manager position, and apparently doesn't involve any programming. So your arguments, while valid, are beside the point.
I lucked out; when a Google recruiter contacted me, I had just gotten a job (had been doing contract work for a year prior.)
I'm pretty happy not being Google material - I have no degree, and I'll be the first to admit that there are many Comp. Sci. graduates who can code circles around me. I think that's awesome - I've reaped the benefits of the work of many others before, and from the looks of it will continue to reap them (I'm looking at you, PyPy team. Keep up the wonderful work! Let me be lazy, and not have to write an extension in C or C++. :D)
The most advanced work I've done thus far would be writing out a somewhat decent decision-tree system in C++ (I know, mind-bogglingly advanced! (: )
There's plenty of room in the mediocre to almost-above-average scale.
Someone's down-voted you without explanation, which is rather poor form. "Entities" in this case means extracting the text of the page (with Nokogiri, for example, or a variant of Readability to get just the article) and then extracting the things that are named in that text.
I used a modded version of readability and a simple entity extractor in python on the article and got: 'Sudoku', 'Search', 'Google', 'SRE', 'Stanford', 'Freebase', 'OpenCalais', 'SWE', 'Metaweb', 'MIT', 'Wrong', 'HTML', 'Developer Relations', 'Interest', 'DON', 'True', 'CPU'.
A great introduction to all this is the O'Reilly book Natural Language Processing with Python (free online[1]).
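As a sketch of what such a "simple entity extractor" might look like (this is my own naive stand-in, not the commenter's actual code; real NER, as the NLTK book shows, uses POS tagging and trained chunkers):

```python
import re
from collections import Counter

def naive_entities(text, min_count=1):
    # Very naive "entity" extraction: runs of Capitalized or ALL-CAPS
    # tokens. Real NER (e.g. NLTK's ne_chunk) uses POS tagging and a
    # trained chunker; this is just the shape of the idea.
    token = r"(?:[A-Z][a-zA-Z]+|[A-Z]{2,})"
    pattern = rf"\b{token}(?:\s+{token})*\b"
    counts = Counter(m.group(0) for m in re.finditer(pattern, text))
    return [name for name, n in counts.items() if n >= min_count]

print(naive_entities("Google hired MIT and Stanford grads to work on Freebase."))
```

A heuristic like this will happily pick up sentence-initial words and acronyms that aren't names, which is roughly why the extracted list above contains oddities like 'Wrong' and 'DON'.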
> 2b) - I could write a little parser (potentially much faster)
Since XML entity references can be described by a regular language, there is an equivalent deterministic finite-state acceptor. So any parser you write will probably be slower.
Anyway, I think that the interviewer referred to named entity recognition.
My friend works at Google. The interview process is far from efficient, yet the people involved genuinely felt the inefficiency was a good thing, some sort of initiation to test candidates' perseverance.
From first contact to offer letter took about three months. He had four phone interviews, a video chat interview, two email questionnaires, and two recruiters before he met a human being.
Google is also unusually fixated on GPAs. My friend has a BA from UC Berkeley and an MS from another school. Because his undergrad GPA (from 20 years ago!) was not 4.0, the Google hiring managers wanted to know how many hours per week he had worked during his undergrad to account for his 3.5 GPA.
Have we really reached a point in industry where asking a candidate to go through the process of problem solving is frowned upon? Where "I don't need to solve that because I have Google!" is really the right answer? I mean the question seriously wasn't that difficult. What if there isn't an open source library to cover your every need? Are you screwed?
Critical thinking and problem solving are important in any position. If you are unwilling to perform this simple task then I would probably not want you on my team either, no matter how many blog posts you had written.
I am sorry, but extracting HTML elements is not crazy algorithms, hardcore theoretical computer science, or nearly as "academic" as everyone is screaming.
The author talks about being asked a question regarding web elements when he was leaving the Search team for the Developer Relations team.
A commenter talks about being asked algorithm questions despite being interviewed for a sysadmin role.
Another commenter talks about telling the interviewers that she is not a coder and then is asked a bunch of coding questions despite interviewing for a testing position and having written a book on testing and having years of domain experience.
If you just want algorithm people, then surely they're going to be lacking in other areas (e.g. Google Wave, designed by algorithmers).
The objections raised here seem to be along the lines that the questions are not related to the jobs people are interviewing for. This has two impacts:
- people are not being asked questions about their domain (it seems incredibly stupid not to ask questions about testing if the interviewer is a tester)
- you will cut down on the diversity of your staff if you just want algorithmists (which means that you are more likely to produce more Google Waves).
Well, to be a very good software tester you do need to code. I wouldn't have thought this to be controversial. Yeah, there are activities that don't involve programming, like plain manual testing, test planning and management, etc. But test automation and making code testable require programming skills.
The common factor is programming. They're looking for web designers with programming experience, sysadmins with programming experience, managers with programming experience.
It sounds like they're doing it right to me. It's not the be-all and end-all but it is a differentiator.
"Developer Advocate is a highly technical role [1]."
That doesn't appear to be the case from the link you gave.
From your link:
"You will be an evangelist for our newest technologies in the outside world, as well as a vocal advocate for developers' needs within Google. You will be an engineer who thrives on the cutting edge of technology and loves seeing exciting, new applications and business that other developers are building. Your job is to drive momentum for exciting new technologies such as Chrome, Android, App Engine, Google Wave, Google Maps API, HTML5, and our core Google Apps and Ads APIs. "
Guy Kawasaki was one of the first evangelists. He was a jeweler previously. Of course Guy has a great many talents, but I don't think anyone would call him "highly technical." Notice the middle sentence says "you will be an engineer", but the rest of it talks about being an "Advocate" and "your job is to drive momentum", etc.
The requirements don't seem to be highly technical either. (most of the languages are scripting languages, for instance.)
From the description:
- BA/BS in Computer Science or a similar technical degree preferred.
- Experience blogging and writing technical articles, ideally with an existing follower base.
- Strong command of web application or mobile application development landscapes.
- Experience working directly with large partners, or with press and bloggers preferred.
- Considerable success as a software developer, architect, technology evangelist, CTO, or consultant working with web or mobile technologies.
- Solid programming abilities in one or more of the following languages: Java, PHP, Python, Ruby, .NET, JavaScript.
I've interviewed many engineers in my career (I've been writing software for over 20 years.) I've evolved my process for finding good engineers.
The problem with most technical interviews is that they end up asking trick questions, where the answer is more important than the process (and thus the "process of problem solving" is not even relevant). You'll notice this guy actually solved the problem in question, immediately, but that wasn't acceptable because it wasn't a technical solution.
Critical thinking is very important, I agree. Far more important than problem solving, and one of the real problems is, that people think that these kinds of interviews actually reveal ones ability to think critically. They are not the same. Problem solving is more of a process one can learn, critical thinking is more of a talent, or maybe a willingness, to step outside the problem and evaluate, for instance, whether it should even be solved.
In my experience (but I could be wrong), there is a very strong correlation between being able to solve mathematical brain teasers and having trouble with critical thinking. (Unfortunately, these people do not realize that they have trouble engaging in critical thinking.) I can't find a correlation between solving the type of problems on Google billboards and being good at critical thinking. It's mostly about awareness of obscure (to the mainstream, anyway) math.
Being able to solve problems is good for an engineer. All good engineers can do this.
Critical thinking, however, makes the great engineers great, and it eliminates the need to solve large numbers of tough problems. Engineers can end up in the weeds because they chase tough problems to solve (they're fun!) rather than employing critical thinking to keep focused on the important business problems.
The examples from this blog are not very illustrative of my point, but there it is.
> Engineers can end up in the weeds because they chase tough problems to solve (they're fun!) rather than employing critical thinking to keep focused on the important business problems.
Just a wild guess, but I don't think Google is interested in engineers who can "remain focused on the important business problems" anymore. Maybe it's just because of their size, or maybe because they're more corporate now and procedures have become more important than end results; I can't tell. Either way, they're not a startup anymore, where as an engineer you do have to look at the bottom line.
The original poster seems to be complaining more about the validity of the testing, rather than the fact that testing is happening.
For a role in developer relations, the real core competencies should be very high scores on "soft skills": being personable, communicating well with others, being able to make people "feel good", being "able to spread joy".
The number of times he's going to have to recursively solve the quadratic formula while displaying the data points on a graph using pictures of Barbra Streisand downloaded in realtime is going to be infrequent at best.
So the complaint that they're testing for engineering aptitude, when interpersonal creativity is the required core competency seems (from what I've seen of google's hiring) a pretty valid one.
Perhaps what they really need to do is engineer their scientific hiring process to add a weighting for "people skills".
This could easily be accomplished by evaluating candidates using the NEO-PI-R; candidates would then be awarded scores out of 5 in various personality areas such as Trust, Straightforwardness, Altruism, Compliance, Modesty, and Tendermindedness. Community relations people should score very high in Altruism and Tendermindedness.
I also believe that Google's hiring process undervalues the human element. When dealing with pure engineering issues that might be OK but there are some positions that could use people with more 'creative flair' than others.
A story like this gives me the impression that even if you want to clean the floors at Google, you need to know when to mergesort and when to bubblesort[1]. Why does Google find this important? Why do people who will not do any programming need to know (relatively) low-level software optimizations?
To be sure, if I'd ever apply at Google (which I doubt), I'd probably apply for a programming position and totally expect such questions. I'm asking about the non-programming jobs.
It's actually funnier when non-Google people try to give the Google interview, but can't articulate the questions well enough to elicit clever solutions...
I interviewed there for a Product Manager position, and it was by far the most technical interview I've ever had; like the OP said, it featured a lot of computer-science-type questions. I know very few product managers who would have done even as poorly as I did (I have a CS degree, but it's been a while since I've really used it). If I could have changed anything, I would have loved to have been told that I would be asked CS questions in the interview.
Has it been two weeks already (since, you know, the last thread complaining about Google's hiring processes)?
Knowing a library that does exactly what the question wants is, if anything, a plus, but you should roughly know how something like that actually works.
Our process isn't problem-free. We get hundreds of thousands of applications a year. Who else has a recruitment problem at that scale?
There are a lot of myths about our interview process. Take this post: he says he doesn't have a degree from an "elite university". Neither do I and neither do most of my coworkers.
Having no degree at all is a problem but not an insurmountable one.
As for us allegedly not being interested in your open source contributions, nothing could be further from the truth. Recently the guy who did Firebug just got hired to work on Chrome developer tools. Do you really think his Firebug work had nothing to do with this?
As for what team you'll be working on, you as a candidate have a lot of power in that regard. You can specify you're only interested in working on something in particular or you can simply communicate some preferences. Those preferences affect the allocation process.
Our predilection for simple coding problems and requiring a certain level of algorithms knowledge is nothing new. What constantly surprises me is how many people I see go through the process who obviously haven't just picked up a copy of Skiena's book and brushed up on the fundamentals.
I can understand the motivation of people not to go through the process a second time. I was in the same boat. It can be a frustrating process. It's imperfect. Occasionally I'll hear people say they will never go through it because of what they've heard.
As Homer puts it, "trying is the first step to failing".
I'll leave you with three recommendations for anyone interested in applying here:
1. Go through the first half of Skiena's algorithm book (the last half is applications, which, while interesting, isn't as crucial). If you're not comfortable with graphs and dynamic programming, sorry but you just haven't prepared;
2. Practice some code problems on a whiteboard; and
3. Go through recruitment with a referrer. You're MUCH better off with a recommendation from someone who already works here. Even if they don't necessarily know you (and thus can't provide a strong reference), they can chase up what's happening with your application with the recruiter and probably get more information than you, as an external applicant, can.
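On point 1, the dynamic programming material really is drillable. As a minimal sketch of the kind of whiteboard DP exercise in question (coin change; the coin values here are arbitrary, invented for illustration):

```python
def min_coins(coins, target):
    """Fewest coins summing to target, or -1 if impossible.

    Classic bottom-up dynamic programming: best[t] is the fewest
    coins needed to make amount t, built up from best[0] = 0.
    """
    INF = float("inf")
    best = [0] + [INF] * target
    for t in range(1, target + 1):
        for c in coins:
            if c <= t and best[t - c] + 1 < best[t]:
                best[t] = best[t - c] + 1
    return best[target] if best[target] < INF else -1

print(min_coins([1, 5, 12], 15))  # 3  (5+5+5; a greedy 12+1+1+1 would use 4)
```

Being able to explain why greedy fails here while the DP table doesn't is exactly the sort of thing these interviews probe.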
EDIT: let me add a fourth piece of advice:
Treat your career at Google, if you're fortunate enough to have one, as a marathon not a sprint.
If you're really unhappy on whatever project you end up on or with the team you're with, that can all be changed. A lot of effort is made to make people happy. So you can express your concerns and disappointment with your manager, your manager's manager, HR or whatever is most appropriate.
Additionally, the ramp-up time at Google can be significant: 3-6 months, possibly longer. Another approach is to stick with whatever project you end up on and learn our tools, build system, processes, etc. In that time, make connections to other teams. Find out what else is going on and what you're interested in doing.
There is an awful lot of internal mobility that's possible.
The original poster is complaining that evaluating a "Community Relations Manager" against their ability to regurgitate the first half of Skiena's algorithm book (on a whiteboard, without syntax errors) is not a valid test of their ability to manage community relations.
Google's "scientific hiring process" appears to be broken, not because of any problems with the principles of testing candidates, but because the engineers who designed the tests failed to ensure that their tests were valid measures for the roles they're hiring people into.
You know the phrase "when all you've got is a hammer, everything in the world starts to look like a nail"? I think that's possibly what the engineer-hiring process has managed to create at Google.
It isn't all disadvantages. There's every possibility that placing an engineer in a community-manager role could lead to new solutions to "the problem of community management".
So maybe the external evaluation of "invalid and broken" is actually a business decision and done that way by design.
You haven't addressed his fundamental complaint: don't contact him.
"As for what team you'll be working on, you as a candidate have a lot of power in that regard."
Are you serious? Allocation is a joke. It's hard to say what you are willing to work on in sufficient detail when you don't know what choices you have.
I was apparently lost in the shuffle, so when I contacted them a week before my start date to ask what was going on, they came back three days later with a single team.
I met with the team, and told my recruiter that I didn't mind working with them (though it's not clear what would have happened if I had). I was then given a form with room for a dozen teams, and asked to rank my only choice on a scale from 1-10.
If you asked for a team straight up, you basically didn't go through allocation.
You're right that anybody can ask for a team. In practice, most don't realize that they should be discussing allocation before they sign the offer letter, when they have the most leverage.
I know of people asking for and getting put on 'interesting teams' -- only to find themselves idiotically placed in parts of a shockingly large team that make no sense given their backgrounds and motivations.
It's not the end of the world, but it makes for a tough first year.
I was given a preference sheet to fill out after receiving an offer, and I was given my first choice, also. It seems there is quite a bit of leeway to change teams if one finds something else more interesting.
I don't think I had more leverage than most (I came in off a failed startup, plus two years of work experience before that and a pretty mediocre GPA), and they gave me my choice of teams. I was allocated to Search, but the recruiter made it clear that if I had a problem with that, there were other teams - GMail, Docs, etc. - that wanted me and I could go there.
I also work at Google, and also wonder what your bad allocation experiences are. A friend of mine started on the Android team, didn't like it, and transferred to Google Books 5 months later. I think you're only supposed to transfer once every 1.5 years, but there's leeway to accommodate bad allocations.
Leeway seems to vary across different parts of the company, but the presence of leeway is irrelevant to the quality of allocations.
I don't want to focus on my experiences in public. They've given me a bias, yes, but lots of other sample points I've gathered indicate that allocation is broken, and that it's not a priority to fix it. Nooglers have to be prepared to sink or swim.
A. Inevitably in a company of this size, certain groups and certain job categories have more trouble filling positions than others.
B. The technology stack at Google is deep and complex, has poor usability, and requires time to acquire fluency in. Given a choice, any group will recruit experienced Googlers over nooglers.
C. Combine A and B and you end up in a situation where nooglers are, by and large, shoveled into large projects that 'nobody wants to go to'.
D. In theory you get to chat with 6 different groups. In practice things are far more perfunctory; 2 or even 1 is not uncommon (running out of time like prospero did is common: https://hackernews.hn/item?id=2801016). If you indicate even the weakest 'yeah, I could work on this team,' prepare to receive no more options.
E. The difference in quality of service (response time, level of understanding of your situation) between hiring and allocation is night and day. It's obvious why: hiring has to interact with recruits before they commit to joining, while allocation interacts after.
F. Even if you had 6 options, you're still chatting with managers in the presence of a huge information imbalance. You have nothing to go on but what they tell you. Even without meaning to be misleading or dishonest, they're unlikely to give you more than a perfunctory understanding of what your prospective team does, what it's working on (they wouldn't have mentioned Google+), or what skills it requires (rarely what you were interviewed about).
---
Google's a great place to work, and it's been very good to me. I've learned huge quantities working here. None of these problems are insurmountable. I see signs that they're seasonal; they gradually get worse for a time until they start impacting metrics, at which point leadership focuses on them and fixes them for a time. Some of us have a tough first year; it's not the end of the world.
Answer: A lot of people? Google isn't anywhere close to the top employer by size in the US/world (and even by number of applications, surely other companies get more).
Like you, I disagree with some of the points in her blog post, but I can't agree with your statement "As for what team you'll be working on, you as a candidate have a lot of power in that regard."
I had one referral decline an offer because he wouldn't know what project he would be working on until he was hired, and therefore, didn't know if it would be more enjoyable than his current job. I also knew PhDs who were hired and expressed disappointment that the project they were working on didn't make use of the specialized material they studied towards their degree. And I wouldn't think that the majority of new engineers assigned to ads projects expressed advertisement as a preference with their recruiters.
But most new hires, well, get over it.
Yes, there are some cases where people are recruited with specific expertise to fulfill specific roles, e.g. John Barton of Firebug or Sebastian Thrun for the self-driving car. But this is the exception and not the rule.
The thing that really kills me about Google's allocation process is how it screws nice people.
I keep hearing about applicants who get offered something that they never in a million years would have applied for. They say something like "well, I guess I could live with that," which is nice-personese for "I wouldn't actually kill myself, but I might consider it." And bam, they're working on something they have no passion for.
>Yes, there are some cases where people are recruited with specific expertise to fulfill specific roles, e.g. John Barton of Firebug or Sebastian Thrun for the self-driving car. But this is the exception and not the rule.
But they make such good cover for people who want to play apologist for Google's hiring practices!
It sounds like your process optimizes for bright 22yo kids straight out of the institutional lifestyle of school. SAT prep -> college -> Google.
Any other company, I can talk to them about what they actually need, what challenges the business is facing, what I would bring to the table, etc, basically a human conversation about the day-to-day and the big picture at that company.
It sounds like an interview with Google would go like BLEEP YOU HAVE SCORED 4.789/5 BLEEP SCORE SUFFICIENT BLEEP PROCEED TO ALLOCATION.
I mean, not to put too fine a point on it but fuck that noise. There are plenty of kids with a higher GPA than I had, graduating from top schools, and they're much more cool with being herded through a system like sheep. They're your ideal candidates. They also don't have much/any real-world experience, so you're relying on your senior engineers for anything that requires spider-sense. You're not importing any, anymore, because most good/solid 30+yo engineers want to know what they're working on before seriously considering an offer.
Not that I have a better idea for recruiting for companies with >1k engineers, I doubt IBM's process is much better. ("I can be a senior solutions architect? Cool!").
So if you're a tester or a UI person or a sysadmin whose work shouldn't really involve worrying about the big-O order of algorithms - should these people also be asked questions about algorithms?
The comments at the end of the blog post talked about a tester with years of experience being asked coding questions after she told the interviewer that she wasn't a coder.
Another comment mentions a sysadmin applicant being asked similar things.
The impression I get is that Google hires a lot of smart people to make technically hard things work very well (e.g. search), but fails in other areas that require a softer approach (e.g. Google Wave).
So perhaps instead of focussing on algorithms, would it be wise to talk about problems that an interviewer (say a tester or sysadmin) might be facing in their daily work?
> So if you're a tester or a UI person or a sysadmin whose work shouldn't really involve worrying about the big-O order of algorithms - should these people also be asked questions about algorithms?
Let's address each individually:
- UI design person: we have UI/UX people; that's essentially a non-engineering discipline, so it won't have the same requirements. There are FE (front end) engineers, who will be expected to have the same theoretical foundation as any other engineer;
- the guy says he interviewed as a tester. Google's definition of a tester is different to that of most companies. We have SETs (Software Engineers in Test), who are expected to have a solid theoretical foundation. This makes more sense once you understand that most of our testing is automated rather than, say, writing and executing manual test scripts;
- Sysadmins (SREs; Site Reliability Engineers) fall into two different categories: those with a more programming bent and those with a more sysadmin bent. The first lot will be asked algorithm questions. The second are more likely to be asked questions about networking, Linux administration and so on (the first will get these too but probably less).
As for Google Wave, my personal opinion is that it was a solution in search of a problem so I wouldn't look at it as a failing of "softer" disciplines.
Google+, as an example, seems to have been received very well, including on the UI/UX front, which would seem to fall in the same "softer" category. The early G+ successes and positive reaction IMHO stem from a more solid design that delivers value to users, something I don't think Wave ever did.
One last thing I'll add is that Google's career ladders don't necessarily match up exactly to what you'd expect and those ladders are constantly re-evaluated. New ones come into being. Some disappear entirely.
I've personally found the Google hiring process disappointingly vague, and I felt a lot like the OP. Do you want me to build a solution or do you want me to tell you what the Big-O notation is for an algorithm? These two are not necessarily the same thing. The one-size-fits-all interview style doesn't work, particularly if you get a crappy draw on the interviewers.
The odd thing to me is that the Google hiring process optimizes for the theoretical, but everything I've seen seems to indicate that the practical is what moves you up the ladder at Google. And by practical, I really do mean it. Google (rightly, IMHO) values Getting Stuff Done first and optimizing later, for a number of job roles. And the people I know who are very good at the theoretical can be quite poor at the practical, and it's detrimental to Google to hire those people into those roles.
The hiring process simply doesn't seem to link to the reality.
(I'm a Google intern, so I agree there's likely to be some skewed viewpoint)
> As for Google Wave, my personal opinion is that it was a solution in search of a problem so I wouldn't look at it as a failing of "softer" disciplines.
It kills me that so few people at Google even understand what good product people do.
We engineers love solutions. We have entire books of them. We are hypnotized by tool catalogs and hardware stores because they are full of lovely, lovely solutions.
Good product people, though, focus on the actual problems that people have. When solutions are proposed, they test them intensively on real people to see if they actually deliver benefit. If not, they don't ship them.
Google Wave is absolutely a failure to appreciate "softer" disciplines.
I think Skud's comment (Skud is a she, btw) that Google relies too much on technical knowledge and has a dearth of people with softer skills in their organisation is probably valid.
Knowing a library that does exactly what the question wants is, if anything, a plus, but you should still roughly know how something like that actually works.
Even if the person interviewing is for a developer relations position, a job they were already doing pre-acquisition?
Also there is something very inhuman about calling the process "allocation".
Oh, come off it. The vast majority of people in tech are male. It's a good statistical bet that referring to an unknown techie as such will be correct. Saying "she" by default is silly, but men just don't complain about it. "They" is awkward and imprecise, and "xe" is retarded. If you're offended by someone referring to you as "he", simply correct them. Or, you know, try not getting offended by stupid minutiae.
If we're going to have a grammar flame, then using "they" for the third person singular of indeterminate gender has wide usage & goes back a long, long way.
Personally, I'd far rather people use "They" than drop people in the "male" box because they can't be bothered to find out the gender of the person they're referring to.
This argument about 'they' being acceptable seems to have become extremely popular in the last couple months (or maybe I just didn't notice it before then). I find it to be unconvincing at best.
I think that "aks"[1] instead of "ask" has wide usage and goes back a long, long way, and yet I suspect that if you hear someone say or write "Let me aks you a question" you would think they were completely wrong.
Just because Shakespeare used a word a certain way doesn't mean that its usage is acceptable.
Singular they has seen wide use for centuries starting with Chaucer and picking up Lewis Carroll, Walt Whitman, George Eliot, Shakespeare, William Thackeray, Jane Austen and Oscar Wilde along the way (list shamelessly stolen from http://motivatedgrammar.wordpress.com/2009/09/10/singular-th... ).
If you've only noticed it in the last few months then I suggest that you've not been looking hard enough!
The point of my comment was that I am certain that if you look hard enough you will find any number of nonstandard constructions that you would reject, despite it being included in Chaucer and Shakespeare.
I was already well aware of its long historical usage; I had simply only rarely seen someone say "Chaucer used it, therefore it's fine to use on a resume!" The spelling "aks" for "ask" is one such example with nearly as many high-profile historical usages, and no one argues that it is an acceptable spelling.
(Also, half that list is exactly the kind of writer in whose work you would find an enormous number of nonstandard usages: Chaucer is Middle English, Shakespeare was famous for writing in common vernacular, Lewis Carroll is famous for his literary nonsense and wordplay; hardly the best sources for what would be included in 'high' English.)
Did you read the rest of the blogpost I referred to? Alternatively, Language Log has a whole category assigned to the use of singular they: http://languagelog.ldc.upenn.edu/nll/?cat=27
English is defined by usage; Singular they has very widespread use from the time of Chaucer to the present day. Only mad grammatical prescriptivists object to it :)
I'm fine with a solution that is elegant and works perfectly well 95% of the time, rather than one which is clumsy and works sort of well 100% of the time.
I also love that you think reinforcing biases and helping to keep women out of the field qualifies as working "perfectly well". Perfectly well for you, I guess.
Say, I've noticed that most of the time when somebody argues vigorously in favor of some sexist behavior, they're a misogynist asshole. I'm sure you won't mind if I assume that describes you as well, right?
I'd never heard of the book by Skiena, so I found it on Amazon and looked at the table of contents.
Google should be ashamed to be very impressed with that book! The topics that are just computer science are not very good, and the topics that are good are not really computer science and are covered poorly in the book.
One way and another, for nearly all the topics in that book, I've worked much more deeply with the topics from other sources.
E.g., there is just one, short section on linear programming. Gee, that's part of optimization! I've worked in linear, non-linear, linear integer, multi-objective, quadratic, network linear, and dynamic programming! I've published peer-reviewed original research in non-linear programming.
Network linear programming is especially important: (1) the simplex algorithm becomes especially efficient, and astoundingly large problems can be solved astoundingly quickly (e.g., see the work of W. Cunningham on 'strongly feasible' bases), (2) if the arc capacities are integers and the problem is feasible and bounded, then there is an initial basic feasible solution that is integer and the network simplex algorithm will maintain integer solutions to optimality, (3) network simplex is also a good way to solve a wide variety of matching problems. In particular, a large fraction of practical integer linear programming problems are in fact such network flow problems or closely related, so that a network flow formulation and the network simplex algorithm yield integer programming at no extra cost!
In particular, on seeing integer linear programming, there is no good reason to rush to claim that the problem is NP-complete. Instead, if only via network linear programming, often in practice there is good news.
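The integrality point in (2) is easy to see on a toy instance. The sketch below is not network simplex; it's the simpler successive-shortest-path method for min-cost flow (Bellman-Ford finds each augmenting path), and the graph is invented for illustration. But it shows the same property: with integer capacities, every flow the algorithm ever holds is integer, including the optimum.

```python
def min_cost_flow(n, edges, s, t, want):
    """Send `want` units from s to t at minimum cost.

    edges: list of (u, v, capacity, cost). Successive shortest
    augmenting paths via Bellman-Ford; with integer capacities,
    every residual capacity, hence every flow, stays integer.
    """
    # Residual graph: each arc plus a reverse arc (cap 0, -cost).
    graph = []   # per arc: [head, residual cap, cost, index of reverse arc]
    adj = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        adj[u].append(len(graph)); graph.append([v, cap, cost, len(graph) + 1])
        adj[v].append(len(graph)); graph.append([u, 0, -cost, len(graph) - 1])
    total_cost = 0
    while want > 0:
        dist = [float("inf")] * n
        prev = [-1] * n              # arc used to reach each node
        dist[s] = 0
        for _ in range(n - 1):       # Bellman-Ford (handles -cost arcs)
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for ei in adj[u]:
                    v, cap, cost, _ = graph[ei]
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = ei
        if dist[t] == float("inf"):
            raise ValueError("demand infeasible")
        # Bottleneck along the path, then push that much flow back along it.
        push, v = want, t
        while v != s:
            ei = prev[v]
            push = min(push, graph[ei][1])
            v = graph[graph[ei][3]][0]
        v = t
        while v != s:
            ei = prev[v]
            graph[ei][1] -= push
            graph[graph[ei][3]][1] += push
            v = graph[graph[ei][3]][0]
        total_cost += push * dist[t]
        want -= push
    return total_cost

# Tiny illustrative network: 4 nodes, integer capacities and costs.
edges = [(0, 1, 2, 1), (0, 2, 2, 2), (1, 3, 2, 3), (2, 3, 2, 1), (1, 2, 1, 1)]
print(min_cost_flow(4, edges, 0, 3, 3))  # 10
```

Every `push` is a minimum of integers, so no fractional flow can ever appear; that is the practical content of the total-unimodularity result being described.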
E.g., there is a short section on hashing, but a discussion on hashing should discuss both extendible hashing as in
Ronald Fagin, Jurg Nievergelt, Nicholas Pippenger, H. Raymond Strong, 'Extendible hashing—a fast access method for dynamic files', "ACM Transactions on Database Systems", ISSN 0362-5915, Volume 4, Issue 3, September 1979, Pages: 315 - 344.
and also perfect hashing. Extendible hashing is a very nice idea; we used it in one large project that resulted in a high quality commercial product.
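The core of extendible hashing does fit in a toy sketch: a directory indexed by the low bits of the hash, where a full bucket splits locally and the directory doubles only when needed. This is an illustrative simplification, not the algorithm exactly as Fagin et al. present it:

```python
class ExtendibleHash:
    """Toy extendible hash table: a directory of bucket pointers
    indexed by the low `global_depth` bits of the hash. A full
    bucket splits and, if needed, the directory doubles; unrelated
    buckets are untouched, which is what makes the scheme dynamic."""

    BUCKET_SIZE = 2

    def __init__(self):
        self.global_depth = 0
        self.dir = [{"depth": 0, "items": {}}]   # 2**global_depth entries

    def _bucket(self, key):
        return self.dir[hash(key) & ((1 << self.global_depth) - 1)]

    def get(self, key):
        return self._bucket(key)["items"].get(key)

    def put(self, key, value):
        b = self._bucket(key)
        if key in b["items"] or len(b["items"]) < self.BUCKET_SIZE:
            b["items"][key] = value
            return
        # Split: deepen the bucket, doubling the directory if needed.
        if b["depth"] == self.global_depth:
            self.dir = self.dir + self.dir       # each entry now aliased twice
            self.global_depth += 1
        b["depth"] += 1
        b0 = {"depth": b["depth"], "items": {}}
        b1 = {"depth": b["depth"], "items": {}}
        bit = 1 << (b["depth"] - 1)              # newly significant hash bit
        for i, entry in enumerate(self.dir):
            if entry is b:
                self.dir[i] = b1 if i & bit else b0
        for k, v in b["items"].items():
            (b1 if hash(k) & bit else b0)["items"][k] = v
        self.put(key, value)                     # retry; may split again
```

The nice property the paper emphasizes survives even in this sketch: growth is local (one bucket splits at a time), and a lookup is always one directory index plus one bucket probe.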
For "If you're not comfortable with graphs and dynamic programming, sorry but you just haven't prepared"
If Google wants people to know dynamic programming from Skiena, then Google is "not prepared"!
The glory of dynamic programming is how it handles uncertainty. Then it is essentially the discrete time case of stochastic optimal control and Markov decision processes. The Markov assumption, e.g., via conditional independence, is important. There is a lot to the subject, e.g., the certainty equivalence of the linear, quadratic, Gaussian case, multi-variate spline approximation, scenario aggregation, dynamic programming approaches to the knapsack problem, the technique of doubling up number of stages, and more. There are some theoretical issues, e.g., measurable selection.
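The stochastic flavor of dynamic programming described above can be sketched with value iteration on a tiny Markov decision process (the transition probabilities and rewards below are invented purely for illustration):

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.

    P[s][a][s2] = transition probability, R[s][a] = expected reward.
    Repeatedly applies the Bellman optimality backup
        V(s) <- max_a [ R(s,a) + gamma * sum_s2 P(s,a,s2) * V(s2) ]
    until successive iterates differ by less than tol.
    """
    n = len(P)
    V = [0.0] * n
    while True:
        newV = [
            max(R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(n))
                for a in range(len(P[s])))
            for s in range(n)
        ]
        if max(abs(newV[s] - V[s]) for s in range(n)) < tol:
            return newV
        V = newV

# Two states, two actions; all numbers are made up.
P = [[[0.8, 0.2], [0.1, 0.9]],   # from state 0: action 0, action 1
     [[0.5, 0.5], [0.0, 1.0]]]   # from state 1
R = [[1.0, 0.0],                 # expected reward for (state, action)
     [0.0, 2.0]]
V = value_iteration(P, R)
print([round(v, 2) for v in V])  # [17.8, 20.0]
```

The Markov assumption is exactly what lets the backup condition only on the current state; richer machinery (certainty equivalence, scenario aggregation, and so on) builds on this same recursion.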
The interview I had from Google just asked me my "favorite programming language". Apparently the answer had to be C++. Due to the semantic mud hole of Stroustrup's book, the terrible threat of memory leaks, the nonsense of 'cast', 'the heap', and 'the stack', the brain-dead exceptional condition handling, the really weak compiling of string operations, the clumsy and slow design of arrays, the far too simple design of structures, the brain-dead rules for scope of names, etc., no one who takes solid software very seriously should have C++ as a 'favorite'. Moreover, the question of a 'favorite' programming language drags the discussion into the old mud hole of religious arguments about programming languages, which any organization serious about computing should long since have known to avoid. A good answer is that all the common programming languages suck; some suck in unique ways; some suck for certain purposes; and overall some suck more than others. Once I didn't say C++, the interview was over. Good riddance.
Apparently the Google interview process is looking for only not very well informed candidates with excessively narrow and elementary qualifications.
The people running the interview processes seem not very well qualified and a bad influence on the future of Google.
"The topics that are just computer science are not very good, and the topics that are good are not really computer science and are covered poorly in the book."
You know that after reading only the table of contents?
"One way and another, for nearly all the topics in that book, I've worked much more deeply with the topics from other sources."
You worked much more deeply with the topics of a book that you didn't read or even see? And that's an argument for what? What relevance does this have to anything?
"E.g., there is just one, short section on linear programming. Gee, that's part of optimization! I've worked in linear, non-linear, linear integer, multi-objective, quadratic, network linear, and dynamic programming! I've published peer-reviewed original research in non-linear programming."
Good for you, I guess. Again, what relevance does this have to the discussion? Because you did research, you now want everyone to spend at least a year learning optimization? You want all books to cover only advanced optimization and not the basic stuff? You want Google to hire only people who took a graduate-level optimization course? It doesn't make much sense.
It's hard to read this combination of arrogance, self-promotion and name-dropping, which in the end hardly adds anything to the discussion.
Also, the book by Skiena isn't the "official Google knowledge base"; it is just a book people recommend because it happens to be an easy way to prepare for this certain kind of interview. I guess it started with Steve Yegge recommending it on his blog for this purpose. I don't see the purpose of discussing its contents at all, when it doesn't have any relevance to the topic. From what I understand they simply ask you some basic algorithm questions at Google, and I guess that's pretty natural.
My main point in response to the post from the Google recruiter is that Google's recruiting process is messed up, say, arrogant, inwardly directed, process oriented, too narrow and particular, bending over backwards to find silly reasons to reject people, and with irony, actually not "prepared".
My evidence is (1) the emphasis in the post on Skiena and the contents of that book, (2) the claim that lack of knowledge of dynamic programming meant not being "prepared", (3) the common complaints about the Google process emphasizing tricky questions, and (4) my own experience where, with irony, if they like the topics in Skiena, I certainly should have done well in the interview but didn't.
"You know that after reading only the table of contents?"
Sure: I know nearly all the topics quite well. As for the book's depth of coverage of each of the many topics, I can conclude that the depth is shallow from the wide variety of topics in the book (a 'catch-all'), the small number of pages for each topic, and the lack of further table-of-contents detail for the topics. E.g., for linear programming one also needs to discuss slack and surplus variables, artificial variables, feasible, infeasible, unbounded, bounded, optimal, basic feasible solutions, the simplex algorithm with reduced costs, a pivot rule, entering variables, leaving variables, convexity, extreme points, degeneracy, cycling, and performance. So, for much on linear programming, one would need more than one entry in the table of contents.
"You worked much more deeply the topics in a book that you didn't read or even see? And that's an argument for what? What relevance this has to anything?"
It says (1) I'm qualified to claim that that book is a poor means of preparation for a good interview in an appropriate recruiting effort and (2), with irony, if Google actually likes the topics in that book, since I know the topics well, I should have done well on the interview but did not due to some Google recruiter's religious worship of C++.
"Good for you, I guess. Again, what relevance this has to the discussion?"
Again, the "relevance" of my knowledge of the topics in the book means that I have some qualifications to comment on the relevance of that book for interview preparation. My points here are that (1) Google should not be recommending a particular book, since really there are many sources including many much deeper than that book, but, perhaps, recommending a list of topics instead and (2) they should not be asking about dynamic programming.
"Because you did research you now want everyone to spend at least a year learning optimization?"
Heck no: I'm saying that mostly software developers should not be studying optimization and that a Google interview should not be asking questions about optimization at the level of that book. Mostly they shouldn't be asking about optimization at all. If, in some particular case there is a good reason for knowledge of optimization, then ask about the subject in a serious way.
Again, the claim that someone needed to know dynamic programming at the level of that book or was not "prepared" is absurd. At the level of that book, f'get about dynamic programming.
"You want all books to cover only advanced optimization and not the basic stuff?"
You won't find anything like good coverage of the "basic stuff" about optimization in that book. It's a very old story: There was a chemistry book that wanted to introduce group representation theory because it plays a role in molecular spectroscopy important for identifying chemical molecules. So the book had a few pages on group representation theory. They had a mess. The chemists came to the math department for some help. I ended up writing my honors paper on group representation theory. The lesson is, people in field B should not write short introductions to topics in field A. Instead, if want a short introduction to field A, then get it from someone actually in field A. It's important. Otherwise too often end up reading junk as in that chemistry book.
That book is awash in topics in applied math outside of computer science. The chances that the content is good, even for its short length, are slim to none.
There is a pattern in computer science: It keeps trying to borrow from other fields and, then, makes a mess out of the content of the other fields. A big example is how computer science borrowed 'machine learning' from statistics and made a mess.
More generally, nearly none of the profs in computer science are qualified to write about math at all. Sorry 'bout that.
"You want Google to hire only people who took a graduate-level optimization course?"
No. Except for a position that is clearly in optimization, Google should just f'get about optimization.
"It's hard to read this combination of arrogance, self-promotion and name-dropping, which in the end hardly adds anything to the discussion."
What I "add" to the discussion is that Google is making a mess of their interviews. The "name dropping" is to establish my qualifications for saying that Google should mostly just f'get about optimization. The "arrogance" is Google saying that lack of knowledge of dynamic programming means not "prepared". Then there is the irony: Apparently Google is not "prepared" in dynamic programming in any meaningful sense.
For your last paragraph, the original post did make Skiena look like the "official Google knowledge base" for job interviews.
"From what I understand they simply ask you some basic algorithm questions at Google and I guess that's pretty natural."
No: The original post implied that a candidate should have reviewed the first half of Skiena, including dynamic programming, and my view, from good knowledge of that material, is that Google is then doing poor interviewing.
You are probably a very smart person, but your replies in this thread read like the epitome of academic pedantry, verbosity, pomposity, etc... the `my claim is (1) (2) and (3), my qualifications are these, my blah blah blah' -- Google is an engineering company; maybe you are better off working somewhere else?
I wrote a post that was widely misunderstood. So, I responded with the style of (1), (2), (3), etc. to say JUST what I was claiming and to support my claims in a way that even a good present or would-be Google employee could understand! So, yes, I'm "pedantic": Considering the complaints about what I posted, the 'pedantry' was, unfortunately, necessary.
"Google is an engineering company". My Ph.D. is in engineering!
Google should be a good place for people with my qualifications. That it is not is Google's failure, not mine.
For where I would be "better off working", I agree that I have better alternatives than Google, especially now if not when I had a Google interview.
My main point is nothing like your objections to my posts: Instead, my main point is just that the Google recruiting process is a mess. I am not the subject here; Google's recruiting process is! Making me the subject is confused based on some emotional instead of rational reactions. So, come on, hard nosed, highly rational, detail-oriented software 'engineers': Stay on the subject -- Google's recruiting, not me!
After reading your contributions to the topic I can't say I'm surprised that Google rejected you.
Arrogant, aggressive people drain so much energy from a team that it doesn't even matter how brilliant they are, they end up being a negative contribution.
The interview never got to any topics in that book. The interview got only to "what is your favorite programming language" and essentially stopped when I didn't say C++. There wasn't time for me to be "aggressive". I wasn't "aggressive" and was just looking for a job. To me, saying that C++ was my "favorite" programming language could have been a reason for disqualification! C++ plays an important role, but it is tough to have it as a "favorite".
I find it hard to believe any company, unless you were applying to Oracle to work on the JRE, would care what your favorite programming language was.
I don't work at Google, but having had a lot of other technical interviews, I'm almost certain that was a primer question in order to:
1) Get you to talk in a relaxed fashion, to calm your nerves. You get to talk about something you already know.
2) Gauge your "passion level"
3) See how you think by probing why you picked that language.
Assuming you're right though, and it's because you said you love Arc and they decided to dismiss you, why exactly do you assume it was C++ you were supposed to answer? Did they specifically tell you "Sorry, the correct answer was C++"?
I'm really going to go with the grandparent and assume your answer to the "What is your favorite programming language" turned out to be so obnoxious that they rejected you based on personality. This is honestly something you can work on though.
No, it was clear enough that they wanted C++. I said PL/I. I wasn't arrogant about it at all. I doubt that the recruiter knew anything about PL/I or its pros and cons. So, I didn't get into a description of the pros and cons or get near any religious battles about programming languages.
In fact, PL/I has a lot of really nice design features missing from all the other programming languages popular now. Much or all of Multics was written in it. The Prime operating system Primos, much like Multics, was in part written in PL/I.
Why missing? C came forward in the 1970s because Bell Labs designed it for Unix for word whacking and wanted everything to run on an 8 KB machine or some such. At the time, IBM's PL/I ran fine on a 128 KB 360 Model 40, but then 128 KB was in every sense a LOT bigger than 8 KB. Also, at the time, writing a PL/I compiler was considered expensive, say, $40 million or more. That later PL/I got handled for quite modest funding was a surprise.
Due to anti-trust issues, Bell couldn't sell Unix so essentially just gave it away. Many universities got DEC computers and ran Unix. So, a lot of students learned C. C has a lot of problems with some traditional solutions given pointers, 'structures' of some kind, dynamic memory allocation, and 'entry' variables. Then C++ was just a pre-processor to C to make more definite these traditional solutions. Alas, both the syntax and the semantics of C++ are a mess.
C, and still C++, were, in a word, cheap. When Bell did C, and then C++, it was considered that implementing anything like PL/I would be far too expensive. PL/I has a much better collection of lessons for progress than C does. That C got so popular and PL/I was largely forgotten was a sad day for practical computing. Object oriented programming? It's easy enough with PL/I as it is, and, really, in part or whole, long was popular before C++. But that object oriented programming got to be mostly C++ built as a pre-processor to C instead of drawing from the lessons of PL/I (and more, e.g., Algol) was sad.
Net, for me, PL/I is much, much better than C and, still, even if want to use 'objects', better than C++.
But I didn't go into any of this with the recruiter at all. Such a description would have been considered too long and arrogant.
Net, Google is bending their arm all out of shape patting themselves on their back telling themselves that they are eager to hire Michelangelo to paint their ceiling but are using at best house painters to do the recruiting. There is a wide range between house painters and Michelangelo that Google doesn't know how to recruit. It won't work, and it doesn't work.
Google is violating a simple rule in technical recruiting: Under no circumstances should anyone in 'recruiting' or HR have any technical communications at all with a candidate. None. Zip, zilch, zero. The recruiting and HR people can schedule phone calls and visits, help with coffee, tea, water or soft drinks, explain where the restroom is, hand out the benefits packet, smile, be nice, ask what they can do to help, help with plane and hotel reservations, help with car rental, make getting reimbursed easy, etc. But technical? NEVER!
Any technical communications have to be limited to the management chain and, really, some other processes.
There is a fundamental problem: The need and the goal is to hire people who know things that some or all of the company so far does not know, or has capabilities the company does not have. So, that broad idea that the company will look down and 'examine' the candidate on material the company does not understand is fundamentally hopeless. Can't work. There are ways to select experts, but anything like the Google process is hopeless.
In particular, that book as "preparation" is an insult to any employee who would bring something new to Google.
Google believes that they are high up and looking down. That they are worth $195 billion, they are. Technically, especially in their recruiting, they are not.
This thread is not about me; it's about Google's recruiting. My experience is relevant only as a source of data I do have about their recruiting. Again, it's Google's recruiting, not me.
Since you like name-dropping credentials in relevant areas, I do minimal research in PL and I can say that PL/I is objectively far, far worse than C/C++. Why? Well, C and C++ both have huge flaws. Huge flaws -- no one who has ever programmed anything nontrivial in these languages would argue otherwise. So why are they better than PL/I?
PL/I has just as many flaws, if not more. Let's look at some of them.
1. PL/I compilers are awful. The number of people working on PL/I is a fraction of the people working on C/C++. C/C++ are getting faster, more correct, and more succinct every day. One of the most promising techniques for PL/I is compiling it into C and then calling gcc because PL/I is so bad. (And if you don't think build times are a factor in compiler construction you are sorely mistaken).
2. No one knows PL/I. This one's pretty easy. PL/I is write once, read once. C++ is write once, read many.
3. PL/I provides no compensation for its flaws by providing higher-level transformations and PL features (e.g. true higher-order functions, strong type safety, garbage collection). The cost of switching to PL/I is actually made worse by the fact that PL/I is at best slightly better to program in than C++.
4. PL/I build tools and deployment tools suck. Who cares if you can run it in a hundred processors on a mainframe? How is that useful in scaling to millions of people every second?
5. Where's the support for interoperability with other languages? C++ can be used with other languages through well-known and well-maintained paths. PL/I has no support.
Basically, you may have a solid theoretical background but your practical experience in modern PL engineering is sorely limited.
First you are angry. You are not so much attacking PL/I as attacking me personally.
Second you are pursuing religious arguments about programming languages.
Third you are arguing things about PL/I that are really not part of the language.
Fourth much of what you say about PL/I is technically wrong.
"PL/I compilers are awful".
What PL/I compilers are awful? As far as I know, there aren't very many and the more common ones there are, essentially all from IBM, are highly polished.
Compared with C/C++, the polished PL/I compilers are terrific because, with the language features, they actually do really compile the work instead of just call functions. In fact, the early versions of PL/I did string, etc. manipulations by having the compiler call run time functions; then the result was much like what C/C++ programmers are forced to do.
The compiling of functions for string and bit manipulation was done in part to have PL/I be faster than the then common practice of using functions for such things in Fortran. Net, for string manipulation, PL/I is faster than Fortran, C, and C++ because it actually compiles the work and avoids the overhead of function calls. Here C/C++ are behind and have no easy way to catch up.
For 2, whether anyone knows PL/I is not about the language itself. For it being my favorite, I know it!
That it's my 'favorite' doesn't mean that I suggest that others use it. I used the IBM PL/I on OS/2 a few times; I have the IBM PL/I for Windows but don't even have it installed. On Windows I use Visual Basic .NET, if only because it has such good access to .NET, ADO.NET, and ASP.NET. In many ways, I would prefer PL/I, but it is not a practical option.
PL/I has some features that were deliberately included to make learning it relatively easy. E.g., PL/I has no reserved words! That is, all the 'key' words in the language can be used by programmers for their own identifier names. So, a beginner doesn't have to worry about using a reserved word. I taught some elementary parts of PL/I to some not very good students in the business school at Georgetown University, and they learned fine.
For 3, the only serious problem with 'type safety' is for pointers. True, in PL/I, pointers do not have 'types' based on what they point to. That is, any pointer can point to anything. But, then, using pointers in PL/I is not nearly as necessary or common as in C/C++, is quite advanced, and is not common. I liked using pointers because I could really work with the memory and, at times, write some 'polymorphic' functions. Tricky work with pointers is always tricky, and that the pointers in PL/I are not 'strongly typed' didn't make the work harder. Again, PL/I is not nearly as dependent on pointers as C; a C programmer is forced to use pointers frequently, and a PL/I programmer can do fine using pointers only rarely.
Otherwise on types, PL/I took the attitude that converting a string to floating point, etc., should need just an assignment statement. For execution time, there is a warning from the compiler when such a conversion might be expensive.
But PL/I is far better than 'cast': Cast is just an override of the 'strong typing', that is, immunity from the strong typing police. The problem with cast is that it's super tough to find HOW THE HECK the conversion is done. Mostly I don't much care about pleasing the strong typing police, but I do usually very much care about the details of how the conversion is done. Right from the start, the IBM PL/I documentation was fully explicit on the conversions of all the pairs of the 'elementary' (no aggregates) data types -- good.
For garbage collection, where is that in C/C++? There is garbage collection in Visual Basic .NET, but it was not easy to implement. There is always some question if garbage collection should be implemented by the run-time. The way I'm depending on garbage collection in Visual Basic is quite similar to how I depended on automatic storage in PL/I, and PL/I automatic storage is MUCH more efficient to implement than garbage collection. Here the excellent, and advanced, scope of names in PL/I are a big help.
PL/I does do quite well on memory management, especially with its attribute 'automatic' which, for each 'task', has in effect a 'stack of dynamic descendancy' and allocates and frees automatic storage just as one would want across the quite advanced scope of name rules.
This automatic storage works great with the PL/I 'structures' where a 'structure' is a list of elementary data types, arrays, or arrays of structures, all efficiently mapped to sequential storage in a clever, easy to understand way. The 'extents' (string maximum lengths and array bounds) need only be known when the structure is to be allocated. So, in particular, the extents can be passed as arguments to routine parameters. Or one can do a calculation, enter a Begin-End block, and do the automatic allocation inside that block. Works great. That one can't do such things in C, especially for arrays, not even array parameters, is one of the most serious failings of C and, thus, C++. To get around the problem, one ends up using C structures or C++ classes, both of which are much less efficient. PL/I structures are about as efficient as Fortran arrays, depending on what is being done, a little more or a little less.
Then scope of names and dynamic descendancy are well coordinated with exceptional condition handling so that the code that executes in response to an exceptional condition can be 'on the stack' several levels back and, then, if it wishes, do a 'non-local goto' to pop the stack back to its own level. In this way, all the relevant automatic storage gets freed and lots of memory leaks get avoided.
The problem here with C/C++ is that a C programmer, and, thus, also the pre-processor definition of C++, do not have enough access to memory management to do such good things.
Moreover, C and C++ have nothing in the language about 'tasks' or threads, and PL/I does, did from the beginning. In particular, the storage attribute 'controlled' (roughly like malloc and free) is 'task-relative', that is, goes away when the task does. Files are also task-relative and get closed when the task goes away. Nice.
4. For the 'build tools', never had a problem. I was in the group at Yorktown Heights that did the artificial intelligence language KnowledgeTool (KT) which was a pre-processor to PL/I, and we did a lot of building but had no problems with 'build tools'. I was the guy who used dynamic descendancy in a tricky way to make the 'rule subroutines' in KT simple and efficient and won an award for the work.
Of course, for a scripting language we had Mike Cowlishaw's Rexx, and for an editor we had XEDIT with its macro language, right, the same Rexx. Rexx is a great candidate for the most elegant scripting, macro language going. Rexx was the main reason we had no problems with build tools.
Actually, can claim that for some years Rexx, with a few extensions for some lower level OS access, basically 'ran' all of IBM: There were about 4000 mainframes around the world connected with simple bisync lines. In the end, it all looked much like the Internet today. So, the mainframes were acting as both the servers and the routers. The hard work of the routing, security, etc. was done with 'server virtual machines' programmed mostly in Rexx with a few routines for some lower level access. It worked surprisingly well. Rexx was no toy.
"5. Where's the support for interoperability with other languages? C++ can be used with other languages through well-known and well-maintained paths. PL/I has no support."
A lot of nonsense. On IBM, PL/I used standard OS calling sequences. Calling Fortran, Cobol, assembler, and C was routine. I wrote a collection of routines in PL/I to call C to call the TCP/IP routines. Occasionally I called assembler from PL/I.
Got'a tell you, calling Visual Basic .NET 'managed code' from C won't be a picnic! In some of my current project, at one point I call some C from Visual Basic .NET managed code, and the effort was fairly simple. On Windows, calling one language from another is okay as long as they are both 'managed code'. Otherwise, on Windows or nearly anything else, calling one language from another is at least a little tricky and, in general, tough.
You are making four big mistakes:
First you are angry. You are not so much attacking PL/I as attacking me personally.
Nope. Besides noting that you have very little knowledge in PL research at the end, I made no comments about you personally.
Second you are pursuing religious arguments about programming languages.
Nope, I specifically mentioned practical things that matter.
Third you are arguing things about PL/I that are really not part of the language.
If you're talking about things like, "no one knows PL/I," so what? This is entirely relevant from an engineering standpoint. Just because it's not "really part of the language" is irrelevant because we're not debating whether or not PL/I is better than C/C++ in the 1950s in a perfect world, we're talking about practical engineering right now on real systems.
Fourth much of what you say about PL/I is technically wrong.
Not a one.
What PL/I compilers are awful? As far as I know, there aren't very many and the more common ones there are, essentially all from IBM, are highly polished.
Only if you're using compiler benchmarks from the 90's. I'm looking for things like packrat parsing and partial and incremental linking. Auto SSE/SIMD would be nice. If you're so convinced that IBM PL/I is good enough, why don't you benchmark it against GCC (on Intel ICC often outperforms GCC but I'm confident even GCC will far outperform PL/I).
That it's my 'favorite' doesn't mean that I suggest that others use it. I used the IBM PL/I on OS/2 a few times; I have the IBM PL/I for Windows but don't even have it installed. On Windows I use Visual Basic .NET, if only because it has such good access to .NET, ADO.NET, and ASP.NET. In many ways, I would prefer PL/I, but it is not a practical option.
My entire post was about how it's not a practical option and how you shouldn't be surprised when no one wants to use PL/I in production...
PL/I has some features that were deliberately included to make learning it relatively easy. E.g., PL/I has no reserved words! That is, all the 'key' words in the language can be used by programmers for their own identifier names. So, a beginner doesn't have to worry about using a reserved word. I taught some elementary parts of PL/I to some not very good students in the business school at Georgetown University, and they learned fine.
This is unimportant. Arguably it's worse than having reserved words because it allows for inadvertent shadowing, but really it's just a back-and-forth thing that no one cares about.
For 3, the only serious problem with 'type safety' is for pointers. True, in PL/I, pointers do not have 'types' based on what they point to. That is, any pointer can point to anything. But, then, using pointers in PL/I is not nearly as necessary or common as in C/C++, is quite advanced, and is not common. I liked using pointers because I could really work with the memory and, at times, write some 'polymorphic' functions. Tricky work with pointers is always tricky, and that the pointers in PL/I are not 'strongly typed' didn't make the work harder. Again, PL/I is not nearly as dependent on pointers as C; a C programmer is forced to use pointers frequently, and a PL/I programmer can do fine using pointers only rarely.
You completely misunderstood -- C and C++ are the standards. If you want people to use something not-standard you need to provide something which is far better in your own language than in the standards in order to compel people to switch. Being a little bit better isn't good enough, you need to be a lot better. Strong typing, garbage collection, and higher-order functions are all things which suck in C/C++. If you want people to adopt your language you should offer full support for these things because that gives you a compelling reason to switch.
Otherwise on types, PL/I took the attitude...
Read more about strong typing and read about type or category theory (preferably both). C and Java types are not strong typing, they're typing done in probably the worst possible way. Learn ML.
Or can do a calculation, enter a Begin-End block and do the automatic allocation inside that block.
Welcome to RAII, circa 2000.
To get around the problem, end up using C structures or C++ classes, both of which are much less efficient.
Only in C or C++ compilers from 1995...
Actually, can claim that for some years Rexx, with a few extensions for some lower level OS access, basically 'ran' all of IBM: There were about 4000 mainframes around the world connected with simple bisync lines. In the end, it all looked much like the Internet today. So, the mainframes were acting as both the servers and the routers. The hard work of the routing, security, etc. was done with 'server virtual machines' programmed mostly in Rexx with a few routines for some lower level access. It worked surprisingly well. Rexx was no toy.
It really is. IBM hasn't even come close to building what would be termed a modern distributed system infrastructure. There's no PL/I equivalent for MapReduce, BigTable, GFS, etc.
A lot of nonsense. On IBM, PL/I used standard OS calling sequences. Calling Fortran, Cobol, assembler, and C was routine. I wrote a collection of routines in PL/I to call C to call the TCP/IP routines. Occasionally I called assembler from PL/I.
OS calling sequences are the bare-minimum. If I wanted to optimize my Python code by dropping down and rewriting the code in a systems language, C has my back. Good luck with PL/I.
You seem to be using a lot of examples from HPC but they're not really relevant. HPC is a pretty easy target because you get to make a lot of assumptions about the DS you're architecting and the software that will be run on it. PL/I is just a dead language -- there's no reason to ever switch to it and while it may have been slightly better than C++ during IBM's heyday, the world has moved on. Even scientific computing prefers parallel Fortran, which, as of its latest version, is incredibly fast.
PL/I was designed in IBM by a committee headed by George Radin in about 1963. First versions were running by 1966. Version 4 was running by 1969 and quite clean. There have been later versions. By the time IBM slowed maintaining the language, it was polished. Finally there was just one guy in CA maintaining PL/I. I suggested adding AVL trees (see Knuth, TAOCP).
The main intention of PL/I was to serve people using any or all of Cobol, Fortran, and assembler (at least for applications programming).
Again, versions of PL/I have been used for system programming in at least Multics and Primos.
It has been noted that the 1960s were "the golden age of programming language design". In comparison, progress for the next several decades was disappointing.
C was designed at Bell Labs in the 1970s. Likely the designers of C knew PL/I if only because they borrowed the semi-colon to end a statement and the syntax of comments.
C was supposed to be a minimal language, as simple as possible, to work on an 8 K DEC machine. The clever part of C was that while it had so little, with pointers, structures, malloc, and free, it still had enough for system programming. Also, since C was so simple, it needed no 'run-time' and could be used in embedded code in read only memory.
C++ was just a pre-processor to C to formalize some of the then standard ways to use C to make it livable for more complicated applications programming.
So, PL/I started off with much more than C. Since neither PL/I nor C can or will change, PL/I is still far ahead of C.
The preprocessor C++ is a bit ugly. Tough to say that just from a pre-processor C++ is much better than PL/I.
So, net, when Google asked me what my favorite programming language was, I said PL/I instead of the answer they wanted, C++.
I did not say that PL/I was the ultimate programming language, the end of programming language design, a programming language for 10,000 machines with 10 processors each with 1000 cores, etc. I didn't say that people should convert to PL/I. But, then, I would feel sorry for anyone to start a new project with C or C++.
I had my fling with programming language design with KnowledgeTool and a subsequent project. At the time I looked at the literature of programming language design and was not impressed. To me the literature looked like it was just rehashing old ideas back to Algol, looked like 'research' in cooking that was just remixing Escoffier's collection of sauces.
If since then the design of programming languages, compilers, operating systems, etc. have made progress, then about time and good.
For what is 'practical' now, I've voted with my feet: At present, I'm concentrating on my project in applications programming and am using essentially just Visual Basic .NET, ADO.NET for getting to SQL Server, ASP.NET for Web pages, and .NET for some of the other functionality it has, e.g., a lot in time and date manipulation.
That Microsoft went for their common language runtime (CLR), 'managed code', 'garbage collection' (with memory 'compactification'), common 'intermediate language', invited others to write their own syntactic sugar on top of the these, and provided the syntactic sugar C# and Visual Basic strikes me as good for now.
For my project, I decided to stand on Microsoft Windows instead of flavors of Unix. On Microsoft, I went with Visual Basic (VB) .NET. So far Windows and VB .NET have been as promised. I miss some of the features of PL/I, but the missing features don't keep me from getting my work done.
The VB .NET compiler has been terrific: It compiles my programs in what is, in practice, essentially 0 seconds. I will never type in enough code to slow down that compiler. The error messages, including in the context of ASP.NET, have been quite nice. I've found no bugs at all. The compiler has been easy to use just from command lines driven by some simple ObjectRexx scripts. I'm thrilled.
The main problem I have in my software development is some of Microsoft's documentation: (1) It is horrendous, thousands of Web pages, and, thus, tough to work with, even when it is good. (2) For SQL Server, especially management and administration, especially the 'security model', and a lot more in Windows, the documentation was awful in ways that have cost me unbelievable time and effort: Things didn't work anything like promised; I had to mount side projects to diagnose the problems and work around them; I had to write my own documentation, develop my own scripts to lock in the solutions, etc. to get around the nonsense and get back to my work, etc. It was horrendously expensive.
But, just for a programming language that is practical for applications programming now, I selected VB .NET. I certainly didn't try to continue with PL/I.
All this started just because Google asked me what my favorite programming language was and I said PL/I instead of C++ like they wanted. My answer was fine. The HR-recruiter rube didn't see PL/I on their list of acceptable answers and ended the interview. That was Google's error, not mine.
While the history is an interesting aside (I exaggerated my dates because they weren't particularly relevant), I was specifically speaking to modern implementations of the languages. Your history of C++ is correct but the modern version looks completely different.
I had my fling with programming language design with KnowledgeTool and a subsequent project. At the time I looked at the literature of programming language design and was not impressed. To me the literature looked like it was just rehashing old ideas back to Algol, looked like 'research' in cooking that was just remixing Escoffier's collection of sauces.
I completely agree that PL spent (and is spending) a huge amount of time rehashing old ideas but I think all the effort spent on the Fortran legacy (including C, C++, Algol, and PL/I) was a waste of time -- it took almost 20 years to get around to rehashing all the old ideas in LISP, which was a much better idea anyway.
As far as the Microsoft stack goes, I see no problem. My own experience with the internals of SQL Server makes me extremely reticent about using it myself but my time spent with the MS VC++ and CLR teams has made me very impressed with the entire .NET stack.
All this started just because Google asked me what my favorite programming language was and I said PL/I instead of C++ like they wanted. My answer was fine. The HR-recruiter rube didn't see PL/I on their list of acceptable answers and ended the interview. That was Google's error, not mine.
If that is actually what happened then I agree. That said, I think that the question of "what's your favorite programming language" is a pretty stupid one anyway and my answers of Haskell, ML, or Prolog wouldn't have been on the recruiter's list either.
The key element that you're missing is 'and brushed up on the fundamentals.', nobody is saying that Skiena is the only source you should be learning and that you only need to know what is in that book.
How I read recommendations like that is that it is probably a book that gives a good overview of the different topics that might be relevant. Of course for every separate subject there is probably a better resource, but that's not the point.
Preparing isn't the same as going about and learning complete new things, preparing is making sure that all the stuff you've learned in the past is fresh in your memory and ready to be used. In which case a book that gives you a quick overview does the job perfectly.
No, my remarks are fair and well supported: The post praised Skiena's book, and that was a poor move because (1) there are many other sources back to, say, Knuth's TAOCP, and (2), as I said, the computer science in that book is not good and the good is not computer science and is covered poorly.
In particular, Google is showing that they are far too centered on 'computer science' where they accept low quality material just because it was hijacked from its real origins in various fields of applied mathematics into a book on 'computer science' and poorly presented there.
The post insisted that a candidate needed to know dynamic programming as in Skiena or was not "prepared", and this is sick-o, brain-dead, ignorant, arrogant, destructive nonsense: First, there is nothing serious about dynamic programming in Skiena; Google is not "prepared" in dynamic programming. Next, while I happen to know a lot about dynamic programming, I've never asked about it in interviewing people for computer science positions. One reason I know a lot about dynamic programming is that my Ph.D. dissertation was in stochastic optimal control.
What I didn't say, but which is true, and which I implied, is that the topics in Skiena should not be given so much importance. Or, to be so positive on Skiena's book is to be far too fond of a particular collection of gnats and to forget a herd of elephants.
One way and another, I happen to have a good background in nearly all the topics in that book, usually far deeper than in that book, but in interviewing I would not ask for knowledge of topics from that book. Here Google is making a mistake.
In addition, I did once have a Google interview: It was a disaster. For really brain-dead reasons, they didn't like me. Their worship of C++ as a religion showed that they have a bad interview process which meant that I didn't like them. Good riddance.
That they want their interview process to be so severe indicates both arrogance and ignorance; they should be brought down a few notches.
There is a fundamental problem in such interviewing that apparently Google has not understood: Can't hire a bunch of house painters or people who have just heard about house painting to select someone to paint the ceiling of the Sistine Chapel. Google's process would definitely filter out any Michelangelo. They'd likely also filter out a Steve Jobs, Larry Ellison, or Bill Gates.
In fact, based on a brain-dead religious devotion to C++, their process filtered out someone with expertise in nearly all the topics in Skiena's book, much deeper than in the book.
Actually, mostly people who are really good with the better topics in Skiena's book never touch C++!
Google is making mistakes. Their interview process is a symptom, among others, of some serious management problems at Google.
I think the point was that the interviews don't require you to be able to have published research in optimization or algorithms --- they want you to be able to solve a relatively constrained class of problems quickly, for which a specific kind of preparation is required.
I think a lot of confusion on this topic stems from what Google values in a candidate's knowledge of algorithms. From my experience, these technical interviews might ask for an intuitive analysis of an algorithm, but you sure wouldn't have to prove anything formally.
Algorithm and computer science knowledge can be applied very practically. Skiena also wrote a book called Programming Challenges, which features problems very similar to those asked in these technical interviews. The coverage of topics like dynamic programming may be very shallow from a theoretical perspective; however, an intuitive understanding and mastery of when to use the technique and how to write the code is absolutely crucial to solving many difficult programming problems.
I like dynamic programming for various reasons, and at times it can get used as a technique for what really are computer science algorithms, but my experience is that that usage is rare.
With dynamic programming, I have to conclude that Google is just looking for ways to toss people out.
Network linear programming does not even have a Wikipedia article; you expect every candidate to have read some complex and specific literature about it?
"Get it": No, I am saying that in asking about linear programming and dynamic programming as in Skiena, Google is not "prepared" and, really, is dealing in nonsense. If they want someone to know some optimization, and have a good reason for it, then fine: cover some good material on optimization. But evaluating people on trivia about optimization as in Skiena is silly and arrogant, uninformed, dysfunctional, destructive, etc. It's something from the Queen in Alice in Wonderland:
"They didn't review Skiena? They don't know dynamic programming? Then, off with their heads!".
At the level of Skiena, f'get about optimization.
Again, Google is showing that they are far too centered on 'computer science', where they accept low-quality material just because it was hijacked from its real origins in various fields of applied mathematics into a book on 'computer science' and poorly presented there.
For the network simplex algorithm, about the most elementary case is the transportation problem, where one finds a least-cost way to ship widgets from several factories to several warehouses. Then generalize to a network. There, a linear programming basis in the simplex algorithm is just a spanning tree in the network. To add a variable to the basis, add a non-tree arc from the network; the spanning tree then becomes a subnetwork with a circuit. Run flow around that circuit in the direction that reduces cost until the flow on some arc hits zero. Remove that arc from the basis and again have a spanning tree. Cunningham's work guarantees avoiding cycling and tends to be faster.
See, say, pages 311-317 of
Vašek Chvátal, Linear Programming, ISBN 0-7167-1587-2, W. H. Freeman, New York, 1983.
W. H. Cunningham, "A Network Simplex Method," Mathematical Programming, volume 11, pages 105-116, 1976.
W. H. Cunningham, "Theoretical Properties of the Network Simplex Method," Mathematics of Operations Research, volume 4, pages 196-208, 1979.
William H. Cunningham and John G. Klincewicz, "On Cycling in the Network Simplex Algorithm," Mathematical Programming, volume 26, pages 182-189, North Holland, 1983.
Cunningham has long been at the Waterloo department of Combinatorics and Optimization.
Then, too, there is the guaranteed polynomial algorithm of D. Bertsekas. See:
Dimitri P. Bertsekas, 'Linear Network Optimization: Algorithms and Codes', ISBN 0-262-02334-2, MIT Press, Cambridge, MA, 1991.
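The pivot step described above (entering arc creates one circuit; push flow around it until some basic arc hits zero) can be sketched concretely. This is a minimal illustration only, with an invented two-factory, two-warehouse instance and no arc capacities; it is not a full network simplex implementation (no pricing, no degeneracy handling):

```python
from collections import deque

def tree_path(tree_arcs, src, dst):
    """Unique src->dst path in the spanning tree, as (arc, sign) pairs:
    sign +1 if the arc is traversed tail->head, -1 if head->tail."""
    adj = {}
    for (a, b) in tree_arcs:
        adj.setdefault(a, []).append((b, (a, b), +1))
        adj.setdefault(b, []).append((a, (a, b), -1))
    prev = {src: None}
    q = deque([src])
    while q:
        x = q.popleft()
        for (y, arc, sign) in adj.get(x, []):
            if y not in prev:
                prev[y] = (x, arc, sign)
                q.append(y)
    path = []
    node = dst
    while prev[node] is not None:
        x, arc, sign = prev[node]
        path.append((arc, sign))
        node = x
    return list(reversed(path))

def pivot(tree_arcs, flow, cost, entering):
    """One network-simplex pivot: bring `entering` into the basis, push flow
    around the unique circuit it creates, drop an arc whose flow hits zero."""
    u, v = entering
    circuit = [(entering, +1)] + tree_path(tree_arcs, v, u)
    unit_cost = sum(sign * cost[arc] for (arc, sign) in circuit)
    assert unit_cost < 0, "entering arc does not reduce cost"
    # Arcs traversed backward lose flow; the tightest one limits the push.
    theta = min(flow[arc] for (arc, sign) in circuit if sign == -1)
    for (arc, sign) in circuit:
        flow[arc] = flow.get(arc, 0) + sign * theta
    leaving = next(arc for (arc, sign) in circuit if sign == -1 and flow[arc] == 0)
    tree_arcs.remove(leaving)
    tree_arcs.append(entering)
    return leaving

# Tiny transportation instance: two factories F1, F2; two warehouses W1, W2.
cost = {("F1", "W1"): 4, ("F1", "W2"): 1, ("F2", "W1"): 1, ("F2", "W2"): 2}
tree = [("F1", "W1"), ("F1", "W2"), ("F2", "W2")]      # basis = spanning tree
flow = {("F1", "W1"): 10, ("F1", "W2"): 0, ("F2", "W2"): 10}

pivot(tree, flow, cost, entering=("F2", "W1"))
print(sum(cost[a] * f for a, f in flow.items()))       # -> 20, down from 60
```

One pivot here moves all the flow onto the two cheap cross arcs, exactly the circuit-cancelling mechanism the description above walks through.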
But, if Google calls you, don't mention such things if you want a job! Instead, just mumble on about how great C++ is!
"Oh, I have Stroustrup under my pillow!"
Also mention some other buzz words.
Google has become arrogant, inwardly directed, and process-oriented. The history of companies that do that is not good.
I passed the software engineering intern interview process but wasn't matched with a host (I assume because I applied late). I then reapplied for the fall and am back at the host matching stage.
I have lots of high-quality programming experience, a MS from Stanford, and am now wrapping up a PhD. 4.0/4.0 GPA.
If you work at Google and want a fall intern who will work his ass off for you, please email me: stevebanders@gmail.com (not my real name - will reply with my real name - don't want it associated with this post :)
My recruiter recently indicated that I should try to reach out to Googlers as it might help draw attention to my application.
Edit: For the past two years I have been building search interfaces and programming tools. However, I'm flexible - happy to work on just about anything!
Did you notice that you were responding to a MS+PhD applicant? Placing him is going to be orders of magnitude harder than placing someone working on their bachelor's.
If Google wanted to have him churn out features on Google Docs for a couple months, they could, but that'd be an enormous waste of a PhD.
That's exactly what some do. Some work on protocols, backend efficiency, general coding, internal applications, and then spend their 20% on something 'really interesting'.
That you're reapplying makes me question whether you're persistent or just lack good judgement.
I think people get taught in high school to chase after "top flight" schools for their education, and you went to stanford so you've done that.
It isn't the same in the working world. You don't need to work for google.
I am sure there are dozens of startups who would appreciate you much more, and where your talents would be able to express themselves to a much fuller extent than at google.
I also was rejected after 'failing' a technical interview. What was annoying to me is that I had told the recruiter I was rusty on my CS theory. I wish he had just said: "We are only interested in people who could do very well on a senior CS final exam right now."
I can sympathize with the algorithm pop quiz tests. I've done a few of them in various interviews over the last year, and I have to say that in every case they bear precisely zero relevance to any software engineering that I've done in the last 15 years. In one recent case I've even had an interviewer standing and pointing over my shoulder at pieces of paper. It was pretty farcical, and somewhat intimidating. In the real world, this isn't what writing software is about.
> I've done a few of them in various interviews over the last year, and I have to say that in every case they bear precisely zero relevance to any software engineering that I've done in the last 15 years. In one recent case I've even had an interviewer standing and pointing over my shoulder at pieces of paper.
That's when I would have gotten out of the room and said "thank you, I don't think we'd be a good match" :)
Frankly, even a developer relations position can be a technical position. I personally think a developer relations person who does understand the inner workings of the code base he represents, or at least demonstrates the ability to understand it given time, will be much better at his job than someone who doesn't. So really, I agree with Google here in their decision.
There are a number of jobs at google that, although they do not involve writing code, are closely coupled with the technology google is working on - developer relations actually being a very good example. I think they are keeping the bar high in these instances for valid reasons. I'm sure there are positions where technical background is less crucial - graphic design comes to mind - but that's to be judged on a case-by-case basis. Everybody wants to work there, so they can demand very high standards, but in most instances I've heard about, the standards are not incorrect, just high.
EDIT: And probably another reason is they simply want to keep the particular technical culture throughout the company.
Every interview process sucks. I interviewed at, and got a job from, a company that modeled their hiring on Google's, asking me puzzle-type questions. I do really well with those. It doesn't say a thing about my coding skills. I feel sorry for the companies that model their process on Google's, especially the small startups, because you need the exceptional engineers who fall through such cracks to really boost your business.
It's not so much the rationale of this interview process; reading the responses from other Googlers seems to show that this selection process does a good job of maintaining a cultural fit for those who make the cut. The problem is that it seems to make enemies of people without a hardcore technical background, when a simple review of their background would save both sides the pain.
I was surprised at how CS-theory oriented they are in hiring. My technical interview went like a CS final exam. I don't expect that I will apply to Google again. If I were interested in CS theory I would have gotten a master's in CS. They have to weed people out somehow, though.
I would have thought the easiest way to land a job at google these days would be to start a blog, become a 'thought leader', start a few important open source projects, etc, that kind of thing.
That gets you an interview. A blog shows that you're a good writer, but that's about it. Open source contributions can say a lot, but may not. DarkShakira's work, for example, probably darn near gets him a job. 99% of what I see on GitHub probably doesn't.
A little off-topic: at least you got an interview call from GOOG. I've never had the privilege of interviewing with Google, though I've always applied for their design positions, and I was always referred by someone working at Google (with stellar recommendations, at that). I've interviewed with all the biggies (including some of the hottest companies), but never with Google.
I think part of the problem with Google's hiring process is the attitude you're demonstrating here, where somehow even just interviewing with them is some sort of "privilege". Yes, I know they have to deal with a stupendous number of applications, and much of their process is a direct result of that sheer volume, but so long as people walk around with that kind of attitude towards them, they're given all the excuse they need to be imperious about it, to boot.
Agreed. By the way, I remember talking with a Google recruiter recently when I was in school (one of the West Coast elites) doing my master's; he told me the best way to get into Google is via internship. I was interested in interning at a startup and told him so. He bluntly told me that it would then be impossible for me to get a job at Google.. :)
P.S. His prediction was true and I never interviewed at Google. However, I got a job offer from one of their biggest rivals :)
> I've never had that privilege of interviewing with Google.
GOOG is just another company... it's not some special privilege to interview with them. In fact, not everybody would want to work for them... I, for one, turned down a chance to interview with them a few months ago. I don't see much that strikes me as particularly appealing about working for Google, especially compared to the opportunity of doing a startup of my own.
If anything, based on sheer size, I'm fairly confident that I wouldn't enjoy working at Google. Some of us just like smaller companies, with less bureaucracy and what-not.