Instrumental | Onsite (remote during SARS-CoV-2) | Palo Alto, CA, USA; Chicago, IL, USA; New York City, NY, USA; San Diego, CA, USA; Longhua, Shenzhen, Guangdong, China
Can you imagine writing software with no debugger or logging, and you only get to run it every few weeks? That's what building hardware is like for many product companies. At Instrumental, we're building a manufacturing optimization system that collects data no one else can, and uses machine learning and visualizations to automatically identify defects and help product companies understand their assembly processes.
With ~35 people we are a small but mighty team with a collaborative, friendly culture. We value an inclusive environment and actively work to promote diversity in our team.
We also value good tools. For example, I manage the web app team - our deploys involve running a single command and take just a few minutes, so we deploy frequently and with confidence. "Watch mode" compilation usually takes under a second. We have "branch environments" to test any pull request in a production-like environment with real data. And we have robust tests.
This article links to [1] as an explanation for the delay, but that article says at the top that GitHub has since updated their configuration instructions to help people avoid the issue.
I got frustrated with this and created BigConsole[1] to solve it. It's a Chrome extension that adds a Firebug-style split console to DevTools.
Then I tweeted about it, and Addy Osmani responded[2] pointing out that Chrome does in fact support multiline consoles with its Snippets[3] feature. It's still kind of obnoxious, but it does the job.
There are an infinite number of ways to fail and only a few ways to succeed. If you learn from failure, you have learned one way that doesn't work out of an infinite number. If you learn from success, you have learned one way that does work out of a small finite number.
Obviously there are some ways to fail that are much more common than others, but those tend to be based on lack of action, e.g. failure to actually ask users if they want to use your product before building it. There is plenty of material on those common sorts of failures; it's just usually phrased constructively, i.e. "how to do SEO" rather than "we died because we posted duplicate content on every page of our site."
There are an infinite number of ways to succeed and an infinite number of ways to fail.
Think of the raven paradox (look it up on Wikipedia). Finding something that isn't black and isn't a raven should be evidence that all ravens are black.
Why isn't it? Because there are an infinite number of things that aren't black, and a finite number of ravens. You could actually count all of the ravens in the world, but you can't count all of the things in the world that aren't black.
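That asymmetry can be made concrete with the standard Bayesian treatment of the paradox (due to I. J. Good). The numbers below are invented for illustration; the point is only the relative size of the two Bayes factors:

```python
from fractions import Fraction

# Toy model of the raven paradox, with invented numbers.
# H1: all ravens are black.  H2: exactly one raven is non-black.
R = 16_000_000          # ravens in the world (finite, countable)
M = 10**20              # non-black objects that are not ravens (vast)

# Evidence A: a randomly sampled raven turns out to be black.
#   P(A|H1) = 1, P(A|H2) = (R-1)/R, so the Bayes factor is R/(R-1).
bf_black_raven = Fraction(1) / Fraction(R - 1, R)

# Evidence B: a randomly sampled non-black object turns out not to be a raven.
#   P(B|H1) = M/M = 1, P(B|H2) = M/(M+1), so the Bayes factor is (M+1)/M.
bf_nonblack_nonraven = Fraction(1) / Fraction(M, M + 1)

print(float(bf_black_raven - 1))        # ~6.25e-08: small but real support
print(float(bf_nonblack_nonraven - 1))  # ~1e-20: support, but vanishingly weak
```

Both observations really do support "all ravens are black," but the non-black non-raven supports it by an amount proportional to 1/M, which is why it feels like no evidence at all.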
Startups are not like this. Failures outnumber successes at a rate of, say, 8 out of every 10. Both classes are of the same order of magnitude. You need to examine both to find discriminating variables.
The hard part is pinning down the cause of a successful startup. Most people just point at highly visible things, and make a claim like "StartupX is successful because the founders worked extremely hard, the office culture was well-developed, and there were lots of team building activities." The problem is that this ignores the 5,000 other startups that did all those same things, but failed. Perhaps it turns out that StartupX really succeeded because they had a sales guy with lots of good connections, etc.
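Here is a toy sketch of that point, with entirely made-up counts: a trait shared by essentially every startup (hard work) cannot discriminate between success and failure, while a rarer trait can:

```python
# Toy illustration (all numbers invented): a trait only explains success if it
# is meaningfully more common among successes than among failures.
startups = (
    # (worked_hard, connected_sales_hire, succeeded)
    [(True, True,  True)]  * 8    # successes, mostly with the sales hire
  + [(True, False, True)]  * 2
  + [(True, True,  False)] * 10   # failures, who also all worked hard
  + [(True, False, False)] * 80
)

def rate(trait_index, succeeded):
    """Fraction of the given outcome group that has the trait."""
    group = [s for s in startups if s[2] == succeeded]
    return sum(1 for s in group if s[trait_index]) / len(group)

print(rate(0, True), rate(0, False))  # worked_hard: 1.0 vs 1.0 -> no signal
print(rate(1, True), rate(1, False))  # sales hire: 0.8 vs ~0.11 -> discriminates
```

Pointing at "they worked hard" is conditioning only on the successes; the 90 hard-working failures are what rule it out as an explanation.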
This is even more prevalent in the financial industry, where you have a lot of successful retirees who hold talks and write books about what kind of tea they drank. Statistically, almost nobody beats the market, so these people most likely just got lucky.
Correlation is not causation, everybody knows that. But it is damn hard to figure out what the real causes are, and often the people themselves are biased. I would say a large percentage of the tips you get this way are totally useless, but figuring out which ones depends on your judgment.
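The luck argument is easy to simulate. As a sketch with invented parameters, give 10,000 investors a pure coin flip against the market each year for 10 years, then look at how many "beat the market" every single year:

```python
import random

# Survivorship-bias sketch (parameters invented): no investor has any skill,
# yet some will post a perfect 10-year record purely by chance.
random.seed(0)
n_investors, n_years = 10_000, 10

survivors = sum(
    all(random.random() < 0.5 for _ in range(n_years))  # won every year?
    for _ in range(n_investors)
)
print(survivors)  # roughly 10 expected (10_000 / 2**10), from luck alone
```

Those few survivors are exactly the people who get asked to write books about their tea, because nobody interviews the other 9,990.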
Performance and goals don't have to mean the same thing. An example of a goal is "acquire 30 new customers by the end of the month" and the relevant performance metric there would be "number of new customers acquired." Better performance in that case might mean 50 customers acquired while worse performance might mean 10 customers acquired.
I agree that "it's easy to get frustrated if you bite off more than you can chew - so content yourself with learning simpler things at first." I don't think that contradicts what I wrote; one component of good goals is that they're achievable. In retrospect I can see how you would read that lesson into Dan's post, but what he actually wrote, for those who want to "build skill in the long term," is that "setting firm goals and keeping track of what you're doing is a problem for beginners." I think that is not appropriate advice for most people.
Sorry -- some people struggle with light-on-dark text. I tend to like it (obviously) but I'll probably switch at some point for this reason. I also want to experiment with just making the text larger.
Would be interested in hearing if other people have experienced this issue and had success solving it.
Open roles:
- Senior Backend Engineer, Platform: https://jobs.lever.co/instrumental/4c155db0-8506-4096-ae7d-c...
- Senior Backend Engineer, Web App: https://jobs.lever.co/instrumental/317d0eb2-e2db-498a-b79c-6...
- Senior Full Stack Engineer, Factory Software: https://jobs.lever.co/instrumental/9d9cee6e-a0b2-4484-8171-7...
- and others at https://instrumental.com/careers