alexk's comments

For folks interested in a 101 on linear algebra - I highly recommend the book "Linear Algebra: Theory, Intuition, Code" by Mike X Cohen.

After trying a couple of courses and books, I liked it the most because it gives a pretty deep overview of the concepts alongside NumPy and MATLAB code, which I found refreshing.

It has a good amount of proofs and has sections designed to build your intuition, which I really appreciated.


Teleport (YC S15) | Senior Technical Support Engineer | US, Europe, Canada, Remote OK | https://goteleport.com

We are looking for Senior Tech Support Engineers who can script, love systems troubleshooting and helping customers.

Join Us:

https://jobs.lever.co/teleport/7c7f25dd-21c7-45c9-a107-98d8a...

Most of our code is Go; we have very little technical debt, and our codebase is clean and small.

We expect you to be comfortable with the following:

  * Scripting in Python, Go or Bash
  * Deep understanding of Linux and networking administration
  * Understanding of modern infra stacks - Kubernetes, Terraform, Ansible
  * Scalability or security experience for systems software is welcome.
What to expect once you apply:

  * We will send you a 30-50 minute SRE hacking/troubleshooting quiz
  * You will join 30 minute intro call and we will walk you through the compensation, interview process and requirements.
  * You will join a live troubleshooting session simulation and try to find all issues we planted in a cloud infrastructure.

https://jobs.lever.co/teleport/7c7f25dd-21c7-45c9-a107-98d8a...


Small error with the links.


Thank you! I made a copy-paste error, fixed.


You can't leak API keys if there are no API keys to leak! The article recommends OIDC for apps, which is a step up, especially if you rotate the bearer token. However, there is another option: short-lived certs.

Our project Machine ID is replacing API keys with short-lived certificates:

https://goteleport.com/docs/machine-id/introduction/

Another great option is SPIFFE: https://spiffe.io/

Adoption is slower than we wanted, because it's not trivial to replace API keys, but we see more and more companies using mTLS + short-lived certs as an alternative to shared secrets.


How does this approach practically differ from using short-lived JWTs+TLS?


The short version is that with mTLS and short-lived certificates you don't have to worry about anyone stealing and re-using your JWT tokens, or about revoking tokens.

LVH from Latacora explains it way better than I could in "A child's garden of inter-service authentication" [1]

However, here is my view:

If your token is not bound to the connection, someone can steal and reuse it, just like any other token. It is possible to use OAuth token binding [2], but at this level of complexity, mTLS + short lived certs deliver the same security and are easier to deploy.

It's easy to mess up JWT signatures, although, to be fair, it's not like the X.509 certificate format is any better; it has simply been tested over more years of use.

[1] https://latacora.micro.blog/2018/06/12/a-childs-garden.html [2] https://connect2id.com/learn/token-binding


Interesting. You already don’t have to worry about revoking JWTs if they’re sufficiently short-lived. This gives you the exact same level of protection as a short-lived mTLS cert, because if that gets stolen the attacker can continue to establish connections until it expires, unless, as you say, you revoke the certificate. So clearly I am missing something.


The key difference is that with the mTLS approach you'd have to steal the private key of the client certificate if you want to impersonate the client. In most secure deployments of mTLS and short-lived certs, the private key never escapes the TPM, secure enclave or YubiKey, so it's extremely hard to mount an attack and impersonate a service.

With a JWT (assuming it's not bound) you can steal the token and re-use it until it expires.


I still don't get the difference.

If someone steals your secrets you're screwed. No matter what kind of secrets that are. That's clear.

But if you keep your secrets in an HSM (TPM, SmartCard, …) and only use them to derive session keys directly on the secure device, there is absolutely no difference which concrete tech you're using (given that secure cryptography is in place).

mTLS is a great approach, no question. But I just don't see how it's more secure than any other public key crypto.


If I'm not mistaken, the difference is that in the case of JWT, your app manipulates the secret directly, so it must show up in clear form somewhere, from the app's perspective.

So, if the app host is compromised, the attacker shouldn't have too hard a time extracting the JWT and using it from somewhere else.

In contrast, with an HSM, the attacker would need to have the HSM sign any new connection attempt, which should be a bit more involved if it happens on a different machine.


mTLS tends to add infrastructure complexity; I much prefer it when you can "terminate" auth where you like in the ingress path for user-facing stuff, rather than doing first-hop auth.

I wish there were "real" standards and a better ecosystem for request signing.


Right! Which is why we use (public) short-lived JWTs and (private) long-lived refresh tokens. What’s missing?


Frankly, I think it will take years to replace API keys (if it ever happens). Developers are much better off using CLI tools that prevent leaking secrets by blocking commits to git (e.g., https://github.com/Infisical/infisical or https://github.com/trufflesecurity/trufflehog)


I don't think those are mutually exclusive options :) Most developers, especially those with lots of legacy apps, are better off using a secrets manager. But there is no reason not to push the boundaries of security for new software and onboard passwordless and secretless options.

P.S.

I tried Infisical a couple of months ago. I think if I were the Hashicorp Vault team's PM, I'd be worried. Your team has done such a great job at UX. I was astonished to see an early startup with such a great integration catalog. I think you aced it - modern developers are desperate for out-of-the-box integrations with the 100+ services they have to use every day.


Wow! Thank you, Alex. This feedback means a lot coming from you! We're huge fans of Teleport, and learned a lot from you as a fellow YC company :)


No problem! Keep it up with the out-of-the-box integrations, focus on UX and developer experience, and I think you will be on track to become as big as or bigger than Hashicorp :)


How do you generate the short-lived certs?


Sasha, CTO @ Teleport here. Thank you for the kind words! And congrats to the Tailscale team on launching their SSH product.

Let me share a bit more about our auditing capabilities:

Teleport captures session PTY output and stores it in S3 or any S3-compatible storage for your records by default.

If you would like additional, more in-depth insight into the session, Teleport captures syscalls, file access calls and network calls made during the SSH session by correlating them with the session's cgroup using our BPF module:

https://goteleport.com/docs/server-access/guides/bpf-session...

Teleport provides a lot of other in-depth SSH integration for auditing and compliance; for example, we support moderated sessions (access control with a required session moderator) and per-session MFA.


Sasha, CTO@Teleport here. Congrats on the launch!

RE: Teleport design

Teleport does not require a centralized proxy, because it is based on certificate authorities. You can issue a certificate with or without Teleport proxy and access any cluster that trusts that certificate directly.

Because of this design you can have a completely decentralized system, with cold storage for your CA, HSM or any parallel system issuing certificates. There is also no need to revoke your credentials, because your certs are short-lived and bound to the device and cluster, so there is less opportunity for pivot attacks.
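
As an illustration of why no central proxy is needed (a sketch with Go's standard library, not Teleport code; names are made up): any endpoint holding only the CA certificate can verify a client certificate locally with plain X.509 chain verification:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// buildAndVerify issues a client cert from a standalone CA, then verifies
// it the way any trusting endpoint could -- no central proxy in the path.
func buildAndVerify() error {
	// The CA key can live anywhere: cold storage, an HSM, a parallel issuer.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "org-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		return err
	}

	// Short-lived and bound to a named identity; no revocation list needed.
	clientKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "alice"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, err := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	clientCert, err := x509.ParseCertificate(clientDER)
	if err != nil {
		return err
	}

	// Any cluster that trusts the CA authenticates "alice" locally.
	roots := x509.NewCertPool()
	roots.AddCert(caCert)
	_, err = clientCert.Verify(x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	})
	return err
}

func main() {
	fmt.Println("trusted:", buildAndVerify() == nil)
}
```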

RE: GRPC

The first version of Teleport also had an HTTP/JSON REST API, but we migrated to gRPC to support event streaming and to have one type system across multiple languages and service boundaries.

Re: Managed clusters

Teleport supports all CNCF-compatible clusters, including AKS, EKS and GKE out of the box.


Great point on GRPC having better support for event streaming! We originally built Infra to have a GRPC API, but many users we spoke to didn't yet have load balancers or ingress controllers that supported the GRPC protocol (e.g. one user had to consider upgrading their AWS Load Balancer controller to put Infra behind it).

We wanted to remove as many hurdles as possible for teams to deploy Infra in their environments. Event streaming will invariably become an important part of the API (e.g. for features like audit logs), and we'll consider GRPC again for internal components of Infra.

RE using Teleport without the proxy, how would a target cluster's Kubernetes API server (e.g. an EKS cluster) verify certificates without Teleport's proxy?


> one user had to consider upgrading their AWS Load Balancer controller to put Infra behind it

Huh?

The AWS load balancer for which gRPC is relevant is their Application Load Balancer (ALB), which would require you to terminate TLS at the ALB and does not support mutual TLS (which is how short-lifetime client certificates work in this case). To the best of my knowledge, you can't pass through a client-key-encrypted gRPC session through an ALB (maybe I'm wrong?).

Typically this requires an NLB, which will treat all TCP traffic (REST and gRPC) the same, so gRPC wouldn't require an upgrade?


Re: GRPC

My bet is that you'd migrate to gRPC eventually as you scale :) I like the simplicity of an HTTPS/JSON API as well, but it just broke down for us at a certain scale point.

Re: Teleport with EKS

True, CNCF clusters support mTLS out of the box, but EKS hides the endpoint and does not let you provision a CA to trust. You will have to run a Teleport proxy inside the EKS cluster to translate mTLS to EKS IAM auth. However, you don't have to have a centralized proxy; you can just deploy a Teleport proxy agent in each cluster and hide your K8s endpoint.

You also don't have to have a single Teleport proxy to do that.


Thanks! Curious, where did HTTP+JSON break down for you? Was it specifically around audit/event streaming? This would be helpful as we consider building out future updates to Infra, especially considering that tools like Kubernetes have put HTTP+JSON APIs to the test (at least in their user-facing APIs).

Indeed! EKS and others don't allow custom authentication methods or let you use an external CA for the cluster. Running a proxy agent in each cluster makes sense and is similar to how Infra approaches it; I hadn't seen that configuration in your architecture pages!

Have you considered distributing certificates signed by the cluster CA itself (to avoid proxies altogether)? From 1.22 onwards there's a new ExpirationSeconds field when creating a certificate signing request: https://github.com/kubernetes/enhancements/issues/2784 . I imagine this will be supported by all the hosted Kubernetes services - we've been watching this closely.


this looks like a centralized proxy to me: https://goteleport.com/docs/architecture/proxy/

are you saying that because you can have multiple proxies, they aren't centralized? or that at least this is one mode you can use, but the standard one is using a proxy?


Teleport consists of a few components:

* The proxy handles SSO and the Web UI, and intercepts traffic for session capture. You can have one proxy for your whole organization, multiple proxies or, if you don't want to intercept traffic, no proxies at all.

* The auth server issues certificates and sends audit logs and session recordings to external systems.

* Nodes (end-system agents) are sometimes helpful, but not required. For example, if you want to capture system calls in your SSH session, you can deploy a node. Or you can use OpenSSH with Teleport if you wish.

Because Teleport is based on certificate authorities, the following deployments are possible:

* One, "centralized" HA pair of proxies intercepting all your traffic (K8s, databases, web, etc). This is actually helpful for many cases, as you have just one entry point in your system to protect, vs many.

* Multiple, "decentralized" proxies in multiple datacenters. This is helpful for large organizations with many datacenters all over the world.

* No proxies at all. You can issue certificates with or without Teleport and reach your target clusters directly, as long as they trust the CA. It's a bit harder for managed K8s, but easy to do with self-hosted K8s, SSH, databases, etc. that support mTLS cert auth. This is super helpful for integrations with the larger ecosystem - any system that supports cert auth should work with Teleport out of the box.

* You can have one auth server HA pair managing a single certificate authority.

* You can have multiple, independent auth servers (teleport clusters) with certificate authorities and trust established between them.

* You can use your own CA tooling with Teleport.

The way we think about Teleport is that it's a combination of a certificate authority management system, proxies (intercepting traffic and recording sessions) and nodes (for some services, like SSH, providing advanced auditing capabilities with BPF).

You can combine those components, or replace them with whatever makes sense.


does the standard deployment use a centralized proxy?

like it's your Basic Architecture in the diagram in your docs. so i feel like i'm being put on.


Sorry you feel that way!

We haven't counted, but my bet is that most smaller deployments just use a single proxy.

I also know that most larger deployments use multi-DC and multi-cluster design with independent CAs for availability and latency.


that answers my question perfectly. thank you!


Sasha, CTO @ Teleport here.

I agree, our enterprise product is quite expensive. Let me explain why:

* We go through security audits by third-party agencies several times per year. We try to hire the best security agencies to audit our code, and that is quite expensive.

* We recruit globally and try to place our comp at the 90th+ percentile as listed on opencomp.com and other sources we have access to.

* Our sales process also takes time, and the sales team employs sales engineers, sales and customer success specialists to assist with deployments of such a critical piece of the infrastructure.

* For all our employees we have wellness benefits, home office improvement and personal development budgets, and healthcare packages.

All of the factors above add up, and we charge a lot for building a quality security product supported 24/7 across the globe.

However, this might not work for everyone, and we have a completely free and open source version that people can use without ever talking to our sales team:

https://github.com/gravitational/teleport


Hey Sasha :) Price should be justified by value to the customer, not overhead costs of the company. Even though your value/benefits are listed on the site, this is a good opportunity to reiterate them.


It’s an intersection of those two things. Hawks can profitably prey on squirrels, while lions could not.

There’s room in the security market for $10/mo/user products and room for <whatever it is that Teleport charges>. If not, they’ll find out in an expensive and painful fashion…

Given that they have paying customers, their price is justified to at least those customers.


gk1 thanks, this is a valid point!

Teleport solves many quite important problems for our enterprise customers' infrastructure. Our users use Teleport to replace secrets and static keys with short-lived certificates, manage certificate authorities, add audit and compliance controls for access to critical data, and consolidate access for SSH, Kubernetes, databases and desktops.


You have no idea how much money you are leaving on the table because of your insane pricing strategy. Your expenses do not scale with a customer's use. Amateur mistake.


I don’t follow this comment. The last time I engaged with Teleport’s sales team they quoted somewhere between $40-$80/host (server, VPS, etc). That seems like it would definitely scale with use.

Edit: per year. And there was a minimum order quantity.


Hey, I'm Sasha, CTO @ Teleport. I have designed our interview process and have described it here:

https://goteleport.com/blog/coding-challenge/

We are also trying to be as transparent as possible with our challenges being open source:

https://github.com/gravitational/careers/tree/main/challenge...

and requirements being published here:

https://github.com/gravitational/careers/blob/main/levels.pd...

I am sorry to hear that you had a bad experience. Our interview process is a trade-off and has one big downside - it may take more time and effort compared to classic interviews. It could also feel disappointing if the team does not vote in favor of the candidate's application.

However, if there was something else wrong with your experience and you are willing to share, please send me an email to sasha@goteleport.com.


non-involved opinion here - it appears that a self-confident and clearly communicating C*O person is explaining exactly why the company is completely correct, while at least two actual non-company people show examples of this not being the case. Isn't it common for self-assured execs to explain away all the objections of outsiders, despite evidence directly presented? Looks like it here. $0.02


bingo. CTOs should realise that job ads are screened by devs just as they attempt to screen for mini-me's and protect from dead weight.

Devs (who aren't desperate for riches) look at your company and think: how cr*p would it be to work there? Where are the indicators?


No specific criticism of the process was offered, so a general justification is warranted.

Personally I became interested in working for Teleport in large measure because the interview process tested my practical skills, rather than having me pull leetcode trivia out of my ass. I haven’t regretted my decision whatsoever; all of my engineering teammates here that I’ve worked directly with are very responsible and competent, and the company appears to be growing mostly in the right directions.


I like Teleport. If you're doing work samples, why is your team voting in favor of applications? Part of the point of work samples is factoring out that kind of subjectivity.


That's a fair question. The team votes on specific aspects of implementation that can not be verified by running a program, for example:

* Error handling and code structure - whether the code handles errors well and has a clear and modular structure, or crashes on invalid inputs, or works but is all in one function.

* Communication - whether all PR comments have been acknowledged during the code review process and fixed.

Others, like whether the code sets up HTTPS properly and has authn, are more clear-cut.

However, you have a good point. I will chat with the team and see if we can reduce the number of things that are subject to personal interpretation and replace them with automated checks going forward.


We're a work-sample culture here too, and one of the big concerns we have is asking people to do work-sample tests and then face a subjective interview. Too many companies have cargo-culted work-sample tests as just another hurdle in the standard interview loop, and everyone just knows that the whole game is about winning the interview loop, not about the homework assignments.

A rubric written in advance that would allow a single person to vet a work sample response mostly cures the problem you have right now. The red flag is the vote.


That's a fair concern. We don't add extra steps to the interview process; our team votes only on the submitted code. However, we did not spend enough time thinking about automating as many of those steps as possible, as we should have.

For some challenges we wrote a public linter and tester, so folks can self-test and iterate before they submit the code:

https://github.com/gravitational/fakeiot

I'll go back and revise these with the team, thanks for the hint.


The good news is, if you've run this process a bunch of times with votes, you should have a lot of raw material from which to make a rubric, and then the only process change you need is "lose the vote, and instead randomly select someone to evaluate the rubric against the submission". Your process will get more efficient and more accurate at the same time, which isn't usually a win you get to have. :)


Disclaimer: I'm a Teleport employee, and participate in hiring for our SRE and tools folks.

> A rubric written in advance that would allow a single person to vet a work sample response mostly cures the problem you have right now. The red flag is the vote.

I argue the opposite: Not having multiple human opinions and a hiring discussion/vote/consensus is a red flag.

The one engineer vetting the submission may be reviewing it before lunch or may have had a bad week, turning a hire into a no-hire. [1] Not a deal breaker in an iterated PR review game, but rough for a single-round hiring game. Beyond that, multiple samples from a population give data closer to the truth than any single sample.

There is also a humanist element related to current employees: Giving peers a role and voice in hiring builds trust, camaraderie, and empathy for candidates. When a new hire lands, I want peers to be invested and excited to see them.

If you treat hiring as a mechanical process, you'll hire machines. Great software isn't built by machines... (yet)

[1] https://en.wikipedia.org/wiki/Hungry_judge_effect


Disclaimer: this comment ticked me off a bit.

If you really, honestly believe that multiple human opinions and a consensus process is a requirement for hiring, I think you shouldn't be asking people to do work samples, because you're not serious about them. You're asking people to do work --- probably uncompensated --- to demonstrate their ability to solve problems. But then you're asking your team to override what the work sample says, mooting some (or all) of the work you asked candidates to do. This is why people hate work sample processes. It's why we go way out of our way not to have processes that work this way.

We've done group discussions about candidates before, too. But we do them to build a rubric, so that we can lock in a consistent set of guidelines about what technically qualifies a candidate. The goal of spending the effort (and inviting the nondeterminism and bias) of having a group process is to get to a point where you can stop doing that, so your engineering team learns, and locks in a consistent decision process --- so that you can then communicate that decision process to candidates and not have them worry if you're going to jerk them around because a cranky backend engineer forgets their coffee before the group vote.

I don't so much care whether you use consensus processes to evaluate "culture fit", beyond that I think "culture fit" is a terrible idea that mostly serves to ensure you're hiring people with the same opinion on Elden Ring vs. HFW. But if you're using consensus to judge a work sample, as was said upthread, I think you're misusing work samples.

You can also not hire people with work samples. We've hired people that way! There are people our team has worked with for years that we've picked up, and there are people we picked up for other reasons (like doing neat stuff with our platform). In none of these cases did we ever take a vote.

(If I had my way, we'd work sample everyone, if only to collect the data on how people we're confident about do against our rubric, so we can tune the rubric. But I'm just one person here.)

Finally: a rubric doesn't mean "scored by machines". I just got finished saying, you build a rubric so that a person can go evaluate it. I've never managed to get to a point where I could just run a script to make a decision, and I've never been tempted to try.

I'll add: I'm not just making this stuff up. This is how I've run hiring processes for about 12 years, not at crazy scale but "a dozen a year" easily. It's also how we hire at our current company. I object, strongly, to the idea that we have a culture of "machines", and not just because if they were machines I'd get my way more often in engineering debates. We have one of the best and most human cultures I've ever worked at here, and we reject the idea that lack of team votes is a red flag.


Strongly agree with this, two key concepts in particular:

1. Using group discussion to make the principled rubric is incredibly respectful of everyone’s (employee and candidate) time, not just now but future time. Using the rubric is also unreasonably effective at getting clearer pictures of people quickly.

2. Systematic doesn’t mean automated, and that hiring should aspire to be systematic to the point it makes no difference who interviewed the candidate, and all the difference which candidate interviewed.

I’ll add one …

3. If you have a rubric setting a consistent bar, share feedback with the candidate in real time (such as asking to ‘help me understand your choice I might have done differently?’) as well as synthesized feedback at the end: “This is my takeaway, is it fair?”

Contrary to urban legend this never got us sued. Every candidate, particularly those being told no, said it was refreshing to hear where they stood and appreciated the opportunity to revisit or clarify before leaving the room. Key is non judgmental clear synthesis with, “Is that fair?”


You’re mistaken, we do have a rubric. All of the members of the interview team grade the interviewee according to the rubric, and the scores are then combined into “votes”.


That's good. I'm responding to "Not having multiple human opinions and a hiring discussion/vote/consensus is a red flag". I think having combined scores is an own-goal, but having people vote based on their opinions is something worse than that (if you're having people do work samples).


Thanks for replying!

Here's what I think it boils down to: working on a codebase with your coworkers is (or at least certainly should be) an inherently collaborative process. On the other hand, a job interview is, in a sense, inherently antagonistic. No matter what shape the interview takes, these people aren't your friends, they aren't your coworkers, they are gatekeepers.

I already have a job as a programmer. At work, I can push back on my coworkers and debate the merits of various designs until we all reach a consensus. But with the Teleport interview, there's an inherent power imbalance that makes that impossible: "I'd really like to argue about this, because I don't think I agree, but I'm afraid that will decrease the chances of them hiring me."

And the only people who are in a position to change this process are the ones who have already gotten through it successfully.


From my perspective you’re unfairly projecting bad faith onto Teleport and shooting your self in the foot in the process.

1) You’re assuming that a good faith argument would decrease the chances of us hiring you, but for the most part that isn’t the case. We’re an engineering company building a complex security product — the only way that can be done well is via a culture that’s perennially open to criticism, debate, and going with the better argument. In my tenure at Teleport, I’ve never experienced explicit or implicit punishment for voicing my opinion, even when it contradicted a more senior engineer’s opinion. The argument has always been evaluated on its merits and the correct option taken. An interviewee making a good argument and proving an interviewer wrong should, and based on my experience would, increase your chances of being hired.

2) I can imagine you retorting that even if that’s truly the case at Teleport, there’s no way you could know that beforehand, and due to the “antagonistic” nature of us being the “gatekeepers”, you’re forced to assume the worst. But if your goal is to work in a collaborative environment where criticism and debate is tolerated, then your implicit strategy makes no sense. If Teleport is that type of place you’d like to work, then pushback in the interview process will be well received; if it isn’t, then you won’t even get an offer. So you have nothing to lose by giving your true opinion, but if you assume the worst and self censor in an attempt to brown nose the hiring team, you risk ending up in a shitty work environment that you were hoping to avoid.


Yep - imbalance, dynamics, so much to skew the process. If you think your interview process works, great, but likely it doesn't and you just got lucky. All the good people you screened out vs all the cruft you saved yourself from... you will never know!

Being a programmer isn't about what you know, it's about how you learn. Born programmers vs learned programmers - you got a coding test for that? Really? If you think you can screen for anything more than familiarity, you've been sniffing that corporate glue for too long.

If you come to me thinking I am suitable for a job, you reach out via LinkedIn, you see my public repos, and then you ask me to code for you on demand like a monkey?! Pull the other one!

(not referencing OP; this is a general comment on interview processes)


Teleport (YC S15) | Backend, Fullstack, Frontend Engineer | US, Europe, Canada, Brazil, Australia, Remote OK | https://goteleport.com

Do you enjoy building security and deployment tools for other engineers? Join us to hack on https://github.com/gravitational/teleport anywhere in the U.S., Canada, Europe, Brazil, or Australia. Most of our code is Go; we have very little technical debt, and our codebase is clean and small.

If you are a backend or fullstack engineer, we expect you to be comfortable with the following:

  * Go or Rust for backend and Typescript for frontend engineers
  * Linux, networking.
  * Scalability or security experience for systems software is welcome.
We’re looking for senior and junior engineers to join the Teleport team. Teleport is a company started by engineers to build products for engineers. We are a stable and growing company.

We offer:

  * Competitive salary and equity.
  * 401k with company match.
  * Excellent health insurance.
  * Work anywhere in the U.S, Canada, Europe, Australia, Brazil
Apply: https://jobs.lever.co/teleport

What to expect once you apply:

  * We will send you a 20-30 minute programming quiz
  * You will join a 30 minute intro call and we will walk you through the compensation,
    interview process and requirements.
  * You will join a Slack channel and submit a coding challenge in Golang, Rust or Typescript, depending on the position, using GitHub.


Hey there, very interested in a junior role but it looks like everything listed is a senior SWE role. Are there any junior positions still open?


Teleport is kinda magic. Unified remote access is a super tricky problem, and nothing I know of solves it quite as neatly as Teleport does. Latacora clients either solve a simplified problem (because they don't have the hard mode version yet and they can cobble together what they need with e.g. AWS SSM + RDS w/ IAM), use Teleport, or don't have a great answer here.

I've known Sasha ('alexk) for the better part of a decade; they and the entire team there are great.

