Hacker News | past | comments | ask | show | jobs | submit | michaelgv's comments

I’ve always said to hire the people who can do what we need as a business, regardless of their gender


Which is a noble sentiment but simply doesn't happen in the business world (or in many other places). So the hard question is, how do you effectively encourage business to choose the best people regardless of gender?

Some have tried blind interviews but these are very hard to do properly and effectively. A 40% quota is another way to push business into looking beyond gender.


> Which is a noble sentiment but simply doesn't happen in the business world (or in many other places).

On the contrary, all the evidence I've seen points to discrimination being a very small part of the gender pay gap.

The problem is that men and women are different.[1] So it becomes very difficult to even measure if women are being discriminated against much or are just different to men.

[1] https://academic.oup.com/oep/advance-article/doi/10.1093/oep...


Then a minority-only company should be both cheaper and more productive, by supply and demand.


I think there are three potential issues with this. First, minorities are being discriminated against and paid less in this scenario, so it wouldn't be desirable on that metric. Second, competition in many markets isn't robust enough to provide a level playing field if you wanted to start a new company staffed entirely by "minorities". Third, an all-"minority" company would drastically reduce the hiring pool, so even with healthy competition it would be hard to hire well (which could wipe out any minor efficiency gains).


As someone who just recently went through CDN hell and rebuilt our entire CDN network from the ground up (software and hardware), I was wondering: why did you pick RoR?


It’s what I know best and what I’m most productive in. The project is to get something running and learn a handful of new things, and learning a new framework would be a detriment to that first goal.

The manager app is not in the hot path with this design so performance doesn’t matter all that much.


Are you designing this CDN to pull from origin and cache temporarily? Or to serve from local files with strong cache headers?

If you need a hand let me know, I’ve built pretty large CDNs before (10M r/s at peak)


The former to start but I want to add push zones and/or “s3sync” zones that proactively sync an s3 bucket to local disk.

Thanks for the offer! I might just take you up on it :)


Just be careful, understand that if you do a PULL only CDN, you're not going to gain big benefits. If you do want a pull only CDN, have a background task runner to retrieve the files, and update them locally.


> understand that if you do a PULL only CDN, you're not going to gain big benefits.

This statement makes no sense. A CDN edge node is just a cache; its size and your access patterns determine the hit ratio.

At $dayjob we get Nginx cache hit ratios on our edge in excess of 99% with an origin-fetch setup. That is a very large benefit.

Cloudflare works entirely on origin fetch. They seem to be doing okay.


Sure. I have Nginx set to keep files around for a long time, serve stale content, and refresh in the background, but proactively refreshing periodically is a good idea.
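For reference, the serve-stale-and-refresh-in-the-background behavior described here maps onto a handful of Nginx directives. A minimal sketch, not the poster's actual config; the `origin` upstream name, paths, and sizes are placeholders:

```
proxy_cache_path /var/cache/nginx/cdn levels=1:2 keys_zone=cdn:100m
                 max_size=50g inactive=5d use_temp_path=off;

server {
    location / {
        proxy_pass http://origin;
        proxy_cache cdn;
        proxy_cache_valid 200 10m;
        # Serve the stale copy while a background request refreshes it.
        proxy_cache_use_stale error timeout updating http_500 http_502;
        proxy_cache_background_update on;
        # Collapse concurrent misses into a single origin fetch.
        proxy_cache_lock on;
    }
}
```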


What would you suggest and why? Not a loaded question for all the people eager to downvote.


Personally I would build it in something like Go. I've done a lot of work in Rails, and I would probably build the signup/profile/interface in Rails 5.2 but use a high-performance Go framework for the really intensive stuff.

I've been considering building my own, but getting a gossip protocol up and running so nodes can share data between themselves isn't the easiest thing in the world to code.
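The merge step at the heart of such a gossip protocol can be sketched in Go. This is a toy anti-entropy version with per-key version counters; the type names are invented, and it omits everything that makes real gossip hard (networking, random peer selection, failure detection, conflict resolution across concurrent writers):

```go
package main

import "fmt"

// entry is a value plus a version; the higher version wins on merge.
type entry struct {
	val     string
	version int
}

// node holds one replica of the shared key/value state.
type node struct {
	id    string
	state map[string]entry
}

func newNode(id string) *node {
	return &node{id: id, state: map[string]entry{}}
}

// set writes locally, bumping the key's version.
func (n *node) set(key, val string) {
	e := n.state[key]
	n.state[key] = entry{val: val, version: e.version + 1}
}

// merge is one gossip exchange: adopt any entries the peer
// has that are newer (higher version) than our own.
func (n *node) merge(peer *node) {
	for k, pe := range peer.state {
		if le, ok := n.state[k]; !ok || pe.version > le.version {
			n.state[k] = pe
		}
	}
}

func main() {
	a, b, c := newNode("a"), newNode("b"), newNode("c")
	a.set("asset:/logo.png", "v2")
	// Two pairwise exchanges propagate the update to every node.
	b.merge(a)
	c.merge(b)
	fmt.Println(c.state["asset:/logo.png"].val) // v2
}
```

A real implementation would run `merge` on a timer against randomly chosen peers, which is where the "hated every second of it" part tends to live.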


It's a pain to code. I've done it, and I hated every second of it. Keeping dynamic data in sync in near real-time is terrible.

I wrote the CDN in Go, with Redis and a smaller Go-powered daemon to retrieve assets every 20 seconds, sync them to a local storage drive, and after 5 days retrieve them again; if there are no requests within 48 hours, it clears the unused items.

Then I set up a system so that if one edge requests an "unpopular" file, it pings a simple REST API and has all the other edges pull that file, thus allowing the edges to stay "one step ahead" of the user load.


Yeah, when I think it through it comes down to a hard math problem, because you have to maintain the state of the local files: whether they should live in memory vs. SSD vs. another node. Did you use an LRU cache for expunging less-utilized resources?
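An LRU policy like the one asked about can be sketched with Go's `container/list`. For brevity this toy version counts entries rather than bytes and isn't safe for concurrent use; both would matter on a real edge:

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a minimal least-recently-used cache for expunging cold assets.
type lru struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> element in order
}

type pair struct {
	key string
	val []byte
}

func newLRU(capacity int) *lru {
	return &lru{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lru) Get(key string) ([]byte, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el) // touching an entry makes it "hot" again
	return el.Value.(*pair).val, true
}

func (c *lru) Put(key string, val []byte) {
	if el, ok := c.items[key]; ok {
		el.Value.(*pair).val = val
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&pair{key, val})
	if c.order.Len() > c.cap {
		// Expunge the least recently used entry.
		last := c.order.Back()
		c.order.Remove(last)
		delete(c.items, last.Value.(*pair).key)
	}
}

func main() {
	c := newLRU(2)
	c.Put("a", []byte("1"))
	c.Put("b", []byte("2"))
	c.Get("a")              // "a" is now hot
	c.Put("c", []byte("3")) // evicts "b", the coldest entry
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```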


State is much less important to track; it's the easier part. The real challenge is garbage collection: you need it, but you don't want to keep too much in memory. That's why Redis is a great tool for our edge servers.


And nothing has made me realize just how slow the speed of light is until I started looking into the CAP theorem and distributed databases like CockroachDB.


Does that make them “Professional Dicks”?


Is this some kind of bust?


Meltable. Careful.


I’d be open to this if it was built in Firefox and let you run the models on your own machine instead of blindly trusting a third party.


It sounds like you would prefer Pocket Recommendations then, as that is exactly how it works.


I imagine that when you request your data be deleted, it's only anonymized, and the actual learning data is kept.


Precisely. The key, in my opinion, is that you need:

1. Someone you can trust

2. Someone you're not afraid to speak up with or against, without fear of personal repercussion, within reason (i.e., if you call each other crap every day, you're not a good match most of the time)


Instead of asking me to complete some boring challenge, bring me into your office and let me help solve a real issue you're facing. That way, at least, neither of us wastes time on challenges that only prove you have free time on your hands.


BUT also pay me for my time at market consulting rates.


So, in other words, "don't interview me at all, just hire me"?


No -- pay candidates for the day they spend interviewing with you.


I received one also; unfortunately I don't have a co-founder, but I have been seeking one out.


What have you tried in seeking one out? I notice there's no contact info on your profile.


I've searched local circles to see if there's someone I'm not best friends with, someone I can argue with without long-term feelings on the line, and someone I feel I can trust.


Didn’t know the Git trick, goodbye gitkraken!


My plans for future cloud platforms are to give you the ability to create your own platform file that houses all the required code to get it running, and then to have a simple command to install them, i.e...

ignite install --platform {platform_name}

All platform code will be saved in your home directory under a folder called .ignite, ie: /home/Michael/.ignite/platforms/Heroku ...etc

This new platform design will be in the next update, and you’ll be able to generate a base template by doing...

ignite create --provider {provider_name}

Then, in terms of sharing the provider file and all its sub-files, you would use the same ignite share command with a --provider flag, and then it's a matter of sharing and installing it.

Apex is very good, but I draw the line at universal reproducibility. The goal of Ignite is to be an AIO package that can manage configuration, dependencies, and deployment without having to create complex structures for deployment/sharing/etc. The key factor is that everything created must be reproducible regardless of platform, IDE, etc. That's why Ignite exists: to help take away that pain and have a very simple CLI do the tasks automatically.

