
Hey HN!

I’m working on SEOJuice [1], an automated tool for internal linking and on-page SEO optimizations. It's designed to make life a little easier for indie founders and small business owners who don’t have time to dig deep into SEO.

So far, I’ve managed to scale it to $3,000 MRR, and recently made the move from the cloud to Hetzner, which has been a game-changer for cost efficiency. We’re running across multiple servers now, handling everything from link analysis to on-page updates with a bit more control.

The journey’s been a mix of hands-on coding (and a lot of coffee) and constant optimization. It’s been challenging but incredibly fun to see how much can be automated without compromising on quality.

Happy to chat more about the tech stack or any of the growth pains if anyone’s interested!

[1] https://seojuice.io


Oh wow, my package on the front page again. Glad that it's still being used.

This was written 10 years ago, when I was struggling with pulling and installing projects that didn't have a requirements.txt. Getting everything up and running was frustrating and time-consuming, so I decided to fix it; apparently many other developers had the same issue.

[Update]: I do think the package is already at the point where it does one thing and does it well. That said, I'm still looking for maintainers to improve it and move it forward.
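
For anyone who hasn't tried it, the basic flow looks like this (the path is a placeholder):

    pip install pipreqs
    pipreqs /path/to/project    # scans imports, writes requirements.txt
    pipreqs --force .           # overwrite an existing requirements.txt

It walks the source files for import statements and maps them to PyPI package names, which is why it works on projects that never shipped a requirements file.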


> This was written 10 years ago, when I was struggling with pulling and installing projects that didn't have a requirements.txt

And 10 years later this is still a common problem!


Not if you use proper Python environment and package-management tools like PDM or Poetry.
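
With Poetry, for example, the dependencies are declared and locked from day one, so there's nothing to reconstruct later ("requests" is just an example package):

    poetry init            # creates pyproject.toml interactively
    poetry add requests    # records the dependency and pins it in poetry.lock
    poetry install         # reproduces the exact environment elsewhere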


Hey everyone, I know the HN community is very polarizing, and the discussions here are always great to read through, as both sides are always eager to prove the other wrong. I think we need more of that in the community: people not being afraid to disagree.

I'm really curious to hear your thoughts and experiences.


I have a bit of a niggle about the use of the word "polarizing". That word implies things that I think are harmful overall, such as being unwilling to work with people you disagree with.

That said, I agree that it's important to express your opinions and stand by things you think are right. It's equally important to listen to those who disagree with you and take what they say as additional data that may (or may not) lead you to modify your opinion. At the very least, openly and honestly listening to others will inform you as to why they have a differing opinion. "Everyone seems crazy if you don't understand their point of view."

Also, "compromise" isn't a dirty word. It's how we get anything done.


Looking for feedback.

Cheers :)


If you see any bugs, please let me know. The product is quite fresh.


Why not just retrain the Python team in another language? I mean, software engineers aren't really language-specific; they can learn other languages if needed.


They were maintaining Python itself, and were likely very well compensated (as one would expect). It'd be a waste to have these devs do product development.


All SWEs at the same level at Google are making the same compensation (with some exceptions for high-flying AI researchers). The Python SWEs certainly weren't making more than anyone else.


That's not true at all. Location aside, a SWE's compensation depends not only on level, but also on tenure, on performance rating (and the history of ratings), and on stock-market fluctuations (whether the stock price was low or high when the stock was granted).

One of the rumors is that the better compensated you are at your level, the more likely you are to be targeted for layoff, because that saves the most on engineering cost.


None of those depend on the project you're working on, which is my point.


All SWEs at Google are well compensated. Not all of them would be a good fit for maintaining Python.


Then I don't understand the point you were making in your first post.


A waste of talent, not of cash.


That's the thing: it's not clear that the Python core engineers are more talented than other Google SWEs on average. You have all sorts of talented engineers working on all sorts of random projects within Google.


They have three months to find new roles/teams. Their employment only ends if they can't.


Like finding a lunch table to sit at on your first day of school


Assuming this wasn't financially motivated.


https://vadimkravcenko.com

Mostly I help developers grow: I share my thoughts as a CTO on building digital products, growing teams, scaling development, and in general being a good technical founder.

Some of the popular posts are:

- https://vadimkravcenko.com/shorts/things-they-didnt-teach-yo... - Things they didn't teach you at the university

- https://vadimkravcenko.com/shorts/project-estimates/ - Rules of thumb for Project Estimations

- https://vadimkravcenko.com/shorts/contracts-you-should-never... - Contracts you should never sign.

Most of the blog posts have ended up on the Frontpage here, here's the list: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

Cheers, Vadim


Hey there HN! :) Just wanted to share an article I've written about the highly subjective nature of ALL software development practices. With the amount of conflicting best practices on the internet, there's no right way, just a way that balances your specific tradeoffs. What seems like a 'best practice' to one team could be inconceivable to another.


I think the main point is that these AIs don't really read and understand; they read and remember patterns of words, which is different from understanding. They've seen a chain of words often enough that it becomes, statistically, the next best output for a given input (with some randomness in between).
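
A toy sketch of what I mean, with made-up counts (not how a real LLM is implemented, just the statistical mechanics):

    import random

    # Pretend corpus statistics: how often each word followed "the cat".
    next_word_counts = {"sat": 50, "ran": 30, "meowed": 20}

    def sample_next(counts):
        words = list(counts)
        weights = [counts[w] for w in words]
        # Pick the continuation in proportion to how often it was seen:
        # "statistically the next best output", with randomness in between.
        return random.choices(words, weights=weights, k=1)[0]

    print(sample_next(next_word_counts))  # usually "sat", sometimes not

Real models predict over learned embeddings rather than raw counts, but the sampling step is the "randomness" I mean.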


Maybe. Or maybe some lower layers learn to recognise language, some later layers learn to recognise concepts, and some layers above those learn to do some sort of reasoning. Perhaps. We have seen behaviour like that in CNNs, where basic features are built up in the lower layers.

The fact that a single word is output does not imply that the system is only working on that single word. Yes, that's the task, and a good way (as a human) to do that task is to think about what you want to say. So the network may do something like that in between.


I'm tired of this trope. How is remembering patterns of words different from understanding? (Even ignoring the fact that the patterns themselves are not remembered, and that LLMs don't deal with words directly but with their embeddings.)

Without that difference spelled out, your words don't mean anything.


Are you implying that humans do not learn by studying patterns?

For instance, take paragraph A and paragraph B put together: you would say the result makes no sense because the logic is broken. How can you know the logic is broken solely from those words?

The only way meaning could never be extracted from a text is if the text contained no meaning to begin with.

So now words do not contain meaning?


Something about reading between the lines, I think.


And how does a human understand?


Good point.

I made a note in the article to mention that there are a lot of different definitions of these roles, but usually the definitions are the same, just with different naming conventions.

