I've had a similar journey to yours. Started January 1st 2019, early 30s, had a second kid. Loving every minute of it, though I do wonder when the fun is going to end. Every year I'm surprised the business keeps growing. This year my goal is to diversify to try and ease that gnawing anxiety, but every time I try working on something else I feel guilty about ignoring what's already been working for me. Tempted to just sell so I can have a bigger cushion and less distraction on my next venture, but not sure how the current climate would affect the valuation.
I recommend having an initial talk with a broker if you're thinking about it at all.
The brokers I've met are happy to talk if you have a profitable business, even if you don't plan on selling anytime soon, and you don't have to sign anything to commit to them. They can talk to you about what you can do to prepare for an exit even if you don't expect that to happen for several years (and it might take you several years to manage yourself out of the business).
I personally liked the broker I worked with and recommend him.[0]
This is a really good idea, similar to raising money when you don't need it. It could help prevent you from making a mistake or getting into an unattractive position for when you are thinking of selling. I've seen this with one- and few-person companies: everything from messy ownership and cap tables, to tax obligations, to the structure of multi-year sales or contracts. This is why even big PE portfolio companies start planning for a sale years in advance; the difference (making it harder for you, IMO) is that they have a pretty good idea when the sale will happen.
They get more data this way. I think their actual end goal is to supplant the software industry. I would not be surprised if there is an OpenAI OS in the next 5 years.
Being the 'good guy' is just marketing. It's like a unique selling point for them. Even their name alludes to it. They will only keep it up as long as it benefits them. Just look at the comments from their CEO about taking Saudi money.
Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.
How do you parse the difference between marketing and having values? I have difficulty with that, and I would love to understand how people can be confident one way or the other. In many instances, the marketing becomes so disconnected from actions that it's obvious. That hasn't happened with Anthropic for me.
I am a fairly cynical person. Anthropic could have made this statement at any time, but they chose to do it when OpenAI says they are going to start showing ads, so view it in that context. They are saying this to try to get people angry about ads to drop OpenAI and move to Anthropic. For them, not having ads supports their current objective.
When you accept the amount of investments that these companies have, you don't get to guide your company based on principles. Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!" Don't forget, OpenAI had a lot of public goodwill in the beginning as well. Whatever principles Dario Amodei has as an individual, I'm sure he can show us with his personal fortune.
Parsing it is all about intention. If someone drops coffee on your computer, should you be angry? It depends on whether they did it on purpose or it was an accident. When a company posts a statement that ads are incongruous with their mission, what is their intention behind the message?
> Anthropic could have made this statement at any time, but they chose to do it when OpenAI says they are going to start showing ads, so view it in that context.
Obviously they did do it for that reason, but it does make sense. They've positioned themselves from day 1 as the AI company built on more values; that doesn't make them good but it's self-consistent. If, out of the blue earlier on when nobody was talking about ads in AI, they said "we're not going to put ads in AI", that would have been a Suspiciously Specific Denial: "our shirt saying we're not going to put ads in AI has people asking a lot of questions already answered by our shirt".
> Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!"
Yes. But that's not how you'd say it. "First of all, this would go against our established ethical principles, which you knew when you invested with us. Second, those ethical principles define our position in the market, which we should not abandon."
Ideally, ethical buyers would cause the market to line up behind ethical products. For that to be possible, we have to have choices available to us. Seems to me Anthropic is making such a choice available to see if buyers will line up behind it.
Companies, not being sentient, don't have values; only their leaders/employees do. The question then becomes: when are the humans free to implement their values in their work, and when aren't they? You need to inspect ownership structure, size, corporate charter, and so on, and realize that the answer varies with time and situation.
My point is that the leaders have constraints on them that prevent them from actually executing on their values. E.g. imagine leadership dislikes spam, but an institutional investor on the board has warned the CEO that if there's a sales dip before quarterly earnings and the market reacts badly, he'll get fired. So the CEO, against his values, orders the VP of marketing to spam for all he's worth. This stuff gets so internalized that we routinely make decisions at work that go against our values because we know that's what's demanded of us by our organizations.
I think there are two key imperatives that lead to company "psychopathy".
The first imperative is a company must survive past its employees. A company is an explicit legal structure designed to survive past the initial people in the company. A company is _not_ the employees, it is what survives past the employees' employment.
The second imperative is the diffusion of responsibility. A company becomes the responsible party for actions taken, not individual employees. This is part of the reason we allow companies to survive past employees, because their obligations survive as well.
This leads to individual employees taking actions for the company against their own moral code for the good of the company.
See also The Corporation (2003 film) and Meditations On Moloch (2014)[0].
By providing products or services of value, not by maximizing profits at any cost (and definitely not by taking advantage of people or of shortcomings in rules/laws, or by harming people or the environment).
No company has values. Anthropic's resistance to the administration is only as strong as their incentive to resist, and that incentive is money. Their execs love the "Twitter vs Facebook" comparison that makes Sam Altman look so evil and gives them a relative halo effect. To an extent, Sam Altman revels in the evil persona that makes him appear like the Darth Vader of some amorphous emergent technology. Both are very profitable optics to their respective audiences.
If you lend any amount of real-world credence to the value of marketing, you're already giving the ad what it wants. This is (partially) why so many businesses pivoted to viral marketing and Twitter/X outreach that feels genuine, but requires only basic rhetorical comprehension to appease your audience. "Here at WhatsApp, we care deeply about human rights!" *audience loudly cheers*
Anthropic is a PBC, not a "company", and the people who work there basically all belong to AI safety as a religion. Being incredibly cynical is generally dumb, but it's especially dumb to apply "for profit company" incentives to something that isn't a traditional "for profit company".
The difference is whether they are willing to lose money on a personal level. If folks in the company are not willing to sacrifice their comp for good, they are not the "good" guys.
For Anthropic and a lot of startups with very high growth (even including OpenAI 4 years back, or Google or Amazon), they don't have to lose anything to be good, since they can just raise money. But when the growth stops, that's when the test starts.
I believe in "too big to have values". No company that has grown beyond a certain size has ever had true values. Only shareholder wealth maximisation goals.
Anthropic is a PBC. The shareholder goals are public benefit (PB) not "wealth maximization".
(Also, wealth maximization is a dumb goal and not how successful companies work. Cynicism is a bad strategy for being rich because it's too shortsighted.)
Yes, and OpenAI was a not-for-profit, and look how that's going. Now it's a PBC. So Anthropic won't even be the first PBC AI company pretending that they're doing it for the good of the world and then trying to shove in porn and ads for wealth maximisation. Also, most companies that go big have an IPO, and after that it's mostly just about short-term strategies to make the share price go up.
What evidence do you have for that? Your point about Saudi is literally mentioned by the parent as one of the few negative points.
I’m not saying this is how it will play out, but this reads as lazy cynicism, which is a self-fulfilling attitude and something I really don’t admire about our nerd culture. We should be aiming higher.
Agreed. Companies don’t have the capacity to be moral entities. They are driven purely based on monetary incentives. They are mechanical machinery. People are anthropomorphizing values onto companies or being duped by marketing speak.
I mean, yes and. Companies may do things for broadly marketing reasons, but that can have positive consequences for users and companies can make committed decisions that don't just optimize for short term benefits like revenue or share price. For example, Apple's commitment to user privacy is "just marketing" in a sense, but it does benefit users and they do sacrifice sources of revenue for it and even get into conflicts with governments over the issue.
And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and maybe not able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course as you say even he may be pushed to take money from unsavory sources.
Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.
And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.
> and even get into conflicts with governments over the issue.
To be fair, they also cooperate with the US government for immoral dragnet surveillance[0], and regularly assent to censorship (VPN bans, removed emojis, etc.) abroad. It's in both Apple and most governments' best interests to appear like mortal enemies, but cooperate for financial and domestic security purposes. Which for all intents and purposes, it seems they do. Two weeks after the San Bernardino kerfuffle, the iPhone in question was cracked and both parties got to walk away conveniently vindicated of suspicion. I don't think this is a moral failing of anyone, it's just the obvious incentives of Apple's relationship with their domestic fed. Nobody holds Apple's morality accountable, and I bet they're quite grateful for that.
At the end of the day, the choice of companies we can interact with is pretty limited. I much prefer to interact with a company that at least pays lip service to being 'good', as opposed to a company that is actively, plainly evil and OK with it.
That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.
That's probably not true: government regulators require a lot of privacy work, and Android certainly complies with that. Legal compliance is a big-business strategy, because small companies can't afford to do it.
Kind of funny to say you helped make the Harvard CS curriculum and then dropped out. Your own curriculum was not good enough for you? Probably extenuating circumstances, but still seems funny.
Quantized, heavily, and offloading everything possible to system RAM. You can run it this way; it's just barely reachable with consumer hardware with 16 to 24GB of VRAM and 256GB of system RAM. Before the spike in prices you could just about build such a system for $2500, but the RAM alone probably adds another $2k onto that now. Nvidia DGX boxes and similar setups with 256GB of unified RAM can probably manage it, more slowly, at ~1-2 tokens per second. Unsloth has the quantized models. I've tested Kimi, though I don't have quite the headroom at home for it, and I don't yet see a significant enough difference between it and the Qwen 3 models that run on more modest setups: I get a highly usable 50 tokens per second out of the A3B instruct, which fits into 16GB of VRAM with enough left over not to choke Netflix and other browser tasks. It performs on par with what I ask of Haiku in Claude Code, and does better as my own tweaking improves along with the tooling, which updates almost weekly.
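For anyone wanting to try this, the setup described (heavily quantized model, some layers on the GPU, the rest in system RAM) maps onto a llama.cpp invocation roughly like the following. This is a sketch only: the model filename, layer count, context size, and thread count are invented placeholders that you'd tune to your own hardware.

```shell
# Hypothetical llama.cpp invocation: offload what fits into VRAM,
# keep the remaining layers in system RAM. Values are illustrative.
./llama-server \
  --model Qwen3-30B-A3B-Instruct-Q4_K_M.gguf \
  --n-gpu-layers 24 \
  --ctx-size 8192 \
  --threads 16
```

Raising `--n-gpu-layers` until you run out of VRAM is the usual way to find the sweet spot; everything not offloaded runs on the CPU at system-RAM bandwidth, which is where the 1-2 tokens/second figure comes from.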
I have an AMD Epyc machine with 512GB of RAM and a humble NVIDIA 3090. You will have to run a quantized version but you can get a couple tokens per second out of it since these models are optimized to split across the GPU/RAM and it's about as good as Claude was 12 months ago.
Full disclosure: I use OpenRouter and pay for models most of the time, since it's more practical than 5-10 tokens per second, but having the option to run it locally "if I had to, worst case" is good enough for me. We're also in a rapidly developing technology space, and the smaller models get better every year.
There are several different types of work they can do, each one of which has a different hourly rate. The time of day affects the rate as well, and so can things like overtime.
It's definitely a bit of an unusual situation. It's not extremely complicated, but it was enough to be annoying.
Jesus, are you ok? Can’t you just, like, give em a 20 when you get home?
I find it quite funny you’ve invented this overly complex payment structure for your babysitter and then find it annoying. Now you’ve got a CLI tool for it.
GP has provided an anecdote with no supporting evidence, nor any code examples. So it is as fair to assume the story is a fabrication as much as it is to assume it has any truth to it
I am really shocked at the response this trivial anecdote has gotten.
I could state it much more generically: we had an annoying Excel sheet that took ~10 minutes a week, I vibe coded a command line tool that brought it down to ~1 minute a week. I don't think this is unusual or hard to believe in any way.
I didn't choose the payment structure, and the point is that a CLI is not a high bar. Something that we used to spend ~10 minutes a week on with spreadsheets is now ~1 minute/week.
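For what it's worth, the kind of calculation described upthread (different rates per type of work, a time-of-day adjustment, overtime) really is only a few lines of code. Here's a minimal Python sketch; every rate, task name, and threshold below is invented for illustration, not the actual arrangement:

```python
# Sketch of a variable-rate pay calculation: per-task rates, an evening
# premium, and an overtime multiplier. All numbers/names are invented.
from datetime import time

RATES = {            # dollars per hour by task type (illustrative)
    "childcare": 18.0,
    "tutoring": 25.0,
}
EVENING_PREMIUM = 2.0      # extra per hour for shifts starting at/after 8pm
OVERTIME_MULTIPLIER = 1.5  # applies to hours past 8 in a day

def pay_for_shift(task, hours, start, daily_hours_so_far=0.0):
    """Compute pay for one shift, given hours already worked that day."""
    rate = RATES[task]
    if start >= time(20, 0):          # evening shifts earn the premium
        rate += EVENING_PREMIUM
    regular = max(0.0, min(hours, 8.0 - daily_hours_so_far))
    overtime = hours - regular
    return regular * rate + overtime * rate * OVERTIME_MULTIPLIER

print(pay_for_shift("childcare", 3, time(17, 0)))                       # 54.0
print(pay_for_shift("tutoring", 2, time(20, 30)))                       # 54.0
print(pay_for_shift("childcare", 4, time(9, 0), daily_hours_so_far=6))  # 90.0
```

Wrapping something like this in argparse gets you the one-minute-a-week CLI; the annoying part is encoding the rules once, not running them.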
Why didn’t you work out a more manageable billing structure with them?! Or to put it another way: if it took you 10 minutes a week with spreadsheets to even figure out what their bill is, how on earth did they verify your invoices were even correct? And if they couldn’t—or if it took more than 10 minutes each week—why wouldn’t they prefer a billing system they could verify they were being paid correctly?
I’m not trying to give advice, I’m just curious about their arrangement. When I did consulting, I hated billing, and would have wanted a system that was as easy as possible.