Calavar's comments

This is a good point. It is not humanly possible to verify every claim you read from every source.

Ideally, you should independently verify claims that appear to be particularly consequential or particularly questionable on the surface. But at some point you have to rely on heuristics like chain of trust (it was peer reviewed, it was published in a reputable textbook), or you will never make forward progress on anything.


> In this case, the idea that Cantor can't do something because the former head is now in a government job is crazy. No one "in the business" thinks Cantor is suddenly hobbled.

That's not the idea, and it almost seems like a straw man to be honest. The actual idea is that the current head of Cantor can't do something because he's a direct relative of a high ranking government official whose powers and job duties present a conflict of interest for this specific set of transactions.


Cantor Fitzgerald is an investment bank. Rather than claim a straw man, think about what they do and how it interacts with the administration. Everything they do is heavily regulated. If they couldn't do anything that gave an appearance of a conflict, they literally couldn't do a single thing that makes up their business and would be hobbled.

I think a lot of people feel that someone with one foot in a heavily regulated industry shouldn't have their other foot in the body that regulates that industry.

> If they couldn't do anything that gave an appearance of a conflict

This time I won't say maybe - that's a straw man.

I never said Cantor shouldn't be able to do anything that even gives the appearance of a conflict. Or anything even close to that really.

As you said yourself further up the thread, investments of investment bank employees are highly regulated. And not only employees themselves, but also their immediate family members.

Yet that same level of legal regulation doesn't apply to immediate relatives of government officials. We've seen it frequently with spouses and children of congressmen, and now we're seeing it with the son of a cabinet member. Yes, this may technically be legal, but legal does not equate to just and desirable. This reads to me like a serious loophole in the law that needs to be closed.


You are just out of your depth in this area. You don't understand what Cantor has done here, you don't understand what Howard can or cannot do in his role.

Howard Lutnick's positions have been directly opposite of what Cantor has bet will happen. Cantor has 10 or 12 thousand employees and is constantly doing all manner of things. Howard has no power over the Supreme Court. His son is the chairman; he's miles away from being in the weeds on what specific things they do. He isn't going to be comped like crazy as the chairman.

There is no conflict. There is only the appearance of one and it only appears that way to people who don't understand the situation.


> Rather than claim a straw man, think about what they do and how it interacts with the administration.

Uh, essentially betting against a policy your former head put in place isn't a typical thing?

You would absolutely steer clear of this. There's plenty of other things they could be doing, no?

Just to make the point: if this is such a typical thing for investment banks to do, why are they (especially) the ones doing it and nobody else?


It's an investment bank. They have a million things going on, sometimes counter to each other.

It looks like a screenshot from the Claude desktop app, so I don't think the author is trying to disguise the AI origin of the material.

I'm sorry for not elaborating. My original complaint is with Anthropic! The article is about how Anthropic's published "tips" are incorrect, but I am saying of course it's flawed because there is no way for the AI to already have latent knowledge about how to use itself since that wouldn't have been part of the internet/books/github training material.

I know you qualified your assertion of three patients an hour with general practice, but there are plenty of specialty practices where six patients an hour is common. Dermatology and ophthalmology clinics often run at that pace (at least in the US). Some surgical clinics can run at that pace for follow-up visits (not for initial visits).


That's exactly what I said in my 3rd sentence.

The average American says US healthcare spending, which is 3x to 20x that of other OECD countries on a per capita basis, is way too high.

The average American also thinks they should be provided testing and procedures that their insurance deems medically unnecessary.

Try to reconcile these two beliefs. (Hint: It's impossible)


Maybe there's a bunch of inflated profit margins and people getting filthy rich off a poorly regulated market.


I agree that not all nudity is porn - nudity is porn if the primary intent of that nudity is sexual gratification. When the nudity in question was a Playboy magazine centerfold, the primary intent is fairly obvious.


It could once upon a time: https://www.bellard.org/tcc/tccboot.html

It can't do that today though. Linux uses C11 features and also many GCC extensions that tcc doesn't implement.


I'm not sure I'd call agents an army of juniors. More like a high school summer intern who has infinite time to do deep dives into StackOverflow but doesn't have nearly enough programming experience yet to have developed a "taste" for good code

In my experience, agentic LLMs tend to write code that is very branchy, with high cyclomatic complexity. They don't follow DRY principles unless you push them very hard in that direction (and even then not always), and sometimes they do things that just fly in the face of common sense. Example of that last part: I was writing some Ruby tests with Opus 4.6 yesterday, and I got dozens of tests that amounted to this:

   x = X.new
   assert x.kind_of?(X)
This is of course an entirely meaningless check. But if you aren't reading the tests and you just run the test job and see hundreds of green check marks and dozens of classes covered, it could give you a false sense of security
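For contrast, here's a minimal sketch of what a meaningful test looks like: one that asserts observable behavior rather than just that the constructor ran. The `Stack` class and its methods are hypothetical, invented for illustration, not from the original tests.

```ruby
# A tautological test only proves the constructor didn't raise:
#   x = Stack.new
#   assert x.kind_of?(Stack)   # always true if .new succeeded
#
# A meaningful test exercises behavior and checks the result.

class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items.push(item)
    self
  end

  def pop
    @items.pop
  end

  def size
    @items.size
  end
end

s = Stack.new
s.push(1).push(2)
raise "expected size 2" unless s.size == 2   # checks actual state
raise "expected LIFO pop" unless s.pop == 2  # checks actual behavior
raise "expected size 1" unless s.size == 1
```

If an LLM-generated suite is full of `kind_of?` checks like the first form, every test can pass while the class under test does nothing correctly.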


> In my experience, agentic LLMs tend to write code that is very branchy with cyclomatic complexity

You are missing the forest for the trees. Sure, we can find flaws in the current generation of LLMs. But they'll be fixed. We have a tool that can learn to do anything as well as a human, given sufficient input.


We have heard that for years

"trust us, it will work soon .. we just need a bit more time and a couple more dozens billions of dollars .. just trust us, bro .."


> We have heard that for years

LLMs have been a thing for about three years now, so you can't have been hearing this for very long. In those three years, the rate of progress has been astounding and there is no sign of slowing down.


And god knows that we have heard that a lot in just 3 years.


Claude already knows who the characters Frodo, Sam, and Gollum are, what their respective character traits are, and how they interacted with each other. This isn't the same as writing something new.


Sure, maybe it's tricky to coerce an LLM into spitting out a near verbatim copy of prior data, but that's orthogonal to whether the data to create a near verbatim copy exists in the model weights.


Especially since the recall achieved in the paper is 96% (based on longest-common-substring block matching), the effort of extraction is utterly irrelevant.


Like with those chimpanzees creating Shakespeare.

