> Review by a senior is one of the biggest "silver bullet" illusions managers suffer from

Especially at a big co like Amazon, most senior engineers are box drawers, meeting goers, gatekeepers, vision setters, org lubricants, VPs' trusted lieutenants, glorified product managers, etc. They don't necessarily have more context than the more junior engineers, and they will most likely review slowly while uncovering fewer issues.


> It has no real world model, no ability to learn in any but superficial ways

I also think so, but at the same time I have to admit that a lot of people don't learn deeply either. Take math, for example: how many STEM students from elite universities truly understand the definition of a limit, let alone calculus beyond simple computation? Or how many data scientists really have an intuitive grasp of Bayesian statistics? Yet millions of them did their jobs just fine with the help of the Stack Exchange family, and now with the help of AI.
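For concreteness, these are the two standard statements I have in mind, written in LaTeX:

  \lim_{x \to a} f(x) = L \;\iff\; \forall \varepsilon > 0 \,\exists \delta > 0 : 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

  P(\theta \mid D) = \frac{P(D \mid \theta)\,P(\theta)}{P(D)} \quad \text{(posterior} \propto \text{likelihood} \times \text{prior)}

Both are used far more often than they are internalized.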


Well, part of that is because STEM folks aren't typically required to take any kind of theoretical math. It's $Math for Engineers, and it eschews theoretical underpinnings for application. I don't think it's any kind of failing; it's just different. My statistics class was a dense treatise in measure theory. Anyone who took the regular stats class is almost surely way better than me at designing an experiment, but I can talk your ear off about Lebesgue measure to basically zero practical end.
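For the curious, this is the flavor of definition such a class revolves around: the Lebesgue outer measure of a set E, taken over all countable covers of E by intervals I_k of length \ell(I_k):

  \lambda^*(E) = \inf \Big\{ \sum_{k=1}^{\infty} \ell(I_k) \;:\; E \subseteq \bigcup_{k=1}^{\infty} I_k \Big\}

Elegant, and of almost no use when you need to randomize a clinical trial.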

I was not talking about theoretical foundations like analysis or measure theory, just the basics of a college-level math class. There can be other examples. The point is that many people don't have an intuitive understanding of what they use every day; in a way they are like AI, only slower and knowing less than AI.

> when does AI flip from "powerful automation with humans propping it up" to autonomous output?

Another economic scenario is that AI does not necessarily produce output autonomously, but produces so much so fast that companies will need fewer workers, because the economy does not scale fast enough to consume the additional output or to demand more labor to match the added efficiency.


This is just standard automation, which has historically increased the scope and size of the economy. Automation has actually resulted in a net increase in jobs.

Honest question: why do people automatically equate "fully autonomous weapons" with something like a killer robot? My immediate reaction is that even the best-in-class rapid-fire gun has a hard time identifying and tracking drones. So we'd need AI to do better tracking, which leads to a fully autonomous weapon. And I really don't get why that's a bad thing.

Of course, a company should have the freedom to choose not to do business with the government. I just think that automatically assuming the worst intentions of the government is not as productive as setting up a good enough legal framework to limit the government's power.


What you are describing would be "partially autonomous." Per Dario Amodei's original statement (https://www.anthropic.com/news/statement-department-of-war), he had no issue with that. "Fully autonomous" specifically means that the AI chooses a target and engages without any human intervention at all. If the human selects or approves a target, and the weapon then automates tracking and engagement, that's still only partially autonomous.

I’m not sure that “killer robot” is the actual concern outside of media hyperbole. I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.

In a world where LLMs produce very convincing but subtly wrong output, this makes me uncomfortable. I get that warfare without AI is in the past now, but war and rules of engagement and AI output etc etc etc all seem fuzzy enough that this is not yet a good call even if you agree with the end goals.


> I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.

I'm sorry, you've just literally described a "killer robot" in more words.


Yeah, I guess my point is that “killer robot” evokes a terminator-like image for a lot of people. Something that marches around and kills of its own accord. I don’t like either one, but I don’t think they’re the same thing.

The only saving grace is that the killbots had a pre-set kill limit which I exceeded by throwing wave after wave of my own men at them until they simply shut down.

Dario himself said that he was against using Claude to build a fully autonomous weapon because the technology was far from perfect, and he didn't want to hurt our soldiers or innocent people. I think his description matched a killer robot, and I don't agree with his reasoning, because it's not like military researchers lack the agency to find out what works and what doesn't.

On the other hand, military researchers once considered training pigeons to act as torpedo guidance systems by pecking at an image of the target.

We already have traditional autonomous weapons (and counter-defenses). They operate on millisecond or faster timescales with existing RF sensors, and they are not, and will not be, using LLMs or other transformers. Maybe ChatGPT will update some realtime Ada code; some of that stuff is formally verified, so maybe that won't be terrifyingly dangerous.

Where autonomous transformer-based munitions will be used is basically "here is a photo of a face, find and kill this human": loitering munitions will take their time analyzing video and then decide to identify and attack a target on their own.

EDIT: Or worse: "identify suspicious humans and kill them"


We all do business with the government. We pay the military to protect our gold. It is fundamentally a protection racket that we voted for. And one could argue that the military, as the protector of your gold, has the final decision as to what it can and can't do with your technology.

Please define what kind of fully autonomous weapons system the Pentagon would build that wouldn't be designed to kill people.

For that matter, explain why the Pentagon would balk at spying on every American.


Assuming the worst from the government IS how you set up good legal frameworks to limit the government's power. Have you ever heard of the First and Second Amendments?

Oh, you think the current administration only wants robots that kill other robots! Sweet Summer Child!

It's not fully autonomous ice cream machines, it's fully autonomous _weapons_. Are you stupid or are you dumb? I don't think you're asking an honest question.


There has reportedly been tension between Qwen's research team and Alibaba's product team over the Qwen app, and recently Alibaba tried to impose DAU as a KPI. It's understandable that a company like Alibaba would force a change of product strategy for any number of reasons. What puzzles me is why they would push out key members of their research team. Doesn't the industry have a shortage of model researchers and builders?

Perhaps they wanted future Qwen models to be closed and proprietary, and the authors couldn't abide by that.

Results as good as Qwen has been posting would seem to trigger a power struggle.

I think companies that don’t navigate these correctly eventually lose.


> I cannot be alone in feeling that titles (within "tech" in particular) are almost completely arbitrary?

I remember that 10 years or so ago, an L5 at Google was considered a pretty prestigious position. At Amazon, L6 was such a high achievement that the entire India site had one L6 for more than 300 engineers. But somehow things started to change. Everyone expected to get promoted every couple of years. There was a joke at Amazon along the lines of "L8 is the new L7."

My guess is that two factors came into play. One is that Meta (then Facebook) started to promote people really fast, so other companies followed. The other is that managers gradually came to treat promotion as a tool to retain the people they need. Once that's the incentive, a long title ladder becomes a natural choice.


Isn't being an engineering manager about leverage? Someone needs to organize people, allocate resources, or even decide the direction of products. We may say that ICs can make such decisions equally well, but every company has a hierarchy, and someone does call the shots. And for better or worse, some people are genuinely good at navigating company dynamics and driving an organization forward, even if they suck at building. An example would be IBM's Watson Jr., who as a salesperson was known to be awkward at mastering IBM's tech. Even in a holacratic company like Zappos or Valve, some people still manage, right?

Richard Gabriel wrote a famous essay, Worse Is Better (https://www.dreamsongs.com/WorseIsBetter.html). The MIT approach vs. the New Jersey approach does not necessarily apply to a discussion of the merits of coding agents, but the essay's philosophy seems relevant: AI coding sometimes sacrifices correctness or cleanness for simplicity, but it will win, and win big, as long as the produced code works per its users' standards.

Also, the essay notes that once a "worse" system is established, it can be incrementally improved. Following that argument, we can say that as long as the AI code runs, it creates a footprint. Once the software has users and VC funding, developers can go back and incrementally improve or refactor the AI's mess to a satisfying degree.


I hope people can ask themselves why the goal is "winning" and "winning big", and not making a product that you are proud of. It shouldn't be about VC funding and making money; shouldn't we all be making software to make the world a little bit better? I realize we live in an unfortunate reality surrounded by capitalism, but giving in to that seems shortsighted and dismissive of actual problems.

I hope people can see that "winning big" using that process is very unlikely to be "winning long term".

(From GP) "AI coding sometimes sacrifices correctness or cleanness for simplicity, but it will win and win big as long as the produced code works per its users' standards."

Those users' standards are an ephemeral target for any software beyond a one-shot script or a hobby project with a minimal user:dev ratio. Incorrect and unclean code simply isn't conducive to the many iterations needed when those "users' standards" change. And as we all know, that change is _inevitable_, and often happens before the software in question has even had a single release! Get ready to throw ever more tokens at trying to correct and clean up if you ever really "win big" and need to actually support the product.

It's very much gross short-sighted thinking that goes right along with the gross short-sighted thinking providing all the [fake] value around this crap.


Some of my projects are built with the goal of making really good software. Some of my projects are built with the goal of making money. I take pride in doing things well, but I don't let my pride get between me and financial freedom.

“Winning” is just the subjective word I quickly picked. It could certainly be another one, such as the success of a great product, as you mentioned.

That is true, but society as a whole does not reward "making software to make the world a little bit better". No one will come and say wow; only yourself, in the mirror.

I have the same feeling when creating my artwork: I suffer through the process of creation and learning, while someone else makes money with AI-generated art.

Sometimes I wonder if it matters at all.


> It shouldn't be about VC funding and making money; shouldn't we all be making software to make the world a little bit better

I agree with you, but I think you and I are on the wrong website for this mentality.


  Once the software has users and VC funding, developers can go back and incrementally improve or refactor the AI's mess, to a satisfying degree.
Or in my case, the AI is going back to refactor some poor human-written code.

I will fully admit that AI writes better code than me and does it faster.


What definition of simplicity implies that it can be at odds with correctness?

I would say "facility" instead of "simplicity" here.

So essentially California is becoming more and more like the EU? It's curious to see how it pans out. Maybe the EU's model turns out to be better than a more laissez-faire model like the US's. Who knows.

What's even more curious is that California voters seem not to care at all. As long as the government can collect more taxes under more altruistic slogans, the voters will stay happy.


Which EU law mandates age verification on my personal computer at home exactly?

> save face from the absurd overhiring that they did in 2022 and 2023

I wonder how we all of a sudden got so many candidates back in 2020 to 2022.

