I would like to challenge the fundamental premise of the article. Just because you can generate 50 PRs doesn't mean you should. In fact, the same bottleneck they describe is present if you have 50 coders on your team.
The problem, therefore, is not how to scale PR review but how to select meaningful work to perform, which brings me to the second point made by TFA: that humans making judgment calls on which PR should be prioritized is a uniquely defining human trait.
I beg to differ here as well. All the problems described in the article are high-context decisions: you need to take a lot into consideration (user requests, product strategy, market dynamics, cost/benefit, rou..) to decide which feature should be prioritized in the next release. What prevents LLMs from helping with that is the sheer amount of information to ingest, which is still a limitation despite the long context windows we see nowadays.
tl;dr: this is a problem of prioritization and product strategy, nothing specific to AI. Scaling so-called judgment is a red herring; better focus and scope management should be the goal instead.