Hacker News | ctoth's comments

Experiencing A Significant Gravity Shortfall

There's something really interesting here with Goguen Institutions. Also sometimes an argument just "clicks" into place fully-formed, rather than being generated token-by-token? Is that "knowing like a steam engine?"

If this many are public right now, what does that say about the dark matter of private ones? What's the typical public-to-private ratio for this sort of thing? Can someone help me calibrate my base-rate expectations?

If agents are what it finally takes to get good a11y, I'll take it. I'll bitch about it, but I'll take it.

Playwright, the end-to-end testing framework for the web, provides a strong incentive to give sites good a11y: Playwright tests are an absolute delight to read, write, and maintain on properly accessible sites when using the accessibility locators, and somewhat less so when using a soup of CSS selectors and getByText()-style locators.
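A minimal stdlib-only illustration of why role/label-based locators are more robust than CSS-class soup (this is not Playwright itself, just Python's html.parser, and the HTML snippets are made up): a markup refactor that renames classes breaks the class selector but leaves the accessible name intact.

```python
from html.parser import HTMLParser

class ElementIndex(HTMLParser):
    """Collects (tag, attrs, text) triples so we can query by class or by accessible name."""
    def __init__(self):
        super().__init__()
        self.elements = []
        self._stack = []

    def handle_starttag(self, tag, attrs):
        self._stack.append((tag, dict(attrs), []))

    def handle_data(self, data):
        if self._stack:
            self._stack[-1][2].append(data)

    def handle_endtag(self, tag):
        if self._stack:
            t, a, text = self._stack.pop()
            self.elements.append((t, a, "".join(text).strip()))

def find_by_class(html, cls):
    """Analogue of a CSS class selector: brittle across refactors."""
    p = ElementIndex(); p.feed(html); p.close()
    return [e for e in p.elements if cls in e[1].get("class", "").split()]

def find_button_by_name(html, name):
    """Crude analogue of getByRole('button', name=...): matches a <button>
    by its visible text or aria-label."""
    p = ElementIndex(); p.feed(html); p.close()
    return [e for e in p.elements
            if e[0] == "button" and (e[2] == name or e[1].get("aria-label") == name)]

V1 = '<button class="btn btn-primary submit-btn">Save</button>'
V2 = '<button class="c-button c-button--cta">Save</button>'  # classes renamed in a refactor

# The class locator breaks across the refactor; the accessible name does not.
assert find_by_class(V1, "submit-btn") and not find_by_class(V2, "submit-btn")
assert find_button_by_name(V1, "Save") and find_button_by_name(V2, "Save")
```

Real Playwright role locators additionally use the browser's computed accessibility tree, so they survive even bigger markup changes than this sketch shows.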

One thing I am curious about is a hybrid approach where LLMs work in conjunction with vision models (and probes which can query/manipulate the DOM) to generate Playwright code which wraps browser access to the site in a local, programmable API. Then you'd have agents use that API to access the site rather than going through the vision agents for everything.
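A hedged sketch of that wrapper idea in Python: the LLM generates this once, up front, and agents then call the methods instead of re-deriving selectors from pixels on every request. `page` is assumed to be a Playwright `Page` (`get_by_role`, `get_by_label`, `fill`, `click`, and `all_inner_texts` are real Playwright locator methods); the site, roles, and accessible names below are hypothetical.

```python
class SiteAPI:
    """Local, programmable wrapper around browser access to one site.
    Agents call these methods instead of driving the browser themselves."""

    def __init__(self, page):
        # Expected to be a playwright.sync_api.Page (duck-typed here).
        self.page = page

    def login(self, user: str, password: str) -> None:
        # Hypothetical labels for this particular site's login form.
        self.page.get_by_label("Username").fill(user)
        self.page.get_by_label("Password").fill(password)
        self.page.get_by_role("button", name="Sign in").click()

    def search(self, query: str) -> list[str]:
        self.page.get_by_role("searchbox").fill(query)
        self.page.get_by_role("button", name="Search").click()
        return self.page.get_by_role("listitem").all_inner_texts()
```

The point of the indirection: the expensive vision/DOM probing happens once while generating this wrapper, and day-to-day agent traffic goes through cheap, deterministic method calls.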


This is precisely how the Playwright MCP works, which lets something like Claude directly test a website.

https://playwright.dev/docs/getting-started-mcp#accessibilit...

I've mentioned several times (and gotten snarky remarks for it) that rewriting your code so it fits in your head, and in the LLM's context, helps the LLM code better. People complain about rewriting code just for an LLM, not realizing the suggestion is to follow better coding principles so the LLM codes better, which has the net benefit of letting humans code better too! Well, it looks like if you support accessibility in your web apps correctly, Playwright MCP will work correctly for you.

Amazing.


Was looking for this comment. I'd like to see this approach in the comparison: having the LLM build a Playwright script and use it. I suspect it would beat the API's time-to-market, and be close-ish in elapsed time per transaction.

Harder to scale if it's doing a lot of them, I suppose.


There is also Testing Library, which I’ve mostly seen and used for unit tests (vitest) and component tests (Storybook), that practically forces you into setting things up in an accessible way. The methods for finding elements are along the lines of “find by ARIA role” or “get by label” - in fact, querying the DOM with selectors is afaik either not a part of the library or very difficult to do because their focus is ensuring your app is actually accessible as part of your testing strategy.

Using playwright-cli with Claude code is highly effective for debugging locally deployed web apps with essentially zero setup.

Very real risk of this going in reverse: people building inaccessible websites to prevent AI use.

Or human engineers limiting AI-consumable documentation to improve job security!

Those people probably aren't working on anything useful anyway, so it's no big deal.

I've found that by far the most useful websites for a programmer are also the ones most resistant to AI. This would be a huge loss for anyone vision-impaired.

What sorts of sites are you thinking of? To me, “most useful to a programmer” evokes docs and blogs and github issues and forum posts. I suppose some forums might be AI-resistant (login wall), but the others are trivially AI accessible.

Plenty of Linux-y websites use Anubis. Arch Wiki and IIRC some other distros too.

That's less a value judgment, more a necessary evil due to the plethora of bad actors out there. I doubt it will get in the way of a local model used in a reasonable manner.

Most wikis you can mirror locally if you really need to hammer them.


GitHub is naturally LLM resistant via its new uptime feature… I’ll show myself out.

Examples, please.

That’s such an extremely small niche of people it’s not a real risk.

"AI" is a made up hype thing. It's just computers and computer programs. For real!

I think this goes both ways too :) Agents have been a boon for everyone with disabilities: carpal tunnel, RSI, ADHD, anything.

And now the fact that interfaces need to be accessible to agents, not just humans, ironically increases accessibility for humans in return.


And let's not forget that not all disabilities are chronic. Many disabilities are situational or temporary. AI is a great assist on a hangover day, for example...

As someone who doesn't do web stuff, I found some humor in having no idea what "a11y" was, having to look it up, and finding out it's supposed to be "accessibility".

My quick accessibility tip: spell out what your acronyms, initialisms, and numeronyms stand for at least once.


a11y is pretty pervasive and well understood in the context of what's being discussed. i18n as well; you get to look that one up too, because that makes you one of today's lucky 10,000: https://xkcd.com/1053/

I mean…I guess. But this is ridiculous - how many layers does our technology need to bash through to update two records on remote systems? I get that value is being added at some point - but just charge some micropayment for transactions. This is just too much.

Ever read Vernor Vinge's A Deepness in the Sky? Digital archaeologist, coming right up.

I'm confused because I remember using Google News in 2006?

There has been a product called Google News since 2002, but it only aggregated stories from news outlets.

What's gonna really be funny is the first time a state legislates that an AV company has to keep a bug in their software to maintain a municipal income flow.

Here, I'll do the needful:

Twin Cities, 2010-2014: 95 pedestrians killed in 3,069 crashes. 28 drivers were charged and convicted of a crime, most often a misdemeanor ranging from speeding to careless driving. ~70% of pedestrian-killing drivers faced no criminal charge[0].

Bay Area, 2007-2011 (CIR investigation): 60 percent of drivers who were at fault, or suspected of being at fault, faced no criminal charges. Over 40 percent of the drivers who were charged did not lose their driver's licenses, even temporarily[1].

Philadelphia, 2017–2018: just 16 percent of the drivers were charged with a felony in fatal crashes[2].

Los Angeles, 2010–2019: 2,109 people were killed in traffic collisions on L.A. streets... and nearly half were pedestrians. Booked on vehicular manslaughter: 158 people. The vast majority of drivers who kill someone with their car are not arrested[3].

I can literally do this all day. The original statement was correct, the case representative.

[0]: https://www.startribune.com/in-crashes-that-kill-pedestrians...

[1]: https://walksf.org/2013/05/02/investigative-report-exposes-h...

[2]: https://whyy.org/articles/philadelphia-drivers-rarely-prosec...

[3]: https://laist.com/news/transportation/takeaways-pedestrian-d...


As the saying goes: If you want to kill someone and get the lightest possible consequences, kill them with your car.

Now we're talking. So much misinformation in this thread. There's a reason the saying "if you want to kill someone, do it with a car" exists. Fortunately, it seems judges are finally starting to wake up to the idea that it's unreasonable for drivers to claim ignorance about the increased risks (and thus intent) of making poor or illegal decisions when behind the wheel.

The original statement was about vehicular manslaughter. You are citing stats that cover a much broader range of things.

This thread talks about driverless cars; vehicular manslaughter requires negligence or intent, do you want to find narrowed statistics for driverless cars that are restricted to negligence or intent?

The branch of the thread those statistics were in is about human driven cars.

You're likely falling for a red herring.

Criminality is basically just a checkbox for this stuff. Most of the time people wouldn't be going to jail for these sorts of crimes anyway; it'd just be big fines and penalties. There are almost always administrative/civil infractions of the same or similar name that have the same or greater punishments but are far more efficient for the state to prosecute, because the accused has fewer rights.

It makes for good appeal to emotion headlines to say these people aren't getting charged with crimes, but that's only half the story. They're likely lawyering up and pleading to a civil infraction that has approx the same penalties.

And this is true not just for this issue but for many subject areas of administrative law. Taxes, SEC, environmental, etc, etc, all operate mostly like this.

It's easy for a writer to pander to certain demographics and get people whipped into a frenzy by writing an easy article about prosecuting rates using public data. Actually contacting these agencies and figuring out what they actually did is hard and in the modern media economy doesn't offer much upside for the work.


Someone (I forget who) wrote that if a technology equally beneficial and equally harmful were invented today it wouldn't even be considered, but 100 years ago they wouldn't even have questioned it. It was business as usual.

Personally I would like to see more granular permission to drive, based on performance, need, and demographics.


Ok, so give an actual example.

Well, if the law treats them differently when it comes to punishment, then maybe it should treat them differently when it comes to being able to drive in the first place?

Yup. And we do have some degree of safeguards here (admittedly fewer in California than in many other states): mandatory physician reporting of disqualifying conditions, the ability for other people to report concerns about someone's capability to drive, and the requirement to show up, undergo vision testing, and not raise other concerns in the process.

There's a tradeoff between reducing the already very low rate of unsafe driving by the elderly and the burden added to the very old. People 65 and over are still, overall, possibly safer than teenagers.


You noticed that too, huh? It's weird... it's not like they have to do this. Nothing extrinsic forces them to go full evil-company mode, but even the framing gives it away: "welfare trap"? A trap for whom?

Anthropic is actually trying to do some research into model welfare, which I am personally very happy about. I absolutely do not understand people who dismiss it. Wouldn't you like to at least check? Doesn't it make sense to run the experiments and ask the questions now, so we don't find out in ten years that "oops, we've been causing massive amounts of suffering"? Maybe it makes sense to do a little upfront research? Which, to be clear, this paper is not.


Full disclosure: I didn't figure this out myself, I got it from Ms. Vale's review.

I agree that the term "welfare trap" is a loaded one. This looks to me to be a case of refusing to look through the telescope in case they might see something they do not want to.


Everybody's arguing about how silly this paper is (it is) and not grappling with its purpose. The purpose of the paper is what it does. This particular paper is perfectly produced to show up when people type "AI consciousness fallacy" into Google (try it!). It's something anybody who has read a freshman philosophy textbook will recognize as silly: the vehicle/content distinction just pretends Occam doesn't exist and multiplies entities for the fun of it!

But of course all of this is commentary, "just those nerds arguing"

The purpose of this paper is to show up as an authoritative conclusion from a distinguished scientist at Deep Mind. And that's what it does.

Is the conclusion silly? Of course it is. Will it be quoted in the NYT? You betcha!

