
In order to make the point that

> energy usage for parsing isn't why

you'll need to provide actual figures and benchmark them against an actual parser.

I've written parsers for larger-scale server stuff. And while I too don't have these benchmarks available, I'll dare to wager quite a lot that a dedicated parser for almost anything will outperform an LLM by orders of magnitude. I won't be surprised if a parser written in Rust uses upwards of 10k times less energy than the most efficient LLM setup today. Hell, even a sed/awk/bash monstrosity probably outperforms such an LLM hundreds of times, energy-wise.



How many times would you need to parse before you get an energy saving from using an LLM to write a parser and then running that parser, versus using an LLM to parse directly?


It sounds like you need to learn how to program without using an LLM, but even if you used one to write a parser, and it took you 100 requests to do so, you would very quickly get the desired energy savings.
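Back-of-envelope (every figure below is a made-up placeholder, not a measurement): because one native parse costs next to nothing compared with one LLM call, break-even lands at roughly as many parses as the number of requests it took to write the parser.

    fn main() {
        let joules_per_llm_request = 1_000.0; // hypothetical, not a measurement
        let joules_per_native_parse = 0.01;   // hypothetical, not a measurement
        let requests_to_write_parser = 100.0; // the "100 requests" from above

        // N parses via LLM cost N * llm_request; writing the parser first costs
        // requests * llm_request + N * native_parse. Setting them equal and
        // solving for N gives the break-even point.
        let break_even = (requests_to_write_parser * joules_per_llm_request)
            / (joules_per_llm_request - joules_per_native_parse);
        println!("break-even after ~{:.0} parses", break_even); // ~100
    }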

This is the kind of thinking that leads to modern software being slower than software from 30 years ago, even though it is running on hardware that's hundreds of times faster.


People not using The AWK Programming Language as a reference for parsing stuff, and maybe The C Programming Language with AWKA (an AWK-to-C translator) plus a simple CSP library for threading, is what yields a disaster in computing.

LLMs are not the solution; they are the source of big trouble.


> using an LLM to write a parser

You're assuming OP needs an LLM to write a parser; since they mention writing many during their career, they probably don't need one ;)


I was thinking more of when a sufficiently advanced device would be able to “decide” that a task is worth using its own capabilities to write some code to tackle the problem, rather than brute-forcing it.

For small problems it’s not worthwhile, for large problems it is.

It’s similar to choosing to manually do something vs automate it.


I didn't use an LLM back then, but I would totally do that today (Copilot).

Especially since the parser(s) I wrote were rather straightforward finite state machines with stream handling in front, parallel/async tooling around them, and, at the core, the business logic (domain).

Streaming, job/thread/mutex management, and FSMs are all solved and well understood. And I'm convinced an LLM like Copilot is very good at writing code for things that have already been solved.
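Roughly the shape I mean (an illustrative sketch only, not the actual production code; the names and token choices are made up): an explicit state enum driven one input chunk at a time, so the whole input never has to sit in memory.

    // Minimal sketch of a streaming FSM tokenizer: pulls integers out of a
    // stream of text chunks, carrying state across chunk boundaries.
    enum State {
        Idle,
        InNumber(String),
    }

    struct NumberLexer {
        state: State,
        numbers: Vec<String>,
    }

    impl NumberLexer {
        fn new() -> Self {
            NumberLexer { state: State::Idle, numbers: Vec::new() }
        }

        // Feed one chunk from the stream; the FSM state survives between calls.
        fn feed(&mut self, chunk: &str) {
            for c in chunk.chars() {
                self.state = match std::mem::replace(&mut self.state, State::Idle) {
                    State::Idle if c.is_ascii_digit() => State::InNumber(c.to_string()),
                    State::Idle => State::Idle,
                    State::InNumber(mut s) if c.is_ascii_digit() => { s.push(c); State::InNumber(s) }
                    State::InNumber(s) => { self.numbers.push(s); State::Idle }
                };
            }
        }

        // Flush a number still in progress when the stream ends.
        fn finish(mut self) -> Vec<String> {
            if let State::InNumber(s) = self.state { self.numbers.push(s); }
            self.numbers
        }
    }

    fn main() {
        let mut lexer = NumberLexer::new();
        // Chunk boundaries can fall anywhere, even inside "42".
        for chunk in ["lat 4", "2, lon 7"] {
            lexer.feed(chunk);
        }
        println!("{:?}", lexer.finish()); // ["42", "7"]
    }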

The LLM, however, would get very much in the way in the domain/business layer, because it hasn't got the statistical body of examples needed to handle our case.

(Parsers I wrote include, among others: IBAN, GPS trails, user-defined calculations (simple math formulas), and a DSL to describe hierarchies. I wrote them in Ruby, PHP, Rust, and Perl.)



