LLMs sample the next token from a conditional probability distribution; the hope is that dumb sequences are less probable, so sensible output just emerges naturally.
Had this same experience back when I first learned to program a PIC microcontroller. You really shouldn't be driving LEDs directly off IO pins anyway. I think the digital nature of IO pins also lends itself to not thinking about the underlying circuitry and coming at it from a software lens.
It depends. Many modern microcontrollers are perfectly fine driving LEDs directly off IO pins if the pin specs say it is rated for sufficient current (like 20mA). However, older ones like the ESP8266 can only do about 2mA, and the 8051 even less. Or you run into a total power budget issue if you are driving too many pins at once. Also, some IO pins are perfectly fine at sinking current to ground but aren't suited for sourcing current, in which case the LED would be connected to an external supply rail and the IO pin would simply switch it to ground or not.
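Since this comes up a lot, here's a quick back-of-the-envelope check in Python. The voltages and the 20mA pin rating below are illustrative assumptions; always check your part's datasheet for the real numbers.

```python
# Can an IO pin drive an LED directly? Assumed values for illustration:
# 3.3 V pin, red LED with ~2.0 V forward drop, pin rated for 20 mA max.

def led_current_ma(v_pin, v_forward, r_series_ohm):
    """Current (mA) through an LED driven from an IO pin via a series resistor."""
    return max(0.0, (v_pin - v_forward) / r_series_ohm) * 1000

def min_series_resistor(v_pin, v_forward, i_max_ma):
    """Smallest series resistor (ohms) that keeps current under the pin rating."""
    return (v_pin - v_forward) / (i_max_ma / 1000)

print(min_series_resistor(3.3, 2.0, 20))  # 65.0 ohm -> round up to a standard value, e.g. 68
print(led_current_ma(3.3, 2.0, 68))       # ~19.1 mA, just inside a 20 mA rating
```

With no resistor at all, the current is limited only by the pin's internal impedance, which is exactly the situation the datasheet ratings are warning you about.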
Correct! The key insight isn't the algorithm itself—it's that structural metadata is enough. Traditional tools assume you need semantic understanding (embeddings), but we found that path structure + filename + recency gets you 90% of the way there in <200ms.
The 'custom ranking' is inspired by how expert developers navigate codebases: they don't read every file, they infer from structure. /services/stripe/webhook.handler.ts is obviously the webhook handler—no need to embed it.
The innovation is removing unnecessary work (content reading, chunking, embedding) and proving that simple heuristics are faster and more deterministic.
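The actual ranker isn't shown here, so this is just a toy sketch of the idea: score candidate files using only path segments, filename, and mtime, with no content reading or embeddings. The weights and file list are invented for illustration.

```python
import os
import time

# Toy "structural metadata" ranking: filename hits weigh most, directory
# hits less, and a recency boost decays with file age. All weights are
# made-up illustrative values, not the tool's real heuristics.

def score(path, query_terms, mtime, now):
    segments = path.lower().replace("\\", "/").split("/")
    name = os.path.splitext(segments[-1])[0]
    s = 0.0
    for term in query_terms:
        if term in name:
            s += 3.0                      # filename match: strongest signal
        elif any(term in seg for seg in segments[:-1]):
            s += 1.5                      # directory match: still informative
    age_days = (now - mtime) / 86400
    s += 1.0 / (1.0 + age_days)           # recently-touched files rank higher
    return s

now = time.time()
files = [
    ("services/stripe/webhook.handler.ts", now - 86400),     # 1 day old
    ("services/email/sender.ts", now - 864000),              # 10 days old
    ("docs/changelog.md", now - 86400),                      # 1 day old
]
ranked = sorted(files, key=lambda f: -score(f[0], ["webhook", "stripe"], f[1], now))
print(ranked[0][0])  # services/stripe/webhook.handler.ts
```

The point is that every signal here is available from a directory listing alone, which is why this kind of ranking can run in milliseconds.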
Which is great, but a JSON parser fundamentally can't avoid looking at every byte. You can't jump to the next key; you have to parse your way to it.
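A minimal sketch of why: even extracting just the top-level keys forces a byte-by-byte scan, because structural characters like `,` and `}` inside string literals must be ignored, and you only know you're inside a string by having tracked every byte (and every escape) before it. This is a simplified illustration, not a conformant parser.

```python
# Find top-level object keys in a JSON string. Note the scanner must
# track in-string and escape state for every byte: a '}' or ',' inside
# a string literal means nothing structurally.

def top_level_keys(src):
    keys = []
    depth = 0
    in_str = esc = False
    expecting_key = False
    start = None
    for i, ch in enumerate(src):
        if in_str:
            if esc:
                esc = False
            elif ch == "\\":
                esc = True              # next char is escaped, not a closer
            elif ch == '"':
                in_str = False
                if expecting_key and depth == 1:
                    keys.append(src[start:i])
            continue
        if ch == '"':
            in_str, start = True, i + 1
        elif ch == "{":
            depth += 1
            expecting_key = depth == 1
        elif ch == "}":
            depth -= 1
        elif ch == ":":
            expecting_key = False       # strings after ':' are values
        elif ch == ",":
            expecting_key = depth == 1
    return keys

print(top_level_keys('{"a": {"x": 1}, "b": "has , and } inside", "c": 3}'))
# ['a', 'b', 'c']
```

Notice the `"has , and } inside"` value: a scanner that tried to "jump to the next comma" without string tracking would mis-parse it.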
Slightly off-topic, but I feel like someone around here might be able to help. I've been learning how to do SLAM with LIDAR data and I was curious what algorithms robot vacuums use. I'm currently implementing a particle filter and will also try out an EKF.
I've been playing around with SLAM using a depth camera, so I can't really tell you about LIDAR specifically, but I'd suggest doing some Deep Researches on the topic to get you a good lay of the land. During my searches for example, I came across this great compilation of visual SLAM projects: https://github.com/VSLAM-LAB/VSLAM-LAB
Unless you're a massive operation, you're probably just using an existing academic project, many of which handle a variety of inputs (depth, 2D lidar, 3D lidar, etc.): e.g. RTABMAP (what I started with), ORB-SLAM, NVIDIA Isaac ROS SLAM (if you're on a Jetson), etc. AMCL is an old-school algorithm for localization with 2D data - I tried it by taking a fake 2D scan from the depth camera and it was pretty terrible, so currently I'm trying to get visual-only SLAM working well enough for me, because I don't want to spend $1k on a decent 3D lidar.
> but I'd suggest doing some Deep Researches on the topic to get you a good lay of the land
Thanks for the resources! I've been trying to get a wide view by looking at different algorithms, but I was curious what was actually used in production systems, especially for consumer products.
RTABMAP and Cartographer came up in my searches, will definitely give these a closer look to understand how they work.
Right now I'm starting off with filter-based approaches like the particle filter and Kalman filter, but I'd also like to understand how the graph-based approaches work.
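If it helps anyone following along, here's a toy 1D particle filter I'd sketch for localization. The setup is invented for illustration (a robot moving along a line with noisy odometry and noisy position readings); real SLAM filters estimate pose and map jointly, which this doesn't attempt.

```python
import math
import random

# Toy 1D particle filter: predict with a noisy motion model, weight
# particles by how well they explain the measurement, then resample.

random.seed(42)

def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def step(particles, control, measurement, motion_noise=0.1, sensor_noise=0.5):
    # 1. Predict: apply the motion command plus noise to every particle.
    moved = [p + control + random.gauss(0, motion_noise) for p in particles]
    # 2. Weight: likelihood of the measurement given each particle's position.
    weights = [gaussian_pdf(measurement, p, sensor_noise) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0, 10) for _ in range(500)]  # uniform prior
true_pos = 2.0
for _ in range(10):
    true_pos += 0.5                                      # robot moves 0.5 per step
    noisy_reading = true_pos + random.gauss(0, 0.5)
    particles = step(particles, 0.5, noisy_reading)

estimate = sum(particles) / len(particles)
print(round(estimate, 2))  # should land close to true_pos (7.0)
```

The nice thing about starting here is that the graph-based methods you mention replace exactly step 3's throwing-away of history: instead of resampling, they keep all poses as nodes and optimize the whole trajectory at once.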