Hacker News | new | past | comments | ask | show | jobs | submit | BenoitP's comments

> the whole process remains differentiable: we can even propagate gradients through the computation itself. That makes this fundamentally different from an external tool. It becomes a trainable computational substrate that can be integrated directly into a larger model.

IMHO this is the key point where this technique has an unfair advantage over a traditional interpreter.

How disruptive is it to have differentiability? To me it would mean that some tweaking-around can happen in an LLM-program at train time, like changing a constant or switching from one function call to another. Can we gradient-descent effectively inside this huge space? How different is it from tool-calling from a pool of learned programs (think GitHub, but for LLM programs written in classic languages)?
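To make the "tweaking a constant / switching functions" idea concrete, here's a toy sketch (my own illustration, assuming nothing about the article's actual architecture) in PyTorch: a program with one trainable constant and a softmax-relaxed choice between two candidate operations, trained end to end by gradient descent.

```python
import torch

# Toy "differentiable program" (hypothetical, for illustration only):
# a trainable constant c and a soft, differentiable switch between two
# candidate functions. Gradient descent can then both pick the function
# AND tune the constant -- what differentiability buys over an opaque
# external tool call.
c = torch.tensor(1.0, requires_grad=True)
logits = torch.zeros(2, requires_grad=True)   # soft switch: f0 vs f1

def program(x):
    w = torch.softmax(logits, dim=0)
    return w[0] * (x + c) + w[1] * (x * c)    # blend of two candidate ops

opt = torch.optim.Adam([c, logits], lr=0.1)
xs = torch.linspace(0.0, 2.0, 32)
target = xs * 3.0                             # ground truth: "multiply by 3"
for _ in range(500):
    opt.zero_grad()
    loss = ((program(xs) - target) ** 2).mean()
    loss.backward()
    opt.step()

# Training should favor the x * c branch, with c drifting toward 3
print(torch.softmax(logits, 0).detach(), c.item())
```

The softmax relaxation is a standard trick for making a discrete choice trainable; at deploy time you could harden the switch to an argmax.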


> He's still computing cross(z, d) and dot(z, d) separately. that looks like a code smell to me. with quaternions ...

Fair point, but I think you misspelled Projective Geometric Algebra


If you only care about rotations in 3D, quaternions do everything you need :) with all the added benefits of having a division algebra to play with (after all, the cross product is a division-algebraic operation). PGA is absolutely great, but quite a bit more complex mathematically, and its spinors are not as obvious as the quaternionic ones. In addition, GA is commonly taught in a very vector-brained way, but I find spinors much easier to deal with.

But the newer commenters could well just be younger

> laser pulses

> phased-array

I'm not well versed in RF physics. I had the impression that light-wave coherency in lasers had to be created at a single source (or amplified as the beam passes by). This is the first time I've heard about phased-array lasers.

Can someone knowledgeable chime in on this?


The beam is split and re-emitted in multiple points. By controlling the optical length (refractive index, or just the length of the waveguide by using optical junctions) of the path that leads to each emitter, the phase can be adjusted.

In practice, this can be done with phase change materials (heat/cool materials to change their index), or micro ring resonators (to divert light from one wave guide to another).

The beam then self-interferes, and the resulting interference pattern (constructive/destructive depending on the direction) is used to steer the beam.
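For intuition, here's a small numerical sketch (my own toy model, not a description of any specific device) of 1-D phased-array beam steering: applying a linear phase ramp across the emitters moves the main interference lobe to the target angle.

```python
import numpy as np

# Hypothetical 1-D phased array: N emitters spaced d apart, wavelength lam.
# A linear phase ramp of -k*d*sin(theta) per element steers the main lobe
# toward angle theta.
N, lam = 16, 1.0
d = lam / 2                      # half-wavelength spacing avoids grating lobes
theta_target = np.radians(20)    # desired steering angle
k = 2 * np.pi / lam

phases = -k * d * np.arange(N) * np.sin(theta_target)  # per-emitter phase shift

# Far-field intensity: coherent sum of the N emitters over observation angles
angles = np.radians(np.linspace(-90, 90, 1801))
field = np.exp(
    1j * (k * d * np.arange(N)[:, None] * np.sin(angles) + phases[:, None])
).sum(axis=0)
intensity = np.abs(field) ** 2

peak_angle = np.degrees(angles[np.argmax(intensity)])
print(round(peak_angle, 1))  # main lobe lands at the 20-degree target
```

The same math applies whether the per-element phase is set electronically (radar) or by tuning optical path lengths (the refractive-index tricks above).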

You are right that a single source is needed, though I imagine you could also shine a laser source at another "pumped" material to have it emit more coherent light.

I've been thinking about possible use cases for this technology besides LIDAR. Point-to-point laser communication could be an interesting application: satellite-to-satellite communication, or drone-to-drone in high-EMI settings (battlefield with jammers). This would make mounting laser designators on small drones a lot easier. Here you go, free startup ideas ;)


In principle, as the sibling comment says, you could measure just the phase difference on the receiver end. The trick is that it's much harder at light frequencies than at radar frequencies. I'm not even sure we can measure the phase of a light beam directly, and if we could, the Nyquist rate is incredibly high: 2x the frequency takes us into PHz territory.
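A quick back-of-the-envelope check of the PHz claim (my own numbers, using visible light around 500 nm as an example):

```python
# Directly sampling the optical field of a ~500 nm beam would require a
# Nyquist rate of twice the carrier frequency -- far beyond any ADC, which
# is why optics measures phase interferometrically instead.
c = 299_792_458          # speed of light, m/s
wavelength = 500e-9      # green light, 500 nm
f = c / wavelength
nyquist = 2 * f
print(f"{f / 1e12:.0f} THz carrier, {nyquist / 1e15:.1f} PHz Nyquist rate")
```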

There might be something cute you can do with interference patterns, but I have no idea about that. We do sort of similar things with astronomical observations.


A phased array is an antenna composed of multiple smaller antennas in the same plane that can constructively/destructively aim its radio beam in any direction it is facing. I'm no radio engineer, but I think it works via an interference pattern that is strongest in the direction you want the beam aimed. This is mostly used in radar arrays, though I suppose it could work with light too, since it is also a wave.


Not an expert, but the main challenges with laser coherency arise when shaping the output using multiple transmitters.

For lidar you transmit a pulse from a single source and receive its reflection at multiple points. Mentioning a phased array with lidar almost always means the receiving side.


I think about it like a series of waves in a pool. One end has wave generators (the lasers) spaced appropriately such that resulting waves hitting the other end interfere just right and create a unified wavefront (same phase, amplitude, frequency).

NB: just my layman's understanding


A hard problem, especially wrt transactions on a moving target.

From memory, there's a handful of projects dedicated just to this dimension of databases: Noria, Materialize, Apache Flink, GCP's Continuous Queries, Apache Spark Streaming Tables, Delta Tables, ClickHouse streaming tables, TimescaleDB, ksqlDB, StreamSQL; and probably dozens more. IIRC, since this is about Postgres, there is a recently created extension trying to deal with this: pg_ivm


On the brand new and shiny 486 running DOS and Windows 3.1 that my father bought, in the QBasic language. With only a paper book as reference. No LLM, no Stack Overflow, no PageRanked search engine, no internet, and not even Ctrl+F. In those days, when you had a bug, you could chew on it for days.


Similar for me, but maybe on a 386; I used this 1989 book, which came with its own version of QuickBasic called QBI, since QBasic was only added to MS-DOS in 1991.

https://www.amazon.com/Learn-Basic-Now-Mike-Halvorson/dp/155...


> No LLM, no Stack Overflow, no PageRanked search engine, no internet, and not even Ctrl+F.

Wait, MS QBasic had fantastic built-in help, with examples. That is the best kind of help.


Of note is that once you've got planes, you can define points as intersections of n hyperplanes.

In 2D, 2 intersecting hyperplanes (=lines here) will define a point.

But what if these lines are parallel? Well, you just got the "point at infinity" abstraction for free. And if you defined operators on points as intersections of lines, they will also work with the points at infinity.
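A minimal sketch of the "points at infinity for free" idea, using plain homogeneous 2D coordinates (just the projective part of the story, not full PGA): a line ax + by + c = 0 is the triple (a, b, c), and the meet of two lines is simply their cross product.

```python
import numpy as np

def meet(l1, l2):
    # Intersection of two homogeneous lines (a, b, c) is their cross product.
    return np.cross(l1, l2)

# Two ordinary lines: x = 1 and y = 2 meet at the affine point (1, 2).
p = meet(np.array([1, 0, -1]), np.array([0, 1, -2]))
print(p / p[2])          # [1. 2. 1.] -> the point (1, 2)

# Two parallel lines: y = 0 and y = 1 share no affine point...
q = meet(np.array([0, 1, 0]), np.array([0, 1, -1]))
print(q)                 # last coordinate is 0: a point at infinity,
                         # in the shared (horizontal) direction
```

The same `meet` formula handles both cases with no branching, which is exactly the "operators also work with points at infinity" claim.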

All this being nicely described under Projective Geometric Algebra: https://projectivegeometricalgebra.org/projgeomalg.pdf

Also: with a few modifications you get conformal geometry as well, with everything being defined as intersections of spheres. After all, what is a plane but a sphere with its center at infinity?


Most programs should be written in GCd languages, but not this.

With few exceptions, GCs introduce small stop-the-world pauses. Even at 15 ms, the pauses would still be very noticeable.


Might want to read up on Ponylang: a GC'd FOSS language without stop-the-world pauses, demonstrating that it's possible. It should also be pointed out that there are a number of proprietary solutions that claim GC with no pauses. Unfortunately, for those coming from the more common C-family languages, Ponylang's actor model and different syntax may take some getting used to.


There are GC algorithms that don’t require stopping the world.


> > Just intuitively, in such a high dimensional space, two random vectors are basically orthogonal.

> What's the intuition here? Law of large numbers?

Yep, the large number being the number of dimensions.

As you add another dimension to a random point on a unit sphere, you create another new way for this point to be far away from a starting neighbor. Increase the dimensions a lot, and all random neighbors end up on the equator relative to the starting neighbor. That equator is a 'hyperplane' of dimension n-1 (just like a 2D plane in 3D) whose normal is the starting neighbor, intersected with the unit sphere; the intersection is thus an (n-2)-dimensional 'variety', or shape, embedded in the original n-dimensional space, much as the Earth's equator is a 1-dimensional object.
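This is easy to check empirically; a small sketch (my own illustration) with random unit vectors:

```python
import numpy as np

# As the dimension n grows, the |cosine| of the angle between two random
# unit vectors concentrates near 0 (roughly like 1/sqrt(n)): in high
# dimensions, random vectors are nearly orthogonal.
rng = np.random.default_rng(0)
for n in (3, 30, 3000):
    u = rng.standard_normal((1000, n))
    v = rng.standard_normal((1000, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    cos = np.abs((u * v).sum(axis=1))          # |cos| of pairwise angles
    print(n, round(cos.mean(), 3))             # mean |cos| shrinks with n
```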

The mathematical name for this is 'concentration of measure' [1]

It feels weird to think about, but there's also a unit change in here. On a circle, Paris is about 1/8 of the circle away from the north pole (8 such angle segments of freedom). But if that were the full definition of Paris's location, there would be an infinity of Parises on the 3D Earth; there is only one, though. Once we take longitude into account, we also get Montreal, Vancouver, Tokyo, etc., each 1/8 away (and now we have 64 solid-angle segments of freedom).

[1] https://www.johndcook.com/blog/2017/07/13/concentration_of_m...


> "theory building"

Strongly agree with your comment. I wonder now if this "theory building" can have a grammar and be expressed in code, versioned, etc. Sort of like a 5th-generation language (the 4th generation being the SQL-likes, where you let the execution plan be chosen by the runtime).

The closest I can think of:

* UML

* Functional analysis (ie structured text about various stakeholders)

* Database schemas

* Diagrams


Prolog/Datalog with some nice primitives for how to interact with the program in various ways? Would essentially be something like "acceptance tests" but expressed in some logic programming language.


Cucumber-style BDD has been trying to do this for a long time now, though I never found it to be super comfortable.


