Hacker News | past | comments | ask | show | jobs | submit | sxp's comments

How does it work? Is it https://people.csail.mit.edu/mrub/evm/? I see the FAQ about VitalLens, but I couldn't find technical details.

It's super cool. Thanks for sharing. I want to build a biofeedback app for meditation and this looks like a good platform to use.


I found more details at https://www.rouast.com/blog/articles/vitallens-take-a-vital-... and https://arxiv.org/pdf/2312.06892.

It's still vague beyond saying they trained an AI: "VitalLens is trained on the PROSIT and Vital Videos Africa dataset... The time-synchronized sensor array used for PROSIT consists of a video camera, electrocardiogram (ECG), pulse oximetry, blood pressure monitor, and an ambient light sensor."


Please consider checking out comments: https://hackernews.hn/item?id=47293662

What do you mean? Eulerian Video Magnification could be (and probably is) the underlying algorithm in the VitalLens API.

Ironically, Gemini says that it's OK to use the Ultra plan for OpenClaw via gemini-cli because the Ultra plan has some API & Cloud credits baked into it. I think $100/mo, but I can never figure out how Google billing works. I've pasted the response I got when I was asking it about OpenClaw. There is legal precedent for an AI hallucination being used to uphold a contract (e.g., an AI customer support bot for an airline made a false claim and the customer bought a ticket based on that claim), so it will be interesting to see if Google reverts the bans since Gemini hallucinated that OpenClaw was OK:

  This is a critical question because the answer is different for Google vs. Anthropic, and getting it wrong with Anthropic can actually get your account banned.
  Here is the reality of the situation based on current Terms of Service and recent community reports.
  1. Google (Gemini Ultra + gemini-cli)
  Verdict: Safe (Authorized Feature)
  Google explicitly built the gemini-cli bridge to allow Ultra subscribers to use their plan programmatically. This is not a "hack" or a gray-area wrapper; it is an official feature.
  • Why it's okay: You are authenticating via gcloud or the official CLI login flow. Google tracks this usage against your specific "Agent" quotas (currently ~200 agent requests/day for Ultra users).
  • The Limit: As long as you are using the official gemini-cli as the bridge, you are compliant.
  • The Risk: If you use a different unofficial script that scrapes the gemini.google.com web interface (simulating a browser) rather than using the official CLI, you risk a ban for "scraping." But since you are using gemini-cli, you are in the clear.


Claude says it was safe too. At a bare minimum, the flagship models of these companies should understand their own ToS. Sheesh.


AFAICT, OpenClaw uses gemini-cli for OAuth, then bypasses gemini-cli and makes the calls directly.


Gemini didn't hallucinate anything. You just failed in basic reading comprehension.

In some sense, hallucinations as a problem have been solved already - their rate of occurrence seems much lower than that of people failing to read what is written instead of what they hoped it would be.


To add some math to the discussion:

- A human uses between 100 W (naked human eating 2000 kcal/day) and 10 kW (first-world per-capita energy consumption).

- Frontier models need something like 1-10 MW-years to train.

- Inference requires 0.1-1 kW computers.

So it takes thousands of human-years of energy to train a single model, but they run at around the same wall-clock power consumption as a human. Depending on your personal opinion, they are also 0.1-1000x as productive as the median human in how much useful work (or slop) they can produce per unit time.
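
Checking the "thousands of human-years" claim from the ranges above (all inputs are the rough figures quoted, not measurements):

```python
# Convert a frontier training run (1-10 MW-years, per the estimate
# above) into human-year equivalents at 100 W - 10 kW per human.
HUMAN_W_LOW, HUMAN_W_HIGH = 100, 10_000   # metabolic vs. per-capita

for mw_years in (1, 10):
    watt_years = mw_years * 1e6
    lo = watt_years / HUMAN_W_HIGH        # rich-world humans
    hi = watt_years / HUMAN_W_LOW         # metabolic-only humans
    print(f"{mw_years} MW-year ~ {lo:,.0f} to {hi:,.0f} human-years")
```

At the metabolic rate, 1 MW-year is 10,000 human-years; even against full per-capita consumption it's 100, so "thousands" holds for the middle of the range.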


The math is simpler, 1 human is irreplaceable by AI.

Therefore its value is infinite. Therefore Altman's hypothesis is toilet paper thin.


I remember when toilet paper was like ddr5


The human brain is also a product of billions of years of evolution. We branched off from our common ancestor 7-9 million years ago. We encode quite a lot of structure and information that is essential for intelligence, so counting only a single lifetime of training understates the starting point.

If you calculate 100 W × 7 million years, that's about 7×10^8 watt-years, or ~700 MW-years to train.
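
For comparison with the MW-year training figures upthread, the unit-consistent arithmetic (100 W sustained over 7 million years):

```python
# 100 W of metabolic power, sustained over the ~7 million years
# since our branch split, expressed in the same MW-year units used
# for frontier-model training runs.
watts = 100
years = 7e6
mw_years = watts * years / 1e6   # watt-years -> megawatt-years
print(mw_years)  # 700.0
```

So evolution's "pretraining" budget, on this crude accounting, is roughly 70-700x a single frontier training run.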


If you really want to go down that path, then AIs are the product of human ingenuity and labor, so you have to amortize all of that into AI training too. The numbers become pretty meaningless very quickly. That sand didn't up and start thinking on its own, you know.


That's the NRE of getting to where we are and having these LLMs.


The article is forgetting about Anthropic which currently has the best agentic programmer and was the backbone for the recent OpenClaw assistants.


True, we focused on hardware-embodied AI assistants (smart speakers, smart glasses, etc.) as those are the ones we believe will soon start leaving wake words behind and moving towards an always-on interaction design. The privacy implications of an always-listening smart speaker are orders of magnitude higher than those of OpenClaw, which you intentionally interact with.


Both are Pandora's boxes. OpenClaw has access to your credit cards, social media accounts, etc. by default (i.e., if you have them saved in the browser on the account that OpenClaw runs on, which most people do).


This. Kids already have tons of those gadgets on them. Previously I only really had to worry about a cell phone, so even if someone was visiting it was a simple case of "plop all electronics here," but now with glasses I'm not even sure how to reasonably approach this, short of not allowing it, period. Eh, brave new world.


Also Mistral, which is definitely building AI assistants even if they aren't quite as successful so far.


> "White House launches direct-to-consumer drug site..."

> "The site is not selling drugs directly to American patients..."

Just another layer of middlemen. They should go with the proper free market option and allow Americans to buy medication from other countries.


How could the Trump family then directly benefit from it?


https://arstechnica.com/health/2026/01/trumprx-delayed-as-se...

    There’s already reason to be suspicious of conflicts of interest with TrumpRx, the senators note. There’s a “potential relationship between TrumpRx and an online dispensing company, BlinkRx, on whose Board the President’s son, Donald Trump, Jr., has sat since February 2025,” the senators write.


Ego. Brand. But there is likely some financial angle buried in the plans somewhere.


Reading some of the other articles on that site, it's unclear how scientifically sound the original article is. A quick Google search gives different radiocarbon dating for the landslide: https://www.sciencedirect.com/science/article/abs/pii/S01695...

I don't know enough about the event to figure out the likelihood of either hypothesis, but this other data point is something to keep in mind.


How does one reliably carbon-date a site which got much extraterrestrial matter mixed into it? With probably different carbon-isotope ratios, because it's from "not around here"?


Claude didn't follow your "Every line must earn its keep. Prefer readability over cleverness. We believe that if carefully designed, 10 lines can have the impact of 1000." from https://github.com/quantbagel/gtinygrad/blob/master/AGENTS.m... given how bloated this demo is.

https://blog.evjang.com/2019/11/jaxpt.html is a better demo of how to render the Cornell Box on a TPU using differentiable path tracing.


The agents.md is from the upstream tinygrad repo: https://github.com/tinygrad/tinygrad/blob/master/AGENTS.md


> Never mix functionality changes with whitespace changes.

Whoa.. the cursor rule I didn't know I needed!


> Naina Raisinghani needed a name for the new tool to complete the upload. It was 2:30 a.m., though, and nobody was around. So she just made one up, a mashup of two nicknames friends had given her: Nano Banana.

Ah, that explains the silly name for such an impressive tool. I guess it's a more Googley name than what would otherwise have been chosen: Google Gemini Image Pro Red for Workspace.


Strongly disagree.

Google, OpenAI, and Microsoft all have a very confusing product naming strategy where it’s all lumped under Gemini/ChatGPT/Copilot, and the individual product names are not memorable and really quite obscure. (What does Codex do again?)

Nano Banana doesn’t tell you what the product does, but you sure remember the name. It really rolls off the tongue, and it looks really catchy on social media.


I honestly love the name Nano Banana. It's stupid as hell, but it's a bit of joy to say, especially with how corporate everything is named these days.


I agree, it's a great silly name that immediately jumped out at me because it felt so distant from the focus-tested names out of marketing that have become the standard today.


Whimsical.


I like that it's evidence that there's still some remnants of the old Google culture there.


> And for San Franciscans, the price on the menu is rarely the price you pay. Add in sales tax, a default 18-20% tip on the tablet screen, an SF Mandate or Cost of Living Fee, and 8.625% sales tax. Soon that seemingly cheap $15 lunch might be $20.

This is a key metric that the article doesn't properly account for when it comes to food prices. The asshole restaurateurs got an exemption from the anti-"drip-pricing" law that required all fees to be rolled into the listed price: https://oag.ca.gov/hiddenfees
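
To see how the quoted charges stack up on that $15 lunch (the 5% "SF Mandate" percentage is an assumption for illustration; the article doesn't give one):

```python
# Hypothetical fee stacking on the $15 lunch from the quoted article.
# The mandate/service-fee percentage is assumed, not from the article.
menu = 15.00
sf_mandate = 0.05      # assumed "SF Mandate / Cost of Living Fee"
sales_tax = 0.08625    # SF sales tax rate quoted in the article
tip = 0.18             # low end of the default tablet tip

subtotal = menu * (1 + sf_mandate)
total = subtotal * (1 + sales_tax) + subtotal * tip  # tip on pre-tax
print(f"${total:.2f}")  # $19.94
```

Even at the low end of the tip range and a modest mandate fee, the $15 menu price lands right at the article's ~$20 figure.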


"But the restaurant industry is special."

Only in the restaurant world are customers on the hook for ensuring a living wage. It's a stupid system but is culturally entrenched.


A significant portion of the bottom and middle segments of the restaurant industry have been enshittified. Lower quality, less service, and higher prices.

Meal prepping, cooking at home, and fine dining only.


The AI bits look interesting, but the laws appear vague and toothless. E.g., "identify every specific artificial intelligence program used" means all the reports will just say "This report was written either fully or in part using artificial intelligence," without any useful details.

> Artificial Intelligence The popularity of artificial intelligence (AI) has grown rapidly in the last several years and the widespread use of generative AI, or AI that can create original content, presents new legal considerations.

> With the passage of AB 316, a defendant may not claim that artificial intelligence they developed, modified, or used, and that is alleged to have caused harm to the plaintiff, did so autonomously.

> Additionally, law enforcement agencies will need to identify when artificial intelligence was used in official reports and the type of program they used (SB 524).

https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml... https://legiscan.com/CA/text/AB316/id/3223647

