The consequence of saying they cannot choose not to have them is that you're requiring them to have them whether or not the people there want them. It's also a temporary moratorium. Maybe the industry should have been more responsible and not passed so many externalities on to the public sector if it didn't want to face regulation.
I think the top-level comment basically hasn't engaged in any of the cost-benefit analysis; it just strawmans the subject into "banning all industry." The state is not doing that, and it allows other manufacturing to exist. Maybe the data center business should learn from those industries how to conduct itself.
So companies are actively trying to build them in the state, and your claim is that the state has to allow them to be built? Isn't that just a delayed requirement to have data centers? Sure, they aren't built today, but if the government cannot stop them at the permit stage, or at any point after, it is effectively a requirement to have them. If you want to deny a state the right to decide via democratic processes, you are effectively requiring data centers to be built in that state.
How else could states deny those data centers if they cannot pass legislation to prevent them, or to require XYZ parameters before they are allowed to be built? Your argument is nonsensical in my opinion, especially in context. I get that if you do a string compare they are different sentences, but the semantic effects of the two statements are equivalent given that companies are actively trying to permit and build them.
As someone who's really into music theory, I am always annoyed by what I perceive as a patronizing faux exaltation of it supposedly being mathematically based. It's not math; it's cyclical patterns. Yes, it can all be represented mathematically, and it is surprising to some people how something with feeling can map to these interesting cycles of discrete values in unexpectedly regular ways, and there are very interesting mathematical ratios involved, but that doesn't make it math. I don't think we need to pat John Coltrane on the head and talk about how he's actually kind of smart because he's doing math.
Actually, I think maths and jazz have something in common in the general public perception: that you have to be smart to "get it".
Nobody will try to perform a deep intellectual analysis of Lady Gaga's or Ed Sheeran's work the way they analyse Coltrane or Miles Davis (or Mozart, or Stravinsky). Those musicians are intellectuals of the sort Einstein is, unlike Lady Gaga or Ed Sheeran (in the collective perception). Jazz is intellectual music.
And when they analyse something, "smart" people use maths.
I am putting scare quotes around "smart" here to emphasise that this is largely a social perception and expected behaviour. However, maths can sensibly be used to analyse art, just as it's used elsewhere. This is not patronising; it is more that maths provides a useful language for talking about patterns.
Because I have personally never seen "jazzes" pluralised and I didn't think of it.
"Maths" and "math" are both used, and the reason I used the plural form is not that I insist on anything, but that it is the more commonly used of the two. I personally don't mind either form.
With that said, linguistically, using the plural for either is a bit odd, since that would imply you can pick "a" mathematic out of many, or "a" jazz out of many. But linguistics is not math (nor are linguistics maths); logic doesn't always apply.
Number theory is all about cyclical patterns, and its theorems fetishize finding cycles of discrete values with suspiciously regular behavior. Last I heard, number theory, group theory, and Fourier analysis are all math.
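To make those cycles concrete, here's a toy Python sketch (nothing canonical, just the standard 12-tone naming): the circle of fifths is repeated addition of 7 mod 12, and because gcd(7, 12) = 1 it visits all twelve pitch classes before coming home.

    # Toy illustration: the circle of fifths lives in the cyclic group Z/12.
    # Stepping up a perfect fifth = adding 7 semitones mod 12, and because
    # gcd(7, 12) == 1, repeating it hits all 12 pitch classes exactly once.
    NOTES = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]

    pc = 0  # start on C
    for _ in range(12):
        print(NOTES[pc], end=" ")
        pc = (pc + 7) % 12
    print()
    # C G D A E B F# C# G# D# A# F -- one full cycle back to C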
And yes, I will die on this singular hill: it's all one math, not a bunch of "maths". Math is one interconnected cathedral with music flowing through it, not a drawer full of unrelated trinkets. The British habit of calling it "maths" is oddly reductionist -- it makes it sound like you've got separate jars labeled "algebra", "geometry", and "spicy numbers".
>In linguistics, a mass noun, uncountable noun, non-count noun, uncount noun, or just uncountable, is a noun with the syntactic property that any part and quantity of it is treated as an undifferentiated unit, rather than as something with discrete elements. Uncountable nouns are distinguished from count nouns.
So "math" is the proper shortening of the mass noun "mathematics". What other mass nouns do you shorten by abbreviate by keeping the "s" ending?
We do not say "phys" for physics or "econs" for economics, so keeping the "s" in "maths" breaks the rule.
That's an excellent comparison, and it raises an interesting question. When cooking, a basic understanding of chemistry will generally prove quite useful, but when it comes to music I'm not so sure math does the same. Maybe for some electronic artists who write their own tools?
There are features that can be described with math, but if you try to approach music purely as math, it evaporates.
DSP uses a lot of actual math for processing and synthesis. But trad music's chords, rhythms, melodies, and forms are linguistic grammars that can be annotated mathematically after they're defined.
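For concreteness, this is the kind of math the DSP side actually leans on -- a minimal additive-synthesis sketch in Python (the sample rate, fundamental, and harmonic amplitudes are arbitrary choices of mine, not any canonical recipe):

    import math

    # Minimal additive-synthesis sketch: a note as a sum of sinusoids.
    # The 440 Hz fundamental and the harmonic amplitudes are arbitrary.
    SAMPLE_RATE = 44100
    f0 = 440.0                    # fundamental (A4)
    harmonics = [1.0, 0.5, 0.25]  # relative amplitudes of partials 1..3

    def sample(t):
        return sum(a * math.sin(2 * math.pi * f0 * (k + 1) * t)
                   for k, a in enumerate(harmonics))

    wave = [sample(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]  # 1 second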
The creation process isn't mathematical. Composers are always making choices from possibilities, and the choices rely on subjective taste.
With Coltrane, there are a lot of similar structures he could have used, and likely experimented with.
But he picked this particular one for subjective creative reasons.
I'm certainly no chef, and am only somewhat familiar with one particular side of chemistry (physical chemistry) but I don't see how it would be useful in cooking. Unless you count boiling water as chemistry.
The logic behind organic extractions, the temperatures at which different things oxidize or otherwise degrade, the temperature dependence of reaction kinetics (it's nonlinear which is incredibly important when you want one thing but not another), the thermal transfer characteristics of different materials and configurations, all sorts of stuff. The actual "doing" in cooking and baking is figuratively 95% chemistry (and 5% biology) even if the goal is different.
You don't see as much of that mindset among laymen, but it's how all industrial processing is done. As an arbitrary example, given a process involving yeast, you can construct time vs. temperature vs. moisture vs. salt curves to model its behavior.
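A toy sketch of the idea, with every constant invented for illustration: fermentation rates are roughly Arrhenius in temperature, which is why "a bit warmer" is very much not "a bit faster":

    import math

    # Toy model only -- all constants here are made up for illustration.
    # Reaction/fermentation rates are roughly Arrhenius in temperature:
    #   rate = A * exp(-Ea / (R * T))
    A = 1.0e9        # pre-exponential factor (made up)
    Ea = 55_000.0    # activation energy, J/mol (made up)
    R = 8.314        # gas constant, J/(mol*K)

    def rate(temp_c):
        return A * math.exp(-Ea / (R * (temp_c + 273.15)))

    for t in (20, 25, 30, 35):
        print(f"{t} C -> relative rate {rate(t) / rate(20):.2f}")
    # A 15 C bump roughly triples the rate in this toy model.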
I would love to see the effect of the mirror's motion on the camera in a weightless environment. I bet it's enough to measurably affect the picture, especially on a long exposure. The net torque of it opening and then closing should be near (but probably not exactly) zero, but while it's open the camera should spin a tiny amount.
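A back-of-envelope sketch (every number below is a rough guess, not a measured value) using conservation of angular momentum:

    import math

    # Back-of-envelope only. While the mirror swings up, the free-floating
    # body counter-rotates: I_mirror * d(theta_m) = I_body * d(theta_b).
    I_mirror = 2e-5               # kg*m^2, guess for a ~30 g SLR mirror about its hinge
    I_body   = 5e-3               # kg*m^2, guess for a ~1 kg camera about the same axis
    sweep    = math.radians(45)   # mirror flips up roughly 45 degrees

    body_turn = I_mirror * sweep / I_body
    print(f"body counter-rotates ~{math.degrees(body_turn):.2f} degrees")
    # ~0.18 degrees -- tiny, but plenty to smear a long exposure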
Oracle Cloud isn't actually a "terrible cloud", but it definitely isn't geared toward smaller users like startups and individuals; it's downright hostile to casual use. But for Fortune 500 companies who don't mind being in bed with Oracle, the price can be right.
The claim that the DB hasn't been the focus for 20 years is mostly true, with the exception that all of their applications are well-served by having a very scalable, very bulletproof database in-house.
Just let go of the notion that a four-day GitHub history necessarily means the project is only four days old. It's a ridiculous assumption to base an argument on. It's extremely normal to have work in one, perhaps internal, repo which you then blast over to a public repo in one (or a few) big commits. There is zero reason for them to let you see their internal progress.
> It's extremely normal to have work in one, perhaps internal, repo which you then blast over to a public repo in one (or a few) big commits.
Did you even read the commit history? That is not what is happening here.
This is turning into a "don't believe your lying eyes" situation. Why are you people so desperate to pretend this wasn't written in a weekend?
> There is zero reason for them to let you see their internal progress.
Again, I ask you -- what is the reason for them to edit commit history to show incremental progress as if it were written in a weekend, when it actually was not?
Okay, so there's overwhelming evidence that their public GitHub history is accurate and NemoClaw was written in a weekend, and the only reason to think it's not accurate is that... it's technically possible to edit git history -- even though there's no reasonable explanation for why they would have edited it the way they did.
So... yeah, draw your own conclusion I guess, whatever.
Lmfao. This is how I know you have never worked at a big company before. I promise you every big company has processes around open-sourcing things. It's not something you just whip up and release over a weekend; the legal approval alone would have taken months.
I have buddies at Nvidia. Their primary platform is not GitHub. Sorry you're so naive. Almost certainly this was built in-house for at least a month or two prior. Then private repo. Approvals. Then public.
Not to mention the fact that Jensen literally announced it at their biggest yearly launch conference. No, you're totally right: he mandated someone build it over the weekend while drafting up a full presentation and launch announcement about it.
That's more plausible than the very normal practice of developing internally, scrubbing commits of any accidental whoopsies, vetting it, and then putting it out publicly.
"Overwhelming evidence" = git history that is completely fungible. Once you're done here I have a lobster claw to sell you
> Again, I ask you -- what is the reason for them to edit commit history to show incremental progress as if it were written in a weekend, when it actually was not?
Answer this question or we're done here, thanks.
> Almost certainly this was built in house for at least a month or two prior. Then private repo. Approvals. Then public.
Source, other than you making it up?
> That's more plausible than the very normal practice of developing internally, scrubbing commits of any accidental whoopsies, vetting it and then putting it out publicly
Could you point to a specific commit that you believe represents a transfer from a separate internal source control system, i.e., one containing more work than could plausibly be achieved in the interval between it and the prior commit?
You cannot really be this naive, but I'll play along:
> what is the reason for them to edit commit history to show incremental progress as if it were written in a weekend, when it actually was not?
Like I said, you are letting on that you have never actually worked on an internal project that is going to go open source. There are a million and one reasons. Here are some completely normal and plausible ones: it was worked on for weeks internally; commits referenced other internal NVIDIA software/libraries they used; it name-dropped projects and code names. Maybe it was just an extremely long chain of messy commits that would be improper to have on a potentially big open source repo. So here's what happens (since you clearly are unaware of how people operate in this world): you "unstage" everything and write canonical commits free of all the garbage. You squash, you merge, you set up standards, you leave a clean commit history. All of it is very important for open source.
> Source, other than you making it up?
Ah yes, let me just go ping the people who worked on it. Lol. The source is my decade-long experience working on similar projects, where I literally did this scrubbing of commits. You have a circular argument; "it was done in a weekend because the commits say so" is really quite the hill to die on.
> Could you point to a specific commit you believe is a representation of an internal data transfer
If there were any indication left of a "transfer", it wouldn't have served its purpose, would it? But if you really are looking for something, how about the fact that there's only one human contributor on the first few commits? Very odd; you would think a massive open-sourcing of a project like this would probably involve a team, right? Or do you believe AI tools have gotten so good that one engineer is just driving with Claude and open-sourcing full launches?
Here, how about we just do some critical thinking. Nvidia set up a "Set up NemoClaw" booth at GTC, which happened just a few days ago. Jensen had a full presentation for it, and it was a big highlight.
Do you really think a company as big as Nvidia is hinging the release of a big announcement on the hope that ONE engineer is going to START working on it a few days before the announcement and ACTUALLY get it done to a point where they can talk about it on stage?
Please, come on, no one can be this dense. You have to be trolling. Try an argument other than "the commits say so". Just apply a basic level of understanding of how software is built and released.
Nothing more to discuss here; the commit history (and your lack of coherent responses beyond the hypothetical "it's technically possible it COULD have happened this way") speaks for itself. Thanks for trying, though.
edit: Wait, you don't "have buddies at NVidia" -- you literally work at NVidia. Weird that you tried to hide this information? No wonder you're so desperate to pretend this project is more than it actually is, though; it must be embarrassing for you that your company didn't scrub git history properly before making this public!
Ding ding ding. See, it would have been too easy to just say "I know for a fact". I just wanted to walk you to the conclusion. Congrats.
Now you are more enlightened about how things work. Of course, Nvidia is a big company; not everyone who works at Nvidia knows everything about every team. That's by design. Welcome to working at a big company! I do have buddies who worked on this project internally, and yes, it was done over many weeks and months.
Thanks for playing. I do know for a fact it's definitely not what you think it is, but I had a chuckle watching you twist yourself into a knot trying to convince me you knew better. Why would I disclose information about myself? Odd thing to expect from someone. But I had you riled up enough to go looking through my comment history, then my GitHub, then my website, huh! Must have really struck a nerve. Don't worry, I won't do the same to you; I don't care enough about random people yapping on the internet.
edit: Removing, not productive to engage with this. pre-emptive apology to dang/tom if this gets cleaned up, most of this thread is not productive and I should not have continued responding much earlier.
Lol, where did I make it sound like any of that? I just saw you confidently make the wrong claim and tried to Socratic-method you into understanding. You are sadly too far gone to understand.
Good ad hominem. I'd be riled up too if I were publicly dressed down and proved wrong. So now you know: commit history doesn't mean jack sh!t. Sorry I had to ruin Christmas for you.
> you guys wanted to make this look like it was written in a weekend though
Imagine thinking this was done to convince anyone about the TIME it took to write this project. Here's a very simple explanation: those commits reflect a PORT over to public GitHub for launch. The author chose to do it in some number of commits instead of a single "feat: full implementation" commit. The port happened just before the announcement -- not the writing of it.
Now I won't propose hypotheses, because clearly the Socratic method didn't work on you. So sit down and learn how things work.
And next time, try not to be so confidently wrong on the internet. I had a very good laugh watching you twist and turn yourself. Must have been typing furiously thinking you really were in the right :)
> Why are you people so desperate to pretend this wasn't written in a weekend?
Because it wasn't? And your only "proof" of it was commit history. "You're telling me not to believe my lying eyes" -- hilarious. You are being told again and again that it means nothing. It's not a blockchain. You are allowed to write commits as you see fit without making them a system of record of time spent.
> People with above room temp IQ can figure out what's going on here
Yes, we can. We have one person convinced they can look at commit history and say for sure that that is exactly when the code was written. No developer agrees with you, as you have been told a couple of times by other people above as well.
It's quite obvious you work at some small shop or as a freelancer and have never worked in any kind of big environment. No, you cannot just open-source a "weekend" project at any big company. Wherever you are, you may be allowed to vibe-code and ship something under your company's GitHub willy-nilly.
It's just not the reality in any serious place. No one is trying to deceive you; you have just deceived yourself. Thanks again for playing.
You can have the last word you are so desperate for.
> Here are some completely normal and plausible [reasons]. It was worked on over weeks internally, commits referenced other internal NVIDIA software/libraries they used. It name dropped projects and code names. Maybe it was just an extremely long chain of messy commits that is improper to have on a potentially big open source repo.
... it referenced internal servers and they want to scrub that for security reasons
... it might have had secrets embedded at some point because it was a quick and dirty proof-of-concept
... it could have had swear words in the code
... it had enormous binaries checked in at one point and they don't want the repo to be huge
... they don't want you to know the names of everyone that worked on it
... it's forked off other internal work that isn't public yet
There are so many reasons that the easiest thing to do is just snapshot it and have minimal public git history. At some places I've worked, the public repo gets exactly one commit per release. Did NVidia do this? Well, they didn't collapse it down to a single commit, but we have no evidence that the commits we see reflect the actual internal development timeline.
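And for anyone who doubts how fungible commit timestamps are, here's a minimal Python sketch against a throwaway repo -- both dates below are arbitrary, which is the whole point:

    import os
    import subprocess
    import tempfile

    # Demonstration that git commit timestamps are caller-supplied metadata,
    # not a trusted record. The repo, name, and dates here are all arbitrary.
    repo = tempfile.mkdtemp()

    def git(*args, env=None):
        subprocess.run(["git", *args], cwd=repo, check=True, env=env)

    git("init")
    git("config", "user.name", "Example Dev")
    git("config", "user.email", "dev@example.com")

    with open(os.path.join(repo, "main.py"), "w") as f:
        f.write("print('hello')\n")
    git("add", "main.py")

    # Both the author and committer dates can be set to anything at all.
    backdated = dict(os.environ,
                     GIT_AUTHOR_DATE="2020-01-04T10:00:00",
                     GIT_COMMITTER_DATE="2020-01-04T10:00:00")
    git("commit", "-m", "looks like it was written in 2020", env=backdated)
    git("log", "--format=%ad %s")   # prints the fabricated date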
Because unless you can fund several teams -- kernel, firmware (BIOS, etc.), GPU drivers, qemu, KVM, extra hardening (e.g., qemu running under something like bpfilter) -- plus a red team, security through obscurity is cheaper. The attack surface is just too large.
What is this "security through obscurity" you're talking about? We're talking about running linux in a VM running in a browser. That has just as much attack surface (and in some ways, more) as running linux in a hypervisor.
I just had a conversation with Gemini where I asked it to analyze my style, and one of the things it claimed was that I referred to things as "AI slop" and "brainrot", both of which are terms I have never used. I spent a few minutes trying to get citations for that, and it kept producing the same quotes from other people while insisting it had corrected the record.
Seems like it's overstating perceived anti-AI sentiment. :)