Hacker News | new | past | comments | ask | show | jobs | submit | earthnail's comments

I don’t understand the approach

> TADA takes a different path. Instead of compressing audio into fewer fixed-rate frames of discrete audio tokens, we align audio representations directly to text tokens — one continuous acoustic vector per text token. This creates a single, synchronized stream where text and speech move in lockstep through the language model.

So basically just concatenating the audio vectors without compression or discretization?

I haven’t read the full paper yet (I know, I should before commenting), but this explanation puzzles me.


It's a variable-rate codec. The audio is still compressed, but by how much depends on the duration of the segment corresponding to a particular text token. The TTS model predicts one audio token per text token and its duration, and the audio decoder fills in a waveform of the appropriate length.
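In pseudocode, the decode side of such a variable-rate scheme might look like this. This is a hypothetical sketch to illustrate the idea; the function names and shapes are my assumptions, not the actual TADA implementation:

```python
# Hypothetical sketch of a text-aligned, variable-rate decode loop.
# One acoustic vector per text token; the predicted duration controls
# how much waveform each vector is expanded into, so the effective
# compression ratio varies per token.

def decode(text_tokens, acoustic_vectors, durations, audio_decoder):
    """text_tokens, acoustic_vectors, and durations are aligned 1:1.

    `audio_decoder(vec, dur)` is assumed to expand one acoustic
    vector into `dur` waveform samples (or frames).
    """
    waveform = []
    for vec, dur in zip(acoustic_vectors, durations):
        waveform.extend(audio_decoder(vec, dur))
    return waveform
```

So the stream stays synchronized with the text, but short tokens still cost fewer audio samples than long ones.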

German here. Some of it I can agree with. The traffic light though is very simple. Yellow means “it was green and will turn red”. Red+yellow means “it was red and will turn green”.

It’s this way in the entire country. There are many things I can get upset with in Germany (I moved abroad 10y ago and have an outsider’s perspective by now) but the traffic light example to me just indicates you didn’t ask why certain things were the way they were.


"The traffic light though is very simple. Yellow means “it was green and will turn red”. Red+yellow means “it was red and will turn green”."

That was not the point, that's the standard way the lights will work.

The point is that sometimes you have a light combination where the actual light post only has yellow/red, for example (no green light). Sometimes you have a light post that has red only (on/off).


I have never seen those anywhere in Germany. Are you sure you are talking about traffic lights?

After reading some parts of this thread, all I can recommend is the following: https://www.goodreads.com/quotes/539867-never-argue-with-an-...

(I'm sure there are various German things to criticize or to make fun of. Much of what I read here, however, says more about the (US?!) authors, though.)


It’s so German to blame the user!

The only monitor on the market of this size and resolution that I am aware of that has really high brightness and works well when I work outside on the terrace.

Really glad Apple is building it.


Are you being cheeky or do you really drag a monitor outside?

Well, they did stand up to the US administration and lost a lot of money in the process. That takes courage. They clearly were being bullied into compliance, and they stood their ground.

You can see the significance of this if you look at German Nazi history. If more companies had stood up to the administration, the Nazi state would have been significantly harder to build.

In my opinion, what Anthropic did is not a small thing at all.


The comment I replied to said that they believed OpenAI would allow "AGI to be used for truly evil purposes".

By contrast, Anthropic wouldn't? Yet Anthropic's stance is only two narrow restrictions. As I said, are those two things the only evil things possible?

If not, why is it that people on HN think Anthropic would not allow evil usage?

My hypothesis is a halo effect. We are so enthralled by Claude's performance that some struggle to rationally assess what Anthropic has actually done.

Yes, it's no small thing to say no to the Trump administration, but that does not mean they haven't said yes to, or otherwise facilitated, other evils.

In fact to me the statements from Anthropic seem to make clear they are okay with many evils.


> Yet Anthropics stance is only two narrow restrictions.

Really, I think Anthropic should have a single restriction: to not assist with illegal or unconstitutional activities. If automated killings etc. are illegal, then they would be covered by that one rule.

I don't think Anthropic should be in the business of deciding what is "evil".


If each of us individually or as corporations should not be in the business of deciding what is "evil", who should be in that business?

Everyone SHOULD continuously consider, decide, and live by moral judgements and codes they internalize, and use to make choices in life.

This aspect of life should NEVER be outsourced — of course, learn from and use codes others have developed and lived by — but ALWAYS consider deeply how it works in your situation and life.

(And no, I do NOT mean use situational ethics, I mean each considering, choosing, and internalizing the codes by which they live).

So, yes, Anthropic and anyone else building products absolutely should be deciding for themselves what they will build, for what purposes it is fit to use, and telling others about those purposes. For products like AI, this absolutely includes deciding what is "evil" and preventing such uses.

If the customer finds such restrictions are not what they want, they ARE FREE to not use the product.


> If each of us individually or as corporations should not be in the business of deciding what is "evil", who should be in that business?

This is easy imo. Two methods:

1. The law. It should not be legal for the US Govt to murder people at will. If it is legal, then of course they'll use tools to make it easier. Maybe AI, maybe Clippy. If they can't use AI then they'll fall back to using some other way of doing it like they've already been doing for several years.

2. Voting. For representatives that actually represent us and have our interest in mind rather than their own corrupt interests. And voting with our wallet against companies that do legal but morally bankrupt things.

Of course we're failing both of these hard right now. But imo the answer is not to give up and let corporations make the rules.

In other words, if it were legal for a normal citizen to murder anyone they wanted, of course they'll use Google Maps to help them do that. We don't put restrictions on how people can use Google Maps. Instead we've made murder illegal. We should be doing the same thing here.


It's illegal to drive drunk, or to read your cell phone while driving and hit strangers head-on.

Nevertheless, it wasn't lawmakers, it was car makers who innovated to build-in airbags and seatbelts and lane assist and and and ... under the theory that though it's illegal, bad things are done anyway, and guardrails still matter.

Colloquialism: "belt and suspenders".

Many, like Volvo, go above and beyond the requirements to make their vehicles safer, and then having demonstrated these guardrails, some become law as well (even as other makers in the industry kick and scream about being forced to, and riders rebel against buckling up).

As we haven't solved this stand off for a century, we are unlikely to resolve it within the pace needed by expansion of AI. In this scenario, Anthropic is Volvo.


Exactly zero of those account for an individual's or company's ability to live by their own moral code.

And this AI software is not a mere static object like a hammer that can be handed off to a customer and what it is used for is their business, to build a house or bash a living skull.

This is a system that must be constantly maintained by its builders.

Moreover, even if we use your standard, the law, it has already been decided in Anthropic's favor.

What you require is that Anthropic actively participate in activities that they consider abhorrent and/or unwise. SCOTUS has already ruled that a business cannot even be required to sell a cake to someone if it does not like the intended purpose (in that case, at a celebration at a gay wedding).


> even if we use your standard, the law, it has already been decided in Anthropic's favor.

I support Anthropic here. They had a deal with the Govt and the Govt bullied them. That should not be allowed, and Anthropic is suing which makes sense to me. Anthropic should be allowed to set any terms of use for the product that they want, and gain or lose business based on those terms. That's fine.

I'm saying that the failure is actually upstream. It should not be possible for Anthropic's AI to be used to mass-surveil or murder people, because those things should be illegal by law and the govt should not be allowed to do it and should not be doing it. Somehow it isn't this way though.

So now that we find ourselves in this failed state, we have to rely on Anthropic to be "the law": to identify what's "evil" and disallow it. I'm saying that's out of scope for a tech company and they shouldn't be expected to do that. They should only be in the business of making good tech and then be free to let it be used by anyone for any purpose that the law allows.

This also means that if it's illegal to share information on how to build a bomb without AI, then it should be illegal for Claude to share that information with AI. So Anthropic does need to make sure they're not breaking the law themselves as well.


Ah, good, we generally agree

For sure, Anthropic should NOT have been forced to decide the ethics of deploying their tech

Nevertheless, they should always be considering the ethics of their own creations and actions, and it seems they are — as soon as they got bullied by a failing regime, they had the right answer: 'no, that is not ethical and we won't allow it with our products'.

The problem is that the law only very roughly captures what is right and just, so there are many things that are legal that are unethical, at the same time there are many things that are ethical but illegal. So, we can't entirely outsource our personal or corporate ethics to the law.


I tried it as well with a contrarian view on UBI. I think the UBI one is a great test case. If you’re against the idea you will likely argue that it is idealistic and that in the real world it would create bad incentives.

So basically you end up arguing for a darker, more pessimistic world view, and that tends to get flagged very quickly by the tool right now. I think you should fix that. It’s a mistake in modern discussions to be overly positive; HN feels real because people can leave pretty harsh critiques. It just has to be well argued. Don’t raise the bar for well-argued too high though, because nobody’s perfect.

Anyway, I love the idea and really hope you’ll succeed. Hope my feedback has been somewhat helpful.


Yes, thank you! I appreciate your support very much.

You make a good point -- and that is exactly the kind of thing we are trying to do, i.e. enable a good-faith, but strongly disagreeing, discussion on something like UBI.


> It’s a mistake in modern discussions to be overly positive

According to who?

I trust you'll publish your double blind study with sample sizes and p values shortly /s


Somewhat weirdly I’m very happy about this price increase as a customer. The messaging is clear and completely understandable. Well done.


Not necessarily. If TSMC doesn’t build these fabs in Japan or USA, these governments might just mandate that chips are manufactured elsewhere. Intel could have a big comeback.

This keeps people locked into the TSMC universe. The Japan and US fabs produce just a fraction of what these countries need.


Just doesn’t match my experience at all. AWS is super complex but stuff works. GCP has clearly the nicest interface but not every feature that AWS has. Azure is complex, slow, hard to use and incredibly opaque. No way I’ll use it again of my own free will.


We've been through the big three, starting with AWS, then GCP and now Azure. I long for the days of AWS and GCP.


Well then we have to agree to disagree, I will keep Azure on top of my list, AWS second and GCP last.


I moved my stuff to Hetzner. Obv I have no idea about your situation, but I found it fairly trivial for my stuff.

But I can't figure out how to replace GSuite.


Well for one thing, call me a sell-out or accuse me of lacking craftsmanship, but I like my databases managed. Then also storage buckets, IAM, general cloud security and other niceties.

And I don't think it's for a lack of skills, I know my way around a Linux box - it's just that I save so much time. I'll occasionally build small projects in a VPS (sometimes cramming the db in there too!) but I don't feel I can do it for other more serious work projects.

Hetzner has basic load-balancing and security around the VPS and that's it, OVH has a bit more but it all looks quite green.


You're not wrong. Europe has no clouds, only hosting and VPS providers. Nothing has changed in 20 years. Really sad, actually.


Oh, I wasn't trying to say you're wrong. Just wanted to share that for me, the bottleneck has been elsewhere, and that I personally found GSuite harder than the compute cloud.


> but I like my databases managed. Then also storage buckets, IAM, general cloud security and other niceties.

Scaleway offers all of these.


> Well for one thing, call me a sell-out or accuse me of lacking craftsmanship, but I like my databases managed.

I had the same worries and then we moved to OVH and Hetzner and had no issues.

AWS RDS is about 10x more expensive than bare metal with maybe 1/4th the disk performance.

Regarding operations, I simply set up a primary and a read replica, together with PGBackRest continuous archiving and backup to an S3-compatible storage service.

Has worked like a charm for the last two years, and recreating the database is a breeze.

Our database is ~8 TB large.
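For anyone curious, a setup like the one described can look roughly like the sketch below. The stanza name, paths, bucket, and region are hypothetical placeholders, not the commenter's actual configuration:

```
# /etc/pgbackrest/pgbackrest.conf -- illustrative values only
[global]
repo1-type=s3
repo1-s3-bucket=pg-backups            # hypothetical bucket name
repo1-s3-endpoint=s3.example.com      # any S3-compatible endpoint
repo1-s3-region=eu-central-1
repo1-retention-full=2                # keep two full backups

[main]
pg1-path=/var/lib/postgresql/16/main

# postgresql.conf on the primary, enabling continuous WAL archiving:
#   archive_mode = on
#   archive_command = 'pgbackrest --stanza=main archive-push %p'
```

Recreating the database on a fresh node is then essentially a single `pgbackrest --stanza=main restore`.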


Do they have any plans to move off the US?


No plans according to some recent reddit threads, but you can probably email their support for more up-to-date info.

