What are the fatalities for e-bikes vs SUVs in the US per year?
Your comment is irrelevant otherwise because last time I checked, cars are the real problem, and concern over e-bikes / delivery bots is just another lame extension of “safetyism” and ignorance around public transport failures that just misses the mark.
“Riding in traffic” is half the issue here. Like trying to explain water to fish.
I'd like to think I'm about as car-skeptical as your average person with no driver license who just got back from taking three forms of transit home from an all-day recreational road cycling event. But I'm a bit nervous about the speeds of some e-bikes.
A friend of mine spent a week in the hospital recently after crashing his new e-bike almost immediately after buying it. One interpretation of his accident is that he didn't have some of the right instincts for riding a bicycle at that speed.
I don't actually have a clear sense of the breakdown of risk attributable to the different factors of lack of appropriate cycling infrastructure, lack of appropriate rider training or experience, lack of appropriate rider expectations, or inherent safety or stability problems of some designs. My friend whom I mentioned above said his doctors told him that they had been seeing a lot of patients who'd crashed e-bikes (as well as electric mopeds and electric skateboards) at speeds that produced fairly serious injuries.
That is a regulatory issue and a naming issue. Those are actually motorcycles. In Europe e-bikes are capped at 25 km/h (the electric assistance stops at that speed).
So your problem is (electric) motorcycles that are (legally?) accessible without a motorcycle license and motorcycle equipment. For safety, what matters is the speed and weight of the vehicle: the faster and heavier, the more dangerous.
I'd also note that unlike in SUV accidents, your friend put far fewer people in danger, if anyone besides himself.
I'm sorry to hear about your friend, and hope they recover well.
Something I think a lot about when it comes to e-bikes is the level of protective gear people feel they ought to wear on "a bike". Not all cyclists even wear helmets (obviously bad), but in addition to a helmet, on an e-bike you really ought to be wearing elbow and knee protection, purely because of the speed involved.
However, my sense is that people (a) don't think about that at all because they think of it as just like a bicycle, or (b) don't want to travel with all of that extra gear. They want to treat an e-bike like a bicycle, when it is something much more.
I say all of this as a cyclist (non-e-bike) and rollerblader. On my bicycle I will just wear a helmet, but because of the particulars of rollerblading, I always wear elbow-pads and knee-pads. Differing circumstances require different adaptations.
Indeed, if it's going above 50 km/h, it's not a bike, it's a motorbike. Protective gear should match the speed and weight of your vehicle. To drive a motorbike, you should have a motorbike license and equipment. It feels like a regulatory issue, frankly.
They can both be a problem. I saw a kid hitting a dike like a ramp with one of these electric dirt bikes. I've seen kids too small for these cruising around way too fast with no helmet.
Big trucks and SUVs are a much bigger problem. But that doesn't mean kids riding around on motorcycles isn't also a problem.
The point of contention is calling them e-bikes instead of mopeds, e-motos, or motorcycles, which you did, but the article didn't. And they are a journalist, so I hold them to a higher standard.
I think we can tackle different issues at the same time.
Heart issues are a bit harder to fix. A healthy lifestyle in general is something you need the right environment for, and good education about. Still, it's not impossible: any sane country has food labeling requirements, education around nutrition, and promotion of physical exercise. It's being done.
Similarly, fixing car-centric city design is not easy at first, but it can be done and has been done:
Relax zoning and parking requirements, and provide good, fast collective transport alternatives, that is, with dedicated lanes and safety staff. The general idea is that you shouldn't be forced to have a car if you don't want one. Even people who do want to keep their cars will be happier because there will be fewer people on the road overall: imagine the traffic jam you're stuck in if half the people vanish because they're on a bike or in a subway. Whoosh, no more traffic jam.
It's not like the US didn't try. California spent 15 years trying to build a high-speed train and failed. Canada has been talking about building trains forever too, and it usually goes nowhere because the budgets explode, like every major infrastructure project these days.
I wonder what's different between these English speaking countries you mention failing to build out rail transit, and places like Japan and China that have built fabulous rail networks.
Japan is a fairly unique case, and probably does not share much with China aside from being in the same region. Japan is geographically well suited to serving a large portion of the population with one long line with a few branches. That's a convenient advantage.
China just doesn't have to worry about environmentalists or anyone else locally trying to stand in the way, they just bulldoze them and build.
China also has much lower labor costs, and even Japan is a good bit cheaper (than the US, at least).
Most of the rail has to get around mountainous, uneven terrain subject to earthquakes, strong winds, and heavy rain. California should be able to build rail parallel to I-5, over long, flat terrain without extreme weather or strong earthquakes. The problem seems to be a political one, not an engineering one. In fact, if the Interstate Highway System did not already exist, I doubt the U.S. today would be able to accept and complete it.
> one long line with a few branches
I currently live in Japan, and that does not really match what I've observed. There are three distinct railway companies in my area (JR, Tokyu, Yokohama Municipal Subway), each with their own dedicated rail, trains, power supply, etc.
The situation is more like "a disjoint union of graphs, where some of the graphs are connected".
LA proper seems to have a density of 3000/km^2 according to Wikipedia.
A perhaps more interesting use case is the Utsunomiya light rail. Utsunomiya has a density of around 1200/km^2.
What they ended up doing was building a new tram with exactly one line. The main thing they did was make sure the tram comes frequently, including off peak.
End result is people rely on the tram line and the tram is making good money, being operationally profitable (still gotta pay back construction costs of course).
Utsunomiya is obviously not exactly greater LA, but Utsunomiya has on average 2.25 cars per household[0]. It has traffic issues and people feel the need to own a car. And yet the tram line is finding success because transportation is a local issue, not a global one!
You can solve for transportation issues in crowded areas. Few reasonable people are lamenting that you don't have a train between Madison, WI and Chicago every 15 minutes. Many are simply lamenting that even at a local level, PT in many places is leaving a lot on the table despite there being chances of success!
Smaller focused PT has proven itself to work time and time again, and compounds on other PT projects in the area.
California high speed rail isn't running now but it is improving lots of things along the way. For example one of the most dangerous crossings in the state is now grade separated with the Rosecrans/Marquardt Grade Separation Project.
I wonder if California high speed rail will ever surpass quadcopter personal vehicles in passenger miles per year. I know which way I'd bet for the year 2040.
Ha, even using the UK as a counterpoint, they do pretty well. I enjoy taking the LNER, and appreciate that it is a 'slow' train that happens to run 50% faster than the top speed of Amtrak in all but a very limited set of tracks in the NEC. And maybe I've just had unusually good luck, but LNER has almost always been punctual.
OTOH, on my visits to Europe I am simultaneously impressed with the prevalence of passenger train options, but disheartened by the price. If Europe struggles to provide really affordable trains, there isn't much hope for the US. Aside from regional train options in the densest areas, we just have too much distance to cover. Infrastructure costs would kill the plan. At this point maybe we should just be trying harder to produce renewable fuels for planes.
As a tourist or outsider, the cost of trains in Europe is going to be much more expensive. In the Netherlands for example, the price of a train ticket without a subscription (such as for tourists) is very high; the price of a monthly subscription for free train rides outside rush hour is €130/month, which is way less than monthly cost of car use.
You can't compare qualia of suffering, at least not with our current technology. That's the point: they both involve suffering, but that doesn't mean one is inherently worse than the other. The details and experience matter, which got glossed over in these stupid debates, hence the loss of perspective.
Honestly, I had to read the wiki page on false equivalence, and you're not applying the fallacy correctly.
“A zero-knowledge rollup (zk-rollup) is a layer-2 scaling solution that moves computation and state off-chain into off-chain networks while storing transaction data on-chain on a layer-1 network (for example, Ethereum). State changes are computed off-chain and are then proven as valid on-chain using zero-knowledge proofs.”
The prompts and responses are used as training data. Even if your provider allows you to opt out they are still tracking your usage telemetry and using that to gauge performance. If you don’t own the storage and compute then you are training the tools which will be used to oppress you.
> The prompts and responses are used as training data.
They show a clear pop-up where you choose your setting about whether or not to allow data to be used for training. If you don't choose to share it, it's not used.
I mean I guess if someone blindly clicks through everything and clicks "Accept" without clicking the very obvious slider to turn it off, they could be caught off guard.
Assuming everyone who uses Claude is training their LLMs is just wrong, though.
Telemetry data isn't going to extract your codebase.
I am curious where your confidence that this is true, is coming from?
Besides lots of GPU's, training data seems the most valuable asset AI companies have. Sounds like strong incentive to me to secretly use it anyway. Who would really know, if the pipelines are set up in a way, if only very few people are aware of this?
And if it comes out, it's "oh gosh, one of our employees made a mistake".
And they already admitted to training on pirated content. So maybe they learned their lesson... maybe not, as they are still making money and want to continue to lead the field.
1. There are good, ethical people working at these companies. If you were going to train on customer data that you had promised not to train on there would be plenty of potential whistleblowers.
2. The risk involved in training on customer data that you are contractually obliged not to train on is higher than the value you can get from that training data.
3. Every AI lab knows that the second it comes out that they trained on paying customer data after saying they wouldn't, those paying customers will leave for their competitors (and sue them into the bargain).
4. Customer data isn't actually that valuable for training! Great models come from carefully curated training data, not from just pasting in anything you can get your hands on.
Fundamentally I don't think AI labs are stupid, and training on paid customer data that they've agreed not to train on is a stupid thing to do.
1. The people working for these companies are already demonstrably ethically flexible enough to pirate any publicly accessible training data they can get their hands on, including but not limited to ignoring the license information in every repo on GitHub. I'm not impressed with any of these clowns and I wouldn't trust them to take care of a potted cactus.
2. The risk of using "illegal" training data is irrelevant, because no GenAI vendors have been meaningfully punished for violating copyright yet, and in the current political climate they don't expect to be anytime soon. Even so,
3. Presuming they get caught red-handed using personal data without permission (which, given the nature of LLMs, would be extremely challenging for any individual customer to prove definitively), they may lose customers, and customers may try to sue, but you can expect those lawsuits to take years to work their way through the courts; long after these companies IPO, employees get their bag, and it all becomes someone else's problem.
4. The idea of using carefully curated datasets is popular rhetoric, but absolutely does not reflect how the biggest GenAI vendors do business. See (1).
AI labs are extremely shortsighted, sloppy, and demonstrably do not care a single iota about the long term when there's money to be made in the short term. Employees have gigantic financial incentives to ignore internal malfeasance or simple ineptitude. The end result is, if anything, far worse than stupidity.
There is an important difference between openly training on scraped web data and license-ignored data from GitHub and training on data from your paying customers that you promised you wouldn't train on.
Anthropic had to pay $1.5bn after being caught downloading pirated ebooks.
So Anthropic had to pay less than 1% of their valuation despite approximately their entire business being dependent on this and similar piracy. I somehow doubt their takeaway from that is "let's avoid doing that again".
First: Valuations are based on expected future profits.
For a lot of companies, 1% of valuation is ~20% of annual profit (a P/E ratio of 20); for fast growing companies, or companies where the market is anticipating growth, it can be a lot higher. Weird outlier example here, but consider that if Tesla was fined 1% of its valuation (1% of 1.5 trillion = 15 billion), that would be most of the last four quarters' profit on https://www.macrotrends.net/stocks/charts/TSLA/tesla/gross-p...
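The relationship between a valuation-based fine and annual profit can be sketched with purely illustrative numbers (none of these are real financials):

```python
# Back-of-the-envelope: how large a fine of 1% of market cap is
# relative to a year's profit, as a function of the P/E ratio.
def fine_vs_profit(valuation: float, pe_ratio: float,
                   fine_fraction: float = 0.01) -> float:
    """Return the fine as a fraction of annual profit."""
    annual_profit = valuation / pe_ratio
    fine = valuation * fine_fraction
    return fine / annual_profit  # simplifies to fine_fraction * pe_ratio

# The fraction scales linearly with P/E: the higher the multiple the
# market pays for earnings, the more a valuation-based fine stings.
for pe in (20, 100):
    print(pe, round(fine_vs_profit(1.0, pe), 2))
```

So at a P/E of 20 a 1%-of-valuation fine costs a fifth of a year's profit, and at a P/E of 100 it wipes out the whole year.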
Second: Part of the Anthropic case was that many of the books they trained on were ones they'd purchased and destructively scanned, not just pirated. The courts found this use was fine, and Anthropic had already done this before being ordered to: https://storage.courtlistener.com/recap/gov.uscourts.cand.43...
Every single point you made is contradicted by the observed behavior of the AI labs. If any of those factors were going to stop them from training on data they legally can't, they would have done so already.
> I am curious where your confidence that this is true, is coming from?
My confidence comes from working in big startups and big companies with legal teams. There's no way the entire company is going to gather all of the engineers and everyone around, have them code up a secret system to consume customer data into a secret part of the training set, and then have everyone involved keep quiet about it forever.
The whistleblowing and leaking would happen immediately. We've already seen LLM teams leak and have people try to whistleblow over things that aren't even real, like the Google engineer who thought they had invented AGI a few years ago (lol). OpenAI had a public meltdown when the employees disagreed with Sam Altman's management style.
So my question to you is: What makes you think they would do this? How do you think they'd coordinate the teams to keep it all a secret and only hire people who would take this secret to their grave?
"There's no way the entire company is going to gather all of the engineers and everyone around, have them code up a secret system "
No, that is why I wrote
"Who would really know, if the pipelines are set up in a way, that only very few people are aware of this?" (Typo fixed)
There is no need for everyone to know. I don't know their processes, but I can think of ways to only include very few people who need to know.
The rest just work on everything else. Some work with data, where they don't need to know where it came from, some with UI, some with scaling up, some... they all don't need to know that DB XYZ comes from a dark source.
> I am curious where your confidence that this is true, is coming from?
We have a legally binding contract with Anthropic. Checked and vetted by our lawyers, who are annoying because they actually READ the contracts and won't let us use services with suspicious clauses in them - unless we can make amendments.
If they're found to be in breach of said contract (which is what every paid user of Claude signs), Anthropic is going to be the target of SO FUCKING MANY lawsuits even the infinite money hack of AI won't save them.
> Besides lots of GPU's, training data seems the most valuable asset AI companies have. Sounds like strong incentive to me to secretly use it anyway. Who would really know, if the pipelines are set up in a way, if only very few people are aware of this?
Could be, but it's a huge risk the moment any lawsuit happens and the "discovery" process starts. Or whistleblowers.
They may well take that risk, they're clearly risk-takers. But it is a risk.
Eh they’re all using copyrighted training data from torrent sites anyway. If the government was gonna hold them accountable for this it would have happened already.
> 24-bit helps in production pipelines for mixing, but for end user playback it's pointless.
If you have two versions of something, where one is better than the other and the resource cost is more or less the same, it makes more sense to provide the better one.
Maybe the end user takes an interest in mixing/production, in which case they already have the higher-quality version to work with, without the faff of having to obtain it separately. The casual listener won't know the difference, and the new apprentice has a copy they can work with.
That's not a loss, that's a benefit even if pointless to the end user.
> Maybe the end user takes an interest in mixing/production, in which case they already have the higher-quality version to work with, without the faff of having to obtain it separately.
16-bit is enough for mixing. 24-bit (or 32-bit floats, even better) are useful _within_ the mixing pipeline, so you don't need to care if one of the steps results in clipping as long as the final result is within the bounds.
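The headroom point can be shown with a toy example (a hedged sketch with made-up sample values; real pipelines use DSP libraries, not scalar Python):

```python
# Two 16-bit samples near full scale; the valid int16 range is -32768..32767.
a = 30000
b = 20000

# Naive integer mixing: the intermediate sum overflows int16, so it has
# to be clipped immediately and the overshoot is lost for good.
clipped = max(-32768, min(32767, a + b))

# Float pipeline: keep the intermediate unbounded, apply make-up gain
# later in the mix bus, and only clamp once at the very end.
mix = float(a) + float(b)   # 50000.0, out of int16 range but not clipped
mix *= 0.5                  # gain stage brings it back within bounds
final = int(max(-32768, min(32767, mix)))

print(clipped, final)  # the float path preserves the signal, the int path doesn't
```

The integer path pins the sample at 32767 and bakes in distortion; the float path recovers a clean 25000, which is exactly why 32-bit floats are useful *within* the pipeline even though 16-bit is enough at the endpoints.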
Your example assumes there would be sufficient liquidity on that bet. The existing platforms aren’t houses or market makers that just provide functionally infinite liquidity on any bets. The “win” criteria on this example is so specific that verification becomes its own problem.
In theory a fun example, but practically it doesn’t play out the way you’re describing.
Not everyone reading these discussions is going to be expecting humor, and will take any commentary affirming their prior indoctrination at face value.
That sounds like a YP not an MP though. Everyone jokes about the sarcasm font being hard to use, but the printed word has been around for a long time, much longer than the internet, yet the sarcasm font complaint has only been an internet thing.
Courts and law enforcement certainly provide these things, but they are not required. The inherent design of blockchains makes them trustworthy (an oversimplified statement), which is even better.
Blockchains don’t, and can’t, solve for the risk of the off chain component of an exchange.
The transactions aren’t atomic, so someone is taking on counterparty risk. One of government's prime responsibilities is dealing with that risk, no matter the currency in question.
The prediction algorithms are so good that indirect behaviors and data can be informative.
You might also be profiled by Google and bucketed into a group of similar people who do leak their data. They also went to this website, and their YT recommendations became a signal informing your own.
Not claiming any certainty here just possible ideas.