Hacker News | past | comments | ask | show | jobs | submit | Insanity's comments

Happy I never had to use Teams so far. Only heard bad things about it lol.

Then you know all there is to say about it.

I'm writing this rather than using Teams.

Messaging is everything!

Yup, and the ceiling could be at 11% or at 50%. But my bet is closer to the lower end than the upper. Models are no longer revolutionary, they are evolutionary, and the per-version difference is narrowing with each release.

> Models are no longer revolutionary, they are evolutionary, and the per-version difference is narrowing with each release.

We've definitely culled some low hanging fruit, but I think there's still a lot of room for improvements that could lead to step changes in capabilities. I think we're only scratching the surface of looped language models, thinking in latent space, and multimodality.

And even if the per-model differences are narrowing, even single-digit improvements in performance metrics could yield outsized effects in applicability and productivity. Consider services that guarantee one 9 of reliability vs. five 9s. In absolute terms that change is a trivial difference, but the increased reliability allows use in way, way more domains.


It's been some years since I've seen The Office, but I thought David was the only somewhat reasonable person. Don't see how he would match up with the sociopath either, but my memory might be failing me.

That's an interesting idea. But IMO the real 'token saver' isn't in the language keywords, it's in the naming of things like variables, classes, etc.

There are languages that are already pretty sparse with keywords. E.g. in Go you can write 'func Add(a, b int) int', no need to declare that it's public, or static, etc. So combining a less verbose language with 'code-golfing' the variable names might be enough.
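To make the keyword sparseness concrete, here is a minimal sketch (function names are invented for illustration): visibility in Go comes from capitalization alone, with no access-modifier keywords at all.

```go
package main

import "fmt"

// Add is exported ("public") purely because its name starts with a
// capital letter; Go has no public/static/final keywords.
func Add(a, b int) int {
	return a + b
}

// helper is package-private, again just by its lowercase name.
func helper(x int) int {
	return x * 2
}

func main() {
	fmt.Println(Add(1, 2), helper(3)) // prints "3 6"
}
```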


I'm not an expert in LLMs, but I don't think character length matters. Text is deterministically tokenized into byte sequences before being fed as context to the LLM, so in theory `mySuperLongVariableName` uses the same number of tokens as `a`. Happy to be corrected here.

Running it through https://platform.openai.com/tokenizer: "mySuperLongVariableName" takes 5 tokens, "a" takes 1. "mediumvarname" is 3, though, and "though" is 1.

You're more likely to save tokens in the architecture than the language. A clean, extensible architecture will communicate intent more clearly, require fewer searches through the codebase, and take up less of the context window.

Go is one of the most verbose mainstream programming languages, so that's a pretty terrible example.

Maybe not a perfect example but it’s more lightweight than Java at least haha

If by lightweight you mean verbosity, then absolutely not.

In Go every third line is a noisy `if err != nil` check.
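The pattern being complained about looks roughly like this hypothetical config reader (file path and function name invented for illustration): three operations, three near-identical error checks.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readPort reads a port number from a file. Note that every
// fallible step is followed by the same three-line err check.
func readPort(path string) (int, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	port, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return 0, err
	}
	if port < 1 || port > 65535 {
		return 0, fmt.Errorf("port %d out of range", port)
	}
	return port, nil
}

func main() {
	if _, err := readPort("/no/such/file"); err != nil {
		fmt.Println("error:", err)
	}
}
```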


Well LLMs are made to be extremely verbose so it's a good match!

I think there's a huge range here - ChatGPT to me seems extra verbose on the web version, but when running with Codex it seems extra terse.

Claude seems more consistently _concise_ to me, both in web and cli versions. But who knows, after 12 months of stuff it could be me who is hallucinating...


To you maybe, but Go is running a large amount of internet infrastructure today.

How does that relate to Go being a verbose language?

It's not verbose to some of us. It is explicit in what it does, meaning I don't have to wonder if there's syntactic sugar hiding intent. Drastically more minimal than equivalent code in other languages.

Verbosity is an objective metric.

Code readability is another, correlated one, but it's more subjective. To me Go scores pretty low here: code flow would be readable were it not for the huge amount of noise you get from error "handling" (it is mostly just syntactic ceremony, often failing to properly handle the error case, and people are so desensitized to these blocks that code reviews are more likely to miss them).

For function signatures, they made them terser (in my subjective opinion) at the expense of readability. There were two very mainstream schools of thought on type signature syntax, `type ident` and `ident : type`. Go opted for a third one that is unfamiliar to both camps, while not even having the benefits of the second syntax (subjective, but that `:` helps the eye "pattern match" these expressions).
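The three schools side by side, with the non-Go forms shown as comments (the `scale` function here is an invented example):

```go
package main

import "fmt"

// Go's third school: identifier first, then type, with no colon.
// C-style "type ident":  double* scale(double factor, double* xs)
// ML/Rust "ident: type": fn scale(factor: f64, xs: Vec<f64>) -> Vec<f64>
func scale(factor float64, xs []float64) []float64 {
	out := make([]float64, len(xs))
	for i, x := range xs {
		out[i] = factor * x
	}
	return out
}

func main() {
	fmt.Println(scale(2, []float64{1, 2, 3})) // prints "[2 4 6]"
}
```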


Every time I hear complaints about error handling, I wonder if people have next to no try/catch blocks, or if they just do magic to hide that detail away in other languages. I still have to do roughly the same error handling in other languages. Am I missing something?

Exceptions travel up the stack on their own. Most error cases can't be handled immediately at the call site (otherwise they'd be handled there instead of returned) but higher up (e.g. a web server deciding which error code to return), so exceptions save you a lot of boilerplate: you only have the throw at the source and the catch at the handler.

Meanwhile Go will have some boilerplate at every single level.

Errors as values can be made ergonomic: there is the FP-heavy monadic solution with `do`-notation, or just syntactic sugar like Rust's `?`. Go has none of these.
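The per-level cost is easy to see in a sketch: a hypothetical three-level Go call chain where every level except the top must manually check and re-return the error that a throw/catch pair (or Rust's `?`) would propagate implicitly. All names here are invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("user not found")

// Bottom level: the error originates here.
func loadUser(id int) (string, error) {
	if id != 1 {
		return "", errNotFound
	}
	return "alice", nil
}

// Middle level: can't handle the error, only wrap and re-return it.
func renderProfile(id int) (string, error) {
	name, err := loadUser(id)
	if err != nil {
		return "", fmt.Errorf("render profile: %w", err)
	}
	return "<h1>" + name + "</h1>", nil
}

// Top level: the only place the error is actually handled.
func handleRequest(id int) string {
	page, err := renderProfile(id)
	if err != nil {
		return "500: " + err.Error()
	}
	return page
}

func main() {
	fmt.Println(handleRequest(1)) // prints "<h1>alice</h1>"
	fmt.Println(handleRequest(2)) // prints "500: render profile: user not found"
}
```

With exceptions (or `?`), `renderProfile` would contain no error-handling code at all; here it carries a third of its lines just to pass the error along.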


Lots of non-go code out there on the Internet if you ever decide you want to take a look.

You’re not missing anything. I’ve worked with many developers who are clueless about error handling, who treat it as a mostly optional side quest. It’s not surprising that folks see the explicit error handling in Go as a grotesque interruption of the happy path.

That’s a pretty defensive take.

You don’t have to hate Go to agree that Rust’s `?` operator is much nicer when all you want to do is propagate the error.


It's only going to get worse with the brain drain resulting from the layoffs, which will increase the use of AI-assisted coding and the number of outages related to it.

Imagine having to debug code that caused an outage when 80% of it was written by an LLM and you have to start actually figuring out the codebase at 2am.. :)


but thats what it was like when i started at amazon in 2016?

i think the team i was on was a bit of an outlier in terms of owning 40 dumpster fires at once, and the first time reading any one of them was at 2AM because it was down.

having an LLM give early passes on reading the godawful c++ code (the kind you can tell at a glance isn't gonna work as expected, but you can't tell why, or what "expected" actually is) would have been phenomenal, and gotten me back to sleep at 3 on those codebases rather than 5.


That's what it was like when you started out, but did you eventually learn that code? Imagine constantly getting put back at square one on understanding a legacy code base you just inherited, forever. That's what it'd be like with constant LLM-induced churn on code repositories.

Whenever I see claims about AGI being reachable through large language models, it reminds me of the miasma theory of disease. Many respectable medical professionals were convinced this was true, and they viewed the entire world through this lens. They interpreted data in ways that aligned with a miasmatic view.

Of course now we know this was delusional and it seems almost funny in retrospect. I feel the same way when I hear that 'just scale language models' suddenly created something that's true AGI, indistinguishable from human intelligence.


> Whenever I see claims about AGI being reachable through large language models, it reminds me of the miasma theory of disease.

Whenever I see people think the model architecture matters much, I think they have a magical view of AI. Progress comes from high quality data, the models are good as they are now. Of course you can still improve the models, but you get much more upside from data, or even better - from interactive environments. The path to AGI is not based on pure thinking, it's based on scaling interaction.

To stay within the same miasma-theory analogy: if you think architecture is the key, look at how humans dealt with pandemics. The Black Death in the 14th century killed half of Europe, and no one could think of the germ theory of disease. Think about it: it was as desperate a situation as it gets, and no one had the simple spark to improve hygiene.

The fact is we are also not smart from the brain alone; we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model. For example, 1B users do more for an AI company than a better model; they act like human-in-the-loop curators of LLM output.


If I'm understanding you, it seems like you're struck by hindsight bias. No one knew the miasma theory was wrong... it could have been right! Only with hindsight can we say it was wrong. Seems like we're in the same situation with LLMs and AGI.

The miasma theory of disease was "not even wrong" in the sense that it was formulated before we even had the modern scientific method to define the criteria for a theory in the first place. And it was sort of accidentally correct in that some non-infectious diseases are caused by airborne toxins.

Plenty of scientific authorities believed in it through the 19th century, and they didn't blindly believe it: it had good arguments for it, and intelligent people weighed the pros and cons of it and often ended up on the side of miasma over contagionism. William Farr was no idiot, and he had sophisticated statistical arguments for it. And, as evidence that it was a scientific theory, it was abandoned by its proponents once contagionism had more evidence on its side.

It's only with hindsight that we think contagionism is obviously correct.


> It's only with hindsight that we think contagionism is obviously correct.

We, the mere median citizens on any specific topic outside our expertise, certainly not. And this also has an impact, as a social pressure, on which theory is going to be given more credit.

That's not actually specific to science. Even theological arguments can be dumb as hell or super refined by the smartest people able to thrive in their society of the time.

Correctness of a theory, and how well it matches collected data, is only part of what makes for mass adoption, and not necessarily the most heavily weighted part. It's interdependence with feedback loops everywhere, so even the data collected, the tools used to collect and analyze it, and the metatheoretical frameworks used to evaluate different models are nothing like absolute objective givens.


> Only with hindsight can we say it was wrong

It really depends what you mean by 'we'. Laymen? Maybe. But people said it was wrong at the time with perfectly good reasoning. It might not have been accessible to the average person, but that's hardly to say that only hindsight could reveal the correct answer.


It's unintuitive to me that architecture doesn't matter - deep learning models, for all their impressive capabilities, are still deficient compared to human learners as far as generalisation, online learning, representational simplicity and data efficiency are concerned.

Just because RNNs and Transformers both work with enormous datasets doesn't mean that architecture/algorithm is irrelevant, it just suggests that they share underlying primitives. But those primitives may not be the right ones for 'AGI'.


> Of course you can still improve the models, but you get much more upside from data, or even better - from interactive environments.

On the contrary, I believe the hunt for better data is an attempt to climb the local hill and get stuck there without reaching the global maximum. Interactive environments are good, they can help, but they are just one possible way to learn about causality. Are they the best way? I don't think so; they are the easiest way: just throw money at the problem and eventually you'll get something you'll claim to be the goal you chased all this time. And yes, it will have something in it you will be able to call "causal inference" in your marketing.

But current models are notoriously difficult to teach. They eat an enormous amount of training data; a human needs much less. They eat an enormous amount of energy to train; a human needs much less. It means the very approach is deficient. It should be possible to do the same with a tiny fraction of the data and money.

> The fact is we are also not smart from the brain alone, we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model.

Well, I learned English almost all the way to B2 by reading books. I was too lazy to use a dictionary most of the time, so it was not interactive: I didn't even interact with a dictionary, I was just reading books. How many books did I read to get to B2? ~10 or so. Well, I also read a lot of English on the Internet and watched some movies, so let's multiply those 10 books by 10. Strictly speaking it was not B2: I was almost completely unable to produce English, and my pronunciation was not just bad, it was worse than bad. Even now I sometimes stumble on words I cannot pronounce; I know the word and have mentally constructed a sentence with it, but I cannot say it, because I don't know how. So to pass B2 I spent some time practicing speech, listening and writing, and learning some stupid topic like "travel" to have the vocabulary to talk about it at length.

How many books does an LLM need to consume to get to B2 in a language unknown to it? How many audio recordings does it need? A lifetime wouldn't be enough for me to read and/or listen to that much.

If there were a human who needed to consume as much information as an LLM to learn, they would be the stupidest person in the history of humanity.


>With only instructional materials (a 500-page reference grammar, a dictionary, and ≈400 extra parallel sentences) all provided in context, Gemini 1.5 Pro and Gemini 1.5 Flash are capable of learning to translate from English to Kalamang— a Papuan language with fewer than 200 speakers and therefore almost no online presence—with quality similar to a person who learned from the same materials

https://arxiv.org/abs/2403.05530


I'm not entirely sure that I'm totally convinced, but yeah, it is better than me. I mean, I could do the same, but it would take me ages to go through 500 pages and use them for the actual translation.

I'm not sure, because Gemini knows a lot of languages. The third language is easier to learn than the second; I suppose the 100th language is easier still? But Gemini still did better than I believed it would.


Are you asking how many books a large language model would need to read to learn a new language if it was only trained on a different language? Probably just one (the dictionary).

Do you know anything about how languages work? A dictionary doesn't have sufficient information to speak a language.

Actually, I do know how latent space works. If you meant achieving excellence in syntax and grammar, then, much like for us, more examples are better.

LLMs need vastly more examples than humans do, many orders of magnitude more.

If model arch doesn't matter much how come transformers changed everything?

Luck. RNNs, Mamba, S4, etc. can do it just as well for a given budget of compute and data. The larger the model, the less architecture makes a difference. It will learn in any of the 10,000 variations that have been tried, and come within about 10-15% of the best. What you need is a data loop, or a data source of exceptional quality and size; data has more leverage. Architecture gains are mostly about efficiency: one method can be 10x more efficient than another.

That's not how I read the transformer stuff around the time it was coming out: they had concrete hypotheses that made sense, not just random attempts at striking it lucky. In other words, they called their shots in advance.

I'm not aware that we have notably different data sources before or after transformers, so what confounding event are you suggesting transformers 'lucked' into being contemporaneous with?

Also, why are we seeing diminishing returns if only the data matters? Are we running out of data?


The premise is wrong, we are not seeing diminishing returns. By basically any metric that has a ratio scale, AI progress is accelerating, not slowing down.

For example?

For example:

The METR time-horizon benchmark shows steady exponential growth. Frontier-lab revenue has been growing exponentially from basically the moment they had any revenue. (The latter has confounding factors; for example, it doesn't just depend on the quality of the model but on the quality of the apps and products using the model. But model quality is still the main component: the products seem to pop into existence the moment the necessary model capabilities exist.)


Note we're in a sub-thread about whether 'only data matters, not architecture', so I don't disagree that functionality and revenue are growing _in general_, but that's not what we're talking about here.

The point is that core model architectures don't just keep scaling without modification. MoE, inference-time compute, RAG, etc. are all modifications that aren't 'just use more data to get better results'.


The miasma theory of disease, though wrong, made lots of predictions that proved useful and productive. Swamps smell bad, so drain them; malaria decreases. Excrement in the street smells bad, so build sewage systems; cholera decreases. Florence Nightingale implemented sanitary improvements in hospitals inspired by miasma theory that improved outcomes.

It was empirical and, though ultimately wrong, useful. Apply as you will to theories of learning.


Is AGI about replicating human intelligence, though? Human intelligence comes with its own defects, and whatever we fantasize about an intelligence free of this or that defect is itself imagined from a perspective full of defects.

Collectively, at the global level, it seems we are just unable to avoid escalating conflicts into things as horrible as genocides. Individuals with a remarkable ability to achieve technical feats can at the same time fall short of the most basic expectations in terms of empathy, which can also be considered a form of intelligence.


RIP.

His presentation on his billion-dollar mistake is something I still regularly share, as a fervent believer that using null is an anti-pattern in _most_ cases. https://www.infoq.com/presentations/Null-References-The-Bill...

That said, his contributions greatly outweigh this 'mistake'.
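One common alternative to nullable returns, sketched here in Go's comma-ok style (the `lookup` wrapper and names are invented for illustration): the "absent" case becomes an explicit second return value rather than a null hiding in the success type.

```go
package main

import "fmt"

// Instead of returning nil/null for a missing key (the HashMap.get
// style), map access returns an explicit second "ok" value, so the
// caller must consciously decide what to do when the key is absent.
func lookup(m map[string]int, key string) (int, bool) {
	v, ok := m[key]
	return v, ok
}

func main() {
	ages := map[string]int{"ada": 36}
	if age, ok := lookup(ages, "ada"); ok {
		fmt.Println(age) // prints "36"
	}
	if _, ok := lookup(ages, "bob"); !ok {
		fmt.Println("missing") // prints "missing"
	}
}
```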


Anti patterns are great, they act as escape hatches or pressure release valves. Every piece of mechanical equipment has some analogue for good reason.

Without things like null pointers, goto, globals, and unsafe modes in modern safe(r) languages, you can paint yourself into a corner by over-designing everything, often leading to complex, unmaintainable code.

With judicious use of these anti-patterns you get mostly good/clean design with one or two well documented exceptions.


The "goto" in languages like C or C++ is de-fanged and not at all similar to the sequence-break jump in "Go To Statement Considered Harmful". That doesn't make it a good idea, but in practice today the only place you'll see the unstructured feature complained about is machine code / assembly language.

You just don't need it; it isn't there as some sort of "escape hatch", it's more out of stubbornness. Languages which don't have it are fine, and arguably easier to understand by embracing structure more. I happen to like Rust's `break 'label value`, but there are plenty of ways to solve even the trickier parts of this problem (and of course most languages aren't expression-based and wouldn't need a value there).


That relies on a programmer doing the right thing and knowing when to use the escape valve. From the codebases I've seen, I don't trust humans to do the right thing and be judicious with this. But it's a good point: knowing when to deviate from a pattern is a strong plus.

That's why code reviews exist, it's good process to make code reviews mandatory.

It's too much of a stretch to call null an escape hatch, or to pretend that code reviews will somehow strip it out.

The OpenJDK HashMap returns null from get(), put() and remove(), among others. Is this just because it hasn't been reviewed enough yet?


> pretend that code reviews will somehow strip it out.

Code reviews 'somehow' strip out poorly thought out new uses of escape hatches.

For your example, it would be a use of get, put or remove without checking the result.


> I don't trust humans in doing the right thing and being judicious with this.

Language-level safety only protects against trivial mistakes like dereferencing a null pointer. No language can protect against logical errors. If you have untrusted people committing unvetted code, you have much worse problems.


You misunderstand the “billion dollar mistake”. The mistake is not the use of nulls per se; the mistake is type systems that do not make them explicit.

Looking at the stuff China puts in its food (like the actual instant noodles), it's definitely a red flag if even they ban it.

Generally I look to the EU for what's good/bad to consume, though. It's scary how much stuff is banned there that's in everyday US products.


I generally think that as well, and so was surprised to read that they're planning not to label CRISPR fruits as such.

"European Union’s Parliament and Council, the bloc’s governing body, reached a provisional deal in December to “simplify” the process for marketing plants bred through new genomic techniques, such as by scrapping the need to label them any differently from conventional ones."

https://hackernews.hn/item?id=47271338


> Farmed animals could live happy, healthy lives and then be culled in a humane way.

Breeding animals _specifically to kill them_, no matter how they are killed, is not what I'd consider humane. If we take 'humane' literally, it means to be treated as you would treat a human. I doubt we'd do this to humans. So the only way to be okay with this is adhering to a form of speciesism.


Improving demographics so that your country has soldiers for future wars seems in the same ballpark.

I agree, but it's also massively, enormously better, and would be a first step many people would be on board with toward treating them humanely.

> speciesism

Wait, do we not agree that people are inherently more valuable than pigs?


Should we?

Leaving aside pigs being intelligent creatures, almost akin to humans for medical purposes, pigs fetch a lot more per kilo than humans do.

I'm not sure there's even much of an open market for human meat.

Unless you mean some nebulous human-centric notion of "valuable" that takes no account of the pig's PoV.


A high-quality diamond is worth more than its weight in gold. That does not make the gold worthless.

If so, by how much? 1.5 billion pigs are slaughtered each year, is that too few to matter?

It's so normalized to think of animals as worthless that I don't blame anyone for not having thought about it, but the moral calculus is really easy. Most people wouldn't be comfortable with killing a dog, and yet, every three days, we have two more Holocaust-worths of a smarter animal (namely, pigs).


I happen to not eat meat, but I still think that your bar is too high.

If those animals were not raised by humans, it could still be said that in their natural conditions they grow up only to be killed at some point, since few animals die "of natural causes"; instead they are killed either by predators or by parasites.

For example, my grandparents raised chickens. There is no doubt that those chickens had a more pleasant life than any wild chicken. During the day, they roamed through a vast space with abundant vegetation, insects and worms, where they fed exactly like any wild chicken would. They also received, from time to time, some grain supplement that was greatly appreciated. And they were not permanently harassed and stressed by predators, as they would have been in their natural environment.

From time to time, a chicken was caught and killed out of sight. This shortened the lives of the chickens that were not kept for egg laying, but in comparison with wild chickens, most of these domestic chickens would still have lived longer on average.

So I think that as long as domestic animals do not live worse than their wild ancestors, this can be called a "humane way", even when they are raised with the purpose of eventually being killed. At least being raised by humans has ensured the survival of the domesticated species, while all species of not-very-small non-domesticated terrestrial animals have been reduced to negligible numbers of individuals in comparison with humans and domestic animals, and for most of them extinction is very likely.

Unfortunately, today it is rare to see animals raised in such conditions as at my grandparents'. Most are raised in industrial farms in conditions that cannot be called anything but continuous torture, and I despise the people who torture animals to increase their revenues.


Many, many people disagree with you.

It is not at all a logical, ethical, or emotional contradiction to say that we should have humanely-raised livestock whose purpose is to be slaughtered for meat. We can ensure that they are kept in healthy, safe, and humane ways during their lifetimes, and are killed as quickly, cleanly, and painlessly as we can reasonably manage.

And, um, the meaning of "humane" is only loosely related to "treat like a human." It means "treat well, view with compassion", and similar. We talk of things being "humane" or "inhumane" in how we treat each other, too. [0]

Humans have been raising meat animals with compassion and treating them well during their lifetimes for longer than we have had written language. Veganism/vegetarianism is not even physically an option for many people, excludes many cultures' traditional practices, and very, very often requires (or at least tends toward) supplementing with foods that are at least as unethical as factory farming practices.

[0] https://en.wiktionary.org/wiki/humane


As unethical as purposefully murdering animals?

I don’t think you can call it compassion when you own an animal with the sole purpose of efficiently growing it and then killing it so its body can be dismembered and sold off. This is treating animals as property that produces profit.


not any more unethical than any other animal that "murders" to eat

Do other animals have moral agency?

Why wouldn't they? Animals definitely have a moral sense. Monkeys, for example, react violently to social injustice against them. Animals also take compassionate actions that bring them no benefit or even incur a cost. A straightforwardly moral act.

Which is more unethical:

Raising animals compassionately, slaughtering them for meat as painlessly as possible after a healthy, happy life

OR

Letting people who require meat in their diet in order to live (there are a number of reasons this may be the case) die slow, painful deaths as their bodies fail around them?

It's real easy to say that "no one should ever kill an animal to live" when you ignore the disabilities and chronic conditions that make surviving on plants alone impossible, or prohibitively expensive.


> supplementing with foods that are at least as unethical as factory farming practices

That's the first time I hear about supplements being unethical, let alone "at least as unethical as factory farming". Stretching the usual arguments, maybe it's almonds' water use or soy in Brazil? I'd be glad if you clarified your point.


The harvesting of a number of the common foods used to supplement vegetarian and vegan diets (e.g. soy, agave, quinoa) is variously destructive to the environment and based on labor practices exploitative enough that they sometimes verge on slavery.

Oh, I understand the confusion: soy, quinoa and agave are not supplements but food. I guess we might agree on "alternatives", but the word choice isn’t your point.

Soy is a strange pick, as it’s mostly cultivated to feed livestock (77%), and using it for humans instead would require substantially fewer crops.

https://www.deforestationimportee.ecologie.gouv.fr/en/affect...

Agave… is mostly water and glucose, without many minerals or much protein. How do vegetarians require or tend toward that food more than others?

Quinoa’s region of origin doesn’t have the same working standards as the US/EU, but I’m not aware of a difference from banana, coffee, avocado, cacao, vanilla, coconut, palm oil (or soy)… However, quinoa also grows in other regions: here in France you can find local quinoa at the same price as the one from Bolivia, around 8€/kg (organic). It’s super healthy but not very popular, though.

Beans and lentils are more popular, I think (my own non-scientific estimation), but yeah, soy is great and tasty.


> I doubt we'd do this to humans. So the only way to be okay with this is adhering to a form of specieism.

The obvious historical pointer is the Holocaust, as a well-covered example of people being treated like that. And a fact often swept under the rug: people are being treated essentially like that in North Korea, today.

An older interview but still good to watch

https://youtu.be/ZGJm4bjRaaE?is=E2hFWYi-ynPnWGfm


Ok, I should maybe have said “I doubt we’d think it’s Okay to do this to humans”.

Yeah, but it seems we do think it's okay; after all, North Korea has been doing that for decades and people don't care whatsoever. NK contractors continue to be employed all over the world.

Instead people are outraged about the dictators in the middle east etc.

Whatever you wanna say about the Gaza atrocities - it applies 100x to North Korea. And yet, nobody cares.

I think that's the real takeaway from these issues: nobody cares unless there is a politically motivated party that wants to achieve its goals and thus shapes the public narrative enough to get its will through.


I can't invade North Korea. I can avoid eating pigs.

That may be true in your region. I myself live in Germany. Here, farms get subsidies per animal, hence you're literally paying for and enabling it whether you actually purchase the meat or not. But I get where you're coming from.

Many farm animals aren't bred specifically for killing them. Think egg-laying hens and ducks, milk-producing cows and goats, etc.

Not too different from humans in that respect; humans are bred systematically (we have dedicated hormonal supplements, birth facilities, documented birthing procedures, standardized post-birth checklists, vaccination regimens, standardized mass schooling, government-subsidized feeding programs, etc.) and most are used mechanistically by society exclusively for productive output, regardless of whether the society is corporatist, capitalist, socialist, communist, etc.


I think you misinterpreted GP's emphasis.

But still, egg and dairy animals are culled when productivity drops. The human equivalent would be killing all male babies, and females after age ~40.

This does not seem more "humane" than the human equivalent of meat farming, where all human offspring would be harvested at age ~15.


Yup exactly. And when animals are bred specifically for milk, they aren't treated well even before they are killed. Dairy cows need to be kept continuously pregnant / in lactation state through artificial insemination. They don't magically produce milk all-year round.

And pregnancy is _hard_ on animals (including humans), it changes your physiology and psychology. Even if we take for granted that a cow isn't as conscious as a human (IMO consciousness is a sliding scale, not a binary), then they are still being primed for giving birth and taking care of offspring which never comes. Imagine doing that to a human - it's a definite form of cruelty.


To be fair, you can keep a cow milking for a LONG time after it gives birth, but yields are a bit lower, and baby cows are very valuable in themselves, which is the same kind of incentive that keeps most bad farming practices going. Dairy farming is already a bottom-of-the-barrel, low-margin business, and running a dairy farm while not maximizing calf output is like running a low-yield gold mine while dumping gem-quality emeralds out with the tailings. Yeah, it can be done, but you would be foolish not to do both, especially if it means the competitors across the road can squash your business with far higher capital returns after a few years.

What do you think happens to a cow when she stops producing milk? She is kept constantly pregnant; what do you think happens to most of her male offspring? In the end, almost every single cow's life ends in violence.

I mean businesses, the government, and capital economies in general treat me like garbage to be disposed of when it's convenient, so while ideally I would like to agree with you, in practice I don't see that much difference. I can't even count how many times my life has been put on the line because some asshole wanted to save $10 or 2 minutes.

Also many domesticated animals would be completely extinct if we didn't eat or farm them, which isn't a great prize, but it isn't nothing either.

Personally I don't believe that we can really afford to forgo animal agriculture until energy production is so plentiful as to be approaching free for individuals. Sure, we use farmland to grow animal feed rather than food for humans, but the animal feed we grow is often much easier to grow, and in many cases reduces artificial fertilizer usage, which is a huge consumer of fossil fuels. Take cows, for example: they are fed 90% grass or alfalfa as they grow. Yes, in their last month or two of life they are given grain to fatten them up (which is optional, but customers prefer it), but the alfalfa they ate for 90% of their growth required no fertilizer or pesticides, and even fixes nitrogen in the soil, while harvesting consists of mowing it down and packing it into bales, which can be stored for years almost anywhere, including outside or in random pits or 100-year-old barns. Dent corn does require fertilizer (especially if you don't rotate with alfalfa or other legumes), but dent corn can also be stored for years with no more processing than is done within 10 seconds of harvest by the combine itself, and can be fed to animals directly, while sweet corn is far more delicate, less fertilizer efficient, and doesn't store for long without refrigeration, canning, or freezing. Also, animals like pigs are often fed otherwise spoiled or inedible food.

Meat/animals are also a great emergency store of food for disasters. A volcano can reduce global crop yields by 30%, and while crop subsidization will help, since we might have an extra 10% growing to start with, that could still end in shortages, because people keep fighting to keep crop subsidies low without understanding why they exist (to prevent famine, which they have successfully done for over 100 years in every country that implements them). But you can buy yourself an extra few years of food from animal stores by slaughtering the herds and drawing down old animal feed stockpiles, either to wait for overall crop yields to recover or to buy time to increase farming efforts.

In a more ideal world I would definitely go along with reducing or even eliminating most animal agriculture, but in the world we currently live in that seems like a liability to me. I simply don't trust the people controlling most of our capital and wielding most of the power of human civilization to manage it well enough now or anytime in the near future. Not to mention the educational effort required for most cultures to switch to a purely vegetarian diet without all sorts of side effects from poor nutritional balance.

