I mean, Elinor Ostrom won a Nobel Prize for saying that there are loads of great alternatives besides choosing between property rights and the tragedy of the commons.
Ostrom's work is parochial. Her real-life examples center on small populations adjacent to large natural resources, with access negotiated via long-standing community mores. That simply doesn't apply to the high-density urban environments where most of humanity now lives, and where community ties have less sway. In these situations, where most interactions are transactional, a robust and fair regime of property rights is probably the ideal solution. With property rights, the transaction costs are borne by the property owner up front, while in the community-based model they are encoded in tradition and personal relationships.
I'm currently doing a literature review for a research project closely related to this exact topic. And they say reading HN is a waste of time! ;)
But seriously. This is a great example of what a lot of other research also points to: the growing vulnerability of the US food system to systemic, existential risks driven by geographic specialization, market consolidation, and the decreasing network resilience of local food systems. This is important stuff. Glad to see food systems pop up here.
Absolutely. But that's not the box to think in to see the problem. Sure, we haven't had that happen in recent memory. Soooo... if there were to be a problem, then it wouldn't look like what we think "food problems" should look like. What we see doesn't determine the problem space; the problem space suggests ways we may need to change our viewing angle.
By the time you're responding to a catastrophe, system-level changes have already been happening for quite some time before that kind of observable "phase transition" from normal function to disaster even occurs.
We've gotten increasingly better at producing a particular type of highly optimized food and distributing it through just-as-optimized Just in Time/LEAN systems. Which is great and allows you to carry less "inventory waste" - we've managed to get down to keeping only about 75 days' worth of food reserves globally - HOWEVER, the trade-off to that type of efficiency is that it introduces new sources of risk. It (by definition) removes a ton of redundancy from the system and severely decreases the resilience of the total system to any sort of shock - let alone multiple coincident ones.
THAT then brings in our outdated ways of conceptualizing, preparing for, and mitigating risks across functional and geographic areas. If you're dealing with an issue as complicated as the global food system, it's not productive to just have individual risk calculations with insurance/consumer pricing baked in for individual scenarios. For example, in any particular area there may be an x% chance of a 1000 year flood. Say there are 5000 of those communities, and the total system would have been able to withstand some number of 1000 year floods distributed between them in a normal year. What happens when you factor in that our climate "context" is different than when a lot of that infrastructure was built? What's the ability of the system to withstand the normal number of thousand year floods when climate effects may both cause more events and make each event more severe/destructive, and there also happens to be a "bad year" (a 5-10% reduction in mostly monoculture cereal crop output, or a case of African swine fever that kills 300 million pigs in China)? Now factor in the systemic changes and consolidation leading to less redundancy in the food system, and you've got a hard risk management problem on your hands.
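To make the flood arithmetic concrete, here's a rough back-of-the-envelope sketch in plain Python. The numbers are purely illustrative (not real flood data): treat each of 5000 communities as independently facing a 1-in-1000 yearly flood, then see what doubling that per-community probability (a crude stand-in for climate effects) does to the chance of a genuinely bad year.

    # Illustrative only: hypothetical flood probabilities, not real data.
    from math import comb

    def prob_at_least(k, n, p):
        # P(X >= k) for X ~ Binomial(n, p), via the complement of the lower tail
        return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

    n = 5000                       # communities
    for p in (0.001, 0.002):       # baseline vs. climate-shifted flood odds per year
        expected = n * p           # average number of "1000-year" floods per year
        bad_year = prob_at_least(10, n, p)  # chance of 10+ floods landing in one year
        print(f"p={p}: expected floods/yr = {expected:.0f}, P(10+ floods) = {bad_year:.2f}")

The point: doubling the per-event odds doesn't just double the expected count. The probability of an exceptional pile-up of events in a single year (here, 10 or more) jumps from roughly 3% to over 50%, which is exactly the kind of correlated tail risk that per-scenario insurance pricing tends to miss.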
Because my undergrad is in physics and I'm still in science, E&M opportunities are evident to me, but I know I would struggle to really go after them myself. So I get help, but I sure wish I could do it on my own.
Why not either? For both the Christian and the atheist, honest scholarship is inseparable from their beliefs. It's hard to imagine either changing their beliefs based on the evidence they could find.
Being able to prove with certainty either that Jesus existed and is who Christians claim him to be, or that he did not exist or was not divine, would have tremendous consequences for the lives of hundreds of millions of people who do not subscribe to the proven belief.
They could say "I am intellectually honest and will change my beliefs depending on the evidence," but we both know that 1. concrete evidence strong enough to convince someone to change their beliefs, beyond what is already known, likely does not exist, and 2. their current beliefs already match the evidence they know of, so any additional information is unlikely to radically change their beliefs; it will only be accounted for and integrated into their current belief system.
I'm not trying to say all scholarship is biased or we shouldn't believe anything. In all likelihood, most people are not deliberately lying or falsifying anything.
What does that mean about which scholarship to trust? I have no idea.
Christians (ignoring some defunct or very rare forms) have it as an article of faith that Jesus was a real historical human who was also divine.
Atheists would deny that Jesus or anyone else was God, but there is nothing about atheism that is opposed to duly considering possibly trustworthy evidence that the Jesus whom Christians believe on faith to be God may indeed have existed as a real historical human.
At the same time, unlike (for example) Muslims who believe Jesus was a human prophet but not himself divine, atheists don't have any specific belief about Jesus's human historical reality at all. If the evidence shows that he was made up by combining bits of multiple people's teachings, atheism is unaffected.
Therefore, an atheist's writing about this question is, *all things being equal*, more likely to neutrally consider all the available secular, observable facts about the question, without influence from religious motivations, than a Christian's.
However, that italicized precondition is usually false.
Real writers vary in their approach to scientific and historical rigor, being imperfect humans themselves and not abstractions. So an individual Christian could well be a more trustworthy source on this than an individual atheist, or vice versa.
amatthew: by definition, being Christian means one believes Jesus existed. How does one objectively consider the non-existence of Jesus if one has already arrived at a conclusion? That's some serious cognitive dissonance.
1. Existing employee, so he was valuable to them, and had proven he could work shorter hours since he'd been there part-time initially when he was in university.
2. Willing to quit. In fact, what actually happened is he said "I quit", they said "but we don't want you to quit" and he said "how about 4 day workweek".
This is not a very reproducible method, but I've interviewed another person (for my book - https://codewithoutrules.com/3dayweekend/) who was somewhat early in his career. Again, he did it at his current job, with willingness to leave if necessary.