
Which one? As with all things, there is a lot of variation. There will always be good places and bad. The question is whether they are better on average.


But the employees don't own a controlling share. For it to be a worker cooperative it really needs to be entirely owned and governed by the people working there.


The tradeoff is that requiring all shares to be worker-owned devalues the shares the workers hold, due to massively reduced liquidity. That doesn't necessarily benefit the workers, since they have an interest in maximizing the value of their shares.


How does reduced liquidity equal lower share value?

Makes no sense to me.

Case in point: SpaceX shares are very illiquid, but they are also up enormously since SpaceX was founded.


Suppose I own 0.01% of the company I work at. If that company is publicly traded, I can sell those shares to anyone.

If that company is a co-op in which only employees are allowed to own shares, the only people I can sell my shares to are other employees.

In general, a larger pool of willing buyers (who in turn know they can resell those shares unrestricted to any future buyer) means more people willing to bid on those shares at any given moment. (It's the same basic reason you'd rather have $100 in cash than a $100 Starbucks gift card.)


Are you even allowed to buy/sell shares in an employee owned co-op? I understood that share ownership was part of employment with the company. It entitles you to a share in the profits and a share in the decision making, but I didn't think it was a tradable instrument itself.


This is the answer to all the above. These shares give you a right to part of the profit, but cannot be sold.


What does it mean to own something that you cannot sell/assign?

That sounds like plain old profit sharing, not ownership.


Ownership in the sense of governance, not of property.

Worker cooperatives are organizations that are governed by their workers for their workers, not owned as property to be traded or sold.


> How does reduced liquidity equal lower share value?

1: Lower liquidity usually goes hand in hand with higher transaction costs, which means a bigger gap between how much the buyer pays and how much the seller walks away with.

2: Time value of money: Suppose some liquid asset can be exchanged for $X right now, while an otherwise equivalent illiquid asset can only be exchanged for $X, let's say, a month from now. $X today is more valuable than $X in a month, so no one is going to buy the illiquid asset today for $X if they could get the liquid one instead.
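A rough back-of-the-envelope sketch of that second point (all numbers are made up for illustration, and the 6% discount rate is an assumption):

    # Price a share that can't be sold for a month vs. one that can be sold today.
    X = 100.00              # hypothetical price the liquid share fetches today
    annual_rate = 0.06      # hypothetical discount rate (opportunity cost of cash)
    months_until_sellable = 1

    # A buyer today should pay roughly the present value of $X received later.
    present_value = X / (1 + annual_rate / 12) ** months_until_sellable
    print(f"fair price today for the illiquid share: ~${present_value:.2f}")
    print(f"illiquidity discount: ~${X - present_value:.2f}")

In practice the gap is usually much bigger than pure discounting suggests, because of the transaction costs in point 1 and the uncertainty about ever finding a buyer at all.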


How do you handle companies like AMZN, where Jeff Bezos still plays a role? Yes, he seems to be busy doing other things, but he does seem to have a role. And the founders at all of these companies are still doing things. Even Bill Gates has some kind of role, right?


Because labor is calling the shots and reaping the benefits, not the capital investors.


The hard part about starting worker-owned co-ops is financing. We need good financing systems for them: people/firms willing to make loans at a reasonable interest rate, but on the scale of equity investment in tech startups.


The problem is risk: most new businesses will go under. Who's going to take on that unreasonable risk without commensurate reward (a high loan interest rate, if they lend at all, or equity)?

Co-ops could go the angel/VC route for funding if they don’t give up a controlling share.


I think part of the issue is disagreement about what "harmony and compassion" actually looks like.

One of the sibling comments immediately jumped to the assumption you're referring to veganism and the use of animals in the food system. I'm not going to assume that's what you're referring to, but I will use it as a case study.

The problem with veganism's approach to this is that it's a limited extension of empathy. If you've been following the advances in plant behavior research, it's pretty mind-blowing. There is a growing body of evidence that plants have cognition (using a different system than animals' nervous systems), memory, environmental awareness, and the ability to learn and communicate. In other words, it seems increasingly likely that plants are _also_ conscious.

In fact, there are hints of evidence that microbes are aware and making decisions. It may well be that environmental awareness and consciousness are just the defaults of life.

So then, for heterotrophs like us, what do harmony and compassion look like? How do we feed ourselves at scale without causing harm or exploiting nature?


I do not believe that veganism can be criticized for a lack of empathy toward plants.

Even if we value plants and animals identically, plants are exploited in a much more benign way than animals.

In the past, there were many domestic animals which could be said to have lived quite a happy life for their kind, until the moment when they were slaughtered.

Nowadays, the vast majority of domestic animals live in conditions that can hardly be called anything other than torture.

On the other hand, cultivated plants do not really live any worse than they would in the wild. Most cultivated plants are either annuals, which are killed only a short time before they would have died anyway, or perennials from which we take only the fruits, structures the plants developed specifically to be taken by animals as payment for help with reproduction.

So even without giving any preference to cultivated plants over domestic animals, the more ethical choice is to continue exploiting, in the current way, only the former.

I believe that in the future not even cultivating plants will be the most efficient way for producing food and other organic substances.

The most efficient way will be to use solar energy gathered by photovoltaic cells to capture carbon dioxide and dinitrogen and incorporate them in some simple organic molecule or molecules, perhaps glycine or a mixture of urea or ammonia with a simple carbohydrate or a short-chain fatty acid.

Whatever will be synthesized using solar energy, at a better efficiency than currently achieved by plants, will be used to feed some genetically engineered fungi (or parasitic plants, i.e. non-phototrophic plants), which will produce any kind of desired food or other useful complex organic substances. (A first step in this direction is shown by the recent news about strains of the Trichoderma fungus that have been genetically engineered to produce either whey protein or egg white protein, but in the future it should be possible to make for instance fungi able to grow fruiting bodies that are bananas or turkey thighs).


> How do we feed ourselves at scale without causing harm or exploiting nature?

Half jokingly, maybe we don’t, and human society develops morality that is not compatible with the continued existence of humans.


This looks... interesting and also weirdly suspicious. It's "made by Backbone". Backbone appears to be an enterprise security startup (?), but it's unclear, because the website tells you almost nothing about the company's history, finances, or who makes up the company.

The committers appear to be "Backbone Authors". The organization's membership is not visible.

With something like this, trust is vital. I need to be able to trust the code now and into the future. For trust, transparency is key. And this project has zero transparency.

For all I know, this could be a state actor trying to lay the foundation for future backdoors.


I'm one of the authors. We built Minibone as a community contribution because we realized how unnecessarily vulnerability-prone E2EE app development is today, after seeing app after app make the same mistakes.

Minibone is an initial attempt to address this challenge in the single-user setting (which allows a concise and easily auditable implementation).

This is all part of our broader work that you can read about here: https://backbone.dev/company


Jumping on this, I've also noticed you don't seem to have an obvious "Terms and Conditions" or "Privacy Policy" on your site.

It's also not immediately obvious to me where the company is registered or info about the people behind the company.

For a security focused company these are all things I would expect to be rock-solid and as transparent as possible for the initial due-diligence when evaluating services like this.


Do you have a name? Which commits did you author?

After the xz debacle, knowing who wrote which commits became important.


> The committers appear to be "Backbone Authors". The organizations membership is not visible.

This is slightly insane. How can they release something under the Apache License if they aren't even giving out the names of the developers? Exactly WHO is licensing this source code?

There are many open source crypto libraries, and it's probably not the wisest choice to use one authored by anonymous developers.


Why would somebody have to list their name to Apache license something?


> zero transparency

> could be a state actor trying to lay the foundation for future backdoors

idk if the presence of "names" is a good signal to indicate otherwise either

https://www.wired.com/story/jia-tan-xz-backdoor/


It's the contrary. It's only because we can identify Jia Tan's contributions that we can throw out just his contributions and revert to (say) xz 3.2.

If xz contributors were anonymous, we would need to throw out the whole thing
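To make that concrete, here is a rough sketch (not the actual xz cleanup procedure; the author pattern is just the example from this thread) of how named authorship lets a maintainer surgically revert one contributor's commits instead of discarding the whole history:

    # Illustrative only: enumerate one author's commits and revert just those.
    import subprocess

    AUTHOR = "Jia Tan"  # example from this thread; any --author pattern git accepts

    # List the suspect commits, newest first (git log's default order).
    log = subprocess.run(
        ["git", "log", f"--author={AUTHOR}", "--format=%H"],
        capture_output=True, text=True, check=True,
    )
    suspect_commits = log.stdout.split()

    # Revert them without committing yet, so the combined change can be
    # reviewed (and any conflicts resolved) before it lands.
    for commit in suspect_commits:
        subprocess.run(["git", "revert", "--no-commit", commit], check=True)

If the authorship were anonymous, there would be no reliable author field to filter on in the first place.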


If the Minibone repo turns out to be malicious I don’t think it makes much of a difference whether they are committing as one anonymous user, or as 12 fake people.


Tracking the fake people still gives you some information (for example, the more sock puppets there are, the harder it is to convincingly simulate discussions in issues, PRs, etc.).


No, it's not. This was against contemporary morality in the society in which it was done as well.


Hi! Author here.

Preprint review as it is being discussed here is post-publication. The preprint is shared first, and review is layered on top of it later. Click through to the paper[1] I'm responding to and give it a read.

But, also, prepublication review doesn't need to be "reintroduced". It's still the standard for the vast majority of scholarship. By some estimates, around 5 million scholarly papers are published per year. There are only about 10-20 million preprints published in total over the past 30 years since arXiv's introduction.

There are a bunch of layered institutions and structures that are maintaining it as the standard. I don't have the data to hand, but my understanding is that the vast majority of preprints go on to be published in a journal with pre-publication review. And as far as most of the institutions are concerned, papers aren't considered valid until they have been published in a journal with prepublication review.

There is a significant movement pushing for the adoption of preprint review as an alternative to journal publishing with the hope that it can begin a change to this situation.

The idea is that preprint review offers a similar level of quality control to journal review (which, most reformers would agree, is not much) and could theoretically replace it in those institutional structures. That would at least invert the current process, with papers being shared immediately and review coming later, after the results were shared openly.

[1] https://journals.plos.org/plosbiology/article?id=10.1371/jou...


Thanks for your charitable response to a reader who quite obviously misunderstood what you were writing about. You might want to update your article to make that clear, as "preprint review" isn't a term I had ever encountered before, and a simple reading of your article doesn't obviously indicate that you were talking about post-publication review. (Note that outside of academia, upload to arXiv is publication, so yes your "preprint review" would be a post-publication review.)


No worries. I was really writing for an audience of academics and, in particular, people involved in the various open science and publication reform movements. Sharing here was a bit of an afterthought, so it wasn't written for an audience unfamiliar with the intricacies of academic publishing. It's a complicated enough space as it is that this is already a 4-5 article series with the potential to grow, even when I'm writing with the assumption of quite a bit of knowledge.

But yeah, "preprint review" is considered post-publication review both in and outside of academia. There are nuances to what is consider "publication" in academia. A preprint is not a "Version of Record", meaning it doesn't count towards tenure and promotion. The movement pushing for preprint review is attempting to layer review on top of already public preprints in the hopes that reviewed preprints can begin to count as VORs. It's unclear whether that will work.

Some models, like eLife's, seem more promising than others. But eLife got a ton of backlash when they switched to their new reviewed preprint model, so it remains to be seen whether it will work in the long run.


Quality control was handled well enough by editors for literally all output on the planet before the 1970s.

There's nothing physically stopping that paradigm from returning now that we have the internet, other than the fact that there are only a few thousand (?) such editors.

If somehow there were fifty thousand such editors, then the whole peer review system would be completely unnecessary.

Of course, not enough people want to pay for that many editors, but that doesn't rule out partial adoption by those willing to do so, under some arrangement.


There are nearly 50,000 commercial journals and a long tail of non-commercial journals each with teams of editors. There are probably hundreds of thousands of people currently serving as editors.

The issue isn't editorial bandwidth, it's that peer review is currently built into the promotion and tenure structure for academics, who produce the vast majority of scholarship and thus dictate the shape of the scholarly publishing system.

Academics have to publish their papers in peer-reviewed journals for them to count towards tenure and promotion. And in fact, they are limited to a small set of journals that are deemed high quality enough for their fields. These journals are chosen by tenure and promotion committees composed of their senior peers and school administration. There are over 1000 R1 and R2 universities worldwide, each with hundreds of departments, each with its own tenure and promotion committee. So changing the system is a massive collective action problem.


I'm including only actually competent, full-time editors with sufficiently high reputations that their decisions will be taken seriously. There are definitely not 50,000 of those.

A huge number of journals by numerical count, along with their ‘editors’, are literally laughed at in many fields.

As you’ve mentioned, trying to expand the actually reputable number by 10x, 20x, etc… is a huge problem.

Hence it has to be paid for, quite highly paid for, otherwise the coordination problem is probably impossibly difficult.


Nice catch! I was going from the data shared in that paper[1] and didn't notice that it excluded OpenReview.net (which I'm aware of). The paper got their data[2, 3] from Sciety and it looks like OpenReview isn't included in Sciety's data.

It may have been excluded because OpenReview (as I understand it) seems to be primarily used to provide open review of conference proceedings, which I suspect the article puts in a different category than generally shared preprints.

But it would be worth analyzing OpenReview's uptake separately and thinking about what it's doing differently!

[1] https://journals.plos.org/plosbiology/article?id=10.1371/jou...

[2] https://zenodo.org/records/10070536

[3] https://lookerstudio.google.com/u/0/reporting/b09cf3e8-88c7-...


I do agree it's a bit different. How close it is maybe depends on what motivates you to be interested in the preprint review model in the first place? I could imagine this varies by person.

In a certain sense, the entire field of comp sci has become reorganized around preprint review. The 100% normal workflow now is that you first upload your paper to arXiv, circulate it informally, then whenever you want a formal review, submit to whatever conference or journal you want. The conferences and journals have basically become stamp-of-approval providers rather than really "publishers". If they accept it, you edit the arXiv entry to upload a v2 camera-ready PDF and put the venue's acceptance stamp-of-approval in the comments field.

A few reasons this might not fit the vision of preprint review, all with different solutions:

1. The reviews might not be public.

2. If accepted, it sometimes costs $$ (e.g. NeurIPS has an $800 registration fee, and some OA journals charge APCs).

3. Many of the prestigious review providers mix together two different types of review: review for technical quality and errors, versus review for perceived importance and impact. Some also have quite low acceptance rates (due to either prestige reasons or literal capacity constraints).

TMLR [1] might be the closest to addressing all three points, and has some similarity to eLife, except that unlike eLife it doesn't charge authors. It's essentially an overlay journal on openreview.net preprints (covers #1), is platinum OA (covers #2), and explicitly excludes "subjective significance" as a review criterion (covers #3).

[1] https://jmlr.org/tmlr/


They require wildfires to reproduce: https://en.m.wikipedia.org/wiki/Sequoiadendron_giganteum

