> Hex math is weird. Since there are 6 directions instead of 4, there's no simple mapping between hex positions and 2D x,y coordinates.
There is: a hexagonal grid is isomorphic to a [skewed] rectangular grid, i.e. it can also be indexed with a coordinate pair (u, v). The neighbours are at offsets (+1, 0), (-1, 0), (0, +1), and (0, -1) - just as in a rectangular grid - and the two additional neighbours are at (+1, -1) and (-1, +1). The coordinates in the plane are (x, y) = (√3(u + v/2), 3v/2) or some variation of this, depending on how exactly one lays out the hexagons and picks axes.
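A minimal sketch of this in Python, assuming a pointy-top layout and a unit hexagon size (both are choices, not the only option):

    import math

    # The six neighbours of cell (u, v): the four rectangular-grid offsets
    # plus the two extra diagonal ones.
    NEIGHBOURS = [(+1, 0), (-1, 0), (0, +1), (0, -1), (+1, -1), (-1, +1)]

    def hex_to_plane(u, v, size=1.0):
        # (x, y) = (sqrt(3) * (u + v/2), 3v/2), scaled by the hexagon size;
        # other layouts and axis choices give variations of this.
        x = size * math.sqrt(3) * (u + v / 2)
        y = size * 1.5 * v
        return x, y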
It is surprisingly hard to find a good illustration for this, or maybe I am using the wrong search terms, but here [1] is the best one I could quickly find.
That is one of the first results I found and it is probably a great resource if you want to dive into the topic, but it also lacks an image of the rectangular - or rhombic - grid on top of the hexagonal tiling, which I think is visually much clearer than those interactive maps highlighting only the axes of the cell under the cursor. But I understand why they made that choice: other coordinate systems do not admit the same type of visualization, while the interactive maps are universal in that sense.
Why are people even having problems with sharing their changes to begin with? Just publishing them somewhere does not seem too expensive. The risk of accidentally including stuff that is not supposed to become public? Or are people regularly completely changing codebases and do not want to make the effort freely available, maybe especially to competitors? I would have assumed that the common case is adding a missing feature here, tweaking something there; if you turn the entire thing on its head, why not build your own alternative solution from scratch?
Let us ignore the specific case of Jeff Bezos and Amazon and look at a generic founder starting a company that turns into a billion or trillion dollar company, making the founder a billionaire.
The founder founded the company, but the billions were earned by the thousands of employees working for the company. The founder alone would not have earned a single dollar without the employees, and there would not be a company to be employed by without the founder.
If you start a business, you create a company to isolate the business risk from your personal risk; if the business does not work out, the company goes down and the founder should be fine. You will probably risk some of your personal money as a founder in many cases, but how much of a reward do you want for that? If you risk a million and make a billion, is that not more than enough? Did you really start a business where you expected to fail with a probability of more than ninety-nine point nine percent?
On the other hand, even if the founder did not get an oversized portion of the profit, because that money got distributed across many employees or many sold products, the effect would be relatively small: it would neither make all employees earn millions nor make the product significantly cheaper. Business owners making billions is just being in a position where you can take a little money from very many others, and that adds up.
Also, founders getting rich is capitalism not working as intended. The point of an economy is to provide goods and services that people want as efficiently as possible. A business making a lot of profit means that things are not as cheap as they could be, and competition is supposed to correct that. Making a profit is a means to an end, an incentive for the creation of businesses to satisfy demands; it is not the end itself.
I guess the percentage of crashes due to hardware is high because people with faulty hardware are experiencing the vast majority of crashes.
It is not that simple; it depends not only on the hardware but also on the code. It is like a race: what happens first - do you hit a bug in the code or does your hardware glitch? If the code is bug-free, then all crashes will be due to hardware issues, whether faulty hardware or stray particles from the sun. When the code is one giant bug and crashes immediately every time, then you will need really faulty hardware or have to place a uranium rod on top of your RAM and point a heat gun at your CPU to crash before you hit the first bug, i.e. almost all crashes will be due to bugs.
So what you observe will depend on the prevalence of faulty hardware and how long it takes to hit a hardware issue vs how buggy the code is and how long it takes to hit a bug.
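A minimal sketch of that race, assuming time-to-glitch and time-to-bug are exponentially distributed and with entirely made-up rates:

    import random

    def hardware_share(rate_hw, rate_bug, runs=100_000):
        # Whichever event fires first causes the crash; count how often
        # the hardware glitch wins the race against the bug.
        hw_first = sum(
            random.expovariate(rate_hw) < random.expovariate(rate_bug)
            for _ in range(runs)
        )
        return hw_first / runs

    print(hardware_share(rate_hw=1.0, rate_bug=0.01))   # nearly bug-free code: ~0.99
    print(hardware_share(rate_hw=1.0, rate_bug=100.0))  # one giant bug: ~0.01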
Yes but also absolutely not. The evolution of the wavefunction when nobody is looking is unitary, which among other things means it is time-reversible. That math works extremely well and predicts the correct outcome.
When we measure a quantum system, the probability distribution of the measurement outcome is described by the Born rule, the amplitude of the wavefunction squared, and the collapse postulate tells us that after the measurement the wave function will be in the measured state, which is a non-unitary and non-time-reversible process. That math works extremely well and predicts the correct outcome.
But - really big but - what is a measurement device but a huge quantum system, and what is a measurement but a quantum system, a measurement device, and an environment undergoing time evolution? So both descriptions should apply, unitary time evolution and wave function collapse, but that cannot be the case because they are incompatible: one is unitary, the other is not. The mathematical description is inconsistent.
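To make the two descriptions concrete, here is a minimal single-qubit sketch (numpy assumed) contrasting a unitary step with a Born-rule measurement:

    import numpy as np

    # Unitary evolution: norm-preserving and reversible.
    psi = np.array([1, 0], dtype=complex)          # start in |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

    psi = H @ psi
    print(np.linalg.norm(psi))    # 1.0: the norm is preserved
    print(H.conj().T @ psi)       # applying U^dagger recovers |0>: reversible

    # Born rule + collapse: probabilities are |amplitude|^2 and the state
    # jumps to the measured basis state - non-unitary, not reversible.
    probs = np.abs(psi) ** 2
    outcome = np.random.choice(2, p=probs)
    psi = np.eye(2, dtype=complex)[outcome]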
Do I get this right? Wave function collapse due to measurements is not real, the wave function evolves unitarily all the time. But as quantum states get amplified into the macroscopic world, superposition states are somehow amplified asymmetrically, which makes it look like wavefunction collapse.
The wave function is still symmetric, but it takes on a bimodal distribution with very little overlap. For any given event, it will be affected only by the half of the distribution that it's in. The other half has basically zero effect. The further time evolves, the smaller that effect becomes -- as in, the odds of an experiment demonstrating it quickly go towards 1 in 10^googol^googol.
You can round that down to exactly zero and call it "collapse". Or you can keep thinking about the entirety of the wave function, and call it a "multiverse". That rounding is technically invalid, but it simplifies the conceptualization (and the math) to a massive, massive degree without affecting the outcome in any pragmatically measurable way.
(One more caveat: "symmetry" implies we're talking about a wave function with a 50-50 superposition. That's not a requirement, but it simplifies an already complex explanation.)
> But isn’t it conceivable, because the original quantum state contains probabilities of different outcomes, that one imprint might correspond to “up” and another to “down,” [...] [Zurek’s theory] predicts that all the imprints must be identical.
Does this not imply that there is an asymmetry: one half of the state gets imprinted, the other half neglected? This however also raises the question about the basis: what is a superposition and what is not depends on the choice of basis. Is there a special basis, just as pointer states are somehow special?
Indeed, as you say, decoherence explains why certain bases are special: when a system is in a pointer basis state, it does not continue entangling with the environment (or, at least, does so minimally). When a spinning particle enters a Stern-Gerlach apparatus oriented in the z-direction, spin-z is the pointer basis of the system during its time in the apparatus. A spin-up or spin-down particle does not entangle with the environment, but a spin +x state would quickly entangle with the environment, placing the environment in a superposition and "branching" the total state vector of all the stuff in the universe.
Quantum Darwinism is just a refinement of this picture in which the "environment" interacting with the system is itself modeled as a series of fragments (i.e. all the different photons that bounce off the object). It turns out that the information about which pointer basis state the system is in (spin up or spin down) is redundantly encoded in each of these fragments. Hence, intercepting one photon that interacted with the system reveals "spin-up" (because the particle is in the upper path), in agreement with all the other photons that also bounced off the object.
BUT, of course, due to the linearity of unitary time evolution, there is another "branch" in which spin-down was the outcome of the measurement and everyone agrees on spin-down. This is exactly the Everett picture.
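A minimal sketch of the redundant-records idea, under the assumption that each environment fragment makes an imperfect copy of the pointer state with overlap c = <e1|e0> between its two record states (the numbers are made up): the system's coherence between spin-up and spin-down then decays as c^n with the number of fragments.

    import numpy as np

    a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)  # spin +x written in the z (pointer) basis
    c = 0.9                                # overlap <e1|e0> of one fragment's records

    # Off-diagonal element of the system's reduced density matrix after
    # n environment fragments have each recorded the pointer state:
    for n in (0, 1, 10, 100):
        coherence = abs(a * np.conj(b)) * c ** n
        print(n, coherence)   # drops from 0.5 towards 0: effective "collapse"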
I pay $19 per month to some company X, and company X distributes this money to all participating websites I visit during that month, in return I get ad-free access to all the content. And this is implemented in a way that no website learns who I am and company X does not learn which websites I visited.
Or you could cut out the middleman and use a micropayment system like GNU Taler to pay the websites directly.
That way you don't have to hope and pray that the middleman doesn't decide to track you, censor you, and charge increasing fees, which current middlemen like Patreon do.
GNU Taler also has a middleman, the exchange. And it plays more or less exactly the same role as my company X is supposed to play: exchange money for tokens that are handed to websites when you visit them and which they can then redeem. And I would want to avoid true micro transactions, i.e. paying amount x for looking at one article, because then the amount you spend will depend on how many pages you visited in a given month, and that might make you think about each click and in turn hinder adoption. I want to pool the money of all users, divide it by the total number of paid links visited in the last month, and then pay that much per click to each participating website.
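A minimal sketch of that pooling scheme, with made-up numbers and a hypothetical monthly_payouts helper:

    from collections import Counter

    def monthly_payouts(fee, n_users, visits):
        # visits: one entry per paid link visited this month, across all users.
        pool = fee * n_users
        per_click = pool / len(visits)
        return {site: count * per_click for site, count in Counter(visits).items()}

    # 1000 subscribers at $19 each, three recorded visits this month:
    print(monthly_payouts(19, 1000, ["siteA", "siteA", "siteB"]))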
I think free will exists just because we can imagine a math object into being that is neither caused nor random.
Can you? I can only imagine world_state(t + ε) = f(world_state(t), true_random_number_source). And even in that case we do not know if such a thing as true_random_number_source exists. The future state is either a deterministic function of the current state or it is independent of it, which we can think of as being a deterministic function of the world state and some random numbers from a true random number source. Or a mixture of the two: some things are deterministic, some things are random.
But neither being deterministic nor being random qualifies as free will for me. I get the point of compatibilists, we can define free will as doing what I want, even if that is just a deterministic function of my brain state and the environment, and sure, that kind of free will we have. But that is not the kind of free will that many people imagine: being able to make different decisions in the exact same situation, i.e. make a decision, then rewind the entire universe a bit, and make the decision again. With a different outcome this time, but also not a random outcome. I cannot even tell what that would mean. If the choice is not random and also does not depend on the prior state, on what does it depend?
The closest thing I can imagine is your brain deterministically picking two possible meals from the menu based on your preferences and the environment or circumstances, and then flipping a coin to make the final decision. The outcome is deterministically constrained by your preferences but ultimately a random choice within those constraints. But is that what you think of as free will? The set of options depends on you - which options you even consider - but the final choice among those acceptable options does not depend on you in any way, so you have no control over it.
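As a minimal sketch of that model (choose_meal and its inputs are hypothetical):

    import random

    def choose_meal(menu, acceptable_to_me):
        # Deterministic part: preferences and circumstances narrow the menu.
        options = [meal for meal in menu if acceptable_to_me(meal)]
        # Random part: the final pick within those constraints is a coin flip.
        return random.choice(options)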
> But neither being deterministic nor being random qualifies as free will for me
Not sure what you mean here, but non-random + non-caused is the very definition of free will. It is closely bound up with the problem of consciousness, because we need to define the "you" that has free will. It is certainly not your individual brain cells nor your organs.
But irrespective of what you define "you" to be, free will is the "you"'s ability to choose, influenced by prior state but not wholly, and also not random.
> Not sure what you mean here, but non-random + non-caused is the very definition of free will.
Now describe something that is non-random and not-caused. I argue there is no such thing, i.e. caused and random are exhaustive just as zero and non-zero are; there is nothing left that could be both non-(zero) and non-(non-zero). Or assume such a thing exists: how is it different from caused things and random things?
> [...] free will is the "you"'s ability to choose, influenced by prior state but not wholly, and also not random.
I am with you up to and including "influenced by prior state but not wholly", but what does "and also not random" mean? It means it depends on something, right? Something that forced the choice, otherwise it would be random and we do not want that. But just before we also said that it does not wholly depend on the prior state, so what gives?
I can only see one way out: it must depend on something that is not part of the prior state. But are we not considering everything in the universe part of the prior state? Does the "you" have some state that the choice can depend on but that is not considered part of the prior state of the universe? How would we justify that, leaving some piece of state out of the state of the universe?
> Now describe something that is non-random and not-caused. I argue there is no such thing, i.e. caused and random are exhaustive just as zero and non-zero are; there is nothing left that could be both non-(zero) and non-(non-zero).
That's my point. They fail to exist only in a certain axiomatic system that is familiar to us. But in a certain mathematical/platonic sense there is nothing essential about that axiomatic system.
Well, what does random mean? Unpredictable, right? Why is it unpredictable? Because the outcome is not determined by anything else. [1] So random just means not determined. And instead of caused I would say determined, because caused is a pretty problematic term, but for this discussion the two should be pretty much interchangeable. And this is probably the best place to attack my argument, to point out something wrong with that. Once you agree to this, it will be a real uphill battle.
So your non-random + not-caused just says non-(non-determined) and non-determined. Now you have to pick a fight with the law of excluded middle [2]. You are saying that there exists a thing that has some property but also does not have that property. Do you see the problem? Nothing makes sense anymore, having a property no longer means having a property, everything starts falling apart.
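For what it's worth, here is that contradiction written out as a one-line Lean check, under the stated assumption that random just means not determined (so, with P = "determined", not-caused becomes ¬P and non-random becomes ¬¬P):

    -- Assuming P = "the thing is determined": holding both ¬¬P and ¬P
    -- yields False, i.e. "non-random and not-caused" is contradictory.
    example (P : Prop) (h : ¬¬P ∧ ¬P) : False := h.1 h.2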
Maybe you can resolve that problem in a clever way, but you will have to do a lot more work than saying there is some axiomatic system where this is not an issue. Which one? Or at least a proof of existence? And even if you have one, does it apply to our universe?
[1] Things may also seem random because you do not have access to the necessary state, for example a coin flip is not truly random, you just do not have detailed enough information about the initial state to predict the outcome. Or you may not know the laws or have the computing power to use the laws and that bars you from seeing the deterministic truth behind something seemingly random. But all those cases are not true randomness, they are just ignorance making things look random.
Yep, the law of the excluded middle is one place to start attacking your argument, I assume you know not all philosophers accept it.
Then, you are also right that semantics intertwine with logic in a way that needs careful interrogation and is open to different perspectives. I'd be very careful making the leap you make from:
> non-random + not-caused
to:
> non-(non-determined) and non-determined.
Your arguments also contain an interesting thing to think about: true randomness. If you really think about it, true randomness should not exist. And yet we think radioactive decay at the quantum level is truly, fundamentally, irreducibly random. If that is so, here is an example of things happening that we, by definition, cannot explain in any more fundamental way.
Which is to say, the universe is not bound by the logic of our experience. In the same way we had to break out of our basic intuition about numbers to create new ones that gave us more power, in the same way we could never have logically reasoned our way into quantum mechanics and needed experimental evidence to accept something so radical, yes in the same way math does not care that our minds/logic are currently too weak to conceive of a mechanism for free will.
Here is a mind twister for you: imagine a chain of antecedents for an action. In our intuition, the chain stretches backwards infinitely. But what if it could somehow wrap around to form a ring at infinity? Analogous to the way cosmologists think the universe is not infinite in all dimensions.
I was wrong about the law of excluded middle, that is not an issue. Intuitionistic logic rejects it because it says P or not P is definitely true, whether or not we have any proof of P or of not P. But that is not really relevant here; the real question is whether there are things that are neither determined nor random. If random means not determined, then no such thing can exist, unless you accept a violation of the law of noncontradiction [1]. So are random things and determined things complementary sets with respect to some universe of things under consideration?
> AI assistance produces significant productivity gains across professional domains, particularly for novice workers.
> We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average.
Are the two sentences talking about non-overlapping domains? Is there an important distinction between productivity and efficiency gains? Does one focus on novice users and one on experienced ones? Admittedly I did not read the paper yet; it might be clearer than the abstract.
Not seeing the contradiction. The two sentences suggest a distinction between novice task completion and supervisory (i.e., mastery) work. "The role of workers often shifts from performing the task to supervising the task" is the second sentence in the report.
The research question is: "Although the use of AI tools may improve productivity for these engineers, would they also inhibit skill formation? More specifically, does an AI-assisted task completion workflow prevent engineers from gaining in-depth knowledge about the tools used to complete these tasks?" This hopefully makes the distinction more clear.
So you can say "this product helps novice workers complete tasks more efficiently, regardless of domain" while also saying "unfortunately, they remain stupid." The introductory lit review/context setting cites prior studies to establish "ok, coders complete tasks efficiently with this product." But then they say, "our study finds that they can't answer questions." They have to say "earlier studies find that there were productivity gains" in order to say "do these gains extend to other skills? Maybe not!"
The first sentence is a reference to prior research work that has found those productivity gains, not a summary of the experiment conducted in this paper.
In that case it should not be stated as a fact; it should be something like the following.
While prior research found significant productivity gains, we find that AI use is not delivering significant efficiency gains on average while also impairing conceptual understanding, code reading, and debugging abilities.
That doesn't really line up with my experience. I wanted to debug a CMake file recently, having done no such thing before - AI helped me walk through the potential issues, explaining what I got wrong.
I learned a lot more in a short amount of time than I would've stumbling around on my own.
Afaik it's been known for a long time that the most effective way of learning a new skill is to get private tutoring from an expert.
This highly depends on your current skill level and amount of motivation. AI is not a private tutor, as it will not actually verify that you have learned anything unless you prompt it to. Which means that you must not only know what exactly to search for (arguably already an advanced skill in CS) but also know how tutoring works.
[1] https://www.researchgate.net/figure/Rhomboidal-and-hexagonal...