> mathematics only exists in a living community of mathematicians that spreads understanding and breathes life into ideas both old and new. The real satisfaction from mathematics is in learning from others and sharing with others. All of us have clear understanding of a few things and murky concepts of many more. There is no way to run out of ideas in need of clarification.
Yes! And this applies to all human culture, not just math. Everything people have figured out needs to be in living form to be carried on. The more people the better. If math, or any product of human skill, is only recorded in papers or videos, that isn't the same as having millions of people understanding it in their own ways.
Modern culture often emphasizes innovation and fails to value mere maintenance, tradition, and upkeep. This can lead to people like the OP feeling that they have nothing to contribute, when actually, just learning math, being able to do it, being able to help others learn it - all of these are contributions.
We are all needed to keep civilization afloat, in ways we cannot anticipate. We all need to pursue some kind of excellence just to keep human culture alive.
This is why I think Brady Haran is one of the coolest people in mathematics today. Numberphile is educating a new generation of young mathematicians, for anyone with access to YouTube. Accessible math communication is so important. So many cool things are buried in textbooks and papers the average person would never read.
I believe I have learned roughly twice as much from YouTube in the past 10 years as I did in 6 years of middle and high school. And I don’t spend 8 hours per day on YouTube.
Plumbing, React, combinatorics, real analysis, Python, C++, CAD, micro- and macroeconomics, reinforcement learning, to name just a few of the things I learned through YouTube.
We don’t give enough credit to what we take for granted today.
I simply don't care to gatekeep what counts as education. YouTube has taught me things I can still recall a decade later and pushed me to explore different areas of math I wouldn't have explored otherwise.
"If you want to build a ship, don’t drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea" -- Saint-Exupéry
This is exactly what Numberphile does. Those who are hooked will find the resources on their own. They need a reason to look for them and Numberphile gives them one.
I agree with that, but inspiration is not education. If you watch everything Numberphile has produced, you'll come out of that with no new skills and trivial new knowledge.
If topics are presented engagingly these efforts will no doubt inspire the next generation.
Young people are curious, sometimes all they need is a spark and to be introduced to a new topic in an engaging way. These forms of content deliver that spark.
> Everything people have figured out needs to be in living form to be carried on.
It would appear that LLMs are invalidating this claim. Things can live in synthetic form and carry on just fine. Instead of cultivating a population of learned minds we are just feeding a few dozen egregores of models and training corpora.
They are not invalidating this claim, and cannot, unless we'd actually try it out for a few generations. Which we shouldn't and won't.
LLMs are quite good at simulating life and living intelligence (in the short term), but they aren't any of that. That's why we call it artificial intelligence. It's true that we can't put our finger on what exactly the difference is, but it's not like reality has ever felt encumbered by our limited understanding.
Thank you for highlighting that answer. It is one of my favourite pieces of writing about the culture of mathematics. I just want to add that that particular answer is now affectionately known as Thurston's Paean.
In theory, sure. In practice, our society is a) not set up to value things which don’t have an immediate financial ROI, b) is valuing them less as time progresses, not more, and c) is experiencing some very serious transitions that may destroy the financial viability of devoting a lot of your time and energy to some very important things.
Living culture is a concept that I think is quite unintuitive to modern minds. Examples of it are all around us... but it's usually blatantly missing from our "big picture" thinking.
For example, take a modern country with a modern economy. Flatten it. Destroy all the factories. Bankrupt all the companies. You can get back to a fully modern economy again quite quickly. WWII demonstrates it.
Taking an unindustrialized country through the development process... that's very tricky. It can't really be rushed.
For a long time, economic development was seen as mostly capital and technology. You need time to develop all the capital needed. Roads, factories, etc. But... development efforts underperformed. Then the idea of "human capital" got popular as a way of explaining the deficit. Education, mostly. Development efforts still underperformed.
I think the "living community" thing is the answer to this. It's ecology. You can't make a rainforest by just dumping all the necessary organisms into the right climate. It's the endlessly complicated relationships between all those organisms that make the rainforest.
This is one of the things that worries me about the pace of modern change. When writing and literacy resurged in classical antiquity... we totally lost all the ways of (for example) doing scholarship orally. Socrates (through Plato) wrote about some of the downsides to this.
...and we did completely lose oral scholarship. We have no idea how to do it. Once the living culture died... it stayed dead. All the knowledge contained within it went away.
> I think the "living community" thing is the answer to this. It's ecology.
I agree. A body of knowledge, mathematical knowledge being one of them, is a give-and-take between its producers and consumers; a market for ideas. It grows in that ecology of people with its pathfinders, specialists, generalists, historians, educators, etc. Committing to a body of knowledge is becoming part of its living culture.
Where I disagree: I believe some of the loss is inevitable. Keeping in mind the example of a body of knowledge above, as the scale of what has accumulated grows, the role each of us plays in the sustenance of its culture shrinks. This is a direct consequence of the modern developmental process (i.e., division of labour to the point where it feels like we are all modular parts of a much larger whole).
I can't say whether it's better to focus on recovering what's lost or to trust the process, as it were.
> the should-be-illegal process of putting debt on the acquired company's balance sheet.
I agree it's weird but ultimately the check against dumb lending is natural consequences for the lender, right? If you ask me for billions in loans for your zero revenue company and I give it to you, whose problem is that but my own?
The 2023 mini banking crisis has its own wiki page and it's quite informative. Of the three banks involved, one bank saw its shares drop 97%, another "shareholders lost all invested funds" and the third got auctioned off for pennies on the dollar. No investors were bailed out.
Banks go bankrupt all the time. Community Bank and Trust of West Georgia went bankrupt just 3 days ago. Metropolitan Capital Bank & Trust went bankrupt back in January. 99% of the time the investors are completely wiped out. Bailouts almost never happen, which is precisely why it's such big news when one does.
Eh, they were kind of bailed out in that uninsured deposits were made whole. Not saying they shouldn't have done it (fighting contagion is like fighting fire, earlier is prob better and potentially cheaper in the end if your confidence bluff succeeds) but "bail out" is a flexible enough term that electing to cover uninsured deposits at the expense of uninvolved parties feels like it qualifies to me. Plus it has some of the same smells as other bailouts - weighing moral hazard vs systemic risk.
“They” in your sentence is not SVB. Depositors who put money in an accredited bank should not have to worry about bank runs. Risk free banking for depositors is a cornerstone of the US economy.
There’s also no moral hazard here - the shareholders, equity partners, and debt holders were correctly wiped out.
The problem is that leveraged buyouts allow me to effectively inflict that debt on other companies, making a buyout offer the existing shareholders won't be able to resist and then reorienting its operations around servicing the debt I took out. In fact, lenders arguably favor this, letting me use the company I'm acquiring as collateral to acquire more debt at better terms than would otherwise be available.
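A toy sketch of the mechanics described above, with entirely made-up numbers (the figures and the 8% interest rate are illustrative assumptions, not data about any real deal):

```python
# Toy leveraged-buyout arithmetic (all numbers are hypothetical).
purchase_price = 100_000_000          # price paid for the target
sponsor_equity = 10_000_000           # the buyer's own money at risk
acquisition_debt = purchase_price - sponsor_equity  # borrowed, secured by the target itself

annual_interest_rate = 0.08
annual_debt_service = acquisition_debt * annual_interest_rate

target_operating_income = 12_000_000
income_after_debt_service = target_operating_income - annual_debt_service

print(acquisition_debt)             # 90,000,000 lands on the target's balance sheet
print(income_after_debt_service)    # 4,800,000 left after interest alone
```

The buyer risks 10% of the price but the acquired company's operations carry the other 90%, which is why its cash flows get reoriented around servicing the debt.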
The people who work at the bought-out company, who will then be fired as PE guts the workforce to pay off the debt. Laborers are getting the shaft.
It's the problem of all the employees (and potentially customers) of the company being plundered.
They have no say in the matter, and given that the lender can probably absorb the loss without, you know, missing mortgage payments or losing health insurance, I would absolutely argue it's not just their problem.
You can certainly hold the opinion that "it's just business" but it feels like an unnecessary part of business that very often has real disruptive and detrimental effects on average working people, for the sole benefit of rich people getting richer.
And yes I get that it's not just a PE problem, but PE is a big one of these kinds of problems.
This is a fundamental misunderstanding of the US employment model. Businesses can do all sorts of dumb things that end up making them unable to continue to invest in employees. The check against that is the greedy owners.
Regulations designed to ensure businesses never take risky bets lest they have to lay people off would be a nightmare of unintended consequences and surely in aggregate hurt employment.
I assume the person you answered is saying that that level of risk taking should be regulated, not that no level of risk should be allowed if they might have to fire people. Surely there is a point where you want some guardrails, so the C-suite has to at least take their employees into account as part of their risk assessment.
Is the idea that big companies take too many risks today? If so, I’d love to see data, because the usual knock on big companies is they become dinosaurs and risk-averse, and therefore stop innovating and eventually get displaced by upstarts.
You're bringing your own conclusions to this, I never said anything about regulation.
I was just responding to OP who said that PE plundering via debt loading is only the lender's problem when things don't work out, and I assert that it is not.
Employees often pay a much more impactful price when PE-driven cuts occur (whether by design or because the plan failed).
Wow, amazing. They had an AI robot running o1 look at live ER patients coming in just like a real doctor and they did that much better? Incredible! (literally)
> To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena. But the problem is that we cannot truly observe infinity, and so we cannot truly say what it is.
When the author says we cannot truly observe infinity, what does that mean? Infinity is a mathematical symbol we can observe. We can't observe infinitely many objects, but even if we could, it wouldn't be the same as observing infinity. You can't observe the number one by observing one stone.
I think there is some confusion in this article between symbols and what they can stand for, and I can't help but wonder if that same confusion is at the root of ideas like ultrafinitism.
> Infinity is a mathematical symbol we can observe.
This is like confusing the map for the territory.
Symbols live in syntax (like the syntax of programming languages), while mathematical concepts live in semantics. Infinity is not a symbol, it's not ∞. ∞ is the symbol we use to represent infinity.
There is a way to look at mathematics as just a bunch of rewrite rules for things on paper. It might not be particularly inspiring, but it's a valid way to look at things.
Indeed, there's a way to get a semantics for free, based on the syntax alone. For example, in first-order logic this is the Herbrand interpretation.
The problem to me seems to be that we are trying to map everyday language onto the mathematics. Even though we have a symbol for infinity, infinity is not necessarily a "thing" that the symbol points to.
In analysis, when we write "the limit as x goes to infinity" this translates into a logical statement like "for all x, there exists some y > x such that ..." I don't really see anything conceptually difficult or contradictory here.
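One standard way to make that translation explicit (for a real-valued function f and limit L) is:

```latex
\lim_{x \to \infty} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists M \in \mathbb{R} \;\; \forall x > M : \; |f(x) - L| < \varepsilon .
```

The quantifiers on the right range only over finite real numbers; the ∞ in the notation abbreviates the statement and never denotes an object.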
I think Douglas Adams had one of the best quotes regarding observing infinity:
"Infinity itself looks flat and uninteresting. Looking up into the night sky is looking into infinity – distance is incomprehensible and therefore meaningless."
Mathematical concepts don't have to have an obviously physical analogue. I mean, you'd find it difficult to observe minus two apples and certainly tricky to observe i apples.
To my mind, maths is like a "what if?" puzzle and whether or not infinity makes sense in the physical world, there's still fun to be had by considering the consequences of it.
That also means that it can be interesting to consider limited number systems which don't have any concept of infinity.
No. I believe that is more apples than there are atoms in the universe, so not only is it impossible to observe, it is a fundamental contradiction of our universe's reality. No one and nothing will ever be able to observe or interact with that many apples, and so a reference to that many apples is only an abstract mathematical convenience that has no direct bearing on reality.
Like infinity.
I'm not sure I actually believe that, I'm just thinking out loud. But it leads me to think the question "Does infinity exist?" should be answered with the question "An infinity of what?"
Some might say that 2 is as made up as infinity.
Let me elaborate a little: your brain, together with society, made an abstraction "apple", and only by not distinguishing between these "sets" of atoms can you have numbers.
Well, do you say it, or are you just playing devil's advocate? The post you are responding to seems very straightforward.
If you wanna go all philosophical, “real” might just be anything that is useful. In that way infinity is real because you can use it to do calculus. On the other hand, there are ways of doing calculus that do not involve thinking about infinity. But if you’re gonna count to three apples you pretty much have to go through “two” no matter what.
You cannot observe infinity operationally. Take 0 and add 1 repeatedly. For what n does n+1 become infinite? Never. Since you can't construct infinity you can only believe in it like God.
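A trivial operational illustration of this point, in Python (the million-step bound is arbitrary):

```python
import math

# Operationally, you never reach infinity by iterating the successor:
n = 0
for _ in range(1_000_000):
    n += 1  # n is finite at every step

# After a million increments, n + 1 is still an ordinary finite integer.
assert math.isfinite(n) and n + 1 < math.inf
```

No finite number of successor steps ever produces an infinite value; infinity only enters when you quantify over all of them at once.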
Hence the jargon "completed infinity". Semantically (but not in the symbols "0+1+1..." themselves) one can pass from finite to infinite by arguing that, since every n has a successor, we may define Z to be the set of all successors, "completing" the infinity.
Not having infinity is the real reason a+b=b+a can't be proved in ultrafinitism. Induction, which depends on the idea of completed infinity, is what would otherwise be used.
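For contrast, here is a sketch in Lean 4 of the induction proof of commutativity for Nat that the comment alludes to, using the core lemmas Nat.add_succ and Nat.succ_add; it is exactly this induction over all naturals that an ultrafinitist declines to accept:

```lean
theorem add_comm' (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => rw [Nat.add_zero, Nat.zero_add]
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

The `induction` step is precisely the appeal to a completed totality of naturals.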
Well, yeah. If I went straight to the press to trash the reputation of my client's product, rather than communicating internally first to help them proactively address the issues, I would expect to get fired.
Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.
I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.
EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable. But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy. Maybe there are laws that could help, but what should be in the laws exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this and who should decide? Personally I don't know.
What makes you think the outsourcing firm didn't raise these concerns in email or meetings? You think these people wanted to lose jobs and income? That's irrational.
Why reflexively defend a massive tech corporation caught repeatedly violating the law?
There are transgressions severe enough that your duty to stop them is heavier than your responsibility to "the reputation of your client's product." Amazing this needs to be stated, frankly.
More like a bright future being someone's fall guy. Being ignorant enough to think that a large tech giant like Facebook would give a crap about any of those concerns makes this person too politically inept to make it anywhere.
What specifically do you mean? It is by design that smart glasses see the things happening in front of their users? Yes, it is. That is why people buy them.
Huh. There you go again, thinking everyone else is an idiot. Capture of video data of users by Meta is never acceptable. It would not be acceptable for any phone, and it is not acceptable for any glass, ever.
Saving the data for any purpose other than allowing users to access it is bad enough; allowing Meta employees or contractors to view personal videos is on a whole new level.
I don't know why people buy smart glasses. Maybe they buy them for video capture. If so, the videos go to Meta's servers and Meta might do things with them. They might be criticized for not reviewing them in certain cases. That's one reason why I wouldn't buy Meta smart glasses.
The main issue here is Facebook employees viewing users' private video streams (including of user nudity) without the users' knowledge.
The secondary issue is that it's generally frowned upon to make your employees view nudity in the workplace. Are there extenuating circumstances here? No, we have no evidence there are any extenuating circumstances here.
> I think most ML people now think of neural-network architectures as being, essentially, choices of tradeoffs that facilitate learning in one context or another when data and compute are in short supply, but not as being fundamental to learning.
Is this a practical viewpoint? Can you remove any of the specific architectural tricks used in Transformers and expect them to work about equally well?
I think this question is one of the more concrete and practical ways to attack the problem of understanding transformers. Empirically the current architecture is the best to converge training by gradient descent dynamics. Potentially, a different form might be possible and even beneficial once the core learning task is completed. Also the requirements of iterated and continuous learning might lead to a completely different approach.
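To make "architectural tricks" concrete: below is a minimal pure-Python sketch of scaled dot-product attention, the central Transformer computation, stripped of the extras (multi-head projections, residuals, layer norm) the question asks about. Names and the list-of-lists representation are illustrative only, not any particular library's API:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention on plain lists of vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is the weight-averaged value vector.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

Everything else in a Transformer block can be seen as scaffolding that makes this operation trainable at scale, which is roughly the empirical question: how much of that scaffolding is essential versus merely convenient for gradient descent.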
I skimmed the article for an explanation of why this is needed, what problem it solves, and didn't find one I could follow. Is the point that we want to be able to ask for visualizations directly against tables in remote SQL databases, instead of having to first pull the data into R data frames so we can run ggplot on it? But why create a new SQL-like language? We already have a package, dbplyr, that translates between R and SQL. Wouldn't it be more direct to extend ggplot to support dbplyr tbl objects, and have ggplot generate the SQL?
Or is the idea that SQL is such a great language to write in that a lot of people will be thrilled to do their ggplots in this SQL-like language?
EDIT: OK, after looking at almost all of the documentation, I think I've finally figured it out. It's a standalone visualization app with a SQL-like API that currently has backends for DuckDB and SQLite and renders plots with Vegalite. They plan to support more backends and renderers in the future. As a commenter below said, it's supposed to help SQL specialists who don't know Python or R make visualizations.
I was quite psyched when I read this so maybe I can tell you why it's interesting to me, although I agree the announcement could have done a better job at it.
In my experience, the only thing the data fields (analysts, scientists, and engineers) share is SQL. As you said, you could do the same in R, but your project may not be written in R or Python, while it very likely uses an SQL database and some engine to access the data.
Also, I've been using marimo notebooks for a lot of analysis, where it's so easy to write SQL cells using the background duckdb that plotting directly from SQL would be great.
And finally, I have found Python APIs for plotting really difficult to remember/get used to. The amount of boilerplate for a simple scatterplot in matplotlib is ridiculous, even with an LLM. So a unified grammar within the unified query language would be pretty cool.
I share your pain. You might enjoy Plotnine for python, helps ease the pain. The only bad thing about ggplot is that once you learn it you start to hate every other plotting system. Iteration is so fast, and it is so easy to go from scrappy EDA plot to publication-quality plotting, it just blows everything else out of the water.
But isn't this then just another tool that you're including in your project? I don't get why I would want to add this as a visualization tool to a project, if it's already using R, or Python, etc...
I mean, is it to avoid loading the full data into a dataframe/table in memory?
I just don't see what pain point this solves. ggplot already solves quite a lot of this, so I don't doubt that the authors know the domain well. I just don't see the why of this.
Well, there's always going to be a dependency anyway: loading the data, making it a dataframe, visualizing it; that might be three libraries already.
In a sense I really get your complaint. It's the xkcd standard thing all over, we now have a new competing standard.
I think for me it's not so much the ggplot connection, or the fact that I won't need a dataframe library.
It's that this might be the first piece of a standard way of plotting: no matter which backend (matplotlib, vega, ggplot), no matter how you are getting your data (dataframes, database), no matter where you're doing it (Jupyter or marimo notebook, Python script, R, heck, Looker Studio?). You could have just one way of defining a plot. That's something I've genuinely dreamt about.
And what makes this different from yet another library api to me is that it's integrated within SQL. SQL has already won the query standardisation battle, so this is a very promising idea for the visualization standardisation.
I see, that's insightful. At first sight I thought of it as a kind of novelty, extending SQL with a visual grammar to integrate with a specific plotting library. But from your comments I can now imagine it has potential as a general solution for that space between data - wherever it comes from, it can typically be queried by SQL - and its visualization.
Thinking further, though, there might be value in extracting the specs of this "grammar of graphics" from the SQL syntax and generalizing them, so other languages can implement the same interface.
I completely agree, and I think this is also where I'm quite excited. This project's connection with ggplot, which has one of the most respected grammars for plotting, means that it would be in a good position to achieve what you describe.
There’s certainly some benefit in a declarative language for creating charts from SQL. Obviously this doesn’t do anything that you can’t also do easily in R or Python / matplotlib using about the same number of lines of code. But safely sandboxing those against malicious input is difficult. Whereas with a declarative language like this you could host something where an untrusted user enters the ggsql and you give them the chart.
So it’s something. But for most uses just prompting your favorite LLM to generate the matplotlib code is much easier.
What makes it interesting is the interface (SQL) coupled with the formalism (GoG). The actual visualization or runtime is an implementation detail (albeit an important one).
I would even add that it fits into a more general trend where operations are done within SQL instead of in a script/program which would use SQL to load data. Examples of this are duckdb in general, and BigQuery with all its LLM or ML functions.
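A small illustration of that trend, using the standard-library sqlite3 module as a stand-in for duckdb (an assumption for portability; the point is pushing the aggregation into SQL rather than looping in the host language):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10.0), ("north", 5.0), ("south", 7.5)])

# The aggregation happens inside the database engine, not in Python:
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 15.0), ('south', 7.5)]
```

Chart specification embedded in SQL would be one more step in the same direction: the query describes the result, and the host language only ferries it around.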