On the topic of "24. A Sony Walkman-style device that you can give to children so they can ask questions to an LLM...", I would strongly caution against this:
- short of AGI, what a child will hear are explanations given with authority, which would probably be correct a very high percentage of the time (maybe even close to or above 99%), BUT the few incorrect answers and subtle misconceptions finding their way in there will be catastrophic for the learning journey, because the child will believe them blindly.
- even if you had a perfect answering LLM that never makes a mistake, what's the end result? No need to talk to others to find out about something, i.e. reduced opportunities to learn about cooperating with others.
- as a parent, one wishes sometimes for a moment of rest, but imagine that your kid just finds out there's another entity to ask questions from that will have ready answers all the time, instead of you saying sometimes that you don't know, and looking for an answer together. How many bonding moments will be lost? How cut off would your kid become from you? What value system would permeate through the answers?
A key assumption here for any parent equipping their child with such a system is that it would be aligned with their own worldview and value system. For parents on HN, this probably means a fairly science-mediated understanding of the world. But you can bet that in other places, this assistant would very convincingly deliver whatever cultural, political, or religious propaganda their environment requires. This would make for frighteningly powerful brainwashing tools.
>> child will hear are explanations given with authority, which would probably be correct a very high percentage of the time (maybe even close to or above 99%), BUT the few incorrect answers and subtles misconceptions finding their way in there will be catastrophic for the learning journey because they will be believed blindly by the child.
Much better results than asking a real teacher at school, though.
Disagree with this. Kids are sponges who pick up on many secondary factors when an actual human gives them an answer. These factors add significant weight to their view of the response. In many cases, this reaches an extreme where what is said ends up being tertiary to how it was said and who said it. I am sure you've experienced this even as an adult.
An AI walkman removes this aspect of the interaction. As a parent, this is not something I would want my children to use regularly.
Wouldn't you know whether a teacher is reliable or not? If reliable, they probably have this reputation partly because they can say when they don't know something. And if you found out a given teacher isn't reliable, you'd be careful about what they say next - or you would just ask someone else.
The problem here is for a child to think this system is reliable when it is not. For now, the lack of reliability is obvious, as ChatGPT hallucinates on a very regular basis. However, this will become much harder to notice if/when ChatGPT becomes almost reliable while saying wrong things with complete confidence. Should such models become able to say reliably when they don't know something, that would be a big step for this specific objection I had, but it still wouldn't solve the other problems I mentioned.
The amount of misinformation I picked up as a kid due to a lack of internet is nothing compared to the rare hallucination a kid might get from ChatGPT.
Swallowing gum is bad for you (or watermelon seeds), cracking knuckles causes arthritis, sitting too close to the TV ruins your eyes, diamonds come from coal, Newton's apple story, a million other things.
Just two days ago, I asked ChatGPT to provide an explanation of the place-value system that my six-year-old could understand. The only problem was that it mixed up digit value and place value, which caused it to become confused. I spotted the mistake, and ChatGPT apologised, as it usually does. But if my six-year-old had asked it first, she wouldn't have noticed.
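For what it's worth, the distinction it muddled is easy to pin down. A minimal sketch (the function name is mine, purely for illustration):

```python
# Digit value: the digit itself. Place value: the digit multiplied
# by the power of ten of its position. For 352, the digit 5 has
# digit value 5 but place value 50 - the distinction ChatGPT mixed up.
def digit_and_place_values(n: int):
    digits = str(n)
    return [
        (int(d), int(d) * 10 ** (len(digits) - 1 - i))
        for i, d in enumerate(digits)
    ]

print(digit_and_place_values(352))  # [(3, 300), (5, 50), (2, 2)]
```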
I'm not sure how much misinformation my child would learn as truth from this device.
Fair point in your case, but my experience across companies and industries is that people just don't read. It's true in any customer-facing experience, where tutorials etc. are routinely unread and ignored; it's true for execs who are always short on time and want the exec summary so as not to read a whole memo (however misguided this may be in certain instances); it's true for seasoned professionals who prioritize and decide to ignore certain requests until they are clear enough / repeated enough; the list goes on.
Even people getting @mentioned on Slack or in emails seem to find it acceptable to say routinely that they didn't see/read whatever it was they were specifically asked to look at.
You're right, and that tracks with my experience too, sad as it is to have to admit.
However, if you're not holding people accountable for not reading the directive/memo, then that's on you. When you have something in writing that you can point to and say "look, there it is, I provided you with the information, you chose to not acknowledge it," it's very damning to the person who ignored it.
Without getting into details about the time I nearly left my company, I can tell you that one of my greatest weapons was (and still is) being able to literally recall emails, SOPs, and SMS messages that had been ignored. It makes me a thorn in the side of lazy managers and legacy hires that turned out to be freeloaders in my industry.
The people at the bottom of any organization have a responsibility to hold the people at the top accountable, just as it works the other way around. This is extremely hard for those of us near the bottom of an organization to do, I know, but if we don't, we are giving permission for the problem to persist and making our work that much harder. We all know that managers and those above them will avoid doing as much work as possible at any given time, but willful ignorance is not admissible in a court of law, so why should it be any different in the workplace?
This matches my experience. With email in particular, at every organisation I've worked for it's been normalised that it's OK not to read every email, particularly if it's long or detailed. Clearly if the CEO emails you alone about something important, that doesn't hold, but for the vast majority of emails it was seen as acceptable not to read them, and even to openly admit that. I remember being phoned by a department head once asking what an email was about - they weren't going to read it until I explained why they should.
It's a side effect of the information noise we're all subjected to. If we all received 6 messages a day, we'd probably read them all, but as we often get hundreds (thousands if you're getting automated messages), it's "OK" to miss a few.
Can't help but notice that their reduced four-day work week is still 36 hours long. 9-hour work days would be considered long in most European countries, where the norm is around 40 hours over a five-day work week. So they are not just cutting a day, they are also working one hour more than average on each remaining day. This probably explains a lot in terms of productivity.
On a different note, the increased ability of men to participate in family life and household chores sounds amazing. It might sound weird to younger professionals without kids, but as soon as you have a child, having one extra day to deal with everything household related makes a huge difference.
I concur: participating in household stuff (whether as a man or a woman) is a huge thing. When you have kids, you sometimes have to meet the teachers, go to a doctor, or do some administrative work. Having a day for that (even if it means working a bit more on the other days) adds flexibility and makes these tasks much more bearable.
Yes, but if the whole country is on that schedule, then your options for dealing with teachers, doctors or public workers on the free day tend to disappear. Case in point: countries where Saturdays used to be a working day, and where kids also had school on Saturday (France and Spain, if I'm not mistaken).
Looking at the link shared in another comment with the actual study, it seems that only 55% of private sector workers are happy with the new arrangement, and that higher satisfaction rates are in the public sector. I am a bit skeptical about some of the conclusions of the report saying for example that the private sector needs to take more inspiration from the public sector, which seems blissfully unaware that the private sector can't rely on tax revenue to pay salaries, and has a very different set of operating constraints.
It makes a case for reconsidering what the work day really is (the number of hours given over to work in a day, including the commute).
If a greater number of people could access 36-hour work weeks at 9 hours a day, and more people got access to four-day work weeks with a long weekend every week, it might not be a bad thing.
Ongoing demo of integrations with Claude by a bunch of A-list companies: Linear, Stripe, PayPal, Intercom, etc. It's live now at: https://www.youtube.com/watch?v=njBGqr-BU54
Qwant is infamous in France for having made big claims and failed repeatedly. It was for the longest time just a wrapper around Bing, while claiming otherwise.
That isn't the case, the engine is quite capable. I've been using it for 10 years as a replacement for Google. I only stopped recently because any AI is now better at answering most technical questions.
It is certainly more private than Google and capable enough to answer my daily questions. For me it is important to reduce as much as possible the entrapment of Google services.
Not all families can afford meat daily, let alone any type of meat they want. So, some families might favor chicken and certain processed meats here and there, but most cuts of beef and lamb, for example, might be out of the question, and so will daily meat consumption. The same applies to dairy: they might afford big blocks of tasteless cheese, but won't often touch cheese with taste, French or otherwise. There is no enforced rationing, but choice is reduced, if not removed, for a good fraction of the population, purely for economic reasons.
What's missing from this type of discussion is performance quantification and a scientific attitude. It assumes solid food is indulgence and that ultraprocessed vegan shakes paired with appropriate willpower are all it takes to do anything.
Things just don't work that way. Been there, done that. Willpower does nothing. Extra calories around your belly are extra calories around your belly. It's much more worthwhile to think about reducing GHG emissions from farming than deceiving yourself into self-harm and projecting your own behaviour onto greater society.
I think many people have very positive experiences and data, at scale, speaking to the kind of success Edtech can have.
I was involved with a study by the Center for Game Science (University of Washington), led by Zoran Popovic (of Foldit fame), with over 40,000 kids in the US, Norway and France participating, from grade 1 to the end of high school. I think the numbers were that 93% of kids managed to achieve mastery in solving an equation for x within an hour and a half of this, starting from first principles in their learning (it didn't matter what they knew or didn't know beforehand).
This was met by downright hostility from some school systems, with the institutions saying in essence "it's impossible kids learn like this", ignoring empirical evidence in the process. Teachers, on the other hand, thought it was great and that it had a profoundly positive impact on their students. The Nordics seemed less averse to letting their students progress along this path. Ultimately the company that had developed the game went towards more traditional school publishing with paper methods + digital tools, which in my opinion is vastly less efficient, but which has the huge benefit of being something school systems know how to buy and implement.
This is meaningful when looking at the promise of edtech, because a lot of what's called edtech is frankly of poor quality, but some things are pure gems, and saying edtech has failed, as the author of this article does, is not only misguided but dangerous in the extreme for the kids, often from underprivileged backgrounds, who benefit the most from these kinds of cooperative, adaptive, and gamified approaches.
These approaches don't feel like school and don't feel complicated: kids can just have fun, explore, learn logical rules, verbalize what they are doing with one another and help one another, progress at their own pace, and end up learning stuff considered "hard" when it really isn't, like math, physics, chemistry, etc., i.e. logical rulesets that can be represented with meaningful manipulatives and made into a fun learning journey.
A scientist who causes, through willful fraud, the death of people seems to be guilty of something like manslaughter. Using fake data is a pretty clear-cut example of willful fraud, and a researcher fudging data over such a life-and-death question should 100% be held accountable.
Scientists making errors in good faith should on the other hand be insulated from any kind of liability.
You cannot insulate scientists making errors in good faith from any kind of liability if you make the wilful frauds liable, because there is no 100% reliable way of distinguishing the two.
You don't need to be 100%. We assume innocent until proven guilty in other contexts. At least some criminals are known to go free because we cannot prove beyond a reasonable doubt that they really did it. However, we get a lot of them. It isn't perfect, but it is a standard.
Despite innocent until proven guilty, there are innocent people in prison or on death row. I doubt that is a standard that any scientist would agree to.
Scientists who publish research in a journal, and then it turns out that this research is wrong, for whatever reason, should not be held liable for the consequences of this.
Not that I'm in favor of the proposed measure, but saying that because we can't identify wilful frauds 100% of the time we can't protect the non-fraudsters is just a bit silly, no? You have this kind of problem detecting any kind of fraud.
One test is, is there written communication between people about committing the fraud? If so, there you go.
I don't believe you're engaging in good faith here, so I'm not going to reply any more, but if you're interested in having a productive conversation, try to think about what I might be meaning a little more and then reply instead of taking the least sensical interpretation and responding to that. Or, if you like, you might reply to multiple interpretations of what I said if you're not sure which one I mean, and that way we can advance the dialogue.
Let me sum up the discussion for you. I am arguing that scientists should not be held liable for the consequences of published research that turns out to be wrong (for whatever reasons). That is the status quo.
Now you are saying, introducing liability is fine, as we can deal with that in this or that way. I am pointing out that all of these ways are inherently flawed, to which you respond that yes, this is true, but in other walks of life we are dealing with these things too. To which I am replying, that's fine, but if we don't introduce liability, we will have none of these problems in the first place.
So you see, it is not me who is not engaging properly with the other's argument. So I am happy to finish this discussion as well.
Thank you very much for sharing this, and the accompanying documentation. I had no idea that Ireland had such a system in place. It seems really elegant and efficient, and better than say the Condorcet voting method. (https://en.wikipedia.org/wiki/Condorcet_method)
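For anyone curious about the mechanics of the Condorcet method mentioned above, here is a toy sketch (the helper is my own, purely illustrative; real elections need tie handling, and Ireland's system works differently):

```python
def condorcet_winner(ballots):
    # Each ballot ranks all candidates from most to least preferred.
    # A Condorcet winner beats every other candidate in head-to-head
    # strict-majority comparisons; a preference cycle means no such
    # winner exists, in which case we return None.
    candidates = set(c for ballot in ballots for c in ballot)
    for cand in candidates:
        if all(
            sum(b.index(cand) < b.index(other) for b in ballots) * 2 > len(ballots)
            for other in candidates - {cand}
        ):
            return cand
    return None

ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
print(condorcet_winner(ballots))  # A
```

A rock-paper-scissors set of ballots (A>B>C, B>C>A, C>A>B) yields no winner, which is exactly the cycle problem that makes Condorcet methods trickier to implement than they first appear.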