Hacker News | past | comments | ask | show | jobs | submit | Timpy's comments

I really like Crash Course World History. I'm older than the target demographic for this resource to be sure, but it's a great high level overview and having this under my belt helped me feel a little more oriented in every other historical thing I've dabbled in since.

https://www.youtube.com/playlist?list=PLBDA2E52FB1EF80C9


I discovered Anki 12 years ago while living in Japan. I was trying my hardest and absolutely failing to remember any of the Japanese I was studying. Maybe I was due for a learning-style renaissance for myself and Anki was just the catalyst, but it really made a positive impact on my life. More than just memorizing kanji on AnkiDroid during my commute, I just started to believe I could learn anything. I was starting to take my coding hobby more seriously at the time and hacking on Anki was a big part of that too. Thanks for all the hard work Damien and David Allison. I'm so grateful for the software you've worked on.


Agreed, Anki has really helped me with learning new languages. The creation of cards was always a slog though, so recently I've been playing with an Anki MCP server hooked up to Claude. I can dump my iTalki lessons in, or ask Claude to make cards based on a song I've been listening to, etc and get a bunch of relevant cards generated for me. It's honestly been kind of magic.


That’s… genius. This might get me using Anki again; I gave up because of the friction of card creation. Thank you for this!


You're welcome! Here's the one I've been using: https://github.com/ankimcp/anki-mcp-server

I've definitely hit walls with Anki over the years, and while the community decks help a lot, it's really nice to just tell Claude "can you take this assignment my tutor gave me, extract all the infinitive verbs, and then make cloze style cards for conjugations at an A1/A2 level?" and get it all done in a couple minutes.


Ha, I love this!

In a way making the cards helps a ton to learn the content and decide what's really important to retain. On the other hand, it's such a slog that I usually end up relying on community cards, or skipping it altogether. The MCP idea may be a nice middle ground. Will give it a try for an upcoming exam.


The MCP approach is brilliant! I've been solving the same friction problem from a different angle. I mostly use Anki to improve my vocabulary, and ended up building a tiny browser-side tool for myself (now called Wordwise / Anki Dictionary) that lets me double click any word on a webpage, get a clean definition + the sentence it appears in, and export it straight into Anki with one click.

It’s been a surprisingly good middle ground between fully manual cards and fully LLM dumps. If anyone’s curious: https://wordwise.me


Yeah, llms change the game for card creation. I'm trying to learn Rust (programming language) and I have Codex ingesting books/articles and generating sensible cards from them. It's able to consistently get the HTML right for syntax highlighting in examples too.


Same for me. I was doing my PhD in another country and was just overwhelmed and disoriented at the sheer scale of information I suddenly had to remember and digest. Anki was on again/off again for me at first, but once I learned to edit and update the cards and add my own, I really began to understand how to boil concepts down into something I could remember, i.e. I could structure it to my own personal chaotic mode of thinking, and I've flourished with it since then.


I've been barely keeping my head above water (ok, much better than that honestly) for 35 years intellectually due to lack of more methodical learning. Your post might convince me of trying Anki...


The real trick is not realising that it's working until you stop using it :-)


I've just started using Anki and I'm almost grieving. If I had had this 15 years ago I probably would have done so much better in school. I've always struggled with memorizing, but Anki has made this much easier for me. I started learning Japanese 4 months ago and I'm baffled by how much I've retained in that period. Now I'm playing with using it to learn the rules for the One Ring TTRPG.


Same for me. I discovered spaced repetition through Anki. It helped me study Japanese, Agile, and countless other topics, and the Android and macOS apps work perfectly together. A friend used it so much that he ended up contributing to the Android app as OSS.

> with provisions in place to ensure that Anki remains open source and true to the principles I’ve run it by all these years.

I really hope this holds.


I think Anki, originally, was for studying Japanese too.

And I recently wrote about making my own Anki Japanese cards in my blog[1]

[1] https://alt-romes.github.io/posts/2026-01-30-from-side-proje...


You're right about that, Anki is named after the Japanese word for memorisation. (暗記 - あんき "anki")

https://jisho.org/word/%E6%9A%97%E8%A8%98


Is Anki that much better than, say, Quizlet?


Same for me, while I learned Danish, which is even harder than Japanese.


Sorry if I’m missing an implied /s :D

Ftr Danish is a category 1 language, while Japanese is category 4 (https://2009-2017.state.gov/m/fsi/sls/orgoverview/languages)


... for native speakers of English, which not everyone in the world is.


> Sorry if I’m missing an implied /s :D

You caught me. ;-)


tbh I was perhaps also eager to reshare that website someone linked the other day :P I saw a chance and by golly I had to take it.


A lot of training data was curated in Kenya[0]. I would imagine that if LLM data had been curated in Japan, our LLMs would sound a lot like the authors of their most popular English textbooks. Maybe other common Japanese idioms would leak into the training data, like "ね" or "でしょう", and ChatGPT would say "Don't you agree?" at the end of every message.

[0] https://www.theverge.com/features/23764584/ai-artificial-int...


The Indian-born textbook author mentioned (Malkiat Singh [0]) had an inordinate influence on many Kenyan students because his textbooks were the de-facto standard for years. It's interesting how this influence extends as his students get to curate the LLMs on which the world has come to rely.

[0] https://en.wikipedia.org/wiki/Malkiat_Singh


So twists of training data procurement bring us the best of doing the needful through Africa.


You are completely right dajou~ ^_^ !


Maybe we all should start writing Japanglish to show our authenticity? Or rather, “Maybe we all should start writing the Japanglish, so that peoples can feel our real soul, you know?”


I guess it can't be helped.


It's not because I like you or anything.


This is a wild misunderstanding of LLMs. Data labeling has nothing to do with generating the astronomical text corpus used to train modern LLMs.


The HF part of RLHF, used to refine the output of LLMs, also happens in these places.


Note that RLHF can only perform selection on existing model outputs; adding new data is SFT, or else just more pretraining.

ChatGPT speaking African English was mostly just 3.5. 4o speaks like a TikTok user from LA. 5 seems kind of generic.


樣 is just setting us up for

ChatGPT :|

ChatGPT (japan) XD


Yes but I think it's a good virtue to signal considering the circumstances. If they paid the ransom that would signal that ransoming this company works, incentivizing more ransoms. If they refuse to pay the ransom it might signal that they care more about money than they do integrity. Taking the financial hit of the ransom, but paying it to something that signals their values, is about the best move I can imagine.


This is cool, very much the hacker ethos. But I didn't see any evidence that it can actually make a phone call.


Yeah, I actually left a comment to that effect on the video when I saw it last week, because I’m pretty sure the placement of the mic / earpiece is incompatible with a traditional voice call. Although arguably, traditional phone calls are the least important smartphone feature for gen z. Even if they were important, I would guess Bluetooth headsets are more common than actually holding the phone up to your ear.


I had stopped in the bar next to my gym after a workout to grab a pint and watch a bit of the game while waiting for a Lyft home. My adult daughter called. I full-on panic-sprinted out of the bar because I heard my name and then the call broke up. Several texts later I realized she was ok and just asking about dinner. She never calls, so it was obviously an emergency. Was she being attacked? Did her car break down? What?!?

Also, the phone didn't work in basic phone mode. These things are getting more useless without earbuds.


I think this guide is nice, and having a variety of articles like this is great so everybody can look at the different ideas and find what's right for them.

I would urge people to consider going a little bit further than this guide, consider not using your phone as a reading device. Imagine deciding to sit down with a physical book, but keeping your phone nestled on the opposite page as you read. It would be a lot nicer to read without interruption, without being exposed to notifications at all times. Sure there are going to be use cases where the phone is more convenient, but I think sacrificing convenience is worth it.


This is something I want to see in the world. Do you have a public repo? I'm currently doing third party application development for the Yoto, and I've done a lot of hacking on MP3s. If you're open source I'd be interested in helping, or at the very least chatting about the project.


Eventually I will write a blog post. The software is actually not much, just some basic Arduino stuff. I am using an ESP32, a full-size SD card board, and a VS1053 board (both connected via SPI). The software currently just tries to read from the SD card in a loop, and when it can, it plays the MP3 files in order. Other parts not connected to software are a Li-Ion battery, a charger circuit, a step-up converter, an LM386-based amplifier circuit, and a speaker :)
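Roughly, the play loop could be sketched like this. This is an illustrative Arduino-style sketch, not the actual code: the Adafruit_VS1053 library, pin numbers, and filenames are all assumptions.

```cpp
// Illustrative sketch: read MP3 files from the SD card in a loop and play
// them in directory order through a VS1053 over SPI.
#include <SPI.h>
#include <SD.h>
#include <Adafruit_VS1053.h>

// Hypothetical ESP32 pin assignments -- adjust for your wiring
#define VS1053_RESET  -1  // reset tied high
#define VS1053_CS     32
#define VS1053_DCS    33
#define VS1053_DREQ   35
#define CARD_CS        5

Adafruit_VS1053_FilePlayer player(VS1053_RESET, VS1053_CS, VS1053_DCS,
                                  VS1053_DREQ, CARD_CS);

void setup() {
  Serial.begin(115200);
  if (!player.begin() || !SD.begin(CARD_CS)) {
    Serial.println("init failed");
    while (true) delay(100);
  }
  player.setVolume(20, 20);  // lower number = louder
}

void loop() {
  // Walk the SD card root and play every .mp3 in order
  File dir = SD.open("/");
  for (File f = dir.openNextFile(); f; f = dir.openNextFile()) {
    String name = f.name();
    f.close();
    if (name.endsWith(".mp3")) {
      player.playFullFile(name.c_str());  // blocks until the track ends
    }
  }
  dir.close();
}
```

Note that on some ESP32 cores `File::name()` returns only the bare filename rather than the full path, so a real build may need to prepend the directory.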


I keep feeling like there's a huge disconnect between the owners/CTOs/managers with how useful they think LLMs _are supposed to be_, vs the people working on the product and how useful they think LLMs _actually are_. The article describes Harper Reed as a "longtime programmer", so maybe he falsifies my theory? From Wikipedia:

>Harper Reed is an American entrepreneur

Ah, that's a more realistic indicator of his biases. Either there's some misunderstanding, or he's incorrect, or he's being dishonest; it's my job to make sure the code that I ship is correct.


This is totally orthogonal to the issue but I think the best fix possible is to block the YouTube home page. I have gained value from algorithm-curated feeds in the past but it's no longer a net positive in my life. I recommend checking out News Feed Eradicator[0] and Distraction Free YouTube[1], and setting up some extremely aggressive uBlock Origin rules.

[0] https://addons.mozilla.org/en-US/firefox/addon/news-feed-era...

[1] https://addons.mozilla.org/en-US/firefox/addon/df-youtube/
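For example, aggressive uBlock Origin cosmetic filters might look like the sketch below. The element names are assumptions based on YouTube's current markup and tend to break whenever YouTube ships a redesign, so treat them as a starting point.

```
! Hide the home-page recommendation grid
www.youtube.com##ytd-rich-grid-renderer
! Hide the "up next" sidebar on watch pages
www.youtube.com##ytd-watch-next-secondary-results-renderer
! Hide Shorts shelves
www.youtube.com##ytd-reel-shelf-renderer
```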


Unhook is also good. https://unhook.app/


The models outlined in the white paper have a training step that uses reinforcement learning _without human feedback_. They're referring to this as "outcome-based RL". These models (DeepSeek-R1, OpenAI o1/o3, etc) rely on the "chain of thought" process to get a correct answer, then they summarize it so you don't have to read the entire chain of thought. DeepSeek-R1 shows the chain of thought and the answer, OpenAI hides the chain of thought and only shows the answer. The paper is measuring how often the summary conflicts with the chain of thought, which is something you wouldn't be able to see if you were using an OpenAI model. As another commenter pointed out, this kind of feels like a jab at OpenAI for hiding the chain of thought.

The "chain of thought" is still just a vector of tokens. RL (without-human-feedback) is capable of generating novel vectors that wouldn't align with anything in its training data. If you train them for too long with RL they eventually learn to game the reward mechanism and the outcome becomes useless. Letting the user see the entire vector of tokens (and not just the tokens that are tagged as summary) will prevent situations where an answer may look or feel right, but it used some nonsense along the way. The article and paper are not asserting that seeing all the tokens will give insight to the internal process of the LLM.

