I was writing some Python code (not my primary language), and wanted to know if there was something built into f-strings that would nicely wordwrap the output.
I did a Google search for "how can I create f-strings in python so the output is wrapped at some fixed length". Except, for Google, I did not use the "how do I" etc. and just threw some keywords at it: "python f-string wrapping", "python f-string folding", etc.
All I got back was various results on how to wrap f-strings themselves.
Frustrated, I typed "how can I create f-strings in python so the output is wrapped at some fixed length" into ChatGPT, and back came the answer: ... However, f-strings themselves don't provide a built-in method for wrapping text at a specific width. ... For text wrapping, you can use the textwrap module ... Here's how you can combine f-strings with the textwrap module to achieve text wrapping: ... (followed by a full example).
I think "Search" will be changing dramatically over the next year or two. And websites which depend on Search traffic to survive will be in deep trouble.
The double standard here is hilarious. The ChatGPT result you got is better largely because you typed in a longer query. The illusion of superiority that surrounds ChatGPT is significantly due to a human-factors quirk: people are willing to type a long-ass search string in a conversational context. They will type "best pants" into Google and complain about the futility of it, then switch tabs, spend ten minutes writing a short essay on pants for ChatGPT, and rave about the results.
If you take the entire quoted search sentence from your third paragraph, then Google also presents a complete solution.
Google used to do keyword-based search only; now it's a hybrid of keywords and semantics, and they seem to be upweighting the latter more and more. This can be good if you don't care about precision ("pasta recipes" -> "penne alla vodka"), but frustrating if you know exactly what you want to match.
Google and the companies that litter the results page have only themselves to blame. When I was trying to find some boilerplate for pulling something from AWS Parameter Store, I got results that weren't even what I wanted: mostly AWS docs or online tech-diploma-mill nonsense. Then I used ChatGPT and it did 90% of the work in 20 seconds. Sometimes Google should just show me the code on GitHub, but instead it wastes my time.
I mean, that’s just completely wrong though! (1) It attempts to wrap and then substitute, which is nonsense (2) it doesn’t even expand the string you give it! That f-string in there is basically a no-op and (3) it truncates to only the first line, and throws away the rest.
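(ChatGPT's actual snippet isn't reproduced above, but for reference, a minimal sketch of the order of operations this criticism implies would be correct: evaluate the f-string first, then pass the fully expanded result to textwrap.fill, which keeps every wrapped line rather than just the first.)

    import textwrap

    # Sketch only, not the original ChatGPT example: build the full string
    # with an f-string first, then wrap the already-expanded result.
    name = "textwrap"
    message = (
        f"f-strings have no built-in wrapping, so the {name} module is used "
        f"to re-flow the formatted output at a fixed width."
    )
    # fill() returns the whole text wrapped at the given width, with newlines,
    # so nothing after the first line gets thrown away.
    print(textwrap.fill(message, width=40))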
> Except, for Google, I did not use the "how do I" etc. and just threw some keywords at it: "python f-string wrapping", "python f-string folding", etc.
Here's the thing. For the longest time (since circa 1999) I've been a Google user. Google trained me to not use superfluous words like "how do I .... " etc. Just put down the keywords, and spare me the babble, Google said.
With LLMs, on the other hand, it feels like I'm dealing with a human being on the other end: so I have to add stuff like "how can I"... and "please" and "thank you" (and sometimes "no yapping" ;-D ). So I queried Google just like I normally do.
The way GP worded it is a bit confusing and ambiguous, but it doesn't seem like they typed the same question into both; rather, they typed what they thought each wanted to know about the same question. In one sentence they say they typed an exact quote into Google, but in the very next sentence they clarify that they actually typed a bunch of keywords about that quote instead.
> I think "Search" will be changing dramatically over the next year or two. And websites which depend on Search traffic to survive will be in deep trouble.
Yes, and no. A lot of sites generate crap content as it is. Not all search queries are questions that can be answered; some are looking for businesses, shopping items, legal help, and other things that ChatGPT and the like can't emulate. AI will hurt sites like Reddit/StackOverflow/Quora that answer -actual- questions. The issue is that once that happens, there will be less and less content about random issues, ChatGPT et al. will not have been trained on them, and a bit of a gap opens up. Couple that with all the junk AI articles people will be publishing, and it will become increasingly difficult to solve the minor issues you randomly come across.
ChatGPT/Midjourney and the like are, right now, just feeding into the "Dead Internet." I'd say that currently, on net, they have been more of a bad thing than a good thing. That may change, or it could get worse.
> some are looking for businesses, shopping items, legal help, etc things that ChatGPT, etc can't emulate.
Except it can. It doesn't have to try to "remember" the details in its weights; it can use RAG and all sorts of fun tricks. As Google's Search Generative Experience shows, LLM interfaces don't have to be strictly chat. These tools surface and aggregate existing content instead of hallucinating new material. It can't stop people from pasting generated content into Reddit, but it can be used for actual search.
Imagine just asking ChatGPT (or more likely, Gemini) for “space heater recommendations for a 300sqft office in a shed” and having the LLM walk you through the options - you no longer need to use a UI to toggle filters, and you don’t need to explicitly throw keywords into a search bar and hope you picked the right ones.
Regarding the "dead internet" - you'll always have humans creating new content. People wanna talk about their interests on Reddit. That won't change. People will file bug reports and push new code on GitHub. Companies will post support articles. Journals will publish research papers. These "good" sources will still exist because there are external reasons for them to. The only thing that will die is the SEO crap, now that it's really not special.
As a side note, "Search" outside of Google is changing too. I have been using Kagi for a few months now, and my search experience has been so much better since.
An LLM is not an assistant. It's a tool that will fill in the gaps with plausible-sounding content. If it happens to have a lot of data in its training on what you're looking for, good. If not, you're SOL, at risk of being fooled by confidence.
The beauty of tools is you can pick the tool based on the job. Personally I use both search and ChatGPT4 and Copilot in IDE at various times throughout any day.
Not if the tool changes? Like in this example, where Google Search is no longer a librarian (though arguably it hasn't been a good one for 10+ years) but has become a chat bot.
I feel like context is far more important than the actual answer.
If you want to develop your skills, it's just as important to be aware of the things that aren't the answer, rather than seeing only a single hyper-specific curated answer.
As others have mentioned in nearby threads, Kagi has some good examples of this in action. Appending a question mark to your query will run a GPT model to interpret and summarize the results, providing sources for each reference so that you can read through in more detail. It's way less "trust me bro" than google and feels more like a research assistant.
I would have been on board with it if I hadn't gotten a completely wrong AI-generated response from Google the other day.
I was for some reason looking into the history of Reddit and ended up searching "what happened to former AMA coordinator Victoria Taylor". Google's AI summary got confused and told me that she got fired because hundreds of redditors voted for her ouster (clearly mixing up Victoria's story with Ellen Pao's).
I know n=1, but it feels like it's maybe a little too early to take this out of experimental mode.
I think this points to bigger issues with the AI field generally with respect to adoption and development. The output of these systems is fundamentally unverifiable. Building a system to verify these answers would be orders of magnitude more expensive, and maybe computationally impossible. It looks really impressive because it's 95% of the way there, but a 5% error rate is atrocious, it was expensive to get this far, and improving it much more will be more and more expensive. What we've essentially built is a search engine that is 5 times more expensive to operate and that alienates the content from its context, which makes it less valuable.
Maybe it's good enough for programming, since you can immediately verify the output, but I suspect we're pretty far away from the breakthroughs AI boosters insist we are mere months away from. We've had some form of self-driving for a while now; the other company that seems close is Waymo, and that took over a decade of huge research and capital expenditures.
I wonder if Kagi's implementation of the same feature influenced anyone at Google, or if this was an inevitable development. Google has been providing the AI answer boxes for years, but it's never been very user-driven.
With Kagi you can use !fast/!expert bangs to use one of several LLMs fed the top few search results or as of last month, just end the search query with a question mark (no affiliation just a happy customer). It's almost completely changed how I use search.
I pay for Kagi, and I'm only vaguely aware that they have AI features; I have no interest in using them. What I like is that it doesn't annoy me about that stuff or anything; it just lets me use it (Kagi Search) as I want. Completely different from aggressive big-tech "you're the product" companies that use modal "got it?" popups to try and push features you don't want. I hope that doesn't change for Kagi more than anything else.
Also, I rarely use search to answer questions, and I would never just trust an answer anyway. I even read SO before pasting in the code and decide whether it fits with what I'm doing and what modifications the answer may need; I guess I'm old. But more importantly, I use "search" mostly to get to pages I know exist or expect to exist. And getting to the right page faster is what I care about, not any answer.
I pay for Kagi and was similarly only vaguely aware that they have AI features and I had no interest in using them.
I started adding a question mark to my query out of curiosity and their instant answers are really good. Also they have links to their references, which I often check to verify the answer.
To me, this is the key UX component that Kagi does correctly -- traceability to the underlying source result (which is usually also quite good with its linkbait penalizing).
I've found Kagi to be a great start for "I want to know a current fact in an area I don't follow actively."
It's quite interesting what a search engine can be in 2024 when the product isn't the person using it, but the search results themselves.
Kagi, with a way smaller budget than Google, has really managed to make something pretty cool. It still does not replace Google for 100% of my searches, but for the ones it does it is remarkably good, and so much less mentally taxing when working on a coding problem. I don't have to manually filter search results on top of everything; I can just click the top link.
"Our data includes anonymized API calls to traditional search indexes like Google, Yandex, Mojeek and Brave, specialized search engines like Marginalia, and sources of vertical information like Wolfram Alpha, Apple, Wikipedia, Open Meteo, Yelp, TripAdvisor and other APIs. Typically every search query on Kagi will call a number of different sources at the same time, all with the purpose of bringing the best possible search results to the user."
Not to mention that they have their own index that they're constantly working to expand:
> But most importantly, we are known for our unique results, coming from our web index (internal name - Teclis) and news index (internal name - TinyGem). Kagi's indexes provide unique results that help you discover non-commercial websites and "small web" discussions surrounding a particular topic. Kagi's Teclis and TinyGem indexes are both available as an API.
The comment I replied to implied that they have their own large index equivalent to Google's and so don't need to use Google. It's not true, and I bet you would find that a very significant portion of the search results come directly from Google queries behind the scenes.
This brings up an interesting dilemma. Google could try the Kagi approach.
If you're sitting in the CEO chair at Google, you're looking at a tough decision. Google Search isn't just another product; it's the heart of the business, raking in 60% of the revenue. They've been king of the search hill for over 20 years because they did one thing better than anyone else: search.
Now, with LLMs entering the scene, adding them to search results could slash ad views and revenue. Remember the days when a simple search would send you down a fascinating rabbit hole of articles? Those days could be numbered.
But here's the kicker: if you resist integrating LLMs into Google Search, you might slow down the revenue decline, sure. However, sticking to the old ways might also get you branded a Luddite and cost you your job. It's a classic tech dilemma - innovate or perish.
I think there are different classes of result that we should discuss. First, there are pure facts. I know that weather.com wants to interpose itself between me and the weather forecast that my government produced for me using my tax money, but Google doesn't need to assent to weather.com's scheme. That is something they can just answer. In the same way, whether my local chaat stand is open on Tuesdays is not a fact that belongs to Yelp.
Then there are materials that are freely available but that someone wants to stand in the way of. For example, all of the wikipedia-but-with-ads hosts out there. Do we owe them anything? I say no. Do we owe anything to all of the journal indexes that try to convince you to pay $26 for the PDF of a paper from 1913 that's in the public domain anyway? I think not. And I don't think it is problematic if I ask Google whether Heidegger was a Nazi, and Google answers with an answer instead of referring me to the gatekeepers of century-old information.
There's a third class too, perhaps the most important one -- facts which take time, money, and/or skill to assemble.
E.g. a summary of the minutes from your local city council, a respin of government economic statistics that answers a topical question, or a news investigation into a topic.
I'm ambivalent as to the business model that sustains that third class, but I resolutely think one needs to exist.
And Google, as currently constructed and operated, is not it. It doesn't originate content.
Ergo, passthrough of Google revenue to some entity that can / does do that seems reasonable.
> a summary of the minutes from your local city council, a respin of government economic statistics that answers a topical question, or a news investigation into a topic.
The city council minutes would likely be published by the city. Nobody is entitled to revenue from that, it's paid for with tax dollars.
Government economic statistics are also published openly. I happened to have asked a bunch of questions about life expectancy and GDP recently, and Google answered those questions with info boxes and links to the data sources. If I ask a question which can be answered in a few words, I'm perfectly content not to have to view ads to read an article from an economics wonk to manually find the tiny nugget I'm after.
And as for a news investigation, there's almost no question you could ask in isolation that would be better answered by reading a full article, I think. The person asking "Who is facing legal action in the aftermath of the Sandy Hook massacre?" wants to get back "Alex Jones." They almost certainly didn't want to read a full article, and if they did, they'd click on it.
Which is to say, Google is doing nontrivial work to accomplish a specific outcome that the "third class" sources don't solve (and usually don't even try to solve). Besides, how do you know the content Google is pulling from...
1. Came from a single source and was not corroborated with multiple sources?
2. Wasn't just scraped from somewhere else or compiled with AI?
3. Isn't wrong? Should sources get paid for incorrect information?
A passthrough system creates perverse financial incentives to answer questions that someone might someday have, without caring about whether the answers to those questions are genuine and ethically produced. And frankly, that's far worse than what we have now.
> The city council minutes would likely be published by the city.
Have you tried to look for yours, in a smaller city?
> Government economics statistics are also published openly
Certain statistics are published openly. However, the most useful derived statistics are a result of combining these with other data sets.
> And as for a news investigation, there's almost no question you could ask in isolation that would be better answered by reading a full article, I think
I'm not arguing excerpt vs full article. I'm arguing as to whether or not the full article exists in the first place.
There is a huge amount of critical, socially-useful information that requires effort, money, or skill to generate.
Previously, journalists created it.
Now, nobody does.
> Which is to say, Google is doing nontrivial work to accomplish a specific outcome...
What would Google be able to provide without content provided by underlying sources?
Nothing.
Google doesn't have reporters or posters. They provide platforms, then take a cut (or all) of the revenue generated by content on those platforms.
That's worth something, but it certainly doesn't deserve the 100% they're getting now.
> Besides, how do you know the content Google is pulling from...
Google knows. And if it doesn't, then it's unattributable content, which should open them up to legal liability.
ML/LLMs shouldn't be GPU-powered copyright washing machines: feed copyrighted content in, get uncopyrighted results out.
> A passthrough system creates perverse financial incentives to answer questions that someone might someday have, without caring about whether the answers to those questions are genuine and ethically produced. And frankly, that's far worse than what we have now.
You've literally described Google's current business model, and what they've turned the web into.
Does ChatGPT, which is what the OP was comparing it to?
I've been using both ChatGPT and Gemini (both the paid versions) more and more. My habit is to go search on Google, get frustrated that I can't get a relevant result, ask Gemini or ChatGPT, and, 2 out of 3 times, get a useful answer.
At some point my habit will likely change to ChatGPT/Gemini first.
Like the OP, I type something cryptic (like keywords) into Google Search versus typing 1 to 3 sentences into Gemini/ChatGPT. I should try typing the entire thing into Google Search.
Interestingly, this reminds me of Stack Overflow. Their search is legendarily bad, but I'd often go start a new question, add tags, and start typing my question in detail, and in its recommendations based on the content of my long-form question it would find the relevant existing answer.
If I search for how old Taylor Swift is, is a clickbait website entitled to the revenue from my visit? If I'm searching for an objective fact, nobody owns that knowledge. Arguably if I want an objective answer to a question and Google gives it to me, they're the ones who did the work in this case.
People want their question answered accurately, which is many times more likely to happen after clicking through than by relying on some opaque Google process that may or may not have interpreted the information correctly.
I use the same Google account at work and home but only got AI-powered results at work, making me wonder what else Google does differently when you connect from a favored IP.
This could be considered a new era for the internet, or at least it will be once this starts being used to serve the majority of search results.
People who are skeptical of AI won't like it of course. But it will be a literal and practical change to the way the majority of internet searches work.
Google basically becomes the de facto personal agent for most people at that point. I would not be surprised to see Google Assistant merge into the main search product.
Microsoft has actually been strategic in promoting Copilot.
I know that many HNers generally hate Blockchain and smart contracts and decentralized technologies, but I feel this direction is the only viable alternative to monopoly platforms that are now even more directly acting as our interface to the world via agents.
Exactly what that looks like I don't know. But I do know it involves open protocols and probably open marketplaces for knowledge and other types of tasks.
> I know that many HNers generally hate Blockchain and smart contracts and decentralized technologies
Whoa, slow down. There's definitely a lot of skepticism about blockchain and smart contracts around here, but that skepticism does not generalize to decentralization as a whole. Self-hosters are strongly represented on here, as are proponents of federated tech and P2P.
There's specifically a lot of skepticism about blockchain and web3 on HN, and I'm really curious to know how you think those specific decentralized technologies are going to make a difference in the search space.
I've been using both ChatGPT and Gemini for months as my primary question-seeking-response portals. I find Gemini to provide more relevant and recent responses while ChatGPT provides slightly more thorough code guidance. For me, LLMs are a shortcut to researching information where I am hoping to find one or two sources which answer the majority of my questions. Now, I can chat and reason through my questions in one thread without navigating to other sources. The biggest pitfall is surfacing differing perspectives but I think that is solvable.
If Google was smart, they would use their existing ranking systems to find the highest quality creators and pay them to do this. Instead, they’ll probably train an LLM on SEO’d blogspam and call it a day.
I've had these enabled since launch and I think they are great. It usually provides 2-3 stanzas of summary, each with a carousel of sources. It's a useful format because in a ranked list it is not always immediately obvious when there are multiple classes of results, the union of which answers your question. With the summary result, the correspondence between sources and relevant facts is more obvious.
Side question, but relevant (I've wanted to do an Ask HN about it for a while).
I have a relatively large corpus (~10k pages) and would like to augment traditional keyword search with AI. E.g.: write my question and get an LLM answer *backed up* by search, linking to the part of the corpus used.
What are my options here? Local option would be best.
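Not an authoritative answer, but the usual local pattern here is retrieval-augmented generation: embed the corpus in chunks, retrieve the closest chunks for a question, then hand those chunks plus the question to whatever LLM you run locally, keeping the chunk ids so the answer can link back to the source pages. A minimal sketch, assuming sentence-transformers and numpy are installed; local_llm below is a hypothetical placeholder for your local model (llama.cpp, Ollama, etc.):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # corpus: list of (page_id, text) chunks built from the ~10k pages
    corpus = [("page-001", "first chunk of text..."), ("page-002", "second chunk...")]
    corpus_embeddings = model.encode([text for _, text in corpus])

    def local_llm(prompt):
        # Hypothetical stand-in: swap in a call to whatever model you run locally.
        raise NotImplementedError

    def retrieve(question, k=5):
        q = model.encode([question])[0]
        # cosine similarity between the question and every chunk
        sims = corpus_embeddings @ q / (
            np.linalg.norm(corpus_embeddings, axis=1) * np.linalg.norm(q)
        )
        return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

    def answer(question):
        hits = retrieve(question)
        context = "\n\n".join(f"[{pid}] {text}" for pid, text in hits)
        prompt = (
            "Answer using only the sources below and cite their ids.\n\n"
            f"{context}\n\nQ: {question}"
        )
        # return the generated answer plus the page ids used, for linking back
        return local_llm(prompt), [pid for pid, _ in hits]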
Yeah, but this is essentially the same as current Google results, except that instead of just links it uses AI to summarize the results into a text blob. The summary still contains links to where it got its information, and it allows you to ask follow-up questions.
It's not like it's generating new information. It just summarizes information based on the same links you would have gotten without it.
Well, the way I see it, you shouldn't really trust any information that's out there on the internet. It's your own responsibility to put search results into perspective, whether it's someone's blog or AI generated content.
AI is faceless, a website is not. The other content on the website will provide a clue as to whether the site is trustworthy or not. This is how web search has worked since the beginning.
Right, so I want a list of results where I can assess the information from each in the context of the source it came from, its reputation, etc. And other metadata like how old the article is.
It's a non-starter to have that critical context stripped out and to be given an opaque answer by an AI without having seen the underlying information in the context it originated from.
Correct, and I have my own workflow to reduce the noise-to-signal ratio, for example by adding site:stackoverflow for programming questions, or site:ycombinator.com for tech queries, and reading the discussions of how an answer was derived.
For AI, I expect the noise-to-signal ratio will not be as good.
Sure, but do we trust Google's non-generative summaries / search results for some reason? It's all a black box to us on the outside. It's already heavily ML'ed; at no point was a human ranking these things or determining relevance.
It's not the quality of the AI that's at issue. In fact, the higher quality Google's AI is, the less I will trust it. Google wants to extract maximum value from me. I want to find useful information on the Internet. These two goals are frequently at odds with each other. The better Google's AI gets, the better it will be at doing its job (which I do not like).
AI as a mechanism for allowing Google (or whoever) to deliver the same information we already have through an inevitable additional opaque layer of bias and advertising would be seen as a horror if so many people weren't impressed by the gee whiz factor of it.
The quality of these results is in my experience quite poor, so this is worrying to me. (The other comment about "distilled blogspam" hits the nail on the head I think.)
It highlights a fundamental tension around Google's core product: Users want accurate information. From their perspective, that is the singular purpose of using Google. But Google's internal purpose is only to sell ads. It makes no difference to their bottom line whether the information they give users is correct unless it gets so bad they start losing ad impressions.
Speaking even more broadly, it's very depressing to me how the goal of building a good, useful, or functional thing is subservient to, or at best orthogonal to, the goal of making money.
I disagree. I think (especially as evidenced by Gemini) Google's people have a specific mission to make the world a better place, with "better" being defined by them, such as ensuring that DEI is injected into everything. I am generally very supportive of diversity, but I think it's pretty clear that the goal of helping you find information and the goal of filtering/shaping that information to make the world more reflective of what they think it should be are fundamentally in tension. That is what concerns me the most.
I do not believe anyone with real power at Google values liberal values above revenue. The RLHF and fine-tuning is solely to prevent news articles of the form "Google unveils new shockingly racist AI!!", which is absolutely what would happen without explicit fine-tuning, and which would invite regulation and scrutiny.
I too think artificially "fixing" models is the wrong way to go. If the models are biased towards racism it's because the training data is biased towards racism which is because society is biased towards racism. Which is true, and we'd be better served by acknowledging that and shining a light on it. Just ban AI (as a known tainted product) from being used for making any decisions of importance.
> The RLHF and fine-tuning is solely to prevent news articles of the form "Google unveils new shockingly racist AI!!", which is absolutely what would happen without explicit fine-tuning, and which would invite regulation and scrutiny.
Yes that is undoubtedly true, and is a great point. I'm not sure whether it just so happened that the revenue incentives lined up with the liberal values well enough that nobody ever questioned or pushed back, or if the revenue goals outweighed the liberal values, but my guess is it's probably more the former. Though once revenue and liberal values are in tension, it will be interesting to see which direction they go. My guess is it will be a mixture that leaves no clear trump card, and makes it very difficult to predict given situations.
Separately: we should be careful of carrying the right wing's water and adopting terms like "DEI" as negatives without thinking critically about it.
This is straight out of the Chris Rufo playbook; identify a well-intentioned but possibly flawed concept, create a caricature of it, and make that strawman the punching bag of every anti-inclusive political voice. Then, because it's a term liberals were already using, turn around and use it to attack existing institutions.
This is his explicit strategy for kneecapping "CRT" (formerly an academic subfield), "woke" (a social concept among American Blacks), and you can see it unfolding in real time against "DEI" (formerly the way HR departments tried to comply with the Civil Rights act, but now a catch-all negative term.)
Yes thanks, that's a good thing to keep in mind. I actually didn't intend it in a negative way when I used it, but probably more people perceive it that way than not so it's good to keep in mind. You're right, there are lots of opportunists and people with agendas going the opposite way that will absolutely try to get us to throw out the baby with the bathwater. It also doesn't help that we humans tend to be pendulum swingers (and also have a tendency to over-correct) which then leads to backlash and backtracking, and it's not hard to hijack those natural motions to get people to react emotionally.
Reading commentary from 18th-century religious leaders, for example after Franklin invented the lightning rod, is quite illuminating. It's far enough in the past that there aren't really (serious, at least) people making the case that we are tampering with God's methods for punishing the wicked anymore, so there isn't a personal/emotional connection to the arguments for most people. Seeing people of the day seize on parts of the science that were slightly wrong and use it to inflame the passions of people to wholly abandon lightning rods (including some cases where people actually mobbed buildings and tore them off) is very much on my mind.
RE: "Speaking even more broadly, it's very depressing to me how the goal of building a good, useful, or functional thing is subservient to, or at best orthogonal to, the goal of making money."
This is what happens when things are "free". They have to make money somehow, and "free" products typically have bad incentives (i.e. the company puts making money ahead of making the best product). One of the nice things about paying for news, TV (not cable, streaming), software, etc. is that you are the customer and the company's continued survival depends on making you happy. In the short term, they can do all sorts of crappy things to their customers (think Cable TV's high prices and meh content), but in the long term, bad behavior kills companies.
Instead of visiting content creators or information sources directly, you allow Google to insert itself between you and them. The sources get starved, and Google later has the opportunity to shape what is said to influence you (i.e., "advertise" in a more varied way, or give things a political spin or some corporate/PR-approved framing).
See also: Amazon's AI review summarizer will one day have a larger role and will surely skew things to increase sales.
Surprised to see negative reactions to this. Generated search results that distill actual content are definitely better than blogspam that has only a few useful tidbits in thousands of words and dozens of huge ad blocks.
One of my biggest gripes with Google Search these days is its bad UI. I started using SearXNG a few years ago when Google made everything above the fold useless: videos, excessive whitespace, and dynamically popping content to accidentally click.
Do the SWEs/people actually doing the implementation of this really think LLMs are ready to be used like this and won't just piss people off and turn them against "AI"?
What ChatGPT did was interesting; in hindsight I think it was a mistake to call it AI.
At least google will still walk back on things when they truly suck. Recent example is where they replaced the default Shopping, Images, Videos tabs when you search for something with autogenerated suggestions. It was awful. But, even though it took them a few months, that entire change seems to be reverted.
The problem with that is that Google, now more than ever, can show you whatever they want. Another step toward being just a dumb TV (AKA a propaganda engine).
So search is no longer a set of links to pages where information is found; search is now just an answer to a question. I don't really have a problem with that, as it's what people really want. What I do have an issue with is that the people who put time/energy/effort into providing the information that Google has taken to train their data service get no credit or any attempt at being rewarded for their effort. Sure, they might still provide links to SEO crap sites to make it "feel" like web search. This is just the latest step toward removing the ability to actually find a website.
To me, this should be a totally separate product. The Googs Answer All 5000(TM) will just give the answer to whatever is being asked, with no method of inspecting the validity of the answer.