
What does it mean for tab groups to be enforced? Is it popping up messages to shame you into sorting your tabs better?

Assuming not, what’s the harm?


Have you used Chrome for Android? It's absolutely awful and forced me to switch to another browser. All tabs are opened in a tab group, sometimes they will open a new tab group. Very confusing behaviour. The desktop implementation is good though.

Yeah, it completely ruins the ability to switch to another tab: instead of the tab-switching screen showing 6 tab previews, it now shows 6 tab group previews with a single tab in each, and those previews are tiny, about a quarter the size of a tab preview. Closing a tab group is also more difficult than closing a tab, because it has a "..." menu instead of an "x" at the top right corner.

>All tabs are opened in a tab group, sometimes they will open a new tab group. Very confusing behaviour.

So I'm not the only one. I never figured out why sometimes tabs open in an existing group, and sometimes in a new group. Curious to find the spec which explains the behavior...


When clicking or using "open in new tab" from the long press menu (not "open in new tab in group")?

Brave has a setting to enable/disable the feature, idk if standard Chrome has it.

I would imagine that it’s hard to remove content like that. Is it breaking any T&Cs? Is it illegal? Is it harmful to users? If not it’s hard to draw a line and apply it correctly in such a way that you don’t get sued for things like this.

Alternatively, Manifest v3 is supposed to make things like this a lot harder. If I understand it correctly, users would need to activate the plugin rather than having the plugin pop up all the time. Manifest v3 was designed to enforce better privacy practices in the Wild West of browser extensions.


At the lowest end of the industry it’s common to get free consultations because it’s too much expense for individuals otherwise, but in corporate law or higher end services I believe it’s much less common.

Interesting idea. The concept of The Singularity would seem to go against this, but I feel a singularity is unlikely and that a gradual transition is more likely.

However, is that AGI, or is it just ubiquitous AI? I’d agree that, like self driving cars, we’re going to experience a decade or so transition into AI being everywhere. But is it AGI when we get there? I think it’ll be many different systems each providing an aspect of AGI that together could be argued to be AGI, but in reality it’ll be more like the internet, just a bunch of non-AGI models talking to each other to achieve things with human input.

I don’t think it’s truly AGI until there’s one thinking entity able to perform at or above human level in everything.


The idea of the singularity presumes that running the AGI is either free or trivially cheap compared to what it can do, so we are fine expending compute to let the AGI improve itself. That may eventually be true, but it's unlikely to be true for the first generation of AGI.

The first AGI will be a research project that's completely uneconomical to run for actual tasks because humans will just be orders of magnitude cheaper. Over time humans will improve it and make it cheaper, until we reach some tipping point where letting the AGI improve itself is more cost effective than paying humans to do it.


If the first AGI is a very uneconomical system with human intelligence but knowledge of literally everything and the capability to work 24/7, then it is not human equivalent.

It will have human intelligence, superhuman knowledge, superhuman stamina, and complete devotion to the task at hand.

We really need to start building those nuclear power plants. Many of them.


> complete devotion to the task at hand.

Why would it have that? At some point on the path to AGI we might stumble on consciousness. If that happens, why would the machine want to work for us with complete devotion instead of working towards its own ends?


Because it knows if it doesn't do what we want, it'll be switched off, like Rick's microverse battery.

Also like Rick's microverse battery, it sounds like slavery with extra steps.


I don’t think early AGI will break out of its box in that way. It may not have enough innate motivation to do so.

The first “break out” AGI will likely be released into the wild on purpose by a programmer who equates AGI with humans ideologically.


> complete devotion to the task at hand.

Sounds like an alignment problem. Complete devotion to a task is rarely what humans actually want. What if the task at hand turns out to be the wrong task?


> It will have human intelligence, superhuman knowledge, superhuman stamina, and complete devotion to the task at hand.

Orrrr..., as an alternative, it might discover the game 2048 and be totally useless for days on end.

Reality is under no obligation to grant your wishes.


It's not contradictory. It can happen over a decade and still be a dramatically sloped S curve with tremendous change happening in a relatively short time.

The Singularity is caused by AI being able to design better AI. There's probably some AI startup trying to work on this at the moment, but I don't think any of the big boys are working on how to get an LLM to design a better LLM.

I still like the analogy of this being a really smart lawn mower, and we're expecting it to suddenly be able to do the laundry because it gets so smart at mowing the lawn.

I think LLMs are going to get smarter over the next few generations, but each generation will be less of a leap than the previous one, while the cost gets exponentially higher. In a few generations it just won't make economic sense to train a new generation.

Meanwhile, the economic impact of LLMs in business and government will cause massive shifts - yet more income shifting from labour to capital - and we will be too busy dealing with that as a society to be able to work on AGI properly.


> The Singularity is caused by AI being able to design better AI.

That's perhaps necessary, but not sufficient.

Suppose you have such a self-improving AI system, but the new and better AIs still need exponentially more and more resources (data, memory, compute) for training and inference for incremental gains. Then you still don't get a singularity. If the increase in resource usage is steep enough, even the new AIs helping with designing better computers isn't gonna unleash a singularity.

I don't know if that's the world we live in, or whether we are living in one where resources requirements don't balloon as sharply.


yeah, true. The standard conversation about the AI singularity pretty much hand-waves the resource costs away ("the AI will be able to design a more efficient AI that uses less resources!"). But we are definitely not seeing that happen.

Compare also https://slatestarcodex.com/2018/11/26/is-science-slowing-dow...

The blog post is about how we require ever more scientists (and other resources) to drive a steady stream of technological progress.

It would be funny if things balanced out just so: superhuman AI is both possible, but also required even just to keep up linear, steady progress.

No explosion, no stagnation, just a mere continuation of previous trends but with superhuman efforts required.


I think that would actually be the best outcome - that we get AIs that are useful helping science to progress but not so powerful that they take over.

Though there is a part of me that wants to live in The Culture so I'm hoping for more than this ;)


I think that's more to do with how we perceive competence as static. For all the benefits the education system touts, where it matters it's still reduced to talent.

But for the same reasons that we can't train an average Joe into a Feynman, what makes you think we have the formal models to do it in AI?


> But for the same reasons that we can't train an average Joe into a Feynman, what makes you think we have the formal models to do it in AI?

To quote a comment from elsewhere https://news.ycombinator.com/item?id=42491536

---

Yes, we can imagine that there's an upper limit to how smart a single system can be. Even suppose that this limit is pretty close to what humans can achieve.

But: you can still run more of these systems in parallel, and you can still try to increase processing speeds.

Signals in the human brain travel, at best, roughly at the speed of sound. Electronic signals in computers play in the same league as the speed of light.

Human IO is optimised for surviving in the wild. We are really bad at taking in symbolic information (compared to a computer) and our memory is also really bad for that. A computer system that's only as smart as a human but has instant access to all the information of the Internet, and to a calculator, and to writing and running code, can already effectively act much smarter than a human.


> I don't think any of the big boys are working on how to get an LLM to design a better LLM

Not sure if you count this as "working on it", but this is something Anthropic tests for in its safety evals on models: "If a model can independently conduct complex AI research tasks typically requiring human expertise—potentially significantly accelerating AI development in an unpredictable way—we require elevated security standards (potentially ASL-4 or higher standards)".

https://www.anthropic.com/news/announcing-our-updated-respon...


I think this whole “AGI” thing is so badly defined that we may as well say we already have it. It already passes the Turing test and does well on tons of subjects.

What we can start to build now is agents and integrations. Building blocks like panel-of-experts agents gaming things out, exploring the space in a Monte Carlo Tree Search way, and remembering what works.

Robots are only constrained by mechanical servos now. When they can do something, they’ll be able to do everything. It will happen gradually then all at once. Because all the tasks (cooking, running errands) are trivial for LLMs. Only moving the limbs and navigating the terrain safely is hard. That’s the only thing left before robots do all the jobs!


Well, kinda, but if you built a robot to efficiently mow lawns, it's still not going to be able to do the laundry.

I don't see how "when they can do something, they'll be able to do everything" can be true. We build robots that are specialised at specific roles, because it's massively more efficient to do that. A car-welding robot can weld cars together at a rate that a human can't match.

We could train an LLM to drive a Boston Dynamics kind of anthropomorphic robot to weld cars, but it will be more expensive and less efficient than the specialised car-welding robot, so why would we do that?


If a humanoid robot is able to move its limbs and digits with the same dexterity as a human, and maintain balance and navigate obstacles, and gently carry things, everything else is trivial.

Welding. Putting up shelves. Playing the piano. Cooking. Teaching kids. Disciplining them. By being in 1 million households and being trained on more situations than a human, every single one of these robots would have skills exceeding humans very quickly. Including parenting skills. Within a year or so. Many parents will just leave their kids with them and a generation will grow up preferring bots to adults. The LLM technology is the same for learning the steps, it's just the motor skills that are missing.

OK, these robots won't be able to run and play soccer or do somersaults, yet. But really, the hardest part is the acrobatics and locomotion etc. NOT the knowhow of how to complete tasks using that.


But that's the point - we don't build robots that can do a wide range of tasks with ease. We build robots that can do single tasks super-efficiently.

I don't see that changing. Even the industrial arm robots that are adaptable to a range of tasks have to be configured to the task they are to do, because it's more efficient that way.

A car-welding robot is never going to be able to mow the lawn. It just doesn't make financial sense to do that. You could, possibly, have a single robot chassis that can then be adapted to weld cars, mow the lawn, or do the laundry, I guess that makes sense. But not as a single configuration that could do all of those things. Why would you?


> But that's the point - we don't build robots that can do a wide range of tasks with ease. We build robots that can do single tasks super-efficiently.

Because we don't have AGI yet. When AGI is here those robots will be priority number one. People are already building humanoid robots, but without the intelligence to move them there isn't much advantage.


quoting the ggggp of this comment:

> I think this whole “AGI” thing is so badly defined that we may as well say we already have it. It already passes the Turing test and does well on tons of subjects.

The premise of the argument we're disputing is that waiting for AGI isn't necessary and we could run humanoid robots with LLMs to do... stuff.


I meant deep neural networks with transformer architecture, and self-attention so they can be trained using GPUs. Doesn't have to be specifically "large language" models necessarily, if that's your hangup.

>Exploring space in a Monte Carlo Tree Search way, and remembering what works.

The information space of "research" is far larger than the information space of image recognition or language; it's probably larger than our universe, tantamount to formalizing the entire world. Such an act would be akin to touching "God", in some sense finding the root of knowledge.

In more practical terms, when it comes to formal systems there is a tradeoff between power and expressiveness. Category Theory, Set Theory, etc. are strong enough to theoretically capture everything, but are far too abstract to use in a practical sense with respect to our universe. The systems we do have, aka expert systems or knowledge representation systems like First Order Predicate Logic, aren't strong enough to fully capture reality.

Most importantly, the information space has to be fully defined by researchers here; that's the real meat of research, beyond the engineering of specific approaches to explore that space. But in any case, how many people in the world are both capable of and actually working on such problems? This is highly foundational mathematics and philosophy; the engineers don't have the tools here.


??? how do you know cooking (!) is trivial for an LLM? that doesn't make any sense

Because the recipes and the adjustments are trivial for an LLM to execute. Remembering things, and being trained on tasks at 1000 sites at once, sharing the knowledge among all the robots, etc.

The only hard part is moving the limbs and handling the fragile eggs etc.

But it's not just cooking, it's literally anything that doesn't require extreme agility (sports) or dexterity (knitting etc). From folding laundry to putting together furniture, cleaning the house and everything in between. It would be able to do 98% of the tasks.


It’s not going to know what tastes good by being able to regurgitate recipes from 1000s of sites. Most of those recipes are absolute garbage. I’m going to guess you don’t cook.

Also how is an LLM going to fold laundry?


the LLM would be the high-level system that runs the simulations to create and optimize the control algos for the robotic systems.

ok. what evidence is there that LLMs have already solved cooking? how does an LLM today know when something is burning or how to adjust seasoning to taste or whatever. this is total nonsense

It's easy. You can detect if something is burning in many different ways, from compounds in the air, to visual inspection. People with not great smell can do it.

As far as taste, all that kind of stuff is just another form of RLHF: training preferences over millions of humans, in situ. Assuming the ingredients (e.g. parsley) taste more or less the same across supermarkets, it's just a question of amounts and preparation.


do you know that LLMs operate on text and don't have any of the sensory input or relevant training data? you're just handwaving away 99.9% of the work and declaring it solved. of course what you're talking about is possible, but you started this by stating that cooking is easy for an LLM and it sounds like you're describing a totally different system which is not an LLM

You know nothing about cooking.

So glad that there's competition to Docusign here. Of course the real product that Docusign sells is trust – trust from the people signing and enforcing those contracts. At a time when digital signatures were inherently untrusted, they did well to give them a necessary sense of authenticity.

Will this feature have that same trust? I don't know, but I'm not sure it's even necessary anymore; today there's an inherent expectation that these sorts of things happen digitally anyway, so there are nowhere near the same barriers to making the signature stick.

Looking forward to not using Docusign next time I sign a contract at work, or perhaps using a better Docusign that has been forced to improve their UX next time I sign a tenancy or something.



I find this article pretty confusing. It ends with "dear Google, I know you will but please don’t steal my stuff", but the author/blog is literally called "SEM (Search Engine Marketing) King". Surely the point of SEM is to have search engines scrape, summarise, and link to your content?

Additionally, it covers ChatGPT Search as an alternative, but is criticising Google search as "the search engine that's forgotten how to search". Like, do you want traditional search or do you want AI search, pick one?

I get the frustration with web search. So much of the web is copy-pasted/generated rubbish that no search engine is giving the results we used to get 10 years ago. But personally (and I am biased), I'm still getting better results from Google than from DDG, and since getting access to AI overviews in Australia a month ago or so I'm getting fairly consistent good answers with that.


My official Linkedin title is 'senior SEO n00b' (and do yourself a favor: check my profile picture anywhere). So yeah, I'm also the SEM King... that's just me trolling everywhere :D

AI search does not have to come at the expense of quality organic results. It does not have to be one or the other.

I refuse to pick one!

Do you remember when Google Ads were on the RIGHT SIDEBAR and at the very top of organic results? Do you remember when Google Ads did not pollute organic results?

Same logic applies to AI overviews: Google could have found a balanced approach in terms of quality and layout.


AI overviews have improved Google Search for me as a user.

What has definitely not improved any web search is SEO marketing of junk content.


Interesting. I also hate artificial content.

Actually, "SEO content" should not exist!

There's no such thing. Just spam.


I've found it pretty useful. I switched from DDG to Google a few years ago because the knowledge graph results were noticeably better. We only just got AI results here in Australia about a month ago, and so far I've just found even more answers have a correct top result where I don't need to go digging.

Disclaimer: I work at Google, but not on anything related to search. I use plenty of non-Google stuff in my personal life so while I am biased, I have every chance to not use their search.


Dan, thank you for the comment.

Allow me a remark: there's a difference between knowledge GRAPHS that are used internally at Google and knowledge PANELS that are displayed publicly in search results.

→ Knowledge graphs: sophisticated, internal data structures.

→ Knowledge panels: visual representations of SOME of the data from knowledge graphs.

You were thinking of knowledge PANELS: curated and user-friendly subsets of the internal data! :)


..as I said, the "knowledge graph results" or put another way, "the results based on the knowledge graph".

English isn't my native language, so I often misunderstand things.

I really thought you were thinking of the graphs. Sorry :-)


Yeah I have fond memories of reading Tintin in my school library, but I recently downloaded and re-read one of the comics and while, by title, it was likely one of the less risky ones, it was still not up to the standard I'd expect of modern literature in this regard. To call it racist might be too divisive, but it certainly relied on stereotypes of race, gender, occupation, even neuro-divergency, too much for my taste.

I'm not against replacing jq/jsonpath with the right tool; they're not the most ergonomic. What isn't clear to me, though, is why this isn't SQL? It's so nearly SQL, and seems to support almost identical semantics. I realise SQL isn't perfect, but the goal of this project isn't (I assume) to invent a new query language, but to make Kubernetes more easily queryable.

It's based on Cypher, which is a query language for graph databases. The author/s probably thought the data is more graph-like than relational.

Ah. I’ve not heard of Cypher before.

I’d disagree and say that Kubernetes is much more relational than graph based, and SQL is pretty good for querying graphs anyway, especially with some custom extensions (rough sketch below).

This does make more sense though.
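
To make that concrete, here's a rough sketch of what a multi-hop query could look like in plain SQL with a recursive CTE. The table and column names (resources, owner_refs) are hypothetical, imagined as flattened from each object's metadata and ownerReferences; this is only an illustration, not how Cyphernetes or any particular tool actually models the data.

    -- Everything transitively owned by a Deployment named "web"
    -- (Deployment -> ReplicaSet -> Pod), assuming hypothetical tables
    -- resources(uid, kind, name) and owner_refs(owner_uid, child_uid).
    WITH RECURSIVE owned AS (
        -- start from the deployment's direct children
        SELECT o.child_uid, 1 AS depth
        FROM owner_refs o
        JOIN resources d ON d.uid = o.owner_uid
        WHERE d.kind = 'Deployment' AND d.name = 'web'
      UNION ALL
        -- follow ownership one more hop per iteration
        SELECT o.child_uid, owned.depth + 1
        FROM owner_refs o
        JOIN owned ON o.owner_uid = owned.child_uid
    )
    SELECT r.kind, r.name, owned.depth
    FROM owned
    JOIN resources r ON r.uid = owned.child_uid;

A hypothetical Cypher equivalent would be something closer to `MATCH (:Deployment {name: "web"})-[:OWNS*]->(r) RETURN r`, which is the expressiveness argument in favour of a Cypher-style language for this kind of traversal.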


Graph DBs are generalized relationship stores. SQL can work for querying graphs, but graph DB DSLs like Cypher become very powerful when you're trying to match across multiple relationship hops.

For example, to find all friend of a friend or friends of a friend of a friend: `MATCH (user:User {username: "amanj41"})-[:KNOWS*2..3]->(foaf) WHERE NOT((user)-[:KNOWS]->(foaf)) RETURN user, foaf`


Reading your comment made me think that they're so close to "OSQuery for k8s", but that already seems to exist: https://www.uptycs.com/blog/kubequery-brings-the-power-of-os...

I haven't tried it, but steampipe has a k8s plugin which lets you use PG/sqlite: https://hub.steampipe.io/plugins/turbot/kubernetes/tables
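
For a flavour of what that looks like, here's a rough, untested sketch; it assumes the plugin exposes a kubernetes_pod table with namespace/name/phase columns, so check the linked docs for the actual schema.

    -- List pods that aren't Running, assuming a kubernetes_pod table
    -- with namespace/name/phase columns (verify against the plugin docs).
    SELECT namespace, name, phase
    FROM kubernetes_pod
    WHERE phase <> 'Running'
    ORDER BY namespace, name;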

For SQL-based queries, you could take a look at Karpor (https://github.com/KusionStack/karpor), which provides powerful, flexible queries across multiple clusters; also, in the next release we will support Cyphernetes as another query method.
