Even if autonomous weapon systems ‘perform as intended’, that in no way means they are not an enormous danger.
Secondly, as that is department policy and not a law or regulation, they appear to be saying that the cited directive is presently the only thing standing between the DOD and the use of autonomous weapons.
If that’s the case, how hard is it to change a directive?
This is neat, but I think I would avoid it given the speed with which I can make edits in Vim.
One thing I’ve learned about writing prose vs. code is that you shouldn’t be quick to edit your prose; instead, keep writing until you finish a complete draft. This is supposedly why typewriters and pen and paper give a better creative process. I can’t foresee myself looking for things like autocomplete or pure speed when trying to put thoughts to paper.
It's probably both. We've already achieved superintelligence in a few domains, for example protein folding.
AGI without superintelligence is quite difficult to adjudicate because any time it fails at an "easy" task there will be contention about the criteria.
As someone who thought Google+ doomed Facebook, because everyone already had a Gmail account and Google as their homepage, I learned not to overestimate Google’s abilities.
FB is what it is because of advertising revenue. Google already had a giant advertising business where jettisoning Google+ made no difference to their bottom line.
1. Google had recently exploited their home page to push the Chrome browser, successfully altering the browser market. They pushed anyone visiting Google to Chrome with a popup on the home page. The same opportunity was there for G+, but with updates from friends.
2. Everyone already had a Google account and many millennials were using Google Talk at the time. It appeared Google could undermine the network effects.
3. The UI of G+ appeared better.
4. Facebook had recently released the News Feed, otherwise known as ‘stalker mode’, and people recoiled at the idea of broadcasting their every action to every acquaintance. The Circles idea was a way of providing both privacy and the ability to broadcast widely when needed.
5. Google had tons of money and devoted their world class highly paid genius employees to building a social network.
You can see parallels to each of these in AI now: their pre-existing index of all the world’s information, their existing search engine that you can easily pop an LLM into, the huge lead in cash, etc. They are in a great position, but don’t underestimate their ability to waste it.
Google definitely benefited from being able to push Chrome on the homepage, but it was also a bit of a layup given every other browser completely sucked at the time. Chrome said that browsing the Internet didn't have to be slow and caught MS+Mozilla with their pants down. Safari is still working on pulling theirs back up.
> Safari is still working on pulling theirs back up.
Not sure about this take, given that Chrome’s rendering engine was famously based on Safari’s (WebKit) before Google forked it into Blink. V8 was indeed faster than Safari’s JS engine at the time. Today, however, Safari is objectively faster in both rendering (WebKit) and JS performance (JavaScriptCore).
They caught up in performance but failed at what Apple was historically good at: vertical integration. Safari still sucks, and nobody talks about it because nobody uses it.
I'm with you on this. I've been an early paid Antigravity IDE user. Their recent silent rug pull on quotas, where without any warning you get rate-limited for five days in the middle of a refactor, doesn't just leave users unsatisfied with the product; it enrages them. It actually makes you hate the company.
So is Gemini tbh. It's the only agent I've used that gets itself stuck in ridiculous loops repeating "ok. I'm done. I'm ready to commit the changes. There are no bugs. I'm done."
Google somehow manages to fumble the easiest layups. I think Anthropic et al have a real chance here.
Google's product management and discipline are absolute horsesh*t. But they have a moat, and it's extreme technical competence. They own their infra from the hardware (custom ASICs, their own data centers, global intranet, etc.) all the way up to the models and the product platforms to deploy them in. To the extent that making LLMs solve real-world problems is a technical problem, landing Gemini is absolutely in Google's wheelhouse.
You are stating generalities when more specific information is easily available.
Google has AI infrastructure that it created itself, as well as competitive models, demonstrating technical competence in not-legacy-at-all areas, plus a track record of technical excellence in many areas both practical and research-heavy. So yes, technical competence is definitely an advantage for Google.
Just imagine how things change when Google realizes they can leverage their technical competence to have Gemini build competent product management (or at least something that passes as comparatively competent, since their bar is so low).
I use Claude every day. I cannot get Gemini to do anything useful, at all. Every time I've tried to use it, it has just failed to do what was required.
Three subthreads up you have someone saying Gemini did what Claude couldn't for them on some 14-year-old legacy code issue. It seems you can't really use people's prior success with their problem as an estimate of what your success will be like with your problem and a tool.
People and benchmarks are using pretty specific, narrow tests to judge the quality of LLMs. People have biases, benchmarks get gamed. In my own experience, Gemini seems to be lazy and scatter-brained compared to Claude, but shows higher general-purpose reasoning abilities. Anthropic is also obviously massively focusing on making their models good at coding.
So it is plausible that Claude shows significantly better coding ability on most tasks, while Gemini's better general reasoning proves useful on coding tasks that are complicated and obscure.
Hard to bet against Hassabis + Google's resources. This is in their wheelhouse, and it's eating their search business and refactoring their cloud business. G+ seemed like a way to get more people to Google for login and tracking.
That's pretty telling: in search and ad placement on the web, where it matters, OpenAI has had no impact, or its impact is muted and offset by Google's continued market power and increased demand for its ad space.
A couple of months ago things were different. Try their stronger models. Gemini recently saved me from a needle-in-a-haystack problem with buildpacks and Linux dependencies on a 14-year-old B2B SaaS app I was solving a major problem for; it figured out the solution quickly after I had worked on it for hours with Claude Code. I know it's just one story where Gemini won, and I have really enjoyed using Claude Code, but Google is having some success with the serious effort they're putting into this fight.
They recently replaced “define: word” (or “word meaning”) results with an “ai summary” and it’s decidedly worse. It used to just give you the definition(s) and synonyms for each one. Now it gives some rambling paragraphs.
My Google gives me the data from OUP for “word meaning” and doesn't show any AI, and it opens the translator for “word meaning <language>” queries. It is really fast and convenient.
I think they had no choice but to release that AI before it was ready for prime time. Their search traffic started dropping after ChatGPT came out, and they risked not looking like a serious player in AI.
I thought it was a far superior UI to facebook when it launched. I tried to use it but the gravity of the network effect was too strong on facebook's side.
In the end I'd rather if both had failed. Although one can argue that they actually did. But that's another story.
I very much wanted Google Plus to succeed. Circles was a great idea in my opinion. Google Plus profiles could be the personal home page for the rest of us but of course, Google being Google...
That being said, tying bonuses for the whole company on the success of Google+ was too much even for me.
Everything is obviously DOA after it dies. I also thought it wouldn't last, but it wouldn't be the first or last tech company initiative that lived on long after people thought it would die. Weird things happen. "Obviously" isn't a good filter.
It was a little different. Facebook was eventually (after Harvard-only) wide open to college email holders, so it wasn't some exclusive club that kept out the people you wanted in. It did keep your parents and your lame younger sibling out, though. You could immediately use it with your friends. No invite nonsense like with G+.
I still don’t see your evidence for these claims. HFCS is also known as glucose-fructose syrup; it’s not all fructose, but typically either 42% or 55%. Glucose and fructose often go together in your body, so the signal would be there. In your gut, sucrase breaks sucrose (table sugar) into glucose and fructose. In drinks, when sucrose is exposed to CO2 and other acids, it turns into fructose and glucose before it even hits your gut!
So if you are saying fructose is bad, you are saying table sugar is bad in much the same way and that fruits like apples which are high in fructose would be problematic.
- https://pubmed.ncbi.nlm.nih.gov/23181629/: countries with higher HFCS availability had higher T2D prevalence, and the association persisted after adjusting for country-level BMI and other factors.
> Glucose and Fructose often go together in your body, so the signal would be there. In your gut, sucrase breaks sucrose (table sugar) into glucose and fructose. In drinks, when sucrose is exposed to CO2 and other acids it turns into fructose and glucose before it hits your gut!
Nothing in this contradicts me. But the higher the ratio of fructose to glucose, the more hepatic processing and the fewer hormonal signals, leading to increased ingestion.
It's not that one is objectively "bad and only bad"; it's that our metabolism is not tuned to such a heavy fructose-to-glucose ratio.
How do you know that the followers are not bots? It seems that on platforms with a clear bot problem like X, it would be relatively easy to hit that 1M target if you are willing to pay a bot farm.
If reddit was a squeaky clean place, or if I could pick certain subs, maybe I would be interested, but I really wouldn't want ALL of reddit on my machine even temporarily.
The torrent has data for the top 40,000 subreddits. Thanks to Watchful1 splitting the data by subreddit, you can download only the subreddit you want from the torrent.
How were you using it? I have only ever used Wise for bank transfers. There are travel credit cards without any foreign transaction fees and that’s what I always use.
I’ve used it like a debit/credit card: on my phone and as a physical card for tap-and-go on transport. I’ve used it for booking accommodation online too.
Fees are low or nonexistent, and conversion rates are good.
There is a gulf between the two, and that’s what is sacrificed on the FW13. I’m not saying someone can’t decide to prioritize modularity, storage, and repairability over performance, but there is a ‘price’ to making that choice.
The MBA scores about 20% better on the single-thread benchmark. It's better, but is it that significant?
Especially since the MBA has no active cooling: once thermal throttling kicks in, the FW13 will keep chugging along. The MBP solves that issue, albeit at a significantly higher price.
Then again, the amount of RAM the FW13 can take will also help in many cases.
Right, I was going to bring up real world over geekbench.
For example, in the real world, you’ve got to run most PC games on a Mac through CrossOver, with significant performance implications, or have them not work at all, whereas modern Linux/x86 is nearly fully PC-game compatible, and AMD's integrated graphics are much more game-optimized than Apple’s.
The arbitrary spec limits on Apple systems also get in your way. Want 4TB of storage? Want more than 32GB of RAM? You have to upgrade to a MacBook Pro even if you don’t want all the other features and expense of the Pro model.
Is all you want a USB-C port on the right side, or an SD card reader? Pony up the extra $600 for the 14” MacBook Pro.
A Dodge Neon SRT4 is faster than a lot of BMWs but it doesn’t make it a better car to live with.
This is Apple’s price anchor in action. The base price is essentially not the real price. Anyone who can use the capability of their chips to their fullest will need more RAM and storage. Even casual users will find 256GB tight sometimes. Goodbye, “Optimize storage.”
In practical use, there really isn’t anything a MacBook Air can do that my system can’t, besides battery stamina. Since moving to Linux/x86, gaming has become way easier (goodbye, CrossOver). Programming and containerization are way better on Linux, and I finally have the RAM for it.
I acknowledge Apple’s lead in chips, but that’s only one component of the experience, and it’s not so far ahead that choosing something else is a major detriment.