I go back and forth. I do miss the simplicity of objc at times though. I think in a short amount of time someone can become close to an expert in objc. Swift is already incredibly complicated and there's no end in sight.
I hate how pedantic and useless some of the features of Swift are, pushed down by academics who don't write apps or services themselves.
Simple example:
Objective-C:
if (myObject) {
}
In Swift:
if myObject != nil {
}
Also, optionals in Swift could have been avoided entirely if they had adopted a prototype-based language (basically, objects are never nil). Lua did this, and it is very elegant
But instead we got a half-baked optional system, which is backwards (similar to Java's) and didn't help with the practicality of the language at all, and meanwhile you can still crash an app doing myArray[1]
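(For what it's worth, the usual workaround for the subscript trap is a tiny extension — note the `safe` label is just a community convention, not part of the standard library:)

```swift
extension Array {
    // Returns nil instead of trapping when the index is out of bounds.
    subscript(safe index: Int) -> Element? {
        indices.contains(index) ? self[index] : nil
    }
}

let xs = [10, 20]
xs[safe: 1]  // Optional(20)
xs[safe: 5]  // nil
```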
I love Obj-C, but the Swift version isn't as bad as you say:
if let myObject {
// myObject is non-nil in here
}
The Swift version is also using first-class optionals. In Obj-C there is a very real chance you'll confuse `NULL` with `nil` or `0`, or that you'll message `nil` and silently get `nil` back.. and in well-built software you have to guard against that.
Aside: Obj-C is narrowly focused on adding objects (in the Smalltalk sense) to C whereas Swift is trying to deliver a compiler and language with memory safety _guarantees_... Turns out that means you need a lot more language. Not to mention the `async` syntax/feature explosion.
Obj-C is "hippie" and Swift is "corporate suit" + "we're doing serious work here!"
Finally I want to say: I believe Obj-C was a huge competitive advantage and secret weapon that let Apple deliver an OS with so much more built-in functionality than any competitor for years and years. (Obj-C is great for system APIs) That's under-appreciated.
Even with that, there is nothing stopping you from accidentally using [i]. Also there are just a ton of Swift APIs and bridged APIs that take an index and then crash… for full coverage you would need hundreds of safe wrappers… (doing what you propose at least gives you some peace of mind, though)
Also Swift has a lot of other areas where it just lacks any safeguards… Memory issues are still a thing. It’s using ARC under the hood after all.
Infinite recursion is still a thing (not sure if this would even be detectable - probably not).
Misuse of APIs.
And it introduces new issues: which methods are being called depends on your imports.
In my experience Swift lulls you into a false sense of safety while adding more potential safety issues and “only” solving some of the less important ones. objc has nullability annotations as well, which can warn you if used appropriately. objc also has lightweight generics. In practice this is all you need.
> And it introduces new issues: which methods are being called depends on your imports.
also, depending on how you cast it, it will call the method of the cast type, not the actual one on the instance (which completely caught me off-guard when i started swift)
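A minimal sketch of that surprise (names are made up): when a method exists only in a protocol extension and is not a protocol requirement, the static type of the expression — including a cast — decides which implementation runs:

```swift
protocol Greeter {}

extension Greeter {
    func greet() -> String { "extension greet" }
}

struct Friendly: Greeter {
    func greet() -> String { "Friendly greet" }
}

let f = Friendly()
f.greet()               // "Friendly greet" — static type is Friendly
(f as Greeter).greet()  // "extension greet" — the cast selects the extension's method
```

Declaring `greet()` as a requirement inside the protocol itself restores dynamic dispatch.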
> objc also has lightweight generics. In practice this is all you need.
i feel this too sometimes; sometimes simple really is best... tho i think some of these decisions around complexity are to allow for more optimization so c++ can be chucked away at some point...
I don't think objc has the equivalent of a null pointer exception. You can freely send messages to a nil object. Since ARC, it is rare, at least in my experience, to run into any memory related issues with objc.
That is an API choice from Apple that isn't something inherent to objc. This is true of any method. It is up to the person who wrote it to decide how to handle a nil being passed in.
Frankly, all of this is an API and ABI choice from Apple. It was not the case that sending a message to nil always returned nil/NULL/0 before Apple's Intel transition, and the subsequent introduction of their modern Objective-C ABI. From Apple's 2006 Cocoa coding guidelines:
> If the message sent to nil returns anything other than the aforementioned value types (for example, if it returns any struct type, any floating-point type, or any vector type) the return value is undefined
And from the Intel transition guide:
> messages to a nil object always return 0.0 for methods whose return type is float, double, long double, or long long. Methods whose return value is a struct, as defined by the Mac OS X ABI Function Call Guide to be returned in registers, will return 0.0 for every field in the data structure. Other struct data types will not be filled with zeros. This is also true under Rosetta. On PowerPC Macintosh computers, the behavior is undefined.
This wasn't just a theoretical issue, either. You could run the same Objective-C code on a PPC Mac, an Intel Mac, the iPhone Simulator, and an iPhone – you'd get a zero-filled struct on Intel and the Simulator, while you'd get garbage on PPC and on real iPhone hardware.
This seems only partially correct. If by "they" you mean Germans then yes, heroin was engineered by them, or at least first made commercially available by Bayer. The US government had nothing to do with it. It was marketed as a less addictive alternative to morphine, although I highly doubt anyone who made it actually believed it was safer. I have no source for this but I think it is a safe assumption to make.
The temperance movement was mainly related to alcohol. There were groups who wanted abstinence from everything but that was not its primary focus. They may have played a part in said act but I don't know. They were definitely not the driving force behind it though. Racism played a bigger role than the temperance movement. The government was also aware there was a very real problem with drug addiction.
What percentage? I imagine it is insanely low. The risk-to-reward ratio of making money off a random bag at the airport has got to be as low as, if not lower than, the actual percentage stolen. One thing I've never been worried about is organized crime, or anyone really, stealing my bag at an airport.
My understanding is it's taken as a given that the authorities at US airports aren't bothering to catch baggage / item thieves amongst airport staff. The only exception is when a firearm (or luggage containing a firearm) goes missing.
Not sure that's true. I am by no means gifted in the sense of Terence Tao, or even people much more gifted than me but far less gifted than him, but I did well in school up to a certain point in college. I never really learned how to study until I got fairly far in my education process. I put very little effort into school up until that point. That's when I actually had to put effort in and it was quite a wake up call.
Exactly. Up to some point, you actually do well because you are smart. Then, in the middle of the game, the rules change (from your perspective), and it may catch you by surprise.
It would be much better for the gifted children to attend schools where their effort is visible since the beginning. That is, schools with other gifted children.
For example, in math, my kids didn't learn anything new during their first three years of elementary school, because they already knew numbers and addition at kindergarten age. Yet they were forced to sit there for three years. It would have been better to give them a book to read, or a collection of interesting problems to solve.
> Exactly. Up to some point, you actually do well because you are smart.
No, you do well because you put in the effort. Even trivial effort is still effort. Sometimes it's all that's genuinely needed (quite often in fact!) and it's important to realize this - but not always, and that's important to realize as well.
> my kids didn't learn anything new during their first three years
I think it's almost always possible to revisit even these 'trivial' subjects in more depth. Granted, it's hard to do this in elementary math without some kind of outside involvement - someone to explicitly introduce the nuances. With other subjects this is quite a bit easier.
To your last point, the centipawns thing doesn't make a whole lot of sense from an interpretation perspective because it is so shallow. WDL can give you much more insight into how tame or chaotic things are. A 1 pawn evaluated advantage with a 95% chance to win is wildly different from a similar evaluation and a 50% chance to win. The first position likely has an obvious tactic that leads to a win, the latter may require perfect play for 15 moves that only a computer can calculate.
Also, from a computer perspective, a >=1 pawn advantage is usually sufficient for a computer to win 100% of the time, so it's not really interesting and says very little about whether a person could win 100% of the time.
Yep, exactly. I spent a lot of time trying to figure out better ways for interpreting the evaluations of engines for https://www.schachzeit.com/en/openings/barnes-opening-with-d... and I ended up liking WDL much better than centipawns. A blunder defined in terms of decreasing your chance of winning by such and such percentage is, to me, a much better definition than a blunder losing such and such material. What does that mean? It makes sense to me now, but it took a long while.
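One way to make the relationship concrete is a logistic curve mapping centipawns to an expected score. The 0.004 scale factor below is purely illustrative — engines and sites fit their own curves:

```swift
import Foundation

// Illustrative logistic mapping from a centipawn eval to an expected score
// in [0, 1]. The 0.004 constant is an assumption for this sketch, not a
// value used by Stockfish or lc0.
func expectedScore(centipawns: Double) -> Double {
    1.0 / (1.0 + exp(-0.004 * centipawns))
}
```

Under this curve, +100 cp comes out to only about a 60% expected score, which is one way to see why "up a pawn" alone says little about how winnable a position actually is for a human.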
Relatedly, there is an interesting thing that lc0 has been doing as well, where it takes the contempt concept even further, and can beat you with queen odds. https://lczero.org/blog/2024/12/the-leela-piece-odds-challen... It assumes it is better than you and that it shouldn't just give up because you might be up a knight, rook or even queen.
If there is a 15-move sequence that leads to a guaranteed win, stockfish would not call it a 1 pawn advantage (given sufficient calculation time), instead calling it a won position.
I think you may be mistaking your understanding of stockfish as shallow in that regard.
Where the big differences might emerge is in strategic mid game positions without any clear tactics or forcing moves. There lc0 can somehow "feel" that a position seems better.
One of the biggest issues I ran into years ago was that debugging outside of macOS was a nightmare. Even now, debugging is a terrible experience on a non-trivial project. I am not really sure if it's the size of the projects I've worked on, interop with objc, compiler configs, project configs, or what, but it has always been a bad experience. I used it on/off for a project on Linux and the debugger didn't work at all. That was so long ago I am sure it has changed, but in my experience so far, lldb will stop working at some point. I've worked on large Obj-C and C++ codebases and never ran into any of the problems I've run into with Swift in this area.
Apple really needs to decouple swift debug and profiling tools from Xcode. I've been using vscode for swift work but hate having to switch back to Xcode for the tools.
Wasn't your intention whatever you typed in? That doesn't make you an artist and I don't want to hear the music AI made that you happened to type some words to and hit enter.
Tons of professionals use Logic. Really, you will find money-making musicians using any of the major DAWs. Pro Tools might still be the standard for recording studios but that's likely it.
My point was more that creators will often use more than one tool.
I know Logic is widespread amongst beat producers and songwriters, especially in the US. But you will also often see tracks getting produced on Logic but the final mix then happens on Pro Tools (by professional mixing engineers).
But that's why I explicitly mentioned Logic, I think it's the one pro app from Apple that still deserves the moniker, at least in regards to where it is used. The video stuff not so much anymore.
Logic Pro gets regular updates. I believe most of it is AI-driven nonsense but they are making changes. Flashback Capture was a nice, fairly recent addition, and it's surprising this wasn't implemented sooner. There are also regular bug fixes and performance improvements. I can't speak for the other apps.