
You were being overpessimistic: Python's GC handles cyclic references perfectly well.


A is indented with a tab.

B is indented with a tab, plus the requisite number of spaces to line up with the above line.

C is indented with a tab.


Does anyone actually do that? I've never noticed that style in any code I've read, but maybe that just means it works.


I don't think anyone manually counts out their tabs and spaces, but I think every major text editor has a mode or plugin to make this automatic.


> My girlfriend is MOSTLY supportive but I don't dump everything on her. Learn to deal with stuff yourself and you'll be happier.

Learning to deal with stuff yourself doesn't preclude asking your partner for help with making a decision that can impact both of you for the rest of your lives.


You do what you like doing, and he'll do what he likes doing. This isn't an interesting argument.


It's a discussion, not an argument. Why would you marry someone if you couldn't talk to them about decisions like this? The whole damn point of a life partner is to have someone you can figure out life with.


It's a tough call, realistically speaking.

On the one hand, you'd expect your significant other to be supportive. But on the other hand, your significant other may be used to a certain kind of lifestyle, and may not take too kindly to losing that lifestyle.


Linear types: "you must use this value exactly once."

Affine types: "you can use this value zero times or once, but you can't use it more than once."

As an example, compare two languages: one with linear types and one with affine types. Let's say each has a "File" type, a function "open()" that creates a file (linear or affine as appropriate), and a function "close()" that consumes the file. Then this code would be legal in both languages:

    let f: File = open()
    close(f)
Note that "f" is used once and only once, at which point the value is consumed. In a language with affine types, this is also kosher:

    let f: File = open()
    // Do nothing
This fails to compile in the language with linear types because "f" is never used. Finally, this code is never legal in either language:

    let f: File = open()
    close(f)
    close(f)
... because "f" is used twice here, and neither linear types nor affine types allow that.

So affine types give you a little more leeway.
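The three cases above can be sketched in Rust, whose move semantics behave like affine types. `File`, `open`, and `close` are stand-ins for the hypothetical functions in the example, not a real I/O API; `close` returns an id only so the effect is observable.

```rust
struct File {
    id: u32,
}

fn open() -> File {
    File { id: 1 }
}

// Taking the file by value consumes it: the caller can never
// touch it again after this call.
fn close(f: File) -> u32 {
    f.id
}

fn main() {
    // Used exactly once: legal under both linear and affine rules.
    let f = open();
    close(f);

    // Used zero times: legal under affine rules (Rust merely warns,
    // silenced here by the leading underscore), but a linear type
    // system would reject this, since the value is never consumed.
    let _g = open();

    // Used twice: rejected by both. Uncommenting the last line gives
    // a compile error ("use of moved value: `h`").
    let h = open();
    close(h);
    // close(h);
}
```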


So in this example when would you read or write the file?


Depends on the implementation of the language. In a typical functional language with linear types, the functions that mutate an object return a "new value" that is conceptually a modified version of the first value. So fleshing out our example:

    open : () -> File
    close : File -> ()
    write : (File, String) -> File
    let f: File = open()
    let new_f : File = write(f, "Hello world!\n")
    close(new_f)
If this seems clumsy or error-prone, notice that you can't accidentally close() f instead of new_f, and you can't forget to close() new_f, so there is actually very little room for error here.
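The threaded-state version above can also be sketched in Rust. Each operation consumes the `File` and hands back a "new" one, so a stale handle can't be reused; this is an illustration, not a real I/O API, and writes just append to an in-memory buffer so the result is checkable.

```rust
struct File {
    contents: String,
}

fn open() -> File {
    File { contents: String::new() }
}

// Consumes the old file and returns the "new" one.
fn write(mut f: File, s: &str) -> File {
    f.contents.push_str(s);
    f
}

// Consumes the file for good; returns its contents for inspection.
fn close(f: File) -> String {
    f.contents
}

fn main() {
    let f = open();
    let new_f = write(f, "Hello world!\n");
    // `f` was moved into write(): calling close(f) here would not compile.
    assert_eq!(close(new_f), "Hello world!\n");
}
```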


Wild guess: this is monadic-style friendly, right? Threading implicit variables (RNG, etc.) is often used in tutorials.


Yep -- actually, if you look at how the IO type is defined in Haskell, it looks basically like this (there is an object that represents the "state of the world" that gets threaded through computations).
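The "state of the world" idea can be modeled in Rust along these lines: every effectful operation consumes a world token and returns the next one, which forces effects into a single, ordered thread. This is an illustrative model of the concept, not how Rust actually does I/O; the "world" here just records a log so the ordering is visible.

```rust
struct World {
    log: Vec<String>,
}

// An "effect": consumes the current world, returns the next one.
fn put_line(mut w: World, s: &str) -> World {
    w.log.push(s.to_string());
    w
}

fn main() {
    let w0 = World { log: Vec::new() };
    let w1 = put_line(w0, "first");
    let w2 = put_line(w1, "second");
    // Each step consumed the previous world, so the order is fixed:
    // there is no way to run the second effect without the first.
    assert_eq!(w2.log, vec!["first", "second"]);
}
```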


Alright. One distinction, though, is that the vanilla IO monad has no logical constraint on consumption: you could technically thread a resource through a function many times and Haskell wouldn't complain. (This is blurry, but I had something clear in mind; someone at a Haskell meetup showed how to enforce constraints on monads through some kind of phantom-type trick.)


Given how important zero is for simplifying mathematics (compare Roman numerals, which lack it), my intuition is that affine types are much more flexible than linear types without losing their nice properties. Is that right?


What kind of guarantees do you want from your type system? Strictly speaking the only guarantee that linear types give you is that every single linear value will be consumed at some point, which generally means that no resources ever get "leaked". If the idea of accidentally leaking a resource is unacceptable to you then you might find that affine types are too flexible, as languages that implement affine types generally give you some way to leak a value.

In practice I don't know if anyone is overly concerned about this. While Rust doesn't have linear types, I find you have to bend over backwards to introduce a situation where you might accidentally leak a resource.

Tarean makes a good point with respect to fusion, but "guaranteeing" fusion requires extra assistance from the compiler on top of the basic implementation of linear types.


I never quite got the finer points but apparently affine types are great if you want to manage sharing in the presence of mutability and linear types are great if you want to guarantee fusion.


I see. In the case of an affine type, is the unused value eliminated automatically at a later stage?


Not sure what you're asking here. Usually the semantics behind what happens to an unused variable (i.e. is the file automatically closed, etc.) come down to how the language is designed. It could be automatically destroyed (like RAII in C++) or it could just be forgotten.

In a purely functional language with affine types, unused variables are usually just optimized out.
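The RAII option mentioned above looks like this in Rust: an affine value that goes unused still has its destructor run when it falls out of scope, so a "forgotten" file still gets closed. This is a sketch; the atomic flag exists only to make the drop observable, and no real file handle is involved.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static CLOSED: AtomicBool = AtomicBool::new(false);

struct File;

impl Drop for File {
    fn drop(&mut self) {
        // Stand-in for actually closing an OS file handle.
        CLOSED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _f = File; // "opened", then never used
    } // `_f` is dropped here: the "close" runs automatically
    assert!(CLOSED.load(Ordering::SeqCst));
}
```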


Why are you under the impression that what Apple calls "ARC" is substantially different than reference counting?


> I mean... you could have just called it a ref. ref T instead of &T. And then just ref(foo) or foo.ref to get a ref to foo.

Ugh, no. There are references all over the place when I write Rust code -- it would be a serious bummer for me if annotating those types and making those references took up 4 characters for "ref " instead of just the 1 for &.


You can have both, but you use a lot of vocabulary all the time in rust (clone, let, mut, static) and nobody is harping on how the most common words need to be made glyphs to save 2-3 characters per use.


Haskell is a compiled language.


There is no such thing as a "compiled language" or an "interpreted language": compilation and interpretation are properties of language implementations, and Haskell has both interpreters (GHCi, Hugs) and compilers (GHC, JHC).


I have not seen any evidence that GC is the primary bottleneck in Haskell, or that a form of compiler-driven manual memory management would be faster.


GC has non-zero overhead compared with linearly typed deallocations, but that's not the main point. The title mentions predictability, and one of the big blockers to using almost any GC'd language is the lack of predictability in runtime latency, which is very different from throughput (the property I'd generally associate with the term "bottleneck").

Additionally, predictable performance is about what the programmer can predict (whether you are talking about throughput or latency), and the argument is that linear types make it much easier to reason about the performance of a system, hence making it more predictable.


IANA expert, but as far as I can tell...

1) When it comes to sources of unpredictability, laziness is a way bigger problem than garbage collection

2) Assuming that by "GC" we mean "some variation on mark-and-sweep" (rather than meaning "any type of automatic memory management", which I hope is not the usage of anyone in this conversation), GC cost (in terms of CPU-time consumed) doesn't care how many allocations you've done, nor how much garbage it needs to collect. All it cares about is the size of the working set at the time that it runs. As such, all those lingering copies of the old tree nodes (etc, etc) are irrelevant. So I don't see why Haskell's additional allocations should cause GC to be any more of an overhead.


> So I don't see why Haskell's additional allocations should cause GC to be any more of an overhead.

In this kind of garbage collector, time of an individual garbage collection should not depend on the amount of garbage that has been generated. However, producing more garbage should mean more frequent collections, and so more time spent in GC overall.
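A back-of-the-envelope model of that point (my own rough model, not a description of GHC's actual collector): a collection is triggered roughly each time the allocation area fills, so doubling the allocation rate doubles how often you collect even if each individual collection costs the same.

```rust
// Rough model: collections per second scales with allocation rate
// over the size of the allocation area ("nursery").
fn collections_per_second(alloc_rate_mb: f64, nursery_mb: f64) -> f64 {
    alloc_rate_mb / nursery_mb
}

fn main() {
    let base = collections_per_second(100.0, 1.0); // 100 MB/s, 1 MB nursery
    let doubled = collections_per_second(200.0, 1.0);
    // Twice the garbage produced means twice the collections,
    // hence more total time in GC, per the comment above.
    assert_eq!(doubled, 2.0 * base);
}
```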


Under some workloads, GC pauses are definitely the primary bottleneck: http://stackoverflow.com/questions/36772017/reducing-garbage...


> Except you can bound it, and you can escape it that way. IE "not gotten anywhere after 30 iterations, so don't know".

It should go without saying that this system is not Turing-complete anymore if you bound the runtime.


That is correct. The whole point is that we can make practical, real-world tradeoffs and break out of the shackles of theoretical constraints.


But you aren't bounding the runtime of the thing. You are bounding the runtime of your abstract interpreter of the thing.


As someone points out, no system we use is really Turing-complete, since no system has truly infinite memory.

In most systems with finite memory, the halting problem is solvable anyway.

(this includes linear bounded automatons and deterministic/non-deterministic machines with finite memory).

It's just going to take a long ass time.


This is like saying that no programming language is Turing-complete since they are all implemented with finite memory.


Have there been writings on what exactly git's migration strategy to a new hash function will be? Apparently they have a seamless transition designed that won't require anyone to update their repositories, which seems like a pretty crazy promise in the absence of details.


In git the SHA-1 hash is simply an identifier for an object - it's used in the filename, but not stored in the object. And when a commit or tree object references others, it's just a name that can be looked up in the database. So a commit object hashed with SHA-256 can easily reference a previous commit that was hashed with SHA-1.

During the switch, a bit of deduplication may be lost. But the only interesting issue I can see is how git fsck will tell which hash an object was created with when verifying the hash (maybe with length?).
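The length-based guess above is easy to illustrate (this is my own illustration, not git's actual mechanism): a SHA-1 object id is 40 hex characters while a SHA-256 id is 64, so the two forms never collide as names.

```rust
#[derive(Debug, PartialEq)]
enum HashKind {
    Sha1,
    Sha256,
    Unknown,
}

// Classify an object id purely by its hex length.
fn classify(oid: &str) -> HashKind {
    match oid.len() {
        40 => HashKind::Sha1,
        64 => HashKind::Sha256,
        _ => HashKind::Unknown,
    }
}

fn main() {
    assert_eq!(classify(&"a".repeat(40)), HashKind::Sha1);
    assert_eq!(classify(&"a".repeat(64)), HashKind::Sha256);
    assert_eq!(classify("abc"), HashKind::Unknown);
}
```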


Maybe some kind of "git update repo" command?

