Hacker News | past | comments | ask | show | jobs | submit | zedr's comments

> think I could come up with a python example that maps 1:1

My take on it:

    class Stuff:
        def __init__(self):
            self._list = [1, 2, 3, 4]
        
        @property
        def each(self):
            for el in self._list:
                yield el
    
    for item in Stuff().each:
        print(item)
It's even less verbose than the Ruby equivalent in the original article, thanks to the indentation-defined blocks.


AFAIK, there is no reason to use the form “for el in self._list: yield el”, unless you are running Python 3.2 or older.

Why not:

  each = self._list
Or, if you need to be able to re-assign self._list to a new object:

  @property
  def each(self):
    return self._list
Or, if you for some reason need it to return an iterator:

  @property
  def each(self):
    return iter(self._list)
Or, if you really want it to be a generator function:

  @property
  def each(self):
    yield from self._list
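For what it's worth, a quick sketch (mine, not from the article) showing that the return-the-list, return-an-iterator, and generator variants all iterate identically; the difference is only in what the caller gets back:

```python
class Stuff:
    def __init__(self):
        self._list = [1, 2, 3, 4]

    @property
    def each_return(self):   # hands back the mutable list itself
        return self._list

    @property
    def each_iter(self):     # hands back a one-shot iterator
        return iter(self._list)

    @property
    def each_gen(self):      # hands back a fresh generator on each access
        yield from self._list

s = Stuff()
print(list(s.each_return))  # [1, 2, 3, 4]
print(list(s.each_iter))    # [1, 2, 3, 4]
print(list(s.each_gen))     # [1, 2, 3, 4]
```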


Archive.org copy: http://web.archive.org/web/20211021141127/https://www.calver...

My favourite brutalist building is in Belgrade: the Western City Gate https://en.wikipedia.org/wiki/Western_City_Gate


I love Belgrade and have stayed there maybe a dozen times in the past few years. Usually I stayed in Old Belgrade, in the formerly (or perhaps still?) state-run hotel, which is partly staffed by students from a hospitality/catering college attached to it as they do their training.

A couple of years ago I stayed at an AirBnB apartment in New Belgrade. Beautiful apartment, but the building was brutal and huge, built in a long sort of zig-zag that went on and on. The name of the street nearest to the entrance I was using translated as "Anti-fascist struggle street".

Here's a picture of it:

https://belgrade.tips/wp-content/uploads/2021/04/kineski-zid...

And here's the full article (not terribly good) about the building, from which I took the picture link.

https://belgrade.tips/index.php/2021/04/27/belgrade-is-adorn...


I was just about to mention Belgrade; imo it has the best brutalist buildings outside of the former Soviet Union countries, which is (also) why it is on my to-visit-soon list.


Might be helpful for people interested in this, an acquaintance of mine recently started working on a project for an online archive of socialist modernist concrete-based (so not only brutalism in strict sense) architecture (contains photos, info, publications, art projects inspired by the subject, etc.): https://belgradesocialmodernism.com/


very cool!


Belgrade was not a part of the Soviet Union


That's why I said "outside of the former Soviet Union"; inside the former Soviet Union there are cities which can compete with Belgrade on the brutalist front. From what I was able to see on IG, Sankt Petersburg and Kyiv are quite interesting in that regard. To say nothing of the brutalist Soviet bus stops, which deserve an architectural category/style all of their own [1]

[1] https://www.theguardian.com/artanddesign/gallery/2015/sep/02...


Uh...that's what he said? Had it been a part of the USSR, it wouldn't have been outside of it.


I had this for years as my default ringtone.


He could also donate it to the Internet Archive's collection of documents.


> Easily discoverable data, e.g. user ID 3 would be at /users/3. All of the CRUD (Create Read Update Delete) operations below can be applied to this path

Strictly speaking, that's not what REST considers "easily discoverable data". That endpoint would need to have been discovered by navigating the resource tree, starting from the root resource.

Roy Fielding (author of the original REST dissertation): "A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). (...) Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of-band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC's functional coupling].

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). "[1]

1. https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
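To make the contrast concrete, here's a toy sketch (all URIs, keys, and JSON shapes are made up for illustration, not taken from any real API) of a client discovering the user resource by following server-supplied links from the root, rather than hardcoding /users/3:

```python
# Hypothetical server responses: the client never constructs "/users/3"
# from out-of-band knowledge; it only follows link relations and a URI
# template the server itself handed out.
RESPONSES = {
    "/": {"links": {"users": "/users"}},
    "/users": {"links": {"item": "/users/{id}"}},  # URI template from the server
    "/users/3": {"name": "Ada"},
}

def get(uri):
    """Stand-in for an HTTP GET returning parsed JSON."""
    return RESPONSES[uri]

root = get("/")
users = get(root["links"]["users"])
user = get(users["links"]["item"].format(id=3))
print(user["name"])  # Ada
```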


You are quite correct, but by this stage the original definition of REST, which includes HATEOAS, has pretty much been abandoned by most people.

Edit: Pretty much every REST API I see these days explains how to construct your URLs to do different things, rather than treating all URLs as opaque. Mind you, having tried to create a 'pure' HATEOAS REST API, I think I prefer the contemporary approach!


I agree with your preference. I too lean towards a pragmatic approach to REST, which I've seen referred to as "RESTful", as in the popular book "RESTful Web APIs" by Richardson.


I don't understand why the original dissertation is treated like gospel


I don't think it's completely unreasonable to look at a definition like REST and be dogmatic about certain aspects, like HATEOAS, which are arguably absolutely central to the original concept.

However, in retrospect, it might have been a good idea to give what developed from Fielding's original work a clearly different name.


The web has a standard for identifying resources, URIs. One nice thing about URIs is they can be URLs. A single integer id doesn't even identify a user since if I give you 3 you have no idea if it is users/3 or posts/3, or users/3 on Twitter or users/3 on Google, or the number of coffees I have drunk today.


Bill Booth's rule #2: "The safer skydiving gear becomes, the more chances skydivers will take, in order to keep the fatality rate constant".


Known as "moral hazard" in the insurance field.

https://en.wikipedia.org/wiki/Moral_hazard



That too. There's plenty of conceptual overlap.


I've asked myself the question, in other sports:

Should all this equipment really be thought of as safety gear? Because it encourages me to take risks I wouldn't otherwise take. Really it's danger gear.


I'm working on my pilot's license and think about this all the time.

My instructor is insistent that I learn all my flying fundamentals in a 1940s-era Aeronca Champion (stick and rudder, no flaps, no electronic instruments) on the theory that it's much better to learn the fundamentals with the least help possible so that when we add complexity with things like an artificial horizon, VSI, and eventually an EFIS and autopilot, I won't have to depend on those tools to fly.

On the one hand, my flying now is certainly less safe than if I was learning with all those modern safety advancements, but on the other hand I'm learning how to fly purely with an airspeed indicator, an altimeter, a tach, and my own eyes. It's a little scarier, but if god forbid I ever had an electrical system die on me mid flight I can still fall back on my basics.

Ultimately, safety systems paradoxically make you more likely to make bad choices if you don't understand what they do. Autopilot doesn't make you a safer pilot, it allows you to reduce some of the load while you're flying by reducing the amount of fine motor skill you need to engage. A GPS doesn't make you a safer pilot, it allows you to reduce some of the load by not having to fiddle with VORs or rely on ground references for navigation as much. But you have to be ready for either of those systems to fail and still finish your flight safely. I feel much more confident that I could do that now than friends of mine who learned on fancy brand new aircraft with glass cockpits.


I learned to fly long before GPS was a twinkle in anybody's eye. As such I was taught map-reading. And even in places like the 'featureless' Australian outback, it's surprising how detailed modern maps are, and how easy it is to follow them.

But it's so much easier to plug the co-ordinates or name of your destination into the good old GPS, that I fear 99.9% of pilots do their navigation that way.

But what happens if your GPS malfunctions somewhere along the way? Can you get to your destination by map and wristwatch?


Exactly. It's wild how accurate just plain ol' pilotage and wayfinding can be if you learn how to do it. Of course, the learning part is the problem :)


You just need to look at Formula One to find a counterexample to that: 15 deaths in the fifties, 13 in the sixties, but only 3 in the last decade, and that is with more races and higher speeds.


Police officers don't take their body armor into consideration when deciding on use of lethal force in self-defense. Even though IIIA kevlar can stop multiple .44 magnums.


Just because a vest can prevent death does not mean the shot won't incapacitate the officer (preventing any further response) or do serious harm.



Yes and it works.

What is the difference between Cython and mypyc? I think the readme should answer the question of why anyone would want this over Cython.


I haven't worked with Cython, but the difference seems to be that Cython requires using special types in its annotations and doesn't support specializing the standard types like 'list'.

Mypyc aims to be compatible with the standard Python type annotations and still be able to optimize them. So in theory, you don't need to modify your existing type-annotated program. In practice I believe there are limitations currently.


Cython has first class treatment for Numpy arrays. Can Mypyc generate machine optimized code for chomping Numpy arrays element-wise?


I don’t think I want my toolchain to have first class knowledge of specific libraries...


Python is married to Numpy for scientific computing.


In my opinion it's this sort of short-sighted thinking that has cursed the Python project. "Everyone uses CPython" leads to "let's just let third party packages depend on any part of CPython" which leads to "Can't optimize CPython because it might break a dependency" which leads to "CPython is too slow, the ecosystem needs to invest heavily in c-extensions [including numpy]" which leads to "Can't create alternate Python implementations because the ecosystem depends concretely on CPython"[^1] and probably also the mess that is Python package management.

I'm not sure that the Numpy/Pandas hegemony over Python scientific computing will last. Eventually the ecosystem might move toward Arrow or something else. In this case it's probably not such a big deal because Arrow's mainstream debut will probably predate any serious adoption of Cython, but if it didn't then the latter would effectively preclude the former--Arrow becomes infeasible because everyone is using Cython/Numpy and Cython/Arrow performance is too poor to make the move, and since no one is making the move it's not worth investing in an Arrow special case in Cython and now no one gets the benefits that Arrow confers over Numpy/Pandas.

[^1]: Yes, Pypy exists and its maintainers have done yeoman's work in striving for compatibility with the ecosystem, and still (last I checked) you couldn't do such exotic things as "talking to a Postgres database via a production-ready (read: 'maintained, performant, secure, tested, stable, etc') package".


You are mixing up "how things are implemented" with "stuff that data scientists interact with."

Arrow is a low-level implementation detail, like BLAS. "Using" Arrow in data science in Python would mean implementing an Arrow-backed Pandas (or Pandas-like) DataFrame.

Your rank-and-file data scientist doesn't even know that Arrow exists, let alone that you can theoretically implement arrays, matrices, and data frames backed by it.

If you want to break the hegemony of Numpy, you will have to reimplement Numpy using CFFI instead of the CPython C API. There is no other way, unless you get everyone to switch to Julia.


Scientists are typically not trained computer scientists. They neither care about nor appreciate these technical arguments. They have two datasets, A and B, and want their sum, expressed in a neat tidy form.

C = A + B

Python with Numpy perfectly services just that need. We all have our grief with the status quo, but Python needs data processing acceleration from somewhere. In my view, Python needs to implement a JIT to alleviate 95% of the need for Numpy.
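For concreteness (my own sketch, not from the parent), the "C = A + B" case in Numpy looks like this, with the elementwise loop pushed down into compiled code:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([10.0, 20.0, 30.0])

C = A + B  # elementwise sum; the loop runs in C, not in the Python interpreter
print(C)   # [11. 22. 33.]
```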


Scientists aren't the only players at the scientific computing table these days. There's increasing demand to bring data science applications to market, which implies engineering requirements in addition to the science requirements.

> In my view, Python needs to implement a JIT to alleviate 95% of the need for Numpy.

Numpy is just statically typed arrays. This seems like the best case for AOT compilers, no? I'm all for a JIT as well, but I don't have faith in the Python community to get us there.


JIT works great here too. It would see iteration and the associated mathematical calculations as a hotspot, and optimize only those parts, which is easy since the arrays are statically typed and sized.

I say this as a Computer Scientist at NASA that tends to re-write the scientific code in straight C. But for many workloads, a JIT would make my team more productive, basically for free as a user.


Yes, JIT would work well also, and I would strictly prefer a JIT, but I don’t think we’re likely to see a JIT Python with good ecosystem compatibility in the next decade. Good luck to the people who are using Python these days, but I’m tired of fighting the same major problems we had 15 years ago. Other ecosystems solved those problems and they actually improve materially.


That is why I am so much into Julia, even with its adoption bumps.

The problem is not that Python lacks JITs; rather, it is the community culture of rewriting code in C instead of contributing to JIT efforts.

Personally I just use a JVM/.NET based language, and if I need I can use the same C, C++ and Fortran libraries that Python uses anyway.


Julia was created to tackle problems in applied disciplines (physics, neuroscience, genetics, material engineering, etc.). I was expecting it not to be picked up by your everyday app developer or by the overly abstract functional programmer. As an afterthought, personally I think Julia can do much more than that, I would say it can do at least as much as Python is capable today, but better.


The ecosystem is slowly expanding beyond applied disciplines, because when those people need to code something else, e.g. a Web site for their research data, then as usual they try to use the hammer they already know.


I'm really interested in Julia's performance for general purpose application development. It's great that it can work with large numerical arrays very efficiently, but what about large, diverse graphs of small objects like you commonly find in general purpose application development? I think I want a hybrid between Julia and JVM or something.


Does your team use Numba?


Where able, yes, but its poor treatment of SWIG makes interfacing with standard tooling a royal pain. In many cases, I've rewritten Numba code in Cython or C for this very reason.


https://pythoncapi.readthedocs.io/roadmap.html

The hope is to create a new C API which doesn't expose CPython interpreter details, is easily exposed by interpreters other than CPython, and then port C-based APIs to it. Sadly it seems they aren't making much progress in 2020/2021. And I don't think it will eliminate Cython/Numpy overhead entirely, so Cython adding Numpy-specific features will still improve performance.

Also Pypy now has a compatibility shim for CPython extension modules. But last time I checked, it was slower than CPython for running one of my Numpy-based programs (corrscope), due to interfacing overhead.


Cython was around long before Python got type annotations so they kind of had to come up with their own thing. Cython will also happily compile Python WITHOUT type annotations, you just won't see much of a performance boost.

Even without types, Cython provides a neat way to embed your code and the interpreter into a native executable, and has applications for distributing Python programs on systems that are tricky for Python, like Android and WASM.


> Note the use of cython.int rather than int - Cython does not translate an int annotation to a C integer by default since the behaviour can be quite different with respect to overflow and division.

This seems like an important difference to me. Your regular type annotations can be used.
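As a sketch of what that means in practice (my example, not from the mypyc docs): this is plain Python with only standard typing annotations, it runs unchanged under CPython, and it's the kind of module mypyc can compile as-is:

```python
from typing import List

# Plain CPython code using only standard annotations; a module like this
# needs no modification to be compiled. Note that 'int' here still means
# Python's int (arbitrary precision), not a C int, so overflow semantics
# are preserved.
def total(xs: List[int]) -> int:
    acc: int = 0
    for x in xs:
        acc += x
    return acc

print(total([1, 2, 3]))  # 6
```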


Cython is great, but it (used to?) introduce its own language with its own type syntax.


But that's because Python didn't have type annotations. Now that it has them, cython can just use those instead of its own and developers will get the benefit of being able to compile to C using pure Python.


I am not qualified to make any technical arguments. There's a strong security and tech-managerial argument for using the software that's aligned with the reference implementation. Obviously Cython is currently the better choice for risk-averse organizations that need compiled Python. But I think C-ish level people have a good reason to trust the stability, longevity, and security of a product built by the "most official" Python folks. There would need to be a deeply compelling technological reason to choose Cython, not merely chasing a few wasted cycles or nifty features.

Obviously organizations that don’t manage human lives or large amounts of money can use ‘riskier’ tools without as much worry. This isn’t an argument against cython generally. But I worked at a hospital and wrote a lot of Python, and would not have been able to get the security team to support cython on their SELinux servers without a really good argument. Cython is just an unnecessary liability when your job manages identifiers and medical details on servers accessible on a fairly wide (albeit private) network.


Cython lets you use C structs to speed up memory access, and generally gives you lower-level access.

Note that GraalPython has the C structs memory layout too.


> how would this work at scale?

Homeopathy


Do you have an impacted wisdom tooth? Consider getting it extracted, even if it doesn't appear to be causing any problems. It can be the root cause of chronic migraines.


Kivy: https://kivy.org

Write your app in pure Python. Deploy on desktops and mobile devices.

The main disadvantage is that it does not use the native widget toolkit, although there are projects like KivyMD that attempt to replicate the native look and feel by theming the UI.


What is "the" native widget toolkit supposed to be?

As far as I see it, it doesn't exist, on any platform. Every operating system has multiple drawing APIs and multiple frameworks that build on them. The HTML engines are just another framework.

