One of the very, very few "Show HN"s I would give an A+ for the idea. Even better would be a focus on FTP, e.g., datasets available as bulk data without silly "APIs". In the early days of the internet, posting about such datasets was quite common. Meanwhile I have to continually write flex scanners to transform MySQL and JSON into something usable with kdb (someday maybe I can do all the munging in q). Not to mention all the "scraping". But I am so used to cutting through JavaScript and other cruft that it no longer bothers me.
But someone downthread said the .co.uk version was not blocked (before you unblocked the .com). Are you implying the .co.uk version did not have the same "design elements"?
As to your proposed argument, I think selectively blocking well-reasoned analysis by law professors and letting memes go unblocked makes the most sense for your company.
Probably because this post was getting lots of links for a domain that is otherwise rarely seen on FB? You can read about the challenges of spam-fighting at scale (and people actually getting paid to write Haskell) here: http://www.wired.com/2015/09/facebooks-new-anti-spam-system-...
> Are you implying the .co.uk version did not have the same "design elements"?
Classifiers do weird things, and features you don't expect to be significant can suddenly have a much larger effect than intended. The domain name, as a feature, must've tipped it over the edge.
It's like that Google+ post where a picture of a couple of black people got tagged as "gorillas" by their auto-tagging ML system. No, Google isn't racist, their classifier just hates their PR department.
When something better comes along I'll switch away from using the UNIX shell.
But I am not sure I'll live long enough to see that day.
Meanwhile I am too busy using the shell to engage in that battle with hypertext. It could just be my perception, but after years of practice, I think I am winning.
Anyway, I strongly agree. And I think it takes balls to state this opinion because you will be opposed by so many.
I also think the Bourne shell, which accepts good ole text as input (as someone downthread points out), is my most powerful application. Among other things because it is everywhere, it's relatively small, fast, and seems to have an infinite lifetime; it appears forever protected from obsolescence. It's reliable.
Stating this opinion never fails to draw protest. It's just an opinion. Relax.
One time I stated it to what I thought was a sophisticated audience that I was sure could handle it. Somebody still went bananas, claiming that "make" could do everything the shell can do. I must be wrong but at the time I thought "Doesn't make just run the shell?"
There will always be people who are hell-bent on arguing against plain text. And the Bourne shell. Why is anyone's guess.
Yet no matter how much internet commentators might complain, I doubt these two things are ever going to disappear. They might get buried beneath 20 layers of abstraction, but they will still be there.
Year after year, they just work. And for that I'm thankful.
"Any one language cannot solve all the problems in the programming world and so it gets to the point where you either keep it simple and reasonably elegant, or you keep adding stuff. If you look at some of the modern desktop applications they have feature creep. They include every bell, knob and whistle you can imagine and finding your way around is impossible. So I decided that the shell had reached its limits within the design constraints that it originally had. I said ‘you know there’s not a whole lot more I can do and still maintain some consistency and simplicity’. The things that people did to it after that were make it POSIX compliant and no doubt there were other things that have been added over time. But as a scripting language I thought it had reached the limit."
I have to take a moment to thoroughly agree with you on the shell sentiment, although it's not quite on topic. My zsh has always been my most reliable and most-used tool. Almost every piece of software I've ever used has caused me some headache---GNOME, Firefox, even vim, you name it. zsh never froze, or screwed up my work due to a bug, or anything like that. It always does what I tell it to with almost annoying diligence, and if I don't get the result I wanted, I know it's my fault.

I've never liked using graphical file managers or, God forbid, some graphical renaming tool, to name an example off the top of my head. I feel immense power when I'm at my shell, because whenever I want to manipulate stuff---from the directory structure, to chaining commands together to achieve a task, converting audio between formats, getting some quick info out of big files, etc.---I do it using the shell (well, and also the coreutils, sed, awk and others, but I believe my point stands).

I know it's a basic thing that all people using Unix systems do without thinking much about, but the amount of complex things it can do while staying so minimalist sometimes blows my mind. To me, the simple yet powerful text interface feels almost Zen. And although many shells have come and gone, each with its own quirks and weirdness, the underlying principles have barely changed. It's just that perfect. So here's one for the shell: the quiet hero of Unix :)
I could be wrong but I think beginners learn about -9 because beginners are more likely to start processes that they change their mind about and then cannot stop. Nothing wrong with this in the spirit of experimentation.
For more advanced users more familiar with the processes they start, I would guess -15 should be sufficient most of the time.
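A minimal sketch of that escalation, with a throwaway `sleep` standing in for the runaway process (the 1-second grace period in real scripts, and the process chosen here, are just illustrative):

```shell
# Prefer SIGTERM (-15), which a process can catch and use to clean up;
# reserve SIGKILL (-9) for processes that ignore the polite request.
sleep 60 &          # throwaway background process to practice on
pid=$!

kill -15 "$pid"     # ask politely first
wait "$pid"
status=$?           # 143 = 128 + 15: terminated by SIGTERM

# If the process were still alive at this point, -9 would be the blunt
# fallback that cannot be caught or ignored:
#   kill -9 "$pid"
echo "$status"
```

The exit status convention (128 plus the signal number) is how the shell reports which signal ended the child.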
This is a URL shortener that just redirects to the full URL that has the same number. Easier to type, but it otherwise accomplishes nothing. The server with the content still needs the full URL. All this shorter URL gets you is the full URL.
If you're thinking that memory for the program is less likely to be contiguous, then, yes, that is a way it may end up being less likely. But there are already techniques that do exactly that for systems with virtual memory (see https://plasma.cs.umass.edu/emery/diehard.html). But as long as you have a stack frame contiguous with a previous stack frame, you're susceptible to buffer overflows, and I think that will be likely even without virtual memory. (That is, the stack frames are not that large, and are likely to be well within the contiguous block size for the memory system.)
As long as you have the ability to address memory directly, I would say that it's possible. I'm not 100% sure about how memory is allocated though. Outside of virtual memory, are the actual chunks of memory sequential, or would overflowing an offset get you a chunk of another process? (I imagine the answer might be implementation dependent)
I thought this was quite amusing when I read it back in the late '90s. At one point their proposed shell was called "Monad". I wondered, "Why do they want to do this?"
I have gone to a number of conferences with MS employees over the years, especially MS lawyers, and with respect to software, they often seem like they have been indoctrinated into a cult. As if software outside of MS Windows did not exist, and as if MS held the key to solving all problems. But MS does not offer solutions. They create problems (e.g., for anyone who has to deal with their licensing) and then offer "solutions" to the problems they created. It is like someone who proposes to "simplify" something by introducing more complexity.
The idea of MS embracing a proper shell seemed laughable to me. Windows users expect pointing and clicking (and today they want to be able to touch the screen).
Now fast forward to 2015 and it seems like PowerShell has a significant number of users. I am sure it is helping Windows users greatly. Good for them.
But I still think it is laughable. The name still makes me laugh.
I'd rather use something like Back Orifice (which has a simple UNIX client) to control Windows from a proper shell on a computer running UNIX. Or SSH via Cygwin. Or even SUA which has ksh and tcsh. Alas no simple Bourne shell (better for writing portable scripts).
It's almost better. It's good for automating some system administration tasks. It's not so good for text processing; there are a number of traps and missteps. The object pipelines are great, but have to be held in memory (you can't pipe an intermediate state to disk). By default, text output wraps at the terminal width even when you're writing to a file, which is just stupid and inconvenient for further automatic processing.
In order to write a script, you first have to change your system security settings.
More subtly, because it's comparatively new it's not very dogfooded and there isn't a rich source of examples. UNIX systems usually have lots of shell scripts in /etc for you to learn from.
It has its pros and cons, but in the end it's fundamentally different. PowerShell wouldn't work nearly as well on a UNIX system, where much of the tooling is already just text, provided by many different sources, and uses a much more diverse set of utilities. Text makes a good lowest common denominator in that situation.
"Text" only makes a good lowest common denominator on UNIX because the designers of UNIX decided that ASCII would be the lowest common denominator of the system. The designers of modern Windows decided that MSIL objects would be their lowest common denominator.
If the UNIX designers had chosen some other lowest common denominator, all of these diverse utilities would be communicating over that protocol instead of ASCII. The choice to use "text" came first and the ecosystem came afterward.
> "Text" only makes a good lowest common denominator on UNIX because the designers of UNIX decided that ASCII would be the lowest common denominator of the system. The designers of modern Windows decided that MSIL objects would be their lowest common denominator.
"Text" makes a good lowest common denominator because it's low. Everything, including humans looking at it, can deal with text. Essentially all programming languages on all platforms provide basic primitives analogous to getline(), find_first_of(), substr(), etc. You can take text from Unix and easily do something useful with it on any arbitrary platform.
MSIL assumes an entire infrastructure that isn't universally available. You can't take MSIL from Windows and easily do something useful with it on any arbitrary platform. It requires you to have a solid .NET virtual machine for your operating system, an API for dealing with MSIL, bindings for that API in every language you want to use, etc.
Users of System/360 computers, which used EBCDIC instead of ASCII, might not agree with you about the inherent portability of ASCII-based text interchange. I'm not saying .NET objects are as portable, but it's a continuum and a trade-off.
ASCII and EBCDIC are just different ways of encoding the same information. It's like saying comma-delimited files aren't portable because there are also tab-delimited files and some programs support one or the other. It doesn't matter because there are trivial, widely available utilities to convert between them.
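A quick sketch of that point. The encoding name `EBCDIC-US` is the one GNU iconv uses; other iconv implementations may spell it IBM037, so treat the exact name as an assumption:

```shell
# Round-trip text through EBCDIC and back: the information survives,
# because the two encodings represent the same characters.
printf 'HELLO' | iconv -f ASCII -t EBCDIC-US | iconv -f EBCDIC-US -t ASCII

# Same idea for delimiters: comma-separated to tab-separated is one tr call.
printf 'a,b,c\n' | tr ',' '\t'
```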
There is nothing analogous to convert MSIL to. EBCDIC and ASCII both have a representation for 'A'. Only .NET has "Microsoft.Win32.Registry.CurrentUser.CreateSubKey()".
Can you explain the difference a bit, if you don't mind? (I don't know much about Windows.)
In Linux/Unix I can also pipe audio to /dev/audio for example. And image processing through a sequence of steps by pipes is quite often done. Are these objects coming with default metaparameters to make piping easier? Or are there other things implemented that might be cool?
I have never used PowerShell, so I don't know a lot about the details. But PowerShell pipes .NET objects from one command to the next; these are not just byte streams but objects, with methods and such. It's fundamentally different.
In UNIX, you have bytes, and the underlying system doesn't know what they represent. Most tools assume they're ASCII or UTF-8 (or whatever) encoded text, but that's all they agree on. In practice, you're almost always working with structured data - tables or lists or mappings - but you have to remember that for the output of this program, you want to cut on '=' and take -f4, and first you need to pipe it through head and tail to clean up some junk output[1], and this other thing for that program, and so on.
[1] This is a way to get the `time` field from the output of `ping`.
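A concrete sketch of that footnote, using a canned sample of `ping -c 3` output so it is self-contained (the host and timings are made up; a live ping would be piped the same way):

```shell
# Hypothetical sample of `ping -c 3` output.
ping_output() {
cat <<'EOF'
PING example.com (93.184.216.34) 56(84) bytes of data.
64 bytes from 93.184.216.34: icmp_seq=1 ttl=56 time=11.3 ms
64 bytes from 93.184.216.34: icmp_seq=2 ttl=56 time=11.5 ms
64 bytes from 93.184.216.34: icmp_seq=3 ttl=56 time=11.4 ms

--- example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
EOF
}

# head/tail strip the header line and the statistics block; cut splits
# on '=' and keeps the fourth field, which is the time value.
ping_output | head -n 4 | tail -n 3 | cut -d'=' -f4
```

This is exactly the "remember the junk-cleanup incantation per program" problem the comment describes: the recipe works only for this one tool's output format.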
That's because the object pipeline is functionally analogous to function chaining within a single runtime like in Python or Ruby, not streaming data between arbitrary executables like in Unix pipelines.
This would be a lot clearer if PowerShell advocates compared it to things that it's actually functionally analogous to, but they are trying to position it as a Unix shell competitor.
> You are the same person who tried to falsely claim that cmdlets are not instances of .NET classes
The fact stands: cmdlets can be implemented as .NET classes, but they can also be implemented through script (i.e. not .NET classes) or through manifests, where cmdlets are generated for WMI/CIM classes.
I don't know why you continue to ignore this, or even why it is important. I point out that you are incorrect and point to an authoritative source that demonstrates the point.
The relevant section of the PowerShell specification:
---
... A module can contain one or more module members, which are commands (such as cmdlets and functions) and items (such as variables and aliases). The names of these members can be kept private to the module or they may be exported to the session into which the module is imported.
There are three different module types: manifest, script, and binary. A manifest module is a file that contains information about a module, and controls certain aspects of that module's use. A script module is a PowerShell script file with a file extension of ".psm1" instead of ".ps1". A binary module contains class types that define cmdlets and providers. Unlike script modules, binary modules are written in compiled languages. Binary modules are not covered by this specification.
Windows PowerShell: A binary module is a .NET assembly (i.e.; a DLL) that was compiled against the PowerShell libraries.
---
I will point out when your claims about PowerShell are incorrect. It's an opportunity to learn something. Please address the topic, not the poster.
> Then those generated cmdlets must be instances of .NET classes, because by definition that's what cmdlets are
Nope. The problem is that you searched for confirmation that cmdlets were "just" .NET classes (I really don't get why that is important), and then you dove into documentation for C# programmers and point out how the documentation for C# programmers tells the C# programmer that if he wants to create a cmdlet he can do it with a C# class. D'oh!
When I point to the PowerShell specification and demonstrate how it does not claim that cmdlets are .NET classes, you ignore it.
I point out how it enumerates the 3 different ways to make cmdlet modules: binary, script, and manifest, of which only the binary module variety implements cmdlets as classes.
You could equally well claim that Web Services are just MVC Controllers, because in the documentation for MVC I can find out how to implement web services using MVC controllers.
> ... and they exist within the .NET runtime, which is the relevant issue
Why? If the .NET runtime has become so embedded that the OS is always distributed with the runtime, that important OS functions (like troubleshooting) are implemented using the runtime, and that remote administration is based on the runtime, at what point do you accept the runtime as part of the OS?
.NET IS part of Windows now, and has been since Windows 7 SP1 (IIRC). Troubleshooting packs, guides, etc. are written using PowerShell. When your network gets the hiccups and Windows proactively offers to troubleshoot the network connection, it is PowerShell that is running underneath. The new PackageManager is PowerShell-based.
> Thank you, however, for providing yet another demonstration of how PowerShell advocates consistently resort to handwaving, obfuscation, and false claims in order to try to pretend PowerShell is functionally different than it actually is.
No need for snide remarks. I stand by all of my claims. It is you who selectively quote the C# programmers' documentation out of context. All you have proven is that if a C# programmer wants to create a cmdlet using C#, he must implement it as a class.
PowerShell's object pipeline only exists within the PowerShell/.NET runtime and is functionally analogous to method/function chaining in languages like JavaScript and Ruby. The object pipeline does not exist outside of the PowerShell/.NET runtime.
Pipelines in Unix shells use OS-level functionality to connect standard streams of data, often encoded as text, between arbitrary executables written in any language.
The reason this is confusing is because one of the main tactics used in PowerShell promotion is to disingenuously compare PowerShell's "object pipeline" to Unix pipelines in order to try to make it appear to be more novel than it is, when they should be comparing it to what it's functionally equivalent to, which is function chaining within a single runtime.
> which is function chaining within a single runtime
If PowerShell's pipelines are merely equivalent to function chaining, then you should have no problem replicating the following very simple pipeline with function chaining in Ruby, Python or JavaScript:
cat log.txt -wait | sls "error"
In case you need an explanation of what it is doing: it continuously monitors the log.txt file for new lines, and selects those that contain the substring "error".
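For comparison, the everyday Unix-pipeline analogue (not function chaining) would be `tail -f log.txt | grep error`. A finite sketch of the same filtering, with an illustrative log file made up for the example:

```shell
# tail -f would follow the file as it grows and never terminate; for a
# file that has stopped growing, plain grep shows the same selection.
printf 'ok\nerror: disk full\nok\nerror: timeout\n' > log.txt
grep error log.txt
```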
But I am curious: why is it important whether the pipelines could be implemented using function chaining or not? PowerShell is a shell where I interact with the system through commands. Yes - those commands are not the typical Unix executables - but they act as commands within the shell.
Is your problem that it is not a Unix shell? You could equally well say that Unix shell pipelines are just processes connected by file descriptors. That is factually correct but does not represent the true utility of pipelines.
PowerShell pipelines are not file descriptors. PowerShell commands do not execute in separate processes. But PowerShell commands are versatile and can be combined in a way analogous to Unix shell pipelines where the output of one command is consumed and acted upon by the next command.
The object of an operating system command line shell is to expose the operating system features to the user of the shell. Why does it matter whether it uses Unix file descriptors? Even if PowerShell pipelines were equivalent to "just" chained functions, why would it matter?
I get the point that you need the .NET runtime to run PowerShell, because it is implemented using .NET. But at what point do we consider a runtime part of the operating system? When it is intrinsically distributed with the operating system and cannot be uninstalled?
To underscore how the idea of a UNIX shell working on file descriptors between executables isn't really essential, it's worth noting that bash allows piping between its built-in commands/keywords. Not only can you pipe the aggregate output from a for, if or while construct, you can also pipe the output of bash builtins such as bg, fg and disown.
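A small sketch of both cases (plain bash, nothing exotic; the words and commands are arbitrary examples):

```shell
# Piping the aggregate output of a `for` construct into an external sort:
for f in beta alpha gamma; do echo "$f"; done | sort
# prints alpha, beta, gamma on separate lines

# Piping out of an `if` construct works the same way:
if true; then echo yes; else echo no; fi | tr 'a-z' 'A-Z'
# prints YES
```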
"objects" means data structures. It means you can have some structure with fields, or a list or array or whatever, as the output of one command and the input of the next one, so the next one doesn't have to do character-level parsing.
Of course, this kind of piping is present in nice languages that precede PowerShell.
A one-page Lisp macro will give you left-to-right syntactic sugar for filtering. Clojure has a threading operator, Ruby has cascades of dots: object.{ blah }.foo().bar() ... and so on.
I don't think it is fundamentally different. When I think of a shell I think of pipes and interactivity. What you pass through the pipe is secondary. The fundamental nature of the shell is those two things and everything else is just sugar on top. PowerShell has that and more, so I'd say it is not fundamentally different, and in fact it is objectively better.
The word "shell" in this context already means something. A text shell is a command line interpreter (https://en.wikipedia.org/wiki/Shell_(computing)#Text_.28CLI....). There are many shells that do not use the pipeline convention (e.g., irb), so your attempt to redefine it to promote PowerShell fails from the start.
PowerShell's object pipeline is functionally analogous to function chaining in languages like Ruby, Python, and JavaScript, not Unix pipelines. PowerShell's object pipeline does not exist outside of the .NET runtime and you can not stream objects between arbitrary executables outside of the .NET runtime.
You can't compare an apple to an orange and say one is "objectively better," no matter how hard PowerShell advertisers try to pretend their apple is an orange.
> PowerShell's object pipeline is functionally analogous to function chaining in languages like Ruby, Python, and JavaScript
No, in those languages function results are bound upon returning from the function. PowerShell's pipeline would be more like function chaining where each function is a co-routine that can yield partial results, which would make it considerably more complicated.
So your attempt at trivializing PowerShell fails from the start.
For your information, here is roughly how a PowerShell pipeline is set up, processed and torn down:
1. BeginProcess is invoked for each command in the pipeline, starting with the last command of the pipeline and working towards the first. When each command is initialized this way, it can start receiving objects from the preceding command on the pipeline. Each command may start producing objects already during this phase, as all subsequent commands are ready to process them. If it produces objects, ProcessRecord of the subsequent command is invoked immediately.
2. ProcessRecord is invoked on the first command of the pipeline. If it yields objects, ProcessRecord of the subsequent command is invoked for each object, thus "pushing" objects through.
3. EndProcess is invoked for the first command of the pipeline. If it yields objects, ProcessRecord of the subsequent command is invoked for each object. When EndProcess completes for the first command of the pipeline, EndProcess is invoked for the subsequent command, and so forth until EndProcess has been invoked for all commands of the pipeline.
As you can see, each cmdlet has 3 phases: initialization, processing and completion. All cmdlets (functions?) become active at the same time, with control flow passing to each command (function?) multiple times during each phase. This can be implemented in any Turing-complete language, but function chaining is a lame attempt at trivializing it. Throw in reactive sequences and it may get closer.
Consider how Sort-Object (alias sort) is implemented: it does nothing during BeginProcess; during ProcessRecord it adds to an internal collection but does not yield any objects (yet); during EndProcess it sorts and yields all of the collected records. This is clearly analogous to how sort on a byte/text stream is implemented: when the process starts it does nothing but initialize and start listening on stdin; for each "line" on stdin it adds to an internal collection; and when stdin is closed it sorts the collection and writes the "lines" to stdout. 3 phases.
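The byte/text-stream version of those three phases is easy to see with Unix sort, which cannot emit anything until its input is closed:

```shell
# sort buffers everything from stdin (processing phase), then sorts and
# writes only once stdin is closed (completion phase).
printf '3\n1\n2\n' | sort
# prints 1, 2, 3 on separate lines
```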
Even so, it doesn't really matter how it is implemented; a shell is inherently about how you interact with the system through it.
> PowerShell's object pipeline does not exist outside of the .NET runtime and you can not stream objects between arbitrary executables outside of the .NET runtime.
You are locked into the Unix view of the world and are unable to consider that long-standing assumptions about "the way to do it" may need a refresh. If you want to leverage objects for system management then yes, you need a system-wide object model. Unix was not built with an object model. Windows was sort of built with an object model (handles), but it is not versatile enough as a system-wide management model.
However, the good thing about objects is that they are abstractions which can easily wrap existing operating system concepts, such as files, processes, access control lists, drivers, users, groups, network connections, etc. .NET is a mature object model with a lot of tooling. Building PowerShell so that the abstractions over network adapters, users, etc. are represented by .NET classes enables a lot of tooling and infrastructure right from the beginning.
useerup, virtually everything in your comment is irrelevant handwaving and obfuscation, including a bunch of irrelevant comments about features that are internal to the PowerShell runtime.
No matter how much you want to try to talk around it, PowerShell is a .NET CLI, the object pipeline exists within its .NET runtime, and the commands are instances of .NET classes. Unix shells work primarily with arbitrary executables outside of the shell runtime using OS system calls to create pipelines with standard streams.
Here's the cold hard fact stated yet again: PowerShell's object pipeline is functionally analogous to function chaining in languages like Ruby, Python, and JavaScript, not Unix pipelines.
> Here's the cold hard fact stated yet again: PowerShell's object pipeline is functionally analogous to function chaining in languages like Ruby, Python, and JavaScript, not Unix pipelines.
Ok, then what is the function chaining in Ruby, Python or JavaScript that is functionally analogous to the following:
Whether it is better than the Unix tools may be debatable. (Syntax for basic file operations is a bit verbose.) But it's definitely better integrated into overall system administration than any shell available on Unix.
I know enough and have had enough tangential contact with PowerShell to know that it works on a fundamentally different paradigm, as it does not pipe text, but objects. As such, it's hard to say it's "better" since it's so different. I would hazard that it's better in some respects for some things, and the reverse is true as well. It fits well into Windows, but would not fit well into UNIX systems, where most things are already just text.
I haven't used inferno-shell, es or rc, so I can't comment on them directly. That said, if they are easily comparable to bash, ksh, etc., then you probably owe it to yourself to see what PowerShell is about and why it's fundamentally different.
The commenter is correct that you cannot start an application in another session remotely. However, this is a security restriction that prevents cross-session contamination and shatter attacks. It is a (security) limitation of Windows, not of PowerShell.
Under Windows, all services run in session 0, the interactive user runs in session 1, and if more users log on (e.g. remotely through PowerShell) each is given their own session. Windows puts severe restrictions on what programs can do cross-session - even if the processes run under the same user account: they cannot send messages, interact with the desktop windows, inject code, start threads, etc.
The commenter also alludes to other security settings that you'll encounter when you first start using PowerShell:
- Execution of script files is disabled by default. You need to use the Set-ExecutionPolicy cmdlet to allow script execution, and for desktop editions you need to enable remote management. Both have secure-by-default values.
To enable script execution:
Set-ExecutionPolicy RemoteSigned LocalMachine
I.e. scripts that originate from the Internet (downloaded through a browser or received through mail/IM/etc.) or other untrusted zones require digital signatures to run; locally authored scripts are allowed to run with no digital signatures.
To enable remote management run this command:
Enable-PSRemoting
On Windows Server versions it is now assumed that the administrator will want to use remote management, so it is enabled by default for servers. For desktops you'll need to set it through a group policy or with the command above.
The only attribute you contrasted was how they compare to powershell on the axis of better or worse, and I just stated my belief that it's hard to compare them accurately due to fundamental differences. That's not a strong basis for me to make assumptions about those shells.
By asking if inferno-shell and friends were easily comparable with C-like shells I was attempting to determine if they were additionally hard to compare with powershell, and infer from that your unfamiliarity with powershell. That, obviously, was a stretch.
There was once a time when I, as a junior employee, was left to create a backup system out of shared drives on the Novell server, XCOPY, and AUTOEXEC.BAT.
It worked better than one might expect. You'd get emails about failed backups (backup flag files removed after a successful backup), backups were versioned, and restore was quick & easy (execute a simple script from the shared drive, then sys c: from a boot disk).
Dataset discovery. A+ for idea.