I don't understand why Linux users want Win8- or WinRT Pads.
Not only are they pretty expensive, but there are already many good, cheap Android pads running a Linux kernel. There are also plain Linux pads on the market, and Ubuntu even plans to sell an Ubuntu pad soon.
I still don't get it. Microsoft is no longer as important today as it had been for decades. There are extremely cheap ARM boards (Raspberry Pi, BeagleBone, etc.) that are almost suitable for Internet and office applications right now. And I think we will soon have the next generation of Raspberry Pis and BeagleBones with 1.5 GHz dual-core or quad-core chips, and we can even build our own cluster of these boards for servers -- for instance, this one with 32 nodes:
It's my reason for owning a laptop that came with Windows 8. This particular one isn't affected by the problem discussed here, but many other laptops are.
I would also recommend EclipseFP. It is an impressively well-working Eclipse plugin for Haskell, with incremental compilation, a debugger, a Smalltalk-like browser, and other nice features.
For this simple reason the Linux Foundation should stop wasting further energy on UEFI. As a Linux user from the very beginning (since 1992), I won't buy any UEFI device for Linux in general, even if Linux were 100% supported. I certainly won't buy an expensive WinRT tablet just to put Linux on it.
When the Linux Foundation was founded, I was happy to have a strong organization behind Linux. But today they make me really angry by supporting a system from someone who considered Linux, and the Open Source movement in general, the enemy. UEFI proves that they haven't changed their minds.
LF, please stop your unrealistic UEFI dreams and focus on the interests of the Linux community. I believe that most Linux users want functional hardware alternatives that will likely remain free from DRM restrictions. I would appreciate that even at the cost of incompatibility with the Internet. In that case we would have to build our own new Internet, free from spam etc.
There are so many good devices like the Raspberry Pi and the BeagleBone Black. They are powerful enough to be used as desktop workstations when running a small, efficient Linux distro. Why not even enhance them with an e-ink display to make them a cheap alternative tablet PC that everyone can afford?
By the way, for the price of a single UEFI WinRT tablet we could build a whole server cluster of RPis or BBBs.
It amazes me how programming languages and APIs look more and more like Lisp. Modern languages copy essential features from Lisp, and JSON, one of the most popular data formats in the JavaScript world, looks almost identical to Lisp s-expressions.
Someday more people will also realize how useful and effective the equivalence of control structures and data really is.
Yeah. Well. Also known as "in the end, everything is just an abstract syntax tree". But while a few people will always delight and excel in reading and writing everything as s-expressions, many of us will probably always find either line-breaks or different kinds of braces, brackets and little syntactic doodads make for easier and saner writing and reading -- even if the parser needs to do a bit more work.
No matter what kind of code I'm looking at: "could I express this as s-expressions?" Sure. "Would I want to?" Hell no.
> "could I express this as s-expressions?" Sure. "Would I want to?" Hell no.
Of course it is possible to implement syntactic sugar in Lisp that supports JSON-style expressions. DSLs are common in Lisp, and that has actually become a weakness of Lisp (the so-called "DSL hell").
The interesting thing about s-expressions is that Lisp doesn't need special data-conversion tools to handle them. Even control structures are expressed as s-expressions, and they can be created and modified dynamically, which means that even code can be exchanged at runtime, on the fly.
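To sketch the point about control structures being plain data (in Python rather than Lisp; this mini-interpreter and all its names are made up for illustration, not any real API):

```python
def evaluate(expr, env):
    """Evaluate an s-expression represented as nested Python lists."""
    if isinstance(expr, str):          # symbol -> variable lookup
        return env[expr]
    if not isinstance(expr, list):     # number or other literal
        return expr
    op, *args = expr
    if op == "if":                     # the control structure is just a list
        cond, then_branch, else_branch = args
        chosen = then_branch if evaluate(cond, env) else else_branch
        return evaluate(chosen, env)
    if op == "+":
        return sum(evaluate(a, env) for a in args)
    if op == "<":
        a, b = (evaluate(x, env) for x in args)
        return a < b
    raise ValueError(f"unknown operator: {op}")

# The program is ordinary data: we can build or rewrite it at runtime.
program = ["if", ["<", "x", 10], ["+", "x", 1], 0]
print(evaluate(program, {"x": 5}))    # 6
```

Because `program` is just a nested list, any code that manipulates lists can manipulate the program itself, which is the "code is data" idea in miniature.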
This is one of the most attractive things about Lisp: the language has no hard notion of compile time, eval time, and runtime. The user just doesn't have to care about it. Very powerful.
Everyone says that, but the thing I loved most about dabbling in elisp was how much nicer things become when your text editor works at the s-expression level. Moving around and manipulating a lisp program is just plain easier than it is in nearly any other language.
There is a distinction between reader macros and compiler macros, for example, which is relevant for making special syntax optional for end users.
Certain things also need to be defined if you want them to be available in the compile-time environment. And, sometimes you have to do a bit of extra work if you want to have literal objects in your compilation environment and pass them to runtime.
No. Just no. I could agree about reading, but writing is much easier with s-exps. There are just fewer symbols to type. And remember, you can still put newlines and indent however you want.
It's JavaScript; that's the syntax notation for JavaScript objects. Sure it resembles Lisp and Python, but that's just how languages come to be. If I were to invent a new language, I'd likely use {} for dictionaries and [] for lists too.
Lisp is harder to read than JavaScript because it doesn't disambiguate between lists and dictionaries in a clear way like JS and Python do.
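To illustrate the disambiguation point (using Python as a stand-in for JS; the s-expression encoding with explicit tags below is a hypothetical convention, not anything standard):

```python
import json

# JSON/JS style: the brackets themselves tell you list vs. map.
person = {"name": "Ada", "langs": ["lisp", "js"]}

# A hypothetical pure s-expression-style encoding as nested lists:
# the reader must rely on convention (here, explicit "map"/"list"
# tags) to know which lists stand for maps and which for sequences.
person_sexpr = ["map", ["name", "Ada"],
                       ["langs", ["list", "lisp", "js"]]]

print(json.dumps(person, sort_keys=True))
# {"langs": ["lisp", "js"], "name": "Ada"}
```

In the JSON form the distinction is visible at a glance; in the tagged-list form you have to read the tags, which is roughly the readability cost being discussed.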
I think it's more about the tree data structure than about LISP. LISP just makes the underlying data structure obvious, but as you implied we are far from seeing any LISP-semantics in JS(ON).
S-exps are "just" data structures, so it's no surprise that you're reading s-exps when you see other data structures presented.
This is easy to identify: nearly all JSON is just atoms, though -- very few JSON definitions encode any kind of machinery or program code.
So, to put it another way: it amazes you how all Lisp looks like data structures. And hopefully I've helped let you know why you are so amazed here, thanks for reading.
I'm not sure I like this syntax. It's unfamiliar both to lispers and "mainstreamers". The former will be irritated with { and commas, the latter will try to insert : everywhere.
I read an interview with Rich Hickey where he said that adding these three syntax constructs reduced cognitive burden on programmers. He meant () for lists, [] for vectors and {} for hashes.
I should implement it in Racket and just see how it feels.
There was a discussion on the mailing list concerning colon in maps, here's the JIRA issue [1]. IIRC it was kind of decided to let it die, but maybe someone will implement it.
Personally I think that differentiating between lists and maps is important, and the way Clojure does it (as well as JSON) is quite easy to write and read. FWIW, the example given by bitcracker is broken because there's no way to understand what the following means:
Features are much more relevant than marketing. Look at Ruby: some years ago it was hyped a lot, but that enthusiasm has faded away. Look at Lisp: it wasn't hyped for about 50 years, but it is still alive today, and many modern language developments still copy features from it.
If the features are good then I don't care about the name. I really like Rust (http://www.rust-lang.org/) although it sounds "rusty".
Obviously you haven't seen the Lisp ads from the '80s :) Lisp was surely hyped then, not to mention all the hype from people like ESR and PG. Even Clojure owes part of its success to the great presentation skills of Rich Hickey and the other early Clojure adopters. Languages DO succeed mostly because of marketing. If it were features, then Smalltalk would have won, C++ and Java would never have existed, Lisp would have won, and Python, Ruby and Perl would have been forgotten. Other technologies work that way too: NeXT and the other Unixes would have won, and Windows would never have had a chance, if what you are saying were true.
As for the name: if Clojure had been named anything with the word "lisp" in it, like "foo lisp" or the like, it would already be dead. Even Racket had to get rid of the word "scheme" from its name for marketing reasons.
> Obviously you haven't seen the lisp ads from the 80's :)
I studied computer science in the '80s. Maybe there were some business ads for Lisp (especially Lisp machines), but I had the strong impression that Ada was hyped much more than Lisp. Where is Ada today?
> if it was features, then smalltalk would have won
It depends on how "features" is defined. C won over Lisp and Smalltalk because of the single feature of performance. At the time C was invented, hardware was very expensive.
> Even racket had to get rid of the word "scheme" from their name for marketing reasons.
Racket is a different language. Scheme (R6RS etc.) is a subset of that.
According to your logic Rust will have no chance at all. We will see.
Trust us, Lisp was hyped (I worked for "the other Lisp Machine company" in the early '80s). Albeit perhaps not as much as Ada.
One big thing to consider about the normal Lisp and Smalltalk implementation style vs. C -- and Clojure, by virtue of running on top of the JVM (and the CLR; for the JavaScript version I hope it inherited a lot of this) -- is that the former are "take over the world" approaches: back in those days, and still in a lot of cases today (e.g. SBCL), implemented by dumping a memory image and restoring it when you want to run.
Whereas C is very much a building-blocks approach, and within the JVM, Clojure is as well. E.g. for the latter, the popular Leiningen tool uses dependency directives and known repositories to pick up the corresponding .jar files. Java and any other language that fits into the JVM ecosystem is an equal player on the JVM with Clojure, and Clojure defers to Java for things the latter already does quite well enough.
This sort of approach, which in the case of Java and Clojure is NOT Worse is Better/The New Jersey Way, seems to have survival characteristics.
Hard to determine how much this was a factor vs. performance, C being a general-purpose portable assembler. After UNIX(TM) and C grew up on tiny machines, we then largely squeezed into barely larger ones, the 8086- and 68000-based ones, when DRAM was still pretty dear. For "engineering workstations", I'm told a not-uncommon, although not all that good, configuration early on was one Sun with a hard drive and 2-3 diskless ones all sharing the drive.
AFAIK the GreenArrays chip provides only 128 bytes per core, while the Parallella supports 32 KB per core.
As a former Forth hacker I was enthusiastic at first glance at the GA, but 128 bytes per core was really disappointing. What could that amount of RAM be useful for?
One interesting application could be realtime 3D rendering, because that is an area with little per-core overhead. I know the chip does not support floating point, but that could be simulated with fixed-point integers.
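To sketch the fixed-point idea (a Q16.16 format shown in Python for readability; the helper names are made up): reals are stored as integers scaled by 2^16, so multiplication needs only integer operations plus a shift, which integer-only cores can do.

```python
# Q16.16 fixed-point sketch: a real x is stored as round(x * 2**16).
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def to_float(f: int) -> float:
    return f / ONE

def fx_mul(a: int, b: int) -> int:
    # The raw product carries 32 fractional bits; shift back to 16.
    return (a * b) >> FRAC_BITS

a = to_fixed(1.5)
b = to_fixed(2.25)
print(to_float(fx_mul(a, b)))   # 3.375
```

Addition and subtraction work on the raw integers directly; only multiply and divide need the rescaling shift, which is why fixed point maps so cheaply onto integer-only hardware.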
I agree with Shamanmuni that the great advantage of the Parallella chip over GPUs is that it's open source (full documentation). It's a practical study tool for real parallel-programming tasks that many students can afford.
Lisp is "too powerful" only for inexperienced developers, because the way of programming in Lisp is totally different from all other languages.
Actually many modern languages (Python, Ruby, Java, even C++11) copy more and more features from Lisp since the language designers suddenly realized how useful these features are.
Lisp was never "too powerful" but simply far ahead.
Lisp makes it really easy to build infrastructure and multiplier-level (Level 2) contributions that affect the whole shop. When you have 1.3-1.4 programmers (scale here: http://michaelochurch.wordpress.com/2012/01/26/the-trajector... ) who aren't ready to take on projects that will affect other people in such a major way, you can get bad results. That's a management problem, though. Inexperienced developers need to have opportunities to experiment, but if others are forced to sit downwind of their work, then management is behaving badly.
I am concerned by how fast IT is evolving toward DRM-locked devices. The PC is being made obsolete by tablets, so we will lose control over our computers. Currently DRM looks harmless, but when you consider the long-term consequences, it doesn't look good at all.
First we fought TCPA and Trusted Computing; now we have the first global players (Apple, Microsoft) putting DRM into their gadgets to get maximum control and minimum competition. And the customers buy, consume, and play without considering the consequences.
Our technology today is the foundation for the political systems of tomorrow, and when you consider global demographic developments you know what I mean. Within the next twenty years we will have a completely different global political and economic system, and I don't want them to have total control over us with their super-safe cascaded DRM chips which nobody will be able to break.
Unfortunately we can't stop Apple, Microsoft & Co. from using DRM. That's why I very much appreciate pioneer projects like this self-made FPGA, as they help to jump over the big hurdle from open source to true open hardware. Someday, when all commercial chips are DRM-locked, I hope we will be able to 3D-print our own processors and RAM chips.
I am convinced that Open Hardware will become vital for the survival of our freedom of speech and liberty in general.
Check out Jeri Ellsworth's homebrew NMOS transistor for some real fun. Just get your bare silicon wafer from eBay and have at it. http://www.youtube.com/watch?v=w_znRopGtbE
I know that there are several open hardware projects (OpenCores, for instance). But I consider _true_ open hardware to be hardware where we will always have _full_ control over _every_ tiny detail of the system.
The TTL level is the right foundation for that. If we could go down to the NMOS level or so for 3D printing, that would be even better. We'll see what the future brings.
You've been able to build your own hardware from 74xx-series chips for decades now -- in fact, for about half a century! In fact, that was the only option you had back in the day. This is retro rather than new, like making a ham radio out of thermionic valves, say. Nothing new, but kinda cool and certainly educational if you're new to the field.
I think that's why the GP took exception to your rather statesmanlike proclamation that this was heralding some kind of revolution in open hardware. It isn't; it's just a rather neat project.
However, on your followup comment, I definitely agree with you that as soon as I can fab my own silicon in my garage, the world will be my mollusc. But we're pretty close nowadays with a $20 FPGA from Digi-Key.
DIY silicon is definitely a cool concept, but this project, while cool in itself, is not a step towards that goal. And that was (one of) my issues with the GP's post.
I understand what you mean, and you are basically correct. But I nevertheless see this project as a milestone, as it helps to focus on self-made FPGAs instead of soldering TTL chips to copy some hard-wired retro systems.
If we are someday able to print our own chips, then they will surely be FPGAs, because debugging and reprinting hard-wired chips would likely be too expensive.
I agree with your entire comment except for the first sentence. When working toward an ideological goal, it is important to continually and earnestly evaluate what goals the imagined solution will actually fulfill, lest one end up misplacing faith in an implementation that can never achieve the desired ends.
First, this FPGA is still not implemented with things at the lowest level -- how do you know that the 7404 doesn't actually contain secret logic looking for specific patterns on the 6 "independent" inputs (based on your widely propagated design) that alters its behavior?
Second, if one wanted to build an auditable CPU, one would not start by building auditable reconfigurable logic in order to develop a soft-core CPU on top of it. This is a needless extra layer of interpretation, and the number-one constraint you're trying to overcome is the constant factors that are left out of Big-O notation.
We don't buy Intel processors because we're lazy and have money to burn, but because they contain a billion transistors switching a billion times a second. If you build something with 10k transistors switching at 10 MHz, something that would take 1 second on a modern CPU will take roughly 3 months.
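For what it's worth, here is the back-of-the-envelope arithmetic behind that estimate, using the round numbers above (transistor count times switching rate as a crude throughput proxy):

```python
# Crude throughput proxy: transistors * switching rate.
modern = 1e9 * 1e9        # ~1e18 transistor-switches per second
homebrew = 10_000 * 10e6  # ~1e11 transistor-switches per second

slowdown = modern / homebrew   # a factor of ~1e7
seconds = 1 * slowdown         # 1 second of modern-CPU work
print(seconds / 86_400, "days")  # ~116 days, i.e. roughly 3-4 months
```

The proxy ignores architecture entirely, but it shows why the gap is seven orders of magnitude before any cleverness is applied.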
So I think there are two properties to be achieved for actual Free Computing. The first is roughly getting something that can compute. These days, this can be done by buying a standard PC and installing your own software on it. This may not continue to be possible (so we would have to rely on e.g. integrating our own ARM chips, then FPGAs, then microcontrollers, etc.), but trying to put a solution forward before it is necessary isn't going to get very far. There is also available computation that is unlikely to go away -- JavaScript in web browsers (available to all, but sandboxed), programming toy apps, etc.
Which brings us to the second quality of Free Computing, which is auditability: guaranteeing that the running code is doing what it says, rather than being a subtle corruption that renders the whole sensitive algorithm broken. To get performance, we would like it if such a CPU could be made in a necessarily non-trusted fab, rather than independently assembled from discrete transistors. This seems like a harder, but better-defined, problem -- meaning it might actually be possible to solve through some combination of clever proofs and bootstrapping trust up from an extremely simple auditable CPU.
> I agree with your entire comment except for the first sentence.
I meant the term "first step" historically. It really is the first step to true open hardware, because it is the first project (as far as I know) that relies solely on basic electronics you can solder together on your own PCB.
There are many open hardware projects, but all of them are based on complex hardware (FPGA kits etc.). There are several good open cores you can download into your FPGA, but what if someday your FPGA stops working and you can't buy another one because those FPGAs are no longer produced?
I agree with you that self-made processors will never be able to compete with modern processors. But even a 1 MHz 8-bit CPU can do a lot of things. Remember the huge success of the Commodore 64. And if you are able to 3D-print one such CPU, you can also print many, and build a multicore system.
Who says that self-made processors have to work with silicon and copper? If someone finds a way to produce cheap conductive plastic filament, that would be a huge leap forward for printed circuits. I believe there are several smart hardware hackers who could come up with a solution we could live well with. 3D printing is just at the beginning. I expect amazing times to come.
1. Plenty of CPUs have been built from discrete logic.
2. As soon as you start talking about complex printed items (including PCBs), you're necessitating either a trusted computational device to compile the design and drive the printer, or the ability to audit the output of the printer (e.g. to read and understand the traces on a 2-layer PCB).
> First, this FPGA is still not implemented with things at the lowest level -- how do you know that the 7404 doesn't actually contain secret logic looking for specific patterns on the 6 "independent" inputs (based on your widely propagated design) that alters its behavior?
Don't let your paranoia get the better of you.
With only 6 inputs, if there is any anomalous behavior, it will quickly be noticed. And this subverted 7404 would need all its gates used on the same bus to have any hope of seeing that you're illegally copying a movie, and that is a rare application.
And I think someone would notice that these inverters are using 6 orders of magnitude more power than necessary.
Even for more complicated parts like CPUs, such tampering would be readily evident.
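A hedged sketch of why a 6-input part is so easy to audit: a purely combinational hex inverter has only 2^6 = 64 static input patterns, so an exhaustive sweep exposes any static tampering (the backdoored part and its trigger pattern below are invented for illustration; stateful, sequence-triggered logic is what would make detection harder, and costlier to hide).

```python
from itertools import product

def ideal_7404(inputs):
    """Ideal 7404 behavior: six independent NOT gates."""
    return tuple(1 - x for x in inputs)

def backdoored_7404(inputs):
    # Hypothetical tampered part: misbehaves on one secret pattern.
    if inputs == (1, 0, 1, 1, 0, 1):
        return (0, 0, 0, 0, 0, 0)
    return ideal_7404(inputs)

# With only 6 inputs, sweeping all 2**6 = 64 combinations
# immediately exposes any purely combinational tampering.
bad = [c for c in product((0, 1), repeat=6)
       if backdoored_7404(c) != ideal_7404(c)]
print(bad)   # [(1, 0, 1, 1, 0, 1)]
```

The same exhaustive approach stops scaling somewhere around a few dozen inputs, which is one way to frame why simple discrete parts are more auditable than complex ICs.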
For a long time to come, it will be a lot easier to hide nefarious code in software in some corner of a general purpose system (PC, tablet, etc.) than to put it into the hardware layer. Even nefarious code needs network communication these days, otherwise there's not much point.
I hadn't considered the added power draw due to the likely complexity involved in making a good backdoored hex inverter, good point there. It seems like there's a natural contour such that as the individual component gets smaller and less complex, the amount of circuitry required for a backdoor increases.
Software certainly is easier to backdoor, but hardware is much more insidious. At this point I'd be surprised if there weren't backdoors in the widely used processors for at least the NSA. (Remember that innocent time when the question whether the 'net was tapped and recorded was up for discussion? :P)
So I guess I view both of my aforementioned properties as mandatory due to the use case for my analysis (trustable function execution), while the OP was really just talking about the first one. I'd originally considered this question in the context of cryptographic key generation and management, where the second property is of utmost importance to be confident that there aren't low level backdoors giving bias to your crypto keys and nonces. Plus if you're spending the time to bootstrap trustable computation up from a hand-built circuit, you might as well go all the way instead of trusting something like an MCU.
> ... where the second property is of utmost importance to be confident that there aren't low level backdoors giving bias to your crypto keys and nonces.
If there are backdoors in modern processors, they are most likely located in the on-chip firmware for the boot sequence and/or in the secure key store. Those are the two main areas I'd worry about, and I'd also keep a close eye on any binary blobs needed for the secondary processor cores on a modern SoC. And also the firmware for the WLAN chip.
> Not only are they pretty expensive but there already are many cheap good Android Pads running a Linux kernel. There are also plain Linux pads on the market, and even Ubuntu plans to sell a Ubuntu pad soon.
So, what's the gain?