Hacker News | tqs's comments

The free fonts in Cuttle are mostly from Google Fonts and other freely licensed fonts.

With a Pro account you can upload your own font files.

You may be interested in Cuttle’s Connected Text feature. This will automatically connect dots on i’s, etc so you can cut out text in one piece. There’s also an option to “thicken” text so it’s not as delicate.

You can try a live demo here: https://cuttle.xyz/@cuttle/Connected-Text-29M9IXUSr5yr

That page also describes the algorithm we use to make this work with any font.


On a free account you can create up to 5 projects in the Cuttle Editor (and you can delete them if you want to create more...)

We don't laser cut anything for you. You can download your project as an SVG file (or DXF, etc) which you can then send to a laser cutter hooked up to your computer.

The product is designed for people who have access to a laser cutter, e.g. at home or at a makerspace.


I am loving your program for making designs for my 3D printer. Thank you for sharing it with the world!


I am one of the creators of Cuttle. It stems from my research building direct manipulation + programming environments like http://recursivedrawing.com/ and http://aprt.us/

From a programmer's perspective, you can think of Cuttle as a direct manipulation vector editor (like Inkscape or Adobe Illustrator) that can be driven with parameters and JS code where you need it.

Unlike my previous research projects, this is a commercial startup mostly catering to laser cutting small businesses, though you can use it for anything where you want a 2D vector editor + some programmatic capabilities.

I'll try to answer questions that come up in this HN thread.

Thank you for sharing your work Hannah! Very cool stuff!


Cuttle is v cool, congratulations! I had previously seen Apparatus and liked it, so I can clearly see the genetic resemblance now ;)

I’m wondering how many of you are on the team, and does it actually support you as a business yet? Even for a quite niche-y app Cuttle deserves to be better known, and higher-priced, imo.


Thanks! There are 5 of us on the team. Yes, the business is profitable.


It would be neat to have STL and STEP output for 3D printers.


Thanks for the suggestion! Yes, we should do this. I've been seeing more and more people use Cuttle for 3D printing (exporting a DXF, then bringing that into another program to extrude and output an STL).


See Solvespace for that.


there's openscad


I'm mainly curious whether the concepts in Cuttle could be exposed as plugins in Inkscape, or as a standalone application written in Qt-Python.


A Cuttle project is — behind the scenes — a program. Each “component” is a function. “Modifiers” are functions that take input geometry (and parameters) and use JS code to create arbitrary output geometry. All of this code can be live edited.
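
To make the "modifiers are functions" idea concrete, here is a minimal sketch in plain JS. This is illustrative only — the function name, the point-array geometry format, and the parameter object are all made up for this example, not Cuttle's actual API:

```javascript
// Hypothetical sketch (not Cuttle's real API): a "modifier" as a pure
// function from input geometry plus parameters to new output geometry.
// It returns fresh geometry rather than mutating its input, so it can be
// re-run "live" whenever upstream geometry or parameters change.
function translateModifier(geometry, params) {
  return geometry.map(({ x, y }) => ({ x: x + params.dx, y: y + params.dy }));
}

// A unit square as a list of points (an assumed representation).
const square = [
  { x: 0, y: 0 }, { x: 1, y: 0 }, { x: 1, y: 1 }, { x: 0, y: 1 },
];

const moved = translateModifier(square, { dx: 2, dy: 0 });
```

Dragging a shape on the canvas would then amount to editing the `dx`/`dy` literals that feed this function.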

At the same time you can do arbitrary “drawing” with a bezier pen tool and move/transform shapes. In this case you are essentially using the canvas drag-and-drop to manipulate literals in the program.

But fundamentally a Cuttle project is a program and the Cuttle Editor is an IDE that looks like a vector editor on the surface.

Because of this I’m not sure how much of Cuttle could be grafted onto a program whose architecture is more rooted as an editor of static vector graphics. I do know that Inkscape has some “live effects” which are similar to Cuttle’s “live” modifiers.

If you are interested in Cuttle’s architecture, I did a one hour walkthrough on this interview, https://www.youtube.com/watch?v=2el-85vG-IU


Very cool work.

FWIW the Apparatus site says

> Apparatus is under active development. Discuss how Apparatus should evolve on the Apparatus Google Group.

Which seems like it's no longer accurate. Has it moved to a different repo?


Hmmm.. I would say Apparatus is no longer under active development. Researcher Joshua Horowitz was doing some work on it for a bit, but yeah I don’t think it’s changed much in several years. It should be regarded as a research project that scouted out several areas of the “programming experience” world that others can build on.


Hey, I've just spent an hour creating my first design with Cuttle - and I'm very impressed! You're right that it has a very short learning curve, especially coming from a hobbyist CAD and programming background.

A few random thoughts/questions/suggestions (and apologies if I've missed options or existing functions):

* Would it be possible to add edge snapping?

* Could there be a slicker way to switch from the central origin for shapes? (Central is frustrating as it means having to change position every time I tweak the size.) Being able to easily select an arbitrary corner as the origin with a click would be great; maybe even auto-selecting the origin based on snapping two shapes together - i.e. 'anchoring' the shape based on snapping it to another one?

* Any way of exporting multiple components in one go, without clicking through for each one?

* Being able to set custom rounded corners with a list is great, but it's not easily discoverable!

* It might be neat to have a pop-up offering boolean options when snapping or overlapping shapes?

* Given the (I think?) JS under the surface, is it possible to import data, either as a CSV or an image from which pixel values can be read?

* Lastly, any thoughts on offering an intermediate 'hobbyist' paid tier? I'd be strongly tempted to support you and access some of the paywalled features, but $19/month is honestly too high for an occasional/fun hobby product.


Cuttle’s snapping is very good (imo). But you have to drag from the snap point to the snap point. So if you want to snap a particular midpoint (say) to a particular corner, you need to drag from that midpoint to that corner. That is, your drag is explicit about the from and to snap points.

To export multiple components, we usually create a component (called “Cut Layout” for example), then drag out each of the other components (the pieces) into that component.

There is a modifier called Flatten which might be useful for you for doing lots of Boolean operations at once. I think this video shows the workflow, https://www.youtube.com/watch?v=LGHKRfIC6QA

Yes it’s all JS. You could copy and paste your data in and then process it and create geometry. (Sorry we don’t have any data import other than copy and paste at the moment.) Our scripting documentation is pretty good: https://cuttle.xyz/learn/scripting/getting-started-with-scri...
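
As a sketch of that copy-and-paste route in plain JS (the pasted string and the point format here are invented for illustration — how the points then become geometry is up to your script):

```javascript
// A pasted CSV blob, as it might arrive via copy and paste.
const pasted = "x,y\n0,0\n10,5\n20,0";

// Parse it into point objects: drop the header row, then split each
// remaining line into numeric x/y fields.
const rows = pasted.trim().split("\n").slice(1);
const points = rows.map((line) => {
  const [x, y] = line.split(",").map(Number);
  return { x, y };
});
```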

We are experimenting with being able to read pixel values of raster images. Coming soon likely.

Perhaps some kind of middle tier that gives full access to the Editor but does not give full access to the Pro templates makes sense. Many of our customers subscribe for access to those templates rather than the Editor!

You are also free to subscribe when working on a project and need the Pro features, then cancel when you’re dormant. We don’t delete your projects when you downgrade to free, you can still access/edit/download them.

I appreciate the feedback and the questions!


Thanks for the very comprehensive reply!

For snapping, I was missing being able to drag the side of an object (not on a corner or midpoint) and have it snap to any parallel line it encounters. You're right that using a corner or midpoint does work very well, but you just have to be more conscious about the action you're making.

For the point about a boolean operation, I was only thinking that if I drag one object on top of another, it would be a timesaver to have icons quickly pop up as a shortcut to the different boolean operations. But probably not a big gain.

RE: the subscription, agree that unlocking projects, downloads, and maybe folders might suit people that are mostly into designing things for themselves. Also, have you considered allowing read-only access to the pro templates within the editor? There are a few that I have no interest in making, but I'd like to learn the more advanced techniques of Cuttle by viewing.

Lastly, one other suggestion: you may already be thinking of this, but given the proliferation of selling 3D/CNC designs online (e.g. Printables, Etsy), have you thought about allowing creators to sell their designs directly from your site?


From one perspective, Jobs and Raskin ended up going down two opposite paths.

Raskin emphasized that "modes" were a major cause of UI problems. Essentially you want a UI that the user can habituate to as strongly as possible, so that all one's attention can be focused on the actual task and all of the administrivia you need to do to tell the computer your intention is handled subconsciously. Like touch typing at a higher level.

The issue with modes is that they break habituation. If performing a given UI gesture does one thing in one mode and another thing in another mode, you can't make that gesture a habit. For example: Cmd+Z is a conventional gesture that many applications interpret as undo. If the keyboard shortcut changed between applications (or worse Cmd+Z meant something else in another application/mode), it wouldn't be so habitual and you'd be less productive.

Raskin was very serious about no modes. The Cat for example didn't even have an on/off mode. It would go to sleep to save power, but if you started typing it would buffer all your keystrokes and have them in your document by the time the thing woke up. That is, you didn't have to switch from off mode to on mode! And because you were always in a document (that is, there were no separate application modes), you knew that typing some words always did the same thing, so the scheme really worked.

On the other hand, Apple has really pushed, especially since iOS, on the "App" model. Applications are, of course, giant modes. And the strategy has been to push a separate App (mode) for every single use of the machine. So rather than learning a few powerful gestures and then combining them to do disparate tasks, users need to learn a separate, surface-level gesture complex (App) for each individual task they want to do on their machine.

Which is more efficient or appealing? On what time scale of use?

Did Apple end up this way because Apps are a more natural fit for a consumerist model? "Want to do this task with your machine? Don't bother figuring out how you can do it yourself. There's an App for that!" No Modes vs Buy More Modes?

Raskin's book, The Humane Interface, talks extensively about his UI design philosophy. In addition to explaining the above problems with modes, he discusses how to actually design a computer system with no modes (I believe an elaboration of what he did with the Cat). He also explains other really important UI principles and their ramifications, for example, "The user's data is sacred" (hence undo).

PS:

Raskin's definitions: A gesture is defined as an action that can be done automatically by the body as soon as the brain "gives the command". So Cmd+Z is a gesture, as is typing the word "brain". What constitutes a single gesture will be different depending on the user! A mode is defined as any situation not at the user's locus of attention that would cause a gesture to perform an action different from another mode. So "pseudomodes" where the user holds down a modifier key or holds the mouse button while performing a drag gesture get around this since they keep the user's locus of attention on the fact that they are performing the pseudomode.

I think both the above definitions are still a bit problematic but Raskin's definitions are better than any other that I've heard. I hope there are more people who study and discuss these deep UI design issues! What do you think about modes?


You are painting a picture with black and white where in reality there are shades of grey.

The dichotomy you are trying to conjure up does exist to an extent and it is interesting, but in reality no one even knows how to build a modeless mobile interface (that is as powerful as an interface that is allowed to use modes).

Deciding whether modes would be a good idea pretty much has to be done on a case-by-case basis, considering the involved tradeoffs. While avoiding modes is in general a good rule of thumb I have a hard time believing that it is realistic as a hard and fast rule.

(The characterisation of apps as giant modes doesn’t make much if any sense in the context of iOS.)


>>>Did Apple end up this way because Apps are a more natural fit for a consumerist model? "Want to do this task with your machine? Don't bother figuring out how you can do it yourself. There's an App for that!" No Modes vs Buy More Modes?

Bingo! Evidence that this is the case: not many computers ship with compilers onboard any more. OS vendors want you to continue to invest in their platform - Apps just happen to be a way to do that. There is a vested interest in making developer tools as weighty and difficult (for the PITS) as possible .. why learn programming when you can just buy an app? In many ways, we've gone completely backwards in the computer industry - users have to spend a lot more time and effort to maintain their platforms than they should have to.. same is true of developers, as well. The fuss and nonsense required to just get a window up on the screen being one case in point. We're all scrambling to learn these new - but not necessarily better - technologies, because the OS vendors have a vested interest in capturing the minds of their users.


There is a vested interest in making developer tools as weighty and difficult (for the PITS) as possible .. why learn programming when you can just buy an app?

This is nonsense. You appear to be arguing that every computer user should be able to develop software - equivalent to arguing that every driver should be able to build a car.

You know why modern computers don't ship with development tools? Because they are no longer used exclusively by people who are computer programmers. They're mass-market devices now, and one of the requirements of that is that they need to allow users to perform tasks without learning how to develop software - an act that is totally orthogonal to actually performing those tasks.

Coupled with this, those people who are developers and require access to development tools have immediate access to them, thanks to the Internet. You can immediately download SDKs and IDEs for both Windows and MacOS, and every major Linux distribution comes bundled with a compiler. Same for Android, iPhones, etc. - it would be nothing but a waste to provide development tools when the vast majority of users do not require them, and when they are so easily available elsewhere.

I'm also completely unclear what "fuss and nonsense required to just get a window up on the screen" you think exists, given the ~10 lines of code which is required to do this on any platform.


It's not nonsense - it used to be that you could write a new application for your computer if you wanted to, easily enough, and you'd have the exact same tools as anyone else would have - because the computer shipped with them.

The 'developers' arose as a class of society simply because computer use became decoupled from application development.

>>10 lines of code to make a window

But this doesn't actually do something. It used to be that you could write a functional application with 10 lines of code - but now, because computing is being run by Fashion Directors (I agree with you on this, btw) where 'trends' are more important than actual use, we get a lot of weight added to the truss normally bearing the load of 'usefulness' to the user.

I don't consider that we've actually made a lot of progress with human computer interaction over the decades since the Canon Cat was around. In many ways, I think we've been side-tracked in the computer industry. The rise of Windows, for example, set the whole computer industry back 10 years ..


"But this doesn't actually do something. Used to be, you could write a functional application with 10 lines of code -but now, because computing is being run by Fashion Directors (I agree with you on this, btw) where 'trends' are more important than actual use, we get a lot of weight added to the truss normally bearing the load of 'usefulness' to the user."

I've been programming since the mid-80s, when I started out doing BASIC on the Commodore 64. Along the way, I touched on BASIC on the Apple II, Pascal on the Apple IIGS, Mac, and Windows, C and C++ on the Mac, Visual Basic, and various other environments. These days I mostly do Objective-C and Java on Mac, iOS, and Android, with a sprinkling of Python and other languages.

It has never been easier to write a functional application than it is today. The ratio of code to functionality for any given app is lower than it has ever been.

Take the C64 or Apple II with built-in BASIC that so many hold up as a shining example of empowering the user. Now compare it to a modern Mac, which ships with AppleScript, Python, Ruby, Perl, and bridges to let many of these languages be used to create real, fully functional GUI apps. I can build a decently functional text editor in a few dozen lines of Python on a brand new Mac, while back in the day that much code would probably not even get you inline editing, let alone spell check, saving, rich text, printing, versioning.... Furthermore, the "exact same tools" as anyone else would have, Xcode, are available free from Apple and you'll get a prompt to download and install the stuff automatically if you try to use the compiler. Massive quantities of high-quality documentation are also available for free.

The reason most computer users don't program is because they don't want to, not because it's harder than it used to be.


I've been writing code since the 70's, and this statement:

"It has never been easier to write a functional application than it is today. The ratio of code to functionality for any given app is lower than it has ever been."

.. in my personal opinion, is false. The apparency-of-ease is there, but in reality it's just not true. The runway to get something running and useful is about 10x as long as it used to be ..


Do you disagree with my claim about the text editor, or do you think that's just a bad example, or what?


It's not nonsense - it used to be that you could write a new application for your computer if you wanted to, easily enough, and you'd have the exact same tools as anyone else would have - because the computer shipped with them.

Yes, it is nonsense. Development tools are easily and freely available to anybody who wants to use them.

The 'developers' arose as a class of society simply because computer use became decoupled from application development.

Developers exist because development of applications is inherently more complex than use of applications, almost by definition.

But this doesn't actually do something

It displays a window.

Used to be, you could write a functional application with 10 lines of code

You still can, and you'll be able to accomplish a lot more than you could 25 years ago.

but now, because computing is being run by Fashion Directors (I agree with you on this, btw) where 'trends' are more important than actual use, we get a lot of weight added to the truss normally bearing the load of 'usefulness' to the user.

I don't agree with you. Computers are being designed to aid users in accomplishing tasks, which is exactly what they should do. That they are not required to understand the internal workings is a good thing.


I believe that the reason it's so difficult to understand the internal workings is the intention of the OS vendors - and this is not an altruistic purpose! If computers were very easy to develop for, a lot of people who have a vested interest in maintaining their control and secrecy might have to re-educate themselves. What I see happening in the industry is the same thing that happens in class-based societies - as soon as there is an opportunity to draw a line, it is drawn - and we then have two classes of people.

Repeat, ad infinitum ..


Saying that every driver should be able to build a car is like saying that every computer user should be able to build an operating system from scratch. What (I think) the gp and I are saying is that every driver should be able to do basic repairs and maintenance on their car, and every user should be able to write basic programs.


Fascinating. To play the devil's advocate, would you agree that sometimes a new mode's optimization for the task can outweigh its cognitive overload? If you try to shoehorn too many disparate tasks into the same interface, usability can suffer.


Some of Raskin's ideas are really like what Emacs does. Ian Bicking once wrote an article, THE vs. Emacs: http://blog.ianbicking.org/the-vs-emacs.html


All the DOM is synced in real time.


This is a really nice cleanup/update on Knockout. The source code at ~400 lines is very understandable. I've looked at a lot of reactive frameworks, and this is great work!

I'm glad you're thinking about how to manage garbage collection. It's tricky with these push-based frameworks.

Have you given consideration to asynchronous vs synchronous reactivity? The advantage of asynchronous is you don't propagate the changes as they are made but instead once all the changes are done. You can avoid redundant computations this way, for example if a bind is dependent on multiple observables that change in the same "turn". And in some ways (worth debating) asynchronous semantics are more understandable than synchronous since unknown code is not being run underneath you while you are in the middle of changing observables.
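
A toy sketch of the batching idea (hypothetical names, not this library's or Ember's actual API): writes only mark state dirty, and the dependent value is recomputed once per flush rather than once per change, so two observable writes in the same "turn" trigger a single recomputation.

```javascript
// Sketch of asynchronous-style reactivity: setters defer work by
// marking a dirty flag; recomputation happens once, at flush time.
let dirty = false;
let computeCount = 0;

function makeObservable(value) {
  return {
    get() { return value; },
    set(v) { value = v; dirty = true; },  // just mark; don't propagate yet
  };
}

const a = makeObservable(1);
const b = makeObservable(2);
let sum = a.get() + b.get();  // a derived value depending on both

// In a real framework this would run at the end of the turn
// (e.g. in a microtask); here we call it explicitly.
function flush() {
  if (dirty) {
    sum = a.get() + b.get();
    computeCount += 1;
    dirty = false;
  }
}

a.set(10);
b.set(20);  // two changes in the same turn...
flush();    // ...but only one recomputation of sum
```

A synchronous push-based framework would instead recompute `sum` inside each `set`, doing the work twice and briefly exposing the inconsistent intermediate state.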

Here's Ember's writeup on their rationale for asynchronous: http://emberjs.com/guides/understanding-ember/managing-async...

Also, since Object.observe is asynchronous, Google's Polymer/MDV will, I believe, also have these semantics.


Yes! There's already a lag-delay primitive, but it's a one-off hack we needed to get things working - more general asynchronous propagation is definitely a requirement.


Engelbart was perhaps the most important influence in realizing the computer as an interactive medium.

Around 1960, most were still thinking of the computer as a sophisticated calculator. You give it a problem, it spits out an answer. Even the idea of artificial intelligence was framed this way.

Engelbart realized the potential of the computer to augment human intelligence. High bandwidth, continuous interaction between human and computer allows the computer to be an extension of the mind. Chasing this vision led him and his team to naturally invent the mouse, bitmapped screens, hypertext, networked computers, etc.

IMO, more important than these individual inventions is Engelbart's foundational principle that technologies change what the mind is capable of thinking. And by augmenting our thinking, we enable ourselves to create new technologies. Mind and technology co-evolve. This is "bootstrapping" in the purest sense and the source of that profound feeling many of us have experienced while augmenting our minds through programming and other rich, creative interactions with computers.


The technology used here to do the image processing is GLSL, in particular fragment shaders (aka pixel shaders). GLSL is a very small C-like language that's become a standard for GPUs. GLSL code gets sent (as a string) to the GPU by javascript via the WebGL API.

Seriously is a JS library for handling the boilerplate of WebGL, composing and compositing multiple shaders in a pipeline/graph, and adjusting their parameters. In addition it comes with a bunch of pre-written shaders.

Shaders themselves are a lot of fun to write, IMO. A pixel shader is just a function that computes a color given an input pixel location. (A shader can also take in additional input such as "textures" to sample from, e.g. a video frame.)

The shader is run with massive parallelization in the GPU. In theory every pixel can be processed simultaneously. This is how these effects can run in real time.
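
That mental model can be sketched in plain JS (a software emulation for illustration only — nowhere near GPU speed, and not GLSL syntax): the "shader" is just a function from pixel coordinates to a color, applied independently to every pixel.

```javascript
// A tiny software emulation of the pixel-shader model: map an
// independent per-pixel function over every (x, y) in the image.
// On a GPU these invocations would run in parallel; here we loop.
const width = 4, height = 4;

// The "pixel shader": a horizontal gradient from black (0) to white (255).
function shade(x, y) {
  const v = Math.round((x / (width - 1)) * 255);
  return { r: v, g: v, b: v };
}

const image = [];
for (let y = 0; y < height; y++) {
  for (let x = 0; x < width; x++) {
    image.push(shade(x, y));
  }
}
```

Because each call to `shade` depends only on its own coordinates (plus any read-only inputs like textures), the pixels can be computed in any order — which is exactly what makes the model parallelize so well.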

Here are some more examples of what you can do with pixel shaders (including sampling from the webcam in the browser -- should work in Chrome and FF), http://pixelshaders.com/examples/

I'm in the process of writing an interactive book about pixel shaders, http://pixelshaders.com/


https://www.shadertoy.com/browse (WARNING: it can freeze your browser)


Whose idea was it to actually run all those shaders at once? I guess someone just didn't want to store and host some thumbnails?


I think this would be really great with back and forward controls, i.e. rewind.


Along these lines: placebo buttons, e.g. walk buttons at crosswalks, elevator door close buttons

http://en.wikipedia.org/wiki/Placebo_button

