I was pretty excited to give this a try because it might be a good fit for some frustrations I've had with running a GPU-dependent task on my laptop, which has no discrete graphics.
Onboarding experience is terrible. Registered for an account from the Workstream page linked here, and was taken to a console with absolutely no mention of Workstream. Went back and forth a few times trying to figure out if the auth provider had redirected me to the wrong page after registration. Ultimately searched through comments here to find out that Workstream is a variant of the CORE product. Having figured that out, went to the CORE product and was rejected on creating an instance due to no payment on file. Not a huge issue, but it would have been nice to be prompted up front to complete this. Once past that hurdle, I go back to the CORE product and find out that I'm not allowed to create GPU instances until I put in a ticket.
I'm hoping this was some accident where the landing page ended up on HN before it was supposed to be public, because otherwise the lack of thought put into just the setup process is pretty concerning. I'm also pretty confused by the requirement to put in a ticket to have GPU instances enabled when they've already taken payment info; a test auth on the card seems like a better anti-fraud measure than the half-sentence I typed into the ticket.
Man, my excitement here sure was short lived.
Edit: got an email that I had been enabled. Went to look. Three of the GPU types still show as locked, but one no longer shows the lock icon, so I guess approvals are per-type? But I can't select the "unlocked" GPU type; it's disabled, with no explanation of why. A couple of minutes of trying different combinations and I find out that it's due to the DC: if I choose a different DC I can select it. So I guess they're only available in one DC, at least right now? It would have been nice to be told that.
Edit: got the machine to provision and installed the client linked from another HN comment. I logged into Paperspace using a Google account; the client does not list this as an option, only email, GitHub, AD, and SAML. I give up. Except the machine is still provisioning and I've already paid $10 for storage. Great.
I don't know if it has anything to do with this announcement (e.g. some malicious actor taking advantage of a dormant vulnerability), but just a few days ago someone nearly broke into my Paperspace account. Fortunately the system flagged the login as suspicious, but it seems they managed to get the password right.
The weird thing is I haven't used this thing since years ago, when I registered and used it, like, twice? And I used a unique, secure password which I'm pretty sure I've never shared with anyone or typed anywhere else or on any unsafe device. No Google login either (but I've logged out of all devices and removed authorization for all external services just to be sure).
So I emailed them asking what happened and how I could cancel my account (because there's no way to do it via the website). They didn't give any detail except to say someone did manage to "semi" sign in, more than once, on a few separate days recently. At least they responded quickly and deleted my account.
I've used Paperspace pretty extensively for cloud gaming without the lock-in. Unless you're pretty familiar with their UI, it can be an extremely frustrating experience.
I've had problems where I've been locked out of GPU types with little to no explanation from their staff. Instances have also gone completely unresponsive and essentially "bricked" from my end.
At some point I reached a decently stable state and haven't mucked around with instances since. Sad to see things haven't improved for new customers.
A few weeks ago I tried to subscribe to their machine learning product, but their payment processor only accepts US credit cards and I'm Europe-based.
While support says it will work with non-US credit cards as well, I could not get it to work. Over the course of a week I tried about 20 suggestions from their support for what to put into the payment form; nothing worked.
I don't think it's strange to have to write a ticket to get GPU access. Google Cloud Platform requires the same, can take days to approve, and is more expensive and WAY less intuitive. Plus I've had a ton of glitches working with GCP where they said they approved my request but didn't -- multiple times.
I've used Paperspace's Gradient product (hosted Jupyter notebooks), and while it is cheaper than most other options, I can't recommend it. The service has a very janky feel. When I first started, there was an issue where my instance was stuck "shutting down" for half an hour or more and I couldn't access my code. Then there was a weird issue where some kind of geographical split between different types of instances resulted in my code getting lost (I think it happened twice). It has the feel of a product held together by duct tape and chewing gum with a thin veneer of slick graphic design. I recently tried using their CORE product (GPU-heavy VMs), but for some reason instances with GPUs are not enabled until you write them a message (I did, and they haven't responded in a week).
My advice would be to use the cheap, high-performance machines (VC-subsidized?) if it makes sense, but never ever store data with them without backing up to a different service (git or Dropbox, maybe?)
CEO of Paperspace here. I'm really sorry about this. That is not the experience that we are striving for and FWIW, since leaving beta, Gradient is much more mature at this point (many millions of hours of runtime and lots of developer work). We have been aggressively stabilizing (and building out new features) over the past few years and continue to improve the product every release. My sincerest apologies for your negative experience early on; I hope you will give it another try.
It's cool that you're responding in this thread, and thanks, but I just have to mention that these experiences are from about a month ago. Not sure if that's what you mean by "early on".
I'm a customer, and one thing that I think would be helpful would be to ensure that notebooks launch inside of the storage folder. In fact, I'm not sure there should be any folders other than the storage folder and one more folder clearly labelled so the user knows its data isn't going to persist, i.e. is volatile.
I really like a lot of the features you have in place, but had weird things happen like files disappearing, git repos being erased except for the folder name, etc.
That being said, I think there's a lot of value here, and once I defaulted to using only storage, I had fewer issues with Gradient. The concept behind your company seems to be democratizing the devops needed for getting on-demand compute, and I dig it.
Hit me up privately if you want to talk more. There's some potential ideas I'd like to discuss with you involving my employer and potentially augmenting our product with yours (and bringing customers with us). Hope that last statement tells you that despite the warts I've experienced, I really admire what you have accomplished. I'm a big, big fan of intuitive UX that simplifies high-value tasks, and I think you've achieved a lot there.
Hey, I'm old and operate in a space where most things run at single-digit MHz speeds (but have to work underwater, in a vacuum, and other interesting places).
What does your service do? Is it like VNCing into my dev box in the shop, except (I hope) more responsive?
I'd guess he's a spacecraft engineer who does work in a neutral buoyancy tank. In the US that usually means NASA/Johnson. That or he's a spacecraft engineer at a contract firm that picks up side projects doing underwater automation work, which is not that uncommon in this small field. Oceaneering Space Systems, for example.
The second one. I've done a couple things at NASA/Ames and it has been generally fun, although there have been a couple of weird episodes (Like me not being authorized to check out stuff from my own repository due to not being a US citizen, and being told that it wasn't a unique case).
I also can't recommend them. Service is operational when working, but the times it isn't working as expected I don't have the patience to message support. Things just aren't straightforward.
This was 2 years ago when I used them to run a couple of Windows apps for work.
1. It advertises itself as a gaming service, not a 9-5+overtime service. Granted, some people do game for >8 hours per day, but those people probably own their own hardware. Which leads into...
2. Many people used cloud gaming services for casual play. It doesn’t make sense for someone to buy a gaming computer when all they want is the occasional PUBG match or to play a grand total of 15 hours of Far Cry.
With those two assumptions, you can oversell the machines because the average user will be offline for the majority of the time.
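To put rough (entirely invented) numbers on that logic, here's a toy sketch:

    # Toy model: if the average customer is online only a few hours a day,
    # a provider can sell far more seats than it owns machines. The numbers
    # are made up for illustration, and this ignores peak-hour clustering,
    # which is what actually limits oversubscription in practice.
    machines = 100
    avg_hours_online_per_day = 3            # hypothetical casual-gamer average
    utilization = avg_hours_online_per_day / 24

    max_customers = int(machines / utilization)
    print(max_customers)                    # -> 800 customers on 100 machines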
Ah makes sense, thanks. I'm guessing that if a number of people start using it for ML workloads, they're going to tamp down on that. Because holy crap, that's an awesome deal, $14/instance doesn't even pay for the power for a 24/7 ML workload.
But maybe the messaging will be enough to stop a significant number of people from jumping on that.
That seems like an amazing deal. Can you drop a link to Shadow? That's such a vague name I don't have a lot to go on. I am interested to dig into the details of this deal. Is there some kind of Acceptable Use Policy or is it more like a pay-as-you-go VPS?
Just saying that on Paperspace, it's all about using the storage folder. Had some challenges before, but definitely getting a lot of value out of it now.
I think the product has a ton of potential, but certainly has warts to iron out. I've so far just used Gradient, and haven't experimented with the CORE APIs feature yet, but I'm looking forward to seeing what can be done there.
Lol, yeah, I had the same issue. Followed their advertising for 'gaming in the cloud', only to find out that the VMs with GPUs were disabled. Not going to do any gaming without a GPU, guys.
Eventually they enabled them, but it was such a pointless experience that I was completely turned off using it.
Not sure when you tried their service, but I think I started around... probably a year ago? Maybe a bit more.
After I created my account, I had to specifically request access to GPU plans. Once I got approval (1-2 business days), I was able to connect to a machine with their web client, set up Parsec, and get a decent cloud-based gaming experience.
Round trip latency was usually less than 50ms, which was tolerable to my eyes for a lot of games. Something like an FPS was of course out of the question.
Was going to try their Free account, until I signed up and discovered that the big black "Free" had a tiny light-grey "+ Machine Utilization Charges" beneath it. Everything is "Free + Charges".
I have a pet theory that some of the managers that set up the "free - just pay shipping and handling" type businesses found a new home in e-commerce. I obviously dislike it.
When I opened the pricing page I was 'afraid' the hourly prices would come with the caveat of a monthly subscription (or separate fee). Seeing Free + Charges immediately answered that for me. But yeah, I also see your point.
Yeah, I signed up for free but it wouldn't even let me create a machine. If they expect to sell it, then at least make it possible to put together a machine and then charge to launch it. So far all I know is that they have 3 DCs to choose from. The rest, no idea. Can't get that far.
They have free GPUs, and the account is free. Doing fast.ai currently on a free GPU. The extra charges apply if you want to upgrade, like needing more storage.
Please breed if you can ;-) Hum, I mean, transmit who you are to others, infect them with greatness, however you can (books are a great idea, teaching and mentoring as well).
FWIW I think there's an ethical way to profit, but we have to be willing to wait for the value we created to actually materialize in the real world. A not-so-rare case, I think, is people donating large amounts to their mentors, once they "get there", once they're established and earning enough (likely 2-5 years after the mentorship or transmission took place; sometimes way more).
You certainly can't live off those returns. Some of them might be fabulous though, if what you transmitted was really life-changing. I guess. Never been on that side of things (gave, did not receive), but I'm thinking putting out the value is the mission, and anything else is a secondary concern (just keep that day job to pay the bills).
No cookies, no ads, source code always available, pay for a prebuilt version at a price the user names ... in excess of $100k/year (after 20 years :) ... ardour.org
I'd love to see a graph of that income over those 20 years! Also, do you get any license payments from Harrison Consoles? And if yes, are those included in the 100k per year?
There is a small "back payment" from Harrison, but it is not included in the cited numbers. I work closely with Harrison and they have worked very hard to not make Mixbus a "fork" of Ardour.
Note that there was essentially zero income for the first 8 years, and then a slow, steady rise once I needed to make a living again and instituted the Radiohead-inspired "pay tunnel".
The world is full of risks. Got to deal with them.
It helped that I had 8 years (post-amzn) not needing to make any income and thus able to build a large amount of goodwill and visibility within relevant communities.
People do "steal" the software in the sense of putting it into DVD/ISO packages that they charge for.
My concern has never been that everyone pays, only that enough people pay. But perhaps that's why Ableton Live changed the zeitgeist of computer music production, and Ardour is just ... Ardour.
I'm just going to state again what I want, which is only tangentially related to the topic...
1) I go on Github and configure a service
2) I make a wallet that people can donate to
3) I start up a virtual machine pointed at that GitHub repo, using the wallet to pay for the time on the machine. The virtual machine host guarantees that the code in that repo is what's really running.
I can imagine lots of other things I want, too, but this is the bare minimum. I think it'd be really useful in a lot of scenarios.
There are some interesting security problems you could try to solve with something like this. With an open source service that is hosted by someone else, you never know what is actually running. You can't trust it.
I was thinking about something similar a few months back, and I think it could be doable. You'd need a CI service that creates reproducible builds, and a hosting service that can show what build artifact is currently loaded. You'd allow the public to view the state of the service. I think it could work with Heroku or similar.
That gets me closer to: I trust the code, and I trust the hosting service (I.E. AWS), but I don't need to trust the person running the code as I can verify that it matches what's on GitHub.
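A minimal sketch of that check, assuming the CI publishes a reproducible-build digest and the host attests the digest of what it actually booted (all URLs and filenames here are hypothetical):

    import hashlib
    import urllib.request

    def fetch(url: str) -> bytes:
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    # digest the CI publishes for the artifact built from the GitHub repo
    ci_digest = fetch("https://ci.example.com/builds/1234/artifact.sha256").decode().strip()

    # digest the hosting provider attests for the artifact it is running
    host_digest = fetch("https://host.example.com/attestation/artifact.sha256").decode().strip()

    # optionally, rebuild from source yourself and hash your own artifact
    local_digest = hashlib.sha256(open("artifact.tar", "rb").read()).hexdigest()

    assert ci_digest == host_digest == local_digest, "deployment doesn't match the repo"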
This—the ability to audit what's running on the backend you're talking to—is in large part what people get out of smart contracts on e.g. Ethereum. You can take the contract source code from GitHub, compile it (deterministically), and validate that the deployed-on-chain smart-contract binary is the same. The blockchain nature of the platform then ensures that the contract will do exactly and only what you "expect" it to do (i.e. it'll do the same thing "in production" that it does when you test it on your own machine, since any node that tries to execute the code differently would diverge its state from the consensus, and be ignored.)
In essence, a Turing-complete smart-contract blockchain is a deterministically-trustworthy compute-hosting service. It's one that has the disadvantage of all the overhead distributed auditability requires; but at least has the advantage (compared to centralized compute-hosting with remote-attestation) that it already exists and is usable right now for real-world use-cases.
(And you can also reduce the blockchain-y overhead by moving whatever backend business logic you can out of the "trust kernel" into untrusted regular machines, and then just having the "trust kernel" do the important stuff. CryptoKitties is a good example of this: the only thing their smart contract does is track who owns what kitty, because that's what people would try to dispute by forging transactions. The rest of the stuff is state in a regular centralized RDBMS, because it's dictated by the service, rather than by user input, and so is not under dispute.)
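For the Ethereum case the audit step itself is mechanical. A rough sketch with web3.py; the RPC endpoint, contract address, and expected hash are placeholders you'd fill in from your own deterministic compile (note that solc embeds a metadata hash, so the build must pin it to get byte-identical output):

    from web3 import Web3
    from eth_utils import keccak

    w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # hypothetical node
    address = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")

    # keccak-256 of the runtime bytecode from your own deterministic build
    expected_hash = bytes.fromhex("00" * 32)   # placeholder value

    deployed = w3.eth.get_code(address)        # runtime bytecode as stored on chain
    if keccak(bytes(deployed)) == expected_hash:
        print("deployed contract matches the audited source")
    else:
        print("mismatch: on-chain code differs from the repo's build")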
This is a really hard problem with "solutions" that usually run counter to privacy and, you know, controlling the machine consuming your electricity. Remote Attestation has come a long way, but (at least on Linux) still in its infancy.
First time I hear about "Remote Attestation"; got any trusted sources/resources for someone to read up on it more? (besides Wikipedia and its sources)
Here's some words Red Hat folks wrote[0] about Keylime[1][2], "open source scalable trust system harnessing TPM Technology,"[3] written in Python and Rust, originally created within MIT Lincoln Labs.[4] It leverages TPM 1.2 and 2.0[5] and also involves/includes/references code from Intel[6] and Cloudflare[7].
True true, wasn't meant to be hostile but I see that I wrote my comment unkindly.
I do think the point stands though. They are names for the same technology. How it is used, and who uses it will determine what type of spin I'd put on it.
Monitoring the integrity of a trusted audio decoder in my system's kernel: DRM
Monitoring the integrity of an open source tool that I bought and paid for: remote attestation.
Both will come down to various arrangements of trusted computing enclaves, asymmetric cryptography and groups trying to bypass said arrangements.
I couldn't agree more. DRM is just hardware and software. It's a tool and implementations matter. HDCP is one of those implementations that seem like a good idea but which have all kinds of side effects that make the product the DRM is part of (HDMI in this case) much less useful for certain fair-use, law-abiding use cases. As long as we have the interoperability exception for breaking DRM there's a way around, but it would be better if interoperability was a requirement of accessibility standards in the first place.
Agreed, something like "Netlify for virtual machines" would be useful.
You could probably get close by auto-deploying from GitHub Actions but you wouldn't get the main benefit of a service like that (having ops done in a clean way without having to think much about it).
Without having any experience in the field, aren't lambda functions supposed to provide something like that?
Netlify (primarily) takes a git repo containing a program to generate a static site and turns that into a live running website that automatically updates when the generator repo changes. You don't have to think about configuration or security or updates and you don't have to maintain a server.
Ideally this would be a service with the same ergonomics but the outcome isn't a hosted static website but a machine (somewhere in the cloud) running whatever the user wants. "Tedious" tasks like hardening, monitoring and upgrading (and redeploys) would be handled by the service provider, as opposed to the user.
For people who like doing ops this might not sound appealing, but developers (especially single devs and small teams) often just wanna focus on their app, not their ops, even more so when it comes to continuous maintenance.
Addendum: I think Laravel has some of this covered in their ecosystem with tools like Forge and Envoyer. But I've never tried any of them so I can't judge.
This is what Platform-as-a-Service (PaaS) is, and it has been around for a long time. Google's App Engine, Heroku, and many others offer it.
Since then we've seen "serverless" or Functions-as-a-Service (FaaS), which is just smaller bits of code (down to a single JavaScript function) but the same concept of everything else being managed by the provider. This is AWS Lambda, Cloudflare Workers, Google Cloud Functions, etc.
The latest offerings now take a Dockerfile or container image that you build, which means you can run anything on any stack while the provider manages the rest. Google Cloud Run, AWS Fargate, Azure Container Instances, Fly.io, Zeit Now, etc.
The host would prove that the source available online is the exact same code running on the instance, so you know that it wasn't modified by the developer/bad-guy before it went into production.
What you are describing is almost exactly how Codius is envisioned to work. Our company is actively working on this right now and it is close to being in an alpha/beta state.
You should really look into GitLab. It's free to self-host the whole thing if you want, and there's a full CI/CD pipeline (you can plug whatever you want in there).
It's leaps and bounds beyond what GitHub offers (different target I suppose).
You'd still have to code the linking to a wallet part, but that's definitely doable.
The GP poster's point was that the hosting service needs to prove to the consumers of the app that what's running on the hosting service is the same code they can browse through on GitHub.
When you're both the developer and the consumer, you can certainly prove this to yourself (to five nines of confidence or so) by just setting up the hosting yourself. But we're talking about the case of a centralized backend for a multiuser service, e.g. a forum, where the users want to not have to trust the moral fibre of the person who set up the server, but merely trust that the infrastructure used in the hosting guarantees that it can't run anything other than the codebase (that they control.)
I think there is a market here. I don't use this service, but instead use an EC2 instance as my developer box (for personal projects) and I do like being able to easily move between multiple machines, no matter where I am and easily spin up another instance when I want to start fresh or try out other setups like having a more powerful machine or something with a GPU. At work (I work at AWS) most people do the same for their developer machines and don't have physical desktops, just using their laptops for access or lightweight local development.
A service that makes setting up and managing those developer machines easier for folks who don't want to learn or fiddle with AWS options would be valuable.
AFAIK if you only want a box to act as a dev environment, that would not be much work beyond ec2:RunInstances? Is there anything obvious that I'm missing?
Yeah, but even for fairly technical people, there's a lot more of a learning curve there than there needs to be (starting up an instance is easy, getting all the security stuff set up is more of a pain). Having something which is easier to manage without all the complexity of AWS seems reasonable.
Standard installations (set yourself up with the tools you want running on the machine), keeping track of and managing the security parts (security keys/access credentials), mapping to any static IP or domain name you may want to use for ease of access, and other niceties could make the process smoother if you were purely focused on the developer workflow.
Also, when you're testing things out on the machine, you have to know how to change security settings for opening ports and providing access externally. These aren't too complicated to do, but generally having simple tools/UI for making this simpler could be useful. For example, if you're a Flask dev you may want to just run a site "locally" and make a few setting changes before being able to access your site. I've helped friends with how to do that before.
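For reference, the "security stuff" is only a few API calls, which is exactly why a friendlier wrapper could exist. A hedged boto3 sketch of the minimal path (the AMI ID, key-pair name, and CIDRs are placeholders, not recommendations):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # security group that only opens SSH plus one dev port, to your IP only
    sg = ec2.create_security_group(
        GroupName="dev-box-sg", Description="personal dev box")
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "203.0.113.7/32"}]},   # SSH from your IP
            {"IpProtocol": "tcp", "FromPort": 5000, "ToPort": 5000,
             "IpRanges": [{"CidrIp": "203.0.113.7/32"}]},   # e.g. Flask dev server
        ],
    )

    # the instance itself; assumes a key pair named below already exists
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.large",
        KeyName="my-existing-keypair",
        SecurityGroupIds=[sg["GroupId"]],
        MinCount=1, MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])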
https://aws.amazon.com/workspaces/ perhaps? Seems like that is the existing AWS option similar to this offering. Would be neat to see a comparison between the two.
Workspaces (from my personal point of view) would cover a chunk of what this company is talking about, but doesn't tie into or focus on the developer workflow. That product is perfect for general office machines, but developers likely want the option for more control and need different features.
Lots of red flags. These guys apparently have very little clue what they're talking about.
I mean "fast + virtual" already makes me doubtful. And I'm sure that with the internet being overloaded by everyone working from home, using remote desktop is going to be even more painful than usual.
But when they claim "Get a new computer in less than 5 minutes, zero configuration required." and then mention 3ds max, I am 100% convinced that they are clueless. Pretty much nobody uses the stock 3ds renderer, so you'll need to install something like Redshift or V-Ray, too. And setting up the plugin licensing alone will take you hours. Plus you have to re-do it every time the underlying hardware of your virtual instance changes.
Oh, and then there's pricing. The "P6000 / For extreme 3D apps, rendering" tier starts at 50GB SSD? WTF? I have an 8TB storage RAID here to keep the scene files around. And then "Only pay for what you use", but "For hourly plans, if your machine is powered off, you will only be billed for your storage."
OK, so you pretend to cater to 3D professionals, but you fail to mention licensing, the number 1 cloud issue, you provide too little storage to load even a hobby-sized rendering project, and your on-demand service will bill me for storage unless I download and re-upload terabytes of data every time I switch projects.
I'd say you can greatly improve your pitch by NOT promoting it as a professional solution. It's probably great for hobbyists and gamers, so I would focus on them with the marketing.
Oh and how are you going to be "Delivering every frame in less than 16.6 ms" when there's 50ms of latency (due to speed of light) between me and your closest server?
This is so true. I've done GPU-heavy, high-performance rendering over remote VNC-like connections, something which requires lots of preparation time -- and eventually you will be frustrated by working remotely.
I used Paperspace for remote gaming via Parsec.
I also purchased one of their machines for remote development purposes while I was waiting for parts to fix my laptop.
The service was really good. The machine was fast; with RDP it felt close to local. They also offered very powerful hardware.
I moved on because it was a bit pricey at that point. This was about a year, year and a half ago. I think it was about $50 a month for me. I moved on to a dedicated server that I used KVM and libvirt to manage.
That worked, but it's obviously a lot of overhead.
Signing up right now for this - glad to hear it worked well. I have a Mac at home (not ideal for games) and at most a couple hours a week to do any gaming (so it's not worth buying a PC). For the absolute best machine they have, my outlay will be less than ten bucks a month. An easy sell.
It's not much different than Stadia or Nvidia GeForce(?).
I played online multiplayer games. If you have a 144Hz G-Sync/FreeSync monitor, then this isn't meant for that. But it was totally playable.
The weak point is that you're beholden to your internet. Traveling for work, it didn't work at all. Most places I stayed had very slow internet, and then the latency became unplayable.
I see this as a completely viable option if you're on Linux, or Mac and want to game every now and then. Or if you're waiting for a new GPU to come out and still want to game.
On the off chance you see this, I'm pleased to report that the answer is, no! It works brilliantly as long as you use the free "Parsec" service's client to access the desktop and your games. Browser access to the Paperspace desktop is fine for normal app use but no good for gaming.
Setup wasn't quite push-button but I got it up and running by dumbly following the instructions. Parsec was the most confusing part. Trust the docs, and it'll work.
Mostly older FPS multiplayer games against friends: Garry's Mod, the Source version of Counter-Strike. Haven't seen any issues at all; it really feels like it's on a local desktop (other than the quite obvious graphical degradation in fast-moving play). I have a hardwired ethernet connection to my router though; I bet it'd stink over WiFi.
Also - am not good. :)
EDIT: the latency figures are nothing like 50ms though. Low tens maybe, total. This isn't a huge surprise though: on the same machine and connection I've also (using JamKazam, not over Paperspace) had decent success doing online realtime music jam sessions with local friends too, where the latency tops out around 25ms - can't really get into the pocket playing funk, but just fine for rock.
Better known as 'Desktop Virtualization'. Established competitors in this space are Citrix/XenDesktop, Microsoft/HyperV, Oracle/VirtualBox, VMware/HorizonView used in combination with remote desktop clients like X11 and Remote Desktop Connect. Is there an angle here that is new and interesting?
This is specifically "desktop as a service" so while there may be Xen or Horizon in the mix it is abstracted away from the customer. The established competitors are Amazon Workspaces and Azure Virtual Desktop.
Curious what folks think the outlook on this technology is. Rather, will there be any significant shift towards centralized compute in the next decade?
Currently, a large institution/corporation has to manage thousands of individual machines. Say a physical component fails, now a technician must go to the location of the machine and give the user a temporary replacement. Alternatively, in a centralized compute environment, they could just allocate a new machine, and work entirely out of a data center.
And what about software updates and upgrades?
In the centralized model, both software updates and hardware upgrades can be managed more easily. Sure, we have good software tools to update all networked devices, but if that fails, the admin still sometimes needs physical access.
One market I see this potentially taking off in is academia and hospitals. (Though I’m biased because I’m employed by a Medical School)
Much of the record keeping is already done with a centralized infrastructure. Liberal use of active directory and low powered clients is the norm.
And particularly for research, there’s the added benefit of being able to allocate more resources without any physical action. Say I’m trying to run a script to fold proteins on my lab workstation. Usually, I’d be limited to the hardware on hand, but in the centralized model I could request or allocate a more powerful machine. Sure, the current solution is to spin up your own VM and move your program. Often, academic institutions have their own on-prem compute for this purpose. However, both still require technical ability on part of the user.
How close are we really to the model of giving all users a dummy client (think Chromebook) and centralizing the real compute? What challenges or disadvantages am I missing?
As someone in the Apple ecosystem, the first thing I think when I see a service like this is that this would let me keep doing the type of computing I want to be doing (e.g., software development, and using the big creative apps[0]), while still using devices that Apple is invested in improving (i.e., iOS devices, because they're not investing in macOS, or at least not the parts of macOS that support the type of software I want to run).
Therefore, unless Apple changes course, if I want to stay in their ecosystem (which is debatable), the only way I'd be able to do that is to start using a service like this.
The way I see it there are three options:
1. Apple changes course and starts supporting powerful software again.
2. All powerful software becomes web-based à la Figma.
3. Start using services like this.
The status quo cannot continue indefinitely; history has shown that when a popular product stagnates, an external player eventually figures out how to capitalize on it and takes over from the existing players (e.g., see the iPhone vs. flip phones, Firefox vs. IE, Sketch vs. Photoshop).
Even if there's "centralized" computing, everyone still needs a device they can use to access this central environment. Of course these devices can still fail and need repair. (The Chromebook in your example). At least in that case, these machines are fungible so a permanent replacement can just be given while the faulty device is repaired and used to replace another device the next time one fails.
> they could just allocate a new machine, and work entirely out of a data center.
Not really, you still have to manage endpoints, and they still have peripherals. One of the big advantages to systems like this is access to high-performance compute, but generally you'll want good displays and peripherals to interface with machines like that.
With a remoting system, you also need to make sure that your network performs well enough not to cause strain for your workers.
All that is not to say that it's exactly the same workload, but it may not be as major of an improvement in administrative complexity as it seems at first blush.
The benefits to this are substantial, which is a bit worrying.
If this style of computing becomes the norm it would centralise and fragilize any systems, companies, economies that rely on it. As well as passing control over what can be run to a third party (see: "The war on general purpose computing").
When making multiplayer games in the '90s I realized that the player's computer was sometimes more powerful than the server, so the strategy ever since has been to put as much work on the client as possible, in order to fit as many players as possible on a server, because hosting servers is expensive. You want reliability on the server side. Server goes down - hundreds of people can't play. One client goes down - not your problem.
The advances in network capacity and low-latency IO open up the possibility of thin clients. For example gaming consoles - in a few generations you'll probably just connect via your smart TV. The biggest incentive is probably DRM, where you stream the content rather than owning it.
This looks like a consumer-targeted product. If so, they should use per-month pricing instead of the per-hour pricing, which only caters to nerds who will go to the trouble of calculating.
I hope it's actually an affordable option overall compared to any other options out there.
Clearly this is a product targeted at professionals, designed for using heavy applications while mobile.
Gaming is also a use case, but gamers that run heavy games tend to be nerds anyway.
Per month pricing would make it very hard to price competitively for users who work 6+ hours a day vs. users who need to work with CAD files a couple of times per month.
I don't know what kind of everyday consumers need to run heavy CAD software.
> Moving your workflow into the cloud gives you the best hardware and networking performance possible, and helps you work on the tough stuff a normal computer just won’t do.
This is then followed by their "Advanced" specs @ 6vCPUs, 16GB RAM and 2GB VRAM. That's a mid-range laptop.
You can choose between per hour and per month pricing. The per hour pricing only runs when you're using the machine (only the cost of storage when it's off) and the per month pricing is a flat rate. They should offer a calculator for estimated usage costs.
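Until they offer one, the break-even math is easy to run yourself. A minimal sketch with made-up rates (not Paperspace's actual prices):

    # All three rates below are hypothetical placeholders.
    hourly_rate  = 0.45    # $/hr while the machine is running
    storage_rate = 7.00    # $/month flat for storage on the hourly plan
    monthly_rate = 159.00  # $/month flat for unlimited access

    # hours per month above which the flat monthly plan is cheaper
    breakeven_hours = (monthly_rate - storage_rate) / hourly_rate
    print(f"monthly plan wins above ~{breakeven_hours:.0f} hours/month")

    def estimate(hours_per_month: float) -> float:
        """Estimated hourly-plan cost for a given usage level."""
        return storage_rate + hourly_rate * hours_per_month

    print(estimate(40))    # light use
    print(estimate(160))   # full-time use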
Anybody used it for gaming? I was a beta user of the Nvidia service which is dedicated to gaming and while overall acceptable, lag spikes would render the use of the platform really frustrating. Everything would be smooth for 30s and then "lag" or freeze for up to 1-2s and then resume. Fine if playing the sims, but when you are in a fight in fast paced games it's a deal breaker. To my knowledge, so far, no platform has figured out the proper streaming/input lag management to make the experience truly seamless. Is this one different?
Yeah. I use it for work (healthcare, 3D simulations, GPU, our own custom apps for treatment planning) and also for gaming. Would recommend with Parsec.
I agree. Parsec makes it excellent. I really like the Paperspace VM combo there. Super easy to set up and I've noticed no issues. It's more expensive than Nvidia's service (to which I also subscribe), but you can play whatever game you want or do anything else you want.
I just tried for the last couple hours. Installed Steam, then Fallout 4 and Grand Theft Auto 5. I tried to get GTA to start 5 times but it never would. Just a black screen with music playing.
Fallout would start, but I couldn't aim with my mouse. I could walk and shoot, but it wouldn't register mouse movement. Then accidentally hitting ESC would exit full screen, and eventually it crashed to the point that I had to hard-restart the system.
It sounds like you were using Paperspace's remote desktop. I find that if you connect to it with Parsec, none of the issues that you're describing apply.
My issue with "traditional" cloud gaming providers like Nvidia (on top of non-stellar performance) is that they have a limited catalog and offer no system control. I like to play old games that only run on Windows and could therefore use a service like Workstream. (Currently playing Warhammer Online, which requires DX9, so unless you have a way to manually install old drivers etc. you can't really play.)
I wish there was a way to automatically nuke the image on shutdown, and then spend 15 minutes to have a script auto-populate a few games in my steam library at next boot. It’d save $5/month, and also solve the OS update problem.
You can use the CLI to create/destroy instances, so what you could do is write a script that has Chocolatey (or Boxstarter, which is a bit more for this purpose) install Steam, run it when you log in, and then have another script destroy the instance using the CLI command.
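Roughly, the glue could look like the sketch below. The `paperspace machines create/destroy` subcommands and flags are assumptions based on the old paperspace-node CLI, so check `paperspace machines --help` for the real interface; `choco install` is Chocolatey's real command.

    import subprocess

    # create a fresh instance from a template (flags are assumptions;
    # adjust to whatever the real CLI expects)
    subprocess.run(["paperspace", "machines", "create",
                    "--machineType", "P4000",
                    "--templateId", "<template-id>"],
                   check=True)

    # ...on the new machine, a first-login startup script would run e.g.:
    #   choco install steam -y
    # (wiring that into the Windows login sequence is up to you)

    # when done, tear the machine down so only storage is billed
    subprocess.run(["paperspace", "machines", "destroy",
                    "--machineId", "<machine-id>"],
                   check=True)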
So I read through the whole landing page and I don’t really understand what this is. It sounds like it’s just another virtual private server? How is this different from getting a VPS on any hosting provider and remote desktoping into it? I guess since it’s paid by the hour rather than pay by month, but generally a service like this charges you for every hour your virtual machine exists, not just when it’s running. So unless you want all your data deleted you’re going to be paying longer-term anyway. Or maybe that’s not the case, they certainly don’t go into any detail to explain it one way or the other.
First, that is a pretty nice web site. The notion of this sort of virtual computer goes back a ways, the one thing I keep expecting to see but haven't yet is a fully encrypted experience where the "cloud" bits don't work unless the user's local display has a security token attached.
My first experience with this type of architecture was in the '80s with something called an "X terminal". All the same elements: a machine with a display/keyboard/mouse that had a network protocol to talk to some hardware acting on my behalf in a machine room. I could sit down at any station and open up my desktop, just as I had left it. It is a pretty decent experience even with only a 10Mbit Ethernet network.
The second time I got to experience this architecture was in the Citrix days (late '90s, early 2000s). Now the OS was always Windows and the network protocol was proprietary, but you could bring up an entire enterprise without all that pesky installing/rebooting that was the life of an early Windows system administrator. It also gave you control over employees trying to put bogus software on the computer. No Netscape Communicator for you! Work work work. :-)
Somewhere after Citrix imploded with all the problems that come with a walled garden with razor wire on the walls, we started seeing deployments using virtual machines and VNC. That gave you the "what you see is what is happening" feel of Citrix but now you had a different choice of operating systems and better vendor flexibility. VNC over a SSH SOCKS proxy replaced X11 over SSH as a more "universal" way of implementing this architectural design pattern.
Of course Google gave us "the browser is the computer" with ChromeOS and now the machine as a browser target where the browser is something more standard.
I like the architecture but in the previous iterations there was always some big problem it was addressing, X terminals addressed mobility, Citrix addressed enterprise configuration management, VNC/VM addressed multiple OSes other than Windows while retaining enterprise configuration management, and ChromeOS went for a better security model, mode-less configuration, and minimal cost of entry.
What problem does Workstream address either better or differently than what the above solutions address/addressed?
I would like a minute of silence for all sysadmins working in healthcare. Having to deal with COVID on top of a Citrix infrastructure (if it can be called that) must be... well need I say more. Working with Citrix.
(PS: I'm joking, I hope the product has become stellar. It very much was not back in the day when I had 200 users on it).
My experience in the early 2000's supporting enterprises with network attached storage was they started moving away from Citrix to the VM/VNC solutions often from VMWare. At the same time Citrix started buying a bunch of different companies to expand into other markets.
That people still have to use it is a testament to this architecture pattern's resiliency in face of systemic challenges :-)
I hear that, after it left beta, the number of games that you can play on GeForce NOW is greatly reduced. I have read that you can no longer play arbitrary games from your Steam library (like you could in beta), and furthermore, a few major publishers have pulled their games from the service.
So the advantage of Paperspace would be that you can play many more games than you can on GeForce NOW.
I don't know how to show you an actual list compared to my Steam library, but I only know of a handful of games that have been removed (The Long Dark was big drama a few weeks ago), and I'm sitting here testing a bunch of my most popular games (Bannerlord, DOS2, POE2, Destiny 2, LOL, Space Engineers, etc.) and I haven't hit a game yet that I can't play.
Some publishers have been very anti-competitor regarding it and taken down their entire libraries but there is still a ton to play.
On the flip side, none of the games I play are available on GFN. It's incredibly frustrating. I've moved on to Shadow where I'm not subject to the arbitrary whims of publishers and developers. I can install any game that I own and nobody can do anything about it.
If your use case is gaming, does Linux support matter much? If anything I would think this would be for someone who has a Linux setup but wants to access games that aren't available.
"If your use case is gaming, does Linux support matter much?"
It really depends on the game.
For instance, Factorio, which is enormously popular on HN, has a perfectly functioning Linux version. Many other games do too, and some games even perform better on Linux than on Windows, not to mention having a better user experience on Linux (especially if you are technical and know what you're doing).
I have an old Linux laptop, and Factorio performs well enough in single player, but I run into serious performance issues in multiplayer. It'd be nice to be able to run this Linux game on a more performant Linux machine without shelling out the $$$ for a new gaming rig.
Further, I'd like to avoid Windows as much as possible, where it can be avoided.
I have had good experiences with Shadow.tech. It does take a while to go from sign-up to them actually spinning up a machine for you, but the latency is extremely good for cloud gaming.
It doesn't look like this is launched yet, at least I don't see it when I log in. I've been happy with their notebook product Gradient though, and it will be interesting to see if latency is good enough for this to replace some local desktop usage.
Sorry for the confusion here, we are in the midst of separating out the two products and this is not reflected everywhere yet. Under the CORE section of the interface, you'll find these instances and you can download the desktop app here: https://paperspace.com/download
How's native peripheral support? I've been looking for a cloud solution that would act as a virtual rig for VR. However, from everything I read there's hardware limitation on memory to support meaningful peripheral throughput. I'm curious about what you and DTE have come across!
What kind of VR hardware are you using? I'm working on a cloud-based virtual computer solution that leverages Google's additive GPUs (you can add as many as you want to facilitate what you need to do).
That sounds awesome! I don't have a VR rig set up and am looking to build a PC and have some spec I can share. How do you hook up VR peripheral with the cloud?
The idea is that the VR hardware would have companion software, and that this software can be run on the cloud-computer (where all the heavy lifting is done) and displayed through a browser on the basic device that you already have.
So in your case - instead of personally fronting the costs for a whole lot of VR-ready hardware, you can just pay for a powerful virtual computer (with as much RAM and GPU power as you want for your task).
Connecting your own assets/files to the cloud-computer can be done via cloud storage services like Dropbox, Google Drive, etc.
But under your core product the only options for running Windows or Ubuntu are server instances. From what I can see I can't run a graphical desktop on top of your Ubuntu VMs.
What I wonder: is there a special cloud service for compilation? Compilation is a spike in IO and CPU and most of the time nothing (and I don't mean a build server, but an exported file system I can edit on my computer and compile very fast in the cloud).
I've been using Paperspace Air machine since they first launched (still paying early bird prices too) and their basic Windows VM is pretty amazing.
I'm mostly a Linux/Mac guy, but I spend a few hundred hours per year logged into my Paperspace machine and almost always have it doing something for me.
It's perfect for getting access to your own toolchain on any computer. And it's got plenty of resources to give it time-consuming jobs to do. It has just enough GPU to give it an edge over some of my cheap VPS options.
It was perfect when I was contracted out to a highly restrictive corporation and couldn't access my own employer's tools due to firewall policy. The browser-based remote desktop always worked.
I'm curious what others here are paying for a cloud VM for dev tasks. My current favorite VM size on Azure is Standard_NC6_Promo that includes a K80 GPU, 56GB RAM with 8 vCPUs for $0.39/hour.
When you say dev tasks, are you talking about machine learning? What dev tasks do you need 56GB of RAM for?
I'm trying to figure out who these products are for. $0.39/h * 8h ≈ $3.12/day, or approx $70/working month. And that's on top of buying the computer you have to connect to this with, plus the annoyance of extra latency in all tasks. I can see the use of occasional demanding tasks using very large VMs on inexpensive computers, but I don't see how it works out in the long term if that's your main machine that you always live out of.
Sorry - yes ML tasks. The pricing is on par with a D8v3 but offers more RAM and a GPU, but likely with slower CPU perf (though I haven't benchmarked the machine).
My laptop doesn't have a discrete GPU, so I can't do any ML tasks on it so running a Jupyter server on a remote VM is my way of doing those tasks.
Not a heavy user, but I've been using Paperspace a few times for gaming. They make it really easy. They have a "template" you spawn a new instance from, or use a blank windows install. Their template is handy since it has Steam and Parsec already installed.
What they seem to be doing here is saying you don't need Parsec and can just use their browser implementation instead.
I'm assuming this works by streaming your keyboard/mouse inputs to the VM, then it runs the game and then streams high-res video and audio back to you. What's the total latency with this approach? And the required bandwidth for the video stream?
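The bandwidth half, at least, yields to a back-of-envelope estimate. The ~0.1 bits-per-pixel compression figure below is an assumption for H.264-like game streaming, not a measured number:

    # Rough bitrate estimate for a 1080p60 stream at an assumed
    # compression efficiency of ~0.1 bits per pixel (H.264-ish;
    # the real figure varies a lot with scene content and encoder).
    width, height, fps = 1920, 1080, 60
    bits_per_pixel = 0.1                     # assumption

    bitrate = width * height * fps * bits_per_pixel
    print(f"{bitrate / 1e6:.0f} Mbit/s")     # -> ~12 Mbit/s downstream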
You'd want to install Parsec on the server, then controllers work automatically (generally). It works pretty well (though I had to stop using Paperspace because I couldn't afford the cost).
I use Paperspace's core compute machines for my IDE and dev machines. It's great not having to run everything locally; I can use a cheap low-end laptop locally but still have a beast dev machine. If I end up hosing my dev machine, I can just start it from the last snapshot.
Granted, I could probably do this with any host/cloud infrastructure. But I'm happy with it.
That's also how I use it. I tried other hosts, but for dev use theirs is particularly nice because it's much cheaper. Also I like the user interface ;) Unfortunately I haven't tried Gradient yet, but Core already has so many possibilities...
You could build a desktop beating that for around $800 today - including a Ryzen 3600, motherboard, case, better GPU etc. At $0.18/hr for Workstream, that means you'd break even on building your own machine after about 4,444 hours of use ($800 / $0.18/hr).
With a little extra work on setting up VNC/RDP/Parsec you can then access this desktop from anywhere. I already do this and it works great for the most part. The streaming app/protocol is key, and thanks to teams like Parsec and Moonlight we have high quality software doing this for free.
----
But given their target market, this does look really cool. Amazon Workspaces costs 3x-4x more for comparable machines with Windows. Paperspace looks like it's offering OSX and some good bundled software for designers, VFX etc.
Oh, good catch! That works out to 185 days of 24/7 use, so it won't change anything; my PC will last a lot longer than that.
But then again, even a much lower cost wouldn't change things for me. For one, I completely own my PC and can do anything I want on it; second, I know it's secure and private and my data won't be used in ways I might not know about; and third, there's no internet latency on input/output, which is nice for gaming etc. I could go on, but yeah, I feel like it's clear owning is generally better than renting for many reasons.
This is a new world to me... I'm looking to learn a GPU-intensive application like DaVinci Resolve before committing to a new machine to run it locally -- would this be an appropriate solution, and would it obviate the need for the local machine, i.e. can I do everything on this that I could do locally?
Would something like paperspace be a good platform for software development? My slow computer doesn't handle IntelliJ that well. Instead of buying a new PC I could rent this for a few hours at a time.
> For hourly plans you are charged a flat rate (depending on machine type) per month to cover storage and access. Additionally, you are charged an hourly rate when the machine is running. Monthly plans are offered at a flat rate for unlimited access.
Then the pricing page doesn't list the flat rate, when looking at hourly pricing - but does show an hourly price for monthly billing? Possibly monthly/(30 * 24)?
Edit: looks like hourly for monthly plans is calculated as monthly/(365÷12×24)
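That conversion is easy to sanity-check; the $50/month price below is made up for illustration:

    # "Hourly" price shown for monthly plans appears to be the monthly
    # price divided by the average number of hours in a month.
    AVG_HOURS_PER_MONTH = 365 / 12 * 24      # = 730.0
    monthly_price = 50.00                    # hypothetical plan price
    print(round(monthly_price / AVG_HOURS_PER_MONTH, 3))   # -> 0.068 ($/hr)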
I really like the idea of Paperspace and I've tried to get going a few times on the free plan but I hate notebooks. I didn't fully understand the server + gpu offerings, which is what I want. I would love it if I could just get an IP to scp some scripts to, then ssh in and run them. I don't think it's possible to get billed only for GPU time when you're running it this way though, so it would be pricey to keep the server up.
Hi. Developer and part-time computer science high school teacher here. Does Workstream support graphical desktop linux machines, or only terminal? Thanks!
I use Paperspace for running Fusion 360, which is a CAD application. I use a Linux laptop and there are no really good CAD applications in the Linux space (I tried FreeCAD, but Fusion 360 is so much easier to use). Works like a charm and is extremely light on the wallet (I'm billed roughly 15-20 USD per month for around 40 to 60 hours of usage). It's been a godsend so far.
I'm curious about something because I'm working on a side project that's "in the area".
How does workstream, or indeed any remote computing provider, handle the possibility of people using their service to attack other computers, or to store/transfer extremely illegal data? They're not exactly running KYC identification.
My ideal scenario is something like this that's $20/month for unlimited usage, with the caveat that you have to have the server in your own house with your own internet, but the service is that it provisions your machine and proxies it for fast speeds everywhere.
If the goal is gaming, GeForce Now and Moonlight are amazing. You can also get a remote desktop via GeForce Now but probably better just to use remote desktop.
I have to recommend https://shadow.tech!
It is a Windows VM intended specifically for (but not limited to) gaming!
And it works extremely well; I've been playing games on it for ~2 years using their client app on Linux.
I just got mine last week and I love it. The CPU is a bit underpowered, but the GPU is so much better than what I have in my PC and I've been able to play a bunch of more recent games that I wasn't able to before. I also much prefer the freedom to install what I want over services like GFN or Stadia.
Signed up for a Paperspace account about a year and a half ago; they had a dark UI pattern to stop people from deleting their VMs, and it kept charging me every month with no method to stop it. After a month or two, they changed their UI to allow deleting the VM.
+1 on Shadow!! - I would definitely recommend Shadow. You get a full instance of Windows 10 to do anything you want (I primarily game). I wrote a quick review of it, including 3dmark scores for the base version here: https://medium.com/@ec822/playing-in-the-cloud-d0c023b77bdd
I've had a great experience with Shadow. I am pretty close to one of their servers (Dallas) so I get around 8ms ping, but even up to 40ms ping it works very well.
Are there services like these that also "rent" the software licenses (aside from the OS) to you? I wouldn't mind paying a pro-rated license for shared photo or video editing software, for example.
I really wish that the monthly flat rates were included when hourly prices are quoted on the pricing page and with "only pay for what you use" on the front page. Right now it's very easy to get a first impression of feeling misled.
This works out to ~$375/yr for their Advanced plan if you figure 8-hour work days and 261 working days per year (https://www.google.com/search?q=how+many+work+days+per+year). How does this compare vs. an equivalent instance with some kind of remote desktop running on a cloud service?
You can also set up a SOCKS proxy on a much cheaper basic VPS as well, giving you the advantage of using a web browser on your local computer instead of dealing with the overhead of a completely virtualized desktop.
I'm relishing the switch back to the thin client / mainframe model, but I want to own my mainframe, even if it physically sits in a company's data center. With 5G we are getting closer to disposable glass thin-client interfaces.
This looks awesome. I wish I could rent machines preloaded with the applications I use, but I guess the incentives for the licensors aren't aligned there.
My thoughts exactly. If I could just rent Photoshop for a few hours, that would serve my needs perfectly. But it looks like I'd need to buy a license, load it on, and pay monthly for storage, so I might as well just pay for the application and use it locally.
I'd like to see a cost comparison across different providers. A comparable V100 machine on Google Cloud is $2.015/hr, while Paperspace seems to charge $2.30/hr for a dedicated V100 (https://www.paperspace.com/pricing).
Google Cloud is very sneaky with their pricing as they don't include the instance itself in the advertised GPU pricing. Here's an instance very comparable in specs (8 vCPU, 30GB RAM, 1 V100, 50 GB SSD): https://cloud.google.com/products/calculator/#id=a9fbcab5-cb... It's $3.19/hr. The Linux version is $2.87/hr.
Thanks for this link, I've been going by the estimated cost on the Google Cloud VM deployment page which seems to give significantly lower prices than your link. With GC I always seem to end up paying a lot more than I planned to...
Great idea. What do you recommend as a good standard/tool we could use to publish benchmarks? We have some CPU/memory-centric instances but our primary focus is on GPUs.
Why do you need to use a Virtual Computer when you already have a laptop/desktop you use to connect to this Virtual Computer? What am I missing? Is it only for folks connecting from Chromebooks?