We're social creatures. We like interacting with other people more than we like interacting with machines, even if machines are more efficient in a narrow technical sense. I think this is especially clear for nannies--the main job of the nanny is to cuddle and interact with the baby in ways that only a human being can do.
There are plenty of fitness videos and apps, but people still pay a lot of money for in-person fitness classes. People wouldn't stop going to Starbucks even if someone invented a vending machine that could produce coffee that tastes identical for half the cost.
> We like interacting with other people more than we like interacting with machines
I remember a '70s article in which Bank of America, I think, said that they wouldn't be rolling out ATMs like one of their competitors because customers prefer interacting with a warm friendly pretty teller, not a machine.
We know how the ATM vs human teller preference worked out. (I even have to stick the word "human" in there for clarity because "teller" is almost an archaic word.)
> People wouldn't stop going to Starbucks
People go to Starbucks or similar coffeehouses for many reasons. Better coffee than at home, chance to get away from home or the office, solitude (yes, ironically, you can have more privacy or alone time in a Starbucks than at home or office).
I figure that a Starbucks that was completely automated with no employees and little social interaction, but otherwise identical to a normal Starbucks, would still be quite popular.
Starbucks has these machines, and I find it awesome to get my coffee and even a chilled latte there for 2.50 instead of 5 or 6 dollars.
Machines taking our jobs is already happening, and it is due to our human nature that we are not seeing it: we extrapolate the past into the future and picture physical robots.
As some of you (technical people) can better imagine, it is and will be much more "just" software (which a journalist can't picture as well).
The "reduced" demand for nannies is "kids playing with their iPhones", and it is already here and growing stronger.
What will we pay people for in the future? That they are likeable and sympathetic, and because "we like to 'interact' with them"?
Seems like this would split the world into highly skilled people programming the machines and a lot of people catering "what's left" (on the human side, what machines can't do or haven't replaced yet) to them.
Luckily I am a software developer
"Doctors to explain the recommendations": you could also say "selling the pills of the pharma industry with little regard for side effects".
"Nurses to provide hands-on care": I see "human issues" like: she's telling you all the problems she has at home and you have to listen.
Tell me the Internet isn't already better in quality than "real random people"?
I think it is important here to draw a distinction between equivalence without type conversion (e.g., ===) and equivalence with type conversion (==).
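For readers less familiar with the JavaScript operators the analogy borrows, a minimal illustration (the values are arbitrary):

```javascript
// '==' coerces operands to a common type before comparing;
// '===' compares both type and value, with no coercion.
const loose = (1 == "1");   // true: the string "1" is coerced to the number 1
const strict = (1 === "1"); // false: number vs. string, so no match
console.log(loose, strict); // prints: true false
```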
While it may be the case that a robonanny can change a diaper or administer a bottle with a precision greater than or equal to a human nanny (==), it is not the case that the robonanny understands viscerally what its connection to its charge is (===). Humans are a product of biological evolution and as such, there are certain features of our cognition that are very, very difficult to overcome, not least of which is our collective hangup on our own internal categories (i.e., types). This is precisely the reason many people experience possessiveness, nostalgia, a fear of the "other", etc., and I believe it also goes a long way toward explaining why so many folks speak of interacting with robots as if they were human as 'icky' or otherwise suboptimal.
That said, I personally believe that we are doing ourselves a disservice by dismissing the humanistic value of providing, on a cultural/societal level, a means for most of us to keep ourselves occupied, fed, and sheltered, regardless of whatever impact such activity may or may not have on the finances of giant corporations.
You must live in a world where people don't make coffee at home due to cost considerations.
The main point of a nanny is to have someone else deal with your kid so that you don't have to.
Machines will almost certainly provide a worse experience than a human at the beginning, but the lower price expands the market significantly to those who could not afford those things before.
Teachers probably won't get replaced for the longest time since they're not paid out of your pocket.
I agree. I think robot nannies will become pretty commonplace as many of the people who can't afford human nannies take advantage of the cheaper mechanical option.
I was probably projecting, but I'd say day care, in a home or a centre, is a financial necessity, and a nanny is when you have too much money or too many children :)
But the larger point I was trying to make is that most people who need child care are looking for someone to make sure it doesn't die while they're away, not provide some special level of care.
A robot nanny might not be as good as the best humans, but it will be much better than the worst, and people don't like to expose their children to avoidable risk. It's like how I'll eat at McDonald's if I travel somewhere new instead of a possibly superior local restaurant. Consistent mediocrity can beat unreliable quality.
This seems worth dividing into a couple of concerns. One is: Can it provide an accurate enough simulation for the baby? I think we would definitely get there with advanced enough AI, since the level of sophistication needed for cuddling and interacting with babies is lower than, say, having an intelligent conversation.
The other concern is that even if the simulation is good enough, there's still an ick factor to "tricking" a baby into forming an emotional connection with a machine that doesn't have real human feelings.
If we build caretaker robots with software that closely models the internal function of human brains, to the point that we're satisfied that their outward behavior is accompanied by complex and human-like feelings of caring, that problem goes away too. Whether that's even possible is more of a philosophy of mind question; depends on whether you think internal experience depends on the exact nature of the physical substrate.
(Of course if it is possible, there's yet another layer there with the ethical issues of designing a slave race of beings with real feelings, see e.g. Robin Hanson's ideas about "ems.")
Maybe nanny is a bad example. I'd pay good money to get impartial therapy from an AI (guaranteed non-abusive) and then when it comes to solving the problems of humanity in general, a sufficiently smart AI seems like our best bet, on the basis that human nature is yet to be fully understood by humans after centuries of trying...
"It seems safe to conclude that someone asked them to write positive stories." As the author of the article in question, I can tell you that this is false.
Given the context and the way corporate boards generally communicate, I'd say an offer to "engage on qualified strategic proposals" is a pretty clear signal that they're interested in acquisition offers.
I agree with you in general, but I had to pick some limits to make the piece manageable. Perhaps I'll do 40 maps on the Byzantines at some point in the future.
I understand, and as I said earlier, it's a great summary, especially considering how much you managed to cram into it while keeping it easy to digest.
OK, but the same logic can invalidate almost any software patent. Apple's "data detectors" patent, for example, claims the concept of detecting data in a document (an abstract idea, it seems to me) plus a generic description of the steps someone would have to take to implement this on a computer. A data compression patent would cover some mathematical principle (replace frequently-repeated sequences with a shorter representation) with some details about what steps you need to do to implement the idea. I think the court's reasoning could be plausibly read as invalidating all software patents.
The key phrase in your comment is: "an abstract idea, it seems to me." The phrase "abstract idea" is, in this context, a legal term of art. It means what the Supreme Court wants it to mean.
If you look at CLS Bank v. Alice, the Court concludes that intermediated settlement is an abstract idea because it is "a fundamental economic principle." So to use your example, data compression (replacing frequently-repeated sequences with a shorter representation) might be a "fundamental computer science principle." But Lempel-Ziv-Welch, a specific compression algorithm, wouldn't be.
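To make the distinction concrete, here is a minimal sketch of the textbook LZW encoding loop (an illustration only, not the specific claims of the original patent):

```javascript
// Textbook LZW encoding: grow a dictionary of previously seen substrings
// and emit dictionary codes instead of raw characters.
function lzwEncode(input) {
  const dict = new Map();
  // Seed the dictionary with all single-byte characters.
  for (let i = 0; i < 256; i++) dict.set(String.fromCharCode(i), i);
  let nextCode = 256;
  let w = "";
  const out = [];
  for (const c of input) {
    const wc = w + c;
    if (dict.has(wc)) {
      w = wc;                   // keep extending the current match
    } else {
      out.push(dict.get(w));    // emit code for the longest known prefix
      dict.set(wc, nextCode++); // learn the new substring
      w = c;
    }
  }
  if (w !== "") out.push(dict.get(w));
  return out;
}

lzwEncode("TOBEORNOTTOBEORTOBEORNOT");
// → [84,79,66,69,79,82,78,79,84,256,258,260,265,259,261,263]
```

Note how codes ≥ 256 stand in for repeated substrings: this specific rule is what separates LZW from the "fundamental principle" of replacing repeats with shorter representations.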
So now the court has to decide how complex said algorithm is and whether or not it is simple enough to not deserve a patent.
Basic lossless compression can function like this:
Imagine a string of 1's and 0's e.g. 10010000011000101111001
This string can be trivially compressed in a lossless manner using this algorithm: every time the bit changes to a one or zero, record the previous run of bits. So we would compress this string to look like:
[1,1][0,2][1,1][0,5][1,2][0,3][1,1][0,1][1,4][0,2][1,1]
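The run-length scheme described above can be sketched as (a minimal sketch, assuming a nonempty bit string):

```javascript
// Run-length encode a binary string into [bit, runLength] pairs.
function rle(bits) {
  const runs = [];
  let current = bits[0];
  let count = 0;
  for (const b of bits) {
    if (b === current) {
      count++;                          // still in the same run
    } else {
      runs.push([Number(current), count]); // record the finished run
      current = b;
      count = 1;
    }
  }
  runs.push([Number(current), count]);  // record the final run
  return runs;
}

rle("10010000011000101111001");
// → [[1,1],[0,2],[1,1],[0,5],[1,2],[0,3],[1,1],[0,1],[1,4],[0,2],[1,1]]
```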
I understand now that this is a terrible example, but it is good enough here.
Most/all compression algorithms just use a pre-defined set of choices about how to compress data in either a lossless or lossy manner. The only difference between two compression algorithms is their rules for what data to keep and how to arrange it more efficiently into a different data structure.
I think the analogy of board games can be used here. While you are free to get a trademark on many aspects of your game, you cannot patent the actual rules or gameplay. http://www.copyright.gov/fls/fl108.html
The rules and gameplay are what make Risk different from Mouse Trap.
Replace rules with a compression algorithm (or any software algorithm...) and we come to the conclusion that software is not patentable.
So at what point does a collection of fundamental computer science concepts become patentable?
The line is completely arbitrary using the compression algorithm example. I honestly have not been able to reason through a real-life example that holds up to this scrutiny.
Maybe I am misinterpreting your answer, so if I am, I apologize, just ignore me =D
EDIT
Thanks to the person for the discussion-free downvote. Why engage when you can suppress?
At what point do you go from fundamental electronic gates to a Snapdragon CPU? The line drawing involved isn't unique to software. But life is full of line drawing.
There's a good argument to be made that the cost of the line drawing exceeds the benefits. I don't think it does, generally, but maybe it does for software. That said, I think you should make some money if you invent LZW. I don't like the idea of an economy where you can't make money off R&D unless you package it into a product with lots of advertising and sales people. I don't think that creates the best incentives.
Snapdragon is a flawed example. No one person or organization goes from gates to a Snapdragon CPU. It took a large community many decades to do that. But that's a side point.
I think the second example provides a great illustration of the divide between software patent proponents and detractors. Some people think a thing like LZW should be patentable, because they imagine inventing something on that scale of ingenuity and want to be able to make money off of it. Other people, in the scope of a larger project, usually, come up with things on the scale of ingenuity of LZW compression and are exasperated to discover that someone else patented it a few years prior and wants prohibitively large licensing fees, rendering the technology unusable; they don't think such things should be patentable because from their perspective it reduces innovation.
Sorry, but I think your premise is flawed here. You can patent rules for board games (http://www.ipwatchdog.com/2011/12/22/patenting-board-games-1...); after all, they're just processes. It's copyright (which you linked to) that precludes getting protection over game rules.
I think you've confused copyrights and patents; the document you linked to is from the U.S. Copyright Office and refers to the copyrightability of game rules (not their patentability).
I've often heard the claim that game rules are uncopyrightable but that they might possibly be patentable.
I think this description constitutes an actual "invention" of RLE (although personally I'd still shoot it down on grounds of obviousness). But the average software patent would be written the exact same way and then claim to own "lossless data compression", which is ridiculous. I'm not a lawyer, but I'm optimistically reading this as the Supreme Court seeing exactly this distinction and doing the right thing.
It sounds like you're saying that Alice is really about invalidating what Mark Lemley calls functional claiming: attempting to claim components by their function rather than by their structure. I read parts of the Alice opinion, and it seemed like that might be what the justices were getting at, but I didn't see the point made as clearly as I would have liked.
Despite your assurances, I am uncomfortable with this decision. Indeed, I am uncomfortable with Gottschalk v. Benson. I don't think a bright line exists between patent-eligible software and ineligible algorithms -- in this I agree with the Vox article. I would much rather have seen a decision that invalidated this patent on the grounds that taking an existing manual process and computerizing it is, by itself, obvious.
Here's a 5,000-word explainer I wrote about how Bitcoin works. It took significantly longer than an hour to write. https://arstechnica.com/tech-policy/2017/12/how-bitcoin-work...