Isaac Asimov’s predictions for 2014

Roughly 50 years ago, on August 16, 1964, Isaac Asimov wrote an essay in the New York Times predicting what the world would be like fifty years hence. (He was inspired by the World’s Fair of 1964, to which my sister and I were taken by our parents.) It’s a longish piece, and concentrates on increasing population pressure as well as the ability of technology to deal with that pressure and also improve our lives.

I’ll highlight just three of his predictions: one mostly right, one partly right, and one wrong.

This is what he got mostly right:

In 2014, there is every likelihood that the world population will be 6,500,000,000 and the population of the United States will be 350,000,000. Boston-to-Washington, the most crowded area of its size on the earth, will have become a single city with a population of over 40,000,000.

He was accurate on most of these. In fact, he slightly underestimated the world’s population, which is now 7.1 billion (see the U.S. and world population clock here), while the population of the U.S. is 317,309,000 and that of the Northeast Corridor is about 49.6 million.

This one wasn’t quite right:

The situation will have been made the more serious by the advances of automation. The world of A.D. 2014 will have few routine jobs that cannot be done better by some machine than by any human being. Mankind will therefore have become largely a race of machine tenders. Schools will have to be oriented in this direction. Part of the General Electric exhibit today consists of a school of the future in which such present realities as closed-circuit TV and programmed tapes aid the teaching process. It is not only the techniques of teaching that will advance, however, but also the subject matter that will change. All the high-school students will be taught the fundamentals of computer technology, will become proficient in binary arithmetic and will be trained to perfection in the use of the computer languages that will have developed out of those like the contemporary “Fortran” (from “formula translation”).

That hasn’t quite come to pass. MOOCs are making inroads into conventional teaching, but most high-school students don’t learn computer programming, largely because computers are already user-friendly. And we still have conventional subjects in schools.

The next one is mostly wrong, due largely to Asimov’s inability to foresee the rise of the Internet and its ability to dispel boredom:

Even so, mankind will suffer badly from the disease of boredom, a disease spreading more widely each year and growing in intensity. This will have serious mental, emotional and sociological consequences, and I dare say that psychiatry will be far and away the most important medical specialty in 2014. The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.

Yes, psychiatric drugs like antidepressants are on the rise (20% of Americans take them), but psychiatry is not our most important medical specialty, since most of its medications are dispensed by general practitioners.

More important, the world is becoming plugged in: when you walk down the street in an industrialized country, or ride in a bus or train, notice how many people are using their cellphones, iPads, iPods, or computers. Google Glass, the wearable computer, is next. This is the way the whole world will go. (My theory, which is mine, is that eventually the whole world will be like New York City.) Connectivity has brought tremendous advantages: think of the ability to access information at your desk instead of making a laborious trip to the library. And electronic journals and instant publication have markedly sped up the progress of science. Well, perhaps we won’t be as bored, but we may lose the skills of interpersonal communication.

Asimov made many other predictions. In general I think he did pretty well—certainly better than I would have—but it’s remarkable how many other people got stuff wrong, usually predicting a more technologically advanced or ideologically repressive society than we have now. Remember Nineteen Eighty-Four (written in 1949), or The Jetsons cartoon series, which supposedly took place in 2062?

h/t: OpenCulture via reader Jim E.

84 Comments

  1. Posted January 3, 2014 at 4:55 am | Permalink

    “(My theory, which is mine, is that eventually the whole world will be like New York City.)”


    That is my definition of how being on the web feels to me, a native New Yorker. Open all the time. High random connectivity. Lots to see. :-)

    My best girlfriend and I practically lived at the 1964 World’s Fair!

  2. Marella
    Posted January 3, 2014 at 5:04 am | Permalink

    They thought the old fashioned methods of repression would triumph. They didn’t realise how easy it would be to convince people that they were better off poor and powerless and handing their countries over to the aristocracy. Crappy education and control of the media is all that’s needed. The rest is superfluous.

  3. Posted January 3, 2014 at 5:44 am | Permalink

    I have a little book “The Future” by A. M. Low published in 1925 which was pretty accurate, apart from his notion of flying and submarine cars. Or that everyone by now would be wearing unisex romper suits!

    • Diana MacPherson
      Posted January 3, 2014 at 6:35 am | Permalink

      I read, I forget where, predictions for the year 2000 that were written in 1900, and the edition included statistics that the modern editor had added. What struck me most was that the average life span was so low. Something crazy like you were elderly in your 40s.

      • JBlilie
        Posted January 3, 2014 at 6:48 am | Permalink

        Old enough (in the day) to see your grandchildren pass infancy, and that was sufficient, from an evolutionary standpoint.

      • Latverian Diplomat
        Posted January 3, 2014 at 8:33 am | Permalink

        Those numbers are always heavily distorted by high infant/childhood mortality.

        A statistic like average life span at adulthood, or at age 50, is much more meaningful.

        Not to say that there haven’t been improvements, or that adults couldn’t be felled by an infection that would be trivial to treat today, but people living into their 60s was not rare.
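The distortion is easy to demonstrate with a toy cohort (invented numbers, not real demographic data): a handful of infant deaths pulls the mean age at death into the 40s even when surviving adults routinely reach their 60s.

```python
# Toy cohort (invented numbers) showing how infant mortality drags down
# "life expectancy at birth" even when adults routinely reach old age.

def mean(values):
    return sum(values) / len(values)

# Hypothetical pre-modern cohort of 10 people: 3 die in infancy,
# the rest live to ordinary old age.
ages_at_death = [1, 1, 1, 55, 60, 62, 65, 68, 70, 72]

at_birth = mean(ages_at_death)                  # the single headline number
adults = [a for a in ages_at_death if a >= 15]  # survivors of childhood
at_adulthood = mean(adults)                     # conditional life expectancy

print(at_birth)      # 45.5 -- reads as "elderly in your 40s"
print(at_adulthood)  # ~64.6 -- adults commonly reached their 60s
```

Same data, two very different stories: the mean at birth and the mean conditional on surviving childhood.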

        • Erp
          Posted January 3, 2014 at 9:17 am | Permalink

          Certainly wasn’t rare to live until your 60s if you reached adulthood. I have quite a few relatives who lived quite a bit longer (including my grandmother’s great grandparents who celebrated their 60th wedding anniversary and who both died in their 90s in the 1920s [admittedly as strict Quakers of the upper middle class they likely didn't drink or smoke nor did they suffer from malnutrition or lack of up-to-date medical attention]).

        • Diana MacPherson
          Posted January 3, 2014 at 9:20 am | Permalink

          Well yes but children dying young is a big deal. People forget that just a century ago, children died regularly because of diseases we can now prevent.

          • Latverian Diplomat
            Posted January 3, 2014 at 11:23 am | Permalink

            Yes, child mortality was a huge tragedy, and we can be glad of having reduced it so significantly.

            My point is that trying to reduce lifespan to a single number leads to misapprehensions like the one implied by your post. I’m not sure that you were making that mistake, but I’ve seen plenty of well-meaning people sincerely misunderstand this statistic.

            It happens frequently enough that I dislike seeing lifespan expressed as a single number. Some things cannot be boiled down to a single number.

            • infiniteimprobabilit
              Posted January 4, 2014 at 5:32 am | Permalink

              The fatal drawback of taking an average of a highly non-normal distribution. Like ‘the average mammal has 0.8 arms and 3.1 legs’. (And 0.05 wings if you include bats).

              Or my current favourite – since for most of recorded history I wasn’t around, then: To a first approximation, I don’t exist.

              More significantly, our traffic authorities make major pronouncements on road safety and driving behaviour based on comparing this New Year’s road deaths with last year’s when both sets of figures are within the limits of statistical fluctuations and have no statistical validity whatever.
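The road-toll point can be sketched with invented figures: if annual deaths behave roughly like a Poisson count, the expected year-to-year fluctuation is on the order of the square root of the count, so a drop of a couple dozen out of a few hundred is ordinary noise, not a trend.

```python
import math

# Invented figures: annual road deaths modeled as a Poisson count,
# where year-to-year noise is on the order of sqrt(N).
last_year, this_year = 310, 285

drop = last_year - this_year                # 25 fewer deaths -- "toll down 8%!"
se_diff = math.sqrt(last_year + this_year)  # std. error of the difference of
                                            # two independent Poisson counts
print(drop / se_diff)  # ~1.0 sigma: well within ordinary fluctuation
```

A difference of about one standard error is exactly the kind of gap that appears and disappears on its own, which is why single-year comparisons support no pronouncement at all.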

              • Doug
                Posted January 5, 2014 at 6:21 am | Permalink

                Or, “The average human is a hermaphrodite.”

    • Chris
      Posted January 3, 2014 at 6:57 am | Permalink

      Um… have you heard of the “onesie”, the adult romper suit craze that’s hit far too many people in the UK?

      It’s not great. You really don’t want to google it.

      • Posted January 3, 2014 at 7:41 am | Permalink

        Ack! I cannot un-see it! Adult Onesie

        • Diana MacPherson
          Posted January 3, 2014 at 7:49 am | Permalink

          The Stay Puft Marshmallow Man is pretty funny.

      • moarscienceplz
        Posted January 3, 2014 at 11:18 am | Permalink

        Ugh! It’s horrible enough that I see people wearing their pajama pants in the grocery store.
        (OMG! I just realized I sound like my grandmother!)

  4. Occam
    Posted January 3, 2014 at 5:56 am | Permalink

    First the mandatory xkcd.

    Matt Novak at Paleofuture makes the case that Asimov’s predictions were relatively conservative, and not entirely implausible in context.

    Like science fiction, futurology seems to me far more interesting in what it reveals about the times and mentality of its authors. In that respect, New Maps of Hell: A Survey of Science Fiction by Kingsley Amis (1960) sets the standard.
    But my own favourites of unintentional futurology, fifty years on, are two classics of political economy by, unsurprisingly, J. Kenneth Galbraith: The Affluent Society (1958) and The New Industrial State (1967). Their predictive power results precisely from Galbraith’s adage:

    Agreeable as it is to know where one is proceeding, it is far more important to know where one has arrived.

    Given the harsh diagnosis, it can be distressing to see what has changed since, and why, and embarrassing to observe what has not.

    To end with three quotes. From The New Industrial State:

    The real accomplishment of modern science and technology consists in taking ordinary men, informing them narrowly and deeply and then, through appropriate organization, arranging to have their knowledge combined with that of other specialized but equally ordinary men. This dispenses with the need for genius. The resulting performance, though less inspiring, is far more predictable.

    From The Affluent Society:

    — The greater the wealth the thicker will be the dirt.

    — …in the United States, the survival of poverty is remarkable. We ignore it because we share with all societies at all times the capacity for not seeing what we do not wish to see. … “Poverty,” Pitt exclaimed, “is no disgrace but it is damned annoying.” In the contemporary United States it is not annoying but it is a disgrace.

    • Larry Gay
      Posted January 3, 2014 at 7:06 am | Permalink

      I always appreciated Galbraith’s emphasis on self-deception, perhaps coming from his study of the Wall Street crash of 1929. He saw the folly of Viet Nam very early. I think there should be more emphasis in education on how easy it is to be mistaken.

      • Occam
        Posted January 3, 2014 at 7:37 pm | Permalink

        Then you may like another bitter excerpt from The New Industrial State. Bear in mind it was written in the mid-1960s!

        In the United States suspicion or resentment is no longer directed to the capitalists or the merely rich. It is the intellectuals — the effete snobs — who are eyed with misgiving and alarm. This should surprise no one. Nor should it be a matter for surprise when semi-literate millionaires turn up leading or financing the ignorant in struggle against the intellectually privileged and content. This reflects the relevant class distinctions in our time.

        • infiniteimprobabilit
          Posted January 4, 2014 at 5:38 am | Permalink

          I’ve only read one of Galbraith’s books but his gift for a wittily ironic phrase made him a delight to read.

    • Posted January 3, 2014 at 7:15 am | Permalink

      “Like science fiction, futurology seems to me far more interesting in what it reveals about the times and mentality of its authors.”

      Right. That’s why Fred Pohl, childhood friend of Asimov who recently died, called his autobiography The Way the Future Was.

      Star Trek is a good example. Yes, it was futuristic, but the 1960s shone through in the miniskirts and hairdos. Some of this was intentional, of course, as many of the episodes were social commentary.

      Also note that it is not always the fault of the authors if something was wrong. Roddenberry wanted the crew of the Enterprise to be 50/50 men/women. The network censors insisted on 1/3 female. Their reasoning: Otherwise people will think about all the fooling around [those words] going on up there. It also reflects on the times that they didn’t consider the options of 1/3 female (more male homosexuality, group sex, polyandry etc), despite Jan and Dean having sung “two girls for every boy” just a few years previously.

  5. Posted January 3, 2014 at 6:05 am | Permalink

    Maybe Orwell’s 1984 played a part in preventing the kind of world it depicted. It certainly did look as if many parts of the world were heading in that direction at the time.

    • TJR
      Posted January 3, 2014 at 6:14 am | Permalink

      Many of them still are……

    • Posted January 3, 2014 at 7:22 am | Permalink

      The place where Orwell’s vision breaks down is that he assumed that the mechanisms of information and publishing (i.e., computers and printers) would remain enormously expensive and only within the means of a government to maintain. If someone could reach back to the past and tell him about personal computers, printers and the internet in millions of homes, 1984 would end up a different story.

      That said, media conglomerates like Disney and Fux News are doing their best to make Orwell’s version of 1984 a reality.

      • Posted January 3, 2014 at 7:27 am | Permalink

        Someone said: “You know that you are reading old science fiction when, the further into the future the story goes, the computers get bigger and bigger instead of smaller and smaller.”

      • Latverian Diplomat
        Posted January 3, 2014 at 9:06 am | Permalink

        You know, from a purely technological standpoint, Orwell was not far off from what was actually possible in 1984. A lot of information processing and almost all video was still analog at that point.

        A few people with isolated PCs printing pamphlets at home (there was no Internet yet) would have been no challenge to Big Brother at all.

        This is aside from the fact that a government which was reshaping the very language to make rebellious expression difficult would never permit anything like a standalone PC.

      • Posted January 3, 2014 at 9:14 am | Permalink

        Eric Arthur Blair… George Orwell’s real name. Hmm, did someone reach back to the past after all?

  6. Posted January 3, 2014 at 6:12 am | Permalink

    “The world of A.D. 2014 will have few routine jobs that cannot be done better by some machine than by any human being.”

    He was referring to Siri in iOS 7. Annoying and disgraceful.

    • gbjames
      Posted January 3, 2014 at 6:22 am | Permalink

      “Siri, where is my flying car?”

    • Kevin
      Posted January 3, 2014 at 6:45 am | Permalink

      Siri is not so bad…especially when my kids get a hold of her.

  7. Ken Pidcock
    Posted January 3, 2014 at 6:16 am | Permalink

    My recollection of the 1964 fair was that it was a retrospective on 20th century futurism. There was a “What did they think in 1939?” feel to the whole thing.

  8. Diana MacPherson
    Posted January 3, 2014 at 6:38 am | Permalink

    This post reminded me of a video I watched recently where Arthur C. Clarke predicts the future of computers & telecommuting (my favourite perk of the modern age) in this 1974 interview. What’s up with the B&W? I remember 1974 & TV was in colour.

  9. JBlilie
    Posted January 3, 2014 at 6:45 am | Permalink

    “My theory, which is mine, is that eventually the whole world will be like New York City”

    Maybe. But I hope (and reckon) I’ll be long dead before that happens, thank Ceiling Cat. Sounds like hell to me.

    Some of my most formative experiences happened many miles from roads (many days of hard hiking in some cases), usually with only a 1/2-inch nylon rope between me and a fatal fall from a mountain peak. No phones, no radios, no electricity, no computers, just good comrades and some real adventure (not the kind in video games). And no cell phones to call in the “cavalry” on.

    • JT
      Posted January 3, 2014 at 9:53 am | Permalink

      I agree. A world like New York City, or any other city for that matter, sounds like pure hell to me.
      I finally gave in and got my first cell phone a few weeks ago. I plan to use it only for emergencies. Whenever I travel to the city (Vancouver in my case) I see everyone walking around like robots staring into a screen of some sort. What a species we are! I hate, hate, hate the city.

    • infiniteimprobabilit
      Posted January 4, 2014 at 6:49 am | Permalink

      Absolutely. I like being away from it all. Love the mountain views (but I’ll skip the 1/2″ rope, I’m not that good at heights ;)

      I shudder to think of a future where all the world is like New York (and shortly thereafter, like the most overcrowded parts of Calcutta or Hong Kong, not that I’ve been to either).

      I’m reminded of a sci-fi story by L Sprague de Camp, ‘Lest Darkness Fall’, where a time traveller went back to Rome and gave them modern medicine. I haven’t read it, I only know of it through a very short black satire on it written by Fred Pohl, ‘The Deadly Mission of P. Snodgrass’, where the resulting population explosion is graphically described. Eventually, with the last remaining resources on earth, a second time machine was constructed and one volunteer (from among the millions who applied) was sent back with a rifle and one cartridge, with which he assassinated P. Snodgrass as he trudged up the steps of Rome. It ends “To the unvoiced relief of millions of never-to-be-born humans, darkness finally fell.”

      (Quote not verbatim, I can’t find a copy of the story among my stack of old pulp sci-fi dammit).

  10. Kevin
    Posted January 3, 2014 at 7:00 am | Permalink

    New Mexico will never equal New York. That is one reason Georgia left; it was too stifling for her.

    I am not sure connectivity leads necessarily to homogeneity. And the number of options that connectivity is offering is growing all the time in terms of technologies, content, context, and the overall ability to both enable and direct a user’s choices.

    New Mexico will not look like New York, but connectivity will eventually make it look like what connectivity makes it look like more than anything else: the weather, food, cows, chiles, etc.

    • Diana MacPherson
      Posted January 3, 2014 at 7:11 am | Permalink

      I think yes & no with homogeneity. Yes because there is a democratization of access to information though some may have more access than others. No because people tend to cluster with like minds so there is homogeneity in the cluster but fracturing and isolation from differing perspectives at the same time.

  11. Posted January 3, 2014 at 7:09 am | Permalink

    “The next one is mostly wrong, due largely to Asimov’s inability to foresee the rise of the Internet and its ability to dispel boredom:”

    Arthur C. Clarke was better in this respect. (I find that, depending what time in the future one is discussing, one or the other is better.) He didn’t call it the internet, but his “wired world” (few will get the pun these days) had it all. Work from home? Check. Virtual communities of people who have never met personally? Check. Meeting your mate online? Check. Clarke also said that any sufficiently advanced technology is indistinguishable from magic. Type in a few keywords and get more information than you can read in a lifetime? Check. He even wrote a story in which most of the bandwidth is being used for porn. :-)

    • Diana MacPherson
      Posted January 3, 2014 at 7:17 am | Permalink

      See my link to Clarke’s predictions you mention at #8 to enjoy him chatting about them in 1974.

    • TJR
      Posted January 3, 2014 at 8:16 am | Permalink

      But did he predict how much of 2014 popular culture would be remarkably similar to 1970s popular culture?

  12. Timothy Hughbanks
    Posted January 3, 2014 at 7:15 am | Permalink

    Connectivity has brought tremendous advantages: think of the ability to access information at your desk instead of a making a laborious trip to the library.

    …or, even better, looking up and scanning through a paper referenced by a speaker during her seminar – during her seminar.

    • Timothy Hughbanks
      Posted January 3, 2014 at 7:24 am | Permalink

      On the other hand, there is the experience of sitting in the back of the classroom of a colleague and watching students who are checking out Facebook, watching Drew Carey, or just going through e-mail while she is talking about osmotic pressure – and knowing they’re doing the same things when I’m lecturing too.

      • Diana MacPherson
        Posted January 3, 2014 at 7:26 am | Permalink

        It’s why we need an implant so we can do things on the sly. :) I’m sure people think I’m replying to work emails, when I’m typing a response on this site during a meeting. Shhhhh, it’s a secret!

      • JBlilie
        Posted January 3, 2014 at 7:34 am | Permalink

        It’s amazing how people think that they can “multi-task” as well (not that those students probably have any illusions about “multi-tasking”).

        In spite of all studies showing people can’t do it, they think they can. They just do more than one thing very poorly. People can really only focus on one thing at a time. You can shift serially, but most people quickly lose the threads. We are not like time-sharing computer CPUs (at least not with higher-level cognitive tasks).

        I wonder how long it will continue to be socially acceptable to stare at your device instead of engaging with your (for example) dinner partner(s). A reaction to this addiction to screens surely will have to come some day. Soon, I hope.

        • Diana MacPherson
          Posted January 3, 2014 at 7:37 am | Permalink

          Oh yes, I know when I’m not paying attention but I’m good at shifting attention quickly. If I were in a class I would concentrate and not multi task. It was bad enough trying to force my mind not to wander in classes; any other distraction would be deadly & I tend to learn by hearing so I get more out of listening to a lecture than reading a text book.

        • Posted January 3, 2014 at 8:23 am | Permalink

          And there are situations in which “multi-tasking” is dangerous to yourself and others. I’ve taken two courses at the BMW performance driving center in South Carolina, and one of the first things the instructors tell you is “Turn off all the electronic accessories. They distract you from your job, which is driving.”

          • JBlilie
            Posted January 3, 2014 at 1:32 pm | Permalink

            This is what I always tell people: When you get behind the wheel, you have one and only one job: Driving. 100%, constant situational awareness.

            • Posted January 3, 2014 at 1:50 pm | Permalink

              That’s also why driverless vehicles will take over very quickly. In commercial delivery vehicles first; FedEx and UPS cross-country ground deliveries will almost certainly be the first to go driverless in a big way. Once insurance companies start giving big discounts to owners of driverless vehicles — and they will, believe me — only gearheads will buy new cars without onboard automatic navigation.

              Because even the best, most dedicated and professionally devoted driver can’t maintain 100% constant situational awareness, while the computer does exactly that.

              What that means for accidents is that computers will only crash in unavoidable situations where even Mario Andretti would have crashed, and there aren’t going to be any situations where the car crashes that the human would have avoided. That right there eliminates 99% of crashes, and the remainder aren’t worth worrying about.

              Cheers,

              b&

              • Gregory Kusnick
                Posted January 3, 2014 at 2:12 pm | Permalink

                …and there aren’t going to be any situations where the car crashes that the human would have avoided.

                This seems unreasonably optimistic, given our track record with bug-free software so far. Airplane autopilots have existed for decades, but could one of them do what Chesley Sullenberger did?

                There might be very few situations where the machine might screw up where a human wouldn’t, and maybe that’s good enough. But zero? Not likely any time soon, I suspect.

              • Posted January 3, 2014 at 2:44 pm | Permalink

                Actually, from what I understand, Mr. Sullenberger “merely” perfectly executed the standard procedures he had drilled in for decades. Sure, most (all?) aviation autopilots are designed for routine flight with the intention that the human will take over in an emergency, but I’d expect modern autopilots designed with emergency handling in mind to outperform humans.

                Plus, flying is much more complex than driving. On the road, the parameters are much more constrained, and in a crisis, “all” you need to do is reduce speed as rapidly as safely possible while avoiding obstacles. That itself is but a minor variation on what the car does normally all the time.

                (Yes, in slick conditions you can make things worse by trying to slow down too rapidly; yes, an automated vehicle is going to know that. And, more to the point, the automated vehicle is going to detect such conditions much faster and better than a human, and isn’t going to be arrogant enough to think that it can macho its way through them at an unsafe speed anyway.)

                Look at any other related field. Milling machines operate so fast that humans can’t even follow what they’re doing, let alone compete with them on raw speed — and the machines are working at precisions that humans can’t even match in theory. In chess, computers left humans in the dust ages ago. Some of the earliest computers replaced telephone switchboard operators, and modern routers are doing things that, again, humans aren’t even capable of fathoming. I’m sure UPS and FedEx are using similar technology to plan their drivers’ routes already.

                Again, don’t expect perfection from driverless vehicles. But they’ll be close enough to perfect that any human-comprehensible rounding of the figures will equate to perfection.

                Cheers,

                b&

              • E.A. Blair
                Posted January 3, 2014 at 3:01 pm | Permalink

                We will still need people to manage house-to-house deliveries, at least until FedEx or UPS or DHL come up with an ambulatory robot that can sneak up to my porch, leave the package, press the doorbell and pound on the door, then run away quickly enough that I won’t be able to tell anyone they got the wrong address and I’ll end up having to redeliver it myself (which happened seven times last year).

              • Posted January 3, 2014 at 3:15 pm | Permalink

                Yeah, that last mile is going to be the last mile that gets automated. But Amazon already has robots in its warehouses that could do the job if the path from the street to your door was as straight and level and clean as those in its warehouses; that day is coming, too. Just not until after almost everything else has been automated — low-hanging fruit first, and all that.

                b&

  13. Posted January 3, 2014 at 7:34 am | Permalink

    I recently came across a reference to the language spoken by a group of people in eastern Peru, the Matses. In the Matses language, verbs are marked with morphemes indicating the source of the information – whether it is from firsthand knowledge, inference, hearsay or speculation. Under these constraints, lying is ungrammatical in Matses*.

    Within twenty minutes, I had located and downloaded a document with a comprehensive grammar of the Matses language, all 1,279 pages, and was able to read for myself the details of the inflection system for evidentiality.

    *I am intrigued by the notion that if everyone woke up tomorrow speaking this language, creationists, tea partiers and blowhard news pundits would have a difficult time staying in business.

    • W.Benson
      Posted January 3, 2014 at 8:43 am | Permalink

      Obviously such language conventions work only for people who are honest. Could you please post the link to Matsés grammar?

    • James Walker
      Posted January 3, 2014 at 1:45 pm | Permalink

      A number of languages mark evidentiality, which only indicates the speaker’s claim as to how he/she acquired the information. It’s entirely possible that a speaker of one of those languages could be mistaken or lying.

      • E.A. Blair
        Posted January 3, 2014 at 2:29 pm | Permalink

        The information I have on the Matsés so far indicates that lying is not only ungrammatical, but also carries strong cultural taboos. The markers for evidentiality include categories that cover hearsay and speculation, and there is also a special marker for uncertainty. Apparently, use of one of those markers both absolves the speaker of responsibility for a statement and ensures that its truth value is in doubt. There is also, apparently, a special mode of speech for transmitting legends and stories.

        What other languages mark evidentiality? I’ve run across references to other Panoan languages doing so, but there’s not much documentation. I’m interested in finding languages with some literature.

        So far, I have managed to get through the morphology of the verb system, but the whole grammar is 1,279 pages and I have yet to get to the parts covering indirect statements, stories and cultural considerations. One of the examples Fleck mentions is that if you ask a Matsés man how many wives he has, the answer will be something on the order of “Two the last time I checked”. While it would be possible to give an equivalent answer in English to the question, “Are you married?” the telling point about Matsés is that it directs the speaker to account for what is said where English does not.

        • James Walker
          Posted January 3, 2014 at 4:24 pm | Permalink

          Well, there’s a whole book on evidentiality in different languages:

          Relevant to the Matsés is Dan Everett’s work on evidentiality in the Amazonian language Pirahã, which he argues is restricted to “immediate exponence”, which means that you can’t make a statement about something in Pirahã that you didn’t witness for yourself, or hear directly from the person who did experience it:
          http://www.pnglanguages.org/americas/brasil/PUBLCNS/ANTHRO/PHGrCult.pdf

          However, you’re talking about the morphological encoding of something the speaker believes to be true. How would they deal, for example, with something the speaker saw in a dream? From an objective perspective that wouldn’t be ‘evidence’ but from the speaker’s (and perhaps the culture’s) perspective it would be.

  14. Latverian Diplomat
    Posted January 3, 2014 at 8:43 am | Permalink

I find it interesting that he came so close on global population, yet seemed to believe that we would be required to start building cities in deserts, polar regions, and underwater to find places to put people.

    This might be tied to his pessimistic view of agricultural productivity. Perhaps he thought we could never afford to pave over as much prime farmland as we have.

  15. Posted January 3, 2014 at 11:52 am | Permalink

    The situation will have been made the more serious by the advances of automation. The world of A.D. 2014 will have few routine jobs that cannot be done better by some machine than by any human being.

    This is actually much more true than you apparently realize.

Compare a modern manufacturing plant with one from 50 years ago, and it’s essentially entirely robotic now. Rather than people actually manufacturing stuff, the few people left are supervising the robots that do the work.

    Amazon’s shipping warehouses are almost entirely roboticized, with Roomba-style robots moving shelves to and from the packing stations. The human there is told exactly which items to place in which boxes, and everything after that is again automated. And it won’t be long before the packing itself is similarly automated.

    Google’s self-driving cars are going to revolutionize the shipping industry long before Ford or GM are selling them to the general public. If you’re a long-haul truck driver, now’s the time to plan your next career. In under a decade tractor-trailer rigs are going to be running driverless 24/7, stopping only for fuel…and refueling those trucks is going to be a minimum-wage job even more soul-crushing than flipping burgers. But the good news there is that it won’t take all that much longer for somebody to automate truck refueling.

    Agriculture is mostly about large machines driving up and down the rows, and I’m sure those either have already been automated or soon will be.

    Retail sales, in the form of Amazon, is again largely automated.

    Honestly, there’s not a whole lot left except designing new robots and automating more tasks.

    For that matter, I’m as much to blame as anybody else. There used to be several times as many accountants employed by the company I’ve billed most of my hours to over the past decade or so. As normal attrition has dwindled their numbers, the ones that leave generally haven’t been replaced…in large part because the tools I’ve created have let them get by without. That’s an awful lot of well-paying jobs that simply don’t exist any more because of software — robots, if you will — that I created.

    In Utopian fantasies, the resulting productivity gains get shared by the whole society, resulting in not only greater average personal wealth but substantially more leisure time. In reality, the gains are being sucked up by the top fraction of a percent, and the masses are being left to starve because there’s not enough work left for them to earn a paycheck. This is not sustainable, and is fueling a bubble so massive that it makes the recent housing and stock market crashes look insignificant in comparison. Combine that with the exhaustion of petrochemical reserves and global pollution, and the future doesn’t at all look rosy.

    Cheers,

    b&

    • Gregory Kusnick
      Posted January 3, 2014 at 12:11 pm | Permalink

      In under a decade tractor-trailer rigs are going to be running driverless 24/7, stopping only for fuel…

      But will they still be trying to pass each other on steep uphill grades, and holding up traffic for five minutes in the process?

      • Posted January 3, 2014 at 1:14 pm | Permalink

I would very much doubt it. One of the great things about driverless trucks is that the personal reasons to be in a hurry don’t exist for a machine — and the desire to avoid lawsuits and / or bad press would be pretty high on the programmer’s list. Also, it would be relatively straightforward for inter-vehicle communications to help mitigate those sorts of problems; a heavy truck could communicate its anticipated speed to other nearby faster trucks, and they could negotiate an ordering amongst themselves that minimizes the delays for all. They could also analyze surrounding traffic and plan passing for times when no other vehicles would be adversely affected.
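A toy sketch of that kind of negotiation, assuming each truck simply broadcasts its anticipated climb speed and the group agrees to line up fastest-first before the grade (all names and numbers here are hypothetical):

```python
# Toy model: trucks approaching a grade broadcast their anticipated
# climb speed. Sorting fastest-first before the climb means no truck
# ever sits behind a slower one, which minimizes total delay.

def negotiate_order(trucks):
    """Return the agreed lane order for the climb, front of line first.

    trucks: list of (truck_id, anticipated_climb_speed_mph) tuples.
    """
    return sorted(trucks, key=lambda t: t[1], reverse=True)

convoy = [("truck-A", 45), ("truck-B", 62), ("truck-C", 51)]
order = negotiate_order(convoy)
print([t[0] for t in order])  # fastest truck leads the climb
```

Real vehicle-to-vehicle coordination would of course be far more involved; this only illustrates the "negotiate an ordering" idea in the simplest possible terms.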

        And if you’re in a driverless car of some sort (personal, rented, taxi, whatever), your car would presumably also be in on the negotiations and would attempt to position itself in front of the laggard before the start of the climb.

        Lots of similar types of strategies can be employed to greatly increase throughput and decrease congestion in all sorts of ways — but not with dumb, selfish human drivers in control.

        Cheers,

        b&

        • Gregory Kusnick
          Posted January 3, 2014 at 1:58 pm | Permalink

          But selfish human beings will still be in control of the priorities, if not of the steering wheel and pedals.

          Automating the vehicles won’t create a utopia in which everyone’s interests are magically aligned. There will still be competition among freight carriers, and between the carriers and other road users, so there will still be incentives for vehicles to optimize their own progress at the expense of other vehicles. Such systems are still vulnerable to the tragedy of the commons, where pursuit of individual advantage degrades the system for everyone.

          • Posted January 3, 2014 at 2:15 pm | Permalink

            There’s a very good reason to think that the result will be far better than what we have today, though — and that’s the Internet.

            Routing vehicles on the roads is very much like routing packets on data links. And, yes, the major network operators have reasons to boost their own performance, even at the expense of their competitors…except that, when they do, game theory kicks in and actively hindering the competition comes back to bite you harder than what you gain.
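The packet-routing analogy runs deep: the shortest-path algorithms underlying Internet routing protocols apply directly to road networks. A minimal Dijkstra sketch (the road network and travel times are made up for illustration):

```python
import heapq

def dijkstra(graph, src):
    """Shortest travel times from src; graph maps node -> [(neighbor, minutes)]."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Tiny road network: edge weights are travel minutes.
roads = {
    "depot": [("junction", 10), ("bypass", 25)],
    "junction": [("warehouse", 20)],
    "bypass": [("warehouse", 3)],
    "warehouse": [],
}
print(dijkstra(roads, "depot")["warehouse"])  # 28: the bypass route wins
```

The same algorithm family powers link-state routing on data networks, which is exactly the parallel being drawn here.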

Plus, there’s damned little room in long-haul trucking for anticompetitive behaviors to do you much good. You could shave a bit of time off your deliveries by running the trucks faster, but only at a significant increase in fuel costs. Your competitors will run their trucks at more optimal speeds, still make essentially the same number of deliveries per day, and eat you for lunch on prices. You could be a dick about jockeying for position on the climbs, but then all your competitors will do the same to you (while sparing each other), so you lose out big time and all you do is annoy everybody else.

            The eventual system will likely not be the theoretically optimal one, exactly because of the tragedy of the commons. But it’ll be so much better than our current system that such worries about falling short of ultimate perfection aren’t worth worrying about at all.

            Same deal with accidents. Will automated vehicles crash? Yes, unquestionably. Will they crash far less often than human-operated vehicles? Yes, again, unquestionably so. Which means that you’re much better off rolling the dice with the machines…and financial analysts and especially insurers are all about rolling the dice.

            Cheers,

            b&

            • Gregory Kusnick
              Posted January 3, 2014 at 2:49 pm | Permalink

              I grant that automated vehicles will, on average, be safer and more efficient than human-operated vehicles, and that’s all it takes to get them widely adopted.

              But earlier you seemed to be making the much stronger claims that they would be optimal in both efficiency (“they could negotiate an ordering amongst themselves that minimizes the delays for all”) and safety (“computers will only crash in unavoidable situations where even Mario Andretti would have crashed”). Those stronger claims seem untenable, and aren’t essential to your main argument, so I’m glad you’re backing off from them.

              • Posted January 3, 2014 at 3:10 pm | Permalink

                Eh, I don’t think I’m at all backing off of either.

                Of course, as I’ve repeatedly mentioned, they’re not going to be 100% perfect.

But Google’s driverless car record is already perfect, within human rounding. In comparison with human drivers, their incident rate per mile is so far beyond the 99th percentile that you can basically call them perfect.

                And no human driver can compete with a driverless car as far as operating efficiency goes. That’s triply the case if the car is allowed to optimize its driving for efficiency rather than the human telling it to get to the destination in the minimum amount of time possible.

                For commercial operators, they’re going to be planning trips well in advance and so will have every reason to optimize for fuel consumption rather than travel time — just as freight rail operators already do. If you need it faster than a truck can drive at 55 mph, you’re not going to go with a truck driving 65 mph; you’re going to send it by air. And since your truck can be on the road for at least 23 hours out of every day rather than eight, the 5% hit you take in per-mile range is obliterated by the tripling of per-day range. Not to mention, you can cut your fleet size in half and still move more freight more cheaply.
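A quick back-of-the-envelope check of those numbers (the 23-hour, 8-hour, 55 mph, and 65 mph figures are all taken from the comment above, not from industry data):

```python
# Per-day range: a driverless truck at the fuel-optimal 55 mph running
# 23 h/day, versus a human-driven truck at 65 mph limited to 8 h/day.
hours_ratio = 23 / 8                  # ~2.9x: the claimed "tripling" of duty hours
miles_per_day_driverless = 55 * 23    # 1265 miles/day at the slower, cheaper speed
miles_per_day_human = 65 * 8          # 520 miles/day even at the higher speed

print(hours_ratio)                    # 2.875
print(miles_per_day_driverless, miles_per_day_human)  # 1265 520
```

So even conceding the higher cruising speed to the human driver, the driverless truck covers well over twice the daily distance, which is the core of the argument.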

Really, the business case for driverless trucking is so overwhelming that I’m personally a bit surprised we haven’t seen it sooner. It won’t be long before people view humans driving trucks as being as perverse as humans picking cotton or humans welding vehicles together on an assembly line or humans operating telephone switchbanks. Sure, there’ll always be a few who do it for special purposes, such as prototyping or art or nostalgia or other sorts of one-off reasons. But not for a job.

                Cheers,

                b&

              • Gregory Kusnick
                Posted January 3, 2014 at 4:42 pm | Permalink

                Ben, you’re still arguing the safer-on-average point that I already conceded, and declining to defend the Mario Andretti claim that I challenged. That looks like a retreat to me.

              • Posted January 3, 2014 at 4:52 pm | Permalink

                Oh, I’ll still make the Andretti claim.

                Start a challenge between Andretti and an autopilot, both in identical vehicles. New York to Los Angeles (or vice-versa), and all laws (especially including speed limits) must be obeyed. Points deducted for every moving traffic violation. Immediately upon reaching the destination, both drivers drive a standard obstacle course of the type used for sobriety research — slaloms around traffic cones, balls bouncing in from out of nowhere, unpredictable traffic signals, that sort of thing.

                The robot will absolutely smoke Andretti, on every single metric.

                Cheers,

                b&

              • Gregory Kusnick
                Posted January 3, 2014 at 5:14 pm | Permalink

                So when you said “even Mario Andretti”, you meant “even Mario Andretti on a bad day”. Glad we got that cleared up.

              • Posted January 3, 2014 at 5:53 pm | Permalink

                Not even just bad days.

                First, my test is not at all remarkable for robot drivers. Indeed, that sort of non-stop operation is going to be the very first way they’ll be put to use.

                But, beyond that…the robots have response times measured in milliseconds. Mario even at his best has his measured in seconds. If response time is going to be the deciding factor — as it virtually always is — then, once again, Mario doesn’t stand a chance.

                And the robot’s reaction times are always going to be exactly the same. I’m sure Mario would readily admit that his fresh-off-the-starting-line self would beat his just-crossed-the-finish-line self, even if he’s still the best of the pack at the end of the race.

                I know there’s a certain type of romanticism that says that the best human is always going to beat the best machine, but even John Henry laid his hammer down and died, and even Garry Kasparov tipped his king to Deep Blue. And driving is nowhere near the top of the list when it comes to complex tasks that machines are significantly superior to humans at.

                Driving is an activity that lots of humans engage in, and that many enjoy and take some sort of pride in performing. Not everybody is a machinist or a railroad worker or a typesetter or whatever, so it doesn’t bother them that machines are better at all those tasks than they are. But many humans get bent out of shape when it comes time to face the fact that they’re no longer the top of the heap at something they themselves take pride in…and humans are overwhelmingly convinced that they’re above-average drivers.

In reality, even the best humans wouldn’t make it through the first round of QA testing for robotic drivers.

                Cheers,

                b&

              • Gregory Kusnick
                Posted January 3, 2014 at 8:29 pm | Permalink

Ben, if you think I’m making some sort of John Henry argument from human exceptionalism, then I must not be expressing myself clearly.

                My point is simply that there are cognitive skills that humans have and that robot vehicles may not have in the foreseeable future. And contrary to your claim, there may be situations in which such skills contribute to traffic safety.

                Among such skills is the ability to intuit the intentions or state of mind of your fellow drivers (“That guy’s lost.” “That guy’s cruising for a parking space.” “That guy’s about to change lanes without signaling.”).

                I’m not claiming these situations are necessarily common. But your claim seems to be that either they don’t exist, or that they have zero impact on safety, or that robots can somehow finesse them with fast reflexes (even when driving multi-ton big rigs). I don’t find those claims plausible. The best defense against bad drivers is to see them coming and be somewhere else when they screw up. I think we’re a long way from robots that can do that.

                If you’re tempted to say that robots won’t have to do that because there won’t be any human drivers left, I’m not buying that either. You can’t get there from here without passing through an intermediate phase in which robots and humans share the road.

                And that’s all I have time for this evening. Feel free to have the last word.

              • Posted January 3, 2014 at 9:05 pm | Permalink

                Ah — that clarification helps.

                Because it is, actually, the John Henry argument, and, in fact, is the typical justification for said argument.

                The exact same reasons were given as to why a computer would never beat a grandmaster at chess, or replace a skilled machinist, or do any number of other things.

                And it points straight at the nature of the fallacy. Specifically, you’re assuming that the only way, or perhaps the best way, to solve these problems is the same way that humans solve them.

It’s not necessary to have a theory of mind to keep from crashing into other cars, even if those cars are being driven by maniacs. It’s not even necessary to understand how it is, exactly, that Google’s cars manage to not get into crashes with maniacs (and, let me assure you, the roads in the Bay Area have almost nothing but maniacs on them). At this point, all that’s necessary to know is that Google’s cars do, indeed, have whatever they need to not get into crashes with maniacs.

                I can speculate on how they might do this. One obvious solution is to drive predictably, yourself; that right there reduces your chances of zigging just when some other maniac is zagging.

                The next obvious solution is to maintain separation within the limits of your reflexes. Just as you’d slow down and back off if somebody cuts you off (whether you were expecting it or not), the robot is going to do the same.

                Also very easy for the programmers to do is to spot erratic movement, just the same as you do. If you see somebody several lengths back weaving in and out of lanes and catching up on you, you’re going to keep an eye out for that driver and give him extra space and / or attention. I assure you, the robots are capable of making similar judgments. (And you can bet that the robots not only don’t have any blind spots, but that they’re constantly giving as much attention to the sides and to the rear as they are to what’s ahead.)
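Flagging a weaving driver from sensor data really is straightforward. A minimal sketch, assuming the car periodically samples each neighbor's lateral offset from lane center (the 0.6 m threshold and the sampling scheme are invented for illustration):

```python
def is_erratic(lane_offsets, threshold=0.6):
    """Flag a vehicle whose lateral lane position swings widely.

    lane_offsets: recent samples of the vehicle's offset from lane
    center, in meters. A high standard deviation over the window
    suggests weaving; the 0.6 m threshold is illustrative only.
    """
    n = len(lane_offsets)
    if n < 2:
        return False
    mean = sum(lane_offsets) / n
    variance = sum((x - mean) ** 2 for x in lane_offsets) / n
    return variance ** 0.5 > threshold

steady = [0.1, -0.05, 0.0, 0.08, -0.1]     # small wobble around center
weaving = [1.2, -1.0, 0.9, -1.4, 1.1]      # lurching across the lane
print(is_erratic(steady), is_erratic(weaving))  # False True
```

A production system would fuse far richer signals, but the point stands that "spot the erratic driver and give him room" does not require a theory of mind.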

                Computers are also pretty good at face recognition. Maybe the Google car isn’t going to be able to identify who’s behind the wheel, but it should have no trouble figuring out what direction the head is pointed in. Does it do so today? No clue. Will it in the future? If it doing so proves useful, yes.

                And then there’s all the other sorts of things that the robots can do that humans can’t. For example, they might (not necessarily now but hypothetically) be able to use infrared systems to detect the change in exhaust from nearby tailpipes as other cars change throttle position. See the exhaust ease off and the computer knows that the car is going to start slowing down well before the human even notices the change in speed. See the exhaust pick up again and, again, the computer knows that the car is starting to take off before the human does.

Once you start to get a significant number of driverless cars on the road, they’re going to start talking to each other, too; your car will “see” a quarter mile up the road on the other side of the hill that traffic is stopped, and start slowing in a much more intelligent and coordinated manner than you will — and in a way (perhaps by automatically turning on the hazards) that also alerts the non-automated cars that something’s up.

In the end, the computer is going to be guaranteed to do at least some things much, much differently from the way that a human would. And, yes, a lot of that will involve brute-force analysis of far more possibilities than any human could ever consciously contemplate, just as in chess. Some will involve picking up on cues that humans are incapable of detecting or don’t consider significant, or other possibilities that neither you nor I would ever dream of.

                But, whatever it is, it’s already better than anything humans have to offer, and it’s only going to get better still.

                Cheers,

                b&

    • Posted January 3, 2014 at 12:39 pm | Permalink

      Self-driving trucks? Done.

When I started as a computer programmer, one of my colleagues told me that my job was essentially to put others out of work… :-/

      /@

    • Latverian Diplomat
      Posted January 3, 2014 at 1:13 pm | Permalink

I think the driverless truck thing is more than a decade away, partly due to legal issues like liability, and partly because significant parts of the infrastructure are pretty crappy and don’t look like the well-painted and well-maintained test roads. Even train automation is going slower than expected.

I’m also not quite as pessimistic as you. I hardly think it’s utopian to envision a society where a person’s identity is not so heavily invested in whatever crappy job they happen to have as in our society. Even the Romans got bread and circuses to work for centuries, and that’s a very low bar.


      • Posted January 3, 2014 at 1:42 pm | Permalink

        Look at the video that Ant just posted. That’s over rough, often roadless terrain. Granted, it’s a six-wheel-drive truck with tires at least four feet tall; still, it has no trouble navigating roads and tracks when available.

        Google’s driverless cars have driving records better than almost any human, with almost unmeasurably low incidents per mile driven. And that’s mostly been in the San Francisco Bay Area, which has far and away the most aggressive drivers on the West Coast, possibly the whole country.

        A driverless tractor-trailer rig, in contrast, would be spending almost all of its time on the Interstates, and the few remaining miles would be to warehouses in industrial districts generally no more than a couple miles from the freeways. Compared with Google cars and DARPA-worthy off-road vehicles, that’s a walk in the park.

The only remaining hurdles for tractor-trailer rigs are legal, with a significant portion of the legal challenge being how to deal with the Teamsters union. But the unions are so weak these days and corporations so powerful that I really don’t anticipate that being any significant hurdle.

        Combine that with the fact that driverless trucks are going to be safer, more environmentally friendly (for financial reasons, they’ll only ever be operated at peak fuel economy parameters), and better for traffic congestion (you’d be amazed at what just a few vehicles being operated at a constant speed will do to clear up stop-and-go traffic jams)…and that they’ll save Big Corporations™ big heaps o’ cash…well, it’s nigh inevitable, and will happen much sooner than you think.

        Cheers,

        b&

        • Latverian Diplomat
          Posted January 3, 2014 at 8:50 pm | Permalink

          I’m not saying it won’t happen, and I’m not saying it won’t be a good thing when it does.

I’m saying that I don’t think it will happen in a decade. A single truck driving around with no other vehicles or human drivers nearby is a proof of concept. A vehicle driving in traffic including human drivers, in all forms of weather, on equipment that’s not brand new and perfectly maintained — a whole lot more testing has to be done before that will be permitted.

          And even then, unless the law is changed to provide some other form of redress, the first accident with one of these that kills someone is going to result in a lawsuit that goes after everyone involved in the manufacture and operation of the vehicle. Google is the definition of deep pockets these days.

          Dealing with both of these issues and probably others that I haven’t thought of is going to take time and can’t be rushed.

          • Posted January 3, 2014 at 9:16 pm | Permalink

Google cars are already regularly putting hundreds of thousands of miles on Bay Area roads, and with an inhumanly low incident rate; if that’s not real-world enough for you, nothing is. And some jurisdictions are already preparing legislation to permit autonomous vehicles to operate without a human sitting in the driver’s seat.

The liability question is pretty much a non-starter. If a Big Brown Truck gets in an accident and bad things result, the legal and financial result for UPS will be the same whether or not there was a human at the wheel. Indeed, with the human, they have to investigate the driver, decide whether or not to fire, maybe get sued by the driver for any number of reasons, and more. With the robot, they settle the claim the exact same way they would without the human, plus they’ve got insane black-box telemetry that will, 99 44/100% of the time, demonstrate that the truck was perfectly following both the law and best practices — and, ohlookyhere, see the other car running that red light.

Maintenance, too. The big companies that’ll be the first to go driverless already take superb care of their vehicles. With everything they’re going to save on drivers, they won’t have any problem at all focusing that much more on maintenance. And unscrupulous companies that don’t keep up on maintenance will have just as many wrecks as a result as they do today… except, of course, that their own black boxes are going to get subpoenaed, and it’ll be obvious that the wreck was caused by their tire blowing out for no good reason, thereby giving them a powerful incentive to stop being unscrupulous.

            Fasten your seatbelt; robot drivers are already on the road, and they’ll very soon be replacing commercial drivers.

            Cheers,

            b&

            • Latverian Diplomat
              Posted January 4, 2014 at 10:56 am | Permalink

              This is just handwaving. I guarantee the people working on deploying this technology are taking these sorts of issues seriously; they’d be incompetent to do otherwise.

Large-scale deployments of complex new machines take time; there are always obstacles, both anticipated and not, both technological and social. Real change takes time. Success is never guaranteed; it takes hard work on a long time scale, with eyes open.

              • Posted January 4, 2014 at 11:55 am | Permalink

                I believe you’re overestimating the scale of what’s involved. These are generally relatively minor retrofits of off-the-line vehicles, and they’re well past the proof-of-concept phase and into the experimental prototyping phase. Next comes industrial prototyping, then manufacturing design, and finally production.

                None of the pieces involved are exotic. Everything is an adaptation of already-mature technologies, including many that the military has already beat the shit out of. And a number are already shipping on production vehicles; a car that can parallel park itself (as many current models can) has most (but, of course not all) of what it needs to drive itself on the road.

Look at how quickly hybrid electric vehicles became popular in the personal vehicle market, especially taxis. The shift to autonomous vehicles, especially in the commercial market, is going to happen faster than that — much faster. Hybrids just offer marginal improvements in fuel economy and environmentally-friendly warm-n-fuzzies to the owners. Driverless commercial vehicles will offer companies similar fuel economy improvements… plus they’ll save at least a hundred grand per person per year in personnel costs, plus they’ll double or triple the per-vehicle productivity of the fleet, plus they’ll decrease insurance costs, and and and and and.

                I’ll be very surprised if any new driver-dependent tractor-trailers are still being manufactured in a decade in any kind of volume, and I’ll be almost as surprised if any taxi companies are still using human drivers save for the novelty (such as limousines). By then, too, pure-manual-drive mode in rental cars will be as rare as manual transmissions are in them today.

                That’s even if it adds fifteen grand to the cost of a consumer car or fifty grand to the cost of a tractor-trailer rig. Those expenses will pay for themselves in just a few months in each of those segments. The early adopters will slash their rates well below what their competitors can afford to operate at while still increasing their own profit margins, and the competitors will be hot on their heels to keep up. Hell, even if an autonomous system doubles the price of the vehicle, manufacturers still wouldn’t be able to keep up with demand.
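Using the comment's own figures (a hypothetical $50k autonomy premium per rig against roughly $100k/year in driver costs), the payback arithmetic is simple:

```python
# Payback period for a driverless retrofit, using the comment's
# hypothetical numbers: $50k added cost per rig, ~$100k/year saved
# in personnel costs per driver no longer needed.
added_cost = 50_000
driver_cost_per_year = 100_000
payback_months = added_cost / (driver_cost_per_year / 12)
print(payback_months)  # 6.0 months, before counting the extra daily mileage
```

And that ignores the doubled-or-tripled per-day mileage entirely, which only shortens the payback further.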

                Cheers,

                b&

  16. Gregory Kusnick
    Posted January 3, 2014 at 12:07 pm | Permalink

    Interestingly, Vannevar Bush foresaw something vaguely like the Internet as far back as 1945, in an essay called As We May Think.

He envisioned a machine called a “memex” through which users could access an electronic library of all human knowledge linked together via hypertext. The key difference is that he imagined memexes as standalone machines receiving periodic updates via microfilm, rather than as a dynamically connected network of machines.

  17. Jim Thomerson
    Posted January 3, 2014 at 1:17 pm | Permalink

I don’t have numbers at hand, but I think a large majority of people in the world live in urban areas. Corporate-style farming is much more automated these days, and will continue to go in that direction, just as machinery has replaced almost all of the pick-and-shovel workers in the infrastructure and construction industries.

  18. James Walker
    Posted January 3, 2014 at 1:41 pm | Permalink

    To be fair to Orwell, 1984 was intended more as an exercise in “what if?” (“what if totalitarianism became a worldwide phenomenon?”) than a genuine prediction.

  19. E.A. Blair
    Posted January 4, 2014 at 2:09 pm | Permalink

    It turns out I’ve been ignoring my events calendar the past couple of days. 2 January was Isaac Asimov’s birthday – although the actual date is in doubt, the second is the date he observed.

    I had the good fortune to meet him twice.

One amusing reference to him I remember was a poem describing a visit by the good doctor to a nudist resort. The final lines were:

    “And when the word is given
    ‘All clothing you must doff’
    Without a moment’s hesitation
    Isaac Asimov.”

