Donald Norman: Cautious Cars, Cantankerous Kitchens.
Nielsen Norman Group and Northwestern University
The turnout for this event was huge. People were sitting on the floor, on tables and standing. This talk centered on two major areas: recent technological innovations in the automobile world, and smart homes.
Notes are in the extended. (Will add pictures of slides later. Too tired to do it now.)
He's been bothered recently by the automation entering our everyday lives. He has seen it in aviation, both commercial and military.
Automation is coming in ways that are quite dangerous. People make assumptions: people often have one idea in mind, and the machine has another.
There have been many conflicts -- Three Mile Island, for example.
Take, for instance, a skilled and trained airplane pilot: he has a few minutes to correct a problem. In an automobile, you have far less time, and the driver is an ill-trained person who is not paying attention and doesn't want to pay attention; he has half a second. The automobile is far more dangerous than an airplane.
That's what he's worried about. In a year, 40,000 people are killed in auto accidents in the United States and there are 6 million injuries in car accidents -- against possibly zero fatalities in commercial aviation.
story:
He's writing a book; in theory, when he's ready, he can write it in a month or two. People say that's fast -- they don't realize it takes two to three years of struggling with the ideas that make up the book.
So when they approached Norman to do this talk, he figured it's not until March, he should be ready by then.
He is ready, and this is the first presentation he's giving on this material -- he says it should be pretty good three or four talks from now.
Imagine this: he is driving a mountain road from his home in Palo Alto to the ocean. His wife, in the passenger seat, is tense. Her feet are pushed down against the front, her hands against the dashboard. He says, "It's okay, I know what I'm doing."
Now, imagine another scenario. It's the same drive, but he notices his car is tense. The seats straighten up, the dashboard beeps, the brakes are applied, the seatbelts tighten. He thinks, "Oops, I'd better slow down."
He was telling the above story at a talk for an automobile association, and Sherry Turkle -- a consultant, a colleague at MIT, and a good friend of his wife's -- was in the audience. She came up to him later and asked, "How come you trust your car more than you trust your wife?"
Why does he respond differently to the two?
It's not that he trusts his car more than his wife; it's that he can't do anything about it when his car starts complaining. There's no communication. With his wife, he has multiple options: he can ask her what's wrong, reassure her, or change his driving. With the car, he doesn't know what's going on; all he knows is that the car isn't letting him do what he was doing, and therefore he has to change his driving.
He tells a story of being driven to an airport by a friend (Tom) in his new car. The car is one of those with a navigation system. Don says "How do you like the car?"
Tom says: "I love the car, but I never use the nav system. I like to decide what course I take, and the nav system doesn't give me any say."
Notice Tom's predicament -- he doesn't have any say.
When we are "communicating" with our machines, we aren't really communicating.
We are getting more "intelligent" machines, but they aren't intelligent.
We call it human-machine interaction, human-machine communication -- but it's not communication. It's actually two monologues, and two monologues don't make a dialogue.
We command our machines, and our machines command us. There's no give and take, there's no discussion. There's no common ground, there are no essentials for communication.
He has a quote from Socrates which essentially says "You can't have an argument with a book."
Socrates had it right, you can't argue with a book or a machine.
More and more devices in front of us give us no choice.
"My car almost got me into a accident" his friend Jim said.
"Your car, how can that be?" Norman said.
"I was driving down the highway using adaptive cruise control."
Norman goes on to explain for a bit what adaptive cruise control is and how it works: it slows the car down to keep a distance between you and the car in front of you, and that distance depends on your speed. Its limitation is that it only works above about 30 mph -- any slower than that, and it turns off. He goes on to explain the various weaknesses of adaptive cruise control at lower speeds, such as heavy traffic, and its inability to detect pedestrians, stop signs, and traffic lights.
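(My own sketch of that logic, to make it concrete -- the 30 mph cutoff and the speed-dependent gap are from the talk; every number in the code is invented, and no real system is this simple:)

```python
MIN_OPERATING_MPH = 30   # below this, the system simply turns off

def acc_target_speed(set_speed, own_speed, lead_distance_ft):
    """Return the speed ACC tries to hold, or None if it disengages."""
    if own_speed < MIN_OPERATING_MPH:
        return None                        # hands control back to the driver

    desired_gap_ft = own_speed * 2.9       # ~2 s of headway (mph -> ft/s is ~x1.47)

    if lead_distance_ft is None:           # no car detected ahead...
        return set_speed                   # ...so accelerate to the set speed
    if lead_distance_ft < desired_gap_ft:
        return min(set_speed, own_speed - 5)  # back off to reopen the gap
    return set_speed

# Jim's surprise: crawl behind traffic, change lanes at the exit, and the
# "car ahead" disappears -- so the system floors it back to the set speed.
print(acc_target_speed(set_speed=65, own_speed=35, lead_distance_ft=80))    # 30
print(acc_target_speed(set_speed=65, own_speed=35, lead_distance_ft=None))  # 65
```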
Norman goes on to further explain Jim's situation: driving in San Diego on a crowded highway with adaptive cruise control on, so he's going slow. His exit is coming up, so he switches lanes to get off at his exit -- and his car says "hey, there's no one in front of me!" and takes off.
Norman has told this story to several engineers of various automobile companies.
Their reaction always has two components. The first is to blame the driver: well, why didn't he turn off the adaptive cruise control? He forgot the adaptive cruise control was on? Stupid driver.
You can't design for the ideal person, you have to design for the real person. And real people are going to forget.
He goes on to explain how the cruise control indicator is misleading: the light on the dashboard tells you the system is ready to work, but not whether it is actually working.
The second component is to fix the problem.
Engineers say: "We'll fix it. Adaptive cruise control always comes with navigation, and the nav system will know it's on an exit and slow down appropriately."
To this, Norman says that unexpected things always happen, and there are two things he guarantees about them: they will always happen, and the unexpected thing will not be what you expected.
In automobiles, it's getting worse and worse.
Norman outlines the adaptive cruise control technology that Toyota is using in Lexus models: it uses a camera to detect which way the driver's head is facing, and issues warnings when a collision is imminent.
4 types of automation:
* full automation
* collaborative automation
* mind reading technology
* cautious automation
80% solution - supervisory control theory
At the end of the talk, he wants to propose two different levels of automation: full automation, and zero-level automation, where it doesn't try at all. Norman believes it's the in-between that gets you into trouble.
Fuel injection, anti-skid braking -- they work very well.
Your climate control in the home is a very simple full automation device.
Full automation, when it works, is the correct way.
He shows a movie of a BMW automatically parallel parking.
He notes that Toyota's Prius will auto-park as well. In Japan, but not in the US. The only difference is the legal system.
The Japanese Prius is using collaborative automation, because it requires a certain level of user input before it'll try to park, and while it parks, it requires the driver's foot to be on the brake.
With the BMW video, it's hard to tell what the driver is doing. He got the video just before the presentation, so this showing is only the second time he's seen it.
When things work well, they do work well.
Going back to navigation systems: they give you different options, such as the fastest route or the shortest route, but it's all or none. No discussion. So even though it's a "collaborative" system, it isn't really collaborative, because you can't discuss anything with it.
Mind-reading is another approach.
Norman gives the example of brake assist.
Most people don't press the brakes fully, even in an emergency. So if you apply the brakes quickly, the car assumes you want full braking and applies it for you.
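(As I understand it, that amounts to a threshold on how fast the pedal moves. A minimal sketch with an invented threshold -- real systems also look at pedal force and more:)

```python
PANIC_RATE = 4.0  # pedal travel (fraction of full) per second -- invented

def brake_command(pedal_now, pedal_before, dt=0.1):
    """Return the brake force (0..1) the car actually applies."""
    rate = (pedal_now - pedal_before) / dt
    if rate >= PANIC_RATE:    # pedal slammed -> assume an emergency stop
        return 1.0            # apply full braking even if the foot didn't
    return pedal_now          # otherwise do exactly what the driver asked

print(brake_command(0.5, 0.0))  # half travel in 0.1 s -> 1.0 (full brake)
print(brake_command(0.5, 0.4))  # gentle squeeze       -> 0.5
```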
Anti-skid braking is another mind-reading technology, as is stability control.
Professional drivers hate anti skid and stability control.
On some cars, stability control can be turned off. But if you brake too quickly, it turns stability control back on.
Lane-keeping systems, for instance, will move you back into your lane. He puts up a picture of a Honda steering wheel, notes its complexity, and points out the button for lane keeping.
They are very concerned that you might take advantage of this and stop paying attention to the driving entirely, so they only apply 80% of the torque required to steer you back. They also monitor whether you are turning the steering wheel. If you are, fine; if you aren't, it beeps at you.
If it beeps and you do nothing, Honda shuts off the automatic lane keeping. They don't want to keep steering you back while you ignore the beeping, because that would train drivers to ignore the beeping, and you could end up driving like this. [He puts up a picture of a journalist in the UK who reviewed the system, sitting in the driver's seat reading a magazine while the car drives. The reviewer claims the picture was faked, and the car was parked.]
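(So the behavior is a little state machine: partial assist, then a beep, then give up. A sketch -- the 80% figure is from the talk, the three-beep limit is my invention:)

```python
ASSIST_FRACTION = 0.8  # only 80% of the torque needed to re-center

def lane_keep_step(needed_torque, driver_is_steering, ignored_beeps):
    """Return (torque_applied, beep, still_active)."""
    if ignored_beeps >= 3:            # driver keeps ignoring the beeps:
        return 0.0, False, False      # shut off rather than train neglect
    beep = not driver_is_steering     # hands seem idle -> beep at the driver
    return needed_torque * ASSIST_FRACTION, beep, True

print(lane_keep_step(10.0, True, 0))   # (8.0, False, True)
print(lane_keep_step(10.0, False, 1))  # (8.0, True, True)
print(lane_keep_step(10.0, False, 3))  # (0.0, False, False)
```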
The reviewer found these systems extremely relaxing. Norman isn't convinced you want relaxed drivers.
He talks about swarm control: cars wirelessly networked with the cars around them. Each car keeps with the other cars in the mass, keeps itself a certain distance from them, and avoids collisions with other objects. No leader, no need for lanes, no stop signs or traffic lights.
You could take advantage of the swarm -- or game the system.
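(The core of the idea is flocking-style rules. A toy one-dimensional sketch of just the spacing rule -- entirely my own numbers, nothing shown in the talk:)

```python
SPACING = 30.0  # desired gap in meters (invented)

def swarm_nudge(me, neighbors):
    """One step of a 1-D spacing rule: drift toward the right gap."""
    nudge = 0.0
    for other in neighbors:
        gap = other - me
        error = abs(gap) - SPACING          # >0: too far, <0: too close
        direction = 1.0 if gap > 0 else -1.0
        nudge += 0.1 * error * direction    # close the error a little
    return me + nudge

# car at 0 with one neighbor too close (20 m) and one too far (45 m)
print(swarm_nudge(0.0, [20.0, 45.0]))   # drifts slightly forward: 0.5
```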
He talks about the cruise ship Royal Majesty, which ran aground on its way from Bermuda to Boston. Why? Because the navigation system had switched over to dead reckoning, and the ship ended up 17 miles off course.
It ran that way for 30 hours -- long enough to go aground. It cost $7 million to repair the damage.
The antenna for the GPS had become disconnected, and no one knew.
Latitude and longitude: 1 degree is about 60 miles. GPS is accurate to about 60 feet.
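(A quick back-of-envelope check, using the talk's round numbers:)

```python
FEET_PER_MILE = 5280
deg_ft = 60 * FEET_PER_MILE       # 1 degree ~ 60 miles ~ 316,800 ft
min_ft = deg_ft / 60              # 1 minute ~ 5,280 ft -- about a mile
gps_err_min = 60 / min_ft         # a 60 ft GPS error, in minutes

print(f"1 minute ~ {min_ft:,.0f} ft")
print(f"60 ft GPS error ~ {gps_err_min:.3f} minutes")
# So a readout quoting minutes to 2-3 decimal places is claiming GPS-grade
# precision -- which dead reckoning can't deliver.
```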
He puts up the navigation readout. Can you tell which mode it's in? "SOL" means it's computing the solution from GPS; "DR" means dead reckoning. Would you spot the difference -- at the right angle, at the right time? Meanwhile, the position error in dead reckoning can be quite large.
In December of 2005, a Southwest Airlines plane ran off the runway at Chicago's Midway Airport -- it failed to stop in time.
Midway has very short runways, very close to city streets.
There were many plans for keeping the runway area a certain distance away from the city streets, but nothing was ever done.
Some suggestions: a 1000-foot buffer, and maybe heavy gravel to stop an overrunning plane.
On the day the problem occurred, it was snowing -- slippery conditions, in which the calculation assumes the use of full reverse thrust.
They used a calculation program to determine whether they could land.
The pilots entered "wet" into the calculation, and it computed that they would stop with 500 feet to spare. But what they should have entered was "very wet" -- and with "very wet," there was only 30 feet to spare.
And the numbers going into the calculation? Guesswork. A lot of guesses about the weight of the plane, the distances, even the conditions.
And when did the thrust reversers actually come on? Five to ten seconds later. By then, you couldn't have stopped in time.
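(To make the margins concrete, here's a hypothetical version of that calculation -- the runway length and stopping distances are numbers I made up so the margins come out to the 500 and 30 feet Norman quoted:)

```python
RUNWAY_FT = 6500            # hypothetical runway length

stopping_distance = {       # required distance, by the condition entered
    "wet": 6000,
    "very wet": 6470,
}

for condition, needed in stopping_distance.items():
    print(f"{condition:>9}: {RUNWAY_FT - needed} ft to spare")
# wet:       500 ft to spare
# very wet:   30 ft to spare
# One word in the input changes the margin from 500 ft to 30 ft -- and the
# inputs (weight, distances, conditions) are themselves guesses.
```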
So whose fault is it? They won't know for a few years, when the investigation is complete.
Michael Mozer's home
A neural-network house that automatically adjusts itself to Mozer.
Mind reading -- and he loves it. But he lives alone, so it's not obvious what would happen if another person were to live there.
Every irritating issue with his house is another paper for him.
Another of his students is Abby Sellen at Microsoft in the UK, working on a different smart home from Mozer's -- the opposite approach. Start from: what are the problems at home?
They use a bulletin board system to keep track of things.
Who's late, who's coming to dinner.
It doesn't do what the average smart home does (like using Outlook to track people's calendars and where they should be), and it doesn't try to do anything automatically.
There's some communication built into the bulletin board system. Norman shows that a person can text-message a pickup request into the system, and another user can write a response that goes out to everyone, so you don't get multiple people trying to pick the person up.
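(A toy sketch of that coordination idea -- the names and messages are mine, and the real system is a physical bulletin board, not code:)

```python
# A shared log everyone in the household sees: one visible claim on a
# request prevents duplicate trips.
board = []

def post(author, message):
    board.append((author, message))   # broadcast to the whole household

post("Kid", "Need a pickup at school at 3pm")
post("Dad", "I've got it")            # everyone sees this; no one else goes

for author, message in board:
    print(f"{author}: {message}")
```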
For scheduling things, each person has a magnet, and when the day arrives, it glows to remind you to look at it.
It's a nice way of doing things.
There's always something unexpected.
Robots are coming into the home.
[Puts up a picture of Roomba]
This is the Roomba; it has about as much intelligence as your pool cleaner.
Of the robots in your home, the most intelligent ones are probably your dishwasher or your clothes washer.
In Norman's house, his most complex machine is the coffee machine.
We see automation coming into our homes. TiVos, for example.
It'll only be optional for a short time more.
We hear that about smart homes.
There are more and more devices for the elderly.
The able-bodied don't want them at all.
He talked about his last company, which worked with robots: they tried to design a robot to get beer from the fridge and bring it to a person, but it had a problem with the fridge door, so they had to make a special beer dispenser to make it work.
There's lots of research money.
A clarification: the Microsoft smart house in England is the one he wants. He is arguing against the Microsoft smart house in Redmond.
Machines can't communicate with us, and the fundamental problem is that real communication is difficult.
What machines need are error-correction mechanisms: they need to ask for clarification and to let us change our own minds.
That's why you can listen to nice, fluid speech, then read the transcript and find it unintelligible.
Without common ground, we can't ever have real communication with these systems. And we don't know how to give artificial systems common ground -- or common sense.
Even our best chess-playing computers can only play chess -- it is all they can do. If you redefined the rules for a 7 x 7 board instead of an 8 x 8, you'd have serious reprogramming to do, but a human can learn the new rules in a matter of minutes.
No generality.
It's fundamental: we don't know enough.
All the accidents come from halfway automation.
You want to know what the automation is doing at all times -- but you don't want it constantly interrupting you. How do you communicate in a way that feels natural?
Natural mapping. Example: when a plane isn't moving fast enough to generate lift -- called stalling -- the control stick vibrates.
Map one thing onto another: distance to speed, intensity to loudness.
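(A little sketch of what such a mapping might look like in code -- the distances and rates are invented:)

```python
def warning_signal(distance_ft, max_distance_ft=200.0):
    """Map distance to a hazard onto intensity (0..1) and beep rate (Hz)."""
    closeness = max(0.0, min(1.0, 1.0 - distance_ft / max_distance_ft))
    loudness = closeness                   # nearer -> louder
    beeps_per_sec = 1.0 + 9.0 * closeness  # nearer -> faster (1..10 Hz)
    return loudness, beeps_per_sec

print(warning_signal(150.0))  # far away: quiet and slow
print(warning_signal(20.0))   # close:    loud and urgent
```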
He shows the route he took from his house in Palo Alto to get to Berkeley today. [I actually drove behind him for much of the way to Berkeley -- he went up Telegraph, I went up Shattuck.]
He displays the MapQuest route, and then shows the LineDrive maps made by a student of his. Norman says the LineDrive maps are hidden somewhere in the MSN maps site -- hard to find, but still there.
He describes altimeters: the old ones were analog, the new ones digital. People can't read clock-style dials anymore, so displays switched to digital -- but digital displays show too much information. Pilots mostly just want to know whether they are going up or down.
[Shows a picture of a road rumble strip, and a rumble strip for the blind in Tokyo.]
He shows his solution for the Royal Majesty navigation readout. In his version, he would get rid of the minutes when the system is in DR mode, and have the display reflect the actual precision of the data.
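(A sketch of that fix -- the formatting details are my guess; only the idea of a precision-dependent display is Norman's:)

```python
def format_latitude(degrees, minutes, mode):
    """Show minutes only when the fix actually supports that precision."""
    if mode == "SOL":                     # GPS solution: ~60 ft accuracy
        return f"{degrees}° {minutes:05.2f}' ({mode})"
    return f"{degrees}° --' ({mode})"     # dead reckoning: don't fake digits

print(format_latitude(41, 17.42, "SOL"))  # 41° 17.42' (SOL)
print(format_latitude(41, 17.42, "DR"))   # 41° --' (DR)
```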
Norman believes there should be a science of natural interaction -- a new science that recognizes where machine intelligence really comes from.
Recognize that the intelligence is not in the machine; the intelligence is in the designer.
What the designer needs to do is imagine all the possible things that could happen, figure out how to measure what state things are in, and then deal with it.
Realize that human imagination in these problems is limited, and that these designs are therefore going to fail in unexpected ways.
If we realize that, then in this peripheral way we can make more intelligent machines.
Use vibration a lot more; use natural signals a lot more; make things map naturally to our perception.
That's the direction Norman wants to be going.
[End of talk; the floor is opened for questions]
Q: It seems that you aren't advocating full or zero automation, but rather designing for increased levels of uncertainty.
Going back to the Royal Majesty and its navigation system: when full automation works, it's nice -- five years without a problem.
It failed subtly, and the crew had come to trust and rely upon the system.
Let's take advantage of the fact that it works, and let's change the system so that we are much more aware of what's going on.
There are a bunch of people who believe that if we can get the systems to communicate better, or to cooperate more, we can solve these problems. But Norman believes that is not right: there's a certain amount of knowledge we can't put into machines, and we should realize this. Norman is arguing for a "fundamental asymmetry."
Let's not be surprised when it gives up on us. He tells a story about an autopilot that kept compensating for a leaking fuel tank while the pilot didn't realize anything was wrong.
He talks about the polite nav systems in Japan, where every instruction ends with "kudasai" ("please"). No one was listening to the nav system.
Question: Going back to the refrigerator problem -- is it a holdover? Should we redesign for machines rather than humans?
Things in the house are designed for us -- but not everything; some things are designed for other people. Electrical plugs are designed for electricians. We have changed our homes to accommodate new things before; it will happen again.
Last question: Should automation support communication rather than support control?
[He ran out of time and didn't really answer it.]
Mind reading: turn signals. Brake lights are not optional -- if brake lights were optional, it'd be chaos.
He wrote a book called "Turn Signals Are the Facial Expressions of Automobiles."
Turn signals were unique in that they signaled intention to the other cars and drivers.
And we sometimes signal false intentions.