Well this is kind of a goofy question to answer, but honestly, I’m glad someone is thinking to ask it now versus waiting for an autonomous car to plow through a Times Square crosswalk: How will autonomous cars let us know they see us? If you think about it, this is not usually an issue for pedestrians and drivers interacting on the streets today. It’s kind of pointing out the obvious, but being either a pedestrian or a driver and interacting at a crosswalk is pretty easy because there are people involved.
Communication Breakdown
What happens when you take people, flawed and distracted though they may be, out of half of that equation? How will an autonomous vehicle behave, what will it need to do, and how will it communicate that to us? For me, it has always been rather easy to tell what a car (and therefore its driver) is going to do. I spend a lot of time around cars and racetracks and end up paying a lot of attention to what cars are doing. So it’s easy for me to tell if a car/driver “sees” me and whether it’s okay to walk into the street.
Dive, squat, roll: the transitions from one vehicle state to another are something you pick up at racetracks almost by instinct. “Yeah, he’s on the brakes early,” you can say because, over the years, you have trained yourself to notice things like weight transfer, the front nosing down by half an inch because the driver has gotten off the throttle. And it’s easy to carry those instincts from the racetrack into everyday life.
A lot of people, however, do not think the way your everyday, run-of-the-mill gearhead does. Those are the people who, when waiting to cross at a crosswalk, wait. They wait until they see the car is slowing, then until they see it is coming to a stop, and sometimes even further, until the car has come to a complete stop, they have made eye contact with the driver, and the driver has given them a nod or waved them forward (or both). Only then do they cross the street.
For those people, who are the majority, how will the inevitable autonomous car let them know it’s okay to cross? Ford Motor Company and the Virginia Tech Transportation Institute have been working on just that.

Signals & Signs
FoMoCo and Virginia Tech are conducting user experience studies to suss out a way to communicate the vehicle’s intent by soliciting real-world reactions to self-driving cars on public roads. The team considered using text displays, but reasoned that would require all pedestrians to understand the same language. I would have rejected it because it requires people to stop, read, comprehend, and react, and that takes too much time. They also considered using symbols, but those were nixed because symbols historically have low recognition among consumers.
Ford and VTTI found that lighting signals are the most effective means for creating a visual communications protocol for self-driving vehicles. Think of it as being akin to turn signals and brake lights, only more so. Turn signals and brake lights are already standardized and widely understood, so they reckon the use of lighting signals is the best way to communicate. The lighting signals will communicate if a vehicle is in autonomous mode or if it’s beginning to yield or about to accelerate from a stop. Makes sense, no?
To signal the vehicle’s intent to yield, two white lights moved side to side, indicating the vehicle was about to yield and come to a full stop. Active autonomous mode was signaled by a solid white light. “Start to go” was conveyed by a rapidly blinking white light, indicating the vehicle would soon be accelerating from a stop.
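For the software-minded, the three signals above amount to a small state-to-pattern lookup. Here’s a minimal, purely illustrative sketch of how such a protocol might be encoded; the names, timing values, and structure are my assumptions, not anything from Ford’s or VTTI’s actual implementation:

```python
from enum import Enum

class Intent(Enum):
    """The three vehicle intents described in the Ford/VTTI study."""
    AUTONOMOUS_ACTIVE = "autonomous_active"  # solid white light
    YIELDING = "yielding"                    # two white lights sweeping side to side
    START_TO_GO = "start_to_go"              # rapidly blinking white light

# Hypothetical pattern table: (description, cycle interval in seconds; None = steady).
# The intervals here are invented for illustration only.
LIGHT_PATTERNS = {
    Intent.AUTONOMOUS_ACTIVE: ("solid white", None),
    Intent.YIELDING: ("two white lights sweeping side to side", 1.0),
    Intent.START_TO_GO: ("rapidly blinking white", 0.2),
}

def describe_signal(intent: Intent) -> str:
    """Return a human-readable description of the light bar's behavior."""
    pattern, interval = LIGHT_PATTERNS[intent]
    timing = "steady" if interval is None else f"cycling every {interval}s"
    return f"{pattern} ({timing})"
```

The appeal of a table like this is the same one Ford and VTTI cite for lights over text: each intent maps to exactly one unambiguous pattern, with no language to parse.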

Candid Camera
To test this out, Ford equipped a Transit Connect with a light bar on the windshield. To go even further and not tip their hand that there was an actual human driving the thing, the VTTI team developed a way to conceal the driver with a “seat suit” so it looked like the van was empty. I know, I know, that’s going a bit far, isn’t it? Then again, this is real science, and a real study, and you’d best figure out how to control for undue outside influences. This is why studies are double-blind and things of that nature: eliminate all variables that can skew the results.
Ford and VTTI took it even a step further. While driving the simulated autonomous Transit Connect on public roads in northern Virginia, they captured pedestrians’ reactions on video. They logged over 1,800 miles of driving and more than 150 hours of data, including encounters with pedestrians, bicyclists, and other drivers at intersections, in parking lots, garages, and even on airport roadways. The vehicle was studded with high-definition cameras to capture the behavior of other road users and provide a 360-degree view of the surrounding area as well.
Universal Language
Ford is hoping to create an industry standard and is already working with several organizations including the International Organization for Standardization and SAE International for a common visual communications interface across all self-driving vehicles, in all locations. Ford is also working on ways to communicate with those who are blind or visually impaired as part of this project too.
Will it work? They didn’t seem to run over anyone in northern Virginia, so it worked in that sense, and festooning cars with a few more lights and signals does seem like a plausible, workable solution. Besides, we’ll have to do something along these lines, or nobody – cars, people, the whole lot – will know what to do when the traffic light turns green.
Tony Borroz has spent his entire life racing antique and sports cars. He means well, even if he has a bias toward light, agile cars rather than big-engine muscle cars or family sedans.

Photos & Source: Ford Motor Company.