You think cities-vs-Uber was complicated? Get ready for self-driving cars

Double-parking by ride-hail cars has gotten so bad in San Francisco that the city is ready to make a deal. It’s scouting trial locations for a program that would set aside curb space for Uber and Lyft drivers to pick up passengers, according to the San Francisco Examiner.

This is one more sign of change on the curbs of San Francisco, where space once used for on-street parking is giving way to transit, cycling, and other uses. It’s also the city’s latest attempt to craft a workable response to its fast-growing ride-hail services. But by the time City Hall has solved the issues they’ve raised so far, autonomous vehicle technology is likely to raise a whole new set, as I’ll explain in a moment.

There’s no turning back now

Ride-hail services have become so ubiquitous in the city that officially accommodating them with designated pickup zones now seems reasonable. The startups may have begun on the fringes of the law and devastated the cab industry that the city sanctioned (and rewarded with enforced scarcity) for decades, but they’ve become a fact of life. It’s too late for San Francisco to ban Uber and Lyft, even if it weren’t the companies’ hometown. If it makes sense to sacrifice street parking for safer cycling or transit, it seems reasonable to officially make some room for ride-hail pickups.

And the city might come out the winner in this deal if it gets what the Examiner reports it’s seeking: not just records of how the designated pickup spots are used, but raw GPS data on vehicle locations, information on collisions and brake-slamming, and data on wheelchair-accessible trips, among other things.

Though it may make life easier for riders, the plan is also intended to cut down on illegal activity by ride-hail drivers. They break the law every time they stop in a traffic lane to pick up or drop off passengers, wait for the next ride request, or work with the app on their phones. In addition to causing traffic backups, double parking can block drivers’ views, force cyclists into hazardous situations, and generally make streets more chaotic and less safe. It’s just one of many practices cited by critics who say ride-hail services have made the streets more dangerous.

Yet from a law enforcement perspective today, bad Uber and Lyft drivers are like any other scofflaws. Cops keep an eye out for violations, stop and question drivers, and issue tickets.

Here come the cautious robots

All that is about to get more complicated. On Thursday, General Motors said it will have a fleet of self-driving cars providing ride-hailing services in San Francisco by 2019. Last year it paid more than half a billion dollars to buy Cruise, a local autonomous car startup, which has been testing vehicles here for months. And Cruise isn’t alone: Uber, Tesla, and Google-affiliated Waymo all have self-driving ride-hail services in the works.

Cities that have found it hard to regulate ride-hail services with human drivers will face much greater uncertainty as software starts piloting the cars. The issues will expand from business models to sheer on-the-street control and enforcement. To begin with, how do you ticket a car with no driver? Can you do it automatically? Self-driving vehicles might be easier to regulate than those with humans behind the wheel, or they might be much harder to control.

Like so much else these days, it’s all about the power of data and algorithms. Whereas the inputs for a human driver’s road behavior may include driving skill, lack of sleep, distraction, caffeine, and how they feel about that driver who just cut them off, the way a self-driving car behaves is determined entirely by code someone wrote and uploaded (or sent over the air) to the vehicle. Thousands of hours of road testing go into this software, and the vehicles make decisions in novel situations based on algorithms rather than hard-coded rules. But someone designed and programmed how the car would learn and act.

So rather than simply reacting to unpredictable actions by human drivers, as they do today, cities may be able to determine ahead of time how self-driving cars use the streets.

The geofencing solution

For example, one step San Francisco officials are considering to ease the double-parking crunch is to geofence blocks where it’s a big problem and keep Uber and Lyft off those blocks. Geofencing draws a virtual border around an area on a digital map so devices can be instructed to stay in or out of that area. In this case, if ride-hail customers asked to be picked up on one of the geofenced blocks, the Uber or Lyft app wouldn’t allow it. The customer would have to choose another place to get picked up. UCLA just implemented a system like this on its campus.
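Under the hood, a geofence check like the one described above can be quite simple: the app tests whether the requested pickup point falls inside a polygon drawn on the map. Here's a minimal sketch of how that might work, using a standard ray-casting point-in-polygon test. The polygon coordinates, function names, and the app-side behavior are all illustrative assumptions, not details of Uber's, Lyft's, or UCLA's actual systems.

```python
# Illustrative sketch of an app-side geofence check (not any company's real code).
# A geofenced block is represented as a polygon of (x, y) map coordinates.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: True if (x, y) lies inside the polygon.

    polygon is a list of (x, y) vertices in order; edges wrap around.
    """
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count how many polygon edges a ray cast from the point crosses;
        # an odd number of crossings means the point is inside.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# A toy geofenced "block" (a unit square, not a real San Francisco location).
GEOFENCED_BLOCK = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]

def allow_pickup(x, y):
    """The app would refuse pickup requests inside the geofenced block."""
    return not point_in_polygon(x, y, GEOFENCED_BLOCK)
```

A request inside the block would be rejected and the customer asked to pick another spot; a request one block over would go through. Real systems would presumably use latitude/longitude polygons and a mapping library rather than this toy geometry, but the principle is the same.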

It seems possible that customers and human drivers could get around this mechanism, since they can communicate directly by text message. A passenger who wanted badly enough to be picked up on a geofenced block rather than the next one over could simply ask the driver to stop there anyway; the geofence wouldn’t actually stop the car.

I’m not suggesting that jumping geofences is likely to be a big problem. But if a city (or a ride-hailing company) could control where a self-driving car is able to go, that opens up a lot of other possibilities for control. For example, under the deal San Francisco wants to make with Uber and Lyft, the city would get some GPS data on the companies’ cars. With the right kind of GPS information, at a high enough resolution, in real time, the city might be able to make sure ride-hail cars simply can’t double park. And maybe they couldn’t exceed the speed limit, “block the box,” or cut someone off.
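To make the GPS scenario concrete, here's a rough sketch of how a city might screen a high-resolution, real-time vehicle trace for the kinds of violations mentioned above, such as speeding and prolonged stops that look like double parking. Every threshold and data format here is an assumption for illustration; nothing in the city's actual data-sharing proposal specifies this.

```python
# Hypothetical sketch: screening a GPS trace for possible violations.
# Trace format, speed limit, and stop threshold are illustrative assumptions.
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(p1, p2):
    """Great-circle distance in meters between two (lat, lng) points in degrees."""
    lat1, lng1, lat2, lng2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lng2 - lng1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def flag_violations(trace, speed_limit_mps=11.2, stop_secs=60):
    """trace: list of (unix_time, lat, lng) fixes, in time order.

    Returns (time, kind) flags: 11.2 m/s is roughly a 25 mph limit, and a
    stop of a minute or more stands in for "possibly double parked" here.
    """
    flags = []
    stop_start = None
    for (t1, *p1), (t2, *p2) in zip(trace, trace[1:]):
        dt = t2 - t1
        if dt <= 0:
            continue
        speed = haversine_m(p1, p2) / dt
        if speed > speed_limit_mps:
            flags.append((t2, "speeding"))
        if speed < 0.5:  # effectively stationary
            if stop_start is None:
                stop_start = t1
            if t2 - stop_start >= stop_secs:
                flags.append((t2, "possible double-park"))
                stop_start = None  # one flag per stop
        else:
            stop_start = None
    return flags
```

A real enforcement system would also need curb maps to tell a legal curbside stop from a stop in a traffic lane, which is exactly why the kind of data the city is asking for matters. And note the flip side discussed below: the same plumbing that detects a violation could, in principle, prevent one.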

But if one tweak to a ride-hailing company’s software could stop self-driving cars from double parking, it could also make them start doing it. In an imagined future where almost every car in a city is operated by a service provider and controlled through software, traffic behavior throughout the city could change overnight based on changes made by that company. Ride-hailing companies might become something more akin to a utility running critical infrastructure rather than the vendor of a consumer service. Wouldn’t government want some say in how that infrastructure operated?

What’s under the covers?

That future vision is far off, but if self-driving cars are about to become part of ride-hailing services, it’s worth asking how those cars are programmed to do their jobs. When Cruise demonstrated its cars for journalists here earlier this week, some found the vehicles’ street behavior conservative to a fault. The Chevy Bolt electric hatchbacks, under computer control with an engineer behind the wheel as a precaution, traveled slowly, cautiously, and scrupulously within the law.

Will autonomous ride-hail vehicles be programmed to double park, to allow for convenient pickups, or not to double park, since it’s technically illegal and potentially dangerous? Or will the cars treat laws like most of us do, staying generally on the level but making exceptions where it seems safe to do so? Somewhere, software developers are making these decisions.

My guess is that when consumers start catching autonomous rides, they’ll see that the cars are mostly safer than human drivers. Then, they’ll get antsy. And pretty soon they’ll be begging their programmed chauffeurs to speed, double park, and cut off that guy who jumped in front of them five minutes ago. The smartest cars in the world won’t be human enough for us.