Self-Driving Cars Are Not As Safe As Vehicles Operated by Human Drivers
Opposing Viewpoints Online Collection, 2017. COPYRIGHT 2018 Gale, a Cengage Company.
Full Text:
Article Commentary
“An autonomous vehicle in heavy, but steady, freeway traffic would be a cybernetic grandma, stuck in the fast lane doing the speed limit and too scared to change lanes even as angry drivers behind it pressed closely to its bumper.”
Paul Wagenseil is a contributing writer to Tom's Guide, an online guide to tech products. In the following viewpoint, the author discusses the safety hazards of self-driving car technology. He analyzes guidelines released by the National Highway Traffic Safety Administration in September 2016 and argues that while the guidelines sound like an endorsement of self-driving vehicles, they signal that regulations are coming. The author then evaluates the safety of self-driving cars, including the threat that hackers pose to their systems. Wagenseil examines Tesla's Autopilot technology and a fatal traffic accident that killed an Ohio driver, arguing that the crash could have been avoided if a human, rather than a robot, had been driving. The author also discusses a University of Michigan study that found self-driving cars are involved in accidents at twice the rate of vehicles driven by humans.
As you read, consider the following questions:
1. According to Wagenseil, how did the autopilot technology in the Tesla driven by an Ohio man cause a fatal traffic accident?
2. Do you agree with the author's argument that self-driving cars can't make the same in-the-moment reactions that humans can? Why or why not?
3. How might a car company releasing a new self-driving vehicle publicize that the driverless car is safer than a traditional car?
Self-driving cars—two-ton robots moving at high speed—are not ready for the road, and won’t be for many years. Technology companies such as Google and, especially, Tesla are moving far too fast toward granting robot cars total autonomy, because they’re not used to software and sensor problems leading to fatal accidents.
The U.S. government yesterday (September 19, 2016) made a half-step toward regulating self-driving cars. Most media coverage spun that as an endorsement of the technology, but there’s an alternative view: The National Highway Traffic Safety Administration (NHTSA) has accepted autonomous vehicles as inevitable, and is jumping in before more people get killed.
Legacy automakers may not be as quick as Tesla to issue security patches for their computerized cars, but at least they deeply understand the risks of taking control away from human drivers, and aren’t being so arrogant as to beta-test autonomous vehicles on public roadways. We should look to Detroit and Washington for leadership in this field, not Silicon Valley.
“Advanced automated vehicle safety technologies, including fully self-driving cars, may prove to be the greatest personal transportation revolution since the popularization of the personal automobile nearly a century ago,” the NHTSA said in the introduction to its Federal Automated Vehicles Policy paper yesterday (https://www.transportation.gov/sites/dot.gov/files/docs/AV%20policy%20guidance%20PDF.pdf): “Automated driving innovations could dramatically decrease the number of crashes tied to human choices and behavior.”
The policy paper lays out voluntary guidelines, not mandatory regulations, and does sound more like an endorsement than a warning. But in the fine print, the message is clear that regulations will come—and what those future regulations will look like depends on how well the makers of self-driving cars follow today’s guidelines.
The paper asks that all companies involved in self-driving cars, from software coders to car builders to sensor makers to taxi-fleet operators, submit safety assessments covering 15 different criteria to the NHTSA four months before road testing begins. Those criteria range from vehicle cybersecurity to post-crash behavior to fallback mechanisms for when self-driving systems fail.
Compliance with these guidelines will be easier for Tesla, Google and General Motors than it will be for solo tinkerers like George Hotz, the famous iPhone and PlayStation hacker who is building a self-driving car in his garage. But it will force even the big companies to slow down their aggressive testing of robot cars on public roadways.
Far from safe
That’s a good thing, because right now that on-street beta testing is leading to accidents. The only proven fatal accident involved Joshua Brown, the Ohio man killed in May when his Tesla Model S plowed into the side of a tractor-trailer as the car was speeding on Autopilot.
Tesla has publicly stated that Autopilot isn’t meant to be a truly autonomous system that would permit the driver to completely relinquish control. Yet the system’s name implies exactly that. Brown, who was reportedly watching a movie when his car hit the tractor-trailer, may have taken it a bit too literally.
“By marketing their feature as ‘Autopilot,’ Tesla gives consumers a false sense of security,” Consumer Reports executive Laura MacCleery said following Brown’s death. “We’re deeply concerned that consumers are being sold a pile of promises about unproven technology.”
That technology can be fooled, as Brown’s own death proved. His car’s cameras apparently didn’t “see” the white trailer in front of the car as an obstacle, possibly because the trailer was hard to distinguish against a bright sky.
His car’s radar would have picked it up, but the latest version of Autopilot at the time was configured to disregard obstacles detected by radar unless they could be confirmed by cameras. (The next version of Autopilot will give radar equal authority.)
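The gating rule the article describes is easy to sketch. The Python below is purely illustrative and not Tesla's code: the Detection class, the confirmation distance, and the in-path test are all invented for the example, and the radar_standalone flag simply toggles between "ignore radar unless the camera agrees" and "let radar alone trigger braking."

import math
from dataclasses import dataclass

@dataclass
class Detection:
    """One obstacle report from a single sensor (meters, ego-vehicle frame)."""
    x: float  # distance ahead
    y: float  # lateral offset

def camera_confirms(radar_obj: Detection, camera_objs: list, max_gap_m: float = 2.0) -> bool:
    """True if some camera detection is close enough to count as the same obstacle."""
    return any(math.hypot(c.x - radar_obj.x, c.y - radar_obj.y) <= max_gap_m
               for c in camera_objs)

def should_brake(radar_objs: list, camera_objs: list, radar_standalone: bool) -> bool:
    """Decide whether to brake for anything in the travel path.

    radar_standalone=False mirrors the older behavior described in the article:
    a radar return is disregarded unless the camera also sees the obstacle.
    radar_standalone=True gives radar equal authority.
    """
    in_path = lambda d: d.x > 0 and abs(d.y) < 1.5  # crude "ahead of me, in my lane" test
    if any(in_path(c) for c in camera_objs):
        return True
    return any(in_path(r) and (radar_standalone or camera_confirms(r, camera_objs))
               for r in radar_objs)

# A trailer broadside against a bright sky: radar sees it, the camera does not.
trailer = Detection(x=40.0, y=0.0)
print(should_brake([trailer], [], radar_standalone=False))  # False: radar-only return is ignored
print(should_brake([trailer], [], radar_standalone=True))   # True: radar alone can trigger braking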
It’s not just Tesla’s sensors that are fallible. At the DEF CON 24 hacking conference in Las Vegas in August, three Chinese researchers showed how easy it was to make fake obstacles appear, and real ones disappear, from the navigation systems of Tesla, Audi, Volkswagen and Ford vehicles. Most of these scenarios involve assisted rather than autonomous driving, and the humans behind the wheel would often be able to stop in time. Autonomous vehicles might not have that fallback option.
One of the biggest makers of vehicle camera systems is an Israeli firm called Mobileye, which supplies numerous car makers. But following Brown's crash, Mobileye and Tesla had a falling out. This month, Mobileye's chairman and chief technology officer told Reuters that Tesla was "pushing the envelope" in terms of vehicle safety.
The human advantage
Let's look at Tesla's and Google's claims that autonomous vehicles are safer than regular cars because there's no factor of human error. That may be true on an empty road with unmoving obstacles. But self-driving cars have to share the road with human drivers, and human drivers seem to hit self-driving cars twice as often as regular vehicles, according to a University of Michigan study from October 2015 (http://www.umich.edu/~umtriswt/PDF/UMTRI-2015-34_Abstract_English.pdf).
That may be because self-driving cars are too cautious, too observant of the law, and too slow to adapt to rapidly changing circumstances. A Google autonomous vehicle was famously rear-ended by a human driver in Mountain View, California—because the Google car braked too quickly at a stop sign. (The crash was at a whopping 4 mph.)
You might imagine that caution, lawfulness and moderate speed are good things. But they’re not. No one drives 55 on the freeway, and if they do, they’d better be in the slow lane.
An autonomous vehicle in heavy, but steady, freeway traffic would be a cybernetic grandma, stuck in the fast lane doing the speed limit and too scared to change lanes even as angry drivers behind it pressed closely to its bumper. Updated software could mitigate that behavior, but you’d have to program the robot car to regularly break the law—and that’s something no corporation wants to be caught doing.
And that rosy scenario is in freeway traffic during clear daylight, possibly the most predictable form of traffic there is. Regular driving involves having to react instantly to darkness, heavy rain, snow, kids following balls out into streets, cyclists and things suddenly falling off trucks.
“There’s nothing that’s even remotely approaching the ability to do that,” Steve Shladover, director of the University of California’s Partners for Advanced Transportation Technology (PATH) program, told the CBC in a May 2015 article. “Even the most sophisticated of those test vehicles is far inferior to a novice driver.”
I just stepped outside to get lunch in midtown Manhattan and watched taxis abruptly change lanes, bicycle deliverymen weave in and out of traffic, and pedestrians stand on the street (not the sidewalk) at corners—and then race across the street before the next car comes.
New York City drivers know how to drive in such chaotic situations. It’ll be a long time before a robot car programmed in suburban California can do that.
Source Citation (MLA 8th Edition)
Wagenseil, Paul. "Self-Driving Cars Are Not As Safe As Vehicles Operated by Human Drivers." Opposing Viewpoints Online Collection, Gale, 2018. Opposing Viewpoints in Context, http://link.galegroup.com/apps/doc/ZLVJRK680638574/OVIC?u=txshracd2500&sid=OVIC&xid=1f868ad2. Accessed 8 Oct. 2018. Originally published as "Pull Over, Robot! Self-Driving Cars Should Be Off The Roads," Tom's Guide, 20 Sept. 2016.
Gale Document Number: GALE|ZLVJRK680638574