Answer A

Here in our new world, day by day we are able to think more and more innovatively, which is what makes us great on this planet; moreover, we are now looking forward to self-driving cars that can run across cities without a driver. This concept has many critical issues to be considered, and a lot has been resolved, but one question sits outside the technical subject: what about drivers' livelihoods? There are millions of drivers who depend on this skill, and the only thing they do is drive clients around the globe. Have we ever thought about how they will manage their homes after losing their jobs just because of technology?

Millions of users from 233 countries and territories took the Moral Machine quiz, making 40 million ethical decisions in total. From this data, the study's authors found certain consistent global preferences: sparing humans over animals, more lives rather than fewer, and children instead of adults. They suggest these factors should therefore be considered as "building blocks" for policymakers when creating laws for self-driving cars. But the authors stressed that the results of the study are by no means a template for algorithmic decision-making. "What we are trying to show here is descriptive ethics: people's preferences in ethical decisions," Edmond Awad, a co-author of the paper, told The Verge. "But when it comes to normative ethics, which is how things should be done, that should be left to experts."

The data also showed significant variations in ethical preferences across countries. These correlate with a number of factors, including geography (differences between European and Asian nations, for example) and culture (comparing individualistic versus collectivist societies).

It's important to note that although these decisions will need to be made at some point in the future, self-driving technology still has a way to go. Autonomy is still in its infancy, and self-driving cars (despite public perception) are still prototypes, not products. Experts also say that while it's not clear how these decisions will be programmed into vehicles in the future, there is a clear need for public consultation and debate.
"What happens with autonomous vehicles may set the tone for other Al and robotics, since they're the first to be integrated into society at scale," Patrick Lin, director of the Ethics + Emerging Sciences Group at Cal Poly University, told The Verge. “So it's important that the conversation is as informed as possible, since lives are literally at stake." The results from the Moral Machine suggest there are a few shared principles when it comes to these ethical dilemmas. But the paper's authors also found variations in preferences that followed certain divides. None of these reversed these core principles (like sparing the many over the few), but they did vary by a degree. The researchers found that in countries in Asia and the Middle East, for example, like China, Japan, and Saudi Arabia, the preference to spare younger rather than older characters was “much less pronounced." People from these countries also cared relatively less about sparing high net-worth individuals compared to people who answered from Europe and North America.
The study's authors suggest this might be because of differences between individualistic and collectivist cultures. In the former, where the distinct value of each individual as an individual is emphasized, there was a "stronger preference for sparing the greater number of characters." Counter to this, the weaker preference for sparing younger characters might be the result of collectivist cultures, "which emphasize the respect that is due to older members of the community." These variations suggest that "geographical and cultural proximity may allow groups of territories to converge on shared preferences for machine ethics," say the study's authors.

However, other factors correlated with variations that weren't necessarily geographic. People from less prosperous countries, for example, with a lower gross domestic product (GDP) per capita and weaker civic institutions, were less likely to want to crash into jaywalkers rather than people crossing the road legally, "presumably because of their experience of lower rule compliance and weaker punishment of rule deviation."

The authors stress, though, that the results from the Moral Machine are by no means definitive assessments of different countries' ethical preferences. For a start, the quiz is self-selecting, likely to be taken only by relatively tech-savvy individuals. It is also structured in a way that removes nuance. Users have only two options with definite outcomes: kill these people or those people. In real life, these decisions are probabilistic, with individuals choosing between outcomes of different severities and degrees. ("If I swerve around this truck, there's a small chance I'll hit that pedestrian at a low speed," and so on; a toy version of this kind of weighing is sketched below.)

Nevertheless, experts say that doesn't mean such quizzes are irrelevant. The contrived nature of these dilemmas is a "feature, not a bug," says Lin, because they remove "messy variables to focus in on the particular ones we're interested in." He adds that even if cars won't regularly have to choose between crashing into object X or object Y, they still have to weigh related decisions, like how wide a berth to give these items. And that is still "fundamentally an ethics problem," says Lin, "so this is a conversation we need to have right now."
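To make the point about probabilistic, graded outcomes concrete, here is a minimal sketch. It is purely illustrative: the maneuvers, probabilities, and severity numbers are hypothetical assumptions of mine, not anything taken from the Moral Machine study or from any real vehicle software. It only shows how one might compare candidate actions by probability-weighted harm rather than by the quiz's certain, binary outcomes.

```python
# Toy sketch: weigh driving maneuvers by expected (probability-weighted) harm,
# rather than treating them as certain kill-these-or-those choices.
# All values below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    probability: float  # chance this outcome occurs if the action is taken
    severity: float     # harm on an arbitrary 0-1 scale (1 = fatal)

def expected_harm(outcomes: list[Outcome]) -> float:
    """Sum of probability-weighted severities for one candidate action."""
    return sum(o.probability * o.severity for o in outcomes)

# Two candidate maneuvers, each with uncertain consequences.
stay_in_lane = [
    Outcome("rear-end the truck at moderate speed", 0.9, 0.4),
]
swerve = [
    Outcome("hit the pedestrian at low speed", 0.1, 0.7),
    Outcome("clip the guardrail", 0.5, 0.1),
]

for name, action in [("stay in lane", stay_in_lane), ("swerve", swerve)]:
    print(f"{name}: expected harm = {expected_harm(action):.3f}")
```

Even in this toy form, the ethical question reappears the moment you ask how severity should be scored and whose harm counts for how much, which is exactly Lin's point that the weighing itself is "fundamentally an ethics problem."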