AI’s Ethical Dilemma: Human Interests vs Machine Interests

According to Pulitzer Prize–winning author Thomas Friedman, the acceleration of technologies like artificial intelligence (AI) is so fast, deep, and interconnected that its impact on society could be equivalent to the next industrial revolution (Friedman, 2019). To many, AI products provide positive benefits, but even the best intentions can obscure potential consequences. Ethical regulation lags far behind technological advancement, and it may prove difficult not only to identify but also to prevent potential disasters before a true artificial mind passes the Turing Test.

In just five years, the value of the current industrial skill set will depreciate by 50%; in ten years, existing knowledge will be worth a quarter of what it is today (Estes, 2020). This depreciation of today’s occupational skills is a telling example of the accelerated pace at which AI is being adopted in the workplace. Businesses are incorporating AI into product offerings, taking a just-in-time learning approach to re-tooling their workforce, and establishing partnerships with peers, customers, and even competitors to bridge knowledge gaps (CES, 2021). At CES 2021, for example, we saw Panasonic announce partnerships with Envisics and Phiar, and GM with Territory Studio and Rightpoint — new competitive alliances that deepen AI’s contribution to drivers’ experience of, and connection to, their vehicles.
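As a back-of-the-envelope check (my illustration, not from Estes, 2020), those two figures are consistent with simple exponential depreciation with a five-year half-life:

```python
# Back-of-the-envelope sketch (illustrative, not from Estes, 2020):
# 50% of value remaining after 5 years and 25% after 10 match
# exponential depreciation with a 5-year half-life.
HALF_LIFE_YEARS = 5

def skill_value(years: float) -> float:
    """Fraction of today's skill value remaining after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for t in (0, 5, 10):
    print(f"{t:>2} years: {skill_value(t):.0%}")  # 100%, 50%, 25%
```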

At CES 2021, Panasonic announces a partnership with Phiar, developers of edge spatial-AI technology and augmented-reality heads-up displays (CTA, 2021)

AI inside self-driving cars poses a complex set of risks and rewards, especially when considering its ethics. To understand the ethical dilemmas alone, you must grapple with the Responsibility-Sensitive Safety (RSS) rules that govern the driving skills and safety of autonomous vehicles (AVs) and define how a car is expected to react in dangerous situations. The rule “if you can avoid a crash without causing another” shows that it is acceptable for an AV to violate RSS in order to achieve its highest priority of not crashing (Mobileye, 2020). Self-driving cars are deemed safer because the driver is no longer distracted; however, these machines can drive recklessly in order to avoid accidents, potentially endangering other humans and objects on the road. How do we program carefulness and moral values into these machines?
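To make that priority ordering concrete, here is a minimal, hypothetical sketch — my illustration, not Mobileye’s actual RSS implementation — of how a crash-avoidance rule might rank evasive maneuvers:

```python
# Hypothetical sketch of a crash-avoidance priority rule -- illustrative only,
# not Mobileye's actual RSS implementation. The highest priority is avoiding
# a crash without causing another, even if an ordinary driving rule is broken.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    avoids_crash: bool         # does this maneuver avoid the imminent crash?
    causes_new_crash: bool     # does it create a new dangerous situation?
    breaks_driving_rule: bool  # does it violate a normal driving rule?

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Prefer maneuvers that avoid the crash without causing another."""
    safe = [m for m in options if m.avoids_crash and not m.causes_new_crash]
    if safe:
        # Among safe options, prefer one that stays within the rules.
        return min(safe, key=lambda m: m.breaks_driving_rule)
    # No safe evasive option: fall back to the least harmful choice.
    return min(options, key=lambda m: (m.causes_new_crash, not m.avoids_crash))

options = [
    Maneuver("brake in lane", avoids_crash=False,
             causes_new_crash=False, breaks_driving_rule=False),
    Maneuver("swerve onto shoulder", avoids_crash=True,
             causes_new_crash=False, breaks_driving_rule=True),
]
print(choose_maneuver(options).name)  # 'swerve onto shoulder'
```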

Thomas Friedman (New York Times columnist) and Prof. Amnon Shashua (Mobileye) sit down in different time zones for a virtual conversation about advancements in artificial intelligence (CES, 2021)

Today, artificial intelligence is doing more than executing a narrow set of tasks. AI is now able to transfer knowledge from one situation to the next and is approaching artificial general intelligence (AGI) faster than anyone anticipated. For example, Alexa can now “infer” latent goals even when they are not directly expressed by the user (Kumar, 2020), such as turning off the lights at bedtime even if the user never gives the command. Processing power, once the bottleneck to achieving AGI, is being overcome by the performance of the chipsets that companies like AMD and Intel announced at CES 2021.
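As a toy illustration of latent-goal inference — a hypothetical sketch, not Amazon’s actual system described by Kumar (2020) — an agent can score candidate follow-up goals for each explicit request and act only above a confidence threshold:

```python
# Toy illustration of latent-goal inference -- a hypothetical sketch, not
# Amazon's actual Alexa implementation (see Kumar, 2020 for the real system).
# Idea: given an explicit request, estimate the probability of an unstated
# follow-up goal and act on it only above a confidence threshold.

# Hypothetical probabilities that a latent goal accompanies an utterance;
# a real system might learn these from interaction logs.
LATENT_GOALS = {
    "set an alarm for 7 am": [("turn off the bedroom lights", 0.82)],
    "play dinner music":     [("dim the dining room lights", 0.35)],
}

CONFIDENCE_THRESHOLD = 0.7

def infer_latent_goals(utterance: str) -> list[str]:
    """Return unstated follow-up actions the agent is confident enough to act on."""
    candidates = LATENT_GOALS.get(utterance, [])
    return [goal for goal, p in candidates if p >= CONFIDENCE_THRESHOLD]

print(infer_latent_goals("set an alarm for 7 am"))  # ['turn off the bedroom lights']
print(infer_latent_goals("play dinner music"))      # [] -- not confident enough
```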

The next advancement in AI comes with understanding language (Lyamm, 2019), including the ability to read text, summarize main ideas, and (to some degree) hold a conversation. Google DeepMind’s AlphaGo Zero, in which the AI agent uses deep learning and Monte Carlo Tree Search to devise game strategies and solve puzzles in ways that are inconceivable to humans (Foster, 2017), demonstrates that machines will soon be able to teach themselves and reach a “super-human” level of intelligence.
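At the heart of that approach is the tree search’s selection rule. Below is a minimal sketch of the UCT (Upper Confidence Bound for Trees) formula that classic Monte Carlo Tree Search uses to balance exploiting strong moves against exploring uncertain ones; AlphaGo Zero augments this with a deep network’s policy and value estimates, and the node statistics here are invented for the demo:

```python
# Minimal sketch of the UCT selection rule used in Monte Carlo Tree Search.
# Illustrative only: AlphaGo Zero augments this with a neural network's
# policy and value estimates. Node statistics below are invented.
import math

class Node:
    def __init__(self):
        self.visits = 0
        self.wins = 0.0
        self.children = {}  # move -> Node

def uct_select(node: Node, c: float = 1.41) -> str:
    """Pick the child move balancing win rate (exploitation) against
    uncertainty from low visit counts (exploration)."""
    def score(child: Node) -> float:
        return (child.wins / child.visits
                + c * math.sqrt(math.log(node.visits) / child.visits))
    return max(node.children, key=lambda move: score(node.children[move]))

# Demo with two candidate moves and hypothetical statistics.
root = Node()
root.visits = 105
for move, (wins, visits) in {"a": (60, 80), "b": (5, 25)}.items():
    child = Node()
    child.wins, child.visits = wins, visits
    root.children[move] = child

print(uct_select(root))  # 'a' -- its win rate outweighs b's exploration bonus
```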

HIMirror’s smart beauty mirror connects the beauty industry with consumers, providing AR makeup applications and product recommendations (CTA, 2021)

Although we are already “fused” to the intelligence-assisted technology in our homes, we need to question the motivations of AI product manufacturers, which may differ from the interests of the user. For example, Rohit Prasad, Alexa’s head scientist, says Amazon’s goal is to provide a device that “actively orchestrates the consumer’s life”; and Jeffery Chester, executive director of a consumer privacy advocacy organization, cautions that Amazon’s ultimate goal is to monetize our daily lives (Hao, 2019).

When questionable intent is combined with data privacy concerns and our increasing dependence on AI to perform tasks in our daily lives, technology engineers need to encourage conversations around ethical considerations. We need to make our clients aware of the serious questions about how far AI agents can be trusted, and about the potential consequences for society and our complex, globalized economic systems in the years to come.

References & Sources:

 
