Nancy Carré
31 August 2017
Is Artificial Intelligence Really a Threat?
On 19 October 2016, Stephen Hawking delivered the inaugural speech for the Leverhulme Centre for the Future of Intelligence, in which he cautioned that while the use of artificial intelligence (AI) can benefit humanity, it also poses a very real threat (“The best”). In 2017, Hawking joined an international cohort of prominent AI and robotics researchers in signing an open letter asking the United Nations to take steps to counter autonomous weaponry (“Killer robots”). While intelligent weapons systems are an undeniable threat, a more immediate concern is the growing reliance on AI to manipulate stock markets and financial transactions, which can damage national economies worldwide. Using technology to gain a competitive advantage in any realm rests on a set of social constructs and assumptions, and the question of whether AI is an actual threat must therefore be framed within the contexts propelling its development.
The trend in modern scientific inquiry has often been knowledge for its own sake, frequently with inadequate consideration of potential outcomes. The Manhattan Project is a telling example of this kind of blindness: the physicists developing the atomic bomb, having realized its destructive power, later objected to its use. The restraint these individuals called for was eclipsed by political imperatives, drivers that still propel weapons research and development. The use of AI in economic settings reflects the same conundrum, in that the computer specialists who design computational intelligence to increase the efficacy of investment strategies appear to overlook the broader implications of their work.
Stock market transactions are rooted in analyses that seek to predict trends (Cavalcante et al. 196). Conventional evaluation of a company’s worth considers its assets, marketing strategies, and management policies, along with other factors that influence its performance (196). Technical analysts dismiss this holism, assuming that share price inevitably reflects these fundamentals and that predicting market behavior can be reduced to pattern-recognition algorithms (197). The challenge has been to increase the speed and reliability of predictive mechanisms through “intelligent trading systems that combine explicit drift detection, adaptive learning systems and trading rules for automatic negotiation” (208). These efforts do not appear to consider the global economic ramifications of system failure, or of groups seeking to control the market for personal benefit. AI increases the likelihood of both scenarios.
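The kind of rule-based prediction described above can be illustrated with a toy example. The sketch below is purely hypothetical and not drawn from Cavalcante et al.; it implements a moving-average crossover, one of the simplest pattern-recognition trading rules, in which a short-term price average crossing a long-term average triggers a trade.

```python
# Hypothetical illustration of a rule-based trading signal of the kind
# technical analysts automate: a moving-average crossover. The price
# series and window sizes are invented for demonstration only.

def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy' when the short-term average crosses above the
    long-term average on the most recent price, 'sell' on the reverse
    cross, and 'hold' otherwise."""
    yesterday = sma(prices[:-1], short) - sma(prices[:-1], long)
    today = sma(prices, short) - sma(prices, long)
    if yesterday <= 0 < today:
        return "buy"
    if yesterday >= 0 > today:
        return "sell"
    return "hold"

# A sudden price rise pushes the short average above the long one.
print(crossover_signal([1, 1, 1, 1, 1, 5]))  # prints "buy"
```

A production system of the sort the survey describes would layer drift detection and adaptive learning on top of such rules, but even this toy version shows how easily trading decisions can be delegated to an algorithm with no awareness of systemic consequences.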
To reconcile the benefits and drawbacks of AI, Waser suggests engineering a value system to guide humanity and its AI creations (106). Humanity, however, has never arrived at universally applicable definitions of right and wrong (107). In fact, Waser argues, research has convincingly demonstrated that human beings are essentially driven by self-interest and that the rational mind justifies decisions made on that basis (107). Waser would channel the power of collective thinking and social constructions of value to create a universal system integrated into AI utility functions, producing machines that “mirror our reflexive adherence to laws and customs dictated by the society around us” (109). Such a system implies open, clear, and deliberate communication about AI applications, as well as collaboration on ethical parameters for AI development and use.
The inherent danger of AI is not exaggerated, given current social paradigms. When competition for resources and dominance motivates the creation of AI weaponry, that weaponry inevitably constitutes a serious threat to human survival. When scientists develop independently functioning predictive systems for financial transactions without constraining their application, those who can afford to buy and deploy such systems acquire dominance with little effort. But declaring AI an innate threat while ignoring the reasons for that threat is akin to waging a “war on drugs” without addressing the underlying social malaise. Science is a holistic enterprise: every advance affects humanity, even if indirectly. AI development must be viewed within that context, not simply as a means for profit or a tool to establish hegemony.
Works Cited
Cavalcante, Rodolpho, et al. “Computational Intelligence and Financial Markets: A Survey and Future Directions.” Expert Systems with Applications, vol. 55, 2016, pp. 194-211, doi: 10.1016/j.eswa-.
“Killer robots: World’s top AI and robotics companies urge United Nations to ban lethal autonomous weapons.” Medianet, 21 Aug. 2017, www.medianet.com.au/releases/141447/. Accessed 31 Aug. 2017.
“‘The best or worst thing to happen to humanity’: Stephen Hawking launches Centre for the Future of Intelligence.” University of Cambridge, 19 Oct. 2016, www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of. Accessed 31 Aug. 2017.
Waser, Mark. “Designing, Implementing, and Enforcing a Coherent System of Laws, Ethics, and Morals for Intelligent Machines (including Humans).” Procedia Computer Science, vol. 71, 2015, pp. 106-111, doi: 10.1016/j.proces-.