The Key To Successful AI: Hiding Its Use From People

30 Sep 2019
AI is proving itself superior to human intelligence in a growing number of fields. That is, except when people know AI is being used.
 
In certain human-centric sectors, the performance of artificial intelligence starts to drop off once people are told an intelligent machine is involved. Human resistance, in fact, appears to be the Achilles' heel of artificial intelligence: for all the recent advances in AI technology, this resistance is preventing AI from doing its job in areas where human contact and interaction usually play a central role.
 
This message was brought home recently by a study published in Marketing Science on September 20, titled "The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases." In it, an international team of researchers found that chatbots used by an unnamed financial services company were as effective at selling products as proficient sales employees and four times more effective than inexperienced workers. The catch: when customers were informed before the conversation that they would be speaking to a chatbot, the ability of the AI-based assistants to encourage purchases fell by a massive 79.7%.
 
"Our findings show when people don't know about the use of artificial intelligence (AI) chatbots they are four times more effective at selling products than inexperienced workers, but when customers know the conversational partner is not a human, they are curt and purchase less because they think the bot is less knowledgeable and less empathetic," said co-author Xueming Luo, a professor of marketing at Temple University.
 
Similar results have emerged from other studies. Earlier this month, researchers from NYU and Boston University found that patients were less receptive to AI-based healthcare provision and were less likely to trust artificially intelligent healthcare services than human providers. This is perhaps not surprising, but it also isn't particularly rational, since AI has been shown in experiments to be at least on par with human experts at, for example, diagnosing medical conditions from images.
 
Moreover, when it comes to money, research published by enterprise software firm VMware in early 2019 concluded that only 19% of people in the U.K. would be happy for AI to play a role in managing their finances. This is despite the fact that there is already evidence of the superiority of AI-based investment: data from Eurekahedge shows that the annualized rate of return for 14 AI-driven hedge funds is 12.74% (as of August 2019), whereas the average rate for the large hedge funds of its "Eurekahedge 50" index is only 5.12%. More broadly, the average annual rate of return for AI funds over the decade from 2010 to 2019 was 13.18%, while the ten-year average for the Eurekahedge 50 was only 5.4% (and the ten-year average for Eurekahedge's main index of 2,500 funds was 4.9%).
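To put those annualized figures in perspective, here is a minimal back-of-the-envelope sketch (not part of the Eurekahedge data, just simple compounding of the averages cited above) showing how much the gap widens over a decade:

def cumulative_growth(annual_rate_pct, years):
    # Total growth factor from compounding a fixed annual rate.
    return (1 + annual_rate_pct / 100) ** years

# Ten-year averages reported above (2010-2019), assuming simple annual compounding.
print(round(cumulative_growth(13.18, 10), 2))  # AI-driven funds: ~3.45x
print(round(cumulative_growth(5.4, 10), 2))    # Eurekahedge 50: ~1.69x
print(round(cumulative_growth(4.9, 10), 2))    # Eurekahedge main index: ~1.61x

On those averages, the AI funds would multiply an initial stake roughly 3.4 times over the decade, versus about 1.7 times for the Eurekahedge 50.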
 
In other words, our in-built prejudice against AI is preventing the technology from being used and applied more regularly, and from reaching its full potential. Meanwhile, the Marketing Science study suggests one of two ways out of this predicament: either hiding the use of AI from people entirely, or working to build trust gradually via incremental exposure to AI.
 
In some respects, firms engaged in AI are probably already hiding, or at least underplaying, their use of the technology. There is evidence that people are interacting with artificial intelligences without even knowing it: a 2017 Pega survey revealed that, while only 33% of people believe they use AI technology, around 77% actually do. Likewise, a 2018 BarclayHedge survey indicated that around 56% of hedge funds rely on artificial intelligence to some degree to inform their investment decisions, even though the vast majority of people remain wary about trusting AI with their finances.
 
And yet, while it may end up being effective to quietly "force" AI on the public, it is unlikely to be a viable strategy in the long term. In July, a California law came into effect requiring companies to ensure that chatbots disclose themselves to consumers, paving the way for similar rules in other states and possibly at the federal level.
 
As a result, the tactic of quietly slipping chatbots and AI into wider society already looks endangered. Instead, the AI and tech industry will have to settle for building trust in artificial intelligence over time: by conducting more studies demonstrating AI's efficacy, by rolling out AI-based services in a gradual, piecemeal fashion, and by striving constantly to make AI more explainable to the general public. Only by doing this will society come to permit a more central role for artificial intelligence, and only then will AI have a positive effect on us rather than an uncertain and potentially negative one.
 
