There’s no shortage of promises when it comes to AI. Some say it will solve all problems, while others warn it will bring about the end of the world as we know it. Both positions regularly play out in Hollywood plotlines like Westworld, Altered Carbon, Minority Report, Her, and Ex Machina. Those stories are compelling because they require us, as creators and consumers of AI technology, to decide whether we trust an AI system or, more precisely, trust what the system is doing with the information it has been given.
This content was produced by Nuance. It was not written by MIT Technology Review’s editorial staff.
Joe Petro is CTO at Nuance.
Those stories also provide an important lesson for those of us who spend our days designing and building AI applications: trust is a critical factor in determining an AI application's success. Who wants to interact with a system they don't trust?
The black box and understanding the unknowns
Even as a nascent technology, AI is incredibly complex and powerful, delivering benefits by performing computations and detecting patterns in huge data sets with speed and efficiency. But that power, combined with “black box” perceptions of AI and its appetite for user data, introduces a lot of variables, unknowns, and possible unintended consequences. Hidden within practical applications of AI is the fact that trust can have a profound effect on the user’s perception of the system, as well as on the associated companies, vendors, and brands that bring these applications to market.
Advancements such as ubiquitous cloud and edge computational power make AI more capable and effective while making it easier and faster to build and deploy applications. Historically, the focus has been on software development and user-experience design. But it’s no longer a case of simply designing a system that solves for x. It is our responsibility to create an engaging, personalized, frictionless, and trustworthy experience for each user.
AI is data hungry: Know what you’re feeding it
The ability to do this successfully depends largely on user data. System performance, reliability, and user confidence in an AI model’s output are affected as much by the quality of the model design as by the data going into it. Data is the fuel for the AI engine, converting the potential energy of user data into kinetic energy in the form of actionable insights and intelligent output. Just as filling a Formula 1 race car with poor or tainted fuel would diminish performance and the driver’s ability to compete, an AI system trained on incorrect or inadequate data can produce inaccurate or unpredictable results that break user trust. Once broken, trust is hard to regain. That is why rigorous data stewardship practices by AI developers and vendors are critical for building effective AI models as well as for creating customer acceptance, satisfaction, and retention.
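To make that point concrete, here is a minimal, hypothetical sketch in Python. The dataset, classifier, noise level, and scikit-learn calls are illustrative assumptions and do not come from the article or from Nuance’s systems; the sketch simply trains the same simple model twice, once on clean labels and once with a portion of the training labels deliberately corrupted, and compares accuracy on held-out data.

```python
# Illustrative sketch only: how "tainted fuel" (corrupted training labels)
# can degrade a model's accuracy. Synthetic data, chosen purely for demo.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "user data": 2,000 examples with 20 features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

def train_and_score(labels):
    """Train a simple classifier and report accuracy on held-out data."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean data: the model sees correct labels.
clean_acc = train_and_score(y_train)

# "Tainted" data: flip 30% of the training labels to simulate
# incorrect or inadequate data entering the pipeline.
noisy_labels = y_train.copy()
flip = rng.choice(len(noisy_labels), size=int(0.3 * len(noisy_labels)), replace=False)
noisy_labels[flip] = 1 - noisy_labels[flip]
noisy_acc = train_and_score(noisy_labels)

print(f"Accuracy with clean labels:     {clean_acc:.2%}")
print(f"Accuracy with corrupted labels: {noisy_acc:.2%}")
```

The gap between the two accuracy figures is a small-scale stand-in for the effect described above: degraded inputs surface as degraded, less trustworthy outputs.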
————
By: Jason Sparapani
Title: Beyond the AI hype cycle: Trust and the future of AI
Sourced From: www.technologyreview.com/2020/07/06/1004823/beyond-the-ai-hype-cycle-trust-and-the-future-of-ai/
Published Date: Mon, 06 Jul 2020 16:00:19 +0000