Despite the need to maintain data integrity and security in enterprise artificial intelligence (AI) systems, an alarming number of organizations lack adequate AI governance policies and tools to protect themselves from potential legal challenges, researchers at O’Reilly Media report.
The Boston-based publishing and research company today released the results of its annual “AI Adoption in the Enterprise” survey. The benchmark report examines trends in how AI is being implemented, including the techniques, tools and practices organizations are using, to better understand business adoption outcomes over the past year.
Among respondents with AI products in production, the proportion of those whose organizations had a governance plan to oversee how projects are created, measured and observed was about the same as those who did not (49% yes, 51% no). Of the respondents who were still evaluating AI, relatively few (22%) had a governance plan.
‘Disturbing’ AI Governance Trend
“The sheer number of organizations without AI governance is troubling,” Mike Loukides, VP of content strategy at O’Reilly and the author of the report, told VentureBeat. “While it’s easy to assume that AI governance isn’t necessary if you’re just doing some experiments and proof-of-concept projects, that’s dangerous. At some point, your proof-of-concept will likely turn into a real product, and then your governance efforts will need to catch up.
“It’s even more dangerous if you rely on AI applications in production,” Loukides said. “Without formalizing some form of AI governance, you’re less likely to know when models get old, when the results are biased, or when data has been collected incorrectly.”
This year’s survey results found that the percentage of organizations reporting AI applications in production — that is, organizations with revenue-bearing AI products in production — has remained constant over the past two years at a modest 26%, indicating that AI has moved into the next phase of the hype cycle, Loukides said.
“AI has been the focus of the technology world for years,” Loukides said. “Now that the hype is over, it’s time for AI to prove that it can deliver real value, whether it’s cost savings, increased productivity for businesses, or building applications that can create real value for people’s lives. This will no doubt require practitioners to develop better ways to collaborate between AI systems and humans, and more sophisticated methods of training AI models that can circumvent the biases and stereotypes that plague human decision-making.”
When it comes to evaluating risk, unexpected outcomes (68%) remained the main focus for mature organizations, closely followed by model interpretability and model degradation (both 61%). Privacy (54%), fairness (51%) and security (42%) — issues that can have a direct impact on individuals — were among the least mentioned risks by organizations. While there may be AI applications where privacy and fairness aren’t an issue, companies with AI practices should prioritize the human impact of AI, Loukides said.
“While AI adoption is slowing, it is certainly not stagnant,” O’Reilly president Laura Baldwin said in a media advisory. “Significant venture capital investments are being made in the AI space, with 20% of all funds going to AI companies. What this probably means is that AI growth will plateau in the near term, but these investments will pay off later in the decade.
“In the meantime, companies should not lose sight of the goal of AI: to make people’s lives better. The AI community must take the steps necessary to create applications that generate real human value, otherwise we risk entering a period of reduced funding in artificial intelligence.”
Other findings include the following:
- Among mature-practice respondents, TensorFlow and scikit-learn (both 63%) are the most commonly used AI tools, followed by PyTorch (50%), Keras (40%) and AWS SageMaker (26%).
- Significantly more organizations with mature practices are using AutoML to automatically generate models: 67% are using AutoML tools, up from 49% the previous year, a 37% increase.
- Among mature practices, there was also a 20% increase in the use of automated deployment and monitoring tools. The most commonly used tools are MLflow (26%), Kubeflow (21%) and TensorFlow Extended (TFX, 15%).
- As in the past two years, the biggest bottlenecks to AI adoption are a lack of skilled people and a lack of data or data quality issues (both at 20%). However, organizations with mature practices were more likely to cite data issues as the main bottleneck, a hallmark of experience.
- Organizations with mature practices saw the greatest skills gaps in ML modeling and data science (45%), data engineering (43%) and maintaining a set of business use cases (40%).
- Retail and financial services have the highest percentage of mature practices (37% and 35%, respectively). Education and government (both 9%) have the lowest percentage, but also the highest share of respondents considering AI (46% and 50%, respectively).
The full report can now be downloaded here.
This post, “AI governance adoption is flattening – what it means for enterprises,” was originally published at “https://venturebeat.com/2022/04/01/ai-governance-adoption-is-leveling-off-what-it-means-for-enterprises/”