This post originally appeared on LinkedIn Pulse on January 24, 2017.
The noise around Artificial Intelligence over the last 12 months has arguably come with a good dose of hype. But there are reasons to believe we have reached an inflection point in the adoption of AI technologies. Has AI arrived? Let's take a look at the main drivers and enablers:
- Distributed Computing: The ability to crunch complex data sets simultaneously across multiple machines has delivered massive improvements in computing power, dramatically shrinking the time it takes to fit a model that explains the relationships within the data.
- High-Powered Chip Sets: GPUs (Graphics Processing Units) deliver an order-of-magnitude improvement in parallel processing throughput over CPUs for the matrix operations at the heart of machine learning, resulting in dramatically faster training times.
- Internet of Things: Thanks to the billions of connected devices with sophisticated sensors generating vast amounts of data, IoT can be a major driver for AI. Without the intelligent filtering and sophisticated rules engine that AI provides, much of this data is just nonsensical noise and doesn't provide a basis for intelligent action. Machine learning algorithms can supply intelligent notifications and a basis for action on machine-critical events. An example is SparkCognition, a Verizon Ventures portfolio company, which takes complex data sets from different components in a wind turbine or an oil pipeline and provides insights on the predictive maintenance required to prevent unscheduled downtime.
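To make the predictive-maintenance idea concrete, here is a minimal, hypothetical sketch of the kind of filtering described above: flagging sensor readings that deviate sharply from their recent baseline so maintenance can be scheduled before an unscheduled outage. This is an illustrative toy (a simple z-score rule, not SparkCognition's actual approach), and all names, data, and thresholds are assumptions.

```python
# Illustrative sketch only: a crude anomaly filter for machine sensor
# data, standing in for the far more sophisticated ML systems the post
# describes. Thresholds and data are invented for demonstration.
import statistics

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Return indices of readings that deviate sharply from the
    trailing window's mean, measured in standard deviations."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(readings[i] - mean) / stdev > z_threshold:
            anomalies.append(i)
    return anomalies

# Simulated vibration data: a steady signal with one injected fault.
vibration = [1.0 + 0.01 * (i % 5) for i in range(50)]
vibration[40] = 5.0  # sudden spike, e.g. a failing bearing
print(flag_anomalies(vibration))  # flags the spike at index 40
```

In a real deployment, this trivial rule would be replaced by models trained on historical failure data across many components, which is precisely why the data itself becomes so valuable.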
From Proprietary Tech to Open Source
Early innovations in the AI ecosystem emerged from platform development, with initial players building their own proprietary stacks. While this has led to some degree of fragmentation in the ecosystem, in the last year we have seen many of the major tech companies (including Amazon, Google and Facebook) open source their machine learning tools. This move gives developers access not only to sophisticated machine learning software for modeling interrelationships between data elements, but also to the impressive data center computing power that these companies possess.
As a result of this move, innovations in the AI space will likely move toward domain-specific deep data sets, spurring a whole new set of applications built on that data.
The Next Wave of AI Innovation
This raises the question of what the future of AI innovation will look like. Considering the significant progress made so far in the development of a robust hardware stack, or core AI 'operating system,' it is reasonable to expect that future value will come from the creation or ownership of highly verticalized data models. Startups that can acquire unique data sets at the lowest cost are most likely to build highly valuable franchises over time. As their algorithms become more efficient (as a result of the additional data), the cost for someone else to replicate them becomes prohibitive.
Companies may collect robust data quickly and cheaply by crowdsourcing in the consumer space, provided that consumers have an incentive to contribute their data. Some interesting models have already emerged here, such as Comma.ai in the autonomous vehicles space. Intriguingly, virtual reality may have a role to play here as well. As an immersive platform that produces a rich flow of data, VR simulations could provide a low-cost, rapid shortcut to data gathering in select applications such as autonomous driving or certain commerce use cases. For instance, accumulating data on how different users handle road conditions on a specific set of simulated roads in Palo Alto could lead to a new algorithm for real-world situations.
In the enterprise context, there is unlikely to be such an effective route to low-cost data gathering. Corporations may be reluctant to hand over valuable data without some assurance of data integrity and security, as well as some proof points around capabilities.
However, highly verticalized use cases are likely to see fairly rapid data collection once the initial set of customers is sourced. For example, an oil and gas customer will be more willing to hand over data once they understand the value that a machine learning solution has created for other oil and gas customers through predictive maintenance and the elimination of costly machine downtime. Each incremental addition of customer data will help improve overall predictive intelligence and algorithm efficiency for all.
There is no doubt that AI and machine learning are having their moment. Beyond the breathless articles on the power of AI, it seems clear that access to data, and strong linkages between data science and product engineering functions, will be more predictive of success than developing the best AI hardware or algorithms.