Making sense of the AI landscape
Whether or not you have a strong technical background, the subject is incredibly complex. As one customer put it, the whole subject is increasingly inaccessible. That's a problem, because AI is a central part of so many organisations' strategies as they look to the future.
What is it about the subject that makes it so inaccessible? And why is it getting worse?
Ultimately, the signal-to-noise ratio has got out of whack: far too much noise, not enough signal.
We think there are a few reasons for this.
- The newness of the subject, and the speed at the edge. There's no question the field is relatively new. The bigger issue is that the pace of progress at the coalface is so rapid that it becomes incredibly difficult to keep track of. Think of what SpaceX is doing with ML, or what's happening in the world of autonomous vehicles: new advances are being made every day. With both of those things in play, the newness and the speed of change, how do you get your head around what's actually going on? And figure out how it affects your plans?
- The distribution of applied experience is not normal. You'll be familiar with a normal distribution curve. The reality, when you look at the experience of practitioners in this space, is dramatically different. I've written before about the LinkedIn research we did, where we found that X participants had more than 18 months of experience in ML. The truth of this space is that very few people have real applied expertise. Which leads to the next question: who is teaching this, and who is learning it? A lot of the noise in the market is created by people without much applied experience. They're academics or influential enthusiasts, and they hold passionate views about what they would do, were they given the chance. That's very different from the people who have been on the tools, deploying real things into production. The gap is made worse when you consider how much time people have to create content and give back to the community. We deal with the busiest practitioners in this space, and I can tell you, they don't have endless hours to post on Reddit or Medium.
- Religious tendencies. Obviously, I mean this in the technical sense, not the theological sense. Any time you discuss this subject, you run the risk of encountering someone with religious fervour about a particular way of doing things. I'm reminded of the parable: "ask a carpenter to build you a monument, and they'll build it out of wood". It's similar with AI; people have their own particular flavours. At best, that's fine: they simply like doing things a certain way. At worst, it's quite frankly religious myopia, in that they can only see the world through one lens.
- And lastly, add to all of the above billions of dollars of marketing spend by global mega-vendors. The sheer volume of content created by these mega-marketing machines is phenomenal. You can't move online without being influenced by Amazon, Microsoft, Salesforce, Google, and the rest. Take the top 20 global organisations in the AI/ML space: their combined marketing budgets rival New Zealand's GDP. And as we know, just because it's marketing doesn't make it true. But it adds to the noise. Now, for every incredibly compelling thing you read, you have to interrogate the incentives and interests behind it. We're all well-versed in the "fake news" phenomenon in the media; it's now beginning to affect the tech industry.
It's a challenge. But it's an important one to deal with, because these technologies will become the key points of difference between the organisations that thrive and those that, well, don't.
This all leads to what we’re working on.
We want to make this subject more accessible for Chief Product Officers: those who hold the key to the roadmap, but have to make the gnarly trade-off decisions about what gets worked on.
We think about the components of the AI/ML landscape differently from most. So we want to share that with our community, because we think there is a simplified way of looking at it that can cut through all the noise.
Last week we started by looking at the AI stack through a functional lens. Next, we'll look at some well-known (and some lesser-known) examples and compare and contrast them against that functional framework, as a way of simplifying the subject and "cutting through all the s&%t".