The devil's tempting me to say we're doing AI
Just out of curiosity, I searched Google News for these three words: artificial intelligence investments.
Here's what came up.
I love the optimism. The report says #AI can collectively lift profits by $4.8 trillion. That's roughly the size of Japan's economy.
And look at this smart combo headline. It seems to imply this: if both Google and Facebook are doing it, you must be a fool not to do it.
And this one below is kinda desperate: to not get 'lost in changing trends'.
Thank God, we're still dominating. Maybe not in the future. But for now we're okay.
I just loved this one. Prime Minister Theresa May looks so Darth Vader-esque in her Davos gear. She's seeking 'safe and ethical' AI. (To all the good people in the UK: I mean no disrespect).
I understand there are 14x more startups and 6x more investments in AI. It sounds so cool. But when I think of AI today, I'm somehow reminded of 'SaaS' or 'big data' or 'analytics' or 'cloud' or 'app' or 'dotcom'.
(I once interviewed with a dotcom startup in Atlanta around 2000-01, hoping to make lots of money. Funnily enough, they went bust just before I joined.)
One of the things my company does is billing: getting doctors paid for medical bills. It's a lot of hard work to get money out of insurance companies.
We use certain automation algorithms to avoid doing repetitive tasks, like picking up stuff from a spreadsheet, entering it into the right areas of the software, and clicking submit. We wrote those programs to improve our efficiency and save us money.
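The kind of automation I mean can be sketched roughly like this (all column names, field names, and the submit step here are hypothetical and illustrative only; the point is that it's deterministic lookup-and-copy, not learning):

```python
import csv
import io

# Hypothetical mapping from spreadsheet columns to billing-software fields.
FIELD_MAP = {"Patient": "patient_name", "CPT": "procedure_code", "Amount": "charge"}

def rows_to_claims(csv_text):
    """Turn spreadsheet rows into claim dicts ready for form entry.

    This is the unglamorous 'automation algorithm': fixed column
    lookup and renaming. No vision, no learning.
    """
    claims = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        claims.append({field: row[col] for col, field in FIELD_MAP.items()})
    return claims

def submit(claim):
    # Stand-in for the 'click submit' step (in practice, an HTTP POST
    # or GUI automation); here it just checks the claim is complete.
    return all(claim.get(field) for field in FIELD_MAP.values())

sheet = "Patient,CPT,Amount\nJane Doe,99213,120.00\n"
claims = rows_to_claims(sheet)
print(claims[0]["procedure_code"])  # → 99213
print(submit(claims[0]))            # → True
```

Nothing in that sketch would earn the label 'AI' from a researcher, which is exactly the point of the next paragraph.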
But I'm tempted to call what we do AI. We could mask what we do by saying this: we use computer vision and artificial intelligence to rapidize revenue cycle management. This improves profitability by X. Our goal is to use machine learning to get claims paid with zero human intervention.
The odd thing is I wouldn't be entirely wrong. Our algorithms do 'see' things on their own. They do make things faster ('rapidize'). And they do improve profitability. As for humans, we needn't worry. The healthcare industry will ensure we have enough confusing, changing, and long-winded manual work to do.
But I wouldn't be right either. The above glorious statement reminds me of a meeting I had with a few hospital executives.
They referred to algorithms as 'robots' and asked, "Show us some robots if you got any!"
(Aside: Have you seen this funny commercial explaining Bitcoin?)
I wonder then if we've collectively (industry + media) made artificial intelligence into lovely marketing jargon. So noisy and blurry that we find it increasingly difficult to spot real AI at work.
I wonder: where exactly do we draw that line in the sand, to separate real AI from the other AI?
I plan to ask these questions of people who seem to be doing real AI work, on a panel I'm moderating (at #HIT-NEXT). Like Predible Health. I was quite impressed by how their algorithms 'see' images of the liver and stitch them back together to provide a 360-degree view. It helps doctors visualize a surgery before doing it. For example, what would happen if you removed cancerous tumors in the liver by X% vs. Y%?
But I'm more curious to find out: will it really make doctors better at their craft? Or is this just a nice-to-have? Will it really help patients get better care? When we reach the point of mining millions of slices of liver scans, what exactly will we have learnt? And is that just data analysis, or is there any wisdom in it?
Is Watson burdening us with more analysis and suggestions or is it giving us wisdom by showing fewer things?
And how does all this play out when you are boxed with healthcare standards? With regulations? Some of which are indeed meaningless.
But for now, let's simply enjoy this AI phase. Until we all move on to maybe a newer, cooler trend. Because sooner or later, it'll sound jaded to say 'AI startup'.
Maybe we'll start storing data physically in DNA instead of just on the cloud. Then we'll see startups and their investors gold-rushing towards gooey DNA-storage startups.
Then maybe I can invite the devil all over again.