How Will AI Impact the Future of Healthcare?
Artificial intelligence promises to be a disruptive force. In a 2023 report, experts suggested generative AI has the potential to increase global GDP by 7%, or nearly $7 trillion, over a 10-year period.1
In the healthcare industry, AI’s potential benefits include everything from help with administrative tasks to predicting health risks and enabling preventative care. But what will it take to make a possible AI-fueled sea change a reality?
There has been a lot of hype around AI over the past year, but we’re clearly in the initial stages of this disruption. To get some perspective on where we are now, I recently moderated a panel discussion with three experts on this subject:
- Daniel Barsky, Partner, Holland & Knight, a global law firm with the nation’s largest healthcare practice.
- Edmund Jackson, Ph.D., Co-founder and CEO, Unity AI, a start-up that applies AI technologies to help hospitals optimize patient flows.
- Will Smith, Partner, McChrystal Group, a global management consulting firm.
We discussed everything from the uncertain regulatory environment to whether all the hype around AI is justified. Following is a summary of our conversation.
Risks and limitations
The generative AI2 tools currently grabbing all the headlines, such as ChatGPT and Google Gemini, are built on large language models3, or LLMs. Unlike traditional AI, these tools are not trained on specific databases to follow specific rules. Instead, they synthesize and reproduce information from vast amounts of existing content. That means while generative AI can create new content, the results it provides may be based on information that’s outdated or of low quality. It’s also why these tools have been known to “hallucinate,” or make up information presented as factual. For these reasons, the panelists agreed that LLM-based AI tools are not currently suitable for tasks like patient care.
“If you unleash a generative AI model, your liability is going to be insane,” Barsky said. “These are tools that are best used as helpers in their current state.”
Ultimately, the usefulness of AI in the healthcare space will come down to input from the industry itself. “AI is about the data you put in, and the healthcare industry has a lot of data,” Barsky said. “When you put these tools in the hands of healthcare professionals, they can come up with hundreds of ideas of where this technology could be widely and usefully deployed. Everything from helping interpret medical imaging all the way down to creating a more customized patient experience. Maybe AI can generate a nice welcome letter, or summary notes to take home with information relevant to the patient’s specific needs.”
Getting started with AI
When it comes to implementing AI in a healthcare setting, Smith said a receptive workforce is the key to a successful rollout. “The number one thing we’ve observed across this industry is that workforce readiness is the thing senior leaders don’t think about early enough,” Smith said. “Are people open-minded? Are they fearful that they’re going to lose their jobs? Workforce readiness is the first and easiest thing to think about.”
Smith also noted that a successful AI pilot program likely won’t be driven by the IT department. If the people who need to embrace and leverage the technology aren’t the ones driving the project, you run the risk of user resistance.
“If it’s a sales-enabled tool or it’s a front-line caregiver who wants to use this technology, they need to be the ones leading the pilot project,” Smith said. “The closer you can get to the person who’s supposed to benefit from it, the better.”
Jackson recommended determining where you have opportunities for AI to improve efficiency. “Find places where people are doing a lot of busy work, and where you have opportunities to streamline and improve their work,” he said.
Barsky referred to a study on the impact of AI on call centers4. In centers that replaced humans with generative AI bots, the result was user frustration. But the results were different in call centers that used generative AI in collaboration with humans.
“The top-performing call center workers got a bit better,” Barsky said. “The bottom performers got much better. That delta between the top and bottom performers got a lot smaller, the cost of training employees went down, and customer satisfaction went up. Using it as a replacement is potentially going to be a massive liability and result in a lot of blowback from your customers, because you don’t have a way to guarantee accuracy. It’s a helper, not a replacement.”
When getting started, Smith also recommends testing the technology for internal use cases first. “If you want to do a pilot project, do it as far away from your customers as possible,” Smith said. “See how your colleagues react, then inch closer to your customers.”
Regulatory uncertainty
Given the stakes involved, including concerns about job losses, privacy and misinformation, most people agree that some kind of regulatory framework around AI in healthcare is necessary. But Barsky characterized the current environment as “a mess,” with several ongoing issues that require careful attention.
He noted that President Joe Biden’s recent executive order5 on AI will spark a wave of proposed regulations. At the same time, however, the U.S. Supreme Court is deliberating on whether to overturn a 40-year-old legal precedent known as the Chevron deference, under which courts defer to the expertise of federal administrative agencies to interpret ambiguous congressional statutes. If the court abolishes the Chevron deference, Barsky said all kinds of regulations could be rendered moot, leaving the interpretation of statutes to individual judges on a case-by-case basis.
“It would put us in a tenuous regulatory position,” Barsky said. “Regulation by case law would result in a lot of litigation. Florida, where I reside, has three [federal judicial] districts—Southern, Middle and Northern. If you’re in Miami, you could get one ruling on a statute. In Orlando, they could interpret it entirely differently, and in Jacksonville they could say we interpret it a third way. There are 94 federal judicial districts in the U.S. You could have that situation play out across the country.”
Given that healthcare is one of the most regulated industries, particularly when it comes to how data is used, Barsky said the industry is trying to figure out how to proceed.
Believe the hype?
The panelists characterized the current buzz around AI as both over- and underhyped. For Smith, what’s overhyped is the notion that using generative AI will offer immediate improvements in the workplace; he noted that we’re still about two years away from that scenario. What’s underhyped is the ability of AI to help forge great leaps in research and development, such as bench scientists using AI to automate tasks like identifying data trends or simulating complex scenarios. User acceptance, however, is the sticking point.
“The higher the education level, the more resistance we see,” Smith said. “We could have massive scientific breakthroughs sooner than we expect, but the actual users [in scientific research] are some of the slowest to adopt it.”
Barsky said that while LLM-based generative AI tools are often fun to use, they’re currently too inaccurate to replace humans in many tasks. Still, he acknowledged that AI in general has the potential to be a game-changer for the healthcare industry. “People have not yet truly understood the true power and fundamental shift that this is.”
2 https://research.ibm.com/blog/what-is-generative-AI
3 https://www.ibm.com/topics/large-language-models?mhsrc=ibmsearch_a&mhq=large%20language%20model
4 https://mitsloan.mit.edu/ideas-made-to-matter/workers-less-experience-gain-most-generative-ai
Ishtvan McGee
Managing Director, Healthcare Investment Banking, BMO Capital Markets