Blog Archives - CapeStart | https://www.capestart.com/category/resources/blog/ | Wed, 20 Dec 2023

AI: Revolutionizing Healthcare with Enhanced Diagnostics and Personalized Interventions | https://www.capestart.com/resources/blog/ai-revolutionizing-healthcare-with-enhanced-diagnostics-and-personalized-interventions/ | Fri, 03 Nov 2023


AI: Revolutionizing Healthcare with Enhanced Diagnostics and Personalized Interventions

Although the medical community has seen numerous advances in medical imaging and other diagnostic technologies, errors in diagnosis are still rampant. One recent National Library of Medicine (NLM) study estimated that diagnostic errors affect five percent of all U.S. outpatients and are responsible for up to 17 percent of all adverse events in hospitals.

Artificial intelligence tools, however, can help reverse that trend by augmenting the effectiveness of human clinicians and improving diagnostic accuracy. Big data and AI algorithms can even make health interventions more personalized and effective.

Let’s examine how in more detail below. 

What Promise Does AI Hold For Diagnostics and Personalized Interventions?

While AI systems have clear limitations, so do (often overworked) clinicians: Most physicians work between 40 and 60 hours per week, with one-quarter of U.S. doctors working up to 80 hours a week. Many nurses routinely work 16-plus-hour days.

Diagnostics

Given such an environment, it’s unsurprising that diagnostic mistakes can and do happen. The promise of AI is that it can help augment and scale the effectiveness of health workers when diagnosing complex ailments within such a busy environment. 

Predictive AI models can help doctors detect patterns from medical images such as CT and MRI scans and other data to predict or diagnose disease. And generative AI models trained on large medical datasets are increasingly being used to suggest diagnoses or alert physicians to trends the latter may not spot on their own.

Although not yet widespread in the healthcare industry, the steady improvement of generative models means their use will likely increase. A recent Stanford study found that OpenAI’s most recent GPT iteration, GPT-4, outperformed first- and second-year medical students on complex clinical care exam questions. The model also outperformed its predecessor, GPT-3.5, in the same exercise. 

OpenAI’s official position, however, is that neither model is fine-tuned enough to be used as a standalone diagnostic tool for complex diseases. Always visit your doctor – and don’t just log into ChatGPT – if you’re experiencing a potential medical issue. 

Personalized Health Interventions

Traditional “one-size-fits-all” health interventions – where the treatment approach is more or less the same for every patient – for complex diseases such as cancer have increasingly fallen out of favor as personalized or precision medicine (PM) approaches become more common. And AI has a lot to do with that.

Natural language processing (NLP), a type of machine learning (ML), can ingest and make sense of medical records, doctors’ notes, social media data, conversation transcripts, and other data points in seconds by rendering unstructured data (such as text) into numeric representations. It can examine these large datasets to detect relevant trends orders of magnitude faster than any human.
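As a toy illustration of rendering text into numeric representations and comparing them, here is a minimal sketch using raw word counts and cosine similarity. The example phrases are invented, and real NLP systems use learned embeddings rather than bag-of-words vectors, but the mechanics of "text becomes numbers, numbers get compared" are the same:

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Represent a text as a bag-of-words vector (word -> count)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

note = embed("patient reports chest pain and shortness of breath")
record = embed("chest pain with shortness of breath noted in patient history")
unrelated = embed("invoice for parking garage renewal")

print(cosine(note, record) > cosine(note, unrelated))  # True: related notes score higher
```

The same comparison scales to millions of documents, which is why machines can detect trends across datasets far faster than any human reader.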

While tailored medical treatment based on individual needs is certainly nothing new, precision health interventions based on multiple large data sources simply weren’t possible without modern big data and AI/ML tools. 

And indicators show the use of AI and NLP techniques is on the rise: A recent systematic review and meta-analysis of scientific papers on mental health interventions (MHI) showed that more than 50 percent of all MHI studies mentioning NLP-related techniques were published between 2020 and 2022. That suggests “a surge in NLP-based methods for MHI applications,” according to News Medical.

How AI Helps With Diagnosis

Diagnosing a relatively common ailment like a broken arm is pretty straightforward. But it’s much more difficult for more complex, rare, or less predictable conditions that can manifest differently in different individuals, such as multiple sclerosis (MS).

AI can help by analyzing thousands of images or other data points to spot anomalies or differentiate between healthy and diseased tissue. Because conditions such as heart disease and cancer change the physical behavior of tissue, experts can train algorithms to detect such differences – even if they’re so minute that the human eye can’t spot them.

“It’s that component of machine learning that allows us to identify disease before it can be spotted by trained physicians,” says Dr. Mark Traill, director of Medical Imaging AI Projects at the University of Michigan Health-West, in Elevance Health.

He explains that AI models can also help predict disease with surprising accuracy. “Already, we’re using AI risk algorithms to go deeper into a standard 3D mammogram and identify patterns that suggest a person is at risk of developing an aggressive breast cancer over the next 12 months,” he says, adding that these algorithms often outperform trained radiologists. 

Traditional screening mammograms miss around 20 percent of breast cancers, after all. And because missed diagnoses inevitably lead to delayed treatment, the results of such diagnostic mistakes can be catastrophic for patients. 

How AI Helps With Personalized/Precision Medicine

The millions of data points generated and collected by modern medical devices and monitoring apps have changed how physicians can manage an individual’s health. They’ve also changed how physicians deliver the proper health intervention at the correct time, according to Executives for Health Innovation.

As mentioned earlier, the concept of personalized interventions is nothing new. It dates back to the ancient Greek physician Hippocrates (4th century BCE), who evaluated data points such as patient age, physical appearance, and even the time of year when prescribing medicines.  

Modern personalized interventions use more sophisticated data from medical devices, electronic health records (EHRs), and genetic information for improved therapeutic targeting and other treatments. 

Health professionals can also use patient preference data around side effects and medication delivery to ensure better overall patient satisfaction.

And when dealing with complex diseases involving several different medications, such as epilepsy or MS, AI can determine which drugs may work best for which people. The traditional alternative is to conduct trial-and-error on actual patients, which carries inherent risk and can take months or even years to determine the best treatment.

The advent of big data analytics and AI has made harnessing insights from these datasets possible. 

Examples of AI-Enhanced Diagnostics

Plenty of examples exist that demonstrate the promise of AI-powered diagnostics. Here are just a few of its potential applications:

Lung cancer: A deep-learning algorithm outperformed six trained radiologists in detecting lung cancer tumors in a study of more than 42,000 low-dose computed tomography (LDCT) scans, producing fewer false positives and false negatives. 

Pancreatic cancer: A proof-of-concept study by Johns Hopkins’ Sidney Kimmel Comprehensive Cancer Center showed that the ML tool CompCyst outperformed current clinical management in identifying patients with a low risk of malignancy (60% vs. 19%).

Brain and central nervous system (CNS) cancer: Sturgeon, an AI model developed by researchers from the Oncode Institute, Center for Molecular Medicine, and other organizations, could accurately diagnose brain tumors within 40 minutes up to 90 percent of the time. Traditional tumor identification is often much more time-consuming and can be inaccurate.

Eye diseases: RETFound, a new foundation AI model for retinal images, can recognize signs of eye disease to help enhance diagnostic effectiveness. It is a self-supervised learning (SSL) masked-autoencoder-based foundation model trained on 1.6 million unlabeled retinal images.

Further studies have shown that multimodal AI models, or models that draw on more than one data type, can be even more effective for diagnosis than models that use images alone. 

For example, one multimodal model developed by University Hospital Aachen that uses clinical data combined with images to diagnose 25 different diseases showed a 77 percent accuracy rate. That’s compared to 70 percent for models that used only images and 72 percent for models that used only clinical data.   

Challenges For AI-Enhanced Diagnostics and Interventions

While AI shows excellent promise in diagnostics and personalized health interventions, several challenges remain. 

Critics often cite patient privacy issues to account for the lack of large amounts of training data for AI models in healthcare. Tools now exist to automate the removal of personal health information (PHI) from large datasets, but privacy in healthcare data remains an issue.

Additional roadblocks facing the widespread adoption of AI in diagnostics and health interventions include:

  • A lack of trust: It’s a big leap for both patients and physicians to put their faith entirely in the hands of technology when dealing with a life-or-death issue. That’s why even the most ardent AI advocates say physicians shouldn’t entirely rely on AI-based diagnosis, and instead should treat it as just another tool in the clinician’s toolbox.
  • A lack of time: Most physicians don’t have the time to implement another tool into their workflow. AI tools need to integrate with current workflows to spur widespread adoption, similar to how the Mayo Clinic embedded its new algorithm to spot atrial fibrillation within the clinic’s EHRs, which the doctors use anyway.
  • A lack of humanity: ChatGPT talks a great game. But AI models can’t compete with people regarding bedside manner, when making subtle observations of patient behavior, or when listening for gaps in a patient’s backstory. 
  • Data quality: Data quality issues in healthcare are often related to the privacy issue mentioned above since some datasets simply can’t be used due to privacy concerns. But other issues include generative AI hallucinations, which is when large language models (LLMs) produce inaccurate or even nonsensical outputs.

Conclusion

Despite these challenges, AI shows tremendous promise in helping to scale and improve diagnoses and personal interventions when combined with knowledgeable health professionals. 

AI and big data can improve the accuracy of diagnosing complex and rare diseases. These technologies can also help tailor health interventions – such as prescribing certain medicines to treat diseases – based on patterns in large amounts of personal health data that often go unrecognized.

But to implement these technologies effectively and responsibly, healthcare organizations need the right technology partner. CapeStart works with healthcare researchers and organizations worldwide to improve the quality of care and patient experience while lowering delivery costs through AI and data-driven healthcare.

Contact CapeStart today to schedule a one-on-one discovery call with our AI, ML, and data experts.


AI’s Groundbreaking Role in New Drug Target Discovery | https://www.capestart.com/resources/blog/ais-groundbreaking-role-in-new-drug-target-discovery/ | Fri, 06 Oct 2023

AI’s Groundbreaking Role in New Drug Target Discovery

Drug discovery and target identification is a notoriously expensive and slow process: Preclinical drug development (the research stage that precedes clinical trials) can sometimes take more than five years and cost billions of dollars. 

Even the experimental drugs that make it to clinical trials tend to flame out quickly. Around 90 percent of clinical drug development ultimately fails, costing companies between US$30 million and more than $300 million per clinical trial.

Failure during the preclinical or clinical trial phases is costly for drug companies and potentially disastrous for patients waiting for treatment.

But AI has begun to revolutionize the drug discovery and target identification process, helping pharma companies improve time to market while creating safer, more effective products for less money.

What is Target Identification in Drug Discovery?

Drug discovery is the process of identifying new medications, and it involves several steps, including molecular simulation, the prediction of drug properties, de novo drug design, and drug target identification.

The latter step, target identification, is the process of identifying the biological molecules and cellular pathways a drug can act on to realize therapeutic benefits. It helps researchers understand how drugs will interact with the body, determine appropriate dosing levels, and assess whether a drug could trigger an adverse reaction in patients.

How Does AI Help the Target Identification Process?

Despite the well-oiled clinical trial processes in place at most established pharmaceutical firms, even the most productive among them can only manage between three and a dozen clinical trials per year. 

Increased Velocity

Amgen’s Marissa Mock and colleagues, writing in Nature, report that injecting AI tools into drug development, including drug target identification, can help speed up this process.

Thanks to such AI techniques (along with other innovations, such as robotic workstations), the authors from Amgen say their company spends 60 percent less time on drug development up to the clinical trial stage than it did five years ago.

Developing Protein Therapeutics: A Comparison

Task | Manual (traditional) approach | AI approach
Screening to identify proteins that will bind to a desired target at the appropriate strength | 6 months | 3 months
Modifying proteins to have the right properties | 18 months | 6 months
Rate of successful production of a clinical trial drug candidate | ~50% | >90%

SOURCE: https://www.nature.com/articles/d41586-023-02896-9

Greater Efficiency

AI has already proven effective at improving efficiency across the entire drug discovery lifecycle, including target identification.

To improve drug targeting, AI models are trained on large datasets (including omics, phenotypic, expression, disease association, patient, and clinical trial data), which helps them understand how diseases work while identifying new proteins with specific therapeutic benefits.

More Accurate Predictions

When properly trained, these AI models can quickly recognize patterns in the amino acid sequence of a protein and other data to predict a drug candidate’s efficacy, safety, and ease of manufacture. The models can then predict the medicinal properties of a specific protein or design an augmented protein with more desirable properties, such as a longer shelf life.
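To make the idea of sequence-based prediction concrete, here is a deliberately tiny sketch: featurize an amino acid sequence by its overlapping k-mers and score it against labeled reference sequences. Everything here is hypothetical, including the sequences, the labels, and the shared-k-mer "model," which stands in for the far larger learned models the text describes:

```python
from collections import Counter

def kmer_features(seq, k=2):
    """Count overlapping k-mers in an amino-acid sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def similarity(a, b):
    """Shared k-mer count (a crude stand-in for a trained model's score)."""
    return sum(min(a[m], b[m]) for m in a)

# Hypothetical labeled examples: sequences with/without a desired property.
stable = kmer_features("MKTAYIAKQR")
unstable = kmer_features("GGGGSGGGGS")

def predict(seq):
    f = kmer_features(seq)
    return "stable" if similarity(f, stable) >= similarity(f, unstable) else "unstable"

print(predict("MKTAYIQKQR"))  # "stable": one point mutation away from the stable example
```

Real property-prediction models replace the k-mer counts with learned protein embeddings, but the pipeline shape (sequence in, predicted property out) is the same.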

AI models can also run deep analyses on which other processes within the body could be affected by a particular compound when attempting to target a specific pathway. They can even predict the molecular structure of targets in 3D, helping to accelerate drug design through more effective drug binding.  

Before AI, much of this painstaking work was performed manually and took much longer. 

Milestones in AI-Enabled Drug Discovery

Researchers have made much progress over the past few years in applying AI models to drug discovery and target identification. Here are some of the most important recent milestones in the quest for AI-enabled drug discovery:

  • Early 2020: Exscientia announces the first AI-designed drug molecule in human clinical trials
  • July 2021: Google DeepMind’s AlphaFold predicts the protein structures of 330,000 proteins (the AlphaFold Protein Structure Database now includes more than 200 million proteins) 
  • February 2022: Insilico Medicine begins Phase I clinical trials for the first AI-discovered molecule based on a novel drug target discovered by the company’s Pharma.AI platform
  • January 2023: AbSci creates and validates de novo antibodies in silico using generative AI, the first company ever to do so 
  • February 2023: The FDA grants an Orphan Drug Designation to Insilico Medicine’s AI-developed experimental therapy for idiopathic pulmonary fibrosis
  • September 2023: Deep Genomics launches BigRNA, an AI foundation model with “the potential to reshape the landscape of RNA therapeutic discovery”
  • September 2023: Researchers from the University of Cambridge and Insilico Medicine develop a new way to find novel drug targets for diseases caused by dysregulation of the protein phase separation process
  • September 2023: DeepMind releases its new AI tool, AlphaMissense, which researchers say can analyze missense variants to predict genetic diseases at 90 percent accuracy

The Future of AI-Enabled Drug Targeting

The industry’s enthusiasm for AI-enabled drug discovery and drug targeting is evident in the numbers: Morgan Stanley says “even modest improvements” in early-stage drug development could equate to 50 new therapies (representing approximately $50 billion) over the next decade. 

Indeed, third-party investment in AI drug discovery research hit $5.2 billion at the end of 2021 after doubling yearly for the previous five years.

However, one significant blocker to progress in this area is that individual drug companies sometimes can’t generate enough internal data to perform AI-enabled drug targeting on their own. That’s why some in the industry, including the authors from Amgen cited earlier, have proposed data collaboration among companies without compromising competitive information.

Suggested approaches in this regard include federated learning, a type of decentralized machine learning in which each participant trains the model locally on its own raw data and shares only the resulting model updates (improving data privacy among participants), rather than pooling data on central servers. 
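A minimal sketch of the federated averaging idea follows. Each "site" runs gradient steps on its own private data for a one-parameter least-squares model, and a server averages the resulting weights; the site data, learning rate, and single-parameter model are all invented for illustration, and real FedAvg also weights sites by sample count:

```python
# Each "site" trains on its own data and shares only model weights,
# never raw records. The server averages the weights (federated averaging).

def local_update(weights, data, lr=0.1):
    """Gradient-descent steps for y ~ w*x on one site's private data."""
    w = weights
    for x, y in data:
        w = w - lr * 2 * x * (w * x - y)   # gradient of (w*x - y)^2
    return w

def federated_round(global_w, sites):
    updates = [local_update(global_w, site) for site in sites]
    return sum(updates) / len(updates)     # server averages; raw data stays local

# Hypothetical sites whose private data roughly follow y = 3x.
sites = [[(1.0, 3.0), (2.0, 6.0)], [(1.0, 2.9)], [(3.0, 9.3)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(round(w, 1))  # converges near the shared slope of 3
```

No site ever sees another site's raw measurements, yet the shared model reflects all of them, which is the property that makes the approach attractive for competing drug developers.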

Implementing decentralized federated learning in the drug development pipeline can “improve developers’ predictive abilities, benefiting both the firms and the patients,” according to the authors of the Nature study quoted earlier.

This approach could yield even more accurate predictions when combined with active learning, a type of semi-supervised learning model that can determine what kind of training data it needs to improve. 
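Uncertainty sampling, one common active learning strategy, can be sketched in a few lines: the model requests labels for the examples it is least sure about. The stand-in "model" below simply treats its inputs as already-calibrated probabilities, a hypothetical simplification:

```python
def predict_proba(x):
    """Stand-in model: probability that x is 'positive' (hypothetical scores)."""
    return x  # assume inputs are already calibrated scores in [0, 1]

unlabeled = [0.05, 0.48, 0.51, 0.93, 0.62]

# Uncertainty sampling: request labels where the model is closest to 50/50.
def most_uncertain(pool, n=2):
    return sorted(pool, key=lambda x: abs(predict_proba(x) - 0.5))[:n]

print(most_uncertain(unlabeled))  # → [0.51, 0.48]
```

By spending its labeling budget on the borderline cases, the model improves fastest exactly where it is weakest.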

Either way, given the rapid advances of the past few years, it’s not difficult to envision a near future featuring fully automated drug discovery powered by AI.

CapeStart’s machine learning engineers and data scientists work with pharmaceutical and medical companies every day to get products to market faster, enhance internal efficiency, and improve health outcomes. Contact us today to learn more about how we can help accelerate your medical research. 


The Secret Weapon of Large Language Models: Vector Databases | https://www.capestart.com/resources/blog/the-secret-weapon-of-large-language-models-vector-databases/ | Fri, 08 Sep 2023

The Secret Weapon of Large Language Models: Vector Databases

The popularity of large language models (LLMs) such as ChatGPT and GPT-4 has taken the world by storm, with enterprises integrating generative AI models into business workflows and governments already taking steps to regulate the technology.

But how can LLMs so quickly provide rich and comprehensive answers to various prompts? Part of the answer lies in the existence of vector databases, which hold several performance advantages over traditional relational database management systems (RDBMS). 

What are Vector Databases?

RDBMSs are excellent for storing and retrieving structured data but have well-known limitations when it comes to unstructured data such as free text, video, and images – data types that are extremely common today. 

Vector databases are NoSQL databases with advanced indexing and search algorithms that can handle large amounts of complex unstructured data. They do this by storing information as vectors: arrays of numbers capable of representing an image, video, or text. 

“For instance, the word ‘bank’ might be represented as a 300-dimensional vector in a word embedding model,” writes The New Stack, “with each dimension capturing some aspect of the meaning or usage of ‘bank.’ The vector’s dimensions help us perform a quick semantic search that can easily differentiate between the phrases ‘river bank’ and ‘cash in the bank.’”

How do Vector Databases Work With LLMs?

LLMs also use vectors, transforming text into vector embeddings. These embeddings encode the semantic meaning and context of text, allowing LLMs to understand context and judge similarity when returning answers to prompts – including determining what a particular word means in its surrounding context. 

“The resulting embeddings encode various aspects of the data, allowing AI models to grasp intricate relationships, detect patterns, and uncover concealed structures,” explains Youssef Hosni of Level Up Coding. “In essence, embeddings serve as a bridge between raw data and the AI system’s ability to make sense of it all.”

Data scientist Simon Thompson describes a vector embedding as a “concept space” with several dimensions that provide context and measure similarity. Thanks to these dimensions, vector databases can define the relationship between multiple vectors by measuring the relative distance between them, with closer numbers indicating similarity and big differences indicating the opposite.  

The below image, for example, shows a vector embedding describing cartoon animals using just two dimensions: Color and size. Low numbers represent small, dull animals, and higher numbers represent larger, more colorful animals.

Large Language Models

The mouse (bottom left) is small and not very colorful, giving it very low size and color numbers, while the very colorful and large giraffe gets much higher numbers. 
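In code, that toy two-dimensional example looks like this. The specific numbers, and the added zebra, are illustrative rather than taken from any real model:

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Toy two-dimensional embeddings: (size, colorfulness), following the
# cartoon-animal example above.
embeddings = {
    "mouse":   (0.1, 0.2),
    "giraffe": (0.9, 0.8),
    "zebra":   (0.8, 0.3),
}

# Closer vectors mean more similar concepts.
print(dist(embeddings["giraffe"], embeddings["zebra"]))   # large, similar-sized animals
print(dist(embeddings["giraffe"], embeddings["mouse"]))   # farther apart
```

The giraffe sits closer to the zebra than to the mouse, which is exactly the "relative distance equals similarity" behavior the databases exploit.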

While the above is a straightforward representation of a vector embedding, real embeddings can get quite complicated: For most BERT models, embeddings run to 768 dimensions. Thompson illustrates the scale this way: Indexing a model with 1M vectors in a traditional database could take several years.

In a general sense, vector databases work with LLMs like this:

  • Text is converted into vectors and stored in a vector database
  • Text being searched due to a prompt is converted into vectors and compared for similarity
  • The vectors with the closest matches are selected
  • The vectors are converted back into natural language and returned to the user
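The store-and-search loop above can be sketched as a tiny in-memory store. The three-dimensional vectors and example phrases below are invented for illustration; production systems use learned embeddings with hundreds of dimensions and approximate-nearest-neighbor indexes instead of a full sort:

```python
from math import sqrt

class TinyVectorStore:
    """Minimal in-memory sketch of the store/embed/search loop."""
    def __init__(self):
        self.items = []            # (text, vector) pairs

    def add(self, text, vector):
        self.items.append((text, vector))

    def search(self, query_vec, k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = sqrt(sum(x * x for x in a))
            nb = sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        ranked = sorted(self.items, key=lambda it: cos(it[1], query_vec), reverse=True)
        return [text for text, _ in ranked[:k]]

store = TinyVectorStore()
store.add("river bank", (0.9, 0.1, 0.0))
store.add("cash in the bank", (0.1, 0.9, 0.2))
print(store.search((0.2, 0.8, 0.1)))  # a finance-flavored query lands on "cash in the bank"
```

The full sort over every stored vector is exactly the brute-force step that real vector databases replace with specialized indexes.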

When models or prompts have thousands of vectors with hundreds of dimensions, vector databases become necessary. The database connected to a LLM must be able to handle the heavy lifting of indexing and querying all those potential data combinations – and as it turns out, vector databases store data as high-dimensional vectors that allow for fast and efficient similarity searches. 

Along with indexing and searching vectors quickly and efficiently, vector databases also enhance the memory of LLMs, which often hallucinate when they don’t have enough information or context. By encoding and storing unstructured data as vectors, these databases enable semantic/vector search – searching data based on meaning instead of literal matches – dramatically improving performance. 

Are There Different Kinds of Vector Databases?

Facebook AI Similarity Search (Faiss), an indexing and similarity-search system open-sourced by Facebook (now Meta) in 2017, was one of the first widely used vector search engines. It’s still renowned for its suitability for high-speed natural language processing (NLP) tasks involving massive data volumes.  

But there are other vector databases out there, including:

  • Chroma: Open-source vector database able to handle multiple data types and formats, and can be deployed in the cloud or on-premises.
  • Pinecone: Cloud-based vector database ideal for real-time applications and large-scale machine learning (ML).
  • Weaviate: Open-source vector database that stores vectors and objects, making it ideal for combining vector- and keyword-based searches. 
  • Milvus: Compatible with ML frameworks such as PyTorch and TensorFlow, making it a popular choice among data scientists and ML engineers. 
  • DataStax: Its AstraDB vector database can integrate with the Apache Cassandra distributed database.
  • MongoDB: A data platform aimed at developers that offers vector search as a feature.
  • Vespa: Offers real-time analytics capabilities, along with high data availability and fault tolerance.

Power Up Your AI Innovation With CapeStart

Organizations of all kinds have discovered they can help scale their operations – and employee effectiveness – using LLMs such as ChatGPT and GPT-4, along with customized AI and ML technology and workflows. But not every company has AI expertise on hand, and integrating such advanced technology can often be a big challenge.

Contact us today to learn more about how CapeStart’s data scientists and ML experts can help put your organization on the road to greater efficiency with AI.


Revolutionizing Healthcare With Multimodal AI | https://www.capestart.com/resources/blog/revolutionizing-healthcare-with-multimodal-ai/ | Thu, 03 Aug 2023

Revolutionizing Healthcare With Multimodal AI

Humans rely on various data sources to make decisions, including the information we receive through sight, taste, touch, hearing, and smell. By combining the data we receive through these inputs, we can make complex decisions.

But imagine trying to make sense of our environment using just one of those data sources.

That’s the case with most AI and machine learning (ML) models, which use just one ML model trained on one type of data source to generate predictions and insights. Multimodal AI – which uses various data types and models for improved accuracy – is different. 

Let’s dive into what multimodal AI is, how it differs from other types of AI, and its implications in the healthcare industry. 

What is Multimodal AI?

There are two types of multimodal AI: Multimodal learning and combining models. Here’s a breakdown of each:

  • Multimodal learning combines information from more than one data source or type – such as text, images, audio, and video – to provide a richer, more comprehensive, and more effective learning model. Multimodal learning has applications in healthcare, autonomous cars, speech recognition, emotion recognition, and other areas.
  • Combining models involves bringing together more than one ML model to improve overall model performance. All ML models have strengths and weaknesses that can be overcome by combining models, thus improving accuracy. 

In this post, we’ll focus primarily on multimodal learning, which fuses disparate data and uses unique unimodal neural networks on each input type, such as convolutional neural networks for images and recurrent neural networks for text. 

These models extract features from the data using unimodal encoders to process each data type individually, then use a fusion network (using techniques such as cross-modal interactions or concatenation) to integrate these features into a unified representation. A classifier then uses the unified representation to make task-specific predictions, classifications, or decisions.
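That encode-fuse-classify pipeline can be sketched roughly as follows. The random-projection "encoders," dimensions, and inputs below are stand-ins for trained convolutional and recurrent networks, not real models:

```python
import random

random.seed(0)

def encoder(dim_in, dim_out):
    """Stand-in unimodal encoder: a fixed random linear projection."""
    w = [[random.uniform(-1, 1) for _ in range(dim_in)] for _ in range(dim_out)]
    return lambda x: [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

image_encoder = encoder(4, 3)   # e.g., pixel features  -> 3-dim representation
text_encoder = encoder(5, 3)    # e.g., word counts     -> 3-dim representation

def fuse(image_feats, text_feats):
    """Concatenation fusion: join the unimodal representations."""
    return image_encoder(image_feats) + text_encoder(text_feats)

def classify(fused, weights, bias=0.0):
    """Linear classifier head on the unified representation."""
    score = sum(w * f for w, f in zip(weights, fused)) + bias
    return 1 if score > 0 else 0

fused = fuse([0.2, 0.5, 0.1, 0.9], [1.0, 0.0, 2.0, 0.0, 1.0])
print(len(fused))  # 6: two 3-dim unimodal representations, concatenated
```

In a trained system the encoder and classifier weights are learned jointly, so the fused representation captures cross-modal interactions rather than two independent summaries.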

Why is Multimodal Learning Important?

The main argument for multimodal AI is the heterogeneity of healthcare data, which has long been an obstacle for single-modality AI and ML applications, along with a desire to create more accurate models. 

At the same time, multimodal learning has become possible thanks to the increasing availability of disparate biomedical data, such as electronic health records (EHRs), medical images, and data from large biobanks (including the U.K. Biobank, the U.S. Million Veteran Program, Biobank Japan, and the China Kadoorie Biobank). 

Multimodal AI has several distinct benefits, including improved accuracy, better problem-solving capabilities, and increased ability to handle more complex tasks. Multimodal models are also more versatile since they can handle a wider variety of data and more robust since they aren’t reliant on just one data type (as is the case in unimodal models). 

What Are Some Multimodal Techniques?

There are several different techniques inherent in both multimodal learning and combining models. 

Multimodal learning techniques include:

  • Fusion-based approach: Encodes various data types into a common representation
  • Alignment-based approach: Aligns the data types to enable a direct comparison
  • Late fusion: Combines the predictions of models separately trained on each data type

Combining model techniques include:

  • Ensemble models: Combine the outputs of multiple base models into one overall model
  • Stacking: Uses the outputs of multiple models as inputs to another model
  • Bagging: Averages the predictions of several base models trained on different data
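Late fusion, the simplest of these techniques, can be sketched in a few lines: average the predictions of models trained separately on each modality. The per-modality probabilities below are hypothetical:

```python
# Late fusion: average the predictions of separately trained per-modality models.
image_model_p = 0.80   # P(disease) from a hypothetical imaging model
ehr_model_p = 0.60     # P(disease) from a hypothetical clinical-records model
text_model_p = 0.70    # P(disease) from a hypothetical clinician-notes model

fused_p = (image_model_p + ehr_model_p + text_model_p) / 3
prediction = "disease" if fused_p >= 0.5 else "healthy"
print(round(fused_p, 2), prediction)  # 0.7 disease
```

Because each model errs differently, the averaged score is typically more robust than any single modality's prediction, which is the core appeal of ensembling.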

How Are Multimodal Models Used? 

Multimodal AI has already shown itself capable in a number of applications, including:

  • Internet of things (IoT) and smart cities
  • Image, speech, and pattern recognition
  • Cybersecurity
  • Natural language processing (NLP)
  • E-commerce
  • Robotics
  • Education
  • Sustainable agriculture

But one of the most exciting fields for multimodal AI is that of healthcare, where it can integrate data from multiple sources to create a more accurate diagnosis. Take medical imaging, for example, where a multimodal AI system can integrate data from various image types (MRI, CT, PET) to improve the accuracy of diagnosis and proposed treatment.

A March 2023 study indicated that there are approximately 130 applications of multimodal AI in healthcare, with the most prevalent areas being cancer and neurology. The technology has shown promise in several areas of healthcare, including:

  • Cardiovascular
  • Gastrointestinal
  • Pediatric
  • Respiratory
  • Musculoskeletal
  • Urogenital
  • Psychiatric
  • Ocular
  • Endocrine
  • Nephrology
  • Autoimmune
  • Infectious diseases

Other potential applications of multimodal AI in healthcare include the development of personalized “omics” for precision health, digital clinical trials, remote monitoring, and pandemic surveillance and outbreak detection. 

Multimodal models have shown great promise in several areas of healthcare, including diagnosing and treating cardiovascular diseases. In one study, researchers developed a multimodal data fusion AI model using a convolutional neural network that could predict hypertension with an accuracy of around 94 percent.

Another study used a multimodal data fusion model to predict hospital readmission rates among patients who had suffered heart failure, achieving an accuracy of more than 75 percent.

And a separate research team developed a multimodal large-scale model framework called Stone Needle, which integrates an array of data sources such as text, video, audio, and images and can be tailored to perform specific healthcare tasks.

“The fusion of different modalities and the ability to process complex medical information in Stone Needle benefits accurate diagnosis, treatment recommendations, and patient care,” the authors write, adding that the model consistently outperformed other methods such as GPT-4, LLaMA-7B, Visual ChatGPT, and LLaVA. “By effectively integrating multiple modalities and specifically addressing the needs of healthcare applications, Stone Needle can provide healthcare professionals with valuable insights and improve patient care.”



3 Major Bottlenecks of AI in Healthcare – https://www.capestart.com/resources/blog/3-major-bottlenecks-of-ai-in-healthcare/ – Mon, 10 Jul 2023

3 Major Bottlenecks of AI in Healthcare

In 2016, neural network pioneer and Turing Award winner Geoff Hinton made a bold prediction: “We should stop training radiologists now,” he said. “It is just completely obvious deep learning is going to do better than radiologists.”

Fast forward nearly a decade, and you’ll notice that while AI and machine learning (ML) models have made strides in image-based diagnosis and other medical tasks, radiologists haven’t yet gone anywhere. 

It’s a similar situation across the healthcare industry, where AI hasn’t had the paradigm-busting impact initially predicted. At least, not yet.

Just have a look at the share of U.S. job postings that require AI-related skills in the chart (that's healthcare close to the bottom).

But even though companies such as Pera Labs, HyberAspect, NeuraLight, Protai, and others have made noise in the healthcare AI space, a series of bottlenecks have made full-on implementation by large hospitals and medical systems extremely difficult.

Indeed, implementing human-centric AI, which can improve patient satisfaction and hospital efficiency, is critical to its widespread adoption in healthcare. But several significant roadblocks stand in its way. Here are the most important.

Data Issues and AI Bias

AI models are useless without being fed high-quality data, which doesn’t happen nearly enough in healthcare. Despite the massive amounts of data produced by the healthcare space – around 30 percent of all the world’s data – data quality issues have plagued the sector for years and have harmed the clinical implementation of AI.

Part of this is due to the massive data surface area that must be probed for relevant information:

  • Medical databases containing peer-reviewed literature and studies such as PubMed, EMBASE, Cochrane Library, and MEDLINE
  • Insurance databases
  • Medical imaging databases
  • Electronic health records (EHR) and electronic medical records (EMR)
  • Postmarket surveillance

Much of this information is siloed in different repositories, often making healthcare data difficult to access and collect. Busy medical professionals often view data collection as an inconvenience. Collected clinical data can be incomplete or contain errors. And EHR/EMR systems are often incompatible across various providers, resulting in localized data that are difficult to integrate.

Additionally, data privacy considerations around the presence of personally identifying information (PII) and protected health information (PHI) add another challenge. Companies and healthcare systems need to be sure they're on the right side of the Health Insurance Portability and Accountability Act (HIPAA) and other regulations before using healthcare data.

These are all big problems for AI, which requires large, high-quality datasets to provide accurate results. The likely outcome of this lack of data is AI bias, such as racial or gender disparities, stemming from a dearth of relevant data or even the inherent biases of those who built the AI model.

“In many cases,” writes AI reporter Kyle Wiggers in VentureBeat, “relevant datasets simply don’t exist, and there is a challenge of either learning from a small number of examples or leveraging AI to construct a good proxy for these datasets.”

Explainability and User Trust

Hand-in-hand with data issues and bias is the danger of black box-style AI models that make it difficult or impossible to understand how they generate specific predictions. Not only does a lack of understanding undermine trust in AI models, but it’s also dangerous because technicians may not discover flaws in the models until well after deployment.

It’s an especially important issue given the prevalence of bias we mentioned above, along with AI hallucinations, which are confident responses by AI systems such as large language models (LLMs) that sound plausible but are completely wrong.

Explainable AI (XAI) can help in this regard. XAI includes technologies and processes that help users interpret AI algorithms and how they work, along with explaining the rationale for making significant recommendations (such as major surgery). XAI provides this rationale in natural language, making it easier for clinicians and patients to understand and trust the models.  
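
One widely used, model-agnostic XAI technique is permutation importance: shuffle a single input feature and measure how much the model's accuracy drops. The sketch below uses a hypothetical rule-based classifier and made-up patient rows; only the shuffling idea, not the model, is the real method:

```python
import random

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Drop in accuracy when one feature column is shuffled --
    a simple, model-agnostic explainability signal."""
    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    rng = random.Random(seed)
    column = [row[feature_idx] for row in X]
    rng.shuffle(column)  # break the feature's relationship to the labels
    shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                for row, v in zip(X, column)]
    return baseline - accuracy(shuffled)

def model(row):
    # Hypothetical classifier: flag hypertension when systolic BP (feature 0) > 140.
    return int(row[0] > 140)

X = [[150, 1], [120, 0], [160, 1], [110, 0]]  # made-up patient rows
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y, feature_idx=1))  # 0.0 -- feature 1 is unused
```

A large drop means the model leans heavily on that feature; a near-zero drop, as for feature 1 here, means the feature plays no role in the prediction, which is exactly the kind of rationale a clinician can sanity-check.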

Compliance and Regulations

We already mentioned issues around privacy regulations when training AI models for healthcare, which can be a major – but necessary – barrier to more widespread AI adoption. The sheer sensitivity around health data and privacy makes using real health data to train AI models extremely difficult.

But other regulatory hurdles also stonewall AI adoption. The arcane regulatory approval process for new medical technologies is time-consuming, and many companies take years to navigate it successfully. And especially in the U.S. – known as the most litigious society in the world – liability concerns often also play a role in new technology adoption.

Researchers from the Brookings Institution, a U.S.-based think tank, add that developing complementary technologies or processes can help improve explainability, build trust, and facilitate greater AI adoption. This can include innovations in:

  • The ownership of health data and who can use it
  • Approval processes for new medical devices and software
  • Algorithmic transparency, data collection, and regulation
  • The development of clinical trial standards for AI systems
  • Liability rules involving medical providers and developers of new technology

CapeStart and Healthcare AI

CapeStart’s AI, machine learning (ML), and natural language processing (NLP) experts work with some of the world’s largest drug, biologics, and medical device manufacturers, as well as healthcare systems, to facilitate the safe and responsible adoption of the technology.

From improving the speed and efficiency of systematic literature reviews, pharmacovigilance, and clinical evaluation reports, to providing image and data annotation for AI models that use computer vision, CapeStart can help remove bottlenecks and improve the efficiency of your next project.



Human-Centric AI: The Interactions Between Humans and AI in Healthcare – https://www.capestart.com/resources/blog/the-interactions-between-humans-and-ai-in-healthcare/ – Tue, 06 Jun 2023

Human-Centric AI: The Interactions Between Humans and AI in Healthcare

Healthcare systems across the world are under stress. The grinding nature of the pandemic put many hospitals on the back foot, and the aging population’s growing need for care has led to healthcare staff and physician shortages across Europe, the U.S., Canada, and beyond.

Artificial intelligence (AI)-based tools were designed, in part, to help make healthcare staff more efficient – making them an ideal tool for the age of worker shortages. 

But up to now a remarkably small number of AI applications have made it from the research lab into clinical practice. To a large degree, that’s because many AI applications for healthcare weren’t designed with human requirements in mind.

The Main Bottlenecks of AI Adoption in Healthcare

Many hospitals have already dipped their toes into the AI implementation pool through predictive analytics (to analyze and improve spending, patient flow, and other indicators) and robotic surgery.

But we’re still a long way away from having ubiquitous AI systems throughout the healthcare workflow. A recent study in the British Medical Journal concluded that of more than 200 AI prediction tools developed for healthcare, only two demonstrated real usefulness in guiding clinical decisions – and many were deemed capable of doing more harm than good. 

Why? 

Poor Quality Data

The healthcare industry generates around 30 percent of the world’s data volume, and healthcare data is projected to grow at a compound annual rate of about 36 percent through 2025 – a full 10 percent faster than in financial services.

But it’s increasingly acknowledged that healthcare data quality issues have had a significant effect not just on the healthcare system itself – through inaccurate diagnoses and delayed treatments – but also on the development of AI applications for the industry.

After all, AI and machine learning (ML) applications are generally only as good as the data upon which they are trained. “If the datasets are bad,” explains Harvard Business Review, “the algorithms make poor decisions.” 

Interoperability

Data quality and other issues can cause potential interoperability issues when deploying algorithms and AI models at different hospitals with different infrastructures, physicians, patients, and data. Many systems developed within the clinical confines of a single hospital often face extreme difficulty when deployed at a new facility.

Bias and Discrimination

Civil liberties groups have often argued that AI models, if not designed with the utmost care, can perpetuate discrimination and racism and even make them worse. 

Studies have shown that AI systems are capable of various kinds of bias based on their training materials. These include making predictions based on the brand of medical device used, or the model concluding that people lying down for an X-ray are more likely to develop serious Covid illness (in that study, the most seriously ill patients’ images were taken lying down).
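
One simple way to audit for this kind of bias is to compare a model's positive-prediction rates across patient subgroups. The sketch below uses made-up predictions and the commonly cited four-fifths (0.8) disparate-impact threshold; it is illustrative only, not a substitute for a full fairness review:

```python
def disparate_impact(predictions, groups, group_a, group_b):
    """Ratio of positive-prediction rates between two patient subgroups.
    Under the four-fifths rule, ratios below 0.8 flag potential bias."""
    def positive_rate(g):
        subgroup = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(subgroup) / len(subgroup)
    return positive_rate(group_a) / positive_rate(group_b)

# Made-up follow-up-care predictions (1 = flagged for follow-up) for two groups.
preds = [1, 0, 0, 0, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact(preds, groups, "a", "b")
print(round(ratio, 2))   # 0.33
print(ratio >= 0.8)      # False -- fails the four-fifths check
```

Checks like this are cheap to run on every retrained model, which is one reason researchers push for bias audits to be a routine part of the deployment pipeline rather than a one-off exercise.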

Researchers say that to avoid any potential bias, AI models must be designed and trained using a diverse group of stakeholders, keeping in mind human-centered AI principles (which we’ll explain further below).

Poor Model Design

A common pitfall of new technology products the world over is a potential lack of focus or direction. Technology developers often become so enamored with their technology that they forget the problem they’re trying to solve, and the same is true for some AI models in healthcare. An AI model designed to solve a poorly defined problem, or that doesn’t consider the needs of physicians and other healthcare workers, is usually doomed to failure. 

At the same time, AI models designed and tightly focused on solving pain points at any stage in a specific healthcare workflow can provide significant value. 

Workflow Fit and Lack of Trust

Workflow fit and trust issues are probably the biggest stumbling block. It’s no secret that most physicians have tight schedules and are creatures of habit who rely on well-documented routines formed through the crucible of years of experience in healthcare settings. 

Add that most healthcare professionals are often required to make quick decisions in life-or-death situations, and it’s easy to see why they’ll reject a new tool that doesn’t fit seamlessly into their workflow or causes complexity issues. 

New tools need to earn healthcare workers’ trust through accuracy and reliability – not an easy proposition, especially if an AI tool provides recommendations that are wrong or don’t make sense, or if there’s no way of knowing how the model came to its conclusion. 

At the same time, some health workers may trust AI models too much, leading to wrong diagnoses – or worse. 

“Undertrust occurs when trust falls short of the AI’s actual capabilities. Several studies have shown how radiologists overlooked small cancers identified by the AI,” writes Sean Carney, former Chief Experience Design Officer at Royal Philips. “On the other extreme, overtrust leads to over-reliance on AI, causing radiologists to miss cancers that the AI failed to identify.”

You, Your Physician, and Your AI Agent

One area where AI has already shown its value is in healthcare research, from pharmacovigilance to the creation of systematic literature reviews (SLRs) and clinical evaluation reports (CERs). But clinical applications of AI are less common for now. 

Developers of healthcare AI systems have realized that it’s not about replacing humans. Rather, it’s about finding the right applications that provide the most value and that augment the work performed by human healthcare professionals. 

A hybrid human-AI approach combines the value of AI in finding and exploiting efficiencies while keeping healthcare an empathetic, personalized, and human-centric experience for patients. Empathy is a major driver in positive healthcare experiences and outcomes. 

That’s a big reason why human-centered AI – medical AI models driven by what is “humanly desirable” from the perspectives of various stakeholders – has become a hot topic in healthcare. Indeed, instead of simply evaluating models by accuracy, developers must also consider the clinical context in which these models are deployed – including the empathetic, intellectual, and emotional elements often present in healthcare situations. 

That’s why the healthcare team of the future is likely a complementary relationship between your physician and an AI agent. AI agents can help scale the effectiveness of human physicians by automating menial tasks such as writing letters. Chatbots can help evaluate symptoms and triage patients. And models designed to scan medical images can spot anomalies much faster than humans.  

At the same time, those models need a steady pair of human hands on the wheel so they don’t go into the ditch. Viewed in this way, a future AI model can act as a physician’s co-worker by offering evidence-based suggestions. 

Human-Centric AI Can Improve Healthcare

The benefits of using AI in healthcare settings are clear, but it’s not a magic pill able to replace all healthcare workers – and that’s a good thing. Experts agree AI is at its best when combined with human interaction and empathy: AI-driven automation can significantly lower the day-to-day menial task burden on healthcare workers while not removing the human-centric approach provided by a live nurse or doctor.  

In such a situation, everyone wins:

  • Healthcare systems, which can see more patients while shrinking waiting lists.
  • Healthcare workers, who have fewer menial tasks to perform.
  • Patients, who can get diagnosed and treated faster and more efficiently.

CapeStart’s AI and machine learning engineers, data scientists, and data annotation specialists work daily with leading healthcare organizations to improve efficiencies and health outcomes. 

Get in touch with us today to learn what we can do for your healthcare or health research organization.



The Latest AI Breakthrough: Agent-Oriented Programming – https://www.capestart.com/resources/blog/the-latest-ai-breakthrough-agent-oriented-programming/ – Fri, 05 May 2023

The Latest AI Breakthrough: Agent-Oriented Programming

ChatGPT and Microsoft’s AI-powered Bing search engine took the world by storm earlier this year, with fascinated observers feeding endless prompts into the models – sometimes with decidedly weird results

But the pace of modern change is relentless – even over the space of just a few months – and a new breed of agent-oriented AI models appears poised to upend everything we thought we knew about the power of AI.

What are agent-oriented AI models, you ask? Let’s find out.

What is Agent-Oriented Programming?

Conversational and chat-based large language models (LLMs) such as ChatGPT rely entirely on human inputs – also known as prompts – to generate text or images, translate languages, and even drive cars.

While these models have the advantage of improving their results as they are fed more data, these human inputs can be difficult and time-consuming to refine enough that the model performs to expectations. They can also only perform one task at a time. 

That’s why the latest round of AI innovation driven by agent-oriented programming focuses on scalable models that can solve complex problems without prompting and even facilitate autonomous cooperation between agents. These models can chain tasks together in a continuous loop to solve an overarching problem without requiring human intervention.

The idea is that users only need to ask these autonomous models once to solve a complex problem, instead of ChatGPT needing continuous manual prompts and a human to chain its answers together.

According to some observers, models driven by agent-oriented programming are the next step to developing strong AI – or AI that essentially thinks through complex problems no differently than the human brain. 

“Strong AI or artificial general intelligence (AGI) is a generalized AI that is theoretically capable of carrying out many different types of tasks,” explains AutoGPT.net, “even ones it wasn’t originally created to carry out, much the same way as a naturally intelligent entity (such as a human) can.”

Which Agent-Oriented AI Systems Exist?

Various autonomous AI models already exist, including AutoGPT, Baby AGI, CAMEL, and HuggingGPT.

AutoGPT and Baby AGI must be used via a command-line interface. The others work through a web browser (HuggingGPT can support either).

Each model has its strengths, weaknesses, and preferred use cases. We’ll outline a few of them below.

AutoGPT

AutoGPT, probably the best known of the bunch, is driven by OpenAI’s GPT-4 and can chain together LLM “thoughts” to achieve its assigned goal. The model can break large tasks into smaller subtasks and use the internet and other connected data sources to solve problems.

Indeed, instead of simply answering manual commands one by one, AutoGPT can automatically assign itself new objectives, generate prompts and responses, and revise prompts in response to learned information. It can autonomously search the web or perform API interactions, and some observers say AutoGPT may even be able to improve its own source code.

Baby AGI

Baby AGI is a Python-based task management system that leverages GPT-4 (for its natural language processing (NLP) capabilities), Pinecone vector search (to store results), and the LangChain framework (for decision-making) to create, organize, prioritize, and execute tasks. Baby AGI is able to create its own tasks to achieve predefined objectives, often based on the outcome of previous tasks.


How the Baby AGI system works. (Source: https://www.kdnuggets.com/2023/04/baby-agi-birth-fully-autonomous-ai.html)

Running in an infinite loop, the system does all this through just four steps:

  1. The highest-priority task is selected from the task list
  2. The model’s execution agent receives and completes the task (based on context learned via the OpenAI API)
  3. Results are then stored in Pinecone
  4. New tasks are then created and prioritized based on the outcomes of steps 1-3 and the overall objective
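
The four-step loop above can be sketched in a few lines of Python. The execution agent, result store, and prioritizer below are trivial stand-ins for the GPT-4, Pinecone, and LangChain components the real system uses, and the loop is capped rather than infinite:

```python
# Toy version of the Baby AGI loop: a prioritized task list, a stand-in
# execution agent, a result store, and a task generator.

def run_agent(objective, tasks, max_steps=3):
    results = []  # stand-in for the Pinecone result store
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.pop(0)                  # 1. take the highest-priority task
        outcome = f"done: {task}"            # 2. stand-in execution agent
        results.append(outcome)              # 3. store the result
        new_task = f"follow up on '{task}'"  # 4. create and prioritize new tasks
        tasks.append(new_task)
        tasks.sort(key=len)                  # crude priority: shortest task first
    return results

out = run_agent("research a topic", ["gather sources"])
print(len(out))  # 3 -- the loop ran until the max_steps cap
```

The real system replaces each stand-in with an LLM call or vector-database query, but the control flow – pop, execute, store, generate – is the same.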

CAMEL

CAMEL, short for the catchy title of Communicative Agents for “Mind” Exploration of Large Scale Language Model Society, uses a novel communicative agent framework to create role-playing situations among AI agents. Agents and assistants in the system can be assigned a particular role (such as stock trader, accountant, doctor, or actor), given a preliminary idea, and then asked to discuss the topic.

Users can deploy AI agents to discuss the topic in real-time, then watch them collaborate and (hopefully) solve the problem at hand.

CapeStart: Your AI and NLP partner

Whether your business has already implemented AI solutions into your workflows or you’re an AI novice, CapeStart’s teams of AI, NLP, and machine learning experts can help you scale your innovation through cutting-edge automation.

We deliver accurate data annotation for machine learning models, software development for AI, and custom and pre-built machine learning models to suit virtually any use case. Contact us today to set up a brief one-on-one discovery call.



Improving Patient Satisfaction by Automating Patient Experience With NLP and CV – https://www.capestart.com/resources/blog/improving-patient-satisfaction-by-automating-patient-experience-with-nlp-and-cv/ – Tue, 04 Apr 2023

Improving Patient Satisfaction by Automating Patient Experience With NLP and CV

While 91 percent of healthcare executives revealed in a recent survey that improving the patient experience is their No. 1 priority, many health systems unfortunately still fall woefully short.

Indeed, another poll found that nearly half of more than 2,000 respondents had experienced difficulties scheduling appointments with their healthcare providers, and another one-quarter had suffered treatment delays.

Those aren’t great numbers when it comes to maintaining – let alone improving – the patient experience, which is why health systems across the board have begun seriously investing in automation.

Push Factors Leading to Healthcare Automation

Healthcare systems and hospitals have traditionally employed legions of administrative and data entry staff to process the immense amount of paperwork required during day-to-day workflows. This paperwork includes preparing, collating, quality-assuring, correcting, scanning, and transcribing documents and other materials, as required.

Indeed, many hospital staff spend their days drowning in paper forms, each of which requires manual actions. That’s not counting the reconciliation of data among health organizations, providers, and payers; the continued (and extremely outdated) use of fax machines for referrals; and call centers tasked with making countless outbound patient engagement calls.

All that manual work adds up: Up to 33 cents of each healthcare dollar spent goes to such back-office operations. And the CAQH Index, a third-party source for tracking health plan and provider adoption of electronic transactions, estimates the industry cost of administrative transactions at $39 billion.

The Index adds that fully automating even some of these transactions could save more than $16 billion annually.

But it’s not just the allure of greater efficiency and cost savings pushing healthcare systems toward greater automation: Data security and Health Insurance Portability and Accountability Act (HIPAA) compliance also play a role.

After all, manually transferring so much protected health information (PHI) between physicians, nurses, and admin staff means a greater chance of a mistake and potential non-compliance – or, worse, a data breach.

Patients, as well, have come to expect a greater level of service from their healthcare providers. Nearly 50 percent of patients have recently expressed dissatisfaction with their level of healthcare service in the U.S.

How Can Automation Improve the Patient Experience?

The good news is that workflow automation holds tremendous potential to reduce some of these pressures, improve service, and safeguard sensitive personal information by streamlining workflows.

Healthcare providers have taken note: A recent Deloitte survey of executives across 25 health systems indicated that 92 percent hope to improve the patient experience through digital transformation.

Healthcare automation is largely driven by AI and machine learning (ML) – specifically, natural language processing (NLP) to enable machines to read and process text, and computer vision (CV) for image recognition.

These and other technologies have already proven adept at tackling some of the healthcare system’s low-hanging fruit that consumes an outsized amount of staff time. This includes patient onboarding, appointment scheduling, claims processing, report generation, prescription management, discharge instructions, and account settlement.

Here are a few other ways automation can help improve the patient experience:

  • Patient admissions: Admitting patients is one of the most paperwork-intensive aspects of healthcare and includes countless personal information forms, consent forms, and other documents. These documents are often copied, scanned, faxed, and emailed, a tedious process that also puts health data at risk. NLP models combined with strict access and authorization controls can automate this process, making it faster, more efficient, and less prone to mistakes.
  • Appointment scheduling and patient queries: Chatbots and virtual assistants can be integrated with NLP to provide personalized and real-time responses to patients’ queries. They can help with appointment scheduling through intelligent triage (to ensure the patient sees the right provider), along with answering frequently asked questions.
  • Understanding the patient journey: The first step to improving the patient journey is to understand its many stages, from appointment booking to post-care follow-up. By mapping out the patient journey, healthcare systems and hospitals can better identify areas where automation could enhance patient experience.
  • Medical image annotation and analysis: CV can speed up the process of anomaly detection in medical images, leading to faster diagnoses, triage, and, ultimately, speedier treatment and better outcomes.
  • Patient communications: Hospitals can use NLP to personalize patient communications, such as appointment reminders, follow-up messages, and educational materials, improving patient engagement and satisfaction. Automation can also improve post-hospitalization outreach and data collection from patients after they leave the hospital (if necessary).
  • Telemedicine: CV can also be used to improve the patient experience. For example, a patient’s facial expression can be analyzed during a telemedicine session to identify signs of discomfort or pain. This kind of human movement analysis can help healthcare providers provide better care and treatment.
  • Patient discharge: Most discharged patients receive some kind of exit package that can include discharge, follow-up, or prescription instructions; results and diagnoses; and a hospitalization summary. Automation with security controls can accelerate the production of these packages and ensure PHI isn’t accidentally misdirected or stolen.
  • Other administrative tasks: Automation can also streamline administrative tasks, such as insurance verification and billing. By automating these tasks, healthcare providers can focus more on patient care, improving the overall patient experience.
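
To illustrate the intelligent-triage idea from the scheduling bullet above, here is a toy keyword-based router. A production system would use a trained NLP classifier rather than keyword matching, and the departments and phrases below are hypothetical:

```python
# Toy symptom triage: route a free-text patient query to a department.
# Real deployments would use a trained NLP model, not keyword lookup.

ROUTES = {
    "cardiology":  ["chest pain", "palpitations", "shortness of breath"],
    "dermatology": ["rash", "itching", "mole"],
    "orthopedics": ["knee", "back pain", "fracture"],
}

def triage(message, default="general practice"):
    text = message.lower()
    for department, keywords in ROUTES.items():
        if any(kw in text for kw in keywords):
            return department
    return default  # no match: route to a human or a general queue

print(triage("I've had chest pain since this morning"))  # cardiology
print(triage("My ankle is swollen"))                     # general practice
```

Even this crude version shows the design principle: the automation handles the routine routing decision, while anything it cannot confidently classify falls through to a human-staffed default queue.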

TeleVox Healthcare’s Vik Krishnan argues that any automated healthcare communication must abide by the following best practices:

  1. All activities must be integrated with current electronic health record (EHR) systems to mitigate the amount of ramp-up time required of physicians, nurses, and admin staff
  2. Closed-loop service, which enables patients to book appointments and perform other tasks digitally, is now a requirement
  3. All messaging and communications to patients must be relevant and meaningful – especially when it originates from a technology source. Otherwise, patients may feel like they’re being spammed

CapeStart: The Healthcare Automation Experts

CapeStart’s NLP, CV, and ML experts work with healthcare researchers, systems, and hospitals every day to improve efficiency and help scale their activities, so our clients can do what they do best – provide industry-leading (and sometimes life-saving) healthcare.

Contact us today to learn how our NLP and CV solutions can help your journey to digital transformation.



How Does AI-based Supply Chain Optimization Help Pharma Companies Save Money? – https://www.capestart.com/resources/blog/how-ai-can-help-improve-the-pharma-supply-chain/ – Wed, 08 Mar 2023

How Does AI-based Supply Chain Optimization Help Pharma Companies Save Money?

The pharmaceutical supply chain is incredibly complex, stretching from the sourcing and supply of raw materials to the manufacturing and distribution of highly sensitive products that can be rendered inactive without the right environmental conditions.

This complexity – and the dependence of governments and populations on the pharmaceutical industry’s life-saving products – are why pharma supply chains are so important. 

But they’re also very fragile, which is a big reason why AI (and in particular machine learning, or ML) has become an important element in the pharmaceutical supply chain.

Challenges Inherent in the Pharma Supply Chain

Plenty of challenges exist within the process of sourcing materials for a drug, manufacturing it, getting it to market, and conducting postmarket surveillance.

Professional services firm Deloitte calls the pharma supply chain a “golden thread between the discovery of new therapies and patients receiving them.” Links in the pharmaceutical supply chain include research and development, clinical development, manufacturing, launch/commercialization, and postmarket surveillance.

Each link in the chain also includes smaller sub-activities, with the manufacturing step alone also including sourcing, manufacturing, distribution, delivery, and patient care.

But one of the most prevalent challenges for pharma companies is the reality of the cold supply chain, which describes the transportation of temperature-sensitive products. The cold supply chain requires extra considerations around refrigeration and thermal packaging, lest drug companies risk spoiling the fruits of their labor. Some Covid-19 vaccines, for example, must be kept at -94°F (-70°C) at all times. Many anticancer drugs must be kept between +2°C and +8°C, or they could become ineffective or even toxic.

It’s a real issue for pharmaceutical companies, which lost more than $35 billion in 2021 to products spoiling within cold supply chains.
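
Excursion detection is one of the simplest parts of the cold chain to automate. A minimal sketch, assuming a hypothetical data logger that records readings in degrees Celsius for a product that must stay between +2°C and +8°C:

```python
# Flag cold-chain excursions from hypothetical sensor readings (degrees C)
# for a product with a +2 to +8 degrees C safe band.

def find_excursions(readings, low=2.0, high=8.0):
    """Return (index, temperature) pairs that fall outside the safe band."""
    return [(i, t) for i, t in enumerate(readings) if not (low <= t <= high)]

shipment_log = [4.1, 5.0, 7.9, 9.3, 6.2, 1.5]
print(find_excursions(shipment_log))  # [(3, 9.3), (5, 1.5)]
```

In a real deployment, readings like these would stream in from IoT loggers, and an ML model could go a step further by predicting excursions before they happen, based on route, weather, and packaging data.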

Add to this several other issues plaguing pharmaceutical supply chains, including:

  • An inability to manage unexpected peaks or troughs in demand, often leading to drug shortages or oversupply
  • A lack of strong processes to ensure drug integrity
  • A lack of transparency into several links in the supply chain
  • No mechanisms to examine environmental footprints or medical waste
  • No fall-back in case of natural or human-made disasters

There’s also the pressing issue of new zoonotic diseases such as Covid-19 and Ebola, which the UN predicts will rise exponentially thanks to climate change and loss of wildlife habitat, putting even more pressure on the pharma supply chain.

Indeed, plenty of researchers and governments are already concerned about the latest version of H5N1 making the rounds in birds – along with the worrisome fact that it has reportedly jumped to mammals in recent years. 

All this is to say that pharmaceutical supply chains are one of humankind’s main lines of defense against epidemic and pandemic diseases, along with a score of other conditions.

Keeping these supply chains running smoothly is imperative for both governments and their populations. Which is why the pharma supply chain’s digital transformation using AI and ML makes so much sense.

How AI Can Help Improve the Pharma Supply Chain

AI can create efficiencies and better reliability across the pharma supply chain, from optimizing schedules to improving decision-making and removing manual busywork from supply chain workers in favor of more value-added tasks.

ML, in particular, is well-suited to improving pharma supply chains because ML models learn from data – and data is not in short supply in the pharma industry. 

“Digital transformation is critical for us,” Bertrand Bodson of global pharma company Novartis, which served nearly 800 million patients in 2022, told BBC Storyworks. “We have around 60 manufacturing facilities across the world, and we need the ability to be flexible. How do you adjust in real-time and be agile enough to serve those almost 800 million patients every day?”

The answer lies with AI and ML technologies able to give companies such as Novartis better market intelligence and the flexibility to change course in real-time. “If our manufacturing team knows where ingredients are and how our supply facilities are behaving in real-time, we can adjust and proactively plan for issues instead of reacting to them,” says Bodson.

But how, exactly, can AI help pharma supply chains? Here are some of the most profound ways.

  • Cold chain management: Combined with internet-of-things (IoT) sensors, ML models can track any changes in the temperature of shipped products, determine the risk, and (if necessary) alert the driver to the problem.
  • Route optimization: Just like riders want the shortest and most efficient route from their Uber driver, sensitive pharmaceuticals can benefit from the power of ML models to select the most efficient route and even combine that data with other factors, such as weather or road closures. This can potentially trim hours or days from a drug’s journey.
  • Demand planning: The pharma industry’s just-in-time model, similar to the food industry, is necessary because of the highly perishable nature of many drugs. But drug shortages are currently very common. The European Association of Hospital Pharmacists estimated in 2019 that approximately 95 percent of hospitals dealt with shortages over the past couple of years. To avoid over- or undersupply, drug companies need to be able to forecast and even predict surges or drops in demand more accurately. 

ML models can be very good at predicting demand with the right data. Models applied to this problem can be fed with data around historical ordering patterns, market trends, consumer behavior, competitors, and epidemiological trends. 

  • Predictive maintenance: Losing a batch of expensive drugs because a refrigerator failed is not an option for pharma companies with millions of dollars invested. Based on equipment data, ML-fueled predictive models can tell maintenance crews which equipment is the most likely to fail – and when.
  • Supply chain and inventory management: It has been noted that replenishment times from manufacturer to retailer take more than twice as long in pharma as in other industries. Intelligent supply chain management based on ML has the potential to make this process more efficient.
  • Warehouse automation: ML can learn and predict which items need storing for longer and which will likely be ordered soon, allowing warehouses to speed up the pick-and-pack distribution method. According to Forbes, one cold-chain food supplier increased productivity by 20 percent using this approach.
  • Identifying and eliminating counterfeit drugs: The World Health Organization (WHO) reported in 2017 that one in 10 drugs in developing countries is substandard or falsified. ML models combined with big data and IoT sensors can help identify these products in real-time.
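The cold chain management idea above can be sketched as a simple threshold check over IoT sensor readings. This is a minimal illustration, not a production system: the reading format, timestamps, and the 2–8°C band are assumptions borrowed from the refrigerated-drug range mentioned earlier, and a real system would also model excursion duration and cumulative heat exposure.

```python
# Minimal sketch of cold-chain excursion detection: flag shipment
# readings that drift outside an allowed temperature band (illustrative
# 2-8 degrees C range for refrigerated drugs).

ALLOWED_RANGE_C = (2.0, 8.0)

def find_excursions(readings, allowed=ALLOWED_RANGE_C):
    """Return the (timestamp, temp) pairs that fall outside the allowed band."""
    low, high = allowed
    return [(ts, temp) for ts, temp in readings if not (low <= temp <= high)]

readings = [
    ("08:00", 4.1),
    ("09:00", 5.3),
    ("10:00", 9.2),   # out of range - this is the reading to flag
    ("11:00", 4.8),
]

alerts = find_excursions(readings)
# alerts == [("10:00", 9.2)]
```

In practice the alerting logic would sit downstream of a streaming IoT pipeline, but the core decision – compare each reading against the product's allowed band and escalate violations – is the same.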

Pharma Supply Chain Data Sources

ML models aren’t very effective without the right data to learn from – and luckily, the pharmaceutical industry is awash in various big data sources perfect for feeding ML models hungry for information.

Such data sources can include:

  • Product data, including information around drug composition, expiry, price, and ideal prescription conditions.
  • Demand data, including sales history and trends, and demographics, cross-referenced with sales.
  • Planning data, such as internal performance metrics, marketing metrics, and production plans.
  • Manufacturing data, including production capacity and data generated by IoT devices. 
  • Inventory data, including available stock, which stock needs replenishment, stock in transit, and inventory policies.
  • Logistics data such as warehousing information, transportation information, and returns data.
  • Supplier data, such as information around specific suppliers.
  • Customer data, including unstructured data such as medical histories, prescriptions, bills, and phone transcripts. It must be noted this kind of protected health information (PHI) may fall under HIPAA and must be kept private, although aggregated and depersonalized data can be used.
  • Public data such as government websites, news articles, and social media posts can be invaluable, especially when performing postmarket surveillance.
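As a toy illustration of how the historical ordering data above might feed a demand forecast, here is a simple exponential-smoothing sketch in pure Python. The monthly figures and the smoothing factor are invented for illustration; real pharma demand planning would layer in seasonality, epidemiological trends, and market data on top of a far richer model.

```python
# Toy demand forecast via simple exponential smoothing: each step blends
# the latest observed demand with the running forecast. The alpha factor
# controls how strongly recent demand outweighs older history.

def exponential_smoothing(history, alpha=0.5):
    """Return the one-step-ahead forecast after smoothing over history."""
    forecast = history[0]              # seed with the first observation
    for demand in history[1:]:
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

monthly_units = [1200, 1150, 1300, 1700, 1650]  # invented order history
print(round(exponential_smoothing(monthly_units)))
```

Even this naive model reacts to the demand surge in the later months, which is the basic behavior a drug company needs to avoid under- or oversupply.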

Power Up Your Supply Chain With CapeStart

CapeStart’s teams of data scientists and ML experts work with hospitals, medical device makers, pharmaceutical companies, and other healthcare organizations every day to drive efficiencies and improve the bottom line. 

From helping to scale pharmacovigilance and postmarket surveillance, to improving the efficiency and accuracy of systematic reviews and clinical evaluations, CapeStart can help you push innovation forward and scale your healthcare and pharmaceutical business.

Contact Us.


The post How Does AI-based Supply Chain Optimization Help Pharma Companies Save Money? appeared first on CapeStart.

Healthcare and ChatGPT: How Does Prompt Engineering Help? https://www.capestart.com/resources/blog/healthcare-and-chatgpt-how-does-prompt-engineering-help/ Mon, 06 Feb 2023 02:30:32 +0000 https://stage.capestart.com/?p=103647 The post Healthcare and ChatGPT: How Does Prompt Engineering Help? appeared first on CapeStart.


Healthcare and ChatGPT: How Does Prompt Engineering Help?

In late 2022 Clifford Stermer posted a video on TikTok. The short clip showed the rheumatologist typing a prompt into OpenAI’s ChatGPT. The program then wrote a fully formatted letter to a medical insurance company, complete with treatment explanations, references, and a request for approval for a specific procedure on a specific patient. 

The video went viral almost overnight. “Use this in your daily practice,” Stermer says at one point. “It will save time (and effort).”

While impressive, however, it’s important to note that generative AI models such as ChatGPT aren’t modern-day miracle workers – at least not yet. To return useful and accurate results, and lessen the possibility of inappropriate or downright false results, these models require the right prompting and supervision.

We’ll get into the importance of prompting and prompt engineering shortly. But in the meantime, we have to ask…

What are ChatGPT and Generative AI?

Stable Diffusion. Midjourney. Synthesia. Murf. DALL-E. BLOOM. GPT-3. GPT-4. ChatGPT.

You’ve probably heard of one or more of the above generative models. Generative models come in all shapes and sizes – from generative adversarial networks (GANs) to diffusion models to large language models. What they all have in common is the ability to generate original content, such as text, video, or illustrations. 

Indeed, one non-filmmaker recently generated headlines after creating Salt, a series of short films completely generated by a few of the AI tools mentioned above. 

Specifically, large language models and other generative pre-trained transformer (GPT) models, such as GPT-J-6B by EleutherAI and OpenAI’s GPT-3, have shown an impressive ability to generate text based on commands (or prompts) from a human user. 

These models are typically deep neural networks with large numbers of parameters (elements of the model that change as it learns) trained on massive amounts of data from the internet. The models function by predicting “the next token in a series of tokens,” according to Towards Data Science. While not trained on specific tasks out of the box, they are flexible and well-trained enough to react appropriately to most prompts.
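The “next token” idea can be illustrated with a toy bigram model that predicts the most frequent follower of the current word. This is only a conceptual sketch with an invented two-sentence corpus: real large language models learn these statistics with billions of neural-network parameters over subword tokens, not word counts.

```python
# Conceptual sketch of next-token prediction: count which token most
# often follows each token in a tiny corpus, then predict greedily.
from collections import Counter, defaultdict

corpus = "the patient reported pain . the patient reported fatigue .".split()

# Build a table of followers: followers[token] counts what comes next.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(token):
    """Return the most frequently observed token after `token`."""
    return followers[token].most_common(1)[0][0]

print(predict_next("patient"))   # -> "reported"
```

An LLM does the same thing in spirit – score every candidate next token and emit a likely one – but with learned contextual representations rather than raw counts, which is what lets it respond sensibly to prompts it has never seen.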

Large language models with the right prompting can handle many downstream natural language processing (NLP) tasks, such as:

  • Named entity extraction
  • Text corrections or editing
  • Text classification
  • Topic modeling

ChatGPT, in particular (based on OpenAI’s GPT-3.5 series of models), generates responses by predicting likely next tokens in real time, and was fine-tuned with reinforcement learning from human feedback (RLHF), which rewards the model for responses that human reviewers judge appropriate. 

What is Prompt Engineering? 

Prompt engineering is necessary because models such as ChatGPT don’t consistently deliver optimal answers. Models left to themselves can become sarcastic or provide inappropriate, incorrect, or downright false responses – an especially egregious result in a healthcare scenario where lives are often on the line.

Fortunately, well-executed prompt engineering and fine-tuning can spur the most consistent and accurate results from models such as ChatGPT.

Prompt engineering combines machine learning with creative writing and is a way of designing, executing, and testing prompts for NLP systems. Some say prompt engineering is more or less the only skill a human requires to create compelling content using large language models.

“A prompt engineer can translate from human language to AI language,” says Hackernoon. “It’s like being an expert Googler.”

Prompt engineering has also been compared to playing a game of charades with an AI system: Users must ascertain what the system knows about a topic, then provide well-articulated clues to prompt the system to deliver intelligent responses.

There are four main methods of prompting text-generating AI systems:

  1. Zero-shot: The prompt contains only an instruction, with no examples; the model relies on its pretraining to handle a task it was never explicitly trained on. 
  2. One-shot: The prompt includes a single worked example to show the model the desired output.
  3. Few-shot: The prompt includes a small number of examples (usually between two and five).
  4. Corpus-based priming: The prompt supplies the model with a fuller corpus of relevant text around a particular task. 

Many AI experts recommend starting with zero-shot prompting. If that doesn’t provide satisfactory results, users can move to one- or few-shot prompts before attempting corpus-based priming.

ChatGPT and Prompt Engineering in Healthcare

While there are plenty of potential applications for ChatGPT in healthcare, most medical professionals preach extreme caution, warning that such tools are not ready for deployment in clinical situations. 

Human users must also police models such as ChatGPT to guard against AI hallucinations – errors or made-up facts that sound convincing to users.

But medical practitioners have already identified several potential uses of the technology in their day-to-day tasks. In this video, Dr. Keith Grimes of the University of Warwick illustrates several practical applications of ChatGPT in healthcare:

  • Medication weaning. The weaning process can be complicated to explain to patients and is often very time-consuming when done manually. But written materials around this topic can be generated by ChatGPT in seconds, which could help improve patient compliance with medication. 
  • Summarizing medical, radiology, and triage reports. Users can upload an entire report into ChatGPT, and the system will summarize and explain it thoroughly, along with defining terminology. The technology can summarize triage data as well, turning a lengthy triage questionnaire into a written explanation of what a patient is suffering from. 
  • Diagnostics. ChatGPT can also diagnose ailments based on triage reports and will often admit when it doesn’t have enough information to make a confident diagnosis.
  • Responding to or writing hospital letters. Just like the example of Dr. Stermer we provided earlier, medical professionals can use ChatGPT to shave hours off the process of writing professional, polite letters between doctors and providers.

Other healthcare experts say ChatGPT could power more intelligent chatbots able to answer broad medical questions and collect patient information, immediately integrating it with a patient’s medical records. Other users mention translating clinical notes into patient-friendly versions (although deciphering a doctor’s handwriting may be out of reach for now), including translating acronyms and other high-level terms. 

However, we should note that the technology is currently somewhat limited in healthcare settings because it doesn’t support services covered under the Health Insurance Portability and Accountability Act (HIPAA) that involve personally identifiable information (PII). 

Potential Problems with Healthcare and ChatGPT

While one recent study from the University of Toronto observed that people interacting with GPT-3 chatbots found them non-judgmental and easy to understand, others had concerns around data privacy or theft, unfriendliness, and repetitive responses.

The same study examined 900 transcripts and “did not find a single conversation that suggested serious risk,” the report reads. But the authors say that doesn’t mean problems can’t arise in longer user interactions, which, even among humans, have a better chance of going sideways.

The study’s findings “underscore the need for real-time monitoring and likely automated detection of when a chatbot may engage in inappropriate/harmful behavior, especially when expectations are not appropriately set and participants may be vulnerable,” the authors write. 

CapeStart’s seasoned teams of machine learning and AI experts can help you harness the potential of generative AI models such as ChatGPT. Contact us to schedule a one-on-one discovery call and start scaling your AI innovation today.

Contact Us.


