
Healthcare AI

Artificial intelligence (AI) has become a major issue in American healthcare. The technology promises to greatly improve diagnosis, treatment, and drug development, and consequently it is rapidly advancing through the clinical, administrative, and research spheres of medicine. However, characteristics such as unpredictability and a lack of full explainability in some AI implementations have fed debates over the technology’s appropriate role in healthcare and the guardrails that should surround it. Paragon Health Institute has made AI a key area of policy research and is working to preserve AI’s many benefits while maintaining effective safety protocols.


Key Research

Paragon Pics


Artificial Neural Networks (ANN) in Health Care Technology

One of the exciting features of machine learning is its capacity to detect patterns that go unrecognized by human analysts. For example, a collaboration between M.I.T. and Massachusetts General Hospital has produced a machine learning tool named Sybil that, using a single low-radiation chest scan, can predict lung cancer risk for the following six years without input from a radiologist. Dr. Lecia Sequist, a medical oncologist, noted, “In our study, Sybil was able to detect patterns of risk…that were not visible to the human eye.” Sybil has, on occasion, detected early lung cancer signs that radiologists did not recognize until lung nodules were visible on scans years later. This could potentially direct personalized screening programs to make earlier cancer diagnoses and improve outcomes.

Sybil calculates its lung cancer prediction using an Artificial Neural Network (ANN), a subset of machine learning. Though there are several types of ANNs, a common feature is layers of artificial neurons (also called nodes). Like biological neurons, artificial neurons communicate with one another and operate within multiple layers. However, an artificial neuron is an individual software module that, as part of its collaboration on a shared ANN task, receives and processes an input. The processing is a mathematical calculation that determines whether the neuron will activate, i.e., pass data to one or more other neurons. A numeric constant, known as a bias, may be added to the values being calculated to affect the neuron’s propensity for activation. When activation occurs, the result is weighted before it is passed to the next neuron. The weighting value affects the importance of the activated neuron’s output to the next neuron receiving it, and the weighting itself is based on the ANN’s previous training. In some cases, this output may also be modified by the results produced by other nodes. The representation of an ANN in the Paragon Pic uses individual circles to signify individual artificial neurons, stacked in columns to represent separate layers.
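The calculation described above can be sketched in a few lines of code. This is a minimal illustration with made-up inputs, weights, bias, and a simple threshold activation; it is not the specific mathematics used by Sybil or any particular ANN.

```python
# A minimal sketch of a single artificial neuron. The inputs, weights,
# bias, and threshold activation below are illustrative assumptions.
def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus the neuron's bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Threshold activation: the neuron "fires" (passes 1.0 onward)
    # only if the weighted sum is positive; otherwise it stays silent
    return 1.0 if total > 0 else 0.0

print(neuron([0.5, 0.2], [0.8, -0.4], 0.1))   # 0.42 > 0, so prints 1.0
print(neuron([0.5, 0.2], [0.8, -0.4], -1.0))  # -0.68 <= 0, so prints 0.0
```

Real ANNs typically use smoother activation functions than this on/off threshold, but the pattern of weighted inputs plus a bias is the same.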

The lines emanating from each individual neuron portray potential data exchanges between that neuron and the neurons belonging to the next layer of the ANN. These exchanges occur when the starting neuron is activated and passes its output to the next neural layer. The nature of a neuron’s activation determines which neurons receive the activated neuron’s output. For a neuron receiving input from multiple neuron activations, the weights applied to the connections determine the importance of each input. Most ANNs process data from the input layer to the output layer in a movement called feedforward or forward propagation. The reverse movement is known as backpropagation. Backpropagation, in which the system works backward from predetermined outputs toward earlier layers, may be used to train a system to produce the desired outcomes by fine-tuning the network’s weights.
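Forward propagation through layered neurons can likewise be sketched. The layer sizes, weights, biases, and sigmoid activation below are illustrative assumptions, not drawn from any real system; in practice, backpropagation would adjust these weights during training rather than having them hand-set.

```python
import math

def sigmoid(x):
    # A common smooth activation that squashes any value into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each neuron in the layer takes a weighted sum of ALL inputs from
    # the previous layer (the "lines" in the diagram), adds its own
    # bias, and applies the activation function to the result.
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Forward propagation: input layer -> hidden layer -> output layer,
# using hypothetical weights and biases.
hidden = layer_forward([0.5, 0.9], [[0.4, -0.6], [0.7, 0.1]], [0.0, -0.2])
output = layer_forward(hidden, [[1.2, -0.8]], [0.3])
print(output)  # a single value between 0 and 1
```

Training by backpropagation would compare this output to a desired value and nudge each weight in the direction that reduces the error, layer by layer from the output back toward the input.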

The multiple-layer architecture of an ANN is advantageous for performing progressive tasks. For example, in the case of an X-ray or medical scan, the first layer of an ANN may establish the edges or basic form of a physical feature (e.g., a lung nodule). Subsequent ANN layers, using those initial determinations, may identify more complicated structures as part of the network’s overall pattern recognition. When the ANN has at least two intermediate layers of nodes between the input and output layers, the result expressed by the output layer is described as an instance of “Deep Learning.” As demonstrated by systems such as Sybil, Deep Learning may perform feats beyond the capacity of human clinicians.
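Under the definition above, whether a network counts as Deep Learning follows from a simple count of its intermediate layers. The layer sizes below are hypothetical examples, not taken from any real system.

```python
# A sketch, under the article's definition, that classifies a network as
# "Deep Learning" when it has at least two intermediate (hidden) layers.
def is_deep(layer_sizes):
    # layer_sizes lists the node count of every layer, including the
    # input and output layers, e.g. [784, 64, 32, 2]
    hidden_layers = len(layer_sizes) - 2
    return hidden_layers >= 2

print(is_deep([784, 64, 2]))      # one hidden layer  -> False
print(is_deep([784, 64, 32, 2]))  # two hidden layers -> True
```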

Issue Experts

Visiting Research Fellow at Paragon Health Institute

Kev Coleman oversees the Health Care AI Initiative at Paragon Health Institute.

Related Glossary Terms

Artificial Neural Network
An Artificial Neural Network (ANN) is a form of Machine Learning. A common feature of ANNs is the use of layers of artificial neurons (called nodes). Each node within a layer is an individual software module that, as part of its collaboration on a shared ANN task, processes an input. If the result from a single node meets (or exceeds) a threshold value, the data point is passed to the next layer within the ANN. This result may, in some cases, be modified by the results produced by other nodes. Additional Resources: Deep Learning. Additional Paragon Research: Artificial Neural Networks…
Chatbot
A chatbot is a software program that interacts with humans via conversation, whether written or spoken. While chatbots can be simple rule-based programs, an AI chatbot relies upon a Large Language Model (LLM) to interpret the speech or text provided by the human user. Generative AI, in turn, works in conjunction with an LLM to respond to the chatbot user with original content expressed through language. In health care, chatbot examples include some symptom checkers as well as customer service automations. Additional Resources: Large Language Model
Deep Learning
Deep Learning is a phrase used to describe the result of an Artificial Neural Network (ANN) analysis in which there are at least two intermediate layers between the ANN’s input and output layers. Deep Learning can accommodate very large datasets and uncover data relationships unrecognized by humans. Additional Resources: Artificial Neural Network
Generative Adversarial Network
A Generative Adversarial Network is an instance of Generative AI where original content (text, speech, images, etc.) is produced through the interactions of two Artificial Neural Networks: a generator and a discriminator. The generator neural network generates novel content and the discriminator neural network evaluates the content. The discriminator is adversarial because it compares the new content to examples already confirmed to be authentic. If the discriminator can distinguish the new content from the confirmed content, the generator must improve the content through subsequent iterations until the discriminator cannot reliably differentiate between the new content and the confirmed content. Additional…
Generative AI
Generative AI is a category of Artificial Intelligence that can produce original content that has not been preprogrammed within the system. Generative AI leverages Artificial Neural Networks to produce these creations, which resemble the artifacts on which they are based, whether textual, visual, or acoustic. These characteristics are learned through Deep Learning on large data sets and may, in some cases, be processed using a Generative Adversarial Network (GAN) to produce a new object of the same type. Additional Resources: Artificial Neural Networks, Artificial Intelligence
Large Language Model
A Large Language Model (LLM) is an instance of Machine Learning that characterizes human language within a computer system so that speech and writing may be interpreted and generated. An LLM contains complex relationships among words, expressions, contexts, and grammar. These relationships, along with the proximity of words to one another within a given linguistic expression (e.g., a sentence or paragraph), inform a probabilistic analysis that constructs meanings from the semantic possibilities of the words and phrases. When used in conjunction with Generative AI, an LLM enables a computer system to generate outputs in the form of human language. Additional…
Machine Learning
Machine Learning is a category of Artificial Intelligence that produces decisions or predictions through algorithms and statistical inferences. These inferences are typically refined over time through continual data analyses rather than explicit instructions from programmers or system users. Machine Learning may be “supervised” or “unsupervised,” the former using labeled data sets that identify some information relationships and the latter using unlabeled data in which the system detects relationships on its own. A popular example of Machine Learning is a Large Language Model (LLM). Additional Resources: AI, Large Language Model
