
Artificial Neural Networks (ANN) in Health Care Technology

Kev Coleman
Visiting Research Fellow at Paragon Health Institute

Kev Coleman oversees the Health Care AI Initiative at Paragon Health Institute.

Drew Gonshorowski
Senior Research Fellow at Paragon Health Institute

Drew Gonshorowski is a Senior Research Fellow at Paragon Health Institute. He brings a decade of experience conducting quantitative research and building models examining health policy and entitlement programs.

One of the exciting features of machine learning is its capacity to detect patterns that go unrecognized by human analysts. For example, a collaboration between M.I.T. and Massachusetts General Hospital has produced a machine learning tool named Sybil that, using a single low-radiation chest scan, can predict lung cancer risk for the following six years without input from a radiologist. Dr. Lecia Sequist, a medical oncologist, noted, “In our study, Sybil was able to detect patterns of risk…that were not visible to the human eye.” Sybil has, on occasion, detected early signs of lung cancer that radiologists did not recognize until lung nodules became visible on scans years later. This capability could direct personalized screening programs to make earlier cancer diagnoses and improve outcomes.

Sybil calculates its lung cancer prediction using an Artificial Neural Network (ANN), a subset of machine learning. Though there are several types of ANNs, a common feature is layers of artificial neurons (also called nodes). Like biological neurons, artificial neurons communicate with one another and operate within multiple layers. However, an artificial neuron is an individual software module that, as part of its collaboration on a shared ANN task, receives and processes an input. The processing is a mathematical calculation that determines whether the neuron will activate, i.e., pass data to one or more other neurons. A numeric constant, known as a bias, may be added to the values being calculated to affect the neuron’s propensity for activation. When activation occurs, the result is weighted before it is passed to the next neuron. The weighting value affects the importance of the activated neuron’s output for the next neuron receiving it, and the weighting itself is based on previous training of the ANN. In some cases, this value may also be combined with the outputs of other nodes before the receiving neuron processes it. The representation of an ANN in the Paragon Pic has individual circles signifying artificial neurons, and these are stacked in columns to represent separate layers.
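The weighted-sum-plus-bias calculation described above can be sketched in a few lines of Python. This is a minimal illustration, not Sybil's implementation: the input values, weights, and bias below are hypothetical, and the sigmoid is only one of several common activation functions.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs plus a bias,
    passed through an activation function (here, a sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid maps the sum to (0, 1)

# Hypothetical values: three inputs, three learned weights, one bias.
output = neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1)
```

In a trained network, the weights and bias are not chosen by hand as they are here; they are the values the training process has learned.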

The lines emanating from each neuron portray potential data exchanges between that neuron and the neurons belonging to the next layer of the ANN. These exchanges occur when the starting neuron is activated and passes its output to the next layer. The nature of a neuron’s activation determines to which neurons the activated neuron’s output is communicated. For a neuron receiving input from multiple neuron activations, the weights applied to the connections influence the importance of each input. Most ANNs process data from the input layer to the output layer in a movement called feedforward, or forward propagation. The reverse movement is known as backpropagation. In backpropagation, the system compares its outputs against the desired outputs and works backward through earlier layers, fine-tuning the connection weights so that the network learns to produce the desired outcomes.
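Forward propagation can be made concrete with a toy network. The sketch below, using made-up weights and biases, passes two input values through a hidden layer of three neurons to a single output neuron; each layer's activations become the next layer's inputs.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """One step of forward propagation: every neuron in the layer computes
    a weighted sum of all incoming values, adds its bias, and activates."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical 2-3-1 network: 2 inputs, a hidden layer of 3 neurons, 1 output.
hidden = layer_forward([0.8, 0.2],
                       weights=[[0.5, -0.3], [0.1, 0.9], [-0.4, 0.7]],
                       biases=[0.0, 0.1, -0.2])
output = layer_forward(hidden,
                       weights=[[0.6, -0.5, 0.3]],
                       biases=[0.05])
```

Training by backpropagation would run this same pass, measure the error at the output, and nudge each weight and bias to shrink that error; the forward pass shown here is the piece both movements share.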

The multilayer architecture of an ANN is advantageous for performing progressive tasks. For example, in the case of an X-ray or other medical scan, the first layer of an ANN may establish the edges or basic form of a physical feature (e.g., a lung nodule). Subsequent ANN layers, using those initial determinations, may identify more complicated structures as part of the network’s overall pattern recognition. When an ANN has at least two intermediate layers of nodes between the input and output layers, the result expressed by the output layer is described as an instance of “Deep Learning.” As demonstrated by systems such as Sybil, Deep Learning may perform feats beyond the capacity of human clinicians.
