28.1 Dynamic signals can vary in both the time and frequency domains
The essence of any dynamic signal is that when we measure it at different points in
time, the measured value changes. If that varying signal is plotted over time, a
waveform is the result. Most dynamic signals of interest have the important property of
periodicity – the waveform increases and decreases in value according to some regular
cycle.
There are three basic attributes of any time-varying signal (Figure 28.1):
1. Amplitude – The greatest value the signal achieves as it varies over a cycle.
2. Frequency – The number of cycles completed per unit time (the inverse of the period, the time taken to complete one cycle).
3. Phase – The state a system is in at any point in time.
When comparing two otherwise identical repeating signals (e.g. sine waves) that are
just offset from each other in time, we say they are ‘out of phase’ with each other. As
shorthand, sometimes phase is recorded simply as the degree of offset between the two
signals, usually measured from the origin.
Each of these properties can be analyzed, in different combinations, to provide three
different views of a dynamic signal. Each view reveals different aspects of the process
producing the signal, and it can be used to reason about what is happening to the
process at any point in time and what may be done to keep it well controlled. The time
domain provides the first and most natural view of a dynamic signal, by recording simply
how a measured signal varies over time.
To obtain the second view of a signal we note that although most waveforms have a
complex shape, for practical purposes any such complex waveform can be deconstructed
into a set of simpler component sine waves, a discovery made by the famous French
mathematician Fourier (1768–1830). Each such component sine wave has its own intrinsic
frequency. Understanding this allows us to unpack any waveform into its component
sine waves and record their unique frequencies. This frequency ‘fingerprint’ of a
waveform provides us with our second view – the frequency domain – a time- and
phase-free view of the signal (Figure 28.2). The mathematical process for converting a
waveform into the frequency domain is known as a Fourier transform.
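To make this concrete, here is a minimal Python sketch (illustrative only; the 2 Hz and 5 Hz components and all other numbers are invented) showing how a discrete Fourier transform recovers the component frequencies of a composite waveform:

```python
import numpy as np

# Compose a waveform from two sine components: 2 Hz (amplitude 1)
# and 5 Hz (amplitude 0.5), sampled at 100 Hz for 2 seconds.
fs = 100.0
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

# The Fourier transform converts the waveform into the frequency
# domain; the magnitude spectrum is its frequency 'fingerprint'.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest components sit at the original frequencies.
print(sorted(freqs[np.argsort(spectrum)[-2:]]))   # -> [2.0, 5.0]
```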
The third view typically used is of the changing phases of a signal over time – its phase
portrait. In this time- and frequency-free view, we graph the different values a
system takes over repeated cycles. Consider, for example, a frictionless pendulum as it
swings toward and then away from a central or zero point (Figure 28.3). In the time
domain, it looks like a sine wave. In the frequency domain, we see a single frequency
band, associated with that sine wave. In the phase portrait, the pendulum’s different
states are captured by graphing its distance from the middle point and its velocity at that
point. At some part of its cycle, it is moving toward the middle point, and at the opposite
end of the cycle it moves away. The resulting phase portrait is circular, describing this
ever-repeating motion. In a world with friction, the pendulum loses energy. Its phase
portrait spirals toward the zero middle point, as the pendulum slowly loses velocity, and
displacement from the middle point, as it comes to rest.
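A minimal simulation sketch of this pendulum example, assuming a small-angle (linear) approximation and invented constants, shows the spiral toward the attractor numerically:

```python
import numpy as np

# Small-angle pendulum with friction: acceleration = -omega^2 * x - damping * v.
# With damping = 0 the phase portrait (x against v) is a closed loop;
# with damping > 0 it spirals in toward the rest point at the origin.
omega, damping, dt = 2.0, 0.5, 0.001
x, v = 1.0, 0.0                       # initial displacement and velocity
trajectory = []
for _ in range(20000):                # simulate 20 seconds
    a = -omega**2 * x - damping * v   # restoring force plus friction
    v += a * dt
    x += v * dt
    trajectory.append((x, v))

# After 20 simulated seconds the state has spiralled close to (0, 0).
print(trajectory[-1])
```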
Examination of phase portraits is crucial to understanding the nature of a dynamic
system. We can tell whether system states recur regularly, moving in a tightly defined
band, and whether this recurrence has natural limits to the values the state can take.
We can tell whether system states vary around a particular value – called an attractor.
Independently of where a system starts in its phase space, its tendency is to be
attracted toward the attractor value(s) (Figure 28.4). The portrait can also help tell
whether a dynamic system is chaotic, which means both that initial conditions have a
significant effect on final outcome, and also that it eventually traverses all its operating
space at some point in time.
The ability to separate a complex dynamic signal into time and frequency domains, as
well as into phase space, is crucial to informing our understanding of the processes that
generate a signal. It is also significant because we can use time, frequency and phase
information to help us monitor changes over time and interpret the changes that are
observed.
28.2 Statistical control charts are a way of detecting when a dynamic system moves
from one state to another
Most patients’ tests, such as a blood test or blood pressure reading, are taken only
sporadically. If we were to plot them in time, the graph would be very sparse. In a
patient monitoring setting, the same measures are taken repeatedly, often for practical
purposes continuously, thus producing a dense time line. For sporadic measures, such
as a blood test for cholesterol, it is typical to define a normal or abnormal value based on
the population distribution of the measure in question (often normally distributed). Any
reading within two standard deviations of the population mean is considered normal,
and beyond that we define the test result as abnormal.
In the data-dense domain of patient monitoring, the task is a little different. Because we
are gathering many data from a single individual, we are now able to compute statistics
on what is ‘normal’ for that individual alone. For example, we can repeatedly sample
a measure over time, and as long as we are happy that the measure is stable, we can
calculate what is called a patient-specific normal range (Harris et al., 1980). This range
allows us to detect when a new value falls out of the patient’s own normal distribution of
results. It is a good way of determining whether anything has changed significantly
from previous measures over weeks or months. Importantly, a value in the
patient-specific range does not mean that the patient is well, but only that the patient is
stable.
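As a rough sketch of how such a range might be computed (the baseline values and the two standard deviation cut-off below are illustrative assumptions, not a clinical recommendation):

```python
import numpy as np

# Hypothetical baseline series from a single stable patient.
baseline = np.array([92, 95, 91, 94, 96, 93, 92, 95, 94, 93])

mean, sd = baseline.mean(), baseline.std(ddof=1)
lower, upper = mean - 2 * sd, mean + 2 * sd    # patient-specific range

def outside_patient_range(value):
    """Flag a new result falling outside this patient's own distribution."""
    return value < lower or value > upper

print((round(lower, 1), round(upper, 1)), outside_patient_range(104))
```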
The next feature of time-dense data is that we see much more moment-to-moment
variation. The time plot of even entirely normal patient measures is thus anything but a straight
line. If we monitor such dynamic physiological signals, it is important to separate two
sources of signal variation:
• The first is intrinsic or natural to any stable system that is being monitored and is often
called common cause variation. Thus, a heart rate varies with level of exertion, body
position and sleep. Blood pressures also vary over a day for similar reasons. Variation
may also occur from noise on the signal, small variations in how a measurement is
taken or indeed changes to the measurement system itself, such as different length
leads, replaced skin transducers and so forth.
• New events can be imposed on top of a stable system, or the system can shift from
normal to abnormal, with a resultant deviation in performance. Such changes are
sometimes called special cause variation. For example, a patient may move from
normal cardiac function into heart failure, with resultant significant changes in the
patterns and values of cardiac function metrics. Equally, a pulmonary embolus would be
a new event that would immediately cause changes to a patient’s physiology.
When the monitoring task is to detect special cause variation (as opposed to fine-grained analysis of a signal), a standard approach is to build a statistical process control (SPC) chart for the signal
you wish to track.
The general approach to creating an SPC chart is similar to other learning approaches
reviewed in Chapter 27. First, historical data are used as a training set. Rather than
learning specific relationships in the data, however, analysis is limited to determining
any natural (common cause) variability in the signal and estimating some statistical
boundaries of stable behaviour. Once such a statistical framework is built, it can be
used prospectively to monitor the signal and detect when special cause variation
occurs. SPC has found widespread application in healthcare and has been used to
monitor physiological parameters of individual patients, through to surveillance of
system-wide signals of health service performance (e.g. Tennant et al., 2007; Thor et
al., 2007) (Table 28.1).
A simple control chart consists of upper and lower bounds for a signal (its control limits)
and a centre line typically based on the mean of past values (Figure 28.5). Control limits
are often set at three standard deviations from the centre line, to allow for common
cause variations. As long as future measures remain in the envelope of the control
lines, the measure is said to be in control, or stable. There are many ways to calculate
the centre line, including exponentially weighted moving averages (EWMA) and a
cumulative sum (CUSUM) (Mohammed et al., 2008). If the mean or centre line is
calculated on a past data set only, then the control and centre lines will be straight. If,
however, they are constantly recalculated as new data arrive, then they will drift as new
measurements come in.
When a signal strays outside these control boundaries, this triggers a search for special
cause variations that may need attention. The actual rule for triggering such an alert
depends on the application and time available for recovery if an unexpected event
occurs. Alerts may trigger after a number of measures repeatedly fall outside a control
limit (to avoid triggering a false alarm from a transient variation), or they may trigger
much earlier if there is clear movement of values toward the control limit (e.g. a trend
line forms in the band between two and three standard deviations).
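A minimal sketch of this approach follows, with invented training data and simplified trigger rules (a point beyond three standard deviations, or a run of three consecutive points in the two-to-three standard deviation warning band):

```python
import numpy as np

# Control limits learned from historical (in-control) data, then
# applied prospectively. Values and rules are illustrative only.
history = np.array([72, 75, 71, 74, 73, 76, 72, 74, 75, 73])  # e.g. heart rate
centre = history.mean()
sigma = history.std(ddof=1)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma   # upper/lower control limits

def check(new_values):
    """Flag special cause variation: any point beyond 3 SD, or a run of
    three consecutive points in the 2-3 SD warning band."""
    alerts = []
    for i, v in enumerate(new_values):
        if v > ucl or v < lcl:
            alerts.append((i, v, "outside control limits"))
    z = np.abs((np.asarray(new_values) - centre) / sigma)
    for i in range(len(z) - 2):
        if np.all(z[i:i + 3] > 2):
            alerts.append((i, new_values[i], "run in warning band"))
    return alerts

print(check([74, 73, 88, 78, 78, 78]))
```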
Table 28.1 Example variables that can be tracked by statistical process control
Biomedical and physiological variables
• Cardiovascular metrics e.g. heart rate, blood pressure, central venous pressure.
• Blood glucose and HbA1c.
• Peak expiratory flow rates.
• Urinary output.
• Oxygen saturation.
Biomedical instrumentation metrics
• Error in blood pressure measurements.
Other patient health variables
• Patient fall rate.
• Daily pain scales.
• Days between asthma attacks.
• Incontinence volume.
• Nausea after chemotherapy.
Clinical management variables
Time to complete a process element
• ‘Door to needle’ time (time from admission to thrombolytic therapy for acute
myocardial infarction).
• ‘Vein to brain’ time (time from a blood test being taken to a clinician reading the
reported test result).
• Average length of stay or mortality per patient diagnosis group in hospital, or in the
intensive care unit.
• Time from discharge to general practitioner receiving a discharge summary.
Process event (and defect) rates
• Compliance with defined clinical indicators of care quality, e.g. measuring the blood
pressure of hypertensive patients in primary care.
• Percentage of stroke patients receiving a brain scan within 2 days.
• Days since last infection for patients with central venous lines.
• Number of operations since last complication.
• Days since last adverse event in a unit.
• Documentation of specific information items in the record, e.g. allergy, presenting
condition.
• Place in record where specific information items are documented, e.g. free text versus
coded field.
• Deviations from protocol or guideline.
• Monthly medication errors.
• Out-of-hours ‘stat’ blood test orders.
• Monthly cases of MRSA.
• Monthly admission rate for diarrhoea cases.
• Number of diabetic patients having an HbA1c test.
• Mortality after coronary artery bypass graft.
Clinical decision-making
• Number of patients with tonsillitis and without tonsillitis who were receiving antibiotics.
Patient experience
• Patient satisfaction or complaints.
• Staff ratings.
• Quality rankings for process of care.
Financial resources
• Average cost per procedure.
• Staff cost per shift.
• Number of support staff versus providers.
HbA1c, glycosylated haemoglobin; MRSA, methicillin-resistant Staphylococcus aureus.
After Thor et al., 2007.
Using knowledge about a signal in the frequency and phase domains as well as about
system structure allows tighter control boundaries to be determined
SPC charts are nearly model-free representations of system behaviour, given that they
make very few explicit assumptions about the mechanics of the system observed. For
example, there are not necessarily even assumptions about the statistical distribution of
the measures as they are tracked (e.g. normal, Poisson). As such, it should be possible
to create tighter boundaries around a signal if we know more about the underlying
causes of the variation being observed.
One way to model a time-varying signal better is to explore it in the frequency domain and
phase space. Frequency domain analysis breaks a complex time-varying signal into its
separate time-varying components. Each component could have its own SPC chart, and
if we understood the separate underlying process that creates each component, control
limits could be set accordingly. For example, heart rate varies over 24 hours because of
wake and sleep cycles. On the small scale, heart rate varies from beat to beat because
of the physiological mechanisms of the myocardial pacemaker system. The phase
portrait of a complex waveform, or portraits of its frequency components, can tell us
which is stable or unstable and can help in decisions about setting trigger rules around
the control limits.
28.3 Signal processing and interpretation occur at different levels
and require increasing amounts of clinical knowledge
For many patient monitoring tasks, the complex nature of physiological signals such
as the ECG requires additional preparation of the signal before it can be analyzed.
Additionally, signal interpretation can extend well beyond that achievable using standard
SPC, which as we have just seen, is almost model-free. This process of signal
interpretation can occur at a number of levels, starting with signal acquisition and a low-level assessment of the validity of the signal, through to a complex assessment of its
clinical significance. The different levels of interpretation that a signal may pass through
are illustrated in Figure 28.6.
Sensors interact with a physical system to generate a signal
Physical sensors (or transducers) first detect a physical process and turn it into a signal.
For example, the leads attached to skin are the sensor component of the ECG
measurement system. Sensors are designed to respond in some way to the physical
system being measured (a pressure, a temperature or blood concentration of a
substance), and that response is converted into an electrical signal.
For a continuous signal such as a pressure wave, we need to optimize the sensor
design so that the sample it takes produces a reliable picture of the changes that are
occurring within the physical system (much like designing the sample size and
composition of an experiment so that it is representative of the population being
studied).
The first process of interest is analogue to digital conversion (ADC). Although a sensor
is typically a physical device that is able to detect the changes in a dynamic system
continuously, digital computers see the world in discrete chunks. An AD converter takes
a continuous waveform and re-represents it as a sequence of digital numbers. The
precision of ADC describes the granularity of this transformation. If, for example, only
one bit is available (zero or one), then all we can do is represent two signal levels –
enough, for example, to detect when a peak occurs (Figure 28.7). As we increase the
number of bits available, a more detailed characterization of the signal is possible.
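The following sketch illustrates ADC precision by re-quantizing a waveform at different bit depths (the uniform quantisation scheme here is a simplifying choice for illustration):

```python
import numpy as np

# Sketch of ADC precision: re-represent a continuous waveform using a
# fixed number of bits. More bits give a finer-grained representation.
def quantise(signal, bits):
    levels = 2 ** bits                        # discrete levels available
    lo, hi = signal.min(), signal.max()
    step = (hi - lo) / (levels - 1)           # spacing between adjacent levels
    return np.round((signal - lo) / step) * step + lo

t = np.linspace(0, 1, 2000)
wave = np.sin(2 * np.pi * t)
print(np.unique(quantise(wave, 1)).size)      # one bit: just two levels (peak detection)
print(np.unique(quantise(wave, 8)).size)      # eight bits: far finer characterization
```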
The next issue to consider in the fidelity of our digital representation of a continuous
signal is the sampling rate. As the process of ADC takes repeated discrete snapshots of
a continuously changing signal, the question is how often these snapshots must be
taken. Recalling that a complex continuous waveform can be deconstructed by Fourier
transform into its components, we identify the highest-frequency component because
this will require the greatest number of samples. The Nyquist rate tells us that a signal
with frequency f must be sampled at a rate of at least 2f. Anything less is
undersampled and will not capture the true frequency
of the signal. In Figure 28.8, we can see that sampling a sinusoidal wave at its
frequency f yields a straight line. At 2f we obtain enough information to see the true
cyclical nature of the signal, and as we sample at rates higher than 2f, a richer picture of
the shape of the waveform is obtained. Aliasing is an interesting phenomenon that
occurs when a signal is undersampled. Instead of capturing a waveform at its true frequency, we
recover an alias of that signal, which has a lower frequency (e.g. sample rate of 1.8f in
Figure 28.8).
By picking the right ADC precision and sample rate, we are able to adjust our sensing
process to obtain as faithful a representation of the continuous process we are
measuring as needed. Sampling error is the difference between the actual and
estimated signal.
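A small numerical sketch of the Nyquist rate and aliasing (all frequencies and rates invented; the alias at a 1.8f sampling rate appears at 0.8f, here 8 Hz):

```python
import numpy as np

# A 10 Hz sine sampled at various rates. Below 2f the recovered
# frequency is an alias, not the true 10 Hz.
f = 10.0

def apparent_frequency(sample_rate, duration=2.0):
    n = int(duration * sample_rate)
    t = np.arange(n) / sample_rate
    samples = np.sin(2 * np.pi * f * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(n, d=1 / sample_rate)
    return freqs[spectrum.argmax()]          # strongest frequency component

print(apparent_frequency(100.0))   # well above 2f: recovers 10 Hz
print(apparent_frequency(18.0))    # 1.8f: undersampled, alias at 8 Hz
```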
Signal processing is used to eliminate noise and artefact in a signal
The next task in signal interpretation is to decide whether the values that are measured
are physiologically valid. In other words, is the signal genuine, or is it distorted because
of excessive noise resulting in a low signal-to-noise ratio? Alternatively, is it distorted by a
signal artefact arising from another source than the process being monitored, such as
patient movement? A signal artefact is defined as any component of the measured
signal that is unwanted. It may be caused by distortions introduced through the
measurement apparatus. Indeed, an artefact may result from another physiological
process that is not of interest in the current context, such as a respiratory swing on an
ECG trace. Thus, ‘one man’s artefact is another’s signal’ (Rampil, 1987).
Where possible, a noisy signal is ‘cleaned up’ by removing the artefactual or noise
components of the signal. Doing so is important for several reasons. First, an artefact
may be misinterpreted as a genuine clinical event and lead to an erroneous therapeutic
intervention. Next, invalid but abnormal values that are not filtered can cause alarm
systems to register false alarms when alarm limits are reached. Finally, artefact
rejection improves the visual clarity of a signal when it is presented to a clinician for
interpretation.
There are many sources of artefact in the clinical environment. False heart rate values
can be generated by diathermy noise during surgery or by patient movement. False
high arterial blood pressure alarms are generated by flushing and sampling arterial lines
(Figure 28.9). These forms of artefact have contributed significantly to the generation of
false alarms on patient monitoring equipment. One early study found that only 10 per
cent of 1307 alarm events generated on cardiac postoperative patients were significant
(Koski et al., 1990). Of these, 27 per cent were due to artefacts, e.g. sampling of arterial
blood. The net effect of the distraction caused by high false alarm rates has been that
clinicians have often turned off alarms intra-operatively, despite the increase in risk to
the patient.
Although an artefact is best handled at its source through improvements in the design of
the physical transducer system, it is not always possible or practical to do so. The next
best step is to filter out artefactual components of a signal or register their detection
before using the signal for clinical interpretation. Many signal processing techniques
have been developed to assist in noise reduction. Some artefacts, such as those
caused by sampling and flushing a blood pressure catheter line, can be detected by
their unique shape (see Figure 28.9). Other artefacts can be managed by using Kalman
filtering, which computes a weighted average of the signal over a period of time and as a
result smoothes out the effects of random and transient noise in the signal.
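As an illustration of this idea, a minimal one-dimensional Kalman filter is sketched below; the constant-level signal model and the noise variances are simplifying assumptions made for the example:

```python
import numpy as np

# Minimal 1-D Kalman filter (constant-level model with Gaussian noise):
# each estimate is a weighted average of the prediction and the new
# measurement, which smooths random, transient noise.
def kalman_smooth(measurements, process_var=0.01, measurement_var=4.0):
    estimate, error = measurements[0], 1.0
    smoothed = []
    for z in measurements:
        error += process_var                       # predict: uncertainty grows
        gain = error / (error + measurement_var)   # weight given to new measurement
        estimate += gain * (z - estimate)          # update: weighted average
        error *= (1 - gain)
        smoothed.append(estimate)
    return np.array(smoothed)

rng = np.random.default_rng(0)
noisy = 80 + rng.normal(0, 2, size=50)             # noisy heart-rate-like series
print(kalman_smooth(noisy).round(1)[-5:])          # hovers near the true level of 80
```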
As we saw earlier, a Fourier transform can deconstruct a complex time-varying signal
into a series of sine waves of different frequencies and allow us to manipulate a signal
in the frequency domain. If noise is known mainly to distort a signal in certain parts of its
frequency spectrum, then only that part of the spectrum can be attenuated or
completely filtered out. A low-pass filter would eliminate components above some cut-off frequency, passing only the low-frequency range, and a high-pass filter achieves the reverse. A
signal can then be reconstructed in the time domain and should now be much cleaner
and better represent just the physiological measure we are after.
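A sketch of this frequency-domain filtering process, using an invented slow physiological component contaminated by 50 Hz mains-style interference and a crude cut-off at 40 Hz:

```python
import numpy as np

# A slow 1.5 Hz component (the 'physiological' signal) contaminated by
# 50 Hz interference, sampled at 500 Hz for 2 seconds.
fs = 500.0
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 1.5 * t)
measured = clean + 0.4 * np.sin(2 * np.pi * 50 * t)

# Transform to the frequency domain, zero the noisy band, transform back.
spectrum = np.fft.rfft(measured)
freqs = np.fft.rfftfreq(len(measured), d=1 / fs)
spectrum[freqs > 40] = 0                        # crude low-pass cut-off at 40 Hz
reconstructed = np.fft.irfft(spectrum, n=len(measured))

print(np.abs(reconstructed - clean).max())      # residual error is tiny
```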
It is also possible to analyze frequency components of a signal to obtain information
about the performance of the measurement system. When measuring a waveform such
as arterial pressure, the signal may be overdamped, meaning the high-frequency signal
components are attenuated compared with lower frequencies (similar to a sound wave
being muffled in a padded room) (Figure 28.10). Damping may be caused by problems
in the pressure measurement system, such as tubes that are too long, or air bubbles
that are trapped in them. An overdamped blood pressure measurement underestimates
systolic pressures and overestimates diastolic pressures. A pressure measurement can
also be underdamped (similar to hearing sounds in a tiled room), in which high-frequency components are enhanced compared with lower frequencies. Underdamping
overestimates systolic pressures and underestimates diastolic pressures. Analysis of a
pressure signal in the frequency domain can detect these higher than normal frequency
components for an underdamped system or lower than normal components in an
overdamped system.
Multiple features can be extracted from a single channel to support behavioural
interpretation
Having established that a signal is probably artefact free, the next stage in its
interpretation is to decide whether it defines a clinically significant condition. This may
be done simply by comparing the value with that of a pre-defined patient or population
normal range, or using SPC lines. In most cases, simple thresholding is of limited value
because clinically appropriate signal ranges can be highly context specific and require a
richer model to interpret signal meaning. The notion of an acceptable range is often tied
to expectations defined by the patient’s likely outcome and current therapeutic
interventions. Even wildly abnormal values may have several possible interpretations.
These limitations of simple threshold-based alarm techniques have spurred the
development of more complex techniques capable of delivering ‘smart alarms’
(Gravenstein et al., 1987). Much information can be extracted from a single channel if it
can measure a time-varying and continuous waveform such as arterial pressure. For
example, estimates of clinically useful measures such as cardiac stroke volume can be
derived by analyzing the area under the curve of the wave (Figure 28.11).
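A toy sketch of this idea follows; the waveform, the systolic window and the calibration constant k are all invented for illustration and bear no relation to validated pulse-contour methods:

```python
import numpy as np

# One toy cardiac cycle (0.8 s at 100 Hz): a half-sine 'systolic' hump
# above a diastolic baseline of 80 mmHg.
fs = 100.0
t = np.arange(0, 0.8, 1 / fs)
pressure = 80 + 40 * np.clip(np.sin(2 * np.pi * t / 0.8), 0, None)

systole = pressure[:40]                      # assume first half-cycle is systole
area = (systole - 80).sum() / fs             # area above the diastolic baseline
k = 7.0                                      # hypothetical calibration constant
print(round(k * area, 1), "mL (toy stroke volume estimate)")
```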
Alterations in the behaviour of a repetitive signal can also carry information. Changes in
the ECG are a good example. Features such as the height of the QRS peak help to
label individual components within beat complexes. The presence or absence of
features such as P waves and the duration and regularity of intervals between waves
and complexes can carry diagnostic information about cardiac rhythm.
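A sketch of such feature extraction on a synthetic trace (the spike train below merely stands in for an ECG; the peak height and spacing thresholds are invented):

```python
import numpy as np
from scipy.signal import find_peaks

# Detect R-like peaks by height and minimum spacing, then derive heart
# rate and rhythm regularity from the intervals between them. The toy
# trace has one narrow spike per 0.8 s 'beat'.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
beat_phase = t % 0.8                                         # position within each cycle
ecg = np.clip(1 - np.abs(beat_phase - 0.4) / 0.02, 0, None)  # narrow spike per beat
ecg = ecg + 0.05 * np.random.default_rng(1).normal(size=t.size)

peaks, _ = find_peaks(ecg, height=0.5, distance=0.4 * fs)
rr = np.diff(peaks) / fs                                     # R-R intervals in seconds
print(round(60 / rr.mean()), "bpm; interval SD:", round(float(rr.std()), 3))
```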
When such complex model-based inference is needed, signal interpretation typically
involves two computational sub-tasks:
• Pattern recognition techniques are used to extract significant features from a signal.
For example, they may detect edges and curves in pictures, or they may detect letters,
letter groups and words from speech (see Section 24.4). Pattern detection techniques
vary in the way they model events within signals. For example, they may be based on
statistical models, in which the frequency of certain patterns is used in the recognition
process. There are many classic recognition techniques that have clinical application,
such as blackboard systems (Nii, 1986) (initially developed for speech recognition).
Hidden Markov models and conditional random fields are now widely applied to pattern
recognition tasks, as are neural networks.
• Once patterns have been identified within a signal, they need to be interpreted and a
meaningful label assigned, e.g. picking a QRS complex from a T wave and interpreting
its clinical significance. Standard methods for achieving this include rule-based CDSSs,
as well as neural networks and statistical classifiers such as SVMs.
Cross-channel interpretation brings together several lines of evidence to permit more
complex reasoning about the meaning of signals
Often clinical conditions can be identified only by looking at more than one signal. Such
cross-channel information is useful at several levels, starting with artefact detection and
signal validation through to clinical diagnosis. Cross-correlation alternatives include the
following:
• Same signal, different interpretation method – To avoid errors when measuring heart
rate with a simple peak detection algorithm, one can validate the value by comparing
it with one obtained using a different calculation method, based on the same data.
• Different physical source, but same signal – Comparing different ECG leads is a
common technique for validating changes seen on one lead. Patterns across leads also
have diagnostic importance.
• Different signal – In Figure 28.6, a flat portion of an ECG trace does not trigger an
‘asystole’ alarm, because examination of the corresponding arterial waveform reveals
pulsatile behaviour consistent with normal cardiac function; the problem is therefore
more likely with the ECG signal than with the patient.
Cross-channel information can also be used to diagnose conditions. Many clinical
conditions can be distinguished by the time ordering of events in their natural history
(Coiera, 1990). For example, the cause of a hypotensive episode may be deducible
from the order in which changes occurred across heart rate, blood pressure and central
venous pressure (CVP) (see Figure 28.12). In the presence of a vasodilator, the arterial
blood pressure drop would precede the reflex tachycardia and CVP fall. In the presence
of hypovolaemia, the first parameter to shift would be the heart rate, followed by
CVP and blood pressure.
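A toy rule sketch of this kind of temporal reasoning follows; the channel names and deviation times are invented, and a real system would need validated and far richer models:

```python
# Infer the likely cause of a hypotensive episode from the order in
# which channels first deviated (times in seconds). The orderings follow
# the vasodilator/hypovolaemia example above.
def hypotension_cause(first_deviation):
    order = sorted(first_deviation, key=first_deviation.get)
    if order == ["arterial_bp", "heart_rate", "cvp"]:
        return "consistent with vasodilation"
    if order == ["heart_rate", "cvp", "arterial_bp"]:
        return "consistent with hypovolaemia"
    return "pattern not recognized"

print(hypotension_cause({"arterial_bp": 12.0, "heart_rate": 30.5, "cvp": 41.0}))
```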
Decision support systems can access additional data from the electronic record to
interpret biomedical signals
For some signal interpretation tasks, even cross-channel information is insufficient to
disambiguate alternate explanations for the patterns seen, and more data about the
patient and their context are needed. Linking monitoring systems to the electronic
record is one way of achieving this. For example, it is not always possible to label
events in an ECG strip unambiguously, and additional contextual information is needed
to assist in the labelling process (e.g. Greenwald et al., 1990). By linking ECG
interpretation with information in a patient record, a CDSS can arrive at more complex
diagnoses or exclude competing diagnoses from the differential diagnosis.
High-dependency hospital settings
Complex monitoring systems are most typically found in high-dependency, high-vigilance and high-risk settings such as the operating theatre or the intensive care unit.
The specialized tasks undertaken in such settings have resulted in electronic record
systems specially designed to integrate with clinical monitoring signals. Anaesthesia
information management systems (AIMSs) and anaesthesia workstations are cases in
point (Muravchick et al., 2008). An AIMS can in one ‘place’ assemble physiological
monitoring data, controls and data from the anaesthetic machine, surgical schedules
and laboratory results, as well as access the standard electronic record function to
retrieve patient details and past history. An AIMS does not just facilitate anaesthetic
record capture or test and medication ordering, but it may also generate quality reports (e.g.
based on SPC methods) for an individual patient or case series.
Decision support features can enhance an AIMS with real-time signals and alerts.
When integrated with an anaesthetic machine, an AIMS can look at physiological and
machine data to warn of faults within the gas delivery system and can disambiguate
them from physiological causes and possibly suggest corrective actions.
Low-dependency hospital settings
Patient monitoring is now moving to low-acuity and low-vigilance settings such as step-down units, or the hospital ward, as lightweight wearable sensors become cost-effective
and widely available. In hospital wards, medical emergency teams (METs), also known
as rapid response teams (RRTs), are called when a patient is seen to be deteriorating in
health and requires attention ahead of the occurrence of a preventable event such as
respiratory or cardiac arrest (Hillman et al., 2005).
MET calls are triggered by changes in routinely charted measures such as pulse and
respiratory rate, temperature and blood pressure. These measures are scored, for
example, using the Modified Early Warning Score (MEWS), and when a patient’s score
exceeds a given threshold, a nurse or other member of clinical staff triggers a call to the
MET (Subbe et al., 2003). With the advent of wearable sensors, some of the logic to
trigger MET calls can be delegated to a computer system. For example, MEWS criteria
translate into simple rules that can automatically trigger a message or
phone call to the MET. Significant reductions in in-hospital mortality have been
demonstrated in a number of clinical trials of electronically triggered calls based on early
warning score calculations from physiological monitors (Schmidt et al., 2014; Evans et
al., 2014).
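As a sketch of how such criteria translate into rules, consider the following; the scoring bands and trigger threshold are simplified placeholders, not the published MEWS values:

```python
# Toy early-warning rules that could trigger an automated call.
# Scoring bands below are invented simplifications, not real MEWS bands.
def warning_score(resp_rate, heart_rate, systolic_bp):
    score = 0
    score += 2 if resp_rate > 25 else (1 if resp_rate > 20 else 0)
    score += 2 if heart_rate > 120 else (1 if heart_rate > 100 else 0)
    score += 2 if systolic_bp < 90 else (1 if systolic_bp < 100 else 0)
    return score

CALL_THRESHOLD = 4    # hypothetical trigger level

obs = {"resp_rate": 28, "heart_rate": 125, "systolic_bp": 95}
if warning_score(**obs) >= CALL_THRESHOLD:
    print("Trigger automated message to the medical emergency team")
```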
Home and ambulatory settings
Home telecare systems have for several decades now used home sensors to assist
with remote monitoring of patient conditions, by focussing on individuals with chronic
conditions or on frail and elderly patients who wish to live at home rather than move to a
higher-dependency nursing setting (Barlow et al., 2007; Celler et al., 1999). Sensors
can track traditional clinical measures such as blood pressure, blood glucose or
spirometry. They also include more mundane devices such as scales to track the weight
of patients in cardiac failure. Indeed, the whole of a home can be instrumented to track
normal activities of daily living (ADL). Sensors may record when a refrigerator is
opened, when a patient is moving around the house (by a triaxial accelerometer worn around
the neck or attached to clothing) or resting in bed. After collecting baseline data, models
of an individual’s routine activity can be created. Using strategies such as SPC, when
an individual becomes less active and control limits are breached, a remote clinical
team can intervene to check whether a patient has deteriorated or experienced a
significant event. Healthy individuals, who have an interest in maintaining good health,
or indeed improving their health, may also self-monitor and track activity and
physiological measures as they try to meet health goals (see Box 32.2).
Autonomous therapeutic devices deliver interventions and titrate their doses
based on monitored parameters
Autonomous systems can operate independently of human interaction on complex tasks
once they have been appropriately set up. Simple examples include intravenous pumps
and drug delivery systems, insulin delivery systems (Atlas et al., 2010) or ventilators.
When such devices have access to physiological data and computational reasoning
capacity, they can adapt their rate of delivery to meet the changing circumstances of a
patient. For a therapeutic device to be autonomous typically requires well-validated
physiological models, as well as high-fidelity measurement technology (e.g. implantable
glucose sensors for insulin delivery systems). These are thus examples of closed
systems incorporating feedback control (see Chapter 3), and they physically embody
the model-measure-manage cycle (see Chapter 9).
For example, ventilator settings can be changed automatically in response to
measurements of a patient’s respiratory status made by the ventilator. Where possible,
these signals can be enhanced by other measures such as blood gas measures and
cardiovascular status (Johannigman et al., 2008). Early research into systems that
could wean patients from ventilators (Fagan et al., 1984) led to the design of a CDSS
that can guide humans through the ventilator weaning process. Closed-loop systems
can automatically and gradually reduce pressure support and conduct spontaneous
breathing trials. Automated weaning has been shown to reduce the duration of
mechanical ventilation and length of stay in the intensive care unit, compared with
physician-controlled weaning (Lellouche et al., 2006).
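A minimal sketch of the feedback-control idea follows (a simple proportional controller with an invented toy ‘patient’ response; real devices rely on validated physiological models, as noted above):

```python
# Proportional feedback control: nudge a delivery rate toward a target
# reading each cycle. Gains, targets and the plant model are invented.
def control_step(measurement, target, gain=0.5, rate_max=10.0):
    error = measurement - target                  # measure: deviation from goal
    rate = gain * error                           # manage: proportional dose
    return min(max(rate, 0.0), rate_max)          # clamp to safe limits

reading = 180.0                                   # e.g. glucose, mg/dL
for _ in range(50):
    rate = control_step(reading, target=110.0)    # titrate delivery to the reading
    reading -= 0.4 * rate                         # toy model of patient response
print(round(reading, 1))                          # settles near the 110 target
```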
28.4 Automated interpretation and control systems can assist in situations with high
cognitive loads or varying expertise
Intelligent monitoring technologies have a role in assisting with clinical vigilance. This
may be necessary with slowly evolving conditions, in which the monitor can sample a
signal more frequently than a human and obtain a better estimate of the changes under
way. New conditions may also be missed because of distractions in the workplace and
inexperience, as well as human decision biases that impede clear decision-making.
Such challenges in continuously monitoring patient data are not unique to healthcare.
They are also an issue, for example, for airline pilots and nuclear power plant operators.
High cognitive load can lead to a failure to detect monitored events
There are finite limits to cognitive resources such as memory that humans can devote to
reasoning (see Box 8.2). These resources can thus be overloaded by some activities at
the cost of others (Sweller et al., 2011). Such cognitive overloading can cause critical
patient information to be missed or misinterpreted when working with real-time
monitoring data – a condition also known as inattentional blindness, or ‘looking but
not seeing’. There are several major mechanisms that contribute to this phenomenon.
First, the amount of information available in some monitoring systems may be greater
than can be assimilated by an individual at one time. With multiple time-varying signals
to monitor, a human’s cognitive resources may quickly become stretched trying to keep
track when the situation is complex or evolving quickly. Such data overload can cause
the observer to fail to notice significant events.
This situation can be compounded by the clinical environment itself, which provides
many distractions that compete with monitored data for the clinician’s attention. These
include tasks other than monitoring that may need to be carried out at the same time,
especially in settings such as the emergency room, intensive care or the delivery of
anaesthesia. Worse still, monitors themselves may flood clinicians with false or trivial
alarms, providing further unnecessary distraction (Koski et al., 1990).
Interruption, as we saw in Chapter 4, can cause a current task to be disrupted, delayed
or entirely forgotten while an individual attends to the new task brought by the
interruption. All these sources of distraction reduce the cognitive resources, such as
working memory and attention, that can be devoted to signal interpretation and increase
the likelihood of an error of interpretation or a failure to notice data events.
Intelligent monitoring systems can assist clinicians working in high cognitive load
settings in a number of ways. First, these systems can generate alarms when significant
events occur, although as we have just noted, too many alarms add to cognitive load
and become a burden themselves. Less intrusive highlighting of events, such as colour
changes, and onscreen messages can provide non-interruptive cues to significant
signal changes.
At the other end of the scale, the process of data validation can be automated. At
present, it is up to the clinician to ascertain whether a measurement accurately reflects
a patient’s status or is in error. Although in many situations, signal error is clear from the
clinical context, it can also manifest as subtle changes in the shape of a waveform.
Without quite specialized expertise, clinicians may misinterpret measured data as being
clinically significant when the data in fact reflect an error in the measurement system.
For example, changes in the bedside height of a pressure transducer can significantly
alter the measurements it produces.
Variations in expertise are associated with variations in decision performance and the
need for automated support
The level of expertise that individuals bring to a task such as the interpretation of signals
varies enormously, and it is not always possible for such deficits to be remedied by
consultation with more skilled colleagues. For example, most complications associated
with anaesthesia, in which clinicians are highly dependent on the use of monitoring
equipment to assess patient status, result from inadequate training or insufficient
experience of the clinician (Cooper et al., 1984; Sykes, 1987).
Rare events may thus be missed or misinterpreted because they are outside the
experience of an individual. Complexity is also introduced when more than one disease
process is active in a patient. These disorders may interact to alter the normal
presentation of signals one expects. In the absence of previous experience, the only
way such diagnoses can be made is to work back from first principles, and this often
requires a deep knowledge of the pathophysiological mechanisms involved.
Again, alarms and highlights can direct an inexperienced user to notice signal patterns
and, with CDSS support, suggest interpretations. Because there is a great difference in
the support needs of experienced and inexperienced clinicians, an alert that is of high
utility to a junior clinician can be seen as an annoying distraction to the experienced
practitioner. Many systems allow experienced clinicians to change alarm and CDSS
settings to suit their personal preferences, to allow some accommodation to experience.
However, the risk of this approach is that even an experienced clinician can be
distracted or overloaded and may miss events that he or she normally would have no
difficulty detecting. Arriving at policies about safe alarm settings is thus not an easy
task, given the need not to impede workflow while maintaining high standards of patient
safety.
Automation bias occurs when a human uncritically agrees with incorrect guidance from
a clinical decision support system
There is a wide literature on the causes and effects of human error (see Chapter 13).
One significant cause of error is decision bias, covered in Chapter 8. Automation bias or
automation-induced complacency is an additional bias associated with CDSSs and
monitoring tasks (Parasuraman and Manzey, 2010).
In a laboratory experiment, users were given a simulated flight task. Some had the
benefit of using a computer that monitored system states and made decision
recommendations (Skitka et al., 1999). When the aid worked perfectly, the users of the
system outperformed those who did not use it. When the computer aid was not perfectly
reliable and occasionally failed to prompt the user when action was needed or
incorrectly prompted the user to carry out an action when none was needed, the
situation changed. Study subjects without the CDSS outperformed those with the
CDSS. Those using the CDSS made both errors of omission (they missed events
because the system did not prompt them) and errors of commission (they did
what the decision aid told them to do, even when it contradicted their training and
available indicators). In the study, CDSS users had 59 per cent accuracy on the
omission error events, compared with 97 per cent for the non-computer users, and
performance was even worse with commission error, with an accuracy of only 35 per
cent.
There are many possible explanations for these types of error. It has been suggested
that humans who trust a computer system shed responsibility for tasks and devolve
them to the computer system. Computer users may as a result develop an ‘out of loop
unfamiliarity’ with the system they are meant to be monitoring because they have
delegated the monitoring of data to the automated aid and so have effectively taken
themselves out of the decision loop (Wickens et al., 2012). If an urgent event occurs,
the consequence of out of loop unfamiliarity may be that it takes much longer to
develop situational awareness, because the human is unfamiliar with the current state of
the data and must build a mental model, which may take more time than is
available to solve the problem. In contrast, without a decision aid, the human has no
choice but to maintain an active mental model of the state of the system being
monitored.
Evidence suggests that explicit training in automation bias has a short-term benefit only,
but making individuals personally accountable for their decisions does seem to reduce
automation bias. Specifically, if individuals are told that their actions are socially
accountable, because data on their performance are being recorded, and that they
will be held accountable for the outcome of their performance, then individuals spend
more time verifying the correctness of the decision aid’s suggestions by checking data,
and therefore they make fewer errors (Skitka et al., 2000).