Learning for safety in health care and air traffic control

Ternov, Sven (2011)
Abstract
Introduction



Risk management in enterprises, organisations and companies has had a long and complicated history.

During the eighties, and at least into the beginning of the nineties, the prevailing notion in risk management was that if an accident happened in an otherwise perfect system, it was because the human operator had somehow caused the error. The causes of accidents were described in terms of "negligence", "lack of competence" and similar labels.



Gradually, during the late nineties, the risk management paradigm shifted.

James Reason, a psychologist, made a tremendous impact with his book Human error, published in 1990.

He introduced the term latent failures (or latent conditions). These, he said, are "resident pathogens" built into the system. They are latent because the system can live with these pathogens for months or even years and perform adequately, until something happens that hampers the "immune system of the system".

Reason states that the human operator goes to work every day with the intention of doing a good job; the operator has no wish "to screw things up". When accidents happen and operators make mistakes, these are therefore not deliberate actions. The causes should be sought in design flaws in the system.



In this thesis we deal with high-risk systems, though not high-risk technologies. We study acute somatic health care, air traffic control, pharmacy and cancer treatment, and we explore different ways for an organisation to receive feedback from safety-related occurrences in order to improve safety.



The aim of this thesis is to explore methods for obtaining safety feedback in the above-mentioned domains.

Four different approaches are attempted:



• Retrospective learning from accidents (paper I)

• Proactive learning using an external agent (paper II)

• Operator-centred learning (paper III)

• User-centred proactive learning (paper IV)



Methods and material



Methods



In paper I we used MTO (Man-Technology-Organisation) analysis as described by the nuclear power operators in Sweden, with a certain adaptation for health care.



Paper II was inspired by the work on paper I. During the numerous interviews with doctors and nurses, a quite common reaction was: "Why did we not think of these risks before? It is so obvious!"

Another concern was the limited value of retrospective investigations when it comes to improving safety.

This started us designing a method for proactive risk analysis. Several methods had already been described for this, but they were mainly tuned to technical systems with more or less tight coupling, assuming a high degree of linearity (for instance Failure Mode and Effect Analysis, FMEA). We felt these methods did not fit the way the organisations we studied functioned. The result was the DEB (Disturbance-Effect-Barrier) analysis used in paper II. The system weaknesses identified with this method were compared to system weaknesses extracted from the analyses (done by headquarters analysts) of 15 loss of separation incidents at the unit.
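
To make the structure of the method concrete, here is a minimal illustrative sketch (in Python) of how DEB-style findings can be recorded: each disturbance is linked to its possible effect and to the barriers meant to stop it, and a disturbance with no effective barrier is flagged as a latent condition. The record format, field names and example rows are our own illustration, not the formal DEB worksheet from the thesis.

from dataclasses import dataclass, field

@dataclass
class DEBRecord:
    """One row of a DEB-style analysis: a disturbance of the intended
    process, its possible effect, and the barriers expected to stop it."""
    disturbance: str                              # deviation from the intended process
    effect: str                                   # consequence if nothing stops it
    barriers: list = field(default_factory=list)  # (barrier name, judged effective?) pairs

    def is_latent_condition(self) -> bool:
        # A disturbance with no effective barrier is a built-in weakness:
        # it lies dormant until circumstances activate it.
        return not any(effective for _, effective in self.barriers)

# Hypothetical example rows, loosely inspired by the thesis domains
records = [
    DEBRecord("prescription copied by hand between chart and drug list",
              "wrong cytotoxic dose administered",
              [("second-nurse check of the copy", False)]),
    DEBRecord("two aircraft cleared to the same flight level",
              "loss of separation",
              [("short-term conflict alert", True)]),
]

for r in records:
    if r.is_latent_condition():
        print(f"Latent condition: {r.disturbance} -> {r.effect}")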



While working with this, it became obvious that one category of incidents, the loss of separation incidents (AIRPROX), was only the tip of the iceberg. Each day there were a number of near misses that did not result in loss of separation and were therefore not used for safety feedback. Talking to the controllers also revealed hidden knowledge of questionable procedures that might constitute risks. The idea was thus fairly simple: why not let the controllers do the job of analysing safety occurrences? This led to the design of a method for operator-centred learning, i.e. paper III. The method included a 1½-day briefing for the controllers on systems thinking.
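
As a rough illustration of the data side of operator-centred reporting, the sketch below (Python; the report fields and factor categories are purely hypothetical, not the actual form used in paper III) tallies controllers' learning-occurrence reports by contributing factor, so that recurring weaknesses surface without waiting for an AIRPROX.

from collections import Counter

# Reports are filed and analysed by the controllers themselves.
# Fields and factor categories here are illustrative assumptions,
# not the actual report form used in paper III.
reports = [
    {"summary": "aircraft levelled off earlier than expected",
     "factors": ["flight management system behaviour", "clearance phrasing"]},
    {"summary": "late handover from adjacent sector",
     "factors": ["coordination procedure"]},
    {"summary": "readback error caught on frequency",
     "factors": ["clearance phrasing"]},
]

factor_counts = Counter(f for r in reports for f in r["factors"])
for factor, n in factor_counts.most_common():
    print(f"{n} x {factor}")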



The starting point for paper IV was particularly tragic. I had investigated a case in which an eight-year-old girl with cancer died from a medication error: she was administered the total dose of cytotoxic agents each day for three days, i.e. a 300% overdose. We used the DEB analysis again, for a proactive risk analysis of the process of treating patients with cytotoxic drugs, but this time with a formalised user group.



Material

The material for paper I was a consecutive series of eight reports to the National Board of Health and Welfare, from acute somatic health care.



The material for paper II was a DEB analysis performed for the processes at the Malmoe air traffic control unit in Sweden.



In paper III a trial with extended reporting of learning occurrences was run for half a year. In this way an additional 45 occurrences were reported that would otherwise not have been documented and analysed.



In paper IV the DEB analysis was performed at one ward unit of the department of oncology at Lund University Hospital, taking into consideration interface problems between the ward unit and the hospital pharmacy (which prepared the cytotoxic infusions).



Results

In paper I we could demonstrate that the notion of latent conditions was fruitful for analysing and learning from medical accidents. We identified a number of system weaknesses in seven out of eight cases, providing a good potential for improving safety.



In paper II we identified a number of risks (latent conditions) in the air traffic control system. We compared the identified system weaknesses with those found in 15 loss of separation cases investigated by the regulator; the DEB analysis had identified 14 of the 15 system weaknesses that emerged from those investigations.

In paper III we could demonstrate that the operators were indeed able to analyse "learning occurrences" and to identify preventive actions, one of these being training for controllers on the aircraft flight management system. They could also show that quite a number of "unexpected flight behaviours" were actually partly caused by air traffic control actions.



In paper IV we refined the DEB analysis by using a formalised reference group of staff from the very beginning. The analysis disclosed a number of system weaknesses, which were presented to the staff. The disclosed risks were accepted as valid, and quite a few of our recommendations were implemented during the next couple of years.



Discussion

We discuss our methods in relation to current research; in particular, we discuss MTO analysis in relation to root cause analysis, and DEB analysis in relation to FMEA. We are critical of both. We find that both methods could benefit from using the notion of latent conditions, and even from applying the concepts and vocabulary of the ISO 9000 quality management standard when describing risks.
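
For readers unfamiliar with FMEA, its conventional scoring shows the linearity the method assumes: each failure mode is treated as a single cause-effect-detection chain and ranked by a risk priority number, RPN = severity x occurrence x detection (each scored 1-10). The sketch below performs this standard calculation on invented failure modes and scores; it is this one-cause-one-effect ranking that suits tightly coupled technical systems better than complex, non-linear ones.

# Conventional FMEA risk priority number: RPN = S * O * D, each factor
# scored 1-10. Failure modes and scores below are invented solely to
# illustrate the calculation.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("pump delivers wrong infusion rate", 9, 3, 4),
    ("label printed for wrong patient",   8, 2, 6),
]

for desc, s, o, d in sorted(failure_modes,
                            key=lambda m: m[1] * m[2] * m[3],
                            reverse=True):
    print(f"RPN {s * o * d:4d}  {desc}")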



We discuss the learning potential of retrospective vs. proactive analysis and are in favour of proactive methods.



We introduce complexity theory and relate it to our results. Our conclusion is that the operator-centred approach (paper III) seems to be the most effective way of influencing a complex system in a desirable manner, with regard to self-organisation and emergent properties.
Abstract (Swedish)
Popular Abstract (translated from Swedish)

Risk management in companies and organisations has had a long and complicated history.

In the eighties, and in the early nineties, the prevailing view was that if an accident occurred in an organisation claimed to be all but perfect, it had to be due to "the human factor", i.e. one or more employees not doing what they should have done. The causes of the accident were described in terms of negligence, insufficient competence and carelessness.



The view of risk management gradually changed in the late nineties. James Reason, a psychologist at the University of Manchester, made a great impact with his book "Human error". He introduced the concept of "latent failures" (or "latent conditions"). These latent failures, he argued, are "pathogens" built into production systems. A production system can live with them for a long time, until something happens that disturbs the system's "immune system". A comparison from the medical field: our bodies harbour many potentially dangerous microorganisms, but they are kept in check by other microorganisms and by our immune system of white blood cells. If the immune system is knocked out, for instance by cytotoxic treatment, these bacteria suddenly become very dangerous to our health.



In the same way, unforeseen events can occur in a production system: a number of latent failures change character from latent to highly active, and can cause an accident.



Reason says that the human in the system, the employee, goes to work every day with the ambition of doing a good job. The employee does not go to work intending to cause an accident. When accidents happen because employees make errors, it is therefore not because the error was deliberate; rather, the employee has been caught in an "error trap" created by flaws in the design of the production system.



This thesis concerns high-risk systems, but not high-risk technologies. We study acute somatic health care, air traffic control, pharmacy and a cancer clinic. We want to explore different ways for an organisation to obtain feedback from events and accidents that have occurred, so that it can learn from them and, hopefully, improve safety.



The aim of the thesis is to explore different ways of achieving this feedback, and to reflect on whether some of them are better than others at helping an organisation benefit from the information.



We test four different approaches to gathering information about risks:



• Retrospective learning from events/accidents that have occurred (paper I)

• Proactive learning, i.e. obtaining information about risks in the system without waiting for an accident to reveal them; here we use an expert to perform the analysis (paper II)

• Operator-centred learning, where we let "people on the floor" carry out the data collection and analysis (paper III)

• User-centred proactive learning. The analysis method is the same as in paper II, but with the difference that we use a reference group consisting of "people on the floor", i.e. less expert influence and more employee influence during the analysis (paper IV).







Methods and material



Methods



In paper I we used MTO (Man-Technology-Organisation) analysis as described by the nuclear power operators in Sweden, though with a certain adaptation from nuclear power to health care.



Paper II was inspired by the work on paper I. In the numerous interviews with doctors and nurses conducted for paper I, a recurring reaction was: "Why did we not think of these risks before the accident? It is so obvious."

Another reflection was that retrospective risk analyses may be good for preventing a similar accident, but the probability of the same accident happening again, in exactly the same way, is minimal (cf. Tage Danielsson's monologue on Harrisburg (Three Mile Island): "Perhaps it was a good thing, what happened at Harrisburg, although it was improbable, for now the improbable cannot happen at Harrisburg... again").



This set us designing a method for proactive risk analysis. Several methods for this had already been published, but they were mainly aimed at technical systems and assumed a high degree of linearity (if A occurs, then B and then C will very probably follow). We did not consider these methods applicable to the organisations in focus in our research (health care, air traffic control), which behave in a highly non-linear, or complex, manner. The result was the DEB (Disturbance-Effect-Barrier) analysis used in paper II. With this method we found a number of system weaknesses (latent failures and inadequate safety barriers), which we compared with the system weaknesses that emerged from the central investigation department's analyses of 15 loss of separation incidents.



During the work on paper II it became clear to us that loss of separation incidents (AIRPROX) made up only a small part of the information that could be used to improve safety. Near misses occurred regularly, i.e. more or less uncontrolled situations that were detected in time, so that the event did not develop into a loss of separation. We thus regarded AIRPROX incidents as "the tip of the iceberg" and wondered how we could reach part of the rest of the iceberg. In conversations with the air traffic controllers we found that they had considerable "hidden" knowledge of potentially risky routines and procedures. From these observations a simple idea was hatched: why not let the controllers themselves analyse safety-related occurrences? We therefore developed a method for operator-centred learning (paper III). The method included 1½ days of training for the controllers in "systems thinking", and a specification of which safety-related occurrences (beyond AIRPROX) they should report and analyse.



The starting point for paper IV was tragic. The author of this thesis investigated a case in which an eight-year-old girl lost her life. She had cancer, and by mistake received the total dose of a cytotoxic drug every day for three days, i.e. a 300% overdose.



The clinic asked us to perform a proactive risk analysis. We used the DEB analysis again, on the process "treating patients with cytotoxic drugs". This time we used a formalised reference group of doctors and nurses appointed by the clinic.



Material



The material for paper I was a consecutive series of reports (so-called Lex Maria reports) to the National Board of Health and Welfare in Malmö, from acute somatic care.



The basis for paper II was a DEB analysis of the air traffic control process at Malmö Air Traffic Control Centre (ATCC Malmoe).



The material for paper III was an extended reporting of safety-related occurrences during a six-month trial period. During this period, 45 occurrences were reported that would otherwise not have been documented and analysed.



In paper IV we performed a DEB analysis of the risks in treating patients with cytotoxic drugs at a ward unit of the department of oncology, Lund University Hospital. The analysis also covered interface problems between the clinic and the hospital pharmacy (which prepares the cytotoxic infusions).



Results



In paper I we showed that the idea of latent failures/latent conditions was fruitful for analysing and learning from accidents in health care. In seven of the eight events analysed we found risks, in the form of embedded latent failures, that could be remedied.



In paper II we used DEB analysis to identify a number of risks (latent failures) in an air traffic control system. We compared our results with the latent failures that emerged from 15 AIRPROX cases investigated by the regulator (the Swedish Civil Aviation Administration, Luftfartsverket). The DEB analysis identified 14 of the 15 system weaknesses that emerged from the AIRPROX investigations.



In paper III we showed that the controllers were well able to analyse "learning occurrences" and to propose preventive actions; one such action was training for controllers in how an aircraft's autopilot "thinks". Another result was that quite a few cases in which an aircraft did something unexpected, traditionally labelled by the controllers as "SBS" (a Swedish slang acronym for pilot error), actually turned out to have been partly caused by air traffic control, and thus were not merely flight deck errors.



In paper IV we refined the DEB analysis by relying, from the very beginning, on a formalised reference group of doctors and nurses from the clinic. The analysis revealed a number of system weaknesses/risks. These were presented to the staff at a meeting and accepted as sound (a corridor comment afterwards: "That was a good roasting!").

A large proportion of our improvement proposals were subsequently implemented.





Discussion



We discuss our methods in the light of current research; in particular, we discuss MTO analysis in relation to root cause analysis, and DEB analysis in relation to FMEA (Failure Mode and Effect Analysis). We are rather critical of both. We argue that both methods could be improved, and give better results, by using the latent-conditions concept and by applying the vocabulary and concepts of the ISO 9000 standard for quality management systems.



We discuss the learning potential of retrospective and proactive methods, and recommend proactive methods such as the DEB analysis.



We introduce complexity theory and relate it to our results. Our conclusion is that the operator-centred method (paper III) appears to be the most effective way of influencing a complex system with regard to its capacity for self-organisation and desirable emergent properties.
author
Ternov, Sven
opponent
  • PhD Majumdar, Arnab, Imperial College, London, Great Britain
publishing date
2011
type
Thesis
publication status
published
subject
keywords
Risk management, accident models, complex systems, health care, air traffic control, MTO analysis, DEB-analysis, proactive risk analysis.
defense location
Auditorium, Ingvar Kamprad Design Centre, Sölvegatan 26, Lund University Faculty of Engineering
defense date
2011-06-15 10:15:00
ISBN
978-91-7473-118-7
language
English
LU publication?
yes
id
97137531-53c3-44ef-8b22-b7b142f99631 (old id 1963814)
date added to LUP
2016-04-04 13:18:19
date last changed
2018-11-21 21:13:07
@phdthesis{97137531-53c3-44ef-8b22-b7b142f99631,
  author       = {{Ternov, Sven}},
  isbn         = {{978-91-7473-118-7}},
  keywords     = {{Risk management; accident models; complex systems; health care; air traffic control; MTO analysis; DEB-analysis; proactive risk analysis.}},
  language     = {{eng}},
  school       = {{Lund University}},
  title        = {{Learning for safety in health care and air traffic control}},
  url          = {{https://lup.lub.lu.se/search/files/6087791/1963820.pdf}},
  year         = {{2011}},
}