
Scrambled-letter passages, in which the middle letters of each word are jumbled yet the text remains easy to read, have been bouncing around the internet for years. But how do we read them? How do our brains so quickly make sense of these jumbled letters? The answer is simple: top-down processing.

What Is Top-Down Processing?

In top-down processing, perceptions begin with the most general and move toward the more specific. These perceptions are heavily influenced by our expectations and prior knowledge. Put simply, your brain applies what it knows to fill in the blanks and anticipate what's next.

For example, if half of a tree branch is hidden from view, you usually still have an idea of what the whole branch looks like. This is because prior knowledge tells you what trees look like.

Processing information from the top down allows us to make sense of information that has already been brought in by the senses, working downward from general impressions to particular details.

Why We Use Top-Down Processing

In a world where we are surrounded by virtually limitless sensory experiences and information, top-down processing can help us quickly make sense of the environment.

Our senses are constantly taking in new information. At any given time, we're experiencing a never-ending stream of sights, sounds, smells, tastes, and physical sensations. If we had to focus equally on all of these sensations every second of every day, we would be overwhelmed.

Top-down processing helps simplify our understanding of the world. It allows us to quickly make sense of all the information our senses bring in. As you begin to take in more information about your environment, your initial impressions (which are based on previous experiences and patterns) influence how you interpret the finer details.

This type of processing can be useful when we are looking for patterns in our environment, but these predispositions can also hinder our ability to perceive things in new and different ways.

Influences on This Process

A number of things can influence top-down processing, including context and motivation. The context, or circumstances, in which an event or object is perceived can influence what we expect to find in that particular situation.

If you are reading an article about food and nutrition, for example, you might interpret a word you're not familiar with as something related to food. Motivation can also make you more likely to interpret something in a particular way. For example, if you were shown a series of ambiguous images, you might be more motivated to perceive them as food-related when you're hungry.

Examples of Top-Down Processing

In order to better understand how top-down processing works, it can be helpful to explore a few examples of this phenomenon in action.

The Stroop Effect

One classic example of top-down processing in action is a phenomenon known as the Stroop effect. In this task, people are shown a list of words printed in different colors. They’re then asked to name the ink color, rather than the word itself. 

Interestingly, people are much slower and make more mistakes when the meaning of the word and the ink color don't match. For example, people have a harder time when the word "red" is printed in green ink rather than red ink.

Top-down processing explains why this task is so difficult. People automatically recognize the word before they think about its specific features (such as the color it's written in). This makes it easier to read the word aloud than to name its ink color.

Typos

You type a message to your boss, proofread it, and hit 'Send.' Only after the message has vanished into the ether do you spot three typos in the first few sentences.

If you've experienced some version of this scenario, you're not alone. Most people find it difficult to catch their own typos, but that's not because they're stupid. According to psychologist Tom Stafford, it may actually be because you're smart!


When you're writing, you're trying to convey meaning. It's a very high-level task... We don't catch every detail, we're not like computers or NSA databases. Rather, we take in sensory information and combine it with what we expect, and we extract meaning.

— Tom Stafford, psychologist at the University of Sheffield in the UK

Because writing is such a high-level task, your brain tricks you into reading what you think you should see on the page. It fills in missing details and corrects errors without you even noticing, freeing you to focus on the more demanding work of turning sentences into complex ideas.

Lucy Cunnama (1), Edina Sinanovic (1), Lebogang Ramma (1), Nicola Foster (1), Leigh Berrie (2), Wendy Stevens (2), Sebaka Molapo (2), Puleng Marokane (2), Kerrigan McCarthy (3), Gavin Churchyard (3,4,5), Anna Vassall (5)

1 Health Economics Unit, School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa
2 National Priority Programmes, National Health Laboratory Services, Johannesburg, South Africa
3 Aurum Institute for Health Research, Johannesburg, South Africa
4 School of Public Health, University of Witwatersrand, Johannesburg, South Africa
5 London School of Hygiene and Tropical Medicine, London, United Kingdom

Correspondence to: Health Economics Unit, School of Public Health and Family Medicine, Health Sciences Faculty, University of Cape Town, Anzio Road, Observatory 7925, South Africa. E-mail: lucy.cunnama@uct.ac.za

Received 2015 Jan 19; Revised 2015 Oct 27; Accepted 2015 Nov 5.

Copyright © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.

This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.

Associated Data

Supporting information item: HEC-25-53-s001.doc (565K)

Abstract

Purpose

Estimating the incremental costs of scaling up novel technologies in low-income and middle-income countries is a methodologically challenging and substantial empirical undertaking in the absence of routine cost data collection. We demonstrate a best-practice pragmatic approach to estimating the incremental costs of new technologies in low-income and middle-income countries, using the example of costing the scale-up of the Xpert Mycobacterium tuberculosis (MTB)/rifampicin resistance (RIF) test in South Africa.

Materials and methods

We estimate costs by applying two distinct approaches, bottom-up and top-down costing, together with an assessment of processes and capacity.

Results

The unit costs measured using the bottom-up and top-down methods are, respectively, $US16.9 and $US33.5 for Xpert MTB/RIF, and $US6.3 and $US8.5 for microscopy. The incremental cost of Xpert MTB/RIF is estimated to be between $US14.7 and $US17.7. While the average cost of Xpert MTB/RIF was higher than in previous studies using standard methods, the incremental cost of Xpert MTB/RIF was found to be lower.

Conclusion

Cost estimates are highly dependent on the method used, so an approach that clearly identifies whether resource-use data were collected from a bottom-up or top-down perspective, together with capacity measurement, is recommended as a pragmatic way to capture true incremental cost where routine cost data are scarce.

Keywords: LMICs, tuberculosis, cost analysis, economies of scale, diagnostics, scale‐up

1. Introduction

In many low-income and middle-income countries (LMICs), health system capacity is severely constrained, and the roll-out of new technologies for common diseases often occurs at a rapid pace and large scale, frequently without demonstration of the cost of scale-up. Initial assessments of the costs and cost-effectiveness of new technology scale-up are often based on costs collected from small-scale demonstrations or trial settings. However, costs at scale and in routine settings may differ substantially. This difference is likely to operate in several directions: on the one hand, in routine settings, procedures and staff may perform less well or efficiently than in small-scale trial sites; on the other hand, there may be economies of scale from roll-out. In addition, trial-based costing is often unable to assess the extent to which the existing capacity of fixed (health system) resources can sufficiently absorb a new technology at scale. There may therefore be considerable uncertainty around estimates of true system-wide incremental cost. These factors have led some authors to conclude that key policy decisions should not be based on the results of costing studies that do not include an assessment of current capacity utilisation (Adam et al.). Finally, early trial-based estimates often exclude any incremental costs above the delivery-site/laboratory level, for instance, the human resource management, supply chain management and information technology support required for rapid scale-up.

While in high-income countries estimating costs at scale may be a matter of extraction from routine systems (Chapko et al.), in many LMICs routine reporting of procedure-specific/service-specific costs is scarce (Conteh and Walker). In practice, many health economists working in LMICs spend large proportions of their effort and time collecting empirical cost data as part of any economic evaluation of new technologies when these are first rolled out. This empirical effort is considered essential, both for arriving at accurate incremental cost-effectiveness ratios and for assisting programme managers and funders in planning and budgeting for scale-up, but such large studies can be both resource intensive and costly. Given the limited resources and capacity for economic evaluation in LMICs, developing efficient costing methods is an important methodological field of enquiry.

A key decision in costing, which fundamentally impacts study cost and resource requirements, is whether to use a top-down or bottom-up approach. In simple terms, in a top-down approach, the analyst takes overall expenditures for each ingredient/input at a central level and then allocates costs using formulae (based on allocation factors such as building usage, staff time and staff numbers) to estimate unit procedure or service costs (Flessa et al.; Conteh and Walker). A bottom-up approach uses detailed activity and input-usage data from records (or observed usage) at the service provider level to estimate unit costs (Chapko et al.; Batura et al.).
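To make the distinction concrete, here is a minimal sketch of the two calculations. All figures and helper functions are hypothetical and for illustration only; they are not this study's data or tooling.

```python
# Minimal sketch of the two costing approaches (all figures hypothetical).

def top_down_unit_cost(total_expenditure, allocation_factor, tests_performed):
    """Top-down: allocate a share of central expenditure to the test,
    then divide by the number of tests actually performed."""
    return total_expenditure * allocation_factor / tests_performed

def bottom_up_unit_cost(observed_inputs):
    """Bottom-up: price out the inputs observed to be used per test.
    observed_inputs: iterable of (quantity_per_test, price_per_unit)."""
    return sum(qty * price for qty, price in observed_inputs)

# A lab spends $50,000/year on staff; timesheets attribute 40% of staff
# time to this test; 10,000 tests were performed in the year.
staff_top_down = top_down_unit_cost(50_000, 0.40, 10_000)   # $2.00 per test

# The same input measured bottom-up: 6 observed hands-on minutes per test
# at $0.15 per staff-minute.
staff_bottom_up = bottom_up_unit_cost([(6, 0.15)])          # $0.90 per test

# The $1.10 gap is idle or under-utilised staff time: top-down costing
# absorbs it into the unit cost, bottom-up costing does not.
print(f"top-down ${staff_top_down:.2f} vs bottom-up ${staff_bottom_up:.2f}")
```

The gap between the two figures previews the paper's central argument: top-down unit costs fold in unused capacity, while bottom-up unit costs approximate a comprehensive minimum.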

Previous research (e.g. Hendriks et al.) mostly suggests that when bottom-up and top-down methods are compared, bottom-up costing is likely to be more accurate, as it is assumed to capture more comprehensively the resources used in providing a particular service. Others suggest that the data required for bottom-up costing may be easier to access than those for top-down costing. However, bottom-up costing may be considered more time demanding, specific to the setting and expensive to undertake (Wordsworth et al.; Simoens). Another study comparing the two methods, based on national data in the United States of America (Chapko et al.), examined the agreement between bottom-up and top-down methods and highlighted the strength of each method for assessing different constructs of cost: top-down methods capture national long-run average costs, while bottom-up methods capture site-level differences.

In practice, however, because of limited data availability in LMICs, most researchers costing at the site level use a mixture of bottom-up and top-down methodologies (Hendriks et al.), depending on the importance of, and data availability for, each ingredient in the costing. This mixed approach is also conventionally applied to obtain the incremental cost estimates used in economic evaluations (Wordsworth et al.). For example, staff costs and some material costs may be estimated using a bottom-up approach, whereas equipment costs, consumables and building space are assessed using a top-down methodology (Hendriks et al.). A plethora of disease-specific costing tools is currently available to guide researchers collecting costs in LMICs, with a detailed list of ingredients and some collection of overhead costs (or a mark-up) at the service level (UNAIDS; World Health Organization).

We argue here that, as commonly applied in LMICs, bottom-up and top-down methods capture fundamentally different types of cost, and that more care therefore has to be taken when mixing these methods or when assuming one method is more accurate than another. This is particularly the case when applying these costs to the estimation of incremental cost for the purposes of economic evaluation. Our central thesis is that bottom-up costing methods, although accurate in capturing a comprehensive minimum cost, may under-report inefficiency or under-utilised capacity, whereas top-down methods fully capture this inefficiency, although they may be less 'accurate' than bottom-up costing. By reflecting different extents of inefficiency and under-utilised capacity, a mixed approach may misrepresent the true incremental costs of new technologies implemented at scale.

To illustrate this argument, we use the example of the introduction of a new diagnostic for tuberculosis (TB), the Xpert test, which identifies Mycobacterium tuberculosis (MTB) deoxyribonucleic acid (DNA) and resistance to rifampicin (RIF) (Cepheid, Sunnyvale, California), in South Africa. We compare the use of top-down and bottom-up costing methods to answer the following questions: How do the total costs of TB diagnosis change over time during Xpert MTB/RIF scale-up? And what are the unit and incremental costs of performing Xpert MTB/RIF under routine conditions during scale-up? We also examine a range of other indicators of capacity usage and combine these with both sets of cost estimates to provide an indication of incremental cost, and in doing so demonstrate a pragmatic method for improving the evaluation of the costs of new technologies at scale in LMIC settings.

2. Study Design, Materials and Methods

2.1. Background

Tuberculosis control is an important global health concern. In 2012, an estimated 8.6 million individuals developed active TB worldwide, of whom 13% were co-infected with the human immunodeficiency virus (HIV). South Africa is a high TB incidence setting, with around 860 cases per 100 000 population (World Health Organization), and notifies the highest number of extensively drug-resistant TB cases in the world. Historically, the diagnosis of TB has been conducted using smear microscopy (hereafter, microscopy). Microscopy has limited sensitivity and cannot identify multi-drug-resistant tuberculosis (MDR-TB). In recent years, considerable effort and funding have gone into developing rapid diagnostic tests for TB that include the identification of MDR-TB and that are affordable and feasible for use in LMICs. The Xpert MTB/RIF assay is an automated cartridge-based molecular test that uses a sputum specimen to detect TB as well as rifampicin resistance. A recent Cochrane systematic review found that Xpert MTB/RIF has a specificity of 99% and a sensitivity of 89%, compared with a sensitivity of 65% for microscopy; Xpert MTB/RIF is also both sensitive and specific in detecting pulmonary TB in those with HIV infection (Steingart et al.). In 2010, the World Health Organisation (WHO) recommended that Xpert MTB/RIF replace microscopy where resources permit this change (WHO).

Model-based economic evaluations by Vassall et al. and Menzies et al., which followed shortly after this recommendation, suggested that the introduction of Xpert MTB/RIF is cost-effective and could lead to a considerable reduction in both TB morbidity and mortality in Southern Africa. However, the concern was raised that Xpert MTB/RIF may place increased demand on scarce healthcare resources (Menzies et al.) and has a high opportunity cost. These early analyses used a mixed but primarily bottom-up costing method, which was rapid and based either on product prices or on retrospective trial cost data collected at a small number of demonstration trial sites.

2.2. Xpert for TB: Evaluating a New Diagnostic [XTEND] study setting and site selection

Following the WHO recommendation, a substantial investment was made to rapidly roll out Xpert MTB/RIF in South Africa. The intention was for Xpert MTB/RIF to be rolled out nationally using a staged approach over 2 years, starting in 2011 (Vassall et al.). After a request by the National Department of Health, the roll-out was initiated with the introduction of 30 Xpert MTB/RIF instruments placed in laboratories, prioritising high-burden TB areas (National Health Laboratory Service).

As part of the South African roll-out, the Department of Health agreed to a pragmatic trial, XTEND (of which this work is a part), funded by the Bill and Melinda Gates Foundation, to evaluate both the impact and the cost-effectiveness of Xpert MTB/RIF introduction in routine settings. Ethical approval for the XTEND study was granted by the ethics committees of the University of Witwatersrand, the University of Cape Town, the London School of Hygiene and Tropical Medicine and WHO. The study ran from 2011 until the end of 2013. The XTEND study was a pragmatic cluster randomised study that evaluated the mortality impact and cost-effectiveness of the Xpert MTB/RIF roll-out in South Africa (Churchyard). The trial was implemented in four provinces, namely the Eastern Cape, Mpumalanga, Free State and Gauteng (Table 1). The Western Cape, Limpopo, North West, KwaZulu-Natal and Northern Cape provinces were not included in the XTEND study. In order to reflect a mix of implementation and geographical settings, a range of laboratories was included.

Table 1

Site description of 21 laboratories observed

| Study arm | Laboratory | Province | Annual microscopy tests (FY 2012/2013) | Annual Xpert MTB/RIF tests (FY 2012/2013) | Observation visits: microscopy | Observation visits: Xpert MTB/RIF |
|---|---|---|---|---|---|---|
| Intervention | Lab 1 | Free State | 8 014 | 15 892 | 2 | 2 |
| Intervention | Lab 2 | Gauteng | 30 031 | 15 257 | 2 | 2 |
| Intervention | Lab 3 | Gauteng | 12 655 | 15 378 | 2 | 2 |
| Intervention | Lab 4 | Mpumalanga | 19 105 | 9 950 | 1 | 1 |
| Intervention | Lab 5 | Eastern Cape | 5 376 | 12 410 | 2 | 2 |
| Intervention | Lab 6 | Eastern Cape | 12 676 | 12 739 | 2 | 2 |
| Intervention | Lab 7 | Eastern Cape | 14 940 | 8 406 | 2 | 2 |
| Intervention | Lab 8 | Gauteng | 20 644 | 12 471 | 1 | 1 |
| Intervention | Lab 9 | Gauteng | 0 | 1 212 | 0 | 1 |
| Intervention | Lab 10 | Eastern Cape | 5 713 | 11 747 | 0 | 1 |
| Control | Lab 11 | Gauteng | 27 401 | 774 | 1 | 1 |
| Control | Lab 12 | Mpumalanga | 63 328 | 1 088 | 2 | 1 |
| Control | Lab 13 | Free State | 29 427 | 211 | 2 | 1 |
| Control | Lab 14 | Gauteng | 25 602 | 375 | 1 | 1 |
| Control | Lab 15 | Eastern Cape | 32 700 | 0 | 2 | 0 |
| Control | Lab 16 | Eastern Cape | 52 565 | 0 | 2 | 0 |
| Control | Lab 17 | Eastern Cape | 25 526 | 2 425 | 2 | 1 |
| Control | Lab 18 | Gauteng | 26 465 | 23 | 1 | 0 |
| Control | Lab 19 | Eastern Cape | 15 710 | 2 | 1 | 0 |
| Control | Lab 20 | Gauteng | 2 665 | 5 | 0 | 0 |
| Reference laboratory | Lab 21 | Gauteng | 73 115 | 0 | 1 | 0 |


We selected all 20 XTEND peripheral laboratories and one reference laboratory to explore how costs vary with scale as comprehensively as was feasible. The 20 peripheral laboratories were stratified by province. They were then randomised (see Churchyard et al. for more information) by a statistician using Stata (version 11, StataCorp LP, College Station, Texas, USA) to the intervention arm, where Xpert MTB/RIF was implemented immediately, or the control arm, where Xpert MTB/RIF implementation was delayed and laboratories continued to use microscopy until implementation (Churchyard et al.). In order to capture cost at different scales of Xpert MTB/RIF implementation, we measured costs at several time points (Figure ). Intervention laboratories were costed at start-up during observation 1 (July and October 2012; early stage of Xpert MTB/RIF implementation) and approximately 6 months after Xpert MTB/RIF introduction during observation 2 (March 2013; late stage of Xpert MTB/RIF implementation). At the control laboratories, microscopy was costed at the beginning of the trial (August 2012). In addition, at the end of the trial period (post-enrolment), the control laboratories began to implement Xpert MTB/RIF. This did not affect the trial, because the intervention occurred at enrolment, but it gave the opportunity to cost Xpert MTB/RIF as it was being introduced at very low levels of usage (February and June 2013; start-up of Xpert MTB/RIF implementation).

2.3. Tuberculosis tests costed

Microscopy and Xpert MTB/RIF were costed in all control and intervention laboratories. At the reference laboratory, we costed microscopy, TB culture, drug-sensitivity testing (DST) and polymerase chain reaction tests (see the Supporting Information and Figure  for more information about the tests and the current diagnostic algorithm). TB culture was conducted using a Mycobacteria Growth Indicator Tube (MGIT) (BD Microbiology Systems, Cockeysville, MD, USA). The polymerase chain reaction test assessed was the MTBDRplus test (Hain Lifescience, Nehren, Germany), a line-probe molecular assay. DST is conducted to identify which specific TB drugs a TB strain is resistant to and also makes use of an MGIT tube and the MGIT system (National Department of Health).

2.4. Costing methodology

We estimated costs using both a top-down approach and a bottom-up approach, with a complementary process assessment (Supporting Information). Our bottom-up method primarily used detailed records and observation to measure resource use. For example, the cost of an Xpert MTB/RIF machine is estimated according to the number of minutes it was observed to be used (for each test) multiplied by the cost per minute. Other methods for bottom-up costing included using patient or provider semi-structured interviews to determine resource use, particularly how long a service takes to deliver (Supporting Information).

We conducted our top-down costing by allocating total laboratory expenditures between site-level activities and tests, using measured or recorded allocation criteria. For example, the cost of a laboratory staff member may be allocated between several tests based on their time usage as recorded through timesheets. This allocated cost was then divided by the number of tests conducted. Timesheets were primarily used for top-down cost allocation, whereas observation of time spent on processing was used for bottom-up costing.

2.5. Measurement and valuation methods for site‐level costs

The overarching approach was micro-costing rather than gross-costing. All recurrent and capital costs involved in TB diagnosis were first identified for all the laboratories, using interviews and observations of processes. Recurrent inputs identified included human resource costs, overheads and running costs (utilities, etc.), chemicals and consumables. Capital costs included building, equipment and furniture costs. A data collection and management tool was developed in Microsoft Excel (Redmond, WA, USA), in order to allow for systematic gathering of information. This tool was tested in three pilot laboratories in the Western Cape prior to commencing data collection, which aided the understanding of practices in laboratories, allowed for modification of the tool and prepared the researchers for fieldwork.

Bottom-up unit costs were then estimated through the observation of TB testing in the laboratories. A mixed-method approach was used, incorporating timed observation of staff procedures and discussion with laboratory staff and managers to better understand processes and procedures, how observed resource usage relates to routine practice, workload (including batching) and other key diagnostic processes. Sputum specimens were observed from delivery to the recording of test results. Staff members were observed, their level of qualification was noted and the hands-on processing was timed for each component of the testing. Two to four batches were observed for each test (each microscopy batch had an average of 29 specimens, and each Xpert MTB/RIF batch an average of 15 specimens), depending on the size of the batches and the limits of the observation period. Equipment usage was documented, as were the quantities of consumables, chemicals and reagents used. The processing area (floor and desk space) was measured for each component of the testing. Management and space-related overheads were allocated as mark-ups on a 'per minute' and 'per metre' basis, respectively, using overall laboratory expenditures, staffing hours and space.

For the top-down costs, total costs for the entire laboratory were first determined using financial records on the laboratory's total expenditure and fixed assets (for overheads, transport, personnel and building space), while the TB section's total expenditures were used for chemicals and reagents, consumables and equipment. Valuation of equipment, other capital items and building space used replacement prices. The allocation of total laboratory expenditure to the TB section depended on the input (overheads, transport, equipment, etc.). The proportion of space used for TB diagnostics was measured and used to allocate general building, utilities, cleaning and security costs. Timesheets, filled in for the duration of 1 week, were used for management and administration costs. Transport costs (for the collection of specimens from the clinics) were allocated using the annual number of TB tests performed at the laboratory as a percentage of the total number of all tests processed in the laboratory. The allocation of all costs to specific TB tests was carried out by having the staff involved in TB procedures fill in timesheets themselves for a 1-week period; afterwards, the researchers held interviews to apportion time usage and complete the timesheets. Building space was allocated according to proportional use (combining the space measured and the proportional time per test), and test-specific consumables and reagents costs were sourced through scrutiny of financial and laboratory records.

2.6. Measurement and valuation methods for above site‐level costs

In order to capture the broader 'above laboratory' costs, expenditures for above-laboratory services such as the human resource management, information technology and finance departments were obtained from the National Health Laboratory Service (NHLS). Following interviews with the NHLS and detailed discussions with management staff, management costs were divided by the NHLS's national laboratory staff numbers and then allocated to specific laboratories based on the number of employees in each laboratory, as a proxy for laboratory size. These costs were then allocated to specific tests using the number of tests performed, weighted by the time observed for each test. The same figures were used in both the bottom-up and top-down costing.
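The allocation chain just described can be sketched in a few lines. Everything below is hypothetical (made-up expenditures, staff counts, volumes and hands-on minutes), intended only to show the cascade from national expenditure down to a per-test cost.

```python
# Above-laboratory cost cascade (all figures hypothetical).
national_mgmt_cost = 4_000_000   # national management expenditure, $/year
national_staff = 5_000           # laboratory staff nationally
lab_staff = 12                   # employees in this laboratory

# Step 1: spread national cost per staff member, then attribute it to the
# laboratory by headcount (headcount as a proxy for laboratory size).
lab_share = national_mgmt_cost / national_staff * lab_staff   # $9,600/year

# Step 2: split the lab's share across tests by volume weighted by the
# observed hands-on time per test.
tests_per_year = {"xpert": 12_000, "microscopy": 20_000}
minutes_per_test = {"xpert": 6.0, "microscopy": 4.0}
weights = {t: tests_per_year[t] * minutes_per_test[t] for t in tests_per_year}
total_weight = sum(weights.values())

above_lab_cost_per_test = {
    t: lab_share * weights[t] / total_weight / tests_per_year[t]
    for t in tests_per_year
}
print(above_lab_cost_per_test)   # e.g. {'xpert': ~0.38, 'microscopy': ~0.25}
```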

The NHLS provided detailed costs for the transport of specimens, calibration of instruments, personnel and training, and other above-laboratory costs for the laboratories. In addition, the NHLS provided data on key prices, such as replacement costs for buildings and equipment and the prices of reagents and consumables. Cepheid provided prices for the Xpert MTB/RIF equipment, test kits and calibration. The prices of other items were obtained directly from medical suppliers and companies in the industry, such as Lasec, Sigma and The Scientific Group. All prices relate to the study year 2013. All capital items were annuitised using a 3% discount rate, with buildings assumed to have a life expectancy of 30 years and equipment and furniture 2–15 years.
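Annuitisation converts a one-off replacement price into an equivalent annual cost using the standard annuity factor (1 − (1 + r)^−n)/r. A minimal sketch, using the 3% rate and the lifespans stated above but hypothetical prices:

```python
# Equivalent annual cost of a capital item (standard annuitisation).
def annualised_cost(replacement_cost, life_years, discount_rate=0.03):
    r, n = discount_rate, life_years
    annuity_factor = (1 - (1 + r) ** -n) / r
    return replacement_cost / annuity_factor

# Hypothetical examples using the study's 3% discount rate:
print(round(annualised_cost(17_000, 5)))     # instrument, 5-year life: ~$3,712/yr
print(round(annualised_cost(300_000, 30)))   # building, 30-year life: ~$15,306/yr
```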

2.7. Data analysis

Total costs were estimated for each site. Average and median costs per test using both the bottom-up and top-down methods were calculated and compared over time and between the control and intervention groups. We also estimated the average incremental cost of Xpert MTB/RIF using two methods. The first was a simple comparison between the average (unit) cost of a diagnostic specimen in the control sites and in the intervention sites. However, as the division between control and intervention sites was not balanced for efficiency, we adopted an additional method. This second method compares the total cost of both Xpert MTB/RIF and microscopy diagnosis in the intervention sites with the total cost of the same number of tests assuming they had all been conducted using microscopy. Both methods use the top-down unit costs described earlier.
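The second method can be sketched as follows. The volumes and unit costs here are hypothetical placeholders, not the study's figures; in the paper the comparison is built from site-level totals, so scale effects do not cancel the way they do in this single-unit-cost toy version.

```python
# Sketch of the second incremental-cost method (hypothetical numbers).
def incremental_cost_per_xpert_test(n_xpert, n_micro, uc_xpert, uc_micro):
    """Observed cost of the intervention test mix minus the counterfactual
    cost of running the same total number of tests as microscopy only,
    expressed per Xpert test."""
    observed = n_xpert * uc_xpert + n_micro * uc_micro
    counterfactual = (n_xpert + n_micro) * uc_micro
    return (observed - counterfactual) / n_xpert

# With a single constant unit cost this collapses to uc_xpert - uc_micro:
print(incremental_cost_per_xpert_test(100_000, 40_000, 19.7, 10.5))  # 9.2
```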

Given the small sample size, and thus the exploratory nature of the results, we explored economies of scale using an ordinary least squares regression of the number of Xpert MTB/RIF tests processed in the year (dependent variable) against the unit costs (independent variable) (top-down and then bottom-up, measured during the first and second observations). We did not statistically analyse any other drivers of costs (again, given the small sample size). We use the results of our process analysis to interpret and arrive at conclusions on both the incremental costs and the laboratory systems implications of the scale-up of Xpert MTB/RIF. Results are presented in US dollars ($US1 = ZAR9.62; www.oanda.com average exchange rate, January–December 2013).
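A minimal sketch of that regression on fabricated data (the points below are invented to mimic unit costs falling with volume; the study's own data sit in Tables 2 and 3):

```python
import numpy as np

# Fabricated example: annual Xpert volume vs top-down unit cost per test.
unit_costs = np.array([62.0, 31.0, 24.0, 22.0, 20.5, 19.0])       # $US/test
volumes = np.array([1_200, 5_400, 8_400, 9_900, 12_400, 15_300])  # tests/year

# Following the text, volume is the dependent variable and unit cost the
# regressor; with a single regressor the correlation is symmetric anyway.
slope, intercept = np.polyfit(unit_costs, volumes, 1)
r = np.corrcoef(unit_costs, volumes)[0, 1]
print(f"slope = {slope:.0f} tests per $1, intercept = {intercept:.0f}, r = {r:.2f}")
```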

3. Results

3.1. Sites costed

Our final sample, with the number of tests performed per laboratory and the number of observations made, is shown in Table 1. As is common in costing studies in 'real-world' LMIC settings during scale-up, it was not feasible in practice to observe all the laboratories during both periods (Tables ). Firstly, microscopy could not be observed during the first observation at two intervention sites because microscopy specimens were not being processed at the time (lab 9 had stopped processing microscopy specimens because of downscaling, and lab 10 had staff shortages and so was not processing microscopy during the observation). In the second observation, it was not feasible to observe Xpert MTB/RIF testing in all sites for a number of reasons: the closing of laboratories due to downscaling; a change of laboratory information system and the auditing of a laboratory (both of which meant that researchers were unable to be in the laboratory over this period); tests not being processed owing to a lack of stock or staff; and night-time processing in inaccessible areas. Of the intended 70 observations for microscopy and Xpert MTB/RIF, 49 were undertaken (70%).

3.2. Total costs

Figure 1a presents the trend in total costs for the intervention laboratories using the top-down approach (averaged over the two observations) for both microscopy and Xpert MTB/RIF over 1 year. The corresponding numbers of tests processed are shown in Figure 1b. The overall cost of Xpert MTB/RIF broadly increases over time with the phased implementation of Xpert MTB/RIF in the intervention laboratories. In the intervention laboratories, the costs of microscopy do not fall to zero, as microscopy tests are still used to monitor response to treatment. Xpert MTB/RIF has an estimated annual total cost of $US2 805 727 across the 10 intervention sites (Figure 1a). Figure  presents the corresponding total costs in the control sites, which have higher annual microscopy costs ($US1 992 434) than the intervention sites.


Figure 1

(a) Total costs for Xpert MTB/RIF and microscopy for the 10 intervention laboratories for the financial year 2012/2013 ($US2013). (b) Numbers of Xpert MTB/RIF and microscopy tests processed for the 10 intervention laboratories for the financial year 2012/2013.

The breakdown of total costs for Xpert MTB/RIF by input during each observation is presented in Table . In the intervention laboratories, the proportion of total costs attributed to reagents and chemicals was on average 62% for observation 1 and 64% for observation 2. In the control laboratories, reagents and chemicals accounted for only 24% on average in observation 2 (post-Xpert MTB/RIF implementation). Labour costs during the first and second observations accounted for 8% and 7%, respectively, at the intervention laboratories, and for 14% on average at the control laboratories during the second observation. Above-laboratory costs account for 6% at the intervention and 7% at the control laboratories on average.

3.3. Unit costs

Bottom-up and top-down laboratory unit costs for Xpert MTB/RIF and microscopy are displayed in Tables 2 and 3, respectively. For Xpert MTB/RIF, the mean bottom-up cost per test processed was $US16.9 (standard deviation (SD) $US6.1), in comparison with a mean top-down cost of $US33.5 (SD $US32.0). In the intervention laboratories, the mean bottom-up unit costs were the same for observation 1 ($US14.6; SD $US2.0) and observation 2 ($US14.6; SD $US1.8), whereas the top-down unit cost was $US28.4 (SD $US28.0) at the first observation and $US19.7 (SD $US3.5) at the second. During the start-up stages of implementation, the mean bottom-up unit cost of Xpert MTB/RIF in the control sites was $US24.3 (SD $US9.2) per test, compared with a mean top-down measurement of $US68.4 (SD $US45.5).

Table 2

Unit costs for Xpert MTB/RIF: top-down and bottom-up costs per test processed, by input type, study arm and observation ($US2013)

Top-down method, mean (range):

| Input | Start-up, ~3 months: Control (observation 2) (a), n = 5 | Early, ~3–6 months: Intervention (observation 1), n = 10 | Medium term, ~12 months: Intervention (observation 2), n = 9 |
|---|---|---|---|
| Overheads | $0.3 ($0.1; $0.6) | $3.6 ($0.2; $29.4) | $0.7 ($0.2; $1.7) |
| Transport | $0.5 ($0.1; $1.0) | $0.8 ($0.1; $2.9) | $0.9 ($0.1; $2.9) |
| Building space | $0.3 ($0.1; $1.2) | $1.2 ($0.2; $6.4) | $0.7 ($0.2; $2.1) |
| Equipment | $35.2 ($0.6; $76.9) | $1.3 ($1.0; $1.7) | $1.6 ($0.9; $3.4) |
| Staff | $10.0 ($2.2; $20.5) | $2.7 ($0.5; $13.5) | $1.5 ($0.1; $2.3) |
| Reagents and chemicals | $10.9 ($9.8; $13.8) | $15.3 ($10.3; $39.7) | $12.5 ($9.7; $17.4) |
| Calibration | $1.9 ($0.4; $4.1) | $0.1 ($0.1; $0.7) | $0.1 ($0.1; $0.1) |
| Training, Xpert MTB/RIF specific | $5.8 ($1.1; $14.1) | $0.5 ($0.2; $2.1) | $0.2 ($0.2; $0.5) |
| Consumables | $0.3 ($0.1; $0.4) | $1.5 ($0.0; $12.4) | $0.3 ($0.0; $1.3) |
| Above service-level costs, Xpert MTB/RIF | $3.2 ($0.1; $11.0) | $1.4 ($0.7; $3.6) | $1.2 ($0.7; $3.1) |
| Mean unit cost per observation | $68.4 (SD $45.5) | $28.4 (SD $28.0) | $19.7 (SD $3.5) |

Mean top-down unit cost for Xpert MTB/RIF: $33.5 (SD $32.0)

Bottom-up method, mean (range):

| Input | Control (observation 2) (a), n = 5 | Intervention (observation 1), n = 10 | Intervention (observation 2), n = 6 |
|---|---|---|---|
| Overheads | $0.3 ($0.0; $1.0) | $0.4 ($0.1; $0.9) | $0.3 ($0.1; $0.6) |
| Transport | $0.4 ($0.1; $1.0) | $0.8 ($0.1; $2.9) | $1.1 ($0.1; $2.9) |
| Building space | $0.2 ($0.1; $0.5) | $0.2 ($0.0; $0.4) | $0.2 ($0.0; $0.8) |
| Equipment | $1.6 ($0.2; $3.3) | $0.5 ($0.0; $0.9) | $0.8 ($0.3; $1.6) |
| Staff | $1.0 ($0.3; $1.7) | $0.9 ($0.1; $4.1) | $0.7 ($0.1; $1.4) |
| Reagents and chemicals | $9.7 ($9.7; $9.7) | $9.7 ($9.7; $9.7) | $9.7 ($9.7; $9.7) |
| Calibration | $1.9 ($0.4; $4.1) | $0.1 ($0.1; $0.7) | $0.1 ($0.1; $0.1) |
| Training, Xpert MTB/RIF specific | $5.8 ($1.1; $14.1) | $0.5 ($0.2; $2.1) | $0.2 ($0.2; $0.3) |
| Consumables | $0.2 ($0.1; $0.3) | $0.1 ($0.0; $0.3) | $0.1 ($0.0; $0.4) |
| Above service-level costs, Xpert MTB/RIF | $3.2 ($0.1; $11.0) | $1.4 ($0.7; $3.6) | $1.4 ($0.7; $3.1) |
| Mean unit cost per observation | $24.3 (SD $9.2) | $14.6 (SD $2.0) | $14.6 (SD $1.8) |

Mean bottom-up unit cost for Xpert MTB/RIF: $16.9 (SD $6.1)


(a) After the last laboratory implemented Xpert MTB/RIF.

Table 3

Unit costs for microscopy: top-down and bottom-up costs per test processed, by input type, study arm and observation ($US2013)

Top-down inputs, mean (range):

| Input | Control (observation 1), n = 9 | Control (observation 2), n = 9 | Intervention (observation 1), n = 9 | Intervention (observation 2), n = 9 |
|---|---|---|---|---|
| Overheads | $0.6 ($0.1; $1.3) | $0.6 ($0.1; $1.3) | $1.2 ($0.3; $2.8) | $1.2 ($0.3; $2.8) |
| Transport | $0.7 ($0.1; $2.1) | $0.7 ($0.1; $2.1) | $0.9 ($0.1; $2.9) | $0.9 ($0.1; $2.9) |
| Building space | $0.6 ($0.0; $1.5) | $0.8 ($0.1; $1.5) | $1.1 ($0.2; $4.4) | $1.0 ($0.2; $4.4) |
| Equipment | $0.3 ($0.1; $0.8) | $0.3 ($0.0; $0.8) | $0.7 ($0.2; $1.4) | $0.7 ($0.2; $1.4) |
| Staff | $1.1 ($0.4; $1.9) | $1.1 ($0.4; $1.7) | $1.7 ($0.5; $3.2) | $1.6 ($0.3; $3.1) |
| Reagents and chemicals | $1.0 ($0.1; $4.2) | $1.0 ($0.1; $4.2) | $2.6 ($0.3; $7.8) | $2.5 ($0.1; $7.8) |
| Consumables | $0.3 ($0.1; $0.4) | $0.3 ($0.1; $0.9) | $0.3 ($0.0; $1.3) | $0.3 ($0.0; $1.3) |
| Above service-level costs, microscopy | $1.9 ($0.7; $4.0) | $1.8 ($0.6; $3.8) | $2.1 ($0.4; $3.4) | $2.1 ($0.4; $3.3) |
| Mean unit cost per observation | $6.4 (SD $2.0) | $6.8 (SD $1.9) | $10.6 (SD $3.9) | $10.4 (SD $4.0) |

Mean top-down unit cost for microscopy: $8.5 (SD $3.6)

Bottom-up inputs, mean (range):

| Input | Control (observation 1), n = 9 | Control (observation 2), n = 5 | Intervention (observation 1), n = 8 | Intervention (observation 2), n = 6 |
|---|---|---|---|---|
| Overheads | $0.2 ($0.0; $0.5) | $0.1 ($0.0; $0.3) | $0.7 ($0.0; $1.9) | $0.5 ($0.0; $1.9) |
| Transport | $0.7 ($0.1; $2.1) | $0.9 ($0.1; $2.1) | $0.9 ($0.1; $2.9) | $1.1 ($0.1; $2.9) |
| Building space | $0.6 ($0.0; $3.6) | $0.4 ($0.2; $0.6) | $0.8 ($0.1; $4.5) | $0.9 ($0.0; $4.5) |
| Equipment | $0.3 ($0.1; $0.6) | $0.1 ($0.1; $0.3) | $0.3 ($0.1; $0.5) | $0.2 ($0.1; $0.4) |
| Staff | $0.4 ($0.1; $1.2) | $0.3 ($0.2; $0.5) | $1.1 ($0.1; $4.4) | $0.6 ($0.2; $1.3) |
| Reagents and chemicals | $1.7 ($0.4; $5.0) | $1.0 ($0.2; $1.6) | $0.5 ($0.1; $1.2) | $0.8 ($0.1; $2.9) |
| Consumables | $0.4 ($0.2; $0.9) | $0.5 ($0.1; $0.9) | $0.4 ($0.2; $0.8) | $0.3 ($0.1; $0.7) |
| Above service-level costs, microscopy | $1.9 ($0.7; $4.0) | $2.2 ($1.5; $3.8) | $2.0 ($0.4; $3.4) | $2.2 ($0.4; $3.3) |
| Mean unit cost per observation | $6.2 (SD $1.7) | $5.5 (SD $1.9) | $6.6 (SD $3.5) | $6.7 (SD $3.5) |

Mean bottom-up unit cost for microscopy: $6.3 (SD $2.8)


A paired t-test was performed to determine whether the two methods (top-down and bottom-up) produced statistically different results. There were 21 observations with complete data for both top-down and bottom-up unit costs (n = 21), and these were used in the paired t-test (Table ). The 95% confidence interval about the mean difference showed a significant difference (0 did not fall within the range 5.36 to 31.88, leading us to reject the null hypothesis of no difference between the two methods' means). The mean difference (mean = 18.62; SD = 29.12; n = 21) was significantly greater than zero, and the two-tailed p = 0.008 provides evidence that the methods are notably different.
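These figures can be reproduced from the reported summary statistics alone. A minimal sketch (the per-site paired data are not shown in the text, so only the summary-level arithmetic is checked here):

```python
from math import sqrt
from scipy import stats

# Reported summary statistics for the paired differences
# (top-down minus bottom-up unit costs).
mean_diff, sd_diff, n = 18.62, 29.12, 21

se = sd_diff / sqrt(n)                     # standard error of the mean difference
t_stat = mean_diff / se                    # ~2.93
t_crit = stats.t.ppf(0.975, df=n - 1)      # ~2.086
ci = (mean_diff - t_crit * se, mean_diff + t_crit * se)   # ~(5.36, 31.88)
p_two_tailed = 2 * stats.t.sf(t_stat, df=n - 1)           # ~0.008
print(t_stat, ci, p_two_tailed)

# With the per-site paired unit costs in hand,
# scipy.stats.ttest_rel(top_down_costs, bottom_up_costs)
# performs the same test directly.
```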

Table 4 presents the costs estimated at the reference laboratory. The mean bottom-up and top-down costs for each of the tests, respectively, were: microscopy, $US3.6 and $US8.2; MTBDRplus, $US20.3 and $US30.6; MGIT culture, $US12.9 and $US16.1; and drug susceptibility testing for streptomycin, isoniazid, rifampin and ethambutol (DST MGIT SIRE), $US25.1 and $US53.7. These unit costs all include above-laboratory costs. Table  presents our estimates of incremental costs using the two methods. We estimate an average incremental cost for Xpert MTB/RIF of $US17.7 (top-down) and $US14.7 (bottom-up). We present the ordinary least squares regression analysis in Figure 2. We find that scale, by which we mean an increase in the number of specimens requiring processing and being processed, is a strong determinant of both bottom-up and top-down costs, with correlation coefficients of 0.66 and 0.79, respectively (p
