GLOBAL BROADBAND AND INNOVATIONS PROGRAM USAF CAPACITY BUILDING MODULE: MONITORING AND EVALUATION PART 1 JUNE 2013
June 2013 This publication was produced for review by the United States Agency for International Development. It was prepared by Integra Government Services International, LLC.
DISCLAIMER The authors’ views expressed in this publication do not necessarily reflect the views of the United States Agency for International Development or the United States Government.
CONTENTS

1. Introduction
1.1 Module Objectives, Contents
1.2 Essential M&E Concepts

2. Monitoring
2.1 What to Monitor?
2.2 Who Should Monitor?
2.3 Monitoring Mechanisms
2.4 Institutional Arrangements for Monitoring
2.5 Monitoring Framework

3. Evaluation
3.1 Types of Evaluations
3.2 Impact Evaluations

Annex 1: Template for Terms of Reference for Evaluation Firm
Annex 2: Sample Checklist of "Information Requirements for Impact Assessment of Community Computer/Information Centers"
Annex 3: List of Core ICT Indicators
1. Introduction

This is Capacity Building Module #3 of the USAID/GBI program to support enhancement of Universal Service and Access Funds (USAFs) as a resource to promote ICT development. This module addresses USAF Monitoring and Evaluation, and will be presented in two separate documents. The current document, Part 1, presents the background, definitions, and examples concerning Monitoring and Evaluation in relation to Universal Service Funds. Part 2 (forthcoming) will provide a more practical application of the topic. Other modules in this series address the following topics:

Module #1: USAF Strategic Planning
Module #2: USAF Program Concepts
Module #4: USAF Data Collection and Market Analysis
Module #5: National Broadband Strategy Planning

Collectively, these modules offer a set of useful information resources and practical tools, based upon international experience and best practices, in the management of Universal Service and Access Funds. Combined with other capacity building resources, including direct technical assistance from GBI and others, these modules can help USAF administrations and staff to enhance Fund operations and improve the effectiveness of ICT development financing on many levels.
1.1 Module Objectives, Contents

The main objective of this module is to provide USAF administrators and staff with information, advice, experience, and recommendations regarding the role of Monitoring and Evaluation (M&E) as a key component of the Fund's operations. It highlights the importance of M&E in the project procurement cycle and describes the high-level contours of recommended M&E functions for USAFs. Part 2 of this module (forthcoming) will provide additional practical support to Universal Service Funds in establishing an M&E function, whether in-house or through outsourcing.

Monitoring and Evaluation holds a pivotal position in efficient and effective project management for a Universal Service Fund. M&E helps the organization elicit the information it needs from the activities being implemented. That information supports informed decision-making and also informs project sponsors about dimensions of project implementation such as efficiency, effectiveness, relevance, and sustainability. Without effective monitoring and evaluation there is a heightened risk that projects may fail and, even worse, that failed projects might be replicated.

In addition, Annex 1 provides a sample set of Terms of Reference for impact evaluation, for use in cases where M&E is to be outsourced. Annex 2 provides information specifically required for evaluation of Community Computer and Information Centers. Annex 3 provides an indicative list of Core ICT Indicators.
1.2 Essential M&E Concepts

Planning is the process of setting goals, developing strategies, outlining implementation arrangements, and allocating resources to achieve those goals. A Strategic Plan is a higher-level document that describes the organizational Vision, Mission, Goals, Objectives, and Strategies; it provides a general framework and sets organizational direction.

Results-Based Management (RBM) is defined as "a broad management strategy aimed at achieving performance and demonstrable results within the organizational goals and objectives." An effective RBM system relies on constant feedback, learning, and improving: existing plans are regularly updated based on lessons learned through monitoring and evaluation, and future plans are developed based on these lessons. RBM is focused on the results chain: Inputs are converted into Outputs, Outputs yield Outcomes, and Outcomes lead to short-term and long-term Impact. This relationship is illustrated in Figure 1.
Figure 1: Logical Order from Goals to Impact (the earlier links of the results chain, inputs through outputs, are the focus of monitoring; the later links, outcomes and impact, are the focus of evaluation)
Monitoring is defined as the ongoing process by which stakeholders obtain regular feedback on the progress being made towards achieving their goals and objectives. Effective monitoring should be able to answer two questions: "Are we taking the actions we said we would take?" and "Are we making progress on achieving the results that we said we wanted to achieve?" Monitoring can be done at various levels, including inputs, activities, outputs, projects, programs, policies, and the organization. The scope of monitoring depends on the level at which it is done; for an organization such as a USAF, monitoring of project implementation is of primary importance, whereas a Ministry or Regulator needs to monitor progress towards policy-level objectives.

Evaluation is a thorough and independent assessment of either completed or ongoing activities to determine the extent to which they are achieving stated objectives. Evaluation, like monitoring, can apply at many levels, including outcomes, impacts, projects, programs, strategies, policies, and the organization.

A Performance Indicator is a metric for measuring inputs, processes, outputs, outcomes, and impacts for development projects, programs, or strategies. A sample list of core indicators, which can be used as reference points for ICT projects, is given at Annex 3 (Source: List of Core Indicators, ITU).

Inputs are the main resources required to undertake the activities and produce the outputs. These include personnel, civil works, equipment, materials, training, operational funds, etc.

Outputs are the physical and/or tangible goods and/or services delivered by the project, which describe the scope of the project. These may include kilometers of cable laid, number of villages connected to broadband, number of Telecenters established, etc.

Outcome is the key anchor of the project design. It describes what the project intends to accomplish by the end of implementation, and in doing so makes clear what development issues the project will address. Examples include the number of mobile users per 100 persons, the number of persons/households with broadband access per 100, and utilization metrics.

Impact, also termed the goal or long-term objective, refers to policy objectives or, in certain cases, international development objectives such as the Millennium Development Goals (MDGs). Impact is wide in scope, accrues at a future date (medium to long term) following project completion, and is influenced by many factors other than the project itself; examples include increased income levels, economic empowerment, and enhanced literacy levels.

A detailed account of what to cover in M&E and how to implement M&E in the realm of project management is provided in the following sections.
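To make these definitions concrete, the minimal Python sketch below records example indicators at each level of the results chain for a hypothetical USAF connectivity project. All project and indicator names are invented for illustration; they are not prescribed by this module.

    from dataclasses import dataclass, field

    @dataclass
    class ResultsChain:
        """One project's results chain: inputs -> outputs -> outcomes -> impact."""
        project: str
        inputs: list = field(default_factory=list)    # resources consumed
        outputs: list = field(default_factory=list)   # tangible deliverables
        outcomes: list = field(default_factory=list)  # end-of-project accomplishments
        impacts: list = field(default_factory=list)   # long-term, policy-level effects

    # Hypothetical rural-broadband project, using indicator types named in the text.
    chain = ResultsChain(
        project="Rural Broadband Extension (illustrative)",
        inputs=["subsidy funds disbursed", "equipment", "training"],
        outputs=["km of cable laid", "villages connected", "Telecenters established"],
        outcomes=["broadband subscribers per 100 households", "utilization rate"],
        impacts=["household income growth", "enhanced literacy levels"],
    )

    for stage in ("inputs", "outputs", "outcomes", "impacts"):
        print(f"{stage:>9}: {getattr(chain, stage)}")

Recording indicators against these levels at the design stage also clarifies which items belong to routine monitoring (inputs and outputs) and which belong to evaluation (outcomes and impact), mirroring Figure 1.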
2. Monitoring

Monitoring is an ongoing process through which project implementation is tracked against original plans and expectations, by measuring relevant indicators on a regular basis. An indicator can be defined as a unit or variable by which a measurement is made. The general purpose of monitoring is to capture information on project inputs, activities, and outputs periodically through a set of indicators. As a rule, consensus among key stakeholders needs to be developed on the type of indicators and the periodicity and form of reporting, typically at the outset of the project, and formally incorporated within the Terms of Reference.
2.1 What to Monitor?

Various dimensions of a project or program may need to be monitored, including scope, schedule, quality, cost, risk, contract, human resources, etc. A possible list of monitoring indicators which can provide meaningful insight at the appropriate level is given below:

• Program/Organizational level
  o Funds spent in each region, broken down politically, geographically, or by other criteria
  o Number of contracts awarded
  o Subsidy amounts disbursed
  o Percentage of amounts released against each contract/project
  o Percentage of operational expenditures vis-à-vis contributions collected
  o Subscriber base of mobile telephones in the intervention area
  o Computer usage rate in the intervention area
  o Volume of local ICT content developed
  o Mobile subscribers per 100 persons (segregated geographically)
  o Internet availability per 100 persons (segregated by technology)
• Project level
  o Project implementation progress
  o Financial tracking
  o Deliverables monitoring
  o Schedule monitoring
  o Quality of service provided under the project

The list above is a non-exhaustive set of indicators, which can be measured with a frequency agreed by the key stakeholders. As a general convention, project-level indicators are measured more frequently, ranging from daily to monthly depending on the nature of the project and the usefulness of the collected information; program-level indicators are measured less frequently.
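As a brief illustration of how the per-100 rates above might be computed from raw counts, consider the following sketch; every figure in it is invented.

    # Computing two program-level indicators from raw regional counts.
    # All figures are invented for illustration.

    regions = {
        # region: (population, mobile subscribers, internet subscribers)
        "North": (1_200_000, 540_000, 96_000),
        "South": (800_000, 200_000, 24_000),
    }

    def per_100(count: int, population: int) -> float:
        """Express a raw count as a rate per 100 persons."""
        return round(100 * count / population, 1)

    for region, (pop, mobile, internet) in regions.items():
        print(f"{region}: mobile per 100 = {per_100(mobile, pop)}, "
              f"internet per 100 = {per_100(internet, pop)}")
    # North: mobile per 100 = 45.0, internet per 100 = 8.0
    # South: mobile per 100 = 25.0, internet per 100 = 3.0

The same pattern extends to the other rate-based indicators; the geographic segregation mentioned above comes from keeping counts at the regional level rather than aggregating nationally.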
2.2 Who Should Monitor?

Ideally, a USAF should have in-house capacity for monitoring indicators that are measured frequently. For monitoring overall program indicators, specialized firms can be hired, or macro-level secondary information may be used, e.g., data from the ITU, UNDP, national statistics bureaus, and other credible sources.
2.3 Monitoring Mechanisms

Monitoring tools and required metrics should be defined clearly during each project's planning stage. Examples of monitoring tools are listed below for illustration:

• Periodic reports: information is collected and reported with an agreed frequency
• Random field verifications
• Computer-based project management tools, such as Microsoft Project, Primavera, etc.
• For technical monitoring: network performance reports, e.g., network usage, billing information, Quality of Service (QoS) reports, call success ratios, etc.
• GIS maps illustrating growth of the network over time
• Validation of technical parameters through drive tests and other technical metrics
2.4 Institutional Arrangements for Monitoring

Monitoring should be conducted by an independent unit reporting directly to management, separate from the units directly involved in project implementation, to eliminate any possible conflict of interest. A monitoring department generally consists of a dedicated team leader with substantial experience monitoring projects in technical or social domains, and at least one dedicated staff member experienced in designing and managing monitoring systems for similar projects or organizations. The number of technical support staff assisting the monitoring team is determined by the geographical reach of the organization and the overall workload. In addition, information from finance, human resources, contracts, and other USAF departments should flow to the Monitoring Section as needed.

It may not be possible to allocate this scope of resources to the monitoring function initially. Some tasks may thus be outsourced or otherwise minimized. As monitoring capabilities and experience grow within the organization, the required allocation of personnel and responsibilities of staff should become more consistent and predictable.
2.5 Monitoring Framework

A structured monitoring framework should address the following questions:

• What is to be measured, and with what indicators?
• How will it be measured and recorded?
• Who will measure that information?
• How frequently will the information be measured and reported?
One underlying factor that can make monitoring more effective is consensus on all of the above parameters among all key stakeholders, including planners, implementers, management, and beneficiaries. This consensus should be incorporated within standardized processes applied to all projects at the planning stage, and within project Terms of Reference. Any deviation from the standard agreements should be negotiated at the time of contract award, based upon reasonable concerns.
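One lightweight way to record the agreed answers to the four framework questions is a per-indicator specification attached to each project's Terms of Reference. The sketch below shows one possible shape; the field names and example values are assumptions for illustration, not a prescribed standard.

    # Sketch of a per-indicator monitoring specification covering the four
    # framework questions: what, how, who, and how often.
    monitoring_plan = [
        {
            "indicator": "villages connected to broadband",             # what
            "method": "contractor report + random field verification",  # how
            "responsible": "USAF monitoring unit",                      # who
            "frequency_days": 30,                                       # how often
        },
        {
            "indicator": "call success ratio (QoS)",
            "method": "network performance reports, drive tests",
            "responsible": "contractor, verified by monitoring unit",
            "frequency_days": 7,
        },
    ]

    for spec in monitoring_plan:
        print(f"{spec['indicator']}: every {spec['frequency_days']} days, "
              f"by {spec['responsible']}")

Because every project carries the same fields, deviations agreed at contract award remain visible and comparable across the portfolio.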
3. Evaluation

"Evaluation," in the context of an M&E framework, is the periodic, objective assessment of a planned, ongoing, or completed project, program, or policy. Evaluation is used to answer specific questions related to design, implementation, and results. In contrast to continuous monitoring, evaluation studies are carried out at discrete points in time and often seek an outside perspective from experts to add credibility to the findings.
3.1 Types of Evaluations

Evaluation can address two broad types of queries:

• Descriptive queries: the evaluation seeks to determine what is taking place and describes processes, conditions, organizational relationships, and stakeholder views.
• Cause-and-effect queries: the evaluation examines outcomes and tries to assess what difference the intervention has made and how much of the impact is attributable to the project.

Evaluations also generally cover two dimensions:

• Program/Project Performance Evaluation addresses such issues as relevance of the project to the overall goal of the organization, efficiency in delivery of the desired outputs, effectiveness of approach, timeliness of interventions, and other questions pertinent to project/program design, management, and operational decision making.
• Outcome/Impact Evaluation investigates the nature of the relationship between planned inputs and the outcomes and impacts that result from the project.
Program and project performance evaluations are linked to monitoring responsibilities, as ongoing project monitoring can provide a foundation for evaluating a program's efficiency and proper execution. A more thorough evaluation of this nature is typically conducted via an audit of the organization's activities, spending, and implementation of mandates and plans, comparing performance against originally expected results. Such an audit may, for example, evaluate the number of locations in which ICT services or facilities have been established under a Fund program, and whether those services are being actively utilized as anticipated, in comparison with the projections made at the launch of the project. Such audits are typically required in connection with project subsidy payments, and under-performance may result in withholding of payments or other sanctions against contractors. Other aspects of such performance evaluations may include review of project costs and efficiency relative to original budget plans.

The remainder of this report addresses the second category above, outcome/impact evaluations. These are more complex to design and implement, and their findings are ultimately of the most substantial importance to a USAF's mission.
3.2 Impact Evaluations

Impact Evaluations are a particular type of evaluation that seeks to determine the causal relationships among certain variables involved with a project. Impact evaluations are structured around one particular type of question: What is the impact (causality) of a program/project on an outcome of interest? An impact evaluation looks for the changes in outcomes that are directly attributable (attribution) to the program. This focus on causality and attribution is the cornerstone of impact evaluations and determines the methodologies that can be used. To estimate the causal effect or impact of a program, any method chosen must estimate the counterfactual: the situation that would have existed for program participants if they had not participated in the program. A typical impact evaluation analysis for a USAF involved in extending ICT services to un-served areas might, for example, address the relationship between the level of poverty and technology/broadband diffusion. Figure 2 illustrates the possible impacts of ICTs.
Figure 2: Possible Impacts of ICTs
3.2.1 When is Impact Evaluation Required?

Impact evaluation is required when a USAF or related organizations/sponsors need to make decisions or obtain information on one or more of the following:

• To what extent, and under what circumstances, could a successful pilot or small-scale program be replicated on a larger scale or with different population groups?
• What has been the contribution of the intervention to achieving the overall goals of the USAF?
• What are the potential development contributions of an innovative new program?

Impact evaluation may be justified when decisions have to be made about the continuation, expansion, or replication of a program, and when the benefits of the evaluation (for example, money saved by making a correct decision or avoiding an incorrect one) exceed the costs of conducting it. An expensive impact evaluation that produces important improvements in program performance can be highly cost-effective; even minor improvements in a major program may result in significant monetary savings to the organization, as well as benefits to the public.

Ideally, impact evaluation should be performed at least twice during the life cycle of a program/project: ex-ante, before implementation of the program (also known as a baseline evaluation), and ex-post, after implementation of most or all projects under the program, when sufficient time and experience have accumulated to allow meaningful evaluation. The question of "when" should be embedded in the program's overall plan. If resources permit, and the implementation period is long with a sizeable financial commitment, a mid-term evaluation is also viable.
3.2.2 How to Conduct Impact Evaluations

There is no one-size-fits-all methodology for impact evaluation; the best design depends on a number of factors:

• what is being evaluated (for example, a small project, a large program, or a nationwide policy);
• the purpose of the evaluation;
• budget, time, and data constraints; and
• the time horizon (medium- and long-term impacts, or initial estimates of potential future impacts).

Impact evaluation designs can also be classified according to whether they are commissioned at the start of the project, during implementation, or when the project is already completed, and according to their level of methodological rigor. A general framework which can be adopted for evaluation and subsequent capturing of impact is illustrated in Figure 3 below.
Figure 3: Evaluation Framework. (The figure depicts the project timeline from start T0 through midpoint TM to closure TC, with a baseline evaluation E0 at the start, a mid-term evaluation EM, and a post evaluation EC at closure. Approach 1 estimates impact as EC - E0 for the project beneficiary cohort; Approach 2 compares the beneficiary cohort (E_with) with a non-beneficiary cohort (E_without), so Impact = E_with - E_without. The evaluations cover demographic, socio-economic, technology, usage-pattern, and other relevant indicators.)
Two approaches used for impact evaluation are given below.

Approach 1: Time-series method, based on studying the same population, with a uniform indicator checklist, at two points in time. This is also known as the before-and-after method (Impact = EC - E0 in Figure 3).

Approach 2: Cross-section method, studying two different cohorts at the same time, with a uniform set of indicators: one cohort comprising project beneficiaries, and one in which project activities were not implemented. This is also known as the with-and-without method (Impact = E_with - E_without).

The focus of both approaches is to isolate the role of the program's intervention in the overall impact on the status of the target beneficiaries. The impacts are the result of multiple factors, which can be broadly divided into two categories: 1) impacts attributable to the project, and 2) impacts attributable to all other factors. This relationship is given in the equation below:

Impact = f(A, B), where A corresponds to project-related interventions and B corresponds to all other external factors.
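To make the arithmetic of the two approaches concrete, the sketch below computes both estimates from invented survey means for a single outcome indicator. It is a deliberate simplification: it ignores sampling error and assumes the comparison cohort captures the external factors (B).

    # Invented survey means for one indicator, e.g. internet users per 100 persons.

    # Approach 1 (before and after): same beneficiary population, two points in time.
    e0_baseline = 12.0                 # E0: baseline evaluation at project start
    ec_post = 31.0                     # EC: post evaluation at project closure
    impact_before_after = ec_post - e0_baseline        # Impact = EC - E0

    # Approach 2 (with and without): two cohorts measured at the same time.
    e_with = 31.0                      # beneficiary cohort
    e_without = 19.0                   # non-beneficiary cohort (reflects factors B)
    impact_with_without = e_with - e_without           # Impact = E_with - E_without

    print(impact_before_after)   # 19.0 -- includes external trends as well
    print(impact_with_without)   # 12.0 -- nets out trends common to both cohorts

The gap between the two estimates (19.0 versus 12.0 in this invented example) shows why a simple before-and-after comparison can overstate the impact attributable to the project when external factors (B) are also moving the indicator.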
Evaluations can be carried out using a variety of tools or methods. These include the following:

• Beneficiary assessment
• Surveys
• Econometric analysis (regression)
• Case studies
• Cost-benefit analysis
• Cost-effectiveness analysis
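For the econometric (regression) option, one widely used design combines the before/after and with/without comparisons into a difference-in-differences regression. The sketch below runs such a regression on simulated data using the statsmodels library; it is one of many possible specifications, not a method prescribed by this module.

    # Difference-in-differences on simulated data.
    # Requires: numpy, pandas, statsmodels.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({
        "beneficiary": rng.integers(0, 2, n),   # 1 = project area, 0 = comparison
        "post": rng.integers(0, 2, n),          # 1 = measured after implementation
    })
    # Simulated outcome: baseline 12, common time trend +7, true impact +12, noise.
    df["outcome"] = (12 + 7 * df["post"]
                     + 12 * df["beneficiary"] * df["post"]
                     + rng.normal(0, 3, n))

    model = smf.ols("outcome ~ beneficiary * post", data=df).fit()
    # The interaction term isolates the impact attributable to the project.
    print(model.params["beneficiary:post"])     # close to the true impact of 12

Because the common time trend (+7) appears in both cohorts, it is absorbed by the "post" term, and only the change specific to beneficiaries is attributed to the project.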
3.2.3 Information Requirements and Data Sources

Impact evaluation is based on information regarding the status of the beneficiaries of the development intervention. The information is collected from the field using a variety of data collection procedures, such as surveys, opinion polls, interviews, focus groups, and observations, as well as certain statistical or empirical observations. The prevailing question to ask at the outset is: "What are the essential information requirements for conducting this particular evaluation?"

A sample checklist, which can be used for the impact assessment of Community Computer/Information Centers, is given at Annex 2. For the purpose of illustration, Box 1 lists a set of research questions which can be addressed to capture the impact of Telecenters.

Box 1: Research questions for illustration

• What are the social, economic, and cultural benefits of the Telecenters?
• What improvements in the existing services were brought about by the establishment of Telecenters (social, economic, technological, cultural, and environmental)?
• How many community users benefit from the improved services?
• How were specific community organizations and institutions impacted by the Telecenters?
• How were benefits distributed across individuals, groups, and organizations in the community?
• Did the Telecenters lead to more local development initiatives?
• What types of Telecenters' services and ICT applications were most successful in delivering the intended development impact?
• What are the critical success factors in the financial sustainability of Telecenters?
3.2.4 Institutional Arrangements for Conducting Evaluations

Ideally, a separate evaluation unit/department should be established within the USAF to be responsible for evaluation (this can also include the monitoring functions). However, in most cases the actual conduct of evaluation studies will be undertaken by external firms, so the evaluation unit will be mainly responsible for engaging and overseeing such outside experts. In this respect, the key responsibilities of the evaluation department are to:

• develop the Scope of Work/Terms of Reference for conducting evaluation studies;
• solicit bids from and enter contracts with qualified evaluation organizations/firms;
• facilitate and monitor implementation of the contract;
• review findings and assess the validity and meaning of study results; and
• report results to higher management.
The line of authority and reporting for the evaluation department should run directly to top USAF management, to remove any chance of conflict of interest, particularly with groups involved in project design and award. The evaluation team should typically be headed by a senior-level evaluation expert with substantial experience conducting evaluations of the relevant projects or similar types of research and analytical studies. The team leader should be supported by one to three mid-level professionals, depending on the required level of effort and the frequency and scope of evaluations.
Annex 1: Template for Terms of Reference for Evaluation Firm

1- BACKGROUND AND CONTEXT
The background section makes clear what is being evaluated and identifies the critical social, economic, political, geographic, and demographic factors within which the agency operates that have a direct bearing on the evaluation. This description should be focused and concise (a maximum of one page), highlighting only those issues most pertinent to the evaluation.

2- EVALUATION PURPOSE
This section explains clearly why the evaluation is being conducted, who will use or act on the evaluation results, and what will be done based on the results. A clear statement of purpose provides the foundation for a well-designed evaluation.

3- EVALUATION SCOPE AND OBJECTIVES
This section defines the parameters and focus of the evaluation. It answers the following questions:

• What aspects of the intervention are to be covered by the evaluation? This can include the time frame, implementation phase, geographic area, and target groups to be considered, and, as applicable, which projects (outputs) are to be included.
• What are the primary issues of concern to users that the evaluation needs to address, or the objectives the evaluation must achieve?

4- EVALUATION QUESTIONS
Evaluation questions define the information that the evaluation will generate. This section proposes the questions that, when answered, will give intended users of the evaluation the information they seek in order to make decisions, take action, or add to knowledge. While the agency should initially define the questions it seeks to answer, responding firms may be encouraged to propose additional questions as well, based on their experience and insight with similar evaluations.

5- METHODOLOGY
The ToR may suggest an overall approach and method for conducting the evaluation, as well as data sources and tools that will likely yield the most reliable and valid answers to the evaluation questions within the limits of resources. However, responding firms should also be asked to propose a methodology that is consistent with their past experience and best practices.

6- EVALUATION PRODUCTS (DELIVERABLES)
This section describes the key evaluation products the evaluation team will be accountable for producing. These products may include:
• Inception report: An inception report should be prepared by the evaluators before going into the full-fledged data collection exercise. It should detail the evaluators' understanding of what is being evaluated and why, showing how each evaluation question will be answered by way of proposed methods, proposed sources of data, and data collection procedures. The inception report should include a proposed schedule of tasks, activities, and deliverables, designating a team member with lead responsibility for each task or product. The inception report provides the USAF and the evaluators with an opportunity to verify that they share the same understanding about the evaluation and to clarify any misunderstandings at the outset.
• Draft evaluation report: The sponsors and key stakeholders in the evaluation should review the draft evaluation report to ensure that the evaluation meets the required quality criteria.
• Final evaluation report: The finalized version of the report after incorporation of all comments and suggestions.

7- EVALUATION TEAM COMPOSITION & COMPETENCIES
This section details the specific skills, competencies, and characteristics needed in the evaluation team, as well as the expected structure and composition of the team, including the roles and responsibilities of team members. Generally speaking, a multidisciplinary evaluation team, depending on the nature of the evaluation, is required for a comprehensive evaluation.

8- IMPLEMENTATION ARRANGEMENTS
This section describes the organization and management structure for the evaluation and defines the roles, key responsibilities, and lines of authority of all parties involved in the evaluation process. Implementation arrangements are intended to clarify expectations, eliminate ambiguities, and facilitate an efficient and effective evaluation process.

9- TIME FRAME FOR THE EVALUATION PROCESS
This section lists and describes all tasks and deliverables for which the evaluators or evaluation team will be responsible and accountable, along with respective timelines, e.g., desk review, interviews, data collection, inception report, and other key milestones.

10- COST
This section should indicate the resources available for the evaluation (consultant fees, travel, subsistence allowance, etc.). Exact budgets may be determined by competitive bidding.
Annex 2: Sample Checklist of "Information Requirements for Impact Assessment of Community Computer/Information Centers"

• General information about the area
• Public and private sector facilities mapping (schools, hospitals, post offices, and other facilities)
• Demographic information on the area (number of persons in age groups, gender-segregated)
• Number of persons who can access computers (male, female)
• Information on cost per use of computer, PCO, and other services
• Information on persons using computers for educational, health, or other purposes
• Information on persons using computers for web-based earning
• Distance and time from the nearest computer center
• Information on other uses of computers and ICT
• Number of computers per 100 population
• Number of persons who can access the Internet
• Number of persons who know how to use computers
• Information on purposes for using computers
• Information on Internet and computer use for learning
• Information on the status of e-commerce
• Information on the status of e-health
• Information on the status of e-governance
• Information on the status of e-agriculture
• Information on use of computers for social networking
• Information on economic activities associated with Internet and computer use
• Information on the status of networking
• Information on the presence of local ICT content and applications
• Information on the financial sustainability of the Center
• Information on community participation in management of the Center

Note: The list above is for illustration only; the true information requirements can only be determined after studying the objective of the evaluation and the context in which it is being conducted. In general, more information is better, as long as it is reliable and not too expensive to obtain.
Annex 3: List of Core ICT Indicators

Core indicators on ICT infrastructure and access
A1 Fixed telephone lines per 100 inhabitants
A2 Mobile cellular subscribers per 100 inhabitants
A3 Computers per 100 inhabitants
A4 Internet subscribers per 100 inhabitants
A5 Broadband Internet subscribers per 100 inhabitants
A6 International Internet bandwidth per inhabitant
A7 Percentage of population covered by mobile cellular telephony
A8 Internet access tariffs (20 hours per month), in US$, and as a percentage of per capita income
A9 Mobile cellular tariffs (100 minutes of use per month), in US$, and as a percentage of per capita income
A10 Percentage of localities with public Internet access centers (PIACs) by number of inhabitants (rural/urban)

Core indicators on access to, and use of, ICT by households and individuals
HH1 Proportion of households with a radio
HH2 Proportion of households with a TV
HH3 Proportion of households with a fixed line telephone
HH4 Proportion of households with a mobile cellular telephone
HH5 Proportion of households with a computer
HH6 Proportion of individuals who used a computer (from any location) in the last 12 months
HH7 Proportion of households with Internet access at home
HH8 Proportion of individuals who used the Internet (from any location) in the last 12 months
HH9 Location of individual use of the Internet in the last 12 months
HH10 Internet activities undertaken by individuals in the last 12 months

Core indicators on use of ICT by businesses
B1 Proportion of businesses using computers
B2 Proportion of employees using computers
B3 Proportion of businesses using the Internet
B4 Proportion of employees using the Internet
B5 Proportion of businesses with a Web presence
B6 Proportion of businesses with an intranet
B7 Proportion of businesses receiving orders over the Internet
B8 Proportion of businesses placing orders over the Internet

Core indicators on the ICT sector and trade in ICT goods
ICT1 Proportion of total business sector workforce involved in the ICT sector
ICT2 Value added in the ICT sector (as a percentage of total business sector value added)
ICT3 ICT goods imports as a percentage of total imports
ICT4 ICT goods exports as a percentage of total exports
Source: Core ICT Indicators 2010, ITU