Interesting fact

A solid evaluation can support the case for expanding the scope or duration of your incentive scheme, or help identify ways to improve it.


EMPOWER sets out to show the effect of positive incentives on urban transport behaviour, so evaluating the effectiveness of those incentives once implemented is of crucial importance to the project. Locally, solid evaluations can also provide justification to continue or broaden the scope of pilots, or establish whether projects are worth transferring from one place to another.

Evaluations range from simply defining a scheme’s effectiveness in meeting narrowly defined targets right through to defining its effect on society as a whole. More narrowly defined evaluations will tend to consider direct effects (e.g. number of users, number of trips), while more holistic evaluations will require more elaborate data (e.g. users’ behaviour before using the service). In the case of the EMPOWER project, the project’s Key Performance Indicators (KPIs) constitute the minimum criteria for evaluation, although a wider scope is recommended.

Evaluation (from an EMPOWER perspective) has two strands. The first relates to the general success (or failure) factors of incentive schemes, examining factors such as travellers’ perception of urban accessibility and attractiveness, and customer/user satisfaction with the scheme. These will tend to be qualitative in nature, even if presented as statistics. The second strand concerns the success of the schemes in changing users’ travel behaviour (away from Conventionally Fuelled Vehicles – CFVs – especially petrol and diesel cars) in response to the incentives, and will be primarily quantitative in nature (e.g. modal share in distance and trips before and after).
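For the quantitative strand, the core before/after comparison of modal share can be sketched in a few lines; the trip logs and mode names below are purely illustrative, not EMPOWER data.

```python
from collections import Counter

def modal_share(trips):
    """Share of trips per mode, e.g. {'car': 0.5, 'bike': 0.25, ...}."""
    counts = Counter(mode for mode, _km in trips)
    total = sum(counts.values())
    return {mode: n / total for mode, n in counts.items()}

# Hypothetical trip logs: (mode, distance_km) per trip.
before = [("car", 12.0), ("car", 8.5), ("bus", 6.0), ("bike", 3.0)]
after = [("car", 12.0), ("bus", 6.0), ("bike", 3.0), ("bike", 4.5)]

shift = modal_share(after).get("car", 0) - modal_share(before).get("car", 0)
print(f"Change in car trip share: {shift:+.2f}")  # Change in car trip share: -0.25
```

The same comparison can be run on distance rather than trip counts by summing kilometres per mode instead of counting trips.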

Description of evaluation procedure

The following describes the basic steps, which should be completed before the scheme is implemented to allow for a reliable evaluation of the schemes in your city.

1. Define the scope of the evaluation and the before and after cases

The scope of the evaluation must be defined according to the following factors.

  1. Which users are you interested in?
  2. What is the geographical coverage of the scheme and its evaluation?
  3. Which trip types (e.g. all trips vs. only work trips) are you interested in?
  4. What is the duration of the scheme and its evaluation?

The evaluation’s scope will be closely related to the design of the scheme itself, and possibly to the definition of the before and after cases. The definition of the cases, in turn, is crucial to allow the effect of the incentives to be distinguished from external influences. The before case should describe the situation before the scheme is implemented, including, inter alia, which services are already available (e.g. a multi-modal journey planner with ticketing, but without incentives). This is crucial, especially if you need to make the case that your scheme has caused a modal shift (as opposed to causing additional trips).

Likewise, the after case should describe the situation post-implementation, identifying changes made as part of the scheme, but also any changes or events outside the scheme which might affect the same target group as your scheme, both expected (e.g. seasonal fluctuation in cycle use, public transport pricing changes) and unexpected (e.g. unseasonably hot or cold weather, an oil-price spike). Bear in mind, too, that different groups may have different needs and interests in the evaluation, and so may define success or failure differently from one another.
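As an illustration, the before and after cases could be recorded as structured data so that the differences between them are explicit; the field names, dates and events below are hypothetical, not an EMPOWER data model.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationCase:
    label: str                 # "before" or "after"
    period: tuple              # (start, end) as ISO date strings
    services_available: list   # services on offer during the period
    external_events: list = field(default_factory=list)  # changes outside the scheme

before_case = EvaluationCase(
    label="before",
    period=("2016-04-01", "2016-06-30"),
    services_available=["multi-modal journey planner", "online ticketing"],
)
after_case = EvaluationCase(
    label="after",
    period=("2017-04-01", "2017-06-30"),
    services_available=["multi-modal journey planner", "online ticketing", "incentives"],
    external_events=["public transport fare increase (2017-05)"],
)

# The difference in services between the cases is what the evaluation
# attributes effects to; external events must be accounted for separately.
new_services = set(after_case.services_available) - set(before_case.services_available)
print(new_services)  # {'incentives'}
```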

2. Define effects and evaluation criteria

Define all of the likely relevant effects of the proposed scheme, both negative and positive, direct and indirect, and how these should be measured. These will become the criteria for the evaluation. Examples of such effects are the number and length of trips taken using bike-sharing bikes (direct, positive), and the number and length of trips diverted away from CFVs (indirect, positive).

  1. As many of these should be defined as possible, from as wide a range as possible.
  2. Data should be available for as many of these as possible, but at the very least to allow evaluation of all four KPIs.

3. Define a control case

Define a control case against which the scheme will be compared. This is crucial as it allows a clear case for the effectiveness of the scheme (and the scheme alone) to be made, with as little influence from external factors as possible.

Ideally, the control group should represent what would have happened had the scheme not existed, i.e. business as usual (BAU). This is difficult to define in practice, but a control group of people separate from the group of (targeted) participants may suffice. Ideally this group should not be self-selecting (i.e. not the group of eligible users who are aware of the service but choose not to use it), but practicality may demand the use of approximations which still allow conclusions to be drawn on the role of the scheme in influencing users’ behaviour. For example, if the incentive scheme is only available for online ticket sales, the development of paper ticket sales over the same period might be a suitable approximation. Alternatively, known factors affecting the modes in question (e.g. seasonal and weather variation in cycling) can be applied to the before case to calculate an approximate BAU. Another alternative is to offer users incentives (or not) at set intervals; the periods in which users are offered nothing can then serve as a ‘control’ case. Bear in mind that the control group is a minefield of different biases and imperfections; to the authors’ knowledge, no perfect method exists for defining one, so you will have to do the best you can with local knowledge, conditions and the data you have available.
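Where a separate control group is available, one common way to combine it with the before/after comparison is a simple difference-in-differences estimate: the change in the participant group minus the change in the control group. The sketch below uses made-up trip rates and is illustrative only.

```python
def diff_in_diff(treat_before, treat_after, ctrl_before, ctrl_after):
    """Change in the treated group minus the change in the control group."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Hypothetical average weekly cycling trips per person in each group.
effect = diff_in_diff(treat_before=2.0, treat_after=3.5,
                      ctrl_before=2.1, ctrl_after=2.6)
print(f"Estimated scheme effect: {effect:+.1f} trips/week")  # +1.0
```

Subtracting the control group’s change strips out influences that affected both groups alike (e.g. weather, fuel prices), which is exactly the point made above about external factors.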

4. Determine data sources and data collection modalities

Potential data sources include, amongst others, mobility tracking data from an app, (in-app) surveys, existing transport data (e.g. traffic counters) and user profiles. It is crucial to ensure that the data necessary for each indicator is collected and accessible for the evaluation. This should be organised and agreed upon in advance, especially if data from external bodies or sources is needed.
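As a minimal sketch of combining two such sources, the hypothetical records below (app tracking data and user profiles, with invented field names) are joined so that an indicator can be split by participant/control group.

```python
# Hypothetical records from two sources: in-app tracking and user profiles.
tracking = [
    {"user_id": 1, "mode": "bike", "km": 4.2},
    {"user_id": 1, "mode": "car", "km": 11.0},
    {"user_id": 2, "mode": "bus", "km": 7.5},
]
profiles = {1: {"group": "participant"}, 2: {"group": "control"}}

# Join tracking records to profiles so indicators can be computed per group.
km_by_group = {}
for rec in tracking:
    group = profiles[rec["user_id"]]["group"]
    km_by_group[group] = km_by_group.get(group, 0.0) + rec["km"]

print(km_by_group)
```

A join like this only works if both sources share a reliable key (here `user_id`), which is one reason to agree data collection and access with external bodies in advance.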

5. Ex-ante vs. ex-post

  1. The default option is an ex-post evaluation based on measured data from the scheme.
  2. If detailed and/or reliable projections of the likely effect of the scheme are available, an ex-ante evaluation can also be carried out using the same methodology (to ensure comparability of the results).
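One way to keep ex-ante and ex-post results comparable is to run both the projected and the measured data through the same KPI function; the KPI and figures below are illustrative only, not EMPOWER’s actual indicators.

```python
def kpi_car_share(trips):
    """Illustrative KPI: share of trips made by car (lower is better here)."""
    car = sum(1 for mode in trips if mode == "car")
    return car / len(trips)

# Hypothetical data: an ex-ante projection and the ex-post measurement.
projected_trips = ["car", "car", "bus", "bike"]   # ex-ante estimate
measured_trips = ["car", "bus", "bike", "bike"]   # ex-post data

# Applying the identical function to both keeps the results comparable.
print(kpi_car_share(projected_trips), kpi_car_share(measured_trips))  # 0.5 0.25
```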

6. Key Points/Critical Success Factors/Limitations

  • Agree on data collection early on
  • Formulate distinct cases and boundaries
  • Record when and how the scheme is altered (e.g. changes to the incentives)
    • New questionnaires may be needed
  • Be aware that there might be a trade-off between the amount of data collected and the rate of participation
  • The evidence provided by the evaluation is only as good as
    • The data upon which it is based
    • The definition of the before, after and control cases (minimising sources of bias)