Overview & Considerations
The following attendance measures are both relatively simple to capture and provide a basic analysis of attendance at an event:
- Unique Attendees
- Total Attendance
- Attendance numbers at ticketed & non-ticketed events
To many event organisers, capturing this data will be extremely straightforward, but the fact remains that error in stating the overall 'event population' is the single biggest cause of inaccuracy in the reporting of other impacts (such as economic and environmental impact).
Attendance does not necessarily mean the numbers of spectators or audience alone. Where relevant, attendance can be taken to include participants (athletes/performers), support staff, media, sponsors, officials and volunteers. Categorising attendees in this way can help when measuring other impacts that assess the behaviour of different groups at the event.
Exaggerating crowd sizes can be common practice for the purposes of public relations, but it compromises the reliability of monitoring and evaluation that is based on estimates of attendance. This has implications for much of the research being undertaken at events, and we urge event organisers to recognise the implications of misrepresenting the popularity of an event in terms of spectator or audience numbers (or indeed competitor numbers at mass participation events). Exaggerating crowd sizes has the effect of overstating economic impact and at the same time overstating the carbon footprint attributable to an event. Other measures that are based on findings from a survey, such as the percentage of disadvantaged people attending the event, will be overstated if used subsequently to compute the absolute number of people from a particular group who attended an event. Thus regardless of the rigour with which monitoring and evaluation data is collected, its true value is unreliable if attendance levels are inaccurate.
Common forms of monitoring and evaluation involve conducting a survey of a sample of event participants and aggregating the findings upwards to derive estimates for the population of participants. For example, 100,000 spectators might attend a large-scale equestrian event whose stakeholders wish to conduct an economic impact study. A research team would aim to interview around 1,000 spectators and then aggregate the findings upwards on the basis that the 1,000 people interviewed are representative of all spectators. Assuming that the sampling has been conducted in a robust manner, the greatest source of error is likely to be the figure used to multiply the findings from the sample upwards to the population as a whole. For example, the figure of 100,000 could have been used for the purposes of public relations, whereas in reality only 50,000 tickets were sold. Left unadjusted, a practice such as this would have the effect of doubling the economic impact attributable to the event.
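The aggregation step above can be sketched as a simple calculation. The population figures are those from the equestrian example; the average spend per head is an assumed figure for illustration only, as the text does not state one:

```python
# Hypothetical sketch of scaling sample findings up to an event population.
# The average spend figure is assumed; the attendance figures follow the
# equestrian example in the text.

def estimate_economic_impact(avg_spend_per_head, population):
    """Multiply average per-head spend from a sample up to the population."""
    return avg_spend_per_head * population

avg_spend = 40.0               # assumed average spend (GBP) from a 1,000-person sample
claimed_population = 100_000   # figure released for public relations
actual_population = 50_000     # tickets actually sold

inflated = estimate_economic_impact(avg_spend, claimed_population)
corrected = estimate_economic_impact(avg_spend, actual_population)

# Using the inflated population exactly doubles the reported impact.
assert inflated == 2 * corrected
```

Whatever the per-head figure turns out to be, the error scales linearly: a population overstated by a factor of two overstates the impact by the same factor.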
At the majority of ticketed events, establishing spectator or audience levels should not be a problem, as ticket sales databases can provide accurate data with which to work. However, at free-to-view or open-access events, particularly those along linear routes such as cycle races, or cultural events that people can chance upon and drift in and out of, well-reasoned estimates of spectator numbers are needed for economic, environmental and social impact analysis. In particular, a clear distinction should be made between the number of attendances (throughput) and the number of different people (attendees) who generated the throughput figure.
For example, there could be 90,000 admissions at a three-day equestrian event, which in turn could be made up of 90,000 different people attending once, 30,000 different people attending all three days, or numerous combinations of people and days between these two extremes. Furthermore, in the case of events that take place over an extended distance, such as a cycle race or a carnival parade, people can watch the event from more than one place on the same day. For example, in the case of the London Marathon and similar events, it is possible (and common practice) for spectators to move around the course and see runners in whom they have an interest at numerous locations.
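The relationship between throughput and unique attendees can be sketched as follows, assuming an average number of visits per person is known or estimated (the function name is illustrative):

```python
# Illustrative sketch: converting total admissions (throughput) into unique
# attendees, given an assumed average number of visits per person.

def unique_attendees(total_admissions, avg_visits_per_person):
    """Divide throughput by the average visits each person makes."""
    return total_admissions / avg_visits_per_person

admissions = 90_000  # three-day equestrian event from the example above

# The two extremes described in the text:
assert unique_attendees(admissions, 1) == 90_000  # everyone attends once
assert unique_attendees(admissions, 3) == 30_000  # everyone attends all three days
```

In practice the average visits per person would itself come from survey data, so the same sampling caveats apply to this divisor as to the spend figures.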
To illustrate the problems associated with overstating crowd sizes, consider the case of a cycling road race with an estimated attendance of 10,000. If the 10,000 crowd is all different people residing outside the host economy and their average spend is £10 per head at the event, the economic impact would be £100,000. However, this is a free-to-view event, and primary research amongst a sample of 1,000 spectators indicates that they watch from an average of two different locations each, given that they are free to move around the route. Consequently, the 10,000 attendance becomes 5,000 different people when the repeat viewing factor of two is applied (i.e. 10,000/2), and the economic impact becomes £50,000 applying the same expenditure figure. Furthermore, as explained above, left unchecked the carbon footprint attributable to spectators would also be overstated. In short, event organisers need to be aware that should they exaggerate crowd sizes, the net effect is to undermine the reliability of monitoring and evaluation that is dependent upon accurate crowd size estimates.
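The road race arithmetic above can be sketched in a few lines; the figures are those given in the text and the function is illustrative:

```python
# Sketch of the cycling road race example: adjust a crowd estimate for
# repeat viewing before computing economic impact.

def adjusted_impact(attendance, repeat_viewing_factor, spend_per_head):
    """Return (unique people, economic impact) after the repeat-viewing adjustment."""
    people = attendance / repeat_viewing_factor
    return people, people * spend_per_head

attendance = 10_000  # estimated crowd along the route
spend = 10.0         # GBP per head, all spectators from outside the host economy

# Unadjusted figure (factor of 1) versus the survey finding of ~2 locations each:
_, unadjusted = adjusted_impact(attendance, 1, spend)
people, impact = adjusted_impact(attendance, 2, spend)

assert unadjusted == 100_000  # £100,000 if every admission is a different person
assert people == 5_000        # 10,000 / 2
assert impact == 50_000       # £50,000 with the same spend per head
```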
Moreover, any over-inflation of crowd sizes is also likely to have downstream effects on social impacts. For example, if half of the 10,000 attendance are from the host economy and 50% of our sample report that they are more likely to cycle as a result of attending the event, it would appear that there are 2,500 people at whom to direct any cycling interventions designed to increase participation. However, as suggested previously, if people watched from an average of two locations then this would halve the potential target group to 1,250. This in turn might be the difference between implementing a cycling participation initiative and shelving it due to an apparent lack of interest.
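The downstream social-impact calculation works through the same chain of proportions; a minimal sketch, using the figures from the paragraph above (the function name is illustrative):

```python
# Sketch of the social-impact target group calculation: attendance is reduced
# to unique local attendees, then scaled by the share reporting an intention
# to cycle more.

def cycling_target_group(attendance, local_share, interested_share,
                         repeat_viewing_factor=1):
    """People at whom a cycling participation intervention could be directed."""
    unique_local = (attendance * local_share) / repeat_viewing_factor
    return unique_local * interested_share

# Unadjusted: 10,000 attendance, half from the host economy, half interested.
assert cycling_target_group(10_000, 0.5, 0.5) == 2_500

# Adjusted for spectators watching from an average of two locations:
assert cycling_target_group(10_000, 0.5, 0.5, repeat_viewing_factor=2) == 1_250
```

Because every step is a multiplication, an error in the attendance figure propagates unchanged into the final target group, which is why the attendance estimate matters as much here as in the economic calculations.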