Overview & Considerations
The following attendance measures are both relatively simple to capture and provide a basic analysis of attendance at an event:
- Unique Attendees
- Total Attendance
- Attendance numbers at ticketed & non-ticketed events
To many event organisers, capturing this data will be extremely straightforward, but the fact remains that error in stating the overall 'event population' is the single biggest cause of inaccuracy in the reporting of other impacts (such as economic and environmental impact).
Attendance does not necessarily mean spectator or audience numbers alone. Where relevant, attendance can be taken to include participants (athletes/performers), support staff, media, sponsors, officials and volunteers. Categorising attendees in this way can help when measuring other impacts that assess the behaviour of different groups at the event.
Exaggerating crowd sizes can be common practice for public relations purposes, but it compromises the reliability of monitoring and evaluation that is based on estimates of attendance. This has implications for much of the research undertaken at events, and we urge event organisers to recognise the consequences of misrepresenting the popularity of an event in terms of spectator or audience numbers (or indeed competitor numbers at mass participation events). Exaggerating crowd sizes overstates economic impact and, at the same time, overstates the carbon footprint attributable to an event. Other measures based on survey findings, such as the percentage of disadvantaged people attending the event, will also be overstated if used subsequently to compute the absolute number of people from a particular group who attended. Thus, regardless of the rigour with which monitoring and evaluation data are collected, the findings are unreliable if attendance levels are inaccurate.
Common forms of monitoring and evaluation involve conducting a survey of a sample of event participants and aggregating the findings upwards to derive estimates for the population of participants. For example, 100,000 spectators might attend a large-scale equestrian event and event stakeholders wish to conduct an economic impact study. A research team would aim to interview around 1,000 spectators and then use the findings from this sample to aggregate upwards, on the basis that the 1,000 people interviewed are representative of all spectators. Assuming that the sampling has been conducted in a robust manner, the greatest source of error is likely to be the figure used to multiply the findings from the sample up to the population as a whole. For example, the figure of 100,000 could have been used for public relations purposes, whereas in reality only 50,000 tickets were sold. Left unadjusted, a practice such as this would have the effect of doubling the economic impact attributable to the event.
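The grossing-up step at the heart of this calculation is a simple multiplication, which is precisely why it is so sensitive to the population figure. A minimal sketch (the spend figures and function name are illustrative, not taken from a real study) shows how a doubled attendance estimate doubles the reported impact:

```python
def grossed_up_impact(sample_total_spend, sample_size, population):
    """Scale the spend observed in a survey sample up to the event population.

    Assumes the sample is representative of all attendees, so the only
    remaining lever is the population figure itself.
    """
    average_spend = sample_total_spend / sample_size
    return average_spend * population

# 1,000 interviews recording 40,000 GBP of event-related spend in total
# (illustrative figures)
sample_total_spend, sample_size = 40_000, 1_000

# Claimed attendance vs. tickets actually sold, as in the example above
print(grossed_up_impact(sample_total_spend, sample_size, 100_000))  # 4,000,000 GBP
print(grossed_up_impact(sample_total_spend, sample_size, 50_000))   # 2,000,000 GBP
```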
At the majority of ticketed events there should not be a problem with spectator or audience levels, as ticket sales databases can provide accurate data with which to work. However, at free-to-view or open access events, particularly along linear routes such as cycle races or cultural events that people can chance upon and drift in and out of, well-reasoned estimates of the number of spectators are needed for economic, environmental and social impact assessments. In particular, a clear differentiation should be made between the number of attendances (throughput) and the number of different people (attendees) who generated the throughput figure.
For example, there could be 90,000 admissions at a three-day equestrian event, which in turn could be made up of 90,000 different people attending once, 30,000 different people attending all three days, or numerous combinations of people and days between these two extremes. Furthermore, in the case of events that take place over an extended distance, such as a cycle race or a carnival parade, people can watch the event from more than one place on the same day. For example, in the case of the London Marathon and similar events, it is possible (and common practice) for spectators to move around the course and see runners in whom they have an interest at numerous locations.
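If a post-event survey can establish the average number of days attended (or, for linear events, the average number of viewing locations), the conversion from throughput to unique attendees is a single division. A minimal sketch using the multi-day figures above; the 1.8 figure stands in for a hypothetical survey result:

```python
def unique_attendees(total_admissions, avg_visits_per_person):
    """Convert throughput (attendances) into the number of different people."""
    return total_admissions / avg_visits_per_person

total_admissions = 90_000  # three-day equestrian event from the example above

print(unique_attendees(total_admissions, 1.0))  # 90,000 people attending once each
print(unique_attendees(total_admissions, 3.0))  # 30,000 people attending all three days
print(unique_attendees(total_admissions, 1.8))  # 50,000 people (illustrative survey result)
```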
To illustrate the problems associated with overstating crowd sizes, consider the case of a cycling road race with an estimated attendance of 10,000. If the 10,000 crowd is all different people residing outside the host economy and their average spend is £10 per head at the event, the economic impact would be £100,000. However, this is a free-to-view event, and primary research amongst a sample of 1,000 spectators indicates that they watch from an average of two different locations each, given that they are free to move around the route. Consequently, the 10,000 attendance becomes 5,000 different people when the repeat viewing factor of two is applied (i.e. 10,000/2), and the economic impact becomes £50,000 applying the same expenditure figure. Furthermore, as explained above, left unchecked the carbon footprint attributable to spectators would also be overstated. In short, event organisers need to be aware that exaggerating crowd sizes undermines the reliability of any monitoring and evaluation that depends upon accurate crowd size estimates.
Moreover, any over-inflation of crowd sizes is also likely to have downstream effects on social impacts. For example, if half of the 10,000 attendance are from the host economy and 50% of the sample report that they are more likely to cycle as a result of attending the event, it would appear that there are 2,500 people at whom to direct any cycling interventions designed to increase participation. However, as suggested previously, if people watched from an average of two locations then this would halve the potential target group. This in turn might be the difference between implementing a cycling participation initiative and shelving it due to an apparent lack of interest.
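Both worked examples above apply the same repeat viewing factor. A minimal sketch using only the figures from the text (the function names are our own):

```python
REPEAT_VIEWING_FACTOR = 2.0  # average viewing locations per spectator (from survey)

def economic_impact(attendance, avg_spend, repeat_factor):
    """Down-weight throughput to unique attendees before multiplying by spend."""
    return (attendance / repeat_factor) * avg_spend

def cycling_target_group(attendance, local_share, interested_share, repeat_factor):
    """Unique local attendees who report being more likely to cycle."""
    return (attendance / repeat_factor) * local_share * interested_share

attendance = 10_000

print(economic_impact(attendance, avg_spend=10, repeat_factor=REPEAT_VIEWING_FACTOR))
# 50,000 GBP rather than the 100,000 GBP implied by the raw attendance figure

print(cycling_target_group(attendance, 0.5, 0.5, REPEAT_VIEWING_FACTOR))
# 1,250 people rather than the apparent 2,500
```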
Routes to Measurement
Many event organisers will already have access to attendance data. The key consideration when assessing the scope to measure attendance is determining whether the event is ticketed or non-ticketed, as this strongly influences the approach required.
For ticketed events, organisers will have access to box office and ticket distribution data, although this may only provide an indication of sales and distribution. It may need to be cross-referenced with other sources that determine the actual number of attendees, which many organisers may already monitor in connection with health and safety requirements.
For non-ticketed events, the approach is more complex and may require some primary research using a methodology to estimate crowd sizes. See below for further information.
Measuring Attendance at Ticketed Events
Attendance at ticketed events can be monitored via ticket sales or via tickets surrendered on entrance to gain admission. Where technology permits, other measures such as the total number of turnstile clicks can also be used, since there is no guarantee that everyone who purchases a ticket actually uses it. The purpose of the analysis is first to estimate the total number of attendances at an event, and then to down-weight this to the number of unique attendees using a repeat viewing factor. We restate the requirement to differentiate between attendees and attendance, which applies to ticketed as well as non-ticketed events.
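One way to operationalise this is to treat turnstile counts (where available) as the attendance figure and ticket sales as an upper bound, then apply any survey-derived repeat factor as before. A minimal sketch, assuming per-day turnstile data exist (all names and figures are illustrative):

```python
tickets_sold = 95_000
daily_turnstile_clicks = [28_500, 30_200, 31_100]  # hypothetical 3-day event

# Turnstile data capture actual admissions; ticket sales overstate them
# whenever some buyers do not show up.
total_attendances = sum(daily_turnstile_clicks)       # 89,800
no_show_rate = 1 - total_attendances / tickets_sold   # ~5.5%

# A post-event survey might show attendees came on 1.8 days on average.
avg_days_attended = 1.8
unique_attendees = total_attendances / avg_days_attended

print(f"Attendances: {total_attendances:,}, no-shows: {no_show_rate:.1%}")
print(f"Unique attendees: {unique_attendees:,.0f}")  # ~49,889
```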
For some types of monitoring and evaluation (such as economic impact assessment) it may be necessary to distinguish between those whose attendance at the event is their primary motivation for being in the local area and 'casuals', that is, people who are in the locality for some other primary purpose and for whom attendance at the event is a secondary consideration. It is conventional practice to exclude casuals from calculations such as economic impact and carbon footprint estimates, because the impacts made by such attendees cannot be attributed to the event.
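In survey terms this usually amounts to a screening question and a filter applied before grossing up. A minimal sketch, assuming each survey record carries a flag identifying casual attendance (the field names and figures are our own):

```python
# Each record: (event-related spend in GBP, is the respondent a 'casual'?)
survey = [(25, False), (40, False), (15, True), (30, False), (60, True)]

# Exclude casuals: their presence in the area is not attributable to the event.
eligible = [spend for spend, is_casual in survey if not is_casual]

avg_eligible_spend = sum(eligible) / len(eligible)
eligible_share = len(eligible) / len(survey)

# Gross up using only the event-motivated share of a verified population.
population = 50_000
economic_impact = avg_eligible_spend * eligible_share * population
print(f"{economic_impact:,.0f} GBP")  # 950,000 GBP
```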
Suggested considerations when measuring attendance at ticketed events:
- The primary data sources will be box office records, ticket sales and ticket distribution data
- The number of tickets distributed or sold does not always equal the attendance at the event
- Ticket buyers may not be the people who use the tickets (e.g. group bookings, where the purchaser is not necessarily representative of the whole group)
- Some of the larger ticketed events may operate a reuse policy whereby people who leave an event early would relinquish their tickets, to be sold on to other spectators wanting to watch the action.
Ticket sales and distribution may provide a broad indication of the nature of the audience attending; however, primary research is required to provide more detailed information on those attending.
Measuring Attendance at Non-Ticketed Events
Having explained the importance of crowd sizes, it is perhaps worth explaining an 'open access' methodology in more detail. The approach has been developed as a result of the increasing number of open access events at which monitoring and evaluation work has been undertaken, particularly economic impact studies.
The methodology is based on crowd densities along the 2.5m crash barriers found on linear routes, the assumption being that each barrier accommodates five people side by side. If barriers are on both sides of a 1km route and the crowd is one deep for its entirety, then there would be 800 barriers (400 on each side of the road) and 4,000 spectators (5 x 800). Clearly this is the approach in simplistic terms; the final estimate is refined according to the experience of the research team at the event and with reference to their photographs and video footage of the event (and to material from event websites, blogs etc.). Moreover, where available, any recorded TV footage of the event, plus aerial stills (from which, in some instances, it is possible to count the people attending), is also used to derive crowd densities.
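The basic barrier calculation generalises to any route length and average crowd depth. A minimal sketch, using the assumptions stated above of 2.5m barriers holding five people each (the function name is our own):

```python
BARRIER_LENGTH_M = 2.5   # standard crash barrier length
PEOPLE_PER_BARRIER = 5   # five people side by side per barrier

def barrier_crowd_estimate(route_km, avg_depth, sides=2):
    """Estimate spectators on a linear route from barrier-based crowd density."""
    barriers_per_side = route_km * 1_000 / BARRIER_LENGTH_M
    return barriers_per_side * sides * PEOPLE_PER_BARRIER * avg_depth

# Worked example from the text: 1km route, barriers both sides, crowd one deep
print(barrier_crowd_estimate(route_km=1, avg_depth=1))  # 4,000 spectators
```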
This 'open access' (or 'no-stadium') methodology is particularly useful as a test of reasonableness for the estimates put out by event organisers. For example, if organisers claimed that an event attracted 100,000 spectators around a 1km course, it would be reasonable to expect crowd densities of around 25 deep on both sides of the road. If photographic and other evidence refutes the expected crowd densities, then it is likely that crowd numbers have been overstated. In the case of large-scale events, it is also possible to derive a reasonably accurate estimate of attendance levels by surveying the community concerned after the event.
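Inverting the barrier calculation gives the crowd depth implied by a claimed attendance figure, which can then be checked against photographic evidence. A minimal sketch using the same assumptions as above:

```python
def implied_crowd_depth(claimed_attendance, route_km, sides=2):
    """Crowd depth a claimed attendance figure implies, given barrier capacity."""
    capacity_one_deep = (route_km * 1_000 / 2.5) * sides * 5
    return claimed_attendance / capacity_one_deep

# A claim of 100,000 spectators on a 1km course implies crowds 25 deep;
# if photographs show crowds 3-4 deep, the claim is implausible.
print(implied_crowd_depth(100_000, route_km=1))  # 25.0
```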
Some events report attendance figures on the basis of police estimates. We have interviewed two senior officers from the Metropolitan Police, who advised us that the police do not make any scientific estimates of crowd sizes and ideally do not like to have figures attributed to them. Any estimates attributed to the police are based on little more than hunches.
Finally, any repeat viewing factor derived from primary research amongst spectators should be applied to the crowd estimate to derive the number of different people en route. The 'open access' methodology can also be adapted for crowds in open spaces, such as free concerts, based on the number of people per square metre.
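For open spaces, the same logic applies with area in place of barrier length. A minimal sketch, assuming a density figure has been established from photographs or aerial stills (all figures are illustrative):

```python
def open_space_estimate(area_m2, people_per_m2, repeat_factor=1.0):
    """Estimate unique attendees in an open space from area and crowd density."""
    return (area_m2 * people_per_m2) / repeat_factor

# A 20,000 square metre park at roughly 2 people per square metre
print(open_space_estimate(20_000, 2.0))        # 40,000 attendances
# With a survey-derived repeat viewing factor of 1.25
print(open_space_estimate(20_000, 2.0, 1.25))  # 32,000 unique attendees
```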