Evaluating UK search equipment


  • With decades of experience in tackling terrorism, Britain leads the way when it comes to testing aviation security equipment.

  • Testing procedures vary from country to country and, while there is no definitive best practice, UK methods have largely been seen as successful.

  • In the future, security equipment testing could be standardised across Europe.

The market for security and search equipment capable of detecting explosives and weapons was thrust into the spotlight following the attacks on New York and Washington on 11 September 2001. But while this market has grown considerably in the last five years, it has existed for decades.

Aviation security has long driven the security equipment sector. The UK was one of the first countries to seriously tackle terrorist threats to the aviation industry following the 1988 bombing of Pan Am Flight 103 over Lockerbie. One outcome of the disaster was the decision to screen for explosives all checked baggage loaded onto aircraft flying from the UK. Two systems, Thermal Neutron Analysis and Vivid, were capable of meeting this operational requirement at the time. Within two years, Vivid had been selected, owing to its high performance and low complexity, and installed at all UK airports. Today, the same selection process would have to assess as many as 11 different systems already on the market, not to mention several novel systems still under development.

In the wake of several high-profile attacks, the market for equipment capable of detecting explosives and weapons has grown immensely, and the number of offerings from commercial companies has increased substantially.

Difficult decisions

The glut of equipment developed to meet this increasing demand has been a source of difficulty for buyers of new systems. How can an operator know which piece of equipment is best suited to its needs? The decision can be hard because there are so many variables involved in the selection process. Equipment with Automatic Threat Recognition technology allows the operator to tune its sensitivity, balancing the probability of detection against the probability of false alarm or the alarm rate, depending on the type of equipment. So not only must a user select a manufacturer and model of equipment, they must also choose appropriate settings for using it - giving a potentially huge number of choices.
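To make that trade-off concrete, the sketch below is a minimal, hypothetical Python example - not a model of any real detection system - in which invented detector scores for threat and benign items show how moving a single sensitivity threshold raises the probability of detection only at the cost of a higher false alarm rate.

    # Illustrative sketch only: how a sensitivity threshold trades
    # probability of detection against probability of false alarm.
    # All scores below are invented; no real equipment is modelled.
    import random

    random.seed(1)

    # Hypothetical detector scores: threat items tend to score higher
    # than benign items, but the two populations overlap.
    threat_scores = [random.gauss(7.0, 1.5) for _ in range(1000)]
    benign_scores = [random.gauss(4.0, 1.5) for _ in range(1000)]

    def rates(threshold):
        """Return (probability of detection, probability of false alarm)
        if every item scoring above the threshold raises an alarm."""
        pd = sum(s > threshold for s in threat_scores) / len(threat_scores)
        pfa = sum(s > threshold for s in benign_scores) / len(benign_scores)
        return pd, pfa

    # Sweeping the threshold shows the choice facing an operator:
    # a lower threshold catches more threats but alarms more often.
    for threshold in (3.0, 4.5, 6.0, 7.5):
        pd, pfa = rates(threshold)
        print(f"threshold={threshold:4.1f}  Pd={pd:.2f}  Pfa={pfa:.2f}")

Every manufacturer, model and setting combination adds another point to this trade-off space, which is why the number of choices facing a buyer grows so quickly.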

To overcome these difficulties, it is crucial that an impartial testing authority thoroughly assesses the capabilities of the equipment against realistic and consistent threats. By taking the results of the tests and relating them to operational requirements, users can evaluate the range of available equipment more effectively and choose that which is best suited to their needs.
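As a rough illustration of how test results might be weighed against an operational requirement, the hypothetical Python sketch below scores two invented systems against invented, weighted criteria; the systems, figures and weights are assumptions made purely for illustration and do not come from any real evaluation.

    # Hypothetical example: relating test results to an operational
    # requirement by scoring candidate systems against weighted criteria.
    # All systems, figures and weights are invented for illustration.

    requirement_weights = {
        "detection": 0.5,     # probability of detection in testing
        "false_alarm": 0.3,   # 1 - false alarm rate (higher is better)
        "throughput": 0.2,    # normalised items-per-hour score
    }

    candidates = {
        "System A": {"detection": 0.95, "false_alarm": 0.70, "throughput": 0.60},
        "System B": {"detection": 0.85, "false_alarm": 0.90, "throughput": 0.90},
    }

    def score(results):
        """Weighted sum of a candidate's results against the requirement."""
        return sum(requirement_weights[k] * results[k] for k in requirement_weights)

    for name, results in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
        print(f"{name}: weighted score {score(results):.2f}")

A different operational requirement - for instance, one that prizes throughput over detection - would weight the same test results differently and could rank the same equipment in the opposite order.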

In the UK, there are well-defined processes for evaluating security search equipment. Three main bodies are responsible for this type of testing: the Home Office Scientific Development Branch (HOSDB); the Defence Science and Technology Laboratory, an agency of the Ministry of Defence; and QinetiQ, a private defence technology company. Each of these has defined responsibilities for evaluating equipment for civil use, and there is much communication and resource-sharing among them.

Testing processes

The manufacturer of the equipment, a government department, or an end user such as the British Airports Authority first brings a piece of equipment to the attention of the transport security group at the Department for Transport (DfT), which regulates the use of search equipment at ports, terminals and airports in the UK. If there is a perceived need for the equipment, or it is novel enough to warrant further examination, the DfT then decides what action to take and whether a full evaluation is justified.

The process then forks in one of several directions, depending on the type of equipment that is being evaluated. For some equipment, there are well-defined test procedures and pass criteria. When this is the case, the manufacturer provides a sample of the equipment to the relevant evaluation agency, which then carries out the tests. The results of the tests and an objective decision on the possible future use of the equipment are then sent to the DfT. Equipment that passes the testing phase gains approval from the DfT and can be used by transport end users.

The UK is unusual in that manufacturers are allowed, and in some cases encouraged, to revise and upgrade equipment and parameters during the early phases of the testing process. Once final settings are agreed on, no further changes can be made and the equipment is formally tested.

This is in stark contrast to practices used in other countries. In the US, equipment is brought to the Transportation Security Administration and left there by the manufacturer. Testing is carried out behind closed doors, and the equipment passes or fails based on strict criteria.

There are advantages and disadvantages to both approaches. The danger in allowing manufacturers to change settings during early testing phases is that prototypes with serious faults are sometimes brought for evaluation well before they are ready. The main disadvantage in using the stricter evaluation method is that equipment that does not meet an exact requirement will fail, even though it may excel in a different operational environment. On the flip side, the first method gives more scope for governments to implement cutting-edge equipment; the second allows for well-structured evaluations that are easy to carry out. Whether one approach is better than the other is very subjective and provides stimulus for much discussion within government.

Novel equipment

Although there is a fairly straightforward method for evaluating equipment types with well-defined operational requirements and threats, novel equipment - which is based on new technologies or which responds to a previously unaddressed threat - poses a greater challenge for assessors and may require a more variable approach. The DfT must first decide whether to perform a quick or full evaluation of the equipment. If equipment performs well in a quick evaluation, it may then be subjected to exhaustive testing. In this case, a test protocol specific to the equipment under evaluation must be drawn up by the test agency and the DfT.

UK evaluations are often realistic and pragmatic rather than purely scientific. A metal detector that detects a gun-shaped piece of metal being pushed through by a robot will not necessarily detect a revolver being carried through by a person. A piece of equipment's response to these two scenarios will often be very different.

Although the DfT has been the main driver behind the evaluation of equipment, other users are also interested in its development. In many cases, they simply install equipment with the same settings used for aviation security; UK police forces, for example, have bought or hired equipment with DfT-approved settings. This has not been a problem in the past, as the threats to transport and to secure events tend to be similar. Prisons, on the other hand, have very different operational requirements to transport operators. Because of this, they tend to use the testing agencies independently of the DfT, although their evaluation procedures are very similar in practice.

Issues for the future

Although the science of evaluation is mature and to a certain extent stable in the UK, there are still issues to overcome.

One emerging hurdle is European unification. The EU is starting to merge evaluation procedures, with countries taking ownership of testing particular types of equipment. This is a huge step forward; in the short term, the reality is that certain countries will take the lead and will eventually extend their expertise throughout the EU.

It is probable that truly independent evaluation facilities will need to be created throughout the EU to take on the extra work generated in recent years.

Daniel Gifkins is a consultant and project manager who specialises in the evaluation of security search equipment.


