
QPS: Quench Protection System

Introduction

The purpose of the QPS is to protect the superconducting elements (dipoles, quadrupoles, current leads, etc.) of the CERN Large Hadron Collider (LHC) against quenches (loss of the superconducting state), splice faults, etc., and to handle the energy extraction. Below is a simplified view of an LHC electrical circuit, composed of magnets, a power converter and an energy extraction switch.

The QPS control system architecture is a traditional three-layer architecture composed of:

  • field layer: read-out and control, protecting the superconducting elements and extracting the energy accumulated in the superconducting magnets. The field layer is developed and operated by the TE/MPE group.
  • control layer: reading data from and handling commands to the field layer
  • supervision layer: supervision of the QPS front-end and field layer, interfacing the QPS to the accelerator utilities.

The QPS is linked to:

  • PIC (Power Interlock Controller): by hardware means for the powering interlock and by software means.
  • Power converter: via the PIC, hardwired signals
  • LHC_CIRCUIT: application presenting the view of all the electrical circuits of the LHC, software link.

In case of a global power loss, there are 10 minutes of autonomy from the field layer up to the control layer, including the fieldbus and network, so that if an event occurs (quench, energy extraction, splice fault, etc.) the post-mortem data can still be sent to the post-mortem system. All the data read in the QPS and saved at each layer are time-stamped in the field layer.
During commissioning and beam operation, a high level of availability is required for the field and control layers as they protect the equipment. The availability of the supervision layer also has to be high, both because it collects and distributes the data to the long-term archiving and analysis systems, and because the field layer readout buffers for the splice measurement are too small; the splice analysis therefore uses the data from the long-term archiving.

The QPS is split into four major types of devices:

  • DQAMC type: for MB (bending magnets) and MQ (quadrupole magnets)
  • DQAMS type: switches, 13 kA, 600 A, etc.
  • DQAMG type: global device for corrector magnets, splice and earth fault detection, inner triplet magnets, etc.
  • DQAMGS type: splice and earth fault detection

All types can be on the same WorldFIP bus. However usually:

  • buses A, F, L and G are for DQAMG and DQAMS types
  • buses B, C, D, E, H, I, J and K are for DQAMC and DQAMGS types

Field layer

QPS field layer simplified view:

The field layer (developed and operated by TE/MPE), with 2800 radiation-hard devices, is connected to the WorldFIP fieldbus. This equipment includes the fast readout and control system that protects the superconducting magnets and the superconducting elements (e.g. current leads) of the powering circuits, as well as the control and monitoring system of the energy extraction switches. In detail, this equipment:

  • protects (hardwired, with redundant readout) the superconducting magnets against quench, splice fault, earth fault and symmetric quench by energizing the magnet heater strips and opening the energy extraction switches (see the sketch after this list)
  • protects (hardwired, with redundant readout) the superconducting elements of the electrical circuits: main bus bars, inner triplets, insertion region magnets, corrector magnets (>120 A), HTS current leads
  • controls the energy extraction switches
  • interlocks the powering down of the power converters
  • receives the power converter and PIC interlocks and activates the protection by opening the energy extraction switches and energizing the magnet heater strips if needed.
  • pushes 10 Hz and 0.1 Hz data up to the control layer in logging mode.
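
As an illustration of the protection chain listed above, the minimal Java sketch below shows how a validated over-threshold reading on a redundant voltage readout could fire the heater strips, open the energy extraction switches and raise the power converter interlock. All class names, thresholds and validation times are hypothetical and only illustrate the principle, not the actual TE/MPE firmware:

// Hypothetical sketch of the field layer protection chain (not the actual TE/MPE firmware).
public class QuenchProtectionSketch {

    // Assumed detection parameters, for illustration only.
    static final double QUENCH_VOLTAGE_THRESHOLD_V = 0.1;   // resistive voltage threshold
    static final long VALIDATION_TIME_MS = 10;               // signal must persist this long

    private long overThresholdSinceMs = -1;

    // Called on every sample of the redundant resistive-voltage readout.
    public void onSample(double voltageChannelA, double voltageChannelB, long nowMs) {
        // Redundant readout: both channels must be above threshold to validate a quench candidate.
        boolean over = voltageChannelA > QUENCH_VOLTAGE_THRESHOLD_V
                && voltageChannelB > QUENCH_VOLTAGE_THRESHOLD_V;
        if (!over) {
            overThresholdSinceMs = -1;   // candidate cleared
            return;
        }
        if (overThresholdSinceMs < 0) {
            overThresholdSinceMs = nowMs;
        }
        if (nowMs - overThresholdSinceMs >= VALIDATION_TIME_MS) {
            protectCircuit();
        }
    }

    // Protection actions described in the list above.
    private void protectCircuit() {
        fireHeaterStrips();            // energize the magnet heater strips
        openEnergyExtractionSwitch();  // extract the energy stored in the magnets
        interlockPowerConverter();     // request the power converter to power down
    }

    private void fireHeaterStrips()           { System.out.println("HDS fired"); }
    private void openEnergyExtractionSwitch() { System.out.println("EE switch opened"); }
    private void interlockPowerConverter()    { System.out.println("PC interlock raised"); }

    public static void main(String[] args) {
        QuenchProtectionSketch qps = new QuenchProtectionSketch();
        long t0 = System.currentTimeMillis();
        qps.onSample(0.02, 0.02, t0);        // normal operation, below threshold
        qps.onSample(0.25, 0.24, t0 + 100);  // quench candidate appears
        qps.onSample(0.30, 0.29, t0 + 120);  // still above threshold after the validation time -> protect
    }
}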

The field layer equipment time-stamps the data and keeps it in a rolling buffer large enough to hold all the data read around the time of an event. In case of an event, this buffer is sent to the front-end layer. The field layer is periodically synchronized with the LHC time by the front-end via WorldFIP.
QPS field layer readout buffer: Example of an event (plot over time):
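
The rolling buffer can be pictured as a fixed-size ring of time-stamped samples: new samples overwrite the oldest ones, and on an event the window around the event time is extracted and sent to the front-end. The Java sketch below is only illustrative; the buffer size, sample format and method names are assumptions, not the real field layer implementation:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Minimal sketch of a time-stamped rolling readout buffer (sizes and names are illustrative only).
public class RollingBufferSketch {

    record Sample(long timestampMs, double value) {}

    private final int capacity;
    private final Deque<Sample> buffer = new ArrayDeque<>();

    public RollingBufferSketch(int capacity) {
        this.capacity = capacity;
    }

    // Every sample is time-stamped in the field layer before being stored.
    public void add(long timestampMs, double value) {
        if (buffer.size() == capacity) {
            buffer.removeFirst();   // overwrite the oldest sample
        }
        buffer.addLast(new Sample(timestampMs, value));
    }

    // On an event, the window around the event time is extracted and sent to the front-end.
    public List<Sample> snapshotAroundEvent(long eventTimeMs, long windowMs) {
        List<Sample> window = new ArrayList<>();
        for (Sample s : buffer) {
            if (Math.abs(s.timestampMs() - eventTimeMs) <= windowMs) {
                window.add(s);
            }
        }
        return window;
    }

    public static void main(String[] args) {
        RollingBufferSketch buffer = new RollingBufferSketch(1000);
        long now = System.currentTimeMillis();
        for (int i = 0; i < 2000; i++) {
            buffer.add(now + i, Math.sin(i / 100.0));   // the oldest half is overwritten
        }
        List<Sample> event = buffer.snapshotAroundEvent(now + 1900, 50);
        System.out.println(event.size() + " samples kept around the event");
    }
}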

There are four groups of field layer equipment; each piece of equipment is named a 'hardware device':

  • DQAMC for dipole and quadrupole magnets: one hardware device per magnet
  • DQAMS for energy extraction switches (600 A, 13 kA): one hardware device per switch
  • DQAMG for all the other types of elements (superconducting elements of a powering circuit, current leads, etc.): one hardware device can control from 1 to 4 elements; each element is named a 'virtual device'
  • DQAMGS for splice measurement, etc.: one hardware device can control from 1 to 3 elements; each element is named a 'virtual device'
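
The relation between 'hardware device' and 'virtual device' can be summarized as a simple mapping from each hardware device to the elements it controls. The Java sketch below illustrates this data model only; the device names are invented for the example and do not follow the official LHC naming convention:

import java.util.List;

// Illustrative data model of the four device families; the device names are invented.
public class QpsDeviceModelSketch {

    enum DeviceType { DQAMC, DQAMS, DQAMG, DQAMGS }

    // A hardware device and the virtual devices (protected elements) it controls.
    record HardwareDevice(String name, DeviceType type, List<String> virtualDevices) {}

    public static void main(String[] args) {
        // DQAMC and DQAMS control exactly one element; DQAMG up to 4 and DQAMGS up to 3 virtual devices.
        List<HardwareDevice> devices = List.of(
            new HardwareDevice("DQAMC.EXAMPLE.MB", DeviceType.DQAMC, List.of("MB.EXAMPLE")),
            new HardwareDevice("DQAMS.EXAMPLE.EE", DeviceType.DQAMS, List.of("EE.EXAMPLE")),
            new HardwareDevice("DQAMG.EXAMPLE", DeviceType.DQAMG, List.of("LEAD.1", "LEAD.2", "BUSBAR.1")),
            new HardwareDevice("DQAMGS.EXAMPLE", DeviceType.DQAMGS, List.of("SPLICE.1", "SPLICE.2"))
        );
        for (HardwareDevice d : devices) {
            System.out.println(d.type() + " " + d.name() + " -> " + d.virtualDevices());
        }
    }
}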

Contact person: Reiner Denz

Control layer

QPS control layer simplified view:

The control layer runs on 44 BE/CO diskless industrial front-end PCs (also named FEC, DQGTW or GTW) and was developed with the FESA framework. Its main tasks are:

  • reading out the data from the field layer equipment at 10 Hz and 0.1 Hz in logging mode. There is a total of about 133000 signals: 27000 analog and 106000 boolean/status.
  • handling the commands to the field layer equipment; the set of commands is protected and restricted to a defined set of users via an RBAC (role-based access control) setup
  • synchronizing the field layer with the LHC time
  • handling the Post-Mortem buffer: in case of an event detected by the field layer equipment, the readout buffer is read, compacted and sent to the post-mortem system for analysis
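
A rough illustration of the post-mortem path described in the last item above: the front-end reads the event buffer pushed by the field layer, compacts it and forwards it to the post-mortem system. The Java sketch below uses plain GZIP compression and invented method names purely to show the sequence of steps; it is not the actual FESA implementation:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Sketch of the control layer post-mortem handling: read the event buffer, compact it, send it.
// The data, compression and method names are invented for illustration.
public class PostMortemSketch {

    public static void main(String[] args) throws IOException {
        String eventBuffer = readFieldLayerBuffer();   // buffer pushed by the field layer on an event
        byte[] compacted = compact(eventBuffer);       // reduce the payload before sending
        sendToPostMortemSystem(compacted);
    }

    static String readFieldLayerBuffer() {
        // In reality this arrives over WorldFIP from the hardware device.
        return "timestamp,value\n1650000000000,0.05\n1650000000100,0.21\n";
    }

    static byte[] compact(String buffer) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(buffer.getBytes(StandardCharsets.UTF_8));
        }
        return out.toByteArray();
    }

    static void sendToPostMortemSystem(byte[] payload) {
        System.out.println("sending " + payload.length + " bytes to the post-mortem system");
    }
}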

The control layer follows the same 'hardware device' mapping as the field layer and pushes each 'virtual device' up to the supervision layer whenever data is received from the field layer. The control layer also pushes up global summary bit information on groups of devices. If the field layer equipment is not sending data, the control layer will not push anything to the supervision layer.
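
A minimal sketch of such a global summary bit, assuming (as a simplification) that each device of a group exposes a single boolean status and that the group is OK only if all its devices are OK; the device names are hypothetical:

import java.util.Map;

// Illustrative computation of a group summary bit from individual device statuses.
public class SummaryBitSketch {

    public static void main(String[] args) {
        // Hypothetical per-device status bits for one group of devices (true = OK).
        Map<String, Boolean> groupStatus = Map.of(
            "DQAMC.A1", true,
            "DQAMC.A2", true,
            "DQAMC.A3", false);   // one faulty device in the group

        // The group summary bit is OK only if every device of the group is OK.
        boolean summaryOk = groupStatus.values().stream().allMatch(ok -> ok);
        System.out.println("group summary bit = " + summaryOk);
    }
}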

The front-end contains a dedicated timing card which triggers the WorldFIP buses at 10 Hz to start the polling of the data of each field layer equipment. The trigger occurs at the same time for all the WorldFIP buses of the control layer (a property of the timing card, which is connected to the timing bus and subscribes to a timing event).
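
The common trigger can be pictured as one periodic task that starts the poll of every WorldFIP bus of the front-end at the same instant. The Java sketch below replaces the timing card with a plain software scheduler, purely to illustrate the 10 Hz (100 ms) period and the simultaneous start:

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the 10 Hz polling trigger. The real trigger comes from the timing card;
// a software scheduler is used here only to illustrate the common start instant.
public class PollingTriggerSketch {

    public static void main(String[] args) {
        List<String> worldFipBuses = List.of("A", "B", "C", "D");   // illustrative subset
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Every 100 ms (10 Hz), start the poll of all the buses of this front-end together.
        scheduler.scheduleAtFixedRate(() -> {
            long triggerTime = System.currentTimeMillis();
            for (String bus : worldFipBuses) {
                System.out.println(triggerTime + " polling WorldFIP bus " + bus);
            }
        }, 0, 100, TimeUnit.MILLISECONDS);
    }
}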

An expert tool (in Java) is provided to diagnose, debug and have full control over the field layer 'hardware devices' and 'virtual devices'. The communication interface between the control layer and the supervision layer is based on CMW (Controls Middleware) and protected by RBAC; users must enter their NICE credentials to start the tool.

QPS expert tool:

Contact person: Bruno Dupuy

Supervision layer

QPS supervision layer simplified view:

The supervision layer runs on 16 HP ProLiant servers with the WinCC OA SCADA (Supervisory Control And Data Acquisition) from ETM, the EN/ICE frameworks JCOP and UNICOS, and a dedicated QPS supervision package. It is interfaced to the field layer equipment through the front-ends of the control layer via CMW, with RBAC protection. Its main tasks are:

  • read-out of the field equipment data: 9% of the signals kept at 10 Hz, 15% kept at 0.1 Hz, 3% kept at 0.5 Hz and the remaining 73% kept occasionally.
  • all the field equipment must periodically send data to the supervision layer through the front-end layer within 2 seconds. If not, the device is marked as not reachable with an 'F' letter (top right corner) in the QPS device widget and the ST_GTWDEV bit set (orange colour) in the QPS device faceplate (see the sketch after this list).
  • handling a limited set of commands to the field equipment protected by user name and password
  • interfacing to the LHC software suite: LHCLogging (LHC long-term archiving; the total amount of data inserted in the WinCC OA ORACLE DB and LHCLogging is about 123400 values per second), LASER (LHC alarm system), beam operation and hardware commissioning sequences
  • interface to the PIC for the QPS_OK software signal
  • interface to the splice monitoring system by periodically providing the measured splice values
  • running macros used with the LHC Sequencer, based on circuit:
    • to close the energy extraction switches
    • HDS (heater discharge strip) software interlocks: remove the PIC power permit on the corresponding circuit and trigger a circuit mail/SMS when there is an HDS problem
    • trigger a circuit mail/SMS in case of a crate failure
    • reset crate
    • automatic calibration of nQPS device
    • snapshot of QPS agent: to have a view of all the signals of the agents of a circuit
  • running macros during commissioning: automatic close/open switch, firing DQAMGA
  • monitoring the U_HDS voltage and sending an SMS/email in case of failure
  • monitoring the QPS agent COMM state and sending an SMS/email in case of failure
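
Regarding the 2-second reachability check mentioned in the list above, a minimal Java sketch of the idea follows. The method names are hypothetical and the real check is implemented inside the WinCC OA supervision package, not in Java:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the 2-second reachability check: a device that has not published data within
// the timeout is flagged as not reachable ('F' in the widget, ST_GTWDEV bit in the faceplate).
public class ReachabilityWatchdogSketch {

    static final long TIMEOUT_MS = 2000;

    private final Map<String, Long> lastUpdateMs = new ConcurrentHashMap<>();

    // Called whenever a virtual device publishes data through the front-end.
    public void onDeviceData(String device, long nowMs) {
        lastUpdateMs.put(device, nowMs);
    }

    // Periodic check: devices silent for more than 2 seconds are marked as not reachable.
    public void checkReachability(long nowMs) {
        lastUpdateMs.forEach((device, last) -> {
            if (nowMs - last > TIMEOUT_MS) {
                System.out.println(device + ": not reachable -> set the 'F' flag and the ST_GTWDEV bit");
            }
        });
    }

    public static void main(String[] args) {
        ReachabilityWatchdogSketch watchdog = new ReachabilityWatchdogSketch();
        long now = System.currentTimeMillis();
        watchdog.onDeviceData("DQAMC.EXAMPLE", now - 5000);   // last data received 5 s ago
        watchdog.checkReachability(now);                      // -> flagged as not reachable
    }
}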

The archiving load on the dedicated WinCC OA ORACLE server is constantly high, about 123400 values per second. The ORACLE DB is installed on a dedicated cluster in the IT building.

The supervision layer deals with 'virtual devices'.

QPS 'virtual device' widget:

QPS 'virtual device' faceplate:

WorldFIP QPS view with QPS 'virtual device' widget, magnet voltage and magnet splice profile:

Powering sub-sector widget:

Circuit widget:

Circuit view with PIC, QPS 'virtual device' widget, power converter and cryo data:

Contact person: Hervé Milcent