Most organisations find out their incident management process has gaps at the worst possible moment. A security alert arrives, the team scrambles, and it quickly becomes clear that the steps, roles, and reporting obligations were never properly defined. Under the NIS2 Directive, that kind of improvisation carries real consequences.
NIS2 requires essential and important entities to detect, respond to, and document cyber incidents within defined timeframes, and to notify the relevant authorities accordingly. Penalties for non-compliance can reach €10 million or 2% of global annual turnover for essential entities, whichever is higher. Without a structured incident management process, organisations are exposed on both the operational and regulatory fronts.
NIS2 is enforceable EU law across member states, and its reach extends beyond EU borders. UK organisations with EU operations, subsidiaries, or customers providing services in EU member states fall directly within scope and should treat compliance as an active obligation.
This article covers how to build and implement a cyber incident management process that meets NIS2 requirements, including classification criteria, incident handling stages, team roles, documentation, and how to keep the process fit for purpose over time. Whether you are building a security incident management framework from scratch or reviewing an existing one, the principles here apply across sectors and organisation sizes.
- 1. NIS2 Requirements for Incident Management
- 2. Detecting and Classifying Incidents
- 3. How to Handle a Security Incident
- 4. Reporting Incidents to the Authorities
- 5. Roles and Responsibilities in Incident Response Management
- 6. Integrating Incident Management with Your ISMS
- 7. Business Continuity and Crisis Management
- 8. Testing and Continuous Improvement
- 9. Building a NIS2 Compliance Trail
- 10. Common Incident Management Challenges
- 11. FAQ
NIS2 Requirements for Incident Management
The NIS2 Directive obliges organisations to detect incidents promptly, limit their impact, and meet strict reporting timelines. It also requires the entire process to be documented and defensible under audit.
NIS2 Scope: Essential and Important Entities
NIS2 draws a distinction between essential entities and important entities. Essential entities are typically large organisations in high-criticality sectors such as energy, transport, health, banking, and digital infrastructure. Important entities include medium-sized companies from those same sectors, as well as medium and large organisations in areas such as postal services and food production. Organisations are expected to self-assess their classification and register accordingly with the relevant national authority.
The distinction carries practical consequences. Both categories share the same security and incident management obligations, but essential entities face more intensive supervision and higher financial penalties. Understanding which category applies is the first step in calibrating the required response.
What Makes a Cyber Incident Significant
Under NIS2, an incident is any event that compromises, or risks compromising, the confidentiality, integrity, or availability of network and information systems. A significant incident is one that causes, or has the potential to cause, serious operational disruption or financial loss, or that affects other organisations or individuals by causing considerable material or non-material damage.
At the outset of a developing situation, organisations rarely have a complete picture. Security procedures should therefore allow for a rapid, provisional assessment. If an event touches critical services or sensitive data, treat it as potentially significant until analysis shows otherwise. That presumption reduces the risk of delayed notification and the regulatory consequences that follow.
Why NIS2 Holds Management Accountable
NIS2 places explicit accountability on management bodies, not just security teams. Senior leadership must approve cybersecurity risk management measures, oversee their implementation, and can be held personally liable for failures. Article 20 of the Directive states that management bodies can be held liable for infringements by their organisations, a provision that extends, in cases of repeated violations at essential entities, to temporary bans from management roles.
This means management must designate a process owner with authority to act outside standard business hours, set priorities when rapid service restoration conflicts with security requirements or formal obligations, and approve external communications. Incident response management is a board-level concern, not a purely technical one, and NIS2 makes that accountability explicit.
Detecting and Classifying Incidents
Early detection depends on whether signals reach the right person and are assessed consistently across the organisation. Classification is what turns a raw alert into a coordinated response.
Where Incidents Come From
Signals can reach an organisation through two complementary sources. The first is automated: security monitoring tools, log analysis, anomaly detection, and alerts from endpoint and network protection systems. The second is human: reports from employees, service owners, and business teams who notice something out of the ordinary.
Because cyber incidents increasingly originate in the supply chain, security procedures should also account for signals from suppliers and external partners. The process must specify who receives a report, who carries out the initial assessment, and within what timeframe. Without that clarity, valuable time is lost simply confirming whether an incident has occurred.
How to Assess Impact and Severity
The initial assessment should consider the effect on service availability, data security, and the breadth of the event. Context matters significantly. The same technical event in a system supporting a critical service carries far greater weight than an identical event in a peripheral system, even if the latter generates more alerts. Duration, rate of escalation, and any known supply chain dependencies are also relevant factors.
Building an Incident Classification Framework
The output of classification should immediately indicate who needs to act, in what capacity, and whether notification to authorities must begin in parallel with the technical response. The table below illustrates an example classification framework that organisations can adapt to their own environment.
| Level | Criteria | Response | Reporting obligation |
|-------|----------|----------|----------------------|
| Low | Localised, no service impact | IT team handles, log and document | Internal record only |
| Medium | Service disruption possible, cause unclear | Security and IT teams, management informed | Monitor; prepare to notify if escalates |
| High | Critical service affected, data at risk | Full incident response team, management engaged | Early warning within 24 hours |
| Critical | Major disruption, possible cross-border impact | Crisis management, board-level involvement | Immediate notification, parallel reporting and response |
When an incident reaches a high or critical level, containment and regulatory notification must run simultaneously. Waiting for the incident to close before starting the report is not an option under NIS2. The timelines begin from the moment the organisation becomes aware.
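The classification logic above can be sketched as a small helper that returns both a severity level and whether regulatory notification must begin in parallel with containment. This is a minimal illustration; the input flags and thresholds are assumptions that each organisation would replace with its own criteria.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    # Illustrative assessment flags; real criteria come from your own framework.
    critical_service_affected: bool
    data_at_risk: bool
    service_disruption_possible: bool
    cross_border_impact_possible: bool

def classify(a: Assessment) -> tuple[str, bool]:
    """Return (severity level, whether notification must start in parallel)."""
    if a.cross_border_impact_possible:
        return "critical", True   # immediate notification, parallel response
    if a.critical_service_affected or a.data_at_risk:
        return "high", True       # early warning within 24 hours
    if a.service_disruption_possible:
        return "medium", False    # monitor; prepare to notify if it escalates
    return "low", False           # internal record only
```

The two-value return makes the point in the text explicit: at high and critical levels, the reporting track starts at classification time, not at incident closure.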
How to Handle a Security Incident
Incident handling should be documented in enough detail that teams can act under time pressure and with incomplete information. The stages below structure the work of security and IT teams and indicate when business-side decisions must be brought in.
Detection and Initial Analysis
The process begins when an organisation receives a signal. It may be an alert from threat monitoring tools, a report from an employee, or information from a supplier. The immediate task is to determine what the event involves, which service is at risk, and whether there are grounds to treat it as a significant incident.
From the very first moment, the team should record the time of detection, signal source, affected service, and initial classification. This information underpins the chronology that reporting requires and forms the basis for the post-incident review.
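The initial record described above can be captured in a simple structure. This is a sketch with illustrative field names, not a mandated NIS2 schema; the point is that the chronology starts at detection, not at closure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InitialIncidentRecord:
    # Minimal fields recorded at detection; names are illustrative.
    detected_at: datetime          # time of detection
    signal_source: str             # e.g. "SIEM alert", "employee report", "supplier"
    affected_service: str
    initial_classification: str    # e.g. "low" .. "critical"
    notes: list[str] = field(default_factory=list)

    def log(self, entry: str) -> None:
        """Append a timestamped note so the chronology can be reconstructed."""
        ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.notes.append(f"{ts} {entry}")
```

Every later decision appended through `log` becomes part of the evidence base for the regulatory reports and the post-incident review.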
Containing a Cyber Incident
At this stage, speed and discipline matter most. Containment actions by IT and security teams may include account lockdowns, access restrictions, network segmentation, key rotation, disconnection of affected components, or halting suspicious processes. At the same time, the incident owner must be making organisational decisions, managing internal communications, and keeping senior leadership informed.
Formal obligations should not wait. The team must collect the information needed for incident notification in parallel, and significant decisions must be documented as they are made. A report that cannot be submitted on time because the evidence was never gathered is a compliance failure, not just an operational one.
Root Cause Analysis and Recovery
Once the situation is under control, the goal becomes removing the root cause and returning safely to normal operations. It is worth checking whether the incident left lasting changes such as additional accounts, unauthorised configuration modifications, or hidden access paths that could enable a recurrence.
This stage must connect with business continuity plans. If the incident affects a critical service, the team needs to agree on the sequence of restoration and the acceptable level of residual risk during the recovery period.
Incident Closure and Documentation
Closing an incident requires clear criteria to be met. Service stability over a defined period, completion of remediation steps, and a fully updated incident log all need to be confirmed before the incident is formally closed. It is also the point at which the team organises material for the final report and the post-incident review.
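The closure criteria above lend themselves to a simple checklist that must pass in full before an incident is formally closed. The stability window and field names here are assumptions for illustration, not values from the Directive.

```python
from dataclasses import dataclass

@dataclass
class ClosureChecklist:
    # Illustrative closure criteria; the stability window is organisation-defined.
    stable_hours: float            # observed service stability so far
    required_stable_hours: float   # defined stability window, e.g. 72
    remediation_complete: bool
    incident_log_updated: bool

    def can_close(self) -> bool:
        """All criteria must hold before formal closure."""
        return (
            self.stable_hours >= self.required_stable_hours
            and self.remediation_complete
            and self.incident_log_updated
        )
```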
For essential and important entities, incomplete documentation is effectively the same as having no process at all, regardless of how competent the technical response was. Documentation is what allows an organisation to demonstrate to a supervisory authority that it acted in accordance with its security procedures.
Reporting Incidents to the Authorities
NIS2 requires organisations to notify the relevant authority within defined timeframes, even when the situation is still developing. Incident reporting must be built into the process from the start, not treated as something that begins only after the incident is closed. For most organisations, establishing a credible incident response management capability requires revisiting both the technical and organisational dimensions of how incidents are handled.
This is also where security policies and procedures play a direct role. They determine whether the right information can be gathered, approved, and submitted within the required windows.
NIS2 Reporting Deadlines
As set out in Article 23 of the NIS2 Directive, reporting takes place in three stages.
- Early warning: submitted within 24 hours of the organisation becoming aware of the incident. Based on what is known at the time, it signals whether malicious or unlawful activity is suspected and whether there may be cross-border impact.
- Incident notification: submitted within 72 hours of becoming aware of the incident. Updates the initial warning with an assessment of severity and impact, and includes indicators of compromise where available.
- Final report: submitted no later than one month after the incident notification. Closes out the chronology, describes the full impact on services and data, identifies the root cause, and outlines corrective actions taken.
These deadlines run from the moment of awareness, not from when an investigation concludes. Even if the picture is incomplete, a timely notification is always preferable to silence.
Incident Notification Requirements
Each notification should contain at minimum the time of detection, the services affected, an initial assessment of impact, and the actions taken or planned. Preparing templates for all three report types in advance significantly reduces the burden during an active incident. Creating a structured document from scratch under time pressure increases both the risk of errors and the likelihood of missing the deadline.
Security procedures should also specify how the organisation defines the moment it became aware of an incident. A consistent internal definition helps maintain a coherent timeline and makes it easier to demonstrate, under audit, that the required deadlines were met.
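A pre-agreed template can be as simple as a required-field check run against a draft notification before submission. The field names below are illustrative, reflecting the minimum content described above rather than an official schema.

```python
# Illustrative minimum fields for an incident notification; not an official schema.
REQUIRED_FIELDS = {
    "time_of_detection",
    "services_affected",
    "initial_impact_assessment",
    "actions_taken_or_planned",
}

def validate_notification(draft: dict) -> list[str]:
    """Return the required fields still missing or empty in a draft notification."""
    return sorted(REQUIRED_FIELDS - {k for k, v in draft.items() if v})
```

A check like this, run as part of the approval step, catches omissions while there is still time to fix them inside the reporting window.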
How to Work With Your National CSIRT
Notifications go to the relevant Computer Security Incident Response Team (CSIRT) or national supervisory authority, depending on the sector and the member state. The process should specify in advance who drafts the report, who approves it, and how the organisation manages communication when supplier information is needed to complete the notification.
Correspondence with supervisory authorities forms part of the compliance record and may be reviewed during an audit. Organisations should document how that correspondence is archived and how it connects to the broader incident record.
Roles and Responsibilities in Incident Response Management
Clear role assignment shortens response times and ensures information flows where it needs to go. Every stage should have an owner, and formal obligations must be separated from operational tasks, even when both are running in parallel. Effective cyber incident management depends on this separation being established before an event occurs, not worked out in real time. Getting incident response management right at the organisational level is often harder than solving the technical challenges.
The Internal CSIRT
Many organisations maintain an internal CSIRT whose remit covers incident handling, co-ordination between security and IT teams, gathering information for breach notification, and maintaining the incident register. Where a security operations centre (SOC) also exists, the boundaries between the SOC, IT, and CSIRT need to be defined clearly, specifying who approves external communications and who is accountable for documentation.
The internal CSIRT can also serve as the single point of contact with the external CSIRT or relevant authority. This ensures that information flows to one place rather than being scattered across teams and individuals.
Working with Suppliers and Partners
Cyber incidents increasingly originate outside the organisation. Arrangements with suppliers, including designated contacts, escalation paths, information-sharing timelines, and the minimum data a supplier must provide on request, should be agreed in advance rather than improvised during a live incident.
These arrangements directly affect the quality of the 72-hour incident notification. Without prior agreements, an organisation may be unable to obtain the supplier information needed to complete the report on time.
Employee Awareness and Responsibilities
The first signal of a cyber incident is often a report from an employee who noticed a suspicious email, an unexpected login prompt, or an error message that should not have appeared. Security procedures should tell staff clearly what to report, where to report it, and what not to do in the meantime, particularly anything that could hinder subsequent forensic analysis.
When employees understand their role, security monitoring is supplemented by ground-level observation, and the organisation can identify a cyber incident before the situation deteriorates further.
Integrating Incident Management with Your ISMS
Incident management works best when it is part of a broader information security management system (ISMS). The more tightly it connects with risk analysis, security policies, and business continuity planning, the fewer decisions need to be made under pressure during an actual event.
Alignment With Risk Analysis
Risk analysis identifies which services are critical, which scenarios are most likely, and where threat monitoring should be strengthened. It ensures that incident classification reflects genuine business priorities rather than the instincts of whoever is on call. Organisations that have completed a formal risk analysis before an incident occurs will respond faster and more consistently.
Security Policies and Incident Management
The incident management process should be aligned with policies governing access control, change management, backup, and communications. Security policies and procedures must specify not just the rules, but the order of actions and who is authorised to take them. Reviewing these policies and procedures together, rather than treating incident management as a standalone document, avoids gaps and contradictions that tend to surface at the worst possible moment.
Organisations operating under ISO 27001 already have a strong foundation for NIS2 compliance, but will need to supplement it with the specific reporting timelines and authority communication requirements that NIS2 introduces.
Business Continuity and Crisis Management
A significant incident can disrupt service continuity, expose customers to harm, and damage an organisation’s reputation. Incident management should therefore be connected to recovery plans and crisis procedures, particularly where critical services are involved. For organisations in the financial sector, this linkage should also account for DORA obligations, which run in parallel with NIS2 notification requirements.
Testing and Continuous Improvement
Effective security incident management requires regular testing and continuous improvement. Exercises, post-incident reviews, and updates in response to changes in risk or regulation expose weaknesses before real events do.
Exercises and Simulations
Exercises should not be limited to security and IT teams. People responsible for communications and regulatory notifications should take part, and supplier scenarios are worth including, since that is where information flows slowest and gaps in prior agreements tend to surface.
Tabletop exercises are particularly valuable at management level. They test whether the right people can be reached outside business hours, whether escalation criteria are clear, and whether teams can produce a 24-hour notification under realistic conditions.
Post-Incident Reviews
After an incident closes, a structured post-mortem review should identify the root cause. Findings should translate into concrete actions such as revisions to classification criteria, updates to report templates, or clarification of roles and escalation paths. A review that produces a report but no changes has limited value.
Keeping the Process Up to Date
Technology, suppliers, and working practices all change, and the risk profile changes with them. The process owner should run a regular review cycle to keep documentation current and aligned with NIS2 requirements. Any significant shift in the threat landscape, or new guidance from the European Commission, is reason enough to review ahead of schedule.
Building a NIS2 Compliance Trail
Good documentation does more than satisfy an auditor. It makes information transfer easier, brings structure to decisions made under pressure, and gives an organisation the means to demonstrate that its security procedures were genuinely followed.
Core Documentation
Process documentation should cover role descriptions, classification levels, escalation criteria, report templates, and evidence collection rules. Maintaining this documentation as part of the broader set of security policies and procedures, rather than as a standalone file, makes it easier to keep consistent and to demonstrate coherence to an auditor. Without a documented process, incident management depends entirely on the knowledge and experience of whoever happens to be available when an event occurs, and that is not a position any organisation subject to NIS2 supervision should accept.
The Incident Register
The incident register should allow the full chronology of any event to be reconstructed, with a complete record of decisions and actions taken. It is worth logging events that were assessed and did not meet the significance threshold, not just confirmed incidents. Patterns in near-misses often point to weaknesses that a register of major incidents alone would not reveal.
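A register that holds near-misses alongside confirmed incidents can surface the patterns described above with a simple per-category count. This is a minimal in-memory sketch with illustrative field names; a real register would persist entries and record far more detail.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RegisterEntry:
    # Illustrative entry; real registers record much more.
    logged_at: datetime
    summary: str
    significant: bool    # did the event meet the significance threshold?
    category: str        # e.g. "phishing", "supplier", "availability"

class IncidentRegister:
    def __init__(self) -> None:
        self.entries: list[RegisterEntry] = []

    def record(self, summary: str, significant: bool, category: str) -> None:
        self.entries.append(
            RegisterEntry(datetime.now(timezone.utc), summary, significant, category)
        )

    def near_miss_counts(self) -> dict[str, int]:
        """Count sub-threshold events per category to surface recurring weaknesses."""
        counts: dict[str, int] = {}
        for e in self.entries:
            if not e.significant:
                counts[e.category] = counts.get(e.category, 0) + 1
        return counts
```

A cluster of near-misses in one category is exactly the kind of signal a register limited to major incidents would never show.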
Preparing for Audits and Inspections
An audit requires organised evidence, including the register, submitted reports, records of decisions, and the results of process reviews. Where the organisation also runs internal audits, connecting them to the regular review cycle and exercises avoids duplicating effort.
The key question an auditor will ask is whether the process actually works. A consistent register, complete documentation, and notifications submitted on time are what separate an organisation that can demonstrate compliance from one that can only claim it.
Common Incident Management Challenges
The biggest obstacles to implementing effective incident management rarely come down to a lack of tools. More often, the challenge is connecting security operations, business decisions, and formal obligations into a single, coherent process. The following difficulties appear across industries and organisation sizes.
Organisational and Skills Gaps
The most common problem is fragmented accountability. Security and IT teams are active, but coordination is missing and business decisions are delayed. This weakens the response and makes it harder to submit notifications on time. Without a designated process owner, every cyber incident tends to be handled differently, which makes consistent documentation impossible and leaves the organisation exposed under audit.
Technical Challenges
When threat monitoring generates high volumes of low-quality alerts, initial triage takes longer and the risk of missing deadlines grows. Organisations lose time that should be spent on the incident and on documentation. The quality of detection signals and the accuracy of severity assessments matter as much as the monitoring infrastructure itself.
Maintaining Compliance Over Time
Without a planned review cycle covering exercises, documentation updates, and regulatory alignment, processes become stale and organisations lose the consistency that NIS2 requires. Regulatory changes, new threats, and staff turnover all require incident management to be treated as a living process rather than a one-time implementation project.
