Vulnerability Management Policy Guidance

This article provides guidance on the portions of the Vulnerability Management Policy that we most frequently receive questions about, explaining what each section means.

Guidance statements appear in bold, enclosed in brackets “[]”, below the policy statements they refer to.

Vulnerability Management Policy

[COMPANY NAME]

____________________________________________________________________

Purpose

The purpose of this policy is to outline the requirements for (1) all product systems to be scanned for vulnerabilities at least annually, and (2) all vulnerability findings to be reported, tagged, and tracked to resolution in accordance with the SLAs defined herein. Records of findings must be retained for at least <RETENTION PERIOD>.

  • [It is at your discretion to determine the appropriate retention period for vulnerabilities. That said, we recommend having a retention period of at least 1 year.]

Roles and Responsibilities

<ROLES AND RESPONSIBILITIES>

Policy

Information Systems Audit

The following guidelines will be observed for setting information systems audit controls:

  • Audit requirements for access to systems and data should be agreed upon with appropriate management.

  • The scope of technical audit tests should be agreed upon and controlled.

  • Audit tests should be limited to read-only access to software and data.

  • Access other than read-only should only be allowed for isolated copies of system files, which should be erased when the audit is completed, or given appropriate protection if there is an obligation to keep such files under audit documentation requirements.

  • Requirements for special or additional processing should be identified and agreed upon.

  • Audit tests that could affect system availability should be run outside business hours.

  • All access should be monitored and logged to produce a reference trail.

Vulnerability Scanning and Infrastructure Security Testing

The scanning and identification of [COMPANY NAME]’s system vulnerabilities are performed by:

  • Automated Drata security agent installed on all employees’ machines.

  • [This should only be included if you are utilizing Drata's read-only agent.]

  • <SCANNING SOLUTIONS>

Additionally, periodic security scans of [COMPANY NAME] systems are done using a combination of external open-source and commercial vulnerability testing tools, including:

  • <EXTERNAL TESTING TOOLS>

  • [Examples of external testing tools include OpenVAS, Nmap, Nessus, OWASP ZAP, Burp Suite Pro, etc.; a minimal scan sketch follows this list.]
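
As a purely illustrative sketch (not part of the policy), the following Python snippet runs an Nmap NSE "vuln" category scan against a placeholder target and archives the output for the findings record. It assumes Nmap is installed locally; the target hostname is a stand-in, and you should only scan systems you are authorized to test. Adapt the command to whichever tools you list above.

    # Minimal sketch: run an Nmap NSE "vuln" category scan and save the output.
    # Assumes the nmap binary is installed and on PATH. The target below is a
    # placeholder -- only scan systems you own or are authorized to test.
    import subprocess
    from datetime import date

    TARGET = "scanme.example.com"  # placeholder target
    OUTPUT = f"scan-{TARGET}-{date.today().isoformat()}.txt"

    subprocess.run(
        ["nmap", "-sV", "--script", "vuln", "-oN", OUTPUT, TARGET],
        check=True,  # raise if the scan fails
    )
    print(f"Scan complete; findings written to {OUTPUT}")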

Penetration Testing

Penetration testing is performed regularly by either a certified penetration tester on [COMPANY NAME]’s security team or an independent third party.

  • [We recommend having an independent third party perform the penetration test. If you do choose to have an internal person perform the penetration test, that person should not have any responsibility for implementing or developing the systems that are in scope for the penetration test in order to maintain impartiality and independence.]

Findings from a vulnerability scan and/or penetration test are analyzed by the Security Officer, together with IT and Engineering as needed, and reported through the process defined in the next section.

Security Findings Reporting, Tracking and Remediation

[COMPANY NAME] follows a simple vulnerability tracking process using <TICKETING SYSTEM>. The records of findings are retained for <RETENTION PERIOD>.

  • [Examples of ticketing systems include Jira, Clubhouse, Trello, Asana, Github Issues, etc.]

  • [It is at your discretion to determine the appropriate retention period for Security Findings. That said, we recommend having a retention period of at least 1 year.]

Reporting a Finding

  • Upon identification of a vulnerability (whether in software, a system, or a process), a <TICKETING SYSTEM> ticket is created (a minimal automation sketch follows this list).

  • The description of the Finding should include further details, without any confidential information, and a link to the source.

  • The Finding will be given a priority level in <TICKETING SYSTEM>.

  • [See the table below for the different priority levels that we recommend using.]
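
As a non-authoritative illustration of this reporting flow, the sketch below files a Finding via Jira's REST API (Jira being one of the example ticketing systems above). The instance URL, credentials, project key, and field values are placeholders; if you use a different <TICKETING SYSTEM>, substitute its API.

    # Minimal sketch: create a Finding ticket via Jira's REST API (v2).
    # JIRA_URL, AUTH, and the project key are placeholders -- replace them
    # with your own instance's values, or with your ticketing system's API.
    import requests

    JIRA_URL = "https://your-company.atlassian.net"    # placeholder instance
    AUTH = ("bot@your-company.com", "api-token-here")  # placeholder credentials

    finding = {
        "fields": {
            "project": {"key": "SEC"},      # placeholder project key
            "issuetype": {"name": "Bug"},
            "summary": "Finding: outdated OpenSSL on web tier",
            # Further details and a link to the source; no confidential data.
            "description": "Scanner flagged OpenSSL < 1.1.1. Source: <scan report link>",
            "priority": {"name": "High"},   # see the SLA table below
        }
    }

    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=finding, auth=AUTH)
    resp.raise_for_status()
    print("Created finding ticket:", resp.json()["key"])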

Priority/Severity Ratings and Service Level Agreements

In an effort to quickly remediate security vulnerabilities, the following timelines have been put in place to address vulnerabilities:

Critical

  • SLA: <CRITICAL SLA>

  • Definition: Vulnerabilities that cause a privilege escalation on the platform from unprivileged to admin, allow remote code execution, financial theft, or unauthorized access to or extraction of sensitive data, etc.

  • Examples: Vulnerabilities that result in remote code execution, such as vertical authentication bypass, SSRF, XXE, SQL injection, or user authentication bypass.

High

  • SLA: <HIGH SLA>

  • Definition: Vulnerabilities that affect the security of the platform, including the processes it supports.

  • Examples: Lateral authentication bypass, stored XSS, some CSRF depending on impact.

Medium

  • SLA: <MEDIUM SLA>

  • Definition: Vulnerabilities that affect multiple users and require little or no user interaction to trigger.

  • Examples: Reflected XSS, direct object reference, URL redirect, some CSRF depending on impact.

Low

  • SLA: <LOW SLA>

  • Definition: Issues that affect single users and require user interaction or significant prerequisites (e.g., MitM) to trigger.

  • Examples: Common flaws, debug information, mixed content.

  • [It is at your discretion to determine the SLAs for vulnerability remediation. Common SLAs for each level are 24 hours for Critical, 30 days for High, 60 or 90 days for Medium, and best effort for Low.]

If a severity rating and/or priority level is updated after a vulnerability finding was originally created, the SLA is updated as follows (see the sketch after this list):

  • Priority upgrade: the SLA is reset from the time of escalation.

  • Priority downgrade: the SLA continues to run from the time the finding was originally created/identified.
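
To make the timelines concrete, here is a minimal Python sketch that computes a finding's due date from its priority level and applies the upgrade/downgrade rules above. The SLA durations are illustrative placeholders drawn from the common values noted in the guidance, not values this policy mandates.

    # Minimal sketch: compute SLA due dates and apply the priority-change rules.
    # The durations below are illustrative placeholders (see the guidance above);
    # substitute the <PRIORITY SLA> values your policy actually defines.
    from datetime import datetime, timedelta

    SLA = {
        "Critical": timedelta(hours=24),
        "High": timedelta(days=30),
        "Medium": timedelta(days=90),
        "Low": None,  # best effort: no hard deadline
    }
    RANK = {"Low": 0, "Medium": 1, "High": 2, "Critical": 3}

    def due_date(priority, start):
        """Due date measured from `start` (creation or escalation time)."""
        window = SLA[priority]
        return start + window if window else None

    def reprioritize(old, new, created, changed):
        """Upgrades reset the SLA clock; downgrades keep the original clock."""
        if RANK[new] > RANK[old]:
            return due_date(new, changed)  # upgrade: reset from escalation time
        return due_date(new, created)      # downgrade: keep original clock

    # Example: a Medium finding created Jan 1 and escalated to High on Jan 10
    # is now due 30 days after the escalation, i.e. Feb 9.
    print(reprioritize("Medium", "High", datetime(2024, 1, 1), datetime(2024, 1, 10)))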

Resolving a Finding

  • The Finding should be assigned to the owner responsible for the system or software package.

  • All findings should be addressed according to the established SLA.

  • No software should be deployed to production with unresolved CRITICAL or HIGH findings, unless an Exception is in place (see below).

  • A finding may be resolved by:

    • providing a valid fix or mitigation,

    • determining it to be a false positive, or

    • documenting an approved exception.

Closing a Finding

  • The assignee should provide a valid resolution (see above) and add a comment to the finding.

  • The finding should be re-assigned to the Reporter or a member of the security team for validation.

  • Upon validation, the finding can be marked as Done (closed) by the Reporter.

  • Before the finding can be marked as closed by the Reporter, the fix must be deployed to a development environment, and a targeted release date for deploying it to production must be noted on the ticket (a minimal closing sketch follows this list).
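
As an optional illustration of this closing flow, the sketch below adds the resolution comment and transitions a Jira finding to Done. The transition ID is specific to your Jira workflow and is a placeholder here, as are the URL, credentials, and issue key.

    # Minimal sketch: comment on and close a validated Finding in Jira.
    # The transition ID is workflow-specific; the URL, credentials, and
    # issue key are placeholders.
    import requests

    JIRA_URL = "https://your-company.atlassian.net"    # placeholder instance
    AUTH = ("bot@your-company.com", "api-token-here")  # placeholder credentials
    ISSUE = "SEC-42"                                   # placeholder finding key

    # Assignee's resolution comment (see "Resolving a Finding" above).
    requests.post(
        f"{JIRA_URL}/rest/api/2/issue/{ISSUE}/comment",
        json={"body": "Fix deployed to dev; prod release targeted for <date>."},
        auth=AUTH,
    ).raise_for_status()

    # Reporter marks the finding Done after validating the resolution.
    requests.post(
        f"{JIRA_URL}/rest/api/2/issue/{ISSUE}/transitions",
        json={"transition": {"id": "31"}},  # placeholder: your "Done" transition ID
        auth=AUTH,
    ).raise_for_status()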

Exceptions

  • An Exception may be requested when a viable or direct fix to a vulnerability is not available. For example, a version of the package that contains the fix is not supported on the particular operating system in use.

  • An alternative solution (a.k.a. a compensating control) must be in place to address the original vulnerability such that the risk is mitigated. The compensating control may be technical, a process, or a combination of both.

  • An Exception must be opened in the form of a <TICKETING SYSTEM> ticket.

  • The Exception Issue must reference the original Finding by adding an Issue Link to the Finding issue (a minimal linking sketch follows this list).

  • Each Exception must be reviewed and approved by the Security Officer and the impacted asset owner.
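
As one way to satisfy the linking requirement above, here is a minimal Python sketch that links an Exception ticket back to its original Finding using Jira's issue-link endpoint. The instance URL, credentials, issue keys, and link-type name are placeholders; other ticketing systems have their own linking mechanisms.

    # Minimal sketch: link an Exception ticket to the original Finding in Jira.
    # The instance URL, credentials, issue keys, and link type are placeholders.
    import requests

    JIRA_URL = "https://your-company.atlassian.net"    # placeholder instance
    AUTH = ("bot@your-company.com", "api-token-here")  # placeholder credentials

    link = {
        "type": {"name": "Relates"},       # a link type configured in your Jira
        "inwardIssue": {"key": "SEC-42"},  # the original Finding
        "outwardIssue": {"key": "SEC-57"}, # the Exception ticket
    }

    resp = requests.post(f"{JIRA_URL}/rest/api/2/issueLink", json=link, auth=AUTH)
    resp.raise_for_status()
    print("Exception linked to the original finding")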
