Vulnerability Management Policy Guidance

The following article explains the portions of the Vulnerability Management Policy that we most frequently receive questions about.

Guidance statements appear in bold, enclosed in brackets “[ ]”, below the policy statements they explain.

Vulnerability Management Policy

[COMPANY NAME]

__________________________________________________________________

Purpose

The purpose of this policy is to outline the requirements for (1) all product systems to be scanned for vulnerabilities at least annually, and (2) all vulnerability findings to be reported, tagged, and tracked to resolution in accordance with the SLAs defined herein. Records of findings must be retained for at least <RETENTION PERIOD>.

[Organizations may define their own retention period based on regulatory, contractual, or operational requirements. As a general best practice, a minimum of one year is recommended to support audit readiness and trending analysis.]

Roles and Responsibilities

<ROLES AND RESPONSIBILITIES>

[Refer to this guidance on how to define and document responsibility assignments: https://help.drata.com/en/articles/5829670-roles-and-responsibilities-guidance. For example, define ownership for updating, reviewing, and enforcing the policy. A typical statement might read: “The CISO is responsible for maintaining this policy, and engineering managers are responsible for remediation of assigned vulnerabilities.”]

Policy

Information Systems Audit

The following guidelines will be observed for setting information systems audit controls:

  • Audit requirements for access to systems and data should be agreed with appropriate management.

  • Scope of technical audit tests should be agreed and controlled.

  • Audit tests should be limited to read-only access to software and data.

  • Access other than read-only should only be allowed for isolated copies of system files, which should be erased when the audit is completed, or given appropriate protection if there is an obligation to keep such files under audit documentation requirements.

  • Requirements for special or additional processing should be identified and agreed.

  • Audit tests that could affect system availability should be run outside business hours.

  • All access should be monitored and logged to produce a reference trail.

[This section explains that information systems audits should be carefully controlled to avoid disruption and unauthorized access, and should be planned ahead of time in coordination with management/leadership.]

Vulnerability Scanning and Infrastructure Security Testing

The scanning and identification of [COMPANY NAME]’s system vulnerabilities are performed by:

  • Automated Drata security agent installed on all employees’ machines.

[This should only be included if you are utilizing Drata's read-only agent.]

  • <SCANNING SOLUTIONS>

[Examples of scanning solutions include AWS Inspector, Google Cloud Security Scanner, Azure Security Scanner, Intruder.io, etc. For more guidance on Vulnerability Scanning, please see here: https://help.drata.com/en/articles/6136232-vulnerability-scanning-guidance]
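[For illustration, the following is a minimal Python sketch of how scanner output could be triaged before tickets are opened. It assumes a hypothetical JSON export; the file name, field names, and severity values are placeholders rather than the schema of any specific tool.]

```python
import json
from collections import defaultdict

# Hypothetical export file and field names; adjust to your scanner's actual schema.
EXPORT_FILE = "scan_results.json"


def group_findings_by_severity(path):
    """Load a scanner export (assumed to be a JSON list of findings) and bucket by severity."""
    with open(path) as f:
        findings = json.load(f)
    buckets = defaultdict(list)
    for finding in findings:
        severity = finding.get("severity", "UNKNOWN").upper()
        buckets[severity].append(finding)
    return buckets


if __name__ == "__main__":
    by_severity = group_findings_by_severity(EXPORT_FILE)
    for level in ("CRITICAL", "HIGH", "MEDIUM", "LOW"):
        print(f"{level}: {len(by_severity.get(level, []))} finding(s)")
```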

Additionally, periodic security scans of [COMPANY NAME] systems are done using a combination of external open-source and commercial vulnerability testing tools, including:

  • <EXTERNAL TESTING TOOLS>

[Examples of external testing tools include Nessus, OpenVAS, Nmap, OWASP ZAP, and Burp Suite Pro. These are commonly used for perimeter and web application testing.]
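[If a command-line tool such as Nmap is part of your periodic external testing, a scheduled wrapper script can capture results for later review. The sketch below is one possible approach; the target host and output path are placeholders, and scans should only be run against systems you are authorized to test.]

```python
import subprocess
from datetime import date

# Placeholder target; only scan systems you own and are authorized to test.
TARGET = "scan-target.example.com"
OUTPUT_FILE = f"nmap-{TARGET}-{date.today().isoformat()}.xml"

# -sV probes service versions, --top-ports 1000 limits the scan to common ports,
# and -oX writes XML output that downstream tooling can parse.
subprocess.run(
    ["nmap", "-sV", "--top-ports", "1000", "-oX", OUTPUT_FILE, TARGET],
    check=True,
)
print(f"Scan complete; results written to {OUTPUT_FILE}")
```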

Detection tools, threat signatures, and compromise indicators will be updated <FREQUENCY>, per the following procedures:

  • <PROCEDURES/TECH MEASURES FOR UPDATES>

[Examples of tools supporting these update procedures include CrowdStrike, Splunk, Cisco Firepower, Tenable Nessus, AWS GuardDuty, etc.]

Additionally, [COMPANY NAME] will exchange information with relevant security and privacy organizations (e.g., the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA)), including information on newly identified threats and vulnerabilities, through bulletin subscriptions, email alerts from security advisories, participation in conferences, etc.
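[As one example of consuming such information automatically, the sketch below pulls CISA’s Known Exploited Vulnerabilities (KEV) catalog, which is published as a JSON feed. Verify the current feed URL and schema on cisa.gov before relying on it.]

```python
import requests

# CISA publishes its Known Exploited Vulnerabilities catalog as JSON;
# confirm the current feed URL on cisa.gov before depending on it.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

response = requests.get(KEV_URL, timeout=30)
response.raise_for_status()
catalog = response.json()

# The feed carries a "vulnerabilities" list with CVE identifiers and remediation due dates.
for vuln in catalog.get("vulnerabilities", [])[:5]:
    print(vuln.get("cveID"), "-", vuln.get("vulnerabilityName"))
```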

Penetration Testing

Penetration testing is performed regularly by either a certified penetration tester on [COMPANY NAME]’s security team or an independent third party.

Findings from a vulnerability scan and/or penetration test are analyzed by the Security Officer, together with IT and Engineering as needed, and reported through the process defined in the next section.

[We recommend having an independent third party perform the penetration test. If you do choose to have an internal person perform the penetration test, that person should not have any responsibility for implementing or developing the systems that are in scope for the penetration test in order to maintain impartiality and independence.]

Security Findings Reporting, Tracking and Remediation

[COMPANY NAME] follows a simple vulnerability tracking process using <TICKETING SYSTEM>. The records of findings are retained for <RETENTION PERIOD>.

[Examples of ticketing systems include Jira, Clubhouse, Trello, Asana, Github Issues, etc. It is at your discretion to determine the appropriate retention period for Security Findings. That said, we recommend having a retention period of at least 1 year.]

Reporting a Finding

  • Upon identification of a vulnerability (including vulnerability in software, system, or process), a <TICKETING SYSTEM> ticket is created.

  • The description of the Finding should include further details, without any confidential information, and a link to the source.

  • The Finding will be given a priority level in <TICKETING SYSTEM>.

[See the table below for the different priority levels that we recommend using.]
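[To make the reporting step concrete, here is a minimal sketch of creating a finding ticket via the Jira REST API, assuming Jira is your <TICKETING SYSTEM>. The site URL, credentials, project key, and field values are placeholders; adapt them to your own instance or ticketing tool.]

```python
import requests

# Placeholder values; substitute your Jira site, credentials, and project key.
JIRA_URL = "https://your-domain.atlassian.net"
AUTH = ("reporter@example.com", "api-token")

payload = {
    "fields": {
        "project": {"key": "SEC"},
        "issuetype": {"name": "Bug"},
        "summary": "Vulnerability finding: outdated TLS configuration on public endpoint",
        # Keep the description free of confidential details; link to the source instead.
        "description": "See scanner report: https://scanner.example.com/findings/1234",
        "priority": {"name": "High"},
    }
}

response = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH, timeout=30)
response.raise_for_status()
print("Created finding ticket:", response.json()["key"])
```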

Priority/Severity Ratings and Service Level Agreements

To ensure security vulnerabilities are remediated quickly, the following timelines have been put in place to address them:

Priority Level: Critical

  • SLA: <CRITICAL SLA>

  • Definition: Vulnerabilities that allow privilege escalation on the platform from unprivileged to admin, remote code execution, financial theft, or unauthorized access to or extraction of sensitive data.

  • Examples: Vulnerabilities that result in remote code execution, such as vertical authentication bypass, SSRF, XXE, SQL injection, or user authentication bypass.

Priority Level: High

  • SLA: <HIGH SLA>

  • Definition: Vulnerabilities that affect the security of the platform, including the processes it supports.

  • Examples: Lateral authentication bypass, stored XSS, some CSRF depending on impact.

Priority Level: Medium

  • SLA: <MEDIUM SLA>

  • Definition: Vulnerabilities that affect multiple users and require little or no user interaction to trigger.

  • Examples: Reflective XSS, direct object reference, URL redirect, some CSRF depending on impact.

Priority Level: Low

  • SLA: <LOW SLA>

  • Definition: Issues that affect single users and require interaction or significant prerequisites (e.g., MitM) to trigger.

  • Examples: Common flaws, debug information, mixed content.

If a severity rating and/or priority level is updated after a vulnerability finding was originally created, the SLA is updated as follows:

  • Priority upgrade: reset SLA from time of escalation

  • Priority downgrade: SLA time remains the same from time of creation/identification of finding

[Organizations should define SLAs based on risk appetite and system criticality. Common industry targets are: Critical – 24 hours, High – 15–30 days, Medium – 60 days, and Low – 90 days or best effort.]
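[The sketch below illustrates how the SLA rules above could be applied programmatically: due dates are derived from a severity-to-days mapping, an upgrade resets the clock at escalation time, and a downgrade keeps measuring from the original creation time. The SLA values are illustrative, not prescriptive.]

```python
from datetime import datetime, timedelta, timezone

# Illustrative SLA windows in days; substitute the values your organization defines.
SLA_DAYS = {"CRITICAL": 1, "HIGH": 30, "MEDIUM": 60, "LOW": 90}


def due_date(start, priority):
    return start + timedelta(days=SLA_DAYS[priority.upper()])


def recalculate_due_date(created_at, old_priority, new_priority, changed_at):
    """Apply the SLA rules above when a finding's priority changes."""
    upgraded = SLA_DAYS[new_priority.upper()] < SLA_DAYS[old_priority.upper()]
    if upgraded:
        # Priority upgrade: the SLA clock resets at the time of escalation.
        return due_date(changed_at, new_priority)
    # Priority downgrade: the SLA keeps running from the original creation/identification
    # time (here using the new priority's window; adjust if your policy keeps the original window).
    return due_date(created_at, new_priority)


created = datetime(2025, 1, 1, tzinfo=timezone.utc)
escalated = datetime(2025, 1, 10, tzinfo=timezone.utc)
print(recalculate_due_date(created, "MEDIUM", "CRITICAL", escalated))  # resets: Jan 11
print(recalculate_due_date(created, "HIGH", "LOW", escalated))         # from creation: Apr 1
```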

Resolving a Finding

  • The Finding should be assigned to the owner responsible for the system or software package.

  • All findings should be addressed according to the established SLA.

  • No software should be deployed to production with unresolved CRITICAL or HIGH findings, unless an Exception is in place (see below).

  • A finding may be resolved by:

    • providing a valid fix/mitigation

    • determining that it is a false positive

    • documenting an approved exception

[Each finding must be assigned to a system or software owner and resolved within the SLA. Critical and High-risk issues must be resolved, identified as false positives, or formally accepted via an exception before any affected code or configuration is deployed.]
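[One way to enforce the “no deploy with unresolved CRITICAL or HIGH findings” rule is a pre-deploy gate in CI. The sketch below is a minimal example; the finding fields and statuses are assumptions, not the schema of any particular tracker.]

```python
# Minimal pre-deploy gate: fail if any open CRITICAL or HIGH finding lacks an
# approved exception. The finding fields below are hypothetical.
BLOCKING_PRIORITIES = {"CRITICAL", "HIGH"}


def deploy_allowed(findings):
    for finding in findings:
        blocking = (
            finding.get("status") != "RESOLVED"
            and finding.get("priority") in BLOCKING_PRIORITIES
            and not finding.get("exception_approved", False)
        )
        if blocking:
            print(f"Blocking deploy: {finding['id']} ({finding['priority']}) is unresolved")
            return False
    return True


open_findings = [
    {"id": "SEC-101", "priority": "HIGH", "status": "OPEN", "exception_approved": False},
    {"id": "SEC-102", "priority": "LOW", "status": "OPEN"},
]
raise SystemExit(0 if deploy_allowed(open_findings) else 1)
```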

Closing a Finding

  • The assignee should provide a valid resolution (see above) and add a comment to the finding.

  • The finding should be reassigned to the Reporter or a member of the security team for validation.

  • Upon validation, the finding can be marked as Done (closed) by the Reporter.

  • Before the finding can be marked as closed by the reporter, the fix must be deployed to a development environment and have a targeted release date for deploying to production noted on the ticket.

[To close a finding, the assignee should provide a valid resolution and comment, then reassign it to the Reporter or security team for validation. The finding can only be marked as Done once it’s been validated.]
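[A closure check can encode these preconditions so a finding cannot be marked Done prematurely. The field names in the sketch below are illustrative, not those of any specific ticketing system.]

```python
def can_close_finding(finding):
    """Closure preconditions from this section; field names are illustrative only."""
    return (
        finding.get("resolution") in {"fix", "false_positive", "approved_exception"}
        and finding.get("validated_by_reporter_or_security", False)
        and finding.get("fix_deployed_to_dev", False)
        and finding.get("production_release_date") is not None
    )
```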

Exceptions

  • An Exception may be requested when a viable or direct fix to a vulnerability is not available. For example, a version of the package that contains the fix is not supported on the particular operating system in use.

  • An alternative solution (a.k.a. compensating control) must be in place to address the original vulnerability such that the risk is mitigated. The compensating control may be technical or a process or a combination of both.

  • An Exception must be opened in the form of a <TICKETING SYSTEM> ticket.

  • The Exception Issue must reference the original Finding by adding an Issue Link to the Finding issue.

  • Each Exception must be reviewed and approved by the Security Officer and the impacted asset owner.

[Exceptions may be requested when a direct fix for a vulnerability is not available. Compensating controls should be implemented to mitigate the risk. An exception should be documented using a ticketing system and approved by the Security leadership and Asset Owner.]
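[For tracking purposes, an exception record could capture the fields this section requires: the linked finding, the compensating control, both approvals, and an expiry date. The sketch below is an illustrative structure, not a prescribed schema.]

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ExceptionRequest:
    """Illustrative structure for an exception ticket; field names are assumptions."""

    finding_id: str                    # link back to the original finding ticket
    compensating_control: str          # technical and/or process control mitigating the risk
    approved_by_security_officer: bool = False
    approved_by_asset_owner: bool = False
    expires_on: Optional[date] = None  # exceptions should be time-bound and revisited

    def is_approved(self):
        # Both the Security Officer and the impacted asset owner must approve.
        return self.approved_by_security_officer and self.approved_by_asset_owner
```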

Malware Protection

[COMPANY NAME] will deploy an anti-malware solution on endpoints (e.g., laptops, workstations) and all other system components identified at risk for malware as follows:

  • The deployed anti-malware solution will be configured to detect all known types of malware and to remove, block, or contain all known types of malware.

  • The anti-malware solution will be kept current via automatic updates.

  • The anti-malware solution will be configured to perform periodic scans and active or real-time scans or to perform continuous behavioral analysis of systems or processes.

  • For removable electronic media, the anti-malware solution will be configured to perform automatic scans when the media is inserted, connected, or logically mounted, OR to perform continuous behavioral analysis of systems or processes when the media is inserted, connected, or logically mounted.

  • The anti-malware solution will be configured to perform real-time scans of files from external sources as files are downloaded, opened, or executed.

  • Audit logs for the anti-malware solution will be configured, enabled, and retained for a minimum of 12 months.

  • Access to modify configurations of anti-malware mechanisms will be restricted to authorized personnel. The mechanisms will be restricted from being disabled or altered by users, unless specifically documented, and authorized by management on a case-by-case basis for a limited time period.

[Anti-malware solutions will be deployed on all at-risk systems, configured for real-time and periodic scanning, automatic updates, media scanning, audit logging, and access restrictions to ensure comprehensive and controlled malware protection.]
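[A simple fleet check can verify that definitions are actually staying current. The sketch below flags endpoints whose definitions are older than a chosen threshold; the inventory data and threshold are placeholders, and in practice this data would come from your anti-malware console’s reporting or export.]

```python
from datetime import datetime, timedelta, timezone

# Hypothetical endpoint inventory; in practice this data would come from your
# anti-malware console's reporting API or a scheduled export.
MAX_DEFINITION_AGE = timedelta(days=2)

endpoints = [
    {"hostname": "laptop-01", "definitions_updated": datetime(2025, 5, 1, tzinfo=timezone.utc)},
    {"hostname": "laptop-02", "definitions_updated": datetime(2025, 5, 3, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
stale = [e["hostname"] for e in endpoints if now - e["definitions_updated"] > MAX_DEFINITION_AGE]
if stale:
    print("Endpoints with outdated malware definitions:", ", ".join(stale))
```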

Phishing Protection

[COMPANY NAME] will implement processes and automated mechanisms, such as DMARC, DKIM, link scrubbers, and server-side antivirus, to detect phishing attacks and protect personnel against them.

[Implement automated tools and processes like DMARC, DKIM, and antivirus solutions to detect and protect personnel from phishing attacks.]
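[A quick way to confirm DMARC and DKIM are in place is to query the corresponding DNS TXT records. The sketch below uses the dnspython library; the domain and DKIM selector are placeholders.]

```python
import dns.resolver  # pip install dnspython

DOMAIN = "example.com"       # placeholder; use your sending domain
DKIM_SELECTOR = "selector1"  # placeholder; selectors vary by mail provider


def lookup_txt(name):
    """Return the TXT record strings for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]


print("DMARC:", lookup_txt(f"_dmarc.{DOMAIN}"))
print("DKIM: ", lookup_txt(f"{DKIM_SELECTOR}._domainkey.{DOMAIN}"))
```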

Revision History

Version | Date | Editor | Approver | Description of Changes | Format
