by Dwayne Stewart
3:45 min read
In the event of a security breach of your network, it is likely that the attackers have altered or destroyed important data and security configurations. The tenth CIS control, data recovery capabilities, addresses the importance of backing up system data and properly protecting those backups. By doing so, you ensure your organization's ability to recover lost or tampered-with data.
Every minute your network is down is productivity lost. Administrators must ensure that up-to-date, functioning restoration data has been properly protected using physical safeguards and data encryption, both at rest and in transit. Failure to establish a reliable and secure data recovery solution could mean the difference between a smooth return to standard operations and days or weeks spent scrambling to rebuild systems just to get back to where you were before the data loss. No one wants that.
A step-by-step breakdown of the proper controls to ensure you can recover your data:
Ensure Regular Automated Backups
A fundamental component in the implementation of an efficient backup process is automation. Humans are prone to err. Beyond mental lapses, we are susceptible to illness and mobility-limiting natural disasters, to list just a small subset of possible contingencies.
Numerous applications are available that can streamline the backup process and achieve data redundancy. Maintaining a redundant set of up-to-date backups at an off-site facility is essential and can help ensure data recovery in most situations. A useful rule of thumb is 3-2-1: keep at least three copies of your data, on two different types of media, with one copy stored off-site.
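As a rough illustration of what automation can look like, here is a minimal Python sketch of a nightly backup job; the paths and naming scheme are hypothetical placeholders, and a real deployment would add logging, error handling, and retention pruning.

```python
import shutil
import tarfile
from datetime import datetime
from pathlib import Path

# Hypothetical locations -- substitute your own paths.
SOURCE = Path("/srv/data")                     # data to protect
LOCAL_DEST = Path("/backup/local")             # first copy, on a separate disk
OFFSITE_STAGING = Path("/backup/offsite_out")  # synced off-site by a separate job

def nightly_backup() -> Path:
    """Create a timestamped archive and stage a second copy for off-site transfer."""
    LOCAL_DEST.mkdir(parents=True, exist_ok=True)
    OFFSITE_STAGING.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = LOCAL_DEST / f"data-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    # Second copy toward different media/location, per the 3-2-1 rule.
    shutil.copy2(archive, OFFSITE_STAGING / archive.name)
    return archive

if __name__ == "__main__":
    # Schedule via cron or Task Scheduler so no human has to remember to run it.
    print(f"Backup written to {nightly_backup()}")
```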
Perform Complete System Backups
It is important that a comprehensive backup strategy be implemented. This should allow for the speedy recovery of data, whether it be a few specific files or an entire server. One useful technique for scheduling system backups is the Grandparent-Parent-Child system: daily backups (the children) are retained for a week, weekly full backups (the parents) are retained for a month, and monthly full backups (the grandparents) are retained long term, ideally off-site.
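To make the rotation concrete, here is a small sketch that classifies a given day's backup under one assumed policy (monthly fulls on the 1st, weekly fulls on Sundays, daily backups otherwise); the exact schedule is a local choice, not a standard.

```python
from datetime import date

def gfs_tier(day: date) -> str:
    """Classify a backup under a simple Grandparent-Parent-Child rotation.

    Assumed policy (adjust to taste): monthly fulls on the 1st ('grandparent'),
    weekly fulls on Sundays ('parent'), daily backups otherwise ('child').
    """
    if day.day == 1:
        return "grandparent"   # retained longest, ideally off-site/offline
    if day.weekday() == 6:     # Sunday
        return "parent"        # retained for roughly a month
    return "child"             # retained for roughly a week

print(gfs_tier(date(2018, 7, 1)))   # grandparent (first of the month)
print(gfs_tier(date(2018, 7, 8)))   # parent (a Sunday)
print(gfs_tier(date(2018, 7, 9)))   # child (an ordinary weekday)
```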
Test Data on Backup Media
All the automation in the world won't save you if your backups are corrupted. The integrity of both your backup system and the system images themselves must be tested regularly.
CIS Control #10 states, "Once per quarter (or whenever new backup equipment is purchased), a testing team should evaluate a random sample of system backups by attempting to restore them on a test bed environment."
Variations of the Grandparent-Parent-Child rotation explained above can also be easily adapted to work here.
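As a sketch of what such a quarterly test might look like, the following Python snippet restore-tests a random sample of archives into a scratch directory; the backup location and sample size are assumptions.

```python
import random
import tarfile
from pathlib import Path

BACKUP_DIR = Path("/backup/local")   # hypothetical archive location
SAMPLE_SIZE = 3                      # how many archives to test each quarter

def test_random_backups() -> None:
    """Restore-test a random sample of archives into a scratch directory."""
    archives = sorted(BACKUP_DIR.glob("*.tar.gz"))
    for archive in random.sample(archives, min(SAMPLE_SIZE, len(archives))):
        # Only extract archives you created yourself; extraction is the test.
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(Path("/tmp/restore-test") / archive.stem)
        print(f"OK: {archive.name} extracted without errors")

if __name__ == "__main__":
    test_random_backups()
```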
Ensure Protection of Backups
Backups could be directed to various locations, such as network-attached storage, removable media, or a cloud-based data store. The size and budget of your department will directly affect which approaches are feasible for you. It is important to ensure that on-site backup data is not directly accessible by other hosts on the network; direct access to backup data should be limited to the backup utility used to perform backup and restore activities. Ideally, archived data should be stored off-site and offline with physical safeguards.
The biggest mistake you can make is assuming your organization will not be targeted. Do not assume that because you are not handling government secrets it is acceptable to leave the removable media holding your backups sitting on your desk. Physical security measures for media containing backup data must be enforced as rigorously as those protecting the network. It is also important to ensure that backup data destined for off-site storage is encrypted when saved to removable media.
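A minimal sketch of that last step, assuming the third-party cryptography package; real key management (generation, escrow, rotation) deserves far more care than shown here.

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_for_offsite(archive: Path, key: bytes) -> Path:
    """Encrypt a backup archive before it is written to removable media."""
    encrypted = archive.parent / (archive.name + ".enc")
    # Reads the whole archive into memory; fine for a sketch, not huge files.
    encrypted.write_bytes(Fernet(key).encrypt(archive.read_bytes()))
    return encrypted

# Generate once and store separately from the media (e.g., in a key vault).
key = Fernet.generate_key()
print(encrypt_for_offsite(Path("/backup/local/data-20180701.tar.gz"), key))
```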
Ensure Backups Have At Least One Non-Continuously Addressable Destination
CIS Control #10 specifically urges that "...all backups have at least one backup destination that is not continuously addressable through operating system calls."
Attackers typically operate by gaining a foothold in one system, then enumerating the other systems present in your network, slowly mapping its architecture and attempting to escalate privileges at multiple points.
Because of this, it is unsafe to assume that backup data accessible through your network is beyond an attacker's reach. As mentioned in the 3-2-1 method, and explicitly urged in CIS Control #10, at least one backup should be located offline and preferably off-site.
The most important ideas to remember when designing your backup systems are: automate your backups, back up complete systems as well as individual files, test your backup media regularly, protect backup data with physical safeguards and encryption, and keep at least one backup destination offline and off-site.
Addressing each of the above items will help to ensure the safety and recoverability of your network systems and company data.
by Andrea Lee Taylor
1:45 min read
Every once in a while in the annals of cybersecurity there is news that isn’t a warning about the newest breach or the release of the latest patch. In this case the news is good for Maryland buyers of cybersecurity.
The General Assembly of Maryland, on April 9th, passed the Cybersecurity Investment Incentive Tax Credit Bill (SB 228). It provides for “…authorizing certain buyers of certain technology to claim a credit against the State income tax for certain costs; providing that the credit may not exceed certain amounts under certain circumstances; requiring the Secretary of Commerce to approve each application that qualifies for a credit…For any taxable year, the credit allowed…may not exceed $50,000 for each qualified buyer.” (LegiScan)
The cyber incentive bill is unique in both its mechanism and its reach. Simply restated, it provides a credit for buyers of cybersecurity services and products from Maryland companies. "This is a first-in-the-nation legislation and we're looking forward to some really great successes," said Senator Guy Guzzone (D), primary sponsor of the bill. Cosponsoring were Senators Adelaide Eckardt (R), George Edwards, and Andrew Serafini (R).
Cybersecurity is this century's absolute fact of life. For any business, the necessity for security must be balanced against the budget available to fund a flexible, strategic cyber plan. Any financial assistance in obtaining services or products is a welcome support and boost to doing business.
Qualified buyers may claim a credit on their state income tax up to 50% of the cost of the technology or service purchased from qualified sellers. As a qualified seller, we are excited to be able to share in this opportunity.
“Our focus has been strictly cybersecurity for over 16 years now and this legislation is a first and is a great help to businesses. Anchor looks forward to putting our experience to use helping small businesses improve their security posture,” said Anchor Technologies’ CEO, Peter Dietrich.
Cybersecurity is a necessity. A plan for what to implement and when keeps businesses on track in protecting their important data. Knowing one does not have to worry about whether the company's data is as secure as possible allows owners to concentrate their efforts on conducting and growing a business. Thank you to the state legislators for helping to empower small business in Maryland.
by Marian Bodunrin
4:00 min read
Transmitting and receiving data via network ports is a necessary evil. Because every network process uses a specific port to communicate with another, there is no avoiding the inherent risk. The most perilous services on a network are the ones you don't know are running. Default system installations often activate services that have little or no useful purpose and often go unnoticed. "Shadow IT" operations may start up unauthorized, poorly secured services.
There are 65,535 TCP ports and 65,535 UDP ports, and some are more vulnerable than others. For example, TCP port 21 connects FTP servers to the internet but has several vulnerabilities, such as cleartext authentication, which make it easy for an attacker with a packet sniffer to view usernames and passwords. Telnet on TCP port 23 also sends data in cleartext, which makes it vulnerable to attackers intercepting users' credentials and to man-in-the-middle attacks. The busiest ports are likewise the easiest for attackers to infiltrate: TCP port 80 for HTTP supports web traffic, and attacks on web clients that use port 80 include SQL injections, cross-site request forgeries, cross-site scripting, and buffer overruns.
A well-run, secure network does not expose any service without a reason. The issue arises when no one notices the services that are running; then no one may be monitoring them or keeping them up to date. CIS Control #9 addresses the Limitation and Control of Network Ports, Protocols and Services, and gives specific recommendations for avoiding the risk of unmanaged services and ports.
System administrators need an established baseline of what ports and services are supposed to be running on each machine. In addition, they need to run regular, automated port scans. Simple, free software is available that will do the job. The scan should note any differences from the baseline and notify the administrators.
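As an illustration, a baseline comparison can be as simple as the following Python sketch, which performs a basic TCP connect scan and diffs the results against an expected set; the baseline, port range, and target address are placeholders, and dedicated scanners such as Nmap are far more capable.

```python
import socket

BASELINE = {22, 80, 443}          # ports expected open on this host (example values)
PORTS_TO_CHECK = range(1, 1025)   # well-known ports; widen as needed

def scan(host):
    """Return the set of open TCP ports found by a simple connect scan."""
    open_ports = set()
    for port in PORTS_TO_CHECK:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.add(port)
    return open_ports

found = scan("192.0.2.10")   # RFC 5737 documentation address; scan only hosts you manage
print("Unexpected open ports:", sorted(found - BASELINE))
print("Expected but closed:  ", sorted(BASELINE - found))
```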
The first time a scan is run, it's likely IT administrators will discover previously unaccounted-for or undesirable services, possibly due to oversight. These services should be tracked and disabled upon discovery. Most importantly, perform port scans on a regular basis to determine which services are listening on the network, which ports are open, and which version of the protocol and service is listening on each open port. All such efforts will further reduce the attack surface.
Every software installation carries some risk. It could open up unmanaged ports by default, just because they might be useful in certain cases.
When installing new software, the best practice is to identify any added services and configure the software to run only the ones that have value for business operation. Running a port scan before and after installation will verify whether any others were added, and all legitimate services should be securely configured.
For an organization to adequately mitigate risks, a layered perimeter of defenses, such as application-aware firewalls, network access controls (NAC), and intrusion detection/prevention systems, should be deployed to avert unauthorized access. "Defense in depth" is the watchword of a good security setup.
Use of endpoint firewalls, removal of all unnecessary services, segmenting critical services across systems, and applying patches as soon as they become available will all reduce your organization's risk exposure. For instance, a network scan can identify all servers that are visible from the internet; if any don't need to be visible, moving them to an internal VLAN removes that direct exposure. If they run any unauthorized services that aren't caught, at least they won't be directly reachable from outside.
Running multiple critical services on the same machine is an invitation to trouble. If the same machine runs DHCP, SMTP and HTTP, an attacker that breaches one could jump to the others. Each of those services should have its own virtual or physical machine, with just the ports needed to run it.
It's easy enough to install multiple virtual machines on one computer. That way, each service has its own operating system, root file directory, and network settings. If one of them gets compromised, the problem is more likely to stay localized long enough to identify and fix it.
Minimize to Maximize
Just as building management needs to know every door through which people can enter and how that door is secured against sneaking in undetected, so IT management needs to know every port and service the servers expose. If they're there for a reason, they should be managed and secured. If there's no reason for them, they could be an unguarded back door to the network, one that should be shut and locked. Though it is impossible to eradicate all risk, exposure can be greatly minimized when appropriate controls are put in place to deter an attacker. Implementing CSC #9 will further mature your organization's cybersecurity posture, and deploying a continuous monitoring tool as an ongoing exercise will contribute to the effort of reducing risk and maximizing cybersecurity.
by Marian Bodunrin
4:30 min read
Malware is a type of computer program designed to infect a legitimate user's computer with the intent to inflict harm. It comes in various forms, such as viruses, Trojans, spyware, and worms. Malware is a huge and growing problem, costing businesses millions of dollars, typically by exposing or damaging vital data. New forms constantly appear and can be hard to catch. CIS Control #8 addresses recommendations that should be implemented to reduce an organization's risk.
The degree of damage caused by malware varies according to the type of malware, the type of device that is infected and the nature of the data that is stored or transmitted by the device. As a result, defense strategy needs to act on multiple levels. Defenses need to prevent malware from being installed, from running if it is installed, and from spreading if it runs. This is defense-in-depth and requires a strong set of automated tools.
Automated malware detection and removal software is an absolute requirement. It needs to cover everything on the network: servers, workstations, mobile devices, and anything else that has a processor and runs code. Regular updates are necessary to keep up with new threats, and machines should be checked to make sure they're getting the updates. Periodic vulnerability scans, along with malware detection and blocking, should help prevent a network from being compromised and succumbing to a botnet.
Shadow IT increases risk. If people are running machines that aren't authorized, those machines aren't going to be consistently monitored and protected. The first and second CIS controls stress the importance of keeping track of everything on a network, and malware protection is one of the reasons such inventories are so important.
It isn't enough to put protective software on each machine without an overall plan. Defenses are very hard to manage if haphazardly installed. Each machine would need its own updates, and hostile code that gets blocked on one system could get through on another. Centrally administered and automated protection gives your network a more consistent defense.
Keeping track of what protective software finds is important. It should be set up to log all incidents, and part of administrators’ responsibilities is to review the logs. If an issue turns up on one machine, it may be present elsewhere as well. If an attack occurs repeatedly, it's time to check the defenses against it and strengthen them as necessary.
Network monitoring needs to check for traffic that could indicate malware. The most popular malware model today is Command & Control (C&C), in which the malware reports to a server, sends information, and gets instructions. The monitoring system should log DNS queries in order to catch requests to C&C domains. Effective firewalls can capture suspicious file transfers and block hostile traffic. This isn't limited to blocking ports and IP addresses; the best software can catch malicious packets at the application level, after SSL decryption.
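To make the DNS angle concrete, here is a minimal sketch that checks logged queries against a blocklist of suspected C&C domains; the log format and file names are hypothetical, and a real deployment would feed this from a threat-intelligence source into a SIEM.

```python
# Hypothetical log format: one query per line, e.g.
# "2018-07-01T12:00:00 host42 evil-c2.example"
BLOCKLIST_FILE = "known_c2_domains.txt"   # one domain per line (threat-feed export)
DNS_LOG = "dns_queries.log"

with open(BLOCKLIST_FILE) as f:
    blocklist = {line.strip().lower() for line in f if line.strip()}

with open(DNS_LOG) as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 3 and parts[2].lower() in blocklist:
            timestamp, host, domain = parts[0], parts[1], parts[2]
            print(f"ALERT: {host} queried suspected C&C domain {domain} at {timestamp}")
```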
If a device is caught running malware, the network protection software should quarantine it immediately. Keeping malware from spreading buys time to fix the problem, however urgent it is.
Limiting the attack surface
External devices, such as thumb drives, are inherently convenient, and yet they create risks. Many people are too trusting of drives received as promotional giveaways, and even legitimate ones are sometimes inadvertently infected. Auto-running devices when they are inserted is a convenient feature that ought to be buried; it should be disabled on all machines. Thumb drives are the most common case, but the caution applies to all mountable devices brought in from the outside.
A solid defense will have anti-malware software scan for each newly mounted device. If there are suspicious files on it, the scan will automatically dismount it. Newly downloaded files need the same consideration. Each one should be scanned, and the ones that are flagged should be blocked from running.
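As a simplified sketch of device scanning, the snippet below hashes every file on a newly mounted volume and flags matches against a set of known-bad SHA-256 digests; the mount point is a placeholder, the digest set would come from your anti-malware vendor or threat feed, and real products go well beyond signature hashes.

```python
import hashlib
from pathlib import Path

# Placeholder digest set; in practice this comes from a threat-intelligence feed.
KNOWN_BAD = {"0000000000000000000000000000000000000000000000000000000000000000"}

def scan_mount(mount_point):
    """Hash every file on a newly mounted device and flag known-bad digests."""
    hits = []
    for path in Path(mount_point).rglob("*"):
        if path.is_file():
            # Reads each file fully into memory; acceptable for a sketch.
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD:
                hits.append(path)
    return hits

print(scan_mount("/media/usb0"))   # hypothetical mount point; dismount if hits are found
```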
The multi-layered approach
It's unrealistic to expect any defense to stop all malware at the perimeter. There are just too many threats, new ones being invented and unleashed all the time, and some will make it past the first line of defense. Stopping threats requires a coordinated effort in the firewall, devices on the network edge, server protection, and monitoring.
The multi-layered approach is to:
- keep malware from being installed in the first place;
- keep it from running if it does get installed;
- keep it from spreading, and quarantine it quickly, if it runs.
Everyone understands that malware protection is necessary, but turning it into a systematic set of practices takes a coordinated effort. Everyone involved needs to be working from the same comprehensive cybersecurity plan.
by Marian Bodunrin
3:45 min read
Web browsers and email clients are very common points of entry for malicious code because users work in them all day. Content can be manipulated to entice users into taking actions that greatly increase risk, resulting in loss of data and other attacks. Controlling which browsers are used, and maintaining a defined list of approved ones, is critical. CIS Control #7 addresses several key points in protecting an organization's environment and provides recommendations to mitigate risks. While some of the controls may seem too restrictive for an organization's needs, most are clearly necessary, and implementing them will ensure a more robust cybersecurity blueprint.
An organization's browser, the portal to the internet, is also the first line of defense against malware threats. Minimizing attack vectors should be the number one goal: ensure that only fully supported web browsers are allowed to execute, deploy updates as soon as they become available, and develop a formal written policy addressing user behavior.
At times it can be difficult to control the sites users access. Enforcing a network-based URL filter that limits the system's ability to connect to websites not approved by the organization will help to close this gap.
Keep in mind that when exploitable vulnerabilities within the browser itself are not available, attackers will often target common web browser plugins, which may allow them to hook into the browser or directly into the operating system. To mitigate this risk, uninstall or disable any unauthorized browser plugins or add-on applications.
An e-mail security program needs to provide confidentiality, data origin authentication, message integrity, and nonrepudiation of origin. CSC #7 provides several recommendations to help ensure email security. Using a spam filtering tool will aid in reducing malicious emails that come into the network. Deploying the Domain-based Message Authentication, Reporting and Conformance (DMARC) protocol will ensure that legitimate email is properly authenticated against established SPF (Sender Policy Framework) standards, while fraudulent activity appearing to come from the organization's domains is blocked. Installing an encryption tool to secure email and communication adds another layer of security for users and the network.
Spoofed messages are dangerous because they can create a false sense of trust. Employees are more likely to respond to a message that seems to come from someone they know. The SPF standards guard against this by checking if messages are coming from a mail server that is authorized to use the sender's address. While the CIS specifically recommends SPF, other protocols such as DKIM work well with it, and implementing both is advisable.
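A quick way to see what a domain publishes is to query its TXT records. Here is a small sketch assuming the third-party dnspython package; example.com is, of course, a stand-in for your own domain.

```python
# Requires the third-party 'dnspython' package (pip install dnspython).
import dns.resolver

def txt_records(name):
    """Return the TXT records for a DNS name, or an empty list if none exist."""
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```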
Implementing this control should be neither very disruptive nor very difficult. In a security-focused organization, end users are typically not allowed to install their own software, and updates are deployed by the authorized department as soon as they are available. Software needs to be kept up to date in general, and web browsers and mail clients will be part of this practice. Administrators should also restrict and monitor the use of plugins. At times there might be special work requirements that involve a plugin; such requests should go through the administrator for approval.
The simple rule to follow when implementing this control is, “Make it simple for the users or they will find a way around it.” Increasing complexity or the effort users have to put in often leads to privilege misuse or other methods to defeat the controls. It is worth mentioning that human error is still the major cause of most breaches and incidents. Overall, implementing this control provides a large improvement in safety for relatively little effort.
by Marian Bodunrin
2:30 min read
When properly implemented, Control #6 can bring an organization’s security program to a higher level of maturity. Maintaining, monitoring and analyzing audit logs helps gain visibility into the actual workings of an environment. Also, with proper implementation, the control can help detect, understand or recover from an attack.
Despite best practices, it is impossible to safeguard a network against every attack. When a breach occurs, log data can be crucial for identifying the cause and collecting evidence for later use--that is, if the logs were configured properly before the incident occurred.
Deficiencies in security logging and analysis allow attackers to hide their location, malicious code, and activities on victims' machines. Without protected and complete logging records, an organization is blind to the details of an attack, which can go on indefinitely and cause significant damage.
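Even a crude review script illustrates the point: attacks often show up in the logs long before anyone looks. The sketch below counts repeated failed SSH logins per source address in a syslog-style auth log; the path and threshold are assumptions.

```python
from collections import Counter

# Typical OpenSSH entries look like:
# "... Failed password for root from 203.0.113.9 port 52514 ssh2"
THRESHOLD = 5
failures = Counter()

with open("/var/log/auth.log") as log:
    for line in log:
        if "Failed password" in line and " from " in line:
            source_ip = line.split(" from ")[1].split()[0]
            failures[source_ip] += 1

for ip, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"Review: {count} failed logins from {ip}")
```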
To ensure readiness, and effective log maintenance, monitoring, and analysis, the Center for Internet Security (CIS) recommends controls along the following lines: synchronize system clocks against trusted time sources so log entries can be correlated; activate audit logging, with sufficient detail, on all systems and networking devices; ensure adequate storage for the logs; aggregate them into a central log management system or SIEM; and review the logs, and tune the resulting alerts, on a regular basis.
Maintaining security logs and actively using them to monitor security-related activities within the environment is essential, especially during post-breach forensic investigation. Therefore, an organization must develop procedures to actively review and analyze logs in near real time so that attacks can be detected and responded to quickly. It's one of several best practices that move an environment toward a safer, better cybersecurity posture.
by Andrea Lee Taylor
We have considered individually the Center for Internet Security's top 5 controls for effective cyber defense. Together, they are a force. Perhaps you're already aware of CIS's statistic: of the 20 controls, implementing just the top 5 reduces known cybersecurity vulnerabilities by 85%. If I got that kind of return from the stock market I'd be retiring. Next week.
And it's not that the recommended set of actions is impossible to implement--far from it! A shift in focus may be required, but we find most employees, and most board members, are amenable. Implementing procedures and processes means people may be inconvenienced, even personally so. But more often than not they are open to adopting and adapting when it is for the overall good, including the good of the organization.
When people are educated as to what is important, why it is important and, more importantly, how they can help—it’s been our experience they are more willing to be a part of what is being asked rather than a speed bump to greater security.
CIS has a resource that is not news; neither are the controls. They are updated periodically; you can download the latest CIS Controls (V7) and read the white paper Practical Guidance for Implementing the Critical Security Controls (V6). It is a way and a place to start. The return on investment comes in strengthened cyber defenses and protection, streamlined administrative security functioning, and ultimately a savings in financial resources. That is not to say this isn't ongoing work requiring financial backing. It is. But job security and interesting challenges are important, and being one breach away from exigency is no way to live or conduct business.
Someday the CIS Controls advice will not be revolutionary in its results because it will be boringly customary. Yet the controls have not been implemented to such an extent as to render their advice moot or their results less than stunning.
They’re that worth implementing.
Update: V7 of the Controls adds Control #6 to the basic list of controls. Their approach is always one that keeps an eye on the current threat landscape as well as the latest tools developed in cyber defense. And still, the essential remains the same--making sure the basics are covered makes an exponential difference in an organization's security stability.
by Dwayne Stewart
3:30 min read
A compromise of any account is a problem, but it's especially serious when an outsider gains access to an administrative account. An intruder with full control of a device, website, or database can do serious damage. CIS Control #5's message is to apply strict control to the level of access that end users have to network resources, ensuring that each user is granted just the access required to perform their job duties.
Doing this can be unpopular among your users and can make those who are refused administrative privileges on their machines feel distrusted. End users would much rather have the convenience of not having to rely on IT support staff to perform certain actions on their workstations, and some applications seem to require an admin account for no really good reason. Convincing executives that they shouldn't have administrative access can be a tough job.
All staff need to understand the necessity for stringent management of account privileges. It's important to educate them about the inherent risk of using accounts with elevated privileges for general, everyday tasks on their workstations. If an administrative account is hijacked, not only is all the data on the machine compromised, but the machine itself can now be used to perform additional attacks on other network devices it can access. The potential consequences of a compromised account are significantly reduced if that account has standard user privileges.
Limit creation of accounts
Ensure that administrative accounts are only created for those employees that require them to perform administration of the various systems for which they are responsible.
Not all users that perform administrative tasks require administrator accounts or administrative privileges. Many systems have the option to make users members of certain pre-defined roles that allow them to perform certain administrative tasks, but not others; this provides them the required privileges without granting unlimited access. For example, in content management systems such as WordPress, it's straightforward to assign a specific pre-defined or custom role to accounts. Editors can manage content but can't install plugins or create new users. In Active Directory, the necessary rights to network resources can be assigned to domain security groups; domain users can then be assigned to those groups in order to more easily manage user rights throughout the network. Properly managing the level of access users have to both their own workstations and various applications on the network largely eliminates any need to assign administrative rights to those users that are not system managers.
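The underlying idea, in any system, is an explicit mapping from roles to permitted actions, checked on every request. The sketch below is purely illustrative; the role names and permissions are invented for the example and correspond to no particular product.

```python
# Illustrative role-to-permission mapping; names are made up for the example.
ROLE_PERMISSIONS = {
    "editor":        {"edit_content", "publish_content"},
    "user_manager":  {"create_user", "disable_user"},
    "administrator": {"edit_content", "publish_content", "create_user",
                      "disable_user", "install_software"},
}

def is_allowed(role, action):
    """Grant an action only if the user's role explicitly includes it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("editor", "publish_content"))   # True
print(is_allowed("editor", "install_software"))  # False: least privilege in action
```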
Limit use of accounts
Even those who have administrative accounts shouldn't use them for non-administrative tasks, like checking email or researching an issue on the Internet. A phishing message opened while running as an administrator could have nasty consequences. Have system administrators log in to their workstations using standard user accounts. To run applications or execute commands that require elevated privileges, they can use the "runas" feature in Windows, or "sudo" on a Unix or Linux machine. This allows admins to perform their duties without being logged in as an administrator.
Protect the accounts
Strong passwords should be required for all accounts, especially those used for system administration. It is imperative that passwords for default accounts on all network devices be changed during initial configuration. The username should be updated as well, if possible. If that's not an option, consider creating a new administrative account with a unique username and strong password, then disabling the default account altogether. If more than one person will administer a system or device, an appropriately configured account should be created for each of them. This establishes accountability for all actions performed on the device.
It’s good practice to use an authentication server, such as TACACS+ or RADIUS, to manage administrative access to network resources that support it. This is an efficient and more secure method of managing both who has access to a network device, as well as the level of access they have on that device.
Another approach is to implement multi-factor authentication (MFA), which requires a combination of two or more types of authentication factors. An authentication factor can be something a user knows (username, password, PIN), something a user has in their possession (key fob, one-time password) or a biological trait of the user (fingerprint, voice, vein patterns). For example, logging into a firewall management interface could require the administrator’s username and password, as well as a temporary code on a security token or a one-time password (OTP) sent to their phone. Each factor provides an additional layer of security, which makes it much more challenging for an attacker to use valid credentials to gain access to a system.
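For illustration, here is a minimal server-side TOTP check using the third-party pyotp package; enrollment, secret storage, and the first factor (the password check) are deliberately glossed over.

```python
# Requires the third-party 'pyotp' package (pip install pyotp).
import pyotp

# Enrollment: generate a per-user secret once and store it server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provision this secret in the user's authenticator app:", secret)

# Login: the password check is factor one; the current TOTP code is factor two.
code = totp.now()                    # in real life, the user types this in
print("Second factor accepted:", totp.verify(code))
```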
Persuade the users
With Control #5, one of the biggest challenges is increasing the security awareness of the system managers. Convincing them to embrace and follow policies and procedures that help prevent the compromise of their administrative accounts can take time, especially if they are not typically security-focused individuals. While having administrative access is convenient, it significantly increases the potential for a network breach if an account with elevated privileges is somehow compromised. The way to frame the issue is in terms of risk reduction rather than prohibition. System managers need to understand that they're helping to make systems safer, which helps prevent network breaches and the resulting reputation damage and significant financial consequences. If administrators understand this, they may be more likely to accept the additional restrictions applied to their administrative accounts.
by Brian Nelson
1:30 min read
As the pace of security breaches continues to accelerate, a common thread in most of them is the exploitation of a technical vulnerability--in either the operating system or an application running on top of it. In just the past two years at Anchor Technologies, every breach investigation we have been a part of was associated with a known technical vulnerability. The epic Equifax breach exploited a technical vulnerability that had been public knowledge for months prior to the breach. An annual vulnerability assessment is no longer sufficient to protect your organization.
When it comes to technical vulnerabilities, many organizations are making themselves easy targets by either scanning only their external IPs or scanning their internal networks just once a year. If you focus solely on your external exposure, you are ignoring over 90% of your risk.
Most breaches occur through the exploitation of internal resources, and if you are only looking at those internal assets once a year, it is quite likely those assets will have unpatched critical vulnerabilities. Malicious actors know, and count on, this.
To help make your organization a more difficult target, we recommend the following actions: scan both your external and internal networks for vulnerabilities on a regular basis, at least monthly; prioritize remediation of critical and high-severity findings, starting with Internet-facing and mission-critical systems; and rescan after patching to verify that fixes were actually applied.
Implementing a robust scan-and-patch program may seem daunting in the short run but the payoff is exponential. What is the reduction of your cyber risk worth?
by Dwayne Stewart
3:30 min read
Vulnerabilities on Internet-connected systems are targeted on a daily basis. The fourth CIS control, "Continuous Vulnerability Assessment and Remediation," addresses the need to keep those systems protected by keeping up with and fixing newly discovered security issues.
The need for vigilance
Every day, cyber security researchers find new security flaws in software. These software vulnerabilities are generally announced once a patch has been made available. However, once new vulnerabilities are announced, that information is available to both system managers and criminals alike. System managers need to determine whether or not these vulnerabilities exist on their systems and act on the information as quickly as possible to mitigate detected risks.
Vulnerabilities can arise from system misconfiguration as well as software flaws. For example, a host that is accessible from the Internet could expose functionality that should only be available locally, such as access to management interfaces. An external scan should discover these issues.
A vulnerability management process is necessary to keep up with the number of published vulnerabilities. A comprehensive process will identify vulnerabilities and recommend the necessary patches or configuration changes. This should be followed up with patch deployment and remediation scans to ensure that updates were successfully applied.
Even one existing critical vulnerability could allow an attacker to take complete control of a system. Therefore, it is important that the appropriate vulnerability scanning and patch management tools are implemented to identify and remediate the various points of risk throughout the company’s network.
Vulnerability scans should be performed from both an internal and an external perspective to get a complete picture of what vulnerabilities exist on a network. External scans provide information about the exposure of an organization's systems to the Internet and should also highlight potential misconfigurations of the services on Internet-facing hosts. Internal scans should detect vulnerabilities on all internal hosts accessible by the scanner, not just those services exposed to the Internet through the gateway firewall. To ensure early detection of new vulnerabilities, scans should be performed monthly at a minimum.
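For the external view, even a simple wrapper around a dedicated scanner goes a long way. The sketch below shells out to Nmap for service/version detection and saves machine-readable output; it assumes the nmap binary is installed and on the PATH, and the target range is a documentation-range placeholder.

```python
# Assumes the nmap binary is installed and on PATH.
import subprocess

def version_scan(target, xml_out="scan.xml"):
    """Run a service/version detection scan and save machine-readable XML output."""
    subprocess.run(
        ["nmap", "-sV", "-oX", xml_out, target],  # -sV: version detection, -oX: XML
        check=True,
    )

version_scan("192.0.2.0/24")   # RFC 5737 example range; scan only networks you own
```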
It is generally recommended to perform authenticated scans to get the most comprehensive and accurate set of results, including additional information about versions of installed software, missing patches, insecure configurations, and potential malware on scanned hosts.
Once scanning is completed, remediation efforts should be prioritized based on which vulnerabilities present the greatest risk to the organization. An immediate effort should be put into addressing all discovered critical and high vulnerabilities. Higher priority should also be placed on those hosts that contain sensitive data, are considered mission critical, or are directly accessible from the Internet.
There are certain risks associated with patch deployment. Patches should be tested before being applied to verify whether or not they will have an adverse effect on dependent software. Additionally, it’s important that the process of deploying patches doesn’t disrupt operations; chances are you don’t want forced reboots on servers and workstations during business hours. This could possibly lead to data loss and will certainly lead to very upset users. This highlights the fact that communication is also a key component of the patch deployment process. It’s important to keep system managers and end-users abreast of planned patch deployment.
Also, it isn't just ‘computers’ in the usual sense that need patching. Any device with a processor and firmware could have security issues that could potentially lead to the compromise of that particular device as well as a potential network breach. Devices such as printers, scanners and routers all need to be routinely updated as a part of the patch management process.
Keeping up means greater safety
Unpatched software vulnerabilities are a major factor in system breaches. The cost in lost data, time spent recovering it, and damage to reputation is, more often than not, huge. A systematic approach to detecting and fixing security holes should stop the large majority of threats. It's an ongoing, sometimes tedious task, but it is more than necessary; it is vital work.