Archive for category: Security
Risk Management Frameworks
Risk Assessment should be considered separate from Application Threat Modeling; the two are similar, but Application Threat Modeling takes a more calculated approach.
Risk Management Activities
Application Threat Modeling
Application Threat Modeling should be considered separate from Risk Assessment; the two are similar, but Application Threat Modeling takes a more calculated approach.
Threat modeling allows you to systematically identify and rate the threats that are most likely to affect your system. By identifying and rating threats based on a solid understanding of the architecture and implementation of your application, you can address threats with appropriate countermeasures in a logical order, starting with the threats that present the greatest risk.
Threat modeling has a structured approach that is far more cost efficient and effective than applying security features in a haphazard manner without knowing precisely what threats each feature is supposed to address. With a random, “shotgun” approach to security, how do you know when your application is “secure enough,” and how do you know the areas where your application is still vulnerable? In short, until you know your threats, you cannot secure your system.
- Asset. A resource of value, such as the data in a database or on the file system. A system resource.
- Threat. A potential occurrence, malicious or otherwise, that might damage or compromise your assets.
- Vulnerability. A weakness in some aspect or feature of a system that makes a threat possible. Vulnerabilities might exist at the network, host, or application levels.
- Attack (or exploit). An action taken by someone or something that harms an asset. This could be someone following through on a threat or exploiting a vulnerability.
- Countermeasure. A safeguard that addresses a threat and mitigates risk.
Consider a simple house analogy: an item of jewelry in a house is an asset and a burglar is an attacker. A door is a feature of the house and an open door represents a vulnerability. The burglar can exploit the open door to gain access to the house and steal the jewelry. In other words, the attacker exploits a vulnerability to gain access to an asset. The appropriate countermeasure in this case is to close and lock the door.
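The vocabulary above maps naturally onto a small data structure. Here is a minimal Python sketch using the house analogy; the class and field names are my own illustration, not part of any threat modeling tool:

```python
from dataclasses import dataclass

# Minimal sketch of the threat-modeling vocabulary defined above,
# applied to the house/burglar analogy. All names are illustrative.

@dataclass
class Threat:
    asset: str            # resource of value the attacker wants
    vulnerability: str    # weakness that makes the threat possible
    attack: str           # action that exploits the vulnerability
    countermeasure: str   # safeguard that mitigates the risk

burglary = Threat(
    asset="jewelry in the house",
    vulnerability="open door",
    attack="burglar walks through the open door and steals the jewelry",
    countermeasure="close and lock the door",
)
print(burglary.countermeasure)
```

Enumerating threats as explicit records like this is what makes the later rating and prioritization steps systematic rather than ad hoc.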
The STRIDE approach to threat modeling was introduced in 1999 at Microsoft, providing a mnemonic for developers to find ‘threats to our products’. STRIDE, Patterns and Practices, and Asset/entry point were among the threat modeling approaches developed and published by Microsoft. References to “the” Microsoft methodology commonly mean STRIDE.
The Process for Attack Simulation and Threat Analysis (PASTA) is a seven-step, risk-centric methodology for aligning business objectives and technical requirements, taking into account compliance issues and business analysis. The intent of the method is to provide a dynamic threat identification, enumeration, and scoring process. Once the threat model is completed, security subject matter experts develop a detailed analysis of the identified threats. Finally, appropriate security controls can be enumerated. This methodology is intended to provide an attacker-centric view of the application and infrastructure from which defenders can develop an asset-centric mitigation strategy.
The focus of the Trike methodology is using threat models as a risk-management tool. Within this framework, threat models are used to satisfy the security auditing process. Threat models are based on a “requirements model.” The requirements model establishes the stakeholder-defined “acceptable” level of risk assigned to each asset class. Analysis of the requirements model yields a threat model from which threats are enumerated and assigned risk values. The completed threat model is used to construct a risk model based on asset, roles, actions, and calculated risk exposure.
VAST is an acronym for Visual, Agile, and Simple Threat modeling. The underlying principle of this methodology is the necessity of scaling the threat modeling process across the infrastructure and entire SDLC, and integrating it seamlessly into an Agile software development methodology. The methodology seeks to provide actionable outputs for the unique needs of various stakeholders: application architects and developers, cybersecurity personnel, and senior executives. The methodology provides a unique application and infrastructure visualization scheme such that the creation and use of threat models do not require specific security subject matter expertise.
DREAD and STRIDE
Application Threat Modeling using DREAD and STRIDE is an approach for analyzing the security of an application. It is a structured approach that enables you to identify, classify, rate, compare and prioritize the security risks associated with an application. DREAD methodology is used to rate, compare and prioritize the severity of risk presented by each threat that is classified using STRIDE.
To perform Application Threat Risk Modeling, use the OWASP testing framework to identify risks, the STRIDE methodology to classify them, and the DREAD methodology to rate, compare and prioritize them by severity.
- Spoofing
- Tampering
- Repudiation
- Information Disclosure
- Denial of Service
- Elevation of Privilege
| Property | Threat | Definition | Example |
|---|---|---|---|
| Authentication | Spoofing | Impersonating something or someone else. | Pretending to be any of billg, microsoft.com or ntdll.dll. |
| Integrity | Tampering | Modifying data or code. | Modifying a DLL on disk or DVD, or a packet as it traverses the LAN. |
| Non-repudiation | Repudiation | Claiming to have not performed an action. | “I didn’t send that email,” “I didn’t modify that file,” “I certainly didn’t visit that web site, dear!” |
| Confidentiality | Information Disclosure | Exposing information to someone not authorized to see it. | Allowing someone to read the Windows source code; publishing a list of customers to a web site. |
| Availability | Denial of Service | Denying or degrading service to users. | Crashing Windows or a web site, sending a packet and absorbing seconds of CPU time, or routing packets into a black hole. |
| Authorization | Elevation of Privilege | Gaining capabilities without proper authorization. | Allowing a remote internet user to run commands is the classic example, but going from a limited user to admin is also EoP. |
- Damage
- Reproducibility
- Exploitability
- Affected Users
- Discoverability
DREAD Risk = (Damage + Reproducibility + Exploitability + Affected Users + Discoverability) / 5. The calculation always produces a number between 0 and 10; the higher the number, the more serious the risk.
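As a minimal sketch, the rating step can be written in a few lines of Python. The function name, threat labels and scores below are my own illustration, not part of any standard tool:

```python
# Sketch of the DREAD rating described above: each factor is scored
# 0-10 and the overall risk is the average of the five factors.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Return the DREAD risk rating, a number between 0 and 10."""
    factors = (damage, reproducibility, exploitability,
               affected_users, discoverability)
    if not all(0 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor must be scored 0-10")
    return sum(factors) / len(factors)

# Rate two hypothetical STRIDE-classified threats, then list the
# more severe one first.
threats = {
    "SQL injection (Tampering)": dread_score(8, 10, 7, 9, 8),
    "Verbose error page (Information Disclosure)": dread_score(3, 10, 5, 4, 9),
}
for name, score in sorted(threats.items(), key=lambda t: t[1], reverse=True):
    print(f"{score:.1f}  {name}")
```

Sorting the scored threats gives the prioritized order in which countermeasures should be applied, starting with the greatest risk.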
Threat Modeling Tools
There are currently five tools available for organizational threat modeling:
- Microsoft’s free threat modeling tool – the Threat Modeling Tool (formerly SDL Threat Modeling Tool). This tool also utilizes the Microsoft threat modeling methodology, is DFD-based, and identifies threats based on the STRIDE threat classification scheme. It is intended primarily for general use.
- MyAppSecurity offers the first commercially available threat modeling tool, ThreatModeler. It utilizes the VAST methodology, is PFD-based, and identifies threats based on a customizable comprehensive threat library. It is intended for collaborative use across all organizational stakeholders.
- IriusRisk offers both a community and a commercial version of the tool. This tool focuses on the creation and maintenance of a live Threat Model through the entire SDLC. It drives the process by using fully customizable questionnaires and Risk Pattern Libraries, and connects with several other tools (OWASP ZAP, BDD-Security, Threadfix…) to empower automation.
- securiCAD is a threat modelling and risk management tool by the Scandinavian company foreseeti. It is intended for company cyber security management, from CISO, to security engineer, to technician. securiCAD conducts automated attack simulations on current and future IT architectures, identifies and quantifies risks holistically including structural vulnerabilities, and provides decision support based on the findings. securiCAD is offered in both commercial and community editions.
- SD Elements by Security Compass is a software security requirements management platform that includes automated threat modeling capabilities. A set of threats is generated by completing a short questionnaire about the technical details and compliance drivers of the application. Countermeasures are included in the form of actionable tasks for developers that can be tracked and managed throughout the entire SDLC.
- Intel Threat Agent Risk Assessment (TARA)
- Factor Analysis of Information Risk (FAIR)
- OCTAVE® (Operationally Critical Threat, Asset, and Vulnerability Evaluation℠)
- NIST Risk Management Framework (RMF)
- OWASP Threat Risk Modeling
- Lockheed Martin – Kill Chains
- Dan Guido – Exploit Intelligence Project
- Dino Dai Zovi – Attacker Math 101
- Australian DSD – Strategies to Mitigate Targeted Cyber Intrusions
- SANS/CSIS – Twenty Critical Security Controls for Effective Cyber Defense
The GDPR affects any company that handles personal data of EU citizens, so it also affects your company.
For example: if you are planning to run e-mail marketing campaigns that include European customers, you have to obtain their consent before sending to those e-mail addresses.
You also need to describe for what purposes you will use their personal data and for how long, and you need to guarantee the protection of this data and that you will not use it for purposes the customer did not agree to.
If you plan on sharing this data with a partner company (for example, the website designer), you also need to obtain approval from the end customer and describe this in your agreement.
You also need to give customers insight into what personal data you hold about them, and customers have the right to be forgotten: this means that you must delete all personal data if a customer asks you to do so.
Refer to this article which discusses how the GDPR will affect cloud hosting for both providers and users: https://www.kefron.com/blog/gdpr-cloud-hosting/
If your organization collects, hosts or analyzes personal data of EU residents, GDPR provisions require you to use third-party data processors who guarantee their ability to implement the technical and organizational requirements of the GDPR. To further earn your trust, we are making contractual commitments available to you that provide key GDPR-related assurances about our services. Our contractual commitments guarantee that you can:
- Respond to requests to correct, amend or delete personal data.
- Detect and report personal data breaches.
- Demonstrate your compliance with the GDPR.
Here is a checklist that is designed to assist organisations in complying with the GDPR:
Things You Should Know About Governance and Management System for GDPR Compliance:
GDPR in a nutshell:
According to Rec.81; Art.28(1)-(3) of the GDPR regulation,
“The carrying-out of processing by a processor should be governed by a contract or other legal act under Union or Member State law, binding the processor to the controller”
A controller that wishes to appoint a processor must only use processors that guarantee compliance with the GDPR. The controller must appoint the processor in the form of a binding written agreement, which states that the processor must:
- only act on the controller’s documented instructions
- impose confidentiality obligations on all personnel who process the relevant data
- ensure the security of the personal data that it processes
- abide by the rules regarding appointment of sub-processors
- implement measures to assist the controller in complying with the rights of data subjects
- assist the controller in obtaining approval from DPAs where required
- at the controller’s election, either return or destroy the personal data at the end of the relationship (except as required by EU or Member State law)
- and provide the controller with all information necessary to demonstrate compliance with the GDPR
GDPR and Logfiles on WebServer:
I found a good reading here at: EU GDPR and personal data in web server logs
EU GDPR and personal data in web server logs
By Daniel Aleksandersen
Web server logs contain information classified as personal data by default under the European Union’s General Data Protection Regulation (GDPR). The new privacy regulation comes into effect on 25 May 2018, and just about everyone needs to take action now to become compliant.
Disclaimer: I’m not a lawyer and I’m not providing you legal advice. Contact your legal counsel for help interpreting and implementing the GDPR. This article is provided for entertainment purposes, and amounts to nothing but my interpretation of the GDPR.
The General Data Protection Regulation shifts the default operating mode for personal data collection from “collect and store as much information about everyone as possible for all eternity” to “don’t collect any information about anyone unless there is documented and informed consent for the collection; and don’t use that information for anything but the specific purposes consent was given for.” The GDPR turns big-data collection of personal data on the web from an asset to a liability with fines as high as 20 000 000 Euro or 4 % of global revenue (whichever is greater).
I’ve limited the scope of this article to discuss and focus on some of the technical requirements surrounding personal data collected by default in the logs generated by popular web server software. I’ll not go through the entire GDPR and all the requirements, but focus on some actionable points.
Personal data in server logs
The default configuration of popular web servers including Apache Web Server and NGINX collect and store at least two of the following three types of logs:
- Access logs
- Error logs (including processing-language logs like PHP)
- Security audit logs (e.g. ModSecurity)
All of these logs contain personal information by default under the new regulation. IP addresses are specifically defined as personal data per Article 4, Point 1; and Recital 49. The logs can also contain usernames if your web service uses them as part of its URL structure, and even the referral information that is logged by default can contain personal information (e.g. unintended collection of sensitive data, like being referred from a sensitive-subject website).
If you don’t have a legitimate need to store these logs you should disable logging in your web server. You’re not even allowed to store this type of information without having obtained direct consent for the purposes you intend to store the information for from the persons you’re storing information about. The less customer information you store the lower the risk to your organization.
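For logs you do need to keep, one practical way to reduce the personal data they contain is to mask client IP addresses before storage. The following Python sketch is my own illustration (IPv4 only, and the masking rule and log line are examples; this alone does not make logging GDPR-compliant):

```python
import re

# Illustrative sketch: mask the last octet of IPv4 client addresses in
# an access-log line before it is stored, reducing the personal data
# retained. The regex and the masking policy are examples only.
IPV4 = re.compile(r"\b(\d{1,3}\.\d{1,3}\.\d{1,3})\.\d{1,3}\b")

def mask_ip(log_line: str) -> str:
    """Replace the final octet of any IPv4 address with 0."""
    return IPV4.sub(r"\1.0", log_line)

line = '203.0.113.42 - - [10/Oct/2018:13:55:36 +0200] "GET / HTTP/1.1" 200 512'
print(mask_ip(line))
```

Masking would have to happen before the unmasked line ever reaches disk (for example in a logging pipeline), since data that was written and later scrubbed was still collected.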
Legal basis for collecting and storing logs without consent
You can’t collect and store any personal data without having obtained, and being able to document that you obtained, consent from the persons you’re collecting data from. You can, however, still collect and store personal data in your server logs for the limited and legitimate purpose of detecting and preventing fraud and unauthorized system access, and ensuring the security of your systems.
Here are the relevant excerpts from the GDPR that allows data collection for this type of purposes:
“Processing shall be lawful only if and to the extent that at least one of the following applies: […] (f) processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.”
Article 6, Paragraph 1, Point F
“The processing of personal data to the extent strictly necessary and proportionate for the purposes of ensuring network and information security, i.e. the ability of a network or an information system to resist, at a given level of confidence, accidental events or unlawful or malicious actions that compromise the availability, authenticity, integrity and confidentiality of stored or transmitted personal data, and the security of the related services offered by, or accessible via, those networks and systems, […] by providers of electronic communications networks and services and by providers of security technologies and services, constitutes a legitimate interest of the data controller concerned. This could, for example, include preventing unauthorised access to electronic communications networks and malicious code distribution and stopping ‘denial of service’ attacks and damage to computer and electronic communication systems.”
Recital 49 (excerpt)
Notably, this doesn’t exempt such collection from the strict requirements of the GDPR. Gandalf the Grey offers some great advice for how you should treat personal data to achieve GDPR compliance in your organization: keep it secret, and keep it safe.
Encryption, access restriction, and timely erasure
The specific requirements under the GDPR that apply to your organization depend on the scope and type of data you collect, set against the need to store the data. The regulation with all its recitals is 54 800 words long, but I’ll try to summarize some practical implementation requirements from the regulation.
There are no specific technical details offered regarding how these requirements should be implemented besides the suggestion to use “encryption” in Article 6, Paragraph 4, Point E and Recital 83. The take-away is still clear: data should be secured, access should be limited even within your organization, and data should be deleted (including from backups) when there is no longer a need to retain it.
Utility of the day: logrotate (+ gnupg)
logrotate is a very useful tool that can help automate the deletion of old log files and, combined with GnuPG, encrypt logs in storage even on edge servers; together with organizational measures, this can limit who is able to decrypt the log files. Unless you encrypt your logs, an unauthorized third party who gained access to your servers could extract a lot of data about your users from them. Depending on how much private information is stored in your log files and the potentially sensitive nature of your business, you shouldn’t store log files for more than a few hours or days unless you take measures to protect them.
Managing PGP keys in GnuPG is beyond the scope of this article. In short, however, you would create a key pair on a secure machine, and then import the public key into the GnuPG key chain on your servers while storing the private key on a secure medium with limited access for authorized employees only (e.g. printed on paper or kept on removable storage media). The server can then use the public key to encrypt its log files; with public-key cryptography, the server is able to encrypt the data but cannot decrypt it without the private key. The log files could even be transferred to a centralized log-storage server for cold storage.
I believe such a setup could be used to achieve GDPR compliance while still maintaining auditable logs in the event of breach of server security or other incident that would require a log trail.
The following logrotate configuration example demonstrates secure encrypted storage (using GnuPG) and erasure after time intervals (rotate, in days) appropriate to how important it is to retain the various log files following a security incident:
compressoptions --encrypt --default-key your-key-id
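A fuller sketch of such a policy might look like the following. The log paths and the key id are placeholders, and the retention periods match the ones discussed here (100, 200 and 400 days), with shred used for secure deletion:

```conf
# Illustrative logrotate policy (paths and key id are placeholders).
# Rotated logs are encrypted with a GnuPG public key; expired logs
# are overwritten with shred before deletion.

# Access logs: keep about 100 days.
/var/log/nginx/access.log {
    daily
    rotate 100
    shred
    compress
    compresscmd /usr/bin/gpg
    compressoptions --encrypt --default-key your-key-id
    compressext .gpg
    missingok
    notifempty
}

# Error logs: keep about 200 days.
/var/log/nginx/error.log {
    daily
    rotate 200
    shred
    compress
    compresscmd /usr/bin/gpg
    compressoptions --encrypt --default-key your-key-id
    compressext .gpg
    missingok
    notifempty
}

# ModSecurity audit logs (suspicious activity only): keep about 400 days.
/var/log/modsec_audit.log {
    daily
    rotate 400
    shred
    compress
    compresscmd /usr/bin/gpg
    compressoptions --encrypt --default-key your-key-id
    compressext .gpg
    missingok
    notifempty
}
```

Because only the public key lives on the server, a compromised server can keep encrypting new rotations but cannot read back old ones.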
In the above example, access logs are deleted after 100 days, error logs after 200 days, and ModSecurity logs (which would only contain suspicious activity) are retained for 400 days. After this time, the logs are securely erased using the shred utility.
The logs are still kept unencrypted for up to 24 hours after they are first recorded. This is a small time window when the data is not stored encrypted, but it is required to allow human technicians and automated log-analyzing tools (like SSHGuard or Fail2Ban) to process the data and act upon it to help detect and prevent unauthorized or unlawful system access.
You can reduce the time window when data is kept unencrypted by rotating logs hourly instead of daily, or by piping logs directly from your web server into an encrypted storage. This may have a serious performance impact on your server and complicate the configuration of automated security monitoring tools.
Any identifier, including network or equipment identifiers like an IP address, are considered personal data. Don’t store server logs if you don’t have to. Encrypt logs in storage and limit access to decryption credentials. Delete logs as early as possible, including from any backups. Document what steps you’ve taken to secure data and limit impact in the case of a server breach.
This article has focused on server logs, as they’re something every organization with a website or an online service will have to deal with. However, the same principles and even stricter requirements apply to other types of data that your organization keeps on people. The deadline for GDPR compliance is 25 May 2018, and that is barely enough time to read through the 54 800-word regulation. With fines up in the 20 million Euro range, you had better get started auditing personal information collection and storage in your organization right away. This is the perfect time to rethink old decisions regarding what data your organization really needs to keep and for how long.
Datenschutz-Grundverordnung DSGVO 2018
Compact and up to date:
The complete General Data Protection Regulation (DSGVO) with all the changes it brings for websites in the EU.
The DSGVO becomes binding for every webmaster on 25 May 2018.
clamav - Antivirus scanner for Unix
clamav-base - Base package for clamav, an anti-virus utility for Unix
clamav-daemon - Powerful Antivirus scanner daemon
clamav-freshclam - Downloads clamav virus databases from the Internet
clamav-milter - Fast antivirus scanner for sendmail
clamav-testfiles - Use these files to test that your Antivirus program works
libclamav1 - Virus scanner library
libclamav1-dev - Clam Antivirus library development files
ClamAV: How to check
sudo apt-get -s dist-upgrade | less
sudo apt-get dist-upgrade
# If that result doesn't look like fixing your problem
sudo apt-get -sf install | less
ClamAV: Try to re-install
sudo apt-get -y remove clamav*
sudo apt-get clean -y
sudo apt-get autoremove -y
sudo rm -Rf /var/cache/apt/*
sudo rm -Rf /etc/clamav
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get -y install clamav-freshclam
sudo apt-get -y install clamav-milter clamav-daemon
Postfix: How to recover ClamAV broken install on Debian
# sudo apt-get install --reinstall clamav-daemon
sudo rm -Rf /var/cache/apt/archives/*
sudo apt-get remove -y apparmor* clamav*
sudo dpkg --remove clamav clamav-base clamav-freshclam
sudo apt-get autoremove -y
sudo apt-get update -y -o Acquire::ForceIPv4=true
sudo apt-get upgrade -y -o Acquire::ForceIPv4=true
sudo apt-get dist-upgrade -y -o Acquire::ForceIPv4=true
sudo apt-get autoclean -y
sudo apt-get clean -y
sudo apt-get check -y
sudo mkdir /var/run/clamav
sudo chown -R clamav:root /var/run/clamav
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install -y clamav clamav-base clamav-freshclam clamav-milter clamav-docs
sudo dpkg --configure -a
sudo apt --fix-broken install
Postfix: How to recover Clamav systemctl on Debian
systemctl list-unit-files | grep clam
sudo systemctl unmask clamav-daemon.service
sudo systemctl unmask clamav-freshclam.service
sudo systemctl unmask clamav-milter.service
systemctl show clamav-daemon.service
systemctl show clamav-freshclam.service
systemctl show clamav-milter.service
/usr/bin/freshclam -d --foreground=true