All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
This article argues against the idea that manual red teaming (the practice of simulating attacks to find security weaknesses) is dying due to automation. The author contends that red teaming is fundamentally about discovering unknown vulnerabilities and exploring creative attack strategies rather than just exploiting known bugs, and therefore cannot be fully automated even though adversaries will continue using AI and automation tools to scale their operations.
This is a disclaimer page for educational material about red team strategies (methods used by authorized security testers to find vulnerabilities by simulating attacks). The content emphasizes that penetration testing (authorized attempts to break into systems to find security weaknesses) must have proper permission before being performed.
CVE-2021-25758 is a vulnerability in JetBrains IntelliJ IDEA versions before 2020.3 where insecure deserialization (reconstructing objects from stored data without proper validation) of the workspace model could allow an attacker to run code locally on an affected system. The vulnerability carries a CVSS score of 4.0 (moderate severity).
Microsoft analyzed the Sunburst attack (a major 2020 breach targeting SolarWinds software) and found that attackers used Cobalt Strike (a tool for command and control, letting attackers remotely direct compromised systems) alongside custom modifications to hide their backdoors in software. The attackers made each compromised system unique with different names and folder locations to avoid detection.
openHAB, a home automation software, had a vulnerability in versions before 2.5.12 and 3.0.1 that allowed attackers on the same network to read files from the system using XXE attacks (XML external entity attacks, which trick an XML parser into loading external files or data). Multiple add-ons that process XML data from other devices were vulnerable to this flaw.
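The openHAB fix tightened the XML parser configuration. As a hedged illustration of the same defense idea (not openHAB's actual Java code), the sketch below refuses any untrusted document that declares a DTD before parsing, since an XXE payload needs a DOCTYPE to define its external entity. The function name and payload are illustrative.

```python
import xml.etree.ElementTree as ET

def parse_untrusted_xml(text: str) -> ET.Element:
    """Parse XML from an untrusted peer, refusing any document that
    declares a DTD -- XXE payloads need a DOCTYPE to define the
    external entity, so rejecting DTDs blocks the attack class."""
    if "<!DOCTYPE" in text:
        raise ValueError("DTD/DOCTYPE not allowed in untrusted XML")
    return ET.fromstring(text)

# A typical XXE payload: in a parser configured to resolve external
# entities, &x; would expand to the contents of /etc/passwd.
payload = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]>'
    '<r>&x;</r>'
)
try:
    parse_untrusted_xml(payload)
except ValueError as e:
    print("rejected:", e)

# Plain XML without a DTD still parses normally.
print(parse_untrusted_xml("<status>ok</status>").text)
```

Rejecting DTDs outright is the same hardening strategy libraries such as defusedxml apply by default.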
A critical vulnerability (CVE-2020-14756) exists in Oracle Coherence, a data management product, that allows attackers to take over the system without needing to log in. The flaw affects multiple versions of the software and can be exploited remotely through IIOP and T3 network protocols, with a severity rating of 9.8 out of 10 (CVSS score, which measures how dangerous a security flaw is).
This article describes a method for creating security scores that compare different teams or services based on their security issues, helping organizations identify which areas need the most attention. The scoring system uses a multiplier (a scaling factor that makes severe issues count for much more than minor ones) to weight critical bugs more heavily than lower-severity ones, then sums these weighted values into a single score that can be displayed on a dashboard. By showing these scores to management, teams can have discussions about why some services have worse scores than others, which encourages improvements in security practices.
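The article does not give its exact formula, but the multiplier idea can be sketched in a few lines of Python. The severity weights below are assumptions chosen so each step up in severity dominates the one beneath it, not the article's actual numbers.

```python
# Illustrative severity weights: the multiplier makes each severity
# tier count an order of magnitude more than the one below it
# (the values are assumptions, not the article's exact figures).
WEIGHTS = {"critical": 1000, "high": 100, "medium": 10, "low": 1}

def security_score(issues_by_severity: dict) -> int:
    """Sum weighted open-issue counts into one dashboard number;
    higher means worse."""
    return sum(WEIGHTS[sev] * count
               for sev, count in issues_by_severity.items())

# Two hypothetical services: one critical bug outweighs many low ones.
team_a = {"critical": 1, "low": 5}            # -> 1005
team_b = {"high": 3, "medium": 4, "low": 20}  # -> 360
print(security_score(team_a), security_score(team_b))
```

Note how team A scores worse than team B despite having far fewer open issues, which is exactly the conversation-starter the dashboard is meant to provoke.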
Barco TransForm NDN-210 and NDN-211 devices before version 3.8 have a command injection vulnerability (a flaw that lets attackers run unauthorized commands) in their web login page that allows unauthenticated remote code execution (an attacker can run commands on the device without logging in) through the username and password fields. The vulnerability affects multiple device models across the Barco TransForm N solution.
CVE-2020-35370 is a remote code execution vulnerability (the ability to run commands on a server without owning it) in Raysync versions before 3.3.3.8 that allows an attacker without authentication (login credentials) to send a specially crafted request that overwrites a file on the server with malicious code, then log in as the admin user and modify shell files to gain control of the hosting server.
CVE-2020-26270 is a vulnerability in TensorFlow where LSTM/GRU models (types of neural network layers used for processing sequences) crash when they receive input with zero length on NVIDIA GPU systems, causing a denial of service (making the system unavailable). This happens because the system fails input validation (checking whether data is acceptable before processing it).
TensorFlow's release candidate versions 2.4.0rc* contain a vulnerability in the code that matches filesystem paths to globbing patterns (a method of searching for files using wildcards), which can cause the program to read memory outside the bounds of an array holding directory information. The vulnerability stems from missing checks on assumptions made by the parallel implementation, but this issue only affects the development version and release candidates, not the final release.
A bug in TensorFlow's tf.raw_ops.ImmutableConst operation (a function that creates fixed tensors from memory-mapped files) causes the Python interpreter to crash when the tensor type is not an integer type, because the code tries to write to memory that should be read-only. This crash happens when the file is large enough to contain the tensor data, resulting in a segmentation fault (a critical memory access error).
CVE-2020-26267 is a vulnerability in TensorFlow where the tf.raw_ops.DataFormatVecPermute API (a function for converting data format layout) fails to check the src_format and dst_format inputs, leading to uninitialized memory accesses (using memory that hasn't been set to a known value), out-of-bounds reads (accessing data outside intended boundaries), and potential crashes. The vulnerability was patched across multiple TensorFlow versions.
CVE-2020-26266 is a vulnerability in TensorFlow where saved models can accidentally use uninitialized values (memory locations that haven't been set to a starting value) during execution because certain floating point data types weren't properly initialized in the Eigen library (a math processing component). This is a use of uninitialized resource (CWE-908) type bug that could lead to unpredictable behavior when running affected models.
TensorFlow has a vulnerability where loading a saved model can access uninitialized memory (data that hasn't been set to a known value) when building a computation graph. The bug occurs in the MakeEdge function, which connects parts of a neural network together, because it doesn't verify that array indices are valid before accessing them, potentially allowing attackers to leak memory addresses from the library.
FireEye, a major security company, was attacked and adversaries accessed their internal red teaming tools (software used to test security by simulating attacks). The post warns that red teams are attractive targets for attackers and recommends implementing protective measures like honeypot machines (fake systems designed to detect intruders) and monitoring login attempts to quickly detect when attackers are trying to compromise their systems.
A bug was found in the Linux kernel before version 5.7.3 in the get_user_pages function (a mechanism that allows programs to access memory pages), where it incorrectly grants write access when it should only allow read access for copy-on-write pages (memory regions shared between processes that are copied when modified). This happens because the function doesn't properly respect read-only restrictions, creating a security vulnerability.
Fix: The vulnerabilities have been fixed in versions 2.5.12 and 3.0.1 by a more strict configuration of the used XML parser.
NVD/CVE DatabaseSurvivorship bias is the logical error of focusing only on successes while ignoring failures, which can lead to incomplete understanding. The article applies this concept to red teaming (security testing where a team acts as attackers to find vulnerabilities) by noting that the MITRE ATT&CK framework (a database of known adversary tactics and techniques) only covers publicly disclosed threats, potentially causing security teams to overlook attack methods that haven't been publicly documented or aren't in the framework.
Fix: Update to TransForm N version 3.8 or later, which includes the patch for this issue.
NVD/CVE DatabaseFix: This is fixed in TensorFlow versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0. Users should update to one of these patched versions.
NVD/CVE DatabaseFix: This is patched in version 2.4.0. The implementation was completely rewritten to fully specify and validate the preconditions.
NVD/CVE DatabaseFix: This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.
NVD/CVE DatabaseFix: This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.
NVD/CVE DatabaseFix: This vulnerability is fixed in TensorFlow versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0. Users should update to one of these patched versions.
NVD/CVE DatabaseFix: This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0. Users should update to one of these patched versions.
NVD/CVE DatabaseFix: The source explicitly recommends several protective measures: (1) Create honeypot machines with fake credentials and trigger notifications and alerts when accessed; (2) Set up notifications for logon attempts and successful logons via email and forward events to the blue team (defensive security team); (3) Disable remote management endpoints and allow list source IP addresses in the firewall; (4) Lock down machines by blocking all inbound connections while allowing outbound ones using Windows command 'netsh advfirewall set allprofiles firewallpolicy blockinboundalways,allowoutbound' or Linux commands 'sudo ufw enable', 'sudo ufw default deny incoming', and 'sudo ufw default allow outgoing'; (5) Perform red vs. red testing (security assessments where one red team tests another) to verify the red team has proper security controls in place.
Embrace The RedFix: Update the Linux kernel to version 5.7.3 or later. A patch is available at https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=17839856fd588f4ab6b789f482ed3ffd7c403e1f. Debian users should refer to security updates referenced in the Debian mailing list announcements and DSA-5096.
NVD/CVE DatabaseThis is an index page summarizing a series of blog posts about machine learning security from a red teaming perspective (testing a system by simulating attacker behavior). The posts cover ML basics, threat modeling, practical attacks like adversarial examples (inputs designed to fool AI models), model theft, backdoors (hidden malicious code inserted into models), and how traditional security attacks (like weak access control) also threaten AI systems.
This post describes how Generative Adversarial Networks (GANs, a type of AI system where two neural networks compete to create realistic fake images) can be used to generate fake husky photos that trick an image recognition system called Husky AI into misclassifying them as real huskies. The author explains they investigated this attack method and references a GAN course to learn more about the technique.