aisecwatch.com
Dashboard · Vulnerabilities · News · Research · Archive · Stats · Dataset

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

Vulnerabilities · News · Research · Digest Archive · Newsletter Archive · Subscribe · Data Sources · Statistics · Dataset · API · Integrations · Widget · RSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3431 items

An alternative perspective on the death of manual red teaming

info · news · security · safety
Feb 8, 2021

This article argues against the idea that manual red teaming (the practice of simulating attacks to find security weaknesses) is dying due to automation. The author contends that red teaming is fundamentally about discovering unknown vulnerabilities and exploring creative attack strategies rather than just exploiting known bugs, and therefore cannot be fully automated even though adversaries will continue using AI and automation tools to scale their operations.

Embrace The Red

Cybersecurity Attacks - Red Team Strategies Kindle Edition for free

info · news · security
Feb 4, 2021

This is a disclaimer page for educational material about red team strategies (methods used by authorized security testers to find vulnerabilities by simulating attacks). The content emphasizes that penetration testing (authorized attempts to break into systems to find security weaknesses) must have proper permission before being performed.

CVE-2021-25758: In JetBrains IntelliJ IDEA before 2020.3, potentially insecure deserialization of the workspace model could lead to local code execution

high · vulnerability · security
Feb 3, 2021
CVE-2021-25758

CVE-2021-25758 is a vulnerability in JetBrains IntelliJ IDEA versions before 2020.3 where insecure deserialization (converting data back into executable code without proper validation) of the workspace model could allow an attacker to run code locally on an affected system. The vulnerability has a CVSS score of 4.0 (a moderate severity rating).
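
The IntelliJ flaw is a Java deserialization issue; as a language-neutral illustration of the class, here is a minimal Python sketch using pickle (an analogous serializer, not the component involved), showing why deserializing untrusted bytes is dangerous:

```python
import pickle

class Malicious:
    """Hypothetical attacker-controlled class: __reduce__ tells pickle to
    call an arbitrary function with arbitrary arguments on load."""
    def __reduce__(self):
        # A real exploit would call os.system or similar; len() stands in
        # as a harmless proof that the attacker's chosen call runs.
        return (len, ("attacker-controlled",))

payload = pickle.dumps(Malicious())   # bytes the attacker would send
result = pickle.loads(payload)        # the attacker's call executes here
print(result)  # 19, the return value of the attacker-chosen call
```

The point is that the attacker, not the application, decides what runs at load time, which is why untrusted data must never reach an unrestricted deserializer.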

Team A and Team B: Sunburst, Teardrop and Raindrop

info · news · security
Feb 2, 2021

Microsoft analyzed the Sunburst attack (a major 2020 breach targeting SolarWinds software) and found that attackers used Cobalt Strike (a tool for command and control, letting attackers remotely direct compromised systems) alongside custom modifications to hide their backdoors in software. The attackers made each compromised system unique with different names and folder locations to avoid detection.

CVE-2021-21266: openHAB is a vendor and technology agnostic open source automation software for your home. In openHAB before versions 2.5.12 and 3.0.1 …

medium · vulnerability · security
Feb 1, 2021
CVE-2021-21266

openHAB, a home automation software, had a vulnerability in versions before 2.5.12 and 3.0.1 that allowed attackers on the same network to read files from the system using XXE attacks (XML external entity attacks, which trick an XML parser into loading external files or data). Multiple add-ons that process XML data from other devices were vulnerable to this flaw.
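
As a hedged illustration of the mechanics (using Python's standard library, not openHAB's parser): entity definitions in a document's DTD let the document author inject content, and XXE extends this to local files via external SYSTEM entities, which a hardened parser refuses to resolve:

```python
import xml.etree.ElementTree as ET

# An entity defined in the document's own DTD: the parser substitutes the
# attacker-chosen replacement text wherever &e; appears.
internal = '<!DOCTYPE r [<!ENTITY e "injected">]><r>&e;</r>'
print(ET.fromstring(internal).text)  # injected

# XXE proper uses an external SYSTEM entity to pull in a local file. A
# vulnerable parser would inline the file's contents; xml.etree refuses to
# resolve external entities and raises ParseError instead, the behaviour a
# stricter parser configuration (as in the openHAB fix) enforces.
external = '<!DOCTYPE r [<!ENTITY e SYSTEM "file:///etc/hostname">]><r>&e;</r>'
try:
    ET.fromstring(external)
    resolved = True
except ET.ParseError:
    resolved = False
print(resolved)  # False
```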

Survivorship Bias and Red Teaming

info · news · security · research

CVE-2020-14756: Vulnerability in the Oracle Coherence product of Oracle Fusion Middleware (component: Core Components). Supported versions …

critical · vulnerability · security
Jan 20, 2021
CVE-2020-14756 · EPSS: 88.8%

A critical vulnerability (CVE-2020-14756) exists in Oracle Coherence, a data management product, that allows attackers to take over the system without needing to log in. The flaw affects multiple versions of the software and can be exploited remotely through IIOP and T3 network protocols, with a severity rating of 9.8 out of 10 (CVSS score, which measures how dangerous a security flaw is).

Gamifying Security with Red Team Scores

info · news · security
Jan 11, 2021

This article describes a method for creating security scores that compare different teams or services based on their security issues, helping organizations identify which areas need the most attention. The scoring system uses a multiplier (a scaling factor that makes severe issues count for much more than minor ones) to weight critical bugs more heavily than lower-severity ones, then sums these weighted values into a single score that can be displayed on a dashboard. By showing these scores to management, teams can have discussions about why some services have worse scores than others, which encourages improvements in security practices.
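
A minimal sketch of the scoring scheme as described, with made-up severity multipliers (the article's actual weights are not given here):

```python
# Hypothetical severity multipliers: the scaling factor makes severe
# issues count for far more than minor ones.
WEIGHTS = {"critical": 100, "high": 25, "medium": 5, "low": 1}

def red_team_score(findings):
    """Sum severity-weighted finding counts into one dashboard score."""
    return sum(WEIGHTS[sev] * count for sev, count in findings.items())

service_a = {"critical": 1, "high": 2, "medium": 4, "low": 10}
service_b = {"critical": 0, "high": 1, "medium": 8, "low": 3}
print(red_team_score(service_a))  # 180
print(red_team_score(service_b))  # 68
```

The multiplier gap ensures that one critical finding outweighs many low-severity ones, so the dashboard ranking tracks real risk rather than raw bug counts.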

CVE-2020-17500: Barco TransForm NDN-210 Lite, NDN-210 Pro, NDN-211 Lite, and NDN-211 Pro before 3.8 allows Command Injection (issue 1 of …)

critical · vulnerability · security
Jan 7, 2021
CVE-2020-17500

Barco TransForm NDN-210 and NDN-211 devices before version 3.8 have a command injection vulnerability (a flaw that lets attackers run unauthorized commands) in their web login page that allows unauthenticated remote code execution (an attacker can run commands on the device without logging in) through the username and password fields. The vulnerability affects multiple device models across the Barco TransForm N solution.
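
A hedged Python sketch of the vulnerability class (not Barco's code): a login field interpolated into a shell command lets "; …" run a second command, while passing an argument list keeps the input inert. Assumes a POSIX shell; echo stands in for the attacker's payload:

```python
import subprocess

username = "guest; echo INJECTED"  # attacker-supplied login field

# Vulnerable pattern: string interpolation into a shell command. The ";"
# ends the intended command and "echo INJECTED" runs as a second one.
out_vuln = subprocess.run(f"echo {username}", shell=True,
                          capture_output=True, text=True).stdout
print(out_vuln)  # guest / INJECTED on separate lines

# Safe pattern: an argument list with no shell; the payload stays text.
out_safe = subprocess.run(["echo", username],
                          capture_output=True, text=True).stdout
print(out_safe)  # guest; echo INJECTED
```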

CVE-2020-35370: An RCE vulnerability exists in Raysync below 3.3.3.8. An unauthenticated unauthorized attacker sending a specifically crafted …

high · vulnerability · security
Dec 23, 2020
CVE-2020-35370

CVE-2020-35370 is a remote code execution vulnerability (the ability to run commands on a remote server without authorization) in Raysync versions before 3.3.3.8 that allows an attacker without authentication (login credentials) to send a specially crafted request that overwrites a file on the server with malicious code, then log in as the admin user and modify shell files to gain control of the hosting server.

CVE-2020-26270: In affected versions of TensorFlow running an LSTM/GRU model where the LSTM/GRU layer receives an input with zero-length …

medium · vulnerability · security
Dec 10, 2020
CVE-2020-26270

CVE-2020-26270 is a vulnerability in TensorFlow where LSTM/GRU models (types of neural network layers used for processing sequences) crash when they receive input with zero length on NVIDIA GPU systems, causing a denial of service (making the system unavailable). This happens because the system fails input validation (checking whether data is acceptable before processing it).
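
A hypothetical pre-flight guard of the kind the patch adds internally (sketch only, no TensorFlow API involved): reject zero-length sequences before they reach the GPU kernels, turning a crash into a clean error:

```python
def check_sequence_batch(batch):
    """Hypothetical guard: reject inputs an RNN layer cannot safely
    process. CVE-2020-26270 was triggered by zero-length sequences
    reaching the GPU LSTM/GRU kernels; validating before dispatch turns
    a denial-of-service crash into a recoverable error."""
    if not batch:
        raise ValueError("empty batch")
    for i, seq in enumerate(batch):
        if len(seq) == 0:
            raise ValueError(f"zero-length sequence at index {i}")
    return batch

check_sequence_batch([[1, 2, 3], [4, 5]])  # passes
try:
    check_sequence_batch([[1, 2], []])
except ValueError as e:
    print(e)  # zero-length sequence at index 1
```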

CVE-2020-26269: In TensorFlow release candidate versions 2.4.0rc*, the general implementation for matching filesystem paths to globbing patterns …

high · vulnerability · security
Dec 10, 2020
CVE-2020-26269

TensorFlow's release candidate versions 2.4.0rc* contain a vulnerability in the code that matches filesystem paths to globbing patterns (a method of searching for files using wildcards), which can cause the program to read memory outside the bounds of an array holding directory information. The vulnerability stems from missing checks on assumptions made by the parallel implementation, but this issue only affects the development version and release candidates, not the final release.

CVE-2020-26268: In affected versions of TensorFlow the tf.raw_ops.ImmutableConst operation returns a constant tensor created from a memory-mapped file …

medium · vulnerability · security
Dec 10, 2020
CVE-2020-26268

A bug in TensorFlow's tf.raw_ops.ImmutableConst operation (a function that creates fixed tensors from memory-mapped files) causes the Python interpreter to crash when the tensor type is not an integer type, because the code tries to write to memory that should be read-only. This crash happens when the file is large enough to contain the tensor data, resulting in a segmentation fault (a critical memory access error).
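
The failure mode can be reproduced generically with a read-only memory mapping (a Python sketch, not TensorFlow's code): Python surfaces the bad write as a TypeError, whereas in TensorFlow's C++ the same write is a segmentation fault that kills the process:

```python
import mmap
import tempfile

# Map a file read-only, then attempt to write through the mapping, as
# the ImmutableConst code path effectively did.
with tempfile.TemporaryFile() as f:
    f.write(b"tensor-bytes")
    f.flush()
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        head = m[:6]
        write_refused = False
        try:
            m[0] = 0x41  # modify a read-only mapping
        except TypeError:
            write_refused = True

print(head, write_refused)  # b'tensor' True
```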

CVE-2020-26267: In affected versions of TensorFlow the tf.raw_ops.DataFormatVecPermute API does not validate the src_format and dst_format …

medium · vulnerability · security
Dec 10, 2020
CVE-2020-26267

CVE-2020-26267 is a vulnerability in TensorFlow where the tf.raw_ops.DataFormatVecPermute API (a function for converting data format layout) fails to check the src_format and dst_format inputs, leading to uninitialized memory accesses (using memory that hasn't been set to a known value), out-of-bounds reads (accessing data outside intended boundaries), and potential crashes. The vulnerability was patched across multiple TensorFlow versions.
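
A hypothetical version of the missing check (sketch only, not the actual patch): both format strings must be equal-length permutations of the same axis labels, otherwise the permutation built from them indexes out of bounds:

```python
def validate_formats(src_format: str, dst_format: str) -> None:
    """Hypothetical pre-check mirroring what the TensorFlow patch adds:
    both format strings (e.g. "NHWC", "NCHW") must be permutations of
    the same axis labels, or the computed permutation is invalid."""
    if len(src_format) != len(dst_format):
        raise ValueError("format length mismatch")
    if sorted(src_format) != sorted(dst_format):
        raise ValueError("formats are not permutations of each other")

validate_formats("NHWC", "NCHW")  # ok
try:
    validate_formats("NHWC", "NCH")  # would drive an out-of-bounds index
except ValueError as e:
    print(e)  # format length mismatch
```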

CVE-2020-26266: In affected versions of TensorFlow under certain cases a saved model can trigger use of uninitialized values during code execution …

medium · vulnerability · security
Dec 10, 2020
CVE-2020-26266

CVE-2020-26266 is a vulnerability in TensorFlow where saved models can accidentally use uninitialized values (memory locations that haven't been set to a starting value) during execution because certain floating point data types weren't properly initialized in the Eigen library (a math processing component). This is a use of uninitialized resource (CWE-908) type bug that could lead to unpredictable behavior when running affected models.

CVE-2020-26271: In affected versions of TensorFlow under certain cases, loading a saved model can result in accessing uninitialized memory …

medium · vulnerability · security
Dec 10, 2020
CVE-2020-26271

TensorFlow has a vulnerability where loading a saved model can access uninitialized memory (data that hasn't been set to a known value) when building a computation graph. The bug occurs in the MakeEdge function, which connects parts of a neural network together, because it doesn't verify that array indices are valid before accessing them, potentially allowing attackers to leak memory addresses from the library.

Actively protecting pen testers and pen testing assets

info · news · security
Dec 8, 2020

FireEye, a major security company, was attacked and adversaries accessed their internal red teaming tools (software used to test security by simulating attacks). The post warns that red teams are attractive targets for attackers and recommends implementing protective measures like honeypot machines (fake systems designed to detect intruders) and monitoring login attempts to quickly detect when attackers are trying to compromise their systems.
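
A minimal honeypot sketch under assumed design (not FireEye's or the post's actual tooling): a TCP listener on which any connection attempt is suspicious by construction, so every accept becomes an alert:

```python
import socket
import threading

alerts = []

def honeypot(server: socket.socket) -> None:
    """Accept one connection and record the peer. No legitimate service
    runs here, so any touch is worth an alert."""
    conn, addr = server.accept()
    alerts.append(addr)  # in practice: notify the blue team instead
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port for the demo
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=honeypot, args=(server,))
t.start()

# Simulate an attacker probing the fake service.
probe = socket.create_connection(("127.0.0.1", port))
probe.close()
t.join()
server.close()

print(f"alert: connection attempt from {alerts[0]}")
```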

CVE-2020-29374: An issue was discovered in the Linux kernel before 5.7.3, related to mm/gup.c and mm/huge_memory.c. The get_user_pages function …

low · vulnerability · security
Nov 28, 2020
CVE-2020-29374

A bug was found in the Linux kernel before version 5.7.3 in the get_user_pages function (a mechanism that allows programs to access memory pages), where it incorrectly grants write access when it should only allow read access for copy-on-write pages (memory regions shared between processes that are copied when modified). This happens because the function doesn't properly respect read-only restrictions, creating a security vulnerability.

Machine Learning Attack Series: Overview

info · news · security · research

Machine Learning Attack Series: Generative Adversarial Networks (GANs)

info · news · security · research
Page 157 / 172
Sources and fixes (detached per-item metadata from the listing above, matched in order):

Cybersecurity Attacks - Red Team Strategies Kindle Edition. Source: Embrace The Red.

CVE-2021-25758. Source: NVD/CVE Database.

Team A and Team B: Sunburst, Teardrop and Raindrop. Source: Embrace The Red.

CVE-2021-21266 (openHAB). Source: NVD/CVE Database. Fix: the vulnerabilities have been fixed in versions 2.5.12 and 3.0.1 by a stricter configuration of the XML parser used.

Survivorship Bias and Red Teaming. Jan 22, 2021. Source: Embrace The Red. Summary: survivorship bias is the logical error of focusing only on successes while ignoring failures, which can lead to incomplete understanding. The article applies this concept to red teaming (security testing in which a team acts as attackers to find vulnerabilities) by noting that the MITRE ATT&CK framework (a database of known adversary tactics and techniques) only covers publicly disclosed threats, potentially causing security teams to overlook attack methods that haven't been publicly documented or aren't in the framework.

CVE-2020-14756 (Oracle Coherence). Source: NVD/CVE Database.

Gamifying Security with Red Team Scores. Source: Embrace The Red.

CVE-2020-17500 (Barco TransForm). Source: NVD/CVE Database. Fix: update to TransForm N version 3.8 or later, which includes the patch for this issue.

CVE-2020-35370 (Raysync). Source: NVD/CVE Database.

CVE-2020-26270 (TensorFlow). Source: NVD/CVE Database. Fix: fixed in TensorFlow versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0; users should update to one of these patched versions.

CVE-2020-26269 (TensorFlow). Source: NVD/CVE Database. Fix: patched in version 2.4.0; the implementation was completely rewritten to fully specify and validate the preconditions.

CVE-2020-26268 (TensorFlow). Source: NVD/CVE Database. Fix: fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.

CVE-2020-26267 (TensorFlow). Source: NVD/CVE Database. Fix: fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.

CVE-2020-26266 (TensorFlow). Source: NVD/CVE Database. Fix: fixed in TensorFlow versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0; users should update to one of these patched versions.

CVE-2020-26271 (TensorFlow). Source: NVD/CVE Database. Fix: fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0; users should update to one of these patched versions.

Actively protecting pen testers and pen testing assets. Source: Embrace The Red. Fix: the source explicitly recommends several protective measures: (1) create honeypot machines with fake credentials and trigger notifications and alerts when they are accessed; (2) set up notifications for logon attempts and successful logons via email and forward events to the blue team (defensive security team); (3) disable remote management endpoints and allow-list source IP addresses in the firewall; (4) lock down machines by blocking all inbound connections while allowing outbound ones, using the Windows command 'netsh advfirewall set allprofiles firewallpolicy blockinboundalways,allowoutbound' or the Linux commands 'sudo ufw enable', 'sudo ufw default deny incoming', and 'sudo ufw default allow outgoing'; (5) perform red vs. red testing (security assessments in which one red team tests another) to verify the red team has proper security controls in place.

CVE-2020-29374 (Linux kernel). Source: NVD/CVE Database. Fix: update the Linux kernel to version 5.7.3 or later. A patch is available at https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=17839856fd588f4ab6b789f482ed3ffd7c403e1f. Debian users should refer to the security updates referenced in the Debian mailing list announcements and DSA-5096.

Machine Learning Attack Series: Overview. Nov 26, 2020. Source: Embrace The Red. Summary: an index page summarizing a series of blog posts about machine learning security from a red teaming perspective (testing a system by simulating attacker behavior). The posts cover ML basics, threat modeling, practical attacks such as adversarial examples (inputs designed to fool AI models), model theft, backdoors (hidden malicious functionality inserted into models), and how traditional security attacks (such as weak access control) also threaten AI systems.

Machine Learning Attack Series: Generative Adversarial Networks (GANs). Nov 25, 2020. Source: Embrace The Red. Summary: this post describes how GANs (a type of AI system in which two neural networks compete to create realistic fake images) can be used to generate fake husky photos that trick an image-recognition system called Husky AI into misclassifying them as real huskies. The author explains they investigated this attack method and references a GAN course for learning more about the technique.