This article takes an honest look at the features of Reflectiz.
Since you’re on the c/side website, we acknowledge our bias. That said, we’ve built our case honestly, basing our analysis on publicly available information, industry knowledge, and our own and our customers’ experiences.
If you want to verify these claims yourself, please visit Reflectiz’s product pages.
Criteria | c/side | Reflectiz |
---|---|---|
Approaches used | Proxy + agent-based detection; also offers a crawler and a free CSP reporting endpoint | Crawler |
Real-time Protection | | |
Full Payload Analysis | | |
Dynamic Threat Detection | | |
DOM-Level Threat Detection | | |
100% Historical Tracking & Forensics | | |
Bypass Protection | | |
Certainty the Script Seen by User is Monitored | | |
AI-driven Script Analysis | | |
QSA-validated PCI dashboard | | |
SOC 2 Type II | | |
PCI-specific UI | | |
What is Reflectiz?
Reflectiz is a cybersecurity company that focuses on securing web dependencies like third-party scripts and open-source tools. It uses agentless monitoring to detect threats, prevent data leaks, and ensure compliance on websites.
How Reflectiz works
Reflectiz uses a “proprietary browser” that crawls the website, mapping its most important pages and simulating real user activity.
There are a few problems with this approach.
A crawler can indeed mimic user activity, but by definition it isn’t user activity. Nor does it see the exact payloads that real users receive.
Many dependencies use dynamic systems that serve different code based on various request parameters. Reflectiz does mention that you can set the geo-location and device settings for a crawl, but we have no insight into how comprehensive this is.
Other parameters Reflectiz does not appear able to mimic include:
- Referrer
- Unique cookies and session data
- A/B testing or feature flags
- Browser fingerprinting details
- Network conditions
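To make the cloaking risk concrete, here is a minimal, entirely hypothetical sketch of how a compromised third-party endpoint could key its payload off parameters like these. Every name and threshold below is illustrative, not drawn from any real attack or from either vendor’s tooling:

```python
# Hypothetical cloaking logic on a compromised third-party script host.
# All values are made up for illustration.

CLEAN_JS = "console.log('analytics loaded');"
MALICIOUS_JS = "/* skimmer that exfiltrates checkout form data */"

def select_payload(referrer: str, geo: str, cookie_age_days: int) -> str:
    """Serve the skimmer only to requests that look like real shoppers."""
    looks_like_real_user = (
        # arrived from a checkout page on the targeted site
        referrer.startswith("https://shop.example.com/checkout")
        # targeted geographies only
        and geo in {"US", "GB", "DE"}
        # session has history; fresh crawler sessions usually don't
        and cookie_age_days > 1
    )
    return MALICIOUS_JS if looks_like_real_user else CLEAN_JS

# A crawler with no referrer and a brand-new session gets the clean file:
print(select_payload(referrer="", geo="US", cookie_age_days=0))
```

Because a crawler typically arrives with no referrer and a fresh session, every one of these checks fails and the scanner only ever sees the clean script.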
After these crawling sessions, Reflectiz performs behavioral analysis and data analysis, and finally alerts based on what it found.
Finally, the phrase “most important pages” likely refers mostly to payment pages, whose monitoring is required by PCI DSS 4.0.1 requirements 6.4.3 and 11.6.1.
Regardless of the vendor, any crawler-based solution is unlikely to spot an advanced attack firsthand. The bad actor will simply serve a clean script, or no script at all, to the bot. The threat intelligence therefore has to come from another source, and a vendor that by design only offers a crawler has to purchase it. c/side also offers a crawler, for cases where a customer cannot make any changes to their code, but with a big difference: we feed it the threat intelligence we gather from all the other websites that use our proxy service. While this approach still won’t eliminate an attack, it is far more capable of detecting one than threat intel bought on the open market.
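As an illustration of how easily a scanner bot can be fingerprinted, the sketch below checks a request against a couple of made-up data center IP ranges and a headless user-agent string. The specific ranges here are placeholders; real attackers maintain much larger curated lists of cloud and scanner networks:

```python
import ipaddress

# Placeholder cloud/data-center ranges, for illustration only.
DATA_CENTER_RANGES = [
    ipaddress.ip_network("34.64.0.0/10"),
    ipaddress.ip_network("52.0.0.0/11"),
]

def is_probable_crawler(ip: str, user_agent: str) -> bool:
    """Crude bot fingerprint: data-center source IP or headless browser UA."""
    addr = ipaddress.ip_address(ip)
    from_data_center = any(addr in net for net in DATA_CENTER_RANGES)
    headless_ua = "HeadlessChrome" in user_agent
    return from_data_center or headless_ua
```

If either check trips, the attacker simply serves the clean script, and the scan comes back green.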
How c/side goes further
c/side primarily offers a hybrid proxy approach, which sits between the user session and the third-party service. It analyzes each served dependency’s code in real time, before serving it to the user.
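A heavily simplified sketch of the proxy idea follows, with made-up screening rules (the actual detection logic is far more sophisticated than pattern matching): fetch the exact bytes a user would receive, screen them, and only then serve or block.

```python
import re
import urllib.request

# Illustrative screening rules only; real detection goes far beyond regexes.
SUSPICIOUS_PATTERNS = [
    re.compile(rb"document\.forms\[0\]"),  # reading the first form (often payment fields)
    re.compile(rb"atob\s*\("),             # decoding an obfuscated payload
]

def screen(body: bytes) -> bytes:
    """Return the payload if clean, otherwise a harmless stub."""
    if any(p.search(body) for p in SUSPICIOUS_PATTERNS):
        return b"/* blocked by proxy: payload failed screening */"
    return body

def fetch_and_screen(url: str) -> bytes:
    """Fetch a third-party script on the user's behalf, then screen it."""
    body = urllib.request.urlopen(url).read()
    return screen(body)
```

The key property is that the analysis runs on the exact payload a real user would have received, so conditional or targeted payloads cannot hide from it the way they hide from a crawler.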
This allows c/side not only to spot and alert on advanced, highly targeted attacks, but also to block attacks before they touch the user’s browser. It also checks the box for multiple compliance frameworks, including PCI DSS 4.0.1. We even provide deep forensics, including when an attacker bypasses our detections, letting you scope the size of an incident more tightly and letting us improve our detection capabilities every day. No other vendor has this capability.
We believe this is the most secure way to monitor and protect your dependencies across your entire website. We spent years in the client-side security space before starting c/side, and we’ve seen it all: this is the only way you can actually spot an attack.
Sign up or book a demo to get started.
FAQ
Q: How does c/side's hybrid proxy differ from Reflectiz's crawler-based approach?
A: The fundamental difference is real-time versus periodic protection. Reflectiz uses external crawlers that periodically scan your website from data center IPs, missing time-gated, geo-targeted, or user-specific attacks. c/side's hybrid proxy handles every real visitor request in real-time, catching edge-case payloads the moment they're delivered. We provide continuous protection, while Reflectiz offers periodic snapshots that miss dynamic threats.
Q: Can attackers bypass c/side's protection like they can with Reflectiz's crawler detection?
A: No, because c/side's core analysis happens on our proxy, completely invisible to attackers. Attackers can easily detect Reflectiz's crawlers, which arrive from predictable data center IP ranges, and can serve them clean versions of compromised scripts. Since c/side processes every real user request through our proxy, attackers cannot distinguish our analysis from legitimate traffic. Our protection is invisible to attackers, while crawler-based approaches are easily identified and deceived.
Q: What forensic evidence does c/side provide compared to Reflectiz's crawler reports?
A: Reflectiz provides periodic scan reports showing what their crawlers observed during scheduled visits, but c/side captures and archives every script payload served to real users. This gives you complete forensic evidence of actual attacks rather than just snapshots of what crawlers saw. Our approach provides immutable proof of threats that were blocked from reaching users, not just what was visible during scanning.
Q: How do compliance capabilities compare between c/side and Reflectiz?
A: c/side provides superior PCI DSS compliance with continuous monitoring and immutable payload archives covering both requirements 6.4.3 and 11.6.1. Reflectiz's crawler approach provides periodic inventory reports but lacks the real-time monitoring and comprehensive forensic evidence that regulators require. Our continuous protection creates the complete audit trail that compliance officers need for thorough documentation.
Q: Why is c/side's real-time protection better than Reflectiz's periodic scanning?
A: Real-time protection catches attacks the moment they're delivered to users, while periodic scanning misses threats that appear between scans or target specific user segments. Modern attackers use conditional logic to avoid detection by crawlers, serving malicious code only to real users. c/side's continuous monitoring ensures no attack goes undetected, while crawler-based approaches have inherent blind spots that sophisticated attackers exploit.