Security
Quasr is protected by various security measures:
First of all, we understand that security is an ongoing exercise and that we will never be immune to vulnerabilities and exploits. Hence we openly work with, and support, the security community to help us find and mitigate them. Above all, we value transparency and report any incidents to our customers and community, to help them understand the impact and mitigation.
Our business and platform are fully located in the EU. Hence we are not subject to US regulation when it comes to business reporting and national security; of course, we do adhere to any privacy regulations with extra-territorial effect. Additionally, all of our staff are EU citizens and EU-based.
We hold no personal data in our own environments. Personal data of Quasr customers is kept in our customer management platform (Stripe), in an EU environment. When personal data passes through our platform (whether our customers' or your customers'), we take measures to make sure the data is not recorded in any logs, and passthrough time within the platform is kept to a minimum. We do not export personal data from Stripe to other environments.
We employ a clean data policy, meaning data is only kept while in use and for a reasonable time. When unused, it automatically expires and gets deleted. We have the following policies in place:
Unused tenants get deleted after 90 days (default and maximum).
Unused accounts get deleted after a period between 90 days (default) and 400 days (maximum).
Unused extensions get deleted after 30 days.
Inactive customers get deleted after 400 days. Even though the customer is marked as deleted, customer data is kept in our customer management platform (Stripe) for a certain time for financial and compliance reasons.
All authentication secrets are kept securely hashed in our databases. We use the Argon2id hash algorithm with 1 level of parallelism, 2 iterations, and a hash length of 32 bytes. We treat indexable secrets slightly differently as a trade-off for performance:
Indexable secrets use a fixed salt (required) and a memory size of 0.5 Mebibyte.
Non-indexable secrets use a random salt and a memory size of 15 Mebibyte.
We continuously monitor the security community and industry guidelines to determine if these parameters need to be upgraded for better security. We stress that we also hash usernames (IDs) and identifiers provided by federated identity providers.
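The two parameter sets above can be expressed as options objects. This is only a sketch: the field names follow the popular `argon2` NPM package (where `memoryCost` is given in KiB), which is an assumption, not Quasr's actual code.

```javascript
// Illustrative Argon2id parameter sets matching the description above.
// Field names follow the `argon2` npm package convention (assumption).

const indexableSecretOptions = {
  parallelism: 1,
  timeCost: 2,     // iterations
  hashLength: 32,  // bytes
  memoryCost: 512, // KiB => 0.5 MiB; lower, so hashing stays fast for lookups
  // plus a fixed salt, so equal inputs hash to equal (indexable) values
};

const nonIndexableSecretOptions = {
  parallelism: 1,
  timeCost: 2,
  hashLength: 32,
  memoryCost: 15 * 1024, // KiB => 15 MiB; higher memory hardness
  // plus a random salt per secret
};
```

The fixed salt on indexable secrets is what makes a database lookup by hash possible; the cost is that identical secrets produce identical hashes, which the smaller memory size partly reflects as a deliberate performance trade-off.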
All configuration secrets, like TOTP secrets and client secrets for identity providers, are kept in encrypted format in our databases. We employ symmetric encryption (AES-256-GCM) with our own keys, issued and managed by AWS Key Management Service (KMS) which uses Hardware Security Modules (HSM).
We adhere strictly to open industry standards such as JWT/JWS/JWE/JWK, OAuth 2.0 and OpenID Connect (OIDC), so as not to reinvent the wheel and to bank on the shared knowledge and extensive review of the security community. In fact, we already adhere to OAuth 2.1, which is not yet mandatory but contains stronger security requirements.
All security tokens (JWT) are signed using an asymmetric 2048-bit key (RSA, with PKCS #1 v1.5 padding and SHA-256). The algorithm is chosen partly because of JWT standards and partly as a trade-off with performance. This is our own key, issued and managed by AWS Key Management Service (KMS), which uses Hardware Security Modules (HSM).
We optionally allow customers to additionally encrypt their tokens using an asymmetric key (RSA) provided through a public JWKS endpoint. For encrypting the content key we currently support RSAES-OAEP, either with default parameters, which uses SHA-1 (RSA-OAEP), or with SHA-256 (RSA-OAEP-256). For content encryption we support AES-CBC with a 128-bit key combined with HMAC using SHA-256 (A128CBC-HS256).
We have security token (JWT) revocation in place, allowing us to revoke tokens issued by Quasr. This mechanism is already used to enforce single-use tokens, such as authorization codes and consent / refresh tokens.
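Single-use enforcement boils down to consuming a token's unique ID on first redemption and rejecting any later presentation. A tiny illustration follows; Quasr's actual store is not public, so an in-memory Set stands in for it, and the function name is invented for the example.

```javascript
// Hypothetical single-use token redemption: the token's unique ID (jti claim)
// is recorded on first use; any replay of the same jti is rejected.
const usedJtis = new Set();

function redeemToken(jti) {
  if (usedJtis.has(jti)) return false; // already used, or explicitly revoked
  usedJtis.add(jti);
  return true;
}
```

Explicit revocation falls out of the same mechanism: adding a jti to the set before it is ever presented invalidates that token.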
We combat account hacking by applying temporary lock-outs on authentication factors. The following triggers are currently in place:
Factor enrollments are locked after 5 consecutive failed or pending attempts, with auto-unlock after 5 minutes. The counter is reset after a successful pass. Upon auto-unlock the counter is either unchanged or reset depending on the type of factor; only for federation and OTP factors is the counter reset. This allows more pending attempts without immediate re-locking for factors that operate in such a manner, while other factors can be locked again after just 1 new failed attempt following auto-unlock.
Source IPs are blocked after 20 consecutive failed or pending attempts with auto-unblock after 5 minutes. The counter is reset after a successful pass. Upon auto-unblock the counter remains unchanged; hence after just 1 new failed or pending attempt the IP will be blocked again.
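The enrollment-lockout rule above can be sketched as a small state machine. This is an assumption-laden illustration (the state shape, function names, and `'failed'`/`'pending'`/`'success'` outcome labels are all invented), showing the 5-attempt limit, the 5-minute auto-unlock, and the counter reset on success.

```javascript
// Hypothetical sketch of the factor-enrollment lockout described above.
const LIMIT = 5;                 // consecutive failed or pending attempts
const LOCK_MS = 5 * 60 * 1000;   // 5-minute auto-unlock

function makeFactorState() {
  return { failures: 0, lockedUntil: 0 };
}

// Returns 'locked' when the attempt is refused, 'ok' otherwise.
function recordAttempt(state, outcome, now = Date.now()) {
  if (now < state.lockedUntil) return 'locked'; // still within the lock window
  if (outcome === 'success') {
    state.failures = 0; // successful pass resets the counter
    return 'ok';
  }
  state.failures += 1; // failed and pending attempts both count
  if (state.failures >= LIMIT) {
    state.lockedUntil = now + LOCK_MS;
    // For federation/OTP factors the counter would also reset here, so those
    // factors get a fresh budget of attempts after auto-unlock.
    return 'locked';
  }
  return 'ok';
}
```

For the non-resetting factor types, `failures` stays at the limit after auto-unlock, which is exactly why a single new failed attempt re-locks them immediately.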
Additionally, we enforce strong secret and identifier randomness requirements to minimize the risk of guessing. Of course, this is always a trade-off with usability, and the security scoring for your factors and controls should be set accordingly, to reflect your risk appetite. We have the following measures in place:
All of our identifiers are securely generated Universally Unique Identifiers (UUID version 4).
Secrets need to pass a pre-set security threshold that evaluates their guessing resistance. The threshold depends on the type of secret: usernames have a lower threshold compared to client secrets and user passwords.
We have various API defenses in place to combat malicious requests:
Our Authentication API (REST) is globally cached using a CDN on certain GET routes with generally highly static responses (or error responses with HTTP status codes 400 / 403).
Our Authentication API (REST) is rate limited per route, preserving bandwidth for a route even when other routes are getting overloaded (in case of DDoS). Our Management API (GraphQL) has a general rate limit of 2000 request tokens per second.
We have an overall IP-based rate limit (across both APIs) of 300 requests per 5 minutes, exceeding which results in a temporary block of the IP. Unblocking usually happens a couple of minutes after the IP has reduced its rate.
We have a Web Application Firewall (WAF) in place that blocks requests that look malicious or carry known vulnerability exploit signatures (OWASP Top 10). Our backends don't use relational databases and hence are not vulnerable to SQL injection attacks.
We block IPs that are listed as either malicious or actively engaged in reconnaissance or DDoS activities (AWS IP reputation list).
Our Customer API (not a service for customers) is additionally protected with CORS to only allow our Account UI access within a browser environment (combating XSS). It also has bot control activated to block out bots.
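The overall IP-based limit described above (300 requests per 5 minutes, then a temporary block) can be sketched with a simple counting window. This is an illustration only: a fixed window and an in-memory Map stand in for whatever distributed mechanism actually backs the limit, and the function name is invented.

```javascript
// Hypothetical fixed-window sketch of the IP-based rate limit above.
const MAX_REQUESTS = 300;
const WINDOW_MS = 5 * 60 * 1000; // 5 minutes

const windows = new Map(); // ip -> { start, count }

// Returns true if the request is allowed, false if the IP is blocked.
function allowRequest(ip, now = Date.now()) {
  let w = windows.get(ip);
  if (!w || now - w.start >= WINDOW_MS) {
    // New window: counting restarts, so an earlier block lapses once the
    // IP has slowed down -- matching the "unblocks a couple of minutes
    // after the rate drops" behaviour.
    w = { start: now, count: 0 };
    windows.set(ip, w);
  }
  w.count += 1;
  return w.count <= MAX_REQUESTS;
}
```

A production limiter would typically use a sliding window or token bucket and shared state across edge nodes, but the allow/block contract is the same.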
We have various UI defenses in place to combat malicious requests:
All UIs are globally cached using a CDN.
All UIs are Single Page Applications (SPA) that live in the browser, access our services using APIs, and are served directly from storage. They don't contain any sensitive algorithms or data, so attacking the UIs by themselves provides no gain.
All UIs have bot control activated to block out bots.
All UIs are additionally protected with CORS to only allow our UI domains to load the content.
All APIs and UIs are only accessible over HTTPS with at least TLSv1.2. You can find all supported ciphers at (we use TLSv1.2_2021 as security policy).
All SSL/TLS certificates are issued and managed by AWS Certificate Manager (ACM) with a 2048-bit key (RSA). Certificate renewal is fully automated and happens every 12 to 13 months.
All internal communication also happens only over HTTPS with at least TLSv1.2, using AWS's own certificates and configurations.
All external communication (Stripe) is also only over HTTPS with at least TLSv1.2.
We have extensive monitoring and alerting in place that help us understand if the service is under attack, corrupted, or being used maliciously. This allows us to quickly take extreme measures if we see the service being misused or under large-scale attack. Some of the potential mitigations at our disposal are:
Locking of malicious tenants.
Locking of malicious accounts.
Temporary disabling of factors.
Revoking of security tokens (JWT).
Throttling of incoming requests.
Blocking access based on geo.
Blocking of malicious IPs.
We have detailed audit logs in place at various levels in our platform. Every action logs which account has performed it and data changes log differentials.
Account - everything related to a specific account
Tenant - everything related to a specific tenant
Platform - everything (tenants and non-tenant)
Cloud - everything related to our cloud environments
Full separation of the production environment with restricted manual access. Our deployments are fully automated, so our teams don't have access to production resources and data. Only during emergencies can access be granted, on a granular level, to either (a) assess situations and / or (b) remediate data corruptions. Access to our environments is provided using a Single Sign-On (SSO) mechanism (SAML). Sessions are time-limited to 8 hours. Our internal identity provider is Google Workspace, which has enforced Multi-Factor Authentication (MFA).
All developers are monitored and trained on security knowledge and skills. Additionally, code is thoroughly reviewed by seniors / experts. During our deployment process all pulled third-party packages (NPM) are scanned for vulnerabilities (Snyk). Access to code repositories is provided using a Single Sign-On (SSO) mechanism (SAML), issuing time-limited access tokens (8 hours). Our internal identity provider is Google Workspace with enforced Multi-Factor Authentication (MFA).
We make use of native cloud services (AWS) whenever possible to offload security to the underlying platform. As we are built fully serverless, we don't manage any storage, databases, virtual machines, container clusters, networks or firewalls ourselves.
We have secured our internal and outgoing email communication as follows:
We sign our outgoing emails (DKIM) using two separate asymmetric 2048-bit RSA keys; one for platform communication and one for internal communication.
We have a custom MAIL FROM domain for platform communication (mail.quasr.io) with SPF activated to only allow our outgoing email service provider (AWS SES).
We have DMARC enabled on our domain with a 100% quarantine policy.
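The SPF and DMARC measures above translate into DNS TXT records along these lines. The record values are illustrative reconstructions from the description (SPF restricted to AWS SES, DMARC with a 100% quarantine policy), not a copy of Quasr's actual zone.

```
; Illustrative DNS records for the email protections described above
; (values reconstructed from the text, not Quasr's actual records)
mail.quasr.io.    TXT  "v=spf1 include:amazonses.com -all"
_dmarc.quasr.io.  TXT  "v=DMARC1; p=quarantine; pct=100"
```

The DKIM signatures mentioned above would similarly be verified via selector records (`<selector>._domainkey.quasr.io`) publishing the two 2048-bit RSA public keys.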
We grant access to environments and systems on a least-privileged principle (both for users and software clients). Access is regularly reviewed and revoked upon account termination.
We can easily recover our environments as our applications are built stateless and are deployed fully encoded in infrastructure-as-code (IaC) templates through an automated process (DevOps). We have the following data loss mitigation strategies in place:
All data is replicated across at least 3 Availability Zones (AZs), which are distinct, physically separated and isolated data centers in an AWS Region, with redundant power, networking and connectivity, housed in separate facilities. See for more details.
All resource data (such as tenants, accounts, factors, controls, extensions or tokens) is kept in serverless non-relational databases (DynamoDB) with Point-In-Time Recovery (PITR) enabled, covering up to 35 days with a window of 5 minutes.
All logs are kept in serverless storage (S3) with a designed durability of 99.999999999%. See for more details.
All cryptographic keys are kept in AWS Key Management Service (KMS), which is designed to be highly durable with redundant storage, including offline Hardware Security Modules (HSM) in emergency. See for more details.
Last but not least, we're insured against inflicted damages resulting from breaches or hacks in our platform. Please be mindful that only our enterprise customers enjoy certain contractual provisions that allow for compensation.
If you have any questions, comments or suggestions on the above please don't hesitate to reach out to us at .